-
Publication Number: US20240355109A1
Publication Date: 2024-10-24
Application Number: US18746977
Filing Date: 2024-06-18
Applicant: Google LLC
Inventor: Michael Sahngwon Ryoo , Anthony Jacob Piergiovanni , Mingxing Tan , Anelia Angelova
IPC: G06V10/82 , G06N3/045 , G06T1/20 , G06T3/4046 , G06T7/207 , G06V10/776
CPC classification number: G06V10/82 , G06N3/045 , G06T1/20 , G06T3/4046 , G06T7/207 , G06V10/776 , G06T2207/10016 , G06T2207/20081 , G06T2207/20084
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining one or more neural network architectures of a neural network for performing a video processing neural network task. In one aspect, a method comprises: at each of a plurality of iterations: selecting a parent neural network architecture from a set of neural network architectures; training a neural network having the parent neural network architecture to perform the video processing neural network task, comprising determining trained values of connection weight parameters of the parent neural network architecture; generating a new neural network architecture based at least in part on the trained values of the connection weight parameters of the parent neural network architecture; and adding the new neural network architecture to the set of neural network architectures.
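A minimal sketch of the iteration loop the abstract describes. The architecture encoding and the train()/mutate() stand-ins below are hypothetical placeholders, not the patented method itself.

```python
import random

def train(architecture):
    """Pretend to train a network with this architecture and return
    trained values of its connection weight parameters (faked here)."""
    return {edge: random.random() for edge in architecture["edges"]}

def mutate(architecture, weights):
    """Generate a new architecture based at least in part on the
    trained weights, here by pruning the weakest connection."""
    weakest = min(architecture["edges"], key=weights.get)
    survivors = [e for e in architecture["edges"] if e != weakest]
    return {"edges": survivors or architecture["edges"]}

population = [{"edges": [(i, i + 1) for i in range(4)]}]
for _ in range(10):                      # each of a plurality of iterations
    parent = random.choice(population)   # select a parent architecture
    weights = train(parent)              # train it on the video task
    child = mutate(parent, weights)      # new architecture from trained weights
    population.append(child)             # add it to the set of architectures
```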
-
Publication Number: US20240355101A1
Publication Date: 2024-10-24
Application Number: US18761065
Filing Date: 2024-07-01
Applicant: Google LLC
Inventor: Mingxing Tan , Quoc V. Le
IPC: G06V10/774 , G06V10/776
CPC classification number: G06V10/7747 , G06V10/776
Abstract: Systems and methods of the present disclosure can include a computer-implemented method for efficient machine-learned model training. The method can include obtaining a plurality of training samples for a machine-learned model. The method can include, for one or more first training iterations, training, based at least in part on a first regularization magnitude configured to control a relative effect of one or more regularization techniques, the machine-learned model using one or more respective first training samples of the plurality of training samples. The method can include, for one or more second training iterations, training, based at least in part on a second regularization magnitude greater than the first regularization magnitude, the machine-learned model using one or more respective second training samples of the plurality of training samples.
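A compact sketch of the two-phase schedule the abstract describes, assuming dropout strength stands in for the regularization magnitude; the model, data, and phase lengths are invented placeholders.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(),
                      nn.Dropout(p=0.0), nn.Linear(16, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# (regularization magnitude, number of steps): the second phase uses a
# greater magnitude than the first, per the abstract.
phases = [(0.1, 100), (0.3, 100)]
for magnitude, steps in phases:
    model[2].p = magnitude              # raise dropout in the later phase
    for _ in range(steps):
        x, y = torch.randn(32, 8), torch.randn(32, 1)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
```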
-
Publication Number: US20240005129A1
Publication Date: 2024-01-04
Application Number: US18029849
Filing Date: 2021-10-01
Applicant: Google LLC
Inventor: Yanqi Zhou , Amir Yazdanbakhsh , Berkin Akin , Daiyi Peng , Yuxiong Zhu , Mingxing Tan , Xuanyi Dong
IPC: G06N3/045 , G06N3/092 , G06N3/063 , G06N3/044 , G06N3/0464
CPC classification number: G06N3/045 , G06N3/092 , G06N3/0464 , G06N3/044 , G06N3/063
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for jointly determining neural network architectures and hardware accelerator architectures. In one aspect, a method includes: generating, using a controller policy, a batch of one or more output sequences, each output sequence in the batch defining a respective architecture of a child neural network and a respective architecture of a hardware accelerator; for each output sequence in the batch: training a respective instance of the child neural network having the architecture defined by the output sequence; evaluating a network performance of the trained instance of the child neural network to determine a network performance metric; and evaluating an accelerator performance of a respective instance of the hardware accelerator having the architecture defined by the output sequence to determine an accelerator performance metric for the instance of the hardware accelerator; and using the network performance metrics and the accelerator performance metrics to adjust the controller policy.
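An illustrative rendering of the controller loop only: sample(), the evaluation proxies, the search space, and the reward combination are all hypothetical stubs, not the claimed method.

```python
import random

def sample(policy):
    """Controller policy emits an output sequence encoding a child
    network architecture and an accelerator architecture."""
    return [random.choice(choices) for choices in policy["search_space"]]

def evaluate_network(seq):      # train + measure the child network
    return random.random()      # stand-in for a network performance metric

def evaluate_accelerator(seq):  # simulate the accelerator instance
    return random.random()      # stand-in for an accelerator metric

policy = {"search_space": [["conv3x3", "conv5x5"], ["2x2 PEs", "4x4 PEs"]]}
for _ in range(5):                         # controller iterations
    batch = [sample(policy) for _ in range(4)]
    rewards = [evaluate_network(s) * evaluate_accelerator(s) for s in batch]
    best = batch[rewards.index(max(rewards))]
    # a real controller would apply a policy-gradient update here,
    # adjusting the controller policy with both sets of metrics
```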
-
Publication Number: US20220383069A1
Publication Date: 2022-12-01
Application Number: US17827130
Filing Date: 2022-05-27
Applicant: Google LLC
Inventor: Zihang Dai , Hanxiao Liu , Mingxing Tan , Quoc V. Le
Abstract: A computer-implemented method for performing computer vision with reduced computational cost and improved accuracy can include obtaining, by a computing system including one or more computing devices, input data comprising an input tensor having one or more dimensions, providing, by the computing system, the input data to a machine-learned convolutional attention network, the machine-learned convolutional attention network including two or more network stages, and, in response to providing the input data to the machine-learned convolutional attention network, receiving, by the computing system, a machine-learning prediction from the machine-learned convolutional attention network. The convolutional attention network can include at least one attention block, wherein the attention block includes a relative attention mechanism, the relative attention mechanism including the sum of a static convolution kernel with an adaptive attention matrix. This provides for improved generalization, capacity, and efficiency of the convolutional attention network relative to some existing models.
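A minimal 1-D sketch of relative attention as described: the pre-softmax scores are the sum of an input-adaptive attention matrix (query-key products) and a static, input-independent kernel indexed by relative position. Shapes and the single-head setup are simplifications for illustration.

```python
import torch

L, D = 6, 8                              # sequence length, head dim
q, k, v = (torch.randn(L, D) for _ in range(3))
w = torch.randn(2 * L - 1)               # static relative kernel

adaptive = q @ k.t() / D ** 0.5          # input-dependent attention matrix
idx = torch.arange(L)
static = w[idx[:, None] - idx[None, :] + L - 1]  # static scores w[i - j]
attn = torch.softmax(adaptive + static, dim=-1)  # sum, then softmax
out = attn @ v
```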
-
Publication Number: US20220230048A1
Publication Date: 2022-07-21
Application Number: US17175029
Filing Date: 2021-02-12
Applicant: Google LLC
Inventor: Andrew Li , Sheng Li , Mingxing Tan , Ruoming Pang , Liqun Cheng , Quoc V. Le , Norman Paul Jouppi
Abstract: Methods, systems, and apparatus, including computer-readable media, for scaling neural network architectures on hardware accelerators. A method includes receiving training data and information specifying target computing resources, and performing, using the training data, a neural architecture search over a search space to identify an architecture for a base neural network. A plurality of scaling parameter values for scaling the base neural network can be identified, which can include repeatedly selecting a plurality of candidate scaling parameter values, and determining a measure of performance for the base neural network scaled according to the plurality of candidate scaling parameter values, in accordance with a plurality of second objectives including a latency objective. An architecture for a scaled neural network can be determined using the architecture of the base neural network scaled according to the plurality of scaling parameter values.
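A hypothetical sketch of the scaling step: repeatedly sample candidate scaling parameter values and keep the best candidate under a latency objective. The accuracy and latency functions are invented proxies, not real measurements.

```python
import random

def measure_accuracy(depth, width, res):
    return 1 - 1 / (depth * width * res)        # bigger model, better proxy

def measure_latency(depth, width, res):
    return depth * width ** 2 * res ** 2 / 100  # crude cost model

TARGET_LATENCY = 2.0
best, best_score = None, float("-inf")
for _ in range(100):                    # repeatedly select candidate values
    cand = (random.uniform(1, 3),       # depth multiplier
            random.uniform(1, 3),       # width multiplier
            random.uniform(1, 2))       # resolution multiplier
    acc, lat = measure_accuracy(*cand), measure_latency(*cand)
    score = acc * min(1.0, TARGET_LATENCY / lat)  # latency-aware objective
    if score > best_score:
        best, best_score = cand, score  # scale the base network with `best`
```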
-
Publication Number: US20220108204A1
Publication Date: 2022-04-07
Application Number: US17061355
Filing Date: 2020-10-01
Applicant: Google LLC
Inventor: Xianzhi Du , Yin Cui , Tsung-Yi Lin , Quoc V. Le , Pengchong Jin , Mingxing Tan , Golnaz Ghiasi , Xiaodan Song
Abstract: A computer-implemented method of generating scale-permuted models can generate models having improved accuracy and reduced evaluation computational requirements. The method can include defining, by a computing system including one or more computing devices, a search space including a plurality of candidate permutations of a plurality of candidate feature blocks, each of the plurality of candidate feature blocks having a respective scale. The method can include performing, by the computing system, a plurality of search iterations by a search algorithm to select a scale-permuted model from the search space, the scale-permuted model based at least in part on a candidate permutation of the plurality of candidate permutations.
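A toy rendering of the search space: each candidate feature block carries a scale, and candidates are permutations of the block ordering. The evaluate() fitness proxy is invented purely for illustration.

```python
import itertools
import random

blocks = [("b1", 1 / 4), ("b2", 1 / 8), ("b3", 1 / 16), ("b4", 1 / 32)]

def evaluate(permutation):
    """Placeholder fitness: reward scale changes between neighbours,
    mimicking the cross-scale wiring of a scale-permuted model."""
    return sum(abs(a[1] - b[1]) for a, b in zip(permutation, permutation[1:]))

search_space = list(itertools.permutations(blocks))  # candidate permutations
best = None
for _ in range(20):                       # search iterations
    cand = random.choice(search_space)    # candidate permutation
    if best is None or evaluate(cand) > evaluate(best):
        best = cand                       # selected scale-permuted model
```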
-
Publication Number: US20210383237A1
Publication Date: 2021-12-09
Application Number: US17337812
Filing Date: 2021-06-03
Applicant: Google LLC
Inventor: Mingxing Tan , Cihang Xie , Boqing Gong , Quoc V. Le
Abstract: Generally, the present disclosure is directed to the training of robust neural network models by using smooth activation functions. Systems and methods according to the present disclosure may generate and/or train neural network models with improved robustness without incurring a substantial accuracy penalty and/or increased computational cost, or without any such penalty at all. For instance, in some examples, the accuracy may improve. A smooth activation function may replace an original activation function in a machine-learned model when backpropagating a loss function through the model. Optionally, one activation function may be used in the model at inference time, and a replacement activation function may be used when backpropagating a loss function through the model. The replacement activation function may be used to update learnable parameters of the model and/or to generate adversarial examples for training the model.
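A sketch of the core idea: keep the original activation (ReLU) in the forward pass but backpropagate through a smooth surrogate gradient (here the softplus derivative, i.e. the sigmoid). The specific pairing is an illustrative assumption, not the patented form.

```python
import torch

class SmoothBackwardReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.relu(x)                # original activation at inference

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * torch.sigmoid(x)  # smooth replacement gradient

x = torch.randn(4, requires_grad=True)
SmoothBackwardReLU.apply(x).sum().backward()
print(x.grad)                               # gradients from the smooth surrogate
```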
-
Publication Number: US12046025B2
Publication Date: 2024-07-23
Application Number: US17605783
Filing Date: 2020-05-22
Applicant: Google LLC
Inventor: Michael Sahngwon Ryoo , Anthony Jacob Piergiovanni , Mingxing Tan , Anelia Angelova
IPC: G06V10/82 , G06N3/045 , G06T1/20 , G06T3/4046 , G06T7/207 , G06V10/776
CPC classification number: G06V10/82 , G06N3/045 , G06T1/20 , G06T3/4046 , G06T7/207 , G06V10/776 , G06T2207/10016 , G06T2207/20081 , G06T2207/20084
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining one or more neural network architectures of a neural network for performing a video processing neural network task. In one aspect, a method comprises: at each of a plurality of iterations: selecting a parent neural network architecture from a set of neural network architectures; training a neural network having the parent neural network architecture to perform the video processing neural network task, comprising determining trained values of connection weight parameters of the parent neural network architecture; generating a new neural network architecture based at least in part on the trained values of the connection weight parameters of the parent neural network architecture; and adding the new neural network architecture to the set of neural network architectures.
-
Publication Number: US11755883B2
Publication Date: 2023-09-12
Application Number: US17827130
Filing Date: 2022-05-27
Applicant: Google LLC
Inventor: Zihang Dai , Hanxiao Liu , Mingxing Tan , Quoc V. Le
Abstract: A computer-implemented method for performing computer vision with reduced computational cost and improved accuracy can include obtaining, by a computing system including one or more computing devices, input data comprising an input tensor having one or more dimensions, providing, by the computing system, the input data to a machine-learned convolutional attention network, the machine-learned convolutional attention network including two or more network stages, and, in response to providing the input data to the machine-learned convolutional attention network, receiving, by the computing system, a machine-learning prediction from the machine-learned convolutional attention network. The convolutional attention network can include at least one attention block, wherein the attention block includes a relative attention mechanism, the relative attention mechanism including the sum of a static convolution kernel with an adaptive attention matrix. This provides for improved generalization, capacity, and efficiency of the convolutional attention network relative to some existing models.
-
Publication Number: US20230244904A1
Publication Date: 2023-08-03
Application Number: US18154321
Filing Date: 2023-01-13
Applicant: Google LLC
Inventor: Mingxing Tan , Quoc Le , Bo Chen , Vijay Vasudevan , Ruoming Pang
Abstract: The present disclosure is directed to an automated neural architecture search approach for designing new neural network architectures such as, for example, resource-constrained mobile CNN models. In particular, the present disclosure provides systems and methods to perform neural architecture search using a novel factorized hierarchical search space that permits layer diversity throughout the network, thereby striking the right balance between flexibility and search space size. The resulting neural architectures are able to be run relatively faster and using relatively fewer computing resources (e.g., less processing power, less memory usage, less power consumption, etc.), all while remaining competitive with or even exceeding the performance (e.g., accuracy) of current state-of-the-art mobile-optimized models.
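A toy factorized hierarchical search space in the spirit of the abstract: the network is split into blocks, and each block independently draws its own op, kernel size, and depth, permitting layer diversity across the network. The choice lists are invented for illustration.

```python
import random

BLOCK_CHOICES = {
    "op": ["conv", "dwconv", "mbconv"],
    "kernel": [3, 5],
    "layers": [1, 2, 3],
}

def sample_architecture(num_blocks=5):
    """Each block samples its own choices, so layers can differ from
    block to block instead of repeating one searched cell everywhere."""
    return [{key: random.choice(opts) for key, opts in BLOCK_CHOICES.items()}
            for _ in range(num_blocks)]

arch = sample_architecture()
# e.g. [{'op': 'mbconv', 'kernel': 5, 'layers': 2}, ...]
```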
-