-
Publication No.: US20200234132A1
Publication Date: 2020-07-23
Application No.: US16751081
Application Date: 2020-01-23
Applicant: Google LLC
Inventor: Mingxing Tan, Quoc V. Le
Abstract: A method for determining a final architecture for a neural network to perform a particular machine learning task is described. The method includes receiving a baseline architecture for the neural network, wherein the baseline architecture has a network width dimension, a network depth dimension, and a resolution dimension; receiving data defining a compound coefficient that controls extra computational resources used for scaling the baseline architecture; performing a search to determine baseline width, depth, and resolution coefficients that specify how to assign the extra computational resources to the network width, depth, and resolution dimensions of the baseline architecture, respectively; determining width, depth, and resolution coefficients based on the baseline width, depth, and resolution coefficients and the compound coefficient; and generating the final architecture by scaling the network width, network depth, and resolution dimensions of the baseline architecture based on the corresponding width, depth, and resolution coefficients.
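The compound-scaling recipe this abstract describes can be made concrete with a short sketch: depth, width, and resolution are scaled by α^φ, β^φ, and γ^φ for a compound coefficient φ, after a small grid search finds the baseline coefficients. The default α, β, γ values below are those reported for EfficientNet-B0, and the FLOPs constraint α·β²·γ² ≈ 2 follows the EfficientNet paper; the abstract itself fixes neither, so treat both as illustrative assumptions.

```python
import itertools

def compound_scale(base_depth, base_width, base_resolution, phi,
                   alpha=1.2, beta=1.1, gamma=1.15):
    """Scale a baseline architecture with compound coefficient phi.

    alpha, beta, gamma are the baseline depth/width/resolution
    coefficients found by a small grid search; the defaults are the
    values reported for EfficientNet-B0, used here for illustration.
    """
    depth = round(base_depth * alpha ** phi)            # more layers
    width = round(base_width * beta ** phi)             # more channels
    resolution = round(base_resolution * gamma ** phi)  # larger inputs
    return depth, width, resolution

def grid_search_coefficients(evaluate, step=0.05):
    """Toy grid search for (alpha, beta, gamma) at phi = 1, under the
    constraint alpha * beta**2 * gamma**2 ~= 2 so that incrementing phi
    roughly doubles FLOPs. `evaluate` is caller-supplied and maps a
    coefficient triple to validation accuracy."""
    candidates = [1.0 + i * step for i in range(1, 9)]
    best, best_acc = None, float("-inf")
    for alpha, beta, gamma in itertools.product(candidates, repeat=3):
        if abs(alpha * beta ** 2 * gamma ** 2 - 2.0) > 0.1:
            continue  # skip triples that violate the FLOPs budget
        acc = evaluate(alpha, beta, gamma)
        if acc > best_acc:
            best, best_acc = (alpha, beta, gamma), acc
    return best

# Example: scale a baseline (18 layers, 64 channels, 224x224) with phi = 3.
print(compound_scale(18, 64, 224, phi=3))  # -> (31, 85, 341)
```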
-
Publication No.: US12062227B2
Publication Date: 2024-08-13
Application No.: US17943880
Application Date: 2022-09-13
Applicant: Google LLC
Inventor: Mingxing Tan, Quoc V. Le
IPC: G06V10/00, G06V10/774, G06V10/776
CPC classification number: G06V10/7747, G06V10/776
Abstract: Systems and methods of the present disclosure can include a computer-implemented method for efficient machine-learned model training. The method can include obtaining a plurality of training samples for a machine-learned model. The method can include, for one or more first training iterations, training, based at least in part on a first regularization magnitude configured to control a relative effect of one or more regularization techniques, the machine-learned model using one or more respective first training samples of the plurality of training samples. The method can include, for one or more second training iterations, training, based at least in part on a second regularization magnitude greater than the first regularization magnitude, the machine-learned model using one or more respective second training samples of the plurality of training samples.
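A minimal sketch of the training schedule the abstract describes, assuming dropout as the regularization technique being controlled: earlier stages train with a small regularization magnitude, later stages with a larger one. The stage structure and all names are illustrative assumptions, not the patent's implementation.

```python
from torch import nn

def train_progressively(model, stages, optimizer, loss_fn):
    """Train with a regularization magnitude that grows across stages:
    the first training iterations use a small magnitude, later ones a
    larger magnitude, and each stage is paired with its own training
    samples, mirroring the first/second iterations in the abstract.

    stages: ordered list of (dataloader, dropout_rate) pairs with
            increasing dropout_rate. Dropout stands in here for the
            regularization technique being controlled.
    """
    for dataloader, dropout_rate in stages:
        # Set this stage's regularization magnitude on every dropout
        # layer in the model.
        for module in model.modules():
            if isinstance(module, nn.Dropout):
                module.p = dropout_rate
        for inputs, targets in dataloader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()
```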
-
Publication No.: US20240211764A1
Publication Date: 2024-06-27
Application No.: US18400767
Application Date: 2023-12-29
Applicant: Google LLC
Inventor: Mingxing Tan, Quoc V. Le
Abstract: A method for determining a final architecture for a neural network to perform a particular machine learning task is described. The method includes receiving a baseline architecture for the neural network, wherein the baseline architecture has a network width dimension, a network depth dimension, and a resolution dimension; receiving data defining a compound coefficient that controls extra computational resources used for scaling the baseline architecture; performing a search to determine baseline width, depth, and resolution coefficients that specify how to assign the extra computational resources to the network width, depth, and resolution dimensions of the baseline architecture, respectively; determining width, depth, and resolution coefficients based on the baseline width, depth, and resolution coefficients and the compound coefficient; and generating the final architecture by scaling the network width, network depth, and resolution dimensions of the baseline architecture based on the corresponding width, depth, and resolution coefficients.
-
Publication No.: US11928574B2
Publication Date: 2024-03-12
Application No.: US18154321
Application Date: 2023-01-13
Applicant: Google LLC
Inventor: Mingxing Tan, Quoc Le, Bo Chen, Vijay Vasudevan, Ruoming Pang
Abstract: The present disclosure is directed to an automated neural architecture search approach for designing new neural network architectures such as, for example, resource-constrained mobile CNN models. In particular, the present disclosure provides systems and methods to perform neural architecture search using a novel factorized hierarchical search space that permits layer diversity throughout the network, thereby striking the right balance between flexibility and search space size. The resulting neural architectures run relatively faster and use relatively fewer computing resources (e.g., less processing power, less memory usage, less power consumption), all while remaining competitive with, or even exceeding, the performance (e.g., accuracy) of current state-of-the-art mobile-optimized models.
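A hedged sketch of what sampling from a factorized hierarchical search space can look like: the network is divided into a handful of blocks, and the search makes choices per block (operation, kernel size, expansion ratio, layer count) rather than per layer. The specific option values below are illustrative assumptions, not the patent's actual search space.

```python
import random

# A factorized hierarchical search space: the network is divided into a
# few blocks, and the search makes choices per block rather than per
# layer, which permits diversity across blocks while keeping the space
# far smaller than a fully layer-wise search.
BLOCK_CHOICES = {
    "op": ["conv", "depthwise_conv", "mbconv"],
    "kernel_size": [3, 5],
    "expansion_ratio": [1, 3, 6],
    "num_layers": [1, 2, 3, 4],
    "skip": [True, False],
}

def sample_architecture(num_blocks=7, rng=random):
    """Sample one candidate: a list of per-block configurations."""
    return [
        {name: rng.choice(options) for name, options in BLOCK_CHOICES.items()}
        for _ in range(num_blocks)
    ]

for i, block in enumerate(sample_architecture()):
    print(f"block {i}: {block}")
```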
-
Publication No.: US20230359862A1
Publication Date: 2023-11-09
Application No.: US18355243
Application Date: 2023-07-19
Applicant: Google LLC
Inventor: Zihang Dai, Mingxing Tan, Quoc V. Le, Hanxiao Liu
Abstract: A computer-implemented method for performing computer vision with reduced computational cost and improved accuracy can include obtaining, by a computing system including one or more computing devices, input data comprising an input tensor having one or more dimensions; providing, by the computing system, the input data to a machine-learned convolutional attention network, the machine-learned convolutional attention network including two or more network stages; and, in response to providing the input data to the machine-learned convolutional attention network, receiving, by the computing system, a machine-learning prediction from the machine-learned convolutional attention network. The convolutional attention network can include at least one attention block, wherein the attention block includes a relative attention mechanism that computes the sum of a static convolution kernel and an adaptive attention matrix. This provides improved generalization, capacity, and efficiency of the convolutional attention network relative to some existing models.
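The key mechanism, summing a static relative-position kernel with an adaptive attention matrix, can be sketched in one dimension as follows; the shapes, names, and 1-D simplification are assumptions for illustration, not the patent's implementation.

```python
import torch
import torch.nn.functional as F

def relative_attention(x, rel_bias):
    """1-D sketch of relative attention: the attention logits are the
    sum of an adaptive, input-dependent attention matrix and a static
    learned kernel indexed only by relative position (the role played
    by the convolution kernel in the abstract).

    x:        (seq_len, dim) input tensor
    rel_bias: (2 * seq_len - 1,) learned static kernel, indexed by the
              relative offset i - j
    """
    seq_len, dim = x.shape
    # Adaptive part: standard scaled dot-product attention logits.
    logits = x @ x.T / dim ** 0.5                        # (L, L)
    # Static part: look up the kernel value for each offset i - j.
    idx = torch.arange(seq_len)
    offsets = idx[:, None] - idx[None, :] + seq_len - 1  # in [0, 2L - 2]
    logits = logits + rel_bias[offsets]
    return F.softmax(logits, dim=-1) @ x

# Usage with random inputs and a zero-initialized kernel:
y = relative_attention(torch.randn(16, 32), torch.zeros(31))
```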
-
Publication No.: US20210383223A1
Publication Date: 2021-12-09
Application No.: US17337834
Application Date: 2021-06-03
Applicant: Google LLC
Inventor: Mingxing Tan, Xuanyi Dong, Wei Yu, Quoc V. Le, Daiyi Peng
Abstract: The present disclosure provides a differentiable joint hyper-parameter and architecture search approach, with some implementations including the idea of discretizing the continuous space into a linear combination of multiple categorical bases. One example element of the proposed approach is the use of weight sharing across all architecture and hyper-parameters, which enables efficient search over the large joint search space. Experimental results on MobileNet/ResNet/EfficientNet/BERT show that the proposed systems significantly improve accuracy by up to 2% on ImageNet and F1 by up to 0.4 on SQuAD, with search cost comparable to training a single model. Compared to other AutoML methods, such as random search or Bayesian methods, the proposed techniques can achieve better accuracy with 10× less compute cost.
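The idea of discretizing a continuous hyper-parameter into a linear combination of categorical basis values can be sketched as a softmax-weighted mixture with learnable logits, trained jointly with the shared model weights. Everything below is an illustrative assumption, not the patent's implementation.

```python
import torch
from torch import nn
import torch.nn.functional as F

class DifferentiableChoice(nn.Module):
    """A categorical choice relaxed into a differentiable one: the
    current value is a softmax-weighted linear combination of a fixed
    basis, so the choice logits can be trained by gradient descent
    jointly with the shared model weights."""

    def __init__(self, basis):
        super().__init__()
        self.register_buffer("basis", torch.tensor(basis))
        # One learnable logit per basis value.
        self.logits = nn.Parameter(torch.zeros(len(basis)))

    def forward(self):
        # Mixture weights over the categorical basis values.
        return (F.softmax(self.logits, dim=0) * self.basis).sum()

# Example: a dropout rate searched over the basis {0.0, 0.1, 0.2, 0.3}.
rate = DifferentiableChoice([0.0, 0.1, 0.2, 0.3])
print(float(rate()))  # 0.15 at initialization (uniform mixture)
```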
-
Publication No.: US10909457B2
Publication Date: 2021-02-02
Application No.: US16751081
Application Date: 2020-01-23
Applicant: Google LLC
Inventor: Mingxing Tan, Quoc V. Le
Abstract: A method for determining a final architecture for a neural network to perform a particular machine learning task is described. The method includes receiving a baseline architecture for the neural network, wherein the baseline architecture has a network width dimension, a network depth dimension, and a resolution dimension; receiving data defining a compound coefficient that controls extra computational resources used for scaling the baseline architecture; performing a search to determine baseline width, depth, and resolution coefficients that specify how to assign the extra computational resources to the network width, depth, and resolution dimensions of the baseline architecture, respectively; determining width, depth, and resolution coefficients based on the baseline width, depth, and resolution coefficients and the compound coefficient; and generating the final architecture by scaling the network width, network depth, and resolution dimensions of the baseline architecture based on the corresponding width, depth, and resolution coefficients.
-
Publication No.: US12293276B2
Publication Date: 2025-05-06
Application No.: US18430483
Application Date: 2024-02-01
Applicant: Google LLC
Inventor: Mingxing Tan, Quoc Le, Bo Chen, Vijay Vasudevan, Ruoming Pang
Abstract: The present disclosure is directed to an automated neural architecture search approach for designing new neural network architectures such as, for example, resource-constrained mobile CNN models. In particular, the present disclosure provides systems and methods to perform neural architecture search using a novel factorized hierarchical search space that permits layer diversity throughout the network, thereby striking the right balance between flexibility and search space size. The resulting neural architectures run relatively faster and use relatively fewer computing resources (e.g., less processing power, less memory usage, less power consumption), all while remaining competitive with, or even exceeding, the performance (e.g., accuracy) of current state-of-the-art mobile-optimized models.
-
Publication No.: US20250077833A1
Publication Date: 2025-03-06
Application No.: US18821971
Application Date: 2024-08-30
Applicant: Google LLC
Inventor: Sheng Li, Norman Paul Jouppi, Quoc V. Le, Mingxing Tan, Ruoming Pang, Liqun Cheng, Andrew Li
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for determining an architecture for a task neural network that is configured to perform a particular machine learning task on a target set of hardware resources. When deployed on a target set of hardware, such as a collection of datacenter accelerators, the task neural network may be capable of performing the particular machine learning task with enhanced accuracy and speed.
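The abstract does not spell out the search objective, but hardware-aware searches of this kind commonly fold latency measured on the target accelerators into the reward. The sketch below shows one such weighted-product objective inside a toy search loop; the reward form, constants, and helper names are all assumptions.

```python
def latency_aware_reward(accuracy, latency_ms, target_ms=10.0, w=-0.07):
    """Weighted-product objective trading accuracy against latency
    measured on the target hardware. The form and constants are
    assumptions, not taken from the abstract."""
    return accuracy * (latency_ms / target_ms) ** w

def search(sample_candidate, evaluate_on_hardware, num_trials=100):
    """Toy search loop. `sample_candidate` proposes an architecture and
    `evaluate_on_hardware` returns (accuracy, latency_ms) for it on the
    target accelerators; both are caller-supplied stand-ins for the
    controller and profiler."""
    best, best_reward = None, float("-inf")
    for _ in range(num_trials):
        candidate = sample_candidate()
        accuracy, latency_ms = evaluate_on_hardware(candidate)
        reward = latency_aware_reward(accuracy, latency_ms)
        if reward > best_reward:
            best, best_reward = candidate, reward
    return best
```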
-
Publication No.: US12131244B2
Publication Date: 2024-10-29
Application No.: US17039178
Application Date: 2020-09-30
Applicant: Google LLC
Inventor: Sheng Li, Norman Paul Jouppi, Quoc V. Le, Mingxing Tan, Ruoming Pang, Liqun Cheng, Andrew Li
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for determining an architecture for a task neural network that is configured to perform a particular machine learning task on a target set of hardware resources. When deployed on a target set of hardware, such as a collection of datacenter accelerators, the task neural network may be capable of performing the particular machine learning task with enhanced accuracy and speed.
-