1.
Publication No.: US11875262B2
Publication Date: 2024-01-16
Application No.: US17701778
Filing Date: 2022-03-23
Applicant: Google LLC
Inventors: Ofir Nachum, Ariel Gordon, Elad Eban, Bo Chen
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training neural networks. In one aspect, a system includes a neural network shrinking engine configured to receive a neural network being trained and generate a reduced neural network through a shrinking process. The shrinking process includes training the neural network with a shrinking engine loss function whose terms penalize active neurons, and then removing inactive neurons from the network. The system includes a neural network expansion engine configured to receive the neural network being trained and generate an expanded neural network through an expansion process that adds new neurons to the network and trains the network with an expanding engine loss function. The system includes a training subsystem that generates reduced neural networks and expanded neural networks.
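The shrinking engine's loss is the heart of this family: a task loss plus terms that penalize active neurons, so that neurons the network does not need fade toward zero and can be removed. Below is a minimal PyTorch sketch of one plausible reading, assuming the magnitude of each BatchNorm scale serves as the activity proxy for its output channel; the function names, the penalty form, and the threshold are illustrative assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def shrinking_loss(model, task_loss, lam=1e-4):
    """Task loss plus terms penalizing active neurons.

    Assumed proxy: a neuron (output channel) counts as "active" through
    the magnitude of its BatchNorm scale, so an L1 penalty on those
    scales drives unneeded neurons toward zero.
    """
    activity = sum(m.weight.abs().sum()
                   for m in model.modules()
                   if isinstance(m, nn.BatchNorm2d))
    return task_loss + lam * activity

def inactive_neurons(model, threshold=1e-3):
    """Channels whose scale has collapsed below the threshold; the
    shrinking step would remove these from the network."""
    return {name: (m.weight.abs() < threshold).nonzero().flatten().tolist()
            for name, m in model.named_modules()
            if isinstance(m, nn.BatchNorm2d)}

# Minimal usage: one conv block, one training step under the penalty.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),
                      nn.BatchNorm2d(16), nn.ReLU())
x, target = torch.randn(8, 3, 32, 32), torch.randn(8, 16, 32, 32)
loss = shrinking_loss(model, F.mse_loss(model(x), target))
loss.backward()
print(inactive_neurons(model))
```

An expansion phase would then add new neurons (e.g., widen the pruned layers) and continue training under its own loss, with the training subsystem alternating the two engines.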
2.
Publication No.: US20220101090A1
Publication Date: 2022-03-31
Application No.: US17495398
Filing Date: 2021-10-06
Applicant: Google LLC
Inventors: Mingxing Tan, Quoc V. Le, Bo Chen, Vijay Vasudevan, Ruoming Pang
Abstract: The present disclosure is directed to an automated neural architecture search approach for designing new neural network architectures such as, for example, resource-constrained mobile CNN models. In particular, the present disclosure provides systems and methods for performing neural architecture search over a novel factorized hierarchical search space that permits layer diversity throughout the network, striking a balance between flexibility and search space size. The resulting neural architectures can run faster and use fewer computing resources (e.g., less processing power, less memory, less power consumption) while remaining competitive with, or even exceeding, the performance (e.g., accuracy) of current state-of-the-art mobile-optimized models.
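The phrase "factorized hierarchical search space" is doing real work here: rather than searching for one cell and repeating it everywhere, the network is partitioned into blocks and each block makes its own independent choices, which is what permits layer diversity. A toy Python sketch of such a space follows; the option values are illustrative, not the patent's actual choices.

```python
import random

# Per-block decision space. Factorizing the search this way sits between
# a single shared cell (no diversity) and fully per-layer choices
# (intractably large). These option values are illustrative only.
BLOCK_CHOICES = {
    "conv_op":     ["conv", "dwconv", "mbconv"],
    "kernel_size": [3, 5],
    "expansion":   [1, 3, 6],
    "num_layers":  [1, 2, 3, 4],
    "skip":        [True, False],
}

def sample_architecture(num_blocks=7, rng=random):
    """Draw one candidate: an independent choice set per block, so
    different stages of the network can use different layer types."""
    return [{k: rng.choice(v) for k, v in BLOCK_CHOICES.items()}
            for _ in range(num_blocks)]

for i, block in enumerate(sample_architecture()):
    print(f"block {i}: {block}")
```

With these toy options a block has 3 × 2 × 3 × 4 × 2 = 144 configurations, so seven blocks yield 144^7 candidates: vastly more diverse than one shared cell, yet far smaller than letting every individual layer choose independently.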
3.
Publication No.: US20190147339A1
Publication Date: 2019-05-16
Application No.: US15813961
Filing Date: 2017-11-15
Applicant: Google LLC
Inventors: Ofir Nachum, Ariel Gordon, Elad Eban, Bo Chen
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training neural networks. In one aspect, a system includes a neural network shrinking engine configured to receive a neural network being trained and generate a reduced neural network through a shrinking process. The shrinking process includes training the neural network with a shrinking engine loss function whose terms penalize active neurons, and then removing inactive neurons from the network. The system includes a neural network expansion engine configured to receive the neural network being trained and generate an expanded neural network through an expansion process that adds new neurons to the network and trains the network with an expanding engine loss function. The system includes a training subsystem that generates reduced neural networks and expanded neural networks.
4.
Publication No.: US20240273336A1
Publication Date: 2024-08-15
Application No.: US18430483
Filing Date: 2024-02-01
Applicant: Google LLC
Inventors: Mingxing Tan, Quoc Le, Bo Chen, Vijay Vasudevan, Ruoming Pang
Abstract: The present disclosure is directed to an automated neural architecture search approach for designing new neural network architectures such as, for example, resource-constrained mobile CNN models. In particular, the present disclosure provides systems and methods for performing neural architecture search over a novel factorized hierarchical search space that permits layer diversity throughout the network, striking a balance between flexibility and search space size. The resulting neural architectures can run faster and use fewer computing resources (e.g., less processing power, less memory, less power consumption) while remaining competitive with, or even exceeding, the performance (e.g., accuracy) of current state-of-the-art mobile-optimized models.
5.
Publication No.: US12293276B2
Publication Date: 2025-05-06
Application No.: US18430483
Filing Date: 2024-02-01
Applicant: Google LLC
Inventors: Mingxing Tan, Quoc Le, Bo Chen, Vijay Vasudevan, Ruoming Pang
Abstract: The present disclosure is directed to an automated neural architecture search approach for designing new neural network architectures such as, for example, resource-constrained mobile CNN models. In particular, the present disclosure provides systems and methods for performing neural architecture search over a novel factorized hierarchical search space that permits layer diversity throughout the network, striking a balance between flexibility and search space size. The resulting neural architectures can run faster and use fewer computing resources (e.g., less processing power, less memory, less power consumption) while remaining competitive with, or even exceeding, the performance (e.g., accuracy) of current state-of-the-art mobile-optimized models.
6.
Publication No.: US11157815B2
Publication Date: 2021-10-26
Application No.: US16524410
Filing Date: 2019-07-29
Applicant: Google LLC
Inventors: Andrew Gerald Howard, Bo Chen, Dmitry Kalenichenko, Tobias Christoph Weyand, Menglong Zhu, Marco Andreetto, Weijun Wang
Abstract: The present disclosure provides systems and methods to reduce the computational costs associated with convolutional neural networks. In addition, the present disclosure provides a class of efficient models termed "MobileNets" for mobile and embedded vision applications. MobileNets are based on a straightforward architecture that uses depthwise separable convolutions to build lightweight deep neural networks. The present disclosure further provides two global hyper-parameters that efficiently trade off latency against accuracy, allowing the entity building the model to select an appropriately sized model for a particular application based on the constraints of the problem. MobileNets and the associated computational cost reduction techniques are effective across a wide range of applications and use cases.
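A depthwise separable convolution factorizes a standard convolution into a per-channel spatial filter followed by a 1×1 pointwise mix, cutting the multiply-adds of a 3×3 layer to roughly 1/9 + 1/out_channels of the original. Below is a minimal PyTorch sketch of the block together with the two global hyper-parameters (the MobileNets paper calls them the width multiplier α and resolution multiplier ρ); the specific values are illustrative.

```python
import torch
import torch.nn as nn

def depthwise_separable(in_ch, out_ch, stride=1):
    """Depthwise 3x3 (one filter per input channel) followed by a
    pointwise 1x1 that mixes channels; together they approximate a
    standard 3x3 convolution at a fraction of the cost."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

# The two global hyper-parameters: the width multiplier thins every
# layer's channel count, the resolution multiplier shrinks the input;
# each trades accuracy for latency. Values here are illustrative.
width_mult, res_mult = 0.75, 192 / 224
block = depthwise_separable(int(32 * width_mult), int(64 * width_mult))
x = torch.randn(1, int(32 * width_mult),
                round(224 * res_mult), round(224 * res_mult))
print(block(x).shape)  # torch.Size([1, 48, 192, 192])
```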
7.
Publication No.: US11928574B2
Publication Date: 2024-03-12
Application No.: US18154321
Filing Date: 2023-01-13
Applicant: Google LLC
Inventors: Mingxing Tan, Quoc Le, Bo Chen, Vijay Vasudevan, Ruoming Pang
Abstract: The present disclosure is directed to an automated neural architecture search approach for designing new neural network architectures such as, for example, resource-constrained mobile CNN models. In particular, the present disclosure provides systems and methods for performing neural architecture search over a novel factorized hierarchical search space that permits layer diversity throughout the network, striking a balance between flexibility and search space size. The resulting neural architectures can run faster and use fewer computing resources (e.g., less processing power, less memory, less power consumption) while remaining competitive with, or even exceeding, the performance (e.g., accuracy) of current state-of-the-art mobile-optimized models.
8.
Publication No.: US20220215263A1
Publication Date: 2022-07-07
Application No.: US17701778
Filing Date: 2022-03-23
Applicant: Google LLC
Inventors: Ofir Nachum, Ariel Gordon, Elad Eban, Bo Chen
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training neural networks. In one aspect, a system includes a neural network shrinking engine configured to receive a neural network being trained and generate a reduced neural network through a shrinking process. The shrinking process includes training the neural network with a shrinking engine loss function whose terms penalize active neurons, and then removing inactive neurons from the network. The system includes a neural network expansion engine configured to receive the neural network being trained and generate an expanded neural network through an expansion process that adds new neurons to the network and trains the network with an expanding engine loss function. The system includes a training subsystem that generates reduced neural networks and expanded neural networks.
9.
Publication No.: US11315019B2
Publication Date: 2022-04-26
Application No.: US15813961
Filing Date: 2017-11-15
Applicant: Google LLC
Inventors: Ofir Nachum, Ariel Gordon, Elad Eban, Bo Chen
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training neural networks. In one aspect, a system includes a neural network shrinking engine configured to receive a neural network being trained and generate a reduced neural network through a shrinking process. The shrinking process includes training the neural network with a shrinking engine loss function whose terms penalize active neurons, and then removing inactive neurons from the network. The system includes a neural network expansion engine configured to receive the neural network being trained and generate an expanded neural network through an expansion process that adds new neurons to the network and trains the network with an expanding engine loss function. The system includes a training subsystem that generates reduced neural networks and expanded neural networks.
10.
Publication No.: US20230244904A1
Publication Date: 2023-08-03
Application No.: US18154321
Filing Date: 2023-01-13
Applicant: Google LLC
Inventors: Mingxing Tan, Quoc Le, Bo Chen, Vijay Vasudevan, Ruoming Pang
Abstract: The present disclosure is directed to an automated neural architecture search approach for designing new neural network architectures such as, for example, resource-constrained mobile CNN models. In particular, the present disclosure provides systems and methods for performing neural architecture search over a novel factorized hierarchical search space that permits layer diversity throughout the network, striking a balance between flexibility and search space size. The resulting neural architectures can run faster and use fewer computing resources (e.g., less processing power, less memory, less power consumption) while remaining competitive with, or even exceeding, the performance (e.g., accuracy) of current state-of-the-art mobile-optimized models.