-
Publication No.: US20220036191A1
Publication Date: 2022-02-03
Application No.: US17298853
Application Date: 2019-01-10
Applicant: Google LLC
Inventor: Yair Movshovitz-Attias, Andrew Poon, Ariel Gordon, Elad Edwin Tzvi Eban
Abstract: A computer-implemented method for reducing the resource consumption of a convolutional neural network can include obtaining data descriptive of the convolutional neural network. The convolutional neural network can include a plurality of convolutional layers configured to perform convolutions using a plurality of kernels that each includes a plurality of kernel elements. The method can include training, for one or more training iterations, the convolutional neural network using a loss function that includes a group sparsifying regularizer term configured to sparsify a respective subset of the kernel elements of the kernel(s); following at least one training iteration, determining, for each of the kernel(s), whether to modify such kernel to remove the respective subset of the kernel elements based at least in part on respective values of the respective subset of kernel elements; and modifying at least one of the kernel(s) to remove the respective subset of the kernel elements.
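The abstract's group-sparsifying regularizer can be illustrated with a minimal NumPy sketch. This is not the patented implementation: the group-lasso penalty, the index-group representation, and the function names are illustrative assumptions. It shows the two steps the abstract describes: penalizing designated subsets of kernel elements during training, then removing (here, zeroing) a subset whose values have become negligible.

```python
import numpy as np

def group_sparsity_penalty(kernels, group_indices):
    # Group-lasso regularizer term: summing the L2 norm of each designated
    # subset of kernel elements pushes whole subsets toward zero together.
    return sum(
        float(np.linalg.norm(k.ravel()[idx]))
        for k in kernels
        for idx in group_indices
    )

def prune_kernel(kernel, group_indices, threshold=1e-3):
    # Following training, a subset whose norm fell below the threshold is
    # removed (zeroed here), shrinking the kernel's effective footprint.
    flat = kernel.ravel().copy()
    for idx in group_indices:
        if np.linalg.norm(flat[idx]) < threshold:
            flat[idx] = 0.0
    return flat.reshape(kernel.shape)
```

For example, grouping the outer ring of a 3x3 kernel and pruning it when its norm is small effectively reduces the kernel to 1x1, which is one way such pruning cuts resource consumption.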
-
Publication No.: US11544498B2
Publication Date: 2023-01-03
Application No.: US17194090
Application Date: 2021-03-05
Applicant: Google LLC
Inventor: Ariel Gordon, Soeren Pirk, Anelia Angelova, Vincent Michael Casser, Yao Lu, Anthony Brohan, Zhao Chen, Jan Dlabal
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network using consistency measures. One of the methods includes processing a particular training example from a mediator training data set using a first neural network to generate a first output for a first machine learning task; processing the particular training example in the mediator training data set using each of one or more second neural networks, wherein each second neural network is configured to generate a second output for a respective second machine learning task; determining, for each second machine learning task, a consistency target output for the first machine learning task; determining, for each second machine learning task, an error between the first output and the consistency target output corresponding to the second machine learning task; and generating a parameter update for the first neural network from the determined errors.
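The update step in this abstract can be sketched in a few lines. A minimal sketch, assuming the first network is a linear model (purely an illustrative stand-in) and that each second task contributes one scalar consistency target; the squared-error form and the function name are assumptions, not the claimed method.

```python
import numpy as np

def consistency_update(w, x, consistency_targets, lr=0.1):
    # One training step for the "first" network on a mediator example x,
    # driven only by consistency targets derived from the second tasks.
    y = w @ x                          # first output for the first task
    grad = np.zeros_like(w)
    for t in consistency_targets:      # one error term per second task
        grad += 2.0 * (y - t) * x      # d/dw of the error (y - t)^2
    return w - lr * grad / len(consistency_targets)
```

Each second task contributes its own error between the first output and that task's consistency target, and the parameter update aggregates them, mirroring the sequence of steps in the abstract.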
-
Publication No.: US20210117728A1
Publication Date: 2021-04-22
Application No.: US16657042
Application Date: 2019-10-18
Applicant: Google LLC
Inventor: Joonseok Lee, Balakrishnan Varadarajan, Ariel Gordon, Apostol Ivanov Natsev, Seong Jae Hwang
Abstract: A MapReduce-based training framework exploits both data parallelism and model parallelism to scale training of complex models. Particular model architectures facilitate and benefit from use of such training framework. As one example, a machine-learned model can include a shared feature extraction portion configured to receive and process a data input to produce an intermediate feature representation and a plurality of prediction heads that are configured to receive and process the intermediate feature representation to respectively produce a plurality of predictions. For example, the data input can be a video and the plurality of predictions can be a plurality of classifications for content of the video (e.g., relative to a plurality of classes).
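The shared-extractor-plus-heads architecture from the abstract can be sketched as follows. This is a toy forward pass only, assuming tiny linear layers and a tanh feature extractor; the MapReduce scheduling itself is reduced to a Python list comprehension, so the function names and shapes are illustrative.

```python
import numpy as np

def shared_features(x, w_shared):
    # Shared feature-extraction portion: one intermediate feature
    # representation computed once per data input.
    return np.tanh(w_shared @ x)

def map_step(x, w_shared, head_weights):
    # "Map" phase: the intermediate representation fans out to every
    # prediction head, each of which could run on a separate worker
    # (model parallelism); distinct inputs x can likewise be sharded
    # across workers (data parallelism).
    f = shared_features(x, w_shared)
    return [w_h @ f for w_h in head_weights]
```

In the video-classification example, each entry of the returned list would be one head's classification score for the same video's shared features.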
-
Publication No.: US20220215263A1
Publication Date: 2022-07-07
Application No.: US17701778
Application Date: 2022-03-23
Applicant: Google LLC
Inventor: Ofir Nachum, Ariel Gordon, Elad Eban, Bo Chen
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training neural networks. In one aspect, a system includes a neural network shrinking engine that is configured to receive a neural network being trained and generate a reduced neural network by a shrinking process. The shrinking process includes training the neural network based on a shrinking engine loss function that includes terms penalizing active neurons of the neural network and removing inactive neurons from the neural network. The system includes a neural network expansion engine that is configured to receive the neural network being trained and generate an expanded neural network by an expansion process including adding new neurons to the neural network and training the neural network based on an expanding engine loss function. The system includes a training subsystem that generates reduced neural networks and expanded neural networks.
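The shrink/expand cycle in this abstract can be illustrated with a minimal sketch over a single weight matrix, where each row stands for one neuron. The activity measure, threshold, and initialization scale are assumptions for illustration; the patented engines operate on full networks with their own loss functions.

```python
import numpy as np

def shrink(weights, activity, threshold=1e-3):
    # Shrinking engine step: remove neurons (rows) whose recorded
    # activity is effectively zero after the penalized training phase.
    keep = activity >= threshold
    return weights[keep], activity[keep]

def expand(weights, n_new, rng):
    # Expansion engine step: append freshly initialized neurons for the
    # next round of training under the expanding engine loss function.
    new_rows = rng.normal(scale=0.01, size=(n_new, weights.shape[1]))
    return np.vstack([weights, new_rows])
```

Alternating the two steps yields the sequence of reduced and expanded networks the training subsystem generates.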
-
Publication No.: US11315019B2
Publication Date: 2022-04-26
Application No.: US15813961
Application Date: 2017-11-15
Applicant: Google LLC
Inventor: Ofir Nachum, Ariel Gordon, Elad Eban, Bo Chen
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training neural networks. In one aspect, a system includes a neural network shrinking engine that is configured to receive a neural network being trained and generate a reduced neural network by a shrinking process. The shrinking process includes training the neural network based on a shrinking engine loss function that includes terms penalizing active neurons of the neural network and removing inactive neurons from the neural network. The system includes a neural network expansion engine that is configured to receive the neural network being trained and generate an expanded neural network by an expansion process including adding new neurons to the neural network and training the neural network based on an expanding engine loss function. The system includes a training subsystem that generates reduced neural networks and expanded neural networks.
-
Publication No.: US11295171B2
Publication Date: 2022-04-05
Application No.: US16657042
Application Date: 2019-10-18
Applicant: Google LLC
Inventor: Joonseok Lee, Balakrishnan Varadarajan, Ariel Gordon, Apostol Ivanov Natsev, Seong Jae Hwang
Abstract: A MapReduce-based training framework exploits both data parallelism and model parallelism to scale training of complex models. Particular model architectures facilitate and benefit from use of such training framework. As one example, a machine-learned model can include a shared feature extraction portion configured to receive and process a data input to produce an intermediate feature representation and a plurality of prediction heads that are configured to receive and process the intermediate feature representation to respectively produce a plurality of predictions. For example, the data input can be a video and the plurality of predictions can be a plurality of classifications for content of the video (e.g., relative to a plurality of classes).
-
Publication No.: US11875262B2
Publication Date: 2024-01-16
Application No.: US17701778
Application Date: 2022-03-23
Applicant: Google LLC
Inventor: Ofir Nachum, Ariel Gordon, Elad Eban, Bo Chen
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training neural networks. In one aspect, a system includes a neural network shrinking engine that is configured to receive a neural network being trained and generate a reduced neural network by a shrinking process. The shrinking process includes training the neural network based on a shrinking engine loss function that includes terms penalizing active neurons of the neural network and removing inactive neurons from the neural network. The system includes a neural network expansion engine that is configured to receive the neural network being trained and generate an expanded neural network by an expansion process including adding new neurons to the neural network and training the neural network based on an expanding engine loss function. The system includes a training subsystem that generates reduced neural networks and expanded neural networks.
-
Publication No.: US11769269B2
Publication Date: 2023-09-26
Application No.: US17878535
Application Date: 2022-08-01
Applicant: Google LLC
Inventor: Guy Satat, Michael Quinlan, Sean Kirmani, Anelia Angelova, Ariel Gordon
CPC classification number: G06T7/593, B25J13/089, G05D1/0231, G06T3/20, H04N13/128, G06T2207/10028, H04N2013/0081
Abstract: A method includes receiving a first depth map that includes a plurality of first pixel depths and a second depth map that includes a plurality of second pixel depths. The first depth map corresponds to a reference depth scale and the second depth map corresponds to a relative depth scale. The method includes aligning the second pixel depths with the first pixel depths. The method includes transforming the aligned region of the second pixel depths such that transformed second edge pixel depths of the aligned region are coextensive with first edge pixel depths surrounding the corresponding region of the first pixel depths. The method includes generating a third depth map. The third depth map includes a first region corresponding to the first pixel depths and a second region corresponding to the transformed and aligned region of the second pixel depths.
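The core alignment step can be sketched with a least-squares scale-and-shift fit. This is a simplification: the patent's transform makes the region's edge pixel depths coextensive with the surrounding reference depths, whereas the sketch below fits one global scale and shift over the whole region, and the function names are illustrative.

```python
import numpy as np

def align_depths(reference, relative):
    # Solve (least squares) for a scale a and shift b so that
    # a * relative + b best matches the reference-scale depths,
    # bringing the relative-scale map onto the reference depth scale.
    A = np.stack([relative.ravel(), np.ones(relative.size)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, reference.ravel(), rcond=None)
    return a * relative + b

def merge_depths(reference, aligned, region_mask):
    # Third depth map: reference pixel depths everywhere, with the
    # aligned region substituted where the mask is set.
    merged = reference.copy()
    merged[region_mask] = aligned[region_mask]
    return merged
```

When the relative map really is a scaled-and-shifted copy of the reference, the fit recovers it exactly, and the merged map matches the reference in both regions.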
-
Publication No.: US20210279511A1
Publication Date: 2021-09-09
Application No.: US17194090
Application Date: 2021-03-05
Applicant: Google LLC
Inventor: Ariel Gordon, Soeren Pirk, Anelia Angelova, Vincent Michael Casser, Yao Lu, Anthony Brohan, Zhao Chen, Jan Dlabal
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network using consistency measures. One of the methods includes processing a particular training example from a mediator training data set using a first neural network to generate a first output for a first machine learning task; processing the particular training example in the mediator training data set using each of one or more second neural networks, wherein each second neural network is configured to generate a second output for a respective second machine learning task; determining, for each second machine learning task, a consistency target output for the first machine learning task; determining, for each second machine learning task, an error between the first output and the consistency target output corresponding to the second machine learning task; and generating a parameter update for the first neural network from the determined errors.
-
Publication No.: US20190147339A1
Publication Date: 2019-05-16
Application No.: US15813961
Application Date: 2017-11-15
Applicant: Google LLC
Inventor: Ofir Nachum, Ariel Gordon, Elad Eban, Bo Chen
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training neural networks. In one aspect, a system includes a neural network shrinking engine that is configured to receive a neural network being trained and generate a reduced neural network by a shrinking process. The shrinking process includes training the neural network based on a shrinking engine loss function that includes terms penalizing active neurons of the neural network and removing inactive neurons from the neural network. The system includes a neural network expansion engine that is configured to receive the neural network being trained and generate an expanded neural network by an expansion process including adding new neurons to the neural network and training the neural network based on an expanding engine loss function. The system includes a training subsystem that generates reduced neural networks and expanded neural networks.