-
Publication No.: US11922288B2
Publication Date: 2024-03-05
Application No.: US18114333
Filing Date: 2023-02-27
Applicant: Google LLC
Inventor: Francois Chollet , Andrew Gerald Howard
IPC: G06N3/045 , G06F18/2413 , G06N3/0464 , G06N3/08 , G06V10/44 , G06V10/82 , G06V40/16
CPC classification number: G06N3/045 , G06F18/2413 , G06N3/0464 , G06N3/08 , G06V10/44 , G06V10/454 , G06V10/82 , G06V40/169
Abstract: A neural network system is configured to receive an input image and to generate a classification output for the input image. The neural network system includes: a separable convolution subnetwork comprising a plurality of separable convolutional neural network layers arranged in a stack one after the other, in which each separable convolutional neural network layer is configured to: separately apply both a depthwise convolution and a pointwise convolution during processing of an input to the separable convolutional neural network layer to generate a layer output.
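The separable convolution described above factors a standard convolution into a per-channel (depthwise) spatial filter followed by a 1x1 (pointwise) channel-mixing step. A minimal NumPy sketch of that two-stage layer, not the patented implementation (no padding, stride, or bias; names are illustrative):

```python
import numpy as np

def depthwise_separable_conv(x, depthwise_k, pointwise_w):
    """Apply a depthwise convolution, then a pointwise (1x1) convolution.

    x:            (H, W, C_in) input feature map
    depthwise_k:  (kH, kW, C_in) one spatial filter per input channel
    pointwise_w:  (C_in, C_out) 1x1 convolution mixing channels per pixel
    """
    H, W, C = x.shape
    kH, kW, _ = depthwise_k.shape
    oH, oW = H - kH + 1, W - kW + 1          # "valid" padding
    dw = np.zeros((oH, oW, C))
    for i in range(oH):
        for j in range(oW):
            patch = x[i:i + kH, j:j + kW, :]             # (kH, kW, C)
            # Depthwise: each channel sees only its own filter.
            dw[i, j, :] = np.sum(patch * depthwise_k, axis=(0, 1))
    # Pointwise: a per-pixel linear map across channels.
    return dw @ pointwise_w                              # (oH, oW, C_out)
```

Compared with a full kH x kW x C_in x C_out convolution, this factorization needs far fewer multiply-adds, which is the efficiency motivation behind the stacked separable layers in the abstract.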
-
Publication No.: US20230091374A1
Publication Date: 2023-03-23
Application No.: US17802060
Filing Date: 2020-02-24
Applicant: Google LLC
Inventor: Qifei Wang , Alexander Kuznetsov , Alec Michael Go , Grace Chu , Eunyoung Kim , Feng Yang , Andrew Gerald Howard , Jeffrey M. Gilbert
IPC: G06V30/413 , G06V10/22
Abstract: The present disclosure is directed to object and/or character recognition for use in applications such as computer vision. Advantages of the present disclosure include lightweight functionality that can be used on devices such as smart phones. Aspects of the present disclosure include a sequential architecture where a lightweight machine-learned model can receive an image, detect whether an object is present in one or more regions of the image, and generate an output based on the detection. This output can be applied as a filter to remove image data that can be neglected for more memory intensive machine-learned models applied downstream.
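The sequential architecture above can be pictured as a cheap first-pass filter: a lightweight model scores regions of the image, and only regions likely to contain an object are forwarded to the heavier downstream model. A hypothetical sketch (the scoring function and threshold are illustrative, not from the disclosure):

```python
def filter_regions(regions, score_fn, threshold=0.5):
    """Keep only regions the lightweight detector flags as containing an
    object; everything else is dropped before the memory-intensive
    downstream model ever sees it."""
    return [r for r in regions if score_fn(r) >= threshold]
```

Usage: `filter_regions(image_tiles, light_model_score)` would shrink the workload of the downstream recognizer to the surviving tiles.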
-
Publication No.: US20190147318A1
Publication Date: 2019-05-16
Application No.: US15898566
Filing Date: 2018-02-17
Applicant: Google LLC
Inventor: Andrew Gerald Howard , Mark Sandler , Liang-Chieh Chen , Andrey Zhmoginov , Menglong Zhu
Abstract: The present disclosure is directed to new, more efficient neural network architectures. As one example, in some implementations, the neural network architectures of the present disclosure can include a linear bottleneck layer positioned structurally prior to and/or after one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. As another example, in some implementations, the neural network architectures of the present disclosure can include one or more inverted residual blocks where the input and output of the inverted residual block are thin bottleneck layers, while an intermediate layer is an expanded representation. For example, the expanded representation can include one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. A residual shortcut connection can exist between the thin bottleneck layers that play the role of the input and output of the inverted residual block.
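The inverted residual block described above runs thin-in, wide-middle, thin-out, with the shortcut connecting the two thin bottlenecks and no activation on the final projection (the linear bottleneck). A simplified NumPy sketch, assuming a per-channel scale as a stand-in for the depthwise convolution and flattened (N, C) features rather than spatial maps:

```python
import numpy as np

def inverted_residual(x, w_expand, dw_scale, w_project):
    """Inverted residual block sketch.

    x:         (N, C) thin bottleneck input
    w_expand:  (C, E) 1x1 expansion weights, E > C
    dw_scale:  (E,) per-channel scale standing in for the depthwise conv
    w_project: (E, C) linear projection back to the bottleneck width
    """
    h = np.maximum(x @ w_expand, 0.0)    # expand + ReLU
    h = np.maximum(h * dw_scale, 0.0)    # depthwise stage (simplified) + ReLU
    y = h @ w_project                    # linear bottleneck: no activation
    return x + y                         # shortcut between the thin layers
```

Note the residual addition happens at the narrow width, so the expanded intermediate representation never needs to be materialized outside the block.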
-
Publication No.: US20240256833A1
Publication Date: 2024-08-01
Application No.: US18431300
Filing Date: 2024-02-02
Applicant: Google LLC
Inventor: Francois Chollet , Andrew Gerald Howard
IPC: G06N3/045 , G06F18/2413 , G06N3/0464 , G06N3/08 , G06V10/44 , G06V10/82 , G06V40/16
CPC classification number: G06N3/045 , G06F18/2413 , G06N3/0464 , G06N3/08 , G06V10/44 , G06V10/454 , G06V10/82 , G06V40/169
Abstract: A neural network system is configured to receive an input image and to generate a classification output for the input image. The neural network system includes: a separable convolution subnetwork comprising a plurality of separable convolutional neural network layers arranged in a stack one after the other, in which each separable convolutional neural network layer is configured to: separately apply both a depthwise convolution and a pointwise convolution during processing of an input to the separable convolutional neural network layer to generate a layer output.
-
Publication No.: US20230267307A1
Publication Date: 2023-08-24
Application No.: US18014314
Filing Date: 2020-07-23
Applicant: Google LLC
Inventor: Qifei Wang , Junjie Ke , Grace Chu , Gabriel Mintzer Bender , Luciano Sbaiz , Feng Yang , Andrew Gerald Howard , Alec Michael Go , Jeffrey M. Gilbert , Peyman Milanfar , Joshua William Charles Greaves
Abstract: Systems and methods of the present disclosure are directed to a method for generating a machine-learned multitask model configured to perform tasks. The method can include obtaining a machine-learned multitask search model comprising candidate nodes. The method can include obtaining tasks and machine-learned task controller models associated with the tasks. As an example, for a task, the method can include using the task controller model to route a subset of the candidate nodes in a machine-learned task submodel for the corresponding task. The method can include inputting task input data to the task submodel to obtain a task output. The method can include generating, using the task output, a feedback value based on an objective function. The method can include adjusting parameters of the task controller model based on the feedback value.
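The routing loop above — a per-task controller selects a subset of shared candidate nodes, the resulting submodel is evaluated, and the feedback value adjusts the controller — can be sketched abstractly. This is an illustrative toy, assuming a controller parameterized by independent per-node selection probabilities; the actual controller model and objective function are specified in the disclosure, not here:

```python
import random

def sample_routing(probs, rng):
    """The task controller samples which shared candidate nodes to include
    in this task's submodel (True = routed in)."""
    return [rng.random() < p for p in probs]

def update_controller(probs, chosen, feedback, lr=0.1):
    """Nudge each node's selection probability toward choices made in a
    routing that earned positive feedback, clipped to [0, 1]."""
    return [min(1.0, max(0.0, p + lr * feedback * ((1.0 if c else 0.0) - p)))
            for p, c in zip(probs, chosen)]
```

Run over many iterations per task, nodes that consistently help a task's objective are routed in more often, so each task converges on its own subset of the shared search model.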
-
Publication No.: US11676008B2
Publication Date: 2023-06-13
Application No.: US16577698
Filing Date: 2019-09-20
Applicant: Google LLC
Inventor: Mark Sandler , Andrey Zhmoginov , Andrew Gerald Howard , Pramod Kaushik Mudrakarta
Abstract: The present disclosure provides systems and methods that enable parameter-efficient transfer learning, multi-task learning, and/or other forms of model re-purposing such as model personalization or domain adaptation. In particular, as one example, a computing system can obtain a machine-learned model that has been previously trained on a first training dataset to perform a first task. The machine-learned model can include a first set of learnable parameters. The computing system can modify the machine-learned model to include a model patch, where the model patch includes a second set of learnable parameters. The computing system can train the machine-learned model on a second training dataset to perform a second task that is different from the first task, which may include learning new values for the second set of learnable parameters included in the model patch while keeping at least some (e.g., all) of the first set of parameters fixed.
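The model-patch idea above — keep the first parameter set frozen, learn only a small second set on the new task — can be shown in miniature. A toy sketch, assuming a linear model whose "patch" is a single bias term trained by gradient descent on squared error (a real patch would typically cover layers such as biases or normalization parameters):

```python
import numpy as np

def train_patch(w_frozen, X, y, steps=200, lr=0.1):
    """Re-purpose a model trained on task 1 for task 2 by learning only a
    small model patch (here, one bias b) while w_frozen stays fixed."""
    b = 0.0
    for _ in range(steps):
        residual = X @ w_frozen + b - y     # prediction error on task 2
        b -= lr * 2.0 * residual.mean()     # gradient step on the patch only
    return b
```

Because only the patch parameters receive gradient updates, the per-task storage and training cost is a small fraction of retraining the full model.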
-
Publication No.: US11823024B2
Publication Date: 2023-11-21
Application No.: US17382503
Filing Date: 2021-07-22
Applicant: Google LLC
Inventor: Andrew Gerald Howard , Mark Sandler , Liang-Chieh Chen , Andrey Zhmoginov , Menglong Zhu
Abstract: The present disclosure is directed to new, more efficient neural network architectures. As one example, in some implementations, the neural network architectures of the present disclosure can include a linear bottleneck layer positioned structurally prior to and/or after one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. As another example, in some implementations, the neural network architectures of the present disclosure can include one or more inverted residual blocks where the input and output of the inverted residual block are thin bottleneck layers, while an intermediate layer is an expanded representation. For example, the expanded representation can include one or more convolutional layers, such as, for example, one or more depthwise separable convolutional layers. A residual shortcut connection can exist between the thin bottleneck layers that play the role of the input and output of the inverted residual block.
-
Publication No.: US20230267330A1
Publication Date: 2023-08-24
Application No.: US18310638
Filing Date: 2023-05-02
Applicant: Google LLC
Inventor: Mark Sandler , Andrew Gerald Howard , Andrey Zhmoginov , Pramod Kaushik Mudrakarta
Abstract: The present disclosure provides systems and methods that enable parameter-efficient transfer learning, multi-task learning, and/or other forms of model re-purposing such as model personalization or domain adaptation. In particular, as one example, a computing system can obtain a machine-learned model that has been previously trained on a first training dataset to perform a first task. The machine-learned model can include a first set of learnable parameters. The computing system can modify the machine-learned model to include a model patch, where the model patch includes a second set of learnable parameters. The computing system can train the machine-learned model on a second training dataset to perform a second task that is different from the first task, which may include learning new values for the second set of learnable parameters included in the model patch while keeping at least some (e.g., all) of the first set of parameters fixed.
-
Publication No.: US20230237314A1
Publication Date: 2023-07-27
Application No.: US18114333
Filing Date: 2023-02-27
Applicant: Google LLC
Inventor: Francois Chollet , Andrew Gerald Howard
IPC: G06N3/0464 , G06V10/82
CPC classification number: G06N3/0464 , G06V10/82
Abstract: A neural network system is configured to receive an input image and to generate a classification output for the input image. The neural network system includes: a separable convolution subnetwork comprising a plurality of separable convolutional neural network layers arranged in a stack one after the other, in which each separable convolutional neural network layer is configured to: separately apply both a depthwise convolution and a pointwise convolution during processing of an input to the separable convolutional neural network layer to generate a layer output.
-
Publication No.: US20210027140A1
Publication Date: 2021-01-28
Application No.: US16338963
Filing Date: 2017-10-06
Applicant: Google LLC
Inventor: Francois Chollet , Andrew Gerald Howard
Abstract: A neural network system is configured to receive an input image and to generate a classification output for the input image. The neural network system includes: a separable convolution subnetwork comprising a plurality of separable convolutional neural network layers arranged in a stack one after the other, in which each separable convolutional neural network layer is configured to: separately apply both a depthwise convolution and a pointwise convolution during processing of an input to the separable convolutional neural network layer to generate a layer output.