-
Publication No.: US20230177754A1
Publication Date: 2023-06-08
Application No.: US18085487
Filing Date: 2022-12-20
Applicant: Google LLC
Inventor: Tianhao Zhang , Weilong Yang , Honglak Lee , Hung-Yu Tseng , Irfan Aziz Essa , Lu Jiang
Abstract: A method for generating an output image from an input image and an input text instruction that specifies a location and a modification of an edit applied to the input image using a neural network is described. The neural network includes an image encoder, an image decoder, and an instruction attention network. The method includes receiving the input image and the input text instruction; extracting, from the input image, an input image feature that represents features of the input image using the image encoder; generating a spatial feature and a modification feature from the input text instruction using the instruction attention network; generating an edited image feature from the input image feature, the spatial feature and the modification feature; and generating the output image from the edited image feature using the image decoder.
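The abstract above describes an encoder/decoder architecture in which an instruction attention network splits the text instruction into a spatial ("where to edit") feature and a modification ("what edit to apply") feature, which are combined with the image feature before decoding. Below is a minimal PyTorch sketch of that data flow; the module layouts, feature sizes, and the gating used to combine the features are illustrative assumptions, not the patented implementation.

```python
# Minimal PyTorch sketch of the described pipeline. Module layouts, sizes,
# and the feature-combination step are assumptions for illustration.
import torch
import torch.nn as nn

class InstructionAttention(nn.Module):
    """Splits an encoded text instruction into a spatial feature (where to
    edit) and a modification feature (what edit to apply)."""
    def __init__(self, vocab_size=10000, embed_dim=256, feat_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, feat_dim, batch_first=True)
        self.spatial_attn = nn.Linear(feat_dim, 1)
        self.modif_attn = nn.Linear(feat_dim, 1)

    def forward(self, tokens):                              # (B, T) token ids
        h, _ = self.encoder(self.embed(tokens))             # (B, T, F)
        spatial = (self.spatial_attn(h).softmax(1) * h).sum(1)       # (B, F)
        modification = (self.modif_attn(h).softmax(1) * h).sum(1)    # (B, F)
        return spatial, modification

class TextGuidedEditor(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 4, 2, 1), nn.ReLU())
        self.instruction_attn = InstructionAttention(feat_dim=feat_dim)
        self.image_decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_dim, feat_dim, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(feat_dim, 3, 4, 2, 1), nn.Tanh())

    def forward(self, image, tokens):
        img_feat = self.image_encoder(image)                # (B, F, h, w)
        spatial, modification = self.instruction_attn(tokens)
        # Use the spatial feature as a soft location gate and the
        # modification feature as an additive edit at gated locations.
        gate = torch.sigmoid(
            (img_feat * spatial[..., None, None]).sum(1, keepdim=True))
        edited = img_feat + gate * modification[..., None, None]
        return self.image_decoder(edited)
```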
-
Publication No.: US11562518B2
Publication Date: 2023-01-24
Application No.: US17340671
Filing Date: 2021-06-07
Applicant: Google LLC
Inventor: Tianhao Zhang , Weilong Yang , Honglak Lee , Hung-Yu Tseng , Irfan Aziz Essa , Lu Jiang
Abstract: A method for generating an output image from an input image and an input text instruction that specifies a location and a modification of an edit applied to the input image using a neural network is described. The neural network includes an image encoder, an image decoder, and an instruction attention network. The method includes receiving the input image and the input text instruction; extracting, from the input image, an input image feature that represents features of the input image using the image encoder; generating a spatial feature and a modification feature from the input text instruction using the instruction attention network; generating an edited image feature from the input image feature, the spatial feature and the modification feature; and generating the output image from the edited image feature using the image decoder.
-
Publication No.: US20220391687A1
Publication Date: 2022-12-08
Application No.: US17338093
Filing Date: 2021-06-03
Applicant: Google LLC
Inventor: John Dalton Co-Reyes , Yingjie Miao , Daiyi Peng , Sergey Vladimir Levine , Quoc V. Le , Honglak Lee , Aleksandra Faust
IPC: G06N3/08 , G06F11/34 , G06F16/901
Abstract: Methods, computer systems, and apparatus, including computer programs encoded on computer storage media, for generating and searching reinforcement learning algorithms. In some implementations, a computer-implemented system generates a sequence of candidate reinforcement learning algorithms. Each candidate reinforcement learning algorithm in the sequence is configured to receive an input environment state characterizing a state of an environment and to generate an output that specifies an action to be performed by an agent interacting with the environment. For each candidate reinforcement learning algorithm in the sequence, the system performs a performance evaluation for a set of a plurality of training environments. For each training environment, the system adjusts a set of environment-specific parameters of the candidate reinforcement learning algorithm by performing training of the candidate reinforcement learning algorithm to control a corresponding agent in the training environment. The system generates an environment-specific performance metric for the candidate reinforcement learning algorithm that measures a performance of the candidate reinforcement learning algorithm in controlling the corresponding agent in the training environment as a result of the training. After performing training in the set of training environments, the system generates a summary performance metric for the candidate reinforcement learning algorithm by combining the environment-specific performance metrics generated for the set of training environments. After evaluating each of the candidate reinforcement learning algorithms in the sequence, the system selects one or more output reinforcement learning algorithms from the sequence based on the summary performance metrics of the candidate reinforcement learning algorithms.
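The abstract describes a search loop over candidate reinforcement learning algorithms, each trained and scored per training environment and then ranked by a summary metric. The sketch below captures that loop in Python; `propose_candidate`, `train_in_env`, and `evaluate_in_env` are hypothetical stand-ins for the proposal, training, and scoring procedures, which the abstract does not specify.

```python
# A minimal, framework-agnostic sketch of the described evaluation loop.
# The three callables are hypothetical placeholders, not a published API.
from statistics import mean

def search_rl_algorithms(propose_candidate, training_envs,
                         train_in_env, evaluate_in_env,
                         num_candidates=100, top_k=1):
    scored = []
    for _ in range(num_candidates):
        candidate = propose_candidate()        # next candidate RL algorithm
        env_metrics = []
        for env in training_envs:
            # Adjust environment-specific parameters by training the
            # candidate to control an agent in this environment.
            params = train_in_env(candidate, env)
            # Environment-specific performance metric after training.
            env_metrics.append(evaluate_in_env(candidate, params, env))
        # Summary metric combines the per-environment metrics.
        scored.append((mean(env_metrics), candidate))
    # Select the output algorithm(s) with the best summary metric.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [candidate for _, candidate in scored[:top_k]]
```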
-
Publication No.: US20210383584A1
Publication Date: 2021-12-09
Application No.: US17340671
Filing Date: 2021-06-07
Applicant: Google LLC
Inventor: Tianhao Zhang , Weilong Yang , Honglak Lee , Hung-Yu Tseng , Irfan Aziz Essa , Lu Jiang
Abstract: A method for generating an output image from an input image and an input text instruction that specifies a location and a modification of an edit applied to the input image using a neural network is described. The neural network includes an image encoder, an image decoder, and an instruction attention network. The method includes receiving the input image and the input text instruction; extracting, from the input image, an input image feature that represents features of the input image using the image encoder; generating a spatial feature and a modification feature from the input text instruction using the instruction attention network; generating an edited image feature from the input image feature, the spatial feature and the modification feature; and generating the output image from the edited image feature using the image decoder.
-
Publication No.: US20210101286A1
Publication Date: 2021-04-08
Application No.: US17053335
Filing Date: 2020-02-28
Applicant: Google LLC
Inventor: Honglak Lee , Xinchen Yan , Soeren Pirk , Yunfei Bai , Seyed Mohammad Khansari Zadeh , Yuanzheng Gong , Jasmine Hsu
Abstract: Implementations relate to training a point cloud prediction model that can be utilized to process a single-view two-and-a-half-dimensional (2.5D) observation of an object, to generate a domain-invariant three-dimensional (3D) representation of the object. Implementations additionally or alternatively relate to utilizing the domain-invariant 3D representation to train a robotic manipulation policy model using, as at least part of the input to the robotic manipulation policy model during training, the domain-invariant 3D representations of simulated objects to be manipulated. Implementations additionally or alternatively relate to utilizing the trained robotic manipulation policy model in control of a robot based on output generated by processing generated domain-invariant 3D representations utilizing the robotic manipulation policy model.
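The abstract describes a two-stage pipeline: a point cloud prediction model turns a single-view 2.5D (e.g. RGB-D) observation into a domain-invariant 3D representation, which then serves as input to a robotic manipulation policy. The PyTorch sketch below illustrates that staging; the architectures, point count, and action dimensionality are assumptions for illustration only.

```python
# Illustrative two-stage sketch: 2.5D observation -> 3D point set -> action.
# Network shapes and sizes are assumptions, not the described implementation.
import torch
import torch.nn as nn

class PointCloudPredictor(nn.Module):
    """Maps a single-view RGB-D observation to N predicted 3D points."""
    def __init__(self, num_points=1024):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(128, num_points * 3)
        self.num_points = num_points

    def forward(self, rgbd):                        # (B, 4, H, W)
        feat = self.backbone(rgbd)                  # (B, 128)
        return self.head(feat).view(-1, self.num_points, 3)

class ManipulationPolicy(nn.Module):
    """Maps the domain-invariant 3D representation to an action."""
    def __init__(self, num_points=1024, action_dim=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_points * 3, 256), nn.ReLU(),
            nn.Linear(256, action_dim))

    def forward(self, points):                      # (B, N, 3)
        return self.net(points.flatten(1))

# At control time, the trained predictor feeds the policy:
# action = policy(predictor(rgbd_observation))
```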
-
Publication No.: US12112494B2
Publication Date: 2024-10-08
Application No.: US17053335
Filing Date: 2020-02-28
Applicant: Google LLC
Inventor: Honglak Lee , Xinchen Yan , Soeren Pirk , Yunfei Bai , Seyed Mohammad Khansari Zadeh , Yuanzheng Gong , Jasmine Hsu
CPC classification number: G06T7/55 , B25J9/1605 , B25J9/163 , B25J9/1669 , B25J9/1697 , B25J13/08 , G06F18/2163 , G06T7/50 , G06V20/10 , G06V20/64 , G06T2207/10024 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/20132
Abstract: Implementations relate to training a point cloud prediction model that can be utilized to process a single-view two-and-a-half-dimensional (2.5D) observation of an object, to generate a domain-invariant three-dimensional (3D) representation of the object. Implementations additionally or alternatively relate to utilizing the domain-invariant 3D representation to train a robotic manipulation policy model using, as at least part of the input to the robotic manipulation policy model during training, the domain-invariant 3D representations of simulated objects to be manipulated. Implementations additionally or alternatively relate to utilizing the trained robotic manipulation policy model in control of a robot based on output generated by processing generated domain-invariant 3D representations utilizing the robotic manipulation policy model.
-
Publication No.: US12014446B2
Publication Date: 2024-06-18
Application No.: US17409249
Filing Date: 2021-08-23
Applicant: Google LLC
Inventor: Jing Yu Koh , Honglak Lee , Yinfei Yang , Jason Michael Baldridge , Peter James Anderson
CPC classification number: G06T11/00 , G06F18/213 , G06N3/045 , G06N3/08 , G06T7/10 , G06T15/00 , G06T15/08 , G06T2207/10028 , G06T2207/20081
Abstract: A computing system for generating predicted images along a trajectory of unseen viewpoints. The system can obtain one or more spatial observations of an environment that may be captured from one or more previous camera poses. The system can generate a three-dimensional point cloud for the environment from the one or more spatial observations and the one or more previous camera poses. The system can project the three-dimensional point cloud into two-dimensional space to form one or more guidance spatial observations. The system can process the one or more guidance spatial observations with a machine-learned spatial observation prediction model to generate one or more predicted spatial observations. The system can process the one or more predicted spatial observations and image data with a machine-learned image prediction model to generate one or more predicted images from the target camera pose. The system can output the one or more predicted images.
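The abstract outlines a pipeline of unprojecting prior observations into a point cloud, reprojecting it at the target pose to form a guidance observation, completing that guidance with a spatial observation prediction model, and finally rendering with an image prediction model. The sketch below traces that data flow; the helper functions and both learned models are hypothetical placeholders for components the abstract names but does not specify.

```python
# Hedged data-flow sketch of the described view-prediction pipeline.
# All callables passed in are hypothetical stand-ins.
import numpy as np

def predict_view(depth_maps, prev_poses, target_pose, prev_images,
                 unproject_to_points, project_points,
                 spatial_model, image_model):
    # 1) Build a 3D point cloud from the spatial observations (e.g. depth
    #    maps) captured at the previous camera poses.
    points = np.concatenate(
        [unproject_to_points(d, pose)
         for d, pose in zip(depth_maps, prev_poses)])
    # 2) Project the point cloud into 2D at the target pose to obtain a
    #    (typically sparse, incomplete) guidance spatial observation.
    guidance = project_points(points, target_pose)
    # 3) Complete the guidance with the learned spatial observation model.
    predicted_spatial = spatial_model(guidance)
    # 4) Condition the image prediction model on the predicted spatial
    #    observation and previously seen image data.
    return image_model(predicted_spatial, prev_images)
```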
-
Publication No.: US11992944B2
Publication Date: 2024-05-28
Application No.: US17050546
Filing Date: 2019-05-17
Applicant: Google LLC
Inventor: Honglak Lee , Shixiang Gu , Sergey Levine
IPC: B25J9/16
CPC classification number: B25J9/163
Abstract: Training and/or utilizing a hierarchical reinforcement learning (HRL) model for robotic control. The HRL model can include at least a higher-level policy model and a lower-level policy model. Some implementations relate to technique(s) that enable more efficient off-policy training to be utilized in training of the higher-level policy model and/or the lower-level policy model. Some of those implementations utilize off-policy correction, which re-labels higher-level actions of experience data, generated in the past utilizing a previously trained version of the HRL model, with modified higher-level actions. The modified higher-level actions are then utilized to off-policy train the higher-level policy model. This can enable effective off-policy training despite the lower-level policy model being a different version at training time (relative to the version when the experience data was collected).
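The key mechanism in the abstract is off-policy correction: stored higher-level actions (goals) are relabeled so that logged experience stays consistent with the current lower-level policy. The sketch below shows one common way to realize such relabeling, scoring candidate goals by the likelihood of the logged low-level actions under the current lower-level policy; the candidate sampling scheme and the likelihood interface are illustrative assumptions.

```python
# Sketch of goal relabeling for off-policy correction. The candidate
# sampling and the lowlevel_log_prob interface are illustrative assumptions.
import numpy as np

def relabel_goal(lowlevel_log_prob, states, actions, original_goal,
                 num_candidates=8, noise_scale=1.0, rng=np.random):
    """lowlevel_log_prob(state, goal, action) -> log pi_lo(action | state, goal)."""
    # Candidate higher-level actions: the original goal plus perturbations.
    candidates = [original_goal] + [
        original_goal + noise_scale * rng.standard_normal(original_goal.shape)
        for _ in range(num_candidates)]
    # Score each candidate by the log-likelihood of the logged low-level
    # action sequence under the *current* lower-level policy.
    scores = [sum(lowlevel_log_prob(s, g, a) for s, a in zip(states, actions))
              for g in candidates]
    # The best-scoring candidate replaces the stored higher-level action
    # before the transition is reused for off-policy training.
    return candidates[int(np.argmax(scores))]
```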
-
Publication No.: US20230072293A1
Publication Date: 2023-03-09
Application No.: US17409249
Filing Date: 2021-08-23
Applicant: Google LLC
Inventor: Jing Yu Koh , Honglak Lee , Yinfei Yang , Jason Michael Baldridge , Peter James Anderson
Abstract: A computing system for generating predicted images along a trajectory of unseen viewpoints. The system can obtain one or more spatial observations of an environment that may be captured from one or more previous camera poses. The system can generate a three-dimensional point cloud for the environment from the one or more spatial observations and the one or more previous camera poses. The system can project the three-dimensional point cloud into two-dimensional space to form one or more guidance spatial observations. The system can process the one or more guidance spatial observations with a machine-learned spatial observation prediction model to generate one or more predicted spatial observations. The system can process the one or more predicted spatial observations and image data with a machine-learned image prediction model to generate one or more predicted images from the target camera pose. The system can output the one or more predicted images.
-
Publication No.: US20210187733A1
Publication Date: 2021-06-24
Application No.: US17050546
Filing Date: 2019-05-17
Applicant: Google LLC
Inventor: Honglak Lee , Shixiang Gu , Sergey Levine
IPC: B25J9/16
Abstract: Training and/or utilizing a hierarchical reinforcement learning (HRL) model for robotic control. The HRL model can include at least a higher-level policy model and a lower-level policy model. Some implementations relate to technique(s) that enable more efficient off-policy training to be utilized in training of the higher-level policy model and/or the lower-level policy model. Some of those implementations utilize off-policy correction, which re-labels higher-level actions of experience data, generated in the past utilizing a previously trained version of the HRL model, with modified higher-level actions. The modified higher-level actions are then utilized to off-policy train the higher-level policy model. This can enable effective off-policy training despite the lower-level policy model being a different version at training time (relative to the version when the experience data was collected).