-
Publication number: US20200338722A1
Publication date: 2020-10-29
Application number: US16622309
Filing date: 2018-06-28
Applicant: Google LLC
Inventor: Eric Jang , Sudheendra Vijayanarasimhan , Peter Pastor Sampedro , Julian Ibarz , Sergey Levine
Abstract: Deep machine learning methods and apparatus related to semantic robotic grasping are provided. Some implementations relate to training a grasp neural network, a semantic neural network, and a joint neural network of a semantic grasping model. In some of those implementations, the joint network is a deep neural network and can be trained based on both: grasp losses generated based on grasp predictions generated over the grasp neural network, and semantic losses generated based on semantic predictions generated over the semantic neural network. Some implementations are directed to utilization of the trained semantic grasping model to servo, or control, a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
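The joint training signal described above can be illustrated with a minimal sketch. This is a hypothetical toy, not the patented implementation: the function names, the scalar losses, and the equal weighting are all assumptions. The key idea is that the shared joint network receives gradients from both a grasp loss (did the grasp succeed?) and a semantic loss (which object class was grasped?).

```python
# Toy sketch of a combined grasp + semantic loss, assuming scalar predictions.
import math

def binary_cross_entropy(pred: float, label: float) -> float:
    """Grasp loss: predicted grasp-success probability vs. actual outcome."""
    eps = 1e-7
    pred = min(max(pred, eps), 1.0 - eps)
    return -(label * math.log(pred) + (1.0 - label) * math.log(1.0 - pred))

def cross_entropy(probs: list, true_class: int) -> float:
    """Semantic loss: predicted class distribution vs. the object's class."""
    return -math.log(max(probs[true_class], 1e-7))

def joint_loss(grasp_pred, grasp_label, sem_probs, sem_class,
               w_grasp=1.0, w_sem=1.0):
    """Both losses would back-propagate into the shared joint network."""
    return (w_grasp * binary_cross_entropy(grasp_pred, grasp_label)
            + w_sem * cross_entropy(sem_probs, sem_class))

# A confident, mostly-correct prediction yields a small combined loss.
loss = joint_loss(grasp_pred=0.9, grasp_label=1.0,
                  sem_probs=[0.1, 0.8, 0.1], sem_class=1)
```

In an actual deep-learning framework the two losses would be summed into one scalar before the backward pass, so the joint network's parameters are updated by both objectives at once.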
-
Publication number: US20220105624A1
Publication date: 2022-04-07
Application number: US17422260
Filing date: 2020-01-23
Applicant: Google LLC
Inventor: Mrinal Kalakrishnan , Yunfei Bai , Paul Wohlhart , Eric Jang , Chelsea Finn , Seyed Mohammad Khansari Zadeh , Sergey Levine , Allan Zhou , Alexander Herzog , Daniel Kappler
IPC: B25J9/16
Abstract: Techniques are disclosed that enable training a meta-learning model, for use in causing a robot to perform a task, using imitation learning as well as reinforcement learning. Some implementations relate to training the meta-learning model using imitation learning based on one or more human-guided demonstrations of the task. Additional or alternative implementations relate to training the meta-learning model using reinforcement learning based on trials of the robot attempting to perform the task. Further implementations relate to using the trained meta-learning model to few-shot (or one-shot) learn a new task based on a human-guided demonstration of the new task.
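The two training signals described in this abstract can be sketched with a deliberately simplified example. Everything here is an illustrative assumption (a scalar policy parameter, squared-error losses, a reward-weighted regression stand-in for the RL term), not the patented method; it only shows how an imitation loss from a human demonstration and an RL signal from the robot's own trial can drive one combined update.

```python
# Hypothetical sketch: one parameter update driven by both an imitation
# (behavioral-cloning) loss and a reward-weighted reinforcement signal.

def meta_step(theta, demo_action, trial_action, reward, lr=0.1):
    """One gradient step on a trivial scalar policy: policy(theta) = theta.

    Loss = (theta - demo_action)^2            # imitation term
         + reward * (theta - trial_action)^2  # reward-weighted RL term
    """
    grad = 2 * (theta - demo_action) + 2 * reward * (theta - trial_action)
    return theta - lr * grad

theta = 0.0
for _ in range(50):
    # One human demonstration (action 1.0) plus one robot trial
    # (action 0.8 that earned reward 0.5).
    theta = meta_step(theta, demo_action=1.0, trial_action=0.8, reward=0.5)
```

The parameter converges to a reward-weighted compromise between the demonstrated and the trialed action; in the actual system, the analogous update would be applied to deep-network weights rather than a scalar.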
-
Publication number: US20180147723A1
Publication date: 2018-05-31
Application number: US15881189
Filing date: 2018-01-26
Applicant: Google LLC
Inventor: Sudheendra Vijayanarasimhan , Eric Jang , Peter Pastor Sampedro , Sergey Levine
CPC classification number: B25J9/163 , B25J9/1612 , B25J9/1697 , G05B13/027 , G05B19/18 , G06N3/008 , G06N3/0454 , G06N3/08 , G06N3/084 , Y10S901/36
Abstract: Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot are provided. Some implementations relate to training a semantic grasping model to predict a measure that indicates whether motion data for an end effector of a robot will result in a successful grasp of an object, and to predict an additional measure that indicates whether the object has desired semantic feature(s). Some implementations are directed to utilization of the trained semantic grasping model to servo a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
-
Publication number: US12083678B2
Publication date: 2024-09-10
Application number: US17422260
Filing date: 2020-01-23
Applicant: Google LLC
Inventor: Mrinal Kalakrishnan , Yunfei Bai , Paul Wohlhart , Eric Jang , Chelsea Finn , Seyed Mohammad Khansari Zadeh , Sergey Levine , Allan Zhou , Alexander Herzog , Daniel Kappler
IPC: B25J9/16
CPC classification number: B25J9/163 , G05B2219/40116 , G05B2219/40499
Abstract: Techniques are disclosed that enable training a meta-learning model, for use in causing a robot to perform a task, using imitation learning as well as reinforcement learning. Some implementations relate to training the meta-learning model using imitation learning based on one or more human-guided demonstrations of the task. Additional or alternative implementations relate to training the meta-learning model using reinforcement learning based on trials of the robot attempting to perform the task. Further implementations relate to using the trained meta-learning model to few-shot (or one-shot) learn a new task based on a human-guided demonstration of the new task.
-
Publication number: US11717959B2
Publication date: 2023-08-08
Application number: US16622309
Filing date: 2018-06-28
Applicant: Google LLC
Inventor: Eric Jang , Sudheendra Vijayanarasimhan , Peter Pastor Sampedro , Julian Ibarz , Sergey Levine
CPC classification number: B25J9/163 , G06N3/008 , G06N3/045 , G06N3/08 , G05B2219/39536
Abstract: Deep machine learning methods and apparatus related to semantic robotic grasping are provided. Some implementations relate to training a grasp neural network, a semantic neural network, and a joint neural network of a semantic grasping model. In some of those implementations, the joint network is a deep neural network and can be trained based on both: grasp losses generated based on grasp predictions generated over the grasp neural network, and semantic losses generated based on semantic predictions generated over the semantic neural network. Some implementations are directed to utilization of the trained semantic grasping model to servo, or control, a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
-
Publication number: US20200215686A1
Publication date: 2020-07-09
Application number: US16823947
Filing date: 2020-03-19
Applicant: Google LLC
Inventor: Sudheendra Vijayanarasimhan , Eric Jang , Peter Pastor Sampedro , Sergey Levine
Abstract: Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot are provided. Some implementations relate to training a semantic grasping model to predict a measure that indicates whether motion data for an end effector of a robot will result in a successful grasp of an object, and to predict an additional measure that indicates whether the object has desired semantic feature(s). Some implementations are directed to utilization of the trained semantic grasping model to servo a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
-
Publication number: US12226920B2
Publication date: 2025-02-18
Application number: US18233251
Filing date: 2023-08-11
Applicant: GOOGLE LLC
Inventor: Seyed Mohammad Khansari Zadeh , Eric Jang , Daniel Lam , Daniel Kappler , Matthew Bennice , Brent Austin , Yunfei Bai , Sergey Levine , Alexander Irpan , Nicolas Sievers , Chelsea Finn
Abstract: Implementations described herein relate to training and refining robotic control policies using imitation learning techniques. A robotic control policy can be initially trained based on human demonstrations of various robotic tasks. Further, the robotic control policy can be refined based on human interventions while a robot is performing a robotic task. In some implementations, the robotic control policy may determine whether the robot will fail in performance of the robotic task, and prompt a human to intervene in performance of the robotic task. In additional or alternative implementations, a representation of the sequence of actions can be visually rendered for presentation to the human, so that the human can proactively intervene in performance of the robotic task.
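The intervention flow described above can be sketched as a simple control loop. This is an illustrative assumption, not the patented system: the function names, the fixed failure threshold, and the toy failure estimator are all hypothetical. The point is that the policy monitors its own estimated probability of failure and hands control to a human when that estimate crosses a threshold, and the logged human corrections can later serve as new training data.

```python
# Hypothetical intervention loop: the robot acts until failure looks likely,
# then a human is prompted to take over for that step.

def run_episode(policy_step, failure_prob, prompt_human,
                threshold=0.5, max_steps=100):
    """Execute a task, handing control to the human when failure looks likely."""
    log = []
    for t in range(max_steps):
        if failure_prob(t) > threshold:
            action = prompt_human(t)        # human intervenes; the correction
            log.append(("human", action))   # can be reused as training data
        else:
            action = policy_step(t)
            log.append(("robot", action))
    return log

log = run_episode(
    policy_step=lambda t: 0.0,                             # stand-in policy
    failure_prob=lambda t: 0.9 if 40 <= t < 45 else 0.1,   # toy failure estimate
    prompt_human=lambda t: 1.0,                            # stand-in correction
)
```

Here the human is prompted only during the five steps where the toy estimator predicts likely failure; the rest of the episode runs autonomously.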
-
Publication number: US20240100693A1
Publication date: 2024-03-28
Application number: US18102053
Filing date: 2023-01-26
Applicant: GOOGLE LLC
Inventor: Daniel Ho , Eric Jang , Mohi Khansari , Yu Qing Du , Alexander A. Alemi
IPC: B25J9/16
CPC classification number: B25J9/163 , B25J9/1653 , B25J9/1697 , B25J9/162
Abstract: Some implementations relate to using trained robotic action ML models in controlling a robot to perform a robotic task. Some versions of those implementations include (a) a first modality robotic action ML model that is used to generate, based on processing first modality sensor data instances, first predicted action outputs for the robotic task and (b) a second modality robotic action ML model that is used to generate, in parallel and based on processing second modality sensor data instances, second predicted action outputs for the robotic task. In some of those versions, respective weights for each pair of the first and second predicted action outputs are dynamically determined based on analysis of embeddings generated in generating the first and second predicted action outputs. A final predicted action output, for controlling the robot, is determined based on the weights.
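The dynamic weighting step described in this abstract can be sketched as follows. This is a hypothetical stand-in, not the patented mechanism: using the embedding norm as a per-modality confidence score and a softmax over those scores are illustrative choices. It only shows the shape of the computation: each modality's predicted action gets a weight derived from its embedding, and the final action is the weighted blend.

```python
# Hypothetical fusion of two modalities' predicted actions, with weights
# derived from the embeddings produced alongside each prediction.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_actions(action_a, action_b, embedding_a, embedding_b):
    """Blend two predicted actions using embedding-derived weights.

    The embedding norm is used here as a stand-in confidence score;
    the real analysis of the embeddings could be arbitrarily different.
    """
    score_a = math.sqrt(sum(x * x for x in embedding_a))
    score_b = math.sqrt(sum(x * x for x in embedding_b))
    w_a, w_b = softmax([score_a, score_b])
    return [w_a * a + w_b * b for a, b in zip(action_a, action_b)]

# Modality A has a much stronger embedding, so its action dominates.
fused = fuse_actions([1.0, 0.0], [0.0, 1.0], [3.0, 4.0], [0.0, 0.0])
```

Because the weights come from a softmax, they always sum to one, so the fused output stays within the span of the two candidate actions.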
-
Publication number: US20230381970A1
Publication date: 2023-11-30
Application number: US18233251
Filing date: 2023-08-11
Applicant: GOOGLE LLC
Inventor: Seyed Mohammad Khansari Zadeh , Eric Jang , Daniel Lam , Daniel Kappler , Matthew Bennice , Brent Austin , Yunfei Bai , Sergey Levine , Alexander Irpan , Nicolas Sievers , Chelsea Finn
CPC classification number: B25J9/1697 , B25J9/163 , B25J9/1661 , B25J9/161 , B25J13/06
Abstract: Implementations described herein relate to training and refining robotic control policies using imitation learning techniques. A robotic control policy can be initially trained based on human demonstrations of various robotic tasks. Further, the robotic control policy can be refined based on human interventions while a robot is performing a robotic task. In some implementations, the robotic control policy may determine whether the robot will fail in performance of the robotic task, and prompt a human to intervene in performance of the robotic task. In additional or alternative implementations, a representation of the sequence of actions can be visually rendered for presentation to the human, so that the human can proactively intervene in performance of the robotic task.
-
Publication number: US11772272B2
Publication date: 2023-10-03
Application number: US17203296
Filing date: 2021-03-16
Applicant: GOOGLE LLC
Inventor: Seyed Mohammad Khansari Zadeh , Eric Jang , Daniel Lam , Daniel Kappler , Matthew Bennice , Brent Austin , Yunfei Bai , Sergey Levine , Alexander Irpan , Nicolas Sievers , Chelsea Finn
CPC classification number: B25J9/1697 , B25J9/161 , B25J9/163 , B25J9/1661 , B25J13/06
Abstract: Implementations described herein relate to training and refining robotic control policies using imitation learning techniques. A robotic control policy can be initially trained based on human demonstrations of various robotic tasks. Further, the robotic control policy can be refined based on human interventions while a robot is performing a robotic task. In some implementations, the robotic control policy may determine whether the robot will fail in performance of the robotic task, and prompt a human to intervene in performance of the robotic task. In additional or alternative implementations, a representation of the sequence of actions can be visually rendered for presentation to the human, so that the human can proactively intervene in performance of the robotic task.
-