Deep machine learning methods and apparatus for robotic grasping

    Publication No.: US11548145B2

    Publication Date: 2023-01-10

    Application No.: US17172666

    Filing Date: 2021-02-10

    Applicant: Google LLC

    Abstract: Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot. Some implementations relate to training a deep neural network to predict a measure that candidate motion data for an end effector of a robot will result in a successful grasp of one or more objects by the end effector. Some implementations are directed to utilization of the trained deep neural network to servo a grasping end effector of a robot to achieve a successful grasp of an object by the grasping end effector. For example, the trained deep neural network may be utilized in the iterative updating of motion control commands for one or more actuators of a robot that control the pose of a grasping end effector of the robot, and to determine when to generate grasping control commands to effectuate an attempted grasp by the grasping end effector.
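
The servoing loop the abstract describes (sample candidate end-effector motions, score each with the trained network, and either issue the best motion command or, once predicted success is high enough, issue a grasp command) can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: `grasp_success_score` is a hypothetical stand-in for the trained deep network, and the toy state and scoring function are invented for the example.

```python
import random

def grasp_success_score(state, motion):
    """Stand-in for the trained deep network: scores how likely the
    candidate end-effector motion is to yield a successful grasp.
    Toy heuristic: prefer motions that close the gap to the object."""
    dx = (state["object_x"] - state["effector_x"]) - motion[0]
    dy = (state["object_y"] - state["effector_y"]) - motion[1]
    return 1.0 / (1.0 + dx * dx + dy * dy)

def servo_step(state, num_candidates=64, grasp_threshold=0.9, rng=None):
    """One iteration of the servoing loop: sample candidate motions,
    score each with the network, and either command the best motion or,
    if predicted success clears the threshold, command a grasp."""
    rng = rng or random.Random(0)
    candidates = [(rng.uniform(-1, 1), rng.uniform(-1, 1))
                  for _ in range(num_candidates)]
    best = max(candidates, key=lambda m: grasp_success_score(state, m))
    if grasp_success_score(state, best) >= grasp_threshold:
        return ("grasp", best)
    return ("move", best)
```

In the patent's framing this step would repeat, with the chosen motion command sent to the robot's actuators and a fresh image captured before the next iteration.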

    MACHINE LEARNING METHODS AND APPARATUS FOR SEMANTIC ROBOTIC GRASPING

    Publication No.: US20200338722A1

    Publication Date: 2020-10-29

    Application No.: US16622309

    Filing Date: 2018-06-28

    Applicant: Google LLC

    Abstract: Deep machine learning methods and apparatus related to semantic robotic grasping are provided. Some implementations relate to training a grasp neural network, a semantic neural network, and a joint neural network of a semantic grasping model. In some of those implementations, the joint network is a deep neural network and can be trained based on both: grasp losses generated based on grasp predictions generated over the grasp neural network, and semantic losses generated based on semantic predictions generated over the semantic neural network. Some implementations are directed to utilization of the trained semantic grasping model to servo, or control, a grasping end effector of a robot to achieve a successful grasp of an object having desired semantic feature(s).
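
The dual training signal (a grasp loss plus a weighted semantic loss driving one joint network) can be illustrated with a toy combined objective. The `combined_loss` function, its cross-entropy terms, and the weighting are assumptions made for this sketch, not the claimed training procedure.

```python
import math

def combined_loss(grasp_pred, grasp_label, semantic_pred, semantic_label,
                  semantic_weight=0.5):
    """Toy joint objective: binary cross-entropy on the scalar grasp
    success prediction, plus a weighted negative log-likelihood on the
    semantic class prediction (a distribution over object classes)."""
    eps = 1e-7
    grasp_loss = -(grasp_label * math.log(grasp_pred + eps)
                   + (1 - grasp_label) * math.log(1 - grasp_pred + eps))
    semantic_loss = -math.log(semantic_pred[semantic_label] + eps)
    return grasp_loss + semantic_weight * semantic_loss
```

A confident, correct prediction on both heads yields a near-zero loss; errors on either head raise it, which is what lets both losses shape the shared network's parameters.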

    Deep reinforcement learning for robotic manipulation

    Publication No.: US12240113B2

    Publication Date: 2025-03-04

    Application No.: US18526443

    Filing Date: 2023-12-01

    Applicant: GOOGLE LLC

    Abstract: Implementations utilize deep reinforcement learning to train a policy neural network that parameterizes a policy for determining a robotic action based on a current state. Some of those implementations collect experience data from multiple robots that operate simultaneously. Each robot generates instances of experience data during iterative performance of episodes that are each explorations of performing a task, and that are each guided based on the policy network and the current policy parameters for the policy network during the episode. The collected experience data is generated during the episodes and is used to train the policy network by iteratively updating policy parameters of the policy network based on a batch of collected experience data. Further, prior to performance of each of a plurality of episodes performed by the robots, the current updated policy parameters can be provided (or retrieved) for utilization in performance of the episode.
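
The collect-and-update cycle described above (each robot retrieves the latest policy parameters, runs an episode, and contributes its experience to a shared pool from which batches drive iterative parameter updates) might look like the following minimal sketch. `ReplayBuffer`, `run_episode`, and the scalar "policy parameter" are all stand-ins invented for the example; a real system would hold neural-network weights and apply a reinforcement learning update rule.

```python
import random

class ReplayBuffer:
    """Shared pool of experience collected from all robots."""
    def __init__(self):
        self.data = []
    def add(self, transition):
        self.data.append(transition)
    def sample(self, batch_size, rng):
        return rng.sample(self.data, min(batch_size, len(self.data)))

def run_episode(robot_id, policy_params, steps, rng):
    """Each (simulated) robot runs one episode guided by the current
    policy parameters and returns the transitions it generated."""
    return [{"robot": robot_id, "step": s, "reward": rng.random()}
            for s in range(steps)]

def train(num_robots=3, num_rounds=4, steps=5, batch_size=8, seed=0):
    rng = random.Random(seed)
    buffer = ReplayBuffer()
    policy_params = 0.0
    for _ in range(num_rounds):
        # Every robot retrieves the latest parameters before its episode.
        for robot_id in range(num_robots):
            for t in run_episode(robot_id, policy_params, steps, rng):
                buffer.add(t)
        # Toy "update": nudge the parameter by the mean batch reward.
        batch = buffer.sample(batch_size, rng)
        policy_params += sum(t["reward"] for t in batch) / len(batch)
    return policy_params, len(buffer.data)
```
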

    Data-efficient hierarchical reinforcement learning

    Publication No.: US11992944B2

    Publication Date: 2024-05-28

    Application No.: US17050546

    Filing Date: 2019-05-17

    Applicant: Google LLC

    CPC classification number: B25J9/163

    Abstract: Training and/or utilizing a hierarchical reinforcement learning (HRL) model for robotic control. The HRL model can include at least a higher-level policy model and a lower-level policy model. Some implementations relate to technique(s) that enable more efficient off-policy training to be utilized in training of the higher-level policy model and/or the lower-level policy model. Some of those implementations utilize off-policy correction, which re-labels higher-level actions of experience data, generated in the past utilizing a previously trained version of the HRL model, with modified higher-level actions. The modified higher-level actions are then utilized to off-policy train the higher-level policy model. This can enable effective off-policy training despite the lower-level policy model being a different version at training time (relative to the version when the experience data was collected).
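
The off-policy correction step (re-label a stored higher-level action with the one under which the *current* lower-level policy best explains the low-level actions actually observed) can be sketched as a maximum-likelihood search over candidate higher-level actions. The `lower_policy` stand-in and the squared-error surrogate for log-likelihood are illustrative assumptions, not the patented method's exact form.

```python
def relabel_higher_level_action(candidate_goals, states, observed_actions,
                                lower_policy):
    """Return the candidate higher-level action (goal) under which the
    current lower-level policy would most likely have produced the
    observed low-level actions along the stored trajectory.
    Log-likelihood is approximated by negative squared error between
    the policy's action and the observed action at each state."""
    def log_likelihood(goal):
        return -sum((lower_policy(s, goal) - a) ** 2
                    for s, a in zip(states, observed_actions))
    return max(candidate_goals, key=log_likelihood)
```

The relabeled goal then replaces the stored one before the higher-level policy is trained on that experience, which is what keeps old experience usable after the lower-level policy has changed.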

    Offline Primitive Discovery For Accelerating Data-Driven Reinforcement Learning

    Publication No.: US20230367996A1

    Publication Date: 2023-11-16

    Application No.: US18044852

    Filing Date: 2021-09-23

    Applicant: Google LLC

    CPC classification number: G06N3/0455 G06N3/092

    Abstract: A method includes determining a first state associated with a particular task, and determining, by a task policy model, a latent space representation of the first state. The task policy model may have been trained to define, for each respective state of a plurality of possible states associated with the particular task, a corresponding latent space representation of the respective state. The method also includes determining, by a primitive policy model and based on the first state and the latent space representation of the first state, an action to take as part of the particular task. The primitive policy model may have been trained to define a space of primitive policies for the plurality of possible states associated with the particular task and a plurality of possible latent space representations. The method further includes executing the action to reach a second state associated with the particular task.
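
The two-level inference the abstract walks through (the task policy model maps the current state to a latent space representation; the primitive policy model maps the state plus that latent to an action; executing the action reaches the next state) reduces to a short pipeline. All three functions here are toy scalar stand-ins for the trained models, invented for the example.

```python
def task_policy(state):
    """Stand-in for the trained task policy model: maps a (scalar)
    state to a latent space representation, here a 2-d tuple."""
    return (0.5 * state, 0.25)

def primitive_policy(state, latent):
    """Stand-in for the trained primitive policy model: maps the state
    plus its latent representation to a concrete action."""
    return state + latent[0] + latent[1]

def step(state):
    """One control step: encode the state into the latent space,
    decode an action from state + latent, execute it to reach the
    next state (toy additive environment dynamics)."""
    latent = task_policy(state)
    action = primitive_policy(state, latent)
    next_state = state + action
    return next_state
```
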

    NATURAL LANGUAGE CONTROL OF A ROBOT
    Publication Type: Invention Publication

    Publication No.: US20230311335A1

    Publication Date: 2023-10-05

    Application No.: US18128953

    Filing Date: 2023-03-30

    Applicant: GOOGLE LLC

    CPC classification number: B25J13/003 B25J11/0005 B25J9/163 B25J9/161 G06F40/40

    Abstract: Implementations process, using a large language model, a free-form natural language (NL) instruction to generate LLM output. Those implementations generate, based on the LLM output and a NL skill description of a robotic skill, a task-grounding measure that reflects a probability of the skill description in the probability distribution of the LLM output. Those implementations further generate, based on the robotic skill and current environmental state data, a world-grounding measure that reflects a probability of the robotic skill being successful based on the current environmental state data. Those implementations further determine, based on both the task-grounding measure and the world-grounding measure, whether to implement the robotic skill.
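
One simple way to combine the two measures (the task-grounding probability from the LLM and the world-grounding success probability from the current state) is to multiply them per skill and pick the maximum. The multiplicative combination, the skill names, and the scores below are assumptions made for this sketch; the abstract only says the decision is based on both measures.

```python
def select_skill(skills, llm_scores, affordance_scores):
    """Combine the task-grounding measure (probability the LLM assigns
    to each skill description) with the world-grounding measure
    (predicted success probability given the current environmental
    state) and return the best-scoring skill with its combined score."""
    combined = {s: llm_scores[s] * affordance_scores[s] for s in skills}
    best = max(combined, key=combined.get)
    return best, combined[best]
```

Note the effect of the product: a skill the LLM favors but that cannot succeed in the current scene (low world-grounding) loses to a less-favored but feasible skill.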
