Generating a robot control policy from demonstrations collected via kinesthetic teaching of a robot

    Publication No.: US11872699B2

    Publication Date: 2024-01-16

    Application No.: US18097153

    Application Date: 2023-01-13

    Applicant: GOOGLE LLC

    Abstract: Generating a robot control policy that regulates both motion control and interaction with an environment and/or includes a learned potential function and/or dissipative field. Some implementations relate to resampling temporally distributed data points to generate spatially distributed data points, and generating the control policy using the spatially distributed data points. Some implementations additionally or alternatively relate to automatically determining a potential gradient for data points, and generating the control policy using the automatically determined potential gradient. Some implementations additionally or alternatively relate to determining and assigning a prior weight to each of the data points of multiple groups, and generating the control policy using the weights. Some implementations additionally or alternatively relate to defining and using non-uniform smoothness parameters at each data point, defining and using d parameters for stiffness and/or damping at each data point, and/or obviating the need to utilize virtual data points in generating the control policy.
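
    The abstract above describes, among other things, resampling temporally distributed demonstration points into spatially distributed points before generating the control policy. The snippet below is a minimal sketch of one plausible way to do such spatial resampling, assuming the demonstration is recorded as fixed-rate end-effector positions from kinesthetic teaching; the function name resample_spatially and the min_spacing threshold are illustrative assumptions, not the patent's actual method.

```python
import numpy as np

def resample_spatially(positions, min_spacing=0.01):
    """Thin a temporally ordered demonstration so that consecutive retained
    points are at least `min_spacing` apart in task space.

    positions: (N, D) array of end-effector positions sampled at a fixed
    time step during kinesthetic teaching.
    Returns an (M, D) array of spatially distributed points (M <= N).

    Illustrative sketch only; not the patented procedure.
    """
    positions = np.asarray(positions, dtype=float)
    kept = [positions[0]]
    for p in positions[1:]:
        if np.linalg.norm(p - kept[-1]) >= min_spacing:
            kept.append(p)
    return np.stack(kept)

# Example: a demonstration that dwells near the start yields many temporally
# clustered samples; spatial resampling removes that clustering.
t = np.linspace(0.0, 1.0, 200)
demo = np.stack([t**3, np.zeros_like(t)], axis=1)  # slow start, fast end
print(resample_spatially(demo, min_spacing=0.05).shape)
```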

    Robotic grasping prediction using neural networks and geometry aware object representation

    Publication No.: US11554483B2

    Publication Date: 2023-01-17

    Application No.: US17094111

    Application Date: 2020-11-10

    Applicant: Google LLC

    Abstract: Deep machine learning methods and apparatus, some of which relate to determining a grasp outcome prediction for a candidate grasp pose of an end effector of a robot. Some implementations are directed to training and utilization of both a geometry network and a grasp outcome prediction network. The trained geometry network can be utilized to generate, based on two-dimensional or two-and-a-half-dimensional image(s), geometry output(s) that are geometry-aware and that represent (e.g., high-dimensionally) three-dimensional features captured by the image(s). In some implementations, the geometry output(s) include at least an encoding generated by an encoding neural network trained to generate encodings that represent three-dimensional features (e.g., shape). The trained grasp outcome prediction network can be utilized to generate, based on applying the geometry output(s) and additional data as input(s) to the network, a grasp outcome prediction for the candidate grasp pose.
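
    As a rough illustration of the two-network arrangement the abstract describes (a geometry network producing a geometry-aware encoding, and a grasp outcome prediction network consuming that encoding together with a candidate grasp pose), here is a minimal sketch assuming PyTorch; the module names, layer sizes, and 6-D pose representation are illustrative assumptions and do not reflect the patented architecture or training procedure.

```python
import torch
import torch.nn as nn

class GeometryEncoder(nn.Module):
    """Encodes a 2-D/2.5-D image into a latent vector meant to capture
    3-D shape features (a stand-in for the abstract's geometry network)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, latent_dim)

    def forward(self, depth_image):
        h = self.conv(depth_image).flatten(1)
        return self.fc(h)

class GraspOutcomePredictor(nn.Module):
    """Predicts grasp success probability from the geometry encoding plus a
    candidate grasp pose (here represented as a 6-D vector)."""
    def __init__(self, latent_dim=64, pose_dim=6):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + pose_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, geometry_code, grasp_pose):
        x = torch.cat([geometry_code, grasp_pose], dim=-1)
        return torch.sigmoid(self.mlp(x))

# Example: score one candidate grasp pose for a single depth image.
encoder, predictor = GeometryEncoder(), GraspOutcomePredictor()
depth = torch.rand(1, 1, 64, 64)        # placeholder 2.5-D image
pose = torch.rand(1, 6)                 # placeholder candidate grasp pose
print(predictor(encoder(depth), pose))  # predicted success probability
```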

    ROBOTIC GRASPING PREDICTION USING NEURAL NETWORKS AND GEOMETRY AWARE OBJECT REPRESENTATION

    Publication No.: US20210053217A1

    Publication Date: 2021-02-25

    Application No.: US17094111

    Application Date: 2020-11-10

    Applicant: Google LLC

    Abstract: Deep machine learning methods and apparatus, some of which relate to determining a grasp outcome prediction for a candidate grasp pose of an end effector of a robot. Some implementations are directed to training and utilization of both a geometry network and a grasp outcome prediction network. The trained geometry network can be utilized to generate, based on two-dimensional or two-and-a-half-dimensional image(s), geometry output(s) that are geometry-aware and that represent (e.g., high-dimensionally) three-dimensional features captured by the image(s). In some implementations, the geometry output(s) include at least an encoding generated by an encoding neural network trained to generate encodings that represent three-dimensional features (e.g., shape). The trained grasp outcome prediction network can be utilized to generate, based on applying the geometry output(s) and additional data as input(s) to the network, a grasp outcome prediction for the candidate grasp pose.

    ROBOTIC GRASPING PREDICTION USING NEURAL NETWORKS AND GEOMETRY AWARE OBJECT REPRESENTATION

    Publication No.: US20200094405A1

    Publication Date: 2020-03-26

    Application No.: US16617169

    Application Date: 2018-06-18

    Applicant: Google LLC

    Abstract: Deep machine learning methods and apparatus, some of which relate to determining a grasp outcome prediction for a candidate grasp pose of an end effector of a robot. Some implementations are directed to training and utilization of both a geometry network and a grasp outcome prediction network. The trained geometry network can be utilized to generate, based on two-dimensional or two-and-a-half-dimensional image(s), geometry output(s) that are geometry-aware and that represent (e.g., high-dimensionally) three-dimensional features captured by the image(s). In some implementations, the geometry output(s) include at least an encoding generated by an encoding neural network trained to generate encodings that represent three-dimensional features (e.g., shape). The trained grasp outcome prediction network can be utilized to generate, based on applying the geometry output(s) and additional data as input(s) to the network, a grasp outcome prediction for the candidate grasp pose.
