-
11.
Publication No.: US10105847B1
Publication Date: 2018-10-23
Application No.: US15177323
Filing Date: 2016-06-08
Applicant: X Development LLC
Inventor: Craig Latimer , Umashankar Nagarajan
IPC: B25J9/16
Abstract: Methods, apparatus, systems, and computer-readable media are provided for detecting a geometric change in a robot's configuration and taking responsive action in instances where the geometric change is likely to impact operation of the robot. In various implementations, a geometric model of a robot in a selected pose may be obtained. Image data of the actual robot in the selected pose may also be obtained. The image data may be compared to the geometric model to detect a geometric difference between the geometric model and the actual robot. Output may be provided that is indicative of the geometric difference between the geometric model and the actual robot.
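Editor's note: a minimal sketch of the comparison step this abstract describes, assuming the geometric model in the selected pose is rendered to an expected point cloud and compared against an observed cloud. All names (detect_geometric_difference, tolerance_m) are hypothetical; the patent does not specify this API.

```python
# Hypothetical sketch: flag observed points that deviate from the
# model-rendered ("expected") point cloud beyond a tolerance.
import numpy as np
from scipy.spatial import cKDTree

def detect_geometric_difference(expected_points: np.ndarray,
                                observed_points: np.ndarray,
                                tolerance_m: float = 0.01):
    """Return indices (and distances) of observed points farther than
    `tolerance_m` from any point of the expected cloud."""
    tree = cKDTree(expected_points)              # index the model cloud
    distances, _ = tree.query(observed_points)   # nearest model point per observed point
    deviating = np.nonzero(distances > tolerance_m)[0]
    return deviating, distances[deviating]

expected = np.random.rand(500, 3)
observed = expected + 0.002                      # 2 mm shift: within tolerance
print(detect_geometric_difference(expected, observed)[0].size)  # -> 0
```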
-
12.
Publication No.: US11691277B2
Publication Date: 2023-07-04
Application No.: US17379091
Filing Date: 2021-07-19
Applicant: X Development LLC
Inventor: Umashankar Nagarajan , Bianca Homberg
IPC: G05B19/04 , B25J9/16 , G05B19/418
CPC classification number: B25J9/163 , B25J9/1612 , B25J9/1697 , G05B19/41885 , G05B2219/40411 , Y10S901/03 , Y10S901/09 , Y10S901/47
Abstract: Grasping of an object, by an end effector of a robot, based on a grasp strategy that is selected using one or more machine learning models. The grasp strategy utilized for a given grasp is one of a plurality of candidate grasp strategies. Each candidate grasp strategy defines a different group of one or more values that influence performance of a grasp attempt in a manner that is unique relative to the other grasp strategies. For example, value(s) of a grasp strategy can define a grasp direction for grasping the object (e.g., “top”, “side”), a grasp type for grasping the object (e.g., “pinch”, “power”), grasp force applied in grasping the object, pre-grasp manipulations to be performed on the object, and/or post-grasp manipulations to be performed on the object.
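Editor's note: as a rough illustration of the abstract, each candidate grasp strategy groups the values listed above (direction, grasp type, force, pre-/post-grasp manipulations), and a learned model scores candidates for the observed object. The GraspStrategy class and `score` callable are assumptions, not the patent's API.

```python
# Hypothetical sketch: candidate grasp strategies and model-based selection.
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass(frozen=True)
class GraspStrategy:
    direction: str                 # e.g. "top", "side"
    grasp_type: str                # e.g. "pinch", "power"
    force_n: float                 # grasp force in newtons
    pre_grasp: Sequence[str] = ()  # e.g. ("push_to_edge",)
    post_grasp: Sequence[str] = () # e.g. ("lift",)

CANDIDATES: List[GraspStrategy] = [
    GraspStrategy("top", "pinch", 10.0),
    GraspStrategy("side", "power", 35.0, pre_grasp=("push_to_edge",)),
]

def select_strategy(object_features,
                    score: Callable[[object, GraspStrategy], float]) -> GraspStrategy:
    """Pick the candidate the (trained) model scores highest."""
    return max(CANDIDATES, key=lambda s: score(object_features, s))

# `score` stands in for the machine learning model(s); here a trivial stub:
best = select_strategy({"height_m": 0.05},
                       score=lambda obj, s: 1.0 if s.direction == "top" else 0.5)
print(best)
```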
-
13.
Publication No.: US20190248003A1
Publication Date: 2019-08-15
Application No.: US15862514
Filing Date: 2018-01-04
Applicant: X Development LLC
Inventor: Umashankar Nagarajan , Bianca Homberg
IPC: B25J9/16
CPC classification number: B25J9/163 , B25J9/1612 , B25J9/1697 , G05B19/41885 , G05B2219/40411 , Y10S901/03 , Y10S901/09 , Y10S901/47
Abstract: Grasping of an object, by an end effector of a robot, based on a grasp strategy that is selected using one or more machine learning models. The grasp strategy utilized for a given grasp is one of a plurality of candidate grasp strategies. Each candidate grasp strategy defines a different group of one or more values that influence performance of a grasp attempt in a manner that is unique relative to the other grasp strategies. For example, value(s) of a grasp strategy can define a grasp direction for grasping the object (e.g., “top”, “side”), a grasp type for grasping the object (e.g., “pinch”, “power”), grasp force applied in grasping the object, pre-grasp manipulations to be performed on the object, and/or post-grasp manipulations to be performed on the object.
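Editor's note: this application shares its abstract with entry 12 (same family). As a complementary sketch, one way the "one or more machine learning models" could rank strategies is a classifier over object and strategy features that predicts grasp success; the feature columns and training data below are invented for illustration.

```python
# Hypothetical sketch: a success-probability model over object + strategy features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [object_width_m, object_height_m, is_top_grasp, is_pinch, force_n]
X = np.array([[0.04, 0.05, 1, 1, 10.0],
              [0.20, 0.30, 0, 0, 35.0],
              [0.04, 0.05, 0, 0, 35.0],
              [0.20, 0.30, 1, 1, 10.0]])
y = np.array([1, 1, 0, 0])          # 1 = grasp attempt succeeded

model = LogisticRegression().fit(X, y)

def success_probability(obj, strategy) -> float:
    row = [[obj["width_m"], obj["height_m"],
            strategy["direction"] == "top",
            strategy["type"] == "pinch",
            strategy["force_n"]]]
    return float(model.predict_proba(row)[0, 1])

print(success_probability({"width_m": 0.05, "height_m": 0.06},
                          {"direction": "top", "type": "pinch", "force_n": 10.0}))
```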
-
14.
Publication No.: US20190196436A1
Publication Date: 2019-06-27
Application No.: US15851622
Filing Date: 2017-12-21
Applicant: X Development LLC
Inventor: Umashankar Nagarajan
IPC: G05B19/12 , G05B13/02 , G05B19/402 , G06K7/00 , B25J9/16
CPC classification number: G05B19/124 , B25J9/161 , B25J9/1612 , B25J9/163 , G05B13/0265 , G05B19/402 , G05B2219/23189 , G05B2219/36371 , G05B2219/39505 , G06K7/0004
Abstract: Techniques described herein relate to using reduced-dimensionality embeddings generated from robot sensor data to identify predetermined semantic labels that guide robot interaction with objects. In various implementations, sensor data that includes data indicative of an object observed in an environment in which the robot operates may be obtained from one or more sensors of a robot. The sensor data may be processed utilizing a first trained machine learning model to generate a first embedded feature vector that maps the data indicative of the object to an embedding space. Nearest neighbor(s) of the first embedded feature vector may be identified in the embedding space. Semantic label(s) may be identified based on the nearest neighbor(s). A given grasp option may be selected from enumerated grasp options previously associated with the semantic label(s). The robot may be operated to interact with the object based on the object's pose and using the given grasp option.
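Editor's note: a hedged sketch of the abstract's pipeline: embed the observation, find nearest neighbors among reference embeddings, read off their semantic labels by majority vote, then pick a grasp option enumerated for that label. The embedding dimensionality, reference data, and grasp tables are placeholders, not the patent's values.

```python
# Hypothetical sketch: nearest-neighbor semantic label lookup in embedding space.
import numpy as np
from scipy.spatial import cKDTree

REFERENCE_EMBEDDINGS = np.random.rand(100, 32)   # produced by the trained model
REFERENCE_LABELS = ["mug" if i % 2 else "bowl" for i in range(100)]
GRASP_OPTIONS = {"mug": ["pinch_handle", "top_pinch"],
                 "bowl": ["rim_pinch", "side_power"]}

index = cKDTree(REFERENCE_EMBEDDINGS)

def grasp_for_observation(embedding: np.ndarray, k: int = 5) -> str:
    """Majority semantic label over k nearest neighbors, then the first
    grasp option previously associated with that label."""
    _, neighbor_ids = index.query(embedding, k=k)
    labels = [REFERENCE_LABELS[i] for i in np.atleast_1d(neighbor_ids)]
    label = max(set(labels), key=labels.count)
    return GRASP_OPTIONS[label][0]

print(grasp_for_observation(np.random.rand(32)))
```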