Grasp generation for machine tending

    Publication No.: US11919161B2

    Publication Date: 2024-03-05

    Application No.: US17502230

    Filing Date: 2021-10-15

    Inventor: Yongxiang Fan

    Abstract: A robotic grasp generation technique for machine tending applications. Part and gripper geometry are provided as inputs, typically from CAD files. Gripper kinematics are also defined as an input. Preferred and prohibited grasp locations on the part may also be defined as inputs, to ensure that the computed grasp candidates enable the robot to load the part into a machining station such that the machining station can grasp a particular location on the part. An optimization solver is used to compute a quality grasp with stable surface contact between the part and the gripper, with no interference between the gripper and the part, and allowing for the preferred and prohibited grasp locations which were defined as inputs. All surfaces of the gripper fingers are considered for grasping and collision avoidance. A loop with random initialization is used to automatically compute many hundreds of diverse grasps for the part.

    AUTOMATIC GRIPPER FINGERTIP DESIGN TO REDUCE LEFTOVER IN RANDOM BIN PICKING APPLICATIONS

    Publication No.: US20240198543A1

    Publication Date: 2024-06-20

    Application No.: US18067110

    Filing Date: 2022-12-16

    CPC classification number: B25J19/007 B25J9/1612 B25J9/163 B25J9/1671

    Abstract: An automated technique for robot gripper fingertip design. A workpiece design and a bin shape are provided as inputs, along with a parameterized gripper design. The gripper parameters define the lengths of segments of the fingertips, and the bend angle between fingertip segments. A fingertip shape, defined by selecting parameter values, is used in a simulated picking of parts from many different randomly defined piles of workpieces in the bin. Grasps for the simulated bin picking are pre-defined and provided as input. A score is assigned to the particular fingertip shape based on the average number of leftovers from the many simulated bin picking operations. A new fingertip shape is then defined by selecting new values for the parameters, and the simulations are repeated to assign a score for the new fingertip shape. This process is repeated to suitably sample the parameter range, and a best-performing fingertip shape is identified.
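The sample-and-score loop above can be illustrated with a toy model. This is a hypothetical sketch: `simulate_bin_pick` is a random stand-in for the physics simulation, each candidate shape is a (segment length, bend angle) pair, and all numbers are illustrative, not from the patent.

```python
import itertools
import random

def simulate_bin_pick(length, angle, rng):
    """Leftover part count for one randomized pile (toy stand-in for the sim).

    In this toy model leftovers are minimized near length=30 mm, angle=45 deg.
    """
    base = abs(length - 30) / 10 + abs(angle - 45) / 15
    return max(0, round(base + rng.gauss(0, 0.5)))

def score_shape(length, angle, n_piles=50, seed=0):
    """Average leftovers over many randomized piles; lower is better."""
    rng = random.Random(seed * 100003 + length * 101 + angle)
    return sum(simulate_bin_pick(length, angle, rng) for _ in range(n_piles)) / n_piles

def best_fingertip(lengths=(20, 30, 40), angles=(30, 45, 60)):
    """Sample the parameter grid and keep the best-scoring fingertip shape."""
    return min(itertools.product(lengths, angles), key=lambda p: score_shape(*p))
```

Averaging over many randomized piles is what makes the score robust: a single simulated pile is noisy, but the mean leftover count separates good and bad fingertip shapes reliably.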

    REGION-BASED GRASP GENERATION
    Invention Publication

    Publication No.: US20230256602A1

    Publication Date: 2023-08-17

    Application No.: US17651485

    Filing Date: 2022-02-17

    Abstract: A region-based robotic grasp generation technique for machine tending or bin picking applications. Part and gripper geometry are provided as inputs, typically from CAD files, along with gripper kinematics. A human user defines one or more target grasp regions on the part, using a graphical user interface displaying the part geometry. The target grasp regions are identified by the user based on the user's knowledge of how the part may be grasped to ensure that the part can be subsequently placed in a proper destination pose. For each of the target grasp regions, an optimization solver is used to compute a plurality of quality grasps with stable surface contact between the part and the gripper, and no part-gripper interference. The computed grasps for each target grasp region are placed in a grasp database which is used by a robot in actual bin picking operations.

    Efficient and robust line matching approach

    Publication No.: US12017371B2

    Publication Date: 2024-06-25

    Application No.: US17654909

    Filing Date: 2022-03-15

    Abstract: A method for line matching during image-based visual servoing control of a robot performing a workpiece installation. The method uses a target image from human demonstration and a current image of a robotic execution phase. A plurality of lines are identified in the target and current images, and an initial pairing of target-current lines is defined based on distance and angle. An optimization computation determines image transposes which minimize a cost function formulated to include both direction and distance between target lines and current lines using 2D data in the camera image plane, and constraint equations which relate the lines in the image plane to the 3D workpiece pose. The rotational and translational transposes which minimize the cost function are used to update the line pair matching, and the best line pairs are used to compute a difference signal for controlling robot motion during visual servoing.
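The initial target/current pairing step, which the abstract says is "based on distance and angle", can be sketched as follows. This is a simplified, hypothetical illustration: each 2-D image line is given in normal form (theta, rho), the pairing is greedy rather than a full optimization, the weights are arbitrary, and it assumes at least as many current lines as target lines.

```python
import math

def pair_cost(a, b, w_angle=1.0, w_dist=0.01):
    """Cost combining angular difference (wrapped) and offset difference."""
    d_theta = abs(math.atan2(math.sin(a[0] - b[0]), math.cos(a[0] - b[0])))
    return w_angle * d_theta + w_dist * abs(a[1] - b[1])

def match_lines(target_lines, current_lines):
    """Greedily pair each target line with its lowest-cost current line."""
    pairs, used = [], set()
    for i, t in enumerate(target_lines):
        j = min((j for j in range(len(current_lines)) if j not in used),
                key=lambda j: pair_cost(t, current_lines[j]))
        used.add(j)
        pairs.append((i, j))
    return pairs
```

In the full method this initial pairing only seeds the optimization; the computed rotational and translational updates are then used to re-match the line pairs iteratively.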

    Grasp learning using modularized neural networks

    Publication No.: US12017355B2

    Publication Date: 2024-06-25

    Application No.: US17342069

    Filing Date: 2021-06-08

    Inventor: Yongxiang Fan

    Abstract: A method for modularizing high dimensional neural networks into neural networks of lower input dimensions. The method is suited to generating full-DOF robot grasping actions based on images of parts to be picked. In one example, a first network encodes grasp positional dimensions and a second network encodes rotational dimensions. The first network is trained to predict a position at which a grasp quality is maximized for any value of the grasp rotations. The second network is trained to identify the maximum grasp quality while searching only at the position from the first network. Thus, the two networks collectively identify an optimal grasp, while each network's searching space is reduced. Many grasp positions and rotations can be evaluated in a search quantity of the sum of the evaluated positions and rotations, rather than the product. Dimensions may be separated in any suitable fashion, including three neural networks in some applications.
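The sum-versus-product search saving can be shown with plain callables standing in for the two trained networks. This is a toy sketch, not the patent's architecture: `Q` is an illustrative joint grasp-quality function, and `stage1_position_net` plays the role of the first network's prediction of the best-over-rotations quality at each position.

```python
def Q(p, r):
    """Toy joint grasp quality over (position, rotation), peaked at p=4, r=2."""
    return -((p - 4) ** 2) - ((r - 2) ** 2)

def stage1_position_net(p):
    """Stand-in for network 1: predicts max over rotations of Q(p, r)."""
    return -((p - 4) ** 2)

def modular_grasp_search(positions, rotations):
    """Two-stage search: len(positions) + len(rotations) evaluations,
    instead of the len(positions) * len(rotations) a joint search needs."""
    best_p = max(positions, key=stage1_position_net)      # stage 1: positions only
    best_r = max(rotations, key=lambda r: Q(best_p, r))   # stage 2: rotations only
    return best_p, best_r
```

With 10 candidate positions and 10 candidate rotations, the decomposed search makes 20 evaluations where a joint search would make 100, yet still finds the joint optimum because stage 2 searches only at the position stage 1 selected.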

    OBJECT INTERFERENCE CHECK METHOD
    Invention Publication

    Publication No.: US20240190002A1

    Publication Date: 2024-06-13

    Application No.: US18588084

    Filing Date: 2024-02-27

    CPC classification number: B25J9/1664 B25J9/1605 G06F30/10

    Abstract: An object interference checking technique using point sets which uses CAD models of objects and obstacles and converts the CAD models to 3D points. The 3D point locations are updated based on object motion. The 3D points are then converted to 3D grid space indices defining space occupied by any point on any object or obstacle. The 3D grid space indices are then converted to 1D indices and the 1D indices are stored as a set per object and per position. Swept volumes for an object are created by computing a union of the 1D index sets across multiple motion steps. Interference checking between objects is performed by computing an intersection of the 1D index sets for a given motion step or position. The 1D indices are converted back to 3D coordinates to define the 3D shapes of the swept volumes and interferences.
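The indexing scheme in the abstract maps cleanly onto set operations. Below is a minimal sketch under assumed parameters (a 100-cell-per-axis grid with 1 cm cells, both illustrative): 3-D points are snapped to grid cells, each cell is flattened to a single 1-D index, occupancy per object is a set, so interference checking is set intersection and a swept volume is a set union.

```python
N = 100          # grid cells per axis (assumed)
CELL = 0.01      # cell edge length in meters (assumed)

def to_1d(points):
    """Convert 3D points to a set of flattened 1-D grid-cell indices."""
    idx = set()
    for x, y, z in points:
        i, j, k = int(x / CELL), int(y / CELL), int(z / CELL)
        idx.add((i * N + j) * N + k)
    return idx

def to_3d(index):
    """Recover the (i, j, k) grid cell from a flattened 1-D index."""
    return index // (N * N), (index // N) % N, index % N

def interferes(obj_a, obj_b):
    """Two objects interfere if their occupied-cell sets intersect."""
    return bool(obj_a & obj_b)

def swept_volume(steps):
    """Union of an object's occupied-cell sets across motion steps."""
    out = set()
    for s in steps:
        out |= s
    return out
```

Flattening to 1-D indices is the key design choice: hashed-set membership, intersection, and union on integers are cheap, so collision queries cost roughly the size of the smaller occupancy set rather than requiring pairwise geometric tests.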

    Network modularization to learn high dimensional robot tasks

    Publication No.: US11809521B2

    Publication Date: 2023-11-07

    Application No.: US17342122

    Filing Date: 2021-06-08

    Inventor: Yongxiang Fan

    CPC classification number: G06F18/214 G06N3/045 G06N3/08 G06K2207/1016

    Abstract: A method for modularizing high dimensional neural networks into neural networks of lower input dimensions. The method is suited to generating full-DOF robot grasping actions based on images of parts to be picked. In one example, a first network encodes grasp positional dimensions and a second network encodes rotational dimensions. The first network is trained to predict a position at which a grasp quality is maximized for any value of the grasp rotations. The second network is trained to identify the maximum grasp quality while searching only at the position from the first network. Thus, the two networks collectively identify an optimal grasp, while each network's searching space is reduced. Many grasp positions and rotations can be evaluated in a search quantity of the sum of the evaluated positions and rotations, rather than the product. Dimensions may be separated in any suitable fashion, including three neural networks in some applications.

    NETWORK MODULARIZATION TO LEARN HIGH DIMENSIONAL ROBOT TASKS

    Publication No.: US20220391638A1

    Publication Date: 2022-12-08

    Application No.: US17342122

    Filing Date: 2021-06-08

    Inventor: Yongxiang Fan

    Abstract: A method for modularizing high dimensional neural networks into neural networks of lower input dimensions. The method is suited to generating full-DOF robot grasping actions based on images of parts to be picked. In one example, a first network encodes grasp positional dimensions and a second network encodes rotational dimensions. The first network is trained to predict a position at which a grasp quality is maximized for any value of the grasp rotations. The second network is trained to identify the maximum grasp quality while searching only at the position from the first network. Thus, the two networks collectively identify an optimal grasp, while each network's searching space is reduced. Many grasp positions and rotations can be evaluated in a search quantity of the sum of the evaluated positions and rotations, rather than the product. Dimensions may be separated in any suitable fashion, including three neural networks in some applications.

    GRASP TEACH BY HUMAN DEMONSTRATION
    Invention Publication

    Publication No.: US20240109181A1

    Publication Date: 2024-04-04

    Application No.: US17934808

    Filing Date: 2022-09-23

    Abstract: A technique for robotic grasp teaching by human demonstration. A human demonstrates a grasp on a workpiece, while a camera provides images of the demonstration which are analyzed to identify a hand pose relative to the workpiece. The hand pose is converted to a plane representing two fingers of a gripper. The hand plane is used to determine a grasp region on the workpiece which corresponds to the human demonstration. The grasp region and the hand pose are used in an optimization computation which is run repeatedly with randomization to generate multiple grasps approximating the demonstration, where each of the optimized grasps is a stable, high quality grasp with gripper-workpiece surface contact. A best one of the generated grasps is then selected and added to a grasp database. The human demonstration may be repeated on different locations of the workpiece to provide multiple different grasps in the database.
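The step that converts the demonstrated hand plane into a grasp region can be sketched simply. This is a hypothetical illustration of that one step (function name, tolerance, and points are all made up): the hand pose is reduced to a plane given by a point and a unit normal, and the grasp region is the subset of workpiece surface points lying within a tolerance band of that plane.

```python
def grasp_region(surface_points, plane_point, plane_normal, tol=0.01):
    """Workpiece surface points within `tol` of the demonstrated hand plane.

    Assumes `plane_normal` is a unit vector; distances are then the absolute
    dot product of the point offset with the normal.
    """
    px, py, pz = plane_point
    nx, ny, nz = plane_normal
    region = []
    for x, y, z in surface_points:
        dist = abs((x - px) * nx + (y - py) * ny + (z - pz) * nz)
        if dist <= tol:
            region.append((x, y, z))
    return region
```

In the full technique this region then seeds the repeated randomized optimization, which produces multiple stable grasps approximating the single demonstration.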

    Point set interference check
    Invention Grant

    Publication No.: US11878424B2

    Publication Date: 2024-01-23

    Application No.: US17457777

    Filing Date: 2021-12-06

    CPC classification number: B25J9/1664 B25J9/1605 G06F30/10

    Abstract: A robot interference checking motion planning technique using point sets. The technique uses CAD models of robot arms and obstacles and converts the CAD models to 3D point sets. The 3D point set coordinates are updated at each time step based on robot and obstacle motion. The 3D points are then converted to 3D grid space indices indicating space occupied by any point on any part. The 3D grid space indices are converted to 1D indices and the 1D indices are stored as sets per object and per time step. Interference checking is performed by computing an intersection of the 1D index sets for a given time step. Swept volumes are created by computing a union of the 1D index sets across multiple time steps. The 1D indices are converted back to 3D coordinates to define the 3D shapes of the swept volumes and the 3D locations of any interferences.
