GRASP TEACH BY HUMAN DEMONSTRATION
    Invention Publication

    Publication Number: US20240109181A1

    Publication Date: 2024-04-04

    Application Number: US17934808

    Application Date: 2022-09-23

    Abstract: A technique for robotic grasp teaching by human demonstration. A human demonstrates a grasp on a workpiece, while a camera provides images of the demonstration which are analyzed to identify a hand pose relative to the workpiece. The hand pose is converted to a plane representing two fingers of a gripper. The hand plane is used to determine a grasp region on the workpiece which corresponds to the human demonstration. The grasp region and the hand pose are used in an optimization computation which is run repeatedly with randomization to generate multiple grasps approximating the demonstration, where each of the optimized grasps is a stable, high quality grasp with gripper-workpiece surface contact. A best one of the generated grasps is then selected and added to a grasp database. The human demonstration may be repeated on different locations of the workpiece to provide multiple different grasps in the database.
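The abstract's "optimization computation run repeatedly with randomization" can be pictured as a random-restart search around the demonstrated grasp. The sketch below is a minimal one-parameter stand-in: the `quality` metric and the perturbation range are invented for illustration and are not the patent's actual stability/surface-contact measure.

```python
import random

def optimize_grasp(seed_angle, quality_fn, restarts=20, spread=0.3, rng=None):
    # Random-restart search: perturb the demonstrated grasp parameter
    # and keep the candidate scoring highest under quality_fn.
    rng = rng or random.Random(0)
    best, best_q = seed_angle, quality_fn(seed_angle)
    for _ in range(restarts):
        cand = seed_angle + rng.uniform(-spread, spread)  # randomized start
        q = quality_fn(cand)
        if q > best_q:
            best, best_q = cand, q
    return best, best_q

# Toy quality metric: the most stable grasp lies near angle 0.1 rad.
quality = lambda a: -(a - 0.1) ** 2
angle, score = optimize_grasp(0.0, quality)
```

Because the search only replaces the incumbent on strict improvement, the returned grasp is never worse than the demonstrated seed.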

    EFFICIENT AND ROBUST LINE MATCHING APPROACH
    Invention Publication

    Publication Number: US20230294291A1

    Publication Date: 2023-09-21

    Application Number: US17654909

    Application Date: 2022-03-15

Abstract: A method for line matching during image-based visual servoing control of a robot performing a workpiece installation. The method uses a target image from human demonstration and a current image of a robotic execution phase. A plurality of lines are identified in the target and current images, and an initial pairing of target-current lines is defined based on distance and angle. An optimization computation determines image transformations which minimize a cost function formulated to include both direction and distance between target lines and current lines using 2D data in the camera image plane, and constraint equations which relate the lines in the image plane to the 3D workpiece pose. The rotational and translational transformations which minimize the cost function are used to update the line pair matching, and the best line pairs are used to compute a difference signal for controlling robot motion during visual servoing.
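The initial distance-and-angle pairing of target lines to current lines can be sketched as a greedy nearest-cost assignment. The line representation (direction angle plus midpoint) and the weights below are simplifying assumptions, not the patent's actual cost function:

```python
import math

def pair_lines(target, current, w_angle=1.0, w_dist=0.1):
    # Greedy initial pairing of 2D lines, each given as
    # (direction_angle_rad, midpoint_xy); the cost mixes direction
    # difference and midpoint distance, echoing the abstract's
    # distance-and-angle criterion.
    pairs, used = [], set()
    for ti, (ta, tm) in enumerate(target):
        best_ci, best_cost = None, float("inf")
        for ci, (ca, cm) in enumerate(current):
            if ci in used:
                continue
            cost = w_angle * abs(ta - ca) + w_dist * math.dist(tm, cm)
            if cost < best_cost:
                best_ci, best_cost = ci, cost
        pairs.append((ti, best_ci))
        used.add(best_ci)
    return pairs

target = [(0.0, (0, 0)), (1.57, (5, 5))]
current = [(1.50, (5, 4)), (0.10, (0, 1))]
matches = pair_lines(target, current)  # → [(0, 1), (1, 0)]
```

In the patent's pipeline such a pairing would only seed the optimization; the matching is then refined as the transformation estimate improves.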

    ROBOT PROGRAM GENERATION METHOD FROM HUMAN DEMONSTRATION

    Publication Number: US20230120598A1

    Publication Date: 2023-04-20

    Application Number: US17502207

    Application Date: 2021-10-15

    Abstract: A method for teaching a robot to perform an operation based on human demonstration using force and vision sensors. The method includes a vision sensor to detect position and pose of both the human's hand and optionally a workpiece during teaching of an operation such as pick, move and place. The force sensor, located either beneath the workpiece or on a tool, is used to detect force information. Data from the vision and force sensors, along with other optional inputs, are used to teach both motions and state change logic for the operation being taught. Several techniques are disclosed for determining state change logic, such as the transition from approaching to grasping. Techniques for improving motion programming to remove extraneous motions by the hand are also disclosed. Robot programming commands are then generated from the hand position and orientation data, along with the state transitions.
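The state-change logic from combined vision and force data can be illustrated as a simple threshold classifier over (hand-to-workpiece distance, grip force) samples. The thresholds and state names below are invented for illustration; the patent discloses several richer techniques for detecting transitions such as approach-to-grasp:

```python
def segment_states(samples, approach_mm=50.0, grasp_force_n=2.0):
    # Label each (hand_to_part_mm, grip_force_N) sample: the force
    # threshold marks the approach-to-grasp transition, the distance
    # threshold separates free motion from the approach phase.
    states = []
    for dist_mm, force_n in samples:
        if force_n >= grasp_force_n:
            states.append("grasp")
        elif dist_mm <= approach_mm:
            states.append("approach")
        else:
            states.append("move")
    return states

labels = segment_states([(200, 0.0), (30, 0.0), (5, 3.5)])
# → ["move", "approach", "grasp"]
```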

    ROBOT TEACHING BY HUMAN DEMONSTRATION

    Publication Number: US20210316449A1

    Publication Date: 2021-10-14

    Application Number: US16843185

    Application Date: 2020-04-08

    Abstract: A method for teaching a robot to perform an operation based on human demonstration with images from a camera. The method includes a teaching phase where a 2D or 3D camera detects a human hand grasping and moving a workpiece, and images of the hand and workpiece are analyzed to determine a robot gripper pose and positions which equate to the pose and positions of the hand and corresponding pose and positions of the workpiece. Robot programming commands are then generated from the computed gripper pose and position relative to the workpiece pose and position. In a replay phase, the camera identifies workpiece pose and position, and the programming commands cause the robot to move the gripper to pick, move and place the workpiece as demonstrated. A teleoperation mode is also disclosed, where camera images of a human hand are used to control movement of the robot in real time.
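The mapping from an observed hand pose to an equivalent gripper pose can be sketched in 2D: for a parallel-jaw gripper, the thumb and index fingertips determine the grasp point, closing axis, and opening width. This is a simplified planar stand-in for the patent's 3D hand-image analysis:

```python
import math

def gripper_pose_from_hand(thumb_tip, index_tip):
    # Map two fingertip positions (x, y) to a parallel-jaw gripper pose:
    # grasp point at the fingertip midpoint, closing axis along the
    # thumb-to-index vector, opening width equal to their separation.
    mx = (thumb_tip[0] + index_tip[0]) / 2.0
    my = (thumb_tip[1] + index_tip[1]) / 2.0
    dx = index_tip[0] - thumb_tip[0]
    dy = index_tip[1] - thumb_tip[1]
    return (mx, my), math.atan2(dy, dx), math.hypot(dx, dy)

center, axis_rad, width = gripper_pose_from_hand((0.0, 0.0), (4.0, 0.0))
# → center (2.0, 0.0), axis 0.0 rad, width 4.0
```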

    Machine system
    Invention Grant

    Publication Number: US10589429B2

    Publication Date: 2020-03-17

    Application Number: US15880844

    Application Date: 2018-01-26

Abstract: Provided is a machine system including a machine including a movable part; a control device; a sensor detecting information about the movable part during a predetermined operation of the machine; a transmitting unit wirelessly transmitting the detected information during the predetermined operation; a receiving unit receiving the wirelessly transmitted information; a storage unit storing the received information; a detection unit detecting a loss in the received information; a command unit causing the machine to repeat the predetermined operation, in a case where a loss in the information is detected; a determination unit determining whether or not every lost part of the information detected first is contained in the information detected during the repeated operation; and a complementing unit ending the repeated operation in a case where every lost part is determined to be contained and complementing the information detected first with the information detected during the repeated operation.
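The complementing step amounts to merging two recordings of the same operation, filling the first run's lost samples from the repeated run and checking whether every loss was recovered. A minimal sketch, assuming lost samples are marked as `None`:

```python
def complement(first_run, repeat_run):
    # Keep the first run's samples and fill its lost samples (None)
    # from the repeated run; report whether every loss was recovered,
    # which is the condition for ending the repeated operation.
    merged = [a if a is not None else b for a, b in zip(first_run, repeat_run)]
    return merged, all(v is not None for v in merged)

data, complete = complement([1.0, None, 3.0], [0.9, 2.1, 3.1])
# → [1.0, 2.1, 3.0], complete
```

If the repeated run also lost the same sample, `complete` is false and the operation would be repeated again.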

    Shape recognition device, shape recognition method, and program

    Publication Number: US10521687B2

    Publication Date: 2019-12-31

    Application Number: US15964472

    Application Date: 2018-04-27

    Abstract: A shape recognition device that recognizes a shape of an object having an indefinite shape and flexibility, and assembled by a robot, the shape recognition device including: an imaging unit that images the object; an image processing unit that recognizes the shape of the object on the basis of the object imaged by the imaging unit; and a simulation processing unit that simulates the shape of the object on the basis of the image of the object imaged by the imaging unit. The simulation processing unit interpolates a recognition result of the shape of the object by the image processing unit, on the basis of a simulation result of the shape of the object.
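Interpolating the image-based shape result with the simulated shape can be pictured per outline point: fall back to the simulated point where recognition failed, and blend the two estimates elsewhere. The `None` failure marker and equal weighting are illustrative assumptions, not the patent's method:

```python
def fuse_outline(recognized, simulated, w=0.5):
    # Fuse a flexible object's outline points: where image recognition
    # failed (None), use the simulated point; elsewhere blend the two
    # (x, y) estimates with weight w on the vision result.
    fused = []
    for r, s in zip(recognized, simulated):
        if r is None:
            fused.append(s)
        else:
            fused.append((w * r[0] + (1 - w) * s[0],
                          w * r[1] + (1 - w) * s[1]))
    return fused

outline = fuse_outline([(0.0, 0.0), None], [(0.0, 1.0), (2.0, 2.0)])
# → [(0.0, 0.5), (2.0, 2.0)]
```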

    Control system having learning control function and control method

    Publication Number: US10300600B2

    Publication Date: 2019-05-28

    Application Number: US15860226

    Application Date: 2018-01-02

    Abstract: A robot control system includes an operation control unit, a learning control processing unit and a storage unit. Whenever the operation control unit performs a single learning control, the learning control processing unit stores the number of learning controls, which indicates how many learning controls have been performed, and obtained time-series vibration data in correspondence with each other in the storage unit. The learning control processing unit calculates a convergence determination value to determine whether or not a vibration of a certain portion of a robot converges based on the time-series vibration data at each number of learning controls stored in the storage unit, and determines the number of learning controls having a minimum convergence determination value, out of the calculated convergence determination values, as the optimal number of learning controls.
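The selection rule, choosing the iteration count whose convergence determination value is smallest, can be sketched directly. Peak-to-peak vibration amplitude is used here as an illustrative stand-in metric; the patent does not specify it as the determination value:

```python
def optimal_iteration_count(vibration_runs):
    # Convergence determination value per run = peak-to-peak amplitude
    # of that run's time-series vibration data (illustrative metric);
    # return the 1-based run count with the smallest value, mirroring
    # the abstract's minimum-value selection.
    values = [max(run) - min(run) for run in vibration_runs]
    best = min(range(len(values)), key=values.__getitem__)
    return best + 1, values[best]

count, value = optimal_iteration_count([[0, 5, -5], [0, 2, -2], [0, 3, -3]])
# → count 2, value 4
```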

    Robot teaching by demonstration with visual servoing

    Publication Number: US12172303B2

    Publication Date: 2024-12-24

    Application Number: US17457688

    Application Date: 2021-12-06

    Abstract: A method for teaching and controlling a robot to perform an operation based on human demonstration with images from a camera. The method includes a demonstration phase where a camera detects a human hand grasping and moving a workpiece to define a rough trajectory of the robotic movement of the workpiece. Line features or other geometric features on the workpiece collected during the demonstration phase are used in an image-based visual servoing (IBVS) approach which refines a final placement position of the workpiece, where the IBVS control takes over the workpiece placement during the final approach by the robot. Moving object detection is used for automatically localizing both object and hand position in 2D image space, and then identifying line features on the workpiece by removing line features belonging to the hand using hand keypoint detection.
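The IBVS refinement of the final placement can be illustrated with the classic proportional update: each image feature is moved a fraction of its error toward its target value, so iterating drives the feature error (and hence the placement error) to zero. This scalar-feature sketch omits the interaction matrix relating image features to robot velocity:

```python
def ibvs_step(current_feats, target_feats, gain=0.5):
    # One proportional image-based visual-servoing update: correct each
    # image feature by gain * (target - current); repeated steps shrink
    # the error geometrically.
    return [c + gain * (t - c) for c, t in zip(current_feats, target_feats)]

feats = [0.0, 10.0]
for _ in range(3):
    feats = ibvs_step(feats, [8.0, 8.0])
# after 3 steps each error is 1/8 of its initial value
```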

    Robot teaching by human demonstration

    Publication Number: US11813749B2

    Publication Date: 2023-11-14

    Application Number: US16843185

    Application Date: 2020-04-08

    Abstract: A method for teaching a robot to perform an operation based on human demonstration with images from a camera. The method includes a teaching phase where a 2D or 3D camera detects a human hand grasping and moving a workpiece, and images of the hand and workpiece are analyzed to determine a robot gripper pose and positions which equate to the pose and positions of the hand and corresponding pose and positions of the workpiece. Robot programming commands are then generated from the computed gripper pose and position relative to the workpiece pose and position. In a replay phase, the camera identifies workpiece pose and position, and the programming commands cause the robot to move the gripper to pick, move and place the workpiece as demonstrated. A teleoperation mode is also disclosed, where camera images of a human hand are used to control movement of the robot in real time.
