-
1.
Publication No.: US20220161422A1
Publication Date: 2022-05-26
Application No.: US17153202
Filing Date: 2021-01-20
Applicant: QINGDAO UNIVERSITY OF TECHNOLOGY
Inventor: Chengjun Chen , Yong Pan , Dongnian Li , Zhengxu Zhao , Jun Hong
Abstract: The present invention relates to a robot teaching system based on image segmentation and surface electromyography, and a robot teaching method thereof, comprising an RGB-D camera, a surface electromyography sensor, a robot, and a computer. The RGB-D camera collects video of the robot teaching scene and sends it to the computer; the surface electromyography sensor acquires surface electromyography signals and inertial acceleration signals from the human teacher and sends them to the computer; the computer recognizes an articulated arm and a human joint, detects the contact position between the articulated arm and the human joint, calculates the strength and direction of the force applied at the contact position once the human joint touches the articulated arm, and sends a control signal that moves the contacted articulated arm according to that strength and direction of force, thereby completing the robot teaching.
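The pipeline this abstract describes (sEMG activation mapped to a force-proportional motion command along the detected contact direction) can be sketched as follows. This is a minimal illustration, not the patented method: the function name, the RMS-based activation estimate, and the `gain`/`rest_level` parameters are all assumptions for the sketch.

```python
import numpy as np

def estimate_force_command(semg_window, direction, gain=0.05, rest_level=0.02):
    """Map a window of sEMG samples to a force-proportional velocity
    command along the detected contact direction (hypothetical sketch).

    semg_window : 1-D array of raw sEMG samples (arbitrary units)
    direction   : 3-D vector from the contact-position detection step
    gain        : assumed scaling from muscle activation to speed (m/s)
    rest_level  : activation threshold below which no motion is commanded
    """
    # Root-mean-square amplitude is a common proxy for muscle activation.
    activation = np.sqrt(np.mean(np.square(np.asarray(semg_window, dtype=float))))
    if activation < rest_level:
        return np.zeros(3)  # no significant contact force: arm stays still
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)  # guard against non-unit input
    return gain * (activation - rest_level) * direction
```

A stronger contraction yields a larger command in the contact direction; below the rest threshold the arm does not move.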
-
2.
Publication No.: US20200273177A1
Publication Date: 2020-08-27
Application No.: US16739115
Filing Date: 2020-01-10
Applicant: Qingdao University of Technology
Inventor: Chengjun Chen , Chunlin Zhang , Dongnian Li , Jun Hong
Abstract: The present invention relates to an assembly monitoring method based on deep learning, comprising the steps of: creating a training set for a physical assembly body, the training set comprising a depth image set Di and a label image set Li of a 3D assembly body at multiple monitoring angles, wherein i denotes the assembly step, the depth image set Di of the ith step corresponds to the label image set Li of the ith step, and in the label images of set Li, different parts of the 3D assembly body are rendered in different colors; training a deep learning network model on the training set; and obtaining, by a depth camera, a depth image C of the physical assembly body in the physical assembly scene, inputting the depth image C into the deep learning network model, and outputting a pixel segmentation image of the physical assembly body in which different parts are represented by different pixel colors, thereby identifying all the parts of the physical assembly body. With the present invention, the parts in the assembly body can be identified, and the assembly steps, as well as the occurrence and type of assembly errors, can be monitored.
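The output format the abstract describes (a per-pixel class map rendered so that each part appears in its own color, from which the visible parts can be listed) can be sketched as below. The palette, function names, and class numbering are illustrative assumptions; the deep learning model itself is out of scope here.

```python
import numpy as np

# Hypothetical palette: class 0 = background, classes 1..N = assembly parts.
PALETTE = np.array([
    [0, 0, 0],      # background
    [255, 0, 0],    # part 1
    [0, 255, 0],    # part 2
    [0, 0, 255],    # part 3
], dtype=np.uint8)

def colorize_segmentation(class_map):
    """Render a per-pixel class-index map (H x W, int) as an RGB label
    image in which every part is drawn in its own color, mirroring the
    label images Li described in the abstract."""
    return PALETTE[class_map]

def identify_parts(class_map):
    """List the part indices visible in a pixel segmentation image."""
    return sorted(int(c) for c in np.unique(class_map) if c != 0)
```

Comparing `identify_parts` output for step i against the part set expected at step i is one simple way the monitoring of assembly steps and errors could be realized.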
-
3.
Publication No.: US11504846B2
Publication Date: 2022-11-22
Application No.: US17153202
Filing Date: 2021-01-20
Applicant: QINGDAO UNIVERSITY OF TECHNOLOGY
Inventor: Chengjun Chen , Yong Pan , Dongnian Li , Zhengxu Zhao , Jun Hong
Abstract: The present invention relates to a robot teaching system based on image segmentation and surface electromyography, and a robot teaching method thereof, comprising an RGB-D camera, a surface electromyography sensor, a robot, and a computer. The RGB-D camera collects video of the robot teaching scene and sends it to the computer; the surface electromyography sensor acquires surface electromyography signals and inertial acceleration signals from the human teacher and sends them to the computer; the computer recognizes an articulated arm and a human joint, detects the contact position between the articulated arm and the human joint, calculates the strength and direction of the force applied at the contact position once the human joint touches the articulated arm, and sends a control signal that moves the contacted articulated arm according to that strength and direction of force, thereby completing the robot teaching.
-
4.
Publication No.: US11440179B2
Publication Date: 2022-09-13
Application No.: US16804888
Filing Date: 2020-02-28
Applicant: QINGDAO UNIVERSITY OF TECHNOLOGY
Inventor: Chengjun Chen , Yong Pan , Dongnian Li , Jun Hong
Abstract: A system for robot teaching based on RGB-D images and a teach pendant, including an RGB-D camera, a host computer, a posture teach pendant, and an AR teaching system that comprises an AR registration card, an AR module, a virtual robot model, a path planning unit, and a posture teaching unit. The RGB-D camera collects RGB images and depth images of the physical working environment in real time. In the path planning unit, path points of the robot end effector are selected, and the 3D coordinates of the path points in the base coordinate system of the virtual robot model are calculated; the posture teaching unit records the received posture data as the posture at each path point where the virtual robot model is located, so that the virtual robot model is driven to move according to the postures and positions of the path points, thereby completing the robot teaching.
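The core geometric step in this abstract, turning a path point selected in the RGB-D images into a 3D coordinate, is standard pinhole back-projection; a minimal sketch follows. The function name and intrinsic values are assumptions, and the further hand-eye transform into the virtual robot's base coordinate system (which the patent requires) is noted but not shown.

```python
import numpy as np

def pixel_to_camera_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a selected path point (u, v) with its depth value
    into camera-frame 3D coordinates using a pinhole camera model.

    fx, fy, cx, cy are intrinsics from the RGB-D camera calibration.
    A subsequent rigid transform (not shown) would map the result into
    the base coordinate system of the virtual robot model.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])
```

A pixel at the principal point back-projects straight along the optical axis; off-center pixels scale linearly with depth.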
-
5.
Publication No.: US10964025B2
Publication Date: 2021-03-30
Application No.: US16739115
Filing Date: 2020-01-10
Applicant: Qingdao University of Technology
Inventor: Chengjun Chen , Chunlin Zhang , Dongnian Li , Jun Hong
Abstract: The present invention relates to an assembly monitoring method based on deep learning, comprising the steps of: creating a training set for a physical assembly body, the training set comprising a depth image set Di and a label image set Li of a 3D assembly body at multiple monitoring angles, wherein i denotes the assembly step, the depth image set Di of the ith step corresponds to the label image set Li of the ith step, and in the label images of set Li, different parts of the 3D assembly body are rendered in different colors; training a deep learning network model on the training set; and obtaining, by a depth camera, a depth image C of the physical assembly body in the physical assembly scene, inputting the depth image C into the deep learning network model, and outputting a pixel segmentation image of the physical assembly body in which different parts are represented by different pixel colors, thereby identifying all the parts of the physical assembly body. With the present invention, the parts in the assembly body can be identified, and the assembly steps, as well as the occurrence and type of assembly errors, can be monitored.