-
1.
Publication No.: US20200273177A1
Publication Date: 2020-08-27
Application No.: US16739115
Filing Date: 2020-01-10
Applicant: Qingdao University of Technology
Inventor: Chengjun Chen , Chunlin Zhang , Dongnian Li , Jun Hong
Abstract: The present invention relates to an assembly monitoring method based on deep learning, comprising the steps of: creating a training set for a physical assembly body, the training set comprising a depth image set Di and a label image set Li of a 3D assembly body at multiple monitoring angles, wherein i represents an assembly step, the depth image set Di in the ith step corresponds to the label image set Li in the ith step, and in the label images of the label image set Li, different parts of the 3D assembly body are rendered in different colors; training a deep learning network model with the training set; and obtaining, by a depth camera, a physical assembly body depth image C in a physical assembly scene, inputting the depth image C into the trained deep learning network model, and outputting a pixel segmentation image of the physical assembly body, in which different parts are represented by different pixel colors so that all parts of the physical assembly body are identified. With the present invention, the parts in the assembly body can be identified, and the assembly steps, as well as the occurrence and type of assembly errors, can be monitored for the parts.
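The monitoring idea in this abstract, reading part identities out of a color-coded pixel segmentation image and comparing them against the parts expected after step i, can be sketched as follows. The part names, colors, and pixel threshold are illustrative assumptions, not values from the patent, and a toy array stands in for the network's output.

```python
import numpy as np

# Hypothetical part-to-color coding, mirroring the label images Li in which
# different parts are rendered in different colors (names/colors are assumed).
PART_COLORS = {
    "base": (255, 0, 0),
    "gear": (0, 255, 0),
    "shaft": (0, 0, 255),
}

def identify_parts(segmentation, min_pixels=50):
    """Return the set of part names visible in an (H, W, 3) uint8 pixel
    segmentation image; a part counts as present if at least `min_pixels`
    pixels carry its color (the threshold is an assumption)."""
    present = set()
    for name, color in PART_COLORS.items():
        mask = np.all(segmentation == np.array(color, dtype=np.uint8), axis=-1)
        if mask.sum() >= min_pixels:
            present.add(name)
    return present

def check_step(segmentation, expected_parts):
    """Compare identified parts against the parts expected after step i;
    missing or unexpected parts flag a possible assembly error."""
    found = identify_parts(segmentation)
    return found, expected_parts - found, found - expected_parts

# Toy stand-in for the network's output: "base" pixels plus a "gear" patch.
seg = np.zeros((100, 100, 3), dtype=np.uint8)
seg[:, :] = PART_COLORS["base"]
seg[10:30, 10:30] = PART_COLORS["gear"]
found, missing, unexpected = check_step(seg, {"base", "gear", "shaft"})
print(sorted(missing))  # ['shaft'] -> an expected part has not been detected
```

Comparing the detected part set per step against the expected set is one simple way such a system could flag both a missed assembly step and the kind of error (missing vs. unexpected part).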
-
2.
Publication No.: US10964025B2
Publication Date: 2021-03-30
Application No.: US16739115
Filing Date: 2020-01-10
Applicant: Qingdao University of Technology
Inventor: Chengjun Chen , Chunlin Zhang , Dongnian Li , Jun Hong
Abstract: The present invention relates to an assembly monitoring method based on deep learning, comprising the steps of: creating a training set for a physical assembly body, the training set comprising a depth image set Di and a label image set Li of a 3D assembly body at multiple monitoring angles, wherein i represents an assembly step, the depth image set Di in the ith step corresponds to the label image set Li in the ith step, and in the label images of the label image set Li, different parts of the 3D assembly body are rendered in different colors; training a deep learning network model with the training set; and obtaining, by a depth camera, a physical assembly body depth image C in a physical assembly scene, inputting the depth image C into the trained deep learning network model, and outputting a pixel segmentation image of the physical assembly body, in which different parts are represented by different pixel colors so that all parts of the physical assembly body are identified. With the present invention, the parts in the assembly body can be identified, and the assembly steps, as well as the occurrence and type of assembly errors, can be monitored for the parts.
-
3.
Publication No.: US11504846B2
Publication Date: 2022-11-22
Application No.: US17153202
Filing Date: 2021-01-20
Applicant: QINGDAO UNIVERSITY OF TECHNOLOGY
Inventor: Chengjun Chen , Yong Pan , Dongnian Li , Zhengxu Zhao , Jun Hong
Abstract: The present invention relates to a robot teaching system based on image segmentation and surface electromyography, and a robot teaching method thereof, comprising an RGB-D camera, a surface electromyography sensor, a robot and a computer, wherein the RGB-D camera collects video information of robot teaching scenes and sends it to the computer; the surface electromyography sensor acquires surface electromyography signals and inertial acceleration signals of the robot teacher and sends them to the computer; and the computer recognizes an articulated arm and a human joint, detects the contact position between the articulated arm and the human joint, further calculates the strength and direction of the force applied at the human contact position after the human joint contacts the articulated arm, and sends a signal controlling the contacted articulated arm to move according to that strength and direction of force, whereby robot teaching is completed.
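As a rough illustration of the force-estimation step, one common approach (an assumption here, not the patent's disclosed algorithm) is to map the RMS amplitude of a windowed sEMG signal to a force magnitude and take the force direction from the inertial acceleration vector; `rest_rms` and `gain` are hypothetical calibration constants.

```python
import numpy as np

def estimate_force(emg_window, acc, rest_rms=0.05, gain=40.0):
    """Estimate contact force magnitude and direction (illustrative sketch).

    Assumed model: force magnitude scales linearly with the RMS of the
    sEMG window above a resting baseline; force direction is taken from
    the inertial acceleration vector reported by the same sensor.
    """
    rms = float(np.sqrt(np.mean(np.square(emg_window))))
    magnitude = max(0.0, rms - rest_rms) * gain  # assumed linear sEMG-to-force map
    direction = np.asarray(acc, dtype=float)
    norm = np.linalg.norm(direction)
    if norm > 0:
        direction = direction / norm  # unit vector along the sensed acceleration
    return magnitude, direction

# Toy window: constant 0.3 (a.u.) sEMG amplitude, acceleration along +z.
magnitude, direction = estimate_force(np.full(200, 0.3), [0.0, 0.0, 2.0])
print(round(magnitude, 2))  # (0.3 - 0.05) * 40 = 10.0
```

The resulting magnitude and unit direction vector would then drive the contacted articulated arm's compliant motion, as the abstract describes.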
-
4.
Publication No.: US11440179B2
Publication Date: 2022-09-13
Application No.: US16804888
Filing Date: 2020-02-28
Applicant: QINGDAO UNIVERSITY OF TECHNOLOGY
Inventor: Chengjun Chen , Yong Pan , Dongnian Li , Jun Hong
Abstract: A system for robot teaching based on RGB-D images and a teach pendant, including an RGB-D camera, a host computer, a posture teach pendant, and an AR teaching system which includes an AR registration card, an AR module, a virtual robot model, a path planning unit and a posture teaching unit. The RGB-D camera collects RGB images and depth images of a physical working environment in real time. In the path planning unit, path points of a robot end effector are selected, and the 3D coordinates of the path points in the base coordinate system of the virtual robot model are calculated; the posture teaching unit records the received posture data as the posture of the path point where the virtual robot model is located, so that the virtual robot model is driven to move according to the postures and positions of the path points, thereby completing the robot teaching.
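The step of turning a selected path point into 3D coordinates in the virtual robot's base frame can be sketched with the standard pinhole back-projection followed by one homogeneous transform. The intrinsics below are placeholder values (real ones come from calibrating the RGB-D camera), and the camera-to-base transform is assumed to be known, e.g. from the AR registration card.

```python
import numpy as np

# Assumed pinhole intrinsics (focal lengths and principal point, in pixels).
FX, FY, CX, CY = 600.0, 600.0, 320.0, 240.0

def pixel_to_camera(u, v, depth_m):
    """Back-project a selected path point (u, v) with its depth (metres)
    into 3D camera coordinates using the pinhole model."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

def camera_to_robot_base(p_cam, T_base_cam):
    """Express a camera-frame point in the virtual robot model's base
    coordinate system; T_base_cam is the 4x4 homogeneous camera-to-base
    transform (assumed known, e.g. via the AR registration card)."""
    p_h = np.append(p_cam, 1.0)      # homogeneous coordinates
    return (T_base_cam @ p_h)[:3]

# A point at the principal point, 1 m deep, lies on the camera's optical axis.
p_cam = pixel_to_camera(320, 240, 1.0)
print(p_cam)  # [0. 0. 1.]
```

Each selected path point, once expressed in the base frame this way, can be paired with the posture data from the teach pendant to drive the virtual robot model.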
-
5.
Publication No.: US20250084652A1
Publication Date: 2025-03-13
Application No.: US18281513
Filing Date: 2023-05-17
Applicant: QINGDAO UNIVERSITY OF TECHNOLOGY
Inventor: Yang Li , Chengjun Chen , Xuefeng Zhang , Guangzheng Wang , Yongqi Wang , Liping Liang , Jianze Liu , Fazhan Yang , Fu'e Ren
Abstract: A building construction robot is provided. The building construction robot includes a vehicle body in which a feeding assembly is arranged; a mounting frame is fixedly connected to one end of the vehicle body and located close to the discharge end of the feeding assembly. One end of a vibrating assembly is fixedly connected to one end of the top of the vehicle body, and the other end of the vibrating assembly passes through the middle part of the mounting frame. A leveling assembly is arranged at the end of the mounting frame away from the vehicle body, a measuring part is arranged at the top of the mounting frame, and a moving part is arranged at the bottom of the vehicle body.
-
6.
Publication No.: US12243159B2
Publication Date: 2025-03-04
Application No.: US17893368
Filing Date: 2022-08-23
Inventor: Chengjun Chen , Zhengxu Zhao , Tianliang Hu , Jianhua Zhang , Yang Guo , Dongnian Li , Qinghai Zhang , Yuanlin Guan
IPC: G06T17/00 , B25J9/16 , B25J13/08 , G06T7/00 , G06T7/194 , G06T7/50 , G06T7/70 , G06T19/20 , H04N13/20 , H04N23/10
Abstract: A digital twin modeling method for a robotic assembly teleoperation environment, including: capturing images of the teleoperation environment; identifying the part being assembled; querying the assembly order to obtain a list of assembled parts according to the part being assembled; generating a three-dimensional model of the current assembly from the list and calculating position and pose information of the current assembly in an image acquisition device coordinate system; loading a three-dimensional model of the robot and determining a coordinate transformation relationship between a robot coordinate system and the image acquisition device coordinate system; determining position and pose information of the robot in the image acquisition device coordinate system from the coordinate transformation relationship; determining a relative positional relationship between the current assembly and the robot from the position and pose information of the current assembly and the robot in the image acquisition device coordinate system; and establishing a digital twin model of the teleoperation environment.
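The relative-pose step the abstract describes, going from the poses of the assembly and the robot in the camera frame to the assembly's pose in the robot frame, reduces to one composition of homogeneous transforms. A minimal sketch, with toy poses standing in for the estimated ones:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_pose(T_cam_assembly, T_cam_robot):
    """Pose of the current assembly expressed in the robot coordinate system.

    Both arguments are poses in the image acquisition device (camera) frame;
    the relative pose follows from one coordinate transformation:
        T_robot_assembly = inv(T_cam_robot) @ T_cam_assembly
    """
    return np.linalg.inv(T_cam_robot) @ T_cam_assembly

# Toy scene: assembly 1 m in front of the camera, robot base 0.5 m to its right.
T_cam_assembly = make_transform(np.eye(3), [0.0, 0.0, 1.0])
T_cam_robot = make_transform(np.eye(3), [0.5, 0.0, 1.0])
T_robot_assembly = relative_pose(T_cam_assembly, T_cam_robot)
print(T_robot_assembly[:3, 3])  # assembly sits at [-0.5, 0, 0] in the robot frame
```

With this relative pose in hand, the loaded 3D models of the assembly and the robot can be placed consistently in one scene, which is essentially what the digital twin of the teleoperation environment requires.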
-
7.
Publication No.: US20220161422A1
Publication Date: 2022-05-26
Application No.: US17153202
Filing Date: 2021-01-20
Applicant: QINGDAO UNIVERSITY OF TECHNOLOGY
Inventor: Chengjun Chen , Yong Pan , Dongnian Li , Zhengxu Zhao , Jun Hong
Abstract: The present invention relates to a robot teaching system based on image segmentation and surface electromyography, and a robot teaching method thereof, comprising an RGB-D camera, a surface electromyography sensor, a robot and a computer, wherein the RGB-D camera collects video information of robot teaching scenes and sends it to the computer; the surface electromyography sensor acquires surface electromyography signals and inertial acceleration signals of the robot teacher and sends them to the computer; and the computer recognizes an articulated arm and a human joint, detects the contact position between the articulated arm and the human joint, further calculates the strength and direction of the force applied at the human contact position after the human joint contacts the articulated arm, and sends a signal controlling the contacted articulated arm to move according to that strength and direction of force, whereby robot teaching is completed.
-