-
Publication No.: US11560192B2
Publication Date: 2023-01-24
Application No.: US16885227
Application Date: 2020-05-27
Applicant: UBTECH ROBOTICS CORP LTD
Inventor: Jie Bai , Ligang Ge , Hongge Wang , Yizhang Liu , Shuping Hu , Jianxin Pang , Youjun Xiong
Abstract: The present disclosure provides a stair climbing gait planning method as well as an apparatus and a robot using the same. The method includes: obtaining first visual measurement data through a visual sensor of the robot; converting the first visual measurement data to second visual measurement data; and performing a staged gait planning on a process of the robot climbing a staircase based on the second visual measurement data. Through the method, the visual measurement data is used as a reference to perform the staged gait planning on the process of the robot climbing the staircase, which greatly improves the adaptability of the robot in complex stair climbing scenes.
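The abstract leaves both the data conversion and the staging unspecified, so the following is only a minimal Python sketch of one plausible reading: the conversion is taken to be a homogeneous camera-to-base-frame transform (the 4x4 matrix T_base_cam is an assumed input), and the staged planning is reduced to an approach stage followed by one step-up stage per detected stair edge. The names and stage layout are illustrative, not the patented method.

    import numpy as np

    def camera_to_base(points_cam: np.ndarray, T_base_cam: np.ndarray) -> np.ndarray:
        # Append a homogeneous coordinate and apply the assumed 4x4 camera-to-base transform.
        homo = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])
        return (T_base_cam @ homo.T).T[:, :3]

    def plan_stair_stages(edges_base: np.ndarray, approach_margin: float = 0.25):
        # One "approach" stage up to just before the first stair edge,
        # then one "step up" stage per detected edge (assumes column 0 is forward
        # distance and column 2 is height in the base frame).
        stages = [("approach", float(edges_base[0, 0]) - approach_margin)]
        for i, edge in enumerate(edges_base):
            stages.append((f"step_up_{i}", float(edge[2])))
        return stages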
-
Publication No.: US11475707B2
Publication Date: 2022-10-18
Application No.: US17134467
Application Date: 2020-12-27
Applicant: UBTECH ROBOTICS CORP LTD
Inventor: Yusheng Zeng , Jianxin Pang , Youjun Xiong
Abstract: The present disclosure provides a method for extracting a face detection image, wherein the method includes: obtaining a plurality of image frames by an image detector; performing a face detection process on each image frame to extract a face area; performing a clarity analysis on the face area of each image frame to obtain a clarity degree of a face; conducting a posture analysis on the face area of each image frame to obtain a face posture angle; generating a comprehensive evaluation index for each image frame in accordance with the clarity degree of the face and the face posture angle of each image frame; and selecting a key frame from the image frames based on the comprehensive evaluation index. In this manner, the resource occupancy during image data processing can be reduced, and the quality of the face detection process can be improved.
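The abstract does not say how clarity, pose, or the comprehensive evaluation index are computed; the sketch below assumes a Laplacian-variance sharpness measure and a simple weighted sum with a frontality term. The weight w_clarity and the pose-angle inputs are illustrative assumptions, not values from the patent.

    import cv2
    import numpy as np

    def clarity_score(face_bgr: np.ndarray) -> float:
        # Variance of the Laplacian: higher means sharper.
        gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
        return float(cv2.Laplacian(gray, cv2.CV_64F).var())

    def comprehensive_index(clarity: float, yaw_deg: float, pitch_deg: float,
                            w_clarity: float = 0.6) -> float:
        # Weighted sum of sharpness and a frontality term (smaller pose angles score higher).
        frontality = 1.0 / (1.0 + abs(yaw_deg) + abs(pitch_deg))
        return w_clarity * clarity + (1.0 - w_clarity) * frontality

    def select_key_frame(face_crops, pose_angles) -> int:
        # Return the index of the frame with the highest comprehensive index.
        scores = [comprehensive_index(clarity_score(f), yaw, pitch)
                  for f, (yaw, pitch) in zip(face_crops, pose_angles)]
        return int(np.argmax(scores))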
-
Publication No.: US11465298B2
Publication Date: 2022-10-11
Application No.: US16370891
Application Date: 2019-03-30
Applicant: UBTECH Robotics Corp
Inventor: Sicong Liu , Youjun Xiong , Hongyu Ding , Qidong Xu , Jianxin Pang
Abstract: A robotic hand includes a palm, a thumb and four fingers that are connected to the palm; a first driving assembly to drive the thumb to rotate, a second driving assembly and a third driving assembly to respectively drive two of the four fingers to rotate; and a fourth driving assembly to drive the other two of the four fingers to rotate. The first driving assembly, the second driving assembly, the third driving assembly, and the fourth driving assembly are received within the palm.
-
Publication No.: US11423701B2
Publication Date: 2022-08-23
Application No.: US17118578
Application Date: 2020-12-10
Applicant: UBTECH ROBOTICS CORP LTD
Inventor: Miaochen Guo , Jingtao Zhang , Shuping Hu , Dong Wang , Zaiwang Gu , Jianxin Pang , Youjun Xiong
IPC: G06K9/62 , G06V40/20 , G06T7/73 , H04N1/60 , G06T5/00 , G06T7/90 , G06T7/64 , G06V10/22 , G06V10/56 , G06V20/40
Abstract: The present disclosure provides a gesture recognition method as well as a terminal device and a computer-readable storage medium using the same. The method includes: obtaining a video stream collected by an image recording device in real time; performing a hand recognition on the video stream to determine static gesture information of a recognized hand in each video frame of the video stream; encoding the static gesture information in the video frames of the video stream in sequence to obtain an encoded information sequence of the recognized hands; and performing a slide detection on the encoded information sequence using a preset sliding window to determine a dynamic gesture category of each recognized hand. In this manner, static gesture recognition and dynamic gesture recognition are effectively integrated in the same process. The dynamic gesture recognition is realized through the slide detection of the sliding window without complex network calculations.
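As a rough illustration of the sliding-window detection over encoded static gestures, the sketch below assumes one string code per frame and a hypothetical table DYNAMIC_PATTERNS mapping a dominant start/end code pair inside the window to a dynamic gesture category; the actual encoding and window logic of the patent are not disclosed in the abstract.

    from collections import Counter
    from typing import List, Optional

    # Hypothetical transition patterns: (dominant code in first half, dominant code in
    # second half) of the window -> dynamic gesture category.
    DYNAMIC_PATTERNS = {
        ("open_palm", "fist"): "grab",
        ("fist", "open_palm"): "release",
    }

    def detect_dynamic_gesture(codes: List[str], window: int = 8) -> Optional[str]:
        # Slide a fixed-size window over the per-frame static-gesture codes.
        for start in range(0, len(codes) - window + 1):
            seg = codes[start:start + window]
            first = Counter(seg[: window // 2]).most_common(1)[0][0]
            last = Counter(seg[window // 2:]).most_common(1)[0][0]
            if (first, last) in DYNAMIC_PATTERNS:
                return DYNAMIC_PATTERNS[(first, last)]
        return None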
-
Publication No.: US20220207913A1
Publication Date: 2022-06-30
Application No.: US17562963
Application Date: 2021-12-27
Applicant: UBTECH ROBOTICS CORP LTD
Inventor: Yusheng Zeng , Jun Cheng , Jianxin Pang
Abstract: A method for training a multi-task recognition model includes: obtaining a number of sample images, wherein some of the sample images are to provide feature-independent facial attributes, some of the sample images are to provide feature-coupled facial attributes, and some of the sample images are to provide facial attributes of face poses; training an initial feature-sharing model based on a first set of sample images to obtain a first feature-sharing model; training the first feature-sharing model based on the first set of sample images and a second set of sample images to obtain a second feature-sharing model with a loss value less than a preset second threshold; obtaining an initial multi-task recognition model by adding a feature decoupling model to the second feature-sharing model; and training the initial multi-task recognition model based on the sample images to obtain a trained multi-task recognition model.
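The staged training described above can be pictured with a toy PyTorch loop; the tiny FeatureSharingModel, the loader format (x, y_ind, y_cpl), and the loss-threshold early stop below are stand-ins chosen for illustration, not the claimed model, data, or thresholds.

    import torch
    import torch.nn as nn

    class FeatureSharingModel(nn.Module):
        # Stand-in shared backbone with two attribute heads.
        def __init__(self, dim: int = 64):
            super().__init__()
            self.backbone = nn.Sequential(nn.Linear(128, dim), nn.ReLU())
            self.head_independent = nn.Linear(dim, 4)  # feature-independent attributes
            self.head_coupled = nn.Linear(dim, 4)      # feature-coupled attributes

        def forward(self, x):
            feat = self.backbone(x)
            return self.head_independent(feat), self.head_coupled(feat)

    def train_stage(model, loader, use_coupled: bool, loss_threshold: float, epochs: int = 5):
        # Train one stage; stop early once the loss drops below the stage threshold.
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        crit = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for x, y_ind, y_cpl in loader:
                out_ind, out_cpl = model(x)
                loss = crit(out_ind, y_ind) + (crit(out_cpl, y_cpl) if use_coupled else 0.0)
                opt.zero_grad()
                loss.backward()
                opt.step()
                if loss.item() < loss_threshold:
                    return model
        return model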
-
Publication No.: US11373443B2
Publication Date: 2022-06-28
Application No.: US17105667
Application Date: 2020-11-27
Applicant: UBTECH ROBOTICS CORP LTD
Inventor: Yue Wang , Jun Cheng , Yepeng Liu , Yusheng Zeng , Jianxin Pang , Youjun Xiong
Abstract: The present disclosure provides a method and an apparatus for face recognition and a computer readable storage medium. The method includes: inputting a to-be-recognized blurry face image into a generator of a trained generative adversarial network to obtain a to-be-recognized clear face image; inputting the to-be-recognized clear face image into a feature extraction network to obtain a facial feature of the to-be-recognized clear face image; matching the facial feature of the to-be-recognized clear face image with each user facial feature in a preset facial feature database to determine the user facial feature best matching the to-be-recognized clear face image as a target user facial feature; and determining a user associated with the target user facial feature as a recognition result. Through this solution, the accuracy of the recognition of blurry faces can be improved.
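A minimal sketch of this recognition pipeline, assuming the GAN generator and the feature extractor are given as callables and that matching uses cosine similarity against a dict-based feature database; the similarity metric and the acceptance threshold are assumptions, not details from the patent.

    import numpy as np

    def match_face(blurry_image, generator, feature_extractor,
                   database: dict, threshold: float = 0.5):
        # Deblur with the GAN generator, extract a feature vector, and return the
        # best-matching user by cosine similarity, or None below the threshold.
        clear_image = generator(blurry_image)
        query = feature_extractor(clear_image)
        query = query / np.linalg.norm(query)
        best_user, best_sim = None, -1.0
        for user, feat in database.items():
            sim = float(np.dot(query, feat / np.linalg.norm(feat)))
            if sim > best_sim:
                best_user, best_sim = user, sim
        return best_user if best_sim >= threshold else None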
-
Publication No.: US20220156534A1
Publication Date: 2022-05-19
Application No.: US17389380
Application Date: 2021-07-30
Applicant: UBTECH ROBOTICS CORP LTD
Inventor: Yonghui Cai , Jun Cheng , Jianxin Pang , Youjun Xiong
Abstract: A target object detection model is provided. The target object detection model includes a YOLOv3-Tiny sub-model. Through the target object detection model, low-level information in the YOLOv3-Tiny sub-model can be merged with high-level information therein, so as to fuse the low-level and high-level information. Since the low-level information can be further used, the comprehensiveness of target detection is effectively improved, and the detection effect on small targets is improved.
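One common way to merge low-level and high-level feature maps, which the abstract gestures at, is to upsample the coarse map and concatenate it with the fine one before the detection head. The PyTorch module below sketches that generic pattern with illustrative channel arguments; it is not claimed to reproduce the patent's exact fusion structure.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FuseLowHigh(nn.Module):
        # Upsample a high-level (coarse) feature map and concatenate it with a
        # low-level (fine) one, so small-object detail is preserved for detection.
        def __init__(self, high_ch: int, low_ch: int, out_ch: int):
            super().__init__()
            self.reduce = nn.Conv2d(high_ch, high_ch // 2, kernel_size=1)
            self.fuse = nn.Conv2d(high_ch // 2 + low_ch, out_ch, kernel_size=3, padding=1)

        def forward(self, low_feat: torch.Tensor, high_feat: torch.Tensor) -> torch.Tensor:
            high = F.interpolate(self.reduce(high_feat), size=low_feat.shape[-2:], mode="nearest")
            return self.fuse(torch.cat([low_feat, high], dim=1))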
-
Publication No.: US20220067354A1
Publication Date: 2022-03-03
Application No.: US17463500
Application Date: 2021-08-31
Applicant: UBTECH Robotics Corp Ltd
Inventor: Chi Shao , Miaochen Guo , Jun Cheng , Jianxin Pang
Abstract: A dynamic gesture recognition method includes: performing detection on each frame of image of a video stream using a preset static gesture detection model to obtain a static gesture in each frame of image of the video stream; in response to detection of a change of the static gesture from a preset first gesture to a second gesture, suspending the static gesture detection model and activating a preset dynamic gesture detection model; and performing detection on multiple frames of images that are pre-stored in a storage medium using the dynamic gesture detection model to obtain a dynamic gesture recognition result.
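The switch from the static detector to the dynamic detector can be sketched as a small state machine over buffered frames; the gesture names, buffer size, and model call signatures below are assumptions made for illustration only.

    from collections import deque

    def run_recognition(frames, static_model, dynamic_model, buffer_size: int = 16,
                        first_gesture: str = "open_palm", second_gesture: str = "fist"):
        # Run the static detector per frame, keep a rolling buffer of frames, and hand
        # the buffered frames to the dynamic detector once the trigger transition occurs.
        buffer = deque(maxlen=buffer_size)
        previous = None
        for frame in frames:
            buffer.append(frame)
            gesture = static_model(frame)
            if previous == first_gesture and gesture == second_gesture:
                return dynamic_model(list(buffer))  # static detection is suspended here
            previous = gesture
        return None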
-
Publication No.: US11230001B2
Publication Date: 2022-01-25
Application No.: US16572637
Application Date: 2019-09-17
Applicant: UBTECH ROBOTICS CORP LTD
Inventor: Youjun Xiong , Ligang Ge , Yizhang Liu , Chunyu Chen , Zheng Xie , Jianxin Pang
IPC: B25J9/00 , B25J13/08 , B25J9/16 , B62D57/032
Abstract: A biped robot gait control method and a biped robot are provided, where the method includes: obtaining six-dimensional force information, and determining a motion state of two legs of the biped robot; calculating a ZMP position of each of the two legs of the biped robot; determining a ZMP expected value of each of the two legs in real time; obtaining a compensation angle of an ankle joint of each of the two legs of the biped robot by inputting the ZMP position, a change rate of the ZMP position, the ZMP expected value, and a change rate of the ZMP expected value to an ankle joint smoothing controller so as to perform a closed-loop ZMP tracking control on each of the two legs; adjusting a current angle of the ankle joint of each of the two legs of the biped robot in real time; and repeating the foregoing steps.
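The ankle joint smoothing controller takes the ZMP tracking error and its rate as inputs, which suggests a PD-style law; the sketch below assumes exactly that, with illustrative gains kp and kd that are not from the patent.

    def ankle_compensation(zmp: float, zmp_rate: float, zmp_ref: float, zmp_ref_rate: float,
                           kp: float = 0.02, kd: float = 0.005) -> float:
        # PD-style smoothing controller: map the ZMP tracking error and its rate
        # to an ankle-joint compensation angle (radians).
        error = zmp_ref - zmp
        error_rate = zmp_ref_rate - zmp_rate
        return kp * error + kd * error_rate

    def control_step(current_ankle_angle: float, zmp: float, zmp_rate: float,
                     zmp_ref: float, zmp_ref_rate: float) -> float:
        # One closed-loop iteration: adjust the current ankle angle by the compensation.
        return current_ankle_angle + ankle_compensation(zmp, zmp_rate, zmp_ref, zmp_ref_rate)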
-
Publication No.: US20210331753A1
Publication Date: 2021-10-28
Application No.: US16885227
Application Date: 2020-05-27
Applicant: UBTECH ROBOTICS CORP LTD
Inventor: Jie Bai , Ligang Ge , Hongge Wang , Yizhang Liu , Shuping Hu , Jianxin Pang , Youjun Xiong
Abstract: The present disclosure provides a stair climbing gait planning method as well as an apparatus and a robot using the same. The method includes: obtaining first visual measurement data through a visual sensor of the robot; converting the first visual measurement data to second visual measurement data; and performing a staged gait planning on a process of the robot climbing a staircase based on the second visual measurement data. Through the method, the visual measurement data is used as a reference to perform the staged gait planning on the process of the robot climbing the staircase, which greatly improves the adaptability of the robot in complex stair climbing scenes.
-