-
Publication No.: US11644841B2
Publication Date: 2023-05-09
Application No.: US17113132
Filing Date: 2020-12-07
Applicant: UBTECH ROBOTICS CORP LTD
Inventor: Shuping Hu , Jun Cheng , Jingtao Zhang , Miaochen Guo , Dong Wang , Jianxin Pang , Youjun Xiong
CPC classification number: G05D1/0212 , B62D57/02 , G05D1/0231 , G06T1/0014 , G06T7/13 , G06T7/73
Abstract: A robot climbing control method is disclosed. A gravity direction vector in a gravity direction in a camera coordinate system of a robot is obtained. A stair edge of stairs in a scene image is obtained and an edge direction vector of the stair edge in the camera coordinate system is determined. A position parameter of the robot relative to the stairs is determined according to the gravity direction vector and the edge direction vector. Poses of the robot are adjusted according to the position parameter to control the robot to climb the stairs.
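The abstract names the quantities (a gravity vector and a stair-edge vector in the camera frame) but not the formulas. As a rough illustration of how two such direction vectors can yield a relative-pose parameter, here is a minimal NumPy sketch; the function name, the tilt/yaw decomposition, and the example vectors are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def stair_alignment_angles(gravity_cam: np.ndarray, edge_cam: np.ndarray):
    """Estimate how the robot is oriented relative to a stair edge.

    gravity_cam: gravity direction in the camera frame (e.g. from an IMU).
    edge_cam:    stair-edge direction in the camera frame (e.g. from line fitting).
    Returns (tilt_deg, yaw_deg): how far the edge is out of the horizontal plane,
    and its in-plane angle relative to the camera's x-axis.
    """
    g = gravity_cam / np.linalg.norm(gravity_cam)
    e = edge_cam / np.linalg.norm(edge_cam)

    # A level stair edge is perpendicular to gravity; the residual is the tilt.
    tilt = np.degrees(np.arcsin(np.clip(np.dot(g, e), -1.0, 1.0)))

    # Project the edge and the camera x-axis onto the horizontal plane and compare
    # them to get the yaw the robot must turn to face the stairs squarely.
    e_h = e - np.dot(e, g) * g
    e_h /= np.linalg.norm(e_h)
    x_h = np.array([1.0, 0.0, 0.0]) - g[0] * g
    x_h /= np.linalg.norm(x_h)
    yaw = np.degrees(np.arccos(np.clip(np.dot(e_h, x_h), -1.0, 1.0)))
    return tilt, yaw

# Example: camera y-axis pointing down, edge nearly level but slightly rotated.
print(stair_alignment_angles(np.array([0.0, 1.0, 0.0]),
                             np.array([0.98, 0.02, 0.20])))
```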
-
Publication No.: US11636712B2
Publication Date: 2023-04-25
Application No.: US17463500
Filing Date: 2021-08-31
Applicant: UBTECH ROBOTICS CORP LTD
Inventor: Chi Shao , Miaochen Guo , Jun Cheng , Jianxin Pang
IPC: G06V40/20 , G06V10/94 , G06V20/40 , G06V10/62 , G06V10/82 , G06V40/10 , G06N3/08 , G06F18/21 , G06F18/20 , G06F18/214
Abstract: A dynamic gesture recognition method includes: performing detection on each frame of image of a video stream using a preset static gesture detection model to obtain a static gesture in each frame of image of the video stream; in response to detection of a change of the static gesture from a preset first gesture to a second gesture, suspending the static gesture detection model and activating a preset dynamic gesture detection model; and performing detection on multiple frames of images that are pre-stored in a storage medium using the dynamic gesture detection model to obtain a dynamic gesture recognition result.
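The abstract describes a trigger that switches from a static-gesture model to a dynamic-gesture model on a specific gesture transition. A minimal Python sketch of such a switch follows; the class name, the palm-to-fist trigger, and the buffer size are assumed for illustration, and `static_model`/`dynamic_model` stand in for the trained models.

```python
from collections import deque

class GestureSwitcher:
    """Toy controller that switches from static to dynamic gesture recognition."""

    def __init__(self, static_model, dynamic_model,
                 first_gesture="palm", second_gesture="fist", buffer_size=16):
        self.static_model = static_model      # frame -> static gesture label
        self.dynamic_model = dynamic_model    # list of frames -> dynamic result
        self.first_gesture = first_gesture
        self.second_gesture = second_gesture
        self.frames = deque(maxlen=buffer_size)   # pre-stored recent frames
        self.prev_label = None
        self.static_active = True

    def process(self, frame):
        self.frames.append(frame)             # keep frames for the dynamic model
        if not self.static_active:
            return None
        label = self.static_model(frame)
        triggered = (self.prev_label == self.first_gesture
                     and label == self.second_gesture)
        self.prev_label = label
        if triggered:
            # Suspend static detection and classify the buffered clip dynamically.
            self.static_active = False
            return self.dynamic_model(list(self.frames))
        return None
```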
-
Publication No.: US20210124925A1
Publication Date: 2021-04-29
Application No.: US16726833
Filing Date: 2019-12-25
Applicant: UBTECH ROBOTICS CORP LTD
Abstract: The present disclosure provides a picture book identification method as well as an apparatus and a terminal device using the same. The method includes: determining geometric parameter(s) of an identification object based on image(s) collected by a camera and internal parameter(s) of the camera; comparing the geometric parameters of the identification object with geometric parameter(s) of a target picture book; and determining the identification object as the target picture book if a difference between the geometric parameters of the identification object and the geometric parameters of the target picture book is within a preset range. In this manner, the target picture book is further filtered by matching the geometric parameters, which can reduce misidentification and improve the accuracy of identifying the picture book.
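The abstract does not state how the geometric parameters are computed or compared. One plausible reading, sketched below with a pinhole camera model, is to back-project the detected region's pixel size to metric width and height and accept the match within a tolerance; all function names, the tolerance, and the example numbers are assumptions.

```python
def physical_size(pixel_w, pixel_h, depth_m, fx, fy):
    """Back-project a detected region's pixel size to metric size (pinhole model)."""
    return depth_m * pixel_w / fx, depth_m * pixel_h / fy

def matches_target_book(obj_size, book_size, tol_m=0.02):
    """Accept the detection as the target picture book when width and height each
    differ from the book's known size by no more than tol_m metres."""
    (ow, oh), (bw, bh) = obj_size, book_size
    return abs(ow - bw) <= tol_m and abs(oh - bh) <= tol_m

# Example: a 400 x 560 px region seen at 0.5 m with fx = fy = 1000 px.
obj = physical_size(400, 560, 0.5, 1000, 1000)   # -> (0.20 m, 0.28 m)
print(matches_target_book(obj, (0.21, 0.285)))   # True within the 2 cm tolerance
```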
-
Publication No.: US12243242B2
Publication Date: 2025-03-04
Application No.: US17866574
Filing Date: 2022-07-18
Applicant: UBTECH ROBOTICS CORP LTD
Inventor: Shuping Hu , Jun Cheng , Jingtao Zhang , Miaochen Guo , Dong Wang , Zaiwang Gu , Jianxin Pang
Abstract: A method includes: performing target detection on a current image to obtain detection information of a plurality of detected targets; obtaining position prediction information and a number of tracking losses of each of a plurality of tracked targets from the tracking information of that tracked target, and determining a first matching threshold for each of the tracked targets according to its number of tracking losses; calculating a motion matching degree between each of the tracked targets and each of the detected targets according to the position detection information and the position prediction information; for each of the tracked targets, obtaining a motion matching result according to the motion matching degree and the first matching threshold corresponding to the tracked target; and matching the detected targets and the tracked targets according to the motion matching results to obtain a tracking result.
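A small, self-contained sketch of the thresholding idea: the matching threshold of a track is relaxed as its count of tracking losses grows, so a long-lost track can still be re-associated. It uses plain IoU as the motion matching degree and greedy assignment; the patent's actual matching degree, threshold schedule, and assignment method are not given in the abstract, so all of these are assumptions.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match_tracks(predicted, detected, loss_counts,
                 base_thresh=0.5, relax_per_loss=0.05, min_thresh=0.3):
    """Greedily match predicted track boxes to detections; the IoU threshold of a
    track is lowered as its consecutive loss count grows."""
    pairs, used = [], set()
    for t, pred in enumerate(predicted):
        thresh = max(min_thresh, base_thresh - relax_per_loss * loss_counts[t])
        scores = [(iou(pred, det), d) for d, det in enumerate(detected) if d not in used]
        if not scores:
            continue
        best_iou, best_d = max(scores)
        if best_iou >= thresh:
            pairs.append((t, best_d))
            used.add(best_d)
    return pairs

tracks = [[0, 0, 10, 10], [50, 50, 60, 60]]   # predicted positions
dets   = [[1, 1, 11, 11], [52, 52, 62, 62]]   # current detections
print(match_tracks(tracks, dets, loss_counts=[0, 3]))   # [(0, 0), (1, 1)]
```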
-
Publication No.: US12080098B2
Publication Date: 2024-09-03
Application No.: US17562963
Filing Date: 2021-12-27
Applicant: UBTECH ROBOTICS CORP LTD
Inventor: Yusheng Zeng , Jun Cheng , Jianxin Pang
CPC classification number: G06V40/171 , G06T7/73 , G06V40/166
Abstract: A method for training a multi-task recognition model includes: obtaining a number of sample images, wherein some of the sample images are to provide feature-independent facial attributes, some of the sample images are to provide feature-coupled facial attributes, and some of the sample images are to provide facial attributes of face poses; training an initial feature-sharing model based on a first set of sample images to obtain a first feature-sharing model; training the first feature-sharing model based on the first set of sample images and a second set of sample images to obtain a second feature-sharing model with a loss value less than a preset second threshold; obtaining an initial multi-task recognition model by adding a feature decoupling model to the second feature-sharing model; and training the initial multi-task recognition model based on the sample images to obtain a trained multi-task recognition model.
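The staged training itself is only described at a high level, so the sketch below illustrates just one supporting idea: when different sample subsets provide different facial attributes, a per-task mask lets a single loss skip the attributes a sample does not label. The task names, loss form, and weighting are assumptions, not the patent's procedure.

```python
import numpy as np

def masked_multitask_loss(preds, labels, masks, weights=None):
    """Average per-task squared errors, skipping tasks a sample has no label for.

    preds, labels: dict task_name -> array of shape (batch,)
    masks:         dict task_name -> 0/1 array, 1 where the label is present
    """
    weights = weights or {task: 1.0 for task in preds}
    total, denom = 0.0, 0.0
    for task, p in preds.items():
        err = (p - labels[task]) ** 2 * masks[task]   # masked squared error
        n = masks[task].sum()
        if n > 0:
            total += weights[task] * err.sum() / n
            denom += weights[task]
    return total / max(denom, 1e-9)

# Example: the second sample carries no pose label, so the yaw task ignores it.
preds  = {"smile": np.array([0.9, 0.2]), "yaw": np.array([10.0, 0.0])}
labels = {"smile": np.array([1.0, 0.0]), "yaw": np.array([12.0, 0.0])}
masks  = {"smile": np.array([1.0, 1.0]), "yaw": np.array([1.0, 0.0])}
print(masked_multitask_loss(preds, labels, masks))
```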
-
Publication No.: US11941844B2
Publication Date: 2024-03-26
Application No.: US17403902
Filing Date: 2021-08-17
Applicant: UBTECH ROBOTICS CORP LTD
Inventor: Yepeng Liu , Yusheng Zeng , Jun Cheng , Jing Gu , Yue Wang , Jianxin Pang
CPC classification number: G06T7/74 , G06N3/04 , G06T7/75 , G06V40/164 , G06T2207/20081 , G06T2207/20084 , G06T2207/30201
Abstract: An object detection model generation method as well as an electronic device and a computer readable storage medium using the same are provided. The method includes: during the iterative training of the to-be-trained object detection model, sequentially determining the detection accuracy at the iteration nodes of the object detection model according to the node order, and enhancing the mis-detected negative samples of the object detection model at the iteration nodes whose detection accuracy is less than or equal to a preset threshold. The object detection model is then trained at those iteration nodes based on the enhanced negative samples and a first amount of preset training samples. After the training at those iteration nodes is completed, the method returns to the step of sequentially determining the detection accuracy of the iteration nodes until the training of the object detection model is completed.
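A compact sketch of the per-node loop described above: evaluate the node, and when its accuracy falls at or below the threshold, augment its false positives and retrain on them together with a fixed amount of preset samples. `train_step`, `evaluate`, `collect_false_positives`, and `augment` are placeholder callables for the real routines; the node count and sample counts are assumed.

```python
import random

def train_with_hard_negatives(model, train_step, evaluate, collect_false_positives,
                              augment, preset_samples, nodes=5,
                              acc_threshold=0.9, first_amount=512):
    """At each iteration node, enhance mis-detected negatives of an under-performing
    model and retrain on them plus a fixed amount of preset samples."""
    for _ in range(nodes):
        accuracy = evaluate(model)
        if accuracy <= acc_threshold:
            # Enhance (augment) the false positives collected at this node.
            hard_negatives = [augment(s) for s in collect_false_positives(model)]
            batch = hard_negatives + random.sample(
                preset_samples, min(first_amount, len(preset_samples)))
            train_step(model, batch)
    return model
```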
-
Publication No.: US11776288B2
Publication Date: 2023-10-03
Application No.: US17389380
Filing Date: 2021-07-30
Applicant: UBTECH ROBOTICS CORP LTD
Inventor: Yonghui Cai , Jun Cheng , Jianxin Pang , Youjun Xiong
IPC: G06N3/04 , G06V30/24 , G06V10/94 , G06T3/40 , G06F18/214
CPC classification number: G06V30/2504 , G06T3/4046 , G06V10/95 , G06F18/2148 , G06N3/04 , G06V2201/07
Abstract: A target object detection model is provided. The target object detection model includes a YOLOv3-Tiny sub-model in which low-level feature information is merged with high-level feature information, thereby fusing the two. Since the low-level information can be further used, the comprehensiveness of target detection is effectively improved, and the detection of small targets in particular is improved.
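The abstract names the fusion but not its mechanics. In YOLO-style heads such fusion is commonly an upsample-and-concatenate step, sketched below in NumPy under that assumption; the channel counts and the 26/13 grid sizes are merely typical for a 416-pixel YOLOv3-Tiny input.

```python
import numpy as np

def fuse_features(low_level, high_level):
    """Fuse a shallow feature map with an upsampled deep one.

    low_level:  array (C1, H, W) from an early layer (fine spatial detail).
    high_level: array (C2, H // 2, W // 2) from a deeper layer (coarse semantics).
    Returns a (C1 + C2, H, W) map: nearest-neighbour upsample, then channel concat.
    """
    upsampled = high_level.repeat(2, axis=1).repeat(2, axis=2)  # 2x upsample
    return np.concatenate([low_level, upsampled], axis=0)

low = np.random.rand(128, 26, 26)
high = np.random.rand(256, 13, 13)
print(fuse_features(low, high).shape)   # (384, 26, 26)
```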
-
Publication No.: US20220189208A1
Publication Date: 2022-06-16
Application No.: US17566734
Filing Date: 2021-12-31
Applicant: UBTech Robotics Corp Ltd
Inventor: Chenghao Qian , Miaochen Guo , Jun Cheng , Jianxin Pang
Abstract: A gesture recognition method includes: acquiring a target image containing a gesture to be recognized; inputting the target image to a gesture recognition model that has a first sub-model, a second sub-model, and a third sub-model, where the first sub-model is to determine a gesture category and a gesture center point, the second sub-model is to determine an offset of the gesture center point, and the third sub-model is to determine a length and a width of a bounding box for the gesture to be recognized; acquiring an output result from the gesture recognition model, where the output result includes the gesture category, the gesture center point, the offset of the gesture center point, and the length and the width of the bounding box; and determining the gesture category and a position of the bounding box of the gesture to be recognized according to the output result.
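The abstract lists the three outputs but not how they are turned into a box. Assuming a center-point style parameterization (heatmap peak plus sub-cell offset plus width/height, with an assumed output stride), decoding could look like the NumPy sketch below; the stride and example values are illustrative only.

```python
import numpy as np

def decode_gesture(heatmap, offset, size, stride=4):
    """Decode one detection from center-point style outputs.

    heatmap: (num_classes, H, W) center-point scores.
    offset:  (2, H, W) sub-cell offsets (dx, dy) of the center point.
    size:    (2, H, W) predicted box width and height in pixels.
    Returns (class_id, score, (x1, y1, x2, y2)) in input-image coordinates.
    """
    cls, cy, cx = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    score = heatmap[cls, cy, cx]
    dx, dy = offset[0, cy, cx], offset[1, cy, cx]
    w, h = size[0, cy, cx], size[1, cy, cx]
    center_x, center_y = (cx + dx) * stride, (cy + dy) * stride
    return cls, score, (center_x - w / 2, center_y - h / 2,
                        center_x + w / 2, center_y + h / 2)

heatmap = np.zeros((3, 64, 64)); heatmap[1, 20, 30] = 0.92
offset  = np.zeros((2, 64, 64)); offset[:, 20, 30] = [0.4, 0.6]
size    = np.zeros((2, 64, 64)); size[:, 20, 30] = [48, 64]
print(decode_gesture(heatmap, offset, size))
```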
-
Publication No.: US20210056295A1
Publication Date: 2021-02-25
Application No.: US16817554
Filing Date: 2020-03-12
Applicant: UBTECH ROBOTICS CORP LTD
Inventor: Jun Cheng , Kui Guo , Jing Gu , Jianxin Pang , Youjun Xiong
Abstract: The present disclosure provides a face identification method and a terminal device using the same. The method includes: obtaining a to-be-detected image; performing a brightness enhancement process on the to-be-detected image based on a preset second calculation method to generate a to-be-identified face image; obtaining a first channel value of each channel corresponding to each pixel in the to-be-identified face image; performing another brightness enhancement process on the to-be-identified face image based on each first channel value and a preset first calculation method to obtain a target to-be-identified face image; and performing a face identification process on the target to-be-identified face image to obtain an identification result. Through the above-mentioned scheme, face identification is enhanced for images of low brightness.
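The first and second calculation methods are not specified in the abstract. As a stand-in to show the two-pass structure, the sketch below applies a global gamma lift and then a per-channel stretch based on each channel's own values; both operations, their parameters, and the synthetic example are assumptions.

```python
import numpy as np

def gamma_enhance(image, gamma=0.6):
    """First pass: global brightness lift (gamma < 1 brightens dark images)."""
    return np.clip(((image / 255.0) ** gamma) * 255.0, 0, 255).astype(np.uint8)

def channel_stretch(image, low_pct=1, high_pct=99):
    """Second pass: per-channel contrast stretch using each channel's own values."""
    out = np.empty_like(image)
    for c in range(image.shape[2]):
        lo, hi = np.percentile(image[:, :, c], [low_pct, high_pct])
        scale = 255.0 / max(hi - lo, 1e-6)
        out[:, :, c] = np.clip((image[:, :, c] - lo) * scale, 0, 255)
    return out

# A dark synthetic face crop, brightened in two passes before identification.
dark = (np.random.rand(112, 112, 3) * 60).astype(np.uint8)
ready = channel_stretch(gamma_enhance(dark))
print(dark.mean(), ready.mean())
```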
-
Publication No.: US11727784B2
Publication Date: 2023-08-15
Application No.: US17138944
Filing Date: 2020-12-31
Applicant: UBTECH ROBOTICS CORP LTD
Inventor: Yusheng Zeng , Yepeng Liu , Jun Cheng , Jianxin Pang , Youjun Xiong
CPC classification number: G08B21/18 , G06T7/70 , G06V40/165 , G06T2207/30201
Abstract: A mask wearing status alarming method, a mobile device, and a computer readable storage medium are provided. The method includes: performing a face detection on an image to determine face areas each including a target determined as a face; determining a mask wearing status of the target in each face area; in response to determining the mask wearing status as a not-masked-well status or an unmasked status, confirming the mask wearing status of the target in each face area using a trained face confirmation model to remove the face areas in which the target was mistakenly determined as a face, and determining a face pose in each of the remaining face areas to remove the face areas with a face pose not meeting a preset condition; and issuing an alert corresponding to the mask wearing status of the target in each of the remaining face areas.
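A short sketch of the filtering chain in the abstract, with every model replaced by a placeholder callable: only confirmed faces in an acceptable pose that are badly masked or unmasked reach the alarm. The pose check as a simple yaw limit and the status labels are assumptions.

```python
def mask_alarm_pipeline(image, detect_faces, classify_mask, confirm_face,
                        estimate_yaw, alert, max_yaw_deg=45):
    """Filter face areas step by step; only the remaining ones trigger an alert."""
    for area in detect_faces(image):
        status = classify_mask(area)
        if status not in ("not_masked_well", "unmasked"):
            continue                      # properly masked: nothing to report
        if not confirm_face(area):
            continue                      # drop targets mistakenly detected as faces
        if abs(estimate_yaw(area)) > max_yaw_deg:
            continue                      # drop faces turned too far from the camera
        alert(status, area)
```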