-
Publication No.: US12172325B2
Publication Date: 2024-12-24
Application No.: US18075426
Application Date: 2022-12-06
Applicant: UBTECH ROBOTICS CORP LTD
Inventor: Meihui Zhang , Yizhang Liu , Youjun Xiong , Huan Tan
Abstract: A collision detection method, a storage medium, and a robot are provided. The method includes: calculating an external torque of a first joint of the robot based on a preset generalized momentum-based disturbance observer; calculating an external torque of a second joint of the robot based on a preset long short-term memory network; calculating an external torque of a third joint of the robot based on the external torque of the first joint and the external torque of the second joint; and determining whether the robot has collided with an external environment or not based on the external torque of the third joint and a preset collision threshold. In the present disclosure, the component of the model error in the joint external torque calculated by the disturbance observer is eliminated to obtain the accurate contact torque, thereby improving the accuracy of the collision detection.
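The decision step described in the abstract can be sketched as follows. This is a minimal illustration, assuming the "third" external torque is the observer estimate with the LSTM-predicted model-error component subtracted; the function names and the per-joint magnitude test are assumptions, not the patent's exact formulation.

```python
import numpy as np

def contact_torque(tau_observer, tau_model_error):
    """Subtract the network-estimated model-error component from the
    disturbance-observer torque to recover the contact torque."""
    return np.asarray(tau_observer, dtype=float) - np.asarray(tau_model_error, dtype=float)

def collision_detected(tau_contact, threshold):
    """Flag a collision when any joint's contact torque magnitude
    exceeds the preset collision threshold."""
    return bool(np.any(np.abs(tau_contact) > threshold))
```

With small residuals the detector stays quiet; a large residual on any joint trips it.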
-
Publication No.: US20240193929A1
Publication Date: 2024-06-13
Application No.: US18536287
Application Date: 2023-12-12
Applicant: UBTECH ROBOTICS CORP LTD
Inventor: KAN WANG , Jianxin Pang , Huan Tan
CPC classification number: G06V10/82 , G06V10/7715 , G06V2201/07
Abstract: A target identification method includes: obtaining an image containing a target to be identified; performing feature extraction on the image to obtain image features in the image; and inputting the image features into a target identification network model to obtain an identification result that determines a class to which the target to be identified belongs. The target identification network model includes a loss function that is based on intra-class constraints and inter-class constraints. The intra-class constraints are to constrain an intra-class distance between sample image features of a sample target and a class center of a class to which the sample target belongs, and the inter-class constraints are to constrain inter-class distances between class centers of different classes, and/or inter-class angles between the class centers of different classes.
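The two constraints in this abstract can be sketched as a single loss term. Assumptions: Euclidean distances, a hinge-style margin penalty between class centers, and all function and parameter names; the patent's actual loss formulation may differ (e.g., it may also use inter-class angles).

```python
import numpy as np

def class_constrained_loss(features, labels, centers, margin=1.0):
    """Illustrative loss: pull each sample toward its class center
    (intra-class constraint) and push different class centers at
    least `margin` apart (inter-class constraint)."""
    feats = np.asarray(features, dtype=float)
    ctrs = np.asarray(centers, dtype=float)
    # Intra-class: mean squared distance from each sample to its center.
    intra = np.mean(np.sum((feats - ctrs[labels]) ** 2, axis=1))
    # Inter-class: hinge penalty when two centers are closer than margin.
    inter = 0.0
    for i in range(len(ctrs)):
        for j in range(i + 1, len(ctrs)):
            d = np.linalg.norm(ctrs[i] - ctrs[j])
            inter += max(0.0, margin - d) ** 2
    return intra + inter
```

When samples sit on their centers and centers are well separated, the loss vanishes; crowded centers incur the inter-class penalty.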
-
Publication No.: US20240135579A1
Publication Date: 2024-04-25
Application No.: US18380086
Application Date: 2023-10-13
Applicant: UBTECH ROBOTICS CORP LTD
Inventor: Kan WANG , Shuping Hu , Jianxin Pang , Huan Tan
IPC: G06T7/73 , G06V10/77 , G06V10/774 , G06V10/776 , G06V10/82 , G06V40/10
CPC classification number: G06T7/74 , G06V10/7715 , G06V10/774 , G06V10/776 , G06V10/82 , G06V40/10 , G06T2207/20081 , G06T2207/20084 , G06T2207/30196 , G06V20/52
Abstract: A method for obtaining a feature extraction model, a method for human fall detection and a terminal device are provided. The method for human fall detection includes: inputting a human body image into a feature extraction model for feature extraction to obtain a target image feature; in response to a distance between the target image feature and a pre-stored mean value of standing category image features being greater than or equal to a preset distance threshold, determining that the human body image is a human falling image; and in response to the distance being less than the preset distance threshold, determining that the human body image is a human standing image. The feature extraction model is obtained based on constraint training to aggregate standing category image features and separate falling category image features from the standing category image features.
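The classification rule in this abstract reduces to a single distance threshold. A minimal sketch, assuming Euclidean distance and illustrative names; the feature extractor itself is omitted.

```python
import numpy as np

def detect_fall(target_feature, standing_mean, distance_threshold):
    """Return True (falling) when the extracted image feature lies at
    or beyond the threshold distance from the pre-stored mean of
    standing-category features, else False (standing)."""
    d = np.linalg.norm(np.asarray(target_feature, dtype=float)
                       - np.asarray(standing_mean, dtype=float))
    return bool(d >= distance_threshold)
```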
-
Publication No.: US11837006B2
Publication Date: 2023-12-05
Application No.: US17364743
Application Date: 2021-06-30
Inventor: Chuqiao Dong , Dan Shao , Zhen Xiu , Dejun Guo , Huan Tan
CPC classification number: G06V40/10 , G06T7/74 , G06V10/462 , G06T2207/10024 , G06T2207/10028 , G06T2207/30196
Abstract: Human posture determination is disclosed. Human posture is determined by obtaining range image(s) through a range camera, detecting key points of an estimated skeleton of a human in color data of the range image(s) and calculating positions of the detected key points based on depth data of the range image(s), choosing a feature map from a set of predefined feature maps based on the detected key points among a set of predefined key points, obtaining two features of a body of the human corresponding to the chosen feature map based on the positions of the detected key points, and determining a posture of the human according to the two features in the chosen feature map.
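The "choose a feature map based on which key points were detected" step can be sketched as a lookup over predefined key-point sets. The key-point names and the two features per map are hypothetical; the patent does not disclose them here.

```python
# Hypothetical feature maps keyed by which predefined key points were
# detected; each map names the two body features used for the decision.
FEATURE_MAPS = {
    frozenset({"head", "hip", "knee"}): ("torso_angle", "hip_height"),
    frozenset({"head", "hip"}): ("torso_angle", "head_height"),
}

def choose_feature_map(detected_points):
    """Pick the predefined feature map whose required key points are
    all among the detected key points, preferring the largest match."""
    candidates = [k for k in FEATURE_MAPS if k <= set(detected_points)]
    if not candidates:
        return None
    return FEATURE_MAPS[max(candidates, key=len)]
```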
-
Publication No.: USD986119S1
Publication Date: 2023-05-16
Application No.: US29796810
Application Date: 2021-06-28
Designer: Brandon Jon LaPlante , Francisco Jose Hernandez , Zhen Xiu , ChengKun Zhang , Huan Tan
Abstract: FIG. 1 is a first perspective view of a robot showing the claimed design in accordance with the present disclosure;
FIG. 2 is a second perspective view thereof;
FIG. 3 is a front elevational view thereof;
FIG. 4 is a rear elevational view thereof;
FIG. 5 is a left side elevational view thereof;
FIG. 6 is a right side elevational view thereof;
FIG. 7 is a top plan view thereof;
FIG. 8 is a bottom plan view thereof;
FIG. 9 is a perspective view of the robot, wherein the robot is in a walk free navigation state; and,
FIG. 10 is a perspective view of the robot, wherein the robot is in a walk assist state.
The broken lines in the Figures are for the purpose of illustrating portions of the article that form no part of the claimed design.
-
Publication No.: US20230137715A1
Publication Date: 2023-05-04
Application No.: US17512685
Application Date: 2021-10-28
Inventor: Dan Shao , Yang Shen , Fei Long , Jiexin Cai , Huan Tan
IPC: B25J9/16
Abstract: A vision-guided picking and placing method for a mobile robot that has a manipulator having a hand and a camera, includes: receiving a command instruction that instructs the mobile robot to grasp a target item among at least one object; controlling the mobile robot to move to a determined location, controlling the manipulator to reach for the at least one object, and capturing one or more images of the at least one object using the camera; extracting visual feature data from the one or more images, matching the extracted visual feature data to preset feature data of the target item to identify the target item, and determining a grasping position and a grasping vector of the target item; and controlling the manipulator and the hand to grasp the target item according to the grasping position and the grasping vector, and placing the target item to a target position.
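The feature-matching step of this method can be sketched as a nearest-neighbor lookup against preset item features. The item names, feature vectors, distance metric, and acceptance threshold are all illustrative assumptions.

```python
import numpy as np

def identify_target(extracted, preset_items, max_distance=0.5):
    """Match extracted visual features to stored per-item features by
    nearest neighbor; return the item name, or None if nothing is
    close enough to accept."""
    best_name, best_d = None, float("inf")
    for name, feat in preset_items.items():
        d = np.linalg.norm(np.asarray(extracted, dtype=float)
                           - np.asarray(feat, dtype=float))
        if d < best_d:
            best_name, best_d = name, d
    return best_name if best_d <= max_distance else None
```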
-
Publication No.: US20230072318A1
Publication Date: 2023-03-09
Application No.: US17530501
Application Date: 2021-11-19
Inventor: Houzhu Ding , Armen Gardabad Ohanian , Brandon Jon LaPlante , Chengkun Zhang , Zhen Xiu , Huan Tan
Abstract: A robotic assistant includes a wheeled base, a body positioned on the base, a foldable seat rotatably connected to the body, an actuator to rotate the foldable seat with respect to the body, and a control system that receives command instructions. The actuator is electrically coupled to the control system. In response to the command instructions, the control system is to control the actuator to rotate the foldable seat to a folded position or an unfolded position. The control system is further to detect whether an external force from a user has applied to the foldable seat, and release the actuator to allow the foldable seat to be manually rotated.
-
Publication No.: US20220409468A1
Publication Date: 2022-12-29
Application No.: US17359672
Application Date: 2021-06-28
Inventor: Chengkun Zhang , Luis Alfredo Mateos Guzman , Houzhu Ding , Zhen Xiu , Huan Tan
Abstract: A robotic walking assistant includes a wheeled base having a base and one or more position adjustable wheels connected to the base, a body disposed in a vertical direction, positioned on the wheeled base and having a handle, and a control system that receives command instructions. Each of the one or more wheels is slidable with respect to the base between a retracted position and an extended position in a direction that is substantially parallel to a surface where the wheeled base moves. In response to the command instructions, the control system moves the one or more wheels between the retracted positions and the extended positions.
-
Publication No.: US11514927B2
Publication Date: 2022-11-29
Application No.: US17232934
Application Date: 2021-04-16
Inventor: David Ayllón Álvarez , Yi Zheng , Huan Tan
IPC: G10L25/78 , H04S3/00 , H04R1/40 , H04R3/00 , G10L15/16 , G10L15/02 , G10L25/24 , G10L25/18 , G06N3/04 , G06N3/08 , G10L15/22
Abstract: Embodiments of the disclosure provide systems and methods for speech detection. The method may include receiving a multichannel audio input that includes a set of audio signals from a set of audio channels in an audio detection array. The method may further include processing the multichannel audio input using a neural network classifier to generate a series of classification results in a series of time windows for the multichannel audio input. The neural network classifier includes a causal temporal convolutional network (TCN) configured to determine a classification result for each time window based on portions of the multichannel audio input in the corresponding time window and one or more time windows before the corresponding time window. The method may additionally include determining whether the multichannel audio input includes one or more speech segments in the series of time windows based on the series of classification results.
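The defining property of the causal TCN in this abstract is that each output depends only on current and past inputs. A minimal single-layer illustration using left-padding (the network's actual architecture, dilations, and channel counts are not disclosed here):

```python
import numpy as np

def causal_conv1d(x, kernel):
    """Causal 1-D convolution: y[t] = sum_i kernel[i] * x[t - i].
    Left-padding with kernel_size - 1 zeros guarantees the output at
    time t never sees inputs after t, mirroring a causal TCN layer."""
    x = np.asarray(x, dtype=float)
    k = np.asarray(kernel, dtype=float)
    padded = np.concatenate([np.zeros(len(k) - 1), x])
    return np.array([padded[t:t + len(k)] @ k[::-1] for t in range(len(x))])
```

Stacking such layers (with dilation) grows the receptive field over past time windows while preserving causality.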
-
Publication No.: US20220350342A1
Publication Date: 2022-11-03
Application No.: US17239603
Application Date: 2021-04-25
Inventor: Dejun Guo , Ting-Shuo Chou , Yang Shen , Huan Tan
Abstract: A moving target following method, which is executed by one or more processors of a robot that includes a camera and a sensor electrically coupled to the one or more processors, includes: performing a body detection to a body of a target based on images acquired by the camera to obtain a body detection result; performing a leg detection to legs of the target based on data acquired by the sensor to obtain a leg detection result; and fusing the body detection result and the leg detection result to obtain a fusion result, and controlling the robot to follow the target based on the fusion result.
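The fusion step in this abstract can be sketched as a weighted combination of the two position estimates, with fallback to whichever detector fired. The weights and the fallback policy are illustrative assumptions, not the patent's disclosed fusion scheme.

```python
def fuse_detections(body, leg, w_body=0.6, w_leg=0.4):
    """Fuse camera-based body and sensor-based leg position estimates
    (each a tuple or None). Uses a weighted average when both are
    present, otherwise returns the available one, else None."""
    if body is None and leg is None:
        return None
    if body is None:
        return leg
    if leg is None:
        return body
    return tuple(w_body * b + w_leg * l for b, l in zip(body, leg))
```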
-