-
Publication No.: US12017116B2
Publication Date: 2024-06-25
Application No.: US17106465
Filing Date: 2020-11-30
Inventors: Do-Hyung Kim, Jae-Hong Kim, Young-Woo Yoon, Jae-Yeon Lee, Min-Su Jang, Jeong-Dan Choi
CPC Classification: A63B24/0062, A61B5/1116, A61B5/1128, B25J19/023, G06T7/246, G06T7/73, A63B2220/806, A63B2230/62, G06T2207/30196
Abstract: Disclosed herein are an apparatus and method for evaluating human motion using a mobile robot. The method may include identifying the exercise motion of a user by analyzing an image of the user's entire body captured by a camera installed in the mobile robot, evaluating the pose of the user by comparing the standard pose of the identified exercise motion with images of the user's entire body captured by the camera of the mobile robot from two or more target locations, and comprehensively evaluating the exercise motion of the user based on the pose evaluation information from each of the two or more target locations.
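As a rough illustration of the multi-location evaluation described in this abstract, the sketch below scores a pose capture from each target location against a standard pose and averages the results. The joint angles, location names, tolerance, and scoring rule are all illustrative assumptions, not the patented method.

```python
# Hypothetical sketch: aggregate per-location pose scores into one evaluation.
def pose_score(standard_angles, observed_angles, tolerance_deg=15.0):
    """Score one captured pose against the standard pose (1.0 = perfect)."""
    errors = [abs(s - o) for s, o in zip(standard_angles, observed_angles)]
    within = sum(1 for e in errors if e <= tolerance_deg)
    return within / len(errors)

def evaluate_motion(standard_angles, captures_by_location):
    """Combine pose scores captured from two or more target locations."""
    scores = {loc: pose_score(standard_angles, obs)
              for loc, obs in captures_by_location.items()}
    overall = sum(scores.values()) / len(scores)
    return overall, scores

standard = [90.0, 45.0, 170.0, 10.0]        # assumed joint angles of the standard pose
captures = {
    "front": [88.0, 50.0, 160.0, 12.0],     # capture from first target location
    "side":  [70.0, 46.0, 168.0, 40.0],     # capture from second target location
}
overall, per_location = evaluate_motion(standard, captures)
```

A real system would extract the joint angles from the robot-camera images; here they are supplied directly so the aggregation step stands alone.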
-
Publication No.: US10800043B2
Publication Date: 2020-10-13
Application No.: US16206711
Filing Date: 2018-11-30
Inventors: Cheon-Shu Park, Jae-Hong Kim, Jae-Yeon Lee, Min-Su Jang
Abstract: Disclosed herein are an interaction apparatus and method. The interaction apparatus includes an input unit for receiving multimodal information, including an image and a voice of a target, to allow the interaction apparatus to interact with the target; a recognition unit for recognizing the turn-taking behavior of the target using the multimodal information; and an execution unit for taking an activity for interacting with the target based on the results of recognition of the turn-taking behavior.
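A minimal sketch of the recognition and execution steps described above: voice and gaze cues are combined into a turn-taking state, which then selects the robot's activity. The cue names, silence threshold, and activity labels are illustrative assumptions.

```python
# Hypothetical sketch of turn-taking recognition from multimodal cues.
def recognize_turn_taking(voice_active, gaze_at_robot, silence_ms,
                          silence_threshold_ms=700):
    """Classify the target's turn-taking behavior from voice and gaze cues."""
    if voice_active:
        return "holding_turn"
    if gaze_at_robot and silence_ms >= silence_threshold_ms:
        return "yielding_turn"
    return "pausing"

def select_activity(turn_state):
    """Execution unit: choose the apparatus's activity from the recognized state."""
    return {"holding_turn": "listen",
            "yielding_turn": "speak",
            "pausing": "wait"}[turn_state]

state = recognize_turn_taking(voice_active=False, gaze_at_robot=True, silence_ms=900)
activity = select_activity(state)
```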
-
Publication No.: US10789458B2
Publication Date: 2020-09-29
Application No.: US16213833
Filing Date: 2018-12-07
Inventors: Do-Hyung Kim, Jin-Hyeok Jang, Jae-Hong Kim, Sung-Woong Shin, Jae-Yeon Lee, Min-Su Jang
Abstract: Disclosed herein are a human behavior recognition apparatus and method. The human behavior recognition apparatus includes a multimodal sensor unit for generating at least one of image information, sound information, location information, and Internet-of-Things (IoT) information of a person using a multimodal sensor; a contextual information extraction unit for extracting contextual information for recognizing the actions of the person from the at least one piece of generated information; a human behavior recognition unit for generating behavior recognition information by recognizing the actions of the person using the contextual information, and for recognizing the final action of the person using the behavior recognition information and behavior intention information; and a behavior intention inference unit for generating the behavior intention information based on the context of action occurrence related to each of the actions of the person included in the behavior recognition information.
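To make the interplay of the recognition and intention-inference units concrete, the sketch below weights raw action-recognition scores by intention priors derived from the context of occurrence and picks the final action. The action labels, contexts, and prior values are illustrative assumptions.

```python
# Hypothetical sketch: fuse recognition scores with inferred intention priors.
def infer_intention(context_of_occurrence):
    """Behavior intention inference: map occurrence context to action priors."""
    priors = {"kitchen": {"cooking": 0.7, "cleaning": 0.3},
              "living_room": {"cooking": 0.1, "cleaning": 0.9}}
    return priors[context_of_occurrence]

def final_action(recognition_scores, context_of_occurrence):
    """Weight each recognized action by its intention prior and take the best."""
    prior = infer_intention(context_of_occurrence)
    weighted = {a: s * prior.get(a, 0.0) for a, s in recognition_scores.items()}
    return max(weighted, key=weighted.get)

scores = {"cooking": 0.5, "cleaning": 0.6}   # assumed scores from multimodal cues
action = final_action(scores, "kitchen")
```

Note how the intention prior can overturn the raw recognition ranking: "cleaning" scores higher alone, but the kitchen context makes "cooking" the final action.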
-
Publication No.: US10777198B2
Publication Date: 2020-09-15
Application No.: US16102398
Filing Date: 2018-08-13
Inventors: Young-Woo Yoon, Jae-Hong Kim, Jae-Yeon Lee, Min-Su Jang
Abstract: Disclosed herein are an apparatus and method for determining the speech and motion properties of an interactive robot. The method includes receiving interlocutor conversation information, including at least one of voice information and image information about an interlocutor that interacts with the interactive robot; extracting at least one of a verbal property and a nonverbal property of the interlocutor by analyzing the interlocutor conversation information; determining at least one of a speech property and a motion property of the interactive robot based on at least one of the verbal property, the nonverbal property, and context information inferred from a conversation between the interactive robot and the interlocutor; and controlling the operation of the interactive robot based on at least one of the determined speech property and motion property.
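One simple way to realize the "determine properties from the interlocutor" step is partial mirroring: blend the robot's default speech and motion parameters toward the interlocutor's observed ones. The property names, defaults, and mirroring weight below are illustrative assumptions, not the patent's procedure.

```python
# Hypothetical sketch: derive robot speech/motion properties by partial mirroring.
def determine_robot_properties(interlocutor, mirror_weight=0.5,
                               default_rate=1.0, default_amplitude=1.0):
    """Blend the robot's defaults toward the interlocutor's observed properties."""
    rate = ((1 - mirror_weight) * default_rate
            + mirror_weight * interlocutor["speech_rate"])
    amplitude = ((1 - mirror_weight) * default_amplitude
                 + mirror_weight * interlocutor["gesture_amplitude"])
    return {"speech_rate": rate, "motion_amplitude": amplitude}

# Assumed verbal/nonverbal properties extracted from the conversation information.
observed = {"speech_rate": 1.4, "gesture_amplitude": 0.6}
props = determine_robot_properties(observed)
```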
-
Publication No.: US11983248B2
Publication Date: 2024-05-14
Application No.: US17496588
Filing Date: 2021-10-07
Inventors: Chan-Kyu Park, Do-Hyung Kim, Jae-Hong Kim, Jae-Yeon Lee, Min-Su Jang
IPC Classification: G06F18/2431, G06F18/20, G06F18/214, G06F18/25, G06N3/08, G06T3/4046, G06T7/11, G06V10/44, G06V10/94, G06V40/10
CPC Classification: G06F18/2431, G06F18/214, G06F18/254, G06F18/285, G06N3/08, G06T3/4046, G06T7/11, G06V10/449, G06V10/95, G06V40/10, G06T2207/20076, G06T2207/20081, G06T2207/20084, G06T2207/30196
Abstract: Disclosed herein are an apparatus and method for classifying clothing attributes based on deep learning. The apparatus includes memory for storing at least one program and a processor for executing the program, wherein the program includes a first classification unit for outputting a first classification result for one or more attributes of clothing worn by a person included in an input image; a mask generation unit for outputting a mask tensor in which multiple mask layers, respectively corresponding to the principal part regions obtained by segmenting the body of the person, are stacked; a second classification unit for outputting a second classification result for the one or more attributes of the clothing by applying the mask tensor; and a final classification unit for determining and outputting a final classification result for the input image based on the first and second classification results.
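The sketch below illustrates the data flow only: a stacked mask tensor restricts a feature map to body-part regions for a second score vector, which is fused with a first whole-image score vector. The toy feature map, masks, and averaging fusion rule are assumptions standing in for the patent's trained networks.

```python
import numpy as np

# Hypothetical sketch of mask-tensor application and two-classifier fusion.
def second_classification(feature_map, mask_tensor):
    """Pool features inside each body-part mask layer, then average the parts."""
    parts = []
    for mask in mask_tensor:                    # one layer per principal part region
        masked = feature_map * mask[..., None]  # zero out features outside the part
        parts.append(masked.sum(axis=(0, 1)) / max(mask.sum(), 1))
    return np.mean(parts, axis=0)

def final_classification(first_scores, second_scores):
    """Average the two attribute score vectors and pick the best attribute."""
    return int(np.argmax((first_scores + second_scores) / 2))

h, w, n_attrs = 4, 4, 3
feature_map = np.ones((h, w, n_attrs)) * np.array([0.2, 0.5, 0.9])
mask_tensor = np.stack([np.eye(h), np.ones((h, w))])  # two stacked part masks
first = np.array([0.1, 0.8, 0.3])                     # first (whole-image) result
second = second_classification(feature_map, mask_tensor)
label = final_classification(first, second)
```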
-
Publication No.: US11691291B2
Publication Date: 2023-07-04
Application No.: US17105924
Filing Date: 2020-11-27
Inventors: Woo-Ri Ko, Do-Hyung Kim, Jae-Hong Kim, Young-Woo Yoon, Jae-Yeon Lee, Min-Su Jang
CPC Classification: B25J11/001, B25J9/161, B25J9/163, B25J9/1664, B25J9/1669, B25J9/1692, B25J11/0005, B25J13/003, B25J9/1661, G05B2219/40411
Abstract: Disclosed herein are an apparatus and method for generating robot interaction behavior. The method includes generating a co-speech gesture of the robot corresponding to the utterance input of a user; generating a nonverbal behavior of the robot, that is, a sequence of next joint positions of the robot estimated from the joint positions of the user and the current joint positions of the robot based on a pretrained neural network model for robot pose estimation; and generating a final behavior using at least one of the co-speech gesture and the nonverbal behavior.
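To show the shape of the nonverbal-behavior step without the neural network, the sketch below substitutes a simple rule that moves each robot joint a fraction of the way toward the user's joint position and rolls it forward into a short sequence. The joint layout and step fraction are illustrative assumptions replacing the pretrained pose-estimation model.

```python
# Hypothetical sketch: next-joint-position estimation rolled into a sequence.
def next_joint_positions(user_joints, robot_joints, step=0.25):
    """Estimate the robot's next joint positions from both current poses."""
    return [r + step * (u - r) for u, r in zip(user_joints, robot_joints)]

def generate_behavior_sequence(user_joints, robot_joints, n_steps=3):
    """Roll the one-step estimate forward to produce a short behavior sequence."""
    sequence, current = [], list(robot_joints)
    for _ in range(n_steps):
        current = next_joint_positions(user_joints, current)
        sequence.append(current)
    return sequence

seq = generate_behavior_sequence(user_joints=[1.0, 0.0], robot_joints=[0.0, 0.0])
```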
-
Publication No.: US10915772B2
Publication Date: 2021-02-09
Application No.: US16227327
Filing Date: 2018-12-20
Inventors: Ho-Sub Yoon, Jae-Yoon Jang, Jae-Hong Kim
Abstract: Disclosed herein are an apparatus and method for registering face poses for face recognition. The apparatus includes a face detection unit for detecting the face of a user from an image including the face; a pose recognition unit for recognizing the face pose based on the degree of rotation of the face; a registration interface unit for providing an interface showing information about whether the face poses of the user are registered; and a face registration unit for registering a face pose when it is recognized as an unregistered face pose based on the interface.
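A minimal sketch of the registration flow: the degree of rotation (yaw/pitch) is quantized into coarse pose bins, and a capture is registered only when its bin is new. The bin size and the yaw/pitch encoding are illustrative assumptions.

```python
# Hypothetical sketch: register face poses by quantized rotation bins.
def pose_bin(yaw_deg, pitch_deg, bin_size=30):
    """Map a face rotation to a coarse pose label such as (0, 0) for frontal."""
    return (round(yaw_deg / bin_size), round(pitch_deg / bin_size))

def register_pose(registered, yaw_deg, pitch_deg):
    """Register the pose if its bin is not yet covered; report the outcome."""
    bin_ = pose_bin(yaw_deg, pitch_deg)
    if bin_ in registered:
        return False                 # interface would show "already registered"
    registered.add(bin_)
    return True

registered = set()
first = register_pose(registered, yaw_deg=2.0, pitch_deg=-3.0)   # frontal: new
second = register_pose(registered, yaw_deg=5.0, pitch_deg=4.0)   # same frontal bin
third = register_pose(registered, yaw_deg=35.0, pitch_deg=0.0)   # turned: new
```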
-
Publication No.: US10748444B2
Publication Date: 2020-08-18
Application No.: US15450337
Filing Date: 2017-03-06
Inventors: Do-Hyung Kim, Min-Su Jang, Jae-Hong Kim, Young-Woo Yoon, Jae-Il Cho
Abstract: Disclosed herein are an apparatus for writing a motion script and an apparatus and method for self-teaching of a motion. The method, in which both apparatuses are used, includes creating, by the apparatus for writing a motion script, a motion script based on the expert motion of a first user; analyzing, by the apparatus for self-teaching of a motion, the motion of a second user, who learns the expert motion, based on the motion script; and outputting, by the apparatus for self-teaching of a motion, the result of the analysis of the second user's motion.
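For illustration, the sketch below treats a "motion script" as a list of expert key poses and analyzes the learner's motion frame by frame against it, reporting which frames need correction. The pose encoding and error threshold are assumptions, not the patented script format.

```python
# Hypothetical sketch: frame-by-frame analysis of a learner against a motion script.
def analyze_motion(motion_script, learner_frames, threshold=0.2):
    """Return per-frame errors and whether each frame is within tolerance."""
    report = []
    for i, (expert, learner) in enumerate(zip(motion_script, learner_frames)):
        error = max(abs(e - l) for e, l in zip(expert, learner))
        report.append({"frame": i, "error": error, "ok": error <= threshold})
    return report

script = [[0.0, 1.0], [0.5, 1.0], [1.0, 0.5]]     # expert key poses from the script
learner = [[0.1, 1.0], [0.9, 1.0], [1.0, 0.45]]   # second user's observed poses
report = analyze_motion(script, learner)
```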
-
Publication No.: US10540567B2
Publication Date: 2020-01-21
Application No.: US15079261
Filing Date: 2016-03-24
Inventors: Kye-Kyung Kim, Sang-Seung Kang, Jae-Yeon Lee, Jae-Hong Kim, Joong-Bae Kim, Sung-Woong Shin
IPC Classification: G06K9/46
Abstract: Disclosed are a bin-picking system and a bin-picking method. The bin-picking system includes a transformable bin-picking box; a supporting unit configured to support the bottom part of the bin-picking box and to be movable upward and downward; and a control unit configured to change the alignment of at least one bin-picking candidate object by transforming the bin-picking box through control of the movement of the supporting unit when no bin-picking target object is detected among the at least one bin-picking candidate object existing inside the bin-picking box and placed on the supporting unit, thereby facilitating detection of a bin-picking target object and bin-picking.
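The control logic above amounts to a retry loop: when detection fails, move the supporting unit to transform the box (realigning the candidate objects), then try again. The sketch below uses a mock detector as an illustrative stand-in for the real vision system; the loop structure is the point.

```python
# Hypothetical sketch of the bin-picking control loop with a mock detector.
def make_mock_detector(succeeds_on_attempt):
    """Detector that finds a target only after enough box transformations."""
    state = {"attempt": 0}
    def detect():
        state["attempt"] += 1
        return state["attempt"] >= succeeds_on_attempt
    return detect

def bin_picking_loop(detect, max_transforms=5):
    """Try to detect a target, transforming the box between failed attempts."""
    transforms = 0
    while not detect():
        if transforms >= max_transforms:
            return None                # give up: objects could not be realigned
        transforms += 1                # move supporting unit -> transform the box
    return transforms                  # transformations needed before success

transforms_needed = bin_picking_loop(make_mock_detector(succeeds_on_attempt=3))
```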
-
Publication No.: US09990538B2
Publication Date: 2018-06-05
Application No.: US15089944
Filing Date: 2016-04-04
Inventors: Ho-Sub Yoon, Kyu-Dae Ban, Young-Woo Yoon, Jae-Hong Kim
CPC Classification: G06K9/00275, G06K9/00248, G06K9/00281, G06K9/00288, G06K9/6215
Abstract: Disclosed herein is a face recognition technology using physiognomic feature information, which can improve the accuracy of face recognition. To this end, the face recognition method using physiognomic feature information includes defining standard physiognomic types for the respective facial elements, capturing a facial image of a user, detecting information about the facial elements from the facial image, and calculating similarity scores relative to the standard physiognomic types for the respective facial elements of the user based on the detected facial element information.
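As a toy illustration of the similarity-score step, the sketch below compares a scalar measurement per facial element against reference values for each standard physiognomic type. The element measurements, type definitions, and similarity function are all illustrative assumptions.

```python
# Hypothetical sketch: score facial elements against standard physiognomic types.
def similarity(measured, standard):
    """Similarity in (0, 1]; 1.0 means the measurement matches the type exactly."""
    return 1.0 / (1.0 + abs(measured - standard))

def score_physiognomic_types(element_measurements, standard_types):
    """Score every (facial element, standard type) pair for one user."""
    return {element: {t: similarity(value, ref)
                      for t, ref in standard_types[element].items()}
            for element, value in element_measurements.items()}

standard_types = {"eyes": {"round": 1.0, "narrow": 0.4},
                  "nose": {"long": 0.8, "short": 0.3}}
measurements = {"eyes": 0.9, "nose": 0.35}     # assumed detected facial element info
scores = score_physiognomic_types(measurements, standard_types)
best_eye_type = max(scores["eyes"], key=scores["eyes"].get)
```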