Abstract:
Disclosed herein are an apparatus and method for determining a modality of interaction between a user and a robot. The apparatus includes memory in which at least one program is recorded and a processor for executing the program. The program recognizes a user state and an environment state by sensing the circumstances around the robot, determines an interaction capability state associated with interaction with the user based on the recognized user state and environment state, and determines the interaction behavior of the robot for the interaction with the user based on the user state, the environment state, and the interaction capability state.
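For illustration only, the following Python sketch shows one way such a state-to-behavior mapping could look. The state fields, modalities, thresholds, and preference order are hypothetical assumptions, not details taken from the abstract.

```python
# Hypothetical sketch of the state-to-behavior mapping described above; the
# state fields, enums, and rules are illustrative, not the patent's method.
from dataclasses import dataclass
from enum import Enum, auto

class Modality(Enum):
    SPEECH = auto()
    GESTURE = auto()
    SCREEN = auto()

@dataclass
class UserState:
    distance_m: float      # distance between the user and the robot
    facing_robot: bool     # whether the user is looking at the robot

@dataclass
class EnvState:
    noise_db: float        # ambient noise level

def interaction_capability(user: UserState, env: EnvState) -> dict:
    """Derive which channels are currently usable for interaction."""
    return {
        Modality.SPEECH: env.noise_db < 70 and user.distance_m < 3.0,
        Modality.GESTURE: user.facing_robot and user.distance_m < 5.0,
        Modality.SCREEN: user.distance_m < 1.5,
    }

def choose_behavior(user: UserState, env: EnvState) -> Modality:
    """Pick the first usable modality in a fixed preference order."""
    capability = interaction_capability(user, env)
    for modality in (Modality.SPEECH, Modality.GESTURE, Modality.SCREEN):
        if capability[modality]:
            return modality
    return Modality.SCREEN  # fall back to an on-robot display

print(choose_behavior(UserState(2.0, True), EnvState(55.0)))  # Modality.SPEECH
```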
Abstract:
An apparatus and method for detecting an object using a multi-directional integral image are disclosed. The apparatus includes an area segmentation unit, an integral image calculation unit, and an object detection unit. The area segmentation unit places windows of size x*y on a full image of w*h pixels so that they overlap one another at their edges, thereby segmenting the full image into a single area, a double area, and a quadruple area. The integral image calculation unit calculates a single-directional integral image for the single area and multi-directional integral images for the double and quadruple areas. The object detection unit detects an object in the full image using the single-directional integral image and the multi-directional integral images.
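The core computation here is the integral image (summed-area table) evaluated in several scan directions. The NumPy sketch below illustrates that idea under the assumption that a "direction" corresponds to the corner from which sums are accumulated; the patent's single/double/quadruple area segmentation is not reproduced.

```python
# Minimal sketch of single- and multi-directional integral images with NumPy.
# Only the underlying integral-image idea is shown, computed from each of the
# four corners; the patent's area-segmentation scheme is not reproduced.
import numpy as np

def integral_image(img: np.ndarray, origin: str = "tl") -> np.ndarray:
    """Summed-area table of `img` accumulated from one of the four corners:
    'tl' (top-left), 'tr', 'bl', or 'br'."""
    flip_v = origin in ("bl", "br")
    flip_h = origin in ("tr", "br")
    a = img[::-1] if flip_v else img
    a = a[:, ::-1] if flip_h else a
    ii = a.cumsum(axis=0).cumsum(axis=1)
    ii = ii[::-1] if flip_v else ii
    return ii[:, ::-1] if flip_h else ii

img = np.arange(12, dtype=np.int64).reshape(3, 4)
tl = integral_image(img, "tl")          # conventional single-directional table
multi = [integral_image(img, o) for o in ("tl", "tr", "bl", "br")]

# Constant-time rectangle sum via the top-left table: sum of img[1:3, 1:3].
r0, r1, c0, c1 = 1, 2, 1, 2
s = tl[r1, c1] - tl[r0 - 1, c1] - tl[r1, c0 - 1] + tl[r0 - 1, c0 - 1]
assert s == img[r0:r1 + 1, c0:c1 + 1].sum()
```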
Abstract:
Disclosed herein are a human behavior recognition apparatus and method. The human behavior recognition apparatus includes a multimodal sensor unit for generating at least one of image information, sound information, location information, and Internet-of-Things (IoT) information of a person using a multimodal sensor, a contextual information extraction unit for extracting contextual information for recognizing actions of the person from the at least one piece of generated information, a human behavior recognition unit for generating behavior recognition information by recognizing the actions of the person using the contextual information and recognizing a final action of the person using the behavior recognition information and behavior intention information, and a behavior intention inference unit for generating the behavior intention information based on the context of action occurrence for each action of the person included in the behavior recognition information.
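As a hedged illustration of the described data flow (contextual extraction, action recognition, intention inference, final action), here is a stub pipeline. Every name, field, and rule below is hypothetical; the real recognizers are replaced by trivial stand-ins.

```python
# Illustrative data flow for the recognition/intention loop described above;
# all class and field names are hypothetical, and the recognizers are stubs.
from dataclasses import dataclass, field

@dataclass
class Context:
    location: str
    detected_objects: list[str] = field(default_factory=list)

def extract_context(image_info, sound_info, location_info, iot_info) -> Context:
    # Stub: a real system would fuse the multimodal inputs here.
    return Context(location=location_info, detected_objects=image_info)

def recognize_actions(ctx: Context) -> list[str]:
    # Stub action recognizer over the extracted context.
    return ["pick_up_cup"] if "cup" in ctx.detected_objects else ["idle"]

def infer_intention(actions: list[str], ctx: Context) -> str:
    # Intention inferred from the context in which the actions occur.
    if "pick_up_cup" in actions and ctx.location == "kitchen":
        return "drinking"
    return "unknown"

def recognize_final_action(actions: list[str], intention: str) -> str:
    # Final action combines the recognized actions with the inferred intention.
    return f"{actions[-1]}::{intention}"

ctx = extract_context(["cup"], None, "kitchen", {})
actions = recognize_actions(ctx)
print(recognize_final_action(actions, infer_intention(actions, ctx)))
```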
Abstract:
An apparatus for creating a radio map includes a radio signal acquiring unit that acquires information on radio signals exchanged among cooperative intelligent robots, a radio environment modeling unit that estimates the radio strength of each cell constituting the radio map from the information acquired by the radio signal acquiring unit, and a radio map creating unit that classifies the communication region of each cell and models the radio map according to the radio strength of each cell estimated by the radio environment modeling unit.
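Assuming a log-distance path-loss model and a simple threshold classification, the per-cell estimation and region labeling could be sketched as follows; the propagation model, thresholds, and grid layout are assumptions, not the patent's method.

```python
# Rough sketch of per-cell radio-strength estimation and region
# classification, assuming a log-distance path-loss model; the model, the
# thresholds, and the grid are illustrative assumptions.
import numpy as np

def estimate_rssi(tx_pos, cell_pos, tx_power_dbm=0.0, n=2.5, d0=1.0):
    """Log-distance path loss: RSSI = P0 - 10*n*log10(d/d0)."""
    d = max(np.linalg.norm(np.asarray(cell_pos) - np.asarray(tx_pos)), d0)
    return tx_power_dbm - 10.0 * n * np.log10(d / d0)

def classify(rssi_dbm: float) -> str:
    """Label a cell's communication region by signal strength."""
    if rssi_dbm > -60:
        return "strong"
    if rssi_dbm > -80:
        return "usable"
    return "shadow"

# Build a 10 m x 10 m radio map on a 1 m cell grid around a robot at (2, 3).
robot_pos = (2.0, 3.0)
rssi_map = np.array([[estimate_rssi(robot_pos, (x + 0.5, y + 0.5))
                      for x in range(10)] for y in range(10)])
region_map = np.vectorize(classify)(rssi_map)
print(region_map[3][2], rssi_map[3, 2])  # the cell containing the robot
```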
Abstract:
Disclosed herein are an apparatus and method for classifying clothing attributes based on deep learning. The apparatus includes memory for storing at least one program and a processor for executing the program, wherein the program includes a first classification unit for outputting a first classification result for one or more attributes of clothing worn by a person included in an input image, a mask generation unit for outputting a mask tensor in which multiple mask layers respectively corresponding to principal part regions obtained by segmenting a body of the person included in the input image are stacked, a second classification unit for outputting a second classification result for the one or more attributes of the clothing by applying the mask tensor, and a final classification unit for determining and outputting a final classification result for the input image based on the first classification result and the second classification result.
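The mask-tensor branch can be pictured with a toy NumPy sketch: per-part mask layers are stacked into a tensor, applied to the input, and a second classification is fused with the first. The shapes, the stub classifier, and the averaging fusion are illustrative assumptions, not the patent's model.

```python
# Minimal sketch of the two-branch idea: a global classification, a masked
# classification using a stacked per-part mask tensor, and a fused result.
import numpy as np

H, W, PARTS, ATTRS = 8, 8, 3, 4          # toy sizes: image, body parts, attributes
rng = np.random.default_rng(0)

image = rng.random((H, W))                      # stand-in for an input image
mask_tensor = rng.random((PARTS, H, W)) > 0.5   # one mask layer per body part

def classify(features: np.ndarray) -> np.ndarray:
    """Stub classifier: returns per-attribute scores in [0, 1]."""
    w = rng.random((features.size, ATTRS))
    logits = features.ravel() @ w
    return 1.0 / (1.0 + np.exp(-(logits - logits.mean())))

first = classify(image)                                   # first classification
masked = np.stack([image * m for m in mask_tensor])       # apply the mask tensor
second = classify(masked)                                 # second classification
final = (first + second) / 2.0                            # fused final result
print(final.round(3))
```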
Abstract:
Disclosed herein are an apparatus and method for generating robot interaction behavior. The method for generating robot interaction behavior includes generating a co-speech gesture of the robot corresponding to the utterance input of a user, generating a nonverbal behavior of the robot, that is, a sequence of next joint positions of the robot estimated from the joint positions of the user and the current joint positions of the robot using a neural network model pre-trained for robot pose estimation, and generating a final behavior using at least one of the co-speech gesture and the nonverbal behavior.
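A minimal sketch of the nonverbal-behavior branch follows, with an untrained PyTorch MLP standing in for the pre-trained pose-estimation model; the joint counts and the final-behavior selection rule are assumptions made for illustration.

```python
# Hedged sketch of the nonverbal-behavior branch: an untrained MLP stands in
# for the pre-trained pose-estimation network; joint counts and the blending
# rule are assumptions, not the patent's model.
import torch
import torch.nn as nn

N_USER_JOINTS, N_ROBOT_JOINTS = 15, 10   # assumed skeleton / robot sizes

# Maps (user joints, current robot joints) -> next robot joint positions.
pose_net = nn.Sequential(
    nn.Linear((N_USER_JOINTS + N_ROBOT_JOINTS) * 3, 128),
    nn.ReLU(),
    nn.Linear(128, N_ROBOT_JOINTS * 3),
)

def next_robot_pose(user_joints: torch.Tensor,
                    robot_joints: torch.Tensor) -> torch.Tensor:
    x = torch.cat([user_joints.flatten(), robot_joints.flatten()])
    return pose_net(x).view(N_ROBOT_JOINTS, 3)

user_joints = torch.randn(N_USER_JOINTS, 3)
robot_joints = torch.randn(N_ROBOT_JOINTS, 3)
nonverbal = next_robot_pose(user_joints, robot_joints)

# Final behavior: here, simply prefer the co-speech gesture when one exists.
co_speech_gesture = None                  # e.g., produced from the user's utterance
final_behavior = co_speech_gesture if co_speech_gesture is not None else nonverbal
print(final_behavior.shape)               # torch.Size([10, 3])
```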
Abstract:
Disclosed herein are an apparatus and method for determining the speech and motion properties of an interactive robot. The method for determining the speech and motion properties of an interactive robot includes receiving interlocutor conversation information including at least one of voice information and image information about an interlocutor who interacts with an interactive robot, extracting at least one of a verbal property and a nonverbal property of the interlocutor by analyzing the interlocutor conversation information, determining at least one of a speech property and a motion property of the interactive robot based on at least one of the verbal property, the nonverbal property, and context information inferred from a conversation between the interactive robot and the interlocutor, and controlling the operation of the interactive robot based on at least one of the determined speech property and motion property of the interactive robot.
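One hedged way to picture the property mapping: extracted interlocutor properties are mirrored onto the robot, damped by inferred context. The specific properties and the mirroring rule below are assumptions for illustration, not the patent's mapping.

```python
# Illustrative property mapping only: the extracted properties and the
# mirroring rule are assumptions used to show the overall flow.
from dataclasses import dataclass

@dataclass
class InterlocutorProperties:
    speech_rate_wpm: float     # verbal property from voice information
    gesture_amplitude: float   # nonverbal property from image information, 0..1

@dataclass
class RobotProperties:
    speech_rate_wpm: float
    motion_amplitude: float

def determine_robot_properties(p: InterlocutorProperties,
                               formal_context: bool) -> RobotProperties:
    """Mirror the interlocutor's style, damped in a formal context."""
    damping = 0.6 if formal_context else 1.0
    return RobotProperties(
        speech_rate_wpm=p.speech_rate_wpm * damping,
        motion_amplitude=min(1.0, p.gesture_amplitude * damping),
    )

print(determine_robot_properties(InterlocutorProperties(160, 0.8),
                                 formal_context=True))
```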
Abstract:
Disclosed herein are a cloud server, an edge server, and a method for generating an intelligence model using the same. The method for generating an intelligence model includes receiving, by the edge server, an intelligence model generation request from a user terminal, generating an intelligence model corresponding to the intelligence model generation request, and adjusting the generated intelligence model.
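The edge-side flow (receive a generation request, generate a model, adjust it) can be sketched as below; all names and fields are hypothetical, and real model generation and adjustment are stubbed out.

```python
# Toy sketch of the edge-side flow: receive a generation request, build a
# model, then adjust it. Names are hypothetical; the cloud-server side and
# any real training are omitted.
from dataclasses import dataclass

@dataclass
class ModelRequest:
    task: str            # e.g., "object-detection"
    target_device: str   # e.g., a robot or sensor identifier

@dataclass
class IntelligenceModel:
    task: str
    version: int = 0

def generate_model(req: ModelRequest) -> IntelligenceModel:
    # Stand-in for selecting/assembling a base model for the requested task.
    return IntelligenceModel(task=req.task)

def adjust_model(model: IntelligenceModel) -> IntelligenceModel:
    # Stand-in for edge-side adjustment such as fine-tuning or compression.
    model.version += 1
    return model

model = adjust_model(generate_model(ModelRequest("object-detection", "edge-01")))
print(model)
```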
Abstract:
Disclosed herein are an apparatus and method for recommending federated learning based on recognition model tendency analysis. The method for recommending federated learning based on recognition model tendency analysis in a server device may include analyzing the tendency of the recognition model that each of multiple user terminals has trained using reinforcement learning, grouping the multiple user terminals according to the tendencies of their recognition models, and transmitting federated-learning group information that includes information about the other user terminals grouped together with at least one of the multiple user terminals.
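One possible reading of tendency-based grouping, sketched with k-means over hypothetical per-terminal tendency vectors; the vector summary and the clustering algorithm are assumptions, not the patent's method.

```python
# Minimal sketch of tendency-based grouping: each terminal's trained model is
# summarized as a "tendency" vector, and terminals are clustered with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical tendency vectors for 12 user terminals (e.g., statistics of
# each terminal's reinforcement-learned recognition model).
tendencies = np.vstack([
    rng.normal(0.0, 0.3, size=(6, 4)),   # terminals with one tendency
    rng.normal(2.0, 0.3, size=(6, 4)),   # terminals with another
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tendencies)

# Federated-learning group info: the peers grouped with each terminal.
groups = {g: np.flatnonzero(labels == g).tolist() for g in set(labels)}
for terminal_id, g in enumerate(labels):
    peers = [t for t in groups[g] if t != terminal_id]
    print(f"terminal {terminal_id}: group {g}, peers {peers}")
```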
Abstract:
Disclosed herein are an apparatus and method for evaluating a human motion using a mobile robot. The method may include identifying the exercise motion of a user by analyzing an image of the entire body of the user captured using a camera installed in the mobile robot, evaluating the pose of the user by comparing the standard pose of the identified exercise motion with images of the entire body of the user captured by the camera of the mobile robot from two or more target locations, and comprehensively evaluating the exercise motion of the user based on the pose evaluation information of the user from each of the two or more target locations.
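A toy sketch of the multi-viewpoint scoring step: per-view joint-position error against the standard pose is mapped to a score and then averaged over viewpoints for the comprehensive evaluation. The error metric, score mapping, and joint count are illustrative assumptions.

```python
# Sketch of multi-viewpoint pose scoring: per-view joint-position error
# against a standard pose, converted to a score and averaged over views.
import numpy as np

def pose_score(observed: np.ndarray, standard: np.ndarray) -> float:
    """Score in [0, 1] from mean per-joint Euclidean error (in meters)."""
    err = np.linalg.norm(observed - standard, axis=1).mean()
    return float(np.exp(-err))            # 1.0 for a perfect match

rng = np.random.default_rng(1)
standard_pose = rng.random((15, 3))       # assumed 15-joint standard pose

# Poses estimated from full-body images captured at two target locations.
views = [standard_pose + rng.normal(0, 0.05, standard_pose.shape)
         for _ in range(2)]

per_view = [pose_score(v, standard_pose) for v in views]
overall = float(np.mean(per_view))        # comprehensive evaluation
print([round(s, 3) for s in per_view], round(overall, 3))
```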