Abstract:
A mobile robot includes a travel drive unit configured to move a main body, an image acquisition unit configured to acquire a plurality of images by continuously photographing the surroundings of the main body, a storage configured to store the continuous images acquired by the image acquisition unit, a sensor unit having one or more sensors configured to sense an object during movement of the main body, and a controller. In response to the sensing of an object by the sensor unit, the controller selects, from among the continuous images, an image acquired at a specific point in time earlier than the object sensing time, based on the moving direction and moving speed of the main body, and recognizes an attribute of the object included in the selected image.
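The core idea, selecting a frame from before the sensing moment so that the object is still visible ahead of the robot, can be sketched as a timestamped ring buffer plus a speed-dependent look-back offset. This is a minimal illustration, not the patented implementation; the `sensor_range` of 0.3 m, the buffer size, and all names are assumptions.

```python
from collections import deque

class FrameBuffer:
    """Ring buffer of (timestamp, image) pairs captured continuously."""
    def __init__(self, maxlen=100):
        self.frames = deque(maxlen=maxlen)

    def add(self, timestamp, image):
        self.frames.append((timestamp, image))

    def frame_before(self, sensing_time, offset):
        """Return the stored frame closest to (sensing_time - offset)."""
        target = sensing_time - offset
        return min(self.frames, key=lambda f: abs(f[0] - target))

def lookback_offset(speed, sensor_range=0.3, min_offset=0.1):
    """Seconds to look back, assuming the sensor fires when the object
    is roughly sensor_range metres ahead: offset = distance / speed."""
    if speed <= 0:
        return min_offset
    return max(min_offset, sensor_range / speed)

# Fill the buffer at 10 Hz, then look up the frame for a sensing event.
buf = FrameBuffer()
for t in range(10):
    buf.add(t * 0.1, f"img{t}")

offset = lookback_offset(speed=0.5)            # 0.3 m / 0.5 m/s = 0.6 s
ts, img = buf.frame_before(sensing_time=0.9, offset=offset)
```

The faster the robot moves, the shorter the look-back, which is one plausible reading of "based on a moving direction and a moving speed of the main body"; direction would additionally select which camera or image region to use.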
Abstract:
A self-learning robot, according to one embodiment of the present invention, comprises: a data receiving unit for sensing video data or audio data relating to an object located within a predetermined range; a data recognition unit for matching data received from the data receiving unit against data included in a database in the self-learning robot; a result output unit for outputting a matching result from the data recognition unit; a recognition result verifying unit for determining the accuracy of the matching result; a server communication unit for transmitting data received from the data receiving unit to a server when the accuracy of the matching result determined by the recognition result verifying unit is lower than a predetermined level; and an action command unit for causing the self-learning robot to perform a pre-set object response action when the accuracy of the matching result is at least the predetermined level.
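The control flow described above is a confidence gate: match locally, and either act on the result or forward the raw observation to a server for learning. A minimal sketch follows; the similarity measure, the threshold of 0.8, and all function names are illustrative assumptions, not taken from the patent.

```python
def handle_observation(observation, database, threshold=0.8,
                       send_to_server=None, respond=None):
    """Match an observation against a local database; forward
    low-confidence matches to a server, otherwise perform the
    pre-set response action for the matched label."""
    def similarity(a, b):
        # Toy score: fraction of feature positions that agree.
        return sum(x == y for x, y in zip(a, b)) / max(len(a), 1)

    label, score = max(((lbl, similarity(observation, ref))
                        for lbl, ref in database.items()),
                       key=lambda p: p[1])
    if score < threshold:
        if send_to_server:
            send_to_server(observation)   # let the server (re)label it
        return None
    if respond:
        respond(label)                    # pre-set object response action
    return label

# Toy database of label -> reference feature vector.
db = {"cup": [1, 0, 1, 1], "shoe": [0, 1, 0, 0]}
sent, actions = [], []
handle_observation([1, 0, 1, 0], db,
                   send_to_server=sent.append, respond=actions.append)
```

Here the first observation matches "cup" with only 0.75 similarity, so it is sent to the server instead of triggering an action.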
Abstract:
A mobile robot includes a travel drive unit configured to move a main body, an image acquisition unit configured to acquire an image of the surroundings of the main body, a sensor unit having one or more sensors configured to sense an object during movement of the main body, a storage, and a controller. When the sensor unit senses an object, the storage stores position information of the sensed object and position information of the mobile robot, registers an area of a predetermined size around the position of the sensed object as an object area in a map, and stores the images acquired by the image acquisition unit in the object area. The controller has an object recognition module configured to recognize the object sequentially across the images acquired in the object area and to determine a final attribute of the object based on the plurality of recognition results.
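The object-area bookkeeping and the fusion of per-image results can be sketched as below. The abstract only says the final attribute is "based on a plurality of recognition results"; a majority vote is one plausible fusion rule, and the square-region geometry and all names here are assumptions.

```python
from collections import Counter

class ObjectArea:
    """Square region registered around a sensed object's map position."""
    def __init__(self, center, half_size=0.5):
        self.center = center
        self.half_size = half_size
        self.recognitions = []            # sequential per-image results

    def contains(self, pos):
        return (abs(pos[0] - self.center[0]) <= self.half_size and
                abs(pos[1] - self.center[1]) <= self.half_size)

    def add_recognition(self, attribute, confidence):
        self.recognitions.append((attribute, confidence))

    def final_attribute(self):
        """Fuse the sequential results by majority vote (one plausible
        reading of 'based on a plurality of recognition results')."""
        if not self.recognitions:
            return None
        votes = Counter(attr for attr, _ in self.recognitions)
        return votes.most_common(1)[0][0]

# Register an area around a sensed object, then recognize three images.
area = ObjectArea(center=(2.0, 3.0))
for attr, conf in [("cable", 0.7), ("toy", 0.4), ("cable", 0.8)]:
    area.add_recognition(attr, conf)
```

Fusing several views taken inside the area makes the final attribute robust against a single bad recognition, which appears to be the point of registering the area at all.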
Abstract:
The present disclosure provides a method for controlling a robot cleaner, including a travel operation in which the robot cleaner travels, a recognition operation in which, when the robot cleaner contacts an obstacle during travel, the robot cleaner determines whether the obstacle is being pushed and slid by the robot cleaner, and an obstacle bypass operation in which, upon determining that the obstacle is a pushed-and-sliding obstacle, the robot cleaner stops travelling and then bypasses the obstacle.
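One way to detect a pushed-and-sliding obstacle is to notice that the bumper stays pressed while the odometry keeps advancing: a fixed obstacle would stop the robot, a sliding one travels along in front of it. The patent does not state its criterion, so the heuristic, the 0.05 m threshold, and the `DriveLog` helper below are all assumptions for illustration.

```python
def classify_contact(bumper_pressed_log, distance_travelled,
                     slide_threshold=0.05):
    """Classify an obstacle contact as fixed or pushed-and-sliding.

    Heuristic sketch: if the bumper remained pressed over the whole
    contact while the robot still advanced more than slide_threshold
    metres, assume the obstacle slid along instead of stopping us."""
    if all(bumper_pressed_log) and distance_travelled > slide_threshold:
        return "pushed_and_sliding"
    return "fixed"

def react(contact_kind, drive):
    """Stop and bypass only for pushed-and-sliding obstacles."""
    if contact_kind == "pushed_and_sliding":
        drive.stop()
        drive.bypass()    # e.g. back up and steer around the obstacle

class DriveLog:
    """Tiny stand-in for the travel drive, recording commands."""
    def __init__(self): self.log = []
    def stop(self): self.log.append("stop")
    def bypass(self): self.log.append("bypass")

drive = DriveLog()
kind = classify_contact([True, True, True], distance_travelled=0.12)
react(kind, drive)
```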
Abstract:
An image photographing device for photographing images through a front camera and a rear camera, according to one embodiment of the present invention, comprises: a display unit; a feature extraction unit for extracting facial features from an image of a user's face displayed on a preview screen through the rear camera; a structure extraction unit for extracting the structure of the user's face using the extracted facial features; an expression extraction unit for extracting the user's facial expression using the extracted facial features if the extracted facial structure matches a standard facial structure; and a notification unit for outputting a photograph notification signal if the extracted facial expression matches a standard facial expression.
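The abstract describes a two-stage gate: the expression is only checked once the facial structure matches, and the notification fires only when both match. A minimal sketch of that control flow follows; the extractor callables, the dictionary features, and exact-equality matching are placeholders, not the patented matching method.

```python
def should_notify(features, standard_structure, standard_expression,
                  extract_structure, extract_expression):
    """Two-stage gate: check facial structure first, then expression;
    emit the photograph notification only when both match."""
    structure = extract_structure(features)
    if structure != standard_structure:
        return False                  # structure gate failed; skip expression
    expression = extract_expression(features)
    return expression == standard_expression

# Toy usage with dictionary "features" and trivial extractors.
features = {"structure": "frontal", "expression": "smile"}
ok = should_notify(features, "frontal", "smile",
                   lambda f: f["structure"], lambda f: f["expression"])
```

Gating on structure first means the (presumably costlier) expression analysis runs only when the face is posed acceptably, which matches the ordering in the abstract.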