Abstract:
According to various embodiments of the present invention, an electronic device comprises: a memory including instructions and a training database that includes data on at least one object acquired on the basis of an artificial intelligence algorithm; at least one sensor; and a processor connected to the at least one sensor and the memory, wherein the processor may be configured to execute the instructions to acquire data on a designated area including the at least one object by using the at least one sensor, identify location information and positioning information on the at least one object on the basis of the training database, and transmit, on the basis of the identified location information and positioning information, a control signal for picking the at least one object to a picking tool associated with the electronic device.
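A minimal sketch of the acquire-identify-pick flow described above. The class and method names (`sensor.capture`, `training_db.identify`, `picking_tool.send_control_signal`) are hypothetical interfaces assumed for illustration and are not defined in the abstract.

```python
from dataclasses import dataclass


# Illustrative pose type; the abstract's "positioning information" is modeled
# here as a single rotation for simplicity.
@dataclass
class ObjectPose:
    x: float
    y: float
    z: float
    yaw: float


class PickingController:
    """Sketch of the acquire -> identify -> pick control flow."""

    def __init__(self, sensor, training_db, picking_tool):
        self.sensor = sensor            # e.g. a depth-camera wrapper
        self.training_db = training_db  # lookup built from trained object data
        self.picking_tool = picking_tool

    def pick_object(self, object_id: str) -> None:
        # 1. Acquire data on the designated area with the sensor.
        scene = self.sensor.capture()
        # 2. Identify location and positioning information on the basis of
        #    the training database (assumed to expose an `identify` method).
        pose: ObjectPose = self.training_db.identify(scene, object_id)
        # 3. Transmit a control signal for picking to the picking tool.
        self.picking_tool.send_control_signal(target=pose, action="pick")
```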
Abstract:
Disclosed herein are a movable object and a movable object control method. The movable object control method may include acquiring an image of the surroundings of the movable object, acquiring a signal whose strength changes depending on the location of the movable object, generating a map on the basis of the signal and the image of the surroundings, and applying the map to an algorithm to obtain a learned algorithm.
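A small sketch of the "map plus learning algorithm" step, assuming a signal-fingerprint style map in which each surveyed location is paired with a measured signal strength. The k-nearest-neighbors regressor is only a stand-in for the unspecified algorithm; the image features mentioned in the abstract are omitted to keep the example short.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical map: surveyed locations and the received signal strength (dBm)
# measured at each of them.
locations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
signal_strength = np.array([[-40.0], [-55.0], [-52.0], [-63.0]])

# "Applying the map to an algorithm": fit a regressor on the map so it can
# later estimate location from a newly measured signal strength.
model = KNeighborsRegressor(n_neighbors=2)
model.fit(signal_strength, locations)

# The learned algorithm estimates where the movable object is.
print(model.predict([[-50.0]]))
```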
Abstract:
An electronic device to be placed on a cradle may include a housing including a hemispherical part that physically comes into contact with the cradle at an arbitrary position when the electronic device is placed on the cradle, a display arranged on another part of the housing, a camera module to obtain an image in a direction that the display faces, a sensor module to sense an orientation of the electronic device, and a processor to determine a target orientation of the electronic device based on the obtained image and to create control data to change the orientation of the electronic device based on the sensed orientation and the target orientation of the electronic device.
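A hedged sketch of how control data might be derived from the image and the sensed orientation: a face offset in the camera image defines a target yaw, and a proportional command reduces the error. Both functions and their parameters are assumptions for illustration, not the patent's method.

```python
def target_orientation_from_image(face_center_x: float, image_width: float,
                                  horizontal_fov_deg: float) -> float:
    """Estimate how far (in degrees) a detected face is off the display axis."""
    offset = (face_center_x - image_width / 2) / image_width
    return offset * horizontal_fov_deg


def control_command(sensed_yaw_deg: float, target_yaw_deg: float,
                    gain: float = 0.5) -> float:
    """Proportional command to rotate the hemispherical housing on the cradle."""
    error = target_yaw_deg - sensed_yaw_deg
    return gain * error


# Example: a face detected 80 px right of center in a 640 px wide image.
target = target_orientation_from_image(400.0, 640.0, horizontal_fov_deg=70.0)
print(control_command(sensed_yaw_deg=0.0, target_yaw_deg=target))
```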
Abstract:
A control method may be applied to a surgical robot system including a slave robot having a robot arm to which a main surgical tool and an auxiliary surgical tool are coupled, and a master robot having a master manipulator to manipulate the robot arm. The control method includes acquiring data regarding a motion of the master manipulator, predicting a basic motion to be performed by an operator based on the acquired motion data and results of learning a plurality of motions constituting a surgical task, and adjusting the auxiliary surgical tool so as to correspond to the operator's basic motion based on the predicted basic motion. The control method allows an operator to perform surgery more comfortably and to move all required surgical tools to, or fix them at, an optimized surgical position.
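A minimal sketch of predicting a basic motion from master-manipulator data and mapping it to an auxiliary-tool adjustment. The nearest-centroid classifier, the feature vectors, and the command mapping are all illustrative assumptions; the abstract does not specify the learning or prediction method.

```python
import numpy as np

# Hypothetical learned centroids of feature vectors for the basic motions
# that constitute a surgical task.
learned_motions = {
    "grasp":   np.array([0.1, 0.0, 0.8]),
    "cut":     np.array([0.7, 0.2, 0.1]),
    "retract": np.array([0.2, 0.9, 0.1]),
}


def predict_basic_motion(motion_features: np.ndarray) -> str:
    """Nearest-centroid prediction of the operator's intended basic motion."""
    return min(learned_motions,
               key=lambda name: np.linalg.norm(motion_features - learned_motions[name]))


def adjust_auxiliary_tool(basic_motion: str) -> str:
    # Illustrative mapping from predicted motion to an auxiliary-tool command.
    commands = {"grasp": "hold tissue", "cut": "apply suction", "retract": "widen view"}
    return commands[basic_motion]


features = np.array([0.65, 0.25, 0.15])  # features from master manipulator data
motion = predict_basic_motion(features)
print(motion, "->", adjust_auxiliary_tool(motion))
```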
Abstract:
A mobile robot and a method for controlling the same are provided for creating patches in images captured by a camera while the mobile robot is moving, estimating motion blur of the patches, and correcting the position of the mobile robot based on the patch from which the motion blur is eliminated, thereby increasing precision in tracking and reliability through accurate mapping. The mobile robot includes a main body, a traveler to move the main body, a camera combined with the main body to capture an image of the surroundings of the main body, a position detector to create a patch in the image captured by the camera, estimate a motion blur of the patch, and track a position of the main body based on the created patch from which the motion blur is eliminated, and a controller to control the traveler based on the position of the main body.
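A sketch of a simple patch-level blur check, assuming grayscale patches: the variance of a discrete Laplacian is used here as a stand-in blur estimator (low variance suggests motion blur), and only sufficiently sharp patches are kept for tracking. The threshold and the criterion itself are illustrative, not the patent's estimator.

```python
import numpy as np


def laplacian_variance(patch: np.ndarray) -> float:
    """Sharpness proxy: variance of a discrete Laplacian over a 2D patch."""
    lap = (-4.0 * patch[1:-1, 1:-1]
           + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return float(lap.var())


def select_usable_patches(patches, blur_threshold: float = 50.0):
    """Keep only patches whose estimated blur is low enough for tracking."""
    return [p for p in patches if laplacian_variance(p) >= blur_threshold]


# Example: a textured ("sharp") patch is kept, a flat (blur-like) patch is not.
rng = np.random.default_rng(0)
sharp = rng.integers(0, 255, (32, 32)).astype(float)
blurred = np.full((32, 32), 128.0)
print(len(select_usable_patches([sharp, blurred])))
```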
Abstract:
A surgical robot system includes a slave system to perform a surgical operation on a patient and an imaging system. The imaging system includes an image capture unit including a plurality of cameras to acquire a plurality of affected area images, an image generator that detects an occluded region in each of the affected area images acquired by the plurality of cameras, removes the occluded region therefrom, warps each of the affected area images from which the occluded region is removed, and matches the affected area images to generate a final image, and a controller that drives each of the plurality of cameras of the image capture unit to acquire the plurality of affected area images and inputs the acquired affected area images to the image generator to generate the final image.
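A small sketch of the final step, assuming the per-camera images have already been warped into a common frame and each comes with an occlusion mask: occluded pixels are ignored and the remaining views are averaged into one image. The fusion rule is an illustrative stand-in; the abstract's occlusion detection, warping, and matching methods are not reproduced here.

```python
import numpy as np


def fuse_views(images, occlusion_masks):
    """Combine aligned camera views of an affected area, skipping occluded pixels.

    `images` and `occlusion_masks` are lists of equally sized 2D arrays;
    a mask value of True marks an occluded pixel in that view.
    """
    stack = np.stack(images).astype(float)
    valid = ~np.stack(occlusion_masks)
    weights = valid.astype(float)
    weight_sum = np.clip(weights.sum(axis=0), 1e-6, None)
    return (stack * weights).sum(axis=0) / weight_sum


# Example with two 4x4 synthetic views, each occluded in a different corner.
a = np.full((4, 4), 100.0)
b = np.full((4, 4), 200.0)
mask_a = np.zeros((4, 4), bool); mask_a[:2, :2] = True
mask_b = np.zeros((4, 4), bool); mask_b[2:, 2:] = True
print(fuse_views([a, b], [mask_a, mask_b]))
```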
Abstract:
A surgical robot system and a control method thereof include a slave device and a master device to control motion of the slave device. The surgical robot system further includes a monitoring device that inspects signals transmitted within the system in real time and stops motion of the slave device if an abnormal signal is detected.
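A hedged sketch of the monitoring loop: each transmitted sample is checked and the slave device is stopped on the first abnormal one. The range-based abnormality test, the `signal_stream` iterable, and the `stop_slave` callback are assumed interfaces for illustration only.

```python
def is_abnormal(sample: float, lower: float = -1.0, upper: float = 1.0) -> bool:
    """Range check standing in for whatever abnormality test the system uses."""
    return not (lower <= sample <= upper)


def monitor(signal_stream, stop_slave):
    """Inspect each transmitted sample in order; stop the slave on the first
    abnormal sample and return it, or return None if none is found."""
    for sample in signal_stream:
        if is_abnormal(sample):
            stop_slave()
            return sample
    return None


# Example: the third sample is out of range, so the slave is stopped.
print(monitor([0.2, -0.5, 3.7, 0.1], stop_slave=lambda: print("slave stopped")))
```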
Abstract:
A robot and a method to recognize and handle abnormal situations include a sensing unit to sense internal and external information of the robot, a storage unit to store the information sensed by the sensing unit, an inference model, a learning model, services providable by the robot, subtasks to provide the services, and handling tasks to handle abnormal situations, and a controller to determine, through the learning model in the storage unit, whether an abnormal situation has occurred while the subtasks to provide the selected service are being performed, and to select, through the inference model in the storage unit, a handling task to handle the abnormal situation if it is determined that the abnormal situation has occurred. The robot may recognize and handle abnormal situations even when data other than that defined by a designer of the robot is input thereto due to noise or the operational environment.
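A minimal sketch of the controller's flow: run a service's subtasks, consult a learning-model stand-in to detect an abnormal situation, and let an inference-model stand-in choose a handling task. Both model functions, the sensed fields, and the task names are hypothetical; the abstract leaves their form open.

```python
# Illustrative stand-ins for the stored learning model (anomaly detector)
# and inference model (handling-task selector).
def learning_model_detects_anomaly(sensed: dict) -> bool:
    return sensed.get("bump", False) or sensed.get("battery", 1.0) < 0.1


def inference_model_select_handling_task(sensed: dict) -> str:
    if sensed.get("bump", False):
        return "back_up_and_replan"
    return "return_to_charger"


def run_service(subtasks, sense):
    """Run a service's subtasks; on an abnormal situation, pick a handling task."""
    for subtask in subtasks:
        subtask()
        sensed = sense()
        if learning_model_detects_anomaly(sensed):
            return inference_model_select_handling_task(sensed)
    return "service_completed"


# Example run with a simulated bump during the second subtask.
readings = iter([{"battery": 0.9}, {"battery": 0.9, "bump": True}])
print(run_service([lambda: None, lambda: None], sense=lambda: next(readings)))
```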
Abstract:
An object recognition method, a descriptor generating method for object recognition, and a descriptor for object recognition are capable of extracting feature points using the positional relationship and the color information relationship between points in a group sampled from an image of an object, and of recognizing the object using the feature points. The object recognition method includes extracting feature components of a point cloud using the position information and the color information of the points that compose the point cloud of a three-dimensional (3D) image of an object, generating a descriptor configured to recognize the object using the extracted feature components, and performing the object recognition based on the descriptor.
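A sketch of a descriptor built from both positional and color relationships among sampled points: pairwise point distances and pairwise color differences are histogrammed and concatenated. This loosely follows the idea stated above; the exact feature components and descriptor structure in the patent are not reproduced.

```python
import numpy as np


def point_pair_descriptor(points: np.ndarray, colors: np.ndarray, bins: int = 8):
    """Histogram descriptor from pairwise distances and color differences.

    `points` is an (N, 3) array of 3D positions and `colors` an (N, 3) array
    of RGB values in [0, 1]; both shapes and ranges are assumptions.
    """
    i, j = np.triu_indices(len(points), k=1)
    dists = np.linalg.norm(points[i] - points[j], axis=1)
    color_diffs = np.linalg.norm(colors[i] - colors[j], axis=1)
    d_hist, _ = np.histogram(dists, bins=bins, range=(0.0, dists.max() + 1e-6))
    c_hist, _ = np.histogram(color_diffs, bins=bins, range=(0.0, np.sqrt(3.0)))
    descriptor = np.concatenate([d_hist, c_hist]).astype(float)
    return descriptor / descriptor.sum()


# Example: descriptors of two sampled groups can be compared (e.g. by L2
# distance) to perform recognition.
rng = np.random.default_rng(1)
pts, cols = rng.random((30, 3)), rng.random((30, 3))
print(point_pair_descriptor(pts, cols).shape)
```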
Abstract:
An endoscope to acquire a 3D image and a wide view-angle image, and an image processing apparatus using the endoscope, include a front image acquirer to acquire a front image and a lower image acquirer to acquire a lower image in a downward direction of the front image acquirer. The front image acquirer includes a first objective lens and a second objective lens arranged side by side in a horizontal direction. The lower image acquirer includes a third objective lens located below the first objective lens and inclined relative to the first objective lens, and a fourth objective lens located below the second objective lens and inclined relative to the second objective lens.