Abstract:
A force sensing apparatus and an operating method of the force sensing apparatus may obtain and provide information about a force applied to an object, thereby enabling control of a force to be applied to manipulate the object.
Abstract:
An electronic apparatus is provided. The electronic apparatus includes a camera, a Light Detection And Ranging (LiDAR) sensor, and a processor configured to track an object based on a plurality of images photographed sequentially by the camera. To track the object based on the plurality of images, the processor is configured to, based on the object being identified in a first image from among the plurality of images and subsequently not being identified in a second image after the first image, control a photographing direction of the camera based on scanning information obtained by the LiDAR sensor.
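As a rough Python sketch of the fallback rule described above (illustrative only): the 2-D position representation, the azimuth output, and the nearest-return heuristic are assumptions, since the abstract does not specify how the scanning information selects the new photographing direction.

    import math

    def update_direction(prev_detection, curr_detection, lidar_points, current_direction):
        """Return the camera's photographing direction (azimuth, degrees) for the next frame."""
        if curr_detection is not None:
            # Object identified in the current image: keep the present direction.
            return current_direction
        if prev_detection is None or not lidar_points:
            # Nothing has been detected yet, or no scanning information is available.
            return current_direction
        # Object was identified in a first image but not in the second one:
        # pick the LiDAR return closest to the last known position and aim at it.
        px, py = prev_detection
        nearest = min(lidar_points, key=lambda p: (p[0] - px) ** 2 + (p[1] - py) ** 2)
        return math.degrees(math.atan2(nearest[1], nearest[0]))

    # Object last seen near (2.0, 1.0) m, now lost; two LiDAR clusters reported.
    print(update_direction((2.0, 1.0), None, [(2.2, 1.1), (-3.0, 0.5)], 0.0))
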
Abstract:
A key generating method includes obtaining a first error correcting code (ECC) for original data, obtaining read data from a cell array of a memory storing the original data, generating a second ECC for the read data, obtaining, from the cell array of the memory, a location of a cell in which an error occurs in response to the second ECC being different from the first ECC, and generating a key for the memory based on the location of the cell in which the error occurs.
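A minimal Python sketch of this flow, with a toy per-byte parity code standing in for the ECC and a hash of the error locations standing in for the key derivation; the abstract specifies neither the code nor the derivation, so both are assumptions.

    import hashlib

    def parity_ecc(data: bytes) -> bytes:
        # Toy "ECC": one parity bit per byte, packed as 0/1 bytes.
        return bytes(bin(b).count("1") & 1 for b in data)

    def generate_key(original: bytes, read_back: bytes) -> bytes:
        first_ecc = parity_ecc(original)       # first ECC, for the original data
        second_ecc = parity_ecc(read_back)     # second ECC, for the data read from the cell array
        if second_ecc == first_ecc:
            return b""                         # no error locations to derive a key from
        # Locations of cells whose read value disagrees with the original data.
        error_locations = [i for i, (a, b) in enumerate(zip(first_ecc, second_ecc)) if a != b]
        # Derive a fixed-length key from the device-specific error locations.
        return hashlib.sha256(bytes(error_locations)).digest()

    # Example: bit flips at cells 3 and 7 produce a repeatable key for this memory.
    original = bytes(range(16))
    read_back = bytearray(original); read_back[3] ^= 0x01; read_back[7] ^= 0x01
    print(generate_key(original, bytes(read_back)).hex())
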
Abstract:
A three-dimensional display device and a user interfacing method therefor are disclosed. The three-dimensional display device according to one embodiment comprises: a display unit for displaying a three-dimensional virtual object; a user input signal generation unit for generating a user input signal by detecting a handling object for handling an operation of the three-dimensional virtual object in a three-dimensional space matched with the three-dimensional virtual object; and a control unit for controlling the operation of the three-dimensional virtual object according to the user input signal.
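Read loosely, the three units can be sketched as the Python functions below; the axis-aligned region used as the matched three-dimensional space, the "grab" signal, and the sample hand positions are assumptions not stated in the abstract.

    def generate_input_signal(handling_pos, object_region):
        # User input signal generation unit: emit a signal when the handling object
        # enters the three-dimensional space matched with the virtual object.
        (xmin, ymin, zmin), (xmax, ymax, zmax) = object_region
        x, y, z = handling_pos
        if xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax:
            return {"type": "grab", "position": handling_pos}
        return None

    def control_object(virtual_object, signal):
        # Control unit: operate the virtual object according to the user input signal.
        if signal and signal["type"] == "grab":
            virtual_object["position"] = signal["position"]
        return virtual_object

    cube = {"position": (0.0, 0.0, 0.0)}
    region = ((-0.1, -0.1, -0.1), (0.1, 0.1, 0.1))   # space matched with the cube
    for hand in [(0.5, 0.5, 0.5), (0.05, 0.0, 0.05), (0.08, 0.02, 0.0)]:
        cube = control_object(cube, generate_input_signal(hand, region))
    print(cube)   # the cube follows the handling object once it enters the region
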
Abstract:
Methods and systems of controlling an electronic device that executes at least one application include receiving a multipoint input; detecting input points of the multipoint input; and generating a layer for executing the at least one application based on the detected input points of the multipoint input.
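One illustrative reading of "generating a layer based on the detected input points" is a layer sized to the bounding rectangle of the touch points; the sketch below assumes that reading and a simple dictionary representation of touches and layers.

    def detect_input_points(multipoint_input):
        # Detect the individual input points of the multipoint input.
        return [(p["x"], p["y"]) for p in multipoint_input]

    def generate_layer(points, app_name):
        # Generate a layer for executing the application, sized to the detected points.
        xs, ys = zip(*points)
        return {"app": app_name,
                "origin": (min(xs), min(ys)),
                "size": (max(xs) - min(xs), max(ys) - min(ys))}

    touches = [{"x": 120, "y": 80}, {"x": 480, "y": 90}, {"x": 300, "y": 400}]
    print(generate_layer(detect_input_points(touches), "notes"))
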
Abstract:
A tactile feedback apparatus may include: a touch display configured to display objects; a tactile information extractor configured to extract tactile information corresponding to an object touched by a touch input tool; a tactile information changing unit configured to change the tactile information based on sensing information of the touch input tool; and/or a tactile feedback provider configured to provide the touch input tool with tactile feedback based on the tactile information.
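A minimal sketch of the extractor, changing unit, and provider as Python functions; representing the tactile information as a single vibration amplitude and scaling it by the tool's measured pressure are assumptions made only for illustration.

    TACTILE_TABLE = {"button": 0.4, "slider": 0.2, "image": 0.1}   # base amplitudes per object

    def extract_tactile_info(touched_object):
        # Tactile information extractor: tactile information for the touched object.
        return TACTILE_TABLE.get(touched_object, 0.0)

    def change_tactile_info(amplitude, sensing_info):
        # Tactile information changing unit: adjust using the tool's sensing information.
        return min(1.0, amplitude * (1.0 + sensing_info["pressure"]))

    def provide_feedback(amplitude):
        # Tactile feedback provider: drive the actuator in the touch input tool.
        print(f"actuator amplitude set to {amplitude:.2f}")

    provide_feedback(change_tactile_info(extract_tactile_info("button"), {"pressure": 0.5}))
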
Abstract:
A user input apparatus and method may measure, using a first sensor, surface input information applied to a surface of the user input apparatus, may measure, using a second sensor, orientation information that is input based on a physical quantity associated with a pose or a rotary motion of the user input apparatus, and may generate a content control signal by combining the surface input information and the orientation information.
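As an illustration of combining the two measurements, the sketch below scales a surface drag by the apparatus tilt and passes the roll angle through as a rotation command; this particular combination rule is an assumption, since the method leaves it open.

    def content_control_signal(surface_input, orientation):
        # surface_input: (dx, dy) drag measured by the first sensor on the surface.
        # orientation: roll/pitch (degrees) from the second sensor (pose / rotary motion).
        dx, dy = surface_input
        gain = 1.0 + abs(orientation["pitch"]) / 90.0   # tilting speeds up scrolling
        return {"scroll_x": dx * gain, "scroll_y": dy * gain,
                "rotate": orientation["roll"]}

    print(content_control_signal((4.0, -2.0), {"roll": 15.0, "pitch": 45.0}))
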
Abstract:
A contact type tactile feedback apparatus and an operational method of the contact type tactile feedback apparatus are provided. The contact type tactile feedback apparatus may bring an object into close contact with a power feedback portion, using a fixing portion, to transfer a power sensed by a sensor, thereby enabling the object to recognize the power intuitively.
Abstract:
Provided are a mobile robot and a method of driving the same. A method in which the mobile robot moves along with a user includes photographing surroundings of the mobile robot, detecting the user from an image captured by the photographing, tracking a location of the user within the image as the user moves, predicting, when the tracking of the location of the user is stopped, a movement direction of the user based on a last location of the user within the image, and determining a traveling path of the mobile robot based on the predicted movement direction of the user.
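A minimal Python sketch of the prediction step, assuming the movement direction is inferred from which third of the image the user was last seen in; the image width, the three-way split, and the returned commands are illustrative values only.

    IMAGE_WIDTH = 640   # pixels; illustrative value

    def predict_direction(last_x):
        # Predict the user's movement direction from the last in-image location.
        if last_x < IMAGE_WIDTH / 3:
            return "turn_left"
        if last_x > 2 * IMAGE_WIDTH / 3:
            return "turn_right"
        return "go_forward"

    def traveling_path(track):
        # track: per-frame x coordinate of the user, None once tracking stops.
        last_x = None
        for x in track:
            if x is None:                       # tracking of the user has stopped
                return predict_direction(last_x) if last_x is not None else "stop"
            last_x = x
        return "follow"

    print(traveling_path([300, 350, 480, 590, None]))   # -> "turn_right"
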
Abstract:
An electronic apparatus is provided. The electronic apparatus includes a camera; a memory configured to store at least one instruction; and at least one processor configured to execute the at least one instruction to: detect at least one object included in an image captured by the camera; identify information on an engagement of each of the at least one object with the electronic apparatus; obtain gesture information of each of the at least one object; obtain a target object from among the at least one object based on an operation status of the electronic apparatus, the information on the engagement of each of the at least one object, and the obtained gesture information of each of the at least one object; identify a function corresponding to gesture information of the target object; and execute the identified function.
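A rough Python sketch of the target-object selection, assuming engagement is a single score per detected object and using an illustrative gesture-to-function table; neither representation is fixed by the abstract.

    GESTURE_FUNCTIONS = {"palm": "pause", "swipe_left": "previous", "swipe_right": "next"}

    def choose_target(objects, operation_status):
        # objects: one dict per detected object, with an "engagement" score (0..1)
        # and the gesture information obtained for that object.
        candidates = [o for o in objects
                      if o["gesture"] in GESTURE_FUNCTIONS
                      and (operation_status == "playing" or o["gesture"] == "palm")]
        # Target object: the eligible object most engaged with the apparatus.
        return max(candidates, key=lambda o: o["engagement"], default=None)

    def execute_for(objects, operation_status):
        target = choose_target(objects, operation_status)
        if target is None:
            return None
        return GESTURE_FUNCTIONS[target["gesture"]]   # function to be executed

    people = [{"engagement": 0.3, "gesture": "swipe_left"},
              {"engagement": 0.9, "gesture": "palm"}]
    print(execute_for(people, "playing"))             # -> "pause"
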