Abstract:
An apparatus, system, and method for controlling a virtual object. The virtual object is controlled by detecting a hand motion of a user and generating an event corresponding to the hand motion. Accordingly, the user may control the virtual object displayed on a 3-dimensional graphic user interface (3D GUI) more intuitively and efficiently.
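A minimal sketch of the idea in this abstract: a detected hand motion is mapped to a control event for the virtual object. The gesture structure, the event names, and the thresholds are illustrative assumptions, not details from the abstract.

```python
from dataclasses import dataclass

# Hypothetical hand-motion sample from a motion sensor (illustrative only).
@dataclass
class HandMotion:
    dx: float   # horizontal displacement of the hand
    dy: float   # vertical displacement of the hand
    grab: bool  # whether the hand is closed

def motion_to_event(motion: HandMotion) -> dict:
    """Translate a detected hand motion into an event for the 3D GUI object."""
    if motion.grab:
        return {"type": "SELECT"}
    if abs(motion.dx) > abs(motion.dy):
        return {"type": "MOVE_X", "amount": motion.dx}
    return {"type": "MOVE_Y", "amount": motion.dy}

# Example: a rightward open-hand sweep becomes a MOVE_X event.
print(motion_to_event(HandMotion(dx=0.4, dy=0.1, grab=False)))
```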
Abstract:
An apparatus and method for controlling a 3-dimensional (3D) image using a virtual tool are provided. The apparatus may detect a user object that controls the 3D image, and determine a target virtual tool matching movement of the user object. In addition, the apparatus may display the determined target virtual tool along with the 3D image.
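A small sketch of the tool-selection step described above: the movement of the detected user object (for example, the hand) is matched to a target virtual tool, which would then be displayed with the 3D image. The tool names and matching thresholds are assumptions for illustration.

```python
# Illustrative sketch: pick a "target virtual tool" from the movement of a
# detected user object. The tool names and thresholds are assumptions.
def determine_target_tool(displacement, spread_change):
    """displacement: (dx, dy, dz) of the user object; spread_change: change in finger spread."""
    dx, dy, dz = displacement
    if abs(spread_change) > 0.2:
        return "zoom_tool"          # spreading or pinching fingers -> zoom
    if abs(dz) > max(abs(dx), abs(dy)):
        return "rotation_tool"      # pushing toward/away -> rotate the 3D image
    return "translation_tool"       # lateral movement -> translate the 3D image

tool = determine_target_tool(displacement=(0.05, 0.02, 0.30), spread_change=0.0)
print(tool)  # rotation_tool; this tool would be rendered along with the 3D image
```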
Abstract:
In an apparatus and method for controlling an interface, a user interface (UI) may be controlled using information on a hand motion and a gaze of a user, without separate tools such as a mouse and a keyboard. That is, the UI control method provides more intuitive, immersive, and unified control of the UI. Since a region of interest (ROI) for sensing the hand motion of the user is calculated using a UI object that is controlled based on the hand motion within the ROI, the user may control the UI object in the same manner and with the same feel regardless of the distance from the user to a sensor. In addition, since the positions and directions of viewpoints are adjusted based on the position and direction of the gaze, a binocular 2D/3D image based on motion parallax may be provided.
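A minimal sketch of one idea in this abstract: the ROI for hand sensing is scaled with the user's distance from the sensor, so the UI object responds with the same feel at any distance. The scaling rule and function names are assumptions for illustration.

```python
# Illustrative sketch: size the region of interest (ROI) in proportion to the
# user's distance from the sensor so the same physical hand movement maps to
# the same UI-object movement at any distance.
def roi_for_ui_object(object_size_px, distance_m, reference_distance_m=1.0):
    """Return an ROI size that grows with the user's distance from the sensor."""
    scale = distance_m / reference_distance_m
    return object_size_px * scale

def hand_to_ui(hand_pos_in_roi, roi_size, object_size_px):
    """Map a hand position inside the ROI to a position on the UI object."""
    return hand_pos_in_roi / roi_size * object_size_px

roi = roi_for_ui_object(object_size_px=200, distance_m=2.0)
print(hand_to_ui(hand_pos_in_roi=roi / 2, roi_size=roi, object_size_px=200))  # 100.0
```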
Abstract:
Provided are an apparatus and method for improving usability of a touch screen. The apparatus includes a touch sensing unit that senses a first touch and a second touch and detects the touched locations of the sensed touches, a pointer setting unit that sets a pointer to the detected location of the first touch, and a coordinate transforming unit that transforms movement of a touched location, which is caused by movement of the second touch, into movement of a location of the pointer.
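A minimal sketch of the two-touch pointer behavior described above: the first touch places the pointer, and movement of the second touch is transformed into movement of the pointer. The scaling factor and class layout are illustrative assumptions.

```python
# Minimal sketch: first touch sets the pointer; second-touch movement is
# transformed into pointer movement. The scale factor is an assumption.
class TwoTouchPointer:
    def __init__(self, scale=0.5):
        self.scale = scale        # pointer movement per unit of second-touch movement
        self.pointer = None
        self.last_second = None

    def first_touch(self, x, y):
        self.pointer = (x, y)     # pointer set at the first touched location

    def second_touch_move(self, x, y):
        if self.pointer is None:
            return None
        if self.last_second is not None:
            dx, dy = x - self.last_second[0], y - self.last_second[1]
            px, py = self.pointer
            self.pointer = (px + dx * self.scale, py + dy * self.scale)
        self.last_second = (x, y)
        return self.pointer

p = TwoTouchPointer()
p.first_touch(100, 100)
p.second_touch_move(300, 300)
print(p.second_touch_move(320, 310))  # (110.0, 105.0)
```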
Abstract:
An image display and storage device, method, and medium to process an original image and generate a main image so that the main image does not overlap a sub image, and to store the original image instead of the main image when the main image and the sub image are displayed. The device includes an image processor to receive an image and to generate a display image and a storage image using the received image, a display unit to receive the display image from the image processor and to display the display image, and an image storing unit to receive the storage image from the image processor and to store the storage image.
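A small sketch of the display/storage split described above: the display image is the original scaled down so that it leaves room for the sub image, while the storage image remains the unmodified original. Images are represented only by their (width, height), and the geometry is assumed for illustration.

```python
# Illustrative sketch: generate a display image that leaves room for the sub
# image, while the storage path keeps the original image unchanged.
def process(original_size, sub_width):
    orig_w, orig_h = original_size
    disp_w = orig_w - sub_width                  # leave a band for the sub image
    disp_h = round(orig_h * disp_w / orig_w)     # keep the aspect ratio
    display_image = (disp_w, disp_h)             # what the display unit shows
    storage_image = original_size                # what the storing unit keeps: the original
    return display_image, storage_image

print(process((1920, 1080), sub_width=480))  # ((1440, 810), (1920, 1080))
```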
Abstract:
Provided are an apparatus and method for outputting voice, which receive an information item suited to a user's taste from among information items existing on a network such as the Internet in a text format, convert the information item into voice, and then output the voice. The apparatus to output voice includes an information search unit searching for at least one first information item corresponding to a preset information class among information items existing on a network, an information processing unit extracting a core information item from the at least one first information item so as to correspond to a preset reproducing time period, a voice generating unit converting the core information item into voice, and an output unit outputting the converted voice.
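A minimal sketch of the core-information step in this pipeline: the text of the matching information items is trimmed so the synthesized speech fits a preset reproducing time. The speaking rate, word-level truncation, and the synthesis stub are illustrative assumptions.

```python
# Minimal sketch: trim extracted text so the spoken output fits a preset
# reproducing time. The speaking rate is an assumed average; the voice step
# is a placeholder for a real text-to-speech engine.
WORDS_PER_SECOND = 2.5

def extract_core(items, reproducing_time_s):
    budget = int(reproducing_time_s * WORDS_PER_SECOND)  # total word budget
    words = " ".join(items).split()
    return " ".join(words[:budget])

def to_voice(text):
    return f"<audio for: {text!r}>"  # stand-in for voice synthesis

items = ["Markets rose today on strong earnings.", "Rain is expected tomorrow."]
print(to_voice(extract_core(items, reproducing_time_s=3)))
```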
Abstract:
Provided are an imaging apparatus, method, and computer-readable medium that photograph an object through at least one aperture of each pixel of a display panel. The imaging apparatus includes a camera unit located behind the display panel, which photographs the object through the at least one aperture of each of the pixels of the display panel so that the location of the camera unit coincides with the location of the display unit, measures a distance to the object using a blur effect occurring when photographing through the at least one aperture of the display panel, and restores the image in which the blur effect is included.
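A small sketch of the depth-from-blur idea: for an ideal pinhole-like aperture of diameter a with the sensor a distance s behind it, a point at distance d images as a blur spot of diameter roughly a * (1 + s/d), so d can be recovered from the measured blur. This simple geometric model and the numbers are assumptions for illustration, not the estimation method of the abstract.

```python
# Illustrative sketch: invert the pinhole blur model b = a * (1 + s/d) to
# recover the object distance d from the measured blur diameter b.
def distance_from_blur(blur_diameter_mm, aperture_mm, sensor_gap_mm):
    """Return the object distance implied by the measured blur spot."""
    if blur_diameter_mm <= aperture_mm:
        raise ValueError("blur must exceed the aperture diameter")
    return sensor_gap_mm * aperture_mm / (blur_diameter_mm - aperture_mm)

# Example: 0.1 mm aperture, sensor 2 mm behind the panel, 0.1002 mm blur spot.
print(distance_from_blur(0.1002, 0.1, 2.0))  # ~1000 mm, i.e. about 1 m away
```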
Abstract:
A method of controlling a viewpoint of a user or a virtual object on a two-dimensional (2D) interactive display is provided. The method may convert a user input into structured data having at least 6 degrees of freedom (DOF), according to the number of touch points and their movement and rotation directions. Either the virtual object or the viewpoint of the user may be determined as the manipulation target based on the location of the touch point.
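A minimal sketch of the selection and conversion rules described above: a touch that lands on the virtual object manipulates the object, while a touch on empty space manipulates the viewpoint, and the touch count and motion are mapped to 6-DOF data. The hit test, field names, and mapping are assumptions for illustration.

```python
# Illustrative sketch: choose the manipulation target from the touch location
# and map touch input to 6-DOF (translation + rotation) data.
def choose_target(touch_point, object_bbox):
    """object_bbox = (x_min, y_min, x_max, y_max) of the virtual object on screen."""
    x, y = touch_point
    x0, y0, x1, y1 = object_bbox
    return "object" if (x0 <= x <= x1 and y0 <= y <= y1) else "viewpoint"

def touches_to_6dof(num_touches, drag, twist):
    """Map touch count and motion to translation (tx, ty, tz) and rotation (rx, ry, rz)."""
    dx, dy = drag
    if num_touches == 1:
        return {"tx": dx, "ty": dy, "tz": 0, "rx": 0, "ry": 0, "rz": 0}
    return {"tx": 0, "ty": 0, "tz": dy, "rx": 0, "ry": 0, "rz": twist}

print(choose_target((50, 40), (30, 20, 120, 90)))  # object
print(touches_to_6dof(2, drag=(0, 5), twist=15))
```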
Abstract:
A sensing module, and a Graphical User Interface (GUI) control apparatus and method are provided. The sensing module may be inserted into an input device, for example a keyboard, a mouse, a remote controller, and the like, and may sense a hovering movement of a hand of a user within a sensing area, thereby making it possible to provide an interface for controlling a wider variety of GUIs and to prevent the display from being covered.
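A small sketch of how hover readings from such a sensing module might be turned into GUI commands. The gesture thresholds and command names are assumptions for illustration.

```python
# Illustrative sketch: convert successive hover positions of the hand above
# the sensing area into GUI scroll/zoom commands.
def hover_to_command(prev, curr):
    """prev, curr: (x, y, height) of the hovering hand over the sensing area."""
    dx, dy, dh = (c - p for c, p in zip(curr, prev))
    if abs(dh) > max(abs(dx), abs(dy)):
        return ("ZOOM", dh)        # raising/lowering the hand zooms the GUI
    if abs(dx) >= abs(dy):
        return ("SCROLL_H", dx)    # sideways hover scrolls horizontally
    return ("SCROLL_V", dy)        # forward/back hover scrolls vertically

print(hover_to_command((0.0, 0.0, 5.0), (0.0, 0.2, 5.1)))  # ('SCROLL_V', 0.2)
```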