Abstract:
Provided are a method and apparatus for defining at least a portion of the vicinity area of a portable device as an input area and controlling the portable device based on a user input provided in the input area, as well as a portable device enabling the method. The portable device includes a sensing unit configured to sense a user input in the vicinity area of the portable device, a recognizer configured to recognize a user gesture corresponding to the user input, and an output unit configured to output a control instruction corresponding to the recognized user gesture to control the portable device.
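The sensing-unit / recognizer / output-unit pipeline above can be sketched minimally. This is an illustrative sketch only; the gesture names, the point-sequence input format, and the command mapping are assumptions, not details from the abstract.

```python
# Hypothetical sketch of the sense -> recognize -> output pipeline.
# Gesture labels and commands are illustrative, not from the patent.
GESTURE_COMMANDS = {
    "swipe_left": "previous_page",
    "swipe_right": "next_page",
    "tap": "select",
}

def recognize_gesture(points):
    """Classify a sequence of (x, y) points sensed in the vicinity area."""
    if len(points) == 1:
        return "tap"
    dx = points[-1][0] - points[0][0]
    return "swipe_right" if dx > 0 else "swipe_left"

def control_instruction(points):
    """Map the recognized gesture to a device control instruction."""
    return GESTURE_COMMANDS[recognize_gesture(points)]
```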
Abstract:
A display apparatus and method may be used to estimate the depth from an external object to a display panel of the display apparatus. The display apparatus may acquire a plurality of images by detecting light that is input from the external object and passes through apertures formed in the display panel, may generate one or more refocused images, and may calculate the depth from the external object to the display panel using the acquired images and the one or more refocused images.
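One common way such a depth calculation can work, sketched here as an assumption rather than the patented method itself, is depth-from-focus: generate a refocused image per candidate depth and select the depth whose refocused image is sharpest.

```python
import numpy as np

def sharpness(img):
    """Simple focus measure: sum of squared image gradients."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.sum(gx ** 2 + gy ** 2))

def estimate_depth(refocused_stack, candidate_depths):
    """Pick the candidate depth whose refocused image is sharpest.

    refocused_stack: one 2-D array per entry of candidate_depths.
    """
    scores = [sharpness(img) for img in refocused_stack]
    return candidate_depths[int(np.argmax(scores))]
```

A correctly refocused image has high local contrast, so maximizing a gradient-based focus measure over the stack recovers the depth plane of the object.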
Abstract:
A display device, and a method of operating and manufacturing the display device, may receive input light from an object positioned in front of a display that displays an image, and may scan that object.
Abstract:
A method and apparatus for processing an image are provided. The image processing apparatus may adjust or generate a disparity of a pixel by assigning similar disparities to two pixels that are adjacent to each other and have similar pixel values. The image processing apparatus may generate a final disparity map that minimizes an energy, based on an image and an initial disparity map, under a predetermined constraint. Either a soft constraint or a hard constraint may be used as the constraint.
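The energy-minimization idea can be sketched on a single 1-D scanline. The energy form below (a weighted data term acting as the soft constraint toward the initial disparity map, plus a smoothness term that pulls adjacent pixels toward similar disparities) is an assumed illustration, not the patent's exact formulation; the solver is plain Gauss-Seidel iteration.

```python
def refine_disparity(initial, weight, lam=1.0, iters=200):
    """Minimize E(d) = sum_i w_i * (d_i - d0_i)^2
                     + lam * sum_i (d_i - d_{i+1})^2
    over a 1-D scanline by Gauss-Seidel iteration.

    The weighted data term is the soft constraint toward the initial
    disparity map; the smoothness term assigns similar disparities to
    adjacent pixels. A hard constraint would instead fix d_i = d0_i
    at fully trusted pixels.
    """
    d = list(initial)
    n = len(d)
    for _ in range(iters):
        for i in range(n):
            num = weight[i] * initial[i]
            den = weight[i]
            if i > 0:
                num += lam * d[i - 1]
                den += lam
            if i < n - 1:
                num += lam * d[i + 1]
                den += lam
            d[i] = num / den
    return d
```

With a zero weight at an unreliable pixel, its disparity is filled in entirely from its neighbors, which is the intended smoothing behavior.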
Abstract:
A method and apparatus for a user interface using gaze interaction are disclosed. The method may include obtaining an image including the eyes of a user, estimating a gaze position of the user using the image, and determining whether to activate a gaze adjustment function for controlling a device by the user's gaze, based on the gaze position with respect to at least one toggle area on a display.
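The toggle-area decision above can be sketched as a dwell-time check: activate once the estimated gaze has rested on the toggle area for several consecutive samples. The rectangular area format and the dwell threshold are assumptions for illustration, not details from the abstract.

```python
def in_area(pos, area):
    """True if gaze position (x, y) lies inside area = (x0, y0, x1, y1)."""
    x, y = pos
    x0, y0, x1, y1 = area
    return x0 <= x <= x1 and y0 <= y <= y1

def should_activate(gaze_positions, toggle_area, dwell_frames=3):
    """Activate the gaze-adjustment function once the gaze has stayed
    on the toggle area for dwell_frames consecutive samples."""
    run = 0
    for pos in gaze_positions:
        run = run + 1 if in_area(pos, toggle_area) else 0
        if run >= dwell_frames:
            return True
    return False
```

Requiring consecutive on-target samples rather than a single hit avoids activating the function from momentary glances across the toggle area.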