Abstract:
A device and method to display a screen based on an event are provided. A device according to an exemplary embodiment may display, in response to an event associated with a movement of an object, a graphic representation that corresponds to the event by overlaying the graphic representation on visual contents.
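A minimal Python sketch of the idea, assuming a grayscale frame as the visual content and an (x, y) event position; the function name overlay_event_marker, the marker shape, and the parameter values are illustrative assumptions, not the claimed implementation.

import numpy as np

def overlay_event_marker(frame, event_xy, size=5, value=255):
    """Return a copy of `frame` with a square marker drawn at the event location."""
    out = frame.copy()
    x, y = event_xy
    h, w = out.shape[:2]
    y0, y1 = max(0, y - size), min(h, y + size + 1)
    x0, x1 = max(0, x - size), min(w, x + size + 1)
    out[y0:y1, x0:x1] = value  # graphic representation overlaid on the content
    return out

if __name__ == "__main__":
    visual_content = np.zeros((120, 160), dtype=np.uint8)   # placeholder content
    motion_event = (80, 60)                                  # hypothetical event position
    composited = overlay_event_marker(visual_content, motion_event)
    print(composited.sum())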
Abstract:
A method and device for recognizing a motion of an object are provided, the method including receiving event signals from a vision sensor configured to sense the motion; storing, in an event map, first time information indicating a time at which an intensity of light corresponding to the event signals changes; generating an image based on second time information corresponding to a predetermined time range among the first time information; and recognizing the motion of the object based on the image.
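A minimal sketch of how such an event map and time-windowed image might look, assuming a fixed sensor resolution and a per-pixel latest-timestamp map; the names event_map, store_event, image_from_window and the 0.1 s window are assumptions rather than the claimed method.

import numpy as np

WIDTH, HEIGHT = 128, 128
event_map = np.zeros((HEIGHT, WIDTH), dtype=np.float64)  # first time information

def store_event(x, y, timestamp):
    """Record the time at which the light intensity at (x, y) changed."""
    event_map[y, x] = timestamp

def image_from_window(t_now, window=0.1):
    """Build a binary image from timestamps within the last `window` seconds."""
    recent = (event_map >= t_now - window) & (event_map > 0)  # second time information
    return recent.astype(np.uint8)

if __name__ == "__main__":
    store_event(10, 20, 0.95)
    store_event(30, 40, 0.50)          # too old for a 0.1 s window at t = 1.0
    img = image_from_window(t_now=1.0)
    print(img.sum())                   # 1: only the recent event survives
    # a classifier would then recognize the motion from `img`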
Abstract:
A method and apparatus for detecting an object using an event-based sensor are provided. An object detection method includes determining a feature vector based on target pixels and neighbor pixels included in an event image, and determining a target object corresponding to the target pixels based on the feature vector.
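A rough illustration of building a feature vector from a target pixel and its neighbors in an event image, assuming a 3x3 neighborhood and a toy threshold classifier; neighborhood_feature and classify are hypothetical names, not the patented detector.

import numpy as np

def neighborhood_feature(event_image, x, y):
    """Return the 3x3 patch around (x, y) flattened into a feature vector."""
    padded = np.pad(event_image, 1)            # so border pixels have neighbours
    patch = padded[y:y + 3, x:x + 3]
    return patch.flatten().astype(np.float32)

def classify(feature_vector):
    """Hypothetical classifier: 'object' if enough neighbours fired."""
    return "object" if feature_vector.sum() >= 4 else "background"

if __name__ == "__main__":
    event_image = np.zeros((64, 64), dtype=np.uint8)
    event_image[30:33, 30:33] = 1              # a small cluster of events
    fv = neighborhood_feature(event_image, 31, 31)
    print(classify(fv))                        # -> object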
Abstract:
A method and an apparatus for displaying a screen in response to an event related to a motion of an external object are provided. The method includes generating an event signal in response to a motion of an external object being sensed, sensing a movement of the external object relative to an apparatus based on the event signal, and displaying a screen based on the movement of the external object.
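A loose sketch of sensing relative movement from event positions and choosing a screen accordingly; the centroid comparison, the direction-to-screen mapping, and the function names are assumptions for illustration only.

import numpy as np

def movement_direction(events_before, events_after):
    """Compare event centroids from two consecutive batches of (x, y) events."""
    dx, dy = np.mean(events_after, axis=0) - np.mean(events_before, axis=0)
    if abs(dx) > abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

def display_screen(direction):
    """Placeholder for the display step: pick a screen based on the movement."""
    screens = {"left": "previous page", "right": "next page",
               "up": "menu", "down": "home"}
    print("displaying:", screens[direction])

if __name__ == "__main__":
    before = np.array([[10, 50], [12, 52]], dtype=float)
    after = np.array([[30, 50], [32, 52]], dtype=float)   # object moved right
    display_screen(movement_direction(before, after))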
Abstract:
An event filtering device and a motion recognition device using the same are provided. The motion recognition device includes an emitter configured to emit an infrared ray in a pattern; a detector configured to detect events in a visible ray area and an infrared ray area; a filter configured to determine whether at least one portion of the detected events is detected using the infrared ray in the pattern, and filter the detected events based on a result of the determination; and a motion recognizer configured to perform motion recognition based on a detected event accepted by the filter.
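One plausible way to filter events against a known emission pattern, assuming the infrared emitter follows a fixed on/off schedule and events carry timestamps; PULSE_PERIOD, PULSE_ON, and the event dictionary layout are illustrative assumptions.

PULSE_PERIOD = 0.010   # hypothetical pattern: emitter on for the first 2 ms of every 10 ms
PULSE_ON = 0.002

def emitted_at(timestamp):
    """True if the infrared emitter was on at `timestamp` according to the pattern."""
    return (timestamp % PULSE_PERIOD) < PULSE_ON

def filter_events(events):
    """Keep events detected while the patterned infrared ray was being emitted."""
    return [e for e in events if emitted_at(e["t"])]

if __name__ == "__main__":
    events = [{"x": 5, "y": 7, "t": 0.0213},    # falls inside an on-interval
              {"x": 6, "y": 7, "t": 0.0268}]    # falls inside an off-interval
    accepted = filter_events(events)
    print(len(accepted))                        # 1; motion recognition would use `accepted`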
Abstract:
A proximity sensor and a proximity sensing method using a change in the quantity of reflected light are disclosed. The proximity sensor may include a quantity change detection unit which detects a change in a quantity of reflected light, the reflected light being output light that has been reflected by an object while an intensity of the output light changes, and a proximity determination unit which determines a proximity of the object to the quantity change detection unit based on the change in the intensity of the output light and the detected change in the quantity of the reflected light.
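A toy illustration of the light-quantity comparison, assuming proximity is judged from the ratio of the reflected-light change to the output-intensity change; the threshold and units are arbitrary assumptions, not the disclosed sensor design.

def proximity(output_change, reflected_change, threshold=0.3):
    """A nearby object reflects a large fraction of the output-light change."""
    if output_change == 0:
        return False
    ratio = reflected_change / output_change   # change in reflected quantity
    return ratio > threshold                   # per unit change in output intensity

if __name__ == "__main__":
    # output intensity stepped up by 100 units; reflected quantity rose by 45
    print(proximity(output_change=100, reflected_change=45))   # True: object is close
    # the same output step but almost no reflected change: nothing nearby
    print(proximity(output_change=100, reflected_change=2))    # False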
Abstract:
An apparatus for providing a user interface provides a first user interface mode, and can switch to a second user interface mode when it receives a user command instructing it to switch to the second user interface mode, which has a different user command input method from the first user interface mode. In the switching process, the apparatus is configured to reset a recognition pattern so as to distinguish a smaller number of user input types than the number of user input types distinguishable in the first user interface mode.
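A small sketch of a mode switch that resets the recognition pattern to a smaller set of distinguishable input types; the mode names, gesture names, and class layout are assumptions, not the described apparatus.

class UserInterface:
    def __init__(self):
        self.mode = "first"
        # the first mode distinguishes many input types
        self.recognition_pattern = {"tap", "double_tap", "swipe", "pinch", "rotate"}

    def switch_to_second_mode(self):
        self.mode = "second"
        # reset to a smaller set of distinguishable input types
        self.recognition_pattern = {"tap", "swipe"}

    def recognize(self, user_input):
        return user_input if user_input in self.recognition_pattern else "ignored"

if __name__ == "__main__":
    ui = UserInterface()
    print(ui.recognize("pinch"))        # pinch (distinguished in the first mode)
    ui.switch_to_second_mode()          # e.g. triggered by a mode-switch command
    print(ui.recognize("pinch"))        # ignored (fewer types in the second mode)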
Abstract:
An object recognition apparatus and an object recognition method are provided. The object recognition method includes generating an input image based on an event flow of an object, generating a composite feature based on features extracted by a plurality of recognizers, and recognizing the object based on the composite feature.
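A minimal sketch of combining features from several recognizers into a composite feature, assuming two toy feature extractors and a placeholder decision rule; edge_features, density_features, and the final threshold are illustrative assumptions.

import numpy as np

def edge_features(input_image):
    """Hypothetical recognizer 1: horizontal and vertical gradient energy."""
    gy, gx = np.gradient(input_image.astype(float))
    return np.array([np.abs(gx).mean(), np.abs(gy).mean()])

def density_features(input_image):
    """Hypothetical recognizer 2: overall event density."""
    return np.array([input_image.mean()])

def composite_feature(input_image):
    return np.concatenate([edge_features(input_image), density_features(input_image)])

def recognize(feature):
    """Placeholder decision rule standing in for the final classifier."""
    return "object" if feature[-1] > 0.01 else "no object"

if __name__ == "__main__":
    input_image = np.zeros((32, 32))
    input_image[10:20, 10:20] = 1.0     # image generated from an event flow
    print(recognize(composite_feature(input_image)))   # -> object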
Abstract:
An apparatus and a method are provided. The apparatus includes an image representation unit configured to receive a sequence of frames generated from events sensed by a dynamic vision sensor (DVS) and generate a confidence map from non-noise events; and an image denoising unit connected to the image representation unit and configured to denoise an image in a spatio-temporal domain. The method includes receiving, by an image representation unit, a sequence of frames generated from events sensed by a DVS, and generating a confidence map from non-noise events; and denoising, by an image denoising unit connected to the image representation unit, images formed from the frames in a spatio-temporal domain.
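A rough sketch of a confidence map over a short sequence of DVS event frames followed by spatio-temporal thresholding, assuming binary event frames and a 3x3 spatial neighbourhood; the frame shape and threshold are assumptions, not the claimed pipeline.

import numpy as np

def confidence_map(frames):
    """Sum event counts over time, then over a 3x3 spatial neighbourhood."""
    temporal_sum = np.sum(frames, axis=0).astype(float)
    padded = np.pad(temporal_sum, 1)
    conf = np.zeros_like(temporal_sum)
    for dy in range(3):
        for dx in range(3):
            conf += padded[dy:dy + temporal_sum.shape[0],
                           dx:dx + temporal_sum.shape[1]]
    return conf

def denoise(frames, threshold=3):
    """Zero out pixels whose spatio-temporal support is below the threshold."""
    keep = confidence_map(frames) >= threshold
    return frames * keep                      # broadcast the mask over the sequence

if __name__ == "__main__":
    frames = np.zeros((5, 64, 64), dtype=np.uint8)
    frames[:, 20:23, 20:23] = 1               # persistent events: real signal
    frames[2, 50, 50] = 1                     # isolated event: likely noise
    cleaned = denoise(frames)
    print(frames.sum(), cleaned.sum())        # 46 -> 45 after denoising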