Abstract:
A method of extracting a static pattern from an output of an event-based sensor. The method may include receiving an event signal from the event-based sensor in response to dynamic input, and extracting a static pattern associated with the dynamic input based on an identifier and time included in the event signal. The static pattern may be extracted from a map generated based on the identifier and time.
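As a rough illustration of the map-based idea, the sketch below accumulates event times into a two-dimensional map keyed by pixel identifier and then thresholds recent events to recover the pattern traced by the dynamic input. The sensor dimensions, event format, and function names are assumptions for illustration, not the patented implementation.

```python
import numpy as np

# Toy sensor geometry; the real array size is not given in the abstract.
WIDTH, HEIGHT = 128, 128

def update_map(timestamp_map, pixel_id, timestamp):
    """Record the latest event time for the pixel named by its identifier."""
    x, y = pixel_id % WIDTH, pixel_id // WIDTH
    timestamp_map[y, x] = timestamp

def extract_static_pattern(timestamp_map, now, window=0.05):
    """Pixels that fired within the last `window` seconds form a binary image
    that approximates the pattern traced by the dynamic input."""
    return (now - timestamp_map) <= window

# Usage: accumulate a toy event stream, then threshold the map.
timestamp_map = np.full((HEIGHT, WIDTH), -np.inf)
for pixel_id, t in [(130, 0.010), (131, 0.012), (258, 0.015)]:
    update_map(timestamp_map, pixel_id, t)
pattern = extract_static_pattern(timestamp_map, now=0.02)
```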
Abstract:
A disparity determination method and apparatus are provided. The disparity determination method includes receiving first signals of an event from a first sensor disposed at a first location and second signals of the event from a second sensor disposed at a second location that is different from the first location, and extracting a movement direction of the event based on at least one of the first signals and the second signals. The disparity determination method further includes determining a disparity between the first sensor and the second sensor based on the movement direction, a difference between times at which the event is sensed by pixels in the first sensor, and a difference between times at which the event is sensed by corresponding pixels in the first sensor and the second sensor.
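A minimal sketch, in Python, of how such timing differences could yield a disparity estimate: the edge's speed in pixels per second follows from the intra-sensor time difference, and the disparity is how far the edge travels during the inter-sensor delay, signed by the movement direction. The event format, helper names, and numbers are assumptions for illustration, not the patented method.

```python
def pixel_speed(t_pixel_a, t_pixel_b, pixel_gap=1):
    """Speed of the moving edge in pixels per second, from the time difference
    between two pixels of the first sensor (pixel_gap pixels apart) that sensed
    the same event."""
    return pixel_gap / (t_pixel_b - t_pixel_a)

def disparity(t_first, t_second, speed, direction):
    """Disparity in pixels: the distance the edge travels during the delay
    between the two sensors, signed by the movement direction
    (+1 rightward, -1 leftward)."""
    return direction * speed * (t_second - t_first)

# Usage with toy timestamps (seconds): the edge crosses adjacent pixels of the
# first sensor 2 ms apart, and reaches the corresponding pixel of the second
# sensor 6 ms after the first sensor, giving a disparity of about 3 pixels.
speed = pixel_speed(t_pixel_a=0.100, t_pixel_b=0.102)            # 500 px/s
d = disparity(t_first=0.100, t_second=0.106, speed=speed, direction=+1)
```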
Abstract:
Provided is a user input processing apparatus that uses a motion of an object to determine whether to track the motion of the object, and tracks the motion of the object using an input image including information associated with the motion of the object.
Abstract:
Provided are an event-based image processing apparatus and method, the apparatus including a sensor which senses occurrences of a predetermined event in a plurality of image pixels and outputs event signals in response to the sensed occurrences, a time stamp unit which generates time stamp information by mapping each pixel corresponding to an event signal to a time at which the event signal is output from the sensor, and an optical flow generator which generates an optical flow based on the time stamp information in response to the outputting of the event signals.
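One common way to derive optical flow from such time stamp information is to take the spatial gradient of the timestamp map: if the event front needs dt seconds to advance one pixel along an axis, its speed along that axis is 1/dt. The sketch below illustrates this gradient-based estimate; it is an assumption for illustration rather than the patented optical flow generator.

```python
import numpy as np

def optical_flow_from_timestamps(timestamp_map, eps=1e-6):
    """Estimate per-pixel velocity (pixels/second) from the spatial gradient of
    the event times: a gradient of dt seconds per pixel along an axis
    corresponds to a speed of 1/dt pixels per second along that axis."""
    dt_dy, dt_dx = np.gradient(timestamp_map)
    with np.errstate(divide="ignore", invalid="ignore"):
        vx = np.where(np.abs(dt_dx) > eps, 1.0 / dt_dx, 0.0)
        vy = np.where(np.abs(dt_dy) > eps, 1.0 / dt_dy, 0.0)
    return vx, vy

# Usage: an edge sweeping left to right at 100 px/s stamps each column 0.01 s
# later than the previous one, so vx comes out near 100 and vy near 0.
timestamps = np.tile(np.arange(8) * 0.01, (8, 1))
vx, vy = optical_flow_from_timestamps(timestamps)
```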
Abstract:
An image processing device includes a vision sensor and a processor. The vision sensor generates a plurality of events in response to changes in light intensity and generates a plurality of timestamps corresponding to the times at which the events occur. In addition, the processor may regenerate the timestamp of a pixel at which an abnormal event occurs, based on temporal correlation of the events.
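A minimal sketch of the temporal-correlation idea: a pixel whose event time differs strongly from the times of its neighbors is treated as abnormal, and its timestamp is regenerated from those neighbors. The 3x3 window, the threshold, and the median-based regeneration are illustrative assumptions.

```python
import numpy as np

def regenerate_abnormal_timestamps(timestamp_map, threshold=0.01):
    """Replace the timestamp of any pixel whose event time is poorly correlated
    with its 3x3 neighborhood (difference from the neighborhood median larger
    than `threshold` seconds) by that median."""
    fixed = timestamp_map.copy()
    height, width = timestamp_map.shape
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            neighbors = np.delete(timestamp_map[y-1:y+2, x-1:x+2].ravel(), 4)
            median = np.median(neighbors)
            if abs(timestamp_map[y, x] - median) > threshold:
                fixed[y, x] = median   # regenerated timestamp
    return fixed

# Usage: one pixel carries a spurious, much older timestamp than its neighbors.
ts = np.full((5, 5), 0.100)
ts[2, 2] = 0.010                              # abnormal event
cleaned = regenerate_abnormal_timestamps(ts)  # cleaned[2, 2] == 0.100
```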
Abstract:
A device and method to display a screen based on an event are provided. A device according to an exemplary embodiment may display, in response to an event associated with a movement of an object, a graphic representation that corresponds to the event by overlaying the graphic representation on visual contents.
Abstract:
Provided are a method and apparatus for identifying a spatial gesture of a user, which may recognize a reference image of the user from a three-dimensional (3D) space in which a gesture of the user is performed, divide the 3D space into a plurality of partitioned spaces based on the reference image, and identify the gesture of the user in the plurality of partitioned spaces based on the reference image.
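A minimal sketch of the partitioning idea, reducing the reference image to a reference depth (for example, the user's torso plane) and dividing the space in front of it into depth partitions used to interpret the gesture. The partition names and thresholds are assumptions for illustration.

```python
def partition_for(hand_depth, reference_depth, near=0.2, far=0.5):
    """Classify the hand's depth relative to the reference plane (metres):
    the farther the hand reaches in front of the body, the more 'active'
    the partition it falls into."""
    offset = reference_depth - hand_depth   # how far the hand is in front
    if offset > far:
        return "far"      # e.g. pointing or selection gestures
    if offset > near:
        return "near"     # e.g. hover gestures
    return "rest"         # hand close to the body: no active gesture

# Usage: the user stands 1.2 m from the sensor with the hand extended to 0.6 m.
zone = partition_for(hand_depth=0.6, reference_depth=1.2)   # "far"
```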
Abstract:
A method and apparatus for detecting an error in gesture recognition are provided. The method includes sensing whether an effective gesture occurs in a first area for gesture recognition of a user; setting a second area and sensing an occurrence of an event due to a movement of the user, based on a result of the sensing in the first area; and detecting the error in the gesture recognition based on whether the occurrence of the event is sensed in the second area.
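A minimal sketch of the two-area check: a gesture sensed in the first area combined with movement events in the surrounding second area is flagged as a recognition error. The area representation, event format, and decision rule details are assumptions for illustration.

```python
def in_area(event, area):
    """area = (x_min, y_min, x_max, y_max); event = (x, y, timestamp)."""
    x, y, _ = event
    x_min, y_min, x_max, y_max = area
    return x_min <= x <= x_max and y_min <= y <= y_max

def detect_recognition_error(events, first_area, second_area):
    """Return True if an effective gesture was sensed in the first area while
    movement events also occurred in the second area outside the first."""
    gesture_sensed = any(in_area(e, first_area) for e in events)
    stray_movement = any(in_area(e, second_area) and not in_area(e, first_area)
                         for e in events)
    return gesture_sensed and stray_movement

# Usage: the hand drifts out of the first (gesture) area into the second area.
events = [(10, 10, 0.1), (60, 12, 0.2)]
error = detect_recognition_error(events, first_area=(0, 0, 40, 40),
                                 second_area=(0, 0, 80, 80))   # True
```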