Abstract:
A method and an apparatus for displaying a screen in response to an event related to a motion of an external object are provided. The method includes generating an event signal in response to a motion of an external object being sensed, sensing a movement of the external object relative to an apparatus based on the event signal, and displaying a screen based on the movement of the external object.
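As an illustration only, the following minimal sketch shows one way such a pipeline could look in software: motion events are batched, the relative movement is estimated as a centroid shift, and the screen is updated from that movement. The Event type, the batches, and update_screen are hypothetical names, not elements of the disclosed apparatus.

```python
# Minimal sketch (not the patented implementation): update a displayed
# screen offset from the centroid shift of sensed motion events.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Event:
    x: int    # pixel column where motion was sensed
    y: int    # pixel row where motion was sensed
    t: float  # timestamp in seconds

def centroid(events: List[Event]) -> Tuple[float, float]:
    n = len(events)
    return (sum(e.x for e in events) / n, sum(e.y for e in events) / n)

def movement_between(prev: List[Event], curr: List[Event]) -> Tuple[float, float]:
    """Estimate the relative movement of the external object as a centroid shift."""
    (px, py), (cx, cy) = centroid(prev), centroid(curr)
    return (cx - px, cy - py)

def update_screen(dx: float, dy: float) -> None:
    # Placeholder for the display step: translate/scroll the screen content.
    print(f"move screen content by ({dx:.1f}, {dy:.1f}) px")

prev_batch = [Event(10, 10, 0.00), Event(12, 11, 0.01)]
curr_batch = [Event(14, 10, 0.05), Event(16, 11, 0.06)]
update_screen(*movement_between(prev_batch, curr_batch))
```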
Abstract:
An event filtering device and a motion recognition device using the same are provided. The motion recognition device includes an emitter configured to emit an infrared ray in a pattern; a detector configured to detect events in a visible ray area and an infrared ray area; a filter configured to determine whether at least one portion of the detected events is detected using the infrared ray in the pattern, and to filter the detected events based on a result of the determination; and a motion recognizer configured to perform motion recognition based on a detected event accepted by the filter.
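A minimal sketch of the filtering idea, under the assumption that the pattern is a periodic infrared blink with known period: events whose timestamps fall inside the emitter's on-phases are accepted, and other events (e.g., caused by ambient visible light) are rejected. The period, duty cycle, and event tuples below are hypothetical.

```python
# Minimal sketch, not the device's actual logic: accept only events whose
# timestamps coincide with the "on" phases of a known infrared blink pattern.
PERIOD = 0.010       # s, full blink period of the IR emitter (assumed)
ON_FRACTION = 0.5    # emitter is on for the first half of each period (assumed)

def emitted_during_ir_on(t: float) -> bool:
    """True if the event time lies within an IR-on phase of the pattern."""
    return (t % PERIOD) < PERIOD * ON_FRACTION

def filter_events(events):
    """Keep only events consistent with the emitted IR pattern."""
    return [e for e in events if emitted_during_ir_on(e[2])]

events = [(3, 7, 0.0012), (5, 1, 0.0071), (8, 2, 0.0103)]  # (x, y, t)
print(filter_events(events))  # events at 0.0012 s and 0.0103 s are accepted
```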
Abstract:
Apparatuses and methods for sensing spatial information based on a vision sensor are disclosed. The apparatus and method recognize the spatial information of an object sensed by the vision sensor, which senses a temporal change of light. A change unit artificially changes the light being input to the vision sensor.
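The sketch below is only an illustration of why artificially changing the input light matters for a change-sensing sensor; the scene, threshold, and modulation values are assumptions, not disclosed parameters.

```python
# Illustrative sketch: a change-sensing vision sensor reports only pixels
# whose light changes over time, so a static scene produces no events.
# If the input light is artificially modulated, every lit pixel changes and
# the static object's spatial extent becomes recoverable from the events.
import numpy as np

scene = np.zeros((6, 6))
scene[2:5, 1:4] = 1.0            # static object reflectance (hypothetical)

def event_frame(prev, curr, threshold=0.1):
    """Pixels whose intensity changed by more than the threshold fire events."""
    return np.abs(curr - prev) > threshold

# Without modulation: constant illumination -> no events, object unseen.
print(event_frame(scene * 1.0, scene * 1.0).any())   # False

# With modulation: the change unit dims the input light between samples.
print(np.argwhere(event_frame(scene * 1.0, scene * 0.5)))  # object footprint
```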
Abstract:
A proximity sensor and a proximity sensing method using a change in the light quantity of a reflected light are disclosed. The proximity sensor may include a quantity change detection unit which detects a change in a quantity of reflected light, that is, output light that has been reflected by an object while an intensity of the output light changes, and a proximity determination unit which determines a proximity of the object to the quantity change detection unit based on the change in the intensity of the output light and the detected change in the quantity of the reflected light.
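A minimal numeric sketch of the underlying relation, under a toy inverse-square reflection model (not the patented formula): when the output intensity is stepped, a nearby object returns a larger change in reflected light than a distant one, so the ratio of the two changes can be mapped to proximity. The reflectance, distances, and intensity levels are hypothetical.

```python
# Toy model: reflected light quantity falls off with distance squared.
def reflected_quantity(output_intensity, distance, reflectance=0.8):
    return reflectance * output_intensity / (distance ** 2)

def proximity_score(i_low, i_high, distance):
    """Ratio of the reflected-light change to the output-light change."""
    delta_reflected = (reflected_quantity(i_high, distance)
                       - reflected_quantity(i_low, distance))
    return delta_reflected / (i_high - i_low)

for d in (0.05, 0.10, 0.50):  # metres (hypothetical)
    print(d, round(proximity_score(i_low=0.2, i_high=1.0, distance=d), 2))
# The score shrinks as the object moves away, so a threshold on it can
# serve as a proximity decision.
```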
Abstract:
Provided is a neuromorphic signal processing device for locating a sound source using a plurality of neuron circuits. The neuromorphic signal processing device includes a detector configured to output, for each of a plurality of predetermined frequency bands, a detected spiking signal using a detection neuron circuit corresponding to a predetermined time difference, in response to a first signal and a second signal containing an identical input spiking signal with respect to the predetermined time difference; a multiplexor configured to output a multiplexed spiking signal corresponding to the predetermined time difference, based on a plurality of the detected spiking signals output from a plurality of neuron circuits corresponding to the plurality of frequency bands; and an integrator configured to output an integrated spiking signal corresponding to the predetermined time difference, based on a plurality of the multiplexed spiking signals corresponding to a plurality of predetermined time differences.
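The following is a software analogue of the coincidence-detection idea only, not the neuromorphic circuit itself: for every candidate time difference, spikes from the first and second signals are counted as coincident once that delay is applied, the counts are summed across frequency bands, and the delay with the largest integrated count stands out. The spike rates, number of bands, and the true delay are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
true_delay = 7  # samples by which the second signal lags the first (assumed)

# One spike train per frequency band for signal 1; signal 2 is a delayed copy.
bands_sig1 = [(rng.random(n) < 0.05).astype(int) for _ in range(4)]
bands_sig2 = [np.roll(s, true_delay) for s in bands_sig1]

candidate_delays = range(0, 16)
integrated = []
for d in candidate_delays:
    # Coincidence count per band (detection neuron), then summed over bands
    # (multiplexing/integration in this toy analogue).
    per_band = [int(np.sum(s1 * np.roll(s2, -d)))
                for s1, s2 in zip(bands_sig1, bands_sig2)]
    integrated.append(sum(per_band))

print(candidate_delays[int(np.argmax(integrated))])  # -> 7, the true delay
```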
Abstract:
A modeling method using a three-dimensional (3D) point cloud is provided. The modeling method includes extracting at least one region from an image captured by a camera; receiving pose information of the camera based on two-dimensional (2D) feature points extracted from the image; estimating first depth information of the image based on the at least one region and the pose information of the camera; and generating a 3D point cloud to model a map corresponding to the image based on the first depth information.
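A minimal sketch of the final step only, assuming a pinhole camera model: once depth has been estimated for pixels in the extracted region, each pixel is back-projected into camera coordinates and transformed by the camera pose to form a small point cloud in the map frame. The intrinsics, pose, region pixels, and depths are hypothetical values, not from the disclosure.

```python
import numpy as np

fx = fy = 500.0               # focal lengths in pixels (assumed)
cx, cy = 320.0, 240.0         # principal point (assumed)
R = np.eye(3)                 # camera rotation from the received pose (assumed)
t = np.array([0.0, 0.0, 0.0]) # camera translation (assumed)

def backproject(u, v, depth):
    """Pixel (u, v) with estimated depth -> 3D point in the map frame."""
    p_cam = np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])
    return R @ p_cam + t

region_pixels = [(330, 250), (340, 255), (335, 260)]   # extracted region
depths = [1.20, 1.25, 1.22]                            # first depth info (m)
cloud = np.array([backproject(u, v, d)
                  for (u, v), d in zip(region_pixels, depths)])
print(cloud.shape)  # (3, 3): a small point cloud for the modeled map
```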
Abstract:
A disparity determination method and apparatus are provided. The disparity determination method includes receiving first signals of an event from a first sensor disposed at a first location and second signals of the event from a second sensor disposed at a second location that is different from the first location, and extracting a movement direction of the event based on at least one of the first signals and the second signals. The disparity determination method further includes determining a disparity between the first sensor and the second sensor based on the movement direction, a difference between times at which the event is sensed by corresponding pixels in the first sensor, and a difference between times at which the event is sensed by corresponding pixels in the first sensor and the second sensor.
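A minimal sketch of the underlying idea under assumed inputs (not the claimed procedure itself): the time the event needs to travel one pixel within the first sensor gives the edge speed, and multiplying that speed by the time offset between the two sensors yields a pixel disparity, signed by the movement direction. The argument names and example values are hypothetical.

```python
def disparity(dt_within_first, dt_between_sensors, direction):
    """
    dt_within_first     -- seconds for the event to move by one pixel in sensor 1
    dt_between_sensors  -- seconds between sensing by corresponding pixels of
                           sensor 1 and sensor 2
    direction           -- +1 if the event moves left-to-right, -1 otherwise
    """
    speed_px_per_s = 1.0 / dt_within_first
    return direction * speed_px_per_s * dt_between_sensors

# Example: the edge needs 2 ms per pixel and reaches the second sensor's
# corresponding pixel 10 ms later -> 5 px disparity.
print(disparity(dt_within_first=0.002, dt_between_sensors=0.010, direction=+1))
```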
Abstract:
A method of recognizing an object includes controlling an event-based vision sensor to perform sampling in a first mode and to output first event signals based on the sampling in the first mode, determining whether object recognition is to be performed based on the first event signals, controlling the event-based vision sensor to perform sampling in a second mode and to output second event signals based on the sampling in the second mode in response to the determining indicating that the object recognition is to be performed, and performing the object recognition based on the second event signals.
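A control-flow sketch of the two-mode idea with assumed names (the sensor API, thresholds, and classifier are hypothetical): sample in a coarse first mode, decide from those event signals whether recognition is worthwhile, and only then switch to a finer second mode and recognize the object.

```python
ACTIVITY_THRESHOLD = 50  # events per first-mode sampling window (assumed)

def should_recognize(first_mode_events):
    """Cheap decision from first-mode samples: is there enough activity?"""
    return len(first_mode_events) >= ACTIVITY_THRESHOLD

def recognize_object(second_mode_events):
    # Placeholder classifier working on the richer second-mode event stream.
    return "object" if second_mode_events else "none"

def run(sensor):
    first_events = sensor.sample(mode="first")        # e.g. low resolution/rate
    if not should_recognize(first_events):
        return None                                   # stay in the cheap mode
    second_events = sensor.sample(mode="second")      # e.g. high resolution/rate
    return recognize_object(second_events)

class FakeSensor:
    """Stand-in for the event-based vision sensor (illustration only)."""
    def sample(self, mode):
        n = 80 if mode == "first" else 400
        return [(i % 32, i // 32, i * 1e-4) for i in range(n)]  # (x, y, t)

print(run(FakeSensor()))  # enough first-mode activity -> "object"
```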
Abstract:
A method and apparatus for detecting a movement of an object based on an event are provided. The apparatus may detect a movement of an object, for example, based on time difference information of a pixel corresponding to an event detected using an event-based vision sensor.
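A minimal sketch with assumed event data (not the apparatus's algorithm): from the per-pixel timestamps of the latest events, the time difference between neighboring pixels indicates how fast, and in which direction, an edge of the object swept across them. The time-surface values and pixel pitch below are hypothetical.

```python
import numpy as np

# Latest event timestamp per pixel (seconds); an edge sweeping left-to-right.
time_surface = np.array([
    [0.010, 0.012, 0.014, 0.016],
    [0.010, 0.012, 0.014, 0.016],
])
PIXEL_PITCH = 1.0  # pixel units

def horizontal_motion(ts):
    """Velocity in px/s from time differences between horizontal neighbours."""
    dt = np.diff(ts, axis=1)       # time difference per neighbouring pixel pair
    with np.errstate(divide="ignore"):
        v = PIXEL_PITCH / dt       # larger time difference -> slower movement
    return float(np.mean(v))

print(horizontal_motion(time_surface))  # ~500 px/s, moving in the +x direction
```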
Abstract:
A user input processing method is provided. The user input processing method determines, based on a recognition reliability of a user input for a function that is determined in advance, a delay time and whether the function is to be performed, and controls the function based on the delay time and on whether the function is to be performed.
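An illustrative sketch under assumed thresholds (not the claimed method): a highly reliable recognition triggers the predetermined function at once, a moderately reliable one only after a delay that grows as the reliability drops, and an unreliable one not at all. The threshold values and scaling are hypothetical.

```python
HIGH, LOW = 0.9, 0.5   # reliability thresholds (hypothetical)

def decide(reliability):
    """Return (perform_function, delay_time_in_seconds)."""
    if reliability >= HIGH:
        return True, 0.0
    if reliability >= LOW:
        # Less certain: still perform, but wait proportionally longer first.
        return True, (HIGH - reliability) * 2.0
    return False, 0.0

for r in (0.95, 0.7, 0.3):
    print(r, decide(r))
# 0.95 -> perform immediately, 0.7 -> perform after 0.4 s, 0.3 -> do not perform
```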