Abstract:
An electronic device includes a laser source configured to direct laser radiation toward a user's hand. A laser detector is configured to receive reflected laser radiation from the user's hand. A controller is coupled to the laser source and laser detector and configured to determine a plurality of distance values to the user's hand based upon a time-of-flight of the laser radiation, calculate a mean absolute deviation (MAD) value based upon the plurality of distance values, and identify whether the user's hand is moving in a first or second gesture based upon the MAD value.
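The classification step described above can be illustrated with a minimal Python sketch: compute the mean absolute deviation (MAD) of a series of time-of-flight distance values and compare it to a threshold. The threshold value and the gesture labels are illustrative assumptions, not specified by the abstract.

```python
import statistics

def classify_gesture(distances_mm, mad_threshold_mm=30.0):
    """Classify hand motion from a series of time-of-flight distance values.

    A large mean absolute deviation (MAD) is taken here to indicate the first
    gesture, a small MAD the second gesture. The threshold and the gesture
    labels are illustrative assumptions.
    """
    mean_d = statistics.fmean(distances_mm)
    mad = sum(abs(d - mean_d) for d in distances_mm) / len(distances_mm)
    return "first_gesture" if mad > mad_threshold_mm else "second_gesture"

# Example: distances sampled while the hand sweeps toward the sensor.
print(classify_gesture([250, 230, 205, 180, 150, 120]))  # -> first_gesture
```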
Abstract:
A method includes emitting, by a single sensor of a device, a signal into a region; receiving, by the single sensor, a reflected signal; and detecting motion in a detection cone comprising a central axis based on the reflected signal, wherein detecting motion comprises detecting a first type of motion from a first position to a second position, and detecting a second type of motion from the second position to the first position.
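As a rough sketch of how a single sensor could distinguish the two types of motion along the detection cone's central axis, the net change in successive distance samples can be used as the discriminator. The function name, the motion labels, and the travel threshold below are illustrative assumptions.

```python
def detect_motion_type(distance_samples_mm, min_travel_mm=50.0):
    """Infer motion direction along the detection cone's central axis from
    successive single-sensor distance samples.

    A net decrease in distance (object moving from a far first position to a
    near second position) is reported as the first type of motion; a net
    increase as the second type. Labels and threshold are assumptions.
    """
    travel = distance_samples_mm[-1] - distance_samples_mm[0]
    if travel <= -min_travel_mm:
        return "first_type"   # first position -> second position
    if travel >= min_travel_mm:
        return "second_type"  # second position -> first position
    return "no_motion"

print(detect_motion_type([400, 360, 310, 260]))  # -> first_type
print(detect_motion_type([260, 310, 360, 410]))  # -> second_type
```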
Abstract:
A method includes detecting, from a first histogram signal delivered by a sensor device, successive sets of targets at respective successive instants; determining, for a current set of detected targets, a current histogram output comprising, for each detected target of the current set, a current group of parameters stored in a memory and including a confidence indicator; performing a matching operation between the current set of detected targets and previous sets of detected targets stored in the memory; and performing a filtering operation on at least one parameter of the current group of parameters of at least some of the detected targets of the current set, based on the result of the matching operation, the filtering operation being weighted based on at least the confidence indicators of the current and previous sets of detected targets.
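The matching and confidence-weighted filtering can be sketched as follows. The nearest-distance matching rule and the blending formula are illustrative assumptions; the abstract only states that the filtering is weighted by the confidence indicators of the current and previous sets.

```python
from dataclasses import dataclass

@dataclass
class Target:
    distance_mm: float
    confidence: float  # confidence indicator in [0, 1]

def match_and_filter(current, previous, match_window_mm=100.0):
    """Match each current target to the nearest previous target and, when a
    match is found, smooth its distance with a confidence-weighted blend.
    """
    filtered = []
    for cur in current:
        best = min(previous,
                   key=lambda prev: abs(prev.distance_mm - cur.distance_mm),
                   default=None)
        if best is None or abs(best.distance_mm - cur.distance_mm) > match_window_mm:
            filtered.append(cur)  # no match: keep the raw measurement
            continue
        # Weight the blend by the relative confidence of the current target.
        w = cur.confidence / (cur.confidence + best.confidence + 1e-9)
        blended = w * cur.distance_mm + (1.0 - w) * best.distance_mm
        filtered.append(Target(blended, cur.confidence))
    return filtered

prev_set = [Target(500.0, 0.9), Target(1200.0, 0.4)]
cur_set = [Target(520.0, 0.6), Target(2500.0, 0.8)]
print(match_and_filter(cur_set, prev_set))
```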
Abstract:
A method includes dividing a field of view into a plurality of zones and sampling the field of view to generate a photon count for each zone of the plurality of zones, identifying a focal sector of the field of view, and analyzing each zone to select a final focal object from a first prospective focal object and a second prospective focal object.
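A short sketch of one way the zone photon counts could drive the selection: treat the zone with the highest count as the focal sector and prefer the prospective object that covers it. Both rules, and the candidate representation, are illustrative assumptions rather than the claimed selection criteria.

```python
def select_focal_object(zone_counts, candidates):
    """Pick a final focal object from prospective candidates using per-zone
    photon counts.

    The focal sector is taken to be the zone with the highest photon count;
    the candidate covering that zone (or, failing that, the candidate with
    the larger total count over its zones) is selected. Illustrative only.
    """
    focal_zone = max(range(len(zone_counts)), key=lambda z: zone_counts[z])
    covering = [c for c in candidates if focal_zone in c["zones"]]
    pool = covering if covering else candidates
    return max(pool, key=lambda c: sum(zone_counts[z] for z in c["zones"]))

counts = [3, 8, 42, 37, 5, 2]  # photon counts for six zones
first = {"name": "near object", "zones": [2, 3]}
second = {"name": "background object", "zones": [4, 5]}
print(select_focal_object(counts, [first, second])["name"])  # -> near object
```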
Abstract:
A method of determining a depth map of a scene comprises generating a distance map of the scene obtained by time-of-flight measurements, acquiring two images of the scene from two different viewpoints, and stereoscopically processing the two images taking the distance map into account. The generation of the distance map includes generating distance histograms, acquisition zone by acquisition zone, of the scene, and the stereoscopic processing includes, for each region of the depth map corresponding to an acquisition zone, elementary processing taking the corresponding histogram into account.
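One plausible form of the per-region elementary processing is to use the zone's distance histogram to restrict the stereo disparity search. The sketch below derives a disparity range from the dominant histogram bin; the parameter names and the tolerance rule are illustrative assumptions.

```python
def disparity_search_range(zone_histogram, bin_width_mm, focal_px, baseline_mm,
                           tolerance_bins=2):
    """Derive a restricted stereo disparity search range for one depth-map
    region from the time-of-flight distance histogram of the matching
    acquisition zone. Illustrative sketch, not the claimed processing.
    """
    peak_bin = max(range(len(zone_histogram)), key=lambda b: zone_histogram[b])
    near_mm = max(1.0, (peak_bin - tolerance_bins) * bin_width_mm)
    far_mm = (peak_bin + 1 + tolerance_bins) * bin_width_mm
    # disparity (px) = focal length (px) * baseline (mm) / distance (mm)
    d_max = focal_px * baseline_mm / near_mm
    d_min = focal_px * baseline_mm / far_mm
    return d_min, d_max

hist = [0, 1, 2, 15, 40, 12, 3, 0]  # photon counts per distance bin
print(disparity_search_range(hist, bin_width_mm=250.0,
                             focal_px=800.0, baseline_mm=60.0))
```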
Abstract:
A method for controlling an apparatus includes the steps of: determining distance measurements of an object in a first direction, using distance sensors defining between them a second direction different from the first direction; assessing a first inclination of the object in relation to the second direction based on the distance measurements; and determining a first command of the apparatus according to the inclination assessment.
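The inclination assessment can be illustrated with simple trigonometry over the two distance readings and the sensor spacing. The arctangent model, the threshold, and the command names are illustrative assumptions.

```python
import math

def inclination_command(d1_mm, d2_mm, sensor_spacing_mm=40.0,
                        tilt_threshold_deg=10.0):
    """Estimate an object's inclination from two distance measurements taken
    along a first direction by sensors spaced along a second direction, then
    map the inclination to a command. Illustrative sketch only.
    """
    tilt_deg = math.degrees(math.atan2(d2_mm - d1_mm, sensor_spacing_mm))
    if tilt_deg > tilt_threshold_deg:
        return "command_a"   # e.g. object tilted toward the first sensor
    if tilt_deg < -tilt_threshold_deg:
        return "command_b"   # e.g. object tilted toward the second sensor
    return "no_command"

print(inclination_command(120.0, 150.0))  # -> command_a
```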
Abstract:
An embodiment device includes: a plurality of optical emitters configured to emit incident radiation within a field of view of the device; a plurality of optical detectors configured to receive reflected radiation and to generate a histogram based on the incident radiation and the reflected radiation, the histogram being indicative of a number of photon events detected by the plurality of optical detectors over a plurality of time bins, the plurality of time bins being indicative of a plurality of time differences between emission of the incident radiation and reception of the reflected radiation; and a processor configured to iteratively process the histogram by executing an expectation-maximization algorithm to detect a presence of objects located in the field of view of the device.
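A generic example of expectation-maximization over such a histogram is a 1-D Gaussian-mixture fit in which bin counts act as sample weights and the fitted means are the candidate object return times. The mixture model, the initialization, and all parameter names are illustrative assumptions; the abstract does not specify this exact formulation.

```python
import math

def em_histogram_peaks(counts, bin_width_ns, n_objects=2, iters=50):
    """Fit a mixture of Gaussian pulses to a time-of-flight histogram with a
    plain expectation-maximization loop, returning estimated pulse positions
    (in ns) as candidate object returns. Illustrative sketch only.
    """
    centers = [(b + 0.5) * bin_width_ns for b in range(len(counts))]
    total = sum(counts)
    # Initialize means spread across the histogram, equal weights, one-bin sigma.
    mus = [centers[(k + 1) * len(counts) // (n_objects + 1)] for k in range(n_objects)]
    sigmas = [bin_width_ns] * n_objects
    weights = [1.0 / n_objects] * n_objects

    def gauss(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    for _ in range(iters):
        # E-step: responsibility of each component for each bin center.
        resp = [[weights[k] * gauss(x, mus[k], sigmas[k]) for k in range(n_objects)]
                for x in centers]
        for r in resp:
            s = sum(r) or 1e-12
            for k in range(n_objects):
                r[k] /= s
        # M-step: update weights, means, sigmas using bin counts as sample weights.
        for k in range(n_objects):
            nk = sum(counts[b] * resp[b][k] for b in range(len(counts))) or 1e-12
            mus[k] = sum(counts[b] * resp[b][k] * centers[b] for b in range(len(counts))) / nk
            var = sum(counts[b] * resp[b][k] * (centers[b] - mus[k]) ** 2
                      for b in range(len(counts))) / nk
            sigmas[k] = max(math.sqrt(var), 0.1 * bin_width_ns)
            weights[k] = nk / total
    return sorted(mus)

hist = [0, 1, 2, 9, 20, 8, 2, 1, 3, 11, 25, 10, 2, 0, 0, 0]
print(em_histogram_peaks(hist, bin_width_ns=0.5))  # two estimated return times (ns)
```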