Abstract:
A method for depth mapping includes projecting a pattern of optical radiation onto an object. A first image of the pattern on the object is captured using a first image sensor, and this image is processed to generate pattern-based depth data with respect to the object. A second image of the object is captured using a second image sensor, and the second image is processed together with another image to generate stereoscopic depth data with respect to the object. The pattern-based depth data is combined with the stereoscopic depth data to create a depth map of the object.
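The combination step described above can be sketched as follows. This is a minimal illustration, not the claimed implementation: it assumes each technique yields a per-pixel depth array with an `invalid` sentinel for pixels where that technique failed, and simply prefers the pattern-based value where it is valid.

```python
import numpy as np

def fuse_depth_maps(pattern_depth, stereo_depth, invalid=0.0):
    """Combine pattern-based and stereoscopic depth data into one map.

    Where the pattern-based estimate is valid it is kept; elsewhere the
    stereoscopic estimate fills the gap. The `invalid` sentinel and the
    simple preference rule are assumptions of this sketch.
    """
    pattern_depth = np.asarray(pattern_depth, dtype=float)
    stereo_depth = np.asarray(stereo_depth, dtype=float)
    # Keep pattern-based depth wherever it is valid; fall back to stereo.
    return np.where(pattern_depth != invalid, pattern_depth, stereo_depth)
```

A real system would weigh the two sources against each other per pixel rather than simply falling back, but the shape of the combination is the same.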
Abstract:
A method for depth mapping includes acquiring first depth data with respect to an object using a first depth mapping technique and providing first candidate depth coordinates for a plurality of pixels, and acquiring second depth data with respect to the object using a second depth mapping technique, different from the first depth mapping technique, and providing second candidate depth coordinates for the plurality of pixels. A weighted voting process is applied to the first and second depth data in order to select one of the candidate depth coordinates at each pixel. A depth map of the object is output, including the selected one of the candidate depth coordinates at each pixel.
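A weighted voting process of the kind described can be sketched per pixel as below. The tolerance-based grouping of nearby candidates and the specific weights are illustrative assumptions, not the patented method itself.

```python
def weighted_vote(candidates, tolerance=0.05):
    """Select one depth coordinate for a pixel by weighted voting.

    `candidates` is a list of (depth, weight) pairs pooled from the two
    depth-mapping techniques. Candidates within `tolerance` of one another
    vote together; the depth whose accumulated weight is largest wins.
    Both the tolerance binning and the weights are assumptions here.
    """
    best_depth, best_score = None, -1.0
    for depth, _ in candidates:
        # Accumulate the weight of all candidates agreeing with this depth.
        score = sum(w for d, w in candidates if abs(d - depth) <= tolerance)
        if score > best_score:
            best_depth, best_score = depth, score
    return best_depth
```

For example, two mutually consistent low-weight candidates can outvote a single higher-weight outlier, which is the point of pooling evidence from both techniques.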
Abstract:
A method for projection includes projecting a pattern of structured light with a given average intensity onto a scene. A sequence of images is captured of the scene while projecting the pattern. At least one captured image in the sequence is processed in order to extract a depth map of the scene. A condition is identified in the depth map indicative of a fault in projection of the pattern. Responsively to the identified condition, the average intensity of the projection of the pattern is reduced.
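The fault-handling loop can be sketched as follows. The specific fault condition modeled here, a large fraction of invalid depth pixels (as might occur when something obstructs the projector), and all thresholds are assumptions for illustration; the abstract itself does not fix the condition or the reduction factor.

```python
def adjust_intensity(depth_map, current_intensity, invalid=0.0,
                     fault_fraction=0.5, reduction=0.5, floor=0.1):
    """Reduce average projection intensity when the depth map shows a fault.

    A fault is modeled (as an assumption) as at least `fault_fraction` of
    pixels carrying the `invalid` value. On a fault, the average intensity
    is scaled by `reduction`, but not below `floor`.
    """
    flat = [v for row in depth_map for v in row]
    bad = sum(1 for v in flat if v == invalid) / len(flat)
    if bad >= fault_fraction:
        # Fault condition identified: dim the projector.
        return max(current_intensity * reduction, floor)
    return current_intensity
```

Each captured image in the sequence would feed a fresh depth map through this check, so the intensity drops promptly once the condition appears.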
Abstract:
A method for depth mapping includes receiving optical radiation reflected from multiple points on an object and processing the received optical radiation to generate depth data including multiple candidate depth coordinates for each of a plurality of pixels and respective measures of confidence associated with the candidate depth coordinates. One of the candidate depth coordinates is selected at each of the plurality of pixels responsively to the respective measures of confidence. A depth map of the object is output, including the selected one of the candidate depth coordinates at each of the plurality of pixels.
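The selection step can be sketched with arrays holding K candidate depths and K confidence values per pixel; the (K, H, W) layout is an assumption of this sketch, and the simple argmax stands in for whatever confidence-responsive rule an implementation would use.

```python
import numpy as np

def select_by_confidence(candidate_depths, confidences):
    """Pick, at each pixel, the candidate depth with the highest confidence.

    `candidate_depths` and `confidences` are arrays of shape (K, H, W):
    K candidates per pixel. This layout is an assumption of the sketch.
    """
    depths = np.asarray(candidate_depths, dtype=float)
    conf = np.asarray(confidences, dtype=float)
    best = np.argmax(conf, axis=0)  # (H, W) index of the winning candidate
    # Gather the winning depth at each pixel to form the output depth map.
    return np.take_along_axis(depths, best[None], axis=0)[0]
```

The returned (H, W) array is the output depth map, with one selected coordinate per pixel.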