Abstract:
An electronic imaging device and method for image capture are described. The imaging device includes a camera configured to obtain image information of a scene, where the camera may be focused on a region of interest in the scene. The imaging device also includes a LIDAR unit configured to obtain depth information of at least a portion of the scene at specified scan locations. The imaging device is configured to detect an object in the scene and provide the specified scan locations to the LIDAR unit. The camera is configured to capture an image with an adjusted focus based on depth information, obtained by the LIDAR unit, associated with the detected object.
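The focus-adjustment step described above can be sketched in a few lines. This is an illustrative model only, not the patent's implementation: the thin-lens equation, the dictionary keyed by scan location, and all function and parameter names are assumptions introduced here.

```python
# Illustrative sketch (assumed model, not the patent's method): pick a camera
# focus setting from the LIDAR depth reading at a detected object's scan location.

def lens_position_mm(object_distance_mm: float, focal_length_mm: float) -> float:
    """Image distance v from the thin-lens equation 1/f = 1/u + 1/v."""
    if object_distance_mm <= focal_length_mm:
        raise ValueError("object must be farther than the focal length")
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

def focus_on_object(lidar_depths_mm, object_scan_location, focal_length_mm=50.0):
    """Look up the LIDAR depth at the detected object's scan location and
    return the lens-to-sensor distance that brings that depth into focus."""
    depth = lidar_depths_mm[object_scan_location]
    return lens_position_mm(depth, focal_length_mm)

# Hypothetical scan results: (row, col) scan location -> measured depth in mm.
depths = {(120, 80): 2000.0, (300, 40): 5000.0}
setting = focus_on_object(depths, (120, 80))  # lens position for the object at 2 m
```

An object at 2 m with a 50 mm lens yields a lens position slightly beyond 50 mm, as the thin-lens relation predicts.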
Abstract:
Systems, apparatus, and methods for generating a fused depth map from one or more individual depth maps, wherein the fused depth map is configured to provide robust depth estimation for points within the depth map. The methods, apparatus, or systems may comprise components that identify a field of view (FOV) of an imaging device configured to capture an image of the FOV and select a first depth sensing method. The system or method may sense a depth of the FOV with respect to the imaging device using the first selected depth sensing method and generate a first depth map of the FOV based on the sensed depth of the first selected depth sensing method. The system or method may also identify a region of one or more points of the first depth map having one or more inaccurate depth measurements and determine if additional depth sensing is needed.
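The fusion-and-flagging idea above can be illustrated with a minimal sketch. The per-point confidence values, the thresholds, and the "fraction of unreliable points" decision rule are all assumptions for illustration; the abstract does not specify how inaccurate measurements are identified.

```python
# Hedged sketch of depth-map fusion: per point, keep the depth from whichever
# sensing method reports higher confidence, flag points whose best confidence
# is still low, and decide whether another sensing pass is warranted.
# Thresholds and names are illustrative assumptions.

def fuse_depth_maps(depths_a, conf_a, depths_b, conf_b, conf_threshold=0.5):
    """Return (fused depths, indices of points still considered inaccurate)."""
    fused, unreliable = [], []
    for i, (da, ca, db, cb) in enumerate(zip(depths_a, conf_a, depths_b, conf_b)):
        depth, conf = (da, ca) if ca >= cb else (db, cb)
        fused.append(depth)
        if conf < conf_threshold:
            unreliable.append(i)
    return fused, unreliable

def needs_additional_sensing(unreliable, total_points, max_fraction=0.05):
    """True if too large a fraction of the map remains unreliable."""
    return len(unreliable) / total_points > max_fraction
```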
Abstract:
Aspects of the present disclosure relate to systems and methods for structured light depth systems. An example active depth system may include a receiver to receive reflections of transmitted light and a transmitter including one or more light sources to transmit light in a spatial distribution. The spatial distribution of transmitted light may include a first region of a first plurality of light points and a second region of a second plurality of light points. A first density of the first plurality of light points is greater than a second density of the second plurality of light points when a first distance between a center of the spatial distribution and a center of the first region is less than a second distance between the center of the spatial distribution and the center of the second region.
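The density rule above (regions nearer the center of the spatial distribution get denser light points) can be sketched as a simple function. The linear falloff, the clamping distance, and the specific density values are assumptions made for illustration; the abstract only requires that density decrease with distance from the center.

```python
# Illustrative sketch: points-per-unit-area for a region of the transmitted
# spatial distribution, decreasing with the distance between the region's
# center and the distribution's center. Linear falloff is an assumption.

import math

def point_density(region_center, distribution_center, d_max=1.0,
                  density_near=100.0, density_far=25.0):
    """Interpolate between a near-center density and a far density based on
    the (clamped) distance from the distribution center."""
    dx = region_center[0] - distribution_center[0]
    dy = region_center[1] - distribution_center[1]
    dist = min(math.hypot(dx, dy), d_max)
    t = dist / d_max
    return density_near * (1.0 - t) + density_far * t
```

A region at the center gets the full density; a region at or beyond `d_max` gets the sparse density, matching the claimed ordering.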
Abstract:
Systems and methods for reconstructing an object boundary in a disparity map generated by a structured light system are disclosed. One aspect is a structured light system. The system includes an image projecting device configured to project codewords. The system further includes a receiver device including a sensor, the receiver device configured to sense the projected codewords reflected from an object. The system further includes a processing circuit configured to generate a disparity map of the object, detect a first boundary of the object in the disparity map, identify a shadow region in the disparity map adjoining the first boundary, the shadow region including pixels with codeword outages, and change a shape of the object in the disparity map based on the detected shadow region. The system further includes a memory device configured to store the disparity map.
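The boundary-repair step above can be illustrated in one dimension. Real structured-light systems operate on 2-D disparity maps, and the abstract does not say how the shape is changed; representing a codeword outage as `None` and filling a shadow run with the adjoining boundary's disparity are simplifying assumptions.

```python
# Minimal 1-D sketch of shadow-region repair in a disparity map row: pixels
# with codeword outages (None) that directly follow a decoded pixel are
# treated as shadow adjoining the object boundary and are filled with that
# boundary's disparity, extending the object's shape. Assumed rule, not the
# patent's exact reconstruction.

def repair_row(disparities):
    """Fill runs of None that follow a valid pixel with that pixel's value."""
    out = list(disparities)
    for i in range(1, len(out)):
        if out[i] is None and out[i - 1] is not None:
            out[i] = out[i - 1]
    return out
```

For example, an object at disparity 8 casting a two-pixel shadow before background at disparity 2 is extended across the shadow.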
Abstract:
Systems and methods configured to generate virtual gimbal information for range images produced from 3D depth scans are described. In operation according to embodiments, known and advantageous spatial geometries of features of a scanned volume are exploited to generate virtual gimbal information for a pose. The virtual gimbal information of embodiments may be used to align a range image of the pose with one or more other range images for the scanned volume, such as for combining the range images for use in indoor mapping, gesture recognition, object scanning, etc. Implementations of range image registration using virtual gimbal information provide a real-time, one-shot direct pose estimator by detecting and estimating the normal vectors for surfaces of features between successive scans, which effectively imparts a coordinate system for each scan with an orthogonal set of gimbal axes and defines the relative camera attitude.
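The core geometric idea above can be sketched as follows: two (near-)orthogonal surface normals detected in a scan define an orthonormal frame (the "gimbal axes"), and the relative camera attitude between two scans is the rotation mapping one frame onto the other. The Gram-Schmidt construction and the plain-list 3x3 arithmetic below are illustrative assumptions, not the patent's method.

```python
# Sketch under stated assumptions: build a per-scan coordinate frame from two
# detected surface normals, then express the relative attitude between scans
# as the rotation between their frames.

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

def frame_from_normals(n1, n2):
    """Orthonormal frame: x along n1, y is n2 with its n1 component removed
    (Gram-Schmidt), z = x cross y. Rows of the result are the axes."""
    x = normalize(n1)
    d = sum(a * b for a, b in zip(n2, x))
    y = normalize([b - d * a for a, b in zip(x, n2)])
    z = [x[1] * y[2] - x[2] * y[1],
         x[2] * y[0] - x[0] * y[2],
         x[0] * y[1] - x[1] * y[0]]
    return [x, y, z]

def relative_rotation(frame_a, frame_b):
    """R = B * A^T maps frame-A coordinates to frame-B coordinates
    (frames given as lists of axis row vectors)."""
    return [[sum(frame_b[i][k] * frame_a[j][k] for k in range(3))
             for j in range(3)] for i in range(3)]
```

When the same normals are seen in both scans, the relative rotation is the identity, i.e. the camera attitude did not change.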
Abstract:
An electronic device for generating a corrected depth map is described. The electronic device includes a processor. The processor is configured to obtain a first depth map. The first depth map includes first depth information of a first portion of a scene sampled by a depth sensor at a first sampling. The processor is also configured to obtain a second depth map. The second depth map includes second depth information of a second portion of the scene sampled by the depth sensor at a second sampling. The processor is additionally configured to obtain displacement information indicative of a displacement of the depth sensor between the first sampling and the second sampling. The processor is also configured to generate a corrected depth map by correcting erroneous depth information based on the first depth information, the second depth information, and the displacement information.
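The correction step above can be sketched in a simplified form. A 1-D depth map, an integer-pixel displacement, and encoding erroneous depths as `None` are all simplifying assumptions; the abstract does not specify the correction algorithm.

```python
# Hedged sketch: fill invalid depths (None) in the first sampling's map from
# the second sampling's map, re-indexed by the sensor displacement between
# samplings so both maps refer to the same scene points. Assumed 1-D model.

def correct_depth_map(first, second, displacement_px):
    """Return a corrected copy of `first`, using `second` shifted by the
    displacement to repair erroneous (None) entries."""
    corrected = list(first)
    for i, d in enumerate(corrected):
        if d is None:
            j = i + displacement_px
            if 0 <= j < len(second) and second[j] is not None:
                corrected[i] = second[j]
    return corrected
```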
Abstract:
Methods, systems, and apparatuses are provided to compensate for a misalignment of optical devices within an imaging system. For example, the methods receive image data captured by a first optical device having a first optical axis and a second optical device having a second optical axis. The methods also receive sensor data indicative of a deflection of a substrate that supports the first and second optical devices. The deflection can result from a misalignment of the first optical axis relative to the second optical axis. The methods generate a depth value based on the captured image data and the sensor data. The depth value can reflect a compensation for the misalignment of the first optical axis relative to the second optical axis.
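The compensation described above can be illustrated with a standard stereo-triangulation sketch. The small-angle conversion of the measured deflection into a disparity offset, and all parameter values, are assumptions introduced here; the abstract does not disclose the compensation formula.

```python
# Illustrative sketch, not the patent's algorithm: stereo depth Z = f*B/d,
# where the substrate-deflection angle reported by the sensor data is
# converted to a disparity bias f*theta (small-angle assumption) and removed
# before triangulation.

def compensated_depth(disparity_px, focal_px, baseline_m, deflection_rad):
    """Depth (meters) after removing the disparity bias caused by the
    deflection-induced misalignment of the two optical axes."""
    corrected_disparity = disparity_px - focal_px * deflection_rad
    if corrected_disparity <= 0:
        raise ValueError("corrected disparity must be positive")
    return focal_px * baseline_m / corrected_disparity
```

With no deflection the formula reduces to ordinary stereo depth; a nonzero deflection shifts the disparity and, uncompensated, would bias the depth estimate.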
Abstract:
An electronic device is described. The electronic device includes a camera configured to capture an image of a scene. The electronic device also includes an image segmentation mapper configured to perform segmentation of the image based on image content to generate a plurality of image segments, each of the plurality of image segments associated with spatial coordinates indicative of a location of each segment in the scene. The electronic device further includes a memory configured to store the image and the spatial coordinates. The electronic device additionally includes a LIDAR (light detection and ranging) unit, the LIDAR unit steerable to selectively obtain depth values corresponding to at least a subset of the spatial coordinates. The electronic device further includes a depth mapper configured to generate a depth map of the scene based on the depth values and the spatial coordinates.
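The pipeline above, segment the image, steer the LIDAR to a subset of coordinates, and build a depth map from the sampled values, can be sketched as follows. Sampling one representative coordinate per segment and propagating its depth to the whole segment is an assumption; `query_lidar` is a stand-in for the steerable unit.

```python
# Minimal sketch under stated assumptions: query a (simulated) steerable LIDAR
# only at one representative coordinate per image segment, then propagate each
# sampled depth to every pixel of its segment to form the depth map.

def build_depth_map(segment_labels, segment_points, query_lidar):
    """segment_labels: 2-D list of segment ids per pixel.
    segment_points: dict segment id -> (row, col) coordinate to sample.
    query_lidar: callable (row, col) -> depth, standing in for the LIDAR unit."""
    sampled = {sid: query_lidar(*pt) for sid, pt in segment_points.items()}
    return [[sampled[sid] for sid in row] for row in segment_labels]

# Hypothetical 2x3 image with two segments and a fake LIDAR response table.
labels = [[0, 0, 1],
          [0, 1, 1]]
points = {0: (0, 0), 1: (1, 2)}
fake_lidar = lambda r, c: {(0, 0): 2.5, (1, 2): 7.0}[(r, c)]
depth_map = build_depth_map(labels, points, fake_lidar)
```

Only two LIDAR samples produce a dense depth map covering all six pixels, which is the benefit of steering the unit to segment coordinates rather than raster-scanning the whole scene.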