Abstract:
A method performed by an electronic device is described. The method includes obtaining a first image from a first camera, the first camera having a first focal length and a first field of view. The method also includes obtaining a second image from a second camera, the second camera having a second focal length and a second field of view disposed within the first field of view. The method further includes aligning at least a portion of the first image and at least a portion of the second image to produce aligned images. The method additionally includes fusing the aligned images based on a diffusion kernel to produce a fused image. The diffusion kernel indicates a threshold level over a gray level range. The method also includes outputting the fused image. The method may be performed for each of a plurality of frames of a video feed.
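The fusion step described above can be sketched as follows. This is a minimal illustration, assuming the diffusion kernel reduces to a per-pixel weight that ramps across the gray-level range around a threshold; the linear ramp and the parameter values are illustrative stand-ins, not the patented kernel.

```python
import numpy as np

def fuse_aligned(wide, tele, threshold=128.0, ramp=64.0):
    """Blend two aligned grayscale images. The weight map plays the role of
    the diffusion kernel's threshold over the gray-level range; the linear
    ramp here is a hypothetical stand-in for the actual kernel."""
    gray = wide.astype(float)
    # Weight ramps from 0 to 1 as the gray level crosses the threshold,
    # drawing bright regions more heavily from the second (tele) image.
    w = np.clip((gray - threshold) / ramp + 0.5, 0.0, 1.0)
    return (1.0 - w) * wide + w * tele

wide = np.full((4, 4), 100.0)
tele = np.full((4, 4), 200.0)
fused = fuse_aligned(wide, tele)
```

For video, the same call would simply be repeated per frame of the feed.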
Abstract:
Aspects of the present disclosure relate to systems and methods for active depth sensing. An example apparatus configured to perform active depth sensing includes a projector. The projector is configured to emit a first distribution of light during a first time and emit a second distribution of light different from the first distribution of light during a second time. A set of final depth values of one or more objects in a scene is based on one or more reflections of the first distribution of light and one or more reflections of the second distribution of light. The projector may include a laser array, and the apparatus may be configured to switch between a first plurality of lasers of the laser array to emit light during the first time and a second plurality of lasers to emit light during the second time.
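One way the "set of final depth values" could be formed from the two emissions is sketched below: each distribution yields a per-pixel depth estimate with a confidence, and the higher-confidence estimate wins. This combination rule is a plausible illustration only; the abstract does not specify it.

```python
import numpy as np

def final_depth(depth_a, conf_a, depth_b, conf_b):
    """Combine per-pixel depth maps decoded from reflections of the two
    projected distributions, keeping the higher-confidence estimate.
    (A hypothetical rule; the patent leaves the combination open.)"""
    return np.where(conf_a >= conf_b, depth_a, depth_b)

# Depth from the first distribution's reflections, with confidences.
da, ca = np.array([1.0, 5.0]), np.array([0.9, 0.1])
# Depth from the second distribution's reflections.
db, cb = np.array([2.0, 6.0]), np.array([0.2, 0.8])
out = final_depth(da, ca, db, cb)
```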
Abstract:
Devices and methods are described for providing seamless preview images for multi-camera devices having two or more asymmetric cameras. A multi-camera device may include two asymmetric cameras disposed to image a target scene. The multi-camera device further includes a processor coupled to a memory component and a display, the processor configured to retrieve an image generated by a first camera from the memory component, retrieve an image generated by a second camera from the memory component, receive input corresponding to a preview zoom level, retrieve spatial transform information and photometric transform information from memory, modify at least one image received from the first and second cameras by the spatial transform and the photometric transform, and provide on the display a preview image comprising at least a portion of the at least one modified image and a portion of either the first image or the second image based on the preview zoom level.
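The photometric-matching and zoom-based handover described above can be sketched as follows. The gain/offset model and the 2.0x crossover point are illustrative assumptions; the abstract specifies neither.

```python
import numpy as np

def photometric_transform(img, gain, offset):
    """Match the brightness/contrast of one camera's image to the other's
    using a simple gain/offset model (an assumed form of the photometric
    transform information retrieved from memory)."""
    return np.clip(gain * img.astype(float) + offset, 0.0, 255.0)

def preview_image(wide, tele_modified, zoom, crossover=2.0):
    """Select the preview source by zoom level; `crossover` is a
    hypothetical handover point between the asymmetric cameras."""
    return tele_modified if zoom >= crossover else wide

wide = np.zeros((2, 2))
tele = photometric_transform(np.full((2, 2), 100.0), gain=1.1, offset=5.0)
```

A seamless preview would additionally blend the two sources near the crossover rather than switching abruptly, using the spatial transform to align them first.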
Abstract:
Generation of a structured light three-dimensional (3D) depth map based on content filtering is disclosed. In a particular embodiment, a method includes receiving, at a receiver device, image data that corresponds to a structured light image. The method further includes processing the image data to decode depth information based on a pattern of projected coded light. The depth information corresponds to a depth map. The method also includes performing one or more filtering operations on the image data. An output of the one or more filtering operations includes filtered image data. The method further includes performing a comparison of the depth information to the filtered image data and modifying the depth information based on the comparison to generate a modified depth map.
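The comparison-and-modification step can be sketched as follows. Here a 3x3 local-range filter supplies the filtered image data, and depth is invalidated (set to 0) wherever local contrast is too low to decode the pattern reliably. Both the filter and the threshold are illustrative choices, not taken from the patent.

```python
import numpy as np

def modify_depth(depth, image, contrast_threshold=10.0):
    """Invalidate depth in low-contrast regions of the image (a sketch of
    content filtering; filter and threshold are hypothetical)."""
    img = image.astype(float)
    h, w = img.shape
    padded = np.pad(img, 1, mode='edge')
    # Stack the nine 3x3-neighborhood shifts and take per-pixel max/min;
    # their difference is the local contrast (the filtered image data).
    shifts = np.stack([padded[i:i + h, j:j + w]
                       for i in range(3) for j in range(3)])
    contrast = shifts.max(axis=0) - shifts.min(axis=0)
    # Compare depth against the filtered data: keep it only where the
    # scene has enough texture for the decoded pattern to be trusted.
    return np.where(contrast >= contrast_threshold, depth, 0)

depth = np.full((3, 3), 7)
flat = np.full((3, 3), 50.0)  # textureless region: depth unreliable
textured = np.array([[0, 100, 0], [100, 0, 100], [0, 100, 0]], dtype=float)
```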
Abstract:
A method includes identifying one or more codewords of a bit sequence that fail to satisfy at least one codeword constraint. The method also includes removing the one or more codewords from the bit sequence to generate a punctured bit sequence. The method further includes determining whether the punctured bit sequence is symmetric. The method includes, in response to determining that the punctured bit sequence is symmetric, generating a Hermitian symmetric codebook primitive based at least in part on the punctured bit sequence, where the Hermitian symmetric codebook primitive is usable to form a diffractive optical element (DOE) of a structured light depth sensing system.
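The puncturing and symmetry-check steps can be sketched as below. The codeword constraint is left unspecified in the abstract, so a caller-supplied predicate stands in for it, and a palindrome test stands in for the symmetry determination.

```python
def puncture(bits, word_len, satisfies):
    """Split the bit sequence into fixed-length codewords and drop those
    failing the constraint. `satisfies` is a hypothetical stand-in for the
    patent's codeword constraints."""
    words = [bits[i:i + word_len] for i in range(0, len(bits), word_len)]
    return ''.join(w for w in words if satisfies(w))

def is_symmetric(bits):
    # Palindromic bit sequences count as symmetric in this sketch;
    # such symmetry is what permits a Hermitian symmetric primitive.
    return bits == bits[::-1]

# Example: keep only codewords containing at least one '1'.
punctured = puncture('110011', 2, lambda w: '1' in w)
```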
Abstract:
A method operational on a transmitter device is provided for projecting a composite code mask. A composite code mask on a tangible medium is obtained, where the composite code mask includes a code layer combined with a carrier layer. The code layer may include uniquely identifiable spatially-coded codewords defined by a plurality of symbols. The carrier layer may be independently ascertainable and distinct from the code layer and includes a plurality of reference objects that are robust to distortion upon projection. At least one of the code layer and carrier layer may be pre-shaped by a synthetic point spread function prior to projection. At least a portion of the composite code mask is projected, by the transmitter device, onto a target object to help a receiver ascertain depth information for the target object with a single projection of the composite code mask.
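The combination of code layer and carrier layer can be illustrated as below, with both layers as binary arrays and an element-wise maximum as the combination. The abstract does not fix the combination method, so this is one simple possibility.

```python
import numpy as np

def composite_code_mask(code_layer, carrier_layer):
    """Combine a binary code layer of spatially-coded codewords with a
    carrier layer of distortion-robust reference marks. Element-wise
    maximum (logical OR for binary layers) is an illustrative choice."""
    return np.maximum(code_layer, carrier_layer)

code = np.array([[0, 1], [1, 0]])      # codeword symbols
carrier = np.array([[1, 0], [1, 0]])   # reference objects
mask = composite_code_mask(code, carrier)
```

Pre-shaping by a synthetic point spread function would then be applied to one or both layers before the mask is projected.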
Abstract:
A device for image processing includes an optical receiver configured to receive a reflection of a coded pattern from an object to generate an image, and processing circuitry. The processing circuitry is configured to determine an estimated position of zero order light in the image, determine a spatial region of the coded pattern that corresponds to a position of the zero order light in the coded pattern, map the spatial region to the estimated position of the zero order light in the image to generate a corrected image, and generate a depth map for the coded pattern based on the corrected image.
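The mapping step can be sketched as an alignment between the zero-order position estimated in the captured image and its known position in the coded pattern. An integer circular shift stands in here for the actual correction; positions and the shift model are illustrative.

```python
import numpy as np

def correct_zero_order(image, estimated_pos, pattern_pos):
    """Shift the captured image so the estimated zero-order position lines
    up with the zero-order region's known position in the coded pattern
    (np.roll as a hypothetical stand-in for the mapping)."""
    dy = pattern_pos[0] - estimated_pos[0]
    dx = pattern_pos[1] - estimated_pos[1]
    return np.roll(image, shift=(dy, dx), axis=(0, 1))

img = np.zeros((4, 4))
img[1, 1] = 1.0  # bright spot: estimated zero-order light in the image
corrected = correct_zero_order(img, estimated_pos=(1, 1), pattern_pos=(2, 3))
```

Depth-map generation would then decode the corrected image against the coded pattern as usual.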