Abstract:
Systems, apparatus, and methods for generating a fused depth map from one or more individual depth maps, wherein the fused depth map is configured to provide robust depth estimation for points within the depth map. The methods, apparatus, or systems may comprise components that identify a field of view (FOV) of an imaging device configured to capture an image of the FOV and that select a first depth sensing method. The system or method may sense a depth of the FOV with respect to the imaging device using the first selected depth sensing method and generate a first depth map of the FOV based on the depth sensed by the first selected depth sensing method. The system or method may also identify a region of one or more points of the first depth map having one or more inaccurate depth measurements and determine whether additional depth sensing is needed.
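The fusion and re-sensing decision described above can be sketched in a few lines. This is a minimal illustration, not the patented method: the function names, the per-pixel confidence maps, and the confidence-threshold rule for flagging regions that need an additional sensing pass are all assumptions introduced for the example.

```python
import numpy as np

def fuse_depth_maps(depth_a, conf_a, depth_b, conf_b, conf_threshold=0.5):
    """Fuse two depth maps, keeping the higher-confidence measurement per point.

    Returns the fused map and a boolean mask of points whose best available
    confidence is still low, i.e. regions that may need additional sensing.
    (Illustrative scheme only; the abstract does not specify the fusion rule.)
    """
    fused = np.where(conf_a >= conf_b, depth_a, depth_b)
    best_conf = np.maximum(conf_a, conf_b)
    needs_resense = best_conf < conf_threshold
    return fused, needs_resense
```

A caller could iterate: sense with a first method, fuse, then trigger a second depth sensing method only for the flagged region.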
Abstract:
Exemplary embodiments are directed to configurable demodulation of image data produced by an image sensor. In some aspects, a method includes receiving information indicating a configuration of the image sensor. In some aspects, the information may indicate a configuration of sensor elements and/or corresponding color filters for the sensor elements. A modulation function may then be generated based on the information. In some aspects, the method also includes demodulating the image data based on the generated modulation function to determine chrominance and luminance components of the image data, and generating a second image based on the determined chrominance and luminance components.
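As a rough sketch of the idea: in a color-filter-array mosaic, luminance sits at baseband while chrominance rides on spatial carriers determined by the filter layout, so it can be recovered by multiplying by the carrier and low-pass filtering. The carrier below assumes a simple 2x2 Bayer-type checkerboard; the function names, the crude box filter, and the single-chroma simplification are all assumptions for illustration, not the patent's demodulation design.

```python
import numpy as np

def modulation_function(shape):
    """Carrier (-1)^(x+y) that locates one chrominance component for a
    checkerboard (Bayer-type) color filter layout. Simplified assumption."""
    y, x = np.indices(shape)
    return (-1.0) ** (x + y)

def demodulate(mosaic, carrier, radius=2):
    """Split a CFA mosaic into a baseband luma estimate and one chroma
    component: multiply by the carrier to shift chroma to DC, then low-pass.
    A real pipeline would use filters designed per sensor configuration."""
    def box_blur(img, r):
        k = 2 * r + 1
        pad = np.pad(img, r, mode="edge")
        out = np.zeros(img.shape, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    luma = box_blur(mosaic, radius)              # baseband estimate
    chroma = box_blur(mosaic * carrier, radius)  # demodulated component
    return luma, chroma
```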
Abstract:
Aspects relate to an array camera exhibiting little or no parallax artifacts in captured images. For example, the planes of the central prism of the array camera can intersect at an apex defining the vertical axis of symmetry of the system. The apex can serve as a point of intersection for the optical axes of the sensors in the array. Each sensor in the array “sees” a portion of the image scene using a corresponding facet of the central prism, and accordingly each individual sensor/facet pair represents only a sub-aperture of the total array camera. The complete array camera has a synthetic aperture generated based on the sum of all individual aperture rays.
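The geometric condition — facet planes sharing a common apex point — can be checked numerically. The sketch below, with illustrative plane parameters, finds the least-squares common point of a set of planes written as n_i · x = d_i; when the planes truly intersect at an apex, the residual is essentially zero. This is an aid to understanding the stated geometry, not anything specified in the abstract.

```python
import numpy as np

def find_apex(normals, offsets):
    """Least-squares point x minimizing sum_i (n_i . x - d_i)^2 over the
    given facet planes. If the planes share a common apex, that apex is
    returned exactly (up to floating point)."""
    A = np.asarray(normals, dtype=float)
    b = np.asarray(offsets, dtype=float)
    apex, *_ = np.linalg.lstsq(A, b, rcond=None)
    return apex
```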
Abstract:
A method performed by an electronic device is described. The method includes obtaining a first image from a first camera, the first camera having a first focal length and a first field of view. The method also includes obtaining a second image from a second camera, the second camera having a second focal length and a second field of view disposed within the first field of view. The method further includes aligning at least a portion of the first image and at least a portion of the second image to produce aligned images. The method additionally includes fusing the aligned images based on a diffusion kernel to produce a fused image. The diffusion kernel indicates a threshold level over a gray level range. The method also includes outputting the fused image. The method may be performed for each of a plurality of frames of a video feed.
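A rough sketch of the fusion step: the diffusion kernel can be read as a weighting that blends the aligned telephoto detail into the wide image where gray levels agree (below the threshold) and suppresses it where they diverge, avoiding ghosting. The exponential falloff, function names, and the ROI-based blending are assumptions for illustration; the abstract does not specify the kernel's shape.

```python
import numpy as np

def diffusion_weight(gray_diff, threshold=30.0):
    """Map a gray-level difference to a blend weight in (0, 1]:
    full diffusion below the threshold, exponential suppression above it.
    (Assumed kernel shape, not the patented one.)"""
    return np.exp(-np.maximum(gray_diff - threshold, 0.0) / threshold)

def fuse(wide_roi, tele, threshold=30.0):
    """Blend an aligned telephoto image into the matching wide-angle ROI."""
    diff = np.abs(wide_roi.astype(float) - tele.astype(float))
    w = diffusion_weight(diff, threshold)
    return (1.0 - w) * wide_roi + w * tele
```

For video, the same per-pixel blend would simply be applied to each aligned frame pair of the feed.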
Abstract:
Described herein are methods and devices that employ a plurality of image sensors to capture a target image of a scene. As described, positioning at least one reflective or refractive surface near the plurality of image sensors enables the sensors, by using the reflective or refractive surface to guide a portion of the image scene to each sensor, to capture together an image having a wider field of view and a longer focal length than any single sensor could capture individually. The different portions of the scene captured by the sensors may overlap, and may be aligned and cropped to generate the target image.
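The align-and-crop step can be sketched for the simplest case of two horizontally adjacent portions: estimate how many columns overlap by comparing the trailing edge of one portion with the leading edge of the other, then drop the duplicated columns when concatenating. A real system would match 2-D features and blend seams; the function names and the 1-D search are assumptions for illustration.

```python
import numpy as np

def find_overlap(left, right, max_shift=None):
    """Estimate the column overlap between two horizontally adjacent views
    by minimizing the mean absolute difference of candidate overlap strips."""
    h, w = left.shape
    max_shift = max_shift or w // 2
    best, best_err = 1, np.inf
    for k in range(1, max_shift + 1):
        err = np.mean(np.abs(left[:, w - k:] - right[:, :k]))
        if err < best_err:
            best, best_err = k, err
    return best

def stitch(left, right):
    """Crop the duplicated overlap from the right portion and concatenate."""
    k = find_overlap(left, right)
    return np.hstack([left, right[:, k:]])
```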
Abstract:
Aspects relate to an array camera exhibiting little or no parallax artifacts in captured images. For example, the planes of the central mirror surfaces of the array camera can be located at a midpoint along, and orthogonally to, a line between the corresponding camera location and the virtual camera location. Accordingly, the cones of all of the cameras in the array appear as if coming from the virtual camera location after folding by the mirrors. Each sensor in the array “sees” a portion of the image scene using a corresponding facet of the central mirror prism, and accordingly each individual sensor/mirror pair represents only a sub-aperture of the total array camera. The complete array camera has a synthetic aperture generated based on the sum of all individual aperture rays.
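The mirror-placement rule above has a direct geometric reading: a plane through the midpoint of the segment from the real camera to the virtual camera, orthogonal to that segment, reflects the real camera exactly onto the virtual camera — which is why every folded view appears to come from the one virtual location. The sketch below illustrates that construction; all names are mine, not the patent's.

```python
import numpy as np

def mirror_plane(camera, virtual_camera):
    """Plane at the midpoint of, and orthogonal to, the segment from the
    real camera location to the virtual camera location.
    Returns (point_on_plane, unit_normal)."""
    camera = np.asarray(camera, dtype=float)
    virtual_camera = np.asarray(virtual_camera, dtype=float)
    midpoint = (camera + virtual_camera) / 2.0
    n = virtual_camera - camera
    return midpoint, n / np.linalg.norm(n)

def reflect(point, plane_point, normal):
    """Reflect a point across the plane (plane_point, unit normal)."""
    d = np.dot(point - plane_point, normal)
    return point - 2.0 * d * normal
```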
Abstract:
One innovation includes an IR sensor having an array of sensor pixels to convert light into current. Each sensor pixel of the array includes a photodetector region; a lens configured to focus light into the photodetector region, the lens adjacent to the photodetector region so that light propagates through the lens and into the photodetector region; and a substrate disposed with the photodetector region between the substrate and the lens, the substrate having one or more transistors formed therein. The sensor also includes one or more reflective structures positioned between at least a portion of the substrate and at least a portion of the photodetector region, such that at least a portion of the photodetector region is between the one or more reflective structures and the lens, the one or more reflective structures configured to reflect light that has passed through at least a portion of the photodetector region back into the photodetector region.
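The benefit of the reflective structures can be seen with a simple Beer-Lambert estimate: light not absorbed on the first pass through the photodetector is reflected back for a second pass, increasing the absorbed fraction. This is a generic physics sketch under assumed parameters, not the patent's device model.

```python
import math

def absorbed_fraction(alpha, thickness, reflector=False, reflectivity=1.0):
    """Fraction of incident light absorbed in a photodetector layer.

    Single pass (Beer-Lambert): 1 - exp(-alpha * d). With a back
    reflector, the transmitted remainder makes a second absorbing pass.
    """
    single = 1.0 - math.exp(-alpha * thickness)
    if not reflector:
        return single
    transmitted = math.exp(-alpha * thickness)
    return single + reflectivity * transmitted * single
```

Note that with an ideal reflector the double pass through thickness d absorbs as much as a single pass through 2d, which is the motivation for placing the reflective structures behind a thin photodetector region.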
Abstract:
A method operational on a receiver device for decoding a codeword is provided. At least a portion of a composite code mask, projected on the surface of a target object, is obtained via a receiver sensor. The composite code mask may be defined by a code layer and a carrier layer. The code layer of uniquely identifiable spatially-coded codewords may be defined by a plurality of symbols. The carrier layer may be independently ascertainable and distinct from the code layer and may include a plurality of reference objects that are robust to distortion upon projection. At least one of the code layer and carrier layer may have been pre-shaped by a synthetic point spread function prior to projection. The code layer may be adjusted, at a processing circuit, for distortion based on the reference objects within the portion of the composite code mask.
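The distortion-adjustment step can be sketched as follows: once the carrier layer's reference objects have been located in the captured image, the transform between their nominal and observed positions can be estimated and inverted to map the code layer back onto its nominal grid. The affine model, least-squares fit, and all names are assumptions for illustration; the abstract does not specify the correction model.

```python
import numpy as np

def estimate_affine(expected, observed):
    """Least-squares affine transform mapping nominal reference-object
    positions to their observed (distorted) positions.
    Returns a 3x2 parameter matrix: observed = [x y 1] @ params."""
    expected = np.asarray(expected, dtype=float)
    observed = np.asarray(observed, dtype=float)
    A = np.hstack([expected, np.ones((len(expected), 1))])
    params, *_ = np.linalg.lstsq(A, observed, rcond=None)
    return params

def undistort(points, params):
    """Map observed points back to the code layer's nominal grid by
    inverting the fitted affine transform."""
    M = params[:2].T   # 2x2 linear part
    t = params[2]      # translation
    return (np.asarray(points, dtype=float) - t) @ np.linalg.inv(M).T
```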