Abstract:
Structured light active sensing systems transmit and receive spatial codes to generate depth maps. Spatial codes cannot be repeated within a disparity range if they are to be uniquely identified, which forces single-transmitter/single-receiver systems to use large numbers of codes: reflected ray traces from two object locations may be focused onto the same location of the receiver sensor, making it impossible to determine which object location reflected the code. With a second receiver, however, the original code location may be uniquely identified, because ray traces from the two object locations that focus onto the same location of the first receiver sensor may focus onto different locations on the second receiver sensor. Described herein are active sensing systems and methods that use two receivers to uniquely identify original code positions and allow for greater code reuse.
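As a toy illustration of this two-receiver disambiguation (the focal length, baselines, and code positions below are assumed for the sketch, not taken from the abstract), the same code reused at two pattern positions yields two positive-depth hypotheses from the first receiver alone; only the true origin also predicts the second receiver's detection:

    # Toy geometry: transmitter at x = 0; receivers at baselines B1, B2.
    F = 500.0            # focal length in pixels (assumed)
    B1, B2 = 0.05, 0.12  # transmitter-to-receiver baselines in meters (assumed)

    def image_x(code_x, depth, baseline):
        # Pixel where a code projected at code_x appears in a receiver at
        # the given baseline, for an object at the given depth.
        return code_x - F * baseline / depth

    candidates = [150.0, 160.0]           # the same code reused at two positions
    true_code_x, true_depth = 160.0, 1.5  # ground truth for the simulation

    seen1 = image_x(true_code_x, true_depth, B1)  # detection in receiver 1
    seen2 = image_x(true_code_x, true_depth, B2)  # detection in receiver 2

    for code_x in candidates:
        depth = F * B1 / (code_x - seen1)  # depth implied by receiver 1
        # Both candidates imply a plausible positive depth (3.75 m and 1.5 m),
        # so receiver 1 alone is ambiguous; only the true origin also
        # predicts receiver 2's detection.
        if abs(image_x(code_x, depth, B2) - seen2) < 0.5:
            print(f"code origin {code_x}, depth {depth:.2f} m")

Only "code origin 160.0, depth 1.50 m" survives the second receiver's check, so the code can safely be reused at the other position.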
Abstract:
Described herein are methods and devices that employ a plurality of image sensors to capture a target image of a scene. As described, positioning at least one reflective or refractive surface near the plurality of image sensors enables the sensors together to capture an image with a wider field of view and a longer focal length than any sensor could capture individually, the reflective or refractive surface guiding a portion of the image scene to each sensor. The different portions of the scene captured by the sensors may overlap, and may be aligned and cropped to generate the target image.
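A minimal align-and-crop sketch (NumPy, with synthetic data; the abstract does not specify the registration method, so the shift search below is an assumed stand-in): two sensor views with horizontal overlap are aligned by finding the column shift that best matches the overlap, then cropped and joined.

    import numpy as np

    def stitch_pair(left, right, max_shift=40):
        # Find the overlap width s that minimizes the mismatch between the
        # last s columns of `left` and the first s columns of `right`.
        best_shift, best_err = 0, np.inf
        for s in range(1, max_shift + 1):
            err = np.mean((left[:, -s:] - right[:, :s]) ** 2)
            if err < best_err:
                best_shift, best_err = s, err
        # Crop the duplicated overlap from `right` and join the two views.
        return np.hstack([left, right[:, best_shift:]])

    scene = np.random.rand(64, 200)
    a, b = scene[:, :120], scene[:, 100:]  # two views with a 20-column overlap
    merged = stitch_pair(a, b)
    print(merged.shape)  # (64, 200): the overlap was recovered exactly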
Abstract:
Aspects relate to methods and systems for producing ultra-wide field-of-view images. In some embodiments, an image capture system for capturing wide field-of-view images comprises an aperture; a central camera positioned to receive light through the aperture, the central camera having an optical axis; a plurality of periphery cameras arranged around the central camera and pointed towards a portion of its optical axis; and a plurality of extendible reflectors. The reflectors are configured to move from a first position to a second position and have a mirrored first surface that faces away from the optical axis of the central camera and a black second surface that faces towards that optical axis.
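An assumed-geometry sketch of why the deployed mirrors widen the view (the 30-degree tilt and directions below are illustrative, not from the abstract): reflecting a periphery camera's inward-pointing view direction across its reflector plane yields an outward, forward-looking virtual view direction.

    import numpy as np

    def reflect(d, n):
        # Reflect direction d across a plane with unit normal n.
        return d - 2.0 * np.dot(d, n) * n

    # Central camera's optical axis runs along +z; a periphery camera beside
    # it points straight toward that axis (-x), per the described arrangement.
    periphery_dir = np.array([-1.0, 0.0, 0.0])

    # Assumed reflector tilt of 30 degrees, mirrored surface facing away
    # from the central optical axis.
    a = np.pi / 6
    mirror_normal = np.array([np.cos(a), 0.0, np.sin(a)])

    print(reflect(periphery_dir, mirror_normal))
    # [0.5, 0, ~0.866]: the virtual view points outward (+x) and forward
    # (+z), extending the combined field of view past the central camera's.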
Abstract:
Devices and methods for providing seamless preview images for multi-camera devices having two or more asymmetric cameras. A multi-camera device may include two asymmetric cameras disposed to image a target scene. The multi-camera device further includes a processor coupled to a memory component and a display, the processor configured to retrieve a first image generated by a first camera from the memory component, retrieve a second image generated by a second camera from the memory component, receive input corresponding to a preview zoom level, retrieve spatial transform information and photometric transform information from memory, modify at least one of the first and second images using the spatial transform and the photometric transform, and provide on the display a preview image comprising at least a portion of the at least one modified image and a portion of either the first image or the second image, based on the preview zoom level.
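A simplified sketch of such a handoff (the switchover ratio, transforms, and compositing rule below are assumed for illustration; the abstract does not specify them): below the switchover zoom, the wide camera feeds the preview alone; at and above it, the wide image is spatially and photometrically matched to the tele camera and composited with the tele image so the displayed preview does not jump.

    import numpy as np

    SWITCH_ZOOM = 2.0  # assumed focal-length ratio between the two cameras

    def center_crop_zoom(img, zoom):
        # Spatial-transform stand-in: crop the central 1/zoom region and
        # resize back to full frame by pixel repetition.
        h, w = img.shape
        ch, cw = int(h / zoom), int(w / zoom)
        y0, x0 = (h - ch) // 2, (w - cw) // 2
        crop = img[y0:y0 + ch, x0:x0 + cw]
        ys = np.linspace(0, ch - 1, h).astype(int)
        xs = np.linspace(0, cw - 1, w).astype(int)
        return crop[np.ix_(ys, xs)]

    def preview(wide, tele, zoom, gain=1.08, offset=-0.01):
        # Assumes both frames share one resolution; gain/offset is an
        # assumed linear photometric match between the cameras.
        if zoom < SWITCH_ZOOM:
            return center_crop_zoom(wide, zoom)  # wide camera alone
        matched = gain * center_crop_zoom(wide, SWITCH_ZOOM) + offset
        out = matched.copy()
        h, w = tele.shape
        # Tele supplies the central region; the matched wide image fills
        # the margins, so the composite changes smoothly across the handoff.
        out[h // 8:-h // 8, w // 8:-w // 8] = \
            center_crop_zoom(tele, zoom / SWITCH_ZOOM)[h // 8:-h // 8, w // 8:-w // 8]
        return out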
Abstract:
Various embodiments are directed to an image sensor that includes a first sensor portion and a second sensor portion coupled to the first sensor portion. The second sensor portion may be positioned relative to the first sensor portion so that the second sensor portion initially detects light entering the image sensor, and some of that light passes through the second sensor portion and is detected by the first sensor portion. In some embodiments, the second sensor portion may be configured to have a thickness suitable for sensing visible light. The first sensor portion may be configured to have a thickness suitable for sensing IR or NIR light. As a result of the arrangement and structure of the two sensor portions, the image sensor captures substantially more of the light entering it.
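A back-of-envelope Beer-Lambert sketch of why the stacking works (the thicknesses are assumed and the silicon absorption depths are rough literature values, not figures from the abstract): a thin top portion absorbs most visible light while passing most NIR, which a thicker bottom portion then collects.

    import math

    # Approximate absorption depths in silicon, micrometers per wavelength.
    absorption_depth_um = {450: 0.4, 550: 1.6, 650: 3.3, 850: 18.0}
    top_um, bottom_um = 3.0, 30.0  # assumed thicknesses of the two portions

    for wavelength, L in absorption_depth_um.items():
        in_top = 1.0 - math.exp(-top_um / L)  # absorbed in the top portion
        in_bottom = math.exp(-top_um / L) * (1.0 - math.exp(-bottom_um / L))
        print(f"{wavelength} nm: top {in_top:.0%}, bottom {in_bottom:.0%}")

With these numbers, roughly 85-100% of blue/green light stops in the thin top portion, while about two thirds of the 850 nm light reaches and is absorbed by the thick bottom portion.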
Abstract:
One innovation includes an IR sensor having an array of sensor pixels that convert light into current. Each sensor pixel of the array includes a photodetector region; a lens configured to focus light into the photodetector region, the lens adjacent to the photodetector region so that light propagates through the lens and into the photodetector region; and a substrate disposed such that the photodetector region is between the substrate and the lens, the substrate having one or more transistors formed therein. The sensor also includes one or more reflective structures positioned between at least a portion of the substrate and at least a portion of the photodetector region, such that at least a portion of the photodetector region is between the one or more reflective structures and the lens, the one or more reflective structures configured to reflect light that has passed through at least a portion of the photodetector region back into the photodetector region.
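A worked sketch of the reflector's benefit (the thickness and absorption depth are assumed illustrative values): the mirror under the photodetector roughly doubles the optical path, so a thin region absorbs 1 - exp(-2t/L) of the IR light instead of 1 - exp(-t/L).

    import math

    t_um = 3.0   # assumed photodetector thickness
    L_um = 18.0  # approximate silicon absorption depth near 850 nm

    single_pass = 1.0 - math.exp(-t_um / L_um)
    double_pass = 1.0 - math.exp(-2.0 * t_um / L_um)
    print(f"single pass: {single_pass:.1%}, with reflector: {double_pass:.1%}")
    # ~15.4% without the reflector vs ~28.3% with it: nearly double the
    # collected IR signal from the same photodetector thickness.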
Abstract:
Certain aspects relate to systems and techniques for full well capacity extension. For example, a storage capacitor included in the pixel readout architecture can enable multiple charge dumps from a pixel in the analog domain, extending the full well capacity of the pixel. Further, multiple reads can be integrated in the digital domain using a memory, for example, DRAM, in communication with the pixel readout architecture. This can also effectively multiply a small pixel's full well capacity. In some examples, multiple reads in the digital domain can be used to reduce, eliminate, or compensate for kTC noise in the pixel readout architecture.
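A numeric sketch of the two multipliers (all values below are assumed, not from the abstract): each analog charge dump moves a full pixel's worth of charge onto the storage capacitor, and each digitized read is then accumulated in memory.

    pixel_full_well_e = 4000  # assumed small-pixel full well, in electrons
    analog_dumps = 4          # charge transfers to the storage capacitor
    digital_reads = 8         # reads integrated in the digital domain

    per_read_e = pixel_full_well_e * analog_dumps
    effective_fwc_e = per_read_e * digital_reads
    print(f"effective full well: {effective_fwc_e} e-")  # 128000 e-

    # One way multiple digital reads address kTC noise: subtract a digitized
    # reset sample from each signal sample (digital CDS) to cancel the reset
    # noise, and let averaging shrink the uncorrelated residual by sqrt(N).
    residual_rms_e = 1.5
    print(f"residual after {digital_reads} reads: "
          f"{residual_rms_e / digital_reads ** 0.5:.2f} e- rms")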
Abstract:
Systems, apparatus, and methods for generating a fused depth map from one or more individual depth maps, wherein the fused depth map is configured to provide robust depth estimation for points within the depth map. The methods, apparatus, or systems may comprise components that identify the field of view (FOV) of an imaging device configured to capture an image of the FOV, and that select a first depth sensing method. The system or method may sense a depth of the FOV with respect to the imaging device using the first selected depth sensing method and generate a first depth map of the FOV based on the sensed depth. The system or method may also identify a region of one or more points of the first depth map having one or more inaccurate depth measurements and determine whether additional depth sensing is needed.
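A simplified fusion sketch (the two methods, confidence maps, and threshold are assumed; the abstract does not fix the fusion rule): start from the first method's depth map, flag low-confidence points, fill them from a second method where it is more trustworthy, and report what still needs additional sensing.

    import numpy as np

    def fuse_depth(depth1, conf1, depth2, conf2, threshold=0.5):
        # Keep method 1 where it is confident; in the inaccurate region,
        # take method 2 only where it reports higher confidence.
        fused = depth1.copy()
        bad = conf1 < threshold        # region flagged as inaccurate
        use2 = bad & (conf2 > conf1)   # second method is more trustworthy
        fused[use2] = depth2[use2]
        needs_more = bad & ~use2       # neither method is reliable here
        return fused, needs_more

    d1, c1 = np.full((4, 4), 2.0), np.random.rand(4, 4)
    d2, c2 = np.full((4, 4), 2.2), np.random.rand(4, 4)
    fused, todo = fuse_depth(d1, c1, d2, c2)
    print(todo.sum(), "points still need additional depth sensing")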
Abstract:
Aspects relate to an array camera exhibiting few or no parallax artifacts in captured images. For example, the planes of the central mirror prism of the array camera can intersect at an apex defining the vertical axis of symmetry of the system. The apex can serve as a point of intersection for the optical axes of the sensors in the array. Each sensor in the array “sees” a portion of the image scene using a corresponding facet of the central mirror prism, and accordingly each individual sensor/mirror pair represents only a sub-aperture of the total array camera. The complete array camera has a synthetic aperture generated from the sum of all the individual aperture rays.
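A related geometric check (coordinates below are assumed; one common way to state the no-parallax condition is that all sensors share a single virtual center of projection): reflecting each sensor's center across its mirror facet gives that sensor's virtual camera, and when every virtual center lands on the same point the array images the scene as one camera.

    import numpy as np

    def reflect_point(p, plane_point, n):
        # Mirror point p across the plane through plane_point with unit normal n.
        return p - 2.0 * np.dot(p - plane_point, n) * n

    apex = np.array([0.0, 0.0, 1.0])  # the prism's facet planes meet here

    # Two facets meeting at the apex (assumed 45-degree tilts), with the
    # matching sensors placed symmetrically opposite them.
    facets = [
        (apex, np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)),
        (apex, np.array([-1.0, 0.0, 1.0]) / np.sqrt(2.0)),
    ]
    sensors = [np.array([1.0, 0.0, 1.0]), np.array([-1.0, 0.0, 1.0])]

    for (plane_point, n), s in zip(facets, sensors):
        print(reflect_point(s, plane_point, n))
    # Both sensors reflect to the same virtual center [0, 0, 0], so this
    # toy arrangement would capture without parallax between sub-apertures.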