Abstract:
Various embodiments are directed to an image sensor that includes a first sensor portion and a second sensor portion. The second sensor portion may be positioned relative to the first sensor portion such that the second sensor portion may initially detect light entering the image sensor, and some of that light passes through the second sensor portion and may be detected by the first sensor portion. In some embodiments, one or more optical filters may be disposed within the image sensor. The one or more optical filters may include at least one of a dual bandpass filter disposed above the second photodetector or a narrow bandpass filter disposed between the first photodetector and the second photodetector.
Abstract:
One innovation includes an IR sensor having an array of sensor pixels configured to convert light into current. Each sensor pixel of the array includes a photodetector region, a lens configured to focus light into the photodetector region, the lens being adjacent to the photodetector region so that light propagates through the lens and into the photodetector region, and a substrate disposed so that the photodetector region is between the substrate and the lens, the substrate having one or more transistors formed therein. The sensor also includes one or more reflective structures positioned between at least a portion of the substrate and at least a portion of the photodetector region, such that at least a portion of the photodetector region is between the one or more reflective structures and the lens, the one or more reflective structures being configured to reflect light that has passed through at least a portion of the photodetector region back into the photodetector region.
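As a rough, hedged illustration of why a back reflector helps a thin IR pixel, the sketch below applies the Beer-Lambert relation to compare single-pass absorption with an ideal double pass; the absorption coefficient and thickness are assumed, order-of-magnitude values and are not taken from the abstract.

import math

# Hedged sketch: estimate the fraction of near-infrared light absorbed in a
# thin photodetector region, with and without a back reflector that sends
# unabsorbed light back through the region. The values below are assumed,
# representative numbers only.
ALPHA_NIR_PER_UM = 0.01   # assumed Si absorption coefficient near 940 nm (1/um)
THICKNESS_UM = 3.0        # assumed photodetector-region thickness (um)

def absorbed_fraction(path_um: float, alpha_per_um: float) -> float:
    """Beer-Lambert absorption over a given optical path length."""
    return 1.0 - math.exp(-alpha_per_um * path_um)

single_pass = absorbed_fraction(THICKNESS_UM, ALPHA_NIR_PER_UM)
double_pass = absorbed_fraction(2 * THICKNESS_UM, ALPHA_NIR_PER_UM)  # ideal reflector
print(f"single pass: {single_pass:.3f}, with reflector: {double_pass:.3f}")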
Abstract:
A high dynamic range solid state image sensor and camera system are disclosed. In one aspect, the solid state image sensor includes a first wafer including an array of pixels, each of the pixels comprising a photosensor, and a second wafer including an array of readout circuits. Each of the readout circuits is configured to output a readout signal indicative of an amount of light received by a corresponding one of the pixels and each of the readout circuits includes a counter. Each of the counters is configured to increment in response to the corresponding photosensor receiving an amount of light that is greater than a photosensor threshold. Each of the readout circuits is configured to generate the readout signal based on a value stored in the corresponding counter and a remainder stored in the corresponding pixel.
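A minimal sketch of the counter-plus-remainder readout described above is shown below in Python. The threshold value, function names, and the assumption that the counter increments once per threshold's worth of collected charge are illustrative only, not taken from the patent.

PHOTOSENSOR_THRESHOLD = 1000  # assumed charge (arbitrary units) per counter increment

def simulate_exposure(collected_charge: int) -> tuple[int, int]:
    """Model the counter incrementing each time the photosensor fills past the
    threshold, leaving the residual charge in the pixel as the remainder."""
    counter = collected_charge // PHOTOSENSOR_THRESHOLD
    remainder = collected_charge % PHOTOSENSOR_THRESHOLD
    return counter, remainder

def readout_signal(counter: int, remainder: int) -> int:
    """Reconstruct the total light-generated charge from the counter value and
    the remainder, extending dynamic range beyond a single well."""
    return counter * PHOTOSENSOR_THRESHOLD + remainder

counter, remainder = simulate_exposure(3750)
print(readout_signal(counter, remainder))  # 3750, well beyond one threshold's worth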
Abstract:
Techniques and systems are provided for high resolution time-of-flight (ToF) depth imaging. In some examples, an apparatus includes a projection system including one or more light-emitting devices, each light-emitting device being configured to illuminate at least one portion of an entire field-of-view (FOV) of the projection system. The entire FOV includes a plurality of FOV portions. The apparatus also includes a receiving system including a sensor configured to sequentially capture a plurality of images based on a plurality of illumination reflections corresponding to light emitted by the one or more light-emitting devices. Each image of the plurality of images corresponds to one of the plurality of FOV portions. An image resolution associated with each image corresponds to a full resolution of the sensor. The apparatus further includes a processor configured to generate, using the plurality of images, an increased resolution depth map associated with the entire FOV.
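For a sense of how the per-portion captures can yield an increased-resolution result, the sketch below tiles several full-resolution depth images, one per FOV portion, into one larger depth map. The row-major, non-overlapping grid layout and all names are illustrative assumptions; the abstract does not specify how the images are combined.

import numpy as np

# Hedged sketch: tile per-portion depth images (each at the sensor's full
# resolution) into a single larger depth map. The grid layout is assumed.
def combine_fov_portions(portion_depths, grid_shape):
    """portion_depths: list of HxW depth arrays in row-major order over the
    grid of FOV portions; grid_shape: (rows, cols) of that grid."""
    rows, cols = grid_shape
    h, w = portion_depths[0].shape
    depth_map = np.zeros((rows * h, cols * w), dtype=portion_depths[0].dtype)
    for idx, tile in enumerate(portion_depths):
        r, c = divmod(idx, cols)
        depth_map[r * h:(r + 1) * h, c * w:(c + 1) * w] = tile
    return depth_map

# Example: four 480x640 captures tiled 2x2 give a 960x1280 depth map.
tiles = [np.full((480, 640), d, dtype=np.float32) for d in (1.0, 1.5, 2.0, 2.5)]
print(combine_fov_portions(tiles, grid_shape=(2, 2)).shape)  # (960, 1280)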
Abstract:
Various embodiments are directed to an image sensor that includes a first sensor portion and a second sensor portion coupled to the first sensor portion. The second sensor portion may be positioned relative to the first sensor portion so that the second sensor portion may initially detect light entering the image sensor, and some of that light passes through the second sensor portion and is detected by the first sensor portion. In some embodiments, the second sensor portion may be configured to have a thickness suitable for sensing visible light. The first sensor portion may be configured to have a thickness suitable for sensing infrared (IR) or near-infrared (NIR) light. As a result of the arrangement and structure of the second sensor portion and the first sensor portion, the image sensor captures substantially more of the incident light.
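As a hedged, order-of-magnitude illustration of why the two portions use different thicknesses, the sketch below applies the Beer-Lambert relation with assumed silicon absorption coefficients: a thin top (second) portion absorbs most visible light but passes most NIR through to the thicker bottom (first) portion. All numbers are representative assumptions, not values from the abstract.

import math

# Hedged sketch: fraction of light absorbed in each stacked portion.
# Coefficients and thicknesses below are assumed, representative values.
ALPHA_VISIBLE_PER_UM = 0.5   # assumed Si absorption near 550 nm (1/um)
ALPHA_NIR_PER_UM = 0.01      # assumed Si absorption near 940 nm (1/um)
TOP_THICKNESS_UM = 3.0       # assumed thin, visible-sensing second portion
BOTTOM_THICKNESS_UM = 10.0   # assumed thicker, IR/NIR-sensing first portion

def absorbed(alpha_per_um: float, thickness_um: float) -> float:
    """Beer-Lambert absorbed fraction over the given thickness."""
    return 1.0 - math.exp(-alpha_per_um * thickness_um)

top_visible = absorbed(ALPHA_VISIBLE_PER_UM, TOP_THICKNESS_UM)
top_nir = absorbed(ALPHA_NIR_PER_UM, TOP_THICKNESS_UM)
bottom_nir = (1.0 - top_nir) * absorbed(ALPHA_NIR_PER_UM, BOTTOM_THICKNESS_UM)
print(f"top portion absorbs {top_visible:.2f} of visible, {top_nir:.2f} of NIR")
print(f"bottom portion absorbs {bottom_nir:.2f} of the NIR entering the sensor")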
Abstract:
Certain aspects relate to systems and techniques for full well capacity extension. For example, a storage capacitor included in the pixel readout architecture can enable multiple charge dumps from a pixel in the analog domain, extending the full well capacity of the pixel. Further, multiple reads can be integrated in the digital domain using a memory, for example DRAM, in communication with the pixel readout architecture. This also can effectively multiply a small pixel's full well capacity. In some examples, multiple reads in the digital domain can be used to reduce, eliminate, or compensate for kTC noise in the pixel readout architecture.
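As a minimal sketch of the digital-domain side of this, the Python below accumulates several digitized reads of the same pixel array in memory (e.g., a DRAM buffer); the frame count, data type, and array sizes are assumptions for illustration, and the analog charge-dump path is not modeled. Averaging the same reads is one conceivable route to reducing temporal noise such as kTC noise, though the abstract does not specify the exact scheme.

import numpy as np

# Hedged sketch: summing N digitized reads lets the accumulated value exceed
# what a single read of a small pixel's full well could represent. All sizes
# and counts below are assumed.
def integrate_reads(reads: list[np.ndarray]) -> np.ndarray:
    """Accumulate multiple reads of the same pixel array in a wider data type."""
    acc = np.zeros_like(reads[0], dtype=np.int64)
    for frame in reads:
        acc += frame
    return acc

rng = np.random.default_rng(0)
single_read_full_well = 4000  # assumed maximum per-read value (DN)
reads = [rng.integers(3500, single_read_full_well, size=(4, 4)) for _ in range(8)]
print(integrate_reads(reads).max())  # approaches 8x the single-read full well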
Abstract:
Aspects of the present disclosure relate to depth sensing using a device. An example device includes a light projector configured to project light in a first distribution and a second distribution. The first distribution is a flood projection, used when the device operates in a first mode, and the second distribution is a pattern projection, used when the device operates in a second mode. The example device includes a receiver configured to detect reflections of light projected by the light projector. The example device includes a processor connected to a memory storing instructions. The processor is configured to determine first depth information based on reflections detected by the receiver when the device operates in the first mode, determine second depth information based on reflections detected by the receiver when the device operates in the second mode, and resolve multipath interference (MPI) using the first depth information and the second depth information.
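A minimal sketch of fusing the two sets of depth information is shown below. The fusion rule (prefer the pattern-based depth where the two maps disagree) and the tolerance are illustrative assumptions, since the abstract only states that both depth maps are used to resolve MPI.

import numpy as np

# Hedged sketch: fuse flood-based and pattern-based depth maps. Where the two
# disagree by more than a tolerance, assume MPI corrupted the flood depth and
# keep the pattern-based value; otherwise average them.
def resolve_mpi(flood_depth, pattern_depth, tolerance_m=0.05):
    disagreement = np.abs(flood_depth - pattern_depth) > tolerance_m
    fused = (flood_depth + pattern_depth) / 2.0
    fused[disagreement] = pattern_depth[disagreement]
    return fused

flood = np.array([[1.00, 1.50], [2.30, 3.00]])
pattern = np.array([[1.02, 1.48], [2.00, 3.01]])
print(resolve_mpi(flood, pattern))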
Abstract:
Certain aspects relate to systems and techniques for folded optic stereoscopic imaging, wherein a number of folded optic paths each direct a different one of a corresponding number of stereoscopic images toward a portion of a single image sensor. Each folded optic path can include a set of optics including a first light folding surface positioned to receive light propagating from a scene along a first optical axis and redirect the light along a second optical axis, a second light folding surface positioned to redirect the light from the second optical axis to a third optical axis, and lens elements positioned along at least the first and second optical axes and including a first subset having telescopic optical characteristics and a second subset lengthening the optical path length. The sensor can be a three-dimensional stack of a backside-illuminated sensor wafer and a reconfigurable instruction cell array processing wafer that performs depth processing.
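For context on the depth processing step, the sketch below applies the standard pinhole stereo relation depth = f * B / disparity to a pair of stereoscopic images; the focal length, baseline, and disparity values are assumed, and the abstract does not describe the actual algorithm run on the processing wafer.

# Hedged sketch: convert a measured pixel disparity between the two
# stereoscopic images into a depth estimate. All numeric values are assumed.
def depth_from_disparity(focal_length_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Standard pinhole stereo relation: depth = f * B / d."""
    return focal_length_px * baseline_m / disparity_px

# Two folded-optic images of the same scene point, 12.5 px apart:
print(depth_from_disparity(focal_length_px=1400.0, baseline_m=0.02,
                           disparity_px=12.5))  # about 2.24 m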