Abstract:
Aspects relate to a prism array camera having a wide field of view (FOV). For example, the prism array camera can use a central refractive prism, for example with multiple surfaces or facets, to split incoming light comprising the target image into multiple portions for capture by the sensors in the array. The prism can have a refractive index of approximately 1.5 or higher, and can be shaped and positioned to reduce chromatic aberration artifacts and increase the FOV of a sensor. In some examples, a negative lens can be incorporated into or attached to a camera-facing surface of the prism to further increase the FOV.
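As a rough illustration of how the prism's refractive index relates to how strongly rays are bent (and hence to the achievable field of view), the sketch below applies the standard minimum-deviation relation for a simple prism. The apex angle and the specific index values are hypothetical placeholders for illustration; the abstract only states an index of approximately 1.5 or higher.

    import math

    def minimum_deviation_deg(apex_angle_deg: float, n: float) -> float:
        """Minimum deviation (degrees) of a ray through a simple prism with
        apex angle A and refractive index n: D_min = 2*asin(n*sin(A/2)) - A."""
        a = math.radians(apex_angle_deg)
        return math.degrees(2 * math.asin(n * math.sin(a / 2)) - a)

    # Hypothetical apex angle of 30 degrees; indices chosen for illustration only.
    for n in (1.5, 1.6, 1.7):
        print(f"n = {n}: D_min = {minimum_deviation_deg(30.0, n):.1f} deg")

Under these assumed values, raising the index from 1.5 to 1.7 bends rays several degrees further, which is the qualitative effect the abstract relies on to widen each sensor's FOV.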
Abstract:
Certain aspects relate to systems and techniques for folded optic stereoscopic imaging, wherein a number of folded optic paths each direct a different one of a corresponding number of stereoscopic images toward a portion of a single image sensor. Each folded optic path can include a set of optics including a first light folding surface positioned to receive light propagating from a scene along a first optical axis and redirect the light along a second optical axis, a second light folding surface positioned to redirect the light from the second optical axis to a third optical axis, and lens elements positioned along at least the first and second optical axes and including a first subset having telescopic optical characteristics and a second subset lengthening the optical path length. The sensor can be a three-dimensional stack of a backside-illuminated sensor wafer and a reconfigurable instruction cell array processing wafer that performs depth processing.
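Since the abstract mentions depth processing on the stacked processing wafer, the classic depth-from-disparity relation is one plausible way the stereoscopic image pairs could be used. The sketch below is a generic illustration of that relation, Z = f * B / d; the focal length, baseline, and disparity values are hypothetical and not taken from the abstract.

    def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
        """Classic stereo relation: depth Z = f * B / d.
        focal_px: focal length in pixels, baseline_m: separation of the two
        optical paths in meters, disparity_px: horizontal shift of a feature
        between the two stereoscopic images."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite depth")
        return focal_px * baseline_m / disparity_px

    # Hypothetical numbers for illustration only.
    print(f"{depth_from_disparity(focal_px=1400.0, baseline_m=0.01, disparity_px=7.0):.2f} m")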
Abstract:
Various embodiments are directed to an image sensor that includes a first sensor portion and a second sensor portion coupled to the first sensor portion. The second sensor portion may be positioned relative to the first sensor portion so that the second sensor portion may initially detect light entering the image sensor, and some of that light passes through the second sensor portion and is detected by the first sensor portion. In some embodiments, the second sensor portion may be configured to have a thickness suitable for sensing visible light. The first sensor portion may be configured to have a thickness suitable for sensing infrared (IR) or near-infrared (NIR) light. As a result of the arrangement and structure of the second sensor portion and the first sensor portion, the image sensor captures substantially more light from the light source.
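This stacking order works because silicon absorbs visible light over much shorter depths than NIR light, so a thin upper portion can capture visible photons while letting most NIR photons pass through to the thicker portion below. The sketch below applies the Beer-Lambert relation T = exp(-alpha * t); the thickness and absorption coefficients are rough, order-of-magnitude placeholders for illustration, not values stated in the abstract.

    import math

    def transmitted_fraction(alpha_per_um: float, thickness_um: float) -> float:
        """Beer-Lambert: fraction of light passing through a layer of the given
        thickness with absorption coefficient alpha (per micrometer)."""
        return math.exp(-alpha_per_um * thickness_um)

    # Illustrative placeholder coefficients only (order of magnitude for silicon).
    top_thickness_um = 3.0
    for label, alpha in (("green ~550 nm", 0.7), ("NIR ~940 nm", 0.01)):
        share = transmitted_fraction(alpha, top_thickness_um)
        print(f"{label}: {share:.0%} of the light reaches the lower sensor portion")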
Abstract:
Innovations include a sensing device having a sensor array and a filter array. The sensor array comprises a plurality of sensors, each having a length dimension and a width dimension and configured to generate a signal responsive to radiation incident on the sensor. The filter array comprises a plurality of filters and is disposed to filter light before it is incident on the sensor array, arranged relative to the sensor array so that each of the plurality of sensors receives radiation propagating through at least one corresponding filter. Each filter has a length dimension and a width dimension, and a ratio of the length dimension of a filter to the length dimension of a corresponding sensor, a ratio of the width dimension of a filter to the width dimension of a corresponding sensor, or both, is a non-integer greater than 1.
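To give a feel for what a non-integer filter-to-sensor dimension ratio implies geometrically, the sketch below assumes (as an assumption, not something stated in the abstract) that the filters and sensors tile their arrays at pitches proportional to those dimensions. With a non-integer ratio, each successive sensor sits at a different offset within its corresponding filter, so filter boundaries gradually walk across the sensor grid instead of repeating in lockstep. The pitch values are hypothetical.

    def sensor_offsets_under_filters(sensor_pitch_um: float, filter_pitch_um: float, count: int) -> None:
        """Print each sensor center's offset within the filter pitch. A
        non-integer filter/sensor pitch ratio makes the offset drift across
        the array rather than repeat identically."""
        ratio = filter_pitch_um / sensor_pitch_um
        print(f"filter/sensor pitch ratio = {ratio:.2f}")
        for i in range(count):
            sensor_center = (i + 0.5) * sensor_pitch_um
            offset_in_filter = sensor_center % filter_pitch_um
            print(f"sensor {i}: offset {offset_in_filter:.2f} um within its filter")

    # Hypothetical pitches; the abstract only requires a non-integer ratio > 1.
    sensor_offsets_under_filters(sensor_pitch_um=1.0, filter_pitch_um=1.5, count=5)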
Abstract:
Aspects relate to autofocus systems and techniques for an array camera having a low-profile height, for example approximately 4 mm. A voice coil motor (VCM) can be positioned proximate to a folded optic assembly in the array camera to enable vertical motion of a second light directing surface for changing the focal position of the corresponding sensor. A driving member can be positioned within the coil of the VCM to provide vertical movement, and the driving member can be coupled to the second light directing surface, for example by a lever. Accordingly, the movement of the VCM driving member can be transferred to the second light directing surface across a distance, providing autofocus capabilities without increasing the overall height of the array camera.
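The abstract describes a lever transferring the VCM driving member's motion across a distance to the second light directing surface; a simple rigid-lever arm ratio is one way to model that transfer. The sketch below uses hypothetical arm lengths and travel values, not dimensions from the design.

    def surface_displacement_um(driver_travel_um: float, driver_arm_mm: float, surface_arm_mm: float) -> float:
        """Rigid-lever approximation: the output travel at the light directing
        surface scales with the ratio of the lever arm lengths about the pivot."""
        return driver_travel_um * (surface_arm_mm / driver_arm_mm)

    # Hypothetical values: 50 um of VCM driving-member travel, 2 mm and 3 mm lever arms.
    travel = surface_displacement_um(50.0, driver_arm_mm=2.0, surface_arm_mm=3.0)
    print(f"{travel:.1f} um of vertical motion at the second light directing surface")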
Abstract:
Aspects relate to methods and systems for producing ultra-wide field of view images. In some embodiments, an image capture system for capturing wide field-of-view images comprises an aperture, a central camera positioned to receive light through the aperture, the central camera having an optical axis, a plurality of periphery cameras disposed beside the central camera, arranged around it, and pointed towards a portion of its optical axis, and a plurality of extendible reflectors. The reflectors are configured to move from a first position to a second position and have a mirrored first surface that faces away from the optical axis of the central camera and a black second surface that faces towards that optical axis.
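The mirrored surfaces can widen coverage because reflecting a camera's viewing direction off a tilted mirror rotates that direction by twice the mirror's tilt. The sketch below is generic planar mirror geometry with a hypothetical tilt angle and camera orientation; it is not a layout taken from the abstract.

    import math

    def reflect(direction, normal):
        """Mirror reflection of a 2D direction vector about a surface with the
        given unit normal: r = d - 2 (d . n) n."""
        dot = direction[0] * normal[0] + direction[1] * normal[1]
        return (direction[0] - 2 * dot * normal[0], direction[1] - 2 * dot * normal[1])

    # Hypothetical geometry: a periphery camera looks along +x toward the central
    # axis; an extended reflector tilted 30 degrees folds its view outward.
    tilt = math.radians(30.0)
    normal = (-math.sin(tilt), math.cos(tilt))  # unit normal of the tilted mirror
    outward = reflect((1.0, 0.0), normal)
    print(f"redirected view direction: ({outward[0]:.2f}, {outward[1]:.2f})")  # rotated by 2*tilt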
Abstract:
An optical system may include a lens assembly that has two or more single-sided wafer level optics (WLO) lenses arranged to propagate light. The optical system can further include an image sensor, wherein the lens assembly is arranged relative to the image sensor to propagate light received at a first surface of the lens assembly, through the two or more single-sided WLO lenses, and to the image sensor. In some embodiments, the optical system further includes a camera which includes the lens assembly and the image sensor. In various embodiments, a smart phone, a tablet computer, or another mobile computing device may include such a camera. In some embodiments, adjacent single-sided WLO lenses are separated by a gap G, where the gap may differ between each pair of adjacent lenses and may be zero.
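One practical consequence of the per-pair gaps G (including zero gaps) is that the assembly's total track length is simply the sum of the lens thicknesses and the gaps between adjacent lenses. The sketch below tallies that sum for hypothetical thickness and gap values chosen only for illustration.

    def total_track_length_mm(lens_thicknesses_mm, gaps_mm):
        """Sum of lens thicknesses plus the gap between each adjacent pair of
        lenses. Gaps may differ per pair and may be zero (lenses in contact)."""
        if len(gaps_mm) != len(lens_thicknesses_mm) - 1:
            raise ValueError("expected one gap between each pair of adjacent lenses")
        return sum(lens_thicknesses_mm) + sum(gaps_mm)

    # Hypothetical stack of three single-sided lenses; the second pair is in contact (gap 0).
    print(f"{total_track_length_mm([0.30, 0.25, 0.30], [0.05, 0.0]):.2f} mm total track length")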
Abstract:
Certain aspects relate to systems and techniques for submicron alignment in wafer optics. One disclosed method of alignment between wafers to produce an integrated lens stack employs a beam splitter (that is, a 50% transparent mirror) that reflects the alignment mark of the top wafer when the microscope objective is focused on the alignment mark of the bottom wafer. Another disclosed method of alignment between wafers to produce an integrated lens stack implements complementary patterns that can produce a Moiré effect when misaligned in order to aid in visually determining proper alignment between the wafers. In some embodiments, the methods can be combined to increase precision.
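To give a feel for why complementary periodic patterns make very small misalignments visible, the sketch below superimposes two gratings with slightly different periods: the combined intensity beats at a much longer spatial period, p_a * p_b / |p_a - p_b|, so a tiny relative shift appears as a large, easily observed fringe movement. The grating periods are hypothetical values for illustration, not parameters from the abstract.

    import numpy as np

    def moire_beat_period(period_a_um: float, period_b_um: float) -> float:
        """Two gratings with nearby periods produce a Moire fringe of period
        p_a * p_b / |p_a - p_b|, magnifying small misalignments."""
        return period_a_um * period_b_um / abs(period_a_um - period_b_um)

    def superimposed_intensity(x_um: np.ndarray, period_a_um: float, period_b_um: float) -> np.ndarray:
        """Product of two sinusoidal transmittance gratings along x."""
        grating_a = 0.5 * (1 + np.cos(2 * np.pi * x_um / period_a_um))
        grating_b = 0.5 * (1 + np.cos(2 * np.pi * x_um / period_b_um))
        return grating_a * grating_b

    # Hypothetical 10 um and 10.5 um gratings: the beat period is about 210 um.
    print(f"Moire period: {moire_beat_period(10.0, 10.5):.0f} um")
    x = np.linspace(0.0, 500.0, 2001)
    print(f"peak superimposed intensity: {superimposed_intensity(x, 10.0, 10.5).max():.2f}")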
Abstract:
Certain aspects relate to wafer level optical designs for a folded optic stereoscopic imaging system. One example folded optical path includes first and second reflective surfaces defining first, second, and third optical axes, wherein the first reflective surface redirects light from the first optical axis to the second optical axis and the second reflective surface redirects light from the second optical axis to the third optical axis. Such an example folded optical path further includes wafer-level optical stacks providing ten lens surfaces distributed along the first and second optical axes. A variation on the example folded optical path includes a prism having the first reflective surface, wherein plastic lenses are formed in or secured to the input and output surfaces of the prism in place of two of the wafer-level optical stacks.