Abstract:
Described herein are methods and devices that employ a plurality of image sensors to capture a target image of a scene. As described, positioning at least one reflective or refractive surface near the plurality of image sensors enables the sensors, together, to capture an image with a wider field of view and a longer focal length than any sensor could capture individually, because the reflective or refractive surface guides a portion of the image scene to each sensor. The different portions of the scene captured by the sensors may overlap, and may be aligned and cropped to generate the target image.
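The align-and-crop step can be illustrated with a minimal sketch. The abstract only states that overlapping portions may be aligned and cropped; the mean-squared-error registration below, and all names and values in it, are illustrative assumptions, not the patent's method.

```python
import numpy as np

def align_and_merge(left, right, max_shift=20):
    """Find the overlap where the tail of `left` best matches the head of
    `right` (minimum mean squared error), then crop the duplicated overlap
    out of `right` and concatenate. Illustrative 1-D stand-in for images."""
    best_shift, best_err = 1, np.inf
    for shift in range(1, max_shift + 1):
        a = left[-shift:]              # tail of the left strip
        b = right[:shift]              # head of the right strip
        err = np.mean((a - b) ** 2)    # per-sample mismatch at this overlap
        if err < best_err:
            best_err, best_shift = err, shift
    return np.concatenate([left, right[best_shift:]])

# Two strips sampled from one underlying "scene" with a 5-sample overlap.
scene = np.sin(np.linspace(0, 6 * np.pi, 60))
left, right = scene[:35], scene[30:]
merged = align_and_merge(left, right)
print(len(merged), np.allclose(merged, scene))   # recovers the full scene
```

Real implementations would register 2-D image patches (and typically blend rather than hard-crop), but the overlap-search-then-crop structure is the same.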
Abstract:
Aspects relate to methods and systems for producing ultra-wide field-of-view images. In some embodiments, an image capture system for capturing wide field-of-view images comprises an aperture; a central camera positioned to receive light through the aperture, the central camera having an optical axis; a plurality of periphery cameras disposed beside the central camera, arranged around it, and pointed towards a portion of the optical axis of the central camera; and a plurality of extendible reflectors. The reflectors are configured to move from a first position to a second position and have a mirrored first surface that faces away from the optical axis of the central camera and a black second surface that faces towards that optical axis.
Abstract:
An optical system may include a lens assembly that has two or more single-sided wafer level optics (WLO) lenses arranged to propagate light. The optical system can further include an image sensor, wherein the lens assembly is arranged relative to the image sensor to propagate light received at a first surface of the lens assembly, through the two or more single-sided WLO lenses, and to the image sensor. In some embodiments, the optical system further includes a camera comprising the lens assembly and the image sensor. In various embodiments, a smart phone, a tablet computer, or another mobile computing device may include such a camera. In some embodiments, the two or more single-sided WLO lenses are each separated by a gap G, wherein the gap may differ between each pair of adjacent single-sided lenses, and the gap G may be zero.
Abstract:
Methods and devices are disclosed for focusing on tilted image planes. For example, one imaging device includes an objective lens configured to focus a scene at an image plane, the scene having an object plane tilted relative to the plane of the objective lens, and a sensor positioned to receive light from the objective lens, the sensor having a plurality of light sensing elements configured to generate image data based on the light received at the sensor. The imaging device also includes a processor and memory component configured to receive the image data, the image data indicative of a first image; receive a tilt parameter indicative of an orientation of a selected non-parallel image plane; and convert the image data to relative image data based on the tilt parameter, the relative image data indicative of a second image focused along the non-parallel image plane.
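One simple way to picture a tilt-parameter-driven conversion of image data is a projective re-mapping (homography) induced by a synthetic rotation of the image plane about the camera center. This is only an illustrative sketch: the intrinsics, the 10-degree tilt, and the homography model are assumptions, not the conversion the abstract's device actually performs.

```python
import numpy as np

# Assumed pinhole intrinsics (focal length in pixels, principal point).
f, cx, cy = 800.0, 320.0, 240.0
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])

# Tilt parameter: a rotation of the image plane about the x-axis.
tilt = np.deg2rad(10.0)
R = np.array([[1, 0, 0],
              [0, np.cos(tilt), -np.sin(tilt)],
              [0, np.sin(tilt),  np.cos(tilt)]])

# Homography that re-maps pixel coordinates onto the tilted plane.
H = K @ R @ np.linalg.inv(K)

def remap(pt, H):
    """Apply homography H to a 2-D pixel coordinate (homogeneous divide)."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]

# Under an x-axis tilt, the principal point shifts along y by f*tan(tilt).
center = remap((cx, cy), H)
print(center)
```

Warping every pixel through such an H (with interpolation) would yield the "second image" re-rendered as if focused along the non-parallel plane, under the thin-pinhole assumption above.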
Abstract:
Various embodiments are directed to an image sensor that includes a first sensor portion and a second sensor portion coupled to the first sensor portion. The second sensor portion may be positioned relative to the first sensor portion so that the second sensor portion initially detects light entering the image sensor, and some of that light passes through the second sensor portion and is detected by the first sensor portion. In some embodiments, the second sensor portion may be configured to have a thickness suitable for sensing visible light. The first sensor portion may be configured to have a thickness suitable for sensing IR or NIR light. As a result of the arrangement and structure of the two sensor portions, the image sensor captures substantially more of the incident light.
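The thickness-per-wavelength intuition can be sketched with Beer-Lambert absorption, absorbed = 1 - exp(-alpha * t). The absorption coefficients and layer thicknesses below are rough, assumed values for silicon, used only to illustrate why a thin top (visible) portion passes most NIR light through to a thicker bottom portion; they are not from the patent.

```python
import math

# Assumed order-of-magnitude absorption coefficients for silicon (1/cm).
ALPHA = {"blue_450nm": 2.5e4, "nir_850nm": 5.4e2}

def absorbed_fraction(alpha_per_cm, thickness_um):
    """Fraction of light absorbed in a layer, via Beer-Lambert."""
    t_cm = thickness_um * 1e-4          # micrometers -> centimeters
    return 1.0 - math.exp(-alpha_per_cm * t_cm)

top_um, bottom_um = 3.0, 30.0           # assumed thin top / thick bottom layers
for band, alpha in ALPHA.items():
    top = absorbed_fraction(alpha, top_um)
    # Light reaching the bottom layer is what the top layer did not absorb.
    bottom = (1 - top) * absorbed_fraction(alpha, bottom_um)
    print(f"{band}: top={top:.2f}, bottom={bottom:.2f}")
```

With these assumed numbers, the 3 µm top layer absorbs nearly all the blue light but under 20% of the 850 nm light, leaving most of the NIR for the thicker layer beneath it.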
Abstract:
Certain aspects relate to systems and techniques for submicron alignment in wafer optics. One disclosed method of alignment between wafers used to produce an integrated lens stack employs a beam splitter (that is, a 50% transparent mirror) that reflects the alignment mark of the top wafer while the microscope objective is focused on the alignment mark of the bottom wafer. Another disclosed method implements complementary patterns that produce a Moiré effect when misaligned, to aid in visually determining proper alignment between the wafers. In some embodiments, the two methods can be combined to increase precision.
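The reason Moiré patterns are useful as an alignment cue is that superimposing two gratings of slightly different spatial frequency f1 and f2 produces a beat at |f1 - f2|, a low frequency that is far easier to see than either grating. The frequencies and pattern below are arbitrary example values, not the patent's complementary patterns.

```python
import numpy as np

n = 1024
x = np.arange(n) / n                      # one unit-length field of view
f1, f2 = 60.0, 64.0                       # two gratings, cycles across the field
product = np.cos(2 * np.pi * f1 * x) * np.cos(2 * np.pi * f2 * x)

# The product identity gives 0.5*cos(2*pi*(f1+f2)*x) + 0.5*cos(2*pi*(f1-f2)*x),
# so the spectrum has peaks at the sum (124) and the visible beat (4).
spectrum = np.abs(np.fft.rfft(product))
peaks = np.argsort(spectrum)[::-1][:2]    # the two dominant frequency bins
print(sorted(int(p) for p in peaks))
```

The 4-cycle beat is what an operator (or machine-vision tool) watches: as the two patterns approach perfect alignment, the coarse Moiré fringes shift or vanish, amplifying sub-period misalignments into large visible motion.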
Abstract:
Aspects relate to an array camera exhibiting little or no parallax artifacts in captured images. For example, the planes of the central mirror prism of the array camera can intersect at an apex defining the vertical axis of symmetry of the system. The apex can serve as a point of intersection for the optical axes of the sensors in the array. Each sensor in the array “sees” a portion of the image scene using a corresponding facet of the central mirror prism, and accordingly each individual sensor/mirror pair represents only a sub-aperture of the total array camera. The complete array camera has a synthetic aperture generated based on the sum of all individual aperture rays.
Abstract:
Aspects relate to an array camera exhibiting little or no parallax artifacts in captured images. For example, the planes of the central mirror surfaces of the array camera can be located at a midpoint along, and orthogonally to, a line between the corresponding camera location and the virtual camera location. Accordingly, after folding by the mirrors, the light cones of all of the cameras in the array appear to originate from the virtual camera location. Each sensor in the array “sees” a portion of the image scene using a corresponding facet of the central mirror prism, and accordingly each individual sensor/mirror pair represents only a sub-aperture of the total array camera. The complete array camera has a synthetic aperture generated based on the sum of all individual aperture rays.
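The mirror-placement rule in this abstract is the classic perpendicular-bisector construction: a plane through the midpoint of the segment joining the real camera location C and the virtual camera location V, with its normal along that segment, reflects C exactly onto V. The coordinates below are arbitrary example values used to verify that geometry.

```python
import numpy as np

C = np.array([2.0, 0.0, 0.0])     # real camera location (assumed example)
V = np.array([0.0, 0.0, 1.5])     # virtual camera location (assumed example)

M = (C + V) / 2                   # mirror plane passes through the midpoint...
n = (C - V) / np.linalg.norm(C - V)   # ...with normal along the C-V line

def reflect(p, plane_point, normal):
    """Reflect point p across the plane through plane_point with unit normal."""
    d = np.dot(p - plane_point, normal)   # signed distance from the plane
    return p - 2 * d * normal

print(np.allclose(reflect(C, M, n), V))   # True: the mirror folds C onto V
```

Because every camera in the array shares the same virtual location after folding, their individual viewing cones combine without parallax, which is exactly why each sensor/mirror pair can act as a sub-aperture of one synthetic aperture.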