Abstract:
Innovations relating to systems for generating plenoptic images are disclosed. One system includes an objective lens having a focal plane, a light sensor positioned to receive light propagating through the objective lens, a first optical element array positioned between the objective lens and the sensor, the first optical element array comprising a first plurality of optical elements, and a second optical element array positioned between the first optical element array and the sensor, the second optical element array comprising a second plurality of optical elements. Each optical element of the first optical element array is configured to direct light from a separate portion of an image of a scene onto a separate optical element of the second optical element array, and each optical element of the second optical element array is configured to project that separate portion of the image onto a separate location of the sensor.
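As a minimal illustration of the mapping described above, the Python/NumPy sketch below splits a scene image into portions, treats each portion as the light handled by one element of the first array, and has the matched element of the second array project it onto its own, non-overlapping region of the sensor. The function name, grid size, and the inversion of each portion (a stand-in for relay-lens imaging) are illustrative assumptions, not details taken from the abstract.

import numpy as np

def relay_portions(image, n_rows, n_cols):
    """Map each image portion through a first-array/second-array element pair
    onto a separate location of a simulated sensor (illustrative only)."""
    h, w = image.shape
    ph, pw = h // n_rows, w // n_cols          # size of each image portion
    sensor = np.zeros_like(image)
    for r in range(n_rows):
        for c in range(n_cols):
            # Element (r, c) of the first array handles this portion of the image.
            portion = image[r*ph:(r+1)*ph, c*pw:(c+1)*pw]
            # The matched element of the second array projects it onto its own
            # sensor region; the flip stands in for relay-lens image inversion.
            sensor[r*ph:(r+1)*ph, c*pw:(c+1)*pw] = portion[::-1, ::-1]
    return sensor

scene = np.random.rand(480, 640)               # stand-in scene image
plenoptic_raw = relay_portions(scene, n_rows=4, n_cols=4)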
Abstract:
Aspects relate to an array camera exhibiting little or no parallax artifacts in captured images. For example, the planes of the central mirror prism of the array camera can intersect at an apex defining the vertical axis of symmetry of the system. The apex can serve as a point of intersection for the optical axes of the sensors in the array. Each sensor in the array “sees” a portion of the image scene using a corresponding facet of the central mirror prism, and accordingly each individual sensor/mirror pair represents only a sub-aperture of the total array camera. The complete array camera has a synthetic aperture generated based on the sum of all individual aperture rays.
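The apex condition described above can be checked numerically. The hedged sketch below defines four mirror facet planes of an assumed square-based prism, each tilted 45 degrees, and solves for their common intersection point in the least-squares sense; the result lies on the vertical (z) axis of symmetry, which is the apex the abstract refers to. The specific facet geometry is an illustrative assumption, not taken from the design.

import numpy as np

def plane(point, normal):
    """Return (point on plane, unit normal) for one mirror facet."""
    n = np.asarray(normal, float)
    return np.asarray(point, float), n / np.linalg.norm(n)

# Four facets of an assumed square-based mirror pyramid, each tilted 45 degrees.
facets = [
    plane(( 1, 0, 4), ( 1, 0, 1)),
    plane((-1, 0, 4), (-1, 0, 1)),
    plane(( 0, 1, 4), ( 0, 1, 1)),
    plane(( 0,-1, 4), ( 0,-1, 1)),
]

# Solve n_i . x = n_i . p_i for all facets in the least-squares sense.
A = np.stack([n for _, n in facets])
b = np.array([np.dot(n, p) for p, n in facets])
apex, residual, *_ = np.linalg.lstsq(A, b, rcond=None)

print(apex)   # approximately [0, 0, 5]: the planes meet on the vertical axis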
Abstract:
Certain aspects relate to systems and techniques for efficiently recording captured plenoptic image data and for rendering images from the captured plenoptic data. The plenoptic image data can be captured by a plenoptic or other light field camera. In some implementations, four-dimensional radiance data can be transformed into three-dimensional data by performing a Radon transform to define the image by planes instead of rays. A resulting Radon image can represent the summed values of energy over each plane. The original three-dimensional luminous density of the scene can be recovered, for example, by performing an inverse Radon transform. Images from different views and/or having different focus can be rendered from the luminous density.
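As a hedged illustration of this pipeline, the sketch below uses the two-dimensional analogue available in scikit-image: radon sums energy over lines (the counterpart of summing over planes in the three-dimensional case) and iradon recovers the original density. The three-dimensional, plane-based transform described in the abstract is not shown; the Shepp-Logan phantom simply stands in for a luminous density.

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

density = rescale(shepp_logan_phantom(), 0.25)        # stand-in 2-D "luminous density"
angles = np.linspace(0.0, 180.0, max(density.shape), endpoint=False)

sinogram = radon(density, theta=angles)               # summed values of energy over each line
recovered = iradon(sinogram, theta=angles)            # inverse Radon transform

print(np.abs(recovered - density).mean())             # small reconstruction error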
Abstract:
Certain aspects relate to wafer level optical designs for a folded optic stereoscopic imaging system. One example folded optical path includes first and second reflective surfaces defining first, second, and third optical axes, and where the first reflective surface redirects light from the first optical axis to the second optical axis and where the second reflective surface redirects light from the second optical axis to the third optical axis. Such an example folded optical path further includes wafer-level optical stacks providing ten lens surfaces distributed along the first and second optical axes. A variation on the example folded optical path includes a prism having the first reflective surface, wherein plastic lenses are formed in or secured to the input and output surfaces of the prism in place of two of the wafer-level optical stacks.
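Assuming ideal planar mirrors, the folding of the optical path can be sketched with the reflection formula d' = d - 2(d.n)n for a ray direction d and unit surface normal n. The example below redirects light from an assumed first optical axis to a second and then a third axis; the specific surface normals are illustrative, not taken from the design.

import numpy as np

def reflect(direction, normal):
    """Reflect a ray direction off a planar surface with the given normal."""
    d = np.asarray(direction, float)
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

axis1 = np.array([0.0, 0.0, -1.0])               # light entering along the first optical axis
fold1 = reflect(axis1, normal=[0.0, 1.0, 1.0])   # first reflective surface -> second axis
fold2 = reflect(fold1, normal=[0.0, 1.0, -1.0])  # second reflective surface -> third axis

print(fold1)   # [0, 1, 0]: folded 90 degrees onto the second optical axis
print(fold2)   # [0, 0, 1]: folded again onto the third optical axis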
Abstract:
Certain aspects relate to systems and techniques for submicron alignment in wafer optics. One disclosed method of alignment between wafers to produce an integrated lens stack employs a beam splitter (that is, a 50% transparent mirror) that reflects the alignment mark of the top wafer when the microscope objective is focused on the alignment mark of the bottom wafer. Another disclosed method of alignment between wafers to produce an integrated lens stack implements complementary patterns that can produce a Moiré effect when misaligned in order to aid in visually determining proper alignment between the wafers. In some embodiments, the methods can be combined to increase precision.
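The Moiré-based alignment aid can be illustrated with a short sketch: superimposing two gratings of slightly different pitch produces beat fringes with period p1*p2/|p1 - p2|, so a small lateral offset of one wafer shifts the fringes by a much larger, easily visible amount. The pitches below are assumed values for illustration only.

import numpy as np

p1, p2 = 10.0, 10.5                       # assumed grating pitches in microns
x = np.linspace(0, 2000, 20000)           # lateral position in microns

g1 = 0.5 * (1 + np.sign(np.sin(2*np.pi*x/p1)))   # bottom-wafer alignment pattern
g2 = 0.5 * (1 + np.sign(np.sin(2*np.pi*x/p2)))   # top-wafer complementary pattern
moire = g1 * g2                                   # intensity transmitted through both patterns

beat_period = p1 * p2 / abs(p2 - p1)              # 210 um beat period for these pitches
print(beat_period)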
Abstract:
Aspects relate to a prism array camera having a wide field of view (FOV). For example, the prism array camera can use a central refractive prism, for example with multiple surfaces or facets, to split incoming light comprising the target image into multiple portions for capture by the sensors in the array. The prism can have a refractive index of approximately 1.5 or higher, and can be shaped and positioned to reduce chromatic aberration artifacts and increase the FOV of a sensor. In some examples, a negative lens can be incorporated into or attached to a camera-facing surface of the prism to further increase the FOV.
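A short Snell's-law calculation illustrates why a refractive prism with an index around 1.5 can widen the field of view: an internal half-angle theta maps to a larger external half-angle arcsin(n * sin(theta)). The angles below are assumed values, not taken from the design.

import numpy as np

n = 1.5                                    # assumed refractive index of the prism
theta_internal = np.radians(25.0)          # assumed half-FOV supported inside the glass

theta_external = np.arcsin(n * np.sin(theta_internal))
print(np.degrees(theta_external))          # about 39.3 degrees: a wider external half-FOV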
Abstract:
Aspects relate to autofocus systems and techniques for an array camera having a low-profile height, for example approximately 4 mm. A voice coil motor (VCM) can be positioned proximate to a folded optic assembly in the array camera to enable vertical motion of a second light directing surface for changing the focal position of the corresponding sensor. A driving member can be positioned within the coil of the VCM to provide vertical movement, and the driving member can be coupled to the second light directing surface, for example by a lever. Accordingly, the movement of the VCM driving member can be transferred to the second light directing surface across a distance, providing autofocus capabilities without increasing the overall height of the array camera.
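The motion transfer described above can be sketched as a simple lever-ratio calculation: the displacement of the second light directing surface equals the stroke of the VCM driving member scaled by the ratio of the lever arms. All dimensions below are assumed for illustration only.

# Back-of-the-envelope lever-ratio sketch with assumed dimensions.
vcm_stroke_mm = 0.10          # assumed vertical travel of the VCM driving member
arm_to_vcm_mm = 4.0           # assumed lever arm on the VCM side
arm_to_surface_mm = 2.0       # assumed lever arm on the light-directing-surface side

surface_travel_mm = vcm_stroke_mm * arm_to_surface_mm / arm_to_vcm_mm
print(surface_travel_mm)      # 0.05 mm of focus travel at the surface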
Abstract:
Various embodiments are directed to an image sensor that includes a first sensor portion and a second sensor portion. The second sensor portion may be positioned relative to the first sensor portion such that the second sensor portion may initially detect light entering the image sensor, and some of that light passes through the second sensor portion and may be detected by the first sensor portion. In some embodiments, one or more optical filters may be disposed within the image sensor. The one or more optical filters may include at least one of a dual bandpass filter disposed above the second photodetector or a narrow bandpass filter disposed between the first photodetector and the second photodetector.
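The filtering arrangement can be sketched with a toy spectral model: a dual bandpass filter above the top (second) sensor portion admits visible and NIR bands, part of that light passes through the top sensor, and a narrow bandpass filter between the two photodetectors lets only the NIR band reach the bottom (first) sensor portion. The passbands and transmission fraction below are assumed, not taken from the abstract.

import numpy as np

wavelength_nm = np.arange(400, 1001, 1)

def bandpass(lo, hi):
    """Ideal unit-transmission passband between lo and hi nanometers."""
    return ((wavelength_nm >= lo) & (wavelength_nm <= hi)).astype(float)

dual_bandpass = np.clip(bandpass(400, 650) + bandpass(800, 900), 0, 1)  # visible + NIR
narrow_bandpass = bandpass(800, 900)                                    # NIR-only filter
top_transmission = 0.4          # assumed fraction of light passing through the top sensor

light_at_top = dual_bandpass                                  # detected by the second sensor portion
light_at_bottom = light_at_top * top_transmission * narrow_bandpass   # reaches the first sensor portion
print(wavelength_nm[light_at_bottom > 0][[0, -1]])            # only ~800-900 nm reaches the bottom sensor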
Abstract:
Aspects relate to an array camera exhibiting little or no parallax artifacts in captured images. For example, the planes of the central prism of the array camera can intersect at an apex defining the vertical axis of symmetry of the system. The apex can serve as a point of intersection for the optical axes of the sensors in the array. Each sensor in the array “sees” a portion of the image scene using a corresponding facet of the central prism, and accordingly each individual sensor/facet pair represents only a sub-aperture of the total array camera. The complete array camera has a synthetic aperture generated based on the sum of all individual aperture rays.