Abstract:
A method of performing an image autofocus operation using multiple cameras includes performing, at an image processor, a first autofocus operation on a first region of interest in a scene captured by a first camera and determining a second region of interest in the scene captured by a second camera. The second region of interest is determined based on the first region of interest. The method further includes performing a second autofocus operation on the second region of interest. The method also includes fusing a first image of the scene captured by the first camera with a second image of the scene captured by the second camera to generate a fused image. The first image is based on the first autofocus operation and the second image is based on the second autofocus operation.
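The flow described in the abstract can be sketched in a few lines. Everything below is a hypothetical illustration: the ROI mapping is reduced to a fixed pixel shift standing in for a real calibration-based mapping, and fusion is reduced to a pixel-wise average.

```python
import numpy as np

def map_roi(roi, shift=(12, 4)):
    """Translate an (x, y, w, h) ROI from the first camera's frame into the
    second camera's frame. The fixed shift is an illustrative stand-in for
    the calibration-derived mapping a real device would use."""
    x, y, w, h = roi
    return (x + shift[0], y + shift[1], w, h)

def fuse(img_a, img_b):
    """Toy fusion step: average the two focused images pixel-wise."""
    return (img_a.astype(np.float32) + img_b.astype(np.float32)) / 2.0

roi_1 = (100, 80, 50, 50)           # first region of interest (first camera)
roi_2 = map_roi(roi_1)              # second ROI determined from the first
img_1 = np.full((240, 320), 100.0)  # stand-in for the first camera's image
img_2 = np.full((240, 320), 140.0)  # stand-in for the second camera's image
fused = fuse(img_1, img_2)          # fused output image
```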
Abstract:
Systems and techniques are provided for processing one or more frames. For example, a process can include obtaining a first plurality of frames associated with a first settings domain from an image capture system, wherein the first plurality of frames is captured prior to obtaining a capture input. The process can include obtaining a reference frame associated with a second settings domain from the image capture system, wherein the reference frame is captured proximate to obtaining the capture input. The process can include obtaining a second plurality of frames associated with the second settings domain from the image capture system, wherein the second plurality of frames is captured after the reference frame. The process can include, based on the reference frame, transforming at least a portion of the first plurality of frames to generate a transformed plurality of frames associated with the second settings domain.
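The transform step above can be illustrated with a deliberately simple model: since the abstract does not specify the transform, a single global photometric gain estimated from the reference frame stands in for it here.

```python
import numpy as np

def transform_to_domain(frames, reference):
    """Map frames captured under a first settings domain toward the domain
    of the reference frame. A global gain is a simplistic stand-in for the
    unspecified domain transform in the abstract."""
    gain = reference.mean() / np.mean([f.mean() for f in frames])
    return [np.clip(f * gain, 0, 255) for f in frames]

pre_capture = [np.full((4, 4), 50.0), np.full((4, 4), 50.0)]  # first domain
reference = np.full((4, 4), 100.0)                            # second domain
transformed = transform_to_domain(pre_capture, reference)
```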
Abstract:
Systems, methods, and non-transitory media are provided for predictive camera initialization. An example method can include obtaining, from a first image capture device, image data depicting a scene; classifying the scene based on the image data; based on the classification of the scene, predicting a camera use event; and based on the predicted camera use event, adjusting a power mode of at least one of the first image capture device and a second image capture device.
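The classify-predict-adjust chain can be sketched as three small functions. The classifier inputs, class labels, and power-mode names below are all hypothetical; a real system would classify from image data rather than scalar stand-ins.

```python
def classify_scene(brightness, motion_score):
    """Hypothetical scene classifier; the two scalar inputs stand in for
    features a real classifier would extract from image data."""
    if brightness > 0.2 and motion_score > 0.5:
        return "activity"
    return "static"

def predict_camera_use(scene_label):
    """Predict an imminent camera use event from the scene class."""
    return scene_label == "activity"

def adjust_power_mode(use_predicted):
    """Raise the camera's power mode ahead of a predicted use event."""
    return "active" if use_predicted else "low_power"

mode = adjust_power_mode(predict_camera_use(classify_scene(0.8, 0.9)))
```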
Abstract:
Devices and methods for providing seamless preview images for multi-camera devices having two or more asymmetric cameras. A multi-camera device may include two asymmetric cameras disposed to image a target scene. The multi-camera device further includes a processor coupled to a memory component and a display, the processor configured to retrieve an image generated by a first camera from the memory component, retrieve an image generated by a second camera from the memory component, receive input corresponding to a preview zoom level, retrieve spatial transform information and photometric transform information from memory, modify at least one image received from the first and second cameras by the spatial transform and the photometric transform, and provide on the display a preview image comprising at least a portion of the at least one modified image and a portion of either the first image or the second image based on the preview zoom level.
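One way to picture the hand-off: align one camera's image with the stored transforms, then pick the source image by zoom level. The shift, gain, and switch-over zoom below are illustrative assumptions, and the spatial transform is modeled as a pure pixel translation.

```python
import numpy as np

def apply_transforms(img, spatial_shift, gain):
    """Apply a stored spatial transform (modeled as a pixel translation)
    and a photometric transform (a global gain) to one camera's image."""
    return np.roll(img, spatial_shift, axis=(0, 1)) * gain

def preview_image(img_wide, img_tele, zoom, switch_zoom=2.0):
    """Provide the preview from the camera appropriate to the zoom level;
    the tele image is aligned first so the hand-off appears seamless."""
    aligned_tele = apply_transforms(img_tele, (2, 3), 0.9)
    return img_wide if zoom < switch_zoom else aligned_tele

wide = np.full((8, 8), 100.0)  # stand-in for the first camera's image
tele = np.full((8, 8), 110.0)  # stand-in for the second camera's image
```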
Abstract:
An imaging system can obtain image data, for instance from an image sensor. The imaging system can supply the image data as input data to a machine learning system, which can generate one or more maps based on the image data. Each map can identify strengths at which a certain image processing function is to be applied to each pixel of the image data. Different maps can be generated for different image processing functions, such as noise reduction, sharpening, or color saturation. The imaging system can generate a modified image based on the image data and the one or more maps, for instance by applying each of one or more image processing functions in accordance with each of the one or more maps. The imaging system can supply the image data and the one or more maps to a second machine learning system to generate the modified image.
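Applying a per-pixel strength map can be modeled as a pixel-wise blend between the original image and the fully processed result. The map values below are hand-written stand-ins for what the abstract's machine learning system would predict.

```python
import numpy as np

def apply_with_map(image, processed, strength_map):
    """Blend a processing result (e.g. a sharpening pass) into the image,
    at the per-pixel strength given by the map: 0 keeps the original
    pixel, 1 takes the fully processed pixel."""
    return (1 - strength_map) * image + strength_map * processed

img = np.full((2, 2), 10.0)
sharpened = np.full((2, 2), 20.0)          # stand-in for a sharpening pass
m = np.array([[0.0, 0.5], [1.0, 0.25]])    # stand-in for a predicted map
out = apply_with_map(img, sharpened, m)
```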
Abstract:
Systems and techniques are described for imaging. An imaging system includes an image sensor with a plurality of photodetectors, grouped into a first group of photodetectors and a second group of photodetectors. The imaging system can reset its image sensor. The imaging system exposes its image sensor to light from a scene. The plurality of photodetectors convert the light into charge. The imaging system stores analog photodetector signals corresponding to the charge from each of the photodetectors. The imaging system reads first digital pixel data from a first subset of the analog photodetector signals corresponding to the first group of photodetectors without reading second digital pixel data from a second subset of the analog photodetector signals corresponding to the second group of photodetectors. The imaging system generates an image of the scene using the first digital pixel data.
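The selective readout can be sketched with a mask over stored signals: only the first group is digitized, while the second group's stored analog values are left unread. The interleaved grouping and 8-bit quantization are illustrative assumptions.

```python
analog = [0.0, 1.0, 0.5, 0.25]          # stored analog photodetector signals
first_group = [True, False, True, False]  # assumed interleaved grouping

def read_digital(signals, mask, bits=8):
    """Quantize only the masked subset of stored analog signals; unmasked
    signals stay unread (None), available for a possible later readout."""
    levels = 2 ** bits - 1
    return [round(s * levels) if m else None for s, m in zip(signals, mask)]

pixels = read_digital(analog, first_group)  # first digital pixel data only
```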
Abstract:
Methods and apparatus for determining a depth of an object within a scene are provided. Image data of a scene can be captured using a lens configured to project an image of the scene onto an image sensor. The lens has a known focal length and is movable between at least a first lens position and a second lens position. A first image of the scene is captured with the lens at a first lens position, and a second image of the scene is captured with the lens at a second, different position. By measuring a first dimension of the object using the first image and a second dimension of the object using the second image, a depth of the object may be determined based upon a ratio of the first and second dimensions, the focal length of the lens, and a distance between the first and second lens positions.
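The stated dependence on the dimension ratio, focal length, and lens travel is consistent with a thin-lens derivation, though the abstract does not give the formula; the closed form below is therefore an assumption, not the patent's own equation. With magnification m = f/(u − f) and the lens moved toward a static object by d, the ratio r = m2/m1 = (u − f)/(u − d − f), which solves to u = f + r·d/(r − 1).

```python
def object_depth(ratio, focal_length, lens_shift):
    """Depth u of an object from the ratio r = h2/h1 of its measured
    dimensions at two lens positions separated by lens_shift.
    Thin-lens derivation (an assumption consistent with the abstract):
        m = f / (u - f),  r = (u - f) / (u - d - f)
        =>  u = f + r * d / (r - 1)
    Units must be consistent (e.g. all in millimeters)."""
    return focal_length + ratio * lens_shift / (ratio - 1.0)
```

As a sanity check, an object at u = 500 with f = 50 gives m1 = 50/450 at the first position and m2 = 50/440 after a 10-unit shift, so r = 450/440 and the formula recovers u = 500.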
Abstract:
Methods and systems for autofocus triggering are disclosed herein. In one example, a system may include a lens, a memory component configured to store lens parameters of the lens and regions of focus corresponding to the lens parameters, and a processor coupled to the memory component and the lens. The processor may be configured to focus the lens on a target object at a first instance of time, receive information indicative of distances from an imaging device to the target object over a period of time, obtain lens parameters of the lens, determine a region of focus, and trigger the lens to re-focus on the target object if the distance to the target object indicates the target object is outside of the region of focus and the distance to the target object is unchanged for a designated time period.
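The trigger condition combines two checks: the object must be outside the region of focus, and its distance must have been stable for a designated period. A minimal sketch, with the stability window expressed in samples rather than time (an assumption):

```python
def should_refocus(distance_history, region_of_focus, hold_frames=5):
    """Trigger a re-focus only when the target's distance is outside the
    lens's current region of focus AND has been unchanged over the last
    hold_frames distance samples."""
    near, far = region_of_focus
    recent = distance_history[-hold_frames:]
    stable = len(recent) == hold_frames and len(set(recent)) == 1
    outside = not (near <= recent[-1] <= far) if recent else False
    return stable and outside
```

A still-moving object (unstable distance) or an object still inside the region of focus does not trigger a re-focus, which avoids needless lens actuation.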
Abstract:
Systems, methods, and devices for power optimization in imaging devices having dual cameras are described herein. In one aspect, a method for power optimization for a dual camera imaging device is disclosed. The method includes determining a zoom factor selection, determining whether the zoom factor selection falls within a first zoom factor range, a second zoom factor range, or a third zoom factor range, and sending a series of frames of an image captured by a first sensor or a series of frames of an image captured by a second sensor or both to a camera application based on the determined zoom factor selection.
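The three-range routing reads naturally as a small selector: one sensor per outer range, both sensors in the transition range (so the idle sensor can be powered down outside it). The range boundaries below are illustrative assumptions.

```python
def select_streams(zoom, wide_max=2.0, tele_min=3.0):
    """Route sensor frame streams to the camera application by zoom range:
    only the wide sensor below wide_max, only the tele sensor above
    tele_min, and both sensors in the transition range between them.
    The 2.0/3.0 boundaries are hypothetical, not from the abstract."""
    if zoom < wide_max:
        return ("wide",)
    if zoom > tele_min:
        return ("tele",)
    return ("wide", "tele")
```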