Abstract:
Various embodiments disclosed herein include techniques for operating a multi-camera system. In some embodiments, a primary camera may be selected from a plurality of cameras using object distance estimates, distance error information, and minimum object distances for some or all of the plurality of cameras. In other embodiments, a camera may be configured to use defocus information to obtain an object distance estimate to a target object closer than a minimum object distance of the camera. This object distance estimate may be used to assist in focusing another camera of the multi-camera system.
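The selection logic above can be sketched as follows. This is a minimal, hypothetical illustration, not the patented method: all names, the tie-breaking rule, and the specific distances are assumptions. It prefers cameras whose distance estimate, reduced by its error bound, still falls at or beyond the camera's minimum object distance, then breaks ties by smallest error.

```python
# Hypothetical sketch of primary-camera selection from object distance
# estimates, error bounds, and per-camera minimum object distances.
from dataclasses import dataclass

@dataclass
class CameraState:
    name: str
    object_distance_mm: float      # estimated distance to the target object
    distance_error_mm: float       # error bound on that estimate
    min_object_distance_mm: float  # closest distance the camera can focus

def select_primary(cameras):
    """Prefer cameras whose estimate (minus its error bound) is at or
    beyond their minimum object distance; tie-break on smallest error."""
    usable = [
        c for c in cameras
        if c.object_distance_mm - c.distance_error_mm >= c.min_object_distance_mm
    ]
    pool = usable if usable else list(cameras)
    return min(pool, key=lambda c: c.distance_error_mm)

cams = [
    CameraState("tele", 180.0, 20.0, 250.0),      # target closer than its MOD
    CameraState("wide", 180.0, 30.0, 100.0),
    CameraState("ultrawide", 180.0, 60.0, 20.0),
]
print(select_primary(cams).name)  # "wide": in range, smallest error bound
```

Here the tele camera is excluded because the target is closer than its minimum object distance, mirroring the abstract's use of defocus-derived estimates to hand focus duty to another camera.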
Abstract:
Some embodiments include methods for correcting optical alignment of components in a camera module for a multifunction device. In some embodiments, components of a camera module for use in a multifunction device are assembled on a test station. Some embodiments include a method that includes capturing a single test image, calculating from spatial frequency response data an optical tilt between an optical axis of a lens and an optical axis of an image sensor of the camera module, and mechanically adjusting an alignment of the lens relative to the image sensor to reduce the optical tilt. In some embodiments, the capturing is performed using the components of the camera module, and the single test image contains visually encoded spatial frequency response data for characterizing the components of the camera module.
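One way to picture the tilt calculation: a tilted lens defocuses one edge of the image relative to the opposite edge, so the asymmetry in spatial frequency response (SFR) measured at opposing field positions is a proxy for tilt about each axis. The sketch below is illustrative only; the function name, the linear model, and the calibration gain are assumptions, not the method claimed above.

```python
# Illustrative sketch: estimating optical tilt from SFR values measured
# at opposite edges of a single test image. A tilted lens sharpens one
# edge and blurs the other, so the left/right and top/bottom SFR
# differences track tilt about the y- and x-axes respectively.
def tilt_from_sfr(sfr_left, sfr_right, sfr_top, sfr_bottom, gain=0.1):
    """Return (tilt_x_deg, tilt_y_deg) proxies from edge SFR asymmetry.
    `gain` stands in for a test-station calibration factor mapping SFR
    difference to degrees of tilt (assumed value)."""
    tilt_y = gain * (sfr_left - sfr_right)   # tilt about the y-axis
    tilt_x = gain * (sfr_top - sfr_bottom)   # tilt about the x-axis
    return tilt_x, tilt_y

# Right edge sharper than left suggests tilt about the y-axis;
# top and bottom matched suggests no tilt about the x-axis.
tx, ty = tilt_from_sfr(0.42, 0.58, 0.50, 0.50)
```

The sign of each component tells the station which direction to mechanically adjust the lens to reduce the tilt.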
Abstract:
Various embodiments disclosed herein include techniques for determining autofocus for a camera on a mobile device. In some instances, depth imaging is used to assist in determining a focus position for the camera through an autofocus process. For example, a determination of depth may be used to determine a focus position for the camera. In another example, the determination of depth may be used to assist another autofocus process.
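The "depth determines focus position" step can be sketched with a thin-lens model: once depth to the subject is measured, the required lens-to-sensor distance follows directly. This is a minimal sketch under an assumed thin-lens approximation; the function name and the 4.2 mm focal length are illustrative, not from the source.

```python
# Minimal sketch (thin-lens model, assumed): map a measured object
# depth to a lens focus position via 1/f = 1/d_o + 1/d_i.
def focus_position_mm(depth_mm, focal_length_mm=4.2):
    """Lens-to-sensor distance that focuses an object at depth_mm."""
    if depth_mm <= focal_length_mm:
        raise ValueError("object inside the focal length cannot be focused")
    return 1.0 / (1.0 / focal_length_mm - 1.0 / depth_mm)

# A subject at 500 mm needs the lens slightly beyond its focal length:
pos = focus_position_mm(500.0)
```

In practice such a depth-derived position could either set focus directly or seed a contrast-based autofocus search, matching the two uses described above.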
Abstract:
Various embodiments include synchronization of camera focus movement control with frame capture. In some embodiments, such synchronization may comprise synchronized focus movement control that is based at least in part on integration timing and/or region of interest (ROI) timing. According to some examples, an actuator of a camera module may be controlled such that a lens group and/or an image sensor of the camera module move towards a focus position during one or more time periods (e.g., a non-integration time period in which the image sensor is not being exposed, a non-ROI time period in which a ROI of the image sensor is not being exposed for image capture, and/or a blanking interval, etc.). Additionally, or alternatively, the actuator may be controlled such that the lens group and the image sensor do not move relative to each other in a focus direction during one or more other time periods (e.g., an integration time period in which the image sensor is being exposed, a ROI time period in which the ROI of the image sensor is being exposed, etc.).
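The timing discipline above can be sketched as a scheduler that only issues actuator steps during non-integration (e.g., blanking) windows and holds the lens still while the sensor is exposed. All names and the steps-per-window limit are assumptions for illustration, not the claimed control scheme.

```python
# Hypothetical sketch: synchronize focus actuator moves with frame
# timing so the lens group moves only while the sensor is not exposed.
def plan_moves(timeline, steps_needed, steps_per_window=2):
    """timeline: list of "integration" / "blanking" periods in order.
    Returns how many actuator steps to issue in each period."""
    plan = []
    remaining = steps_needed
    for period in timeline:
        if period == "blanking" and remaining > 0:
            n = min(steps_per_window, remaining)  # move during blanking
            remaining -= n
            plan.append(n)
        else:
            plan.append(0)  # hold still while the sensor integrates
    return plan

timeline = ["integration", "blanking", "integration", "blanking", "integration"]
print(plan_moves(timeline, 3))  # [0, 2, 0, 1, 0]
```

A large focus move that cannot complete in one blanking interval is spread across several, as in the example, so no motion overlaps an exposure or ROI readout.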
Abstract:
Determining a focus setting includes determining a plurality of regions of interest in a view of a scene and, for each region of interest, obtaining a set of image data at each of multiple focal positions. Focus filters are then applied to the set of image data for each focal position of each region of interest to obtain sets of focus scores, i.e., a focus score for each focus filter applied to the set of image data at each focal position. A confidence value is determined for each set of focus scores, a subset of the sets of focus scores is selected based on the confidence values, and a focus setting for the scene is determined based on the selected subset of focus scores.
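A toy version of this flow: score each region of interest at each focal position with a simple contrast filter, attach a confidence to each score set, and combine only the confident regions. Everything here is illustrative under stated assumptions: the contrast filter, the peak-over-mean confidence measure, and the confidence-weighted average are stand-ins, not the filters or selection rule of the source.

```python
# Small sketch of the scoring flow: per-ROI focus scores across focal
# positions, a confidence per score set, and a focus setting computed
# from the high-confidence subset. Names and metrics are assumptions.
def sharpness(pixels):
    """Toy contrast-based focus filter: sum of adjacent differences."""
    return sum(abs(a - b) for a, b in zip(pixels, pixels[1:]))

def choose_focus(roi_stacks, min_confidence=0.5):
    """roi_stacks: {roi_name: {focal_position: pixel_row}}.
    Confidence here is the peak score's margin over the mean score."""
    candidates = []
    for roi, stack in roi_stacks.items():
        scores = {pos: sharpness(px) for pos, px in stack.items()}
        best_pos = max(scores, key=scores.get)
        peak = scores[best_pos]
        mean = sum(scores.values()) / len(scores)
        confidence = 0.0 if peak == 0 else (peak - mean) / peak
        if confidence >= min_confidence:           # select confident subset
            candidates.append((confidence, best_pos))
    if not candidates:
        return None
    total = sum(c for c, _ in candidates)
    # confidence-weighted average of each selected ROI's best position
    return sum(c * p for c, p in candidates) / total

# One ROI whose contrast clearly peaks at focal position 2:
focus = choose_focus({"center": {1: [0, 0, 0, 0],
                                 2: [0, 9, 0, 9],
                                 3: [0, 1, 0, 1]}})
```

Low-confidence regions (e.g., flat texture, where every focal position scores about the same) are dropped before the final focus setting is computed, which is the point of the selection step described above.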