Abstract:
Certain aspects relate to systems and techniques for using imaging pixels (that is, non-phase detection pixels) to perform noise reduction on phase detection autofocus. Advantageously, this can provide more accurate phase detection autofocus as well as optimized processor usage for performing phase detection. The phase difference detection pixels are provided to obtain a phase difference detection signal indicating a shift direction (defocus direction) and a shift amount (defocus amount) of image focus, and analysis of imaging pixel values can be used to estimate a level of focus of an in-focus region of interest and to limit the identified phase difference accordingly.
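A minimal sketch of the idea in this abstract, assuming a simple contrast metric over imaging pixels: if the region of interest already looks sharp, a large reported phase difference is likely noise and is clamped. All names, the contrast metric, and the clamping rule are hypothetical illustrations, not taken from the patent.

```python
def focus_level(roi):
    """Rough focus metric: mean absolute horizontal gradient (contrast)."""
    total, count = 0.0, 0
    for row in roi:
        for a, b in zip(row, row[1:]):
            total += abs(b - a)
            count += 1
    return total / count if count else 0.0

def limit_phase_difference(raw_pd, roi, sharp_contrast=40.0, max_pd=10.0):
    """Scale the allowed defocus by how sharp the ROI already looks.

    A sharp ROI (high contrast) implies the scene is near focus, so a large
    raw phase difference is treated as noise and clamped toward zero.
    """
    sharpness = min(focus_level(roi) / sharp_contrast, 1.0)
    allowed = max_pd * (1.0 - sharpness)  # sharper ROI -> tighter clamp
    return max(-allowed, min(allowed, raw_pd))
```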
Abstract:
Certain aspects relate to systems and techniques for using imaging pixels (that is, non-phase detection pixels) in addition to phase detection pixels for calculating autofocus information. Imaging pixel values can be used to interpolate a value at a phase detection pixel location. The interpolated value and a value received from the phase difference detection pixel can be used to obtain a virtual phase detection pixel value. The interpolated value, value received from the phase difference detection pixel, and the virtual phase detection pixel value can be used to obtain a phase difference detection signal indicating a shift direction (defocus direction) and a shift amount (defocus amount) of image focus.
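The pipeline this abstract describes can be sketched as follows: interpolate an imaging value at the phase detection (PD) pixel site, derive a complementary "virtual" PD value, and find the shift that best aligns the two half-aperture series. This is a hypothetical simplification under the assumption that a PD pixel sees roughly half the aperture; the grid layout, border handling, and matching metric are illustrative, not the patent's.

```python
def interpolate_at(pixels, x, y):
    """Interpolated imaging value at a PD pixel site from its 4 neighbors
    (hypothetical layout; border handling omitted for brevity)."""
    return (pixels[y - 1][x] + pixels[y + 1][x]
            + pixels[y][x - 1] + pixels[y][x + 1]) / 4.0

def virtual_pd_value(interpolated, received):
    """Treat the complementary half-aperture value as the interpolated
    full-aperture value minus the received half-aperture value."""
    return interpolated - received

def phase_difference(left, right, max_shift=3):
    """Shift minimizing mean absolute difference between the two series;
    sign gives defocus direction, magnitude gives defocus amount."""
    best_shift, best_sad = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        sad, n = 0.0, 0
        for i, lv in enumerate(left):
            j = i + s
            if 0 <= j < len(right):
                sad += abs(lv - right[j])
                n += 1
        if n and sad / n < best_sad:
            best_sad, best_shift = sad / n, s
    return best_shift
```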
Abstract:
This application relates to capturing an image of a target object using information from a time-of-flight (TOF) sensor. In one aspect, a method may include emitting light from a TOF system, sensing a reflection of the emitted light, and determining a return energy based on the reflection. The method may measure the time between when the light is emitted and when the reflection is sensed and may determine a distance between the target object and the TOF system based on that time. The method may also identify a reflectance of the target object based on the return energy and may determine an exposure level based on the distance to the target object and the reflectance of the target object.
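The steps above can be sketched with a simplified physical model: round-trip time gives distance, an inverse-square falloff relates return energy to reflectance, and exposure grows with distance and shrinks with reflectance. The falloff model and exposure heuristic are assumptions for illustration, not the patent's calibration.

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_s):
    """Distance from emit-to-sense round-trip time."""
    return C * round_trip_s / 2.0

def reflectance(return_energy, distance_m, emitted_energy=1.0):
    """Return energy falls off roughly as 1/d^2; invert that to estimate
    reflectance (simplified model ignoring optics and ambient light)."""
    return min(return_energy * distance_m ** 2 / emitted_energy, 1.0)

def exposure_level(distance_m, refl, base=1.0):
    """Nearer, brighter targets need less exposure (illustrative heuristic)."""
    return base * distance_m ** 2 / max(refl, 1e-6)
```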
Abstract:
Certain aspects relate to systems and techniques for color temperature analysis and matching. For example, three or more camera flash LEDs of different output colors can be used to match any of a range of ambient color temperatures in a non-linear space on the black body curve. The scene color temperature can be analyzed in a preliminary image by determining actual sensor R/G and B/G ratios, enabling more accurate matching of foreground flash lighting to background lighting by the reference illuminant for subsequent white balance processing. The current provided to, and therefore brightness emitted from, each LED can be individually controlled based on the determined sensor response to provide a dynamic and adaptive mix of the output colors of the LEDs.
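Matching the flash mix to measured sensor ratios can be framed as a small linear system: find per-LED weights whose blended R/G and B/G ratios equal the targets, with weights summing to one. The linear mixing model and the three example LED responses are hypothetical; a real flash driver would use calibrated, clamped currents.

```python
def led_mix(leds, target_rg, target_bg):
    """Solve for three LED weights so the mixed output matches target
    sensor R/G and B/G ratios, with weights summing to 1.
    leds: [(r_g, b_g), ...] for exactly three LEDs."""
    # 3x3 system: rows enforce R/G match, B/G match, and sum-to-one.
    a = [[leds[0][0], leds[1][0], leds[2][0]],
         [leds[0][1], leds[1][1], leds[2][1]],
         [1.0, 1.0, 1.0]]
    b = [target_rg, target_bg, 1.0]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = a[r][col] / a[col][col]
            for c in range(col, 3):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    w = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        w[r] = (b[r] - sum(a[r][c] * w[c] for c in range(r + 1, 3))) / a[r][r]
    return w
```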
Abstract:
Systems and methods for correcting stereo yaw of a stereoscopic image sensor pair using autofocus feedback are disclosed. A stereo depth of an object in an image is estimated from the disparity of the object between the images captured by each sensor of the image sensor pair. An autofocus depth to the object is found from the autofocus lens position. If the difference between the stereo depth and the autofocus depth is nonzero, one of the images is warped and the disparity is recalculated until the stereo depth and the autofocus depth to the object are substantially the same.
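The correction loop can be sketched with standard stereo triangulation (depth = f·B/disparity): adjust until the stereo depth agrees with the autofocus depth. Adjusting the disparity directly stands in for warping one image and remeasuring; the step size and tolerance are hypothetical.

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Standard stereo triangulation: depth = f * B / disparity."""
    return focal_px * baseline_m / disparity_px

def correct_yaw(disparity_px, af_depth_m, focal_px, baseline_m,
                step=0.05, tol=0.01, max_iters=200):
    """Iteratively adjust disparity (stand-in for warping one image and
    recomputing disparity) until stereo depth matches autofocus depth."""
    for _ in range(max_iters):
        err = stereo_depth(disparity_px, focal_px, baseline_m) - af_depth_m
        if abs(err) <= tol:
            break
        # Stereo depth too far means disparity too small, so increase it.
        disparity_px += step if err > 0 else -step
    return disparity_px
```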
Abstract:
Apparatus and methods for conditional display of a stereoscopic image pair on a display device are disclosed. In some aspects, a vertical disparity between two images is corrected. If the corrected vertical disparity is below a threshold, a three-dimensional image may be generated based on the correction. In some cases, the corrected vertical disparity may still be significant, for example, above the threshold. In these instances, the disclosed apparatus and methods may display a two-dimensional image.
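The conditional display decision reduces to a simple rule: correct first, then compare the residual vertical disparity against a threshold. The callable interface and the threshold value below are hypothetical; only the decision logic follows the abstract.

```python
def choose_display_mode(vertical_disparity_px, correct, threshold_px=2.0):
    """Apply a correction, then pick 3D if the residual vertical disparity
    is below the threshold, else fall back to 2D.
    `correct` is any callable returning the residual disparity."""
    residual = correct(vertical_disparity_px)
    return "3D" if abs(residual) < threshold_px else "2D"
```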
Abstract:
Systems and methods described herein can adjust a search range generated by a camera using depth-assisted autofocus based in part on measuring focus values in a first search range. For example, in some embodiments, a method includes estimating a depth of an object to be captured in an image, determining a first range of lens positions based at least in part on the estimating, moving the lens of the camera to a plurality of lens positions within the first range of lens positions, capturing a plurality of images, the plurality of images being captured at one or more of the plurality of lens positions, generating one or more focus values based on the plurality of images, and determining one or more additional lens positions or a second range of lens positions based at least in part on the one or more focus values.
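The refinement step at the end of that sequence can be sketched as: after measuring focus values across the depth-derived first range, center a tighter second range on the best-scoring lens position. The window width is a hypothetical parameter.

```python
def refine_search_range(lens_positions, focus_values, half_width=2):
    """Given focus values measured across a first (depth-derived) range of
    lens positions, return a tighter second range centered on the
    best-scoring position (contrast-AF refinement sketch)."""
    best = max(range(len(focus_values)), key=lambda i: focus_values[i])
    lo = max(0, best - half_width)
    hi = min(len(lens_positions) - 1, best + half_width)
    return lens_positions[lo], lens_positions[hi]
```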
Abstract:
Methods and imaging devices are disclosed for reducing defocus events occurring during autofocus search operations. For example, one method includes capturing a plurality of frames depicting a scene with an imaging device, selecting a portion of the scene of at least one frame that corresponds to an object of the scene, and detecting a change in the scene. The method further includes detecting a distance between the object and the imaging device for each of the plurality of frames, determining a lens position for each frame based on the detected distance for that frame, and moving a lens toward the lens position of each frame while the scene continuously changes. The method also includes determining that the scene is stable and initiating an autofocus search operation based on the determination that the scene is stable.
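The tracking-then-search flow can be sketched as follows, modeling each frame by its measured object distance: the lens follows the per-frame distance while the scene changes, and a full autofocus search is triggered only after the distance holds steady for several consecutive frames. The stability threshold, frame count, and lens mapping are hypothetical.

```python
def track_and_focus(frame_distances, distance_to_lens,
                    stable_eps=0.02, stable_count=3):
    """Move the lens to follow per-frame object distance while the scene
    changes; trigger a full AF search once the distance is stable for
    `stable_count` consecutive frames. Returns the event log."""
    events, streak, prev = [], 0, None
    for d in frame_distances:
        events.append(("move", distance_to_lens(d)))
        if prev is not None and abs(d - prev) <= stable_eps:
            streak += 1
        else:
            streak = 0
        prev = d
        if streak >= stable_count:
            events.append(("af_search", distance_to_lens(d)))
            break
    return events
```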
Abstract:
Certain embodiments relate to banding detection and correction techniques to improve the quality of captured imagery, such as video or still images. In particular, this disclosure describes banding correction techniques that cycle between detection of rolling banding and static banding to determine the power line frequency of ambient light, for example 50 Hz or 60 Hz. The banding correction techniques may also compare different image frames to detect rolling banding. The banding correction techniques may compare row sum data of a plurality of image frames and apply a Fourier analysis to determine a periodic signal of static banding at a particular ambient light power line frequency.
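The row-sum Fourier step can be sketched as follows: sum each row, then compare the spectral energy at the banding periods expected for 50 Hz and 60 Hz mains. The row-period parameters depend on sensor readout timing and are hypothetical here.

```python
import math

def row_sums(frame):
    """Collapse a frame to one value per row."""
    return [sum(row) for row in frame]

def banding_strength(signal, period_rows):
    """Magnitude of the DFT bin matching the expected banding period,
    normalized by signal length (simplified Fourier analysis)."""
    n = len(signal)
    mean = sum(signal) / n
    k = n / period_rows  # cycles of banding across the frame
    re = sum((s - mean) * math.cos(2 * math.pi * k * i / n)
             for i, s in enumerate(signal))
    im = sum((s - mean) * math.sin(2 * math.pi * k * i / n)
             for i, s in enumerate(signal))
    return math.hypot(re, im) / n

def detect_power_line_hz(row_sum_signal, rows_per_50hz, rows_per_60hz):
    """Pick 50 Hz vs 60 Hz by which banding period is stronger."""
    s50 = banding_strength(row_sum_signal, rows_per_50hz)
    s60 = banding_strength(row_sum_signal, rows_per_60hz)
    return 50 if s50 >= s60 else 60
```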