Abstract:
Embodiments relate to an architecture of a vision pipe included in an image signal processor. The architecture includes a front-end portion with a pair of image signal pipelines that generate updated luminance image data. A back-end portion of the vision pipe architecture receives the updated luminance image data from the front-end portion and performs, in parallel, scaling and various computer vision operations on the updated luminance image data. The back-end portion may repeatedly perform this parallel combination of scaling and computer vision operations on successively scaled luminance images to generate a pyramid image.
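The repeated scale-and-process loop described above can be sketched as follows. This is a minimal software illustration, not the hardware pipeline: the 2x2 box average stands in for the back-end scaler's filter, and the per-level computer vision operations are omitted.

```python
import numpy as np

def build_pyramid(luma, levels=4):
    """Sketch of the back-end loop: repeatedly downscale a luminance
    image by 2x and collect each level into a pyramid. In the described
    architecture, computer vision operations would run in parallel with
    the scaling at each level; here we only return the scaled images."""
    pyramid = [luma]
    for _ in range(levels - 1):
        h, w = luma.shape
        luma = luma[:h - h % 2, :w - w % 2]  # trim to even dimensions
        # 2x2 box average as a stand-in for the scaler's filter
        luma = (luma[0::2, 0::2] + luma[1::2, 0::2] +
                luma[0::2, 1::2] + luma[1::2, 1::2]) / 4.0
        pyramid.append(luma)
    return pyramid
```

Each successive level holds a quarter of the samples of the previous one, which is why the back-end can reuse the same scaling-plus-processing stage for every level.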
Abstract:
Embodiments relate to color correction circuit operations performed by an image signal processor. The color correction circuit computes an optimal color correction matrix on a per-pixel basis and adjusts it based on the relative noise standard deviations of the color channels to steer the matrix.
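One plausible way to read "steering" the matrix by channel noise is to blend the full correction matrix toward identity as noise grows, so that strong off-diagonal corrections do not amplify noise. The blending rule and the gain `k` below are hypothetical placeholders, not the claimed computation:

```python
import numpy as np

def apply_steered_ccm(pixel, ccm, noise_std, k=4.0):
    """Illustrative sketch (not the patented method): steer a 3x3 color
    correction matrix toward the identity matrix as the mean noise
    standard deviation of the color channels grows, limiting noise
    amplification. `k` is a hypothetical tuning gain."""
    strength = 1.0 / (1.0 + k * float(np.mean(noise_std)))  # in (0, 1]
    steered = strength * ccm + (1.0 - strength) * np.eye(3)
    return steered @ pixel
```

With zero noise the full matrix is applied; with very noisy channels the steered matrix approaches identity and the pixel passes through nearly unchanged.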
Abstract:
Techniques to detect subject and camera motion in a set of consecutively captured image frames are disclosed. More particularly, techniques disclosed herein temporally track two sets of downscaled images to detect motion. One set may contain higher-resolution versions of the images and the other set lower-resolution versions. For each set, a coefficient of variation may be computed across the set of images for each sample in the downscaled image to detect motion and generate a change mask. The information in the change mask can be used for various applications, including determining how to capture a next image in the sequence.
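The per-sample coefficient of variation across a set of downscaled frames can be sketched directly; the threshold value below is a placeholder, not a value from the disclosure:

```python
import numpy as np

def change_mask(frames, threshold=0.1):
    """Sketch of the temporal-tracking idea: for each sample position
    across a set of downscaled frames, compute the coefficient of
    variation (std / mean) over time; positions where it exceeds a
    threshold are flagged as motion in the change mask."""
    stack = np.stack(frames, axis=0).astype(np.float64)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    cv = std / (mean + 1e-6)  # guard against divide-by-zero in dark samples
    return cv > threshold
```

Running this separately on the higher-resolution and lower-resolution sets would yield two change masks at different spatial granularities, matching the two tracked sets described above.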
Abstract:
An image processing pipeline may account for clipped pixels in auto focus statistics. Generating auto focus statistics may include evaluating a neighborhood of pixels with respect to a given pixel in a stream of pixels for an image. If a clipped pixel is identified within the neighborhood of pixels, then the evaluation of the given pixel may be excluded from an auto focus statistic. The image processing pipeline may also provide auto focus statistics that do not exclude clipped pixels. A luminance edge detection value may, in some embodiments, be generated by applying an IIR band-pass filter to the given pixel in the stream before including the filtered pixel in the generation of the luminance edge detection value.
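The clipped-pixel exclusion can be illustrated on a single row of pixels. This sketch uses a simple horizontal gradient as a stand-in for the pipeline's actual edge filters, and the clip level assumes a 10-bit sensor; both are assumptions for illustration only:

```python
def af_edge_stat(row, clip_level=1023, radius=1):
    """Sketch of clipped-pixel handling in auto-focus statistics: a
    pixel's edge response is excluded from the accumulated statistic if
    any pixel in its neighborhood is at or above the clip level. The
    central-difference gradient is a placeholder edge measure."""
    total = 0.0
    for i in range(radius, len(row) - radius):
        neighborhood = row[i - radius:i + radius + 1]
        if max(neighborhood) >= clip_level:
            continue  # skip evaluations tainted by a clipped neighbor
        total += abs(float(row[i + 1]) - float(row[i - 1]))
    return total
```

A parallel accumulator without the `continue` would provide the non-excluding variant of the statistic mentioned above.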
Abstract:
In an embodiment, an electronic device may be configured to capture still frames during video capture, but may capture the still frames in the 4×3 aspect ratio and at higher resolution than the 16×9 aspect ratio video frames. The device may interleave high resolution, 4×3 frames and lower resolution 16×9 frames in the video sequence, and may capture the nearest higher resolution, 4×3 frame when the user indicates the capture of a still frame. Alternatively, the device may display 16×9 frames in the video sequence, and then expand to 4×3 frames when a shutter button is pressed. The device may capture the still frame and return to the 16×9 video frames responsive to a release of the shutter button.
Abstract:
A temporal filter may perform dynamic motion estimation and compensation for filtering an image frame. A row of pixels in an image frame may be received for processing at the temporal filter. A motion estimate may be dynamically determined that registers a previously filtered reference image frame with respect to the row of pixels in the image frame. The reference image frame may be aligned according to the determined motion estimate, and pixels in the row of the image frame may be blended with corresponding pixels in the aligned reference image frame to generate a filtered version of the image frame. Motion statistics may be collected for subsequent processing based on the motion estimation and alignment for the row of pixels in the image frame.
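The align-then-blend step for one row can be sketched as below. Real hardware would use sub-pixel registration and content-adaptive blend weights; the integer `shift` and fixed `alpha` here are simplifying assumptions:

```python
import numpy as np

def blend_row(current_row, reference_row, shift, alpha=0.5):
    """Sketch of per-row temporal filtering: align the reference row by
    an integer motion estimate `shift`, then blend it with the current
    row using a fixed weight `alpha` (a placeholder for the filter's
    actual per-pixel blending)."""
    aligned = np.roll(reference_row, shift)  # motion compensation
    return alpha * current_row + (1.0 - alpha) * aligned
```

Repeating this per row, with a shift estimated per row, matches the row-by-row dynamic registration described above.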
Abstract:
A temporal filter in an image processing pipeline may be configured to generate a high dynamic range (HDR) image. Image frames captured to generate an HDR image frame may be blended together at a temporal filter. An image frame that is part of a group of image frames captured to generate the HDR image may be received for filtering at the temporal filter module. A reference image frame, which may be a previously filtered image frame or an unfiltered image frame, may be obtained. A filtered version of the image frame may then be generated according to an HDR blending scheme that blends the reference image frame with the image frame. If the image frame is the last image frame of the group of image frames to be filtered, then the filtered version of the image frame may be provided as the HDR image frame.
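The overall flow, in which each frame of the group is blended with the running reference and the last filtered result is emitted as the HDR frame, can be sketched as follows. The fixed 50/50 weight is a placeholder for the actual per-pixel HDR blending scheme:

```python
import numpy as np

def hdr_blend(frames):
    """Sketch of the HDR group-blending flow: the first frame serves as
    the initial (unfiltered) reference, each subsequent frame is blended
    with the running reference, and the result after the last frame is
    the HDR image. The equal-weight blend is illustrative only."""
    reference = frames[0].astype(np.float64)
    for frame in frames[1:]:
        reference = 0.5 * reference + 0.5 * frame  # placeholder blend
    return reference
```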
Abstract:
Image tone adjustment using local tone curve computation may be utilized to adjust luminance ranges for images. Image tone adjustment using local tone curve computation may reduce the overall contrast of an image while maintaining local contrast in smaller areas, such as in images capturing brightly lit scenes where the difference in intensity between the brightest and darkest areas is large. A desired brightness representation of the image may be generated, including target luminance values for corresponding blocks of the image. For each block, one or more tone adjustment values may be computed that, when jointly applied to the respective histograms of the block and its neighboring blocks, result in luminance values that match the corresponding target values. The tone adjustment values may be determined by solving an under-constrained optimization problem such that deviation from the optimization constraints is minimized. The image may then be adjusted according to the computed tone adjustment values.
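A greatly simplified sketch of the per-block idea is below. It replaces the joint histogram-based optimization with a naive per-block gain toward the target luminance, so it is only a rough illustration of the block/target structure, not the claimed method:

```python
import numpy as np

def local_tone_adjust(image, targets, block=8):
    """Simplified sketch of block-local tone adjustment: for each block,
    derive a gain that moves its mean luminance toward the block's
    target value and apply it. The claimed method instead solves an
    under-constrained optimization over the histograms of each block
    and its neighbors; this per-block gain is only illustrative."""
    out = image.astype(np.float64).copy()
    for by in range(0, image.shape[0], block):
        for bx in range(0, image.shape[1], block):
            tile = out[by:by + block, bx:bx + block]  # view into `out`
            gain = targets[by // block, bx // block] / (tile.mean() + 1e-6)
            tile *= gain  # in-place per-block adjustment
    return out
```

A per-block gain with no coupling between neighbors produces visible block seams, which is precisely why the described method solves for adjustment values jointly across neighboring blocks.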
Abstract:
The present disclosure generally relates to systems and methods for image data processing. In certain embodiments, an image processing pipeline may detect and correct a defective pixel of image data acquired using an image sensor. The image processing pipeline may receive an input pixel of the image data acquired using the image sensor. The image processing pipeline may then identify a set of neighboring pixels having the same color component as the input pixel and remove two neighboring pixels from the set of neighboring pixels, thereby generating a modified set of neighboring pixels. Here, the two neighboring pixels correspond to a maximum pixel value and a minimum pixel value of the set of neighboring pixels. The image processing pipeline may then determine a gradient for each neighboring pixel in the modified set of neighboring pixels and determine whether the input pixel includes a dynamic defect or a speckle based at least in part on the gradient for each neighboring pixel in the modified set of neighboring pixels.
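The min/max-trimming and gradient test can be sketched as below. The decision rules and thresholds are assumptions for illustration; the disclosure specifies only that the defect/speckle decision is based at least in part on the per-neighbor gradients:

```python
def detect_defect(center, neighbors, dynamic_thresh=50, speckle_thresh=80):
    """Sketch of the defect test: drop the minimum and maximum
    same-color neighbors, compute the gradient from the input pixel to
    each remaining neighbor, and flag a dynamic defect if every gradient
    exceeds a threshold. The speckle rule (pixel exceeds every remaining
    neighbor by a larger margin) and both thresholds are hypothetical."""
    vals = sorted(neighbors)
    trimmed = vals[1:-1]  # remove the min and max neighbor values
    grads = [abs(center - v) for v in trimmed]
    dynamic = all(g > dynamic_thresh for g in grads)
    speckle = all(center - v > speckle_thresh for v in trimmed)
    return dynamic, speckle
```

Trimming the extremes first keeps a single hot or dead neighbor from masking (or mimicking) a defect at the input pixel.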