Abstract:
A temporal filter in an image processing pipeline may perform filtering using spatial filtering and noise history. A given pixel of a current image frame may be received for filtering at a temporal filter. A filtering weight may be determined for blending the given pixel with a corresponding pixel of a reference image frame that was previously filtered at the temporal filter. The filtering weight may be determined based on neighboring pixels of the given pixel in the current image frame and corresponding pixels in the reference image frame. The filtering weight may be adjusted according to a quality score indicating noise history for the corresponding pixel in the reference image frame. Based on the filtering weight, a filtered version of the given pixel may be generated, blending the given pixel and the corresponding pixel to store as part of a filtered version of the current image frame.
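For illustration, a minimal sketch of how such a weight might be computed and applied, assuming a 3x3 neighborhood comparison and a hypothetical quality array holding per-pixel noise-history scores in [0, 1]; the abstract does not specify the actual weighting function.

import numpy as np

def temporal_filter_pixel(cur, ref, quality, y, x, strength=8.0):
    """Blend one interior pixel of the current frame with the previously
    filtered reference frame (no border handling in this sketch).

    cur, ref: 2D float arrays for the current and reference frames.
    quality:  2D array of noise-history scores in [0, 1] for the reference
              frame (an assumed representation, for illustration only).
    """
    # Compare the 3x3 neighborhoods around the pixel in both frames.
    ys, xs = slice(y - 1, y + 2), slice(x - 1, x + 2)
    diff = np.abs(cur[ys, xs] - ref[ys, xs]).mean()

    # Larger neighborhood differences reduce temporal blending; the noise
    # history of the reference pixel further scales the weight.
    weight = np.exp(-diff / strength) * quality[y, x]

    # Blend the given pixel with the corresponding reference pixel.
    return (1.0 - weight) * cur[y, x] + weight * ref[y, x]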
Abstract:
An image signal processor of a device, apparatus, or computing system that includes a camera capable of capturing image data may apply piecewise perspective transformations to image data received from the camera's image sensor. A scaling unit of an Image Signal Processor (ISP) may perform piecewise perspective transformations on a captured image to correct for rolling shutter artifacts and to provide video image stabilization. Image data may be divided into a series of horizontal slices and perspective transformations may be applied to each slice. The transformations may be based on motion data determined in any of various manners, such as by using gyroscopic data and/or optical-flow calculations. The piecewise perspective transforms may be encoded as Digital Difference Analyzer (DDA) steppers and may be implemented using separable scalar operations. The image signal processor may not write the received image data to system memory until after the transformations have been performed.
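A simplified software sketch of the per-slice transformation follows, assuming each horizontal slice has its own 3x3 homography supplied by the motion estimator; the hardware described here would instead encode the transforms as DDA steppers and use separable operations. The function and parameter names are illustrative.

import numpy as np

def warp_slices(image, homographies, slice_height):
    """Apply one 3x3 perspective transform per horizontal slice using
    nearest-neighbor inverse mapping (illustrative only)."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    for i, H in enumerate(homographies):
        rows = slice(i * slice_height, min((i + 1) * slice_height, h))
        Hinv = np.linalg.inv(H)
        # Map output coordinates of this slice back into the source image.
        pts = np.stack([xs[rows], ys[rows], np.ones_like(xs[rows])], axis=-1)
        src = pts @ Hinv.T
        sx = np.round(src[..., 0] / src[..., 2]).astype(int).clip(0, w - 1)
        sy = np.round(src[..., 1] / src[..., 2]).astype(int).clip(0, h - 1)
        out[rows] = image[sy, sx]
    return out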
Abstract:
In an embodiment, an electronic device may be configured to capture still frames during video capture, but may capture the still frames in the 4×3 aspect ratio and at higher resolution than the 16×9 aspect ratio video frames. The device may interleave high resolution, 4×3 frames and lower resolution 16×9 frames in the video sequence, and may capture the nearest higher resolution, 4×3 frame when the user indicates the capture of a still frame. Alternatively, the device may display 16×9 frames in the video sequence, and then expand to 4×3 frames when a shutter button is pressed. The device may capture the still frame and return to the 16×9 video frames responsive to a release of the shutter button.
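One way such a selection could work is sketched below, assuming every frame in the interleaved sequence carries a timestamp and an aspect-ratio flag; the tuple layout and names are hypothetical.

def nearest_still_frame(frames, shutter_time):
    """Pick the high-resolution 4x3 frame closest in time to the shutter press.

    frames: iterable of (timestamp, is_4x3, frame_data) tuples from the
            interleaved video sequence (assumed representation).
    """
    candidates = [(ts, data) for ts, is_4x3, data in frames if is_4x3]
    return min(candidates, key=lambda f: abs(f[0] - shutter_time))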
Abstract:
A temporal filter in an image processing pipeline may insert a frame delay when filtering an image frame. A given pixel of a current image frame may be received, and a filtered version of the given pixel may be generated, blending the given pixel and a corresponding pixel of a reference image frame to store as part of a filtered version of the current image frame. If a frame delay setting is enabled, the corresponding pixel of the reference image frame may be provided as output for subsequent image processing, inserting a frame delay for the current image frame. During the frame delay, programming instructions may be received and image processing pipeline components may be configured according to the programming instructions. If the frame delay setting is disabled, then the filtered version of the given pixel may be provided as output for subsequent image processing.
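The output-selection behavior can be sketched as follows, assuming a scalar blending weight alpha and a per-frame frame_delay_enabled flag; both names are illustrative.

def temporal_filter_output(cur_pixel, ref_pixel, alpha, frame_delay_enabled):
    """Generate the filtered pixel and choose what is passed downstream."""
    filtered = (1.0 - alpha) * cur_pixel + alpha * ref_pixel
    # With the delay enabled, downstream stages receive the previously
    # filtered reference pixel, while the newly filtered pixel is only
    # stored as part of the filtered version of the current frame.
    downstream = ref_pixel if frame_delay_enabled else filtered
    return filtered, downstream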
Abstract:
Some embodiments relate to sharpening segments of an image differently based on content in the image. Content based sharpening is performed by a content image processing circuit that receives luminance values of an image and a content map. The content map identifies categories of content in segments of the image. Based on one or more of the identified categories of content, the circuit determines a content factor associated with a pixel. The content factor may also be based on a texture and/or chroma values. A texture value indicates a likelihood of a category of content and is based on detected edges in the image. A chroma value indicates a likelihood of a category of content and is based on color information of the image. The circuit receives the content factor and applies it to a version of the luminance value of the pixel to generate a sharpened version of the luminance value.
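A minimal sketch of how a content factor might be applied, assuming an unsharp-mask style detail signal; how the factor itself is derived from the content map, texture, and chroma values is not reproduced here.

def sharpen_pixel(luma, blurred_luma, content_factor):
    """Scale the high-frequency detail of a pixel by its content factor.

    luma:           original luminance value of the pixel.
    blurred_luma:   low-pass filtered luminance (e.g. a Gaussian blur output).
    content_factor: per-pixel gain derived from the identified content
                    category, texture, and chroma (assumed to be >= 0).
    """
    detail = luma - blurred_luma            # high-frequency component
    return luma + content_factor * detail   # content-weighted sharpening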
Abstract:
The present disclosure generally relates to systems and methods for image data processing. In certain embodiments, an image processing pipeline may be configured to receive a frame of the image data having a plurality of pixels acquired using a digital image sensor. The image processing pipeline may then be configured to determine a first plurality of correction factors that may correct each pixel in the plurality of pixels for fixed pattern noise. The first plurality of correction factors may be determined based at least in part on fixed pattern noise statistics that correspond to the frame of the image data. After determining the first plurality of correction factors, the image processing pipeline may be configured to apply the first plurality of correction factors to the plurality of pixels, thereby reducing the fixed pattern noise present in the plurality of pixels.
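As a simplified illustration, if the fixed pattern noise statistics reduce to one offset per sensor column (an assumption; the abstract does not state the form of the correction factors), the correction could look like:

import numpy as np

def correct_fixed_pattern_noise(frame, column_offsets):
    """Subtract per-column correction factors from a raw frame.

    frame:          2D array of pixel values for one frame.
    column_offsets: 1D array with one correction factor per column,
                    derived from fixed-pattern-noise statistics.
    """
    return frame - column_offsets[np.newaxis, :]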
Abstract:
Methods and apparatus for focusing in virtual reality (VR) or augmented reality (AR) devices based on gaze tracking information are described. Embodiments of a VR/AR head-mounted display (HMD) may include a gaze tracking system for detecting position and movement of the user's eyes. For AR applications, gaze tracking information may be used to direct external cameras to focus in the direction of the user's gaze so that the cameras focus on objects at which the user is looking. For AR or VR applications, the gaze tracking information may be used to adjust the focus of the eye lenses so that the virtual content that the user is currently looking at on the display has the proper vergence to match the convergence of the user's eyes.
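The vergence-to-distance relationship that such focusing relies on can be written as simple geometry, assuming a known interpupillary distance and a measured angle between the two gaze rays; the names below are illustrative.

import math

def convergence_distance(ipd_mm, vergence_angle_rad):
    """Estimate the fixation distance from the eyes' convergence angle.

    ipd_mm:             interpupillary distance in millimeters.
    vergence_angle_rad: total angle between the two gaze rays.
    Returns the distance (mm) at which the gaze rays intersect, which a
    lens or camera focus controller could target.
    """
    return (ipd_mm / 2.0) / math.tan(vergence_angle_rad / 2.0)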
Abstract:
Embodiments relate to image signal processors (ISPs) that include binner circuits that down-sample an input image. An input image may include a plurality of pixels. The output image of the binner circuit may include a reduced number of pixels. The binner circuit may include a plurality of different operation modes. In a bin mode, the binner circuit may blend a subset of input pixel values to generate an output pixel quad. In a skip mode, the binner circuit may select one of the input pixel values as the output pixel value. The selection may be performed randomly to avoid aliasing. In a luminance mode, the binner circuit may take a weighted average of a subset of pixel values having different colors. In a color value mode, the binner circuit may select one of the colors in a subset of pixel values as an output pixel value.
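A software sketch of the four modes on a single 2x2 quad is given below; the quad layout, luminance weights, and mode names are assumptions for illustration, and the bin mode is simplified to produce one value per quad.

import random

def bin_quad(quad, mode, color_index=0, weights=(0.25, 0.5, 0.25)):
    """Down-sample one 2x2 quad of pixel values to a single output value.

    quad: (r, gr, gb, b) pixel values of a Bayer quad (assumed layout).
    """
    if mode == "bin":
        return sum(quad) / 4.0                 # blend the quad
    if mode == "skip":
        return random.choice(quad)             # random pick to reduce aliasing
    if mode == "luminance":
        r, gr, gb, b = quad                    # weighted average across colors
        return weights[0] * r + weights[1] * (gr + gb) / 2.0 + weights[2] * b
    if mode == "color":
        return quad[color_index]               # keep one color component
    raise ValueError(f"unknown mode: {mode}")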