Abstract:
Embodiments relate to a keypoint detection circuit for identifying keypoints in captured image frames. The keypoint detection circuit generates an image pyramid based upon a received image frame, and determines multiple sets of keypoints for each octave of the pyramid using different levels of blur. In some embodiments, the keypoint detection circuit includes multiple branches, each branch made up of one or more circuits for determining a different set of keypoints from the image, or for determining a subsampled image for a subsequent octave of the pyramid. By determining multiple sets of keypoints for each of a plurality of pyramid octaves, a larger, more varied set of keypoints can be obtained and used for object detection and matching between images.
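The per-octave, per-blur-level structure can be sketched in a few lines of Python. This is a minimal illustration, not the circuit's actual detector: SciPy Gaussian blurring and a gradient-energy score stand in for the unspecified keypoint criterion, and the branch that produces the subsampled image for the next octave is modeled as one extra step per octave.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def corner_score(img):
    """Stand-in keypoint score: local gradient energy."""
    gy, gx = np.gradient(img)
    return gx * gx + gy * gy

def detect_keypoints(img, threshold):
    """Return (row, col) coordinates whose score exceeds the threshold."""
    return np.argwhere(corner_score(img) > threshold)

def pyramid_keypoints(frame, octaves=4, sigmas=(1.0, 1.6, 2.2), threshold=0.05):
    """For each octave, run one 'branch' per blur level; a final branch
    subsamples the image to form the next octave of the pyramid."""
    keypoints = []
    img = frame.astype(np.float32)
    for octave in range(octaves):
        for sigma in sigmas:                       # one branch per blur level
            blurred = gaussian_filter(img, sigma)
            for r, c in detect_keypoints(blurred, threshold):
                keypoints.append((octave, sigma, int(r), int(c)))
        img = gaussian_filter(img, 1.0)[::2, ::2]  # subsampling branch
    return keypoints
```

Each (octave, sigma) pair corresponds to one branch, so the result pools keypoints found at several blur levels of every octave.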
Abstract:
Embodiments relate to an architecture of a vision pipe included in an image signal processor. The architecture includes a front-end portion with a pair of image signal pipelines that generate updated luminance image data. A back-end portion of the vision pipe architecture receives the updated luminance image data from the front-end portion and performs, in parallel, scaling and various computer vision operations on the updated luminance image data. The back-end portion may repeatedly perform this parallel scaling and computer vision processing on successively scaled luminance images to generate a pyramid image.
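A minimal Python sketch of the back-end's repeat-scale-and-process loop, assuming a luminance conversion as the front-end stand-in and a gradient-magnitude map as the computer vision operation; in hardware the scaling and computer vision operations run in parallel, which the sequential loop here only approximates.

```python
import numpy as np

def front_end(raw_rgb):
    """Front-end stand-in: produce luminance image data from RGB pixel data."""
    return (0.299 * raw_rgb[..., 0] +
            0.587 * raw_rgb[..., 1] +
            0.114 * raw_rgb[..., 2]).astype(np.float32)

def cv_operation(luma):
    """Back-end computer vision stand-in: a gradient-magnitude map."""
    gy, gx = np.gradient(luma)
    return np.hypot(gx, gy)

def back_end_pyramid(luma, levels=4):
    """Repeatedly scale the luminance image and run the CV operation on
    each scaled level, collecting the results as a pyramid."""
    pyramid = []
    current = luma
    for _ in range(levels):
        pyramid.append(cv_operation(current))
        current = current[::2, ::2]          # 2x downscale for the next level
    return pyramid
```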
Abstract:
Methods and systems for detecting keypoints in image data may include an image sensor interface receiving pixel data from an image sensor. A front-end pixel data processing circuit may receive pixel data and convert the pixel data to a different color space format. A back-end pixel data processing circuit may perform one or more operations on the pixel data. An output circuit may receive pixel data and output the pixel data to a system memory. A keypoint detection circuit may receive pixel data from the image sensor interface in the image sensor pixel data format or receive pixel data after processing by the front-end or the back-end pixel data processing circuits. The keypoint detection circuit may perform a keypoint detection operation on the pixel data to detect one or more keypoints in the image frame and output to the system memory a description of the one or more keypoints.
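The three tap points can be illustrated with a small Python sketch. The color-space conversion, back-end processing, and keypoint criterion are all illustrative stand-ins; only the routing structure follows the abstract.

```python
import numpy as np

def detect_keypoints(pixels, threshold=0.05):
    """Stand-in detector: flag pixels with large local gradient energy."""
    gy, gx = np.gradient(pixels.astype(np.float32))
    return np.argwhere(gx * gx + gy * gy > threshold)

def process_frame(frame, tap="front_end"):
    """Route pixel data to the keypoint detector from one of three taps:
    the sensor interface, the front-end output (color-space conversion
    stand-in), or the back-end output (further processing stand-in)."""
    sensor = frame.astype(np.float32)                 # sensor-interface pixel data
    luma = sensor.mean(axis=-1)                       # front-end: color-space conversion
    padded = np.pad(luma, 1, mode="edge")             # back-end: one extra processing pass
    smoothed = sum(padded[i:i + luma.shape[0], j:j + luma.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    pixels = {"sensor": sensor.mean(axis=-1), "front_end": luma, "back_end": smoothed}[tap]
    # The keypoint description is what would be written out to system memory.
    return {"keypoints": detect_keypoints(pixels).tolist()}
```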
Abstract:
An input rescale module that performs cross-color correlated downscaling of sensor data in the horizontal and vertical dimensions. The module may perform a first-pass demosaic of sensor data, apply horizontal and vertical scalers to resample and downsize the data in the horizontal and vertical dimensions, and then remosaic the data to provide horizontally and vertically downscaled sensor data as output for additional image processing. The module may, for example, act as a front end scaler for an image signal processor (ISP). The demosaic performed by the module may be a relatively simple demosaic, for example a demosaic function that works on 3×3 blocks of pixels. The front end of the module may receive and process sensor data at two pixels per clock (ppc); the horizontal filter component reduces the sensor data down to one ppc for downstream components of the input rescale module and for the ISP pipeline.
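A functional Python sketch of the demosaic, scale, remosaic flow, assuming an RGGB Bayer layout, 3×3 box averaging as the first-pass demosaic, and a fixed 2× averaging downscale; the streaming two ppc to one ppc behavior of the hardware is not modeled.

```python
import numpy as np

def simple_demosaic(bayer):
    """First-pass demosaic: fill each color plane of an RGGB Bayer image by
    averaging the samples of that color inside each 3x3 neighborhood."""
    h, w = bayer.shape
    rows, cols = np.mgrid[0:h, 0:w]
    masks = {
        "R": (rows % 2 == 0) & (cols % 2 == 0),
        "G": (rows % 2) != (cols % 2),
        "B": (rows % 2 == 1) & (cols % 2 == 1),
    }
    rgb = np.zeros((h, w, 3), dtype=np.float32)
    for idx, key in enumerate("RGB"):
        vals = np.pad(np.where(masks[key], bayer, 0.0).astype(np.float32), 1, mode="edge")
        cnts = np.pad(masks[key].astype(np.float32), 1, mode="edge")
        total = sum(vals[i:i + h, j:j + w] for i in range(3) for j in range(3))
        count = sum(cnts[i:i + h, j:j + w] for i in range(3) for j in range(3))
        rgb[..., idx] = total / np.maximum(count, 1.0)
    return rgb

def downscale2x(rgb):
    """Horizontal then vertical 2x downscale by averaging (resampler stand-in)."""
    horiz = (rgb[:, 0::2] + rgb[:, 1::2]) / 2.0
    return (horiz[0::2] + horiz[1::2]) / 2.0

def remosaic(rgb):
    """Re-sample the downscaled RGB image back into an RGGB Bayer pattern."""
    h, w, _ = rgb.shape
    rows, cols = np.mgrid[0:h, 0:w]
    channel = np.where((rows % 2 == 0) & (cols % 2 == 0), 0,
              np.where((rows % 2 == 1) & (cols % 2 == 1), 2, 1))
    return np.take_along_axis(rgb, channel[..., None], axis=-1)[..., 0]
```

Applying the three steps in order (simple_demosaic, downscale2x, remosaic) yields horizontally and vertically downscaled Bayer data for the rest of the pipeline.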
Abstract:
Systems and methods for automatic lens flare compensation may include a non-uniformity detector configured to operate on pixel data for an image in an image sensor color pattern. The non-uniformity detector may detect a non-uniformity in the pixel data in a color channel of the image sensor color pattern. The non-uniformity detector may generate output including location and magnitude values of the non-uniformity. A lens flare detector may determine, based at least on the location and magnitude values, whether the output of the non-uniformity detector corresponds to a lens flare in the image. In some embodiments, the lens flare detector may generate, in response to determining that the output corresponds to the lens flare, a representative map of the lens flare. A lens flare corrector may determine one or more pixel data correction values corresponding to the lens flare and apply the pixel data correction values to the pixel data.
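A rough Python sketch of the three stages. The local-mean deviation test, the blob-size flare heuristic, and the subtraction-based correction are assumptions chosen for brevity; the abstract does not specify these particulars.

```python
import numpy as np

def detect_non_uniformity(channel, kernel=9, sigma_thresh=3.0):
    """Flag pixels in one color channel that deviate strongly from the local
    mean; return their locations and magnitudes."""
    h, w = channel.shape
    pad = kernel // 2
    padded = np.pad(channel.astype(np.float32), pad, mode="reflect")
    local_mean = sum(padded[i:i + h, j:j + w]
                     for i in range(kernel) for j in range(kernel)) / (kernel * kernel)
    deviation = channel - local_mean
    mask = deviation > sigma_thresh * deviation.std()
    return np.argwhere(mask), deviation[mask]

def looks_like_lens_flare(locations, magnitudes, min_area=50, min_mag=0.05):
    """Crude flare test: a sufficiently large, sufficiently bright region."""
    return len(locations) >= min_area and magnitudes.mean() >= min_mag

def correct_lens_flare(channel, locations, magnitudes):
    """Apply correction values: subtract the detected excess brightness."""
    corrected = channel.astype(np.float32).copy()
    corrected[locations[:, 0], locations[:, 1]] -= magnitudes
    return np.clip(corrected, 0.0, None)
```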
Abstract:
An image processing pipeline may perform noise filtering and image sharpening utilizing common spatial support. A noise filter may perform a spatial noise filtering technique to determine a filtered value of a given pixel based on spatial support obtained from line buffers. Sharpening may also be performed to generate a sharpened value of the given pixel based on spatial support obtained from the same line buffers. A filtered and sharpened version of the pixel may be generated by combining the filtered value of the given pixel with the sharpened value of the given pixel. In at least some embodiments, the noise filter performs spatial noise filtering and image sharpening on a luminance value of the given pixel, when the given pixel is received in a luminance-chrominance encoding.
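A small Python sketch of the shared-support idea, assuming a 3×3 neighborhood as the common spatial support, box filtering for noise reduction, and unsharp-mask style detail for sharpening; these specific filters are illustrative, not the pipeline's.

```python
import numpy as np

def filter_and_sharpen_luma(luma, sharpen_strength=0.5):
    """Compute a noise-filtered value and a sharpened value of each pixel
    from the same 3x3 spatial support, then combine them."""
    luma = luma.astype(np.float32)
    h, w = luma.shape
    support = np.pad(luma, 1, mode="edge")
    # Shared spatial support: the 3x3 neighborhood around every pixel,
    # as would be held in the common line buffers.
    neigh = np.stack([support[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)], axis=0)
    filtered = neigh.mean(axis=0)                 # spatially noise-filtered value
    detail = luma - filtered                      # high-frequency detail, same support
    sharpened = luma + sharpen_strength * detail  # sharpened value of the pixel
    # Combine: the filtered value plus the boost contributed by sharpening.
    return filtered + (sharpened - luma)
```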
Abstract:
Devices, methods, and non-transitory program storage devices are disclosed herein to perform predictive image sensor cropping operations to improve the performance of video image stabilization operations, especially for high resolution image sensors. According to some embodiments, the techniques include, for each of one or more respective images of a first plurality of images: obtaining image information corresponding to one or more images in the first plurality of images captured prior to the respective image; predicting, for the respective image, an image sensor cropping region to be read out from the first image sensor; and then reading out, into a memory, a first cropped version of the respective image comprising only the predicted image sensor cropping region for the respective image. Then, a first video may be produced based, at least in part, on the first cropped versions of the one or more respective images of the first plurality of images.
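A Python sketch of the per-frame predict-then-crop loop, assuming a constant-velocity extrapolation of the crop center from the two most recent frames; the abstract only states that the cropping region is predicted from information of prior images, so the predictor here is an assumption.

```python
import numpy as np

def predict_crop_center(prev_centers, frame_shape, crop_shape):
    """Extrapolate the next crop center from the two most recent centers,
    clamped so the predicted cropping region stays on the sensor."""
    if len(prev_centers) >= 2:
        (y0, x0), (y1, x1) = prev_centers[-2], prev_centers[-1]
        cy, cx = 2 * y1 - y0, 2 * x1 - x0          # constant-velocity prediction
    elif prev_centers:
        cy, cx = prev_centers[-1]
    else:
        cy, cx = frame_shape[0] // 2, frame_shape[1] // 2
    ch, cw = crop_shape
    cy = int(np.clip(cy, ch // 2, frame_shape[0] - ch // 2))
    cx = int(np.clip(cx, cw // 2, frame_shape[1] - cw // 2))
    return cy, cx

def read_out_crop(sensor_frame, center, crop_shape):
    """Read only the predicted cropping region out of the full sensor frame."""
    cy, cx = center
    ch, cw = crop_shape
    return sensor_frame[cy - ch // 2:cy + ch // 2, cx - cw // 2:cx + cw // 2]
```

Reading out only the predicted region keeps the per-frame memory traffic proportional to the crop size rather than the full sensor resolution, which is where the stabilization benefit for high resolution sensors comes from.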
Abstract:
An image processing circuit for performing local tone mapping (LTM) after image warping. The image processing circuit includes a warping circuit that warps an input image to generate a warped image, and an LTM circuit coupled to the warping circuit. The LTM circuit determines an input color component value for a color component of a pixel in a version of the warped image, determines an output color component value for the color component of the pixel based on a mapping of coordinates of pixels in the warped image to coordinates of pixels in the input image, determines a gain value for the pixel as a ratio of the output color component value to the input color component value, and adjusts color component values for color components of the pixel using the gain value to generate adjusted color component values for the color components of the pixel in an output image.
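The gain computation can be summarized in a few lines of Python. Here a single global tone curve applied to one reference component stands in for the coordinate-mapped local tone mapping described above; that simplification, and the choice of green as the reference component, are assumptions for illustration.

```python
import numpy as np

def apply_ltm_gain(warped, tone_curve):
    """Per pixel: map one color component through a tone curve, compute the
    gain as output/input, and apply that gain to all color components."""
    warped = warped.astype(np.float32)
    reference = warped[..., 1]                    # input color component value (green)
    mapped = tone_curve(reference)                # output color component value
    gain = mapped / np.maximum(reference, 1e-6)   # gain = output / input
    return warped * gain[..., None]               # adjust every component by the gain

# Example: a gamma-style tone curve that lifts shadows.
adjusted = apply_ltm_gain(np.random.rand(4, 4, 3), lambda x: np.sqrt(x))
```

Scaling all components by one gain preserves the pixel's hue while its brightness is remapped.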
Abstract:
Embodiments relate to lateral chromatic aberration (LCA) recovery of raw image data generated by image sensors. A chromatic aberration recovery circuit performs chromatic aberration recovery on the raw image data to correct LCA in the resulting full color images using pre-calculated offset values for a subset of pixel colors.
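A brief Python sketch of offset-based recovery, assuming constant (dy, dx) offsets for the red and blue channels of an already-demosaiced image. In the abstract the pre-calculated offsets apply to a subset of pixel colors in the raw data, and real lateral chromatic aberration offsets vary with position in the frame; both simplifications here are for illustration only.

```python
import numpy as np
from scipy.ndimage import shift

def recover_lca(rgb, red_offset=(0.5, -0.3), blue_offset=(-0.4, 0.6)):
    """Resample the red and blue channels by pre-calculated (dy, dx) offsets
    so they realign with green, reducing lateral chromatic aberration."""
    out = rgb.astype(np.float32).copy()
    out[..., 0] = shift(out[..., 0], red_offset, order=1, mode="nearest")
    out[..., 2] = shift(out[..., 2], blue_offset, order=1, mode="nearest")
    return out
```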