Abstract:
Techniques are disclosed for selectively capturing, retaining, and combining multiple sub-exposure images or brackets to yield a final image having diminished motion-induced blur and good noise characteristics. More specifically, after or during the capture of N brackets, the M best (where N > M) may be identified for combining into a single output image. As used here, the term "best" means those brackets that exhibit the least relative motion with respect to one another, with one caveat: integer pixel shifts may be preferred over sub-pixel shifts.
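As an illustration only, the bracket-selection idea above might be sketched in Python roughly as follows, assuming per-bracket translation offsets relative to a reference frame have already been estimated; the function name select_best_brackets and the integer_bonus weighting are hypothetical and not taken from the disclosure.

import numpy as np

def select_best_brackets(offsets, m, integer_bonus=0.5):
    """Pick the m brackets whose estimated shifts relative to the
    reference are smallest, mildly preferring integer-pixel shifts.

    offsets      : list of (dx, dy) translations, one per bracket
    m            : number of brackets to keep (m < len(offsets))
    integer_bonus: score reduction applied to integer-pixel shifts
    """
    scores = []
    for idx, (dx, dy) in enumerate(offsets):
        magnitude = np.hypot(dx, dy)          # amount of relative motion
        is_integer = float(dx).is_integer() and float(dy).is_integer()
        score = magnitude - (integer_bonus if is_integer else 0.0)
        scores.append((score, idx))
    # Lower score = less motion, with a small preference for integer shifts.
    return [idx for _, idx in sorted(scores)[:m]]

# Example: keep the 3 "best" of 5 captured brackets.
shifts = [(0.0, 0.0), (1.0, 0.0), (0.4, 0.7), (3.0, 2.0), (0.0, 1.0)]
print(select_best_brackets(shifts, m=3))      # -> [0, 1, 4]

Lower scores correspond to less relative motion, with integer-pixel shifts receiving a small preference, presumably because they can be combined without resampling.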
Abstract:
Techniques to capture and fuse short- and long-exposure images of a scene from a stabilized image capture device are disclosed. More particularly, the disclosed techniques use not only individual pixel differences between co-captured short- and long-exposure images, but also the spatial structure of occluded regions in the long-exposure images (e.g., areas of the long-exposure image(s) exhibiting blur due to scene object motion). A novel device used to represent this feature of the long-exposure image is a “spatial difference map.” Spatial difference maps may be used to identify pixels in the short- and long-exposure images for fusion and, in one embodiment, may be used to identify pixels from the short-exposure image(s) to filter post-fusion so as to reduce visual discontinuities in the output image.
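A minimal sketch of how such a spatial difference map might drive fusion is given below, assuming aligned, normalized grayscale short- and long-exposure arrays; the neighborhood averaging, the diff_thresh parameter, and the function name fuse_short_long are illustrative assumptions rather than the disclosed method.

import numpy as np
from scipy import ndimage

def fuse_short_long(short_img, long_img, diff_thresh=0.08, radius=2):
    """Fuse aligned short- and long-exposure images (float arrays in [0, 1])."""
    diff = np.abs(short_img - long_img)
    # Average the difference over a small neighborhood so the map reflects
    # the spatial structure of occluded (motion-blurred) regions,
    # not just isolated per-pixel differences.
    spatial_diff = ndimage.uniform_filter(diff, size=2 * radius + 1)
    # Weight toward the short exposure where the map indicates occlusion.
    weight_short = np.clip(spatial_diff / diff_thresh, 0.0, 1.0)
    fused = weight_short * short_img + (1.0 - weight_short) * long_img
    # Lightly filter short-exposure contributions to hide noise seams.
    smoothed_short = ndimage.gaussian_filter(short_img, sigma=1.0)
    fused = np.where(weight_short > 0.99, smoothed_short, fused)
    return fused, spatial_diff

The spatial difference map is returned alongside the fused result so that the regions drawn from the short exposure can be inspected or filtered further.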
Abstract:
Systems, methods, and computer readable media to improve image stabilization operations are described. Novel approaches for fusing non-reference images with a pre-selected reference frame in a set of commonly captured images are disclosed. The fusing approach may use a soft transition, applying a weighted average to ghost/non-ghost pixels to avoid abrupt transitions between neighboring, nearly similar pixels. Additionally, the ghost/non-ghost decision can be made based on a set of neighboring pixels rather than independently for each pixel. An alternative approach may involve performing a multi-resolution decomposition of all the captured images, applying temporal fusion, spatio-temporal fusion, or combinations thereof at each level, and combining the different levels to generate an output image.
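A minimal sketch of the soft ghost/non-ghost weighting follows, assuming the reference and an aligned non-reference frame are normalized float arrays; the neighborhood size, threshold, and function name are illustrative assumptions.

import numpy as np
from scipy import ndimage

def fuse_with_soft_ghost_weights(reference, non_reference,
                                 ghost_thresh=0.1, neighborhood=5):
    """Fuse an aligned non-reference frame into the reference using a
    soft ghost/non-ghost weight computed over a pixel neighborhood."""
    diff = np.abs(reference - non_reference)
    # Decide ghost-ness from a neighborhood of pixels, not pixel-by-pixel.
    local_diff = ndimage.uniform_filter(diff, size=neighborhood)
    # Soft weight: 1 -> trust the non-reference frame, 0 -> ghosted region.
    w = np.clip(1.0 - local_diff / ghost_thresh, 0.0, 1.0)
    # Weighted average avoids abrupt transitions between similar neighbors.
    return (reference + w * non_reference) / (1.0 + w)

The multi-resolution alternative would apply a similar rule at each level of a pyramid decomposition before recombining the levels.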
Abstract:
In some embodiments, a method for compensating for lens motion includes estimating a starting position of a lens assembly associated with captured pixel data. The captured pixel data is captured from an image sensor. In some embodiments, the method further includes calculating, from the starting position and position data received from one or more position sensors, the lens movement associated with the captured pixel data. The lens movement is mapped into pixel movement associated with the captured pixel data. A transform matrix is adjusted to reflect at least the pixel movement. A limit factor associated with the position data is calculated. The captured pixel data is recalculated using the transform matrix and the limit factor.
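A rough sketch of the lens-to-pixel mapping and limited transform described above is shown below, under a simplified purely translational model; the pixel pitch, shift limit, and all function names are hypothetical.

import numpy as np

def lens_to_pixel_shift(lens_shift_mm, pixel_pitch_mm):
    """Map lateral lens movement (mm) to image-plane pixel movement under
    a simple translational model: shift_px ~ lens_shift / pixel_pitch."""
    return lens_shift_mm / pixel_pitch_mm

def build_compensation_transform(start_pos_mm, current_pos_mm,
                                 pixel_pitch_mm=0.0014, max_shift_px=16.0):
    """Build a 3x3 transform that counteracts lens motion during readout."""
    # Lens movement relative to the estimated starting position.
    lens_dx, lens_dy = (np.asarray(current_pos_mm) -
                        np.asarray(start_pos_mm))
    # Map lens movement into pixel movement.
    px_dx = lens_to_pixel_shift(lens_dx, pixel_pitch_mm)
    px_dy = lens_to_pixel_shift(lens_dy, pixel_pitch_mm)
    # Limit factor: scale the correction down when it exceeds a trusted range.
    magnitude = np.hypot(px_dx, px_dy)
    limit = min(1.0, max_shift_px / magnitude) if magnitude > 0 else 1.0
    # Translation that shifts pixels back against the measured movement.
    transform = np.array([[1.0, 0.0, -limit * px_dx],
                          [0.0, 1.0, -limit * px_dy],
                          [0.0, 0.0,  1.0]])
    return transform, limit

The returned transform and limit factor would then be applied when the captured pixel data is recalculated.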
Abstract:
Systems, methods, and computer readable media to improve image stabilization operations are described. A novel combination of image quality and commonality metrics is used to identify a reference frame from a set of commonly captured images which, when the set's other images are combined with it, results in a quality stabilized image. The disclosed image quality and commonality metrics may also be used to optimize the use of a limited amount of image buffer memory during image capture sequences that return more images than the memory may accommodate at one time. Image quality and commonality metrics may also be used to effect the combination of multiple relatively long-exposure images which, when combined with one or more final (relatively) short-exposure images, yields images exhibiting motion-induced blurring in interesting and visually pleasing ways.
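A minimal sketch of reference-frame selection follows, assuming variance of the Laplacian as the quality metric and mean similarity to the other frames as the commonality metric; these particular metrics, their weights, and the function names are assumptions for illustration.

import numpy as np
from scipy import ndimage

def sharpness(image):
    """Quality metric: variance of the Laplacian (higher = sharper)."""
    return ndimage.laplace(image).var()

def commonality(image, others):
    """Commonality metric: mean similarity to the other captured frames."""
    return np.mean([-np.mean(np.abs(image - o)) for o in others])

def pick_reference(frames, quality_weight=1.0, commonality_weight=1.0):
    """Choose the frame that is both sharp and most like its neighbors."""
    best_idx, best_score = 0, -np.inf
    for i, frame in enumerate(frames):
        others = [f for j, f in enumerate(frames) if j != i]
        score = (quality_weight * sharpness(frame) +
                 commonality_weight * commonality(frame, others))
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx

In practice the two scores would need to be normalized to comparable ranges before weighting; the same per-frame scores could also guide which frames to evict when buffer memory is limited.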
Abstract:
Techniques to improve a digital image capture device's ability to stabilize a video stream are presented. According to some embodiments, improved stabilization of captured video frames is provided by intelligently harnessing the complementary effects of both optical image stabilization (OIS) and electronic image stabilization (EIS). In particular, OIS may be used to remove intra-frame motion blur that is typically lower in amplitude and dominates with longer integration times, while EIS may be used to remove residual unwanted frame-to-frame motion that is typically larger in amplitude. The techniques disclosed herein may also leverage information provided from the image capture device's OIS system to perform improved motion blur-aware video stabilization strength modulation, which permits better video stabilization performance in low light conditions, where integration times tend to be longer, thus leading to a greater amount of motion blurring in the output stabilized video.
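A small sketch of the blur-aware strength modulation only, assuming residual blur is approximated from the gyro rate, integration time, and focal length; the linear relaxation and all names and constants are illustrative assumptions rather than the disclosed method.

import numpy as np

def eis_strength(integration_time_s, gyro_rate_rad_s, focal_length_px,
                 max_blur_px=4.0):
    """Modulate electronic-stabilization strength by expected motion blur.

    OIS removes most intra-frame blur; the residual blur expected over the
    exposure is estimated from the gyro rate and integration time, and the
    EIS correction is relaxed as that blur grows (as in low-light capture,
    where integration times are longer)."""
    expected_blur_px = abs(gyro_rate_rad_s) * integration_time_s * focal_length_px
    # Strength 1.0 = full frame-to-frame correction, 0.0 = pass-through.
    return float(np.clip(1.0 - expected_blur_px / max_blur_px, 0.0, 1.0))

def stabilize_frame_translation(frame_shift_px, strength):
    """Apply the residual frame-to-frame correction scaled by the strength."""
    return tuple(-strength * s for s in frame_shift_px)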
Abstract:
Systems, methods, and computer readable media to improve image stabilization operations are described. A novel approach to pixel-based registration of non-reference images to a reference frame in a set of commonly captured images is disclosed which makes use of pyramid decomposition to more efficiently detect corners. The disclosed pixel-based registration operation may also be combined with motion sensor data-based registration approaches to register non-reference images with respect to the reference frame. When the registered non-reference images are combined with the pre-selected reference image, the resulting image is a quality stabilized image.
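A minimal sketch of coarse-level corner detection on an image pyramid follows, using a Harris-style response on a downsampled level and scaling the coordinates back to full resolution; the pyramid depth, Harris parameters, and function names are assumptions for illustration.

import numpy as np
from scipy import ndimage

def downsample(image, levels):
    """Build a coarse pyramid level by repeated blur-and-decimate."""
    for _ in range(levels):
        image = ndimage.gaussian_filter(image, sigma=1.0)[::2, ::2]
    return image

def harris_corners(image, k=0.04, count=64):
    """Detect corners on a (coarse) pyramid level via the Harris response."""
    iy, ix = np.gradient(image)
    ixx = ndimage.gaussian_filter(ix * ix, sigma=1.5)
    iyy = ndimage.gaussian_filter(iy * iy, sigma=1.5)
    ixy = ndimage.gaussian_filter(ix * iy, sigma=1.5)
    response = (ixx * iyy - ixy ** 2) - k * (ixx + iyy) ** 2
    flat = np.argsort(response, axis=None)[-count:]
    return np.column_stack(np.unravel_index(flat, image.shape))

def coarse_corners(image, levels=2):
    """Corners found on the coarse level, scaled back to full resolution."""
    return harris_corners(downsample(image, levels)) * (2 ** levels)

Detecting corners on the coarse level reduces the search cost; the scaled-back coordinates would then seed per-pixel registration against the reference frame, optionally combined with motion sensor data.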
Abstract:
Systems, methods, and computer readable media to improve image stabilization operations and other image processing operations are described. A novel combination of interleaved image capture and image processing operations, e.g., image registration operations, may be employed on a bracketed capture of still images. Such techniques may result in improved camera performance and processing efficiency, as well as decreased shot-to-shot time intervals. In another embodiment, an image fusion portion of an image post-processing pipeline may also be performed in an interleaved fashion, such that each image in the sequence of obtained bracketed images may be incrementally added to an output composite image after it has been aligned with the preceding image or images from the sequence.
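A minimal sketch of the interleaved capture-and-fusion loop is shown below, assuming pure translational alignment via phase correlation and a simple running average as the composite; the alignment model and function names are illustrative assumptions.

import numpy as np

def align_translation(reference, frame):
    """Estimate an integer (dy, dx) shift via phase correlation and apply it."""
    cross_power = np.fft.fft2(reference) * np.conj(np.fft.fft2(frame))
    cross_power /= np.abs(cross_power) + 1e-12
    correlation = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Wrap shifts larger than half the image into negative offsets.
    dy -= correlation.shape[0] if dy > correlation.shape[0] // 2 else 0
    dx -= correlation.shape[1] if dx > correlation.shape[1] // 2 else 0
    return np.roll(frame, shift=(dy, dx), axis=(0, 1))

def interleaved_fusion(frame_stream):
    """Register and fuse each bracketed frame as soon as it is captured,
    rather than waiting for the complete sequence."""
    composite, count = None, 0
    for frame in frame_stream:            # capture and processing interleave
        if composite is None:
            composite, count = frame.astype(float), 1
            continue
        aligned = align_translation(composite / count, frame)
        composite += aligned              # incremental running average
        count += 1
    return composite / count

Because each frame is registered and folded into the composite as it arrives, the per-shot buffering requirement and the shot-to-shot interval both shrink relative to batch processing of the full bracket.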