Abstract:
Traditionally, time-lapse videos are constructed from images captured at time intervals called “temporal points of interest” or “temporal POIs.” Disclosed herein are systems and methods of constructing improved, motion-stabilized time-lapse videos using temporal points of interest and image similarity comparisons. According to some embodiments, a “burst” of images may be captured, centered around the aforementioned temporal points of interest. Then, each burst sequence of images may be analyzed, e.g., by performing an image similarity comparison between each image in the burst sequence and the image selected at the previous temporal point of interest. Selecting the image from a given burst that is most similar to the previously selected image, while minimizing the amount of motion relative to that image, allows the system to improve the quality of the resultant time-lapse video by discarding “outlier” or other undesirable images captured in the burst sequence and motion-stabilizing the selected image.
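The abstract does not specify an implementation; the selection step could be sketched as follows, where the similarity metric (negative mean absolute pixel difference), the array representation, and all function names are illustrative assumptions rather than the patented method:

```python
import numpy as np

def select_from_burst(prev_selected, burst):
    """Pick the burst image most similar to the previously selected frame.

    Similarity is approximated here as negative mean absolute pixel
    difference; a real system might use histograms or feature matching.
    """
    best_img, best_score = None, -np.inf
    for img in burst:
        score = -np.mean(np.abs(img.astype(np.float32) -
                                prev_selected.astype(np.float32)))
        if score > best_score:
            best_img, best_score = img, score
    return best_img

# Tiny demo: the middle burst frame nearly matches the previous selection,
# while the other two act as "outlier" frames to be discarded.
prev = np.full((4, 4), 100, dtype=np.uint8)
burst = [np.full((4, 4), 30, dtype=np.uint8),   # dark outlier (e.g., occluded)
         np.full((4, 4), 101, dtype=np.uint8),  # close match
         np.full((4, 4), 200, dtype=np.uint8)]  # overexposed outlier
chosen = select_from_burst(prev, burst)
```

Repeating this per temporal POI, with each chosen frame becoming `prev_selected` for the next burst, yields the frame sequence for the time-lapse.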
Abstract:
In some embodiments, a method for compensating for lens motion includes estimating a starting position of a lens assembly associated with captured pixel data. The captured pixel data is captured from an image sensor. In some embodiments, the method further includes calculating, from the starting position and position data received from one or more position sensors, lens movement associated with the captured pixel data. The lens movement is mapped into pixel movement associated with the captured pixel data. A transform matrix is adjusted to reflect at least the pixel movement. A limit factor associated with the position data is calculated. The captured pixel data is recalculated using the transform matrix and the limit factor.
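A minimal sketch of the mapping and transform steps, assuming a simple linear lens-to-pixel gain and a 2x3 affine translation matrix; the gain constant, the limit-factor semantics, and both function names are hypothetical, as the abstract does not define them:

```python
import numpy as np

# Hypothetical device-specific gain: how many pixels of image shift
# result from one micron of lens movement.
PIXELS_PER_MICRON = 0.8

def lens_to_pixel_shift(start_pos_um, sensed_pos_um):
    """Map lens movement (microns, from position sensors) into pixel movement."""
    lens_delta = np.asarray(sensed_pos_um, dtype=np.float64) - \
                 np.asarray(start_pos_um, dtype=np.float64)
    return lens_delta * PIXELS_PER_MICRON

def build_transform(pixel_shift, limit_factor):
    """2x3 affine matrix translating pixels opposite to the measured shift.

    The limit factor (here a scalar in [0, 1]) constrains how much of the
    computed correction is actually applied.
    """
    dx, dy = pixel_shift * limit_factor
    return np.array([[1.0, 0.0, -dx],
                     [0.0, 1.0, -dy]])

# Lens moved 10 um in x and 5 um in y; apply half-strength correction.
M = build_transform(lens_to_pixel_shift([0, 0], [10, 5]), 0.5)
```

The resulting matrix would then be fed to a warp step (e.g., an affine resampler) to recalculate the captured pixel data.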
Abstract:
Techniques to capture and fuse short- and long-exposure images of a scene from a stabilized image capture device are disclosed. More particularly, the disclosed techniques use not only individual pixel differences between co-captured short- and long-exposure images, but also the spatial structure of occluded regions in the long-exposure images (e.g., areas of the long-exposure image(s) exhibiting blur due to scene object motion). A novel device used to represent this feature of the long-exposure image is a “spatial difference map.” Spatial difference maps may be used to identify pixels in the short- and long-exposure images for fusion and, in one embodiment, may be used to identify pixels from the short-exposure image(s) to filter post-fusion so as to reduce visual discontinuities in the output image.
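One way to read the spatial difference map is as a thresholded difference image that is then dilated, so that the map captures the spatial structure of occluded regions rather than isolated noisy pixels. The following sketch assumes grayscale arrays, a fixed threshold, and a 3x3 dilation; none of these specifics come from the abstract:

```python
import numpy as np

def spatial_difference_map(short_img, long_img, thresh=20):
    """Binary map of pixels where the long exposure differs markedly from
    the short exposure (candidate motion-blur/occlusion regions), dilated
    with a 3x3 kernel so connected regions are flagged as a whole."""
    diff = np.abs(long_img.astype(np.int16) - short_img.astype(np.int16))
    mask = (diff > thresh).astype(np.uint8)
    padded = np.pad(mask, 1)
    dilated = np.zeros_like(mask)
    h, w = mask.shape
    for dy in (-1, 0, 1):           # 3x3 dilation via shifted ORs
        for dx in (-1, 0, 1):
            dilated |= padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return dilated

def fuse(short_img, long_img, diff_map):
    """Take short-exposure pixels where the map flags motion; take the
    cleaner long-exposure pixels everywhere else."""
    return np.where(diff_map.astype(bool), short_img, long_img)

# Demo: one "moving object" pixel in the long exposure gets replaced,
# along with its dilated neighborhood, by short-exposure data.
short_img = np.full((5, 5), 50, dtype=np.uint8)
long_img = short_img.copy()
long_img[2, 2] = 200
dmap = spatial_difference_map(short_img, long_img)
fused = fuse(short_img, long_img, dmap)
```

The abstract's post-fusion filtering of short-exposure pixels (to hide visual discontinuities) would operate on the pixels selected by `dmap`; it is omitted here.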
Abstract:
Techniques are disclosed for selectively capturing, retaining, and combining multiple sub-exposure images or brackets to yield a final image having diminished motion-induced blur and good noise characteristics. More specifically, after or during the capture of N brackets, the M best may be identified for combining into a single output image, where N > M. As used here, the term “best” means those brackets that exhibit the least amount of relative motion with respect to one another, with one caveat: integer-pixel shifts may be preferred over sub-pixel shifts.
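Given per-bracket shifts relative to a reference frame, the "M best of N" selection might look like the sketch below. The cost function, and in particular the penalty weight that encodes the integer-over-sub-pixel preference (integer shifts can be aligned without resampling), are assumptions:

```python
def select_best_brackets(shifts, m):
    """Pick indices of the m brackets with least relative motion.

    shifts: list of (dx, dy) offsets of each bracket relative to a
    reference. Sub-pixel shifts incur an assumed fixed penalty, so an
    integer shift of equal or slightly larger magnitude wins.
    """
    def cost(shift):
        dx, dy = shift
        magnitude = (dx * dx + dy * dy) ** 0.5
        subpixel = not (float(dx).is_integer() and float(dy).is_integer())
        return magnitude + (1.0 if subpixel else 0.0)  # assumed penalty weight

    order = sorted(range(len(shifts)), key=lambda i: cost(shifts[i]))
    return sorted(order[:m])

# N=4 brackets, keep M=2: the sub-pixel (0.5, 0.5) bracket is skipped in
# favor of the integer (1, 0) bracket despite its larger raw magnitude.
best = select_best_brackets([(0, 0), (0.5, 0.5), (1, 0), (3, 4)], 2)
```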
Abstract:
Techniques to improve a digital image capture device's ability to stabilize a video stream are presented. According to some embodiments, improved stabilization of captured video frames is provided by intelligently harnessing the complementary effects of both optical image stabilization (OIS) and electronic image stabilization (EIS). In particular, OIS may be used to remove intra-frame motion blur that is typically lower in amplitude and dominates with longer integration times, while EIS may be used to remove residual unwanted frame-to-frame motion that is typically larger in amplitude. The techniques disclosed herein may also leverage information provided from the image capture device's OIS system to perform improved motion blur-aware video stabilization strength modulation, which permits better video stabilization performance in low light conditions, where integration times tend to be longer, thus leading to a greater amount of motion blurring in the output stabilized video.
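The blur-aware strength modulation could be sketched as a weight on the EIS correction that decreases as integration time grows (since aggressive frame-to-frame warping of blurry low-light frames would make the blur trails conspicuous). The linear falloff, its floor, and the 33 ms reference frame time are all illustrative assumptions:

```python
def eis_strength(integration_time_ms, max_time_ms=33.0):
    """Modulate EIS strength by integration time.

    Long integration times (low light) mean more intra-frame motion blur,
    so the EIS correction is backed off. Returns a weight in [0.3, 1.0];
    the linear falloff and 0.3 floor are assumed tuning choices.
    """
    t = min(integration_time_ms, max_time_ms) / max_time_ms
    return 1.0 - 0.7 * t

def stabilized_offset(raw_offset_px, integration_time_ms):
    """Scale the EIS-computed frame-to-frame correction by the modulated
    strength before warping the frame."""
    s = eis_strength(integration_time_ms)
    return tuple(s * o for o in raw_offset_px)

# Bright scene (short integration): full correction is applied.
full = stabilized_offset((10.0, 4.0), 0.0)
# Low light (long integration): correction is attenuated to the floor.
dim = stabilized_offset((10.0, 4.0), 33.0)
```

In the scheme the abstract describes, OIS would meanwhile suppress the intra-frame blur itself, with the OIS system's telemetry feeding this modulation.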