Abstract:
Systems, methods, and computer readable media to rapidly identify and track an arbitrarily sized object through a temporal sequence of frames are described. The object being tracked may initially be identified via a specified or otherwise known region-of-interest (ROI). A portion of that ROI can be used to generate an initial or reference histogram and luminosity measure, metrics that may be used to identify the ROI in a subsequent frame. For a frame subsequent to the initial or reference frame, a series of putative ROIs (each having its own location and size) may be identified and the “best” of the identified ROIs selected. As used here, the term “best” simply means that the more similar two frames' histograms and luminosity measures are, the better one is with respect to the other.
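The selection step can be sketched as follows. This is an illustrative assumption, not the patented method: the histogram construction, the intersection-based distance, and the luminosity weighting are all invented here to show how candidate ROIs might be scored against a reference.

```python
def histogram(pixels, bins=8, max_val=256):
    """Build a normalized intensity histogram for a flat list of pixel values."""
    hist = [0] * bins
    width = max_val // bins
    for p in pixels:
        hist[min(p // width, bins - 1)] += 1
    # Normalize so ROIs of different sizes remain comparable.
    total = float(len(pixels))
    return [h / total for h in hist]

def luminosity(pixels):
    """Mean intensity of the ROI."""
    return sum(pixels) / float(len(pixels))

def roi_distance(ref_pixels, cand_pixels, lum_weight=0.5):
    """Lower is better: histogram intersection distance plus luminosity gap."""
    h_ref, h_cand = histogram(ref_pixels), histogram(cand_pixels)
    hist_dist = 1.0 - sum(min(a, b) for a, b in zip(h_ref, h_cand))
    lum_dist = abs(luminosity(ref_pixels) - luminosity(cand_pixels)) / 255.0
    return hist_dist + lum_weight * lum_dist

def best_roi(ref_pixels, candidates):
    """Return the index of the candidate ROI most similar to the reference."""
    return min(range(len(candidates)),
               key=lambda i: roi_distance(ref_pixels, candidates[i]))

ref = [10, 12, 200, 210, 205, 11]
cands = [[9, 13, 198, 212, 207, 10],       # close match to the reference
         [100, 110, 120, 130, 140, 150]]   # very different content
print(best_roi(ref, cands))  # → 0
```

The "best" ROI is simply the one minimizing the combined distance, matching the abstract's notion that more-similar histograms and luminosity measures are better.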
Abstract:
Techniques to permit a digital image capture device to stabilize a video stream in real-time (during video capture operations) are presented. In general, techniques are disclosed for stabilizing video images using an overscan region and a look-ahead technique enabled by buffering a number of video input frames before generating a first stabilized video output frame. (Capturing a larger image than is displayed creates a buffer of pixels around the edge of an image; overscan is the term given to this buffer of pixels.) More particularly, techniques are disclosed for buffering an initial number of input frames so that a “current” frame can use motion data from both “past” and “future” frames to adjust the strength of a stabilization metric value so as to keep the current frame within its overscan. This look-ahead and look-behind capability permits a smoother stabilizing regime with fewer abrupt adjustments.
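A minimal sketch of the look-ahead/look-behind idea, under stated assumptions: per-frame camera offsets are buffered, the current frame is pulled toward an average over both past and future samples, and the correction is clamped so the frame never leaves its overscan. The window size, plain averaging, and clamp below are invented for illustration.

```python
def stabilize(offsets, window=2, overscan=5.0):
    """offsets: raw per-frame camera positions (1-D for simplicity).
    Returns the correction applied to each frame, limited to +/- overscan."""
    corrections = []
    for i, raw in enumerate(offsets):
        # Buffered frames let the "current" frame see past AND future motion.
        lo, hi = max(0, i - window), min(len(offsets), i + window + 1)
        smoothed = sum(offsets[lo:hi]) / (hi - lo)
        corr = smoothed - raw                        # move frame toward smooth path
        corr = max(-overscan, min(overscan, corr))   # stay inside the overscan
        corrections.append(corr)
    return corrections

shaky = [0.0, 4.0, -3.0, 5.0, -2.0, 1.0]
corr = stabilize(shaky)
print(all(abs(c) <= 5.0 for c in corr))  # → True
```

Because each correction averages over future samples as well, adjustments change gradually rather than abruptly, which is the smoothness benefit the abstract describes.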
Abstract:
Systems, methods, and computer readable media to improve image stabilization operations are described. A novel combination of image quality and commonality metrics is used to identify a reference frame from a set of commonly captured images which, when the set's other images are combined with it, results in a quality stabilized image. The disclosed image quality and commonality metrics may also be used to optimize the use of a limited amount of image buffer memory during image capture sequences that return more images than the memory may accommodate at one time. Image quality and commonality metrics may also be used to effect the combination of multiple relatively long-exposure images which, when combined with one or more final (relatively) short-exposure images, yields images exhibiting motion-induced blurring in interesting and visually pleasing ways.
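One hedged way to picture the reference-frame selection: score each frame on quality (say, sharpness) and on how well it agrees with the rest of the set, then pick the frame with the best combined score. The 1-D feature stand-ins, similarity function, and weighting below are assumptions for illustration only.

```python
def commonality(idx, features):
    """Mean similarity of frame idx to every other frame (1 / (1 + distance))."""
    dists = [abs(features[idx] - f)
             for j, f in enumerate(features) if j != idx]
    return sum(1.0 / (1.0 + d) for d in dists) / len(dists)

def pick_reference(quality, features, alpha=0.5):
    """Return the index of the frame with the best quality/commonality trade-off."""
    scores = [alpha * q + (1 - alpha) * commonality(i, features)
              for i, q in enumerate(quality)]
    return max(range(len(scores)), key=lambda i: scores[i])

quality  = [0.9, 0.95, 0.4]   # per-frame sharpness scores (assumed)
features = [1.0, 1.1, 9.0]    # 1-D stand-in for image content
print(pick_reference(quality, features))  # → 1
```

Frame 2 is sharpest-but-one yet wildly different from the others (a poor merge partner), so the combined metric favors frame 1, which is both sharp and representative of the set.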
Abstract:
In some embodiments, a method for compensating for lens motion includes estimating a starting position of a lens assembly associated with captured pixel data. The captured pixel data is captured from an image sensor. In some embodiments, the method further includes calculating, from the starting position and position data received from one or more position sensors, lens movement associated with the captured pixel data. The lens movement is mapped into pixel movement associated with the captured pixel data. A transform matrix is adjusted to reflect at least the pixel movement. A limit factor associated with the position data is calculated. The captured pixel data is recalculated using the transform matrix and the limit factor.
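The sequence of steps reads naturally as a small pipeline. The sketch below is illustrative only: the lens-to-pixel scale, the limit-factor rule, and the 2x3 affine form are all assumptions rather than the claimed method.

```python
PIXELS_PER_MICRON = 2.0   # assumed lens-to-pixel mapping
MAX_MOVE_UM = 5.0         # assumed threshold for plausible lens movement

def lens_movement(start_pos, sensor_positions):
    """Net lens movement over the exposure: last sensed position minus start."""
    return sensor_positions[-1] - start_pos

def to_pixels(lens_move_um):
    """Map lens movement (microns) into pixel movement."""
    return lens_move_um * PIXELS_PER_MICRON

def limit_factor(lens_move_um, max_move_um=MAX_MOVE_UM):
    """Scale corrections down when sensed movement looks implausibly large."""
    return min(1.0, max_move_um / abs(lens_move_um)) if lens_move_um else 1.0

def adjusted_transform(start_pos, sensor_positions):
    """2x3 affine transform compensating the sensed horizontal lens shift."""
    move = lens_movement(start_pos, sensor_positions)
    shift = to_pixels(move) * limit_factor(move)
    return [[1.0, 0.0, -shift],    # identity rotation/scale, counter-translation
            [0.0, 1.0, 0.0]]

print(adjusted_transform(0.0, [1.0, 2.0, 3.0]))  # → [[1.0, 0.0, -6.0], [0.0, 1.0, 0.0]]
```

The limit factor caps the correction when position-sensor data suggests movement too large to trust, which mirrors the abstract's use of a limit factor alongside the transform matrix.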
Abstract:
A method for lens position estimation can include receiving from a lens driver a drive current value representing a current to be provided to a motor to position a camera lens of an electronic device, detecting an orientation of the electronic device using a motion sensor, determining a gravity vector based upon the orientation, and computing an estimated value of a lens position of the camera lens of the electronic device based upon the drive current value and gravity vector.
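A simple linear model illustrates the estimation step. The coefficients `K_CURRENT` and `K_GRAVITY` are invented for this sketch; a real device would calibrate how drive current and the gravity component along the optical axis translate into lens displacement.

```python
import math

K_CURRENT = 0.8   # assumed microns of lens travel per mA of drive current
K_GRAVITY = 2.0   # assumed microns of sag per g of gravity along the axis

def gravity_along_axis(pitch_deg):
    """Component of gravity along the lens axis for a given device pitch."""
    return math.cos(math.radians(pitch_deg))

def estimate_lens_position(drive_ma, pitch_deg):
    """Estimated lens position: motor/current term plus gravity-sag term."""
    return K_CURRENT * drive_ma + K_GRAVITY * gravity_along_axis(pitch_deg)

# Face-up (lens axis vertical): gravity contributes fully.
print(round(estimate_lens_position(10.0, 0.0), 2))   # → 10.0
# Device held sideways (lens axis horizontal): gravity term vanishes.
print(round(estimate_lens_position(10.0, 90.0), 2))  # → 8.0
```

The two prints show why the gravity vector matters: the same drive current positions the lens differently depending on device orientation, which is exactly the dependency the method accounts for.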
Abstract:
Devices, methods, and non-transitory computer readable media are disclosed herein to repair or mitigate the appearance of unwanted reflection artifacts in captured video image streams. These unwanted reflection artifacts often present themselves as brightly-colored spots, circles, rings, or halos that reflect the shape of a bright light source in the captured image. These artifacts, also referred to herein as “ghosts” or “green ghosts” (due to often having a greenish tint), are typically located in regions of the captured images where there is not actually a bright light source located in the image. In fact, such unwanted reflection artifacts often present themselves on the image sensor across the principal point of the lens from where the actual bright light source in the captured image is located. Such devices, methods and computer readable media may be configured to detect, track, and repair such unwanted reflection artifacts in an intelligent and efficient fashion.
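The mirrored-geometry observation suggests a simple search heuristic, sketched below with invented names: reflect the detected bright-source position through the principal point to get a candidate ghost location, then use a crude tint test to narrow candidates. Both functions are illustrative assumptions, not the disclosed detector.

```python
def predict_ghost_location(light_xy, principal_xy):
    """Reflect the bright-source position through the lens's principal point."""
    lx, ly = light_xy
    px, py = principal_xy
    return (2 * px - lx, 2 * py - ly)

def looks_greenish(rgb, margin=1.2):
    """Crude tint test: the green channel dominates both red and blue."""
    r, g, b = rgb
    return g > margin * r and g > margin * b

# Light source in the upper-left, principal point at image center (960, 540):
print(predict_ghost_location((100, 100), (960, 540)))  # → (1820, 980)
print(looks_greenish((80, 200, 90)))                   # → True
```

Searching only near the predicted mirrored location, rather than the whole frame, is one way such detection could be made efficient, consistent with the abstract's emphasis on intelligent, efficient repair.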
Abstract:
Electronic devices, computer readable storage media, and related methods are disclosed herein that are configured to stitch together images captured by multiple image capture devices of an image capture system. In particular, various techniques are employed to intelligently extend (and, optionally, smooth) the correspondence mapping between first and second images captured by image capture devices having different fields of view, e.g., fields of view that are at least partially overlapping and at least partially non-overlapping. The techniques may also include determining a “transitional” correspondence in a transitional region between the overlapping and non-overlapping regions of the fields of view, as well as performing one or more appearance correction operations to account for the different properties of the different image capture devices used to capture the first and second images. The techniques described herein may be employed to produce enhanced output images in either the still image or the video context.
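The transitional-correspondence idea can be illustrated with a 1-D blend: inside the overlap the mapping comes from matched features, outside it the mapping is extrapolated, and in a transitional band the two are mixed so the warp changes smoothly. The linear ramp and band boundaries below are assumptions for illustration.

```python
def blend_correspondence(x, overlap_end, transition_end, matched, extrapolated):
    """Per-column offset between the two images at column x.
    matched/extrapolated are functions of x returning an offset."""
    if x <= overlap_end:                  # fully inside the overlapping region
        return matched(x)
    if x >= transition_end:               # fully inside the non-overlapping region
        return extrapolated(x)
    # Transitional region: linearly ramp from matched to extrapolated mapping.
    t = (x - overlap_end) / float(transition_end - overlap_end)
    return (1 - t) * matched(x) + t * extrapolated(x)

matched = lambda x: 4.0    # offset measured from feature matches (assumed)
extrap  = lambda x: 10.0   # offset predicted by the extended mapping (assumed)
print(blend_correspondence(50,  100, 200, matched, extrap))  # → 4.0
print(blend_correspondence(150, 100, 200, matched, extrap))  # → 7.0
print(blend_correspondence(250, 100, 200, matched, extrap))  # → 10.0
```

Without the transitional band, the warp would jump discontinuously at the overlap boundary; the blend is one way to realize the smoothing the abstract mentions.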
Abstract:
Techniques to improve a digital image capture device's ability to stabilize a video stream—while enforcing desired stabilization constraints on particular images in the video stream—are presented that utilize an overscan region and a look-ahead technique enabled by buffering a number of video input frames before generating a first stabilized video output frame. More particularly, techniques are disclosed for buffering an initial number of input frames so that a “current” frame can use motion data from both “past” and “future” frames to adjust the value of a stabilization strength parameter and/or the weighted contribution of particular frames from the buffer in the determination of stabilization motion values for the current frame. Such techniques keep the current frame within its overscan and ensure that the stabilization constraints are enforced, while maintaining desired smoothness in the video stream. In some embodiments, the stabilization constraint may comprise a maximum allowed frame displacement.
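One hedged way to picture constraint enforcement: after stabilization corrections have been smoothed over the frame buffer, any per-frame maximum-allowed-displacement constraint is applied as a clamp tighter than the overscan budget. The names and numbers below are invented for illustration.

```python
def constrained_corrections(smoothed_corrs, overscan=6.0, constraints=None):
    """Apply the overscan budget plus any per-frame max-displacement
    constraints to already-smoothed stabilization corrections."""
    constraints = constraints or {}
    out = []
    for i, corr in enumerate(smoothed_corrs):
        # The tightest active bound wins: overscan, or this frame's constraint.
        limit = min(overscan, constraints.get(i, overscan))
        out.append(max(-limit, min(limit, corr)))
    return out

corrs = [1.0, 5.5, -4.0, 3.0]   # smoothed corrections from the buffer (assumed)
# Frame 1 carries a max-displacement constraint of 2.0 pixels:
print(constrained_corrections(corrs, constraints={1: 2.0}))
# → [1.0, 2.0, -4.0, 3.0]
```

In practice the stabilization strength or per-frame weights would also be adjusted, as the abstract notes, so that the clamp rarely engages abruptly; this sketch shows only the final enforcement step.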
Abstract:
Techniques to improve a digital image capture device's ability to stabilize a video stream are presented. According to some embodiments, improved stabilization of captured video frames is provided by intelligently harnessing the complementary effects of both optical image stabilization (OIS) and electronic image stabilization (EIS). In particular, OIS may be used to remove intra-frame motion blur that is typically lower in amplitude and dominates with longer integration times, while EIS may be used to remove residual unwanted frame-to-frame motion that is typically larger in amplitude. The techniques disclosed herein may also leverage information provided from the image capture device's OIS system to perform improved motion blur-aware video stabilization strength modulation, which permits better video stabilization performance in low light conditions, where integration times tend to be longer, thus leading to a greater amount of motion blurring in the output stabilized video.
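A sketch of the blur-aware strength modulation under stated assumptions: a crude blur proxy (integration time multiplied by residual motion during exposure, which OIS telemetry could help estimate) backs off EIS strength as expected blur grows. The formula and constants are invented for illustration.

```python
def eis_strength(base_strength, integration_ms, residual_motion,
                 blur_sensitivity=0.02):
    """Scale EIS strength down as the estimated motion blur grows."""
    blur_estimate = integration_ms * residual_motion   # crude blur proxy
    return base_strength / (1.0 + blur_sensitivity * blur_estimate)

# Bright scene: short integration, little blur -> near-full EIS strength.
print(round(eis_strength(1.0, 5.0, 0.5), 3))   # → 0.952
# Low light: long integration -> EIS strength backed off.
print(round(eis_strength(1.0, 60.0, 0.5), 3))  # → 0.625
```

Reducing EIS strength when frames are blurry avoids stabilizing a sharp-looking trajectory through visibly smeared content, which is the low-light behavior the abstract targets.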