Abstract:
Real-time video stabilization for mobile devices based on on-board motion sensing. In accordance with a method embodiment of the present invention, a first image frame from a camera at a first time is accessed. A second image frame from the camera at a subsequent time is accessed. A crop polygon around scene content common to the first image frame and the second image frame is identified. Movement information describing movement of the camera in an interval between the first time and the second time is accessed. The crop polygon is warped using the movement information to remove motion distortions of the second image frame. The warping may include defining a virtual camera that remains static when the movement of the camera is below a movement threshold. The movement information may describe the movement of the camera at each scan line of the second image frame.
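A minimal one-axis sketch of the virtual-camera behavior described above: the virtual camera holds still while the measured motion stays below a threshold, and otherwise eases toward the real camera pose. All names, parameters, and the smoothing rule are illustrative assumptions, not taken from the patent; the crop-polygon warp itself is omitted.

```python
def stabilize_virtual_camera(gyro_angles, threshold=0.01, smoothing=0.9):
    """Compute a per-frame virtual camera angle from measured gyro angles.

    The virtual camera remains static while the deviation from the real
    camera is below `threshold`; otherwise it is eased toward the real
    pose with exponential smoothing (an illustrative choice).
    """
    virtual = 0.0
    poses = []
    for angle in gyro_angles:
        if abs(angle - virtual) > threshold:
            virtual = smoothing * virtual + (1.0 - smoothing) * angle
        poses.append(virtual)
    return poses
```

Small jitters below the threshold leave the virtual camera untouched, while a large motion pulls it partway toward the new pose.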
Abstract:
A number of images of a scene are captured and stored. The images are captured over a range of values for an attribute (e.g., a camera setting). One of the images is displayed. A location of interest in the displayed image is identified. Regions that correspond to the location of interest are identified in each of the images. Those regions are evaluated to identify which of the regions is rated highest with respect to the attribute relative to the other regions. The image that includes the highest-rated region is then displayed.
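The selection step above can be sketched as follows, using sharpness as a stand-in for the rated attribute. The gradient-energy metric and all function names are illustrative assumptions; the abstract does not specify how regions are rated.

```python
def region_sharpness(image, top, left, size):
    """Gradient-energy score of a size x size region of a 2-D pixel grid
    (an illustrative stand-in for rating a region on the attribute)."""
    score = 0
    for r in range(top, top + size):
        for c in range(left, left + size - 1):
            d = image[r][c + 1] - image[r][c]
            score += d * d
    return score

def select_best_image(images, top, left, size):
    """Return the index of the image whose region around the location of
    interest rates highest relative to the same region in the others."""
    return max(range(len(images)),
               key=lambda i: region_sharpness(images[i], top, left, size))
```

The image at the returned index would then be displayed in place of the current one.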
Abstract:
A computer implemented method of determining a latent image from an observed image is disclosed. The method comprises implementing a plurality of image processing operations within a single optimization framework, wherein the single optimization framework comprises solving a linear minimization expression. The method further comprises mapping the linear minimization expression onto at least one non-linear solver. Further, the method comprises iteratively solving the linear minimization expression using the non-linear solver in order to extract the latent image from the observed image, wherein the linear minimization expression comprises: a data term, and a regularization term, and wherein the regularization term comprises a plurality of non-linear image priors.
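One common way to solve a data term plus a non-linear prior by iterated linear steps is iteratively reweighted least squares (IRLS), sketched here in 1-D with a single sparse-gradient prior. This is a generic illustration of the idea, not the patent's solver; the weighting scheme and parameters are assumptions.

```python
def irls_denoise(y, lam=0.5, iters=20, eps=1e-3):
    """Minimize |x - y|^2 (data term) plus an L1-like gradient prior
    (regularization term) by re-linearizing the prior each iteration:
    each neighbor difference gets weight 1/(|difference| + eps), turning
    the non-linear prior into a weighted quadratic that is solved
    pointwise. Boundary samples are kept fixed at the observation."""
    x = list(y)
    for _ in range(iters):
        for i in range(1, len(x) - 1):
            wl = 1.0 / (abs(x[i] - x[i - 1]) + eps)
            wr = 1.0 / (abs(x[i] - x[i + 1]) + eps)
            x[i] = (y[i] + lam * (wl * x[i - 1] + wr * x[i + 1])) \
                   / (1.0 + lam * (wl + wr))
    return x
```

An isolated spike is pulled toward its neighbors, while an already-smooth signal is left unchanged.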
Abstract:
A set of images is processed to modify and register the images to a reference image in preparation for blending the images to create a high-dynamic range image. To modify and register a source image to a reference image, a processing unit generates correspondence information for the source image based on a global correspondence algorithm, generates a warped source image based on the correspondence information, estimates one or more color transfer functions for the source image, and fills the holes in the warped source image. The holes in the warped source image are filled based on either a rigid transformation of a corresponding region of the source image or a transformation of the reference image based on the color transfer functions.
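The color-transfer and hole-filling steps can be sketched in a reduced form: a single least-squares gain stands in for the color transfer functions, and hole pixels in the warped source are filled by transferring the reference through it. Function names, the hole marker, and the single-gain model are all illustrative assumptions.

```python
def estimate_gain(source_px, reference_px):
    """Least-squares gain mapping reference intensities to source
    intensities (a one-parameter stand-in for a color transfer
    function)."""
    num = sum(s * r for s, r in zip(source_px, reference_px))
    den = sum(r * r for r in reference_px)
    return num / den

def fill_holes(warped, reference, gain, hole=-1):
    """Fill pixels marked as holes in the warped source by applying the
    estimated color transfer to the corresponding reference pixels."""
    return [gain * r if w == hole else w
            for w, r in zip(warped, reference)]
```

A hole in the warped source is thus replaced by a reference pixel re-toned to match the source exposure.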
Abstract:
System and method of displaying images in temporal superresolution by multiplicative superposition of cascaded display layers integrated in a display device. Using an original video with a target temporal resolution as a priori knowledge, a factorization process is performed to derive respective image data for presentation on each display layer. The multiple layers are refreshed at staggered intervals to synthesize a video with an effective refresh rate exceeding that of each individual display layer, e.g., by a factor equal to the number of layers. Further, optically averaging neighboring pixels can minimize artifacts.
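The refresh-rate multiplication can be illustrated for a single pixel and two layers: the perceived intensity is the product of the layers, and because the layers refresh at staggered times, the product can change every step even though each layer changes only every other step. The example values are illustrative; the factorization that produces the layer data is omitted.

```python
def perceived_frames(layer1, layer2):
    """Perceived per-step intensity under multiplicative superposition
    of two stacked layers (single pixel, illustrative)."""
    return [a * b for a, b in zip(layer1, layer2)]

def transitions(seq):
    """Count the steps at which a sequence changes value."""
    return sum(1 for a, b in zip(seq, seq[1:]) if a != b)
```

With layer refreshes staggered by half a period, the product sequence transitions at every step, i.e. at twice the rate of either layer alone.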
Abstract:
System and method of displaying images in spatial/temporal superresolution by multiplicative superposition of cascaded display layers integrated in a display device. Using an original image with a target spatial/temporal resolution as a priori knowledge, a factorization process is performed to derive respective image data for presentation on each display layer. The cascaded display layers may be progressive and laterally shifted relative to one another, resulting in an effective spatial resolution exceeding the native display resolutions of the individual layers. Factorized images may be refreshed on the respective display layers in or out of synchronization.
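The lateral half-pixel shift can be sketched in 1-D: each fine sub-pixel on the doubled grid sees the product of the two coarse pixels, one from each layer, that cover it. The edge handling and naming are illustrative assumptions.

```python
def perceived_fine_pixels(layer_a, layer_b):
    """Perceived 1-D image when layer_b is laterally shifted by half a
    coarse pixel relative to layer_a. Each coarse pixel of layer_a is
    split into two fine sub-pixels, each multiplied by the layer_b pixel
    overlapping it (edge clamped), doubling the effective resolution."""
    fine = []
    for i in range(len(layer_a)):
        # left half of coarse pixel i overlaps layer_b pixel i-1
        fine.append(layer_a[i] * layer_b[i - 1] if i > 0
                    else layer_a[i] * layer_b[0])
        # right half of coarse pixel i overlaps layer_b pixel i
        fine.append(layer_a[i] * layer_b[i])
    return fine
```

Two N-pixel layers thus yield 2N distinguishable sub-pixels, matching the abstract's claim of an effective resolution above either layer's native one.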
Abstract:
An image capture application captures a sequence of images via a digital camera. The sequence of images may have undesirable levels of blurriness due to the motion of objects in the field of view of the digital camera or due to movement of the digital camera itself. A deblur engine within the image capture application generates image segments within one of the captured images, where a given image segment includes pixel values that move coherently between different images in the sequence. The deblur engine then deblurs each image segment based on the coherent motion of each different image segment and combines the resultant, deblurred image segments into a deblurred image. Advantageously, blurriness caused by the combined effects of moving objects and camera motion may be reduced, thereby improving the ability of a digital camera to provide high-quality images. As such, the user experience of digital photography may be enhanced.
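The first step of the pipeline above, grouping pixels into segments that move coherently across the sequence, can be sketched as follows. The motion values and function name are illustrative; the per-segment deconvolution and recombination steps are omitted.

```python
def segment_by_motion(motion):
    """Group pixel indices by their estimated inter-frame motion value,
    so that each segment contains pixels that move coherently between
    images in the sequence. Each segment would then be deblurred with a
    kernel derived from its own motion before recombination."""
    segments = {}
    for i, m in enumerate(motion):
        segments.setdefault(m, []).append(i)
    return segments
```

Pixels sharing a motion estimate land in one segment, so a moving object and a static background are deblurred independently.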