Abstract:
Image fusion techniques hide artifacts that can arise at seams between regions of different image quality. According to these techniques, image registration may be performed on multiple images having at least a portion of image content in common. A first image may be warped to a spatial domain of a second image based on the image registration. A fused image may be generated from a blend of the warped first image and the second image, wherein relative contributions of the warped first image and the second image are weighted according to a distribution pattern based on the size of the smaller of the pair of images. In this manner, contributions of the different images vary at seams that otherwise would appear.
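The weighted blend described above can be sketched in a few lines. This is a hypothetical illustration, not the patented method: the radial weight mask and its scaling by the smaller image dimension are assumed stand-ins for the abstract's "distribution pattern based on the size of the smaller of the pair of images."

```python
import numpy as np

def blend_weights(height, width):
    """Weight mask that decays from the image center toward the edges.

    The falloff is scaled by the ratio of the smaller to the larger
    dimension, an assumed proxy for the smaller-image-based pattern.
    """
    ys = np.linspace(-1.0, 1.0, height)[:, None]
    xs = np.linspace(-1.0, 1.0, width)[None, :]
    radius = np.sqrt(ys ** 2 + xs ** 2)
    scale = min(height, width) / max(height, width)
    return np.clip(1.0 - radius * scale, 0.0, 1.0)

def fuse(warped, reference):
    """Blend the warped first image into the second image's domain."""
    w = blend_weights(*warped.shape[:2])
    return w * warped + (1.0 - w) * reference
```

Because the weights vary smoothly rather than switching abruptly, the contributions of the two images change gradually where a hard seam would otherwise appear.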
Abstract:
Techniques to capture and fuse short- and long-exposure images of a scene from a stabilized image capture device are disclosed. More particularly, the disclosed techniques use not only individual pixel differences between co-captured short- and long-exposure images, but also the spatial structure of occluded regions in the long-exposure images (e.g., areas of the long-exposure image(s) exhibiting blur due to scene object motion). A novel device used to represent this feature of the long-exposure image is a “spatial difference map.” Spatial difference maps may be used to identify pixels in the short- and long-exposure images for fusion and, in one embodiment, may be used to identify pixels from the short-exposure image(s) to filter post-fusion so as to reduce visual discontinuities in the output image.
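A minimal sketch of the "spatial difference map" idea follows. The names, threshold, and the use of a simple dilation to capture the spatial structure of occluded regions are all illustrative assumptions, not the disclosed algorithm:

```python
import numpy as np

def spatial_difference_map(short_img, long_img, threshold=0.1, grow=1):
    """Flag regions where the long exposure disagrees with the short one."""
    diff = np.abs(short_img.astype(float) - long_img.astype(float))
    mask = diff > threshold
    # Grow the mask so whole blurred/occluded regions are marked,
    # not just isolated pixels (crude 4-neighborhood dilation).
    for _ in range(grow):
        padded = np.pad(mask, 1, mode="constant")
        mask = (padded[:-2, 1:-1] | padded[2:, 1:-1] |
                padded[1:-1, :-2] | padded[1:-1, 2:] | padded[1:-1, 1:-1])
    return mask

def fuse_exposures(short_img, long_img, diff_map):
    # Where the map flags motion blur, prefer the sharp short exposure;
    # elsewhere keep the lower-noise long exposure.
    return np.where(diff_map, short_img, long_img)
```

Growing the mask is what distinguishes this from a purely per-pixel difference: a connected blurred region is selected as a unit, which is the spatial-structure aspect the abstract emphasizes.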
Abstract:
Lens flare mitigation techniques determine which pixels in images of a sequence of images are likely to be pixels affected by lens flare. Once the lens flare areas of the images are determined, unwanted lens flare effects may be mitigated by various approaches, including reducing border artifacts along a seam between successive images, discarding entire images of the sequence that contain lens flare areas, and using tone-mapping to reduce the visibility of lens flare.
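One way the flare-detection and frame-discarding steps could look is sketched below. The detection rule (bright pixels that deviate from the temporal median of the sequence) and all thresholds are assumptions for illustration only:

```python
import numpy as np

def flare_mask(frames, bright=0.8, deviation=0.3):
    """frames: array of shape (N, H, W) with values in [0, 1].

    Flag pixels that are both bright and much brighter than the
    temporal median, a crude proxy for transient lens flare.
    """
    median = np.median(frames, axis=0)
    return (frames > bright) & ((frames - median) > deviation)

def mitigate_by_discard(frames, max_flare_fraction=0.05):
    """Drop frames whose flare area exceeds a fraction of the image."""
    masks = flare_mask(frames)
    frac = masks.reshape(len(frames), -1).mean(axis=1)
    return frames[frac <= max_flare_fraction]
```

This models only the discard approach from the abstract; seam-artifact reduction and tone-mapping would operate on the same mask but modify pixels instead of dropping frames.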
Abstract:
The present disclosure relates to image processing and analysis and, in particular, to the automatic segmentation of identifiable items in an image, for example the segmentation and identification of characters or symbols in an image. Upon user indication, multiple images of a subject are captured, and variations between the images are created using lighting, spectral content, capture angles, and other factors. The images are processed together so that characters and symbols may be recognized from the surface of the imaged subject.
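The combine-then-segment idea above can be sketched as follows. The per-pixel contrast measure, the rule of keeping the highest-contrast capture, and the simple threshold segmentation are illustrative choices, not the disclosed pipeline:

```python
import numpy as np

def local_contrast(img):
    # Absolute deviation from the image mean as a crude contrast proxy.
    return np.abs(img - img.mean())

def combine_captures(captures):
    """captures: array of shape (N, H, W) taken under varied lighting;
    returns one (H, W) composite keeping the highest-contrast capture
    at each pixel."""
    contrast = np.stack([local_contrast(c) for c in captures])
    best = np.argmax(contrast, axis=0)
    return np.take_along_axis(captures, best[None], axis=0)[0]

def segment(composite, threshold=0.5):
    # Binarization as a stand-in for character/symbol segmentation.
    return composite > threshold
```

Varying the lighting between captures means different strokes of an engraved or low-contrast character stand out in different frames; the composite keeps the best evidence from each before segmentation.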
Abstract:
Techniques are disclosed for capturing stereoscopic images using one or more high color density or “full color” image sensors and one or more low color density or “sparse color” image sensors. Low color density image sensors may include substantially fewer color pixels than the sensor's total number of pixels, as well as fewer color pixels than the total number of color pixels on the full color image sensor. More particularly, the mostly-monochrome image captured by the low color density image sensor may be used to reduce noise and increase the spatial resolution of an imaging system's output image. In addition, the color pixels present in the low color density image sensor may be used to identify and fill in color pixel values, e.g., for regions occluded in the image captured using the full color image sensor. Optical Image Stabilization and/or split photodiodes may be employed on one or more sensors.
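A rough sketch, under simplifying assumptions, of the fusion step described above: luminance detail comes from the monochrome pixels, chroma comes from the full-color image, and occluded color regions are filled from the sparse color samples. The luminance-matching gain and all function names are hypothetical:

```python
import numpy as np

def fuse_sparse_color(mono_luma, full_rgb, occluded, sparse_rgb, sparse_mask):
    """mono_luma: (H, W); full_rgb, sparse_rgb: (H, W, 3);
    occluded, sparse_mask: boolean (H, W) masks."""
    chroma = full_rgb.copy()
    # Fill occluded regions with colors from the sparse color pixels.
    fill = occluded & sparse_mask
    chroma[fill] = sparse_rgb[fill]
    # Rescale chroma so its luminance matches the detailed mono channel,
    # transferring the low-noise, high-resolution structure.
    luma = chroma.mean(axis=2, keepdims=True)
    gain = mono_luma[..., None] / np.maximum(luma, 1e-6)
    return np.clip(chroma * gain, 0.0, 1.0)
```

The key design point reflected here is that the sparse color pixels only need to supply color where the full-color view is occluded; everywhere else they are redundant and the mono channel carries the detail.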
Abstract:
Foreign lighting effects, such as lens flare, are very common in natural images. In a two-camera system, the two images may be fused together to generate one image of better quality. However, there are frequently different foreign light patterns in the two images that form the image pair, e.g., due to differences in lens design, sensor, position, etc. Directly fusing such pairs of images will result in non-photorealistic images, with composed foreign light patterns from both images of the image pair. This disclosure describes a general foreign light mitigation scheme to detect all kinds of foreign light region mismatches. The detected foreign light mismatch regions may be deemphasized or excluded in the fusion step, in order to create a fused image that keeps a natural-looking foreign light pattern that is close to what was seen by the user of an image capture device during an image capture preview mode.
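The mismatch-detection and exclusion idea can be sketched as below. Comparing locally averaged brightness of the two registered images, and the box size and threshold used, are assumed stand-ins for the disclosed detection scheme:

```python
import numpy as np

def box_mean(img, k=3):
    """Local k-by-k mean via explicit shifts (edge-replicated borders)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def mismatch_mask(ref, aux, threshold=0.25):
    """Flag regions where the two cameras' local brightness disagrees,
    a crude indicator of mismatched foreign light patterns."""
    return np.abs(box_mean(ref) - box_mean(aux)) > threshold

def fuse_with_mitigation(ref, aux, weight=0.5):
    # Exclude mismatched regions from fusion; keep the reference image
    # there so its natural-looking flare pattern is preserved.
    mask = mismatch_mask(ref, aux)
    fused = (1 - weight) * ref + weight * aux
    return np.where(mask, ref, fused)
```

Keeping the reference image in mismatched regions is what prevents the composed double-flare look: the output retains the flare pattern the user saw in the preview rather than a blend of two different patterns.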