Abstract:
A method includes obtaining multiple image frames of a scene using at least one sensor of an electronic device. The multiple image frames include a first image frame and a second image frame having a longer exposure than the first image frame. The method also includes generating a label map that identifies pixels in the multiple image frames that are to be used in an image. The method further includes generating the image of the scene using the pixels extracted from the image frames based on the label map. The label map may include multiple labels, and each label may be associated with at least one corresponding pixel and may include a discrete value that identifies one of the multiple image frames from which the at least one corresponding pixel is extracted.
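As an illustration of the label-map idea, here is a minimal numpy sketch assuming two aligned float RGB frames in [0, 1], where label 0 selects the short exposure and label 1 the long exposure. The saturation test, threshold, and function names are illustrative assumptions, not details from the abstract.

```python
import numpy as np

def build_label_map(short_frame, long_frame, sat_thresh=0.9):
    # Discrete labels: 1 picks the long exposure where it is not clipped,
    # 0 falls back to the short exposure in blown-out highlights.
    saturated = long_frame.max(axis=-1) >= sat_thresh
    return np.where(saturated, 0, 1).astype(np.uint8)

def compose_from_labels(frames, label_map):
    # Extract each output pixel from the frame named by its label.
    stacked = np.stack(frames, axis=0)                  # (N, H, W, C)
    rows = np.arange(label_map.shape[0])[:, None]
    cols = np.arange(label_map.shape[1])[None, :]
    return stacked[label_map, rows, cols]

# Example: frames[0] is the short exposure, frames[1] the long exposure.
# label_map = build_label_map(frames[0], frames[1])
# image = compose_from_labels(frames, label_map)
```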
Abstract:
An electronic device, method, and computer readable medium for compositing high dynamic range frames are provided. The electronic device includes a camera and a processor coupled to the camera. The processor registers a plurality of multi-exposure frames using a hybrid of matched features to align non-reference frames with a reference frame; generates blending maps of the multi-exposure frames to reduce moving-ghost artifacts and to identify local areas that are well exposed in the frames; and blends the multi-exposure frames, weighted by the blending maps, into a high dynamic range (HDR) frame using a two-step weight-constrained exposure fusion technique.
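The blending step can be pictured with a single-scale exposure-fusion sketch in numpy. This substitutes a classic well-exposedness weight (Mertens-style) for the abstract's blending maps and omits the registration and the two-step weight-constrained refinement; frames are assumed to be aligned float RGB in [0, 1].

```python
import numpy as np

def well_exposedness(frame, sigma=0.2):
    # Per-channel Gaussian around mid-gray, multiplied across channels,
    # as in classic exposure fusion.
    return np.exp(-((frame - 0.5) ** 2) / (2 * sigma ** 2)).prod(axis=-1)

def blend_exposures(frames, eps=1e-8):
    weights = np.stack([well_exposedness(f) for f in frames])   # (N, H, W)
    weights /= weights.sum(axis=0, keepdims=True) + eps         # weights sum to 1 per pixel
    return sum(w[..., None] * f for w, f in zip(weights, frames))
```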
Abstract:
A method includes obtaining multiple image frames of a scene using at least one camera of an electronic device and processing the multiple image frames to generate a higher-resolution image of the scene. Processing the multiple image frames includes generating an initial estimate of the scene based on the multiple image frames. Processing the multiple image frames also includes, in each of multiple iterations, (i) generating a current estimate of the scene based on the image frames and a prior estimate of the scene and (ii) regularizing the generated current estimate of the scene. The regularized current estimate of the scene from one iteration represents the prior estimate of the scene in a subsequent iteration. The iterations continue until the estimates of the scene converge on the higher-resolution image of the scene.
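A toy numpy version of the iterate-and-regularize loop, assuming the low-resolution frames are already aligned, a box-filter downsampling model, and a local-mean smoothness prior as the regularizer; the method's actual estimator and regularizer are not specified here.

```python
import numpy as np

def downsample(img, s=2):
    # Box-filter decimation: average each s-by-s block.
    h, w = img.shape
    return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(img, s=2):
    # Nearest-neighbor expansion by pixel replication.
    return np.kron(img, np.ones((s, s)))

def super_resolve(low_res_frames, scale=2, iters=20, step=0.5, reg=0.05):
    observed = np.mean(low_res_frames, axis=0)      # simplistic: frames assumed aligned
    estimate = upsample(observed, scale)            # initial estimate of the scene
    for _ in range(iters):
        # Data-fidelity update: push the estimate toward the observations.
        residual = downsample(estimate, scale) - observed
        estimate = estimate - step * upsample(residual, scale)
        # Crude regularizer: pull each pixel toward its local mean (smoothness prior).
        local_mean = (np.roll(estimate, 1, 0) + np.roll(estimate, -1, 0)
                      + np.roll(estimate, 1, 1) + np.roll(estimate, -1, 1)) / 4
        estimate = (1 - reg) * estimate + reg * local_mean
    return estimate
```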
Abstract:
A method includes capturing multiple ambient images of a scene using at least one camera of an electronic device and without using a flash of the electronic device. The method also includes capturing multiple flash images of the scene using the at least one camera of the electronic device during firing of a pilot flash sequence using the flash. The method further includes analyzing multiple pairs of images to estimate exposure differences produced by the flash, where each pair of images includes one of the ambient images and one of the flash images that are both captured using a common camera exposure and where different pairs of images are captured using different camera exposures. In addition, the method includes determining a flash strength for the scene based on the estimated exposure differences and firing the flash based on the determined flash strength.
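A rough numpy sketch of the strength calculation, assuming the flash contribution scales roughly linearly with strength and using mean-luminance ratios as the per-pair exposure-difference estimate; target_gain and pilot_strength are made-up parameters for illustration.

```python
import numpy as np

def exposure_gain(ambient, flash_img, eps=1e-6):
    # Ratio of mean luminance approximates the exposure lift from the pilot flash.
    return flash_img.mean() / (ambient.mean() + eps)

def choose_flash_strength(pairs, target_gain=2.0, pilot_strength=0.25):
    # pairs: list of (ambient, flash) images captured at matching camera exposures.
    gains = [exposure_gain(a, f) for a, f in pairs]
    measured = np.median(gains)                     # robust across the exposure pairs
    # Assumption: flash contribution scales roughly linearly with strength.
    extra = (target_gain - 1.0) / max(measured - 1.0, 1e-6)
    return float(np.clip(pilot_strength * extra, 0.0, 1.0))
```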
Abstract:
An embodiment of this disclosure provides a wearable device. The wearable device includes a memory configured to store a plurality of content for display, a transceiver configured to receive the plurality of content from a connected device, a display configured to display the plurality of content, and a processor coupled to the memory, the display, and the transceiver. The processor is configured to control the display to display at least some of the plurality of content in a spatially arranged format. The displayed content appears on the display at a display position. The plurality of content, when shown on the connected device, is not in the spatially arranged format. The processor is also configured to receive movement information based on a movement of the wearable device and to adjust the display position of the displayed content according to that movement information.
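The spatial arrangement and movement-driven adjustment can be sketched as a virtual canvas whose viewing window pans with head yaw and pitch. The class, item spacing, and pixels-per-degree mapping below are illustrative assumptions, not the device's actual layout logic.

```python
import numpy as np

class SpatialCanvas:
    """Items laid out side by side on a wide virtual canvas; the physical
    display acts as a window that pans as the wearable device moves."""

    def __init__(self, items, spacing=400, px_per_degree=10):
        # Spatially arranged format: one slot per content item.
        self.positions = {item: (i * spacing, 0) for i, item in enumerate(items)}
        self.px_per_degree = px_per_degree
        self.offset = np.zeros(2)

    def on_movement(self, yaw_deg, pitch_deg):
        # Movement information (e.g., from an IMU) pans the window.
        self.offset += np.array([yaw_deg, pitch_deg]) * self.px_per_degree

    def display_position(self, item):
        # Where the item currently lands on the physical display.
        x, y = self.positions[item]
        return (x - self.offset[0], y - self.offset[1])

# canvas = SpatialCanvas(["mail", "map", "music"])
# canvas.on_movement(yaw_deg=5.0, pitch_deg=0.0)   # turn head right
# canvas.display_position("map")
```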
Abstract:
An electronic device, method, and computer readable medium for multi-frame image processing using semantic saliency are provided. The electronic device includes a camera, a display, and a processor coupled to the camera and the display. The processor receives a plurality of frames captured by the camera during a capture event; identifies a salient region in each of the plurality of frames; determines a reference frame from the plurality of frames based on the identified salient regions; and fuses non-reference frames with the determined reference frame into a completed image output.
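A small numpy sketch of the reference selection, assuming grayscale frames with precomputed saliency masks (the semantic segmentation itself is out of scope here) and using Laplacian variance inside the salient region as the per-frame score; the simple weighted average stands in for the abstract's fusion step.

```python
import numpy as np

def sharpness(img):
    # Variance of a 4-neighbor Laplacian as a focus/sharpness proxy.
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return lap.var()

def pick_reference(frames, saliency_masks):
    # Score each frame by sharpness inside its salient region only.
    scores = [sharpness(f * m) for f, m in zip(frames, saliency_masks)]
    return int(np.argmax(scores))

def fuse(frames, ref_idx, ref_weight=0.5):
    # Placeholder fusion: reference dominates, others averaged in.
    others = [f for i, f in enumerate(frames) if i != ref_idx]
    return ref_weight * frames[ref_idx] + (1 - ref_weight) * np.mean(others, axis=0)
```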
Abstract:
A method includes identifying feature points in one of multiple images of a scene generated by a camera and identifying locations of the identified feature points in the remaining images. The method also includes selecting a group of the identified feature points indicative of relative motion of the camera between image captures and aligning a set of the images using the selected group of feature points. The method may further include selecting a reference image from the set of aligned images, weighting the other images in the set, and combining the reference image with the weighted images. Weighting of the other images may include, for each other image in the set, comparing that image against the reference image to identify one or more moving objects in it and applying a weight to pixel locations in that image.
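Assuming OpenCV is available, the selection-and-alignment idea can be sketched with standard building blocks: Lucas-Kanade tracking locates the feature points in the remaining images, and the RANSAC step inside estimateAffinePartial2D plays the role of selecting the group consistent with camera motion. The moving-object weighting is approximated by a simple difference threshold; all parameters are illustrative.

```python
import cv2
import numpy as np

def align_to_reference(gray_frames):
    # Track corners from the first frame into every other frame, fit a
    # motion-consistent affine with RANSAC, and warp each frame into place.
    ref = gray_frames[0]
    pts_ref = cv2.goodFeaturesToTrack(ref, maxCorners=500,
                                      qualityLevel=0.01, minDistance=8)
    aligned = [ref]
    for frame in gray_frames[1:]:
        pts, status, _ = cv2.calcOpticalFlowPyrLK(ref, frame, pts_ref, None)
        good = status.ravel() == 1
        M, _ = cv2.estimateAffinePartial2D(pts[good], pts_ref[good])
        h, w = ref.shape
        aligned.append(cv2.warpAffine(frame, M, (w, h)))
    return aligned

def motion_weight(ref, other, thresh=12):
    # Down-weight pixels that disagree with the reference (moving objects).
    return (cv2.absdiff(ref, other) < thresh).astype(np.float32)
```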
Abstract:
A mobile device includes an embedded digital camera configured to capture a burst of N images. The mobile device includes processing circuitry comprising a registration module configured to, for each image within the burst, analyze an amount of warp of the image and generate a set of affine matrices indicating that warp. The processing circuitry includes a High Fidelity Interpolation block configured to, for each image within the burst, perform an affine transformation using the set of affine matrices associated with the image, apply an aliasing-retaining interpolation filter, and implement rotation transformation and sub-pixel shifts, yielding an interpolated image. The processing circuitry includes a blending module configured to receive the N interpolated images and blend them into a single blended image having a user-selected digital zoom ratio.
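A compressed sketch of this pipeline, assuming OpenCV and per-frame 2x3 affine matrices from registration. Nearest-neighbor upscaling stands in for the aliasing-retaining interpolation filter (it preserves the aliasing that multi-frame blending exploits), and plain averaging stands in for the blending module.

```python
import cv2
import numpy as np

def zoom_burst(frames, affines, zoom=2.0):
    # Warp each burst frame with its registration affine, upscale without
    # smoothing, then average the stack into one digitally zoomed image.
    h, w = frames[0].shape[:2]
    out_size = (int(w * zoom), int(h * zoom))        # (width, height) for OpenCV
    acc = np.zeros((out_size[1], out_size[0]) + frames[0].shape[2:], np.float64)
    for frame, M in zip(frames, affines):
        warped = cv2.warpAffine(frame, M, (w, h))    # registration warp
        up = cv2.resize(warped, out_size, interpolation=cv2.INTER_NEAREST)
        acc += up
    return (acc / len(frames)).astype(frames[0].dtype)

# affines = [np.eye(2, 3, dtype=np.float32) for _ in frames]  # identity if aligned
# zoomed = zoom_burst(frames, affines, zoom=2.0)
```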