Abstract:
A method includes obtaining multiple image frames. The method also includes selecting, using at least one processing device of an electronic device, an asymmetrical image pair from the multiple image frames. The asymmetrical image pair includes a first image frame and a second image frame, where the first image frame has a shorter exposure than the second image frame. The method further includes identifying, using the at least one processing device, one or more features based on the asymmetrical image pair. The method also includes determining, using the at least one processing device, whether the first image frame contains flicker based on the one or more features. In addition, the method includes enabling or disabling, using the at least one processing device, the first image frame as a reference candidate based on the determination of whether the first image frame contains flicker.
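For intuition only, here is a minimal Python sketch of this flow. It assumes AC-lighting flicker appears as horizontal banding in the short-exposure frame but is averaged out in the long-exposure frame; the row-mean-ratio feature, the exposure normalization, and the `band_threshold` value are illustrative assumptions, not the features claimed above.

```python
import numpy as np

def detect_flicker(short_frame, long_frame, exposure_ratio, band_threshold=0.02):
    """Sketch: decide whether the short frame of an asymmetric exposure
    pair contains flicker, and enable/disable it as a reference candidate.
    The feature (variance of the row-mean ratio) and the threshold are
    illustrative, not the patented features."""
    # Normalize the short frame to the long frame's exposure level.
    short_norm = short_frame.astype(np.float64) * exposure_ratio
    long_f = long_frame.astype(np.float64)

    # Row-mean profiles: banding modulates rows of the short frame only.
    eps = 1e-6
    row_ratio = short_norm.mean(axis=1) / (long_f.mean(axis=1) + eps)

    # A flat ratio profile means no banding; high variance suggests flicker.
    feature = np.std(row_ratio / (row_ratio.mean() + eps))
    contains_flicker = feature > band_threshold

    # Disable the short frame as a reference candidate if flicker is found.
    return {"flicker": contains_flicker,
            "reference_candidate": not contains_flicker}
```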
Abstract:
A method for training data generation includes obtaining a first set of image frames of a scene and a second set of image frames of the scene using multiple exposure settings. The method also includes generating an alignment map, a blending map, and an input image using the first set of image frames. The method further includes generating a ground truth image using the alignment map, the blending map, and the second set of image frames. In addition, the method includes using the ground truth image and the input image as an image pair in a training dataset when training a machine learning model to reduce image distortion and noise.
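A minimal sketch of how such a training pair could be assembled, assuming the first set consists of noisy captures and the second set consists of cleaner captures of the same scene. The integer-shift alignment, the inverse-error blending weights, and the helper names are illustrative stand-ins for the alignment and blending maps described above; the key point is that the same maps are reused on both sets so the ground truth stays pixel-aligned with the input image.

```python
import numpy as np

def make_training_pair(noisy_frames, clean_frames):
    """Sketch: derive alignment and blending from the first (noisy) set,
    then reuse both on the second (clean) set to build an aligned
    input/ground-truth pair."""
    ref = noisy_frames[0].astype(np.float64)

    # "Alignment map" stand-in: per-frame integer (dy, dx) shift toward
    # the reference, found by brute-force search over a small window.
    def align(frame):
        best, best_err = (0, 0), np.inf
        for dy in range(-2, 3):
            for dx in range(-2, 3):
                shifted = np.roll(frame, (dy, dx), axis=(0, 1))
                err = np.mean((shifted - ref) ** 2)
                if err < best_err:
                    best, best_err = (dy, dx), err
        return best

    shifts = [align(f.astype(np.float64)) for f in noisy_frames]

    # "Blending map" stand-in: per-frame weights favoring frames that
    # agree with the reference after alignment.
    weights = np.array([
        1.0 / (1e-3 + np.mean((np.roll(f.astype(np.float64), s, axis=(0, 1)) - ref) ** 2))
        for f, s in zip(noisy_frames, shifts)])
    weights /= weights.sum()

    def blend(frames):
        return sum(w * np.roll(f.astype(np.float64), s, axis=(0, 1))
                   for f, s, w in zip(frames, shifts, weights))

    # Same alignment + blending on both sets keeps the pair registered.
    input_image = blend(noisy_frames)
    ground_truth = blend(clean_frames)
    return input_image, ground_truth
```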
Abstract:
A method includes obtaining a blended red-green-blue (RGB) image frame of a scene. The method also includes performing, using at least one processing device of an electronic device, an interband denoising operation to remove at least one of noise and one or more artifacts from the blended RGB image frame in order to produce a denoised RGB image frame. Performing the interband denoising operation includes performing filtering of red, green, and blue color channels of the blended RGB image frame to remove at least one of the noise and the one or more artifacts from the blended RGB image frame. The filtering of the red and blue color channels of the blended RGB image frame is based on image data of at least one of the green color channel and a white color channel of the blended RGB image frame.
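One plausible realization of the interband filtering step is a guided filter that denoises the red and blue channels using the structure of the (typically cleaner) green channel, so chroma noise is suppressed while edges present in green are preserved. The sketch below uses the classic guided filter of He et al. as a stand-in for the patented filter; the radius and regularization values are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Classic guided filter (He et al.); a stand-in for the interband
    filtering step, not the patented filter itself."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    var_g = uniform_filter(guide * guide, size) - mean_g ** 2
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def interband_denoise(rgb):
    """Sketch: filter R and B based on image data of the G channel."""
    r, g, b = [rgb[..., i].astype(np.float64) for i in range(3)]
    g_dn = guided_filter(g, g)     # self-guided smoothing of green
    r_dn = guided_filter(g_dn, r)  # red filtered using green structure
    b_dn = guided_filter(g_dn, b)  # blue filtered using green structure
    return np.stack([r_dn, g_dn, b_dn], axis=-1)
```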
Abstract:
Various image sharpening techniques are disclosed. For example, a method for image sharpening includes obtaining, using at least one sensor of an electronic device, an image that includes visual content. The method also includes generating an edge map that indicates edges of the visual content within the image. The method further includes applying a high-pass signal and an adaptive gain based on the edge map to sharpen the image. The method also includes generating a bright halo mask and a dark halo mask based on the edge map, where the bright halo mask indicates an upper sharpening limit and the dark halo mask indicates a lower sharpening limit. In addition, the method includes modifying a level of sharpening at one or more of the edges within the sharpened image to provide halo artifact reduction based on the bright halo mask and the dark halo mask.
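The following sketch shows one way the pieces could fit together, using local maxima and minima of the input as the bright and dark halo masks that clamp overshoot and undershoot at edges. The adaptive gain law and the filter sizes are assumptions for illustration, not the claimed design.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter, sobel

def sharpen_with_halo_control(img, base_gain=1.5, radius=2):
    """Sketch: edge-adaptive unsharp masking followed by halo clamping.
    Using local extrema as the bright/dark halo masks is an illustrative
    assumption about how the sharpening limits could be realized."""
    x = img.astype(np.float64)

    # Edge map from gradient magnitude, normalized to [0, 1].
    gx, gy = sobel(x, axis=1), sobel(x, axis=0)
    edges = np.hypot(gx, gy)
    edges /= edges.max() + 1e-6

    # High-pass signal with an edge-adaptive gain (stronger on edges).
    high_pass = x - gaussian_filter(x, sigma=1.0)
    sharpened = x + base_gain * edges * high_pass

    # Bright/dark halo masks: local extrema define the upper and lower
    # sharpening limits, clamping bright and dark halo artifacts.
    size = 2 * radius + 1
    bright_mask = maximum_filter(x, size)
    dark_mask = minimum_filter(x, size)
    return np.clip(sharpened, dark_mask, bright_mask)
```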
Abstract:
An electronic device includes at least one imaging sensor and at least one processor coupled to the at least one imaging sensor. The at least one imaging sensor is configured to capture a burst of image frames. The at least one processor is configured to generate a low-resolution image from the burst of image frames. The at least one processor is also configured to estimate a blur kernel based on the burst of image frames. The at least one processor is further configured to perform deconvolution on the low-resolution image using the blur kernel to generate a deconvolved image. In addition, the at least one processor is configured to generate a high-resolution image using super resolution (SR) on the deconvolved image.
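A compact sketch of the pipeline under simplifying assumptions: the burst is averaged into the low-resolution image, the estimated blur kernel is replaced by a fixed Gaussian (in place of a kernel actually estimated from the burst), deconvolution is done with a Wiener filter, and nearest-neighbor upscaling stands in for the learned SR stage.

```python
import numpy as np

def wiener_deconvolve(img, kernel, snr=100.0):
    """Frequency-domain Wiener deconvolution; a simple stand-in for the
    deconvolution stage. The half-kernel shift is ignored in this sketch."""
    K = np.fft.fft2(kernel, s=img.shape)
    H = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def burst_deblur_sr(burst, scale=2):
    """Sketch of the pipeline for single-channel frames."""
    # Low-resolution image from the burst (simple average here).
    low_res = np.mean([f.astype(np.float64) for f in burst], axis=0)

    # Illustrative "estimated" kernel: a 5x5 Gaussian.
    ax = np.arange(5) - 2
    g = np.exp(-(ax ** 2) / 2.0)
    kernel = np.outer(g, g)
    kernel /= kernel.sum()

    deconvolved = wiener_deconvolve(low_res, kernel)

    # Nearest-neighbor upscale in place of a super-resolution model.
    h, w = deconvolved.shape
    ys = np.round(np.linspace(0, h - 1, h * scale)).astype(int)
    xs = np.round(np.linspace(0, w - 1, w * scale)).astype(int)
    return deconvolved[ys][:, xs]
```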
Abstract:
A method includes obtaining, using at least one sensor of an electronic device, multiple image frames of a scene. The multiple image frames include a first image frame and a second image frame captured using different exposures. The method also includes excluding, using at least one processor of the electronic device, pixels in the first and second image frames based on a coarse motion map. The method further includes generating, using the at least one processor, multiple local histogram match maps based on different portions of the first and second image frames. In addition, the method includes generating, using the at least one processor, an image of the scene using the local histogram match maps.
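For illustration, a sketch of per-tile histogram matching between the differently exposed frames that skips pixels flagged by a coarse motion map. The tile count and the direct quantile mapping are assumptions, not the claimed match-map construction.

```python
import numpy as np

def local_histogram_match(short_img, long_img, motion_map, tiles=4):
    """Sketch: build local histogram match maps on a grid of tiles,
    excluding moving pixels, and apply them to the short frame."""
    h, w = short_img.shape
    matched = short_img.astype(np.float64).copy()
    th, tw = h // tiles, w // tiles

    for ty in range(tiles):
        for tx in range(tiles):
            ys = slice(ty * th, (ty + 1) * th)
            xs = slice(tx * tw, (tx + 1) * tw)
            keep = ~motion_map[ys, xs]          # exclude moving pixels
            if keep.sum() < 16:
                continue                        # too little static content
            src = np.sort(short_img[ys, xs][keep].ravel())
            ref = np.sort(long_img[ys, xs][keep].ravel())

            # Local match map: map source quantiles to reference quantiles.
            tile = short_img[ys, xs].astype(np.float64)
            idx = np.clip(np.searchsorted(src, tile), 0, len(ref) - 1)
            matched[ys, xs] = ref[idx]

    return matched
```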
Abstract:
A method includes obtaining multiple image frames of a scene using at least one camera of an electronic device. The method also includes using a convolutional neural network to generate blending maps associated with the image frames. The blending maps contain or are based on both a measure of motion in the image frames and a measure of how well exposed different portions of the image frames are. The method further includes generating a final image of the scene using at least some of the image frames and at least some of the blending maps. The final image of the scene may be generated by blending the at least some of the image frames using the at least some of the blending maps, and the final image of the scene may include image details that are lost in at least one of the image frames due to over-exposure or under-exposure.
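A toy PyTorch sketch of the idea: a small convolutional network predicts one blending map per frame, a softmax across frames makes the maps sum to one at every pixel (so they can encode both motion and exposure confidence), and the final image is the weighted blend. The layer sizes are illustrative, not the network claimed here.

```python
import torch
import torch.nn as nn

class BlendingMapNet(nn.Module):
    """Toy stand-in for the convolutional network that predicts
    blending maps for a set of bracketed frames."""
    def __init__(self, num_frames=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 * num_frames, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_frames, 3, padding=1),
        )

    def forward(self, frames):               # frames: (B, N, 3, H, W)
        b, n, c, h, w = frames.shape
        logits = self.net(frames.reshape(b, n * c, h, w))
        # Softmax across frames: per-pixel blending weights that sum to 1.
        maps = torch.softmax(logits, dim=1)  # (B, N, H, W)
        blended = (maps.unsqueeze(2) * frames).sum(dim=1)
        return blended, maps

# Usage: blend three bracketed exposures of a scene.
frames = torch.rand(1, 3, 3, 128, 128)
blended, maps = BlendingMapNet(num_frames=3)(frames)
```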
Abstract:
A method includes obtaining, using at least one image sensor of an electronic device, multiple image frames of a scene. The multiple image frames include a plurality of short image frames at a first exposure level and a plurality of long image frames at a second exposure level longer than the first exposure level. The method also includes generating a short reference image frame and a long reference image frame using the multiple image frames. The method further includes selecting, using a processor of the electronic device, the short reference image frame or the long reference image frame as a reference frame, where the selection is based on an amount of saturated motion in the long image frames and an amount of shadow region in the short image frames. In addition, the method includes generating a final image of the scene using the reference frame.
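A minimal sketch of the selection rule, assuming simple thresholds: saturated moving regions penalize the long reference (they clip and ghost), while large shadow regions penalize the short reference (they are noisy). The threshold values and the linear comparison are illustrative, not the claimed criterion.

```python
import numpy as np

def select_reference(short_ref, long_ref, motion_map,
                     sat_level=250, shadow_level=10, weight=1.0):
    """Sketch: choose the short or long reference frame based on
    saturated motion vs. shadow content. Thresholds are illustrative."""
    # Fraction of pixels both saturated in the long frame and moving.
    saturated_motion = np.mean((long_ref >= sat_level) & motion_map)

    # Fraction of pixels that fall in shadow in the short frame.
    shadow = np.mean(short_ref <= shadow_level)

    # Prefer the short frame when saturated motion dominates, and the
    # long frame when shadow noise would dominate instead.
    return "short" if saturated_motion * weight > shadow else "long"
```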
Abstract:
A mobile device includes an embedded digital camera that is configured to capture a burst of N images. The mobile device also includes processing circuitry comprising a registration module configured to, for each image within the burst of images, analyze an amount of warp of the image and generate a set of affine matrices indicating the amount of warp. The processing circuitry also includes a High Fidelity Interpolation block configured to, for each image within the burst of images, perform an affine transformation using the set of affine matrices associated with the image, apply an aliasing-retaining interpolation filter, and implement rotation transformation and sub-pixel shifts, yielding an interpolated image. The processing circuitry further includes a blending module configured to receive the N interpolated images and blend them into a single blended image having a user-selected digital zoom ratio.
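A sketch of the interpolation and blending stages for single-channel frames, taking the per-frame 2x3 affine matrices from the registration module as given. Nearest-neighbor warping stands in for the aliasing-retaining interpolation filter (it avoids low-pass prefiltering, so sub-pixel offsets between frames survive into the blend), and uniform averaging stands in for the blending module.

```python
import cv2
import numpy as np

def burst_zoom(frames, affines, zoom=2.0):
    """Sketch: warp each frame with its composed affine (registration
    warp plus digital-zoom scaling), retain aliasing via nearest-neighbor
    interpolation, and blend by averaging."""
    h, w = frames[0].shape[:2]
    out_size = (int(w * zoom), int(h * zoom))
    scale = np.array([[zoom, 0.0, 0.0], [0.0, zoom, 0.0]])

    acc = np.zeros((out_size[1], out_size[0]), np.float64)
    for frame, M in zip(frames, affines):
        # Compose the per-frame warp (rotation + sub-pixel shift) with
        # the zoom scaling into a single 2x3 affine matrix.
        M3 = np.vstack([M, [0, 0, 1]])
        S3 = np.vstack([scale, [0, 0, 1]])
        composed = (S3 @ M3)[:2]
        warped = cv2.warpAffine(frame.astype(np.float64), composed,
                                out_size, flags=cv2.INTER_NEAREST)
        acc += warped
    return acc / len(frames)
```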
Abstract:
A method includes obtaining an image and a gain map associated with the image. The method also includes identifying image patches in the image and corresponding gain map patches in the gain map. Different image patches are centered around different anchor points in the image. The method further includes, for each image patch and its corresponding gain map patch, generating an intensity-gain curve for the associated anchor point. The intensity-gain curve specifies (i) gain values based on the corresponding gain map patch for intensity values up to a threshold intensity value and (ii) gain values based on one or more input parameters for intensity values above the threshold intensity value. In addition, the method includes combining the intensity-gain curves to generate a 3D lookup table, which identifies the gain values for the anchor points in the image at each of multiple intensity values.
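A sketch of the curve construction, assuming intensities normalized to [0, 1]. The anchor spacing, the number of intensity levels, and the constant highlight gain applied above the threshold are illustrative input parameters; the result is a 3D lookup table indexed by anchor row, anchor column, and intensity level.

```python
import numpy as np

def build_gain_lut(image, gain_map, patch=32, levels=17,
                   threshold=0.6, highlight_gain=1.0):
    """Sketch: per-anchor intensity-gain curves combined into a 3D LUT.
    Parameter values are illustrative."""
    h, w = image.shape
    ys = np.arange(patch // 2, h, patch)      # anchor rows
    xs = np.arange(patch // 2, w, patch)      # anchor cols
    intensities = np.linspace(0.0, 1.0, levels)
    lut = np.ones((len(ys), len(xs), levels))

    for i, ay in enumerate(ys):
        for j, ax in enumerate(xs):
            sl = (slice(max(ay - patch // 2, 0), ay + patch // 2),
                  slice(max(ax - patch // 2, 0), ax + patch // 2))
            pix, gains = image[sl].ravel(), gain_map[sl].ravel()
            for k, level in enumerate(intensities):
                if level <= threshold:
                    # Below the threshold: gain measured from the patch,
                    # as the mean gain of pixels near this intensity.
                    near = np.abs(pix - level) < 0.5 / (levels - 1)
                    if near.any():
                        lut[i, j, k] = gains[near].mean()
                else:
                    # Above the threshold: gain from input parameters,
                    # keeping highlights from being over-amplified.
                    lut[i, j, k] = highlight_gain
    return lut  # indexed by (anchor row, anchor col, intensity level)
```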