Abstract:
A method includes obtaining, using at least one processor of an electronic device, multiple calibration parameters associated with multiple sensors of a selected mobile device. The method also includes obtaining, using the at least one processor, an identification of multiple imaging tasks. The method further includes obtaining, using the at least one processor, multiple synthetically-generated scene images. In addition, the method includes generating, using the at least one processor, multiple training images and corresponding meta information based on the calibration parameters, the identification of the imaging tasks, and the scene images. The training images and corresponding meta information are generated concurrently, different ones of the training images correspond to different ones of the sensors, and different pieces of the meta information correspond to different ones of the imaging tasks.
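As a rough illustration of the concurrent generation step, the following Python sketch renders one training image per sensor and one piece of meta information per imaging task for each synthetic scene. All names, and the simple color-matrix-plus-noise sensor model, are assumptions for illustration rather than details from the abstract:

    import numpy as np

    def generate_training_data(scene_images, sensor_params, task_ids):
        # Hypothetical sketch: each sensor is modeled by a 3x3 color matrix
        # and a noise level drawn from its calibration parameters.
        images, meta = [], []
        for idx, scene in enumerate(scene_images):
            for p in sensor_params:
                img = np.clip(scene @ p["color_matrix"].T, 0.0, 1.0)
                img = img + np.random.normal(0.0, p["noise_sigma"], img.shape)
                images.append(np.clip(img, 0.0, 1.0))
            # Meta information is emitted alongside the images, one piece
            # per imaging task for this scene.
            for task in task_ids:
                meta.append({"task": task, "scene_index": idx})
        return images, meta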
Abstract:
A method includes obtaining a Bayer input image. The method also includes generating, using at least one processing device of an electronic device, multiple YUV image frames based on the Bayer input image using non-linear scaling, where the YUV image frames are associated with different exposure settings. The method further includes generating, using the at least one processing device of the electronic device, a fused image based on the YUV image frames. In addition, the method includes applying, using the at least one processing device of the electronic device, global tone-mapping to the fused image in order to generate a tone-mapped fused image, where the global tone-mapping is based on a first cubic spline curve.
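A minimal NumPy/SciPy sketch of the last two steps, assuming a gamma-style non-linear scaling and illustrative spline knots (the abstract specifies neither):

    import numpy as np
    from scipy.interpolate import CubicSpline

    def synthetic_exposure(luma, gain, gamma=2.2):
        # Non-linear scaling: apply the exposure gain in linear light,
        # then re-encode (an assumed form of the scaling).
        return np.clip((luma ** gamma * gain) ** (1.0 / gamma), 0.0, 1.0)

    def global_tone_map(fused, knots_x, knots_y):
        # Global tone-mapping via a cubic spline fitted through control
        # points; knots_x must be strictly increasing.
        spline = CubicSpline(knots_x, knots_y)
        return np.clip(spline(fused), 0.0, 1.0)

For example, global_tone_map(fused, [0.0, 0.25, 0.75, 1.0], [0.0, 0.4, 0.9, 1.0]) brightens mid-tones while pinning black and white.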
Abstract:
A method includes obtaining a blended red-green-blue (RGB) image frame of a scene. The method also includes performing, using at least one processing device of an electronic device, an interband denoising operation to remove at least one of noise and one or more artifacts from the blended RGB image frame in order to produce a denoised RGB image frame. Performing the interband denoising operation includes performing filtering of red, green, and blue color channels of the blended RGB image frame to remove at least one of the noise and the one or more artifacts from the blended RGB image frame. The filtering of the red and blue color channels of the blended RGB image frame is based on image data of at least one of the green color channel and a white color channel of the blended RGB image frame.
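One simple way to realize such guided filtering, sketched below, is to smooth the R−G and B−G difference channels so that edges present in the green channel are preserved in red and blue; this is a common chroma-denoising trick, and the abstract does not commit to this exact filter:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def interband_denoise(rgb, radius=2):
        # Filter R and B through their differences against the cleaner G
        # channel, so chroma noise is smoothed while G's edges survive.
        size = 2 * radius + 1
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        r_dn = g + uniform_filter(r - g, size=size)
        b_dn = g + uniform_filter(b - g, size=size)
        return np.stack([r_dn, g, b_dn], axis=-1)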
Abstract:
A method includes obtaining, using at least one processor, an input image frame. The method also includes identifying, using the at least one processor, one or more regions of the input image frame containing redundant information. In addition, the method includes performing, using the at least one processor, an image processing task using the input image frame. The image processing task is guided based on the one or more identified regions of the input image frame. The method may further include obtaining, using the at least one processor, a coarse depth map associated with the input image frame. Performing the image processing task may include refining the coarse depth map to produce a refined depth map, where the refining of the coarse depth map is guided based on the one or more identified regions of the input image frame.
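A minimal sketch of the depth-refinement variant, assuming "redundant" regions are detected as low local variance in a grayscale image (the detection criterion is an assumption):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def refine_depth(coarse_depth, gray, size=9, var_thresh=1e-3):
        # Local variance flags flat, redundant regions of the image.
        mu = uniform_filter(gray, size=size)
        var = uniform_filter(gray * gray, size=size) - mu * mu
        redundant = var < var_thresh
        # Smooth the depth aggressively where content is redundant and
        # keep the coarse estimate elsewhere.
        smoothed = uniform_filter(coarse_depth, size=size)
        return np.where(redundant, smoothed, coarse_depth)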
Abstract:
Various image sharpening techniques are disclosed. For example, a method for image sharpening includes obtaining, using at least one sensor of an electronic device, an image that includes visual content. The method also includes generating an edge map that indicates edges of the visual content within the image. The method further includes applying a high-pass signal and an adaptive gain based on the edge map to sharpen the image. The method also includes generating a bright halo mask and a dark halo mask based on the edge map, where the bright halo mask indicates an upper sharpening limit and the dark halo mask indicates a lower sharpening limit. In addition, the method includes modifying a level of sharpening at one or more of the edges within the sharpened image to provide halo artifact reduction based on the bright halo mask and the dark halo mask.
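The pipeline can be approximated in a few lines of NumPy/SciPy for a single-channel image; here the halo masks are taken as local maxima and minima around each pixel, which is one plausible reading of the upper and lower sharpening limits:

    import numpy as np
    from scipy.ndimage import gaussian_filter, grey_dilation, grey_erosion

    def sharpen(img, gain=1.5, sigma=1.0):
        # High-pass signal and a simple edge map derived from it.
        high_pass = img - gaussian_filter(img, sigma=sigma)
        edge = np.abs(high_pass)
        adaptive_gain = gain * edge / (edge.max() + 1e-6)
        sharpened = img + adaptive_gain * high_pass
        # Bright/dark halo masks: local extrema act as upper and lower
        # sharpening limits, clamping overshoot and undershoot at edges.
        bright = grey_dilation(img, size=3)
        dark = grey_erosion(img, size=3)
        return np.clip(sharpened, dark, bright)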
Abstract:
An electronic device includes at least one imaging sensor and at least one processor coupled to the at least one imaging sensor. The at least one imaging sensor is configured to capture a burst of image frames. The at least one processor is configured to generate a low-resolution image from the burst of image frames. The at least one processor is also configured to estimate a blur kernel based on the burst of image frames. The at least one processor is further configured to perform deconvolution on the low-resolution image using the blur kernel to generate a deconvolved image. In addition, the at least one processor is configured to generate a high-resolution image using super resolution (SR) on the deconvolved image.
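A frequency-domain sketch of the deconvolution step, using a Wiener filter and SciPy's zoom as a stand-in for the SR stage (the abstract leaves the actual SR method unspecified):

    import numpy as np
    from scipy.ndimage import zoom

    def wiener_deconvolve(low_res, kernel, snr=0.01):
        # Wiener deconvolution with the estimated blur kernel; snr trades
        # sharpness against noise amplification.
        K = np.fft.fft2(kernel, s=low_res.shape)
        Y = np.fft.fft2(low_res)
        X = Y * np.conj(K) / (np.abs(K) ** 2 + snr)
        return np.real(np.fft.ifft2(X))

    def upscale(deconvolved, ratio=2):
        # Placeholder for the super-resolution stage.
        return zoom(deconvolved, ratio)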
Abstract:
A method includes obtaining, using at least one sensor of an electronic device, multiple image frames of a scene. The multiple image frames include a first image frame and a second image frame captured using different exposures. The method also includes excluding, using at least one processor of the electronic device, pixels in the first and second image frames based on a coarse motion map. The method further includes generating, using the at least one processor, multiple local histogram match maps based on different portions of the first and second image frames. In addition, the method includes generating, using the at least one processor, an image of the scene using the local histogram match maps.
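A minimal sketch that reads the local histogram match maps as per-tile gain ratios, computed only over pixels not excluded by the coarse motion map; the tile-ratio form is an assumption, since full histogram matching maps entire intensity distributions:

    import numpy as np

    def local_match_maps(ref, alt, motion_map, tiles=4):
        # One gain value per tile, estimated from static pixels only.
        h, w = ref.shape
        maps = np.ones((tiles, tiles))
        for i in range(tiles):
            for j in range(tiles):
                ys = slice(i * h // tiles, (i + 1) * h // tiles)
                xs = slice(j * w // tiles, (j + 1) * w // tiles)
                keep = motion_map[ys, xs] == 0   # exclude motion pixels
                if keep.any():
                    maps[i, j] = (ref[ys, xs][keep].mean()
                                  / (alt[ys, xs][keep].mean() + 1e-6))
        return maps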
Abstract:
A method includes obtaining multiple image frames of a scene using at least one camera of an electronic device. The method also includes using a convolutional neural network to generate blending maps associated with the image frames. The blending maps contain or are based on both a measure of motion in the image frames and a measure of how well exposed different portions of the image frames are. The method further includes generating a final image of the scene using at least some of the image frames and at least some of the blending maps. The final image of the scene may be generated by blending the at least some of the image frames using the at least some of the blending maps, and the final image of the scene may include image details that are lost in at least one of the image frames due to over-exposure or under-exposure.
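Whatever network produces them, the blending maps are consumed as per-pixel weights; a sketch of that final blending step follows, with the normalization scheme assumed:

    import numpy as np

    def blend(frames, blend_maps):
        # frames, blend_maps: lists of HxW arrays; each map jointly encodes
        # motion and well-exposedness for its frame (assumed CNN outputs).
        w = np.stack(blend_maps) + 1e-6
        w = w / w.sum(axis=0, keepdims=True)   # normalize weights per pixel
        return (np.stack(frames) * w).sum(axis=0)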
Abstract:
A method includes obtaining, using at least one image sensor of an electronic device, multiple image frames of a scene. The multiple image frames include a plurality of short image frames at a first exposure level and a plurality of long image frames at a second exposure level longer than the first exposure level. The method also includes generating a short reference image frame and a long reference image frame using the multiple image frames. The method further includes selecting, using a processor of the electronic device, the short reference image frame or the long reference image frame as a reference frame, where the selection is based on an amount of saturated motion in the long reference image frame and an amount of a shadow region in the short reference image frame. In addition, the method includes generating a final image of the scene using the reference frame.
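One plausible reading of the selection rule, as a heuristic over normalized pixel fractions (the thresholds and the comparison itself are assumptions):

    import numpy as np

    def select_reference(short_ref, long_ref, motion_map,
                         sat_thresh=0.95, shadow_thresh=0.05):
        # Saturated motion: moving pixels that clip in the long exposure.
        saturated_motion = ((long_ref > sat_thresh) & (motion_map > 0)).mean()
        # Shadow region: deeply underexposed pixels in the short exposure.
        shadow = (short_ref < shadow_thresh).mean()
        return short_ref if saturated_motion > shadow else long_ref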
Abstract:
A mobile device includes an embedded digital camera that is configured to capture a burst of N images. The mobile device also includes processing circuitry comprising a registration module configured to, for each image within the burst, analyze an amount of warp of the image and generate a set of affine matrices indicating that amount of warp. The processing circuitry includes a High Fidelity Interpolation block configured to, for each image within the burst, perform an affine transformation using the set of affine matrices associated with the image, apply an aliasing-retaining interpolation filter, and implement rotation transformation and sub-pixel shifts, yielding an interpolated image. The processing circuitry also includes a blending module configured to receive the N interpolated images and blend them into a single blended image having a user-selected digital zoom ratio.
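A compressed sketch of the warp-and-blend path in SciPy, with order-1 interpolation standing in for the aliasing-retaining filter and each affine supplied as a (matrix, offset) pair; both are simplifications of the blocks described above:

    import numpy as np
    from scipy.ndimage import affine_transform, zoom

    def zoom_burst(burst, affines, ratio):
        # Warp each frame with its registration affine (rotation plus
        # sub-pixel shift), accumulate, and average into one blended image.
        acc = np.zeros_like(burst[0], dtype=np.float64)
        for img, (matrix, offset) in zip(burst, affines):
            acc += affine_transform(img, matrix, offset=offset, order=1)
        blended = acc / len(burst)
        # Upscale to the user-selected digital zoom ratio.
        return zoom(blended, ratio)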