Abstract:
A system and a method for detecting light sources in a multi-illuminated environment using a composite red-green-blue-infrared (RGB-IR) sensor are provided. The method comprises detecting, by the composite RGB-IR sensor, a multi-illuminant area using a visible raw image and a near-infrared (NIR) raw image of a composite RGB-IR image, dividing each of the visible raw image and the NIR raw image into a plurality of grid samples, extracting a plurality of illuminant features based on a green/NIR pixel ratio and a blue/NIR pixel ratio, estimating at least one illuminant feature for each grid sample by passing each grid sample through a convolutional neural network (CNN) module using the extracted plurality of illuminant features, and smoothing each grid sample based on the estimated at least one illuminant feature.
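As an illustrative sketch (not part of the claimed method), the grid division and green/NIR and blue/NIR ratio-feature extraction described above could be expressed as follows; the function name, grid size, and epsilon guard are assumptions of this sketch:

```python
import numpy as np

def illuminant_features(visible, nir, grid=4):
    """Split aligned visible (H, W, 3) and NIR (H, W) rasters into a
    grid x grid set of samples and compute per-sample G/NIR and B/NIR
    pixel-ratio features. Names and grid size are illustrative."""
    h, w = nir.shape
    gh, gw = h // grid, w // grid
    feats = []
    for i in range(grid):
        for j in range(grid):
            sl = (slice(i * gh, (i + 1) * gh), slice(j * gw, (j + 1) * gw))
            g = visible[sl + (1,)].mean()   # green channel mean in this grid sample
            b = visible[sl + (2,)].mean()   # blue channel mean in this grid sample
            n = nir[sl].mean() + 1e-6       # NIR mean; epsilon avoids division by zero
            feats.append((g / n, b / n))
    return np.array(feats)  # shape: (grid*grid, 2)
```

The resulting per-sample feature vectors would be the inputs passed to the CNN module for illuminant estimation.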
Abstract:
An image processing apparatus and an image processing method are provided. The image processing apparatus includes an image capturer configured to capture a plurality of images having different zoom levels, a storage configured to store the plurality of images, a display configured to display a first image among the plurality of images, and a processor configured to control the display to display a second image among the plurality of images based on a control input, the second image having a zoom level different from a zoom level of the first image.
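A minimal sketch of the selection step, assuming the control input resolves to a zoom-in/zoom-out direction and that the processor picks the stored image at the nearest differing zoom level (both assumptions, not stated in the abstract):

```python
from dataclasses import dataclass

@dataclass
class StoredImage:
    zoom_level: float
    data: bytes

def select_next_image(images, current, zoom_in):
    """Pick the stored image whose zoom level is the nearest step above
    (zoom_in=True) or below (zoom_in=False) the currently displayed image.
    Hypothetical helper; the apparatus drives this from a control input."""
    candidates = [im for im in images
                  if (im.zoom_level > current.zoom_level) == zoom_in
                  and im.zoom_level != current.zoom_level]
    if not candidates:
        return current  # no image in that direction; keep displaying current
    return min(candidates, key=lambda im: abs(im.zoom_level - current.zoom_level))
```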
Abstract:
Methods and systems for reconstructing a high frame rate high resolution video in a Bayer domain, when an imaging device is set in a Flexible Sub-Sampled Readout (FSR) mode, are described. A method provides the FSR mode, which utilizes a multi-parity FSR mechanism to spatially and temporally sample the full frame Bayer data. The multi-parity FSR utilizes a zigzag sampling that assists reconstruction of motion-compensated, artifact-free high frame rate high resolution video with full frame size. The method includes reconstructing the high frame rate high resolution video using a plurality of parity fields generated by the multi-parity FSR mechanism. The reconstruction is based on an FSR reconstruction mechanism that can be a pre-Image Signal Processor (ISP) FSR reconstruction or a post-ISP FSR reconstruction, based on bandwidth capacity of an ISP used by the imaging device.
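One way to picture the multi-parity zigzag sampling is a diagonal assignment of 2x2 Bayer quads to parity fields. The pattern below is an illustrative assumption, not the device's actual FSR pattern:

```python
import numpy as np

def zigzag_parity_fields(frame, parities=2):
    """Split a full-resolution frame into `parities` spatially
    sub-sampled fields: each 2x2 Bayer quad is assigned to a field by
    the diagonal (zigzag) index (row + col) % parities. Pixels outside
    a field are zeroed here for visibility; a real readout would skip them."""
    h, w = frame.shape
    qi, qj = np.meshgrid(np.arange(h // 2), np.arange(w // 2), indexing="ij")
    field_of_quad = (qi + qj) % parities  # zigzag (diagonal) assignment per quad
    fields = []
    for p in range(parities):
        # expand the per-quad mask to per-pixel (each quad covers 2x2 pixels)
        mask = np.repeat(np.repeat(field_of_quad == p, 2, axis=0), 2, axis=1)
        fields.append(np.where(mask, frame, 0))
    return fields
```

Because the fields partition the frame, summing them recovers the full-frame Bayer data, which is what makes full-frame reconstruction from the parity fields possible.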
Abstract:
Embodiments herein disclose a method for recommending an image capture mode by an electronic device. The method includes identifying, by the electronic device, at least one region of interest (ROI) displayed in a camera preview of the electronic device for capturing an image in a non-ultra-wide image capture mode. Further, the method includes determining, by the electronic device, that the at least one ROI is suitable to be captured in an ultra-wide image capture mode. Further, the method includes providing, by the electronic device, at least one recommendation to switch to the ultra-wide image capture mode from the non-ultra-wide image capture mode for capturing the image.
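A minimal sketch of one possible suitability test, assuming (this is not stated in the abstract) that an ROI touching the preview border indicates the scene would fit better in ultra-wide capture; the box format and margin threshold are assumptions:

```python
def recommend_ultra_wide(roi_boxes, preview_w, preview_h, margin=0.02):
    """Return True when any detected ROI (x, y, w, h) touches the preview
    border within a relative margin, suggesting an ultra-wide recommendation.
    Box format and threshold are illustrative assumptions."""
    for (x, y, w, h) in roi_boxes:
        if (x <= margin * preview_w or y <= margin * preview_h
                or x + w >= (1 - margin) * preview_w
                or y + h >= (1 - margin) * preview_h):
            return True
    return False
```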
Abstract:
A method and a system for generation of a plurality of portrait effects in an electronic device are provided. The method includes feeding an image captured from the electronic device into an encoder pre-learned using a plurality of features corresponding to the plurality of portrait effects and extracting, using the encoder, at least one of one or more low-level features and one or more high-level features from the image. The method includes generating, for the image, one or more first portrait effects of the plurality of portrait effects by passing the image through one or more first decoders. The method includes generating, for the image, one or more second portrait effects of the plurality of portrait effects by passing the image through one or more second decoders, wherein each of the one or more first portrait effects and the one or more second portrait effects is generated in a single inference.
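The shared-encoder, multi-decoder flow above can be sketched as follows; the encoder/decoder signatures are illustrative assumptions, but the structure shows why all effects come out of a single inference pass (the encoder runs once, and its features fan out to every decoder):

```python
def render_portrait_effects(image, encoder, first_decoders, second_decoders):
    """Run the shared pre-learned encoder once, then feed the extracted
    low-level and high-level features to every decoder so all portrait
    effects are produced in a single inference. Signatures are illustrative."""
    low, high = encoder(image)                      # encoder runs exactly once
    effects = [dec(low, high) for dec in first_decoders]
    effects += [dec(low, high) for dec in second_decoders]
    return effects
```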
Abstract:
A method for generating metadata pertaining to a RAW frame includes selecting an input frame from a captured RAW frame, a plurality of frames obtained by processing the captured RAW frame, and a scaled RAW frame, selecting identified salient regions in an output frame, constructed from the captured RAW frame, based on errors between regions of the input frame and corresponding reconstructions of the regions of the input frame from the identified salient regions in the output frame, obtaining a plurality of reconstructed frames, reconstructed from a plurality of blocks of each salient region, corresponding to a plurality of regions of the input frame, and generating metadata for reconstructing the captured RAW frame by encoding a plurality of errors between the plurality of reconstructed frames and a corresponding plurality of regions of the input frame, and a reconstruction technique used for reconstructing the plurality of reconstructed frames.
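The core idea, encoding per-region residuals plus the reconstruction technique so the original regions can be recovered later, can be sketched as below; the record field names and the round-trip helper are assumptions of this sketch:

```python
import numpy as np

def region_metadata(input_regions, reconstructed_regions, technique="bilinear"):
    """Build a metadata record holding, for each region, the residual
    (error) between the input region and its reconstruction, plus the
    reconstruction technique used. Field names are illustrative."""
    residuals = [inp - rec for inp, rec in zip(input_regions, reconstructed_regions)]
    return {"technique": technique, "residuals": residuals}

def reconstruct_from_metadata(reconstructed_regions, metadata):
    """Recover the original regions by adding the stored residuals back."""
    return [rec + res for rec, res in zip(reconstructed_regions, metadata["residuals"])]
```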
Abstract:
Example embodiments include a method and an electronic device for detecting and removing artifacts/degradations in media. Embodiments may detect artifacts and/or degradations in the media based on tag information indicating at least one artifact included in the media. The detection may be triggered automatically or manually. Embodiments may generate artifact/quality tag information associated with the media to indicate artifacts and/or degradations present in the media, and may store the artifact/quality tag information as metadata and/or in a database. Embodiments may identify, based on the artifact/quality tag information associated with the media, at least one artificial intelligence (AI)-based media processing model to be applied to the media to enhance the media. The at least one AI-based media processing model may be configured to enhance at least one artifact detected in the media. Embodiments may enhance the media by applying the at least one AI-based media processing model to the media.
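The tag-driven model selection step can be sketched as a lookup from artifact tags to processing models; the tag names and model identifiers below are placeholders, not drawn from the embodiments:

```python
# Hypothetical artifact-tag -> enhancement-model mapping; the tag and
# model names are placeholders for illustration only.
MODEL_FOR_ARTIFACT = {
    "blur": "deblur_net",
    "noise": "denoise_net",
    "compression": "deartifact_net",
}

def models_for_media(tag_info):
    """Select the AI-based processing models to apply to the media,
    based on the artifact/quality tags stored with it. Unknown tags
    are ignored rather than raising, so tagging can evolve safely."""
    return [MODEL_FOR_ARTIFACT[tag] for tag in tag_info if tag in MODEL_FOR_ARTIFACT]
```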
Abstract:
A method and an apparatus for generating a composite image in an electronic device are provided. The method includes identifying a first image element of a first event from first images successively captured by a first image sensor of the electronic device, and identifying a second image element of a second event from second images successively captured by a second image sensor of the electronic device, the first images and the second images being simultaneously captured. The method further includes combining the first image element with the second image element based on a synchronization parameter to generate the composite image.
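A minimal sketch of the combining step, assuming (these are assumptions of the sketch, not the abstract) that the synchronization parameter is a shared timestamp key on each image element and that merging is a simple dictionary union:

```python
def composite(first_elements, second_elements, sync_key):
    """Pair image elements from two simultaneously capturing sensors by a
    synchronization parameter (here: a shared timestamp key) and merge
    each matched pair into one composite element. Illustrative only."""
    by_time = {el[sync_key]: el for el in second_elements}
    merged = []
    for el in first_elements:
        mate = by_time.get(el[sync_key])
        if mate is not None:
            # first-sensor fields take precedence on key collisions
            merged.append({**mate, **el})
    return merged
```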