Abstract:
Electronic devices may include camera modules. A camera module may be formed from an array of lenses and corresponding image sensors. The array of image sensors may include three color image sensors for color imaging and a fourth image sensor positioned to improve image depth mapping. Providing a camera module with a fourth image sensor may increase the baseline distance between the two most distant image sensors, allowing parallax and depth information to be determined for objects a greater distance from the camera than in a conventional electronic device. The fourth image sensor may be a second green image sensor positioned at a maximal distance from the green color image sensor used for color imaging. The fourth image sensor may also be a clear image sensor, allowing capture of improved image depth information and enhanced image resolution and low-light performance.
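The depth benefit of a wider baseline follows from simple stereo triangulation. The sketch below illustrates it under a pinhole-camera assumption; the function name and parameters are illustrative, not from the abstract:

```python
import numpy as np

def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Triangulate depth from stereo disparity (pinhole model):
    depth = focal_length * baseline / disparity. A wider baseline
    between the two most distant sensors yields a larger disparity
    at a given depth, so the same 1-pixel disparity quantum
    resolves objects farther from the camera."""
    return focal_px * baseline_m / np.asarray(disparity_px, dtype=float)

# With the baseline doubled, a 1-pixel disparity corresponds to
# twice the depth, extending the usable depth-mapping range.
```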
Abstract:
A multimedia data coding method for cell phones is provided, the method comprising the following steps: first, multimedia data is captured by a first cell phone, and a coding parameter is determined by selecting a coding mode. The multimedia data is then coded according to the coding mode and the coding parameter to output a signal, which is transmitted to a second cell phone.
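The claimed capture-select-code-transmit flow can be sketched as a pipeline of placeholder stages; all callables below are hypothetical, since the abstract names no particular codec or transport:

```python
def code_and_send(capture, select_mode, params_for, encode, transmit):
    """The abstract's flow as a pipeline: capture on the first
    phone, select a coding mode, derive its coding parameter,
    encode, then transmit the signal to the second phone.
    Every stage here is a caller-supplied placeholder."""
    data = capture()                  # first multimedia data is captured
    mode = select_mode(data)          # a coding mode is selected
    param = params_for(mode)          # the coding parameter follows from the mode
    signal = encode(data, mode, param)
    return transmit(signal)           # signal sent to the second cell phone
```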
Abstract:
The present invention discloses a test method for image sharpness that can instantly determine the sharpness of a captured image. The captured image is first divided into multiple blocks, each composed of multiple pixels. In each block, the pixels with the highest sharpness are selected, and their sharpness values are summed to give the sharpness of that block; the estimated sharpness of the entire captured image is obtained similarly. The estimated sharpness is then compared with a threshold value to determine whether the captured image is sharp enough. Thereby, the present invention can test the sharpness of an image quickly and accurately, inform the user of the status of the captured image, and provide the user with corresponding suggestions.
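A minimal sketch of the block-wise scheme follows. The abstract does not specify the per-pixel sharpness measure, so gradient magnitude is assumed here, and the block size, top-k count, and threshold are illustrative:

```python
import numpy as np

def image_sharpness(img, block=8, top_k=4):
    """Divide the image into blocks; in each block keep only the
    top_k sharpest pixels (gradient magnitude stands in for the
    unspecified per-pixel sharpness) and sum them as the block
    sharpness; the image sharpness is the sum over blocks."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)
    s = np.abs(gx) + np.abs(gy)       # per-pixel sharpness estimate
    h, w = s.shape
    total = 0.0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            px = np.sort(s[y:y + block, x:x + block].ravel())
            total += px[-top_k:].sum()   # only the sharpest pixels count
    return total

def is_sharp(img, threshold):
    """Compare the estimated sharpness against a threshold."""
    return image_sharpness(img) >= threshold
```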
Abstract:
This disclosure is generally directed to systems and methods for noise reduction in high dynamic range ("HDR") imaging systems. In some embodiments, multiple images of the same scene can be captured, where each image is exposed for a different amount of time. An HDR image may be created by suitably combining the images. However, the signal-to-noise ratio ("SNR") curve of the resulting HDR image can have discontinuities in sections of the SNR curve corresponding to shifts between different exposure times. Accordingly, in some embodiments, a noise model for the HDR image can be created that takes these discontinuities in the SNR curve into account. For example, a noise model can be created that smooths the discontinuities of the SNR curve into a continuous function. This noise model may then be used with a Bayer filter or any other suitable noise filter to remove noise from the HDR image.
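One way to smooth a discontinuous SNR curve into a continuous function is to blend the two exposures' curves around the transition intensity. The sigmoid parameterization below is an assumption for illustration, not the abstract's model:

```python
import numpy as np

def smooth_snr_model(intensity, snr_short, snr_long, knee, width):
    """Blend the long- and short-exposure SNR curves into one
    continuous noise model around the transition intensity `knee`,
    instead of switching abruptly between them at the exposure
    boundary. `snr_short`/`snr_long` are per-exposure SNR curves;
    the sigmoid blend of half-width `width` is a hypothetical choice."""
    t = 1.0 / (1.0 + np.exp(-(np.asarray(intensity, float) - knee) / width))
    return (1.0 - t) * snr_long(intensity) + t * snr_short(intensity)
```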
Abstract:
Embodiments describe noise reduction methods and systems for imaging devices having a pixel array with a plurality of pixels, each pixel representing one of a plurality of captured colors and having an associated captured color pixel value. The noise reduction methods filter a captured color pixel value for a respective pixel based on the captured color pixel values associated with pixels in a window surrounding the respective pixel. Disclosed embodiments provide a low-cost noise reduction filtering process that takes advantage of the correlations among the red, green, and blue color channels to efficiently remove noise while retaining image sharpness. A noise model can be used to derive a parameter of the noise reduction methods.
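A simplified sketch of window-based filtering on a Bayer mosaic follows. It averages each pixel with the same-color pixels in its surrounding window, weighted by value similarity; the cross-channel correlation exploited by the disclosed process is omitted, and the parameters are illustrative:

```python
import numpy as np

def denoise_bayer(raw, sigma=10.0, radius=2):
    """Range-weighted average over same-color neighbors: for each
    pixel, gather the window of pixels sharing its captured color
    (same-color sites repeat every 2 pixels in a Bayer pattern)
    and average them with Gaussian weights on value difference,
    so flat regions are smoothed while edges are preserved."""
    raw = np.asarray(raw, dtype=float)
    out = np.empty_like(raw)
    h, w = raw.shape
    for y in range(h):
        ys = [v for v in range(y - 2 * radius, y + 2 * radius + 1, 2) if 0 <= v < h]
        for x in range(w):
            xs = [u for u in range(x - 2 * radius, x + 2 * radius + 1, 2) if 0 <= u < w]
            win = raw[np.ix_(ys, xs)]
            wgt = np.exp(-((win - raw[y, x]) ** 2) / (2 * sigma ** 2))
            out[y, x] = (wgt * win).sum() / wgt.sum()
    return out
```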
Abstract:
A non-frame-based motion detection method and apparatus for imagers requires only a few line buffers and little computation. The non-frame-based, low cost motion detection method and apparatus are well suited for "system-on-a-chip" (SOC) imager implementations.
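The abstract does not describe the detection rule, so the following is only a sketch of the non-frame-based idea: instead of buffering whole frames, keep a small per-row summary of the previous frame and flag rows whose summary changes:

```python
import numpy as np

class LineMotionDetector:
    """Detect motion without a full frame buffer: store only one
    summary value per row (the row mean, a hypothetical choice)
    from the previous frame, and flag a row as containing motion
    when its summary changes by more than a threshold."""

    def __init__(self, threshold=5.0):
        self.threshold = threshold
        self.prev = {}            # per-row summaries, not a frame buffer

    def process_line(self, row_index, line):
        summary = float(np.mean(line))
        moved = (row_index in self.prev
                 and abs(summary - self.prev[row_index]) > self.threshold)
        self.prev[row_index] = summary   # roll forward to the next frame
        return moved
```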
Abstract:
A method and apparatus for image stabilization while mitigating the amplification of image noise by using a motion adaptive system employing spatial and temporal filtering of pixel signals from multiple captured frames of a scene.
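The motion-adaptive idea can be sketched as a per-pixel blend: average over time where the frames agree (suppressing noise), and fall back to spatial smoothing where they differ (avoiding motion ghosting). The motion measure, box filter, and threshold below are illustrative assumptions:

```python
import numpy as np

def stabilize_pixelwise(frames, motion_thresh=10.0):
    """Motion-adaptive spatio-temporal filter: temporal averaging
    of static pixels reduces noise without amplification, while
    pixels flagged as moving use a 3x3 spatial average of the
    latest frame instead, so motion is not smeared across time."""
    frames = np.asarray(frames, dtype=float)
    latest = frames[-1]
    temporal = frames.mean(axis=0)            # temporal branch
    pad = np.pad(latest, 1, mode='edge')      # 3x3 box filter, spatial branch
    spatial = sum(pad[dy:dy + latest.shape[0], dx:dx + latest.shape[1]]
                  for dy in range(3) for dx in range(3)) / 9.0
    motion = np.abs(latest - frames[0])       # simple per-pixel motion measure
    alpha = np.clip(motion / motion_thresh, 0.0, 1.0)   # 0 = static, 1 = moving
    return (1.0 - alpha) * temporal + alpha * spatial
```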
Abstract:
A system for capturing a high dynamic range (HDR) image comprises an image sensor with a split pixel including a first pixel having higher effective gain and a second pixel having lower effective gain. The second pixels are exposed with a capture window that captures at least one pulse emitted by a light-emitting diode (LED) controlled by pulse-width modulation. A first HDR image is produced by a combination including an image produced by the second pixels and images produced by multiple exposures of the first pixels. A weight map for LED flicker correction is generated from the difference between the image produced by the second pixels and the images produced by the first pixels, and the flicker areas in the first HDR image are corrected with the weight map and the image from the second pixels.
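A sketch of the correction step follows. The weight-map scaling is a hypothetical choice; the abstract only states that the map derives from the difference between the second-pixel image and the first-pixel images:

```python
import numpy as np

def correct_led_flicker(hdr_first, img_second, thresh=20.0):
    """The second (low-gain) pixels use a capture window long
    enough to see a full PWM pulse, while the first (high-gain)
    exposures may straddle the LED's off time. Build a weight map
    from their difference and substitute second-pixel values in
    flickering areas. The clip/threshold scaling is illustrative."""
    hdr_first = np.asarray(hdr_first, dtype=float)
    img_second = np.asarray(img_second, dtype=float)
    diff = np.abs(img_second - hdr_first)
    w = np.clip(diff / thresh, 0.0, 1.0)     # 1 where flicker is likely
    corrected = w * img_second + (1.0 - w) * hdr_first
    return corrected, w
```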
Abstract:
There are provided video encoders and corresponding methods for encoding video data for an image that is divisible into macroblocks. A video encoder includes an encoder for performing intra mode selection when encoding a current macroblock by testing a first subset of intra modes to compute a rate distortion cost, and utilizing the rate distortion cost to determine whether to terminate the intra mode selection and which additional intra modes, if any, are to be examined with respect to the current macroblock.
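The two-stage search can be sketched as follows; the cost function, mode names, and single fixed threshold are assumptions for illustration, since the abstract leaves the termination criterion open:

```python
def select_intra_mode(rd_cost, first_subset, extra_modes, early_stop_cost):
    """Evaluate a first subset of intra modes and terminate early
    if the best rate-distortion cost found is already below a
    threshold; otherwise examine the remaining modes. Returns the
    chosen mode, its cost, and how many modes were tested."""
    best_mode = min(first_subset, key=rd_cost)
    best_cost = rd_cost(best_mode)
    tested = len(first_subset)
    if best_cost >= early_stop_cost:   # not good enough: keep searching
        for m in extra_modes:
            tested += 1
            c = rd_cost(m)
            if c < best_cost:
                best_mode, best_cost = m, c
    return best_mode, best_cost, tested
```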
Abstract:
High dynamic range image sensors and image reconstruction methods for capturing high dynamic range images. An image sensor that captures high dynamic range images may include an array of pixels having two sets of pixels, each of which is used to capture an image of a scene. The two sets of pixels may be interleaved together. As an example, the first and second sets of pixels may be formed in odd-row pairs and even-row pairs of the array, respectively. The first set of pixels may use a longer exposure time than the second set of pixels. The exposures of the two sets of pixels may at least partially overlap in time. Image processing circuitry in the image sensors or an associated electronic device may de-interlace the two images and may combine the de-interlaced images to form a high dynamic range image.
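A simplified sketch of the de-interlace-and-combine step follows, assuming the row-pair interleave described above and a basic merge rule (use the long exposure unless it is clipped); the geometry and merge rule are simplified relative to the disclosed methods:

```python
import numpy as np

def combine_interleaved_hdr(raw, exposure_ratio, clip=255.0):
    """De-interlace a sensor readout whose alternating row pairs
    used long and short exposures, then merge: keep the
    long-exposure value where it is below the clip level, and
    otherwise substitute the short-exposure value scaled by the
    exposure ratio to restore the scene's full dynamic range."""
    raw = np.asarray(raw, dtype=float)
    rows = np.arange(raw.shape[0])
    long_mask = (rows // 2) % 2 == 0     # first row pair long, next pair short
    long_img = raw[long_mask]
    short_img = raw[~long_mask]
    # Nearest-pair de-interlace: both half-images cover the same scene rows here.
    return np.where(long_img < clip, long_img, short_img * exposure_ratio)
```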