Abstract:
Systems, methods, and computer readable media to capture and process high dynamic range (HDR) images when appropriate for a scene are disclosed. When appropriate, multiple images are captured at a single, slightly underexposed exposure value (making a constant-bracket HDR capture sequence), and local tone mapping (LTM) is applied to each image. Local tone map and histogram information can be used to generate a noise-amplification mask that may then be applied during fusion operations. Images obtained and fused in the disclosed manner provide high dynamic range with improved noise and de-ghosting characteristics.
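A minimal Python sketch of the general idea, using NumPy: the per-frame gain maps stand in for a real LTM operator, and the noise-amplification mask simply down-weights regions where the tone-map gain would boost noise. The function names, threshold, and weighted-average fusion are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

def local_tone_map(frame, gain_map):
    """Apply a per-pixel gain (a stand-in for a real LTM operator)."""
    return np.clip(frame * gain_map, 0.0, 1.0)

def noise_amplification_mask(gain_map, threshold=1.5):
    """Down-weight regions where the LTM gain would amplify noise the most."""
    mask = np.ones_like(gain_map)
    boosted = gain_map > threshold
    mask[boosted] = threshold / gain_map[boosted]
    return mask

def fuse_constant_bracket(frames, gain_maps):
    """Weighted average of tone-mapped frames, with the noise masks as weights.

    frames, gain_maps: lists of same-shape float arrays (one per capture).
    """
    acc = np.zeros_like(frames[0])
    weights = np.zeros_like(frames[0])
    for frame, gain in zip(frames, gain_maps):
        mask = noise_amplification_mask(gain)
        acc += mask * local_tone_map(frame, gain)
        weights += mask
    return acc / np.maximum(weights, 1e-6)
```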
Abstract:
Devices, methods, and non-transitory program storage devices are disclosed herein to perform intelligent determinations of non-linear (i.e., dynamic) image recording rates for the production of improved timelapse videos. The techniques described herein may be especially applicable to timelapse videos captured over long durations of time and/or with varying amounts of device motion/scene content change over the course of the captured video (e.g., when a user is walking, exercising, driving, etc. during the video's capture). By smoothly varying the image recording rate of the timelapse video in accordance with multi-temporal scale estimates of scene content change, the quality of the produced timelapse video may be improved (e.g., fewer long stretches with too little action, and fewer stretches with so much rapid action that it is difficult for a viewer to perceive what is happening in the video).
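A minimal sketch, in Python, of one way to vary a recording interval smoothly from multi-temporal-scale estimates of scene change. The change metric, window lengths, blend weights, and interval bounds are illustrative assumptions rather than the disclosed technique.

```python
from collections import deque

class DynamicTimelapseRate:
    """Track scene-change estimates over a short and a long window and map the
    blended activity level to a smoothly varying capture interval (seconds)."""

    def __init__(self, short_win=15, long_win=120,
                 min_interval=0.5, max_interval=8.0):
        self.short = deque(maxlen=short_win)
        self.long = deque(maxlen=long_win)
        self.min_interval = min_interval
        self.max_interval = max_interval
        self.interval = max_interval  # start slow, assuming little motion

    def update(self, change_score):
        """change_score: per-frame scene-content-change estimate in [0, 1]."""
        self.short.append(change_score)
        self.long.append(change_score)
        short_avg = sum(self.short) / len(self.short)
        long_avg = sum(self.long) / len(self.long)
        activity = 0.5 * short_avg + 0.5 * long_avg  # multi-temporal-scale blend
        target = self.max_interval - activity * (self.max_interval - self.min_interval)
        # Low-pass filter the interval so the recording rate changes smoothly.
        self.interval = 0.9 * self.interval + 0.1 * target
        return self.interval
```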
Abstract:
In one implementation, a method is performed for generating metadata estimations based on metadata subdivisions. The method includes: obtaining an input image; obtaining metadata associated with the input image; subdividing the metadata into a plurality of metadata subdivisions; determining a viewport relative to the input image based on at least one of head pose information and eye tracking information; generating one or more metadata estimations by performing an estimation algorithm on at least a portion of the plurality of metadata subdivisions based on the viewport; and generating an output image by performing an image processing algorithm on the input image based on the one or more metadata estimations.
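A small Python sketch of the flow, assuming the metadata is a 2-D per-pixel map, the subdivisions are a regular tile grid, the viewport is a rectangle derived upstream from head pose/eye tracking, and the "estimation algorithm" is a simple mean. All of these specifics are placeholders for illustration.

```python
import numpy as np

def viewport_metadata_estimate(metadata, viewport, tiles=(4, 4)):
    """Subdivide a 2-D metadata map into tiles and estimate a value from the
    tiles that intersect the viewport.

    metadata: 2-D float array of per-pixel metadata (e.g., luminance statistics).
    viewport: (y0, x0, y1, x1) rectangle in pixel coordinates.
    Note: for simplicity the grid assumes dimensions divisible by the tile counts.
    """
    h, w = metadata.shape
    th, tw = h // tiles[0], w // tiles[1]
    selected = []
    for ty in range(tiles[0]):
        for tx in range(tiles[1]):
            y0, x0 = ty * th, tx * tw
            y1, x1 = y0 + th, x0 + tw
            # Keep the tile if it overlaps the viewport rectangle.
            if not (y1 <= viewport[0] or y0 >= viewport[2] or
                    x1 <= viewport[1] or x0 >= viewport[3]):
                selected.append(metadata[y0:y1, x0:x1])
    if not selected:
        return float(np.mean(metadata))
    return float(np.mean([np.mean(tile) for tile in selected]))
```

The resulting estimate would then feed whatever image processing algorithm (e.g., tone mapping) produces the output image.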
Abstract:
An electronic image capture device captures a first image of a scene at a first time. A first local tone mapping operator for a first portion of the first image is determined. The electronic image capture device further captures a second image of the scene at a second time. A motion of the electronic device between the first time and the second time is determined. A second local tone mapping operator for a second portion of the second image is determined. The second portion is determined to correspond to the first portion based, at least in part, on the determined motion of the electronic device. The second local tone mapping operator is determined based, at least in part, on the first local tone mapping operator. At least the second local tone mapping operator is applied to the second portion of the second image.
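A minimal sketch, assuming the local tone mapping operator is represented as a per-tile (or per-pixel) gain map and that the estimated device motion reduces to an integer pixel shift; the blending weight and warp are illustrative, not the disclosed method.

```python
import numpy as np

def shift_gain_map(prev_gains, dy, dx):
    """Warp the first image's LTM gains by the estimated device motion so they
    line up with the corresponding portion of the second image."""
    return np.roll(np.roll(prev_gains, dy, axis=0), dx, axis=1)

def second_frame_gains(prev_gains, new_gains, dy, dx, blend=0.7):
    """Derive the second image's LTM gains from the motion-compensated first-image
    gains, blended with freshly computed gains for temporal stability."""
    warped = shift_gain_map(prev_gains, dy, dx)
    return blend * warped + (1.0 - blend) * new_gains
```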
Abstract:
This disclosure relates to a wide gamut encoder capable of receiving a wide gamut color image in accordance with a wide gamut standard. The encoder can encode one or more wide gamut color image pixel values into portions of narrow gamut encoding elements for transmission to a video encoder. The encoder can implement an advanced extended YCC format that is backward compatible with a P3 color gamut.
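For orientation, here is an xvYCC-style sketch of the general extended-range idea in Python: in-gamut pixels land in conventional narrow-range code values, while wide-gamut excursions occupy the spare head/footroom codes, so a narrow-gamut decoder can still interpret the stream. The BT.709-style matrix and 8-bit narrow-range scaling are assumptions for illustration and are not the disclosed "advanced extended YCC" format.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """rgb: float array (..., 3); wide-gamut values may fall outside [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556
    cr = (r - y) / 1.5748
    return np.stack([y, cb, cr], axis=-1)

def encode_extended_range(ycbcr):
    """Quantize to 8 bits with narrow-range scaling but without clipping to the
    narrow range, so out-of-range (wide-gamut) values use the spare code values."""
    y = 16.0 + 219.0 * ycbcr[..., 0]
    cb = 128.0 + 224.0 * ycbcr[..., 1]
    cr = 128.0 + 224.0 * ycbcr[..., 2]
    return np.clip(np.stack([y, cb, cr], axis=-1), 1, 254).round().astype(np.uint8)
```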
Abstract:
Techniques for auto white balancing of captured images based on detection of flicker in ambient light are described. When flicker is detected in ambient light during an image capture event, and the flicker is unchanging during the image capture event, a white point of image data may be estimated according to a first technique. When flicker is detected in ambient light during an image capture event, and the flicker is changing during the image capture event, a white point of image data may be estimated according to a second technique. When flicker is not detected, a white point of image data may be estimated according to a third technique. Image data may be color corrected based on the estimated white point.
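A minimal Python sketch of the three-way selection described above; the white point estimators themselves are placeholders passed in by the caller, since only the flicker-driven branching is taken from the abstract.

```python
def estimate_white_point(image_stats, flicker_detected, flicker_changing,
                         estimate_static_flicker, estimate_changing_flicker,
                         estimate_no_flicker):
    """Select a white point estimation technique based on flicker state."""
    if flicker_detected and not flicker_changing:
        return estimate_static_flicker(image_stats)    # first technique
    if flicker_detected and flicker_changing:
        return estimate_changing_flicker(image_stats)  # second technique
    return estimate_no_flicker(image_stats)            # third technique
```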
Abstract:
Methods, devices and computer readable instructions to generate multi-scale tone curves are disclosed. One method includes finding, for a given input image, a global tone curve that exhibits monotonic behavior. The input image may then be partitioned into a first number of sub-regions. For each sub-region, a local tone curve may be determined whose output is constrained to the global tone curve at one or more first luminance levels, so that each sub-region's local tone curve follows the global tone curve's monotonic behavior. If the resulting local tone curves provide sufficient control of shadow-boost, highlight-suppression, and contrast optimization, the first number of local tone curves may be applied directly to the input image. If additional control is needed, each sub-region may again be partitioned and local tone curves determined for each of the new sub-regions.
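A small Python sketch of the constraint step, assuming tone curves are represented as lookup tables and that monotonicity is repaired with a simple running maximum; both representations are illustrative assumptions.

```python
import numpy as np

def constrain_local_curve(local_lut, global_lut, anchor_levels):
    """Pin a sub-region's local tone curve to the global curve at anchor
    luminance levels, then repair monotonicity between the anchors.

    local_lut, global_lut: 1-D arrays mapping luminance index -> output level.
    anchor_levels: indices (luminance levels) where the curves must agree.
    """
    constrained = local_lut.copy()
    constrained[anchor_levels] = global_lut[anchor_levels]
    # Running maximum keeps the curve non-decreasing, matching the global
    # curve's monotonic behavior.
    return np.maximum.accumulate(constrained)
```

If finer control is needed, the same constraint could be applied again to the curves of each newly partitioned sub-region.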
Abstract:
Methods, devices and computer readable instructions to generate region-of-interest (ROI) tone curves are disclosed. One method includes obtaining a statistic for an entire image such as, for example, a luminance statistic. The same statistic may then be found for a specified ROI of the image. A weighted combination of the statistic of the entire image and the statistic of the ROI yields a combined statistic which may then be converted to a ROI-biased tone curve. The weight used to combine the two statistics may be selected to emphasize or de-emphasize the role of the ROI's statistic in the final tone curve.
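A minimal Python sketch, assuming the statistic is a luminance histogram and that the combined statistic is converted to a tone curve via its cumulative distribution (a histogram-equalization-style mapping); that conversion is an assumption for illustration.

```python
import numpy as np

def roi_biased_tone_curve(luma, roi, weight=0.5, bins=256):
    """Blend a whole-image luminance histogram with an ROI histogram and derive
    a tone curve (as a normalized LUT) from the combination.

    luma:   2-D luminance image with values in [0, 1].
    roi:    (y0, x0, y1, x1) rectangle.
    weight: how strongly the ROI's statistic is emphasized in the final curve.
    """
    hist_full, _ = np.histogram(luma, bins=bins, range=(0.0, 1.0), density=True)
    roi_pixels = luma[roi[0]:roi[2], roi[1]:roi[3]]
    hist_roi, _ = np.histogram(roi_pixels, bins=bins, range=(0.0, 1.0), density=True)
    combined = (1.0 - weight) * hist_full + weight * hist_roi
    cdf = np.cumsum(combined)
    return cdf / cdf[-1]  # ROI-biased tone curve as a normalized LUT
```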
Abstract:
Systems, methods, and computer readable media for the use of a metric whose value is especially sensitive to the information lost when an image's pixels are clipped are disclosed. The metric may be used as an image's score, where higher values indicate lost highlight information (more clipped pixels). One use of the disclosed metric would be to determine when the use of high dynamic range (HDR) techniques is appropriate. The disclosed metric may also be used to bias a scene's exposure value (EV), e.g., to a lower or underexposed value (EV−), so that the scene may be captured with no more than an acceptable number of clipped pixels.
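A minimal Python sketch of one possible clipping-sensitive score and an EV− bias driven by it; the thresholds, weights, step size, and clamp are illustrative assumptions, not the disclosed metric.

```python
import numpy as np

def clipping_score(luma, soft=0.92, hard=0.99):
    """Score an image by its clipped highlights; luma is in [0, 1].
    Fully clipped pixels count more than nearly clipped ones."""
    soft_frac = np.mean((luma >= soft) & (luma < hard))
    hard_frac = np.mean(luma >= hard)
    return 0.25 * soft_frac + 1.0 * hard_frac

def biased_ev(current_ev, score, acceptable=0.01, step=-1.0 / 3.0, max_bias=-2.0):
    """Suggest an underexposure bias (EV-) proportional to how far the score
    exceeds the acceptable clipping budget, clamped to max_bias."""
    if score <= acceptable:
        return current_ev
    bias = max(max_bias, step * (score / acceptable))
    return current_ev + bias
```

A high score could also serve as the trigger to switch into an HDR capture mode.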
Abstract:
Image enhancement is achieved by separating image signals, e.g., YCbCr image signals, into a series of frequency bands and performing locally-adaptive noise reduction on bands below a given frequency but not on bands above that frequency. The bands are then summed to produce the enhanced image signals. The YCbCr, multi-band, locally-adaptive approach to denoising can operate independently, and in an optimized fashion, on both luma and chroma channels. Noise reduction is based on models developed for both luma and chroma channels from measurements taken over multiple frequency bands, in multiple patches of the ColorChecker chart, and at multiple gain levels. The result is a simple yet robust set of models that may be tuned off-line a single time for each camera and then applied in real time to images taken by such cameras, without excessive processing requirements and with satisfactory results across illuminant types and lighting conditions.
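A small Python sketch (NumPy/SciPy) of the band-split structure: a Gaussian difference-of-blurs decomposition whose bands sum back to the input, with denoising applied only to the lower-frequency bands. The flat soft-threshold stands in for the calibrated per-camera, per-band, per-gain noise models described above; the cutoff choice and sigmas are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_bands(channel, sigmas=(1.0, 2.0, 4.0)):
    """Split one channel (luma or chroma) into band-pass layers plus a low-pass
    residual; summing all returned bands reconstructs the input exactly."""
    bands, current = [], channel
    for sigma in sigmas:
        blurred = gaussian_filter(current, sigma)
        bands.append(current - blurred)  # band-pass detail at this scale
        current = blurred
    bands.append(current)  # low-pass residual
    return bands

def denoise_low_bands(bands, threshold=0.01):
    """Soft-threshold the lower-frequency detail bands only; the finest band is
    passed through untouched, and the low-pass residual is kept as-is to
    preserve overall brightness."""
    out = [bands[0]]  # highest-frequency band: no noise reduction
    for band in bands[1:-1]:
        out.append(np.sign(band) * np.maximum(np.abs(band) - threshold, 0.0))
    out.append(bands[-1])
    return out

def multiband_denoise(channel):
    """Decompose, denoise the selected bands, and sum the bands back together."""
    return np.sum(denoise_low_bands(split_bands(channel)), axis=0)
```

In practice the same structure would be run separately on the Y, Cb, and Cr channels, each with its own tuned noise model.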