Abstract:
The present disclosure provides example methods operable by a computing device. An example method can include receiving an image from a camera. The method can also include comparing one or more parameters of the image with one or more control parameters, where the one or more control parameters comprise information indicative of an image from a substantially unobstructed camera. Based on the comparison, the method can also include determining a score between the one or more parameters of the image and the one or more control parameters. The method can also include accumulating, by the computing device, a count of a number of times the determined score exceeds a first threshold. Based on the count exceeding a second threshold, the method can also include determining that the camera is at least partially obstructed.
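A minimal sketch of this score-and-count obstruction check. The specific image parameters (mean brightness and a simple sharpness measure) and the distance-based score are illustrative assumptions; the abstract does not specify them:

```python
import numpy as np

def image_params(image):
    """Extract simple per-image parameters (mean brightness and a
    gradient-based sharpness). These particular parameters are
    assumptions for illustration, not taken from the disclosure."""
    gray = image.mean(axis=-1)
    sharpness = np.abs(np.diff(gray, axis=0)).mean()
    return np.array([gray.mean(), sharpness])

def obstruction_detector(control_params, score_threshold, count_threshold):
    """Stateful detector: accumulates a count of frames whose distance
    from the control parameters exceeds score_threshold, and reports
    obstruction once that count exceeds count_threshold."""
    count = 0
    def check(image):
        nonlocal count
        score = np.linalg.norm(image_params(image) - control_params)
        if score > score_threshold:
            count += 1
        return count > count_threshold  # camera at least partially obstructed
    return check
```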
Abstract:
A plurality of images of a scene may be obtained. These images may have been captured by an image sensor, and may include a first image and a second image. A particular gain may have been applied to the first image. An effective color temperature and a brightness of a first pixel in the first image may be determined, and a mapping between pixel characteristics and noise deviation of the image sensor may be selected. The pixel characteristics may include pixel brightness. The selected mapping may be used to map at least the brightness of the first pixel to a particular noise deviation. The brightness of the first pixel and the particular noise deviation may be compared to a brightness of a second pixel of the second image. The comparison may be used to determine whether to merge the first pixel and the second pixel.
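A rough sketch of the noise-aware merge decision. The calibration-table structure, the nearest-key lookup, and the k-sigma merge rule are all assumptions; real noise models would come from sensor characterization:

```python
import numpy as np

def select_noise_mapping(gain, color_temperature, calibration_tables):
    """Select the brightness-to-noise-deviation mapping calibrated for
    the applied gain and effective color temperature. calibration_tables
    is assumed to be keyed by (gain, color_temperature) tuples."""
    key = min(calibration_tables,
              key=lambda k: abs(k[0] - gain) + abs(k[1] - color_temperature))
    return calibration_tables[key]

def should_merge(brightness_1, brightness_2, noise_mapping, k=2.0):
    """Merge the two pixels only if their brightness difference is within
    k noise deviations of the first pixel's expected noise."""
    sigma = np.interp(brightness_1,
                      noise_mapping["brightness"], noise_mapping["sigma"])
    return abs(brightness_1 - brightness_2) <= k * sigma
```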
Abstract:
An image-stack viewer may switch between images in an image stack based on detected interactions with the images that are displayed in the viewer. In particular, a region of interest (ROI) in an image may be determined based on an interaction, and image characteristics of the ROI may be evaluated in two or more images in the image stack in order to select the image in which the ROI best represents the evaluated characteristics.
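A minimal sketch of the selection step, assuming local sharpness as the evaluated characteristic (the abstract leaves the characteristics unspecified):

```python
import numpy as np

def roi_sharpness(image, roi):
    """Score one illustrative image characteristic (gradient variance,
    a sharpness proxy) inside the ROI. roi = (top, left, height, width)."""
    t, l, h, w = roi
    patch = image[t:t + h, l:l + w].astype(float)
    return np.var(np.diff(patch, axis=0)) + np.var(np.diff(patch, axis=1))

def best_image_for_roi(image_stack, roi):
    """Return the index of the stack image in which the ROI scores
    highest on the evaluated characteristic."""
    return max(range(len(image_stack)),
               key=lambda i: roi_sharpness(image_stack[i], roi))
```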
Abstract:
A first plurality of images of a scene may be captured. Each image of the first plurality of images may be captured with a different total exposure time (TET). Based at least on the first plurality of images, a TET sequence may be determined for capturing images of the scene. A second plurality of images of the scene may be captured. Images in the second plurality of images may be captured using the TET sequence. Based at least on the second plurality of images, an output image of the scene may be constructed.
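A rough sketch of the two-phase capture flow. The `camera.capture(tet)` call, the histogram-based TET heuristic, and the plain-average merge are all hypothetical placeholders; the abstract does not specify them:

```python
import numpy as np

def determine_tet_sequence(metering_images, metering_tets, n_payload=3):
    """From the metering burst, pick a TET for each payload frame. Here
    we simply reuse the TET whose image is closest to mid-gray; a real
    implementation would be far more elaborate."""
    def exposure_score(img):
        return -abs(img.mean() - 128)  # prefer mid-gray exposure
    best_tet = max(zip(metering_tets, metering_images),
                   key=lambda pair: exposure_score(pair[1]))[0]
    return [best_tet] * n_payload

def merge_images(images):
    """Align-and-merge placeholder: a plain average of the payload frames."""
    return np.mean(np.stack(images), axis=0)

def capture_output_image(camera, metering_tets):
    metering_images = [camera.capture(tet) for tet in metering_tets]  # first burst
    tet_sequence = determine_tet_sequence(metering_images, metering_tets)
    payload_images = [camera.capture(tet) for tet in tet_sequence]    # second burst
    return merge_images(payload_images)
```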
Abstract:
Embodiments described herein may help a computing device, such as a head-mountable device (HMD), to capture and process images in response to a user placing their hands in, and then withdrawing their hands from, a frame formation. For example, an HMD may analyze image data from a point-of-view camera on the HMD, and detect when a wearer holds their hands in front of their face to frame a subject in the wearer's field of view. Further, the HMD may detect when the wearer withdraws their hands from such a frame formation and responsively capture an image. Further, the HMD may determine a selection area that is being framed, within the wearer's field of view, by the frame formation. The HMD may then process the captured image based on the frame formation, such as by cropping, white-balancing, and/or adjusting exposure.
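A sketch of the capture logic as a small state machine. The `hmd.next_camera_frame()`, `hmd.detect_frame_formation()`, and `hmd.capture_image()` APIs are hypothetical names for illustration only:

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    FRAMING = auto()

def crop_to_selection(image, area):
    """Crop the captured image to the area framed by the wearer's hands."""
    top, left, height, width = area
    return image[top:top + height, left:left + width]

def run_frame_capture(hmd):
    """Event-loop sketch: capture once the hands are withdrawn from a
    previously detected frame formation (all hmd.* calls are assumed)."""
    state = State.IDLE
    selection_area = None
    while True:
        frame = hmd.next_camera_frame()
        formation = hmd.detect_frame_formation(frame)  # None if no hands framing
        if state is State.IDLE and formation is not None:
            state = State.FRAMING
            selection_area = formation.selection_area  # region framed by hands
        elif state is State.FRAMING and formation is None:
            image = hmd.capture_image()                # hands withdrawn: capture
            return crop_to_selection(image, selection_area)
```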
Abstract:
A first plurality of images of a scene may be captured. Each image of the first plurality of images may be captured using a different total exposure time (TET). Based at least on the first plurality of images, a long TET, a short TET, and a TET sequence that includes the long TET and the short TET may be determined. A second plurality of images of the scene may be captured. The images in the second plurality of images may be captured sequentially in an image sequence using a sequence of TETs corresponding to the TET sequence. Based on one or more images in the image sequence, an output image may be constructed.
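A minimal illustration of how a short/long TET pair might be combined when constructing the output image. The saturation-based merge rule and linear sensor values are assumptions, not the disclosure's method:

```python
import numpy as np

def merge_short_long(short_img, long_img, short_tet, long_tet, clip=250):
    """Exposure-fusion sketch: prefer the long-TET pixel (less noisy)
    unless it is clipped, in which case fall back to the short-TET pixel
    scaled into the long exposure's brightness range."""
    ratio = long_tet / short_tet
    short_scaled = short_img.astype(float) * ratio  # match long-exposure brightness
    use_long = long_img < clip                      # long pixel not saturated
    return np.where(use_long, long_img.astype(float), short_scaled)
```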