Abstract:
The invention is directed to exposure control in a camera. An exemplary method comprises determining depth information associated with a portion of an image frame; obtaining exposure data associated with the portion of the image frame; and controlling an amount of exposure for the portion of the image frame based on the depth information and the exposure data.
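For illustration only, a minimal sketch of such a control step in Python. The function name, the depth threshold, and the gain clamp are assumptions; the abstract states only that exposure for the portion is controlled from its depth information and exposure data.

import numpy as np

def control_region_exposure(depth_map, luma, target_luma=0.5,
                            near_threshold=1.0, far_gain=1.5):
    """Illustrative per-portion exposure adjustment.

    depth_map : 2-D array of depths (meters) for one image-frame portion.
    luma      : 2-D array of measured brightness (0..1) for the same portion.
    Returns a multiplicative exposure gain for the portion.
    """
    mean_depth = float(np.mean(depth_map))
    mean_luma = float(np.mean(luma))

    # Base gain drives the measured brightness toward the target.
    gain = target_luma / max(mean_luma, 1e-6)

    # Hypothetical depth weighting: boost exposure for distant portions,
    # which tend to receive less illumination (e.g. from a flash).
    if mean_depth > near_threshold:
        gain *= far_gain

    # Clamp to a plausible range for the imaging pipeline.
    return float(np.clip(gain, 0.25, 4.0))

# Example: a dim, distant portion receives a larger exposure gain.
depth = np.full((8, 8), 3.0)   # 3 m away
luma = np.full((8, 8), 0.2)    # under-exposed
print(control_region_exposure(depth, luma))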
Abstract:
A lifelog camera is configured to capture a digital image without user input upon detection of a sound-based trigger in an audio signal output by a microphone present at the lifelog camera. The sound-based trigger is indicative of activity of the user or another person near the user.
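A minimal sketch of the trigger logic, assuming a simple short-term energy test; the abstract does not say which sound classifier is used, so detect_sound_trigger, the RMS threshold, and the capture hooks are illustrative only.

import numpy as np

def detect_sound_trigger(audio_frame, rms_threshold=0.1):
    """Return True if the audio frame suggests nearby activity.

    A short-term RMS energy test stands in for whatever classifier the
    camera actually uses (speech, laughter, clatter, ...).
    """
    rms = float(np.sqrt(np.mean(np.square(audio_frame))))
    return rms > rms_threshold

def lifelog_loop(read_audio_frame, capture_image):
    """Capture an image, without user input, whenever a trigger fires."""
    frame = read_audio_frame()   # e.g. 0.5 s of microphone samples
    if detect_sound_trigger(frame):
        capture_image()

# Example with synthetic audio: a loud burst fires the trigger.
loud = 0.5 * np.random.randn(8000)
print(detect_sound_trigger(loud))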
Abstract:
The invention is directed to determining compensation distance for capturing an image in a zoom mode using a camera. An exemplary method comprises determining the camera is in a non-zoom mode; receiving first depth information associated with an object in the image when the camera is in the non-zoom mode; receiving first angle information associated with the object in the image when the camera is in the non-zoom mode; switching from the non-zoom mode to a zoom mode; receiving second angle information associated with the object in the image when the camera is in the zoom mode; determining a compensation distance for capturing the image of the object in the zoom mode based on the first depth information, the first angle information, and the second angle information; and generating an image based on the compensation distance.
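One plausible reading of the geometry, for illustration: the object's lateral offset from the optical axis is taken as depth times the tangent of the measured angle, and the zoom-mode angle to the same offset yields an effective depth whose difference from the first depth is the compensation distance. The function and the example values are assumptions, not taken from the disclosure.

import math

def compensation_distance(depth_non_zoom, angle_non_zoom_deg, angle_zoom_deg):
    """Estimate the compensation distance when switching to zoom mode.

    Assumed geometry: the object's lateral offset from the optical axis
    stays fixed, so offset = depth * tan(angle). The zoom-mode angle to
    the same offset recovers an effective depth, and the compensation
    distance is the difference of the two depths.
    """
    offset = depth_non_zoom * math.tan(math.radians(angle_non_zoom_deg))
    depth_zoom = offset / math.tan(math.radians(angle_zoom_deg))
    return depth_zoom - depth_non_zoom

# Example: object at 2.0 m, seen at 10 degrees before zoom and 4 degrees after.
print(round(compensation_distance(2.0, 10.0, 4.0), 3))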
Abstract:
The invention is directed to controlling shake blur and motion blur associated with an image. An exemplary method comprises: receiving a first image frame and a second image frame associated with the image; determining first movement of the camera in the first image frame or the second image frame; determining second movement of the camera between the first image frame and the second image frame; controlling, based on the first movement, at least a portion of shake blur associated with the image; and controlling, based on the second movement, at least a portion of motion blur associated with the image.
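A rough sketch of how the two movement estimates might be separated and applied, assuming gyroscope samples taken during each exposure and between the exposures; the correction mapping and its scale factors are placeholders, not values from the disclosure.

import numpy as np

def blur_control(gyro_frame1, gyro_frame2, gyro_between):
    """Split camera movement into shake and motion components.

    gyro_frame1 / gyro_frame2 : angular-rate samples taken while each
                                frame was being exposed.
    gyro_between              : samples taken between the two exposures.
    Returns hypothetical correction strengths in [0, 1] for shake blur
    (intra-frame movement) and motion blur (inter-frame movement).
    """
    intra = float(np.mean(np.abs(np.concatenate([gyro_frame1, gyro_frame2]))))
    inter = float(np.mean(np.abs(gyro_between)))

    # Map movement magnitudes to correction strengths; the divisors are
    # placeholder scale factors.
    shake_correction = min(intra / 0.5, 1.0)
    motion_correction = min(inter / 2.0, 1.0)
    return shake_correction, motion_correction

# Example: small hand tremor during exposure, large pan between frames.
print(blur_control(np.full(10, 0.1), np.full(10, 0.1), np.full(10, 1.5)))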
Abstract:
A method and apparatus for recording a composite image from a lens array are disclosed. A first composite image is recorded using a first lens providing imaging data in a first color, a second lens providing imaging data in a second color, and a third lens providing imaging data in a third color, wherein the first, second, and third colors are different colors. A second composite image is recorded using the first lens, a fourth lens providing imaging data in the second color, and a fifth lens providing imaging data in the third color. The first and second composite images are compared to identify color fringed areas. Based on the relative size and location of the color fringed areas in the first and second composite images, a de-fringing algorithm is applied to at least one of the composite images to mitigate the color fringing in the color fringed areas.
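A toy sketch of the comparison step, assuming a red/blue-imbalance fringe detector and desaturation as the mitigation; the actual de-fringing algorithm is not specified in the abstract, and all names here are illustrative.

import numpy as np

def fringe_mask(composite, threshold=0.3):
    """Very rough fringe detector: strong red/blue imbalance.

    composite : H x W x 3 float array (R, G, B in 0..1).
    """
    rb_imbalance = np.abs(composite[..., 0] - composite[..., 2])
    return rb_imbalance > threshold

def defringe(composite_a, composite_b):
    """Mitigate color fringing using two composites from different lens
    subsets. Areas flagged as fringed in both composites are desaturated
    toward their local luminance (one possible mitigation step)."""
    mask = fringe_mask(composite_a) & fringe_mask(composite_b)
    out = composite_a.copy()
    luma = out.mean(axis=-1, keepdims=True)
    out[mask] = luma[mask]   # replace fringed pixels with gray
    return out

# Example on random data, just to show the call pattern.
a = np.random.rand(16, 16, 3)
b = np.random.rand(16, 16, 3)
print(defringe(a, b).shape)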
Abstract:
A sequence of multiple subimages is captured by an imaging sensor organized in multiple subsets of pixels. Each of the subsets of pixels is assigned to capturing a corresponding one of the subimages. For each of the subsets of pixels, a noise level of an output of the pixels of the subset is measured. Depending on the measured noise level, an exposure time for capturing the corresponding subimage is controlled.
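A small sketch of the per-subset control step, assuming noise is measured as the per-pixel standard deviation over repeated readouts and that exposure time is lengthened in proportion to the excess noise; the control law and constants are assumptions.

import numpy as np

def exposure_time_for_subset(pixel_outputs, base_time_ms=10.0,
                             noise_target=0.01, max_time_ms=80.0):
    """Pick an exposure time for one subset of pixels from its noise level.

    pixel_outputs : repeated readouts of the subset (readouts x pixels).
    """
    noise = float(np.mean(np.std(pixel_outputs, axis=0)))
    scale = max(noise / noise_target, 1.0)
    return min(base_time_ms * scale, max_time_ms)

def schedule_subimages(subsets):
    """Return one exposure time per subset for the subimage sequence."""
    return [exposure_time_for_subset(outputs) for outputs in subsets]

# Example: the noisier subset is given a longer exposure time.
quiet = np.random.normal(0.5, 0.005, size=(4, 100))
noisy = np.random.normal(0.5, 0.030, size=(4, 100))
print(schedule_subimages([quiet, noisy]))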
Abstract:
The solution disclosed herein reduces the amount of time and computational resources necessary to determine a dominant gradient direction of an image area comprising a plurality of pixels of an image. To that end, the dominant gradient direction of an image area is determined based on two gradient magnitudes determined from four sample points in the image area, where a direction of one of the gradient magnitudes is perpendicular to a direction of the other of the gradient magnitudes. The dominant gradient direction is then determined by taking the arctangent of the ratio of the two computed gradient magnitudes.
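The computation can be stated in a few lines. The sketch below assumes the four sample points are a left/right pair and an up/down pair, giving two perpendicular gradients whose arctangent (via atan2, which handles all quadrants of the ratio) yields the direction; the sample spacing and function names are assumptions.

import math

def dominant_gradient_direction(image, x, y, d=1):
    """Estimate the dominant gradient direction at (x, y) from only
    four sample points, using two perpendicular difference directions.

    image : 2-D list/array of intensity values.
    d     : sample spacing in pixels (an assumed parameter).
    Returns the direction in radians.
    """
    # Horizontal gradient from a left/right pair of samples.
    gx = image[y][x + d] - image[y][x - d]
    # Vertical gradient from an up/down pair, perpendicular to gx.
    gy = image[y + d][x] - image[y - d][x]
    # atan2 of the two gradients gives the dominant direction.
    return math.atan2(gy, gx)

# Example: a ramp that brightens toward +x gives a direction near 0 rad.
ramp = [[x * 0.1 for x in range(5)] for _ in range(5)]
print(round(dominant_gradient_direction(ramp, 2, 2), 3))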
Abstract:
The invention is directed to systems, methods and computer program products for capturing an image using an array camera. A method comprises determining an application associated with capturing an image using an array camera, wherein the array camera comprises a first sensor and at least one second sensor, wherein the first sensor comprises a red filter, a green filter, and a blue filter, and wherein each second sensor comprises a red filter, a green filter, or a blue filter; determining whether the application requires the image to have a first resolution equal to or greater than a predetermined resolution; determining whether the application requires depth information associated with the image; and in response to determining the application does not require the image to have the first resolution and does not require depth information, activating the first sensor, and capturing the image using the first sensor.
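A sketch of the selection logic in Python: only the low-resolution, no-depth branch comes from the abstract, and the fallback branch plus all names are assumptions added for completeness.

def choose_sensors(app_needs_high_resolution, app_needs_depth,
                   first_sensor, second_sensors):
    """Pick which array-camera sensors to activate for an application.

    first_sensor   : the sensor with red, green, and blue filters.
    second_sensors : single-color sensors (each red, green, or blue).
    """
    if not app_needs_high_resolution and not app_needs_depth:
        # The case claimed in the abstract: the first sensor alone
        # is activated and captures the image.
        return [first_sensor]
    # Assumed fallback: activate the full array when high resolution
    # or depth information is required.
    return [first_sensor] + list(second_sensors)

# Example: an application that needs neither high resolution nor depth
# information activates only the first sensor.
print(choose_sensors(False, False, "rgb_sensor", ["r2", "g2", "b2"]))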