Abstract:
An encoder and method of encoding are provided for encoding at least one integral image representing at least one object in perspective in a scene and including a plurality of elemental images. The method of encoding includes generating a plurality of K sub-images on the basis of the plurality of elemental images; arranging the sub-images in a predetermined pattern so as to form a multi-view image of the object, the views corresponding respectively to the sub-images; and adaptively compressing the multi-view image so formed, as a function of the motion type of the object in the scene.
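A minimal sketch of the sub-image (view) generation step, assuming the integral image is a NumPy array in which each elemental image occupies a K×K pixel block (the array layout, function name, and the value of K are illustrative assumptions, not taken from the abstract):

```python
import numpy as np

def integral_to_subimages(integral, K):
    """Rearrange an integral image into K*K sub-images (views).

    Assumes `integral` has shape (H, W) with H and W divisible by K,
    i.e. each elemental image covers a K x K pixel block. The (u, v)-th
    sub-image collects pixel (u, v) from every elemental image, giving
    one view of the multi-view arrangement.
    """
    H, W = integral.shape
    # Split into (rows of elemental images, K, cols of elemental images, K).
    blocks = integral.reshape(H // K, K, W // K, K)
    # Index sub-images by intra-block position (u, v).
    return blocks.transpose(1, 3, 0, 2)  # shape (K, K, H//K, W//K)

# Example: 8x8-pixel elemental images on a 512x512 integral image.
views = integral_to_subimages(np.zeros((512, 512)), K=8)
assert views.shape == (8, 8, 64, 64)
```

Collecting the pixel at the same intra-block offset from every elemental image is the standard way to turn an integral image into a grid of views, which is what makes the subsequent multi-view arrangement and compression possible.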
Abstract:
A plenoptic camera is proposed having a color filter array positioned on an image sensor with an array of pixels, the color filter array having a first filter with a set of unit elements, each unit element covering M×M pixels of the image sensor, with M an integer such that M ≥ 2. The plenoptic camera further includes a set of micro-lenses, each micro-lens delivering a micro-lens image on the image sensor with a diameter equal to p = k×M, with k being an integer greater than or equal to two. The first filter is remarkable in that the set of unit elements comprises an initialization unit element associated with a matrix (c_{m,n})_{0 ≤ m, n ≤ M−1} …
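As a worked example of the stated geometry (values chosen purely for illustration): with M = 2 each unit element covers 2×2 pixels, and with k = 3 each micro-lens image spans p = k×M = 6 pixels, i.e., 3×3 unit elements. A sketch of tiling such a unit element across a sensor patch, with a hypothetical 0/1 matrix standing in for the abstract's (c_{m,n}):

```python
import numpy as np

M, k = 2, 3                 # illustrative; the abstract requires M >= 2, k >= 2
p = k * M                   # micro-lens image diameter in pixels: 6
print(f"micro-lens image diameter p = {p} pixels")

# Hypothetical 2x2 unit element (c_{m,n}); 0/1 stand for two filter types.
unit = np.array([[0, 1],
                 [1, 0]])

# Tile the unit element over a small 12x12-pixel sensor patch.
sensor_h, sensor_w = 12, 12
cfa = np.tile(unit, (sensor_h // M, sensor_w // M))
assert cfa.shape == (sensor_h, sensor_w)
```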
Abstract:
Methods and apparatus for processing images captured by a camera device including multiple optical chains, e.g., camera modules, are described. Three, four, five, or more optical chains may be used. Different optical chains capture different images due to different perspectives. Multiple images, e.g., corresponding to different perspectives, are captured during a time period and are combined to generate a composite image. In some embodiments one of the captured images or a synthesized image is used as a reference image during composite image generation. The image used as the reference image is selected to keep the perspective of sequentially generated composite images consistent despite unintentional camera movement and/or in accordance with an expected path of travel. Thus, which camera module provides the reference image may vary over time, taking into consideration unintended camera movement. Composite image generation may be performed external to the camera device or in the camera device.
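A minimal sketch of one plausible reference-selection rule consistent with the abstract, assuming per-module pose estimates and a pose predicted from the expected path of travel (the function, the 3-DoF pose representation, and the distance metric are all assumptions of this sketch):

```python
import numpy as np

def pick_reference(module_poses, predicted_pose):
    """Pick the camera module whose pose best matches the predicted pose.

    A hypothetical selection rule: score each module by the distance
    between its current estimated pose (e.g., from motion-sensor data)
    and the pose expected from the path of travel, so the reference
    perspective stays consistent across successive composite frames
    despite unintended camera movement.
    """
    scores = [np.linalg.norm(np.asarray(p) - np.asarray(predicted_pose))
              for p in module_poses]
    return int(np.argmin(scores))  # index of the reference module

# Example: three modules with slightly different 3-DoF orientations.
poses = [(0.01, 0.00, 0.0), (0.00, 0.02, 0.0), (0.05, -0.01, 0.0)]
ref = pick_reference(poses, predicted_pose=(0.0, 0.0, 0.0))
```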
Abstract:
An image capture device, such as a camera, has multiple modes including a light field image capture mode, a conventional 2D image capture mode, and at least one intermediate image capture mode. By changing the position and/or properties of the microlens array (MLA) in front of the image sensor, changes in 2D spatial resolution and angular resolution can be attained. In at least one embodiment, such changes can be performed in a continuous manner, allowing a continuum of intermediate modes to be attained.
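A worked sketch of the resolution trade-off these modes span, using the standard plenoptic accounting in which N×N sensor pixels behind each microlens yield N² angular samples at the cost of an N²-fold drop in 2D spatial samples (the 12 MP sensor and the values of N are illustrative assumptions):

```python
# Illustrative spatial/angular budget for different mode settings.
sensor_px = 4000 * 3000          # 12 MP sensor (assumed)

for n in (1, 2, 4, 8):           # pixels per micro-lens side; n=1 ~ 2D mode
    spatial = sensor_px // (n * n)   # effective 2D spatial samples
    angular = n * n                  # angular samples per spatial sample
    print(f"N={n}: ~{spatial/1e6:.2f} MP spatial, {angular} angular samples")
```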
Abstract:
Imager arrays, array camera modules, and array cameras in accordance with embodiments of the invention utilize pixel apertures to control the amount of aliasing present in captured images of a scene. One embodiment includes a plurality of focal planes, control circuitry configured to control the capture of image information by the pixels within the focal planes, and sampling circuitry configured to convert pixel outputs into digital pixel data. In addition, the pixels in the plurality of focal planes include a pixel stack including a microlens and an active area, where light incident on the surface of the microlens is focused onto the active area by the microlens and the active area samples the incident light to capture image information, and the pixel stack defines a pixel area and includes a pixel aperture, where the size of the pixel aperture is smaller than the pixel area.
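To illustrate why the aperture size controls the amount of aliasing, a simplified model (an assumption of this sketch, not the abstract's circuitry) treats the pixel aperture as a box filter whose frequency response is a sinc of the aperture width:

```python
import numpy as np

pitch = 1.0                       # pixel pitch (normalized units)
f_nyquist = 0.5 / pitch           # sampling Nyquist frequency

def aperture_mtf(f, aperture):
    """|sinc| response of a box-shaped pixel aperture of the given width."""
    return np.abs(np.sinc(f * aperture))   # np.sinc(x) = sin(pi*x)/(pi*x)

f = 1.5 * f_nyquist               # a frequency above Nyquist that would alias
for aperture in (1.0, 0.5):       # full fill factor vs. reduced aperture
    print(f"aperture={aperture}: response at 1.5*Nyquist = "
          f"{aperture_mtf(f, aperture):.3f}")
# A smaller aperture passes MORE energy above Nyquist, i.e. more aliasing;
# choosing the aperture size therefore sets how much aliasing is captured.
```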
Abstract:
In an example embodiment, a method, apparatus and computer program product are provided. The method includes facilitating receipt of a plenoptic image associated with a scene, the plenoptic image including plenoptic micro-images and being captured by a focused plenoptic camera. The method includes generating plenoptic vectors for the plenoptic micro-images of the plenoptic image, where an individual plenoptic vector is generated for an individual plenoptic micro-image. The method includes assigning disparities for the plenoptic micro-images of the plenoptic image. A disparity for a plenoptic micro-image is assigned by accessing a plurality of subspaces associated with a set of pre-determined disparities, projecting a plenoptic vector for the plenoptic micro-image in the plurality of subspaces, calculating a plurality of residual errors based on projections of the plenoptic vector in the plurality of subspaces, and determining the disparity for the plenoptic micro-image based on a comparison of the plurality of residual errors.
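A minimal sketch of the disparity-assignment step, assuming each pre-determined disparity comes with an orthonormal subspace basis (how those bases are constructed is outside the abstract and this sketch):

```python
import numpy as np

def assign_disparity(v, subspaces, disparities):
    """Assign a disparity to one plenoptic micro-image.

    `v` is the micro-image's plenoptic vector; `subspaces[i]` is an
    orthonormal basis (as columns) for the subspace associated with
    the pre-determined disparity `disparities[i]`. The vector is
    projected into each subspace, the residual errors are compared,
    and the disparity with the smallest residual wins.
    """
    residuals = []
    for B in subspaces:
        proj = B @ (B.T @ v)              # projection of v onto span(B)
        residuals.append(np.linalg.norm(v - proj))
    return disparities[int(np.argmin(residuals))]

# Example with random 64-dim vectors and two 8-dim subspaces.
rng = np.random.default_rng(0)
bases = [np.linalg.qr(rng.normal(size=(64, 8)))[0] for _ in range(2)]
d = assign_disparity(rng.normal(size=64), bases, disparities=[1.0, 2.0])
```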
Abstract:
A capture system may capture light-field data representative of an environment for use in virtual reality, augmented reality, and the like. The system may have a plurality of light-field cameras arranged to capture a light-field volume within the environment, and a processor. The processor may use the light-field volume to generate a first virtual view depicting the environment from a first virtual viewpoint. The light-field cameras may be arranged in a tiled array to define a capture surface with a ring-shaped, spherical, or other arrangement. The processor may map the pixels captured by the image sensors to light rays received in the light-field volume, and store data descriptive of the light rays in a coordinate system representative of the light-field volume.
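A minimal sketch of the pixel-to-ray mapping for a ring-shaped capture surface, assuming evenly spaced, outward-facing pinhole cameras (the intrinsics, spacing, and coordinate conventions are illustrative assumptions):

```python
import numpy as np

def pixel_to_ray(cam_index, n_cams, radius, px, py, focal, cx, cy):
    """Map a pixel on one camera of a ring array to a world-space ray.

    Cameras are assumed evenly spaced on a ring of the given radius,
    each facing outward; (focal, cx, cy) are illustrative pinhole
    intrinsics. Returns (origin, direction) in ring coordinates, the
    shared coordinate system of the light-field volume.
    """
    theta = 2 * np.pi * cam_index / n_cams
    outward = np.array([np.cos(theta), np.sin(theta), 0.0])
    right = np.array([-np.sin(theta), np.cos(theta), 0.0])
    up = np.array([0.0, 0.0, 1.0])
    origin = radius * outward                       # camera center on the ring
    # Pinhole back-projection of the pixel into the camera frame,
    # then rotation into the shared ring coordinate system.
    d_cam = np.array([(px - cx) / focal, (py - cy) / focal, 1.0])
    direction = d_cam[0] * right + d_cam[1] * up + d_cam[2] * outward
    return origin, direction / np.linalg.norm(direction)

origin, direction = pixel_to_ray(3, n_cams=12, radius=0.3,
                                 px=960, py=540, focal=1000.0, cx=960, cy=540)
```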
Abstract:
Systems and methods for implementing array camera configurations that include a plurality of constituent array cameras, where each constituent array camera provides a distinct field of view and/or a distinct viewing direction, are described. In several embodiments, image data captured by the constituent array cameras is used to synthesize multiple images that are subsequently blended. In a number of embodiments, the blended images include a foveated region. In certain embodiments, the blended images possess a wider field of view than the fields of view of the multiple images.
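A minimal sketch of blending two constituent views into an image with a foveated region, assuming the narrow-field image has already been registered to the wide-field grid (the registration step and the radial feathering rule are assumptions of this sketch):

```python
import numpy as np

def blend_foveated(wide, tele, center, r_inner, r_outer):
    """Blend a wide-FOV image with a narrower high-detail image.

    `wide` and `tele` are assumed pre-registered to the same grid; the
    tele view fully replaces the wide view within `r_inner` of `center`
    and feathers out to the wide view by `r_outer`, producing a
    foveated region in the blended result.
    """
    h, w = wide.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - center[0], xx - center[1])
    alpha = np.clip((r_outer - r) / (r_outer - r_inner), 0.0, 1.0)
    return alpha[..., None] * tele + (1.0 - alpha[..., None]) * wide

out = blend_foveated(np.zeros((480, 640, 3)), np.ones((480, 640, 3)),
                     center=(240, 320), r_inner=80, r_outer=160)
```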
Abstract:
Methods and apparatus for making and using environmental measurements are described. Environmental information captured using a variety of devices is processed and combined to generate an environmental model which is communicated to customer playback devices. A UV map which is used for applying, e.g., wrapping, images onto the environmental model is also provided to the playback devices. A playback device uses the environmental model and UV map to render images which are then displayed to a viewer as part of providing a 3D viewing experience. In some embodiments an updated environmental model is generated based on more recent environmental measurements, e.g., measurements performed during the event. The updated environmental model and/or difference information for updating the existing model, optionally along with updated UV map(s), is communicated to the playback devices for use in rendering and playback of subsequently received image content. By communicating updated environmental information, improved 3D simulations are achieved.
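A minimal sketch of the UV-map step on the playback side, assuming per-vertex (u, v) coordinates in [0, 1] and nearest-neighbor lookup (real renderers interpolate; all names here are illustrative):

```python
import numpy as np

def sample_uv(texture, uv):
    """Look up per-vertex colors from a captured frame using a UV map.

    `uv` holds (u, v) coordinates in [0, 1] for each vertex of the
    environmental model; the frame `texture` is wrapped onto the model
    by sampling it at those coordinates (nearest-neighbor for brevity).
    """
    h, w = texture.shape[:2]
    px = np.clip((uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip((uv[:, 1] * (h - 1)).astype(int), 0, h - 1)
    return texture[py, px]

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
uv_map = np.array([[0.5, 0.5], [0.0, 1.0]])     # two vertices, illustrative
colors = sample_uv(frame, uv_map)
```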
Abstract:
Camera and/or lens calibration information is generated as part of a calibration process in video systems including 3-dimensional (3D) immersive content systems. The calibration information can be used to correct for distortions associated with the source camera and/or lens. A calibration profile can include information sufficient to allow the system to correct for camera and/or lens distortion/variation. This can be accomplished by capturing a calibration image of a physical 3D object corresponding to the simulated 3D environment, and creating the calibration profile by processing the calibration image. The calibration profile can then be used to project the source content directly into the 3D viewing space while also accounting for distortion/variation, and without first translating into an intermediate space (e.g., a rectilinear space) to account for lens distortion.
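A deliberately simplified sketch of building one entry of such a calibration profile: fitting a single radial distortion coefficient k1 from correspondences between points predicted from the known 3D object and points observed in the captured calibration image (the one-coefficient model and all names are assumptions, not the described system's actual profile format):

```python
import numpy as np

def fit_k1(ideal, observed):
    """Fit one radial distortion coefficient from point correspondences.

    `ideal` are normalized, undistorted image points predicted from the
    known physical 3D calibration object; `observed` are the matching
    points detected in the calibration image. Using the model
    observed = ideal * (1 + k1 * r^2), k1 follows from least squares.
    """
    r2 = np.sum(ideal ** 2, axis=1, keepdims=True)        # r^2 per point
    A = (ideal * r2).ravel()                               # design vector
    b = (observed - ideal).ravel()                         # distortion residual
    return float(A @ b / (A @ A))                          # closed-form LSQ

ideal = np.array([[0.1, 0.0], [0.3, 0.2], [-0.4, 0.1]])
observed = ideal * (1 - 0.05 * np.sum(ideal**2, axis=1, keepdims=True))
print(fit_k1(ideal, observed))   # recovers ~ -0.05
```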