Abstract:
A biometrics authentication system that has a small, simple configuration and is capable of performing both biometrics authentication and position detection is provided. The biometrics authentication system includes: a light source emitting light to an object; a microlens array section condensing light from the object; a light-sensing device obtaining light detection data of the object on the basis of the light condensed by the microlens array section; a position detection section detecting the position of the object on the basis of the light detection data obtained by the light-sensing device; and an authentication section performing, in the case where the object is a living body, authentication of the living body on the basis of the light detection data obtained by the light-sensing device.
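The abstract above describes one sensor readout feeding two consumers: position detection always runs, and authentication runs only when the object is a living body. A minimal sketch of that data flow follows; the brightest-spot position heuristic and the cosine-similarity matcher are illustrative assumptions, not the patented method, and all names are hypothetical.

```python
# Sketch of the dual-use pipeline: one light-detection readout drives both
# position detection and (conditionally) authentication. Assumed heuristics.

def detect_position(light_data):
    """Return (row, col) of the brightest condensed spot (assumed heuristic)."""
    best, pos = -1, (0, 0)
    for r, row in enumerate(light_data):
        for c, v in enumerate(row):
            if v > best:
                best, pos = v, (r, c)
    return pos

def authenticate(light_data, enrolled_template, threshold=0.9):
    """Toy matcher: cosine similarity with an enrolled template (assumption)."""
    num = sum(a * b for ra, rb in zip(light_data, enrolled_template)
              for a, b in zip(ra, rb))
    den = (sum(a * a for row in light_data for a in row) ** 0.5 *
           sum(b * b for row in enrolled_template for b in row) ** 0.5)
    return den > 0 and num / den >= threshold

def process(light_data, is_living_body, enrolled_template):
    """Position is always detected; authentication only for a living body."""
    pos = detect_position(light_data)
    ok = authenticate(light_data, enrolled_template) if is_living_body else None
    return pos, ok
```

Sharing one readout between both functions is what keeps the configuration small: no second sensor path is needed for position detection.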
Abstract:
There is provided an image processing device including a depth generation unit configured to generate, based on an image of a current frame and an image of the frame preceding it, a depth image that indicates the position of a subject in the depth direction in the image of the preceding frame, as the depth image of the current frame.
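The abstract leaves the depth-estimation method unspecified. As one illustrative reading, the sketch below estimates a per-row horizontal shift between the preceding and current frame by a toy SAD search and maps larger shifts to nearer depth; the inverse-shift mapping and all names are assumptions, not the device's actual algorithm.

```python
# Assumed sketch: per-row shift between preceding and current frame (SAD
# search), converted to a depth proxy where larger shift means closer.

def row_shift(prev_row, cur_row, max_shift=3):
    """Best horizontal shift of prev_row matching cur_row (toy SAD search).
    Shifts are tried in order of increasing magnitude so ties prefer 0."""
    best_shift, best_cost = 0, float("inf")
    n = len(cur_row)
    for s in sorted(range(-max_shift, max_shift + 1), key=abs):
        cost = count = 0
        for x in range(n):
            if 0 <= x + s < n:
                cost += abs(prev_row[x + s] - cur_row[x])
                count += 1
        cost = cost / count
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift

def depth_image(prev_frame, cur_frame):
    """Depth proxy per row: 1 / (|shift| + 1), so static rows read as far."""
    return [1.0 / (abs(row_shift(p, c)) + 1)
            for p, c in zip(prev_frame, cur_frame)]
```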
Abstract:
The present technology relates to an information processing apparatus and an image processing method that make it possible to accurately reproduce the blur degree of an optical lens. A ray generation section generates, from a real space point in a real space, rays to be incident on a virtual lens whose synthetic aperture is formed by a plurality of image pickup sections that pick up images from a plurality of viewpoints. A luminance allocation section allocates a luminance to the rays that remain after a collision decision, which decides whether or not each ray collides with an object before it is incident on the virtual lens. The present technology can be applied to a light field technology that reconstructs, for example, images picked up with various optical lenses from images of a plurality of viewpoints.
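The two steps named above, ray generation toward the synthetic aperture and luminance allocation gated by a collision decision, can be sketched as follows. Representing occluders as spheres and testing the segment from the real-space point to the aperture sample is an illustrative simplification; all names are hypothetical.

```python
# Sketch: one ray per synthetic-aperture sample position; rays that hit an
# occluder before the aperture are discarded, the rest get the point's
# luminance. Sphere occluders are an assumption for the collision test.

def generate_rays(point, aperture_positions):
    """One ray (origin, direction) per aperture sample position."""
    return [(point, tuple(a - p for a, p in zip(ap, point)))
            for ap in aperture_positions]

def collides(ray, occluders, eps=1e-9):
    """Does the segment from origin to the aperture (t in [0, 1]) enter
    any occluder sphere (center, radius)?"""
    (ox, oy, oz), (dx, dy, dz) = ray
    denom = dx * dx + dy * dy + dz * dz
    if denom < eps:
        return False
    for (cx, cy, cz), r in occluders:
        # closest point on the segment to the sphere center
        t = ((cx - ox) * dx + (cy - oy) * dy + (cz - oz) * dz) / denom
        t = max(0.0, min(1.0, t))
        px, py, pz = ox + t * dx, oy + t * dy, oz + t * dz
        if (px - cx) ** 2 + (py - cy) ** 2 + (pz - cz) ** 2 <= r * r:
            return True
    return False

def allocate_luminance(point, luminance, aperture_positions, occluders):
    """Keep only non-colliding rays and attach the point's luminance."""
    return [(ray, luminance)
            for ray in generate_rays(point, aperture_positions)
            if not collides(ray, occluders)]
```

The collision decision is what makes occlusion-correct blur possible: a ray blocked before the virtual lens contributes nothing to the reconstructed image.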
Abstract:
Provided is an encoding device including: a non-occlusion region encoding unit configured to encode, according to a first encoding scheme, the difference between an image of a neighboring viewpoint, which is a viewpoint different from a criterion viewpoint, and a predicted image of the neighboring viewpoint, for the non-occlusion region of the image of the neighboring viewpoint; and an occlusion region encoding unit configured to encode the occlusion region of the image of the neighboring viewpoint according to a second encoding scheme different from the first encoding scheme.
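The split described above can be sketched minimally: non-occlusion pixels, which have a valid prediction from the criterion viewpoint, are coded as residuals, while occlusion pixels have no such prediction and are coded directly. The concrete scheme choices (residual vs. raw intra values) are illustrative assumptions, not the device's actual first and second schemes.

```python
# Sketch: first scheme = residual against the predicted neighboring-
# viewpoint image (non-occlusion), second scheme = direct intra values
# (occlusion). Images are flat pixel lists for brevity.

def encode(neighbor_img, predicted_img, occlusion_mask):
    """Two streams, each tagged with the pixel index it reconstructs."""
    residual_stream, intra_stream = [], []
    for i, (x, p, occ) in enumerate(zip(neighbor_img, predicted_img,
                                        occlusion_mask)):
        if occ:
            intra_stream.append((i, x))           # second scheme: direct
        else:
            residual_stream.append((i, x - p))    # first scheme: difference
    return residual_stream, intra_stream

def decode(residual_stream, intra_stream, predicted_img):
    """Invert both schemes against the same predicted image."""
    out = [0] * len(predicted_img)
    for i, r in residual_stream:
        out[i] = predicted_img[i] + r
    for i, v in intra_stream:
        out[i] = v
    return out
```

Coding the two regions differently pays off because prediction residuals are near zero where prediction works, while forcing occluded pixels through the same predictor would produce large, expensive residuals.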
Abstract:
The present technology relates to a data processing apparatus, a data processing method, and a program that are capable of generating calibration data for performing appropriate image processing. The data processing apparatus performs interpolation to generate calibration data for a predetermined focus position by using calibration data for a plurality of focus positions. The calibration data for the plurality of focus positions is generated from calibration images captured at the plurality of focus positions. Each calibration image is obtained by capturing an image of a known object at one of the focus positions with a multi-lens camera that captures images from two or more viewpoints. The present technology is applicable, for example, to such a multi-lens camera.
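The interpolation step can be sketched as follows. Linear interpolation between the two nearest calibrated focus positions is an assumption; the abstract does not specify the interpolation method, and the scalar calibration value stands in for whatever per-viewpoint parameters the real apparatus stores.

```python
# Sketch: calibration data for an arbitrary focus position, linearly
# interpolated from a sorted table of calibrated focus positions
# (assumed method; values clamp at the table ends).

def interpolate_calibration(calib, focus):
    """calib: sorted list of (focus_position, calibration_value)."""
    if focus <= calib[0][0]:
        return calib[0][1]
    if focus >= calib[-1][0]:
        return calib[-1][1]
    for (f0, c0), (f1, c1) in zip(calib, calib[1:]):
        if f0 <= focus <= f1:
            w = (focus - f0) / (f1 - f0)
            return c0 + w * (c1 - c0)
```

Interpolating from a few calibrated positions avoids capturing a calibration image at every focus position the camera can take.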
Abstract:
The present technology relates to an image processing apparatus and an image processing method that make it possible to accurately reproduce the blur degree of an optical lens with a small data amount. A light condensing process condenses, on a virtual sensor and through an emulation lens of an emulation target, rays incident from a real space point in a real space on a virtual lens whose synthetic aperture is formed by a plurality of image pickup sections that pick up images from a plurality of viewpoints. The process uses lens information that defines the rays passing through the emulation lens and is generated for the real space points corresponding to a plurality of information points, which are a plurality of positions on part of the plane of the virtual sensor. The present technology can be applied to a light field technology that reconstructs, for example, images picked up with various optical lenses from images of a plurality of viewpoints.
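The small data amount comes from storing lens information only at sparse information points rather than for every sensor position. The sketch below makes that concrete in one dimension, with the lens information reduced to a small table of spread weights per information point; this representation and every name in it are illustrative assumptions, not the patented structure.

```python
# Sketch: lens information is stored only at sparse information points on
# the virtual sensor plane; each ray is condensed using the weight table
# of its nearest information point (assumed 1-D simplification).

def nearest_info_point(x, info_points):
    """Information point closest to sensor position x."""
    return min(info_points, key=lambda p: abs(p - x))

def condense(rays, info_points, lens_info, sensor_size):
    """rays: list of (sensor_x, luminance). lens_info[p]: list of
    (pixel_offset, weight) taps describing how the emulation lens
    spreads light arriving near information point p."""
    sensor = [0.0] * sensor_size
    for x, lum in rays:
        p = nearest_info_point(x, info_points)
        for off, w in lens_info[p]:
            px = x + off
            if 0 <= px < sensor_size:
                sensor[px] += w * lum
    return sensor
```

Because the lens response varies smoothly across the sensor plane, a few information points can stand in for a dense per-pixel table, which is the data-amount saving the abstract claims.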
Abstract:
The present disclosure relates to an image processing apparatus and a method capable of generating high-definition viewpoint interpolation images at high speed. A space reconstruction unit reconstructs the space in which the viewpoint images were photographed, according to each viewpoint image and each disparity (pixel shift amount) map, and supplies reconstruction data of the space to an interpolation position setting unit. The interpolation position setting unit sets an interpolation position in the reconstructed space while changing (the inclination of) a ray, and supplies interpolation target coordinates indicating the set interpolation position to a data search unit. The data search unit generates an interpolation image at an arbitrary viewpoint by sampling RGB values at the interpolation target coordinates supplied from the interpolation position setting unit, and outputs the generated interpolation image to a subsequent stage. The present disclosure is applicable, for example, to an image processing apparatus that performs image processing using multi-view images.
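A heavily simplified, one-dimensional sketch of the pipeline follows: pixels of one viewpoint are shifted along the baseline in proportion to their disparity to realize an intermediate viewpoint, then holes are filled from the nearest sampled value. Forward warping with nearest-neighbor sampling and the toy hole filling are assumptions standing in for the space reconstruction and data search units; they are not the disclosed implementation.

```python
# Sketch: interpolate a viewpoint between two camera positions by
# forward-warping one image along its disparity map (assumed 1-D model).

def interpolate_viewpoint(left_img, disparity, alpha):
    """alpha in [0, 1]: 0 = left viewpoint, 1 = right viewpoint.
    Each left pixel moves by alpha * disparity (nearest sample)."""
    n = len(left_img)
    out = [None] * n
    for x, (v, d) in enumerate(zip(left_img, disparity)):
        tx = int(round(x + alpha * d))
        if 0 <= tx < n:
            out[tx] = v
    # fill holes from the nearest filled neighbor (toy inpainting)
    last = left_img[0]
    for x in range(n):
        if out[x] is None:
            out[x] = last
        else:
            last = out[x]
    return out
```

Sampling directly at interpolation target coordinates, as the abstract describes, is what makes the method fast: no full 3-D model is built, only the coordinates needed for the output image are evaluated.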