Abstract:
The systems and methods described can include approaches to calibrate head-mounted displays for improved viewing experiences. Some methods include receiving data of a first target image associated with an undeformed state of a first eyepiece of a head-mounted display device; receiving data of a first captured image associated with a deformed state of the first eyepiece of the head-mounted display device; determining a first transformation that maps the first captured image to the first target image; and applying the first transformation to a subsequent image for viewing on the first eyepiece of the head-mounted display device.
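The calibration described above can be sketched in a few lines. The sketch below is illustrative only: it assumes the first transformation is estimated from matched control points between the target (undeformed) and captured (deformed) images, and it models the transformation as a pure translation for brevity, whereas a real calibration would fit a richer warp (affine, homography, or polynomial). The helper names and sample coordinates are hypothetical.

```python
# Sketch: estimate a corrective transformation from matched control points,
# then apply it to points of a subsequent image (a pure-translation model
# is assumed here for brevity).

def estimate_translation(target_pts, captured_pts):
    """Mean displacement that maps captured points onto target points."""
    n = len(target_pts)
    dx = sum(t[0] - c[0] for t, c in zip(target_pts, captured_pts)) / n
    dy = sum(t[1] - c[1] for t, c in zip(target_pts, captured_pts)) / n
    return dx, dy

def apply_transformation(points, transform):
    """Shift every point by the estimated corrective translation."""
    dx, dy = transform
    return [(x + dx, y + dy) for x, y in points]

# Illustrative control points: eyepiece deformation shifted the image.
target = [(10.0, 10.0), (50.0, 10.0), (10.0, 50.0)]
captured = [(12.0, 13.0), (52.0, 13.0), (12.0, 53.0)]

T = estimate_translation(target, captured)
corrected = apply_transformation(captured, T)  # maps captured back onto target
```

Applying `T` to points of a subsequent image pre-compensates the deformation in the same way.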
Abstract:
A method and apparatus for performing two-dimensional video alignment onto three-dimensional point clouds. The system recovers camera pose from camera video, determines a depth map, converts the depth map to a Euclidean video point cloud, and registers two-dimensional video to the three-dimensional point clouds.
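One step of the pipeline above, converting a depth map to a Euclidean point cloud, can be sketched with standard pinhole back-projection. The intrinsics `fx, fy, cx, cy` and the sample depth values are illustrative assumptions; the pose-recovery and 2D-to-3D registration steps are not shown.

```python
# Sketch: back-project a depth map into a Euclidean point cloud using the
# pinhole camera model: X = (u - cx) * d / fx, Y = (v - cy) * d / fy, Z = d.

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """depth: 2D list indexed [row][col]; returns a list of (X, Y, Z) points."""
    cloud = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d > 0:  # skip invalid (zero-depth) pixels
                cloud.append(((u - cx) * d / fx, (v - cy) * d / fy, d))
    return cloud

# Tiny illustrative depth map (metres); zeros mark missing measurements.
depth = [[0.0, 2.0],
         [4.0, 0.0]]
cloud = depth_to_point_cloud(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Registering the 2D video then amounts to aligning such per-frame clouds with the reference three-dimensional point cloud under the recovered camera pose.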
Abstract:
A method and apparatus for performing iris recognition from at least one image are disclosed. A plurality of cameras is used to capture a plurality of images where at least one of the images contains a region having at least a portion of an iris. At least one of the plurality of images is then processed to perform iris recognition.
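The selection step, choosing which of the captured images to process, can be sketched as below. The quality score (fraction of pixels inside an assumed intensity band typical of iris texture) is purely illustrative; practical systems use segmentation, focus, and occlusion measures. All names and pixel values are hypothetical.

```python
# Sketch: score images from several cameras and pick the best candidate
# before running iris matching. The intensity-band score is illustrative.

def iris_quality(image, lo=60, hi=180):
    """Fraction of pixels whose intensity falls in the assumed iris band."""
    pixels = [p for row in image for p in row]
    return sum(lo <= p <= hi for p in pixels) / len(pixels)

def select_best_image(images):
    """Return the image most likely to contain usable iris texture."""
    return max(images, key=iris_quality)

cam_a = [[30, 240], [250, 20]]   # mostly glare and shadow
cam_b = [[100, 120], [90, 150]]  # plausible iris texture
best = select_best_image([cam_a, cam_b])
```

The selected image would then be passed to the actual iris-recognition stage.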
Abstract:
An apparatus and method for classifying input data are disclosed, in which differences between general characteristics of the input item and the training data are reduced by manipulating general characteristics of an original subspace defined by the training data, projecting the input item into the manipulated subspace before classification, and determining the projection coefficients used for that projection. The input item is classified by mapping its projection coefficients and the projection coefficients of the training data into a classification space. The input item and training data may correspond to images, sounds, colors, or other data of varying dimension; manipulation is performed by comparing one or more general characteristics of the original subspace, including rotational orientation, translational orientation, scale, and illumination.
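The projection-coefficient classification can be sketched as follows. This is a minimal sketch under strong assumptions: a single orthonormal basis vector stands in for the trained subspace, the "manipulation" step is reduced to mean-centering, and classification is nearest-neighbor over one coefficient. The labels and vectors are hypothetical.

```python
# Sketch: classify an input item by comparing its projection coefficient
# in a low-dimensional subspace with the coefficients of the training data.

def project(item, mean, basis):
    """Projection coefficient of a mean-centred item onto one basis vector."""
    centred = [x - m for x, m in zip(item, mean)]
    return sum(c * b for c, b in zip(centred, basis))

mean = [2.0, 2.0]
basis = [1.0, 0.0]  # assumed unit-norm subspace direction

training = {"class_a": [1.0, 2.0], "class_b": [4.0, 2.0]}
train_coeffs = {label: project(x, mean, basis) for label, x in training.items()}

def classify(item):
    """Nearest training coefficient in the (here, 1-D) classification space."""
    c = project(item, mean, basis)
    return min(train_coeffs, key=lambda lbl: abs(train_coeffs[lbl] - c))

label = classify([3.5, 2.0])
```

With richer data, the same structure extends to several basis vectors and a multi-dimensional classification space.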