Abstract:
The present invention includes a foveated wide-angle imaging system and method for capturing a wide-angle image and for viewing the captured wide-angle image in real time. In general, the foveated wide-angle imaging system includes a foveated wide-angle camera system having multiple cameras for capturing a scene and outputting raw output images, a foveated wide-angle stitching system for generating a stitch table, and a real-time wide-angle image correction system that creates a composed warp table from the stitch table and processes the raw output images using the composed warp table to correct distortion and perception problems. The foveated wide-angle imaging method includes using a foveated wide-angle camera system to capture a plurality of raw output images, generating a composed warp table, and processing the plurality of raw output images using the composed warp table to generate a corrected wide-angle image for viewing.
Abstract:
The present invention includes a real-time wide-angle image correction system and a method for alleviating distortion and perception problems in images captured by wide-angle cameras. In general, the real-time wide-angle image correction method generates a warp table from the pixel coordinates of a wide-angle image and applies the warp table to the wide-angle image to create a corrected wide-angle image. The corrections are performed using a parametric class of warping functions that includes Spatially Varying Uniform (SVU) scaling functions. The SVU scaling functions and scaling factors are used to perform vertical and horizontal scaling on the wide-angle image pixel coordinates. A horizontal distortion correction is performed using the SVU scaling functions and at least two different scaling factors. This processing generates a warp table that can be applied to the wide-angle image to yield the corrected wide-angle image.
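The warp-table mechanism described above can be illustrated with a minimal sketch. This is not the patented implementation: the table layout (one source coordinate per output pixel) and the nearest-neighbor sampling are assumptions chosen for brevity.

```python
import numpy as np

def apply_warp_table(image, warp_table):
    """Remap each output pixel from the source coordinate stored in the
    warp table (nearest-neighbor sampling for simplicity)."""
    # warp_table[y, x] holds the (src_y, src_x) coordinate to sample.
    ys = np.clip(warp_table[..., 0].round().astype(int), 0, image.shape[0] - 1)
    xs = np.clip(warp_table[..., 1].round().astype(int), 0, image.shape[1] - 1)
    return image[ys, xs]

# Tiny demo: an identity warp table leaves the image unchanged; a real
# table would store SVU-scaled coordinates instead.
img = np.arange(12, dtype=np.uint8).reshape(3, 4)
ident = np.stack(np.meshgrid(np.arange(3), np.arange(4), indexing="ij"), axis=-1)
out = apply_warp_table(img, ident)
```

Precomputing the table once and reapplying it per frame is what makes the correction feasible in real time: the per-frame cost reduces to a single lookup pass.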
Abstract:
The system provides improved procedures to estimate head motion between two images of a face. Locations of a number of distinct facial features are identified in the two images. The identified locations can correspond to the eye corners, mouth corners, and nose tip. The locations are converted into a set of physical face parameters based on the symmetry of the identified distinct facial features. The set of physical parameters reduces the number of unknowns as compared to the number of equations used to determine the unknowns. An initial head motion estimate is determined by: (a) estimating each of the set of physical parameters, (b) estimating a first head pose transform corresponding to the first image, and (c) estimating a second head pose transform corresponding to the second image. The head motion estimate can be incorporated into a feature matching algorithm to refine the head motion estimate and the physical facial parameters. In one implementation, an inequality constraint is placed on a particular physical parameter, such as the nose tip, so that the parameter is constrained within a predetermined minimum and maximum value. The inequality constraint is converted to an equality constraint by using a penalty function. The inequality constraint is then used during the initial head motion estimation to add robustness to the motion estimation.
Abstract:
Described herein is a technique for creating a 3D face model using images obtained from an inexpensive camera associated with a general-purpose computer. Two still images of the user are captured, along with two video sequences. The user is asked to identify five facial features, which are used to calculate a mask and to perform fitting operations. Based on a comparison of the still images, deformation vectors are applied to a neutral face model to create the 3D model. The video sequences are used to create a texture map. The process of creating the texture map references the previously obtained 3D model to determine the poses of the sequential video images.
Abstract:
Systems and methods to estimate head motion between two images of a face are described. In one aspect, locations of a plurality of distinct facial features in the two images are identified. The locations correspond to a number of unknowns that are determined upon estimation of head motion. The number of unknowns is determined by a number of equations. The identified locations are converted into a set of physical face parameters based on the symmetry of the distinct facial features. The set of physical face parameters reduces the number of unknowns as compared to the number of equations used to determine the unknowns. An inequality constraint is added to a particular face parameter of the physical face parameters, such that the particular face parameter is constrained within a predetermined minimum and maximum value. The inequality constraint is converted to an equality constraint using a penalty function. Head motion is estimated from identified points in the two images. The identified points are based on the set of physical face parameters.
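A quadratic penalty is one common way to fold a bound constraint of this kind into an unconstrained objective. The patent does not specify the exact penalty function, so the sketch below is an illustrative assumption: the penalty vanishes while the parameter stays inside its bounds and grows smoothly outside them, steering the optimizer back into the feasible range.

```python
def bound_penalty(value, lo, hi, weight=1.0):
    """Quadratic penalty that is zero inside [lo, hi] and grows smoothly
    outside, so a min/max constraint can be added as an ordinary term of
    a least-squares objective."""
    if value < lo:
        return weight * (lo - value) ** 2
    if value > hi:
        return weight * (value - hi) ** 2
    return 0.0

# E.g. constrain a nose-tip depth parameter to [0.4, 0.8] (illustrative
# units): inside the range the term contributes nothing; outside it adds
# a residual proportional to the squared violation.
inside = bound_penalty(0.5, 0.4, 0.8)
outside = bound_penalty(0.9, 0.4, 0.8)
```

Because the penalty is differentiable at the boundary, the augmented objective can still be minimized with the same gradient-based solver used for the unconstrained motion estimate.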
Abstract:
The present invention combines a conventional audio microphone with an additional speech sensor that provides a speech sensor signal based on an input. The speech sensor signal is generated based on an action undertaken by a speaker during speech, such as facial movement, bone vibration, throat vibration, throat impedance changes, etc. A speech detector component receives an input from the speech sensor and outputs a speech detection signal indicative of whether a user is speaking. The speech detector generates the speech detection signal based on the microphone signal and the speech sensor signal.
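A minimal sketch of the fusion idea follows. The per-frame energy measure, the thresholds, and the AND-combination rule are illustrative assumptions, not the patented detector; the point is that requiring agreement between the microphone and the body-coupled sensor suppresses background noise that reaches the microphone alone.

```python
def detect_speech(mic_frame, sensor_frame, mic_thresh=0.01, sensor_thresh=0.005):
    """Declare speech only when both the microphone and the auxiliary
    speech sensor (e.g. a bone-conduction or throat pickup) show frame
    energy above their respective thresholds."""
    mic_energy = sum(s * s for s in mic_frame) / len(mic_frame)
    sensor_energy = sum(s * s for s in sensor_frame) / len(sensor_frame)
    return mic_energy > mic_thresh and sensor_energy > sensor_thresh

# Loud microphone but silent sensor (e.g. a nearby talker): no speech.
# Both channels active (the user is speaking): speech detected.
ambient_only = detect_speech([0.5] * 160, [0.0] * 160)
user_speaking = detect_speech([0.5] * 160, [0.3] * 160)
```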
Abstract:
A method for constructing an avatar of a human subject includes acquiring a depth map of the subject, obtaining a virtual skeleton of the subject based on the depth map, and harvesting from the virtual skeleton a set of characteristic metrics. Such metrics correspond to distances between predetermined points of the virtual skeleton. In this example method, the characteristic metrics are provided as input to an algorithm trained using machine learning. The algorithm may be trained using a human model in a range of poses, and a range of human models in a single pose, to output a virtual body mesh as a function of the characteristic metrics. The method also includes constructing a virtual head mesh distinct from the virtual body mesh, with facial features resembling those of the subject, and connecting the virtual body mesh to the virtual head mesh.
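The characteristic-metric step can be sketched as below. The joint names, coordinates, and the particular point pairs are illustrative assumptions rather than the patented feature set; in practice the distances would be harvested from the virtual skeleton fitted to the depth map and then fed to the trained regressor.

```python
import math

# Illustrative virtual-skeleton joint positions (meters).
skeleton = {
    "head":       (0.00, 1.70, 0.0),
    "shoulder_l": (-0.20, 1.45, 0.0),
    "shoulder_r": (0.20, 1.45, 0.0),
    "hip_l":      (-0.12, 0.95, 0.0),
    "hip_r":      (0.12, 0.95, 0.0),
}

def metric(a, b):
    """Euclidean distance between two named skeleton points."""
    return math.dist(skeleton[a], skeleton[b])

# Characteristic metrics passed to the machine-learned mesh model
# (the choice of pairs here is an assumption).
metrics = [
    metric("shoulder_l", "shoulder_r"),  # shoulder width
    metric("hip_l", "hip_r"),            # hip width
    metric("head", "hip_l"),             # torso length proxy
]
```

Reducing the skeleton to a fixed-length vector of distances is what lets a single trained model generalize across subjects: the regressor sees the same feature layout regardless of pose or camera placement.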
Abstract:
The subject disclosure is directed towards an immersive conference, in which participants in separate locations are brought together into a common virtual environment (scene), such that they appear to each other to be in a common space, with geometry, appearance, and real-time natural interaction (e.g., gestures) preserved. In one aspect, depth data and video data are processed to place remote participants in the common scene from the first person point of view of a local participant. Sound data may be spatially controlled, and parallax computed to provide a realistic experience. The scene may be augmented with various data, videos and other effects/animations.
Abstract:
Multi-device capture and spatial browsing of conferences is described. In one implementation, a system detects cameras and microphones, such as the webcams on participants' notebook computers, in a conference room, group meeting, or table game, and enlists an ad-hoc array of the available devices to capture each participant and the spatial relationships between participants. A video stream composited from the array is browsable by a user to navigate a 3-dimensional representation of the meeting. Each participant may be represented by a video pane, a foreground object, or a 3-D geometric model of the participant's face or body, displayed in spatial relation to the other participants in a 3-dimensional arrangement analogous to the spatial arrangement of the meeting. The system may automatically re-orient the 3-dimensional representation as needed to best show the currently interesting event, such as the current speaker, or may extend navigation controls to the user for manually viewing selected participants or nuanced interactions between participants.