Abstract:
Multi-modal, multi-lingual devices can be employed to consolidate numerous items including, but not limited to, keys, remote controls, image capture devices, audio recorders, cellular telephone functionalities, location/direction detectors, health monitors, calendars, gaming devices, smart home inputs, pens, optical pointing devices, or the like. For example, a corner of a cellular telephone can be used as an electronic pen. Moreover, the device can be used to snap multiple pictures and stitch them together to create a panoramic image. A device can automate ignition of an automobile, activate appliances, etc., based upon relative distance. The device can provide near-to-eye capabilities for enhanced image viewing. Multiple cameras/sensors can be provided on a single device for stereoscopic capabilities. The device can also provide assistance to the blind, privacy protection, etc., by consolidating services.
Abstract:
A real-time approximately 360 degree image correction system and a method for alleviating distortion and perception problems in images captured by omni-directional cameras. In general, the real-time panoramic image correction method generates a warp table from pixel coordinates of a panoramic image and applies the warp table to the panoramic image to create a corrected panoramic image. The corrections are performed using a parametric class of warping functions that includes Spatially Varying Uniform (SVU) scaling functions. The SVU scaling functions and scaling factors are used to perform vertical scaling and horizontal scaling on the panoramic image pixel coordinates. A horizontal distortion correction is performed using the SVU scaling functions with at least two different scaling factors. This processing generates a warp table that can be applied to the panoramic image to yield the corrected panoramic image. In one embodiment, the warp table is concatenated with a stitching table used to create the panoramic image.
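The warp-table pipeline lends itself to a short illustration. The sketch below builds a lookup table with a hypothetical spatially varying vertical scale (a cosine profile standing in for the patented SVU family) and applies it by nearest-neighbor resampling; the function names, the scaling profile, and the parameters are assumptions for illustration, not the patented functions.

```python
import numpy as np

def build_warp_table(height, width, strength=0.2):
    """Map each output pixel (y, x) to a source pixel (y', x')."""
    ys, xs = np.mgrid[0:height, 0:width].astype(np.float64)
    # Hypothetical spatially varying vertical scale: stronger near the
    # horizontal center of the panorama, weaker at the edges.
    scale = 1.0 + strength * np.cos(2.0 * np.pi * (xs / width - 0.5))
    cy = height / 2.0
    src_y = cy + (ys - cy) / scale   # vertical scaling about the center line
    src_x = xs                       # horizontal scaling omitted in this sketch
    return np.stack([src_y, src_x], axis=-1)

def apply_warp_table(image, table):
    """Nearest-neighbor resampling of image through the warp table."""
    h, w = image.shape[:2]
    src_y = np.clip(np.round(table[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(table[..., 1]).astype(int), 0, w - 1)
    return image[src_y, src_x]

panorama = np.random.rand(240, 960, 3)   # stand-in panoramic frame
corrected = apply_warp_table(panorama, build_warp_table(240, 960))
```

Concatenating the warp table with a stitching table, as the abstract mentions, amounts to composing the two lookups so the final image is resampled only once.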
Abstract:
Described herein is a technique for creating a 3D face model using images obtained from an inexpensive camera associated with a general-purpose computer. Two still images of the user are captured, along with two video sequences. The user is asked to identify five facial features, which are used to calculate a mask and to perform fitting operations. Based on a comparison of the still images, deformation vectors are applied to a neutral face model to create the 3D model. The video sequences are used to create a texture map. The process of creating the texture map references the previously obtained 3D model to determine the poses of the sequential video images.
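As a rough sketch of the fitting step, the deformed model can be expressed as the neutral face mesh plus a weighted sum of deformation vectors. The vertex count, the deformation basis, and the coefficients below are illustrative placeholders, not the technique's actual data.

```python
import numpy as np

num_vertices = 500
neutral = np.random.rand(num_vertices, 3)           # neutral face mesh (x, y, z)
basis = np.random.rand(4, num_vertices, 3) * 0.01   # per-vertex deformation vectors
coeffs = np.array([0.8, -0.3, 0.5, 0.1])            # weights fit from the still images

# Deformed 3D model: neutral geometry plus weighted deformation vectors.
fitted = neutral + np.tensordot(coeffs, basis, axes=1)   # shape (500, 3)
```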
Abstract:
A method and apparatus determine a likelihood of a speech state based on an alternative sensor signal and an air conduction microphone signal. The likelihood of the speech state is used, together with the alternative sensor signal and the air conduction microphone signal, to estimate a clean speech value for a clean speech signal.
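A minimal sketch of the idea follows, assuming an energy-based likelihood and a simple linear blend; the patent's actual estimator is statistical, so every function, threshold, and frame size here is an illustrative assumption.

```python
import numpy as np

def frame_energy(signal, frame_len=256):
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    return (frames ** 2).mean(axis=1)

def speech_likelihood(alt_signal, frame_len=256):
    # Hypothetical likelihood: logistic squashing of per-frame energy of
    # the alternative sensor signal into a (0, 1) speech-state probability.
    e = frame_energy(alt_signal, frame_len)
    return 1.0 / (1.0 + np.exp(-(e - e.mean()) / (e.std() + 1e-9)))

def estimate_clean_speech(air, alt, frame_len=256):
    p = np.repeat(speech_likelihood(alt, frame_len), frame_len)
    n = min(len(p), len(air), len(alt))
    # When speech is likely, lean on the noise-robust alternative sensor.
    return p[:n] * alt[:n] + (1.0 - p[:n]) * air[:n]

air = np.random.randn(4096)   # stand-in air-conduction microphone signal
alt = np.random.randn(4096)   # stand-in alternative (e.g., bone-conduction) signal
clean = estimate_clean_speech(air, alt)
```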
Abstract:
The system provides improved procedures to estimate head motion between two images of a face. Locations of a number of distinct facial features are identified in the two images. The identified locations can correspond to the eye corners, mouth corners, and nose tip. The locations are converted into a set of physical face parameters based on the symmetry of the identified distinct facial features. The set of physical parameters reduces the number of unknowns as compared to the number of equations used to determine the unknowns. An initial head motion estimate is determined by: (a) estimating each of the set of physical parameters, (b) estimating a first head pose transform corresponding to the first image, and (c) estimating a second head pose transform corresponding to the second image. The head motion estimate can be incorporated into a feature matching algorithm to refine the head motion estimate and the physical facial parameters. In one implementation, an inequality constraint is placed on a particular physical parameter, such as the nose tip, such that the parameter is constrained within a predetermined minimum and maximum value. The inequality constraint is converted to an equality constraint by using a penalty function. Then, the constraint is used during the initial head motion estimation to add robustness to the motion estimation.
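The penalty-function step can be sketched concretely: the bound on a parameter such as the nose tip is folded into the estimation objective as a quadratic penalty that is zero inside the allowed range and grows outside it. The objective, parameter indexing, and penalty weight below are illustrative assumptions.

```python
import numpy as np

def penalty(p, min_v, max_v, weight=100.0):
    # Zero inside [min_v, max_v]; grows quadratically outside the range.
    below = np.minimum(p - min_v, 0.0)
    above = np.maximum(p - max_v, 0.0)
    return weight * (below ** 2 + above ** 2)

def augmented_objective(params, residuals_fn, nose_idx, min_v, max_v):
    """Feature-matching cost plus a soft version of the inequality constraint."""
    r = residuals_fn(params)          # placeholder feature-matching residuals
    return np.sum(r ** 2) + penalty(params[nose_idx], min_v, max_v)
```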
Abstract:
Systems and methods to estimate head motion between two images of a face are described. In one aspect, locations of a plurality of distinct facial features in the two images are identified. The locations correspond to a number of unknowns that are determined upon estimation of head motion. The number of unknowns is determined by a number of equations. The identified locations are converted into a set of physical face parameters based on the symmetry of the distinct facial features. The set of physical face parameters reduces the number of unknowns as compared to the number of equations used to determine the unknowns. An inequality constraint is added to a particular face parameter of the physical face parameters, such that the particular face parameter is constrained within a predetermined minimum and maximum value. The inequality constraint is converted to an equality constraint using a penalty function. Head motion is estimated from identified points in the two images. The identified points are based on the set of physical face parameters.
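The unknown-reduction from symmetry can be illustrated simply: a mirrored feature pair, such as the two outer eye corners, is generated from a single 3D point reflected across the face's symmetry plane, so the pair contributes three unknowns rather than six. The feature names and values below are hypothetical.

```python
import numpy as np

def symmetric_pair(point):
    """One parameter point (x, y, z) yields a mirrored left/right feature pair."""
    x, y, z = point
    return np.array([x, y, z]), np.array([-x, y, z])

# Three unknowns describe both outer eye corners instead of six.
right_eye_corner, left_eye_corner = symmetric_pair((0.45, 0.30, 0.10))
```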
Abstract:
The subject disclosure is directed towards an immersive conference, in which participants in separate locations are brought together into a common virtual environment (scene), such that they appear to each other to be in a common space, with geometry, appearance, and real-time natural interaction (e.g., gestures) preserved. In one aspect, depth data and video data are processed to place remote participants in the common scene from the first person point of view of a local participant. Sound data may be spatially controlled, and parallax computed to provide a realistic experience. The scene may be augmented with various data, videos and other effects/animations.
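As an illustrative sketch of the parallax idea (the abstract does not specify the rendering pipeline), a remote participant's on-screen position can shift opposite to the local viewer's head offset and shrink with the participant's depth, so nearer content appears to move more. The scale factor and geometry are assumptions.

```python
def parallax_offset(head_offset_m, depth_m, screen_scale=800.0):
    """Horizontal on-screen shift (pixels) for a scene element at depth_m."""
    return -screen_scale * head_offset_m / depth_m

print(parallax_offset(0.05, 1.5))   # near participant: larger shift
print(parallax_offset(0.05, 4.0))   # far participant: smaller shift
```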
Abstract:
Multi-device capture and spatial browsing of conferences is described. In one implementation, a system detects cameras and microphones, such as the webcams on participants' notebook computers, in a conference room, group meeting, or table game, and enlists an ad-hoc array of the available devices to capture each participant and the spatial relationships between participants. A video stream composited from the array is browsable by a user to navigate a 3-dimensional representation of the meeting. Each participant may be represented by a video pane, a foreground object, or a 3-D geometric model of the participant's face or body, displayed in spatial relation to the other participants in a 3-dimensional arrangement analogous to the spatial arrangement of the meeting. The system may automatically re-orient the 3-dimensional representation as needed to best show the currently interesting event, such as the current speaker, or may extend navigation controls to a user for manually viewing selected participants or nuanced interactions between participants.
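A toy sketch of the automatic re-orientation step follows, assuming participants are laid out on a circle and the scene is rotated so the current speaker faces the virtual camera; the seating model and camera angle are illustrative, not the system's actual scene graph.

```python
import math

def seat_angles(num_participants):
    """Evenly spaced seats around a circle, one angle per participant."""
    return [2.0 * math.pi * i / num_participants for i in range(num_participants)]

def reorient(angles, speaker_idx, camera_angle=-math.pi / 2):
    """Rotate all seats so the speaker lands at the camera-facing angle."""
    delta = camera_angle - angles[speaker_idx]
    return [(a + delta) % (2.0 * math.pi) for a in angles]

angles = seat_angles(5)
view = reorient(angles, speaker_idx=2)   # speaker 2 now faces the viewer
```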
Abstract:
Head pose tracking technique embodiments are presented that use a group of sensors configured so as to be disposed on a user's head. This group of sensors includes a depth sensor apparatus used to identify the three-dimensional locations of features within a scene, and at least one other type of sensor. Data output by each sensor in the group is input periodically. Each time the data is input, it is used to compute a transformation matrix that, when applied to the previously determined head pose location and orientation (established when the first sensor data was input), identifies the current head pose location and orientation. This transformation matrix is then applied to the previously determined head pose location and orientation to identify the current head pose location and orientation.
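The pose update reduces to composing rigid transforms. The sketch below assumes 4x4 homogeneous matrices and an illustrative per-cycle transform (a small yaw plus translation); the sensor fusion that produces the matrix is not reproduced here.

```python
import numpy as np

def yaw_translation(theta, tx, ty, tz):
    """4x4 rigid transform: rotation by theta about the y axis, then translation."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:3, :3] = [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]
    T[:3, 3] = [tx, ty, tz]
    return T

pose = np.eye(4)                                        # initial head pose
T = yaw_translation(np.radians(2.0), 0.01, 0.0, 0.0)    # from this cycle's sensor data
pose = T @ pose                                         # current head pose
```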