Abstract:
Techniques for facial expression capture for character animation are described. In one or more implementations, facial key points are identified in a series of images. Each image in the series is normalized using the identified facial key points. Facial features are determined from each of the normalized images, and a facial expression is then classified for each normalized image based on the determined features. In additional implementations, a series of images is captured that includes performances of one or more facial expressions. The facial expression in each image of the series is classified by a facial expression classifier. A character animator system then uses the facial expression classifications to produce a series of animated images of an animated character, each including an animated facial expression associated with the facial expression classification of the corresponding captured image.
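The normalize-then-classify pipeline above can be illustrated with a minimal sketch. This is not the patented implementation: the keypoint layout, the interocular-distance normalization, and the mouth-corner rule are all illustrative assumptions standing in for the unspecified feature extraction and classifier.

```python
import math

def normalize_keypoints(points, left_eye, right_eye):
    """Translate key points so the eye midpoint is at the origin and
    scale by interocular distance, making later features invariant to
    face position and size in the image (assumed normalization scheme)."""
    cx = (left_eye[0] + right_eye[0]) / 2
    cy = (left_eye[1] + right_eye[1]) / 2
    iod = math.dist(left_eye, right_eye)
    return [((x - cx) / iod, (y - cy) / iod) for x, y in points]

def classify_expression(norm_points, mouth_corner_idx, mouth_center_idx):
    """Toy stand-in for the facial expression classifier: if the mouth
    corner sits above the mouth center (y grows downward in image
    coordinates), label the frame a smile."""
    corner_y = norm_points[mouth_corner_idx][1]
    center_y = norm_points[mouth_center_idx][1]
    return "smile" if corner_y < center_y else "neutral"
```

In a real system each frame's classification label would then drive the character animator, which swaps in the matching pre-drawn expression for the animated character.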
Abstract:
Techniques are provided for incrementally aligning multiple scans of a three-dimensional subject. This can be accomplished by establishing an updated aligned set of scans as each new scan is sequentially processed and aligned with the existing scans. In such embodiments, the pairwise and global alignment processes are effectively combined into a single collective alignment process, which converges to an optimal alignment faster than the sequential pairwise alignment process that existing solutions use. The collective alignment enforces pairwise alignment between the individual scans in the aligned set of scans: because the pairwise alignment between the scans comprising the aligned set is a known function, aligning any scan in the aligned set to the next incremental scan effectively aligns every scan in the set to that incremental scan.
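A much-simplified sketch of the incremental idea, assuming 2D point scans and substituting centroid matching for a full pairwise registration step such as ICP (the actual registration method is not specified by the abstract):

```python
def align_translation(reference_pts, scan_pts):
    """Estimate the translation aligning scan_pts to reference_pts by
    matching centroids -- a toy stand-in for pairwise registration."""
    rx = sum(p[0] for p in reference_pts) / len(reference_pts)
    ry = sum(p[1] for p in reference_pts) / len(reference_pts)
    sx = sum(p[0] for p in scan_pts) / len(scan_pts)
    sy = sum(p[1] for p in scan_pts) / len(scan_pts)
    return (rx - sx, ry - sy)

def incremental_alignment(scans):
    """Collective alignment: each incoming scan is registered against
    the union of all previously aligned scans, so consistency among
    the earlier scans carries over to each new scan automatically."""
    aligned = list(scans[0])
    for scan in scans[1:]:
        dx, dy = align_translation(aligned, scan)
        aligned.extend((x + dx, y + dy) for x, y in scan)
    return aligned
```

The key property mirrored here is that the new scan is aligned to the whole aligned set at once, rather than to a single neighbor followed by a separate global relaxation pass.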
Abstract:
One embodiment involves receiving a fine mesh as input, the fine mesh representing a 3-Dimensional (3D) model and comprising fine mesh polygons. The embodiment further involves identifying, based on the fine mesh, near-planar regions represented by a coarse mesh of coarse mesh polygons, at least one of the near-planar regions corresponding to a plurality of the coarse mesh polygons. The embodiment further involves determining a deformation to deform the coarse mesh based on comparing normals between adjacent coarse mesh polygons. The deformation may involve reducing a first angle between coarse mesh polygons adjacent to one another in a same near-planar region. The deformation may additionally or alternatively involve increasing a second angle between coarse mesh polygons adjacent to one another in different near-planar regions. The fine mesh can then be deformed using the determined deformation.
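The normal-comparison step can be sketched as follows. The angle computation is standard; the fixed scaling `step` and the binary same-region flag are illustrative assumptions, since the abstract does not specify how the deformation magnitude is chosen.

```python
import math

def normal_angle(n1, n2):
    """Angle in radians between two unit face normals of adjacent
    coarse mesh polygons."""
    dot = sum(a * b for a, b in zip(n1, n2))
    return math.acos(max(-1.0, min(1.0, dot)))  # clamp for float safety

def deformation_target(n1, n2, same_region, step=0.5):
    """Pick a target angle for an adjacent polygon pair: flatten pairs
    inside one near-planar region (reduce the angle), sharpen pairs
    that straddle two regions (increase the angle)."""
    angle = normal_angle(n1, n2)
    if same_region:
        return angle * (1.0 - step)   # move toward coplanar
    return angle * (1.0 + step)       # accentuate the crease
```

Driving polygons within a region toward coplanarity while exaggerating the creases between regions is what lets the coarse-mesh deformation be transferred back to the fine mesh without washing out its segmentation into near-planar pieces.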
Abstract:
Systems and methods are disclosed herein for 3-Dimensional portrait reconstruction from a single photo. A face portion of a person depicted in a portrait photo is detected, and a 3-Dimensional model of the person depicted in the portrait photo is constructed. In one embodiment, constructing the 3-Dimensional model involves fitting hair portions of the portrait photo to one or more helices. In another embodiment, constructing the 3-Dimensional model involves applying positional and normal boundary conditions determined based on one or more relationships between face portion shape and hair portion shape. In yet another embodiment, constructing the 3-Dimensional model involves using shape from shading to capture fine-scale details in the form of surface normals, the shape from shading based on an adaptive albedo model and/or a lighting condition estimated based on shape fitting the face portion.
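To make the helix-fitting idea concrete, here is a minimal sketch that recovers the radius and pitch of a hair strand assumed to lie on a z-axis-aligned helix. The axis alignment and the least-effort estimators are simplifying assumptions; a real fitter would also estimate the helix axis and handle noise.

```python
import math

def fit_axis_aligned_helix(points):
    """Estimate (radius, pitch) of a z-axis helix from ordered strand
    samples (x, y, z). Radius is the mean distance to the axis; pitch
    is the rise in z per full turn of unwrapped angle."""
    radius = sum(math.hypot(x, y) for x, y, _ in points) / len(points)
    # Unwrap angles so theta increases monotonically along the strand.
    thetas, prev = [], None
    for x, y, _ in points:
        t = math.atan2(y, x)
        if prev is not None:
            while t < prev:
                t += 2 * math.pi
        thetas.append(t)
        prev = t
    dz = points[-1][2] - points[0][2]
    dtheta = thetas[-1] - thetas[0]
    pitch = 2 * math.pi * dz / dtheta if dtheta else 0.0
    return radius, pitch
```

Fitting strands to helices like this gives the hair portion a plausible 3D shape from a single view, which the other embodiments then stitch to the face geometry via the boundary conditions mentioned above.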
Abstract:
Alignment techniques are described that automatically align multiple scans of an object obtained from different perspectives. Instead of relying solely on errors in local feature matching between a pair of scans to identify a best possible alignment, additional alignment possibilities may be considered. Grouped keypoint features of the pair of scans may be compared to keypoint features of an additional scan to determine an error between the respective keypoint features. Various alignment techniques may utilize the error to determine an optimal alignment for the scans.
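The re-scoring idea can be sketched as below. The descriptor format, the sum-of-squared-differences error, and the additive combination of pairwise and third-scan errors are all assumptions for illustration; the abstract leaves the exact error metric open.

```python
def match_error(desc_a, desc_b):
    """Sum of squared differences between two keypoint descriptors."""
    return sum((a - b) ** 2 for a, b in zip(desc_a, desc_b))

def best_alignment(pair_candidates, extra_scan_desc):
    """Instead of trusting only the pairwise matching error, re-score
    each candidate alignment by how well its grouped keypoint features
    agree with an additional scan, keeping the lowest combined error.
    Each candidate is (name, pairwise_error, grouped_descriptor)."""
    best, best_err = None, float("inf")
    for name, pair_err, grouped_desc in pair_candidates:
        err = pair_err + match_error(grouped_desc, extra_scan_desc)
        if err < best_err:
            best, best_err = name, err
    return best, best_err
```

The point of the extra term is that a candidate that looks best pairwise can still be rejected if its grouped features disagree with a third scan, which is how the described techniques avoid locking in a locally optimal but globally wrong alignment.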