Abstract:
Certain embodiments involve automatically generating a layered animatable puppet from a content stream. For example, a system identifies frames of a content stream that include a character performing gestures usable for generating a layered puppet. The system separates the identified frames of the content stream into individual layers. The system extracts a face of the character from the individual layers and creates the layered puppet by combining the individual layers with the extracted face of the character. The system can output the layered puppet for animation to perform one of the gestures.
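The following is a minimal Python sketch of that pipeline, not the claimed implementation: it assumes frames arrive already paired with a gesture label and that the face location is supplied as a bounding box, and it reduces frame selection, layer separation, and face extraction to placeholders whose names are introduced here for illustration.

```python
# Minimal sketch (illustrative only): assemble a layered puppet from labeled
# frames of a content stream. Layer separation and face extraction are
# placeholders standing in for the unspecified steps of the abstract.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class PuppetLayer:
    name: str            # e.g. "body", "left_arm", "face"
    pixels: np.ndarray   # image data for this layer

@dataclass
class LayeredPuppet:
    layers: list = field(default_factory=list)

def identify_gesture_frames(labeled_frames, gesture_labels):
    """Pick one representative frame per gesture (placeholder: first match)."""
    return {g: next(f for f, lbl in labeled_frames if lbl == g) for g in gesture_labels}

def separate_layers(frame):
    """Split a frame into coarse layers (placeholder: the whole frame as one layer)."""
    return [PuppetLayer("body", frame)]

def extract_face(frame, face_box):
    """Crop the character's face using a known bounding box (x, y, w, h)."""
    x, y, w, h = face_box
    return PuppetLayer("face", frame[y:y + h, x:x + w])

def build_puppet(labeled_frames, gesture_labels, face_box):
    """Combine per-gesture layers with the extracted face into one layered puppet."""
    puppet = LayeredPuppet()
    for gesture, frame in identify_gesture_frames(labeled_frames, gesture_labels).items():
        puppet.layers.extend(separate_layers(frame))
    puppet.layers.append(extract_face(labeled_frames[0][0], face_box))
    return puppet
```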
Abstract:
One exemplary embodiment involves receiving a plurality of three-dimensional (3D) track points for a plurality of frames of a video, wherein the 3D track points are extracted from a plurality of two-dimensional (2D) source points. The embodiment further involves rendering the 3D track points across a plurality of frames of the video on a 2D display. Additionally, the embodiment involves coloring each of the 3D track points, wherein the color of each 3D track point visually distinguishes it from a plurality of surrounding 3D track points, and wherein the color of each 3D track point is consistent across the frames of the video. The embodiment also involves sizing each of the 3D track points based on a distance between a camera that captured the video and a location of the 2D source point referenced by the respective 3D track point.
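A minimal sketch of the two per-point properties described above, under assumed conventions (a stable integer ID per track point, golden-ratio hue spacing for distinct frame-consistent colors, and inverse-distance sizing); none of these choices come from the abstract itself.

```python
# Minimal sketch (assumed conventions, not the claimed method): give each 3D
# track point a color that is stable across frames and a marker size that
# shrinks with distance from the camera.
import colorsys
import numpy as np

def track_point_color(point_id: int):
    """Frame-consistent color per point; golden-ratio hue spacing keeps nearby IDs distinct."""
    hue = (point_id * 0.61803398875) % 1.0
    return colorsys.hsv_to_rgb(hue, 0.9, 0.9)

def track_point_size(point_xyz, camera_xyz, base_size=12.0, min_size=2.0):
    """Marker size scaled by inverse distance between the camera and the tracked location."""
    distance = np.linalg.norm(np.asarray(point_xyz) - np.asarray(camera_xyz))
    return max(min_size, base_size / max(distance, 1e-6))
```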
Abstract:
Systems and methods are provided for switching to different states of electronic content being developed in a content creation application. This involves storing different states of the electronic content using a content-addressable data store, where individual states are represented by identifiers that identify items of the respective states stored in the content-addressable data store. Identical items that are included in multiple states are stored once in the content-addressable data store and referenced by common identifiers. Input is received to change the electronic content to a selected state of the different states, and the electronic content is displayed in the selected state based on identifiers for the selected state. In this way, undo, redo, and other commands for switching to different states of electronic content being developed are provided.
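A minimal sketch of such a content-addressable state store, assuming items are JSON-serializable dictionaries; the class and method names are hypothetical, and undo or redo is reduced to re-loading a previously saved state by index.

```python
# Minimal sketch (illustrative only): each state is a list of item hashes, so
# identical items shared by many states are stored exactly once.
import hashlib
import json

class ContentAddressableStore:
    def __init__(self):
        self.items = {}      # identifier -> item content (stored once)
        self.states = []     # each state: list of item identifiers

    def _identifier(self, item) -> str:
        payload = json.dumps(item, sort_keys=True).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()

    def save_state(self, items) -> int:
        """Record a state; unchanged items are referenced by their existing identifiers."""
        ids = []
        for item in items:
            ident = self._identifier(item)
            self.items.setdefault(ident, item)
            ids.append(ident)
        self.states.append(ids)
        return len(self.states) - 1   # state index usable for undo/redo

    def load_state(self, state_index: int):
        """Reconstruct the electronic content for the selected state."""
        return [self.items[ident] for ident in self.states[state_index]]
```

In this sketch, saving two states that share an unchanged item produces only one stored copy of that item, and undo amounts to loading the state at the previous index.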
Abstract:
Certain embodiments involve generating an appearance guide, a segmentation guide, and a positional guide and using one or more of the guides to synthesize a stylized image or animation. For example, a system obtains data indicating a target image and a style exemplar image and generates a segmentation guide for segmenting the target image and the style exemplar image and for identifying a feature of the target image and a corresponding feature of the style exemplar image. The system generates a positional guide for determining positions of the target feature and the style feature relative to a common grid system. The system generates an appearance guide for modifying intensity levels and contrast values in the target image based on the style exemplar image. The system uses one or more of the guides to transfer a texture of the style feature to the corresponding target feature.
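A minimal sketch of two of the guides named above, under stated assumptions: the positional guide is taken to be per-pixel coordinates normalized onto a common grid, and the appearance guide is taken to be a mean-and-contrast match of the target's intensities to the style exemplar. Both are illustrative simplifications, not the patented pipeline, and the segmentation guide is omitted.

```python
# Minimal sketch (illustrative guides only): a positional guide as normalized
# (x, y) coordinates on a shared grid, and an appearance guide that matches
# the target's intensity mean and contrast to the style exemplar.
import numpy as np

def positional_guide(height: int, width: int) -> np.ndarray:
    """Per-pixel (x, y) coordinates normalized to [0, 1] on a common grid."""
    ys, xs = np.mgrid[0:height, 0:width]
    return np.dstack([xs / max(width - 1, 1), ys / max(height - 1, 1)])

def appearance_guide(target_gray: np.ndarray, style_gray: np.ndarray) -> np.ndarray:
    """Shift target intensities so their mean and contrast match the style exemplar."""
    t_mean, t_std = target_gray.mean(), target_gray.std() + 1e-6
    s_mean, s_std = style_gray.mean(), style_gray.std()
    return (target_gray - t_mean) * (s_std / t_std) + s_mean
```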
Abstract:
One exemplary embodiment involves identifying a plane defined by a plurality of three-dimensional (3D) track points rendered on a two-dimensional (2D) display, wherein the 3D track points are rendered at a plurality of corresponding locations of a video frame. The embodiment also involves displaying a target marker at the plane defined by the 3D track points to allow for visualization of the plane, wherein the target marker is displayed at an angle that corresponds with an angle of the plane. Additionally, the embodiment involves inserting a 3D object at a location in the plane defined by the 3D track points to be embedded into the video frame. The location of the 3D object is based at least in part on the target marker.
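The abstract does not specify how the plane is identified, so the following sketch substitutes a standard least-squares (SVD) plane fit for that step and assumes marker drawing and object embedding are handled by the host renderer; the function names and the planar offset used to place the object are illustrative assumptions.

```python
# Minimal sketch (standard plane fit, not the claimed method): fit a plane to
# selected 3D track points, then place a 3D object at an offset within that
# plane, consistent with where the target marker is displayed.
import numpy as np

def fit_plane(points_3d: np.ndarray):
    """Least-squares plane through 3D track points; returns (centroid, unit normal)."""
    centroid = points_3d.mean(axis=0)
    _, _, vh = np.linalg.svd(points_3d - centroid)
    normal = vh[-1]                        # direction of least variance
    return centroid, normal / np.linalg.norm(normal)

def place_object_on_plane(centroid, normal, marker_offset_xy=(0.0, 0.0)):
    """Position a 3D object at the marker's in-plane offset from the plane centroid."""
    # Build two in-plane axes orthogonal to the plane normal.
    helper = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(normal, helper)
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    return centroid + marker_offset_xy[0] * u + marker_offset_xy[1] * v
```

The unit normal returned by the fit also gives the angle at which a target marker could be drawn so that it visually lies in the identified plane.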