Abstract:
A computing system is presented including a processor and non-transient memory which includes instructions to execute a method including receiving a motion instruction message which includes graphical objects to be modified and instructions to be assigned to each of the graphical objects to be modified, where an instruction includes a property to be applied to a graphical object. The method also includes identifying actors to be assigned to each of the graphical objects based on the instructions assigned to each of the graphical objects, where an actor is a non-graphical object capable of executing one or more instructions. The method also includes generating the actors for each of the graphical objects, executing the instructions assigned to each of the graphical objects via the actors, and outputting the modified graphical objects for display.
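The abstract above describes a pipeline: parse a motion instruction message, generate a non-graphical "actor" per graphical object, have each actor execute its assigned instructions, and output the modified objects. A minimal Python sketch of that flow, with all class and function names being illustrative assumptions rather than the patent's terms:

```python
from dataclasses import dataclass, field

@dataclass
class GraphicalObject:
    name: str
    properties: dict = field(default_factory=dict)

@dataclass
class Instruction:
    prop: str      # property to be applied to the graphical object
    value: object  # target value for that property

class Actor:
    """Non-graphical object capable of executing instructions on one graphical object."""
    def __init__(self, target, instructions):
        self.target = target
        self.instructions = instructions

    def execute(self):
        for ins in self.instructions:
            self.target.properties[ins.prop] = ins.value
        return self.target

def process_motion_message(message):
    """message: list of (GraphicalObject, [Instruction]) pairs, i.e. the
    objects to be modified and the instructions assigned to each."""
    actors = [Actor(obj, instrs) for obj, instrs in message]   # identify/generate actors
    return [actor.execute() for actor in actors]               # execute, then output

circle = GraphicalObject("circle")
modified = process_motion_message([(circle, [Instruction("opacity", 0.5)])])
```

The actor here is deliberately a plain object rather than a display element, mirroring the abstract's distinction between graphical objects and the non-graphical actors that modify them.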
Abstract:
Interaction between a 3D animation and a corresponding script includes: displaying a user interface that includes at least a 3D animation area and a script area, the 3D animation area including (i) a 3D view area for creating and playing a 3D animation and (ii) a timeline area for visualizing actions by one or more 3D animation characters, the script area comprising one or more objects representing lines from a script having one or more script characters; receiving a first user input corresponding to a user selecting at least one of the objects from the script area for assignment to a location in the timeline area; generating a timeline object at the location in response to the first user input, the timeline object corresponding to the selected object; and associating audio data with the generated timeline object, the audio data corresponding to a line represented by the selected object.
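The core mechanism above is: a user selects a script-line object, a timeline object is generated at the chosen timeline location, and the line's audio is associated with that timeline object. A small sketch under assumed names (the patent does not specify an API):

```python
from dataclasses import dataclass

@dataclass
class ScriptLine:
    character: str   # the script character speaking the line
    text: str
    audio: bytes     # audio data corresponding to this line

@dataclass
class TimelineObject:
    position: float  # location in the timeline area, in seconds
    line: ScriptLine
    audio: bytes

class Timeline:
    def __init__(self):
        self.objects = []

    def assign(self, line, position):
        """First user input: a script-area object is selected for assignment
        to a timeline location; generate the timeline object there and
        associate the line's audio with it."""
        obj = TimelineObject(position=position, line=line, audio=line.audio)
        self.objects.append(obj)
        return obj

tl = Timeline()
obj = tl.assign(ScriptLine("ALICE", "Hello!", b"\x00\x01"), position=2.5)
```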
Abstract:
There is described a method and a system for controlling a cinematographic process in an animation. The method comprises: receiving a text descriptive of a scene, the text comprising an instruction relative to the cinematographic process, the instruction comprising natural language words written according to a control language; identifying the instruction in the text based on the natural language words and a set of keywords in a lexicon of the control language, the set of keywords expressing at least one action of the cinematographic process; generating a conceptual structure defining the at least one action relative to the scene, from the text and the instruction identified; transmitting the conceptual structure to an animation generator, to generate the animation in accordance with the instruction in the text; and displaying the animation with the cinematographic process.
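The identification step above matches natural-language text against a lexicon of control-language keywords and emits a conceptual structure for the animation generator. A minimal sketch; the lexicon contents and the shape of the conceptual structure are assumptions for illustration:

```python
# Hypothetical lexicon mapping control-language keywords to cinematographic actions.
LEXICON = {
    "cut to": "CUT",
    "zoom in on": "ZOOM_IN",
    "pan to": "PAN",
}

def identify_instructions(text):
    """Scan scene text for lexicon keywords and emit a simple conceptual
    structure (action plus the remainder of the clause) for each match."""
    found = []
    lowered = text.lower()
    for phrase, action in LEXICON.items():
        idx = lowered.find(phrase)
        if idx != -1:
            rest = text[idx + len(phrase):].split(".")[0].strip()
            found.append({"action": action, "argument": rest})
    return found

structures = identify_instructions("The hero enters. Zoom in on her face.")
```

In the abstract's terms, `structures` would then be transmitted to the animation generator so the rendered animation follows the instruction embedded in the scene text.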
Abstract:
A graphic character object temporary storage stores parameters of a character and associated default values in a hierarchical data structure and one or more animation object data represented in a hierarchical data structure, the one or more animation object data having an associated animation, the graphic character object temporary storage and the animation object data being part of a local memory of a computer system. A method includes receiving a vector graphic object having character part objects which are represented as geometric shapes, displaying a two dimensional character, changing the scale of a part of the displayed two dimensional character, and storing an adjusted parameter in the graphic character object temporary storage as a percentage change from the default value, displaying a customized two dimensional character, applying keyframe data in an associated animation object data to the character parts objects, and displaying an animation according to the keyframe data.
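A notable detail above is that an adjusted parameter is stored as a percentage change from its default value rather than as an absolute number. A brief sketch of that storage scheme, with illustrative names:

```python
class GraphicCharacterStorage:
    """Temporary storage holding parameter defaults and user adjustments,
    where adjustments are kept as percentage changes from the defaults."""
    def __init__(self, defaults):
        self.defaults = dict(defaults)   # e.g. {"head.scale": 1.0}
        self.adjustments = {}            # parameter -> percent change

    def set_value(self, param, new_value):
        default = self.defaults[param]
        self.adjustments[param] = 100.0 * (new_value - default) / default

    def get_value(self, param):
        pct = self.adjustments.get(param, 0.0)
        return self.defaults[param] * (1.0 + pct / 100.0)

store = GraphicCharacterStorage({"head.scale": 1.0})
store.set_value("head.scale", 1.25)   # user enlarges the head part by 25%
```

Storing the delta as a percentage lets the same customization be reapplied even if the default geometry of the character part changes.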
Abstract:
An apparatus comprising: a memory system storing a plurality of sequences, each sequence comprising data for reproducing a different pattern of interactions between a respective plurality of moving characters and storing a combination data structure defining for each sequence connectability of that sequence with other ones of the plurality of sequences; a processor configured to determine pair-wise combinations of the stored sequences, by selecting sequences for pair-wise combination that are defined as connectable by the stored combination data structure, wherein each pair-wise combination has in common at least one of their respective plurality of moving characters and configured to use determined pair-wise combinations of the stored sequences to produce and output video graphics comprising a series of sequences in which movable characters repetitively interact in different combinations.
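The selection logic above combines two constraints: a pair of sequences must be marked connectable in the combination data structure, and the pair must share at least one moving character. A compact sketch with made-up data:

```python
# Illustrative stored sequences; each lists its moving characters.
sequences = {
    "A": {"characters": {"cat", "dog"}},
    "B": {"characters": {"dog", "bird"}},
    "C": {"characters": {"fish"}},
}

# Combination data structure: which sequences each sequence may connect to.
connectable = {"A": {"B", "C"}, "B": {"A"}, "C": set()}

def pairwise_combinations(seqs, table):
    pairs = []
    for first, followers in table.items():
        for second in sorted(followers):
            shared = seqs[first]["characters"] & seqs[second]["characters"]
            if shared:  # pair must have a moving character in common
                pairs.append((first, second))
    return pairs

pairs = pairwise_combinations(sequences, connectable)
```

Here `("A", "C")` is marked connectable but excluded because the two sequences share no character, matching the abstract's second condition.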
Abstract:
A graphical display animation system (100) is disclosed that supports timed modification of element property values of elements within a graphical display. The animation system utilizes a display structure for maintaining a set of elements (202) corresponding to displayed objects within a graphically displayed scene. The elements include a variable property value. The animation system also utilizes a property system that maintains properties associated with elements maintained by the display structure. The properties include dynamic properties (410) that are capable of changing over time and thus affecting the appearance of the corresponding element on a graphical display. The animation system includes animation classes (222), from which animation objects are instantiated and associated with an element property at runtime. The animation object instances provide time varying values affecting values assigned to the dynamic properties maintained by the property system.
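The abstract above separates elements with dynamic properties from animation objects that are instantiated and bound to a property at runtime to supply time-varying values. A minimal sketch; the linear interpolation and all names are assumptions:

```python
class LinearAnimation:
    """Animation object instance providing a time-varying value for
    one dynamic property, bound at runtime."""
    def __init__(self, start, end, duration):
        self.start, self.end, self.duration = start, end, duration

    def value_at(self, t):
        t = max(0.0, min(t, self.duration))   # clamp to the animation's lifetime
        frac = t / self.duration
        return self.start + (self.end - self.start) * frac

class Element:
    """Display-structure element whose dynamic properties can change over time."""
    def __init__(self):
        self.dynamic_properties = {}   # property name -> animation object

    def animate(self, prop, animation):
        self.dynamic_properties[prop] = animation

    def property_value(self, prop, t):
        return self.dynamic_properties[prop].value_at(t)

rect = Element()
rect.animate("opacity", LinearAnimation(0.0, 1.0, duration=2.0))
```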
Abstract:
The present invention provides a technique for acquiring motion samples, labeling motion samples with labels based on a plurality of parameters, using the motion samples and the labels to learn a function that maps labels to motions generally, and using the function to synthesize arbitrary motions. The synthesized motions may be portrayed through computer graphic images to provide realistic animation. The present invention allows the modeling of labeled motion samples in a manner that can accommodate the synthesis of motion of arbitrary location, speed, and style. The modeling can provide subtle details of the motion through the use of probabilistic sub-modeling incorporated into the modeling process. Motion samples may be labeled according to any relevant parameters. Labels may be used to differentiate between different styles to yield different models, or different styles of a motion may be consolidated into a single baseline model with the labels used to embellish the baseline model. The invention allows automation of labeling to increase the efficiency of processing a large variety of motion samples. The invention also allows automation of the animation of synthetic characters by generating the animation based on a general description of the motion desired along with a specification of any embellishments desired.
Abstract:
Animation coordination systems and methods are provided that manage animation context transitions between and/or among multiple applications. A global coordinator can obtain initial information, such as initial graphical representations and object types, initial positions, etc., from initiator applications and final information, such as final graphical representations and object types, final positions, etc., from destination applications. The global coordinator creates an animation context transition between initiator applications and destination applications based upon the initial information and the final information.
Abstract:
A method is described for presenting graphics to a user, said method comprising (1) providing a 3D graphics system comprising a 3D graphics environment and at least one virtual object positioned in the 3D graphics environment, wherein the 3D graphics system is configured to use 3D mathematics, (2) providing a 2D graphics rendering engine configured to use 2D mathematics, and providing a library of sprites for use by the 2D graphics rendering engine, wherein for each sprite in the library, there is provided an array of rendered views for that sprite, based on horizontal and vertical angles, with the rendered views being expressed in 2D mathematics, (3) selecting a camera perspective within the 3D graphics environment, (4) based on selected camera perspective, generating an appropriate 2D view of the 3D graphics environment, (5) based on the generated 2D view, selecting an appropriate sprite and, for that sprite, the appropriate rendered view for that sprite, (6) determining the appropriate screen location and scale for the selected rendered view for the sprite and (7) instructing the 2D graphics rendering engine to paint the selected rendered view for the sprite to the determined screen location and with the determined scale.
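Steps (5) and (6) above reduce to: snap the camera's horizontal and vertical angles to the nearest pre-rendered view in the sprite's array, and scale the sprite for its screen location. A rough sketch; the 45-degree angular grid, index layout, and distance-based scaling rule are assumptions, not details from the abstract:

```python
ANGLE_STEP = 45  # assumed degrees between neighbouring pre-rendered views

def view_index(horizontal_deg, vertical_deg):
    """Snap camera angles to the nearest pre-rendered view in the array
    (step 5): horizontal wraps around, vertical spans -90..90 degrees."""
    h = round(horizontal_deg / ANGLE_STEP) % (360 // ANGLE_STEP)
    v = round((vertical_deg + 90) / ANGLE_STEP)
    return h, v

def screen_scale(distance, reference_distance=10.0):
    """Step (6), roughly: scale the sprite inversely with camera distance."""
    return reference_distance / max(distance, 1e-6)

idx = view_index(100.0, 0.0)
```

The appeal of this arrangement, as the abstract suggests, is that the expensive 3D mathematics stays confined to choosing a view, while the 2D rendering engine only ever paints pre-rendered images at a location and scale.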