Abstract:
Systems, methods, and computer-readable media for enabling efficient control of a media application at a media electronic device by a user electronic device are provided.
Abstract:
This disclosure relates generally to the field of image processing and, more particularly, to various techniques and animation tools for allowing 2D and 3D graphics rendering and animation infrastructures to be able to dynamically render customized animations—without the need for the customized animations to be explicitly tied to any particular graphical entity. These so-called entity agnostic animations may then be integrated into “mixed” graphical scenes (i.e., scenes with both two-dimensional and three-dimensional components), where they may be: applied to any suitable graphical entity; visualized in real-time by the programmer; edited dynamically by the programmer; and shared across various computing platforms and environments that support the entity agnostic animation tools described herein. The entity agnostic animations created by the techniques described herein may be output directly to the current scene file that a programmer is working on, or they may be output to standalone, reusable entity agnostic animation object files.
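To make the notion of an entity agnostic animation concrete, the sketch below (not part of the disclosure; the class and property names are hypothetical) shows an animation defined purely as keyframes on a named property, which can then be applied to any graphical entity, 2D or 3D, that exposes that property.

```python
from dataclasses import dataclass


@dataclass
class Keyframe:
    time: float   # seconds
    value: float


class PropertyAnimation:
    """An animation described purely as keyframes on a named property.

    The animation is not tied to any particular graphical entity; any
    object (2D sprite, 3D node, ...) that exposes the named property
    can be driven by it.
    """

    def __init__(self, prop, keyframes):
        self.prop = prop
        self.keyframes = sorted(keyframes, key=lambda k: k.time)

    def apply(self, entity, t):
        """Sample the animation at time t and write the value onto entity."""
        kfs = self.keyframes
        if t <= kfs[0].time:
            value = kfs[0].value
        elif t >= kfs[-1].time:
            value = kfs[-1].value
        else:
            for a, b in zip(kfs, kfs[1:]):
                if a.time <= t <= b.time:
                    u = (t - a.time) / (b.time - a.time)
                    value = a.value + u * (b.value - a.value)
                    break
        setattr(entity, self.prop, value)


# The same animation object can drive any entity with a "y" attribute.
bounce = PropertyAnimation("y", [Keyframe(0.0, 0.0), Keyframe(0.5, 2.0), Keyframe(1.0, 0.0)])
```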
Abstract:
Systems, methods, and program storage devices are disclosed, which comprise instructions to cause one or more processing units to analyze input images to a texture atlas and determine how each texture should be modified before being stored in the texture atlas to prevent undesirable drawing artifacts. For example, “tileable” images may be identified on a per-edge basis (e.g., by determining whether each edge pixel is above a certain opacity threshold). The tileable images may then be modified, e.g., by extruding a 1-pixel border identical to the outer row of pixels, before being stored in the texture atlas. “Character”-type sprites may also be identified on a per-edge basis (e.g., by determining whether each edge pixel is below the opacity threshold). The character-type sprites may then be modified by adding a single-pixel transparent border around the outer rows of pixels before being stored in the texture atlas.
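A minimal sketch of the per-edge classification and padding step might look like the following; the 0.95 opacity cutoff, the function name, and the use of NumPy arrays are illustrative assumptions rather than details taken from the disclosure.

```python
import numpy as np

OPACITY_THRESHOLD = 0.95  # assumed cutoff; the disclosure only says "a certain opacity threshold"


def classify_and_pad(texture_rgba):
    """Pad an RGBA texture (H x W x 4, floats in [0, 1]) with a 1-pixel border
    before it is packed into a texture atlas.

    - An edge whose pixels are all at or above the opacity threshold is treated
      as "tileable": the border extrudes (duplicates) the outer row or column.
    - Any other edge is treated as a "character"-type edge: the border is left
      fully transparent.
    """
    alpha = texture_rgba[..., 3]
    tileable = {
        "top": bool(np.all(alpha[0, :] >= OPACITY_THRESHOLD)),
        "bottom": bool(np.all(alpha[-1, :] >= OPACITY_THRESHOLD)),
        "left": bool(np.all(alpha[:, 0] >= OPACITY_THRESHOLD)),
        "right": bool(np.all(alpha[:, -1] >= OPACITY_THRESHOLD)),
    }

    h, w, c = texture_rgba.shape
    padded = np.zeros((h + 2, w + 2, c), dtype=texture_rgba.dtype)
    padded[1:-1, 1:-1] = texture_rgba

    # Extrude opaque (tileable) edges; everything else keeps the transparent
    # padding. Corner pixels are left transparent in this sketch.
    if tileable["top"]:
        padded[0, 1:-1] = texture_rgba[0]
    if tileable["bottom"]:
        padded[-1, 1:-1] = texture_rgba[-1]
    if tileable["left"]:
        padded[1:-1, 0] = texture_rgba[:, 0]
    if tileable["right"]:
        padded[1:-1, -1] = texture_rgba[:, -1]
    return padded
```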
Abstract:
Systems, methods, and program storage devices are disclosed, which comprise instructions to cause one or more processing units to dynamically generate refined normal maps for 2D texture maps, e.g., supplied by a programmer or artist. Generally speaking, there are two pertinent properties to keep in balance when generating the normal vectors comprising a normal map: “smoothness” and “bumpiness.” The smoothness of the normal vectors is influenced by how many neighboring pixels are involved in the “smoothening” calculation. Incorporating the influence of a greater number of neighboring pixels' values reduces the overall bumpiness of the normal map, as each pixel's value is weighted by the values of its neighbors. Thus, the techniques described herein iteratively: downsample height maps; generate normal maps; scale the normal maps to maintain bumpiness; and blend the generated scaled normal maps with generated normal maps from previous iterations—until the smoothness of the resultant normal map has reached desired levels.
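The iterative downsample/generate/scale/blend loop could be sketched roughly as below; the box-filter downsample, finite-difference normal generation, and the particular scale and blend factors are illustrative assumptions, and the code assumes image dimensions divisible by 2 at every iteration.

```python
import numpy as np


def height_to_normals(height, strength=1.0):
    """Finite-difference normals (H x W x 3, z-up, unit length) from a height map."""
    gy, gx = np.gradient(height)
    normals = np.dstack((-gx * strength, -gy * strength, np.ones_like(height)))
    return normals / np.linalg.norm(normals, axis=2, keepdims=True)


def downsample(height):
    """2x box-filter downsample (assumes even dimensions)."""
    return 0.25 * (height[0::2, 0::2] + height[1::2, 0::2] +
                   height[0::2, 1::2] + height[1::2, 1::2])


def upsample_to(normals, shape):
    """Nearest-neighbour upsample back to the full-resolution grid."""
    reps_y = shape[0] // normals.shape[0]
    reps_x = shape[1] // normals.shape[1]
    return np.repeat(np.repeat(normals, reps_y, axis=0), reps_x, axis=1)


def refine_normal_map(height, iterations=3, bumpiness_scale=1.5, blend=0.5):
    """Iteratively downsample the height map, regenerate normals, scale them to
    keep their bumpiness, and blend them with the previous iteration's result."""
    result = height_to_normals(height)
    current = height
    for _ in range(iterations):
        current = downsample(current)
        coarse = height_to_normals(current, strength=bumpiness_scale)
        coarse = upsample_to(coarse, height.shape)
        result = (1.0 - blend) * result + blend * coarse
        result /= np.linalg.norm(result, axis=2, keepdims=True)  # re-normalize
    return result
```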
Abstract:
Devices, methods, and non-transitory program storage devices are disclosed to provide for the application of color treatments and/or color normalization operations to digital assets (DAs) in a set of DAs that are to be displayed as part of a multimedia presentation. The determination of the color treatment may be based on comparing one or more characteristics of an audio media item associated with the set of DAs to a corresponding one or more characteristics of a plurality of predetermined color treatments. Color normalization may be applied to the set of DAs prior to the application of the determined color treatment. Techniques disclosed herein may also determine one or more parameters for a multimedia presentation of the set of DAs based on a characteristic of the associated audio media item. The parameters for the multimedia presentation may comprise one or more of: preferred DA sequences, portions, clusters, layouts, themes, transition types, or transition durations.
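One plausible way to realize the comparison step is a nearest-match search over the predetermined treatments' characteristics; the specific audio characteristics (tempo, energy, valence), treatment names, and distance metric below are hypothetical and not taken from the disclosure.

```python
from dataclasses import dataclass


@dataclass
class ColorTreatment:
    name: str
    tempo: float     # beats per minute the treatment was designed around
    energy: float    # normalized 0..1
    valence: float   # normalized 0..1 ("mood")


# Hypothetical predetermined treatments; the disclosure does not enumerate them.
TREATMENTS = [
    ColorTreatment("warm_fade", tempo=80.0, energy=0.3, valence=0.7),
    ColorTreatment("high_key", tempo=128.0, energy=0.9, valence=0.8),
    ColorTreatment("cool_mono", tempo=95.0, energy=0.5, valence=0.3),
]


def select_treatment(audio_tempo, audio_energy, audio_valence):
    """Return the predetermined treatment whose characteristics are closest
    (squared Euclidean distance over roughly normalized features) to the
    characteristics of the associated audio media item."""
    def distance(t):
        return (((t.tempo - audio_tempo) / 200.0) ** 2 +
                (t.energy - audio_energy) ** 2 +
                (t.valence - audio_valence) ** 2)
    return min(TREATMENTS, key=distance)
```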
Abstract:
A set of tools, in the form of a software development kit (SDK) for a graphics rendering system, is provided to improve overall graphics operations. In general, the tools are directed to analyzing a scene tree and optimizing its presentation to one or more graphics processing units (GPUs) so as to improve rendering operations. This overall goal is achieved through a number of different capabilities, each of which is presented to software developers through a new application programming interface (API).
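As one hedged illustration of the kind of scene-tree analysis such an SDK might perform (the disclosure does not specify a particular optimization), the sketch below groups drawable nodes by shared texture so that each group can be submitted to the GPU together, reducing state changes between draw calls.

```python
from collections import defaultdict
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Node:
    name: str
    texture: Optional[str] = None             # None for purely structural nodes
    children: List["Node"] = field(default_factory=list)


def drawables(node):
    """Depth-first traversal of the scene tree, yielding drawable nodes."""
    if node.texture is not None:
        yield node
    for child in node.children:
        yield from drawables(child)


def batch_by_texture(root):
    """Group drawables by shared texture so that each group can be submitted
    to the GPU together, minimizing state changes between draw calls."""
    batches = defaultdict(list)
    for node in drawables(root):
        batches[node.texture].append(node)
    return batches
```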
Abstract:
Systems, methods, and computer-readable media to improve the operation of graphics systems are described. In general, collision avoidance techniques are disclosed that operate even when the agent lacks a priori knowledge of its environment and is, further, agnostic as to whether the environment is two-dimensional (2D) or three-dimensional (3D), whether the obstacles are convex or concave, or whether the obstacles are moving or stationary. More particularly, techniques disclosed herein use simple geometry to identify which edges of which obstacles an agent is most likely to collide with. With this known, the direction of an avoidance force is also known. The magnitude of the force may be fixed, based on the agent's maximum acceleration, and modulated by weighting agents
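A simple-geometry sketch of the idea, under the assumption that the "most likely" edge is the one closest to the agent's short-horizon predicted position, might look like this; the lookahead horizon, function names, and 2D formulation are illustrative rather than taken from the disclosure.

```python
import math


def closest_point_on_segment(px, py, ax, ay, bx, by):
    """Closest point to (px, py) on the obstacle edge from (ax, ay) to (bx, by)."""
    abx, aby = bx - ax, by - ay
    denom = abx * abx + aby * aby
    if denom == 0.0:            # degenerate (zero-length) edge
        return ax, ay
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / denom))
    return ax + t * abx, ay + t * aby


def avoidance_force(agent_pos, agent_vel, edges, max_accel, horizon=1.0):
    """Steer away from the obstacle edge the agent is most likely to hit.

    The "most likely" edge is approximated as the edge closest to the agent's
    predicted position a short time ahead; the force points from that edge
    toward the agent and its magnitude is fixed at max_accel.
    """
    px = agent_pos[0] + agent_vel[0] * horizon
    py = agent_pos[1] + agent_vel[1] * horizon

    best_point, best_dist = None, float("inf")
    for (ax, ay), (bx, by) in edges:
        cx, cy = closest_point_on_segment(px, py, ax, ay, bx, by)
        d = math.hypot(px - cx, py - cy)
        if d < best_dist:
            best_dist, best_point = d, (cx, cy)

    if best_point is None:      # no obstacles known yet
        return 0.0, 0.0

    dx, dy = px - best_point[0], py - best_point[1]
    norm = math.hypot(dx, dy) or 1.0
    return max_accel * dx / norm, max_accel * dy / norm
```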