Abstract:
Systems, methods and program storage devices are disclosed, which comprise instructions to cause one or more processing units to dynamically render 3D lighting effects for a supplied 2D texture map—without the need for a programmer to supply a normal map along with the 2D texture map. According to some embodiments, an algorithm may inspect the pixel values (e.g., RGB values) of each individual pixel of the texture map, and, based on the pixel values, can accurately estimate where the lighting and shadow effects should be applied to the source 2D texture file to simulate 3D lighting. Further, because these effects are being rendered dynamically by the rendering and animation infrastructure, the techniques described herein work especially well for “dynamic content,” e.g., user-downloaded data, in-application user-created content, operating system (OS) icons, and other user interface (UI) elements for which programmers do not have access to normal maps a priori.
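As an illustration of the pixel-inspection approach described above, the sketch below treats each pixel's luminance as a height value, uses luminance gradients as a stand-in for a surface normal, and applies a simple Lambertian term to decide where light and shadow fall. This is a hypothetical Python reconstruction; the function name, light direction, and luminance weights are assumptions, not the disclosed algorithm.

```python
import math

def estimate_shading(pixels, light=(0.5, 0.5, 0.7)):
    """Approximate 3D lighting for a 2D texture with no normal map.

    `pixels` is a row-major grid of (R, G, B) tuples. Per-pixel
    luminance gradients stand in for surface slope, and a Lambertian
    dot product produces the light/shadow intensity per pixel.
    """
    h, w = len(pixels), len(pixels[0])
    lum = [[(0.299 * r + 0.587 * g + 0.114 * b) / 255.0 for r, g, b in row]
           for row in pixels]
    lx, ly, lz = light
    norm = math.sqrt(lx * lx + ly * ly + lz * lz)
    lx, ly, lz = lx / norm, ly / norm, lz / norm
    shade = []
    for y in range(h):
        row = []
        for x in range(w):
            # Central differences approximate the height-field slope.
            dx = lum[y][min(x + 1, w - 1)] - lum[y][max(x - 1, 0)]
            dy = lum[min(y + 1, h - 1)][x] - lum[max(y - 1, 0)][x]
            # Treat (-dx, -dy, 1) as an unnormalized pseudo-normal.
            nlen = math.sqrt(dx * dx + dy * dy + 1.0)
            nx, ny, nz = -dx / nlen, -dy / nlen, 1.0 / nlen
            row.append(max(0.0, nx * lx + ny * ly + nz * lz))
        shade.append(row)
    return shade
```

A flat region of the texture shades uniformly, while bright-to-dark transitions pick up highlights and shadows without any authored normal map.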
Abstract:
Systems, methods and program storage devices are disclosed, which comprise instructions to cause one or more processing units to analyze input images to a texture atlas and determine how each texture should be modified before being stored in the texture atlas to prevent undesirable drawing artifacts. For example, “tileable” images may be identified on a per-edge basis (e.g., by determining whether each edge pixel is above a certain opacity threshold). The tileable images may then be modified, e.g., by extruding a 1-pixel border identical to the outer row of pixels, before being stored in the texture atlas. “Character”-type sprites may also be identified on a per-edge basis (e.g., by determining whether each edge pixel is below the opacity threshold). The character-type sprites may then be modified by adding a single-pixel transparent border around the outer rows of pixels before being stored in the texture atlas.
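The classification and padding steps above might look like the following sketch, which checks the whole border at once rather than per edge for brevity; the RGBA layout, threshold value, and function name are assumptions.

```python
def prepare_for_atlas(pixels, opacity_threshold=0.95):
    """Classify a sprite's edges and pad it before atlas insertion.

    `pixels` is a row-major grid of (R, G, B, A) tuples with A in
    [0, 1]. If every border pixel is opaque (>= threshold) the image
    is treated as tileable and its outer row is extruded outward by
    one pixel; if every border pixel is transparent (< threshold) it
    is treated as a character sprite and a 1-pixel fully transparent
    border is added around it.
    """
    h, w = len(pixels), len(pixels[0])
    border = ([pixels[0][x] for x in range(w)] +
              [pixels[h - 1][x] for x in range(w)] +
              [pixels[y][0] for y in range(h)] +
              [pixels[y][w - 1] for y in range(h)])
    if all(p[3] >= opacity_threshold for p in border):
        kind = "tileable"
        # Extrude: duplicate the nearest edge pixel into the new border.
        out = [[pixels[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]
                for x in range(-1, w + 1)]
               for y in range(-1, h + 1)]
    elif all(p[3] < opacity_threshold for p in border):
        kind = "character"
        clear = (0, 0, 0, 0.0)
        out = [[clear] * (w + 2)]
        out += [[clear] + list(row) + [clear] for row in pixels]
        out.append([clear] * (w + 2))
    else:
        kind = "mixed"
        out = [list(row) for row in pixels]  # leave mixed edges untouched
    return kind, out
```

Extruding the edge row keeps bilinear filtering from sampling a neighboring atlas entry at tile boundaries, while the transparent border prevents a character sprite's edge pixels from smearing when sampled at the sprite's rectangle boundary.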
Abstract:
Systems, methods and program storage devices are disclosed, which comprise instructions to cause one or more processing units to dynamically generate refined normal maps for 2D texture maps, e.g., supplied by a programmer or artist. Generally speaking, there are two pertinent properties to keep in balance when generating the normal vectors comprising a normal map: “smoothness” and “bumpiness.” The smoothness of the normal vectors is influenced by how many neighboring pixels are involved in the “smoothing” calculation. Incorporating the influence of a greater number of neighboring pixels' values reduces the overall bumpiness of the normal map, as each pixel's value is weighted by the values of its neighbors. Thus, the techniques described herein iteratively: downsample height maps; generate normal maps; scale the normal maps to maintain bumpiness; and blend the generated scaled normal maps with generated normal maps from previous iterations—until the smoothness of the resultant normal map has reached desired levels.
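The iterative loop above (downsample, derive normals, rescale, blend) can be sketched as follows. The box-filter downsample, nearest-neighbor upsample, and the particular `bump_scale` and `blend` values are illustrative assumptions, not the disclosed formulation.

```python
import math

def _normals(hm):
    # Derive per-pixel unit normals from height-field central differences.
    h, w = len(hm), len(hm[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            dx = hm[y][min(x + 1, w - 1)] - hm[y][max(x - 1, 0)]
            dy = hm[min(y + 1, h - 1)][x] - hm[max(y - 1, 0)][x]
            l = math.sqrt(dx * dx + dy * dy + 1.0)
            row.append((-dx / l, -dy / l, 1.0 / l))
        out.append(row)
    return out

def refine_normal_map(height, iterations=2, bump_scale=1.5, blend=0.5):
    """Iteratively smooth a normal map while preserving bumpiness."""
    result = _normals(height)
    hm = height
    H, W = len(height), len(height[0])
    for _ in range(iterations):
        if len(hm) < 4 or len(hm[0]) < 4:
            break  # too small to downsample further
        # 2x box-filter downsample of the height map.
        hm = [[(hm[2*y][2*x] + hm[2*y][2*x+1] +
                hm[2*y+1][2*x] + hm[2*y+1][2*x+1]) / 4.0
               for x in range(len(hm[0]) // 2)]
              for y in range(len(hm) // 2)]
        coarse = _normals(hm)
        ch, cw = len(coarse), len(coarse[0])
        new = []
        for y in range(H):
            row = []
            for x in range(W):
                # Nearest-neighbor upsample of the coarse normal.
                nx, ny, nz = coarse[y * ch // H][x * cw // W]
                # Re-amplify x/y to maintain bumpiness, then renormalize.
                nx, ny = nx * bump_scale, ny * bump_scale
                l = math.sqrt(nx*nx + ny*ny + nz*nz)
                nx, ny, nz = nx / l, ny / l, nz / l
                # Blend with the previous iteration's normal.
                px, py, pz = result[y][x]
                bx = (1 - blend) * px + blend * nx
                by = (1 - blend) * py + blend * ny
                bz = (1 - blend) * pz + blend * nz
                bl = math.sqrt(bx*bx + by*by + bz*bz)
                row.append((bx / bl, by / bl, bz / bl))
            new.append(row)
        result = new
    return result
```

Each pass trades a little high-frequency bumpiness for smoothness; the loop stops when the map is too small to downsample or the requested number of iterations is reached.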
Abstract:
The refresh rate of a display of a portable display device is dependent on the degree of device motion detected by one or more motion sensors included in the portable display device, according to an embodiment of the invention. In an embodiment, when no device motion is detected by the one or more sensors, the display of the portable display device is refreshed at an initial refresh rate. When the one or more motion sensors detect a degree of device motion above a motion threshold, the refresh rate of the display is decreased to a motion-based refresh rate, according to an embodiment. In an embodiment, the degree of motion of moving content on the display is also taken into account when determining the display refresh rate.
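A minimal sketch of the described policy, assuming arbitrary motion units and illustrative 60 Hz/30 Hz rates; how content motion is weighed is a guess, since the abstract leaves it unspecified.

```python
def choose_refresh_rate(device_motion, content_motion=0.0,
                        motion_threshold=1.0,
                        initial_hz=60, motion_hz=30):
    """Pick a display refresh rate from detected motion.

    When device motion (e.g., from an accelerometer or gyroscope)
    exceeds `motion_threshold`, the display drops from `initial_hz`
    to the lower `motion_hz`, since fine detail is less perceptible
    while the device shakes. Fast-moving on-screen content keeps the
    rate high so its animation stays smooth.
    """
    if device_motion > motion_threshold and content_motion <= motion_threshold:
        return motion_hz
    return initial_hz
```

The lower motion-based rate reduces power draw while the device is moving; a real driver would also ramp the rate back up once motion subsides.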
Abstract:
The disclosure pertains to the operation of graphics systems and to a variety of architectures for the design and/or operation of a graphics system spanning from the output of an application program to the presentation of visual content in the form of pixels or otherwise. In general, many embodiments of the invention envision the processing of graphics programming according to an on-the-fly decision made regarding how best to use the specific available hardware and software. In some embodiments, a software arrangement may be used to evaluate the specific system hardware and software capabilities and then make a decision regarding the best graphics programming path to follow for any particular graphics request. The decision regarding the best path may be made after evaluating the hardware and software alternatives for the path in view of the particulars of the graphics program to be processed.
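The on-the-fly path decision might be sketched as below, where available paths are scored and filtered against the request; the path names, capability encoding, and scoring are purely illustrative assumptions.

```python
def pick_render_path(request, capabilities):
    """Choose a graphics programming path on the fly.

    `capabilities` maps path names to the feature level the present
    hardware/software supports for that path; `request` describes the
    work to be done. Supported paths are ranked and the best one wins,
    with a software path as the always-available fallback.
    """
    paths = [
        ("gpu_compiled", 3),   # compile shaders for this specific GPU
        ("gpu_generic", 2),    # precompiled generic GPU path
        ("cpu_software", 1),   # always-available software fallback
    ]
    supported = [(name, rank) for name, rank in paths
                 if name in capabilities and
                 capabilities[name] >= request.get("min_feature_level", 0)]
    if not supported:
        return "cpu_software"  # nothing else qualifies for this request
    return max(supported, key=lambda p: p[1])[0]
```

Because the decision is made per request, the same program can take a fast compiled path on capable hardware and silently fall back elsewhere.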
Abstract:
Embodiments of the present disclosure are directed to methods and systems for displaying an image on a user interface. The methods and systems include components, modules, and so on for determining a minimum feature width of the image and determining a distance field of each region associated with the image. The distance field of each region may be based on the minimum feature width. A filter threshold associated with the distance field is then determined and the image is output using the determined filter threshold.
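A rough sketch of the pipeline above: compute a distance field for the image, derive a filter threshold from the minimum feature width, and produce output by thresholding. The chamfer-style transform and the threshold rule are assumptions, since the abstract specifies neither.

```python
def distance_field(mask):
    """Unsigned city-block distance to the nearest 'inside' pixel.

    `mask` is a binary row-major grid; the classic two-pass chamfer
    transform propagates distances forward then backward.
    """
    h, w = len(mask), len(mask[0])
    INF = 10 ** 6
    d = [[0 if mask[y][x] else INF for x in range(w)] for y in range(h)]
    for y in range(h):            # forward pass (top-left to bottom-right)
        for x in range(w):
            if y > 0: d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0: d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    for y in range(h - 1, -1, -1):  # backward pass (bottom-right to top-left)
        for x in range(w - 1, -1, -1):
            if y < h - 1: d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1: d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d

def render(mask, min_feature_width):
    # Derive the filter threshold from the minimum feature width so
    # features narrower than it are not filtered away (assumed policy).
    threshold = max(1, min_feature_width // 2)
    d = distance_field(mask)
    return [[1 if v < threshold else 0 for v in row] for row in d]
```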
Abstract:
Techniques are disclosed for generating and using a conformal or UV mapping between an object's 3D representation (e.g., a polygonal mesh model) and a corresponding 2D representation (e.g., texture memory). More particularly, techniques disclosed herein generate a conformal mapping that allows the rapid identification of disparate locations in texture memory (e.g., those that span a seam) that are spatially similar at the corresponding 3D locations. Because 2D-to-3D-to-2D mappings can be performed quickly, it becomes practical to filter across a conformal map's seams—an action that has previously been avoided due to its high computational cost.
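The 2D-to-3D-to-2D idea can be illustrated with a brute-force sketch: given the 3D point each texel covers, the spatially nearest other texel identifies the matching location across a seam. A real implementation would precompute and accelerate this lookup; the data layout here is an assumption.

```python
def build_seam_lookup(texel_positions):
    """Map each texel to the nearest other texel in 3D space.

    `texel_positions` maps (u, v) texel coordinates to the 3D point
    that texel covers on the mesh. Texels on opposite sides of a UV
    seam sit far apart in texture memory but nearly coincide in 3D,
    so a nearest-neighbor query recovers the cross-seam correspondence
    needed to filter across the seam.
    """
    def dist2(a, b):
        # Squared Euclidean distance between two 3D points.
        return sum((p - q) ** 2 for p, q in zip(a, b))

    lookup = {}
    for uv, pos in texel_positions.items():
        best = min((other for other in texel_positions if other != uv),
                   key=lambda other: dist2(texel_positions[other], pos))
        lookup[uv] = best
    return lookup
```

With such a table in hand, a filter kernel that reaches past a seam can fetch its missing taps from the corresponding texels on the other side instead of stopping at the island boundary.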
Abstract:
Techniques are disclosed for providing easily computable representations of dynamic objects so that a graphics system's physics engine can more accurately and realistically determine the result of physical actions on, or with, such dynamic objects. More particularly, the disclosed techniques generate a convex decomposition of an arbitrarily complex polygonal shape that is then simplified in a manner that preserves physically significant details, resulting in an object having a relatively small number of convex shapes that cover the original polygonal shape. The salience of a physically significant detail may be controlled via a threshold value, which may be user- or system-specified.
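One way to sketch the simplification step, assuming salience is measured by relative area (the disclosure may weigh other properties): convex pieces whose area falls below a threshold fraction of the total are dropped.

```python
def polygon_area(poly):
    # Shoelace formula; vertices are (x, y) tuples in order.
    a = 0.0
    for i in range(len(poly)):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % len(poly)]
        a += x0 * y1 - x1 * y0
    return abs(a) / 2.0

def simplify_decomposition(convex_pieces, salience_threshold=0.05):
    """Drop convex pieces that carry no physically significant detail.

    `convex_pieces` is a convex decomposition of a complex polygon,
    each piece a list of vertices. Pieces whose area falls below
    `salience_threshold` of the total are treated as insignificant
    detail and removed, leaving a small set of covering shapes for
    the physics engine. The threshold plays the role of the user- or
    system-specified salience control described above.
    """
    total = sum(polygon_area(p) for p in convex_pieces)
    return [p for p in convex_pieces
            if polygon_area(p) >= salience_threshold * total]
```

Raising the threshold yields fewer, coarser shapes and faster collision tests; lowering it preserves more detail at higher simulation cost.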