Abstract:
The disclosure pertains to the operation of graphics systems and to a variety of architectures for the design and/or operation of a graphics system, spanning from the output of an application program to the presentation of visual content in the form of pixels or otherwise. In general, many embodiments of the invention envision processing graphics programming according to an on-the-fly decision regarding how best to use the specific available hardware and software. In some embodiments, a software arrangement may be used to evaluate the specific system hardware and software capabilities and then decide the best graphics programming path to follow for any particular graphics request. The decision regarding the best path may be made after evaluating the hardware and software alternatives for the path in view of the particulars of the graphics program to be processed.
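As a minimal sketch of such a capability-driven path decision, written in Swift: the capability fields, the three paths, and all names below are assumptions for illustration, not taken from the disclosure.

```swift
// Hypothetical capability model; fields, paths, and names are
// illustrative, not taken from the disclosure.
struct SystemCapabilities {
    let hasDedicatedGPU: Bool
    let supportsComputeShaders: Bool
}

enum RenderPath {
    case gpuCompute   // offload work to compute shaders
    case gpuRaster    // conventional rasterization pipeline
    case cpuFallback  // software rendering
}

struct GraphicsRequest {
    let isComputeHeavy: Bool
    let requiresImmediateMode: Bool
}

// On-the-fly decision: weigh the particulars of the request against
// what the evaluated hardware/software stack can provide.
func selectPath(for request: GraphicsRequest,
                given caps: SystemCapabilities) -> RenderPath {
    if request.isComputeHeavy && caps.supportsComputeShaders {
        return .gpuCompute
    }
    if caps.hasDedicatedGPU && !request.requiresImmediateMode {
        return .gpuRaster
    }
    return .cpuFallback
}
```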
Abstract:
Systems, methods and program storage devices are disclosed, which comprise instructions to cause one or more processing units to dynamically generate refined normal maps for 2D texture maps, e.g., supplied by a programmer or artist. Generally speaking, there are two pertinent properties to keep in balance when generating the normal vectors comprising a normal map: “smoothness” and “bumpiness.” The smoothness of the normal vectors is influenced by how many neighboring pixels are involved in the “smoothening” calculation. Incorporating the influence of a greater number of neighboring pixels' values reduces the overall bumpiness of the normal map, as each pixel's value is weighted by the values of its neighbors. Thus, the techniques described herein iteratively: downsample height maps; generate normal maps; scale the normal maps to maintain bumpiness; and blend the generated scaled normal maps with the normal maps generated in previous iterations, until the smoothness of the resultant normal map reaches the desired level.
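A minimal CPU-side sketch of this iterative loop, assuming a non-empty [[Float]] height map, a 2x box-filter downsample, central-difference normals, and a fixed 0.5 blend weight; all of these choices and names are illustrative, as the abstract does not specify them.

```swift
// Data layout, downsample filter, and blend weight are assumptions.
typealias HeightMap = [[Float]]
typealias NormalMap = [[SIMD3<Float>]]

func normalize(_ v: SIMD3<Float>) -> SIMD3<Float> {
    v / (v * v).sum().squareRoot()
}

// 2x box-filter downsample: each pass folds more neighbors into every
// pixel's value, which is what drives the smoothing.
func downsample(_ h: HeightMap) -> HeightMap {
    let rows = h.count / 2, cols = h[0].count / 2
    var out = HeightMap(repeating: [Float](repeating: 0, count: cols), count: rows)
    for r in 0..<rows {
        for c in 0..<cols {
            out[r][c] = (h[2*r][2*c] + h[2*r+1][2*c]
                       + h[2*r][2*c+1] + h[2*r+1][2*c+1]) / 4
        }
    }
    return out
}

// Central-difference normals; `bump` scales the height gradients.
func normals(from h: HeightMap, bump: Float) -> NormalMap {
    let rows = h.count, cols = h[0].count
    var out = NormalMap(repeating: .init(repeating: SIMD3<Float>(0, 0, 1),
                                         count: cols), count: rows)
    for r in 1..<max(1, rows - 1) {
        for c in 1..<max(1, cols - 1) {
            let dx = (h[r][c + 1] - h[r][c - 1]) * bump
            let dy = (h[r + 1][c] - h[r - 1][c]) * bump
            out[r][c] = normalize(SIMD3<Float>(-dx, -dy, 1))
        }
    }
    return out
}

// Nearest-neighbor upsample of the coarse map, then a per-pixel blend
// with the finer map from the previous iteration.
func blend(_ fine: NormalMap, _ coarse: NormalMap, t: Float) -> NormalMap {
    var out = fine
    for r in 0..<fine.count {
        for c in 0..<fine[0].count {
            let cr = min(r * coarse.count / fine.count, coarse.count - 1)
            let cc = min(c * coarse[0].count / fine[0].count, coarse[0].count - 1)
            out[r][c] = normalize(fine[r][c] * (1 - t) + coarse[cr][cc] * t)
        }
    }
    return out
}

// Each pass smooths the map while the growing bump scale compensates
// for the bumpiness that downsampling removes.
func refine(heights: HeightMap, passes: Int, bump: Float) -> NormalMap {
    var h = heights
    var result = normals(from: h, bump: bump)
    for pass in 0..<passes {
        guard h.count >= 4, h[0].count >= 4 else { break }
        h = downsample(h)
        let coarse = normals(from: h, bump: bump * Float(1 << (pass + 1)))
        result = blend(result, coarse, t: 0.5)
    }
    return result
}
```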
Abstract:
The refresh rate of a display of a portable display device is dependent on the degree of device motion detected by one or more motion sensors included in the portable display device, according to an embodiment of the invention. In an embodiment, when no device motion is detected by the one or more sensors, the display of the portable display device is refreshed at an initial refresh rate. When the one or more motion sensors detect a degree of device motion above a motion threshold, the refresh rate of the display is decreased to a motion-based refresh rate, according to an embodiment. In an embodiment, the degree of motion of moving content on the display is also taken into account when determining the display refresh rate.
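A sketch of the policy this describes; the 60 Hz/30 Hz rates, both thresholds, and the decision to keep the higher rate for fast-moving content are placeholder assumptions, as the disclosure does not specify concrete values.

```swift
// Hypothetical policy; all numeric values are placeholders.
struct RefreshPolicy {
    let initialRate = 60.0       // Hz while the device is still
    let motionRate = 30.0        // Hz once device motion passes the threshold
    let motionThreshold = 0.5    // device-motion magnitude, arbitrary units
    let contentThreshold = 100.0 // on-screen content velocity, px/s

    func refreshRate(deviceMotion: Double, contentVelocity: Double) -> Double {
        // No significant device motion: keep the initial rate.
        guard deviceMotion > motionThreshold else { return initialRate }
        // The device is moving, but the degree of motion of on-screen
        // content is also taken into account before dropping the rate.
        return contentVelocity > contentThreshold ? initialRate : motionRate
    }
}
```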
Abstract:
Systems, methods and program storage devices are disclosed, which cause one or more processing units to: obtain one or more two-dimensional components and one or more three-dimensional components; convert the pixel color values of the two-dimensional components into luminance values; create height maps over the two-dimensional components using the converted luminance values; calculate a normal vector for each pixel in each of the two-dimensional components; and render three-dimensional lighting effects on the one or more two-dimensional components and one or more three-dimensional components in a mixed scene, wherein the calculated normal vectors are used as the normal maps for the two-dimensional components, the pixel color values are used as the texture maps for the two-dimensional components, and the one or more three-dimensional components are rendered in the scene according to their respective depth values, textures, and/or vertices, along with the one or more two-dimensional components.
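A short sketch of the first two stages (pixel colors to luminance, luminance to height map), assuming Rec. 709 luma weights, which the abstract does not specify; per-pixel normals would then follow from the height gradients as in the refinement sketch above.

```swift
// Rec. 709 luma weights are a standard choice, assumed here.
struct RGB { var r, g, b: Float }  // components in [0, 1]

func luminance(_ p: RGB) -> Float {
    0.2126 * p.r + 0.7152 * p.g + 0.0722 * p.b
}

// Per-pixel luminance becomes the height map; normals then come from
// the height gradients, exactly as in the sketch above.
func heightMap(from pixels: [[RGB]]) -> [[Float]] {
    pixels.map { row in row.map(luminance) }
}
```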
Abstract:
A set of tools, in the form of a software development kit (SDK) for a graphics rendering system, is provided to improve overall graphics operations. In general, the tools are directed to analyzing a scene tree and optimizing its presentation to one or more graphics processing units (GPUs) so as to improve rendering operations. This overall goal is achieved through a number of different capabilities, each of which is presented to software developers through a new application programming interface (API).
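As one hypothetical example of such an optimization, the sketch below walks a scene tree and groups drawables that share a material so each group can be submitted to a GPU as a batch; the node types and the grouping criterion are illustrative, not taken from the disclosure.

```swift
// Node types and the material-based grouping criterion are
// illustrative; a real scene tree carries far more state.
enum SceneNode {
    case group([SceneNode])
    case drawable(materialID: Int, meshID: Int)
}

// Walk the tree and bucket meshes by shared material so each bucket
// can be issued to the GPU as one batched draw.
func batches(of root: SceneNode) -> [Int: [Int]] {
    var byMaterial: [Int: [Int]] = [:]
    func walk(_ node: SceneNode) {
        switch node {
        case .group(let children):
            children.forEach(walk)
        case .drawable(let material, let mesh):
            byMaterial[material, default: []].append(mesh)
        }
    }
    walk(root)
    return byMaterial
}
```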
Abstract:
A device for providing operating system managed group communication sessions may include a memory and at least one processor. The at least one processor may be configured to receive, by an operating system level process executing on a device and from an application process executing on the device, a request to initiate a group session between a user associated with the device and another user. The at least one processor may be further configured to identify, by the operating system level process, another device associated with the other user. The at least one processor may be further configured to initiate, by the operating system level process, the group session with the other user via the other device. The at least one processor may be further configured to manage, by the operating system level process, the group session.
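A hypothetical sketch of the described flow, with an OS-level service that resolves the other user to a device and initiates the session on the application's behalf; all types and names are assumptions.

```swift
// All names are illustrative, not taken from the disclosure.
struct User { let id: String }
struct Device { let id: String; let owner: User }

protocol DeviceDirectory {
    func device(for user: User) -> Device?
}

// Runs at the operating system level, separate from the app process.
final class GroupSessionService {
    private let directory: DeviceDirectory
    init(directory: DeviceDirectory) { self.directory = directory }

    // Handles a request received from an application process.
    func initiateSession(from requester: User, with other: User) -> Bool {
        // Identify the other device associated with the other user.
        guard let peer = directory.device(for: other) else { return false }
        // ... set up transport to `peer`, then manage the session's
        // lifecycle (membership, teardown) on behalf of the app.
        print("Session initiated between \(requester.id) and \(peer.owner.id)")
        return true
    }
}
```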
Abstract:
Various implementations disclosed herein include devices, systems, and methods that provide a CGR environment in which virtual objects from one or more apps are included. User interactions with the virtual objects are detected and interpreted by a system that is separate from the apps that provide the virtual objects. The system detects user interactions received via one or more input modalities and interprets those user interactions as events. These events provide higher-level, input-modality-independent abstractions of the lower-level, input-modality-dependent user interactions that are detected. The system uses UI capability data provided by the apps to interpret user interactions with respect to the virtual objects provided by the apps. For example, the UI capability data can identify whether a virtual object is moveable, actionable, hover-able, etc., and the system interprets user interactions at or near the virtual object accordingly.
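A sketch of the modality-independent abstraction, assuming a handful of hypothetical input modalities and capability flags mirroring the "moveable/actionable/hover-able" example; none of these names come from the disclosure.

```swift
// Hypothetical modalities and capability flags.
enum RawInput { case tap, pinchWithGaze, dwellGaze }

enum UIEvent { case activate, hover }

struct UICapabilities {   // declared by the app providing the object
    let movable: Bool     // (move events omitted in this sketch)
    let actionable: Bool
    let hoverable: Bool
}

// Several low-level, modality-specific inputs collapse into one
// higher-level event, gated by the object's declared capabilities.
func interpret(_ input: RawInput, on caps: UICapabilities) -> UIEvent? {
    switch input {
    case .tap, .pinchWithGaze:
        return caps.actionable ? .activate : nil
    case .dwellGaze:
        return caps.hoverable ? .hover : nil
    }
}
```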
Abstract:
In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
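A geometric sketch of the described logic, assuming the gaze is modeled as a ray from the eye and the affordance as a sphere; the hit test and all names are illustrative assumptions.

```swift
// Gaze-as-ray and sphere-shaped affordance are modeling assumptions.
struct Gaze { var direction: SIMD3<Float>; var depth: Float }

struct Affordance {
    var center: SIMD3<Float>
    var radius: Float
    var selected = false
}

func normalize(_ v: SIMD3<Float>) -> SIMD3<Float> {
    v / (v * v).sum().squareRoot()
}

// The gaze corresponds to the affordance when the point at the gaze
// depth along the gaze ray falls within the affordance's bounds.
func gazeHits(_ gaze: Gaze, _ a: Affordance, from eye: SIMD3<Float>) -> Bool {
    let point = eye + normalize(gaze.direction) * gaze.depth
    let offset = point - a.center
    return (offset * offset).sum().squareRoot() <= a.radius
}

// A first input (pinch, press, etc.) received while the gaze is on the
// affordance selects it.
func handleInput(gaze: Gaze, affordance: inout Affordance,
                 eye: SIMD3<Float>, firstInputReceived: Bool) {
    if firstInputReceived && gazeHits(gaze, affordance, from: eye) {
        affordance.selected = true
    }
}
```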
Abstract:
Various implementations disclosed herein include devices, systems, and methods that enable a device to provide a view of virtual elements and a physical environment, where the presentation of the virtual elements is based on positioning relative to the physical environment. In one example, a device is configured to detect a change in positioning of a virtual element, for example, when a virtual element is added, moved, or the physical environment around the virtual element is changed. The location of the virtual element in the physical environment is used to detect an attribute of the physical environment upon which the presentation of the virtual element depends. Thus, the device is further configured to detect an attribute (e.g., surface, table, or mid-air) of the physical environment based on the placement of the virtual element and present the virtual element based on the detected attribute.
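A sketch of attribute detection from placement, assuming detected horizontal planes, a 5 cm proximity test, and a 0.3 m floor cutoff; thresholds and names are illustrative assumptions.

```swift
// Plane model and all thresholds are placeholder assumptions.
enum Attribute { case floor, tabletop, midAir }

struct HorizontalPlane { var height: Float }  // surface y, in meters

// The virtual element's location in the physical environment is used
// to look up the attribute its presentation depends on.
func attribute(at position: SIMD3<Float>,
               planes: [HorizontalPlane]) -> Attribute {
    // An element within 5 cm of a detected surface "rests" on it.
    if let plane = planes.first(where: { abs(position.y - $0.height) < 0.05 }) {
        return plane.height < 0.3 ? .floor : .tabletop
    }
    return .midAir
}

// Presentation then follows the detected attribute, e.g. surface-bound
// elements cast a contact shadow while mid-air ones do not.
func castsContactShadow(_ a: Attribute) -> Bool { a != .midAir }
```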