Abstract:
Various implementations disclosed herein include devices, systems, and methods that enable a device to provide a view of virtual elements and a physical environment, where the presentation of the virtual elements is based on their positioning relative to the physical environment. In one example, a device is configured to detect a change in the positioning of a virtual element, for example, when a virtual element is added or moved, or when the physical environment around the virtual element changes. The location of the virtual element in the physical environment is used to detect an attribute of the physical environment upon which the presentation of the virtual element depends. Thus, the device is further configured to detect an attribute (e.g., surface, table, mid-air, etc.) of the physical environment based on the placement of the virtual element and to present the virtual element based on the detected attribute.
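As a rough illustration of this placement-dependent presentation, the sketch below classifies the environment attribute at a virtual element's position and selects a presentation style accordingly. All names here (Attribute, classify_attribute, the plane lists, the eps tolerance, and the style strings) are hypothetical stand-ins, not the disclosed implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Attribute(Enum):
    TABLE = auto()
    WALL = auto()
    MID_AIR = auto()

@dataclass
class VirtualElement:
    position: tuple          # (x, y, z) in world coordinates
    style: str = "default"

def classify_attribute(position, horizontal_planes, vertical_planes, eps=0.05):
    """Classify the physical-environment attribute at a 3D position."""
    _x, y, z = position
    # Resting within eps of a detected horizontal plane -> table-like surface.
    if any(abs(y - plane_y) < eps for plane_y in horizontal_planes):
        return Attribute.TABLE
    # Flush against a detected vertical plane -> wall.
    if any(abs(z - plane_z) < eps for plane_z in vertical_planes):
        return Attribute.WALL
    return Attribute.MID_AIR

def on_placement_changed(element, horizontal_planes, vertical_planes):
    """Re-derive presentation when the element is added/moved or the scene changes."""
    attribute = classify_attribute(element.position, horizontal_planes,
                                   vertical_planes)
    # The presentation depends on the detected attribute.
    element.style = {
        Attribute.TABLE: "grounded_with_contact_shadow",
        Attribute.WALL: "flattened_against_wall",
        Attribute.MID_AIR: "floating",
    }[attribute]
    return attribute
```

For instance, `on_placement_changed(VirtualElement((0.2, 0.75, 1.0)), horizontal_planes=[0.75], vertical_planes=[])` would classify the element as resting on a table-like surface and style it accordingly.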
Abstract:
Implementations use a first device (e.g., an HMD) to provide a CGR environment that augments the input and output capabilities of a second device, e.g., a laptop, smart speaker, etc. In some implementations, the first device communicates with a second device in its proximate physical environment to exchange input or output data. For example, an HMD may capture an image of a physical environment that includes a laptop. The HMD may detect the laptop, send a request for the laptop's content, receive content from the laptop (e.g., the content that the laptop is currently displaying, plus additional content), identify the location of the laptop, and display a virtual object with the received content in the CGR environment on or near the laptop. The size, shape, orientation, or position of the virtual object (e.g., a virtual monitor or monitor extension) may also be configured to provide a better user experience.
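The detect-request-receive-locate-display sequence might look like the following sketch. The DetectedDevice fields, request_content callback, and scene API are placeholders introduced for illustration, not an actual HMD protocol.

```python
from dataclasses import dataclass, field

@dataclass
class ContentResponse:
    frames: bytes            # what the laptop currently displays, plus extras

@dataclass
class Scene:
    objects: list = field(default_factory=list)

    def add_virtual_monitor(self, content, position, size):
        # Stand-in for placing a textured quad in the CGR environment.
        self.objects.append(("virtual_monitor", content, position, size))

@dataclass
class DetectedDevice:
    device_id: str
    position: tuple          # laptop location in the HMD's world frame
    screen_size: tuple       # (width, height) of the physical display, meters

def augment_detected_laptop(scene, detected, request_content):
    """Detect -> request -> receive -> locate -> display, per the abstract."""
    # Request and receive the laptop's content over the local connection.
    response = request_content(detected.device_id)
    # Place a virtual monitor extension just beside the physical screen,
    # sized to match it for a comfortable viewing experience.
    x, y, z = detected.position
    extension_position = (x + detected.screen_size[0], y, z)
    scene.add_virtual_monitor(response.frames, extension_position,
                              detected.screen_size)
```

Matching the virtual monitor's size and position to the physical screen is one way the size, shape, orientation, or position could be configured as the abstract describes.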
Abstract:
The present disclosure relates generally to techniques for displaying a set of images based on captured image data. In some embodiments, a system displays a first set of images corresponding to a first perspective in a virtual reality (VR) (or mixed reality (MR)) environment. The system receives a request while displaying the first set of images. The system captures image data corresponding to a second perspective in the VR (or MR) environment in response to receiving the request. The system displays a second set of images based on the captured image data.
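A minimal sketch of this request-driven perspective switch follows; the Renderer interface and its capture() method are hypothetical stand-ins for whatever produces image data for a viewpoint.

```python
from dataclasses import dataclass, field

@dataclass
class ImageSet:
    perspective: tuple               # viewpoint (position, orientation)
    frames: list = field(default_factory=list)

class PerspectiveViewer:
    def __init__(self, renderer, first_perspective):
        self.renderer = renderer
        # Display the first set of images for the first perspective.
        self.current = ImageSet(first_perspective,
                                renderer.capture(first_perspective))

    def on_request(self, second_perspective):
        """Handle a request received while the first set is displayed."""
        # Capture image data corresponding to the second perspective...
        frames = self.renderer.capture(second_perspective)
        # ...and display the second set based on the captured data.
        self.current = ImageSet(second_perspective, frames)
        return self.current
```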
Abstract:
A device implementing a system for managing multi-modal rendering of application content includes at least one processor configured to receive content, provided by an application running on a device, for displaying in a three-dimensional display mode. The at least one processor is further configured to determine that the content corresponds to two-dimensional content. The at least one processor is further configured to identify a portion of the two-dimensional content for enhancement by a three-dimensional renderer. The at least one processor is further configured to enhance, in response to the determining, the portion of the two-dimensional content by the three-dimensional renderer. The at least one processor is further configured to provide for display of the enhanced portion of the two-dimensional content on a display of the device in the three-dimensional display mode.
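The pipeline might be sketched as below over a flat list of content nodes; the ContentNode model, the enhanceable() heuristic, and the enhance_3d callback are assumptions made for the example, not the claimed system.

```python
from dataclasses import dataclass

@dataclass
class ContentNode:
    kind: str                # e.g. "text", "image", "button"
    is_2d: bool = True
    enhanced: bool = False

def enhanceable(node):
    # Assumed heuristic: pictorial or interactive elements gain depth.
    return node.kind in ("image", "button")

def present_in_3d_mode(content, enhance_3d):
    """Walk the abstract's steps over a list of content nodes."""
    # Determine that the received content corresponds to 2D content.
    if all(node.is_2d for node in content):
        # Identify the portion suited to enhancement and hand it to the
        # three-dimensional renderer.
        for node in content:
            if enhanceable(node):
                enhance_3d(node)
    # The (partially enhanced) content is then displayed in the 3D mode.
    return content
```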
Abstract:
Systems, methods and program storage devices are disclosed, which comprise instructions to cause one or more processing units to analyze input images to a texture atlas and determine how each texture should be modified before being stored in the texture atlas to prevent undesirable drawing artifacts. For example, “tileable” images may be identified on a per-edge basis (e.g., by determining whether each edge pixel is above a certain opacity threshold). The tileable images may then be modified, e.g., by extruding a 1-pixel border identical to the outer row of pixels, before being stored in the texture atlas. “Character”-type sprites may also be identified on a per-edge basis (e.g., by determining whether each edge pixel is below the opacity threshold). The character-type sprites may then be modified by adding a single pixel transparent border around the outer rows of pixels before being stored in the texture atlas.
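A minimal sketch of the per-edge classification and padding on RGBA arrays follows; the threshold value, helper names, and the treatment of mixed-edge images are my assumptions.

```python
import numpy as np

OPACITY_THRESHOLD = 250  # assumed alpha cutoff (0-255) for "fully opaque"

def edge_is_opaque(alpha_row):
    """An edge qualifies as tileable if every pixel meets the threshold."""
    return bool(np.all(alpha_row >= OPACITY_THRESHOLD))

def prepare_for_atlas(image):
    """Pad an RGBA image (H x W x 4) before insertion into a texture atlas."""
    alpha = image[..., 3]
    edges_opaque = [
        edge_is_opaque(alpha[0, :]),     # top
        edge_is_opaque(alpha[-1, :]),    # bottom
        edge_is_opaque(alpha[:, 0]),     # left
        edge_is_opaque(alpha[:, -1]),    # right
    ]
    if all(edges_opaque):
        # "Tileable": extrude a 1-pixel border identical to the outer
        # rows/columns, so bilinear filtering at the seam never samples a
        # neighboring sprite in the atlas.
        return np.pad(image, ((1, 1), (1, 1), (0, 0)), mode="edge")
    if not any(edges_opaque):
        # "Character"-type sprite: add a 1-pixel fully transparent border.
        return np.pad(image, ((1, 1), (1, 1), (0, 0)),
                      mode="constant", constant_values=0)
    # Mixed edges: one could extrude the opaque edges and pad the
    # transparent ones per-edge; omitted here for brevity.
    return image
```

The padding addresses the artifact the abstract targets: when the GPU samples at a sprite's boundary, the filter kernel reads the duplicated (or transparent) border pixels instead of bleeding in texels from an adjacent atlas entry.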
Abstract:
Systems, methods and program storage devices are disclosed, which comprise instructions to cause one or more processing units to dynamically generate refined normal maps for 2D texture maps, e.g., supplied by a programmer or artist. Generally speaking, there are two pertinent properties to keep in balance when generating normal vectors comprising a normal map: “smoothness” and “bumpiness.” The smoothness of the normal vectors is influenced by how many neighboring pixels are involved in the “smoothening” calculation. Incorporating the influence of a greater number of neighboring pixels' values reduces the overall bumpiness of the normal map, as each pixel's value takes weight from those neighboring pixels. Thus, the techniques described herein iteratively: downsample height maps; generate normal maps; scale the normal maps to maintain bumpiness; and blend the generated scaled normal maps with generated normal maps from previous iterations—until the smoothness of the resultant normal map has reached desired levels.
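A hedged sketch of that iterative loop follows, using simple finite differences for normal generation; the box-filter downsampling, the per-iteration bump scale, and the blend weight are assumptions chosen to illustrate the smoothness/bumpiness trade-off, not the disclosed parameters.

```python
import numpy as np

def downsample(height_map):
    """Halve resolution by 2x2 box averaging (assumes even dimensions)."""
    h, w = height_map.shape
    return height_map.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def height_to_normals(height_map, bump_scale=1.0):
    """Generate per-pixel unit normals from height-map gradients."""
    dy, dx = np.gradient(height_map)
    normals = np.dstack([-dx * bump_scale, -dy * bump_scale,
                         np.ones_like(height_map)])
    return normals / np.linalg.norm(normals, axis=2, keepdims=True)

def upsample_to(normals, shape):
    """Nearest-neighbor upsample back to the full-resolution shape."""
    h, w = normals.shape[:2]
    rows = np.minimum((np.arange(shape[0]) * h) // shape[0], h - 1)
    cols = np.minimum((np.arange(shape[1]) * w) // shape[1], w - 1)
    return normals[rows][:, cols]

def refine_normal_map(height_map, iterations=3, blend=0.5):
    """Iteratively downsample, regenerate, rescale, and blend normals.

    Assumes the height map's dimensions are divisible by 2**iterations.
    Each downsampling step widens the neighborhood influencing a pixel
    (more smoothness); the growing bump_scale offsets the flattening
    that downsampling causes (preserving bumpiness).
    """
    full_shape = height_map.shape
    result = height_to_normals(height_map)
    current = height_map
    for i in range(1, iterations + 1):
        current = downsample(current)                  # widen influence
        coarse = height_to_normals(current, bump_scale=2.0 ** i)
        # Blend the smoother, coarser normals into the running result.
        result = (1 - blend) * result + blend * upsample_to(coarse, full_shape)
        result /= np.linalg.norm(result, axis=2, keepdims=True)
    return result
```

Raising `iterations` (or `blend`) yields smoother results; raising the per-iteration bump scale preserves more bumpiness, mirroring the balance the abstract describes.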
Abstract:
A mixed reality system that includes a device and a base station that communicate via a wireless connection. The device may include sensors that collect information about the user's environment and about the user. The information collected by the sensors may be transmitted to the base station via the wireless connection. The base station renders frames or slices based at least in part on the sensor information received from the device, encodes the frames or slices, and transmits the compressed frames or slices to the device for decoding and display. The base station may provide more computing power than conventional stand-alone systems, and the wireless connection does not tether the device to the base station as in conventional tethered systems. The system may implement methods and apparatus to maintain a target frame rate through the wireless link and to minimize latency in frame rendering, transmittal, and display.
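The two halves of such a split-rendering loop might be sketched as below; the radio/renderer/encoder objects, the 90 fps target, and the sleep-based rate control are illustrative assumptions, not the disclosed methods.

```python
import time

TARGET_FPS = 90                      # assumed target frame rate
FRAME_BUDGET = 1.0 / TARGET_FPS

def device_frame_loop(sensors, radio, decoder, display):
    """Runs on the headset: send sensor data, receive and show frames."""
    while True:
        start = time.monotonic()
        radio.send(sensors.sample())         # pose, environment data
        compressed = radio.receive()         # encoded frame or slices
        display.present(decoder.decode(compressed))
        # Sleep off any remaining budget to hold the target frame rate.
        remaining = FRAME_BUDGET - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)

def base_station_loop(radio, renderer, encoder):
    """Runs on the base station: render and encode from device sensor data."""
    while True:
        sensor_data = radio.receive()
        # Rendering in slices lets encoding and transmission overlap with
        # rendering, one way to reduce end-to-end latency.
        for frame_slice in renderer.render_slices(sensor_data):
            radio.send(encoder.encode(frame_slice))
```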
Abstract:
A first device coupled with a first display and an image sensor receives output data from a second device having a second display different from the first display. The output data represents content displayable by the second device on the second display. The first device determines, using the image sensor, a position of the second display relative to the first device and causes the first display to display content based on the output data received from the second device and the determined position of the second display relative to the first device.
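The position-dependent display step reduces to composing the detected display pose into the first device's frame and rendering the received content registered to that pose; the 4x4-matrix representation and function names below are assumptions for illustration.

```python
import numpy as np

def display_pose_in_device_frame(camera_to_device, display_in_camera):
    """Compose the second display's pose into the first device's frame.

    camera_to_device: 4x4 transform from the image sensor's frame to the
    first device's frame. display_in_camera: 4x4 pose of the detected
    second display in the sensor's frame (e.g., obtained by detecting the
    screen in a captured image). Both inputs are illustrative.
    """
    return camera_to_device @ display_in_camera

def present(first_display, output_data, display_pose):
    # Draw the received content registered to the second display's pose,
    # e.g., overlaying the physical screen or extending beside it.
    first_display.draw(output_data, transform=display_pose)
```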