Abstract:
Systems, methods and program storage devices are disclosed, which comprise instructions to cause one or more processing units to analyze input images to a texture atlas and determine how each texture should be modified before being stored in the texture atlas to prevent undesirable drawing artifacts. For example, “tileable” images may be identified on a per-edge basis (e.g., by determining whether each edge pixel is above a certain opacity threshold). The tileable images may then be modified, e.g., by extruding a 1-pixel border identical to the outer row of pixels, before being stored in the texture atlas. “Character”-type sprites may also be identified on a per-edge basis (e.g., by determining whether each edge pixel is below the opacity threshold). The character-type sprites may then be modified by adding a single-pixel transparent border around the outer rows of pixels before being stored in the texture atlas.
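To make the per-edge test concrete, here is a minimal sketch in Python, assuming RGBA images as NumPy arrays of shape (H, W, 4) and an illustrative 8-bit opacity threshold; the function names and the whole-image padding decision are assumptions, not details from the abstract:

```python
# A minimal sketch of the per-edge classification and the two border
# treatments described in the abstract. OPACITY_THRESHOLD and all names
# here are illustrative assumptions.
import numpy as np

OPACITY_THRESHOLD = 128  # assumed 8-bit alpha cutoff for "opaque enough"

def classify_edges(image: np.ndarray) -> dict:
    """Label each edge 'tileable' (every edge pixel at/above the threshold),
    'character' (every edge pixel below it), or 'mixed'."""
    alpha = image[..., 3]
    strips = {"top": alpha[0, :], "bottom": alpha[-1, :],
              "left": alpha[:, 0], "right": alpha[:, -1]}
    labels = {}
    for name, strip in strips.items():
        if np.all(strip >= OPACITY_THRESHOLD):
            labels[name] = "tileable"
        elif np.all(strip < OPACITY_THRESHOLD):
            labels[name] = "character"
        else:
            labels[name] = "mixed"
    return labels

def pad_for_atlas(image: np.ndarray) -> np.ndarray:
    """Extrude a 1-pixel copy of the outer pixels on tileable images;
    otherwise add a 1-pixel fully transparent border (character sprites)."""
    labels = classify_edges(image)
    if all(v == "tileable" for v in labels.values()):
        return np.pad(image, ((1, 1), (1, 1), (0, 0)), mode="edge")
    return np.pad(image, ((1, 1), (1, 1), (0, 0)),
                  mode="constant", constant_values=0)
```

The extruded border keeps bilinear sampling at tile seams from bleeding in atlas neighbors, while the transparent border does the same for free-standing sprites.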
Abstract:
A method may include receiving, at an artificial intelligence controller, a communication from a device including state information for a software application component running on the device, the state information including information corresponding to at least one potential state change available to the software application component and metrics associated with at least one end condition; interpreting the state information using the artificial intelligence controller; selecting an artificial intelligence algorithm from a plurality of artificial intelligence algorithms for use by the software application component based on the interpreted state information; and transmitting, to the device, an artificial intelligence algorithm communication indicating the selected artificial intelligence algorithm for use in the software application component on the device.
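A minimal controller-side sketch follows, assuming the state information arrives as a simple record and that selection is a heuristic over branching factor and an end-condition metric; the field names, algorithm names, and heuristic are illustrative assumptions, not taken from the abstract:

```python
# Hypothetical controller that interprets reported state and selects one
# algorithm from a plurality for the device-side component.
from dataclasses import dataclass

@dataclass
class StateInfo:
    potential_state_changes: list   # state changes available to the component
    end_condition_metrics: dict     # e.g. {"time_remaining": 12.0}

def select_algorithm(state: StateInfo) -> str:
    """Interpret the reported state and pick one algorithm."""
    branching = len(state.potential_state_changes)
    time_remaining = state.end_condition_metrics.get("time_remaining", float("inf"))
    if time_remaining < 1.0:
        return "rule_based"              # no time budget: cheap fixed policy
    if branching <= 4:
        return "minimax"                 # small search space: search deeply
    return "monte_carlo_tree_search"     # wide search space: sample instead

def handle_device_message(state: StateInfo) -> dict:
    """Build the algorithm communication transmitted back to the device."""
    return {"selected_algorithm": select_algorithm(state)}
```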
Abstract:
A method includes determining an eye focus depth and determining a focus point relative to a viewing location in a virtual environment based on the eye focus depth, wherein the virtual environment includes a computer-generated object. The method also includes, upon determining that the focus point is located within a threshold distance from the computer-generated object, activating a function of a computer-executable code development interface relative to the computer-generated object.
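A short sketch of the activation check, assuming the gaze direction is a unit vector in the virtual environment's coordinates; the helper names and the default threshold value are illustrative assumptions:

```python
# Compute the focus point along the gaze ray at the measured eye focus
# depth, then test whether it lands within a threshold of an object.
import math

def focus_point(view_origin, gaze_dir, focus_depth):
    """Point at the eye focus depth along the gaze ray from the viewer."""
    return tuple(o + d * focus_depth for o, d in zip(view_origin, gaze_dir))

def should_activate(point, object_center, threshold=0.1):
    """True when the focus point is within the threshold distance."""
    return math.dist(point, object_center) <= threshold
```

With these helpers, a code development interface would activate its function for a computer-generated object (e.g., show an inspector) whenever should_activate returns true for that object.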
Abstract:
Systems and methods for simulated reality view-based breakpoints are described. Some implementations may include accessing motion data captured using one or more motion sensors; determining, based at least on the motion data, a view within a simulated reality environment presented using a head-mounted display; detecting that the view is a member of a set of views associated with a breakpoint; based at least on the view being a member of the set of views, triggering the breakpoint; responsive to the breakpoint being triggered, performing a debug action associated with the breakpoint; and, while performing the debug action, continuing to execute a simulation process of the simulated reality environment to enable a state of at least one virtual object in the simulated reality environment to continue to evolve and be viewed with the head-mounted display.
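Below is a hedged sketch of the per-frame breakpoint check, assuming a view can be summarized as a (position, gaze direction) pair derived from the motion data; the ViewRegion membership test, the Breakpoint shape, and the simulation object's step() method are illustrative assumptions:

```python
# Trigger a breakpoint when the current view falls inside its set of views,
# while the simulation process keeps evolving the environment.
import math
from dataclasses import dataclass
from typing import Callable

@dataclass
class ViewRegion:
    center: tuple          # where the viewer must be standing
    radius: float
    direction: tuple       # required gaze direction (unit vector)
    max_angle_rad: float   # allowed angular deviation from that direction

@dataclass
class Breakpoint:
    regions: list          # the "set of views" associated with the breakpoint
    debug_action: Callable # e.g. pause execution of one object's process

def view_matches(position, gaze, region: ViewRegion) -> bool:
    """True when the current view is a member of the region."""
    if math.dist(position, region.center) > region.radius:
        return False
    cos_angle = sum(g * d for g, d in zip(gaze, region.direction))
    return cos_angle >= math.cos(region.max_angle_rad)

def tick(position, gaze, breakpoints, simulation):
    for bp in breakpoints:
        if any(view_matches(position, gaze, r) for r in bp.regions):
            bp.debug_action()  # triggered: perform the associated debug action
    simulation.step()          # the environment keeps evolving while debugging
```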
Abstract:
Various implementations disclosed herein include devices, systems, and methods that enable two or more devices to simultaneously view or edit the same 3D model in the same or different settings/viewing modes (e.g., monoscopically, stereoscopically, in SR, etc.). In an example, one or more users are able to use different devices to interact in the same setting to view or edit the same 3D model using different views from different viewpoints. The devices can each display different views from different viewpoints of the same 3D model and, as changes are made to the 3D model, consistency of the views on the devices is maintained.
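A rough observer-style sketch of how consistency could be maintained, assuming a single shared model broadcasts every edit to all registered devices, each of which re-renders from its own viewpoint and viewing mode; the class and method names are illustrative assumptions:

```python
# One shared 3D model; each device renders it from its own viewpoint and
# mode, and every edit is pushed to all observers so views stay consistent.
class SharedModel:
    def __init__(self):
        self.geometry = {}       # the one 3D model all devices view or edit
        self.observers = []      # devices currently registered

    def register(self, device):
        self.observers.append(device)
        device.render(self.geometry)

    def apply_edit(self, edit: dict):
        self.geometry.update(edit)        # change the single source of truth
        for device in self.observers:
            device.render(self.geometry)  # every view stays consistent

class Device:
    def __init__(self, viewpoint, mode):
        self.viewpoint = viewpoint        # each device has its own viewpoint
        self.mode = mode                  # e.g. "monoscopic", "stereoscopic", "SR"

    def render(self, geometry):
        # Project `geometry` from self.viewpoint in self.mode (rendering omitted).
        pass
```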
Abstract:
A method of assembling a tile map can include assigning each tile in a plurality of tiles to one or more color groups in correspondence with a measure of a color profile of the respective tile. A position of each tile in relation to one or more neighboring tiles can be determined from a position of a silhouette corresponding to each respective tile in relation to one or more neighboring silhouettes within a set containing a plurality of silhouettes. The plurality of tiles can be automatically assembled into a tile map, with a position of each tile in the tile map being determined from the color group to which the respective tile belongs and the determined position of the respective tile in relation to the one or more neighboring tiles. Tangible, non-transitory computer-readable media can include computer-executable instructions that, when executed, cause a computing environment to implement disclosed methods.
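A hedged sketch of the two inputs to placement: a coarse color-group measure (here, the quantized mean RGB of each tile) and the silhouette-derived grid position of each tile. The quantization scheme and data shapes are illustrative assumptions:

```python
# Group tiles by a coarse color-profile measure, then place each tile at
# the grid cell its silhouette occupies among neighboring silhouettes.
import numpy as np
from collections import defaultdict

def color_group(tile: np.ndarray, levels: int = 4) -> tuple:
    """Bucket a tile by quantizing its mean RGB into levels**3 groups."""
    mean_rgb = tile.reshape(-1, tile.shape[-1])[:, :3].mean(axis=0)
    return tuple((mean_rgb // (256 / levels)).astype(int))

def assemble(tiles: dict, silhouette_positions: dict) -> dict:
    """Assemble the tile map. Color groups narrow the candidates;
    silhouette positions fix each tile's final placement."""
    groups = defaultdict(list)
    for tile_id, tile in tiles.items():
        groups[color_group(tile)].append(tile_id)
    tile_map = {}
    for members in groups.values():
        for tile_id in members:
            tile_map[silhouette_positions[tile_id]] = tile_id  # (row, col) -> tile
    return tile_map
```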
Abstract:
Systems, methods, and computer-readable media for enabling efficient control of a media application at a media electronic device by a user electronic device are provided.
Abstract:
Systems, methods, and computer-readable media to improve the operation of graphics systems are described. In general, collision avoidance techniques are disclosed that operate even when the agent lacks a priori knowledge of its environment and are, further, agnostic as to whether the environment is two-dimensional (2D) or three-dimensional (3D), whether the obstacles are convex or concave, or whether the obstacles are moving or stationary. More particularly, techniques disclosed herein use simple geometry to identify with which edges of which obstacles an agent is most likely to collide. With this known, the direction of an avoidance force is also known. The magnitude of the force may be fixed, based on the agent's maximum acceleration, and modulated by weighting agents.
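A simplified 2D sketch of the geometric test: find the point on each obstacle edge closest to the agent, steer directly away from the nearest one, and cap the force magnitude at the agent's maximum acceleration. Edges are assumed to be non-degenerate line segments, and all names here are illustrative:

```python
# Identify the most threatening obstacle edge by proximity and build an
# avoidance force pointing away from it with a fixed magnitude.
import math

def closest_point_on_segment(p, a, b):
    """Closest point to p on segment ab (a != b assumed)."""
    ax, ay = a; bx, by = b; px, py = p
    abx, aby = bx - ax, by - ay
    t = ((px - ax) * abx + (py - ay) * aby) / (abx * abx + aby * aby)
    t = max(0.0, min(1.0, t))
    return (ax + t * abx, ay + t * aby)

def avoidance_force(agent_pos, edges, max_accel):
    """Direction: away from the nearest edge.
    Magnitude: fixed, based on the agent's maximum acceleration."""
    nearest = min(
        (closest_point_on_segment(agent_pos, a, b) for a, b in edges),
        key=lambda q: math.dist(agent_pos, q),
    )
    dx, dy = agent_pos[0] - nearest[0], agent_pos[1] - nearest[1]
    norm = math.hypot(dx, dy) or 1.0   # avoid division by zero on contact
    return (max_accel * dx / norm, max_accel * dy / norm)
```

The same closest-point test extends to 3D and to concave obstacles decomposed into edges, which is what makes the approach agnostic to obstacle shape and dimensionality.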