Abstract:
A method includes determining an eye focus depth and determining a focus point relative to a viewing location in a virtual environment based on the eye focus depth, wherein the virtual environment includes a computer-generated object. The method also includes, upon determining that the focus point is located within a threshold distance from the computer-generated object, activating a function of a computer-executable code development interface relative to the computer-generated object.
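As a rough illustration of the threshold check described in this abstract, the sketch below projects a focus point along the gaze ray at the measured eye focus depth and reports whether a development-interface function should be activated. All names (Point3D, maybe_activate_dev_function, the gaze-ray model itself) are hypothetical; the abstract does not specify an implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Point3D:
    x: float
    y: float
    z: float

def focus_point(view_origin: Point3D, gaze_direction: Point3D, eye_focus_depth: float) -> Point3D:
    """Project the focus point along the normalized gaze ray at the measured focus depth."""
    norm = math.sqrt(gaze_direction.x**2 + gaze_direction.y**2 + gaze_direction.z**2)
    return Point3D(
        view_origin.x + gaze_direction.x / norm * eye_focus_depth,
        view_origin.y + gaze_direction.y / norm * eye_focus_depth,
        view_origin.z + gaze_direction.z / norm * eye_focus_depth,
    )

def maybe_activate_dev_function(focus: Point3D, obj_position: Point3D, threshold: float) -> bool:
    """Activate the code-development function when the focus point is within the threshold distance."""
    distance = math.dist((focus.x, focus.y, focus.z), (obj_position.x, obj_position.y, obj_position.z))
    return distance <= threshold
```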
Abstract:
Systems, methods, and computer-readable media are provided for enabling efficient control of a media application at a media electronic device by a user electronic device and, more particularly, for reducing the perceived latency of, and/or the input response time to, control data that may be provided by a user electronic device for a media application running on a media electronic device.
Abstract:
Systems, methods, and computer-readable media for enabling efficient control of a media application at a media electronic device by a user electronic device are provided. For example, a user control data request may be generated by a device application of the media electronic device based on a media control data request received from the media application. The user control data request may be utilized by a controller application of the user electronic device to update the status of one or more components of the user electronic device and/or to communicate user control data back to the device application, and such user control data may in turn be utilized by the device application to generate corresponding media control data for use by the media application (e.g., to control game play of a video game).
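A minimal sketch of that request/response round trip follows; the message types and field names (MediaControlDataRequest, components_to_enable, and so on) are illustrative placeholders, not the actual protocol described in the abstract.

```python
from dataclasses import dataclass

@dataclass
class MediaControlDataRequest:      # issued by the media application (e.g., a game)
    wanted_inputs: list             # e.g., ["button_a", "left_stick"]

@dataclass
class UserControlDataRequest:       # derived by the device application
    components_to_enable: list

@dataclass
class UserControlData:              # produced by the controller application
    component_values: dict

def device_application_forward(req: MediaControlDataRequest) -> UserControlDataRequest:
    """Translate the media app's request into a request the user electronic device understands."""
    return UserControlDataRequest(components_to_enable=list(req.wanted_inputs))

def controller_application_respond(req: UserControlDataRequest) -> UserControlData:
    """Update the status of the requested components and report their current values."""
    return UserControlData(component_values={c: 0.0 for c in req.components_to_enable})

def device_application_return(data: UserControlData) -> dict:
    """Convert user control data into media control data for the media application."""
    return {"media_control_data": data.component_values}
```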
Abstract:
This disclosure relates generally to the field of image processing and, more particularly, to various techniques and animation tools that allow 2D and 3D graphics rendering and animation infrastructures to dynamically render customized animations without the customized animations being explicitly tied to any particular graphical entity. These so-called entity agnostic animations may then be integrated into “mixed” graphical scenes (i.e., scenes with both two-dimensional and three-dimensional components), where they may be applied to any suitable graphical entity, visualized in real time by the programmer, edited dynamically by the programmer, and shared across various computing platforms and environments that support the entity agnostic animation tools described herein. The entity agnostic animations created by these techniques may be output directly to the current scene file that a programmer is working on, or they may be output to standalone, reusable entity agnostic animation object files.
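The sketch below illustrates the core idea of an animation defined purely against property names, so the same animation object can be applied to any 2D or 3D entity exposing those properties. The classes and the keyframe representation are assumptions for illustration, not the actual file format or tool API.

```python
class EntityAgnosticAnimation:
    """Keyframes reference property names only, so the animation is not tied to any entity."""

    def __init__(self, keyframes):
        # keyframes: {time: {property_name: value}}, kept sorted by time
        self.keyframes = dict(sorted(keyframes.items()))

    def apply(self, entity, t):
        """Set each animated property on the entity to the value of the latest keyframe <= t."""
        for time, props in self.keyframes.items():
            if time <= t:
                for name, value in props.items():
                    setattr(entity, name, value)

class Sprite2D:
    # Any entity (2D or 3D) with matching attribute names can receive the animation.
    position = (0, 0)
    opacity = 1.0

fade_out = EntityAgnosticAnimation({0.0: {"opacity": 1.0}, 1.0: {"opacity": 0.0}})
fade_out.apply(Sprite2D(), t=1.0)   # the same fade_out object could be applied to a 3D node
```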
Abstract:
The subject technology provides for parsing a line of code in a project of an integrated development environment (IDE). The subject technology indirectly executes the parsed line of code using an interpreter. The interpreter references a translated source code document generated by a source code translation component from a machine learning (ML) document written in a particular data format. The translated source code document includes code in a chosen programming language specific to the IDE, and that code is executable by the interpreter. Further, the subject technology provides, by the interpreter, an output of the executed parsed line of code.
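A hedged sketch of that flow, assuming a toy ML document format and a trivial translator, might look like the following; the translator output, the interpreter's namespace contents, and the _output convention are all hypothetical, chosen only to show the translate-then-interpret relationship.

```python
def translate_ml_document(ml_document: dict) -> str:
    """Source code translation component: turn a declarative ML document into source code."""
    return f"model = build_model(layers={ml_document.get('layers', [])})"

class Interpreter:
    def __init__(self, translated_source: str):
        # The interpreter references the translated source code document so that
        # lines parsed from the IDE project can use the symbols it defines.
        self.namespace = {"build_model": lambda layers: {"layers": layers}}
        exec(translated_source, self.namespace)

    def execute_line(self, line_of_code: str):
        """Indirectly execute one parsed line from the IDE project and return its output."""
        exec(line_of_code, self.namespace)
        return self.namespace.get("_output")

interp = Interpreter(translate_ml_document({"layers": [64, 10]}))
print(interp.execute_line("_output = model['layers']"))   # -> [64, 10]
```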
Abstract:
A method may include receiving, at an artificial intelligence cloud service, a plurality of artificial intelligence feedback communications from a plurality of devices, wherein each artificial intelligence feedback communication includes data generated by software application components running on respective ones of the plurality of devices, the software application components including respective current artificial intelligence models. The method may further include deriving, from the data included with each artificial intelligence feedback communication, an associated artificial intelligence model update for each of the respective current artificial intelligence models, and transmitting, to the plurality of devices, a plurality of artificial intelligence model update communications, wherein each artificial intelligence model update communication includes the derived associated artificial intelligence model update for updating a corresponding one of the respective current artificial intelligence models on the plurality of devices.
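One deliberately simplified reading of that feedback/update loop is sketched below, assuming each device reports gradients and the cloud service turns them into per-device weight deltas; the aggregation rule, the learning rate, and the message shapes are illustrative assumptions, not the patented mechanism.

```python
def derive_model_updates(feedback_communications):
    """Cloud side: derive one model update per device from the data each device reported."""
    updates = {}
    for feedback in feedback_communications:
        device_id = feedback["device_id"]
        gradients = feedback["gradients"]   # data generated by the on-device application components
        # Assumed rule: a simple gradient step with a fixed learning rate of 0.01.
        updates[device_id] = {name: -0.01 * g for name, g in gradients.items()}
    return updates

def apply_update(current_model, update):
    """Device side: apply the received update to the current artificial intelligence model."""
    return {name: current_model[name] + update.get(name, 0.0) for name in current_model}

updates = derive_model_updates([{"device_id": "dev-1", "gradients": {"w0": 0.5, "w1": -0.2}}])
new_model = apply_update({"w0": 1.0, "w1": 2.0}, updates["dev-1"])
```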
Abstract:
Systems, methods, and computer-readable media for enabling efficient control of a media application at a media electronic device by a user electronic device are provided.
Abstract:
Systems, methods, and program storage devices are disclosed, which comprise instructions to cause one or more processing units to analyze input images to a texture atlas and determine how each texture should be modified before being stored in the texture atlas to prevent undesirable drawing artifacts. For example, “tileable” images may be identified on a per-edge basis (e.g., by determining whether each edge pixel is above a certain opacity threshold). The tileable images may then be modified, e.g., by extruding a 1-pixel border identical to the outer row of pixels, before being stored in the texture atlas. “Character”-type sprites may also be identified on a per-edge basis (e.g., by determining whether each edge pixel is below the opacity threshold). The character-type sprites may then be modified by adding a single-pixel transparent border around the outer rows of pixels before being stored in the texture atlas.
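The edge-opacity test and the two padding strategies might be sketched as follows; this simplified version classifies the whole image from its four edges rather than handling each edge independently, and the threshold value and helper names are assumptions for illustration.

```python
# Pixels are (r, g, b, a) tuples with alpha in 0..255; an image is a list of rows.
OPACITY_THRESHOLD = 128   # assumed cutoff between "tileable" and "character"-type edges

def edge_is_opaque(edge_pixels):
    """An edge counts as tileable if every pixel on it is at or above the opacity threshold."""
    return all(p[3] >= OPACITY_THRESHOLD for p in edge_pixels)

def classify_and_pad(image):
    """Return the image padded by one pixel, using extrusion or transparency per its edges."""
    top, bottom = image[0], image[-1]
    left = [row[0] for row in image]
    right = [row[-1] for row in image]
    if all(edge_is_opaque(e) for e in (top, bottom, left, right)):
        # Tileable: extrude a 1-pixel border identical to the outer rows/columns.
        padded = [[row[0]] + row + [row[-1]] for row in image]
        padded = [padded[0]] + padded + [padded[-1]]
    else:
        # Character-type: add a 1-pixel fully transparent border instead.
        clear = (0, 0, 0, 0)
        padded = [[clear] + row + [clear] for row in image]
        width = len(padded[0])
        padded = [[clear] * width] + padded + [[clear] * width]
    return padded
```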