Abstract:
Techniques to automatically generate a navigation graph for a given environment and agent are disclosed. The environment may include an arbitrary number of arbitrarily arranged polygonal obstacles, concave or convex, static or dynamic. The disclosed operation extrudes (in a specified manner) the vertices of each obstacle. The extruded vertices comprise the navigation graph's nodes. Each obstacle's extruded vertices may be joined to form a corresponding extruded obstacle. Paths may then be identified by attempting to connect every extruded vertex with every other extruded vertex. Paths intersecting any of the extruded obstacles are rejected as possible paths. In some embodiments, paths oriented in approximately the same direction and having approximately the same length may be removed as redundant.
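For illustration, a minimal Swift sketch of this construction under stated assumptions: the `Polygon` type, the centroid-based extrusion, and the proper-crossing intersection test are hypothetical choices, since the abstract leaves the "specified manner" of extrusion open. A complete implementation would also reject segments that pass through an extruded obstacle's interior, which this sketch omits.

```swift
import Foundation

// Hypothetical geometry types; the patent does not fix a representation.
struct Point { var x, y: Double }
struct Segment { var a, b: Point }
struct Polygon { var vertices: [Point] }

// Extrude each vertex outward from the polygon's centroid by `radius`,
// one possible "specified manner" of extrusion.
func extrude(_ polygon: Polygon, by radius: Double) -> Polygon {
    let n = Double(polygon.vertices.count)
    let c = polygon.vertices.reduce(Point(x: 0, y: 0)) {
        Point(x: $0.x + $1.x / n, y: $0.y + $1.y / n)
    }
    return Polygon(vertices: polygon.vertices.map { v in
        let d = hypot(v.x - c.x, v.y - c.y)
        let s = d > 0 ? (d + radius) / d : 1
        return Point(x: c.x + (v.x - c.x) * s, y: c.y + (v.y - c.y) * s)
    })
}

// Proper segment-crossing test; shared endpoints do not count as crossings,
// so candidate edges that merely touch an extruded vertex are kept.
func crosses(_ s1: Segment, _ s2: Segment) -> Bool {
    func orient(_ o: Point, _ a: Point, _ b: Point) -> Double {
        (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x)
    }
    return orient(s2.a, s2.b, s1.a) * orient(s2.a, s2.b, s1.b) < 0
        && orient(s1.a, s1.b, s2.a) * orient(s1.a, s1.b, s2.b) < 0
}

// Try to connect every extruded vertex with every other; reject candidates
// that cross any extruded obstacle's boundary.
func navigationEdges(obstacles: [Polygon], radius: Double) -> [Segment] {
    let extruded = obstacles.map { extrude($0, by: radius) }
    let nodes = extruded.flatMap { $0.vertices }
    var edges: [Segment] = []
    for i in nodes.indices {
        for j in nodes.indices where j > i {
            let candidate = Segment(a: nodes[i], b: nodes[j])
            let blocked = extruded.contains { poly in
                poly.vertices.indices.contains { k in
                    let next = (k + 1) % poly.vertices.count
                    return crosses(candidate,
                                   Segment(a: poly.vertices[k],
                                           b: poly.vertices[next]))
                }
            }
            if !blocked { edges.append(candidate) }
        }
    }
    return edges
}
```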
Abstract:
Systems, methods and program storage devices are disclosed, which comprise instructions to cause one or more processing units to dynamically render 3D lighting effects for a supplied 2D texture map—without the need for a programmer to supply a normal map along with the 2D texture map. According to some embodiments, an algorithm may inspect the pixel values (e.g., RGB values) of each individual pixel of the texture map, and, based on the pixel values, can accurately estimate where the lighting and shadow effects should be applied to the source 2D texture file to simulate 3D lighting. Further, because these effects are being rendered dynamically by the rendering and animation infrastructure, the techniques described herein work especially well for “dynamic content,” e.g., user-downloaded data, in-application user-created content, operating system (OS) icons, and other user interface (UI) elements for which programmers do not have access to normal maps a priori.
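A common way to realize this kind of estimation, offered here only as a sketch of the general technique rather than the patented algorithm, is to treat per-pixel luminance as a height field and derive normals from its gradients; the `Texture` type and the `strength` parameter are assumptions.

```swift
import Foundation

// Hypothetical grayscale view of a texture; real code would combine the
// RGB channels of each pixel into a luminance value first.
struct Texture {
    var width: Int, height: Int
    var luminance: [Double]                       // row-major, in 0...1
    func sample(_ x: Int, _ y: Int) -> Double {   // clamped at the borders
        luminance[min(max(y, 0), height - 1) * width
                + min(max(x, 0), width - 1)]
    }
}

// Treat luminance as a height field and estimate a unit normal per pixel
// from central differences; `strength` exaggerates the apparent relief.
func estimateNormals(_ tex: Texture, strength: Double = 2.0)
    -> [(x: Double, y: Double, z: Double)] {
    var normals: [(x: Double, y: Double, z: Double)] = []
    normals.reserveCapacity(tex.width * tex.height)
    for y in 0..<tex.height {
        for x in 0..<tex.width {
            let dx = (tex.sample(x + 1, y) - tex.sample(x - 1, y)) * strength
            let dy = (tex.sample(x, y + 1) - tex.sample(x, y - 1)) * strength
            let len = (dx * dx + dy * dy + 1).squareRoot()
            normals.append((x: -dx / len, y: -dy / len, z: 1 / len))
        }
    }
    return normals
}
```

The estimated normals can then feed a standard Lambertian (N·L) lighting computation at render time, which is what lets the effects be applied dynamically without a precomputed normal map.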
Abstract:
A method includes determining an eye focus depth and determining a focus point relative to a viewing location in a virtual environment based on the eye focus depth, wherein the virtual environment includes a computer-generated object. The method also includes, upon determining that the focus point is located within a threshold distance from the computer-generated object, activating a function of a computer-executable code development interface relative to the computer-generated object.
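A minimal sketch of that flow, assuming the focus point lies along a unit gaze direction at the measured depth; the `VirtualObject` type and the activation callback are hypothetical stand-ins for whatever the development interface exposes.

```swift
import simd

// Hypothetical scene object; the abstract does not define a concrete API.
struct VirtualObject { var position: SIMD3<Float> }

// Place the focus point along the gaze ray at the measured eye focus depth,
// then activate the development-interface function for any object lying
// within the threshold distance of that point.
func handleGaze(viewingLocation: SIMD3<Float>,
                gazeDirection: SIMD3<Float>,       // assumed unit length
                eyeFocusDepth: Float,
                objects: [VirtualObject],
                threshold: Float,
                activate: (VirtualObject) -> Void) {
    let focusPoint = viewingLocation + gazeDirection * eyeFocusDepth
    for object in objects
        where simd_distance(focusPoint, object.position) < threshold {
        activate(object)
    }
}
```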
Abstract:
The subject technology provides for parsing a line of code in a project of an integrated development environment (IDE). The subject technology executes the parsed line of code indirectly, using an interpreter. The interpreter references a translated source code document generated by a source code translation component from a machine learning (ML) document written in a particular data format. The translated source code document includes code in a chosen programming language specific to the IDE, and the code of the translated source code document is executable by the interpreter. Further, the subject technology provides, by the interpreter, an output of the executed parsed line of code.
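A structural sketch of the indirection described above, with an entirely hypothetical `TranslatedDocument` shape; a real translation component and parser would of course be far richer than this lookup table.

```swift
import Foundation

// Hypothetical shape for the translated document: each logical unit of the
// ML document becomes one executable statement in the IDE's language.
struct TranslatedDocument {
    var statements: [String: () -> String]
}

// The interpreter parses a line, resolves it against the translated
// document, executes it indirectly, and returns output for the IDE.
struct Interpreter {
    let translation: TranslatedDocument

    func execute(line: String) -> String {
        let name = line.trimmingCharacters(in: .whitespaces)   // toy "parse"
        guard let statement = translation.statements[name] else {
            return "error: '\(name)' has no counterpart in the translation"
        }
        return statement()
    }
}
```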
Abstract:
One exemplary implementation involves performing operations at a device, such as a desktop computer, laptop computer, tablet, or mobile phone, with one or more processors, a camera, and a computer-readable storage medium. The device receives a data object corresponding to three-dimensional (3D) content from a separate device. The device receives input corresponding to a user selection to view the 3D content in a computer-generated reality (CGR) environment and, in response, displays the CGR environment at the device. To display the CGR environment, the device uses the camera to capture images and constructs the CGR environment using the data object and the captured images.
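A skeletal Swift sketch of that flow; every type here is a placeholder, since the abstract names no concrete mesh, camera, or display APIs.

```swift
// Placeholder types standing in for real mesh, camera, and display APIs.
struct DataObject3D {}                 // 3D content received from a peer
struct CameraFrame {}                  // one captured camera image

protocol Camera { func captureFrame() -> CameraFrame }

struct CGREnvironment {
    var background: CameraFrame        // live imagery of the surroundings
    var content: DataObject3D          // the received 3D content
}

// Invoked after the user selects "view in CGR": capture an image and
// construct the environment from the data object plus the captured frame.
func presentCGR(dataObject: DataObject3D,
                camera: Camera,
                display: (CGREnvironment) -> Void) {
    let frame = camera.captureFrame()
    display(CGREnvironment(background: frame, content: dataObject))
}
```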
Abstract:
Systems, methods, and computer-readable media are provided for enabling efficient control of a media application at a media electronic device by a user electronic device and, more particularly, for handling initial and subsequent user touch events on a surface of a touchpad input component with respect to a potentially intended default center position and/or for more accurately enabling full saturation of a particular directional control.
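One plausible reading, sketched under assumptions: the first touch is taken as the user's intended center, and subsequent displacement is normalized and clamped so that full saturation remains reachable in every direction. All names, and the clamping rule itself, are illustrative rather than the patented behavior.

```swift
import Foundation

// Hypothetical touch-to-control mapping: the first contact defines the
// intended center; movement is measured relative to it and clamped so
// full saturation is reachable in every direction.
struct DirectionalControl {
    private var center: (x: Double, y: Double)?
    let range: Double                  // physical distance for 100% deflection

    init(range: Double) { self.range = range }

    mutating func touchDown(x: Double, y: Double) {
        center = (x, y)                // treat initial contact as the center
    }

    func touchMoved(x: Double, y: Double) -> (dx: Double, dy: Double) {
        guard let c = center else { return (0, 0) }
        let dx = (x - c.x) / range, dy = (y - c.y) / range
        let mag = hypot(dx, dy)
        // Clamp to the unit circle: anything at or beyond `range` saturates.
        return mag > 1 ? (dx / mag, dy / mag) : (dx, dy)
    }

    mutating func touchUp() { center = nil }
}
```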
Abstract:
Systems and techniques for generating an artificial terrain map can compute a region of a noise map in an N-dimensional space and define a terrain characteristic in correspondence with a value of the noise map at each of a selected plurality of positions within the region of the noise map. The terrain characteristic can be projected at each of a selected plurality of positions within the region on a lower-dimensional sub-space. A map of an artificial terrain can be rendered based on the projection. The map of the artificial terrain can be scaled or otherwise manipulated in correspondence with scaling or otherwise manipulating the lower-dimensional sub-space. Generated maps in machine-readable form can be converted to a human-perceivable form, and/or to a modulated signal form conveyed over a communication connection.
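A toy Swift sketch of sampling a 3D noise map and projecting a fixed-z slice onto a 2D height grid, where scaling the sampled region corresponds to scaling the rendered map. The hash-based noise function is a placeholder assumption; real terrain would use smooth, coherent noise.

```swift
import Foundation

// Placeholder deterministic noise; a production system would use smooth,
// coherent noise (Perlin/simplex) so neighboring samples vary gradually.
func noise(_ x: Double, _ y: Double, _ z: Double) -> Double {
    let n = sin(x * 12.9898 + y * 78.233 + z * 37.719) * 43758.5453
    return n - n.rounded(.down)        // pseudo-random value in [0, 1)
}

// Sample an N-dimensional noise map (N = 3 here) over a region and project
// it onto a 2D sub-space (a fixed-z slice), using each value as a terrain
// height. Changing `scale` or `origin` rescales or pans the rendered map.
func terrainHeights(origin: (x: Double, y: Double),
                    scale: Double,
                    slice z: Double,
                    size: Int) -> [[Double]] {
    (0..<size).map { row in
        (0..<size).map { col in
            noise(origin.x + Double(col) * scale,
                  origin.y + Double(row) * scale,
                  z)
        }
    }
}
```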
Abstract:
A method may include receiving, at an artificial intelligence controller, a communication from a device that includes state information for a software application component running on the device, the state information including information corresponding to at least one potential state change available to the software application component and metrics associated with at least one end condition; interpreting the state information using the artificial intelligence controller; selecting an artificial intelligence algorithm from a plurality of artificial intelligence algorithms for use by the software application component based on the interpreted state information; and transmitting, to the device, an artificial intelligence algorithm communication indicating the selected artificial intelligence algorithm for use in the software application component on the device.
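A hedged sketch of the controller-side selection step; the message shapes, the algorithm catalogue, and the branching heuristic are all invented for illustration, since the abstract fixes neither a wire format nor a selection policy.

```swift
// Hypothetical message shapes; the abstract fixes neither a wire format
// nor a concrete catalogue of algorithms.
struct StateInfo: Codable {
    var potentialStateChanges: [String]       // state changes available now
    var endConditionMetrics: [String: Double] // metrics tied to end conditions
}

enum AIAlgorithm: String, Codable, CaseIterable {
    case ruleBased, minimax, monteCarloTreeSearch
}

struct AlgorithmSelection: Codable { var algorithm: AIAlgorithm }

// Controller-side step: interpret the reported state, then pick one of the
// available algorithms. The branching heuristic below is purely illustrative.
func selectAlgorithm(for state: StateInfo) -> AlgorithmSelection {
    let algorithm: AIAlgorithm
    switch state.potentialStateChanges.count {
    case 0...4:  algorithm = .ruleBased             // trivial decision space
    case 5...50: algorithm = .minimax               // tractable search tree
    default:     algorithm = .monteCarloTreeSearch  // large branching factor
    }
    return AlgorithmSelection(algorithm: algorithm)
}
```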