Abstract:
In various examples, at least partial control of a vehicle may be transferred to a control system remote from the vehicle. Sensor data may be received from a sensor(s) of the vehicle and the sensor data may be encoded to generate encoded sensor data. The encoded sensor data may be transmitted to the control system for display on a virtual reality headset of the control system. Control data representative of a control input(s) may be received by the vehicle from the control system, and actuation by an actuation component(s) of the vehicle may be caused based on the control input(s).
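The encode-transmit-actuate loop described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names, the use of `zlib` as the codec, and the actuation-command fields are all assumptions made for the example.

```python
import zlib


def encode_sensor_data(frames: list[bytes]) -> bytes:
    """Encode raw sensor frames into a single compressed payload
    for transmission to the remote control system (codec is a
    placeholder; a real system would use a video codec)."""
    return zlib.compress(b"".join(frames))


def decode_sensor_data(payload: bytes) -> bytes:
    """Decode the payload on the control-system side for display,
    e.g., on a virtual reality headset."""
    return zlib.decompress(payload)


def apply_control_input(control_data: dict) -> dict:
    """Map a control input received from the remote control system
    to commands for the vehicle's actuation components (hypothetical
    field names)."""
    return {
        "steering_actuator": control_data.get("steering", 0.0),
        "throttle_actuator": control_data.get("throttle", 0.0),
        "brake_actuator": control_data.get("brake", 0.0),
    }


# Vehicle side: encode and transmit sensor data.
frames = [b"camera-frame-0", b"camera-frame-1"]
payload = encode_sensor_data(frames)

# Control-system side: decode for display.
assert decode_sensor_data(payload) == b"".join(frames)

# Vehicle side: actuate based on received control data.
commands = apply_control_input({"steering": -0.25, "throttle": 0.4})
```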
Abstract:
Embodiments of the invention may include receiving a plurality of bids, wherein each bid corresponds to an advertisement placement opportunity in a plurality of video frames generated by a graphics processing system in response to instructions from a software application. In addition, a winning bid may be determined from the plurality of bids by evaluating the plurality of bids. Further, an advertisement corresponding to the winning bid may be provided to the graphics processing system, wherein the graphics processing system is operable to include the advertisement in the plurality of video frames for display.
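The bid-evaluation step can be sketched as below. The abstract does not specify the evaluation criterion, so this example assumes a simple highest-price auction; the `Bid` fields are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Bid:
    bidder: str
    amount: float          # bid price for the placement opportunity
    advertisement_id: str  # ad to include in the video frames if this bid wins


def determine_winning_bid(bids: list[Bid]) -> Bid:
    """Evaluate the plurality of bids for an advertisement placement
    opportunity and return the winner (highest amount in this sketch)."""
    if not bids:
        raise ValueError("no bids received for this placement opportunity")
    return max(bids, key=lambda b: b.amount)


bids = [Bid("a", 1.50, "ad-1"), Bid("b", 2.75, "ad-2"), Bid("c", 2.10, "ad-3")]
winner = determine_winning_bid(bids)
# The advertisement corresponding to winner.advertisement_id would then
# be provided to the graphics processing system for inclusion in the
# generated video frames.
```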
Abstract:
Embodiments of the invention may include receiving characteristics associated with an execution session of a software application, wherein the execution session of the software application includes generating a plurality of video frames by a graphics processing system. In addition, an advertisement may be determined based on the characteristics. Further, advertisement data corresponding to the advertisement may be provided to the graphics processing system, wherein the graphics processing system is operable to include the advertisement data in the plurality of video frames for display.
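The characteristics-based advertisement selection might look like the following sketch. The matching rule (count of matching targeting keys) and all field names are assumptions; the abstract leaves the determination method open.

```python
def determine_advertisement(characteristics: dict, catalog: list[dict]) -> dict:
    """Score each candidate advertisement against the execution-session
    characteristics and return the best match."""
    def score(ad: dict) -> int:
        # One point per targeting key that matches the session.
        targets = ad.get("targets", {})
        return sum(1 for k, v in targets.items() if characteristics.get(k) == v)
    return max(catalog, key=score)


# Hypothetical session characteristics reported by the application.
session = {"genre": "racing", "region": "EU"}
catalog = [
    {"ad_id": "tires", "targets": {"genre": "racing"}},
    {"ad_id": "soda", "targets": {"region": "US"}},
]
chosen = determine_advertisement(session, catalog)
# chosen's advertisement data would then be provided to the graphics
# processing system for inclusion in the generated video frames.
```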
Abstract:
One aspect provides a system for cooperative application control. In one embodiment, the system includes a cloud application engine and a cooperative interaction engine embodied in at least one server of the system. The cloud application engine is configured to execute the application and generate a video stream of the application. The cooperative interaction engine is configured to: receive the video stream of the application from the cloud application engine; multicast a view of the video stream to multiple clients connected to the cooperative interaction engine, the multiple clients corresponding to multiple users cooperatively interacting with the application on a shared workpiece; receive multiple separate response streams from the multiple clients; combine the multiple separate response streams into a joint response stream; and transmit the joint response stream to the cloud application engine, which handles the joint response stream as a single response stream from a single user.
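The multicast-and-combine behavior of the cooperative interaction engine can be sketched as below. This is an illustrative skeleton only: the class and method names are assumptions, in-process queues stand in for network connections, and the merge policy (dictionary union) stands in for whatever combining logic an embodiment would use.

```python
import queue


class CooperativeInteractionEngine:
    """Fans one video stream out to multiple clients and merges their
    separate response streams into a single joint response stream for
    the cloud application engine."""

    def __init__(self, num_clients: int):
        # One queue per connected client, standing in for a network link.
        self.client_queues = [queue.Queue() for _ in range(num_clients)]

    def multicast(self, video_chunk: bytes) -> None:
        """Every connected client receives the same view of the stream."""
        for q in self.client_queues:
            q.put(video_chunk)

    def combine(self, responses: list[dict]) -> dict:
        """Merge per-client responses into one joint response; the cloud
        application engine handles it as input from a single user."""
        joint: dict = {}
        for response in responses:
            joint.update(response)
        return joint


engine = CooperativeInteractionEngine(num_clients=3)
engine.multicast(b"frame-0")
assert all(q.get() == b"frame-0" for q in engine.client_queues)

# Two users act on the shared workpiece; the engine emits one stream.
joint = engine.combine([{"cursor": (4, 2)}, {"text": "hello"}])
```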
Abstract:
Traditionally, a software application is developed, tested, and then published for use by end users. Any subsequent update made to the software application is generally in the form of a human programmed modification made to the code in the software application itself, and further only becomes usable once tested, published, and installed by end users having the previous version of the software application. This typical software application lifecycle causes delays in not only generating improvements to software applications, but also to those improvements being made accessible to end users. To help avoid these delays and improve performance of software applications, deep learning models may be made accessible to the software applications for use in providing inferenced data to the software applications, which the software applications may then use as desired. These deep learning models can furthermore be improved independently of the software applications using manual and/or automated processes.
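The decoupling described above, where a published application consumes inferenced data from a model that can be improved independently, might be sketched like this. The classes, the trivial linear "model," and the version-swap mechanism are all hypothetical stand-ins for a real model-serving setup.

```python
class DeepLearningModelService:
    """Stands in for a model endpoint that applications query for
    inferenced data; the model behind it can be retrained and replaced
    without republishing the applications that call it."""

    def __init__(self, version: str, weight: float):
        self.version = version
        self.weight = weight

    def infer(self, features: list[float]) -> float:
        # Trivial linear model standing in for real inference.
        return sum(f * self.weight for f in features)


class Application:
    """A published application that uses inferenced data as desired,
    with no model code of its own."""

    def __init__(self, model_service: DeepLearningModelService):
        self.model_service = model_service

    def run(self, features: list[float]) -> float:
        return self.model_service.infer(features)


app = Application(DeepLearningModelService(version="v1", weight=0.5))
before = app.run([2.0, 4.0])

# The model is improved independently (manually or automatically);
# the application itself is untouched.
app.model_service = DeepLearningModelService(version="v2", weight=0.75)
after = app.run([2.0, 4.0])
```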
Abstract:
Traditionally, a software application is developed, tested, and then published for use by end users. Any subsequent update made to the software application is generally in the form of a human-programmed modification made to the code in the software application itself, and further only becomes usable once tested and published by developers and/or publishers, and installed by end users having the previous version of the software application. This typical software application lifecycle causes delays in not only generating improvements to software applications, but also to those improvements being made accessible to end users. To help avoid these delays and improve performance of software applications, deep learning models may be made accessible to the software applications for use in performing inferencing operations to generate inferenced data output for the software applications, which the software applications may then use as desired. These deep learning models can furthermore be improved independently of the software applications using manual and/or automated processes.
Abstract:
A system for multi-client control of a common avatar. In one embodiment, the system includes: (1) a cloud game engine for executing game code configured to create a game, generate a video stream corresponding to a particular player and accept a response stream from the particular player to allow the particular player to play the game and (2) a cooperative play engine associated with the cloud game engine for communication therewith and configured to multicast the video stream from the cloud game engine to the particular player and at least one other player, combine separate response streams from the particular player and the at least one other player into a joint response stream based on avatar functions contained therein and provide the joint response stream to the cloud game engine.
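The distinguishing step in this abstract, combining separate player response streams based on the avatar functions contained in them, can be sketched as follows. The function-keyed merge and last-writer-wins conflict rule are assumptions for illustration; a real cooperative play engine might arbitrate differently.

```python
def combine_response_streams(responses: list[dict]) -> dict:
    """Merge per-player responses into one joint response stream keyed
    by the avatar function each player controls, so the cloud game
    engine handles it as input from a single player."""
    joint: dict = {}
    for response in responses:
        for avatar_function, command in response.items():
            # Last writer wins per avatar function in this sketch.
            joint[avatar_function] = command
    return joint


responses = [
    {"movement": "forward"},  # one player drives the avatar's movement
    {"aim": (12, 7)},         # another player controls the avatar's aim
]
joint_response = combine_response_streams(responses)
# joint_response is provided to the cloud game engine as if it came
# from a single player controlling the common avatar.
```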