Abstract:
Technologies are presented for reducing lag time via speculative graphics rendering in cloud-based gaming. In some examples, historical data about statistically relevant, large populations of players is provided. The historical data may include state transitions through various game locations or situations. In some game locations there may be a correlation between a player state and the probability of a particular upcoming scene. Example game locations or situations may include areas that players tend to cross in one or more straight lines, corners that players may round in a particular fashion, spots where certain player motions are commonly engaged in, such as looking up, and the like. The historical data may be tested against a predictive-strength threshold, and predicted game states may be rendered ahead of player need.
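The prediction step this abstract describes can be sketched as follows. This is a minimal illustration under assumed names and values — `record_transition`, `predict_scene`, and the 0.8 cutoff are hypothetical, not taken from the patent:

```python
from collections import Counter, defaultdict

# Hypothetical sketch: aggregate state transitions observed across a large
# player population, then pre-render the most likely next scene only when
# its historical probability clears a predictive-strength threshold.
transitions = defaultdict(Counter)

def record_transition(location, next_scene):
    transitions[location][next_scene] += 1

def predict_scene(location, min_probability=0.8):
    counts = transitions[location]
    total = sum(counts.values())
    if total == 0:
        return None  # no historical data for this location
    scene, count = counts.most_common(1)[0]
    return scene if count / total >= min_probability else None

# Players rounding this corner overwhelmingly see the same next scene.
for _ in range(9):
    record_transition("corner_A", "hallway_view")
record_transition("corner_A", "look_up_view")

print(predict_scene("corner_A"))  # hallway_view (90% >= 80% threshold)
```

A renderer would call `predict_scene` as the player approaches a known location and begin preparing the predicted frame only when a non-`None` scene is returned.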
Abstract:
Technologies are provided for locally processing queue requests from co-located workers. In some examples, information about the usage of remote datacenter queues by co-located workers may be used to determine one or more matched queues. Messages from local workers to a remote datacenter queue classified as a matched queue may be stored locally. Subsequently, local workers that request messages from matched queues may be provided with the locally-stored messages.
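The local interception path described above might be sketched as a proxy object sitting between co-located workers and the remote datacenter; the class and method names here are hypothetical and the remote calls are stubbed out:

```python
class QueueProxy:
    """Hypothetical local proxy: messages addressed to 'matched' remote
    datacenter queues are stored locally and served to co-located workers
    directly, avoiding a round trip to the datacenter."""

    def __init__(self, matched_queues):
        self.matched = set(matched_queues)
        self.local_store = {}

    def send(self, queue_name, message):
        if queue_name in self.matched:
            # Matched queue: keep the message local for co-located consumers.
            self.local_store.setdefault(queue_name, []).append(message)
        else:
            self._forward_to_datacenter(queue_name, message)

    def receive(self, queue_name):
        if queue_name in self.matched and self.local_store.get(queue_name):
            return self.local_store[queue_name].pop(0)
        return self._fetch_from_datacenter(queue_name)

    def _forward_to_datacenter(self, queue_name, message):
        pass  # placeholder for the real remote send

    def _fetch_from_datacenter(self, queue_name):
        return None  # placeholder for the real remote receive

proxy = QueueProxy(matched_queues={"tasks"})
proxy.send("tasks", "resize-image-42")
print(proxy.receive("tasks"))  # resize-image-42, served locally
```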
Abstract:
Technologies related to secure system time reporting are generally described. In some examples, responses to some system time requests may be manipulated to prevent leaking information that may be of interest for timing attacks, while responses to other system time requests need not be manipulated. In particular, responses to system time requests that are separated from a previous system time request by a predetermined minimum value, or less, may be manipulated. Responses to system time requests that are separated from a previous system time request by more than the predetermined minimum value need not be manipulated. Furthermore, secure system time reporting may be adaptively deployed to servers in a data center on an as-needed basis.
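A minimal sketch of the selective manipulation described above, with an injectable time source for clarity (the class name, thresholds, and quantization scheme are assumptions for illustration, not the patented mechanism):

```python
class SecureClock:
    """Hypothetical sketch: a time response is coarsened only when the
    request follows the previous one by min_separation or less; otherwise
    the response passes through unmodified."""

    def __init__(self, time_source, min_separation=0.001, granularity=0.01):
        self.time_source = time_source        # e.g. time.monotonic in practice
        self.min_separation = min_separation  # seconds; illustrative value
        self.granularity = granularity        # coarsening grid; illustrative
        self.last_request = None

    def now(self):
        t = self.time_source()
        too_close = (self.last_request is not None
                     and t - self.last_request <= self.min_separation)
        self.last_request = t
        if too_close:
            # Quantize so rapid-fire requests cannot resolve fine intervals.
            return round(t / self.granularity) * self.granularity
        return t

# Deterministic fake time source for illustration.
samples = iter([5.0, 5.0004, 6.5])
clock = SecureClock(time_source=lambda: next(samples))
print(clock.now())  # 5.0 (first request: unmodified)
print(clock.now())  # 5.0 (0.4 ms after previous: coarsened to 10 ms grid)
print(clock.now())  # 6.5 (well separated: unmodified)
```

The point of the separation test is that legitimate, infrequent time requests see no loss of precision; only the tight request loops typical of timing attacks are degraded.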
Abstract:
Technologies are presented for addressing dependency interruptions due to inactivation of a service module in a modular datacenter environment through a diagnostic module. In some examples, the diagnostic module may substitute for one or more inactive service modules in a datacenter architecture. Messages and/or items that are directed to the inactive service module(s) may be intercepted by or rerouted to the diagnostic module and used to generate error reports and/or repair activity triggers.
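The rerouting behavior might be sketched as follows; `DiagnosticModule`, the router layout, and the report fields are all hypothetical names chosen for illustration:

```python
class DiagnosticModule:
    """Hypothetical stand-in for an inactive service module: intercepted
    messages become error reports that can trigger repair activity."""

    def __init__(self):
        self.error_reports = []

    def handle(self, message):
        report = {"intended_module": message["to"], "payload": message["body"]}
        self.error_reports.append(report)
        return report

# Routing table: billing module is inactive (no handler registered).
router = {"billing": None}

def route(message, diagnostic):
    target = router.get(message["to"])
    if target is None:
        # Dependency would otherwise be interrupted; reroute to diagnostics.
        return diagnostic.handle(message)
    return target(message)

diag = DiagnosticModule()
route({"to": "billing", "body": "invoice-17"}, diag)
print(len(diag.error_reports))  # 1
```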
Abstract:
Techniques described herein generally relate to a task management system for a chip multiprocessor having multiple processor cores. The task management system tracks the changing instruction set capabilities of each processor core and selects processor cores for use based on the tracked capabilities. In this way, a processor core with one or more failed processing elements can still be used effectively, since the processor core may be selected to process instruction sets that do not use the failed processing elements.
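The core-selection idea can be sketched as a capability table consulted at dispatch time; the capability names and table contents below are illustrative assumptions:

```python
# Hypothetical sketch: track which instruction-set capabilities remain
# functional on each core, then dispatch work only to cores that still
# support the instruction sets the task needs.
core_capabilities = {
    0: {"scalar", "simd", "fp"},
    1: {"scalar", "fp"},    # SIMD unit failed; core still usable otherwise
    2: {"scalar", "simd"},  # FP unit failed
}

def select_core(required, busy=frozenset()):
    for core, caps in core_capabilities.items():
        if core not in busy and required <= caps:
            return core
    return None  # no available core supports the required instruction sets

# Core 1's SIMD unit has failed, but it can still take scalar/FP work.
print(select_core({"scalar", "fp"}, busy={0}))  # 1
```

This is the sense in which a partially failed core remains effective: it is simply never selected for instruction sets that touch the failed processing elements.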
Abstract:
The present disclosure generally relates to instruction optimization (or otherwise improved execution of instructions) using voltage-based functional performance variation. In some examples, a method is described that includes characterizing a workload for a multi-core processor to identify one or more subunits of individual cores of the multi-core processor for utilization by instructions included in the workload, selecting a voltage at which to operate cores of the multi-core processor, and assigning individual ones of the instructions of the workload to a core of the cores of the multi-core processor based on performance of the identified one or more subunits of the individual cores at the selected voltage.
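The assignment step can be sketched with a toy performance table; the voltages, subunit names, and throughput numbers below are entirely illustrative, not measured data:

```python
# Hypothetical sketch: per-core, per-subunit performance varies with the
# operating voltage; assign each instruction to the core whose relevant
# subunit performs best at the selected voltage.
perf = {  # perf[voltage][core][subunit] -> relative throughput (made up)
    0.8: {0: {"alu": 1.0, "fpu": 0.6}, 1: {"alu": 0.7, "fpu": 0.9}},
    1.0: {0: {"alu": 1.1, "fpu": 1.0}, 1: {"alu": 1.0, "fpu": 1.2}},
}

def assign(instructions, voltage):
    """Map each (instruction, subunit) pair to its best core at `voltage`."""
    table = perf[voltage]
    return {
        instr: max(table, key=lambda core: table[core][subunit])
        for instr, subunit in instructions
    }

workload = [("addq", "alu"), ("mulsd", "fpu")]
print(assign(workload, 0.8))  # {'addq': 0, 'mulsd': 1}
```

At 0.8 V the ALU-bound instruction lands on core 0 and the FPU-bound one on core 1, mirroring how the characterized subunit performance drives the assignment.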
Abstract:
In some examples, a computing system may gather data from a machine learning unit associated with the computing system, label the data to identify it as training data, and recirculate the labeled data to each of one or more analytics modules of the machine learning unit.
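The label-and-recirculate loop might be sketched as below; `AnalyticsModule`, `recirculate_training_data`, and the labeling function are hypothetical stand-ins, since the abstract does not specify them:

```python
class AnalyticsModule:
    """Toy analytics module that accumulates the recirculated training set."""

    def __init__(self):
        self.training_set = []

    def train(self, labeled_data):
        self.training_set.extend(labeled_data)

def recirculate_training_data(raw_data, label_fn, analytics_modules):
    # Label the gathered data to identify it as training data...
    labeled = [(sample, label_fn(sample)) for sample in raw_data]
    # ...then recirculate the labeled set to each analytics module.
    for module in analytics_modules:
        module.train(labeled)
    return labeled

modules = [AnalyticsModule(), AnalyticsModule()]
labeled = recirculate_training_data([3, -1, 7], lambda x: x > 0, modules)
print(labeled)  # [(3, True), (-1, False), (7, True)]
```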
Abstract:
Systems and methods to wirelessly transmit power, including a coil assembly, are provided. In some examples, the coil assembly is configured to generate a signal in response to an ambient field, and to magnetically couple with a device to deliver power to the device.
Abstract:
Technologies are generally described to develop and implement a searchable knowledge source to identify distributed user interface (DUI) elements. In some examples, a DUI identification system may receive a control record of an application and populate one or more searchable knowledge sources based on a retrieved application description. The application description may include keywords, input elements, and output elements, and the searchable knowledge sources may be generated from control records of a multitude of applications. The DUI identification system may execute a query on the searchable knowledge sources based on the received keywords, input elements, and output elements associated with a target workflow from a requesting client. A query result that includes one or more DUI elements may be provided to the requesting client. The DUI elements may connect the input elements to corresponding output elements and match the keywords associated with the target workflow.
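The query over the knowledge source can be sketched as matching keywords plus input/output elements against stored control records; the record fields and example applications below are invented for illustration:

```python
# Hypothetical sketch: a searchable knowledge source populated from
# application control records, queried with keywords and the input/output
# elements of a target workflow.
knowledge_source = [
    {"app": "CurrencyApp", "keywords": {"currency", "convert"},
     "inputs": {"amount", "currency_code"}, "outputs": {"converted_amount"}},
    {"app": "WeatherApp", "keywords": {"weather", "forecast"},
     "inputs": {"city"}, "outputs": {"temperature"}},
]

def query(keywords, inputs, outputs):
    """Return apps whose records share a keyword and cover the workflow's
    input and output elements, connecting inputs to corresponding outputs."""
    return [rec["app"] for rec in knowledge_source
            if keywords & rec["keywords"]
            and inputs <= rec["inputs"]
            and outputs <= rec["outputs"]]

print(query({"convert"}, {"amount"}, {"converted_amount"}))  # ['CurrencyApp']
```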