Abstract:
The present disclosure relates to systems and methods implemented on a memory controller for detecting and mitigating memory attacks (e.g., row hammer attacks). For example, a memory controller may track activations of row addresses within memory hardware (e.g., a DRAM device) and determine whether a pattern of activations is indicative of a row hammer attack. This determination is made using a counting mode for the corresponding memory sub-banks. Where a likely row hammer attack is detected, the memory controller may activate a sampling mode (rather than the counting mode) for a particular sub-bank to identify which of the row addresses should be refreshed on the memory hardware. The implementations described herein provide a low-computational-cost alternative to heavy-handed detection mechanisms that require access to significant computing resources to accurately detect and mitigate row hammer attacks.
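One way to picture the counting/sampling split described above is a per-sub-bank tracker that counts activations until the aggregate count suggests a likely attack, then switches that sub-bank to a sampling mode to identify candidate rows for targeted refresh. The sketch below is a minimal illustration of that idea; the thresholds, sampling probability, and class/function names are assumptions for illustration, not values or structures from the disclosure.

```python
import random
from collections import Counter, defaultdict

class SubBankTracker:
    """Illustrative per-sub-bank activation tracker with counting and sampling modes."""

    # Hypothetical parameters; the disclosure does not specify concrete values.
    COUNT_THRESHOLD = 10_000   # aggregate activations that suggest a likely attack
    SAMPLE_PROBABILITY = 0.01  # fraction of activations sampled once in sampling mode
    ROW_REFRESH_THRESHOLD = 5  # samples of one row before it is flagged for refresh

    def __init__(self):
        self.mode = "counting"
        self.total_activations = 0
        self.sampled_rows = Counter()

    def on_activate(self, row_address):
        """Called by the memory controller for every row activation in this sub-bank."""
        if self.mode == "counting":
            self.total_activations += 1
            if self.total_activations >= self.COUNT_THRESHOLD:
                # Aggregate activity looks like a possible row hammer attack:
                # switch this sub-bank from counting to sampling.
                self.mode = "sampling"
        else:
            # Sampling mode: probabilistically record row addresses to find
            # which specific rows are being hammered.
            if random.random() < self.SAMPLE_PROBABILITY:
                self.sampled_rows[row_address] += 1
                if self.sampled_rows[row_address] >= self.ROW_REFRESH_THRESHOLD:
                    return row_address  # caller should refresh neighbors of this row
        return None

class MemoryController:
    def __init__(self):
        self.sub_banks = defaultdict(SubBankTracker)

    def activate(self, sub_bank_id, row_address):
        hammered_row = self.sub_banks[sub_bank_id].on_activate(row_address)
        if hammered_row is not None:
            self.refresh_neighbors(sub_bank_id, hammered_row)

    def refresh_neighbors(self, sub_bank_id, row_address):
        # Placeholder for issuing targeted refresh commands to physically adjacent rows.
        print(f"refresh rows adjacent to {row_address} in sub-bank {sub_bank_id}")
```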
Abstract:
Cryptographically secured deferral tickets, provided by a minting process that runs in a secure enclave on a computing device, reset an authenticated watchdog timer; if the timer expires, it reboots the device from a hardware-protected recovery operating system to re-image the device into a known good state. The deferral tickets are written to a secure channel using a symmetric key that is provisioned by repurposing an existing Intel SGX (Software Guard Extensions) Versioning Support protocol that enables migration of secrets between enclaves that have the same author. In an illustrative embodiment, the deferral ticket minting process and authenticated watchdog timer execute locally to enable automated recovery of the computing device when utilized in far edge infrastructure of a fifth generation (5G) network, such as a distributed unit (DU) of a radio access network (RAN).
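The deferral-ticket mechanism can be sketched as a watchdog that triggers recovery unless a fresh, authenticated ticket arrives before its deadline. In the abstract the symmetric key comes from an SGX enclave via the Versioning Support protocol; the sketch below substitutes a plain pre-shared key and an HMAC over a nonce purely for illustration, and all names and the timeout value are assumptions.

```python
import hashlib
import hmac
import os
import time

class AuthenticatedWatchdog:
    """Illustrative watchdog that reboots to recovery unless valid deferral tickets arrive."""

    def __init__(self, shared_key: bytes, timeout_seconds: float):
        self._key = shared_key          # stands in for the enclave-provisioned symmetric key
        self._timeout = timeout_seconds
        self._deadline = time.monotonic() + timeout_seconds
        self._nonce = os.urandom(16)    # fresh challenge for the next ticket

    def current_nonce(self) -> bytes:
        return self._nonce

    def present_ticket(self, ticket: bytes) -> bool:
        """Accept a ticket (a MAC over the current nonce) and defer the reboot if it is valid."""
        expected = hmac.new(self._key, self._nonce, hashlib.sha256).digest()
        if hmac.compare_digest(ticket, expected):
            self._deadline = time.monotonic() + self._timeout
            self._nonce = os.urandom(16)  # rotate the nonce to prevent ticket replay
            return True
        return False

    def expired(self) -> bool:
        return time.monotonic() > self._deadline

def mint_ticket(shared_key: bytes, nonce: bytes) -> bytes:
    """Stand-in for the enclave-resident minting process described in the abstract."""
    return hmac.new(shared_key, nonce, hashlib.sha256).digest()

# Usage sketch: the minting process periodically defers the recovery reboot.
key = os.urandom(32)
watchdog = AuthenticatedWatchdog(key, timeout_seconds=300.0)
ticket = mint_ticket(key, watchdog.current_nonce())
assert watchdog.present_ticket(ticket)
if watchdog.expired():
    print("reboot into hardware-protected recovery OS and re-image")
```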
Abstract:
A server device and method are provided for use in predictive server-side rendering of scenes based on client-side user input. The server device may include a processor and a storage device holding instructions for an application program executable by the processor to receive, at the application program, a current navigation input in a stream of navigation inputs from a client device over a network, calculate a predicted future navigation input based on the current navigation input and a current application state of the application program, render a future scene based on the predicted future navigation input to a rendering surface, and send the rendering surface to the client device over the network.
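The receive, predict, render, send loop described above can be illustrated with a minimal sketch. The dead-reckoning prediction, the latency-based horizon, and every name below (NavigationInput, handle_navigation, and so on) are assumptions for illustration rather than elements taken from the disclosure, and the renderer is a stub.

```python
from dataclasses import dataclass

@dataclass
class NavigationInput:
    # Hypothetical navigation payload: 2D position plus velocity reported by the client.
    x: float
    y: float
    dx: float
    dy: float

@dataclass
class AppState:
    latency_s: float  # estimated round-trip time used as the prediction horizon

def predict_future_input(current: NavigationInput, state: AppState) -> NavigationInput:
    """Simple dead-reckoning prediction: extrapolate position over the network latency."""
    return NavigationInput(
        x=current.x + current.dx * state.latency_s,
        y=current.y + current.dy * state.latency_s,
        dx=current.dx,
        dy=current.dy,
    )

def render_scene(nav: NavigationInput) -> bytes:
    """Stand-in for rendering the predicted viewpoint to a rendering surface."""
    return f"frame@({nav.x:.1f},{nav.y:.1f})".encode()

def handle_navigation(current: NavigationInput, state: AppState, send):
    """Server loop body: predict the future input, render the future scene, send the surface."""
    predicted = predict_future_input(current, state)
    surface = render_scene(predicted)
    send(surface)

# Usage sketch with a fake transport in place of the network connection to the client.
handle_navigation(NavigationInput(10.0, 5.0, 2.0, -1.0), AppState(latency_s=0.05), print)
```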
Abstract:
Various technologies described herein pertain to a computing device that includes secure hardware (e.g., a TPM, a secure processor of a processing platform, protected memory that includes a software-based TPM, etc.). The secure hardware includes a shared secret, which is shared by the secure hardware and a server computing system. The shared secret is provisioned by the server computing system or a provisioning computing system of a party affiliated with the server computing system. The secure hardware further includes a cryptographic engine that can execute a cryptographic algorithm using the shared secret or a key generated from the shared secret. The cryptographic engine can execute the cryptographic algorithm to perform encryption, decryption, authentication, and/or attestation.
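A minimal sketch of the device-side cryptographic engine follows: it derives purpose-specific keys from the provisioned shared secret and uses one of them to produce an attestation tag the server can verify with the same secret. The HMAC-based key derivation and MAC-style attestation here are illustrative assumptions, not the specific algorithms of the disclosure, and all names are hypothetical.

```python
import hashlib
import hmac

class CryptoEngine:
    """Illustrative cryptographic engine keyed by the provisioned shared secret."""

    def __init__(self, shared_secret: bytes):
        self._secret = shared_secret  # provisioned by (or on behalf of) the server computing system

    def derive_key(self, label: bytes) -> bytes:
        """Derive a purpose-specific key from the shared secret (HMAC-based KDF sketch)."""
        return hmac.new(self._secret, b"derive:" + label, hashlib.sha256).digest()

    def attest(self, measurement: bytes, server_nonce: bytes) -> bytes:
        """Produce an attestation tag over a device measurement and a server challenge."""
        key = self.derive_key(b"attestation")
        return hmac.new(key, measurement + server_nonce, hashlib.sha256).digest()

def server_verify(shared_secret: bytes, measurement: bytes, nonce: bytes, tag: bytes) -> bool:
    """Server-side check: recompute the tag from the same shared secret and compare."""
    expected = CryptoEngine(shared_secret).attest(measurement, nonce)
    return hmac.compare_digest(expected, tag)

# Usage sketch.
secret = b"\x01" * 32
engine = CryptoEngine(secret)
tag = engine.attest(b"firmware-hash", b"nonce-123")
assert server_verify(secret, b"firmware-hash", b"nonce-123", tag)
```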
Abstract:
A classification system classifies different aspects of content of an input image stream, such as faces, landmarks, events, and so forth. The classification system includes a general classifier and at least one specialized classifier template. The general classifier is trained to classify a large number of different aspects of content, and a specialized classifier can be trained, based on a specialized classifier template, during operation of the classification system to classify a particular subset of those aspects of content. The classification system determines when to use the general classifier and when to use a specialized classifier based on class skew, which refers to the temporal locality of a subset of aspects of content in the image stream.
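The selection between the general and specialized classifiers based on class skew can be sketched as follows: recent labels are kept in a sliding window, and when a few classes dominate the window, a specialized classifier is trained for that subset and used until the skew breaks. The window size, skew threshold, top-k value, and the classifier stubs are all assumptions made for illustration.

```python
from collections import Counter, deque

class ClassifierSelector:
    """Illustrative selector that switches to a specialized classifier under class skew."""

    WINDOW = 200          # recent frames considered when measuring skew (assumed)
    SKEW_THRESHOLD = 0.8  # fraction of recent labels covered by the top-k classes (assumed)
    TOP_K = 3             # size of the specialized classifier's class subset (assumed)

    def __init__(self, general_classifier, train_specialized):
        self.general = general_classifier
        self.train_specialized = train_specialized  # builds a classifier for a class subset
        self.specialized = None
        self.specialized_classes = None
        self.recent_labels = deque(maxlen=self.WINDOW)

    def classify(self, frame):
        # Use the specialized classifier while the frame stays within its class subset.
        if self.specialized is not None:
            label, confident = self.specialized(frame)
            if confident and label in self.specialized_classes:
                self.recent_labels.append(label)
                return label
            self.specialized = None  # skew broke; fall back to the general classifier

        label = self.general(frame)
        self.recent_labels.append(label)
        self._maybe_specialize()
        return label

    def _maybe_specialize(self):
        if len(self.recent_labels) < self.WINDOW:
            return
        counts = Counter(self.recent_labels)
        top = counts.most_common(self.TOP_K)
        if sum(n for _, n in top) / len(self.recent_labels) >= self.SKEW_THRESHOLD:
            self.specialized_classes = {label for label, _ in top}
            self.specialized = self.train_specialized(self.specialized_classes)

# Usage sketch with stub classifiers.
selector = ClassifierSelector(
    general_classifier=lambda frame: frame["true_label"],
    train_specialized=lambda classes: (lambda frame: (frame["true_label"], True)),
)
for _ in range(250):
    selector.classify({"true_label": "face:alice"})
```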
Abstract:
The claimed subject matter includes techniques for live migration of graphics processing unit (GPU) state. An example method includes receiving recorded GPU commands from a relay at a destination GPU. The method also includes replaying the recorded GPU commands at the destination GPU. The method also includes detecting a downtime for the GPU commands. The method further includes establishing a connection between the destination GPU and a client during the detected downtime.
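The record-and-replay flow can be illustrated with a simplified stand-in: a relay logs GPU commands from the source, the destination replays the log to rebuild state, and during the downtime window the client is reconnected to the destination. No real GPU API is used below; the dictionary-based "state", the Relay and DestinationGPU names, and the reconnect callback are all assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Relay:
    """Records GPU commands issued on the source so they can be replayed elsewhere."""
    log: List[Tuple[str, tuple]] = field(default_factory=list)

    def record(self, command: str, *args):
        self.log.append((command, args))

@dataclass
class DestinationGPU:
    """Destination that replays recorded commands to rebuild the source GPU state."""
    state: dict = field(default_factory=dict)

    def replay(self, log):
        for command, args in log:
            # In a real system this would dispatch to the GPU driver; here each
            # command is folded into a dictionary standing in for GPU state.
            self.state[command] = args

def migrate(relay: Relay, destination: DestinationGPU, reconnect_client: Callable[[], None]):
    """Replay on the destination, then swing the client over during the downtime window."""
    destination.replay(relay.log)  # replay recorded commands at the destination GPU
    reconnect_client()             # establish the client connection during the detected downtime

# Usage sketch.
relay = Relay()
relay.record("create_texture", 1024, 1024)
relay.record("bind_shader", "blur.frag")
dest = DestinationGPU()
migrate(relay, dest, reconnect_client=lambda: print("client reconnected to destination GPU"))
```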
Abstract:
In a cloud computing environment, a production server virtualization stack is minimized to present fewer security vulnerabilities to malicious software running within a guest virtual machine. The minimal virtualization stack includes support for those virtual devices necessary for the operation of a guest operating system, with the code base of those virtual devices further reduced. Further, a dedicated, isolated boot server provides functionality to securely boot a guest operating system. The boot server is isolated through use of an attestation protocol, by which the boot server presents a secret to a network switch to attest that the boot server is operating in a clean mode. The attestation protocol may further employ a secure co-processor to seal the secret, so that it is only accessible when the boot server is operating in the clean mode.
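The attestation step can be sketched as follows: a secret is sealed by the secure co-processor to the clean-mode measurement, so the boot server can only recover and present it to the network switch when it is actually running in the clean mode. The XOR-with-derived-pad sealing below is a toy approximation used only to show the control flow; the SecureCoProcessor name, the measurement values, and the switch-side check are all assumptions for illustration.

```python
import hashlib
import hmac

class SecureCoProcessor:
    """Toy stand-in for a secure co-processor that seals a secret to a clean-mode measurement."""

    def __init__(self, root_key: bytes):
        self._root_key = root_key

    def seal(self, secret: bytes, clean_measurement: bytes) -> bytes:
        # XOR the secret with a pad bound to the expected measurement (illustration only).
        pad = hmac.new(self._root_key, clean_measurement, hashlib.sha256).digest()
        return bytes(a ^ b for a, b in zip(secret, pad))

    def unseal(self, blob: bytes, current_measurement: bytes) -> bytes:
        # Unsealing only yields the original secret when the current measurement matches
        # the one the blob was sealed to (i.e., the boot server is in the clean mode).
        pad = hmac.new(self._root_key, current_measurement, hashlib.sha256).digest()
        return bytes(a ^ b for a, b in zip(blob, pad))

def switch_allows_boot_traffic(presented_secret: bytes, expected_secret: bytes) -> bool:
    """Network switch side: admit the boot server only if it can present the secret."""
    return hmac.compare_digest(presented_secret, expected_secret)

# Usage sketch.
cp = SecureCoProcessor(root_key=b"\x02" * 32)
secret = b"clean-mode-attestation-secret!!!"               # 32-byte illustrative secret
sealed = cp.seal(secret, clean_measurement=b"clean")
print(switch_allows_boot_traffic(cp.unseal(sealed, b"clean"), secret))  # True: clean mode
print(switch_allows_boot_traffic(cp.unseal(sealed, b"dirty"), secret))  # False: wrong mode
```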