Abstract:
In one embodiment, the present invention includes a processor that has on-die storage, such as a static random access memory, to store an architectural state of one or more threads that are swapped out of architectural state storage of the processor on entry to a system management mode (SMM). In this way, communication of this state information to a system management memory can be avoided, reducing latency associated with entry into SMM. Embodiments may also enable the processor to update the status of executing agents that are either in a long instruction flow or in a system management interrupt (SMI) blocked state, in order to provide an indication to agents inside the SMM. Other embodiments are described and claimed.
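
As an illustrative aside (not part of the claimed design), the following C sketch models the swap described above: on SMM entry each thread's architectural state is copied into a reserved on-die SRAM slot rather than written out to system management memory, and restored from that slot on resume. The structure layout, slot count, and function names are assumptions made for the example.

/*
 * Minimal sketch, assuming a fixed per-thread save slot in on-die SRAM.
 * Saving here avoids the off-die transfer to SMRAM and its latency.
 */
#include <stdint.h>
#include <string.h>

#define MAX_THREADS 8

struct arch_state {              /* simplified architectural state */
    uint64_t gpr[16];            /* general-purpose registers      */
    uint64_t rip;                /* instruction pointer            */
    uint64_t rflags;             /* flags register                 */
    uint64_t cr3;                /* page-table base                */
};

/* Hypothetical on-die SRAM region, one save slot per hardware thread. */
static struct arch_state on_die_sram[MAX_THREADS];

/* Called per thread on SMM entry: swap state into on-die storage. */
void smm_entry_save(unsigned thread_id, const struct arch_state *live)
{
    if (thread_id < MAX_THREADS)
        memcpy(&on_die_sram[thread_id], live, sizeof(*live));
}

/* Called per thread on resume (RSM): restore the swapped-out state. */
void smm_exit_restore(unsigned thread_id, struct arch_state *live)
{
    if (thread_id < MAX_THREADS)
        memcpy(live, &on_die_sram[thread_id], sizeof(*live));
}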
Abstract:
The present disclosure is directed to a protection scheme for remotely-stored data. A system may comprise, for example, at least one device including at least one virtual machine (VM) and a trusted execution environment (TEE). The TEE may include an encryption service to encrypt or decrypt data received from the at least one VM. In one embodiment, the at least one VM may include an encryption agent to interact with interfaces in the encryption service. For example, the encryption agent may register with the encryption service, at which time an encryption key corresponding to the at least one VM may be generated. After verifying the registration of the encryption agent, the encryption service may utilize the encryption key corresponding to the at least one VM to encrypt or decrypt data received from the encryption agent. The encryption service may then return the encrypted or decrypted data to the encryption agent.
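
The register-then-encrypt flow can be pictured with the short C sketch below; every name is hypothetical, and the XOR loop is only a stand-in for whatever cipher the encryption service would actually run inside the TEE.

/*
 * Minimal sketch of the register-then-encrypt flow (all names assumed;
 * the XOR cipher is a placeholder, not a real TEE algorithm).
 */
#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>

#define MAX_VMS 16

struct vm_record {
    int     registered;
    uint8_t key[32];                            /* per-VM encryption key */
};

static struct vm_record vms[MAX_VMS];           /* state held inside the TEE */

/* Encryption agent registers its VM; the service generates a per-VM key. */
int tee_register_vm(unsigned vm_id)
{
    if (vm_id >= MAX_VMS)
        return -1;
    for (size_t i = 0; i < sizeof(vms[vm_id].key); i++)
        vms[vm_id].key[i] = (uint8_t)rand();    /* placeholder key source */
    vms[vm_id].registered = 1;
    return 0;
}

/* Encrypt/decrypt only after verifying the VM's registration. */
int tee_crypt(unsigned vm_id, uint8_t *buf, size_t len)
{
    if (vm_id >= MAX_VMS || !vms[vm_id].registered)
        return -1;                              /* unregistered agent */
    for (size_t i = 0; i < len; i++)            /* symmetric placeholder */
        buf[i] ^= vms[vm_id].key[i % sizeof(vms[vm_id].key)];
    return 0;                                   /* buf is returned to agent */
}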
Abstract:
A method and system for storing hints in poisoned data of a computer system memory includes receiving poisoned data in a component of the system; forwarding the poisoned data to a memory controller of the system; and forwarding additional data regarding the poisoned data to the memory controller. The memory controller writes the poisoned data to the system memory, wherein the written poisoned data includes a poison signature and a hint based on the additional data regarding the poisoned data; and, when the written poisoned data is read, signaling a system error and returning the poison signature and the hint to a system software of the system.
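
A rough C sketch of the write and read paths follows; the 32-bit signature value, the field layout within the 64-bit word, and the function names are assumptions for illustration only.

/*
 * Minimal sketch, assuming the controller stores a fixed poison signature
 * in the upper half of a 64-bit word and the hint in the lower half.
 */
#include <stdint.h>
#include <stdio.h>

#define POISON_SIG 0xFEEDDEADull                /* assumed 32-bit signature */
#define HINT_MASK  0xFFFFFFFFull

/* Write path: encode the signature plus the hint into the stored word. */
uint64_t mc_write_poison(uint32_t hint)
{
    return (POISON_SIG << 32) | (uint64_t)hint;
}

/* Read path: detect the poison signature, signal an error, return the hint. */
int mc_read(uint64_t stored, uint32_t *hint_out)
{
    if ((stored >> 32) == POISON_SIG) {
        *hint_out = (uint32_t)(stored & HINT_MASK);
        fprintf(stderr, "memory error: poisoned data, hint=0x%08x\n",
                *hint_out);                     /* stands in for the error signal */
        return -1;
    }
    return 0;
}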
Abstract:
Method, system, and apparatus for predicting imminent memory failures based on one or more adverse conditions to which the memory is subjected. One embodiment of a method comprises: tracking one or more corrected memory errors (CEs) in a memory; tracking one or more generated tokens, wherein the tokens are generated at an initial rate; detecting one or more adverse conditions to which the memory is subjected and, responsive to the detection, reducing the rate at which the tokens are generated; decrementing the tracked CEs based on a recurring leak timer, wherein upon each expiration of the recurring leak timer the count of tracked CEs is decremented by one so long as there is at least one tracked token; reducing the tracked tokens by one in response to the decrement of the tracked CEs; and triggering a CE overflow signal upon detecting that a count of the tracked CEs exceeds an overflow limit.
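
The leaky-bucket interaction between tracked CEs and tokens can be sketched in C as below; the timer wiring, the rate-halving policy for adverse conditions, and the field names are assumptions, not the claimed implementation.

/*
 * Minimal sketch of the CE predictor: CEs accumulate, tokens allow the CE
 * count to leak away, and adverse conditions slow token generation so the
 * count leaks more slowly and the overflow limit is reached sooner.
 */
#include <stdio.h>

struct ce_tracker {
    unsigned ce_count;            /* tracked corrected errors            */
    unsigned tokens;              /* tracked leak tokens                 */
    unsigned token_rate;          /* tokens generated per token interval */
    unsigned overflow_limit;      /* CE count that predicts failure      */
};

void on_corrected_error(struct ce_tracker *t)
{
    if (++t->ce_count > t->overflow_limit)
        printf("CE overflow: imminent memory failure predicted\n");
}

/* Adverse condition (e.g. high temperature) reduces the token rate. */
void on_adverse_condition(struct ce_tracker *t)
{
    if (t->token_rate > 1)
        t->token_rate /= 2;       /* assumed policy for the sketch */
}

void on_token_timer(struct ce_tracker *t)        /* token generation tick */
{
    t->tokens += t->token_rate;
}

void on_leak_timer(struct ce_tracker *t)         /* recurring leak timer  */
{
    if (t->ce_count > 0 && t->tokens > 0) {      /* decrement needs a token */
        t->ce_count--;
        t->tokens--;
    }
}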
Abstract:
In an embodiment, a processor includes at least one core to execute instructions and a memory controller coupled to the at least one core. In turn, the memory controller includes a spare logic to cause a dynamic transfer of data stored on a first memory device coupled to the processor to a second memory device coupled to the processor, responsive to a temperature of the first memory device exceeding a thermal threshold. Other embodiments are described and claimed.
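
A toy C sketch of the spare logic's decision follows; the device structure, the polling model, and the threshold handling are assumptions chosen to keep the example self-contained.

/*
 * Minimal sketch: when the primary device's temperature crosses the thermal
 * threshold, copy its contents to the spare device and switch over.
 */
#include <stdint.h>
#include <string.h>

#define DEV_SIZE 4096                             /* toy device capacity */

struct mem_device {
    uint8_t data[DEV_SIZE];
    int     temperature_c;
    int     active;
};

/* Called periodically by the memory controller's spare logic. */
void spare_check(struct mem_device *primary, struct mem_device *spare,
                 int thermal_threshold_c)
{
    if (primary->active && primary->temperature_c > thermal_threshold_c) {
        memcpy(spare->data, primary->data, DEV_SIZE);  /* dynamic transfer */
        spare->active   = 1;
        primary->active = 0;              /* subsequent accesses use spare */
    }
}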
Abstract:
Methods and apparatus relating to electrical margining of multi-parameter high-speed interconnect links with multi-sample probing are described. In one embodiment, logic is provided to generate one or more parameter values, corresponding to an electrical operating margin of an interconnect. The one or more parameter values are generated based on a plurality of eye observation sets to be detected in response to operation of the interconnect in accordance with a plurality of parameter sets (e.g., by using quantitative optimization techniques). In turn, the interconnect is to be operated at the one or more parameter values if it is determined that the one or more parameter values cause the interconnect to operate at an optimum level relative to an operation of the interconnect in accordance with one or more less optimum parameter levels. Other embodiments are also disclosed and claimed.
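
One way to picture the quantitative optimization is the C sketch below, which sweeps candidate parameter sets, scores each by a sampled eye-margin metric, and keeps the best-scoring set; the parameter fields and the probe callback are assumptions standing in for the hardware eye observation logic.

/*
 * Minimal sketch: evaluate each candidate parameter set with an eye-margin
 * probe and select the set that yields the largest operating margin.
 */
struct link_params {
    int tx_amplitude;
    int rx_eq;
    int sample_phase;
};

/* Returns an eye-margin score for the link run with the given parameters;
 * in hardware this would come from the multi-sample eye observation sets. */
typedef double (*eye_probe_fn)(const struct link_params *p);

struct link_params margin_search(const struct link_params *candidates,
                                 int n, eye_probe_fn probe)
{
    struct link_params best = candidates[0];
    double best_score = probe(&best);

    for (int i = 1; i < n; i++) {             /* evaluate each parameter set */
        double score = probe(&candidates[i]);
        if (score > best_score) {             /* keep the more optimal set   */
            best_score = score;
            best = candidates[i];
        }
    }
    return best;                              /* operate the link at these values */
}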