Abstract:
A memory device includes a stack of circuit layers, each circuit layer having formed thereon a memory circuit configured to store data and a redundant resources circuit configured to provide redundant circuitry to correct defective circuitry on at least one memory circuit formed on at least one layer in the stack. The redundant resources circuit includes a partial bank of redundant memory cells, wherein an aggregation of the partial banks of redundant memory cells across the circuit layers of the stack includes at least one full bank of redundant memory cells, and wherein the redundant resources circuit is configured to replace at least one defective bank of memory cells formed on any of the circuit layers in the stack with at least a portion of the partial bank of redundant memory cells formed on any of the circuit layers in the stack.
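As an illustrative sketch only (the stack height, bank size, and function names below are assumptions, not part of the disclosure), the aggregation of per-layer partial banks into one full redundant bank can be modeled as a simple row remapping:

```python
# Hypothetical parameters: a 4-layer stack in which each layer contributes a
# quarter-bank of redundant cells; the four partial banks together form one
# full redundant bank able to replace a defective bank on any layer.

BANK_ROWS = 1024                      # assumed rows per full memory bank
LAYERS = 4                            # assumed number of circuit layers
PARTIAL_ROWS = BANK_ROWS // LAYERS    # rows each layer's partial bank supplies

def remap_defective_row(row):
    """Map a row of a defective bank onto (layer, partial_row) within the
    aggregated redundant bank distributed across the stack."""
    layer = row // PARTIAL_ROWS        # which layer's partial bank holds it
    partial_row = row % PARTIAL_ROWS   # offset within that partial bank
    return layer, partial_row
```

For example, row 1023 of a defective bank lands in the last layer's partial bank, illustrating how no single layer needs a full spare bank.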
Abstract:
Disclosed are techniques for a memory buffer to track access to paged regions of a memory system at a configurable granularity finer than the size of the paged regions, providing more detailed statistics on memory access. The memory buffer may advertise its capabilities for fine-grained cold page tracking. The memory buffer may receive from the host information to configure a granularity of sub-regions of a paged region and a size of counters used to track access to the sub-regions. The memory buffer may track access requests to the sub-regions using the counters and provide information on sub-region tracking to the host to identify individual hot or cold sub-regions. With this more granular information, the host may make migration decisions for the paged regions, such as compacting sub-regions to create a cold page or treating each sub-region as a separately compressible entity to compress a mostly cold page.
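A minimal sketch of the buffer-side tracking, with hypothetical class and parameter names (the disclosure specifies only that granularity and counter size are host-configured):

```python
# Illustrative sketch: a paged region is split into host-configured
# sub-regions, each tracked by a saturating counter of host-configured width.

class SubRegionTracker:
    def __init__(self, page_size, subregion_size, counter_bits):
        self.subregion_size = subregion_size
        self.max_count = (1 << counter_bits) - 1    # saturating ceiling
        self.counters = [0] * (page_size // subregion_size)

    def record_access(self, offset):
        i = offset // self.subregion_size
        if self.counters[i] < self.max_count:       # saturate, don't wrap
            self.counters[i] += 1

    def cold_subregions(self, threshold):
        """Sub-region indices the host could compact or compress separately."""
        return [i for i, c in enumerate(self.counters) if c < threshold]
```

A 4 KiB page tracked at 512-byte granularity with 4-bit counters, for instance, yields eight counters whose readout lets the host distinguish a mostly cold page from a uniformly hot one.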
Abstract:
A memory system selectively compresses and/or decompresses pages of a memory array based on requests from a host device. Upon performing compression, the memory buffer device returns compression context metadata to the host device for storage in its page table, enabling the host device to subsequently obtain data from the compressed page. The host device may subsequently send a request for the memory buffer device to perform decompression to a free page in the memory array for access by the host device, or the host device may directly access the compressed page for local decompression and storage.
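The command flow can be sketched as follows; the metadata fields, method names, and use of DEFLATE are all illustrative assumptions, since the abstract does not specify a compression algorithm or metadata layout:

```python
import zlib

# Hypothetical sketch: the buffer compresses a page and returns context
# metadata for the host's page-table entry; the host may later request
# decompression into a free page, or fetch the compressed bytes directly.

class MemoryBuffer:
    def __init__(self):
        self.pages = {}                 # physical page number -> bytes

    def compress_page(self, ppn, data):
        self.pages[ppn] = zlib.compress(data)
        # context metadata returned to the host for its page-table entry
        return {"ppn": ppn, "compressed_len": len(self.pages[ppn]),
                "algo": "deflate"}

    def decompress_to(self, meta, free_ppn):
        # buffer-side decompression into a free page the host then accesses
        self.pages[free_ppn] = zlib.decompress(self.pages[meta["ppn"]])
        return free_ppn

    def read_compressed(self, meta):
        # direct-access path: host performs local decompression and storage
        return self.pages[meta["ppn"]]
```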
Abstract:
Page tables are created, maintained, and accessed by a virtual machine monitor running on a computing system rather than by the guest operating systems. This allows page table walks to be completed in fewer memory accesses than when the guest operating systems maintain the page tables. In addition, the virtual machine monitor may utilize additional resources to offload page table access and maintenance functions from the CPU to another device, such as a page table management device or page table management node. Offloading some or all page table access and maintenance functions to a specialized device or node enables the CPU to perform other tasks during page table walks and/or other page table maintenance functions.
Abstract:
A multi-processor device is disclosed. The multi-processor device includes interface circuitry to receive requests from at least one host device. A primary processor is coupled to the interface circuitry to process the requests in the absence of a failure event associated with the primary processor. A secondary processor processes operations on behalf of the primary processor and selectively receives the requests from the interface circuitry based on detection of the failure event associated with the primary processor.
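The request-steering behavior can be sketched in a few lines; the class and method names are hypothetical, and the secondary's mirroring of operations is elided:

```python
# Minimal sketch: interface circuitry routes host requests to the primary
# processor until a failure event is detected, after which the secondary
# (already processing operations on the primary's behalf) takes over.

class MultiProcessorDevice:
    def __init__(self):
        self.primary_failed = False

    def handle(self, request):
        # selective routing based on the failure-event flag
        target = "secondary" if self.primary_failed else "primary"
        return (target, request)

    def report_failure(self):
        # detection of a failure event associated with the primary processor
        self.primary_failed = True
```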
Abstract:
A buffer integrated circuit (IC) chip is disclosed. The buffer IC chip includes host interface circuitry to receive a read command to retrieve read data from a memory. Memory interface circuitry couples to the memory. Data freshness authentication circuitry performs a freshness verification operation on the read data. Read data forwarding circuitry, in a skid mode of operation, transmits the read data to the host prior to completion of the freshness verification operation.
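A sketch of the skid-mode ordering follows; note the abstract leaves failure handling unspecified, so the late error indication below is an assumption:

```python
# Illustrative sketch of skid-mode forwarding: read data is transmitted to
# the host immediately, and the freshness verification completes afterward.
# What happens on a failed check is assumed here (a late error indication).

def skid_read(read_data, verify_freshness, send_to_host):
    send_to_host(read_data)           # forward before verification completes
    if not verify_freshness(read_data):
        return "freshness_error"      # assumed late error signal to the host
    return "ok"
```

The design point illustrated is latency: in the common case where verification passes, the host never waits on the authentication circuitry.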
Abstract:
A memory buffer device facilitates secure read and write operations associated with data that includes a predefined data pattern. For read operations, the memory buffer device detects a read data pattern in the read data that matches a predefined data pattern. The memory buffer device may then generate a read response that includes metadata identifying the read data pattern without sending the read data itself. The memory buffer device may also receive Write Request without Data (RwoD) commands from the host that include metadata identifying a write data pattern. The memory buffer device identifies the associated data pattern and writes the data pattern or the metadata to the memory array. The memory buffer device may include encryption and decryption logic for communicating the metadata in encrypted form.
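As a sketch (the all-zeros pattern, 64-byte line size, and function names are assumed for illustration; encryption of the metadata is omitted):

```python
# Illustrative sketch: the buffer recognizes a predefined data pattern and
# answers a read with metadata instead of the data itself; an RwoD command
# carries only a pattern identifier, which the buffer expands on write.

PATTERNS = {0: b"\x00" * 64}     # pattern id -> assumed 64-byte data pattern

def make_read_response(read_data):
    for pid, pattern in PATTERNS.items():
        if read_data == pattern:
            return {"pattern_id": pid}     # metadata only, no data payload
    return {"data": read_data}             # ordinary read response

def handle_rwod(memory, addr, pattern_id):
    # Write Request without Data: expand the identified pattern on write
    memory[addr] = PATTERNS[pattern_id]
```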
Abstract:
Technologies for securing dynamic random access memory contents to nonvolatile memory in a persistent memory module are described. One persistent memory module includes an inline memory encryption (IME) circuit that receives a data stream from a host, encrypts the data stream into encrypted data, and stores the encrypted data in DRAM. A management processor transfers the encrypted data from the DRAM to persistent storage memory responsive to a signal associated with a power-loss or power-down event.
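The persistence path can be sketched as below; the XOR cipher is only a stand-in for the real inline memory encryption, and the class and key are hypothetical:

```python
# Hypothetical sketch: the inline engine encrypts host writes into DRAM; on
# a power-loss signal, the management processor copies the already-encrypted
# lines to persistent storage verbatim (no re-encryption needed).

def xor_cipher(data, key):        # stand-in for the real IME circuit
    return bytes(b ^ key for b in data)

class PersistentMemoryModule:
    def __init__(self, key=0x5A):
        self.key = key
        self.dram = {}            # volatile: lost on power-down
        self.flash = {}           # persistent storage memory

    def host_write(self, addr, data):
        self.dram[addr] = xor_cipher(data, self.key)   # stored encrypted

    def on_power_loss(self):
        # management processor transfers ciphertext to persistent storage
        self.flash.update(self.dram)
```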
Abstract:
Memory pages are background-relocated from a low-latency local operating memory of a server computer to a higher-latency memory installation that enables high-resolution access monitoring and thus access-demand differentiation among the relocated memory pages. Higher access-demand memory pages are background-restored to the low-latency operating memory, while lower access-demand pages are maintained in the higher-latency memory installation and yet-lower access-demand pages are optionally moved to a yet-higher-latency memory installation.
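The three-tier placement policy can be sketched as a threshold rule over the high-resolution access counts; the thresholds and tier names are assumptions:

```python
# Illustrative sketch: pages relocated to the monitored higher-latency tier
# are classified by measured access demand; hot pages return to local memory,
# the coldest optionally drop to a yet-higher-latency tier.

def place_pages(access_counts, hot_threshold, cold_threshold):
    placement = {}
    for page, count in access_counts.items():
        if count >= hot_threshold:
            placement[page] = "local"      # restore to low-latency memory
        elif count < cold_threshold:
            placement[page] = "far"        # yet-higher-latency installation
        else:
            placement[page] = "monitored"  # remain where monitoring occurs
    return placement
```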
Abstract:
An integrated circuit comprises an interface controller to receive a message, wherein at least a portion of the message is encrypted, a primary processor coupled to the interface controller and configured to process the received message, and a secondary secure processor coupled to the primary processor and to the interface controller. The secondary secure processor is configured to decrypt, on behalf of the primary processor, the encrypted portion of the message, analyze the decrypted portion to determine whether it comprises information pertaining to sensitive data, and, responsive to determining that it does, process the information pertaining to the sensitive data and provide the sensitive data to the interface controller via a secure private bus not accessible by the primary processor.
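A sketch of the secure routing decision follows; the XOR stand-in for decryption and the "KEY:" sensitivity marker are assumptions made purely for illustration, as the abstract does not specify how sensitive data is recognized:

```python
# Minimal sketch: the secondary secure processor decrypts the message
# portion, classifies it, and routes sensitive content over a private path
# that the primary processor cannot observe.

def xor_decrypt(blob, key=0x3C):      # stand-in for the real decryption
    return bytes(b ^ key for b in blob).decode()

def handle_message(encrypted_portion, private_bus, primary_queue):
    plaintext = xor_decrypt(encrypted_portion)
    if plaintext.startswith("KEY:"):       # assumed sensitivity marker
        private_bus.append(plaintext)      # bypasses the primary processor
    else:
        primary_queue.append(plaintext)    # ordinary processing path
```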