Abstract:
A memory device includes a stack of circuit layers, each circuit layer having formed thereon a memory circuit configured to store data and a redundant resources circuit configured to provide redundant circuitry to correct defective circuitry on at least one memory circuit formed on at least one layer in the stack. The redundant resources circuit includes a partial bank of redundant memory cells, wherein an aggregation of the partial bank of redundant memory cells in each of the circuit layers of the stack includes at least one full bank of redundant memory cells and wherein the redundant resources circuit is configured to replace at least one defective bank of memory cells formed on any of the circuit layers in the stack with at least a portion of the partial bank of redundant memory cells formed on any of the circuit layers in the stack.
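As a rough illustration of how partial redundant banks distributed across a stack can aggregate into one full spare bank, the Python sketch below remaps each row of a defective bank to a slice of some layer's partial redundant bank. The layer count, bank geometry, and remapping rule are assumptions made for illustration, not the patented implementation.

```python
# Hypothetical model: each of NUM_LAYERS stacked layers contributes a partial
# redundant bank of ROWS_PER_BANK // NUM_LAYERS rows; together they aggregate
# into one full spare bank that can replace a defective bank on any layer.

NUM_LAYERS = 4          # assumed stack height
ROWS_PER_BANK = 1024    # assumed rows per memory bank
PARTIAL_ROWS = ROWS_PER_BANK // NUM_LAYERS  # redundant rows held per layer

def remap_defective_bank(defect_layer, defect_bank):
    """Return a row-level remap table: each row of the defective bank is
    redirected to a slice of some layer's partial redundant bank."""
    remap = {}
    for row in range(ROWS_PER_BANK):
        spare_layer = row // PARTIAL_ROWS   # which layer's partial bank serves this row
        spare_row = row % PARTIAL_ROWS      # offset within that partial bank
        remap[(defect_layer, defect_bank, row)] = ("spare", spare_layer, spare_row)
    return remap

if __name__ == "__main__":
    table = remap_defective_bank(defect_layer=2, defect_bank=5)
    # Row 0 of the bad bank lands in layer 0's partial spare, row 1023 in layer 3's.
    print(table[(2, 5, 0)], table[(2, 5, ROWS_PER_BANK - 1)])
```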
Abstract:
Technologies for configurable adaptive double device data correction (ADDDC) are described. A memory buffer device includes error detection and correction (EDC) logic to autonomously detect errors in a region of memory and, upon detection of an error, calculate additional ECC check symbols for cache lines in the region and store the additional ECC check symbols as metadata associated with the cache lines.
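A minimal sketch of that flow is shown below, assuming the "additional ECC check symbols" can be stood in for by simple per-word XOR parity and that the metadata is a table keyed by cache-line address; neither assumption reflects the device's actual code or storage format.

```python
# Sketch: after EDC logic flags an error in a region, cover every cache line
# in that region with extra check symbols stored as metadata.

CACHE_LINE_BYTES = 64

def extra_check_symbols(line: bytes) -> bytes:
    """Compute stand-in check symbols (one XOR parity byte per 8-byte word)."""
    assert len(line) == CACHE_LINE_BYTES
    symbols = bytearray()
    for i in range(0, CACHE_LINE_BYTES, 8):
        parity = 0
        for b in line[i:i + 8]:
            parity ^= b
        symbols.append(parity)
    return bytes(symbols)

def on_error_detected(region_lines, metadata):
    """region_lines: cache-line address -> line data for the affected region."""
    for addr, line in region_lines.items():
        metadata[addr] = extra_check_symbols(line)

if __name__ == "__main__":
    region = {0x1000 + 64 * i: bytes([i] * CACHE_LINE_BYTES) for i in range(4)}
    meta = {}
    on_error_detected(region, meta)
    print({hex(a): m.hex() for a, m in meta.items()})
```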
Abstract:
Computing devices, methods, and systems for switch-based free memory tracking in data center environments are disclosed. An exemplary switch integrated circuit (IC), which is used in a switched fabric or a network, can include a processing device and a tracking structure that is distributed with at least a second switch IC. The tracking structure tracks free memory units that are accessible in a first set of nodes by the second switch IC. The processing device receives a request for a number of free memory units. The processing device forwards the request to a node in the first set of nodes that has at least the number of free memory units, forwards the request to the second switch IC that has at least the number of free memory units, or responds with an indication that the request could not be fulfilled.
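The routing decision can be sketched as below; the Switch class, the per-node free-unit table, and the per-node check on the peer switch are illustrative assumptions rather than the disclosed tracking structure.

```python
# Sketch of a switch IC deciding where to send a request for free memory units.

class Switch:
    def __init__(self, name, free_units, peer=None):
        self.name = name
        self.free_units = dict(free_units)  # node -> free memory units it exposes
        self.peer = peer                    # second switch IC sharing the tracking structure

    def handle_request(self, units):
        # 1) Forward to a directly attached node with enough free units.
        for node, free in self.free_units.items():
            if free >= units:
                return ("forward_to_node", node)
        # 2) Otherwise forward to the peer switch if its tracked nodes can serve
        #    the request (checked per node here; the real aggregation rule may differ).
        if self.peer and any(free >= units for free in self.peer.free_units.values()):
            return ("forward_to_switch", self.peer.name)
        # 3) Otherwise report that the request cannot be fulfilled.
        return ("unfulfilled", None)

if __name__ == "__main__":
    sw_b = Switch("B", {"node3": 512, "node4": 128})
    sw_a = Switch("A", {"node1": 64, "node2": 32}, peer=sw_b)
    print(sw_a.handle_request(48))    # served locally by node1
    print(sw_a.handle_request(256))   # forwarded to switch B
    print(sw_a.handle_request(4096))  # cannot be fulfilled
```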
Abstract:
Memory pages are background-relocated from a low-latency local operating memory of a server computer to a higher-latency memory installation that enables high-resolution access monitoring and thus access-demand differentiation among the relocated memory pages. Higher access-demand memory pages are background-restored to the low-latency operating memory, while lower access-demand pages are maintained in the higher-latency memory installation and yet-lower access-demand pages are optionally moved to a yet higher-latency memory installation.
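A toy version of the promotion/demotion policy might look like the following; the tier names and the two access-count thresholds are assumptions chosen for illustration.

```python
# Sketch: decide where each relocated page goes based on accesses observed
# while it sat in the monitored, higher-latency tier.

HOT_THRESHOLD = 64    # accesses above which a relocated page is restored
COLD_THRESHOLD = 4    # accesses below which a page moves to a slower tier

def reassign_pages(access_counts):
    """access_counts: page -> accesses observed in the monitored tier.
    Returns page -> destination tier."""
    placement = {}
    for page, hits in access_counts.items():
        if hits >= HOT_THRESHOLD:
            placement[page] = "local_low_latency"    # background-restore
        elif hits <= COLD_THRESHOLD:
            placement[page] = "far_higher_latency"   # optionally demote further
        else:
            placement[page] = "monitored_tier"       # stay put
    return placement

if __name__ == "__main__":
    counts = {"page_a": 200, "page_b": 17, "page_c": 1}
    print(reassign_pages(counts))
```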
Abstract:
A buffer integrated circuit (IC) chip is disclosed. The buffer IC chip includes host interface circuitry to receive a request from at least one host. The request includes at least one command to access a memory. Memory interface circuitry couples to the memory. Message authentication circuitry performs a verification operation on the received request. Selective containment circuitry, during a containment mode of operation, (1) inhibits changes to the memory in response to the at least one command until completion of the verification operation, and (2) during performance of the verification operation, carries out at least one non-memory modifying sub-operation associated with the at least one command.
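The containment-mode behavior can be sketched as below, assuming HMAC-SHA256 as the message authentication code and address decoding as the example non-memory-modifying sub-operation; both are stand-ins rather than the buffer IC's actual mechanisms.

```python
# Sketch: during containment mode, the write is not committed to memory until
# the authentication tag verifies, but address decode proceeds in the meantime.

import hmac, hashlib

KEY = b"shared-host-buffer-key"    # hypothetical host/buffer shared secret
MEMORY = {}                        # address -> data (the protected memory)

def decode_address(addr):
    """Non-memory-modifying sub-operation: split address into rank/bank/row."""
    return {"rank": addr >> 30, "bank": (addr >> 20) & 0x3FF, "row": addr & 0xFFFFF}

def handle_write(addr, data, tag, containment=True):
    if containment:
        decoded = decode_address(addr)   # allowed while verification is pending
        expected = hmac.new(KEY, addr.to_bytes(8, "little") + data, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return ("rejected", decoded)  # memory is never modified
    MEMORY[addr] = data                   # commit only after verification passes
    return ("committed", None)

if __name__ == "__main__":
    addr, data = 0x4000_0000, b"\xAA" * 8
    good_tag = hmac.new(KEY, addr.to_bytes(8, "little") + data, hashlib.sha256).digest()
    print(handle_write(addr, data, good_tag))
    print(handle_write(addr, b"\x00" * 8, b"\x00" * 32))
```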
Abstract:
A memory allocation device for deployment within a host server computer includes control circuitry, a first interface to a local processing unit disposed within the host computer and local operating memory disposed within the host computer, and a second interface to a remote computer. The control circuitry allocates a first portion of the local memory to a first process executed by the local processing unit and transmits, to the remote computer via the second interface, a request to allocate to a second process executed by the local processing unit a first portion of a remote memory disposed within the remote computer. The control circuitry further receives instructions via the first interface to store data at a memory address within the first portion of the remote memory and transmits those instructions to the remote computer via the second interface.
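A minimal sketch of the allocate-and-forward behavior follows; the class names and address arithmetic are illustrative, and the remote computer is modeled as a dictionary-backed stub.

```python
# Sketch: allocate local memory for one process, request remote memory for
# another, and forward stores that target the remote allocation.

class RemoteComputer:
    def __init__(self):
        self.memory = {}
        self.next_addr = 0x8000_0000      # assumed base of remote memory

    def allocate(self, size):
        base = self.next_addr
        self.next_addr += size
        return base                        # address of the remote allocation

    def store(self, addr, data):
        self.memory[addr] = data

class MemoryAllocationDevice:
    def __init__(self, remote):
        self.remote = remote               # second interface (to the remote computer)
        self.remote_allocations = {}       # process -> (base, size) in remote memory
        self.local_allocations = {}        # process -> (base, size) in local memory
        self.next_local = 0x1000

    def allocate_local(self, process, size):
        base = self.next_local
        self.next_local += size
        self.local_allocations[process] = (base, size)
        return base

    def allocate_remote(self, process, size):
        base = self.remote.allocate(size)  # request sent over the second interface
        self.remote_allocations[process] = (base, size)
        return base

    def store(self, process, addr, data):
        base, size = self.remote_allocations[process]
        if base <= addr < base + size:
            self.remote.store(addr, data)  # forward the store to the remote computer
        else:
            raise ValueError("address outside this process's remote allocation")

if __name__ == "__main__":
    device = MemoryAllocationDevice(RemoteComputer())
    device.allocate_local("proc_a", 4096)
    raddr = device.allocate_remote("proc_b", 4096)
    device.store("proc_b", raddr + 64, b"hello")
```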
Abstract:
A memory allocation device on an originating node requests an allocation of memory from a remote node. In response, the memory allocation device on the remote node returns a global system address that can be used to access the remote allocation from the originating node. Concurrent with the memory allocation device assigning (associating) a local (to its node) physical address to be used to access the remote allocation, the remote node allocates local physical memory to fulfill the remote allocation request. In this manner, the remote node has already completed the overhead operations associated with the requested remote allocation by the time the remote allocation is accessed by the originating node.
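The handshake can be sketched as follows; the global-system-address arithmetic and the node classes are assumptions made for illustration.

```python
# Sketch: the remote node eagerly backs the allocation and returns a global
# system address; the originating node concurrently binds a local physical
# address to it, so the first access pays no allocation overhead.

class RemoteNode:
    GSA_BASE = 0x1_0000_0000

    def __init__(self):
        self.backing = {}           # global system address -> local physical page
        self.next_gsa = self.GSA_BASE
        self.next_phys = 0x2000

    def handle_alloc_request(self, size):
        gsa = self.next_gsa
        self.next_gsa += size
        self.backing[gsa] = self.next_phys   # allocate physical memory up front
        self.next_phys += size
        return gsa                            # global system address for the originator

class OriginatingNode:
    def __init__(self, remote):
        self.remote = remote
        self.local_map = {}          # local physical address -> global system address
        self.next_local = 0x9000

    def allocate_remote(self, size):
        gsa = self.remote.handle_alloc_request(size)
        local = self.next_local      # concurrently assign a local physical address
        self.next_local += size
        self.local_map[local] = gsa
        return local                 # the remote allocation is accessed via this address

if __name__ == "__main__":
    origin = OriginatingNode(RemoteNode())
    addr = origin.allocate_remote(4096)
    print(hex(addr), hex(origin.local_map[addr]))
```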
Abstract:
Described are self-learning systems and methods for adaptive management of memory resources within a memory hierarchy. Memory allocations associated with different active functions are organized into blocks for placement in alternative levels of a memory hierarchy optimized for different metrics, e.g., cost and performance. A host processor monitors a performance metric of the active functions, such as the number of instructions per clock cycle, and reorganizes the function-specific blocks among the levels of the hierarchy. Over time, this process tends toward block organizations that improve the performance metric.
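A toy version of that self-learning loop appears below, using a hill-climbing policy and a stubbed IPC measurement; both are assumptions, as a real system would read hardware performance counters and apply its own placement policy.

```python
# Sketch: move one function-specific block at a time between hierarchy levels,
# keep moves that do not hurt the measured performance metric.

import random

LEVELS = ["fast_dram", "pooled_memory", "far_memory"]   # hypothetical hierarchy

def measure_ipc(placement):
    """Stub for the host processor's instructions-per-cycle measurement."""
    weights = {"fast_dram": 1.0, "pooled_memory": 0.7, "far_memory": 0.4}
    return sum(weights[level] for level in placement.values()) / len(placement)

def reorganize(placement, epochs=50, seed=0):
    rng = random.Random(seed)
    best_ipc = measure_ipc(placement)
    for _ in range(epochs):
        block = rng.choice(list(placement))
        old_level = placement[block]
        placement[block] = rng.choice(LEVELS)   # try moving one block
        ipc = measure_ipc(placement)
        if ipc >= best_ipc:
            best_ipc = ipc                      # keep the improvement
        else:
            placement[block] = old_level        # revert the move
    return placement, best_ipc

if __name__ == "__main__":
    blocks = {"func_a_block": "far_memory", "func_b_block": "pooled_memory",
              "func_c_block": "fast_dram"}
    print(reorganize(blocks))
```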