Abstract:
Hybrid multi-level memory architecture technologies are described. A System on Chip (SOC) includes multiple functional units and a multi-level memory controller (MLMC) coupled to the functional units. The MLMC is coupled to a hybrid multi-level memory architecture including a first-level dynamic random access memory (DRAM) (near memory) that is located on-package of the SOC and a second-level DRAM (far memory) that is located off-package of the SOC. The MLMC presents the first-level DRAM and the second-level DRAM as a contiguous addressable memory space and exposes the first-level DRAM to software as memory capacity in addition to that of the second-level DRAM. The first-level DRAM does not store a copy of the contents of the second-level DRAM.
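Purely as an illustrative sketch of the contiguous addressing described above (the names NEAR_MEM_SIZE and mlmc_route and the 1 GiB capacity are hypothetical, not taken from the abstract), the routing decision of such an MLMC might be modeled as a simple range check over one flat physical address space:

    #include <stdint.h>

    /* Hypothetical capacity: 1 GiB of on-package (near) DRAM followed by
     * off-package (far) DRAM in one flat physical address space.        */
    #define NEAR_MEM_SIZE  (1ULL << 30)

    typedef enum { TARGET_NEAR_DRAM, TARGET_FAR_DRAM } mem_target_t;

    /* Route a physical address: because near memory adds capacity rather
     * than mirroring far memory, the two ranges are disjoint and no copy
     * of far-memory contents is kept in near memory.                    */
    mem_target_t mlmc_route(uint64_t phys_addr, uint64_t *local_offset)
    {
        if (phys_addr < NEAR_MEM_SIZE) {
            *local_offset = phys_addr;              /* falls in near memory */
            return TARGET_NEAR_DRAM;
        }
        *local_offset = phys_addr - NEAR_MEM_SIZE;  /* remainder maps to far */
        return TARGET_FAR_DRAM;
    }

Because the two ranges are disjoint, software sees the sum of both capacities rather than a cache in front of a backing store.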
Abstract:
An apparatus is described that includes a memory controller having an interface to couple to a multi-level system memory. The memory controller also includes a coherency buffer and coherency services logic circuitry. The coherency buffer is to keep cache lines for which read and/or write requests have been received. The coherency services logic circuitry is coupled to the interface and the coherency buffer. The coherency services logic circuitry is to merge a cache line that has been evicted from a level of the multi-level system memory with another version of the cache line within the coherency buffer before writing the cache line back to a deeper level of the multi-level system memory if at least one of the following is true: the other version of the cache line is in a modified state; the memory controller has a pending write request for the cache line.
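A minimal sketch of the merge condition stated above, assuming hypothetical field names (modified, write_pending) for the buffered copy; it is not the patented implementation, only a restatement of the two triggering conditions in C:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical coherency-buffer entry; field names are illustrative only. */
    struct cb_entry {
        uint64_t addr;
        uint8_t  data[64];
        bool     modified;        /* buffered copy is in a modified state     */
        bool     write_pending;   /* controller holds a pending write request */
    };

    /* Decide whether an evicted line must be merged with the buffered copy
     * before it is written back to a deeper level of system memory.        */
    bool must_merge_before_writeback(const struct cb_entry *buffered)
    {
        return buffered->modified || buffered->write_pending;
    }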
Abstract:
REUT (Robust Electrical Unified Testing) for memory links is introduced, which speeds testing, tool development, and debug. In addition, it provides training hooks with enough performance to be used by the BIOS to train parameters and conditions that could not be trained with past implementations. Address pattern generation circuitry is also disclosed.
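The abstract does not say how the address patterns are generated; as one hedged illustration, test-pattern generators are often built from a linear-feedback shift register (LFSR). The sketch below is such a generic LFSR, with an arbitrary width and polynomial that are assumptions, not details from the disclosure:

    #include <stdint.h>

    /* Illustrative 16-bit Fibonacci LFSR as one possible way to generate a
     * pseudo-random address test sequence.  Taps correspond to the
     * maximal-length polynomial x^16 + x^14 + x^13 + x^11 + 1.           */
    uint16_t lfsr_next_address(uint16_t state)
    {
        uint16_t bit = ((state >> 0) ^ (state >> 2) ^
                        (state >> 3) ^ (state >> 5)) & 1u;
        return (uint16_t)((state >> 1) | (bit << 15));
    }

Seeded with any nonzero value, repeated calls walk the full 65535-entry sequence, which is why such generators are a common building block for memory link test traffic.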
Abstract:
Data pin mapping and delay training techniques are described. Valid values are detected on a command/address (CA) bus at a memory device. A first part of the CA pattern (high phase) is transmitted via a first subset of data pins on the memory device in response to detecting values on the CA bus; a second part of the pattern (low phase) is transmitted via a second subset of data pins on the memory device in response to detecting values on the CA bus. While the CA pattern is being transmitted, the memory controller samples signals from the data pins to obtain a first sample (high phase) and a second sample (low phase) of the memory device by analyzing the first and second subsets of sampled data pins. This analysis, combined with knowledge of the pattern transmitted on the CA bus, reveals the previously unknown data pin mapping. Varying the transmitted CA patterns and the resulting feedback sampled on the memory controller's data signals allows delay training of the CA/CTRL/CLK signals with or without prior knowledge of the data pin mapping.
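As a hedged sketch of the mapping-recovery idea only (the names, array shapes, and exact matching policy below are assumptions for illustration, not the disclosed method), a controller that knows the transmitted CA sequences could identify which logical CA bit each physical data pin reproduces:

    #include <stdint.h>

    #define NUM_CA_BITS    8   /* hypothetical CA bus width        */
    #define NUM_DQ_PINS    8   /* hypothetical data-pin count      */
    #define NUM_PATTERNS  16   /* CA patterns transmitted in turn  */

    /* transmitted[p][c] is the value driven on logical CA bit c for pattern p;
     * sampled[p][d] is what the controller observed on physical data pin d.
     * A data pin maps to the CA bit whose transmitted sequence it reproduces
     * across all patterns.  Returns -1 if no CA bit matches that pin.        */
    int find_ca_bit_for_pin(const uint8_t transmitted[NUM_PATTERNS][NUM_CA_BITS],
                            const uint8_t sampled[NUM_PATTERNS][NUM_DQ_PINS],
                            int pin)
    {
        for (int c = 0; c < NUM_CA_BITS; c++) {
            int match = 1;
            for (int p = 0; p < NUM_PATTERNS; p++) {
                if (sampled[p][pin] != transmitted[p][c]) { match = 0; break; }
            }
            if (match)
                return c;   /* physical pin 'pin' carries logical CA bit 'c' */
        }
        return -1;
    }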
Abstract:
An apparatus is described. The apparatus includes a memory controller to interface with a multi-level system memory. The multi-level system memory has a near memory level and a far memory level. The near memory level has a sectored cache to cache super lines having multiple cache lines as a single cacheable item. The memory controller has tracker circuitry to track status information of an old request super line and a new request super line that compete for a same slot within the sectored cache, wherein the status information includes an identification of which one of the old and new super lines is currently cached in the sectored cache.
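A minimal sketch of what the tracked status information might contain, using hypothetical field names; the abstract only states that the tracker records which of the two competing super lines is currently cached:

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical per-slot tracker entry: when a new super line contends
     * with an old super line for the same sectored-cache slot, the tracker
     * records both identities and which of the two currently occupies it.  */
    struct superline_tracker_entry {
        uint64_t old_superline_tag;  /* super line previously mapped to slot */
        uint64_t new_superline_tag;  /* super line now requesting the slot   */
        bool     new_is_cached;      /* true once the new super line resides */
        uint32_t sector_valid_mask;  /* per-sector presence bits for the
                                        resident super line                  */
    };

    /* True if the given super line is the one currently cached in the slot. */
    bool tracker_is_resident(const struct superline_tracker_entry *e,
                             uint64_t tag)
    {
        return tag == (e->new_is_cached ? e->new_superline_tag
                                        : e->old_superline_tag);
    }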
Abstract:
A memory subsystem includes a memory hierarchy that performs selective prefetching based on prefetch hints. A lower level memory detects a cache miss for a requested cache line that is part of a superline. The lower level memory generates a request vector for the cache line that triggered the cache miss, including a field for each cache line of the superline. The request vector includes a demand request for the cache line that caused the cache miss, and the lower level memory modifies the request vector with prefetch hint information. The prefetch hint information can indicate a prefetch request for one or more other cache lines in the superline. The lower level memory sends the request vector with the prefetch hint information to the higher level memory, and the higher level memory services the demand request and selectively either services or drops each prefetch hint.
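As an illustrative sketch (the field names, the 8-line superline geometry, and the hint-every-other-line policy are assumptions, not details from the abstract), the request vector could be represented as a demand bit plus droppable hint bits:

    #include <stdint.h>

    #define LINES_PER_SUPERLINE 8   /* hypothetical superline geometry */

    /* Illustrative request vector: one demand bit for the line that missed
     * and one hint bit per other line of the same superline.              */
    struct request_vector {
        uint64_t superline_addr;
        uint8_t  demand_mask;    /* exactly one bit set: the missing line   */
        uint8_t  prefetch_mask;  /* hints the higher level memory may drop  */
    };

    /* Build the vector for a miss on 'miss_index', hinting the remaining
     * lines of the superline (one possible hinting policy).               */
    struct request_vector make_request_vector(uint64_t superline_addr,
                                              int miss_index)
    {
        struct request_vector rv;
        rv.superline_addr = superline_addr;
        rv.demand_mask    = (uint8_t)(1u << miss_index);
        rv.prefetch_mask  = (uint8_t)(~rv.demand_mask);
        return rv;
    }

Only demand_mask carries a demand, so the higher level memory is free to service or drop any bit in prefetch_mask without affecting correctness, which is the selectivity the abstract describes.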