Abstract:
Examples are disclosed for probabilistic dynamic random access memory (DRAM) row repair. In some examples, a probabilistic row hammer detection value may be determined using a row hammer limit for the DRAM and a maximum activation rate for the DRAM. The probabilistic row hammer detection value may then be used such that there is an acceptably low probability that a given activation to an aggressor row of the DRAM causes the row hammer limit to be exceeded before a scheduled row refresh is performed on one or more victim rows associated with the aggressor row. Other examples are described and claimed.
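As a rough, non-authoritative sketch of the sizing idea (not the claimed method), a per-activation detection probability can be derived from the row hammer limit and the worst-case number of activations that fit between scheduled refreshes; the function names, parameters, and miss budget below are illustrative.

```python
import random

def detection_probability(row_hammer_limit: int,
                          max_activation_rate: float,
                          refresh_window_s: float,
                          miss_budget: float = 1e-9) -> float:
    """Choose a per-activation sampling probability p so that the chance of
    every relevant activation going undetected, (1 - p) ** worst_case_acts,
    stays below miss_budget. Values here are illustrative, not from the
    disclosure."""
    worst_case_acts = min(row_hammer_limit,
                          int(max_activation_rate * refresh_window_s))
    return 1.0 - miss_budget ** (1.0 / worst_case_acts)

def on_activate(row: int, p: float, refresh_victims) -> None:
    """Sample each activation; a hit refreshes the rows adjacent to the aggressor."""
    if random.random() < p:
        refresh_victims(row - 1, row + 1)

# Example: 50K row hammer limit, ~12.5M activations/s, 64 ms refresh window
p = detection_probability(50_000, 12.5e6, 0.064)
print(f"per-activation detection probability ~ {p:.2e}")
```

With p chosen this way, the chance that an aggressor row reaches the row hammer limit without any of its activations being sampled stays below the chosen miss budget.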
Abstract:
Data is sent from a memory buffer device to a host device over a link. An error in the data is determined. A read response cancellation signal is sent to the host device to indicate the error to the host device, where the read response cancellation signal is to be sent subsequent to the data being sent from the memory buffer device to the host device.
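A minimal sketch of the ordering described above, assuming a late error check (for example, a check that completes only after the data has already been forwarded); the event names and the detect_error hook are hypothetical, not part of any defined link protocol.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class LinkEvent:
    kind: str            # "DATA" or "RSP_CANCEL" (hypothetical event names)
    tag: int
    payload: bytes = b""

def respond_to_read(tag: int, data: bytes,
                    detect_error: Callable[[bytes], bool]) -> List[LinkEvent]:
    """The read data goes out on the link first; if an error is then
    determined, a read response cancellation follows, telling the host
    to discard the data it already received."""
    events = [LinkEvent("DATA", tag, data)]
    if detect_error(data):
        events.append(LinkEvent("RSP_CANCEL", tag))
    return events

# Example: a check that flags an all-zero payload as bad (purely illustrative)
print(respond_to_read(7, b"\x00\x00", lambda d: not any(d)))
```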
Abstract:
A first die has a port to couple the first die to a second die over a die-to-die interconnect. The port includes circuitry to implement a physical layer of the die-to-die interconnect, send first protocol identification data over the physical layer to identify a first protocol in a plurality of protocols, send first data over the interconnect to the second die, wherein the first data comprise data of the first protocol, send second protocol identification data over the physical layer to identify a different second protocol in the plurality of protocols, and send second data over the interconnect to the second die, wherein the second data comprise flits of the second protocol.
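One way to picture the protocol multiplexing is a sender that emits protocol identification data on the physical layer before each run of payloads. This is a hedged sketch with made-up protocol identifiers, not the actual encoding of any die-to-die interconnect.

```python
from typing import Callable, Iterable, List, Tuple

PROTOCOL_IDS = {"protocol_a": 0x1, "protocol_b": 0x2}   # illustrative identifiers

def send_stream(phy_tx: Callable[[Tuple[str, object]], None],
                protocol: str, payloads: Iterable[bytes]) -> None:
    """Emit protocol identification data on the physical layer, then the
    payloads (data or flits) of that protocol. Switching to a different
    protocol simply means sending a new identification word first."""
    phy_tx(("PROTO_ID", PROTOCOL_IDS[protocol]))
    for payload in payloads:
        phy_tx(("PAYLOAD", payload))

# Example: two protocols interleaved over the same die-to-die port
wire: List[Tuple[str, object]] = []
send_stream(wire.append, "protocol_a", [b"raw data"])
send_stream(wire.append, "protocol_b", [b"flit0", b"flit1"])
print(wire)
```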
Abstract:
A shared memory controller receives a flit from another, first shared memory controller over a shared memory link, where the flit includes a node identifier (ID) field and an address of a particular line of the shared memory. The node ID field identifies that the first shared memory controller corresponds to a source of the flit. Further, a second shared memory controller is determined from at least the address field of the flit, where the second shared memory controller is connected to a memory element corresponding to the particular line. The flit is forwarded to the second shared memory controller using a shared memory link according to a routing path.
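A simplified software model of the forwarding decision, assuming an address map from shared-memory ranges to owning controllers and a routing table of next-hop send functions; the class and field names are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Flit:
    node_id: int      # source shared memory controller (node ID field)
    address: int      # address of the particular line of shared memory
    payload: bytes = b""

class SharedMemoryController:
    """Illustrative model: decide from the address which controller owns the
    line, then either service the flit locally or forward it along the
    configured routing path."""

    def __init__(self, smc_id, address_map, routing_table):
        self.smc_id = smc_id
        self.address_map = address_map        # (lo, hi) range -> owning SMC id
        self.routing_table = routing_table    # owning SMC id -> next-hop send callable
        self.local_lines = {}                 # lines backed by this controller's memory

    def owner_of(self, address):
        for (lo, hi), owner in self.address_map.items():
            if lo <= address < hi:
                return owner
        raise KeyError(f"no owner for address {address:#x}")

    def receive(self, flit: Flit):
        owner = self.owner_of(flit.address)
        if owner == self.smc_id:
            self.local_lines[flit.address] = flit.payload   # service locally
        else:
            self.routing_table[owner](flit)                 # forward along routing path
```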
Abstract:
Provided are an apparatus and method to store, in a sparing directory, data from cache line locations having errors. In response to a write operation having write data for locations in one of the cache lines, the write data for a location in the cache line having an error is written to a sparing directory entry that includes an address of the cache line.
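A toy model of the sparing path, assuming the set of failed locations per cache line is already known; dictionary-backed storage stands in for the actual data arrays and directory hardware.

```python
class SparingDirectory:
    """Write data for a location with a known error is kept in a directory
    entry addressed by the cache line, while healthy locations take the
    normal path into the cache line itself."""

    def __init__(self):
        self.cache = {}      # line_address -> {location: value}  (data array stand-in)
        self.entries = {}    # line_address -> {location: value}  (sparing directory)

    def handle_write(self, line_address, write_data, bad_locations):
        for location, value in write_data.items():
            target = self.entries if location in bad_locations else self.cache
            target.setdefault(line_address, {})[location] = value

    def read(self, line_address, location):
        # A sparing entry, when present, overrides the faulty location.
        spare = self.entries.get(line_address, {})
        if location in spare:
            return spare[location]
        return self.cache[line_address][location]

# Example: location 3 of line 0x40 has a stuck bit, so its data is spared
sd = SparingDirectory()
sd.handle_write(0x40, {0: 0xAA, 3: 0x55}, bad_locations={3})
assert sd.read(0x40, 3) == 0x55
```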
Abstract:
A plurality of completed writes to memory are identified corresponding to a plurality of write requests from a host device received over a buffered memory interface. A completion packet that includes a plurality of write completions corresponding to the plurality of completed writes is sent to the host device.
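A small sketch of the coalescing behaviour, assuming completions are identified by a write ID and flushed to the host either when a batch fills or on an explicit flush; all names are illustrative.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class CompletionPacket:
    write_ids: List[int] = field(default_factory=list)   # one entry per completed write

class CompletionCoalescer:
    def __init__(self, send_to_host: Callable[[CompletionPacket], None],
                 batch_size: int = 8):
        self.send_to_host = send_to_host
        self.batch_size = batch_size
        self.pending: List[int] = []

    def write_completed(self, write_id: int) -> None:
        """Record one completed write; a single packet carrying every pending
        completion is sent once the batch fills."""
        self.pending.append(write_id)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        if self.pending:
            self.send_to_host(CompletionPacket(self.pending))
            self.pending = []

# Example: eight completed writes acknowledged with one packet
coalescer = CompletionCoalescer(print, batch_size=8)
for wid in range(8):
    coalescer.write_completed(wid)
```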
Abstract:
A server, processing device and/or processor includes a processing core and a memory controller, operatively coupled to the processing core, to access data in an off-chip memory. A memory encryption engine (MEE) may be operatively coupled to the memory controller and the off-chip memory. The MEE may store non-MEE metadata bits within a modified version line corresponding to ones of a plurality of data lines stored in a protected region of the off-chip memory, compute an embedded message authentication code (eMAC) using the modified version line, and detect an attempt to modify one of the non-MEE metadata bits by using the eMAC within a MEE tree walk to authenticate access to the plurality of data lines. The non-MEE metadata bits may store coherence bits that track changes to a cache line in a remote socket, poison bits that track error containment within the data lines, and possibly other metadata bits.
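The detection property can be illustrated with a software stand-in: if the non-MEE metadata bits ride inside the version line that the embedded MAC covers, any modification of those bits fails authentication during the tree walk. HMAC-SHA256 and the bit packing below are placeholders, not the MEE's actual MAC or layout.

```python
import hmac, hashlib

KEY = b"illustrative per-boot MEE key"

def pack_version_line(counters: bytes, coherence_bit: int, poison_bit: int) -> bytes:
    """Place the non-MEE metadata bits inside the version line so that the
    embedded MAC computed over the line also covers them (layout is made up)."""
    metadata = bytes([(coherence_bit & 1) | ((poison_bit & 1) << 1)])
    return counters + metadata

def emac(version_line: bytes) -> bytes:
    """Software stand-in for the embedded MAC; real hardware uses its own MAC."""
    return hmac.new(KEY, version_line, hashlib.sha256).digest()[:8]

line = pack_version_line(b"\x01" * 7, coherence_bit=1, poison_bit=0)
stored = emac(line)

# Flipping a poison bit changes the MAC, so the MEE tree walk would reject it.
tampered = pack_version_line(b"\x01" * 7, coherence_bit=1, poison_bit=1)
assert emac(tampered) != stored
```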
Abstract:
Provided are an apparatus and method for a non-power-of-2 size cache in a first level memory device to cache data present in a second level memory device having a 2^n cache size. A request to a target address having n bits is directed to the second level memory device. A determination is made whether a target index, comprising m bits of the n bits of the target address, is within an index set of the first level memory device. When the target index is not within the index set, a determination is made of a modified target index in the index set of the first level memory device having at least one index bit that differs from a corresponding at least one index bit in the target index. The request is processed with respect to data in a cache line at the modified target index in the first level memory device.
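A minimal sketch of one possible index fold, assuming the number of first-level sets lies between 2^(m-1) and 2^m; it is not the claimed mapping, only an illustration of deriving a modified target index that differs from the target index in at least one bit while landing inside the index set.

```python
def first_level_index(target_address: int, m: int, num_sets: int) -> int:
    """Map the m index bits of an n-bit target address onto num_sets first-level
    sets, assuming 2**(m-1) < num_sets < 2**m. An index already inside the index
    set is used as-is; otherwise the top index bit is cleared, yielding a modified
    target index that differs in one bit and falls within the set."""
    target_index = target_address & ((1 << m) - 1)       # low m bits of the address
    if target_index < num_sets:                          # within the index set
        return target_index
    return target_index & ~(1 << (m - 1))                # modified target index

# Example: 12K sets carved out of a 16K (2**14) index space
print(first_level_index(0x3FFF, m=14, num_sets=12 * 1024))
```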