Abstract:
A shared memory controller is to service load and store operations received, over data links, from a plurality of independent nodes to provide access to a shared memory resource. Each of the plurality of independent nodes is to be permitted to access a respective portion of the shared memory resource. Interconnect protocol data and memory access protocol data are sent on the data links, and transitions between the interconnect protocol data and the memory access protocol data can be defined and identified.
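A minimal sketch in C of how a controller might tag flits and detect transitions between the two traffic classes on a shared link; the flit layout, the single-bit protocol marker, and the names are illustrative assumptions, not the patent's actual encoding.

    #include <stdint.h>
    #include <stdbool.h>

    /* Assumed encoding: bit 0 of the flit header marks memory access
     * protocol data vs. interconnect protocol data. */
    typedef enum { PROTO_INTERCONNECT, PROTO_MEM_ACCESS } proto_t;

    typedef struct {
        uint64_t header;
        uint8_t  payload[64];
    } flit_t;

    static proto_t classify_flit(const flit_t *f) {
        return (f->header & 0x1u) ? PROTO_MEM_ACCESS : PROTO_INTERCONNECT;
    }

    /* True when consecutive flits cross a protocol boundary, i.e. a
     * defined transition between interconnect and memory access data. */
    static bool is_protocol_transition(const flit_t *prev, const flit_t *cur) {
        return classify_flit(prev) != classify_flit(cur);
    }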
Abstract:
A processor implementing techniques for supporting configurable security levels for memory address ranges is disclosed. In one embodiment, the processor includes a processing core; a memory controller, operatively coupled to the processing core, to access data in an off-chip memory; and a memory encryption engine (MEE) operatively coupled to the memory controller. The MEE is to, responsive to detecting a memory access operation with respect to a memory location identified by a memory address within a memory address range associated with the off-chip memory, identify a security level indicator associated with the memory location based on a value stored in a security range register. The MEE is further to access at least a portion of a data item associated with the memory address range of the off-chip memory in view of the security level indicator.
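A small C sketch of the range check implied above, assuming a hypothetical security range register holding a base/limit pair and a security level value; the field names and level encodings are assumptions.

    #include <stdint.h>

    /* Illustrative model of a security range register for the protected
     * off-chip range. Names and level meanings are assumptions. */
    typedef struct {
        uint64_t base;       /* start of protected memory address range   */
        uint64_t limit;      /* end of protected memory address range     */
        uint8_t  sec_level;  /* e.g. 0 = plaintext, 1 = encryption only,
                                2 = encryption + integrity protection     */
    } sec_range_reg_t;

    static uint8_t lookup_security_level(const sec_range_reg_t *reg,
                                         uint64_t addr) {
        if (addr >= reg->base && addr < reg->limit)
            return reg->sec_level;   /* address is in the protected range */
        return 0;                    /* default: unprotected access       */
    }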
Abstract:
Data is sent from a memory buffer device to a host device over a link. An error in the data is determined. A read response cancellation signal is sent to the host device to indicate the error to the host device, where the read response cancellation signal is to be sent subsequent to the data being sent from the memory buffer device to the host device.
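A runnable C sketch of the buffer-side ordering described above: the read data is transmitted first, and a cancellation carrying the same tag follows only when an error is found afterward. The send_flit helper and the tag-based framing are assumptions used for illustration.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in for the buffered-memory link transmit path. */
    static void send_flit(const char *kind, uint16_t tag) {
        printf("%s tag=%u\n", kind, (unsigned)tag);
    }

    static void return_read_data(uint16_t tag, bool error_detected_late) {
        send_flit("READ_DATA", tag);          /* data goes to the host first */
        if (error_detected_late)
            send_flit("READ_CANCEL", tag);    /* host discards the bad data  */
    }

    int main(void) {
        return_read_data(7, false);  /* clean return: data only              */
        return_read_data(8, true);   /* error found after send: data + cancel*/
        return 0;
    }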
Abstract:
A sequence of read returns is to be sent to a host device over a transactional buffered memory interface, where the sequence includes at least a first read return to a first read request and a second read return to a second read request. A tracker identifier of the second read return is encoded in the first read return, and the first read return is sent with the tracker identifier of the second read return to the host device. The second read return is sent to the host device after the first read return is sent.
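A C sketch of one way the chaining could look, assuming each read return carries a field for the tracker identifier of the return that will follow it; the structure layout and the end-of-sequence convention are assumptions.

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative read-return framing: each return advertises the tracker
     * ID of the next return in the sequence. Field names are assumed. */
    typedef struct {
        uint16_t tracker_id;       /* identifies the read request answered  */
        uint16_t next_tracker_id;  /* tracker ID of the read return to follow */
        uint8_t  data[64];
    } read_return_t;

    /* Chain a sequence of pending returns: return i advertises return i+1. */
    static void chain_read_returns(read_return_t *rets, size_t n) {
        for (size_t i = 0; i + 1 < n; i++)
            rets[i].next_tracker_id = rets[i + 1].tracker_id;
        if (n)  /* assumed convention: last return points to itself */
            rets[n - 1].next_tracker_id = rets[n - 1].tracker_id;
    }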
Abstract:
Examples are disclosed for determining a logical address of one or more victim rows of a volatile memory based on a logical address of an aggressor row and address translation schemes associated with the volatile memory. Other examples are described and claimed.
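A C sketch of the idea, assuming a toy XOR swizzle as the logical-to-physical row translation scheme; real devices use vendor-specific mappings, so the translation functions here are placeholders.

    #include <stdint.h>

    static uint32_t logical_to_physical(uint32_t logical_row) {
        return logical_row ^ 0x1u;      /* illustrative translation only */
    }

    static uint32_t physical_to_logical(uint32_t physical_row) {
        return physical_row ^ 0x1u;     /* inverse of the swizzle above  */
    }

    /* Victims are the rows physically adjacent to the aggressor; their
     * logical addresses are recovered by translating back. */
    static void victim_rows(uint32_t aggressor_logical,
                            uint32_t *victim_lo, uint32_t *victim_hi) {
        uint32_t phys = logical_to_physical(aggressor_logical);
        *victim_lo = physical_to_logical(phys - 1);
        *victim_hi = physical_to_logical(phys + 1);
    }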
Abstract:
A shared memory controller receives a flit from a first shared memory controller over a shared memory link, where the flit includes a node identifier (ID) field and an address of a particular line of the shared memory. The node ID field identifies that the first shared memory controller corresponds to a source of the flit. Further, a second shared memory controller is determined from at least the address field of the flit, where the second shared memory controller is connected to a memory element corresponding to the particular line. The flit is forwarded to the second shared memory controller over a shared memory link according to a routing path.
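A C sketch of the forwarding decision, assuming a simple table that maps shared-memory address ranges to the shared memory controller attached to the corresponding memory element; the flit fields and table format are assumptions.

    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        uint8_t  src_node_id;  /* node ID field: identifies the source SMC    */
        uint64_t address;      /* address of the particular shared-memory line*/
    } smc_flit_t;

    typedef struct {
        uint64_t base, limit;  /* address range owned by a given SMC          */
        uint8_t  smc_id;       /* SMC attached to the memory for that range   */
    } smc_route_t;

    /* Pick the SMC whose attached memory element holds the addressed line. */
    static int route_flit(const smc_flit_t *f,
                          const smc_route_t *table, size_t n) {
        for (size_t i = 0; i < n; i++)
            if (f->address >= table[i].base && f->address < table[i].limit)
                return table[i].smc_id;  /* forward over the shared memory link */
        return -1;                       /* no route for this address           */
    }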
Abstract:
Electronic circuitry of a computing system is described, where the computing system includes a multi-level system memory having a near memory cache. The computing system directs system memory access requests whose addresses map to a same near memory cache slot to a same home caching agent, so that the home caching agent can characterize individual cache lines as inclusive or non-inclusive before forwarding the requests to a system memory controller. Other system memory access requests are directed to the system memory controller without passing through a home caching agent. The electronic circuitry is to modify the respective original addresses of these other requests to include a special code that causes them to map to a specific pre-determined set of slots within the near memory cache.
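A C sketch of the address modification, assuming the near memory cache slot is selected by a fixed bit field of the physical address and that the special code overwrites that field; the bit positions and code value are assumptions.

    #include <stdint.h>

    #define SLOT_SHIFT   6u        /* cache-line offset bits (assumed)        */
    #define SLOT_BITS    10u       /* 1024 near-memory cache slots (assumed)  */
    #define SLOT_MASK    (((uint64_t)1 << SLOT_BITS) - 1)
    #define SPECIAL_CODE 0x3FFull  /* reserved slot set for non-home traffic  */

    static uint64_t slot_of(uint64_t addr) {
        return (addr >> SLOT_SHIFT) & SLOT_MASK;
    }

    /* Rewrite the slot-selecting bits of a request that bypasses the home
     * caching agent so it maps to the pre-determined slot set. */
    static uint64_t redirect_to_special_slots(uint64_t addr) {
        return (addr & ~(SLOT_MASK << SLOT_SHIFT)) | (SPECIAL_CODE << SLOT_SHIFT);
    }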
Abstract:
A memory subsystem includes a group of memory devices connected to an address bus. The memory subsystem includes logic to uniquely map a physical address of a memory access command to each memory device of the group. Thus, each physical address sent by an associated memory controller uniquely accesses a different row of each memory device, instead of being mapped to the same or corresponding row of each memory device.
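A C sketch of a per-device row remapping, assuming an XOR swizzle keyed by device index; the actual mapping logic in the subsystem could differ, and this only illustrates that one address on the shared bus selects a different row in each device.

    #include <stdint.h>

    /* The same row address presented on the shared address bus is swizzled
     * differently for each memory device in the group, so it selects a
     * different row in each device rather than the same row in all of them. */
    static uint32_t device_row(uint32_t bus_row, uint32_t device_index) {
        uint32_t key = device_index * 0x9E3779B1u;   /* per-device constant  */
        return bus_row ^ (key & 0xFFFFu);            /* 16-bit row swizzle   */
    }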
Abstract:
Provided are an apparatus and method to store, in a sparing directory, data destined for cache line locations having errors. In response to a write operation having write data for locations in one of the cache lines, the write data for a location in the cache line having an error is written to an entry in the sparing directory that includes an address of the cache line.
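A C sketch of a sparing-directory write path, assuming an entry holds the cache line address, the faulty location's offset, and the redirected write data; the entry layout and allocation policy are assumptions.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct {
        bool     valid;
        uint64_t line_addr;    /* address of the cache line                   */
        uint8_t  bad_offset;   /* which location in the line has the error    */
        uint8_t  spare_data;   /* write data redirected to the directory      */
    } spare_entry_t;

    /* On a write that touches a faulty location, capture that byte in the
     * sparing directory instead of relying on the bad storage cell. */
    static bool spare_write(spare_entry_t *dir, size_t n, uint64_t line_addr,
                            uint8_t bad_offset, uint8_t data) {
        for (size_t i = 0; i < n; i++) {
            if (!dir[i].valid) {
                dir[i] = (spare_entry_t){ true, line_addr, bad_offset, data };
                return true;
            }
        }
        return false;          /* directory full                              */
    }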