Abstract:
Techniques are disclosed relating to an I/O agent circuit of a computer system. The I/O agent circuit may receive, from a peripheral component, a set of transaction requests to perform a set of read transactions directed to one or more of a plurality of cache lines. The I/O agent circuit may issue, to a first memory controller circuit configured to manage access to a first one of the plurality of cache lines, a request for exclusive read ownership of the first cache line, such that the data of the first cache line is not cached, in a valid state, outside of the memory and the I/O agent circuit. The I/O agent circuit may receive exclusive read ownership of the first cache line, including receiving the data of the first cache line. The I/O agent circuit may then perform the set of read transactions with respect to the data.
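The exclusive-read-ownership flow above can be sketched in a few lines. This is a minimal illustrative model, not the patented implementation; the class and field names (`MemoryController`, `IOAgent`, `owner`) are assumptions for the sketch.

```python
class MemoryController:
    """Manages access to a set of cache lines and tracks exclusive ownership."""
    def __init__(self, lines):
        self.lines = dict(lines)   # address -> line data held in memory
        self.owner = {}            # address -> agent holding exclusive read ownership

    def grant_exclusive_read(self, agent, addr):
        # Invalidate any other cached copy so the data is valid only in
        # memory and in the requesting I/O agent.
        prev = self.owner.get(addr)
        if prev is not None and prev is not agent:
            prev.invalidate(addr)
        self.owner[addr] = agent
        return self.lines[addr]

class IOAgent:
    """Caches a line under exclusive read ownership and serves reads locally."""
    def __init__(self, controller):
        self.controller = controller
        self.cache = {}

    def invalidate(self, addr):
        self.cache.pop(addr, None)

    def perform_reads(self, addr, count):
        if addr not in self.cache:
            self.cache[addr] = self.controller.grant_exclusive_read(self, addr)
        # Every read in the set hits the locally held copy.
        return [self.cache[addr] for _ in range(count)]
```

The point of the sketch is the invariant: after the grant, the line is cached only by the requesting agent, so the whole set of reads completes without further fabric traffic.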
Abstract:
A system and method for efficiently allocating resources of destinations to sources that convey requests to those destinations. In various embodiments, a computing system includes multiple sources that generate requests and multiple destinations that service the requests. One or more transaction tables store requests received from the multiple sources. Arbitration logic selects requests and stores them in a processing table. When the logic selects a given request from the processing table and determines that resources for the corresponding destination are unavailable, the logic removes the given request from the processing table and allocates the request in a retry handling queue. When the retry handling queue has no data storage for the request, the logic updates a transaction table entry and maintains a count of such occurrences. When the count exceeds a threshold, the logic stalls requests for that source. Requests in the retry handling queue have priority over requests in the transaction tables.
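The retry-handling policy above (bounded retry queue, per-source overflow count, stall past a threshold, retry priority over fresh requests) can be sketched as follows. All names and the dict-based request format are illustrative assumptions, not the patented design.

```python
from collections import deque

class RetryArbiter:
    """Sketch: bounded retry queue, per-source overflow counting, and
    stalling a source once its overflow count exceeds a threshold."""
    def __init__(self, retry_capacity, stall_threshold):
        self.retry_queue = deque()
        self.retry_capacity = retry_capacity
        self.overflow_count = {}        # source -> count of retry-queue overflows
        self.stall_threshold = stall_threshold
        self.stalled = set()

    def destination_unavailable(self, request):
        source = request["source"]
        if len(self.retry_queue) < self.retry_capacity:
            # Removed from the processing table, parked in the retry queue.
            self.retry_queue.append(request)
        else:
            # No retry storage: note it against the source and count it.
            self.overflow_count[source] = self.overflow_count.get(source, 0) + 1
            if self.overflow_count[source] > self.stall_threshold:
                self.stalled.add(source)

    def next_request(self, transaction_table):
        # Retried requests have priority over transaction-table entries.
        if self.retry_queue:
            return self.retry_queue.popleft()
        for req in transaction_table:
            if req["source"] not in self.stalled:
                return req
        return None
```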
Abstract:
Systems, apparatuses, and methods for reducing memory cache control command hops through a fabric are disclosed. A system includes an interconnect fabric, a plurality of transaction processing queues, and a plurality of memory pipelines. Each memory pipeline includes an arbiter, a combined coherence point and memory cache controller unit, and a memory controller coupled to a memory channel. Each combined unit includes a memory cache controller, a memory cache, and a duplicate tag structure. A single arbiter per memory pipeline performs arbitration across the transaction processing queues to select a transaction address to feed the memory pipeline's combined unit. The combined unit performs coherence operations and a memory cache lookup for the selected transaction. Only after processing is completed in the combined unit is the transaction moved out of its transaction processing queue, reducing power consumption caused by data movement through the fabric.
Abstract:
An embodiment of an apparatus includes a retry queue circuit, a transaction arbiter circuit, and a plurality of transaction buffers. The retry queue circuit may store one or more entries corresponding to one or more memory transactions. The position of an entry in the retry queue circuit may correspond to the priority for processing the memory transaction corresponding to that entry. The transaction arbiter circuit may receive a real-time memory transaction from a particular transaction buffer. In response to a determination that the real-time memory transaction is unable to be processed, the transaction arbiter circuit may create an entry for the real-time memory transaction in the retry queue circuit. In response to a determination that a bulk memory transaction is scheduled for processing prior to the real-time memory transaction, the transaction arbiter circuit may upgrade the bulk memory transaction to use real-time memory resources.
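The position-as-priority retry queue and the bulk-to-real-time upgrade can be sketched as below. The class names and the `"class"` field are illustrative assumptions; the abstract does not specify a data layout.

```python
class RetryQueue:
    """Sketch: queue position encodes priority (index 0 is processed
    first); a bulk transaction scheduled ahead of an incoming real-time
    transaction is upgraded to use real-time memory resources."""
    def __init__(self):
        self.entries = []   # index 0 = highest priority

    def add_real_time(self, txn):
        txn["class"] = "real-time"
        # Any bulk transaction already queued ahead of this one would be
        # processed first, so upgrade it to real-time resources.
        for earlier in self.entries:
            if earlier["class"] == "bulk":
                earlier["class"] = "real-time"
        self.entries.append(txn)
```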
Abstract:
In an embodiment, an apparatus includes multiple memory resources, and a resource table that includes entries that correspond to respective memory resources of the multiple memory resources. The apparatus also includes a circuit configured to receive a first memory command. The first memory command is associated with a subset of the multiple memory resources. For each memory resource of the subset, the circuit is also configured to set a respective indicator associated with the first memory command, and to store a first value in a first entry of the resource table in response to a determination that the respective memory resource is unavailable. The circuit is also configured to store a second value in each entry of the resource table that corresponds to a memory resource of the subset in response to a determination that an entry corresponding to a given memory resource of the subset includes the first value.
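The resource-table behavior above (mark an unavailable resource with a first value, then write a second value into every entry of the command's subset) can be sketched as follows. The constant names `BLOCKED`/`WAITING` and their encodings are assumptions; the abstract only speaks of a first and a second value.

```python
BLOCKED = 1   # "first value": this resource was unavailable for the command
WAITING = 2   # "second value": the whole subset is held pending retry

class ResourceTable:
    """Sketch of the per-resource table keyed by resource index."""
    def __init__(self, n_resources, available):
        self.entries = [0] * n_resources
        self.available = available        # resource index -> bool

    def receive_command(self, cmd_id, subset):
        # Set a respective indicator, associated with the command, for
        # each memory resource in the command's subset.
        indicators = {r: cmd_id for r in subset}
        for r in subset:
            if not self.available[r]:
                self.entries[r] = BLOCKED   # first value
        if any(self.entries[r] == BLOCKED for r in subset):
            # One unavailable resource holds the entire subset: store the
            # second value in every entry corresponding to the subset.
            for r in subset:
                self.entries[r] = WAITING
        return indicators
```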
Abstract:
Techniques are disclosed relating to a split communications fabric topology. In some embodiments, an apparatus includes a communications fabric structure with multiple fabric units. The fabric units may be configured to arbitrate among control packets of different messages. In some embodiments, a processing element is configured to generate a message that includes a control packet and one or more data packets. In some embodiments, the processing element is configured to transmit the control packet to a destination processing element (e.g., a memory controller) via the communications fabric structure and transmit the data packets to a data buffer. In some embodiments, the destination processing element is configured to retrieve the data packets from the data buffer in response to receiving the control packet via the communications fabric structure. In these embodiments, bypassing the fabric structure for data packets may reduce power consumption.
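The split topology above, control packets through the fabric and data packets directly into a buffer the destination pulls from, can be sketched as below. All names (`SplitFabric`, `data_buffer`, message ids) are illustrative assumptions.

```python
class SplitFabric:
    """Sketch: control packets traverse the arbitrated fabric path;
    data packets bypass it and go straight to a shared data buffer."""
    def __init__(self):
        self.fabric = []        # control-packet path (arbitrated, in order)
        self.data_buffer = {}   # message id -> data packets

    def send(self, msg_id, control, data_packets):
        self.data_buffer[msg_id] = data_packets   # bypasses the fabric
        self.fabric.append((msg_id, control))

    def destination_receive(self):
        msg_id, control = self.fabric.pop(0)
        # The destination retrieves the data packets only in response to
        # receiving the control packet via the fabric.
        return control, self.data_buffer.pop(msg_id)
```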
Abstract:
An apparatus for processing coherency transactions in a computing system is disclosed. The apparatus may include a request queue circuit, a duplicate tag circuit, and a memory interface unit. The request queue circuit may be configured to generate a speculative read request dependent upon a received read transaction. The duplicate tag circuit may be configured to store copies of tags from one or more cache memories, and to generate a kill message in response to a determination that data requested in the received read transaction is stored in a cache memory. The memory interface unit may be configured to store the generated speculative read request dependent upon a stall condition. The stored speculative read request may be sent to a memory controller dependent upon the stall condition. The memory interface unit may be further configured to delete the speculative read request in response to the kill message.
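The speculative-read-with-kill flow above can be sketched as follows. This is a simplified illustrative model (the real apparatus overlaps the duplicate-tag lookup with the speculative issue); class and field names are assumptions.

```python
class SpeculativeReadUnit:
    """Sketch: a speculative read request is generated for an incoming
    read transaction; a duplicate-tag hit produces a kill that deletes
    the request, and a stall condition holds the request back."""
    def __init__(self, duplicate_tags):
        self.duplicate_tags = set(duplicate_tags)  # tag copies from caches
        self.pending = {}          # addr -> stored speculative read request
        self.sent_to_memory = []   # requests forwarded to the memory controller

    def read_transaction(self, addr, stalled):
        self.pending[addr] = {"addr": addr}   # speculative read request
        if addr in self.duplicate_tags:
            self.kill(addr)        # data already cached: kill the request
        elif not stalled:
            self.sent_to_memory.append(self.pending.pop(addr))
        # If stalled, the request stays stored until the stall clears.

    def kill(self, addr):
        self.pending.pop(addr, None)
```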