Abstract:
In one embodiment, a cache coherent system includes one or more agents (e.g., coherent agents) that may cache data used by the system. The system may include a point of coherency in a memory controller, and thus the agents may transmit read requests to the memory controller to coherently read data. The point of coherency may determine whether the data is cached in another agent, and may transmit a copy back request to the other agent if the other agent has modified the data. The system may include an interconnect between the agents and the memory controller. At a point on the interconnect at which traffic from the agents converges, a copy back response may be converted to a fill for the requesting agent.
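Below is a minimal C++ sketch of this flow, modeling the directory lookup at the point of coherency and the conversion of a copy back response into a fill at the convergence point; the directory map, agent numbering, and all function names are illustrative assumptions, not structures from the embodiment.

    // Hypothetical model of the point-of-coherency flow; names are illustrative.
    #include <cstdio>
    #include <map>

    enum class State { Invalid, Clean, Modified };

    struct DirEntry { int owner; State state; };

    // Directory at the memory controller: tracks which agent caches each line.
    std::map<int, DirEntry> directory;

    // Returns the agent that must supply the data, or -1 if memory supplies it.
    int handle_read(int addr, int requester) {
        auto it = directory.find(addr);
        if (it != directory.end() && it->second.state == State::Modified &&
            it->second.owner != requester) {
            // Another agent holds modified data: issue a copy back request to it.
            printf("copy back request to agent %d for addr 0x%x\n",
                   it->second.owner, addr);
            return it->second.owner;
        }
        return -1; // no modified copy cached elsewhere: read from memory
    }

    // At the interconnect point where agent traffic converges, the copy back
    // response is converted into a fill for the requesting agent.
    void on_copy_back_response(int addr, int requester) {
        printf("copy back response for 0x%x converted to fill for agent %d\n",
               addr, requester);
        directory[addr] = {requester, State::Clean};
    }

    int main() {
        directory[0x40] = {1, State::Modified};  // agent 1 holds a dirty copy
        if (handle_read(0x40, 2) >= 0)           // agent 2 reads the same line
            on_copy_back_response(0x40, 2);
    }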
Abstract:
Techniques are disclosed relating to a split communications fabric topology. In some embodiments, an apparatus includes a communications fabric structure with multiple fabric units. The fabric units may be configured to arbitrate among control packets of different messages. In some embodiments, a processing element is configured to generate a message that includes a control packet and one or more data packets. In some embodiments, the processing element is configured to transmit the control packet to a destination processing element (e.g., a memory controller) via the communications fabric structure and transmit the data packets to a data buffer. In some embodiments, the destination processing element is configured to retrieve the data packets from the data buffer in response to receiving the control packet via the communications fabric structure. In these embodiments, bypassing the fabric structure for data packets may reduce power consumption.
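A small C++ sketch of the split topology may help: control packets traverse an arbitrated queue standing in for the fabric units, while data packets go directly to a shared buffer that the destination reads on receipt of the control packet. The ControlPacket fields and buffer layout are assumptions for illustration.

    // Illustrative sketch of the split control/data paths; names are assumptions.
    #include <cstdio>
    #include <queue>
    #include <vector>

    struct ControlPacket { int msg_id; int buf_index; int len; };

    std::vector<int> data_buffer(1024);   // data packets bypass the fabric
    std::queue<ControlPacket> fabric;     // stands in for the arbitrating fabric units

    // Source: write data packets to the buffer, send only control via the fabric.
    void send(int msg_id, const std::vector<int>& payload, int buf_index) {
        for (size_t i = 0; i < payload.size(); ++i)
            data_buffer[buf_index + i] = payload[i];
        fabric.push({msg_id, buf_index, static_cast<int>(payload.size())});
    }

    // Destination (e.g., a memory controller): retrieve the data packets from
    // the buffer in response to receiving the control packet via the fabric.
    void receive() {
        ControlPacket cp = fabric.front();
        fabric.pop();
        printf("msg %d data:", cp.msg_id);
        for (int i = 0; i < cp.len; ++i)
            printf(" %d", data_buffer[cp.buf_index + i]);
        printf("\n");
    }

    int main() {
        send(7, {10, 20, 30}, 0);
        receive();
    }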
Abstract:
An apparatus for processing memory requests from a functional unit in a computing system is disclosed. The apparatus may include an interface that may be configured to receive a request from the functional unit. Circuitry may be configured to initiate a speculative read access command to a memory in response to a determination that the received request is a request for data from the memory. The circuitry may be further configured to determine, in parallel with the speculative read access, if the speculative read will result in an ordering or coherence violation.
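The parallelism can be sketched in C++ as below, with std::async standing in for the speculative DRAM access and a set of pending writes standing in for the ordering/coherence check; this is an assumption-laden model, not the disclosed circuitry.

    // Hypothetical model: speculative read launched alongside the hazard check.
    #include <cstdio>
    #include <future>
    #include <set>

    std::set<int> pending_writes;  // addresses with in-flight ordering hazards

    int memory_read(int addr) { return addr * 2; }  // stand-in for a DRAM access

    bool violates_ordering(int addr) { return pending_writes.count(addr) != 0; }

    int handle_request(int addr) {
        // Initiate the speculative read access immediately...
        std::future<int> spec = std::async(std::launch::async, memory_read, addr);
        // ...while the ordering/coherence check proceeds in parallel.
        bool violation = violates_ordering(addr);
        int data = spec.get();
        if (violation) {
            printf("addr %d: speculation squashed, retrying in order\n", addr);
            return memory_read(addr);  // simplified non-speculative retry
        }
        return data;  // speculation confirmed: use the data
    }

    int main() {
        pending_writes.insert(8);
        printf("read 4 -> %d\n", handle_request(4));
        printf("read 8 -> %d\n", handle_request(8));
    }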
Abstract:
An apparatus for processing cache requests in a computing system is disclosed. The apparatus may include a single-port memory, a dual-port memory, and a control circuit. The single-port memory may be configured to store tag information associated with a cache memory, and the dual-port memory may be configured to store state information associated with the cache memory. The control circuit may be configured to receive a request which includes a tag address, and to access the tag and state information stored in the single-port memory and the dual-port memory, respectively, dependent upon the received tag address. A determination of whether the data associated with the received tag address is contained in the cache memory may be made by the control circuit, and the control circuit may update and store state information in the dual-port memory responsive to the determination.
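A minimal sketch, assuming a direct-mapped organization: the tag array models the single-port memory (one access per request), while the state array models the dual-port memory whose second port allows the state update in the same pass. Sizes and encodings are illustrative.

    // Sketch of the single-port tag / dual-port state split; sizes are assumed.
    #include <cstdio>
    #include <cstdint>

    constexpr int kSets = 256;

    uint32_t tag_ram[kSets];    // single-port: one lookup per cycle
    uint8_t  state_ram[kSets];  // dual-port: read on one port, write on the other
    enum : uint8_t { INVALID = 0, VALID = 1, DIRTY = 2 };

    // Look up tag and state for the request's tag address, decide hit/miss,
    // then update state through the dual-port array's write port.
    bool lookup_and_update(uint32_t addr) {
        uint32_t set = addr % kSets, tag = addr / kSets;
        bool hit = (state_ram[set] != INVALID) && (tag_ram[set] == tag);
        if (hit) {
            state_ram[set] = DIRTY;    // state write does not block tag lookups
        } else {
            tag_ram[set]   = tag;      // allocate on miss (fill path simplified)
            state_ram[set] = VALID;
        }
        return hit;
    }

    int main() {
        printf("first access  hit=%d\n", lookup_and_update(0x1234));
        printf("second access hit=%d\n", lookup_and_update(0x1234));
    }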
Abstract:
Systems and methods are disclosed for maintaining an order of transactions at a coherence point. The coherence point stores attributes associated with received transactions in an input request queue (IRQ). When a new transaction is received by the coherence point, the IRQ is searched for other entries with the same request address or the same victim address as the new transaction. If one or more matches are found, the new transaction's entry points to the entry storing the most recently received transaction with the same address. The new transaction is stalled until the transaction it points to has been completed in the coherence point.
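The ordering chain can be modeled in C++ as follows; the IrqEntry fields and the linear search are assumptions chosen for clarity rather than the disclosed hardware structure.

    // Illustrative model of the IRQ same-address ordering chain.
    #include <cstdio>
    #include <vector>

    struct IrqEntry {
        int addr;        // request (or victim) address
        int depends_on;  // index of the most recent prior match, or -1
        bool done;       // completed in the coherence point
    };

    std::vector<IrqEntry> irq;

    // On receipt, search the IRQ for the most recently received entry with a
    // matching address and point the new entry at it.
    int enqueue(int addr) {
        int dep = -1;
        for (int i = static_cast<int>(irq.size()) - 1; i >= 0; --i)
            if (irq[i].addr == addr && !irq[i].done) { dep = i; break; }
        irq.push_back({addr, dep, false});
        return static_cast<int>(irq.size()) - 1;
    }

    // The new transaction stalls until the entry it points to has completed.
    bool can_issue(int idx) {
        int dep = irq[idx].depends_on;
        return dep < 0 || irq[dep].done;
    }

    int main() {
        int a = enqueue(0x100);
        int b = enqueue(0x100);                     // same address: must wait
        printf("b can issue? %d\n", can_issue(b));  // 0: stalled behind a
        irq[a].done = true;
        printf("b can issue? %d\n", can_issue(b));  // 1: dependency completed
    }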
Abstract:
A coherence system includes a storage array that may store duplicate tag information associated with a cache memory of a processor. The system may also include a pipeline unit that includes a number of stages to control accesses to the storage array. The pipeline unit may pass an input/output (I/O) request that is received on a fabric through the pipeline stages without generating an access to the storage array. The system may also include a debug engine that may reformat the I/O request from the pipeline unit into a debug request. The debug engine may send the debug request to the pipeline unit via a debug bus. In response to receiving the debug request, the pipeline unit may access the storage array. The debug engine may return a result of the access to the storage array to the source of the I/O request via the fabric.
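As a rough C++ model, the pipeline passes ordinary I/O requests through without touching the array, and the debug engine reformats such a request into a debug request that does access it; the Request type and array size are assumptions for illustration.

    // Hypothetical sketch of the debug-request path into the duplicate tag array.
    #include <cstdio>
    #include <cstdint>

    uint32_t dup_tag_array[64];  // duplicate tags mirroring a processor's cache

    struct Request { uint32_t addr; bool is_debug; };

    // Pipeline: I/O requests from the fabric pass through without generating an
    // array access; debug requests (from the debug bus) do access the array.
    uint32_t pipeline(const Request& r) {
        if (!r.is_debug)
            return 0;                       // pass-through, no array access
        return dup_tag_array[r.addr % 64];  // debug access into the array
    }

    // Debug engine: reformat the I/O request into a debug request, resubmit it,
    // and return the result toward the I/O request's source on the fabric.
    uint32_t debug_engine(const Request& io_req) {
        Request dbg{io_req.addr, true};
        return pipeline(dbg);
    }

    int main() {
        dup_tag_array[5] = 0xABCD;
        Request io{5, false};
        pipeline(io);  // normal path: the array is never read
        printf("debug result: 0x%X\n", static_cast<unsigned>(debug_engine(io)));
    }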
Abstract:
In an embodiment, a system may support programmable hashing of address bits at a plurality of levels of granularity to map memory addresses to memory controllers and, ultimately, to memory devices. The hashing may be programmed to distribute pages of memory across the memory controllers, and consecutive blocks of the page may be mapped to physically distant memory controllers. In an embodiment, address bits may be dropped from each level of granularity, forming a compacted pipe address to save power within the memory controller. In an embodiment, a memory folding scheme may be employed to reduce the number of active memory devices and/or memory controllers in the system when the full complement of memory is not needed.
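One way to picture the hashing and bit dropping is the C++ sketch below, where each controller-select bit is the XOR-reduction of address bits chosen by a programmable mask, and a consumed bit is squeezed out to form the compacted pipe address; the masks and the four-controller layout are invented for illustration.

    // Sketch of programmable address hashing and bit compaction; masks assumed.
    #include <cstdio>
    #include <cstdint>

    // XOR-reduce (parity of) the address bits selected by a programmable mask.
    uint32_t parity(uint64_t x) {
        uint32_t p = 0;
        while (x) { p ^= 1; x &= x - 1; }  // clear the lowest set bit each pass
        return p;
    }

    // Two hashed bits select one of four memory controllers, spreading
    // consecutive blocks of a page across physically distant controllers.
    uint32_t select_controller(uint64_t addr, const uint64_t masks[2]) {
        return (parity(addr & masks[1]) << 1) | parity(addr & masks[0]);
    }

    // Drop an address bit once a hash level has consumed it, compacting the
    // address carried down the pipe to save power.
    uint64_t drop_bit(uint64_t addr, int bit) {
        uint64_t low = addr & ((1ull << bit) - 1);
        return ((addr >> (bit + 1)) << bit) | low;
    }

    int main() {
        const uint64_t masks[2] = {0x10040ull, 0x20080ull};  // bits 6,16 and 7,17
        for (uint64_t a = 0; a < 4 * 64; a += 64)            // four 64B blocks
            printf("block 0x%llx -> MC %u\n",
                   (unsigned long long)a, select_controller(a, masks));
        printf("compacted 0xff -> 0x%llx\n",
               (unsigned long long)drop_bit(0xFF, 6));
    }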
Abstract:
A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.
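A compact C++ sketch of the race-detection idea: each snoop carries the cache state the directory expected at the receiver, and a mismatch at the receiving agent signals a race with another outstanding request. The state names mirror the abstract; the handling strings and flow are simplifying assumptions.

    // Illustrative sketch of expected-state race detection in snoops.
    #include <cstdio>

    enum class CacheState { Invalid, SharedSecondary, SharedPrimary,
                            Exclusive, Modified };

    struct Snoop {
        bool forward;         // snoop forward: supply the requester directly;
                              // snoop back: copy the block back to the controller
        CacheState expected;  // state the directory recorded when it processed
                              // the request
    };

    // The receiving agent compares its actual state against the expected state;
    // a mismatch indicates a race with another in-flight request.
    const char* handle_snoop(const Snoop& s, CacheState actual) {
        if (actual != s.expected)
            return "race detected: handle per protocol (e.g., defer/retry)";
        if (s.forward && actual == CacheState::SharedPrimary)
            return "primary sharer transmits a copy to the requesting agent";
        return s.forward ? "forward data to requester"
                         : "copy data back to the memory controller";
    }

    int main() {
        Snoop sf{true, CacheState::SharedPrimary};
        printf("%s\n", handle_snoop(sf, CacheState::SharedPrimary));
        printf("%s\n", handle_snoop(sf, CacheState::Invalid));  // stale view: race
    }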