Abstract:
A cache, system and method for improving the snoop bandwidth of a cache directory. A cache directory may be sliced into two smaller cache directories, each with its own snooping logic. Because the two cache directories can be accessed simultaneously, the snoop bandwidth can essentially be doubled. Furthermore, a "frequency matcher" may shift the cycle speed to a lower speed upon receiving snoop addresses from the interconnect, thereby slowing down the rate at which requests are transmitted to the dispatch pipelines. Each dispatch pipeline is coupled to a sliced cache directory and is configured to search that cache directory to determine whether data at the received addresses is stored in the cache memory. By slowing down the rate at which requests are transmitted to the dispatch pipelines and accessing the two sliced cache directories simultaneously, the bandwidth or throughput of the cache directory may be improved.
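As a rough, non-authoritative sketch of the slicing idea (the class, the slice-select bit, and the tag arithmetic below are assumptions made for illustration, not details taken from the abstract), a single address bit can steer each snoop to one of two independently searchable directory slices:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <unordered_set>

class SlicedCacheDirectory {
public:
    // Each slice has its own lookup ("snooping") logic, so two snoop addresses
    // that map to different slices can be searched in the same cycle.
    bool snoop(uint64_t address) const {
        return slices_[sliceIndex(address)].count(tagOf(address)) != 0;
    }

    void install(uint64_t address) {
        slices_[sliceIndex(address)].insert(tagOf(address));
    }

private:
    // Illustrative choice only: bit 7 of the address selects the slice and the
    // remaining upper bits form the directory tag.
    static std::size_t sliceIndex(uint64_t address) { return (address >> 7) & 1u; }
    static uint64_t tagOf(uint64_t address) { return address >> 8; }

    std::array<std::unordered_set<uint64_t>, 2> slices_;
};
```

In hardware the two slices would be probed in the same cycle by separate dispatch pipelines; this sequential model only shows which slice a given snoop address falls into.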
Abstract:
A data processing system includes a memory system, a plurality of masters that issue requests for access to memory blocks within the memory system, a plurality of snoopers that provide partial responses to requests by the masters, and response logic that generates combined responses for the requests in response to the partial responses provided by the plurality of snoopers. The plurality of masters includes a winning master that issues a request for a particular memory block, and the plurality of snoopers includes a protecting snooper that, in response to receipt of the request, provides a partial response and protects a transfer of coherency ownership of the particular memory block to the winning master until expiration of a protection window extension following receipt from the response logic of a combined response for the request.
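A minimal sketch of the protection window extension, assuming a single protected block per snooper and an illustrative fixed extension length; the class, method names, and cycle count are hypothetical:

```cpp
#include <cstdint>
#include <optional>

enum class PartialResponse { Ack, Retry };

class ProtectingSnooper {
public:
    // Called when the winning master's request for the block is snooped.
    void beginProtection(uint64_t blockAddress) {
        protected_ = blockAddress;
        extensionCyclesLeft_ = -1;              // extension has not started yet
    }

    // Called when the combined response for that request is observed.
    void onCombinedResponse() { extensionCyclesLeft_ = kExtensionCycles; }

    // Called once per cycle to age out the protection window extension.
    void tick() {
        if (extensionCyclesLeft_ > 0 && --extensionCyclesLeft_ == 0)
            protected_.reset();
    }

    // Competing requests for the protected block are retried, so coherency
    // ownership transfers cleanly to the winning master.
    PartialResponse snoop(uint64_t blockAddress) const {
        return (protected_ && *protected_ == blockAddress) ? PartialResponse::Retry
                                                           : PartialResponse::Ack;
    }

private:
    static constexpr int kExtensionCycles = 4;  // illustrative length only
    std::optional<uint64_t> protected_;
    int extensionCyclesLeft_ = 0;
};
```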
Abstract:
A system and method for cache management in a data processing system having a memory hierarchy of upper and lower memory caches. A lower memory cache controller accesses a coherency state table to determine the replacement policy for the coherency states of cache lines present in the lower memory cache when it receives a cast-in request from one of the upper memory caches. The coherency state table implements a replacement policy that retains whichever of the upper and lower memory caches' coherency state information is more valuable for a particular cache line contained in both levels of memory at the time of cast-out from the upper memory cache.
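A minimal sketch of the cast-in merge decision, assuming MESI-like states and a simple "keep the higher-ranked state" rule; the actual coherency state table and ranking are not specified by the abstract:

```cpp
enum class CoherencyState { Invalid, Shared, Exclusive, Modified };

// Illustrative ranking: a higher rank means more valuable state information.
constexpr int rank(CoherencyState s) { return static_cast<int>(s); }

// On a cast-in, merge the state carried by the line evicted from the upper
// cache with the state already recorded for the same line in the lower cache,
// retaining the more valuable of the two.
CoherencyState mergeOnCastIn(CoherencyState upper, CoherencyState lower) {
    return rank(upper) >= rank(lower) ? upper : lower;
}
```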
Abstract:
A materials system and method are provided to enable the formation of articles by three-dimensional printing. The materials system includes a thermoplastic particulate filler material that allows the accurate definition of articles that are strong without being brittle.
Abstract:
A processor includes at least one instruction execution unit that executes store instructions to obtain store operations and a store queue coupled to the instruction execution unit. The store queue includes a queue entry in which multiple store operations are gathered during a store gathering window to obtain the data portion of a write transaction directed to lower-level memory. In addition, the store queue includes dispatch logic that varies the size of the store gathering window to optimize store performance for different store behaviors and workloads.
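A minimal sketch of store gathering with a variable-size window, assuming 64-byte cache lines and a simple widen-on-success, narrow-on-failure adaptation rule; all names, thresholds, and the adaptation policy are assumptions for illustration:

```cpp
#include <algorithm>
#include <cstdint>

struct StoreQueueEntry {
    uint64_t lineAddress = 0;
    int gatheredStores = 0;       // stores merged into this entry so far
    int cyclesUntilDispatch = 0;  // remaining store gathering window
    bool valid = false;
};

class StoreQueue {
public:
    // A new store either gathers into the open entry (same cache line) or
    // closes that entry and opens a new one.
    void enqueue(uint64_t address) {
        const uint64_t line = address >> 6;   // assume 64-byte lines
        if (entry_.valid && entry_.lineAddress == line) {
            ++entry_.gatheredStores;
        } else {
            dispatch();
            entry_ = {line, 1, windowCycles_, true};
        }
    }

    // Called once per cycle; the entry is dispatched when its window expires.
    void tick() {
        if (entry_.valid && --entry_.cyclesUntilDispatch <= 0)
            dispatch();
    }

private:
    void dispatch() {
        if (!entry_.valid) return;
        // Issue one write transaction to lower-level memory (elided), then
        // adapt: widen the window when gathering paid off, narrow it when not.
        windowCycles_ = entry_.gatheredStores > 2
                            ? std::min(windowCycles_ + 2, 32)
                            : std::max(windowCycles_ - 1, 2);
        entry_.valid = false;
    }

    StoreQueueEntry entry_;
    int windowCycles_ = 8;  // illustrative default window length
};
```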
Abstract:
A cache, system and method for reducing the number of rejected snoop requests. A "stall/reorder unit" in a cache receives a snoop request from an interconnect. Information about the snoop request, such as its address, is stored in a queue of the stall/reorder unit. The stall/reorder unit forwards the snoop request to a selector, which also receives a request from a processor. An arbitration mechanism selects either the snoop request or the request from the processor. If the snoop request is denied by the arbitration mechanism, the information about the snoop request, e.g., its address, may be retained in the stall/reorder unit, and the request may later be resent to the selector. This process may be repeated for up to "n" clock cycles. By giving the snoop request additional opportunities (n clock cycles) to be accepted by the arbitration mechanism, fewer snoop requests may ultimately be denied.
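A minimal sketch of the retry behavior, assuming a FIFO in the stall/reorder unit and a per-request attempt counter standing in for the "n" clock cycles; the names and queue shape are hypothetical:

```cpp
#include <cstdint>
#include <deque>

struct SnoopRequest {
    uint64_t address;
    int attemptsLeft;  // "n" remaining chances to win arbitration
};

class StallReorderUnit {
public:
    explicit StallReorderUnit(int maxAttempts) : maxAttempts_(maxAttempts) {}

    void acceptFromInterconnect(uint64_t address) {
        queue_.push_back({address, maxAttempts_});
    }

    // One arbitration cycle: present the oldest snoop to the selector; if the
    // processor request wins instead, keep the snoop for a later retry.
    void arbitrate(bool processorRequestWins) {
        if (queue_.empty()) return;
        SnoopRequest req = queue_.front();
        queue_.pop_front();
        if (!processorRequestWins) {
            // Snoop accepted into the dispatch pipeline (elided).
        } else if (--req.attemptsLeft > 0) {
            queue_.push_back(req);   // resend on a later cycle
        }
        // else: out of attempts, the snoop is finally rejected.
    }

private:
    int maxAttempts_;
    std::deque<SnoopRequest> queue_;
};
```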
Abstract:
A cache coherent data processing system includes at least first and second coherency domains. In a first cache memory within the first coherency domain of the data processing system, a coherency state field associated with a storage location and an address tag is set to a first data-invalid coherency state that indicates that the address tag is valid and that the storage location does not contain valid data. In response to snooping an exclusive access operation whose request specifies a target address matching the address tag and indicates a relative domain location of the requester that initiated the exclusive access operation, the first cache memory updates the coherency state field from the first data-invalid coherency state to a second data-invalid coherency state. Based upon the relative domain location of the requester, the second data-invalid coherency state indicates that the address tag is valid, that the storage location does not contain valid data, and whether a target memory block associated with the address tag is cached within the first coherency domain upon successful completion of the exclusive access operation.
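A minimal sketch of the described transition, with hypothetical state names standing in for the first and second data-invalid coherency states:

```cpp
enum class DataInvalidState {
    TagValidDataInvalid,    // first data-invalid state: tag valid, data invalid
    InvalidLocalCached,     // second state: block likely cached within this domain
    InvalidRemoteCached     // second state: block cached outside this domain
};

// On snooping an exclusive access request, the requester's relative domain
// location determines which second data-invalid state is recorded, since the
// requester will hold the block once the operation completes successfully.
DataInvalidState onSnoopExclusiveAccess(DataInvalidState current,
                                        bool requesterInSameDomain) {
    if (current != DataInvalidState::TagValidDataInvalid)
        return current;
    return requesterInSameDomain ? DataInvalidState::InvalidLocalCached
                                 : DataInvalidState::InvalidRemoteCached;
}
```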
Abstract:
A cache coherent data processing system includes at least first and second coherency domains. The first coherency domain includes a system memory controller for a system memory and a first processing unit having a first cache memory. The second coherency domain includes a second processing unit having a second cache memory. In the first cache memory, a coherency state field associated with a storage location and an address tag is set to a first coherency state. In response to snooping an exclusive access request specifying a target address matching the address tag, the first cache memory provides a first partial response to the exclusive access request based at least in part upon the first coherency state. In response to snooping the exclusive access request, the memory controller determines whether it is responsible for the target address and provides a second partial response to the exclusive access request based at least in part upon an outcome of the determination. At least the first and second partial responses are accumulated to obtain a combined response for the exclusive access request. The combined response includes an indication of whether or not a highest point of coherency and a memory controller of a home system memory for the target address reside within a same coherency domain. The first cache memory updates the coherency state field from the first coherency state to a second coherency state in response to the indication in the combined response.
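A minimal sketch of the response flow, with hypothetical field and state names; the accumulation rule shown is an illustrative stand-in for the abstract's combined-response logic:

```cpp
struct PartialResponse {
    bool memoryControllerOwnsAddress = false;   // MC is responsible for the target
    bool cacheIsHighestPointOfCoherency = false;
};

struct CombinedResponse {
    // True when the highest point of coherency and the home memory controller
    // for the target address reside in the same coherency domain.
    bool hpcAndHomeMemoryInSameDomain = false;
};

// Accumulate at least the two partial responses into a combined response; the
// domain comparison is passed in because the response logic knows the topology.
CombinedResponse accumulate(const PartialResponse& fromCache,
                            const PartialResponse& fromMemoryController,
                            bool cacheAndControllerInSameDomain) {
    CombinedResponse cr;
    cr.hpcAndHomeMemoryInSameDomain =
        fromCache.cacheIsHighestPointOfCoherency &&
        fromMemoryController.memoryControllerOwnsAddress &&
        cacheAndControllerInSameDomain;
    return cr;
}

enum class CacheLineState { FirstState, SecondStateSameDomain, SecondStateOtherDomain };

// The first cache memory picks its second coherency state from the indication
// carried in the combined response.
CacheLineState updateOnCombinedResponse(CacheLineState current, const CombinedResponse& cr) {
    if (current != CacheLineState::FirstState) return current;
    return cr.hpcAndHomeMemoryInSameDomain ? CacheLineState::SecondStateSameDomain
                                           : CacheLineState::SecondStateOtherDomain;
}
```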
Abstract:
A cache coherent data processing system includes at least first and second coherency domains. In a first cache memory within the first coherency domain of the data processing system, a coherency state field associated with a storage location and an address tag is set to a first data-invalid coherency state that indicates that the address tag is valid and that the storage location does not contain valid data. In response to snooping a data-invalid state update request, the first cache memory updates the coherency state field from the first data-invalid coherency state to a second data-invalid coherency state that indicates that the address tag is valid, that the storage location does not contain valid data, and that a memory block associated with the address tag is likely cached within the first coherency domain.
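A minimal sketch of the described update, with hypothetical names for the first and second data-invalid coherency states; unlike the earlier sketch, the transition here is driven by a dedicated data-invalid state update request rather than by the requester's location:

```cpp
enum class DataInvalidState {
    TagValidDataInvalid,              // first data-invalid state
    TagValidDataInvalidLocallyCached  // second: block likely cached in this domain
};

// Snooping a data-invalid state update request promotes the first state to the
// second, recording that the block is likely cached within this coherency domain.
DataInvalidState onDataInvalidStateUpdateRequest(DataInvalidState current) {
    return current == DataInvalidState::TagValidDataInvalid
               ? DataInvalidState::TagValidDataInvalidLocallyCached
               : current;
}
```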