Abstract:
Scrubbing logic in a local coherency domain issues a domain query request to at least one cache hierarchy in a remote coherency domain. The domain query request is a non-destructive probe of a coherency state associated with a target memory block by the at least one cache hierarchy. A coherency response to the domain query request is received. In response to the coherency response indicating that the target memory block is not cached in the remote coherency domain, a domain indication in the local coherency domain is updated to indicate that the target memory block is cached, if at all, only within the local coherency domain.
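A minimal C sketch of the scrubbing step described above, assuming hypothetical names (issue_domain_query, coherency_response_t, domain_indication_t) that do not come from the abstract:

```c
#include <stdbool.h>

/* Hypothetical coherency responses to the non-destructive domain query probe. */
typedef enum {
    RESP_NOT_CACHED_REMOTELY,   /* target block not cached in the remote domain */
    RESP_CACHED_REMOTELY        /* target block may still be cached remotely    */
} coherency_response_t;

/* Hypothetical per-block domain indication kept in the local coherency domain. */
typedef struct {
    bool local_only;            /* true: block cached, if at all, only locally  */
} domain_indication_t;

/* Stand-in for issuing the domain query to a remote cache hierarchy. */
coherency_response_t issue_domain_query(unsigned long target_block_addr);

/* Scrubbing step: probe the remote domain and, if the block is not cached
 * there, mark the block as local-only so later operations can stay local. */
void scrub_domain_indication(unsigned long target_block_addr,
                             domain_indication_t *ind)
{
    coherency_response_t resp = issue_domain_query(target_block_addr);
    if (resp == RESP_NOT_CACHED_REMOTELY)
        ind->local_only = true;   /* update the domain indication */
}
```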
Abstract:
A cache coherent data processing system includes at least first and second coherency domains each including at least one processing unit and a cache memory. The cache memory includes a cache controller, a data array including a data storage location for caching a memory block, and a cache directory. The cache directory includes a tag field for storing an address tag in association with the memory block and a coherency state field associated with the tag field and the data storage location. The coherency state field has a plurality of possible states including a state that indicates that the address tag is valid, that the storage location does not contain valid data, and that the memory block is possibly cached outside of the first coherency domain.
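The coherency state field and directory entry might be modeled roughly as below; the state names (including STATE_TAG_VALID_DATA_INVALID_GLOBAL for the special state the abstract describes) are illustrative, not taken from the source:

```c
#include <stdint.h>

/* Hypothetical coherency states for the directory's coherency state field.
 * The last state captures the abstract's special case: the address tag is
 * valid, the data storage location holds no valid data, and the memory
 * block is possibly cached outside of the first coherency domain. */
typedef enum {
    STATE_INVALID,
    STATE_SHARED,
    STATE_MODIFIED,
    STATE_TAG_VALID_DATA_INVALID_GLOBAL
} coherency_state_t;

/* Hypothetical cache directory entry: an address tag associated with a data
 * storage location, plus the coherency state field. */
typedef struct {
    uint64_t          address_tag;
    coherency_state_t state;
} directory_entry_t;
```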
Abstract:
A data processing system includes a plurality of processing units coupled for communication by a communication link and a configuration register. The configuration register has a plurality of different settings each corresponding to a respective one of a plurality of different link information allocations. Information is communicated over the communication link in accordance with a particular link information allocation among the plurality of link information allocations determined by a respective setting of the configuration register.
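A hedged sketch of how a configuration register setting could select a link information allocation; the settings, beat counts, and type names are illustrative assumptions, not values from the abstract:

```c
#include <stdint.h>

/* Hypothetical settings of the configuration register; each selects one of
 * several link information allocations (how the link's bandwidth is divided
 * among, e.g., address, response, and data information). */
typedef enum {
    ALLOC_SETTING_0,
    ALLOC_SETTING_1,
    ALLOC_SETTING_2
} config_setting_t;

/* Hypothetical description of one link information allocation, expressed as
 * a share of link beats granted to each information type. */
typedef struct {
    unsigned address_beats;
    unsigned response_beats;
    unsigned data_beats;
} link_allocation_t;

/* Table mapping each register setting to its link information allocation. */
static const link_allocation_t allocation_for_setting[] = {
    [ALLOC_SETTING_0] = { .address_beats = 2, .response_beats = 1, .data_beats = 5 },
    [ALLOC_SETTING_1] = { .address_beats = 1, .response_beats = 1, .data_beats = 6 },
    [ALLOC_SETTING_2] = { .address_beats = 3, .response_beats = 2, .data_beats = 3 },
};

/* Information is then communicated over the link per the selected allocation. */
const link_allocation_t *current_allocation(config_setting_t reg)
{
    return &allocation_for_setting[reg];
}
```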
Abstract:
A cache, system and method for improving the snoop bandwidth of a cache directory. A cache directory may be sliced into two smaller cache directories each with its own snooping logic. By having two cache directories that can be accessed simultaneously, the bandwidth can be essentially doubled. Furthermore, a “frequency matcher” may shift the cycle speed to a lower speed upon receiving snoop addresses from the interconnect thereby slowing down the rate at which requests are transmitted to the dispatch pipelines. Each dispatch pipeline is coupled to a sliced cache directory and is configured to search the cache directory to determine if data at the received addresses is stored in the cache memory. As a result of slowing down the rate at which requests are transmitted to the dispatch pipelines and accessing the two sliced cache directories simultaneously, the bandwidth or throughput of the cache directory may be improved.
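One way the sliced directory could be modeled in C is sketched below (the frequency matcher itself is not modeled); the slice-select bit, sizes, and function names are assumptions for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

#define SLICE_ENTRIES 1024   /* arbitrary slice size for illustration */

/* Hypothetical directory slice: each slice has its own snooping/dispatch
 * logic and can be searched independently of the other slice. */
typedef struct {
    uint64_t tags[SLICE_ENTRIES];
    bool     valid[SLICE_ENTRIES];
} directory_slice_t;

/* One address bit steers a snoop to one of the two slices, so snoops to
 * different slices can be looked up in the same cycle. */
static unsigned slice_select(uint64_t snoop_addr)
{
    return (snoop_addr >> 7) & 0x1;   /* illustrative slice-select bit */
}

/* Search a slice to see whether data at the snooped address is cached. */
static bool slice_lookup(const directory_slice_t *slice, uint64_t snoop_addr)
{
    unsigned index = (snoop_addr >> 8) % SLICE_ENTRIES;  /* illustrative index */
    return slice->valid[index] && slice->tags[index] == (snoop_addr >> 8);
}

/* Two snoop addresses that map to different slices can be dispatched
 * concurrently, roughly doubling directory snoop bandwidth. */
bool snoop_pair(const directory_slice_t slices[2], uint64_t a0, uint64_t a1,
                bool *hit0, bool *hit1)
{
    if (slice_select(a0) == slice_select(a1))
        return false;                 /* would contend for the same slice */
    *hit0 = slice_lookup(&slices[slice_select(a0)], a0);
    *hit1 = slice_lookup(&slices[slice_select(a1)], a1);
    return true;
}
```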
Abstract:
A method, system, and device for enabling intervention across same-level cache memories. In a preferred embodiment, responsive to a cache miss in a first cache memory, a direct intervention request is sent from the first cache memory to a second cache memory requesting a direct intervention that satisfies the cache miss. In an alternate embodiment, direct intervention is utilized to access a same-level victim cache.
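A minimal sketch of the preferred embodiment's miss path, assuming hypothetical l2_lookup and l2_direct_intervention hooks:

```c
#include <stdbool.h>
#include <stdint.h>

struct l2_cache;   /* hypothetical same-level (e.g., L2) cache */

/* Hypothetical hooks: look up a block locally, and ask a peer cache at the
 * same level to intervene (source the block directly) on a miss. */
bool l2_lookup(struct l2_cache *c, uint64_t addr, void *line_out);
bool l2_direct_intervention(struct l2_cache *peer, uint64_t addr, void *line_out);

/* On a miss in the first cache, send a direct intervention request to the
 * second, same-level cache before falling back to the normal miss path. */
bool read_with_direct_intervention(struct l2_cache *first,
                                   struct l2_cache *second,
                                   uint64_t addr, void *line_out)
{
    if (l2_lookup(first, addr, line_out))
        return true;                        /* hit locally                   */
    if (l2_direct_intervention(second, addr, line_out))
        return true;                        /* peer satisfied the cache miss */
    return false;                           /* fall through to system request */
}
```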
Abstract:
A cache coherent data processing system includes at least first and second coherency domains each including at least one processing unit. The first coherency domain includes a first cache memory and a second cache memory, and the second coherency domain includes a remote coherent cache memory. The first cache memory includes a cache controller, a data array including a data storage location for caching a memory block, and a cache directory. The cache directory includes a tag field for storing an address tag in association with the memory block and a coherency state field associated with the tag field and the data storage location. The coherency state field has a plurality of possible states including a state that indicates that the memory block is possibly shared with the second cache memory in the first coherency domain and cached only within the first coherency domain.
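The local-domain-only shared state might be represented as sketched below; the state and type names (e.g., CSTATE_SHARED_LOCAL_DOMAIN_ONLY) are illustrative assumptions:

```c
#include <stdint.h>

/* Hypothetical coherency states; the last one captures the abstract's case:
 * the block is possibly shared with another cache in the same (first)
 * coherency domain and is cached only within that domain. */
typedef enum {
    CSTATE_INVALID,
    CSTATE_MODIFIED,
    CSTATE_SHARED,
    CSTATE_SHARED_LOCAL_DOMAIN_ONLY
} cstate_t;

typedef struct {
    uint64_t address_tag;   /* tag field                  */
    cstate_t state;         /* coherency state field      */
} dir_entry_t;

/* A requester can use the state to scope its broadcast: only the
 * local-domain-only state guarantees the block is not cached remotely,
 * so the operation can stay within the first coherency domain. */
int needs_global_broadcast(cstate_t s)
{
    return s != CSTATE_SHARED_LOCAL_DOMAIN_ONLY;
}
```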
Abstract:
A data processing system includes a plurality of processing units each having a respective point-to-point communication link with each of multiple others of the plurality of processing units but fewer than all of the plurality of processing units. Each of the plurality of processing units includes interconnect logic, coupled to each point-to-point communication link of that processing unit, that broadcasts requests received from one of the multiple others of the plurality of processing units to one or more of the plurality of processing units. The interconnect logic includes a partial response data structure including a plurality of entries each associating a partial response field with a plurality of flags respectively associated with each processing unit containing a snooper from which that processing unit will receive a partial response. The interconnect logic accumulates partial responses of processing units by reference to the partial response field to obtain an accumulated partial response, and when the plurality of flags indicate that all processing units from which partial responses are expected have returned a partial response, outputs the accumulated partial response.
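A rough C model of one entry of the partial response data structure and its accumulation step, with an illustrative OR-merge and hypothetical names:

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_UNITS 8   /* illustrative number of processing units */

/* Hypothetical entry of the partial response data structure: an accumulated
 * partial response field plus one flag per processing unit from which a
 * partial response is still expected. */
typedef struct {
    uint32_t accumulated_presp;       /* partial responses merged so far   */
    bool     expected[MAX_UNITS];     /* true while a unit's presp is owed */
} presp_entry_t;

/* Merge one unit's partial response; the OR-merge operator is illustrative. */
void accumulate_presp(presp_entry_t *e, unsigned unit, uint32_t presp)
{
    e->accumulated_presp |= presp;    /* accumulate into the presp field */
    e->expected[unit] = false;        /* this unit has now responded     */
}

/* When no further partial responses are expected, the accumulated partial
 * response can be output (e.g., forwarded toward the response logic). */
bool all_presps_received(const presp_entry_t *e)
{
    for (unsigned i = 0; i < MAX_UNITS; i++)
        if (e->expected[i])
            return false;
    return true;
}
```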
Abstract:
A data processing system includes a plurality of communication links and a plurality of processing units including a local master processing unit. The local master processing unit includes interconnect logic that couples the processing unit to one or more of the plurality of communication links and an originating master coupled to the interconnect logic. The originating master originates an operation by issuing a write-type request on at least one of the one or more communication links, receives from a snooper in the data processing system a destination tag identifying a route to the snooper, and, responsive to receipt of a combined response for the operation and the destination tag, initiates a data transfer including a data payload and a data tag identifying the route provided within the destination tag.
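A hedged sketch of the destination-tag-routed data transfer; the tag layout and payload size are assumptions for illustration:

```c
#include <stdint.h>

/* Hypothetical destination tag returned by the snooper: identifies a route
 * back to the snooper through the interconnect. */
typedef struct {
    uint8_t chip_id;
    uint8_t unit_id;
} dest_tag_t;

/* Hypothetical data transfer: a data tag derived from the destination tag
 * travels with the data payload so interconnect logic can route it. */
typedef struct {
    dest_tag_t data_tag;          /* route taken from the destination tag */
    uint8_t    payload[128];      /* illustrative cache-line-sized payload */
} data_transfer_t;

/* After the combined response and destination tag arrive, the originating
 * master initiates the data transfer using the snooper-supplied route. */
data_transfer_t initiate_data_transfer(dest_tag_t destination_tag,
                                       const uint8_t payload[128])
{
    data_transfer_t xfer = { .data_tag = destination_tag };
    for (int i = 0; i < 128; i++)
        xfer.payload[i] = payload[i];
    return xfer;
}
```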
Abstract:
A data processing system includes a memory system, a plurality of masters that issue requests for access to memory blocks within the memory system, a plurality of snoopers that provide partial responses to requests by the masters, and response logic that generates combined responses for the requests in response to the partial responses provided by the plurality of snoopers. The plurality of masters includes a winning master that issues a request for a particular memory block, and the plurality of snoopers includes a protecting snooper that, in response to receipt of the request, provides a partial response and protects a transfer of coherency ownership of the particular memory block to the winning master until expiration of a protection window extension following receipt from the response logic of a combined response for the request.
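The protecting snooper's window could be modeled as below; the cycle-based timing and names are illustrative assumptions, not the patent's mechanism in detail:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical state kept by the protecting snooper for one memory block. */
typedef struct {
    uint64_t protected_addr;     /* block whose ownership transfer is protected */
    bool     protecting;         /* protection window currently open            */
    uint64_t window_closes_at;   /* cycle when the window extension expires     */
} protect_state_t;

/* On receiving the winning master's request, begin protecting the block;
 * the window stays open-ended until the combined response arrives. */
void on_request(protect_state_t *p, uint64_t addr)
{
    p->protected_addr   = addr;
    p->protecting       = true;
    p->window_closes_at = UINT64_MAX;
}

/* On receiving the combined response, keep protecting for a fixed extension
 * so the winning master can safely assume coherency ownership. */
void on_combined_response(protect_state_t *p, uint64_t now, uint64_t extension)
{
    p->window_closes_at = now + extension;
}

/* Competing requests for the protected block are refused (e.g., retried)
 * until the protection window extension has expired. */
bool must_retry(const protect_state_t *p, uint64_t addr, uint64_t now)
{
    return p->protecting && addr == p->protected_addr && now < p->window_closes_at;
}
```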
Abstract:
A cache memory logically partitions a cache array into at least two slices each having a plurality of cache lines, with a given cache line spread across two or more cache ways of contiguous bytes and a given cache way shared between the two cache slices, and if a cache way that is part of a first cache line in the first cache slice and part of a second cache line in the second cache slice is defective, it is disabled while continuing to use at least one other cache way which is also part of the first cache line and part of the second cache line. In the illustrative embodiment the cache array is set associative and at least two different cache ways for a given cache line contain different congruence classes for that cache line. The defective cache way can be disabled by preventing an eviction mechanism from allocating any congruence class in the defective way. For example, half of the cache line can be disabled (i.e., half of the congruence classes). The cache array may be arranged with rows and columns of cache sectors (rows corresponding to the cache ways) wherein a given cache line is further spread across sectors in different rows and columns, with at least one portion of the given cache line being located in a first column having a first latency and another portion of the given cache line being located in a second column having a second latency greater than the first latency. The cache array can also output different sectors of the given cache line in successive clock cycles based on the latency of a given sector.
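A minimal sketch of disabling a defective way by keeping the eviction mechanism from allocating in it; the sizes and names are illustrative assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_WAYS   8       /* illustrative associativity            */
#define NUM_SETS   256     /* illustrative number of congruence classes */

/* Hypothetical replacement state: a per-way disable mask (a defective way,
 * shared by a line in each slice, is disabled for every congruence class it
 * serves) plus the eviction logic's preferred victim per congruence class. */
typedef struct {
    bool    way_disabled[NUM_WAYS];
    uint8_t lru_victim[NUM_SETS];   /* victim way chosen by the eviction logic */
} cache_replacement_t;

/* The eviction mechanism never allocates a congruence class in a disabled
 * way; the remaining ways of the affected cache lines stay in use. */
int choose_victim_way(const cache_replacement_t *r, unsigned set)
{
    int way = r->lru_victim[set];
    for (int tries = 0; tries < NUM_WAYS; tries++) {
        if (!r->way_disabled[way])
            return way;                   /* allocate in a healthy way      */
        way = (way + 1) % NUM_WAYS;       /* skip the defective way         */
    }
    return -1;                            /* all ways disabled (degenerate) */
}
```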