Abstract:
The present disclosure provides a method for reducing memory traffic in a distributed memory system. The method may include storing a presence vector in a directory of a memory slice, said presence vector indicating whether a line in local memory has been cached. The method may further include protecting said memory slice from cache coherency violations via a home agent configured to transmit data to and receive data from said memory slice, said home agent configured to store a copy of said presence vector. The method may also include receiving a request for a block of data from at least one processing node at said home agent and comparing said presence vector with said copy of said presence vector stored in said home agent. The method may additionally include eliminating a write update operation between said home agent and said directory if said presence vector and said copy are equivalent. Of course, many alternatives, variations and modifications are possible without departing from this embodiment.
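The comparison described above can be illustrated with a minimal C++ sketch (all names such as PresenceVector, DirectoryEntry, and HomeAgent are hypothetical and not taken from the disclosure): the home agent holds a copy of the directory's presence vector and elides the write update to the directory whenever the two are equivalent.

    #include <bitset>
    #include <cstddef>

    // Hypothetical sketch: one presence bit per processing node for a single line.
    constexpr std::size_t kNumNodes = 8;
    using PresenceVector = std::bitset<kNumNodes>;

    struct DirectoryEntry {
        PresenceVector presence;  // authoritative copy held in the memory slice's directory
    };

    struct HomeAgent {
        PresenceVector cached_presence;  // home agent's local copy of the presence vector

        // Handle a request for a block from `node`. Returns true if a write update
        // to the directory was issued, false if it was eliminated.
        bool handle_request(DirectoryEntry& entry, std::size_t node) {
            cached_presence.set(node);              // requesting node will now cache the line
            if (cached_presence == entry.presence)  // vectors are equivalent:
                return false;                       //   eliminate the write update
            entry.presence = cached_presence;       // otherwise write the new vector back
            return true;
        }
    };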
Abstract:
Methods and apparatus relating to allocation and/or write policy for a glueless area-efficient directory cache for hotly contested cache lines are described. In one embodiment, a directory cache stores data corresponding to a caching status of a cache line. The caching status of the cache line is stored for each of a plurality of caching agents in the system. A write-on-allocate policy is used for the directory cache by using a special state (e.g., snoop-all state) that indicates one or more snoops are to be broadcast to all agents in the system. Other embodiments are also disclosed.
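A minimal sketch of the write-on-allocate policy under the stated assumptions (DirectoryCache, DirEntry, and agents_to_snoop are illustrative names, not from the disclosure): a newly allocated entry is written with the snoop-all state, so a subsequent access to that line broadcasts snoops to every caching agent instead of consulting per-agent status.

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    // Hypothetical per-agent caching status for one cache line.
    enum class LineState : std::uint8_t { Invalid, Shared, Exclusive };

    struct DirEntry {
        std::vector<LineState> per_agent;  // caching status for each caching agent
        bool snoop_all = false;            // special state: broadcast snoops to all agents
    };

    class DirectoryCache {
    public:
        explicit DirectoryCache(std::size_t num_agents) : num_agents_(num_agents) {}

        // Write-on-allocate policy: when a line is allocated in the directory cache,
        // write it in the snoop-all state rather than tracking precise sharers.
        void allocate(std::uint64_t line_addr) {
            DirEntry entry;
            entry.per_agent.assign(num_agents_, LineState::Invalid);
            entry.snoop_all = true;  // conservative: snoop everyone on the next access
            entries_[line_addr] = entry;
        }

        // Agents that must be snooped for this line.
        std::vector<std::size_t> agents_to_snoop(std::uint64_t line_addr) const {
            std::vector<std::size_t> targets;
            auto it = entries_.find(line_addr);
            if (it == entries_.end() || it->second.snoop_all) {
                for (std::size_t a = 0; a < num_agents_; ++a) targets.push_back(a);  // broadcast
                return targets;
            }
            for (std::size_t a = 0; a < num_agents_; ++a)
                if (it->second.per_agent[a] != LineState::Invalid) targets.push_back(a);
            return targets;
        }

    private:
        std::size_t num_agents_;
        std::unordered_map<std::uint64_t, DirEntry> entries_;
    };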
Abstract:
In an embodiment, a method is provided. The method of this embodiment provides for receiving a request for data from a processor of a plurality of processors, determining a cache entry location based, at least in part, on the request, storing the data in a cache corresponding to the processor at the cache entry location, and storing a coherency record corresponding to the data in a snoop filter in accordance with one of the following, if there is a cache miss: at the cache entry location of a corresponding affinity in the snoop filter if the cache entry location is found in the corresponding affinity, or at a derived cache entry location of the corresponding affinity if the cache entry location is not found in the corresponding affinity.
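The placement rule on a cache miss can be sketched as follows, with hypothetical names (Affinity, derive_location, and record_on_miss are illustrative, not from the disclosure): the coherency record goes into the affinity corresponding to the requesting processor's cache, at the cache entry location if that location is found in the affinity, otherwise at a derived location.

    #include <cstdint>
    #include <optional>
    #include <vector>

    // Hypothetical coherency record kept by the snoop filter for one cached line.
    struct CoherencyRecord {
        std::uint64_t line_addr;
        unsigned owner_cpu;
    };

    // One affinity: the slice of the snoop filter associated with one processor's cache.
    class Affinity {
    public:
        explicit Affinity(std::size_t num_entries) : slots_(num_entries) {}

        bool has_location(std::size_t loc) const { return loc < slots_.size(); }

        // Fallback placement when the cache entry location is not present in this
        // affinity (purely illustrative; a real design would derive it differently).
        std::size_t derive_location(std::uint64_t line_addr) const {
            return static_cast<std::size_t>(line_addr) % slots_.size();
        }

        void store(std::size_t loc, const CoherencyRecord& rec) { slots_[loc] = rec; }

    private:
        std::vector<std::optional<CoherencyRecord>> slots_;
    };

    // On a cache miss: place the record at the cache entry location if the
    // corresponding affinity has it, otherwise at a derived location.
    void record_on_miss(Affinity& affinity, std::size_t cache_entry_loc,
                        const CoherencyRecord& rec) {
        std::size_t loc = affinity.has_location(cache_entry_loc)
                              ? cache_entry_loc
                              : affinity.derive_location(rec.line_addr);
        affinity.store(loc, rec);
    }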
Abstract:
In an embodiment, a method is provided. The method of this embodiment provides for receiving a request for data from a processor of a plurality of processors, determining a cache entry location based, at least in part, on the request, storing the data in a cache corresponding to the processor at the cache entry location, and storing a coherency record corresponding to the data in an affinity corresponding to the cache.
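For this simpler variant, a short self-contained sketch (SnoopFilter, Affinity, and record are illustrative names, not from the disclosure): the snoop filter keeps one affinity per processor cache, and the coherency record is stored in the affinity corresponding to the cache that received the data.

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    // Hypothetical coherency record for one cached line.
    struct CoherencyRecord { std::uint64_t line_addr; unsigned owner_cpu; };

    // One affinity per processor cache, keyed by line address.
    using Affinity = std::unordered_map<std::uint64_t, CoherencyRecord>;

    struct SnoopFilter {
        std::vector<Affinity> affinities;  // indexed by processor / cache id

        explicit SnoopFilter(std::size_t num_cpus) : affinities(num_cpus) {}

        // Record that the cache of `cpu` now holds `line_addr`.
        void record(unsigned cpu, std::uint64_t line_addr) {
            affinities[cpu][line_addr] = CoherencyRecord{line_addr, cpu};
        }
    };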