Abstract:
The disclosure relates to accessing memory content with a high temporal locality of reference. An embodiment of the disclosure stores the content in a data buffer, determines that the content of the data buffer has a high temporal locality of reference, and accesses the data buffer for each operation targeting the content instead of a cache storing the content.
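As a rough behavioral sketch of this idea (my own illustration in Python, not the disclosed implementation): a hypothetical LocalityBuffer counts touches to its content and, once an assumed threshold is crossed, the content is treated as having high temporal locality, so later operations targeting it are serviced by the buffer instead of the cache. The class name, threshold, and interfaces are assumptions.

# Hypothetical sketch: route repeated accesses to a small data buffer
# once its content is judged to have high temporal locality of reference.
# All names and the hit threshold are illustrative assumptions.

class LocalityBuffer:
    def __init__(self, hot_threshold=4):
        self.content = {}              # address -> data stored in the buffer
        self.hits = 0                  # accesses observed to buffered content
        self.hot_threshold = hot_threshold

    def fill(self, addr, data):
        """Store content in the data buffer."""
        self.content[addr] = data

    def is_hot(self):
        """Decide whether the buffered content has high temporal locality."""
        return self.hits >= self.hot_threshold

    def read(self, addr):
        self.hits += 1
        return self.content[addr]

def access(addr, buffer, cache):
    """Direct the operation to the buffer when its content is 'hot';
    otherwise fall back to the cache that also stores the content."""
    if addr in buffer.content and buffer.is_hot():
        return buffer.read(addr)       # buffer services the access
    if addr in buffer.content:
        buffer.hits += 1               # still warming up; count the touch
    return cache.get(addr)             # cache services the access

# Example: the fifth access to 0x40 is served from the buffer.
cache = {0x40: "payload"}
buf = LocalityBuffer()
buf.fill(0x40, "payload")
for _ in range(5):
    value = access(0x40, buf, cache)
print(value)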
Abstract:
Embodiments of a data cache that substantially decrease the number of accesses to a physically-tagged tag array of the data cache are disclosed. In general, the data cache includes a data array that stores data elements, a physically-tagged tag array, and a virtually-tagged tag array. In one embodiment, the virtually-tagged tag array receives a virtual address. If there is a match for the virtual address in the virtually-tagged tag array, the virtually-tagged tag array outputs, to the data array, a way stored in the virtually-tagged tag array for the virtual address. In addition, in one embodiment, the virtually-tagged tag array disables the physically-tagged tag array. Using the way output by the virtually-tagged tag array, a desired data element in the data array is addressed.
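A minimal Python sketch of that lookup flow, under an assumed geometry (64 sets, 4 ways, 64-byte lines) and with plain lists standing in for the three arrays; it illustrates the technique rather than the patented design, and the ptag_accesses counter merely records how often the physically-tagged array would have been enabled.

SETS, WAYS, LINE = 64, 4, 64   # assumed cache geometry

def fields(addr):
    """Split an address into (tag, set index) for this geometry."""
    return addr // (LINE * SETS), (addr // LINE) % SETS

class DataCacheSketch:
    def __init__(self):
        self.vtag = [[None] * WAYS for _ in range(SETS)]  # virtually-tagged tag array
        self.ptag = [[None] * WAYS for _ in range(SETS)]  # physically-tagged tag array
        self.data = [[None] * WAYS for _ in range(SETS)]  # data array
        self.ptag_accesses = 0         # how often the physical tag array was enabled

    def fill(self, vaddr, paddr, way, value):
        vt, idx = fields(vaddr)
        pt, _ = fields(paddr)
        self.vtag[idx][way], self.ptag[idx][way], self.data[idx][way] = vt, pt, value

    def read(self, vaddr, paddr):
        vt, idx = fields(vaddr)
        if vt in self.vtag[idx]:               # virtual-tag match:
            way = self.vtag[idx].index(vt)     #   way comes from the virtual tag array,
            return self.data[idx][way]         #   physical tag array stays disabled
        self.ptag_accesses += 1                # virtual-tag miss: enable the physical compare
        pt, _ = fields(paddr)
        if pt in self.ptag[idx]:
            return self.data[idx][self.ptag[idx].index(pt)]
        return None                            # cache miss

# Repeated reads of the same line keep hitting the virtual tag array,
# so the physically-tagged array is accessed far less often.
c = DataCacheSketch()
c.fill(vaddr=0x1040, paddr=0x9040, way=1, value="line")
assert c.read(0x1040, 0x9040) == "line" and c.ptag_accesses == 0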
Abstract:
Apparatuses and related systems and methods for determining cache hits and misses for aliased addresses in virtually-tagged caches are disclosed. In one embodiment, a virtual aliasing cache hit/miss detector for a virtually indexed, virtually tagged (VIVT) cache is provided. The detector comprises a translation lookaside buffer (TLB) configured to receive a first virtual address and a second virtual address, the second virtual address being provided by the VIVT cache as a result of an indexed read into the VIVT cache based on the first virtual address. The TLB is further configured to generate first and second physical addresses translated from the first and second virtual addresses, respectively. The detector further comprises a comparator configured to receive the first and second physical addresses and generate an aliased cache hit/miss indicator based on a comparison of the first and second physical addresses. In this manner, the virtual aliasing cache hit/miss detector correctly generates cache hits and cache misses even in the presence of aliased addressing.
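The detection step lends itself to a short behavioral sketch, assuming a 4 KB page size and a plain dictionary standing in for the TLB; the names and page size are assumptions for illustration, not details from the disclosure.

PAGE = 4096   # assumed page size

def translate(tlb, vaddr):
    """TLB lookup: map the virtual page number to a physical page, keep the offset."""
    return tlb[vaddr // PAGE] * PAGE + (vaddr % PAGE)

def aliased_hit(tlb, probe_vaddr, vaddr_from_cache_tag):
    """Comparator: equal physical addresses -> aliased cache hit, else miss."""
    return translate(tlb, probe_vaddr) == translate(tlb, vaddr_from_cache_tag)

# Two distinct virtual pages (synonyms) map to the same physical page 0x7.
tlb = {0x10: 0x7, 0x20: 0x7, 0x30: 0x8}
print(aliased_hit(tlb, 0x10123, 0x20123))   # True  -> hit despite differing virtual tags
print(aliased_hit(tlb, 0x10123, 0x30123))   # False -> genuine miss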
Abstract:
The disclosure is directed to a weakly-ordered processing system and method for enforcing strongly-ordered memory access requests in a weakly-ordered processing system. The processing system includes a plurality of memory devices and a plurality of processors. Each of the processors is configured to generate memory access requests to one or more of the memory devices, with each of the memory access requests having an attribute that can be asserted to indicate a strongly-ordered request. The processing system further includes a bus interconnect configured to interface the processors to the memory devices, the bus interconnect being further configured to enforce ordering constraints on the memory access requests based on the attributes.
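A toy model of that ordering rule, offered as a sketch rather than the disclosed bus interconnect: each request carries a strongly-ordered attribute, and the interconnect drains the issuing processor's outstanding requests before forwarding a request that asserts it. The device names and the completion model are assumptions.

from collections import namedtuple

Request = namedtuple("Request", "cpu device payload strongly_ordered")

class BusInterconnect:
    def __init__(self, devices):
        self.devices = devices        # device name -> list standing in for a memory queue
        self.outstanding = {}         # cpu -> count of requests not yet completed

    def issue(self, req):
        if req.strongly_ordered and self.outstanding.get(req.cpu, 0):
            # Ordering constraint: a strongly-ordered request is not forwarded
            # until every earlier request from the same processor has completed.
            self.drain(req.cpu)
        self.outstanding[req.cpu] = self.outstanding.get(req.cpu, 0) + 1
        self.devices[req.device].append(req.payload)

    def drain(self, cpu):
        # Stand-in for waiting on completion responses from the memory devices.
        self.outstanding[cpu] = 0

# The third request asserts the strongly-ordered attribute, so the interconnect
# drains the two weakly-ordered requests already outstanding for cpu 0 first.
ic = BusInterconnect({"ddr": [], "sram": []})
ic.issue(Request(0, "ddr", "store A", False))
ic.issue(Request(0, "sram", "store B", False))
ic.issue(Request(0, "ddr", "store C (ordered)", True))
print(ic.devices)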
Abstract:
Techniques and methods are used to control allocations to a higher level cache of cache lines displaced from a lower level cache. Allocation is prevented for displaced cache lines that are determined to be redundant in the next level cache, whereby castouts are controlled. To this end, a line is selected to be displaced in a lower level cache. Information associated with the selected line is identified which indicates that the selected line is present in a higher level cache. An allocation of the selected line in the higher level cache is prevented based on the identified information.
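One way to picture the mechanism, as a hedged sketch rather than the disclosed design: each lower-level line carries a bit recording whether it is already present in the higher-level cache, and that bit decides whether the displaced line is allocated there. The field names and the victim-selection policy below are illustrative.

class Line:
    def __init__(self, tag, data, present_in_higher_level):
        self.tag = tag
        self.data = data
        # The "identified information": whether the line already exists above.
        self.present_in_higher_level = present_in_higher_level

def displace(lower_set, higher_cache, victim_index=0):
    """Select a victim in the lower-level cache and allocate it in the
    higher-level cache only if it is not already (redundantly) there."""
    victim = lower_set.pop(victim_index)      # line selected to be displaced
    if victim.present_in_higher_level:
        return "castout suppressed"           # redundant allocation prevented
    higher_cache[victim.tag] = victim.data    # normal castout allocation
    return "castout performed"

l2 = {}
l1_set = [Line(0xA, "dirty data", present_in_higher_level=False),
          Line(0xB, "clean copy", present_in_higher_level=True)]
print(displace(l1_set, l2))   # castout performed  -> l2 now holds tag 0xA
print(displace(l1_set, l2))   # castout suppressed -> redundant castout avoided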
Abstract:
In a multiprocessor system, accesses to a given processor's banked cache are controlled such that shared data accesses are directed to one or more banks designated for holding shared data and/or non-shared data accesses are directed to one or more banks designated for holding non-shared data. A non-shared data bank may be designated exclusively for holding non-shared data, so that shared data accesses do not interfere with non-shared accesses to that bank. Also, a shared data bank may be designated exclusively for holding shared data, and one or more banks may be designated for holding both shared and non-shared data. An access control circuit directs shared and non-shared accesses to respective banks based on receiving a shared indication signal in association with the accesses. Further, in one or more embodiments, the access control circuit reconfigures one or more bank designations responsive to a bank configuration signal.
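The steering can be illustrated with a small Python model; the bank count, the designation encoding, and the reconfiguration interface are assumptions made for this sketch, not details taken from the disclosure.

SHARED, NON_SHARED, EITHER = "shared", "non_shared", "either"

class BankAccessControl:
    def __init__(self, designations):
        # designations: bank number -> SHARED, NON_SHARED, or EITHER
        self.designations = dict(designations)

    def reconfigure(self, bank, designation):
        """Bank configuration signal: change a bank's designation."""
        self.designations[bank] = designation

    def select_banks(self, shared_indication):
        """Direct the access to banks compatible with the shared indication
        signal received in association with it."""
        wanted = SHARED if shared_indication else NON_SHARED
        return [b for b, d in self.designations.items() if d in (wanted, EITHER)]

# Bank 0 holds only non-shared data, bank 1 only shared data, bank 2 either.
ctrl = BankAccessControl({0: NON_SHARED, 1: SHARED, 2: EITHER})
print(ctrl.select_banks(shared_indication=True))    # [1, 2]
print(ctrl.select_banks(shared_indication=False))   # [0, 2]
ctrl.reconfigure(2, SHARED)                         # repurpose bank 2 for shared data
print(ctrl.select_banks(shared_indication=False))   # [0]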
Abstract:
Efficient techniques are described for controlling ordered accesses in a weakly ordered storage system. A stream of memory requests is split into two or more streams of memory requests, and a memory access counter is incremented for each memory request. A memory request requiring ordered memory accesses is identified in one of the two or more streams. The memory request requiring ordered memory accesses is stalled upon determining that a previous memory request from a different stream is pending. The memory access counter is decremented for each memory request that is guaranteed to complete. A count value in the memory access counter that differs from the counter's initialized state indicates that there are pending memory requests. The memory request requiring ordered memory accesses is processed upon determining that there are no further pending memory requests.
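A compact sketch of the counting scheme, assuming two streams and one counter per stream, with the stall modeled as a return value rather than a real pipeline interlock; the names below are illustrative.

class OrderedAccessControl:
    def __init__(self, num_streams=2):
        self.pending = [0] * num_streams      # memory access counter per stream

    def issue(self, stream, ordered=False):
        """Issue a request on `stream`; stall it if it requires ordered accesses
        while another stream still has pending (uncompleted) requests."""
        if ordered and any(c != 0 for i, c in enumerate(self.pending) if i != stream):
            return "stalled"                  # a previous request elsewhere is pending
        self.pending[stream] += 1             # count the newly pending request
        return "issued"

    def complete(self, stream):
        """A request on `stream` is guaranteed to complete: decrement its counter."""
        self.pending[stream] -= 1

ctrl = OrderedAccessControl()
ctrl.issue(0)                                 # weakly-ordered request on stream 0
print(ctrl.issue(1, ordered=True))            # stalled: stream 0 still pending
ctrl.complete(0)                              # counter returns to its initialized state
print(ctrl.issue(1, ordered=True))            # issued: no pending requests remain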