Abstract:
Method and system for a multi-level virtual/real cache system with synonym resolution. An exemplary embodiment includes a multi-level cache hierarchy, including a set of L1 caches associated with one or more processor cores and a set of L2 caches, wherein the set of L1 caches is a subset of the set of L2 caches, and wherein the L1 caches underneath a given L2 cache are associated with one or more of the processor cores.
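As a rough illustration of the synonym-resolution idea, the following Python sketch (class names, sizes, and fields are assumptions for the example, not taken from the patent) models a real-indexed, inclusive L2 that remembers which virtual index an L1 used for each real line, so a request for the same real line under a different virtual index can invalidate the stale synonym before the new copy is installed.

```python
# Illustrative sketch (not the patented design): an inclusive, real-indexed L2
# resolves synonyms for virtually-indexed L1 caches by remembering, per real
# line, which L1 set (virtual index) currently holds the line.

class L1Cache:
    def __init__(self, num_sets):
        self.num_sets = num_sets
        self.sets = {}                        # virtual index -> real address

    def v_index(self, vaddr):
        return (vaddr // 64) % self.num_sets  # 64-byte lines, assumed

    def fill(self, vaddr, raddr):
        self.sets[self.v_index(vaddr)] = raddr

    def invalidate_index(self, v_idx):
        self.sets.pop(v_idx, None)


class L2Cache:
    """Real-indexed and inclusive of the L1 caches underneath it."""
    def __init__(self, l1_caches):
        self.l1s = l1_caches
        # real address -> (l1_id, virtual index) of the current L1 copy
        self.directory = {}

    def fetch(self, l1_id, vaddr, raddr):
        l1 = self.l1s[l1_id]
        owner = self.directory.get(raddr)
        if owner is not None:
            owner_id, owner_vidx = owner
            # Synonym: same real line already cached under a different
            # virtual index (or in another L1).  Invalidate the old copy.
            if owner_id != l1_id or owner_vidx != l1.v_index(vaddr):
                self.l1s[owner_id].invalidate_index(owner_vidx)
        l1.fill(vaddr, raddr)
        self.directory[raddr] = (l1_id, l1.v_index(vaddr))


# Two virtual aliases of the same real line:
l1 = L1Cache(num_sets=256)
l2 = L2Cache({0: l1})
l2.fetch(0, vaddr=0x0000_1000, raddr=0x8_0000)   # first alias fills index 64
l2.fetch(0, vaddr=0x0000_3000, raddr=0x8_0000)   # synonym: old index invalidated
assert 64 not in l1.sets and 192 in l1.sets
```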
Abstract:
A method for handling errors in a cache memory without processor core recovery includes receiving a fetch request for data from a processor and simultaneously transmitting fetched data and a parity matching the parity of the fetched data to the processor. The fetched data is received from a higher-level cache into a low-level cache of the processor. Upon determining that the fetched data failed an error check indicating that the fetched data is corrupted, the method includes requesting an execution pipeline to discontinue processing and flush its contents, and initiating a clean-up sequence, which includes sending an invalidation request to the low-level cache causing the low-level cache to remove the lines associated with the corrupted data, and requesting the execution pipeline to restart. The execution pipeline accesses a copy of the requested data from a higher-level storage location.
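A minimal Python sketch of this error-handling flow, with all names and data structures assumed for illustration: corrupted data is forwarded to the core with matching parity so the transfer itself is not delayed, and a later error check triggers a pipeline flush, an L1 invalidation, a restart, and a refetch from higher-level storage rather than a full core recovery.

```python
# Illustrative sketch (names and structure are assumptions, not the patent's
# implementation): a failed error check leads to a pipeline flush plus an
# invalidation of the bad line in the low-level cache, then a refetch.

def parity(block: bytes) -> int:
    p = 0
    for b in block:
        p ^= b
    return bin(p).count("1") & 1


class LowLevelCache:
    def __init__(self):
        self.lines = {}

    def fill(self, addr, block):
        self.lines[addr] = block

    def invalidate(self, addr):
        self.lines.pop(addr, None)


class Pipeline:
    def __init__(self):
        self.blocked = False

    def block_and_flush(self):
        self.blocked = True      # discontinue processing, discard contents

    def restart(self):
        self.blocked = False     # re-execute from the flushed instruction


def fetch(addr, l1, l2_data, l2_error, pipeline, higher_level_storage):
    """Return data for a fetch request, handling a corrupted higher-level line."""
    block = l2_data[addr]
    l1.fill(addr, block)
    # Data and a parity that matches the (possibly corrupted) data are sent
    # to the core at the same time, so the transfer itself never stalls.
    sent_parity = parity(block)

    if l2_error.get(addr):                   # error check later flags corruption
        pipeline.block_and_flush()           # stop and flush the pipeline
        l1.invalidate(addr)                  # clean-up: drop the bad line
        pipeline.restart()
        block = higher_level_storage[addr]   # refetch a good copy
        l1.fill(addr, block)
    return block, sent_parity


l1, pipe = LowLevelCache(), Pipeline()
l2 = {0x100: b"\xde\xad"}            # corrupted copy in the higher-level cache
errs = {0x100: True}                 # error check reports corruption
memory = {0x100: b"\x12\x34"}        # good copy in higher-level storage
data, _ = fetch(0x100, l1, l2, errs, pipe, memory)
assert data == b"\x12\x34"
```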
Abstract:
A method for handling cache coherency includes allocating a tag when a cache line is not exclusive in a data cache for a store operation, and sending the tag and an exclusive fetch for the line to coherency logic. An invalidation request is sent within a minimum amount of time to an I-cache, preferably only if the I-cache has fetched the line and the line has not since been invalidated; the request includes the address to be invalidated, the tag, and an indicator specifying that the line is for a program store compare (PSC) operation. The method further includes comparing the request address against stored addresses of prefetched instructions and, in response to a match, sending a match indicator and the tag to a load-store unit (LSU) within a maximum amount of time. The match indicator is timed, relative to the exclusive data return, such that the LSU can discard prefetched instructions following execution of the store operation that stores to a line subject to an exclusive data return and for which the match is indicated.
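The following Python sketch compresses the tag/invalidation handshake into a single call chain (class names, the line size, and the collapsed timing are assumptions, not the claimed implementation): the LSU allocates a tag for a store to a non-exclusive line, the I-cache compares the invalidation address against its prefetched lines, and a match indicator returned with the tag tells the LSU to discard prefetched instructions once the store completes.

```python
# Illustrative sketch (assumed names, collapsed timing): a tagged
# cross-invalidation lets the LSU discard stale prefetched instructions after
# a store into a line the I-cache had already fetched.

from dataclasses import dataclass, field

LINE = 256  # line size assumed for the example


@dataclass
class ICache:
    prefetched_lines: set = field(default_factory=set)

    def invalidate(self, addr, tag, is_psc):
        """Invalidation request from coherency logic: address + tag + PSC flag."""
        line = addr // LINE
        hit = line in self.prefetched_lines
        if hit:
            self.prefetched_lines.discard(line)
        # The match indicator and the tag go back to the LSU only on a hit.
        return (hit and is_psc, tag)


@dataclass
class LSU:
    pending: dict = field(default_factory=dict)    # tag -> store address
    next_tag: int = 0
    discard_prefetch: bool = False

    def store_to_nonexclusive(self, addr, icache):
        tag = self.next_tag                         # allocate a tag
        self.next_tag += 1
        self.pending[tag] = addr
        # The exclusive fetch goes to coherency logic, which forwards an
        # invalidation request (address, tag, PSC indicator) to the I-cache.
        match, rtag = icache.invalidate(addr, tag, is_psc=True)
        # Exclusive data return for this tag: the store can now complete.
        del self.pending[rtag]
        if match:
            # Prefetched instructions younger than the store are stale.
            self.discard_prefetch = True


icache = ICache(prefetched_lines={0x4000 // LINE})
lsu = LSU()
lsu.store_to_nonexclusive(0x4010, icache)   # store hits a prefetched line
assert lsu.discard_prefetch
```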
Abstract:
A pipelined microprocessor includes circuitry for store forwarding by performing: for each store request, and while a write to one of a cache and a memory is pending: obtaining the most recent value for at least one complete block of data; merging store data from the store request with the complete block of data, thus updating the block of data and forming a new most recent value and an updated complete block of data; and buffering the updated complete block of data into a store data queue; and for each load request, where the load request may require at least one updated complete block of data: determining if store forwarding is appropriate for the load request on a block-by-block basis; if store forwarding is appropriate, selecting an appropriate block of data from the store data queue on a block-by-block basis; and forwarding the selected block of data to the load request.
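A small Python sketch of block-granular store forwarding to a load, with block size, names, and data layout assumed for illustration: pending stores keep the most recent value of each complete block in a store data queue, and a later load is served block by block from the queue where an updated block exists, otherwise from the cache.

```python
# Illustrative sketch (assumed names and sizes): a store data queue holds the
# most recent value of each complete block touched by pending stores, so a
# later load can be forwarded whole blocks instead of re-reading the cache.

BLOCK = 8  # block size in bytes, assumed


def block_addr(addr):
    return addr - (addr % BLOCK)


def do_store(addr, data: bytes, cache, store_data_queue):
    """Merge store data into the most recent value of its block(s)."""
    for i, byte in enumerate(data):
        ba = block_addr(addr + i)
        # Most recent value: the queued block if present, else the cache copy.
        block = bytearray(store_data_queue.get(ba, cache[ba]))
        block[(addr + i) % BLOCK] = byte
        store_data_queue[ba] = bytes(block)     # buffer the updated block


def do_load(addr, length, cache, store_data_queue):
    """Forward updated blocks from the queue on a block-by-block basis."""
    out = bytearray()
    ba = block_addr(addr)
    while ba < addr + length:
        # Forwarding is appropriate for this block only if a pending store
        # has produced a newer value than the cache holds.
        out += store_data_queue.get(ba, cache[ba])
        ba += BLOCK
    start = addr % BLOCK
    return bytes(out[start:start + length])


cache = {0: b"\x00" * BLOCK, 8: b"\x00" * BLOCK}
sdq = {}
do_store(6, b"\xaa\xbb\xcc", cache, sdq)        # store spans two blocks
assert do_load(5, 5, cache, sdq) == b"\x00\xaa\xbb\xcc\x00"
```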
Abstract:
A pipelined processor includes circuitry adapted for store forwarding, including: for each store request, and while a write to one of a cache and a memory is pending: obtaining the most recent value for at least one block of data; merging store data from the store request with the block of data, thus updating the block of data and forming a new most recent value and an updated complete block of data; and buffering the updated block of data into a store data queue; and for each additional store request, where the additional store request requires at least one updated block of data: determining if store forwarding is appropriate for the additional store request on a block-by-block basis; if store forwarding is appropriate, selecting an appropriate block of data from the store data queue on a block-by-block basis; and forwarding the selected block of data to the additional store request.
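The same queue structure also covers store-to-store forwarding, sketched below with assumed names and sizes: a later store that touches a block already updated by an earlier pending store merges into the queued (most recent) block value rather than the stale cache copy.

```python
# Illustrative sketch (assumed names and sizes): forwarding an updated block
# from the store data queue to a subsequent store that writes the same block.

BLOCK = 8  # block size in bytes, assumed


def merge_store(addr, data: bytes, cache, store_data_queue):
    for i, byte in enumerate(data):
        ba = (addr + i) - ((addr + i) % BLOCK)
        # Forwarding decision per block: take the queued (most recent) value
        # if an earlier pending store already updated it, else the cache copy.
        block = bytearray(store_data_queue.get(ba, cache[ba]))
        block[(addr + i) % BLOCK] = byte
        store_data_queue[ba] = bytes(block)


cache = {0: b"\x00" * BLOCK}
sdq = {}
merge_store(0, b"\x11\x22", cache, sdq)     # first pending store updates block 0
merge_store(1, b"\x33", cache, sdq)         # second store forwarded the queued block
assert sdq[0] == b"\x11\x33" + b"\x00" * 6
```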