Abstract:
A method, system and computer program product for optimizing EPN to RPN translation when a data miss occurs. The method, system and computer program product take advantage of the high likelihood of finding the matching PTE in the first half of the PTEG, utilize early data-coming signals from the L2 cache to prime the data-flow pipe to the D-ERAT arrays, and request a joint steal cycle for executing the write into the D-ERAT along with a restart request for re-dispatching the next-to-complete instruction.
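The following is a minimal C sketch of the search-order idea only: probe the first half of the PTEG first, where a hit is most likely, and fall back to the second half on a miss. All names and sizes (pte_t, PTEG_SIZE, derat_write_and_restart) are hypothetical placeholders, and the joint D-ERAT write plus restart request is stood in for by a stub rather than modeling the hardware.

/* Sketch: PTEG lookup that checks the first half of the group first. */
#include <stdbool.h>
#include <stdint.h>

#define PTEG_SIZE 8                  /* 8 PTEs per group, assumed for the sketch */

typedef struct {
    uint64_t epn;                    /* effective page number tag */
    uint64_t rpn;                    /* real page number          */
    bool     valid;
} pte_t;

/* Placeholder for priming the D-ERAT write and raising the restart request. */
static void derat_write_and_restart(uint64_t epn, uint64_t rpn)
{
    (void)epn;
    (void)rpn;
}

bool translate_epn(const pte_t pteg[PTEG_SIZE], uint64_t epn)
{
    /* First pass: the first half of the PTEG, where a hit is most likely. */
    for (int i = 0; i < PTEG_SIZE / 2; i++) {
        if (pteg[i].valid && pteg[i].epn == epn) {
            derat_write_and_restart(epn, pteg[i].rpn);
            return true;
        }
    }
    /* Fall back to the second half only if the fast path missed. */
    for (int i = PTEG_SIZE / 2; i < PTEG_SIZE; i++) {
        if (pteg[i].valid && pteg[i].epn == epn) {
            derat_write_and_restart(epn, pteg[i].rpn);
            return true;
        }
    }
    return false;                    /* fault path, not covered by the abstract */
}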
摘要:
A management system that controls a restart interface in a data processing system. The management system switches control of the interface from a distributed network managed by the caches to the management system. The management system is capable of detecting errors and seizing control of the interface in order to remedy any errors that occur within it.
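Below is a minimal C sketch of the control hand-off described above: by default the interface is driven by the distributed cache network; when an error is detected, the management system seizes control, remedies the error, and hands control back. The enum, structure, and function names are hypothetical illustrations, not the patented mechanism.

/* Sketch: management system seizing and releasing the restart interface. */
#include <stdbool.h>

typedef enum { CTRL_DISTRIBUTED, CTRL_MANAGEMENT } restart_ctrl_t;

typedef struct {
    restart_ctrl_t owner;            /* who currently drives the interface */
    bool           error_seen;       /* sticky error indication            */
} restart_if_t;

/* Hypothetical detection and remedy hooks. */
static bool detect_interface_error(const restart_if_t *rif) { return rif->error_seen; }
static void remedy_interface_error(restart_if_t *rif)       { rif->error_seen = false; }

void manage_restart_interface(restart_if_t *rif)
{
    if (detect_interface_error(rif)) {
        rif->owner = CTRL_MANAGEMENT;    /* seize control from the caches */
        remedy_interface_error(rif);
        rif->owner = CTRL_DISTRIBUTED;   /* return control once remedied  */
    }
}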
Abstract:
A method, apparatus, and computer program product are disclosed in a data processing system for sharing data in a cache among multiple threads in a simultaneous multi-threaded (SMT) processor. The SMT processor executes multiple threads concurrently during each clock cycle. The cache is dynamically allocated for use among the multiple threads. Portions of the cache are capable of being designated to store private data that is used exclusively by a first one of the threads. The portions of the cache are capable of being designated to store shared data that can be used by any one of the multiple threads. The size of the portions can be changed dynamically during execution of the threads.
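The C sketch below illustrates the designation idea in software terms: each portion of the cache is marked either private to one thread or shared by all threads, and the designation can change while the threads run. The structures and the NUM_PORTIONS/NUM_THREADS sizes are assumptions for illustration only.

/* Sketch: per-portion private/shared designation for an SMT cache. */
#include <stdbool.h>

#define NUM_PORTIONS 8
#define NUM_THREADS  2               /* SMT-2 assumed for the sketch */

typedef enum { PORTION_SHARED, PORTION_PRIVATE } designation_t;

typedef struct {
    designation_t kind;
    int           owner;             /* valid only when kind == PORTION_PRIVATE */
} portion_t;

static portion_t portions[NUM_PORTIONS];

/* Re-designate a portion at run time, e.g. when one thread's working set grows. */
void designate_private(int portion, int thread) {
    portions[portion].kind  = PORTION_PRIVATE;
    portions[portion].owner = thread;
}

void designate_shared(int portion) {
    portions[portion].kind = PORTION_SHARED;
}

/* A thread may use a portion if it is shared or privately owned by that thread. */
bool may_use(int portion, int thread) {
    return portions[portion].kind == PORTION_SHARED ||
           portions[portion].owner == thread;
}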
Abstract:
A method and system for maintaining a best-case demand redispatch of an instruction, maximizing the time a rejected thread may execute in lookahead execution mode while maintaining the smallest L1 cache miss penalty supported by the memory subsystem. In response to a demand miss, a load/store unit sends a fetch request to the next level cache. The cache line of the demand miss is examined to identify the critical sector. Once the critical sector is identified, a best-case data return time is determined based on the fastest time the next level cache is able to return the critical sector of the cache line. The load/store unit then sends a speculative warning to the dispatch unit to coincide with the best-case data return, wherein the speculative warning prepares the dispatch unit to resend the instruction for execution as soon as data is available to the processor core.
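A minimal C sketch of the timing idea follows: on a demand miss, find the critical sector of the line, derive the best-case cycle at which the next-level cache can return that sector, and schedule the speculative warning to the dispatch unit to coincide with it. All latencies and names (SECTOR_BYTES, L2_LEAD_LATENCY, and so on) are hypothetical placeholders, not figures from the abstract.

/* Sketch: best-case speculative warning time for a demand miss. */
#include <stdint.h>

#define LINE_BYTES        128
#define SECTOR_BYTES      32                 /* 4 sectors per line, assumed   */
#define L2_LEAD_LATENCY   12                 /* cycles to the first sector    */
#define SECTOR_BEAT       2                  /* cycles between later sectors  */

/* The critical sector is the one containing the demanded address. */
static int critical_sector(uint64_t demand_addr) {
    return (int)((demand_addr % LINE_BYTES) / SECTOR_BYTES);
}

/* Best case: the cache returns the critical sector first, so the earliest
 * warning time is just the lead latency; otherwise add one beat per sector
 * that drains ahead of it (a conservative placeholder model).             */
int best_case_warning_cycle(uint64_t demand_addr, int now, int critical_first) {
    int sector = critical_sector(demand_addr);
    int wait   = critical_first ? 0 : sector * SECTOR_BEAT;
    return now + L2_LEAD_LATENCY + wait;
}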
Abstract:
A method, system, and computer program product for supporting multiple fetch requests to the same congruence class in an n-way set associative cache. Responsive to receiving an incoming fetch instruction at a load/store unit, outstanding valid fetch entries in the n-way set associative cache that have the same cache congruence class as the incoming fetch instruction are identified. SetIDs in use by these identified outstanding valid fetch entries are determined. A resulting setID is assigned to the incoming fetch instruction based on the identified setIDs, wherein the resulting setID assigned is a setID not currently in use by the outstanding valid fetch entries. The resulting setID for the incoming fetch instruction is written in a corresponding entry in the n-way set associative cache.
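The C sketch below illustrates the setID assignment step: scan the outstanding valid fetch entries for ones that target the same congruence class as the incoming fetch, mark their setIDs as busy, and assign the incoming fetch a setID that is not in use. The entry layout and the 8-way/16-entry sizes are assumptions made for the sketch, not values from the abstract.

/* Sketch: choose a setID not in use by outstanding fetches to the same class. */
#include <stdbool.h>
#include <stdint.h>

#define WAYS         8               /* n-way set associative, n = 8 assumed */
#define FETCH_QUEUE  16

typedef struct {
    bool     valid;
    uint32_t congruence_class;
    int      set_id;                 /* way chosen for this outstanding fetch */
} fetch_entry_t;

/* Returns the assigned setID, or -1 if every way is already claimed by an
 * outstanding fetch to the same congruence class.                         */
int assign_set_id(const fetch_entry_t queue[FETCH_QUEUE], uint32_t incoming_class)
{
    bool in_use[WAYS] = { false };

    for (int i = 0; i < FETCH_QUEUE; i++) {
        if (queue[i].valid && queue[i].congruence_class == incoming_class) {
            in_use[queue[i].set_id] = true;
        }
    }
    for (int way = 0; way < WAYS; way++) {
        if (!in_use[way]) {
            return way;              /* written into the incoming fetch's entry */
        }
    }
    return -1;
}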