Abstract:
A method, system, and computer program product for enhancing performance of an in-order microprocessor with long stalls. In particular, the mechanism of the present invention provides a data structure, maintained within the processor, that holds information used by the processor. The data structure includes a group of bits that keeps track of which instructions preceded a rejected instruction, and therefore will be allowed to complete, and which instructions follow the rejected instruction. The group of bits comprises a bit indicating whether a reject was a fast or slow reject, and a bit for each cycle that represents the state of an instruction passing through the pipeline. The processor speculatively continues to execute the instruction corresponding to a set bit during stalled periods in order to generate addresses that will be needed when the stall period ends and normal dispatch resumes.
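The abstract describes the bit group only at a functional level. The C sketch below models one plausible software analogue of that structure; the pipeline depth, the field names fast_reject and stage_valid, and the convention for which stages the set bits denote relative to the rejected instruction are all assumptions for illustration, not details from the original.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define PIPE_DEPTH 8                      /* illustrative pipeline depth */

/* Bit group recording the state of the pipeline around a rejected
   instruction.  Field names and layout are assumptions for the sketch. */
struct reject_track {
    bool    fast_reject;                  /* 1 = fast reject, 0 = slow reject */
    uint8_t stage_valid[PIPE_DEPTH];      /* one bit per cycle/stage          */
};

/* Record a reject seen in 'reject_stage'.  Here stages ahead of the reject
   (older instructions) are left clear and will complete normally; the
   rejected stage and everything behind it are marked.  That convention is
   an assumption of this sketch. */
void record_reject(struct reject_track *t, int reject_stage, bool fast)
{
    t->fast_reject = fast;
    for (int s = 0; s < PIPE_DEPTH; s++)
        t->stage_valid[s] = (s <= reject_stage) ? 1 : 0;
}

/* During the stall, instructions whose bit is set keep executing
   speculatively so their addresses are ready when dispatch resumes. */
void speculative_agen(const struct reject_track *t)
{
    for (int s = 0; s < PIPE_DEPTH; s++)
        if (t->stage_valid[s])
            printf("stage %d: speculative address generation\n", s);
}

int main(void)
{
    struct reject_track t;
    record_reject(&t, 3, /*fast=*/true);  /* reject observed in stage 3 */
    speculative_agen(&t);
    return 0;
}
```

In hardware the bit group would be a small set of latches rather than a struct; the sketch only illustrates the bookkeeping.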
Abstract:
A method, apparatus and algorithm for quickly detecting an address match in a deeply pipelined processor design in a manner that may be implemented using a minimum of physical space in the critical area of the processor. The address comparison is split into two parts. The first part is a fast, partial address match comparator system. The second part is a slower, full address match comparator system. If a partial match between a requested address and a registry address is detected, then execution of the program or set of instructions requesting the address is temporarily suspended while a full address match check is performed. If the full address match check results in a full match between the requested address and a registry address, then the program or set of instructions is interrupted and stopped. Otherwise, the program or set of instructions continues execution.
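As a rough illustration of the split comparison, the C sketch below checks a requested address against a registry with a cheap partial comparator first and falls back to the full comparator only on a partial hit. The 12-bit partial mask, the flat registry array, and the function names are assumptions made for the sketch, not details from the original.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define PARTIAL_MASK 0xFFFULL   /* assumed: fast comparator checks low 12 bits */

/* Fast, partial comparator: small and cheap, may give false positives. */
static bool partial_match(uint64_t req, uint64_t reg_addr)
{
    return (req & PARTIAL_MASK) == (reg_addr & PARTIAL_MASK);
}

/* Slow, full comparator: checks every address bit, no false positives. */
static bool full_match(uint64_t req, uint64_t reg_addr)
{
    return req == reg_addr;
}

/* Returns true if execution must be stopped (a full match was found).
   On a partial hit the requester is expected to be suspended while the
   slow check runs; on a miss it keeps running. */
bool check_address(uint64_t req, const uint64_t *registry, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (partial_match(req, registry[i])) {
            /* requester suspended here, then the slow check runs */
            if (full_match(req, registry[i]))
                return true;   /* full match: interrupt and stop */
        }
    }
    return false;              /* no full match: resume execution */
}
```

The design point is that only the narrow comparator sits in the timing-critical area; the wide comparator is invoked rarely, so its latency and placement matter less.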
Abstract:
A method and system for maintaining a best-case demand redispatch of an instruction to allow for maximizing the time a rejected thread may execute in lookahead execution mode, while maintaining the smallest L1 cache miss penalty supported by the memory subsystem. In response to a demand miss, a load/store unit sends a fetch request to the next level cache. The cache line of the demand miss is examined to identify the critical sector. Once the critical sector is identified, a best-case data return time is determined based on the fastest time the next level cache is able to return the critical sector of the cache line. The load/store unit then sends a speculative warning to the dispatch unit to coincide with the best-case data return, wherein the speculative warning prepares the dispatch unit to resend the instruction for execution as soon as data is available to the processor core.
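A minimal C sketch of the timing calculation described above follows. The sector geometry, the next-level-cache latency, and whether the critical sector is returned first are all illustrative assumptions; the sketch only shows how a best-case return cycle could be derived from them and used to time the speculative warning to the dispatch unit.

```c
#include <stdint.h>
#include <stdio.h>

#define SECTOR_BYTES     32u    /* assumed sector size within a cache line   */
#define LINE_BYTES       128u   /* assumed L1 cache line size                */
#define L2_BEST_LATENCY  12u    /* assumed fastest next-level return, cycles */
#define SECTOR_BEAT      2u     /* assumed cycles per additional sector beat */

/* Sector inside the line that the demand load actually needs. */
static unsigned critical_sector(uint64_t demand_addr)
{
    return (unsigned)((demand_addr % LINE_BYTES) / SECTOR_BYTES);
}

/* Best case: if the next-level cache returns the critical sector first,
   the earliest possible arrival is the base latency; if it streams the
   sectors in order, later sectors arrive one beat apart. */
static unsigned best_case_return(unsigned sector, int critical_first)
{
    if (critical_first)
        return L2_BEST_LATENCY;
    return L2_BEST_LATENCY + sector * SECTOR_BEAT;
}

int main(void)
{
    uint64_t demand_addr = 0x1000 + 96;            /* demand miss address */
    unsigned sector = critical_sector(demand_addr);
    unsigned warn_cycle = best_case_return(sector, /*critical_first=*/1);

    /* Speculative warning to the dispatch unit: be ready to resend the
       rejected instruction exactly when the data could first arrive.   */
    printf("critical sector %u, warn dispatch at +%u cycles\n",
           sector, warn_cycle);
    return 0;
}
```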
Abstract:
Method, system and computer program product for generating effective addresses in a data processing system. A method, in a data processing system, for generating an effective address includes generating a first portion of the effective address by calculating a first plurality of effective address bits of the effective address, and generating a second portion of the effective address by guessing a second plurality of effective address bits of the effective address. By intelligently guessing a plurality of the effective address bits that form the effective address, the effective address can be generated and sent to a translation unit more quickly than in a system in which all the effective address bits of the effective address are calculated. The method and system are particularly suitable for generating effective addresses in a CAM-based effective address translation design in a multi-threaded environment.
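The C sketch below illustrates one common way such a guess could be formed: the low-order effective address bits are calculated with a narrow add, while the high-order bits are guessed by copying them from the base register, i.e. assuming no carry crosses out of the low add. The 12-bit split and the replay-on-mismatch handling are assumptions made for illustration; the abstract does not specify how the guess is made.

```c
#include <stdint.h>
#include <stdio.h>

#define LOW_BITS  12u                          /* assumed: low 12 bits computed exactly */
#define LOW_MASK  ((1ull << LOW_BITS) - 1)

/* Fast path: compute the low EA bits with a narrow add and *guess* the
   high bits by taking them straight from the base register, i.e. assume
   the low add produces no carry out.  The guess is verified later. */
uint64_t ea_guess(uint64_t base, int64_t disp)
{
    uint64_t low  = (base + (uint64_t)disp) & LOW_MASK;  /* calculated bits */
    uint64_t high = base & ~LOW_MASK;                    /* guessed bits    */
    return high | low;
}

/* Slow path: full-width add, used to verify the guess. */
uint64_t ea_full(uint64_t base, int64_t disp)
{
    return base + (uint64_t)disp;
}

int main(void)
{
    uint64_t base = 0x00007FFF12340F00ull;
    int64_t  disp = 0x40;

    uint64_t guess = ea_guess(base, disp);
    uint64_t exact = ea_full(base, disp);

    /* The guessed EA can be sent to the translation unit immediately; if
       the later full add disagrees (a carry crossed the split), the
       access is replayed with the correct address.                      */
    printf("guess %#llx %s\n", (unsigned long long)guess,
           guess == exact ? "(correct)" : "(mispredicted, replay)");
    return 0;
}
```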
Abstract:
Method, system and computer program product for tracking changes in an L1 data cache directory. A method for tracking changes in an L1 data cache directory determines if data to be written to the L1 data cache is to be written to an address to be changed from an old address to a new address. If it is determined that the data to be written is to be written to an address to be changed, a determination is made if the data to be written is associated with the old address or the new address. If it is determined that the data is to be written to the new address, the data is allowed to be written to the new address following a prescribed delay after the address to be changed is changed. The method is preferably implemented in a system that provides a Store Queue (STQU) design that includes a Content Addressable Memory (CAM)-based store address tracking mechanism that includes early and late write CAM ports. The method eliminates time windows and the need for an extra copy of the L1 data cache directory.
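The C sketch below captures the write-gating decision described above: a store headed to an address that is being changed is classified as belonging to either the old or the new address, and a write to the new address is held off for a prescribed delay after the change completes. The field names, the fixed two-cycle delay, and the treatment of writes to the old address are assumptions; the early and late write CAM ports of the STQU design are not modeled here.

```c
#include <stdint.h>
#include <stdbool.h>

/* One directory entry whose address is in the middle of being changed.
   The field names and the fixed delay are illustrative assumptions.    */
struct addr_change {
    uint64_t old_addr;        /* address being replaced                  */
    uint64_t new_addr;        /* address it is being changed to          */
    uint64_t change_cycle;    /* cycle at which the change takes effect  */
};

#define WRITE_DELAY 2u        /* assumed prescribed delay, in cycles     */

/* Decide whether a store to 'wr_addr' may proceed at cycle 'now'.
   Writes to the old address are allowed only before the change (an
   assumption of this sketch); writes to the new address must wait
   WRITE_DELAY cycles after the change completes.                       */
bool write_allowed(const struct addr_change *c,
                   uint64_t wr_addr, uint64_t now)
{
    if (wr_addr == c->old_addr)
        return now < c->change_cycle;                  /* old address */
    if (wr_addr == c->new_addr)
        return now >= c->change_cycle + WRITE_DELAY;   /* new address */
    return true;               /* unrelated address: no restriction   */
}
```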