Abstract:
A method, system and computer program product for performing an implicit predicted return from a predicted subroutine are provided. The system includes a branch history table/branch target buffer (BHT/BTB) to hold branch information, including a target address of a predicted subroutine and a branch type. The system also includes instruction buffers, and instruction fetch controls to perform a method including fetching a branch instruction at a branch address and a return-point instruction. The method also includes receiving the target address and the branch type, and fetching a fixed number of instructions in response to the branch type. The method further includes referencing the return-point instruction within the instruction buffers such that the return-point instruction is available upon completing the fetching of the fixed number of instructions absent a re-fetch of the return-point instruction.
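To make the flow concrete, here is a minimal Python sketch of the fetch sequence described above, assuming a toy address-indexed memory. The FetchEngine class, the PREDICTED_SUBROUTINE branch-type label, and the fixed count of four instructions are illustrative assumptions, not details from the abstract.

```python
# Minimal sketch of an implicit predicted return, assuming a toy
# address-indexed memory. FetchEngine, the PREDICTED_SUBROUTINE label and
# FIXED_COUNT are illustrative assumptions, not details from the abstract.

FIXED_COUNT = 4  # assumed number of instructions fetched for this branch type


class FetchEngine:
    def __init__(self, memory, bht_btb):
        self.memory = memory            # address -> instruction
        self.bht_btb = bht_btb          # branch address -> (target, branch type)
        self.instruction_buffers = {}   # address -> buffered instruction

    def fetch(self, address):
        """Fetch one instruction and keep it referenced in the buffers."""
        instruction = self.memory[address]
        self.instruction_buffers[address] = instruction
        return instruction

    def run_predicted_branch(self, branch_address):
        # Fetch the branch instruction and the sequential return-point instruction.
        self.fetch(branch_address)
        return_point = branch_address + 1
        self.fetch(return_point)

        # Receive the target address and branch type from the BHT/BTB.
        target, branch_type = self.bht_btb[branch_address]

        stream = []
        if branch_type == "PREDICTED_SUBROUTINE":
            # Fetch a fixed number of instructions at the predicted target.
            for offset in range(FIXED_COUNT):
                stream.append(self.fetch(target + offset))
            # Implicit predicted return: the return-point instruction is still
            # referenced in the instruction buffers, so no re-fetch is needed.
            stream.append(self.instruction_buffers[return_point])
        return stream
```

The key point is the last step of run_predicted_branch: the return-point instruction is taken from the instruction buffers rather than being fetched again. For example, with memory = {10: 'BRANCH', 11: 'RET-POINT', 100: 'S0', 101: 'S1', 102: 'S2', 103: 'S3'} and bht_btb = {10: (100, 'PREDICTED_SUBROUTINE')}, run_predicted_branch(10) returns the four subroutine instructions followed by the already-buffered 'RET-POINT'.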
Abstract:
A branch prediction algorithm is used to generate a prediction of whether or not a branch will be taken. One or more instructions are fetched such that, for each of the fetched instructions, the prediction initiates a fetch of an instruction at a predicted target of the branch. A test is performed to ascertain whether or not the prediction was generated late relative to the fetched instructions, so that if the branch is later detected as mispredicted, that detection can be correlated to the late prediction. When the prediction is generated late relative to the fetched instructions, a latent prediction is selected by utilizing a fetching initiated by the latent prediction such that a new fetch is not started.
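A minimal sketch of how such a late ("latent") prediction might be handled, assuming a simple cycle-stamped fetch model. The Prediction and FetchUnit names, the cycle fields, and the way lateness is detected are illustrative assumptions rather than the abstract's specific mechanism.

```python
# Sketch of selecting a late ("latent") branch prediction, assuming a simple
# cycle-stamped fetch model. The dataclass names, cycle fields and lateness
# test are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Prediction:
    taken: bool
    target: int
    cycle: int          # cycle at which the prediction became available
    late: bool = False  # recorded so a later misprediction can be correlated


@dataclass
class FetchUnit:
    fetch_cycles: List[int] = field(default_factory=list)
    pending_fetch: Optional[int] = None  # fetch already initiated by the latent prediction

    def note_fetch(self, cycle):
        self.fetch_cycles.append(cycle)

    def apply_prediction(self, prediction):
        # Test whether the prediction was generated late relative to the
        # instructions already fetched.
        prediction.late = any(cycle < prediction.cycle for cycle in self.fetch_cycles)

        if not prediction.taken:
            return None
        if prediction.late and self.pending_fetch == prediction.target:
            # Select the latent prediction: reuse the fetch it already
            # initiated, so a new fetch is not started.
            return self.pending_fetch
        # Otherwise initiate a new fetch at the predicted target.
        self.pending_fetch = prediction.target
        return self.pending_fetch

    def report_misprediction(self, prediction):
        # A later misprediction detection can be correlated to the late prediction.
        return "mispredicted late prediction" if prediction.late else "mispredicted prediction"
```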
Abstract:
A method for reducing cache memory pollution is provided, including fetching an instruction stream from a cache line, preventing a fetching for the instruction stream from a sequential cache line, searching for a next predicted taken branch instruction, determining whether a length of the instruction stream extends beyond a length of the cache line based on the next predicted taken branch instruction, continuing to prevent the fetching for the instruction stream from the sequential cache line if the length of the instruction stream does not extend beyond the length of the cache line, and allowing the fetching for the instruction stream from the sequential cache line if the length of the instruction stream extends beyond the length of the cache line, whereby the fetching from the sequential cache line and a resulting polluting of a cache memory that stores the instruction stream are minimized. A corresponding system and computer program product are also provided.
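The core decision reduces to checking whether the stream, bounded by the next predicted taken branch, crosses the current cache-line boundary. A small sketch, assuming a byte-addressed cache with 256-byte lines and hypothetical function and parameter names:

```python
# Sketch of the sequential-line fetch decision, assuming a byte-addressed
# instruction cache with fixed-size lines. Names and the line size are
# illustrative assumptions.

CACHE_LINE_SIZE = 256  # assumed line size in bytes


def allow_sequential_line_fetch(stream_start, next_taken_branch_end):
    """Return True only if the instruction stream, which runs from its start
    to the next predicted taken branch, extends beyond the current cache line.

    stream_start          -- address where the stream enters the cache line
    next_taken_branch_end -- address just past the next predicted taken branch
    """
    current_line = stream_start // CACHE_LINE_SIZE
    last_line_touched = (next_taken_branch_end - 1) // CACHE_LINE_SIZE
    # Keep blocking the sequential-line fetch while the stream stays within
    # the current line; allow it once the stream crosses the line boundary.
    return last_line_touched > current_line


# Example: a stream starting at 0x40 that ends at a taken branch before 0x80
# stays inside one 256-byte line, so the sequential fetch remains blocked.
assert allow_sequential_line_fetch(0x40, 0x80) is False
assert allow_sequential_line_fetch(0x40, 0x140) is True
```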
Abstract:
An error detection system is provided. The system includes a data array that includes one or more data entries. A copy datastore selectively stores a copy of a first single data entry of the data array. An index generator selectively increments an index that references the data array. A first comparator compares the copy with a second single data entry from the data array based on the index. An error generator generates an error signal based on a result from the first comparator.
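One plausible reading of this scheme is a scrub-like check: store a copy of a single entry, re-read the array at the current index, compare the two, flag a mismatch, and let the index walk the array. A minimal sketch under that assumption, with illustrative class and method names:

```python
# Sketch of the error check, read here as a scrub-like compare. Class and
# method names are illustrative assumptions.


class ArrayErrorChecker:
    def __init__(self, data_array):
        self.data_array = data_array  # the data array with one or more entries
        self.index = 0                # index generator state
        self.copy = None              # copy datastore for a single entry

    def capture(self):
        # Copy datastore: selectively store a copy of the entry at the index.
        self.copy = self.data_array[self.index]

    def check_and_advance(self):
        # First comparator: compare the stored copy with the entry re-read
        # from the data array based on the index.
        mismatch = self.data_array[self.index] != self.copy

        # Error generator: assert an error signal when the compare fails.
        error_signal = bool(mismatch)

        # Index generator: selectively increment the index into the array.
        self.index = (self.index + 1) % len(self.data_array)
        return error_signal
```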
Abstract:
Embodiments relate to using a branch target buffer preload table. An aspect includes receiving a search request to locate branch prediction information associated with a branch instruction. Searching is performed for an entry corresponding to the search request in a branch target buffer and a branch target buffer preload table in parallel. Based on locating a matching entry in the branch target buffer preload table corresponding to the search request and failing to locate the matching entry in the branch target buffer, a victim entry is selected to overwrite in the branch target buffer. Branch prediction information of the matching entry is received from the branch target buffer preload table at the branch target buffer. The victim entry in the branch target buffer is overwritten with the branch prediction information of the matching entry.
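A minimal sketch of the lookup-and-install path, assuming dictionary-backed tables keyed by instruction address and a round-robin victim policy (the policy is an assumption; the abstract only says a victim entry is selected):

```python
# Sketch of the parallel BTB / preload-table search and install. The
# round-robin victim selection and the class names are illustrative
# assumptions.


class BranchTargetBuffer:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = {}        # instruction address -> branch prediction info
        self.victim_cursor = 0   # assumed round-robin victim pointer

    def select_victim(self):
        # Select an existing entry to overwrite (round-robin over current keys).
        keys = list(self.entries.keys())
        victim = keys[self.victim_cursor % len(keys)]
        self.victim_cursor += 1
        return victim

    def search(self, address, preload_table):
        # Search the BTB and the preload table in parallel (modelled here as
        # two dictionary lookups).
        btb_hit = self.entries.get(address)
        preload_hit = preload_table.get(address)

        if btb_hit is not None:
            return btb_hit

        if preload_hit is not None:
            # Hit in the preload table, miss in the BTB: overwrite a victim
            # entry with the preload table's branch prediction information.
            if len(self.entries) >= self.capacity:
                del self.entries[self.select_victim()]
            self.entries[address] = preload_hit
            return preload_hit

        return None
```

After the install, a repeat search for the same address hits directly in the BTB, which is the point of promoting the preload-table entry.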
Abstract:
Embodiments of the invention relate to counter-based entry invalidation for a metadata previous write queue (PWQ). An aspect of the invention includes writing an address into an entry in the metadata PWQ, the address being associated with an instance of metadata received from a pipeline and setting a valid tag associated with the entry in the metadata PWQ to valid. Another aspect of the invention includes initializing a counter to zero and incrementing the counter based on receiving a count signal from the pipeline until the counter is equal to a threshold. Yet another aspect of the invention includes setting the valid tag to invalid based on the counter being equal to the threshold.
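A minimal sketch of one PWQ entry's lifecycle, assuming a per-entry counter and an arbitrary threshold of eight (the threshold value, class, and method names are illustrative assumptions):

```python
# Sketch of counter-based invalidation for one metadata PWQ entry. The
# threshold value and names are illustrative assumptions.


class PWQEntry:
    def __init__(self, threshold=8):
        self.address = None
        self.valid = False
        self.counter = 0
        self.threshold = threshold

    def write(self, address):
        # Write the address associated with an instance of metadata received
        # from the pipeline, mark the entry valid, and initialize the counter.
        self.address = address
        self.valid = True
        self.counter = 0

    def on_count_signal(self):
        # Increment the counter on each count signal from the pipeline; once
        # it equals the threshold, set the valid tag to invalid.
        if self.valid and self.counter < self.threshold:
            self.counter += 1
            if self.counter == self.threshold:
                self.valid = False
```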