Abstract:
A method and apparatus for maintaining content of registers of a processor which uses the registers for processing instructions. Entries are stored in a buffer for restoring register content in response to an interruption by an interruptible instruction. Entries include information for reducing the number of entries selected for the restoring. A set of the buffer entries is selected, in response to the interruption and the information, for restoring register content. The set includes only entries which are necessary for restoring the content in response to the interruption so that the content of the processor registers may be restored in a single processor cycle, even if multiple entries are stored for a first one of the registers and multiple entries are stored for a second one of the registers.
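Below is a minimal C sketch of the idea, assuming a small history buffer whose entries carry a per-register "relevant" bit as the entry-reducing information; the names (hb_entry_t, record_write, restore_on_interrupt) and the buffer sizes are illustrative, not taken from the patent.

    /* Minimal sketch (assumed structures, not the patented hardware): a history
     * buffer whose entries carry a "relevant" bit so that, on an interruption,
     * only one entry per register -- the oldest saved value -- is selected for
     * restore, even when several entries exist for the same register. */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_REGS  8
    #define BUF_SIZE 16

    typedef struct {
        bool     valid;
        uint8_t  reg;        /* architected register this entry saves            */
        uint32_t old_value;  /* register content before the writing instruction  */
        bool     relevant;   /* set only on the oldest pending entry per register */
    } hb_entry_t;

    static uint32_t   regs[NUM_REGS];
    static hb_entry_t buf[BUF_SIZE];

    /* Allocate an entry; only the first pending entry for a register is marked
     * relevant, which plays the role of the entry-reducing information. */
    static void record_write(int slot, uint8_t reg, uint32_t new_value) {
        bool first = true;
        for (int i = 0; i < BUF_SIZE; i++)
            if (buf[i].valid && buf[i].reg == reg) first = false;
        buf[slot] = (hb_entry_t){ .valid = true, .reg = reg,
                                  .old_value = regs[reg], .relevant = first };
        regs[reg] = new_value;
    }

    /* On an interruption, every selected (relevant) entry targets a distinct
     * register, so all restores could proceed in parallel. */
    static void restore_on_interrupt(void) {
        for (int i = 0; i < BUF_SIZE; i++) {
            if (buf[i].valid && buf[i].relevant)
                regs[buf[i].reg] = buf[i].old_value;
            buf[i].valid = false;
        }
    }

    int main(void) {
        regs[3] = 100;
        record_write(0, 3, 200);   /* relevant: saves 100 */
        record_write(1, 3, 300);   /* not relevant: 200 is not the pre-image */
        restore_on_interrupt();
        printf("r3 = %u\n", regs[3]);   /* prints 100 */
        return 0;
    }

Because each selected entry targets a distinct register, a hardware implementation could drive all of the selected restores concurrently, which corresponds to the single-cycle property the abstract describes.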
Abstract:
A method and system are disclosed for reducing run-time delay during conditional branch instruction execution in a pipelined processor system. A series of queued sequential instructions and conditional branch instructions are processed, wherein each conditional branch instruction specifies an associated conditional branch to be taken in response to a selected outcome of processing one or more sequential instructions. Upon detection of a conditional branch instruction within the queue, a group of target instructions is fetched based upon a prediction that the associated conditional branch will be taken. Sequential instructions within the queue following the conditional branch instruction are then purged and the target instructions loaded into the queue only in response to a successful retrieval of the target instructions, such that the sequential instructions may be processed without delay if the prediction that the conditional branch is taken proves invalid prior to retrieval of the target instructions. Alternatively, the purged sequential instructions may be refetched after loading the target instructions, such that the sequential instructions may be executed with minimal delay if the prediction that the conditional branch is taken proves invalid after loading the target instructions. In yet another embodiment, the sequential instructions within the queue following the conditional branch instruction are purged only in response to a successful retrieval of the target instructions and an imminent execution of the conditional branch instruction.
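As a rough illustration, the following C sketch models the deferred-purge behavior with an assumed fixed-size instruction queue; fetch_targets, on_targets_ready, and the queue layout are invented for the example and are not the patented hardware.

    /* Minimal sketch (simplified software model): the sequential instructions
     * that follow a predicted-taken branch are purged from the queue only after
     * the target fetch succeeds, so a branch that resolves not-taken before the
     * targets arrive can continue down the unmodified queue without delay. */
    #include <stdbool.h>
    #include <stdio.h>

    #define QLEN 8

    typedef struct { unsigned pc; } insn_t;

    typedef struct {
        insn_t q[QLEN];
        int    count;
    } iqueue_t;

    /* Hypothetical fetch: returns true when the target group has been retrieved. */
    static bool fetch_targets(unsigned target_pc, insn_t *out, int *n) {
        for (int i = 0; i < 4; i++) out[i].pc = target_pc + 4 * i;
        *n = 4;
        return true;
    }

    /* Called only when the target fetch completes, for a branch at index bidx. */
    static void on_targets_ready(iqueue_t *iq, int bidx, insn_t *tgt, int n) {
        /* Purge the sequential instructions after the branch only now, then
         * load the target instructions in their place. */
        iq->count = bidx + 1;
        for (int i = 0; i < n && iq->count < QLEN; i++)
            iq->q[iq->count++] = tgt[i];
    }

    int main(void) {
        iqueue_t iq = { .count = 6 };
        for (int i = 0; i < 6; i++) iq.q[i].pc = 0x100 + 4 * i;   /* branch at index 2 */

        insn_t tgt[4]; int n;
        bool predicted_taken = true;
        if (predicted_taken && fetch_targets(0x400, tgt, &n))
            on_targets_ready(&iq, 2, tgt, n);   /* purge and load happen together */

        for (int i = 0; i < iq.count; i++) printf("%#x ", iq.q[i].pc);
        printf("\n");
        return 0;
    }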
Abstract:
Associativity of a multi-core processor cache memory to a logical partition is managed and controlled by registering with a multi-core processor a plurality of unique logical processing partition identifiers, each identifier being associated with a logical processing partition on one or more cores of the multi-core processor; responsive to a shared cache memory miss for an address, identifying a position in a cache directory for data associated with the address, the shared cache memory being multi-way set associative; associating a new cache line entry with the data and one of the registered unique logical processing partition identifiers; modifying the cache directory to reflect the association; and caching the data at the new cache line entry, wherein said shared cache memory is effectively shared on a line-by-line basis among said plurality of logical processing partitions of said multi-core processor.
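A minimal C sketch of per-line partition tagging follows, assuming a 64-set, 8-way directory and invented names (dir_entry_t, register_lpid, install_line); the victim choice is simplified to "first invalid way" and is not the patented replacement policy.

    /* Minimal sketch (assumed data layout): each directory entry of a multi-way
     * set-associative shared cache carries the registered logical-partition ID
     * of the partition that installed it, so the cache is shared line by line. */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define SETS      64
    #define WAYS       8
    #define MAX_LPIDS 16

    typedef struct {
        bool     valid;
        uint32_t tag;
        uint8_t  lpid;   /* registered logical processing partition identifier */
    } dir_entry_t;

    static dir_entry_t dir[SETS][WAYS];
    static uint8_t registered[MAX_LPIDS];
    static int num_registered;

    static void register_lpid(uint8_t lpid) {            /* registration step */
        if (num_registered < MAX_LPIDS) registered[num_registered++] = lpid;
    }

    /* On a shared-cache miss: pick a position, associate the new line with the
     * requesting partition's ID, and update the directory to reflect it. */
    static int install_line(uint32_t addr, uint8_t lpid) {
        uint32_t set = (addr >> 6) % SETS;
        uint32_t tag = addr >> 12;
        int way = WAYS - 1;                               /* default victim */
        for (int w = 0; w < WAYS; w++)
            if (!dir[set][w].valid) { way = w; break; }   /* prefer an invalid way */
        dir[set][way] = (dir_entry_t){ .valid = true, .tag = tag, .lpid = lpid };
        return way;
    }

    int main(void) {
        register_lpid(3);
        register_lpid(7);
        int w = install_line(0x12345, 3);
        printf("installed in way %d for partition %d\n",
               w, dir[(0x12345 >> 6) % SETS][w].lpid);
        return 0;
    }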
Abstract:
A method and system of modifying instructions forming a loop are provided. The method includes: determining static and dynamic characteristics for the instructions; selecting a modification factor for the instructions based on a number of separate equivalent sections forming a cache in a processor which is processing the instructions; and modifying the instructions to interleave the instructions in the loop according to the modification factor and the static and dynamic characteristics when the instructions satisfy a modification criterion based on the static and dynamic characteristics.
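The source-level transform below is an illustrative sketch only, assuming a modification factor of 4 to match four hypothetical equivalent cache sections; the described method selects the factor from measured static and dynamic characteristics, which this example does not model.

    /* Minimal sketch (illustrative transform, not the patented compiler pass):
     * the loop is interleaved by a modification factor chosen to match the
     * number of equivalent sections of the processor cache, assumed to be 4,
     * so consecutive accesses are spread across the sections. */
    #include <stdio.h>

    #define N      1024
    #define FACTOR 4        /* assumed number of equivalent cache sections */

    static float a[N], b[N], c[N];

    /* Original loop: a single stream of accesses. */
    static void loop_plain(void) {
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];
    }

    /* Interleaved loop: FACTOR independent streams per outer iteration, which
     * a sectioned cache can service concurrently. N is divisible by FACTOR. */
    static void loop_interleaved(void) {
        for (int i = 0; i < N; i += FACTOR) {
            c[i]     = a[i]     + b[i];
            c[i + 1] = a[i + 1] + b[i + 1];
            c[i + 2] = a[i + 2] + b[i + 2];
            c[i + 3] = a[i + 3] + b[i + 3];
        }
    }

    int main(void) {
        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }
        loop_plain();
        loop_interleaved();
        printf("c[10] = %.0f\n", c[10]);   /* 30 */
        return 0;
    }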
Abstract:
A processor (100) includes a preload queue (160) for storing a plurality of preload entries. Each preload entry is associated with a preload instruction and includes the address and byte count defined by the respective preload and an identifier associated with the respective preload. A comparison unit (170) associated with the preload queue (160) identifies each conflicting preload entry, that is, each preload entry associated with a preload instruction that conflicts with an older store instruction. The oldest preload instruction associated with one of the conflicting preload entries represents a target preload. The processor (100) may flush this target preload along with all instructions executed after the target preload in order to correct for the conflict between the target preload and store instruction.
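A small C model of the conflict check follows, with assumed field names (addr, bytes, id) and an age ordering in which smaller identifiers are older; find_flush_target is invented for the example and only approximates the comparison unit (170).

    /* Minimal sketch (software model): each preload queue entry keeps the
     * address, byte count, and an age identifier; when a store executes,
     * overlapping younger entries are found and the oldest conflicting preload
     * becomes the flush point. */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define PQ_SIZE 8

    typedef struct {
        bool     valid;
        uint32_t addr;
        uint32_t bytes;
        uint32_t id;     /* age identifier: smaller means older */
    } pq_entry_t;

    static pq_entry_t pq[PQ_SIZE];

    static bool overlaps(uint32_t a1, uint32_t n1, uint32_t a2, uint32_t n2) {
        return a1 < a2 + n2 && a2 < a1 + n1;
    }

    /* Returns the id of the oldest preload that conflicts with the store, or
     * UINT32_MAX when no flush is needed. */
    static uint32_t find_flush_target(uint32_t st_addr, uint32_t st_bytes,
                                      uint32_t st_id) {
        uint32_t target = UINT32_MAX;
        for (int i = 0; i < PQ_SIZE; i++) {
            pq_entry_t *e = &pq[i];
            if (e->valid && e->id > st_id &&            /* preload younger than the store */
                overlaps(e->addr, e->bytes, st_addr, st_bytes) &&
                e->id < target)
                target = e->id;                         /* keep the oldest conflict */
        }
        return target;
    }

    int main(void) {
        pq[0] = (pq_entry_t){ true, 0x1000, 8, 12 };
        pq[1] = (pq_entry_t){ true, 0x1004, 4, 10 };
        uint32_t t = find_flush_target(0x1004, 4, 5);   /* store older than both */
        printf("flush from preload id %u\n", t);        /* 10: oldest conflicting preload */
        return 0;
    }

Flushing from the target preload onward, as the abstract describes, discards that preload and everything executed after it, which is sufficient to correct the store-to-preload conflict.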
Abstract:
In maintaining the state of a processor, a dispatched instruction is given an identification tag and an associated entry in an architectural register table. The identification tag of the dispatched instruction is written to the entry in the architectural register table, if the identification tag of the dispatched instruction is more recent than a prior instruction identification tag stored in the entry.
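A minimal sketch of the update rule in C, assuming tags that grow monotonically with dispatch order so that a simple numeric comparison decides which tag is more recent; the table layout is invented for the example.

    /* Minimal sketch (assumed tag comparison): the architectural register table
     * keeps, per register, the tag of the most recent instruction that targets
     * it; an incoming tag overwrites the entry only when it is newer. */
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_REGS 32

    static uint32_t art_tag[NUM_REGS];   /* 0 means "no producer outstanding" */

    /* Tags are assumed to increase monotonically with dispatch order. */
    static void dispatch_update(uint8_t dest_reg, uint32_t tag) {
        if (tag > art_tag[dest_reg])     /* only a more recent tag replaces the entry */
            art_tag[dest_reg] = tag;
    }

    int main(void) {
        dispatch_update(5, 17);
        dispatch_update(5, 14);          /* older tag: entry is left alone */
        printf("r5 producer tag = %u\n", art_tag[5]);   /* 17 */
        return 0;
    }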
Abstract:
One aspect of the invention relates to a superscalar processor having a memory which is addressable with respect to the combination of a page address and a page offset address, and provides a method for detecting an overlap condition between a present instruction and a previously executed instruction, the previously executed instruction being executed prior to execution of the present instruction. In one embodiment, the method comprises the steps of dividing the present instruction into a plurality of aligned memory accesses; determining the page offset for at least one of the aligned accesses; and comparing the page offset and byte count for the present instruction to a page offset and byte count for the previously executed instruction.
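The following C sketch illustrates the comparison using an assumed 4 KB page and 8-byte aligned pieces; detect_overlap and its interval arithmetic are invented for the example and are only one way to realize the described steps.

    /* Minimal sketch (assumed 4 KB pages and aligned splitting): a present
     * access is divided into aligned pieces, and the page offset plus byte
     * count of each piece is compared with those of a previously executed
     * access to detect an overlap without the full translated address. */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define PAGE_MASK 0xFFFu   /* 4 KB page: low 12 bits are the page offset */
    #define ALIGN     8u       /* assumed aligned-access width */

    static bool ranges_overlap(uint32_t off1, uint32_t n1, uint32_t off2, uint32_t n2) {
        return off1 < off2 + n2 && off2 < off1 + n1;
    }

    /* Split the present access into ALIGN-sized aligned pieces and compare each
     * piece's page offset/byte count against the previous access. */
    static bool detect_overlap(uint32_t cur_addr, uint32_t cur_bytes,
                               uint32_t prev_off, uint32_t prev_bytes) {
        uint32_t start = cur_addr & ~(ALIGN - 1);
        for (uint32_t p = start; p < cur_addr + cur_bytes; p += ALIGN) {
            uint32_t off = p & PAGE_MASK;
            uint32_t lo  = (p < cur_addr) ? cur_addr - p : 0;
            uint32_t len = ALIGN - lo;
            if (p + ALIGN > cur_addr + cur_bytes)
                len = cur_addr + cur_bytes - (p + lo);
            if (ranges_overlap(off + lo, len, prev_off, prev_bytes))
                return true;
        }
        return false;
    }

    int main(void) {
        /* Present access: 16 bytes at offset 0x104; previous access: 4 bytes at 0x10C. */
        printf("overlap = %d\n", detect_overlap(0x2104, 16, 0x10C, 4));   /* 1 */
        return 0;
    }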
Abstract:
An apparatus and method for processing multiple cache misses to a single cache line in an information handling system which includes a miss queue for storing requests for data not located in a level one cache and a comparator for comparing requests for data stored in the miss queue to determine if there are multiple requests for data located in the same cache line of a level two cache. Each new request for data from the same cache line of the level two cache as an older original request for data in the miss queue is marked as a load hit reload. The requests marked as load hit reloads are then grouped together with the matching original request and forwarded together to the level two cache, wherein the original request requests the data from the level two cache. The load hit reload requests do not access the level two cache but instead bypass it by extracting data from the cache line output from the level two cache for the matching original request. The present invention reduces the number of accesses to the level two cache and allows data requests to be satisfied in parallel rather than serially when multiple successive level one cache misses occur.
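A simplified C model of the miss queue behavior follows, assuming a 128-byte level two line and invented names (enqueue_miss, extract_word); it shows the marking of load hit reloads and the bypass that pulls data out of the single returned line.

    /* Minimal sketch (software model): requests in the miss queue that target
     * the same level-two cache line as an older queued request are marked
     * load-hit-reload, and later satisfy themselves from the single line
     * returned for the original request instead of accessing L2 again. */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define MQ_SIZE    8
    #define LINE_SHIFT 7                 /* assumed 128-byte L2 line */

    typedef struct {
        bool     valid;
        uint32_t addr;
        bool     load_hit_reload;        /* true: reuse the original request's line */
    } miss_entry_t;

    static miss_entry_t mq[MQ_SIZE];

    /* Enqueue an L1 miss; mark it LHR if an older entry covers the same L2 line. */
    static void enqueue_miss(int slot, uint32_t addr) {
        bool lhr = false;
        for (int i = 0; i < MQ_SIZE; i++)
            if (mq[i].valid && (mq[i].addr >> LINE_SHIFT) == (addr >> LINE_SHIFT))
                lhr = true;
        mq[slot] = (miss_entry_t){ .valid = true, .addr = addr, .load_hit_reload = lhr };
    }

    /* When the L2 line for the original request comes back, every grouped LHR
     * request extracts its bytes from that line without its own L2 access. */
    static uint32_t extract_word(const uint8_t *line, uint32_t addr) {
        uint32_t off = addr & ((1u << LINE_SHIFT) - 1);
        return line[off] | line[off + 1] << 8 | line[off + 2] << 16
               | (uint32_t)line[off + 3] << 24;
    }

    int main(void) {
        enqueue_miss(0, 0x4010);         /* original request */
        enqueue_miss(1, 0x4040);         /* same 128-byte line: marked LHR */
        printf("entry1 LHR = %d\n", mq[1].load_hit_reload);    /* 1 */

        uint8_t line[1u << LINE_SHIFT] = {0};
        line[0x40] = 0x2A;               /* pretend L2 returned this line */
        printf("LHR data = %u\n", extract_word(line, 0x4040)); /* 42 */
        return 0;
    }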
Abstract:
The invention relates to a method for issuing instructions in a processor. In one version of the invention, the method includes the steps of assigning an identification tag to an instruction, and dispatching the instruction, the identification tag and source information to an execution queue.
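A minimal C sketch of the dispatch step, with an invented execution-queue layout (eq_entry_t) and a monotonically increasing tag counter standing in for the identification tag assignment.

    /* Minimal sketch (assumed structures): at dispatch, an instruction receives
     * an identification tag and is written, together with its source
     * information, into an execution queue entry. */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define EQ_SIZE 8

    typedef struct {
        bool     valid;
        uint32_t tag;          /* identification tag assigned at dispatch       */
        uint8_t  src_reg[2];   /* source information: operand register numbers  */
    } eq_entry_t;

    static eq_entry_t eq[EQ_SIZE];
    static uint32_t next_tag = 1;

    static uint32_t dispatch(int slot, uint8_t src0, uint8_t src1) {
        uint32_t tag = next_tag++;
        eq[slot] = (eq_entry_t){ .valid = true, .tag = tag,
                                 .src_reg = { src0, src1 } };
        return tag;
    }

    int main(void) {
        uint32_t t = dispatch(0, 3, 9);
        printf("queued tag %u, sources r%d r%d\n", t, eq[0].src_reg[0], eq[0].src_reg[1]);
        return 0;
    }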
Abstract:
The invention relates to a method for issuing instructions in a processor. In one version of the invention, the method includes the steps of dispatching the instruction and source information to a queue, determining validity of the source information, and issuing the instruction for execution in response to the source information validity.
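A minimal C sketch of validity-gated issue, with invented structures (queued_insn_t, writeback, try_issue); the src_valid bits here stand in for whatever per-operand readiness tracking the queue entry would actually hold.

    /* Minimal sketch (assumed structures): a queued instruction carries validity
     * bits for its source information, and it is issued for execution only once
     * every source is marked valid. */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        uint8_t src_reg[2];
        bool    src_valid[2];   /* source information validity */
        bool    issued;
    } queued_insn_t;

    /* Called when a producing instruction writes back register reg. */
    static void writeback(queued_insn_t *q, uint8_t reg) {
        for (int s = 0; s < 2; s++)
            if (q->src_reg[s] == reg) q->src_valid[s] = true;
    }

    /* Issue only in response to the source information being valid. */
    static bool try_issue(queued_insn_t *q) {
        if (!q->issued && q->src_valid[0] && q->src_valid[1])
            q->issued = true;
        return q->issued;
    }

    int main(void) {
        queued_insn_t q = { .src_reg = { 4, 7 }, .src_valid = { true, false } };
        printf("issue now? %d\n", try_issue(&q));   /* 0: r7 not yet valid */
        writeback(&q, 7);
        printf("issue now? %d\n", try_issue(&q));   /* 1 */
        return 0;
    }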