Abstract:
A data prediction structure for a superscalar microprocessor is provided. The data prediction structure predicts a data address that a group of instructions is going to access while that group of instructions is being fetched from the instruction cache. The data bytes associated with the predicted address are placed in a relatively small, fast buffer. The decode stages of instruction processing pipelines in the microprocessor access the buffer with addresses generated from the instructions, and if the associated data bytes are found in the buffer they are conveyed to the reservation station associated with the requesting decode stage. Therefore, the implicit memory read associated with an instruction is performed before the instruction arrives in a functional unit. The functional unit is occupied by the instruction for fewer clock cycles, since it need not perform the implicit memory operation. Instead, the functional unit performs the explicit operation indicated by the instruction.
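As an illustration of the mechanism, the following Python sketch models the flow under stated assumptions: the buffer organization, its capacity, the eviction policy, and all names (DataPredictionBuffer, fill, probe) are invented here, not taken from the patent.

```python
# Hypothetical model of the small, fast buffer described above (names invented).
class DataPredictionBuffer:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.entries = {}                        # predicted address -> data bytes

    def fill(self, predicted_addr, memory):
        """At fetch time: place the data for the predicted address in the buffer."""
        if len(self.entries) >= self.capacity:
            self.entries.pop(next(iter(self.entries)))   # evict the oldest entry
        self.entries[predicted_addr] = memory[predicted_addr]

    def probe(self, addr):
        """At decode time: return the data bytes on a hit, else None."""
        return self.entries.get(addr)

memory = {0x1000: b'\x2a'}
buf = DataPredictionBuffer()
buf.fill(0x1000, memory)            # prediction made while the group is fetched
data = buf.probe(0x1000)            # decode-stage address hits the buffer
assert data == b'\x2a'              # data goes to the reservation station; the
                                    # functional unit skips the implicit memory read
```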
Abstract:
A technique for run-time tracking of changes to variables and memory locations during code execution, to increase the efficiency of executing the code and to facilitate debugging it. In one example embodiment, this is achieved by determining whether a received instruction is a trackable instruction during code execution. Trackable instructions can include one or more trackable variables. The trackable instruction is then decoded, and a track instruction cache and a track variable cache are then updated with the associated decoded trackable instruction and the one or more trackable variables, respectively.
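A rough software analogue may make the two caches concrete. This is a minimal sketch under assumptions: the abstract does not specify the cache organization or what qualifies as trackable, so TRACKABLE_OPCODES, the decoded form, and both cache layouts are hypothetical.

```python
# Illustrative model of the two caches named above (layouts assumed, not specified).
TRACKABLE_OPCODES = {"store", "inc", "dec"}      # assumed set of trackable operations

track_instruction_cache = []                     # holds decoded trackable instructions
track_variable_cache = {}                        # variable name -> last tracked value

def process(instruction):
    opcode, var, value = instruction             # a toy decoded representation
    if opcode not in TRACKABLE_OPCODES:          # step 1: is the instruction trackable?
        return
    track_instruction_cache.append({"op": opcode, "var": var})   # step 2: record decode
    track_variable_cache[var] = value            # step 3: track the variable's value

process(("store", "x", 7))
process(("nop", None, None))                     # not trackable, so not recorded
assert track_variable_cache["x"] == 7            # a debugger could query this cache
```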
Abstract:
One embodiment of the present invention provides a system that generates prefetches by speculatively executing code during stalls through a technique known as 'hardware scout threading.' The system starts by executing code within a processor. Upon encountering a stall, the system speculatively executes the code from the point of the stall, without committing results of the speculative execution to the architectural state of the processor. If the system encounters a memory reference during this speculative execution, the system determines if a target address for the memory reference can be resolved. If so, the system issues a prefetch for the memory reference to load a cache line for the memory reference into a cache within the processor. In a variation on this embodiment, the processor supports simultaneous multithreading (SMT), which enables multiple threads to execute concurrently through time-multiplexed interleaving in a single processor pipeline. In this variation, the non-speculative execution is carried out by a first thread and the speculative execution is carried out by a second thread, wherein the first thread and the second thread simultaneously execute on the processor.
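The run-ahead behavior can be approximated in a few lines. The sketch below is a simplification under assumptions (real hardware checkpoints state and tracks register readiness in the pipeline; the instruction encoding and the scout function are invented): it scans ahead from a stall, issues prefetches only for memory references whose addresses can be resolved, and discards all speculative state.

```python
# Minimal sketch of scout-style run-ahead (names and encoding invented).
def scout(instructions, start, regs, prefetch_queue, depth=16):
    """Speculatively scan ahead from a stall point; never commit results."""
    shadow = dict(regs)                          # speculative copy of register state
    for op in instructions[start:start + depth]:
        if op["kind"] == "load":
            base = shadow.get(op["base"])        # resolvable only if the base is known
            if base is not None:
                prefetch_queue.append(base + op["offset"])   # issue the prefetch
            shadow[op["dest"]] = None            # the loaded value itself is unknown
        elif op["kind"] == "alu":
            a, b = shadow.get(op["a"]), shadow.get(op["b"])
            shadow[op["dest"]] = a + b if a is not None and b is not None else None
    # 'shadow' is simply discarded: the architectural state is untouched.

prefetches = []
prog = [{"kind": "load", "base": "r1", "offset": 8, "dest": "r2"},
        {"kind": "load", "base": "r2", "offset": 0, "dest": "r3"}]
scout(prog, 0, {"r1": 0x2000}, prefetches)
assert prefetches == [0x2008]                    # r2 is unresolved, so no second prefetch
```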
Abstract:
A processor includes a history control unit (51) that stores the storage destination of a result obtained by executing a second instruction, where the second instruction is executed before a first instruction that precedes it in program order. When the address of first data to be processed by the first instruction is determined to fall within the address region of second data to be processed by the second instruction, the history control unit (51) overwrites the result obtained by executing the first instruction onto the portion of the second data corresponding to that address. The processor can thus perform a load operation ahead of a store operation while avoiding ambiguous memory references, and achieves high-speed operation.
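One plausible reading of this fixup can be modeled as follows. The sketch assumes the second instruction is a hoisted load and the first is the older store, and all structures (the history list, the byte-granularity patch) are invented for illustration.

```python
# Illustrative model of the overwrite described above (structures assumed).
history = []                                     # (dest_reg, base_addr, loaded bytes)

def speculative_load(dest, addr, size, memory):
    data = bytearray(memory[addr:addr + size])   # load executed early, before the store
    history.append((dest, addr, data))           # remember its destination and region
    return data

def older_store(addr, value, memory):
    memory[addr] = value                         # store from earlier in program order
    for dest, base, data in history:             # check overlap with hoisted loads
        if base <= addr < base + len(data):
            data[addr - base] = value            # overwrite the stale loaded byte

mem = bytearray(16)
loaded = speculative_load("r5", 4, 4, mem)       # the load runs first
older_store(6, 0xAB, mem)                        # the older store overlaps its region
assert loaded[2] == 0xAB                         # loaded data is fixed up in place
```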
Abstract:
A prefetcher to prefetch data for an instruction based on the distance between cache misses caused by the instruction. In an embodiment, the prefetcher includes a memory to store a prefetch table that contains one or more entries that include the distance between cache misses caused by an instruction. In a further embodiment, the addresses of the data elements prefetched are determined based on the distance between cache misses recorded in the prefetch table for the instruction.
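This maps naturally onto a small table keyed by the instruction's address. The sketch below is one straightforward reading of the abstract; the entry fields and update policy are assumptions, not the patent's exact layout.

```python
# Minimal distance-based prefetch table (field layout assumed).
prefetch_table = {}                              # pc -> {"last": addr, "distance": delta}

def on_cache_miss(pc, addr):
    """Update this instruction's entry; return a prefetch address if one is predicted."""
    entry = prefetch_table.get(pc)
    if entry is None:
        prefetch_table[pc] = {"last": addr, "distance": None}
        return None                              # first miss: nothing to predict yet
    entry["distance"] = addr - entry["last"]     # distance between consecutive misses
    entry["last"] = addr
    return addr + entry["distance"]              # predicted address of the next miss

assert on_cache_miss(0x40, 0x1000) is None       # first miss: just recorded
assert on_cache_miss(0x40, 0x1040) == 0x1080     # distance 0x40 -> prefetch 0x1080
```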
Abstract:
A microprocessor with reduced context switching overhead, and a corresponding method, is disclosed. The microprocessor comprises a working register file that comprises dirty bit registers and working registers, the working registers including one or more corresponding working registers for each of the dirty bit registers. The microprocessor also comprises a decoder unit that is configured to decode an instruction that has a dirty bit register field specifying a selected dirty bit register of the dirty bit registers, and to generate decode signals in response. Furthermore, the working register file is configured to cause the selected dirty bit register to store a new dirty bit in response to the decode signals. The new dirty bit indicates that each operand stored by the one or more corresponding working registers is inactive and no longer needs to be saved to memory if a context switch occurs.
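The saving at a context switch can be illustrated in software. This is a hypothetical sketch: the register-file size, the one-bit-per-register mapping, and the mark_inactive behavior are assumptions drawn loosely from the abstract.

```python
# Illustrative working register file with per-register dirty bits (sizes assumed).
class WorkingRegisterFile:
    def __init__(self, n=8):
        self.regs = [0] * n
        self.dirty = [False] * n                 # one dirty bit per working register

    def write(self, i, value):
        self.regs[i] = value
        self.dirty[i] = True                     # live operand: must survive a switch

    def mark_inactive(self, i):
        """Models the decoded instruction that stores a new dirty bit."""
        self.dirty[i] = False                    # operand no longer needs saving

    def context_switch_save(self):
        """Save only the registers whose operands are still live."""
        return {i: v for i, v in enumerate(self.regs) if self.dirty[i]}

wrf = WorkingRegisterFile()
wrf.write(0, 11)
wrf.write(1, 22)
wrf.mark_inactive(1)                             # r1's operand is declared dead
assert wrf.context_switch_save() == {0: 11}      # r1 is skipped: less memory traffic
```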
Abstract:
A method for prefetching structured data, and more particularly a mechanism for observing address references made by a processor and learning from those references the patterns of accesses made to structured data. Structured data means aggregates of related data such as arrays, records, and data containing links and pointers. When subsequent accesses are made to data structured in the same way, the mechanism generates in advance the sequence of addresses that will be needed for the new accesses. This sequence is utilized by the memory to obtain the data somewhat earlier than the instructions would normally request it, thereby eliminating idle time due to memory latency while awaiting the arrival of the data.
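For a concrete feel of "learning a pattern and generating addresses in advance," here is a deliberately reduced sketch: it learns only constant strides from one observed walk, whereas the described mechanism also covers records and pointer-linked data; both function names are invented.

```python
# Reduced sketch: learn strides from one traversal, replay them for the next.
def learn_pattern(trace):
    """Return the per-step strides observed while walking one structure."""
    return [b - a for a, b in zip(trace, trace[1:])]

def generate_addresses(start, strides, count):
    """Replay the learned strides from a new base, ahead of the demand accesses."""
    addrs, addr = [], start
    for i in range(count):
        addr += strides[i % len(strides)]
        addrs.append(addr)
    return addrs

trace = [0x100, 0x108, 0x110, 0x118]             # observed references to one array
strides = learn_pattern(trace)                   # -> [8, 8, 8]
assert generate_addresses(0x200, strides, 3) == [0x208, 0x210, 0x218]
```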
Abstract:
This invention implements a cache access system that shortens the address generation machine cycle of a digital computer while avoiding the synonym problem of logical addressing. The invention is based on the concept of predicting what real address will be used to access the cache memory, independent of the generation of the logical address. The prediction involves recalling the last real address used to access the cache memory for a particular instruction, and then using that real address to access the cache memory. Incorrect guesses are corrected, and kept to a minimum, by monitoring the history of instructions and the real addresses they reference. This allows the cache memory to retrieve the information faster than if it waited for the virtual address to be generated and then translated into a real address. The address generation machine cycle is faster because the delays associated with the adder of the virtual address generation means and the translation buffer are bypassed.
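The prediction loop can be summarized in a short model. This sketch is an interpretation under assumptions: the per-instruction history table, the probe/verify split, and all names are invented, and the real design overlaps these steps in hardware rather than running them sequentially.

```python
# Illustrative real-address prediction (table layout and names assumed).
last_real = {}                                   # instruction address -> last real address

def predicted_access(pc, cache):
    """Probe the cache with the last real address this instruction used."""
    guess = last_real.get(pc)
    return (guess, cache.get(guess)) if guess is not None else (None, None)

def verify(pc, actual_real, guess):
    """After translation completes: confirm the guess and update the history."""
    last_real[pc] = actual_real
    return guess == actual_real                  # False means redo the access normally

cache = {0x9F00: b'data'}
last_real[0x44] = 0x9F00                         # history from a previous execution
guess, data = predicted_access(0x44, cache)      # cache read starts a cycle early
assert verify(0x44, 0x9F00, guess) and data == b'data'
```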