Abstract:
A method for prefetching instructions into cache memory using a prefetch instruction. The prefetch instruction contains a target field, a count field, a cache level field, a flush field, and a trace field. The target field specifies the address at which prefetching begins. The count field specifies the number of instructions to prefetch. The flush field indicates whether earlier prefetches should be discarded and whether in-progress prefetches should be aborted. The cache level field specifies the level of the cache into which the instructions should be prefetched. The trace field establishes a trace vector that can be used to determine whether the prefetching operation specified by the instruction should be aborted. The prefetch instruction may be used in conjunction with a branch predict instruction to prefetch a branch of instructions that is not predicted.
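The field layout described above can be sketched as a small instruction model. The field names follow the abstract, but the widths, the 4-byte instruction word, and the helper names are illustrative assumptions, not the patent's encoding:

```python
from dataclasses import dataclass

@dataclass
class PrefetchInstruction:
    # Field names follow the abstract; types and encodings here
    # are hypothetical, not taken from any real ISA.
    target: int   # address at which prefetching begins
    count: int    # number of instructions to prefetch
    level: int    # cache level to prefetch into
    flush: bool   # discard earlier / abort in-progress prefetches
    trace: int    # trace vector used to decide whether to abort

def expand_prefetch(insn):
    """Return the addresses a prefetch engine would request,
    assuming fixed 4-byte instruction words (an assumption)."""
    word = 4
    return [insn.target + i * word for i in range(insn.count)]
```

For example, a prefetch with `target=0x1000` and `count=4` would request the four consecutive instruction addresses starting at `0x1000`.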
Abstract:
Disclosed is a computer architecture with single-syllable IP-relative branch instructions and long IP-relative branch instructions (IP = instruction pointer). The architecture fetches instructions in multi-syllable bundle form. Single-syllable IP-relative branch instructions occupy a single syllable in an instruction bundle, and long IP-relative branch instructions occupy two syllables. The additional syllable of the long branch carries additional IP-relative offset bits, which, when merged with the offset bits carried in the core branch syllable, provide a much greater offset than a single-syllable branch alone can carry. Thus, the long branch provides greater reach within an address space. Use of the long branch to patch IA-64 architecture instruction bundles is also disclosed. Such a patch provides the reach of an indirect branch with the overhead of a single-syllable IP-relative branch.
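The offset-merging step can be illustrated with a toy model. The bit widths below (21 core bits, 39 extra bits) are assumptions chosen for illustration and are not the actual IA-64 branch encoding:

```python
def sign_extend(value, bits):
    """Interpret the low `bits` bits of `value` as a signed integer."""
    mask = 1 << (bits - 1)
    return (value & (mask - 1)) - (value & mask)

CORE_BITS = 21   # offset bits in the core branch syllable (assumed width)
EXTRA_BITS = 39  # additional offset bits in the second syllable (assumed)

def merge_offset(core_imm, extra_imm):
    """Merge the extra syllable's high-order offset bits with the
    core syllable's low-order bits into one sign-extended offset."""
    raw = (extra_imm << CORE_BITS) | (core_imm & ((1 << CORE_BITS) - 1))
    return sign_extend(raw, CORE_BITS + EXTRA_BITS)

def branch_target(ip, core_imm, extra_imm=0, long_form=False):
    """IP-relative target: the long form reaches far beyond the
    range of the single-syllable form."""
    if long_form:
        offset = merge_offset(core_imm, extra_imm)
    else:
        offset = sign_extend(core_imm, CORE_BITS)
    return ip + offset
```

With these assumed widths, a single-syllable branch reaches roughly ±1 MiB of instruction words from the IP, while the long form's merged offset covers essentially the whole address space.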
Abstract:
An apparatus for, and a method of, ensuring that a non-speculative instruction is not fetched into an execution pipeline, where the non-speculative instruction, if fetched, may cause a cache miss that triggers potentially catastrophic speculative processing, e.g., a speculative transfer of data from an I/O device. When a non-speculative instruction is scheduled for a fetch into the pipeline, a translation lookaside buffer (TLB) miss is forced to occur, e.g., by preventing the lowest-level TLB from storing any page table entry (PTE) associated with any of the non-speculative instructions. The TLB miss prevents the occurrence of any cache miss and causes a micro-fault to be injected into the pipeline. The micro-fault carries an address corresponding to the subject non-speculative instruction and, when it reaches the end of the pipeline, redirects the pipeline's instruction flow to that address, so that the non-speculative instruction is fetched and executed in a non-speculative manner.
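The mechanism can be modeled in miniature: pages holding non-speculative instructions are never cached in the lowest-level TLB, so a speculative fetch to them always misses the TLB, and a micro-fault carrying the address is produced instead of a cache access. The class names, page set, and page size below are all illustrative, not from the patent's claims:

```python
NON_SPECULATIVE_PAGES = {0x7000}  # e.g., pages mapping an I/O device (assumed)

class TLB:
    """Toy lowest-level TLB that refuses to cache PTEs for
    non-speculative pages, guaranteeing a future miss on them."""
    def __init__(self):
        self.entries = {}

    def fill(self, page, pte):
        if page in NON_SPECULATIVE_PAGES:
            return  # never store the PTE
        self.entries[page] = pte

    def lookup(self, page):
        return self.entries.get(page)

def fetch(tlb, address, page_size=0x1000):
    """Return ('micro_fault', addr) instead of touching the cache
    when the TLB misses on a non-speculative page; the micro-fault
    later redirects fetch so the access is replayed non-speculatively."""
    page = address & ~(page_size - 1)
    if tlb.lookup(page) is None and page in NON_SPECULATIVE_PAGES:
        return ('micro_fault', address)
    return ('fetch', address)
```

The key property of the sketch is that no cache access is ever issued for the non-speculative page, so no speculative side effect can occur.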
Abstract:
The present invention is a method for implementing two architectures on a single chip. The method uses a fetch engine to retrieve instructions. If the instructions are macroinstructions, an emulation engine decodes them into microinstructions and then groups those microinstructions using a bundler. The bundles are issued in parallel and dispatched to the execution engine, and they contain pre-decode bits so that the execution engine treats their contents as microinstructions. Before being transferred to the execution engine, the instructions may be held in a buffer. The method also selects between bundled microinstructions from the emulation engine and native microinstructions coming directly from the fetch engine, using a multiplexor or other means. Both native microinstructions and bundled microinstructions may be held in the buffer. The method also sends additional information to the execution engine.
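The decode-bundle-select path can be sketched as three small functions. The decode table, bundle width of 3, and pre-decode flag are stand-in assumptions for illustration, not the patent's actual encodings:

```python
def decode_macro(macro):
    """Decode one macroinstruction into microinstructions.
    The one-entry mapping is a stand-in; real decode tables
    are far larger."""
    table = {'ADD_MEM': ['load', 'add', 'store']}
    return table.get(macro, [macro])

def bundle(micro_ops, width=3):
    """Group microinstructions into fixed-width issue bundles
    (a width of 3 is an assumption)."""
    return [micro_ops[i:i + width] for i in range(0, len(micro_ops), width)]

def select_issue(native_bundle, emulated_bundle, emulation_mode):
    """Model the multiplexor feeding the execution engine: in
    emulation mode, take bundles built by the emulation engine's
    bundler; otherwise, take native bundles straight from fetch.
    Either way, the bundle is marked pre-decoded so the execution
    engine treats its contents uniformly as microinstructions."""
    ops = emulated_bundle if emulation_mode else native_bundle
    return {'ops': ops, 'predecoded': True}
```

Because both paths deliver pre-decoded bundles of microinstructions, the execution engine needs no knowledge of which architecture the instructions originally came from.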
Abstract:
In a cache which writes new data over less recently used data, methods and apparatus which dispense with the convention of marking new cache data as most recently used. Instead, non-referenced data is marked as less recently used when it is written into a cache, and referenced data is marked as more recently used when it is written into a cache. Referenced data may correspond to fetch data, and non-referenced data may correspond to prefetch data. Upon fetch of a data value from the cache, its use status may be updated to more recently used. The methods and apparatus have the effect of preserving (n−1)/n of a cache's entries for the storage of fetch data, while limiting the storage of prefetch data to 1/n of a cache's entries. Pollution resulting from unneeded prefetch data is therefore limited to 1/n of the cache. In practice, however, pollution from unneeded prefetch data will be significantly less, as many prefetch data values will ultimately be fetched before being overwritten with new data, and upon their fetch, their use status can be upgraded to most recently used, ensuring their continued retention in the cache.
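The 1/n pollution bound can be demonstrated with a toy n-way set. This is a minimal sketch under assumed structures (a deque ordered LRU-to-MRU standing in for hardware use bits), not the patent's implementation:

```python
from collections import deque

class CacheSet:
    """One n-way cache set, ordered from LRU (left) to MRU (right)."""
    def __init__(self, ways=4):
        self.ways = ways
        self.order = deque()  # tags, LRU first

    def insert(self, tag, referenced):
        if tag in self.order:
            self.order.remove(tag)
        elif len(self.order) == self.ways:
            self.order.popleft()        # evict the current LRU entry
        if referenced:
            self.order.append(tag)      # fetch data enters as MRU
        else:
            self.order.appendleft(tag)  # prefetch data enters as LRU

    def fetch(self, tag):
        """A hit promotes the entry to MRU, preserving a prefetched
        value that turned out to be needed."""
        if tag in self.order:
            self.order.remove(tag)
            self.order.append(tag)
            return True
        return False
```

Because every prefetch enters at the LRU position, a stream of unneeded prefetches only ever churns one of the n ways; the remaining n−1 ways keep their fetch data.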
Abstract:
A multi-level instruction cache memory system for a computer processor. A relatively large cache holds both instructions and data. The large cache is the primary source of data for the processor. A smaller cache dedicated to instructions is also provided. The smaller cache is the primary source of instructions for the processor. Instructions are copied from the larger cache to the smaller cache during times when the processor is not accessing data in the larger cache. A prefetch buffer transfers instructions from the larger cache to the smaller cache. If a cache miss occurs for the smaller cache and the instruction is in the prefetch buffer, the system provides the instruction with no delay relative to a fetch from the smaller instruction cache. If a cache miss occurs for the smaller cache and the instruction is being fetched from, or is available in, the larger cache, the system provides the instruction with minimal delay relative to a fetch from the smaller instruction cache.
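The fetch path through the hierarchy can be sketched as follows. The dictionaries stand in for the three structures, and the delay values (0 and 1) are illustrative orderings, not measured latencies:

```python
class InstructionPath:
    """Toy model of the hierarchy: a small instruction cache backed
    by a prefetch buffer that streams instructions out of a large
    unified cache."""
    def __init__(self):
        self.icache = {}           # small cache, instructions only
        self.prefetch_buffer = {}  # in flight from the large cache
        self.unified = {}          # large cache, instructions + data

    def fetch(self, addr):
        """Return (instruction, added_delay) relative to an
        icache hit (delays are illustrative)."""
        if addr in self.icache:
            return self.icache[addr], 0
        if addr in self.prefetch_buffer:
            insn = self.prefetch_buffer.pop(addr)
            self.icache[addr] = insn
            return insn, 0          # buffer hit: still no added delay
        insn = self.unified[addr]   # minimal delay vs. a miss to memory
        self.icache[addr] = insn
        return insn, 1
```

The point of the sketch is the ordering: a prefetch-buffer hit costs nothing extra, and a unified-cache hit costs only a small fixed delay, so the small instruction cache rarely pays a full miss penalty.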
Abstract:
A method of improving the performance of a computer processor by recognizing that two consecutive register instructions can be executed simultaneously, and executing the two instructions simultaneously while generating a single data address and performing exception checking on that single data address. During the instruction fetch process, two consecutive instructions are tested to determine whether both are register load instructions or both are register save instructions. If both are load or save register instructions, the corresponding data addresses are tested to see whether both fall in the same double word. If both data addresses are in the same double word, the instructions are executed simultaneously: only one data address generation is required, and exception processing is performed on only one data address. In one example embodiment, a simplified test rapidly ensures that both data addresses are in the same double word, but also requires the base addresses to be at an even word boundary. In a second embodiment, where the processor includes an alignment test as a separate test, an even simpler test rapidly ensures that both data addresses are in the same double word without checking alignment.
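The double-word tests can be expressed concretely. A 4-byte word and 8-byte double word are assumed here, and the simplified test below is one plausible reading of the first embodiment (the pair fits whenever the first address sits on an even-word, i.e. 8-byte, boundary), not the patent's exact circuit:

```python
def same_double_word(addr1, addr2):
    """Full test: both byte addresses fall in the same 8-byte
    (double-word) region."""
    return (addr1 >> 3) == (addr2 >> 3)

def simplified_pair_test(base, offset):
    """Simplified test (assumed reading of the first embodiment):
    for a pair of consecutive 4-byte word accesses at base+offset
    and base+offset+4, both land in one double word exactly when
    the first address is aligned to an even word boundary."""
    first = base + offset
    return first & 0x7 == 0
```

The simplified test is conservative but cheap: it inspects only the low three address bits instead of comparing two full addresses, which is why it can run early in the fetch stage.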