Abstract:
In response to a property of a conditional branch instruction associated with a loop, such as a property indicating that the branch is a loop-ending branch, a count of the number of iterations of the loop is maintained, and a multi-bit value indicative of the loop iteration count is stored in a Branch History Register (BHR). In one embodiment, the multi-bit value may comprise the actual loop count, in which case the number of bits is variable. In another embodiment, the number of bits is fixed (e.g., two) and loop iteration counts are mapped to one of a fixed number of multi-bit values (e.g., four) by comparison to thresholds. Separate iteration counts may be maintained for nested loops, and a multi-bit value stored in the BHR may indicate a loop iteration count of only an inner loop, only the outer loop, or both.
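As a rough illustration of the fixed-width variant described above, the following C sketch maps an iteration count onto one of four two-bit codes by threshold comparison and shifts the code into a software model of the BHR; the threshold values and function names are assumptions, not taken from the abstract.

#include <stdint.h>
#include <stdio.h>

/* Map a loop iteration count onto one of four two-bit codes by
 * comparing against illustrative thresholds (the values are assumptions). */
static uint8_t encode_loop_count(uint32_t iterations)
{
    if (iterations <= 4)   return 0x0;  /* short loop     */
    if (iterations <= 16)  return 0x1;  /* medium loop    */
    if (iterations <= 64)  return 0x2;  /* long loop      */
    return 0x3;                         /* very long loop */
}

/* Shift the two-bit code into a software model of the BHR. */
static uint32_t bhr_push(uint32_t bhr, uint8_t code)
{
    return (bhr << 2) | (code & 0x3);
}

int main(void)
{
    uint32_t bhr = 0;
    uint32_t loop_counts[] = { 3, 20, 100 };   /* counts seen at loop-ending branches */
    for (int i = 0; i < 3; i++)
        bhr = bhr_push(bhr, encode_loop_count(loop_counts[i]));
    printf("BHR = 0x%08x\n", bhr);
    return 0;
}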
Abstract:
A processor includes a cache memory having at least one entry managed according to a copy-back algorithm. A global modified indicator (GMI) indicates whether any copy-back entry in the cache contains modified data. On a cache miss, if the GMI indicates that no copy-back entry in the cache contains modified data, data fetched from memory are written to the selected entry without first reading the entry. In a banked cache, two or more bank-GMIs may be associated with two or more banks. In an n-way set associative cache, n set-GMIs may be associated with the n sets. Suppressing the read to determine if the copy-back cache entry contains modified data improves processor performance and reduces power consumption.
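A minimal C sketch of the fill path, assuming a simple direct-mapped cache model; the structure layout and names are illustrative. The point it shows is that when the GMI is clear, the victim entry is overwritten without being inspected for modified data.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define LINE_BYTES 64

struct cache_line {
    bool     valid;
    bool     dirty;               /* modified (copy-back) data       */
    uint32_t tag;
    uint8_t  data[LINE_BYTES];
};

struct cache {
    struct cache_line lines[256];
    bool global_modified;          /* GMI: any copy-back line dirty? */
};

/* On a miss, consult the GMI: if no line in the cache can be dirty,
 * skip the check of the victim line and overwrite it directly. */
void fill_on_miss(struct cache *c, unsigned index, uint32_t tag,
                  const uint8_t *mem_data)
{
    struct cache_line *victim = &c->lines[index];

    if (c->global_modified && victim->valid && victim->dirty) {
        /* write_back(victim); -- only needed when the GMI is set */
    }
    /* With the GMI clear, this store happens without first reading
     * the victim entry to determine whether it is modified. */
    memcpy(victim->data, mem_data, LINE_BYTES);
    victim->tag = tag;
    victim->valid = true;
    victim->dirty = false;
}

int main(void)
{
    static struct cache c;                       /* zero-initialized: GMI clear */
    uint8_t line_from_memory[LINE_BYTES] = {0};
    fill_on_miss(&c, 0, 0x1234, line_from_memory);
    return 0;
}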
Abstract:
Systems and arrangements for promoting a line from shared to exclusive state in a cache are contemplated. Embodiments include a cache controller adapted to determine whether a memory line for which the processor is to issue an address-only kill request resides, in a shared state, in a fill buffer for the cache. If so, the cache controller may mark the fill buffer as not having completed bus transactions and issue the address-only kill request for that fill buffer. The address-only kill request may be transmitted to the other processors on the bus, and those processors may respond by invalidating their cache entries for the memory line. Upon confirmation from the other processors, a bus arbiter may confirm the kill request, promoting the memory line already in the fill buffer to the exclusive state. Once promoted, the fill buffer may be marked as having completed the bus transactions and may be written into the cache.
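The following C sketch models the fill-buffer promotion sequence under assumed state names and a stubbed bus interface; none of the identifiers come from the abstract.

#include <stdbool.h>
#include <stdint.h>

enum coherence_state { INVALID, SHARED, EXCLUSIVE };

struct fill_buffer {
    uint64_t address;
    enum coherence_state state;
    bool bus_done;                 /* bus transactions completed? */
    bool valid;
};

/* Stub standing in for the bus and arbiter: assume the other processors
 * invalidate their copies and the kill request is confirmed. */
static bool bus_address_only_kill(uint64_t address) { (void)address; return true; }

/* If the target line already sits in a fill buffer in the shared state,
 * reuse that buffer: reopen it, issue the address-only kill, and promote
 * the line to exclusive once the kill is confirmed. */
bool promote_via_kill(struct fill_buffer *fb, uint64_t address)
{
    if (!fb->valid || fb->address != address || fb->state != SHARED)
        return false;              /* fall back to a normal read-exclusive */

    fb->bus_done = false;                      /* reopen the buffer          */
    if (bus_address_only_kill(address)) {      /* other caches invalidate    */
        fb->state = EXCLUSIVE;                 /* promote in place           */
        fb->bus_done = true;                   /* ready to write into cache  */
        return true;
    }
    return false;
}

int main(void)
{
    struct fill_buffer fb = { 0x80001000, SHARED, true, true };
    return promote_via_kill(&fb, 0x80001000) ? 0 : 1;
}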
Abstract:
A processor capable of fetching and executing variable-length instructions, having instructions of at least two lengths, is described. The processor operates in multiple modes. One of the modes restricts the instructions that can be fetched and executed to the longer-length instructions. An instruction cache is used for storing variable-length instructions and their associated predecode bit fields in an instruction cache line, and for storing the instruction address and processor operating mode state information at the time of the fetch in a tag line. The processor operating mode state information indicates the program-specified mode of operation of the processor. The processor fetches instructions from the instruction cache for execution. As a result of an instruction fetch operation, the instruction cache may selectively enable the writing of predecode bit fields into the instruction cache and may selectively enable the reading of predecode bit fields stored in the instruction cache, based on the processor state at the time of the fetch.
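A hedged C sketch of the mode check, assuming the operating mode recorded in the tag line is simply compared against the current mode to gate use of the stored predecode bits; field widths and names are invented for illustration.

#include <stdbool.h>
#include <stdint.h>

enum isa_mode { MODE_MIXED, MODE_LONG_ONLY };   /* illustrative mode names */

struct icache_line {
    uint32_t tag;
    enum isa_mode fetch_mode;     /* mode recorded when the line was filled   */
    uint8_t  predecode[16];       /* predecode bit fields per instruction slot */
};

/* On a fetch hit, only use the stored predecode bits if the mode recorded in
 * the tag matches the processor's current operating mode. */
bool predecode_readable(const struct icache_line *line, enum isa_mode current)
{
    return line->fetch_mode == current;
}

/* On a line fill, record the current mode alongside the predecode bits so
 * later fetches can make the same decision. */
void fill_line(struct icache_line *line, uint32_t tag,
               const uint8_t *predecode, enum isa_mode current)
{
    line->tag = tag;
    line->fetch_mode = current;
    for (int i = 0; i < 16; i++)
        line->predecode[i] = predecode[i];
}

int main(void)
{
    struct icache_line line;
    uint8_t pd[16] = {0};
    fill_line(&line, 0x40, pd, MODE_MIXED);
    return predecode_readable(&line, MODE_LONG_ONLY) ? 1 : 0;
}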
Abstract:
Techniques for controllably allocating a portion of a plurality of memory banks as cache memory are disclosed. To this end, a configuration tracker and a bank selector are employed. The configuration tracker records whether each memory bank is configured to operate as part of the cache. The bank selector has a plurality of bank distributing functions. Upon receiving an incoming address, the bank selector determines the configuration of the memory banks currently operating as the cache and applies an appropriate bank distributing function based on that configuration. The applied bank distributing function uses bits of the incoming address to access one of the banks configured as part of the cache.
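A possible software model of the bank selector, with assumed address-bit positions and an eight-bank configuration mask; it only illustrates the idea of choosing a different distributing function per configuration.

#include <stdint.h>
#include <stdio.h>

/* Configuration tracker: a bit mask of which banks currently operate as cache. */
typedef uint8_t bank_config_t;

static int banks_in_cache(bank_config_t cfg)
{
    int n = 0;
    for (int b = 0; b < 8; b++)
        if (cfg & (1u << b)) n++;
    return n;
}

/* Bank selector: pick a distributing function based on how many banks are
 * configured as cache, then use incoming-address bits to choose among them.
 * The bit positions used here are assumptions for illustration. */
static int select_bank(bank_config_t cfg, uint32_t address)
{
    int enabled = banks_in_cache(cfg);
    int pick;

    if (enabled == 0)
        return -1;                               /* no banks configured as cache */

    switch (enabled) {
    case 1:  pick = 0;                                       break;
    case 2:  pick = (address >> 6) & 0x1;                    break;
    case 4:  pick = (address >> 6) & 0x3;                    break;
    default: pick = (int)((address >> 6) % (uint32_t)enabled); break;
    }

    /* Map the pick onto the pick-th enabled bank in the configuration. */
    for (int b = 0; b < 8; b++) {
        if (cfg & (1u << b)) {
            if (pick == 0) return b;
            pick--;
        }
    }
    return -1;
}

int main(void)
{
    bank_config_t cfg = 0x0F;                    /* banks 0-3 operate as cache */
    printf("bank = %d\n", select_bank(cfg, 0x12345680));
    return 0;
}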
Abstract:
Method and apparatus for reformatting instructions in a pipelined processor. An instruction register holds a plurality of instructions received from a cache memory external to the processor. A predecoder predecodes each of the instructions and determines from an instruction operation field where the instruction fields should be placed. A multiplexer reformats architecturally aligned instructions into hardware-implementation-aligned instructions prior to storing them in the L1 cache, so that the instructions are ready for dispatch to the pipeline execution units.
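A small C sketch of the reformatting step, using invented field positions: the operation field selects how the remaining fields are extracted, and they are repacked into one fixed internal layout.

#include <stdint.h>
#include <stdio.h>

/* Internal, hardware-friendly layout: every field in a fixed position,
 * regardless of how the architectural encoding packed it.
 * Field widths and positions are invented for illustration. */
struct internal_insn {
    uint8_t opcode;
    uint8_t dest;
    uint8_t src1;
    uint8_t src2_or_imm;
};

/* Look at the architectural operation field to decide where the remaining
 * fields live, then multiplex them into the internal layout. */
struct internal_insn reformat(uint32_t arch_insn)
{
    struct internal_insn out;
    out.opcode = (arch_insn >> 26) & 0x3F;       /* assumed opcode position */

    if (out.opcode < 0x20) {                     /* assumed register form   */
        out.dest        = (arch_insn >> 21) & 0x1F;
        out.src1        = (arch_insn >> 16) & 0x1F;
        out.src2_or_imm = (arch_insn >> 11) & 0x1F;
    } else {                                     /* assumed immediate form  */
        out.dest        = (arch_insn >> 21) & 0x1F;
        out.src1        = (arch_insn >> 16) & 0x1F;
        out.src2_or_imm = arch_insn & 0xFF;      /* low immediate bits      */
    }
    return out;
}

int main(void)
{
    struct internal_insn i = reformat(0x8C430010u);
    printf("op=%u dest=%u src1=%u src2/imm=%u\n",
           i.opcode, i.dest, i.src1, i.src2_or_imm);
    return 0;
}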
Abstract:
A method and system for avoiding various hazards for instructions propagating through a microprocessor pipeline. When a plurality of instructions that read and write the same value exist within the pipeline, a vector is established to distinguish the older instructions from the newer ones. Further, before instructions are dispatched for execution, pointers are generated which identify the particular instruction that has the operand or parameter value needed. Accordingly, by monitoring both the vector and the pointers, data dependency hazards can be avoided.
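A simplified C model of the vector and the pointers, assuming pipeline slot order reflects instruction age; the structures and the producer-search policy are illustrative assumptions.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PIPE_SLOTS 8

struct inflight_insn {
    bool    valid;
    uint8_t dest_reg;              /* register written by this instruction */
};

/* Age vector: bit i set means slot i holds an instruction older than the
 * one being dispatched, so it is a legal source of the operand. */
static uint32_t build_age_vector(const struct inflight_insn *pipe,
                                 int dispatch_slot)
{
    uint32_t older = 0;
    for (int i = 0; i < PIPE_SLOTS; i++)
        if (pipe[i].valid && i < dispatch_slot)   /* assumes slot order = age */
            older |= 1u << i;
    return older;
}

/* Pointer: identify the youngest older instruction that writes the register
 * this instruction needs; -1 means read it from the register file instead. */
static int find_producer(const struct inflight_insn *pipe, uint32_t older,
                         uint8_t src_reg)
{
    for (int i = PIPE_SLOTS - 1; i >= 0; i--)
        if ((older & (1u << i)) && pipe[i].dest_reg == src_reg)
            return i;
    return -1;
}

int main(void)
{
    struct inflight_insn pipe[PIPE_SLOTS] = {
        { true, 3 }, { true, 5 }, { true, 3 }, { false, 0 },
    };
    uint32_t older = build_age_vector(pipe, 3);   /* dispatching into slot 3 */
    printf("producer of r3: slot %d\n", find_producer(pipe, older, 3));
    return 0;
}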
Abstract:
A processor having a multistage pipeline includes a TLB and a TLB controller. In response to a TLB miss signal, the TLB controller initiates a TLB reload, requesting address translation information from either a memory or a higher-level TLB, and placing that information into the TLB. The processor flushes the instruction having the missing virtual address, and refetches the instruction, resulting in re-insertion of the instruction at an initial stage of the pipeline above the TLB access point. The initiation of the TLB reload, and the flush/refetch of the instruction, are performed substantially in parallel, and without immediately stalling the pipeline. The refetched instruction is held at a point in the pipeline above the TLB access point until the TLB reload is complete, so that the refetched instruction generates a “hit” in the TLB upon its next access.
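A C sketch of the miss sequence as two events, reload start and reload completion, with the refetched instruction parked above the TLB access point in between; all names are hypothetical.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct tlb_reload {
    bool     in_progress;
    uint64_t virtual_page;
};

struct fetch_slot {
    bool     held_above_tlb;       /* parked at a stage before TLB access */
    uint64_t pc;
};

void on_tlb_miss(struct tlb_reload *reload, struct fetch_slot *slot,
                 uint64_t faulting_pc, uint64_t vpage)
{
    /* Start the reload (from memory or a higher-level TLB)...             */
    reload->in_progress = true;
    reload->virtual_page = vpage;

    /* ...and, substantially in parallel, flush and refetch the instruction,
     * holding it above the TLB access point rather than stalling the pipe. */
    slot->pc = faulting_pc;
    slot->held_above_tlb = true;
}

void on_reload_complete(struct tlb_reload *reload, struct fetch_slot *slot)
{
    reload->in_progress = false;
    slot->held_above_tlb = false;  /* release: the next TLB access will hit */
}

int main(void)
{
    struct tlb_reload reload = {0};
    struct fetch_slot slot = {0};
    on_tlb_miss(&reload, &slot, 0x400812f0, 0x400812f0 >> 12);
    on_reload_complete(&reload, &slot);
    printf("held=%d\n", slot.held_above_tlb);
    return 0;
}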
Abstract:
An apparatus includes a memory configured to store data, a lower level TLB, an upper level TLB, and a TLB controller. The lower level TLB and the upper level TLB are configured to store a plurality of entries, each entry containing address translation information that allows a virtual address to be translated into a corresponding physical address. If a desired virtual address generates a TLB miss from both the lower level TLB and the upper level TLB, the TLB controller retrieves address translation information for the desired virtual address from a page table in the memory. Using a single TLB write instruction, the TLB controller updates both the lower level TLB and the upper level TLB by writing the address translation information, retrieved from the page table, into the lower level TLB as well as into the upper level TLB.
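A minimal C sketch in which one reload operation writes the same translation into both TLB levels; the indexing scheme and the page-table lookup stub are assumptions.

#include <stdbool.h>
#include <stdint.h>

struct tlb_entry {
    bool     valid;
    uint64_t vpn;                   /* virtual page number   */
    uint64_t pfn;                   /* physical frame number */
};

struct tlb { struct tlb_entry entries[64]; };

/* Stub standing in for the page-table walk; the mapping is made up. */
static uint64_t page_table_lookup(uint64_t vpn) { return vpn + 0x1000; }

/* One "TLB write": the translation fetched from the page table is placed
 * into the lower-level and the upper-level TLB in the same operation. */
void tlb_reload_both(struct tlb *lower, struct tlb *upper, uint64_t vpn)
{
    struct tlb_entry e = { true, vpn, page_table_lookup(vpn) };
    lower->entries[vpn % 64] = e;    /* illustrative direct-mapped indexing */
    upper->entries[vpn % 64] = e;
}

int main(void)
{
    static struct tlb lower, upper;
    tlb_reload_both(&lower, &upper, 0x1234);
    return 0;
}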
Abstract:
In one or more embodiments, a processor includes a link return stack circuit used for storing branch return addresses, wherein a link return stack controller is configured to determine that one or more entries in the link return stack are invalid as being dependent on a mispredicted branch, and to reset the link return stack to a valid remaining entry, if any. In this manner, branch mispredictions cause dependent entries in the link return stack to be flushed from the link return stack, or otherwise invalidated, while preserving the remaining valid entries, if any, in the link return stack. In at least one embodiment, a branch information queue used for tracking predicted branches is configured to store a marker indicating whether a predicted branch has an associated entry in the link return stack, and it may store an index value identifying the specific, corresponding entry in the link return stack.
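A C sketch of the recovery mechanism, assuming a simple stack-top counter and a BIQ entry that records the marker and index described above; the recovery policy shown (truncate back to the recorded index) is an illustrative reading of the abstract.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LRS_DEPTH 8

struct link_return_stack {
    uint64_t addr[LRS_DEPTH];
    int      top;                   /* number of valid entries */
};

/* Branch information queue entry: records whether the predicted branch
 * pushed onto the link return stack, and where. */
struct biq_entry {
    bool pushed_to_lrs;             /* marker described in the abstract  */
    int  lrs_index;                 /* index of the associated LRS entry */
};

void lrs_push(struct link_return_stack *s, uint64_t return_addr,
              struct biq_entry *biq)
{
    if (s->top < LRS_DEPTH) {
        biq->pushed_to_lrs = true;
        biq->lrs_index = s->top;
        s->addr[s->top++] = return_addr;
    } else {
        biq->pushed_to_lrs = false;
    }
}

/* On a misprediction, invalidate only the entries that depend on the
 * mispredicted branch: reset the stack back to the last valid entry. */
void lrs_recover(struct link_return_stack *s, const struct biq_entry *biq)
{
    if (biq->pushed_to_lrs)
        s->top = biq->lrs_index;    /* this entry and anything newer are gone */
}

int main(void)
{
    struct link_return_stack s = { .top = 0 };
    struct biq_entry b0, b1;
    lrs_push(&s, 0x1000, &b0);      /* call before the mispredicted branch */
    lrs_push(&s, 0x2000, &b1);      /* call dependent on the misprediction */
    lrs_recover(&s, &b1);           /* flush dependents, keep 0x1000       */
    printf("valid entries after recovery: %d\n", s.top);
    return 0;
}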