Abstract:
A technique for operating a processor includes translating, using an associated translation lookaside buffer, a first virtual address into a first physical address through a first entry number in the translation lookaside buffer. The technique also includes translating, using the translation lookaside buffer, a second virtual address into a second physical address through a second entry number in the translation lookaside buffer. The technique further includes, in response to the first entry number being the same as the second entry number, determining that the first and second virtual addresses point to the same physical address in memory and reference the same data.
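
Below is a minimal C sketch of this comparison, assuming a small fully associative TLB; the sizes and the translate() helper are illustrative, not taken from the abstract. Two translations that hit the same entry share one page mapping, so equal entry numbers plus equal page offsets imply the same physical address without comparing full translated addresses.

    #include <stdint.h>
    #include <stdio.h>

    #define TLB_ENTRIES 16
    #define PAGE_SHIFT  12
    #define PAGE_MASK   ((1u << PAGE_SHIFT) - 1)

    /* One TLB entry maps a virtual page number to a physical page number. */
    typedef struct { uint64_t vpn, ppn; int valid; } tlb_entry;
    static tlb_entry tlb[TLB_ENTRIES];

    /* Translate va; on a hit, report the physical address and entry number used. */
    static int translate(uint64_t va, uint64_t *pa, int *entry) {
        uint64_t vpn = va >> PAGE_SHIFT;
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *pa = (tlb[i].ppn << PAGE_SHIFT) | (va & PAGE_MASK);
                *entry = i;
                return 1;
            }
        }
        return 0; /* miss: a real design would walk the page tables here */
    }

    int main(void) {
        tlb[3] = (tlb_entry){ .vpn = 0x40, .ppn = 0x9a, .valid = 1 };
        /* Two independently computed virtual addresses. */
        uint64_t va1 = 0x40000 + 0x10, va2 = 0x3f000 + 0x1010, pa1, pa2;
        int e1, e2;
        if (translate(va1, &pa1, &e1) && translate(va2, &pa2, &e2) &&
            e1 == e2 && (va1 & PAGE_MASK) == (va2 & PAGE_MASK))
            /* Same entry number and same page offset: same physical address,
               so both virtual addresses reference the same data. */
            printf("entry %d: both map to physical 0x%llx\n",
                   e1, (unsigned long long)pa1);
        return 0;
    }
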
Abstract:
A processor uses a dedicated buffer to reduce the amount of time needed to execute memory copy operations. For each load instruction associated with the memory copy operation, the processor copies the load data from memory to the dedicated buffer. For each store operation associated with the memory copy operation, the processor retrieves the store data from the dedicated buffer and transfers it to memory. The dedicated buffer is separate from a register file and caches of the processor, so that each load operation associated with a memory copy operation does not have to wait for data to be loaded from memory to the register file. Similarly, each store operation associated with a memory copy operation does not have to wait for data to be transferred from the register file to memory.
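
A rough C model of the idea, assuming a simple FIFO as the dedicated buffer; the names copy_load and copy_store and the buffer sizes are invented for illustration. Loads stage data into the buffer and stores drain it, so copy data never passes through the architectural register file.

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define BUF_SLOTS  8
    #define SLOT_BYTES 16

    /* Dedicated copy buffer: separate from the register file and caches. */
    static uint8_t copy_buf[BUF_SLOTS][SLOT_BYTES];
    static int head, tail;

    /* Load side: move one slot of data from memory into the dedicated buffer. */
    static void copy_load(const uint8_t *src) {
        memcpy(copy_buf[tail], src, SLOT_BYTES);
        tail = (tail + 1) % BUF_SLOTS;
    }

    /* Store side: drain one slot from the dedicated buffer into memory. */
    static void copy_store(uint8_t *dst) {
        memcpy(dst, copy_buf[head], SLOT_BYTES);
        head = (head + 1) % BUF_SLOTS;
    }

    int main(void) {
        uint8_t src[64], dst[64];
        for (int i = 0; i < 64; i++) src[i] = (uint8_t)i;
        /* Each load/store pair of the memory copy goes through the buffer,
           never through the register file. */
        for (int off = 0; off < 64; off += SLOT_BYTES) copy_load(src + off);
        for (int off = 0; off < 64; off += SLOT_BYTES) copy_store(dst + off);
        printf("copy %s\n", memcmp(src, dst, 64) == 0 ? "ok" : "failed");
        return 0;
    }
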
Abstract:
A data processing system comprises a processor unit that includes an instruction decode/issue unit with a re-order buffer. Each re-order buffer entry includes an execution queue tag that indicates the execution queue location of the instruction to which the entry is assigned, a result valid indicator that indicates the corresponding instruction has executed and produced a valid status bit, and a forward indicator that indicates the status bit can be forwarded to the execution queue of the pointed-to instruction that is waiting to receive it.
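
A minimal C sketch of such an entry and the forwarding step, assuming four execution queues, each with one pending status-bit slot; the field and function names are hypothetical, not from the abstract.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* One re-order buffer entry, as described: which execution queue holds
       the instruction, whether its status bit is valid, and whether that
       status bit may be forwarded to a waiting instruction's queue. */
    typedef struct {
        uint8_t eq_tag;        /* execution queue location of the instruction */
        bool    result_valid;  /* instruction executed with a valid status bit */
        bool    forward;       /* status bit may be forwarded to a waiting queue */
        uint8_t status_bit;    /* e.g., a condition flag produced by execution */
    } rob_entry;

    /* Illustrative execution queues, each with a pending status-bit slot. */
    static uint8_t eq_status[4];
    static bool    eq_status_ready[4];

    /* Called when an instruction completes: record the status bit and, if
       forwarding is enabled, push it to the waiting execution queue. */
    static void complete(rob_entry *e, uint8_t status, uint8_t waiter_eq) {
        e->status_bit   = status;
        e->result_valid = true;
        if (e->forward) {
            eq_status[waiter_eq]       = status;
            eq_status_ready[waiter_eq] = true;
        }
    }

    int main(void) {
        rob_entry e = { .eq_tag = 2, .forward = true };
        complete(&e, 1, 3); /* completion forwards the status bit to queue 3 */
        printf("queue 3 ready=%d status=%u\n", eq_status_ready[3], eq_status[3]);
        return 0;
    }
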
Abstract:
A data processor includes a branch target buffer (BTB) having a plurality of BTB entries grouped in ways. The BTB entries in one of the ways include a short tag address and the BTB entries in another one of the ways include a full tag address.
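
A small C sketch of a two-way lookup under this scheme, assuming an 8-bit short tag; set count, tag widths, and helper names are illustrative. A short-tag match can alias, so it behaves as a prediction to be verified downstream, while the full-tag way matches exactly.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define SETS       64
    #define SHORT_BITS 8

    typedef struct { uint32_t tag; uint32_t target; bool valid; } btb_entry;

    /* Way 0 stores only a short (truncated) tag; way 1 stores the full tag. */
    static btb_entry way_short[SETS], way_full[SETS];

    static bool btb_lookup(uint32_t pc, uint32_t *target) {
        uint32_t set = (pc >> 2) % SETS;
        uint32_t tag = (pc >> 2) / SETS;
        /* Full-tag way: an exact match. */
        if (way_full[set].valid && way_full[set].tag == tag) {
            *target = way_full[set].target;
            return true;
        }
        /* Short-tag way: matches on the truncated tag, so it may alias;
           the entry is cheaper to store but must be verified later. */
        if (way_short[set].valid &&
            way_short[set].tag == (tag & ((1u << SHORT_BITS) - 1))) {
            *target = way_short[set].target;
            return true;
        }
        return false;
    }

    int main(void) {
        uint32_t pc = 0x1234, t;
        uint32_t set = (pc >> 2) % SETS, tag = (pc >> 2) / SETS;
        way_short[set] = (btb_entry){ tag & ((1u << SHORT_BITS) - 1), 0x2000, true };
        if (btb_lookup(pc, &t)) printf("predicted target 0x%x\n", t);
        return 0;
    }
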
Abstract:
A processor comprises decode logic that determines an instruction type for each instruction fetched, a first level cache, a second level cache coupled to the first level cache, and control logic operatively coupled to the first and second level caches. The control logic preferably causes cache linefills to be performed to the first level cache upon cache misses for a first type of instruction, but precludes linefills from being performed to the first level cache for a second type of instruction.
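
The allocation decision reduces to a policy switch on the decoded instruction type, as in this C sketch; the type names and stub functions are placeholders, since the abstract does not say which instruction types get which treatment.

    #include <stdio.h>

    typedef enum { TYPE_FIRST, TYPE_SECOND } insn_type;

    /* Stand-ins for the cache operations; real logic would fill actual lines. */
    static void l1_linefill(unsigned addr)   { printf("L1 fill  0x%x\n", addr); }
    static void l2_fetch_only(unsigned addr) { printf("L2 fetch 0x%x (L1 bypassed)\n", addr); }

    /* On an L1 miss, the control logic allocates into the first level cache
       only for the first instruction type; the second type is satisfied from
       the second level cache without polluting the first level. */
    static void on_l1_miss(unsigned addr, insn_type t) {
        if (t == TYPE_FIRST) l1_linefill(addr);
        else                 l2_fetch_only(addr);
    }

    int main(void) {
        on_l1_miss(0x1000, TYPE_FIRST);
        on_l1_miss(0x2000, TYPE_SECOND);
        return 0;
    }
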
Abstract:
An instruction alignment unit for aligning instructions in a digital processor having a pipelined architecture includes an instruction queue, a current instruction buffer and a next instruction buffer in a pipeline stage n, an aligned instruction buffer in a pipeline stage n+1, instruction fetch logic for loading instructions into the current instruction buffer from an instruction cache or from the next instruction buffer and for loading instructions into the next instruction buffer from the instruction cache or from the instruction queue, and alignment control logic responsive to instruction length information contained in the instructions for controlling transfer of instructions from the current instruction buffer and the next instruction buffer to the aligned instruction buffer. The alignment control logic includes predecoders for predecoding the instructions to provide instruction length information and pointer generation logic responsive to the instruction length information for generating a current instruction pointer for controlling transfer of instructions to the aligned instruction buffer.
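
A simplified C model of the stage-n buffers and pointer generation, assuming a hypothetical encoding in which the low two bits of an instruction's first byte give its length (1 to 4 bytes); buffer sizes and names are invented. The straddle case shows why a next instruction buffer is needed alongside the current one.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BUF_BYTES 8

    /* Stage-n buffers: the current buffer is consumed by alignment, and the
       next buffer refills it (from the cache or the instruction queue). */
    static uint8_t cur[BUF_BYTES], nxt[BUF_BYTES];
    static int cur_ptr; /* current instruction pointer within cur */

    /* Hypothetical predecode: low two bits of the first byte encode length. */
    static int predecode_len(uint8_t first) { return (first & 3) + 1; }

    /* Produce one aligned instruction into the stage-n+1 buffer. */
    static int align_next(uint8_t aligned[4]) {
        int len = predecode_len(cur[cur_ptr]);
        if (cur_ptr + len <= BUF_BYTES) {
            memcpy(aligned, cur + cur_ptr, len);
            cur_ptr += len;
        } else {
            /* Instruction straddles the buffers: take the tail of cur and
               the head of nxt, then shift nxt into cur. */
            int head = BUF_BYTES - cur_ptr;
            memcpy(aligned, cur + cur_ptr, head);
            memcpy(aligned + head, nxt, len - head);
            memcpy(cur, nxt, BUF_BYTES);
            cur_ptr = len - head;
        }
        return len;
    }

    int main(void) {
        /* 3-byte instructions (low bits 10 -> length 3) packed into cur. */
        uint8_t stream[BUF_BYTES] = { 0x02, 0xAA, 0xBB, 0x02, 0xCC, 0xDD, 0x02, 0xEE };
        memcpy(cur, stream, BUF_BYTES);
        uint8_t out[4];
        int len = align_next(out);
        printf("aligned %d-byte instruction starting 0x%02x\n", len, out[0]);
        return 0;
    }
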
Abstract:
A memory system for operation with a processor, such as a digital signal processor, includes a high speed pipelined memory, a store buffer for holding store access requests from the processor, a load buffer for holding load access requests from the processor, and a memory control unit for processing access requests from the processor, from the store buffer and from the load buffer. The memory control unit may include prioritization logic for selecting access requests in accordance with a priority scheme and bank conflict logic for detecting and handling conflicts between access requests. The pipelined memory may be configured to output two load results per clock cycle at very high speed.
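
A C sketch of the arbitration and bank-conflict checks, assuming a four-bank, word-interleaved memory and one illustrative priority order (processor first, then load buffer, then store buffer); the abstract does not specify the actual scheme.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define BANKS 4

    typedef struct { uint32_t addr; bool is_load; bool valid; } req;

    /* Two requests conflict if they target the same bank in the same cycle
       (hypothetical word-interleaved banking). */
    static bool bank_conflict(req a, req b) {
        return a.valid && b.valid &&
               ((a.addr >> 2) % BANKS) == ((b.addr >> 2) % BANKS);
    }

    /* Prioritization logic: pick one request per cycle by fixed priority. */
    static req select_request(req cpu, req loadq, req storeq) {
        if (cpu.valid)   return cpu;
        if (loadq.valid) return loadq;
        return storeq;
    }

    int main(void) {
        req cpu   = { 0x100, true,  true };
        req loadq = { 0x104, true,  true };
        req storq = { 0x100, false, true };
        req win = select_request(cpu, loadq, storq);
        printf("granted %s to 0x%x\n", win.is_load ? "load" : "store", win.addr);
        if (bank_conflict(win, storq))
            printf("bank conflict with buffered store: store held this cycle\n");
        return 0;
    }
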
Abstract:
A reorder buffer is configured into multiple lines of storage, wherein a line of storage includes sufficient storage for instruction results regarding a predefined maximum number of concurrently dispatchable instructions. A line of storage is allocated whenever one or more instructions are dispatched. A microprocessor employing the reorder buffer is also configured with fixed, symmetrical issue positions. The symmetrical nature of the issue positions may increase the average number of instructions to be concurrently dispatched and executed by the microprocessor. The average number of unused locations within the line decreases as the average number of concurrently dispatched instructions increases. One particular implementation of the reorder buffer includes a future file. The future file comprises a storage location corresponding to each register within the microprocessor. The reorder buffer tag (or instruction result, if the instruction has executed) of the last instruction in program order to update the register is stored in the future file. The reorder buffer provides the value (either reorder buffer tag or instruction result) stored in the storage location corresponding to a register when the register is used as a source operand for another instruction. Another advantage of the future file for microprocessors which allow access and update to portions of registers is that narrow-to-wide dependencies are resolved upon completion of the instruction which updates the narrower register.
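
The future file behavior is the most code-like part of this abstract. A minimal C sketch, assuming one slot per architectural register that holds either the youngest writer's reorder buffer tag or its result; the function names and sizes are illustrative.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define REGS 8

    /* One future-file slot per architectural register: either the reorder
       buffer tag of the youngest writer, or the result once it executes. */
    typedef struct { bool has_value; uint32_t value; uint8_t rob_tag; } ff_slot;
    static ff_slot future_file[REGS];

    /* Dispatch: the dispatched instruction becomes the last writer of rd. */
    static void dispatch_write(int rd, uint8_t tag) {
        future_file[rd] = (ff_slot){ .has_value = false, .rob_tag = tag };
    }

    /* Execute: replace the tag with the result for the matching writer. */
    static void complete(uint8_t tag, uint32_t result) {
        for (int r = 0; r < REGS; r++)
            if (!future_file[r].has_value && future_file[r].rob_tag == tag)
                future_file[r] = (ff_slot){ .has_value = true, .value = result };
    }

    /* Source read: a consumer gets either the value or the tag to wait on. */
    static void read_source(int rs) {
        if (future_file[rs].has_value)
            printf("r%d = %u\n", rs, future_file[rs].value);
        else
            printf("r%d pending, wait on tag %u\n", rs, future_file[rs].rob_tag);
    }

    int main(void) {
        dispatch_write(3, 7); /* instruction with tag 7 will write r3 */
        read_source(3);       /* consumer sees the tag */
        complete(7, 42);
        read_source(3);       /* consumer now sees the result */
        return 0;
    }
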
Abstract:
A microprocessor employs a branch prediction unit including a branch prediction storage which stores the index portion of branch target addresses and an instruction cache which is virtually indexed and physically tagged. The branch target index (if predicted taken, or the sequential index if predicted not taken) is provided as the index to the instruction cache. The selected physical tag is provided to a reverse translation lookaside buffer (TLB) which translates the physical tag to a virtual page number. Concatenating the virtual page number to the virtual index from the instruction cache (and the offset portion, generated from the branch prediction) results in the branch target address being generated. In one embodiment, the process of reading an index from the branch prediction storage, accessing the instruction cache, selecting the physical tag, and reverse translating the physical tag to achieve a virtual page number may require more than a clock cycle to complete. Such an embodiment may employ a current page register which stores the most recently translated virtual page number and the corresponding real page number. The branch prediction unit predicts that each fetch address will continue to reside in the current page and uses the virtual page number from the current page to form the branch target address. The physical tag from the fetched cache line is compared to the corresponding real page number to verify that the fetch address is actually still within the current page. When a mismatch is detected between the corresponding real page number and the physical tag from the fetched cache line, the branch target address is corrected with the virtual page number provided by the reverse TLB and the current page register is updated.
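
A compact C sketch of the current-page fast path and its verification, assuming a tiny two-entry reverse TLB; table contents, widths, and names are illustrative, not from the abstract.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12

    /* Current page register: the most recent virtual/real page pair. */
    static uint32_t cur_vpage = 0x40, cur_rpage = 0x9a;

    /* Reverse TLB: maps a physical page (the cache's physical tag) back to
       a virtual page number. A two-entry table stands in for the real one. */
    static uint32_t rtlb_r[2] = { 0x9a, 0x77 };
    static uint32_t rtlb_v[2] = { 0x40, 0x13 };

    static uint32_t rtlb_lookup(uint32_t rpage) {
        for (int i = 0; i < 2; i++)
            if (rtlb_r[i] == rpage) return rtlb_v[i];
        return 0; /* miss: would fall back to a full translation */
    }

    /* Form the branch target from the predicted index/offset, assuming the
       fetch stays in the current page; verify with the fetched physical tag. */
    static uint32_t branch_target(uint32_t index_and_offset, uint32_t fetched_ptag) {
        uint32_t target = (cur_vpage << PAGE_SHIFT) | index_and_offset;
        if (fetched_ptag != cur_rpage) {
            /* Fetch left the current page: correct the target using the
               reverse TLB and update the current page register. */
            cur_rpage = fetched_ptag;
            cur_vpage = rtlb_lookup(fetched_ptag);
            target = (cur_vpage << PAGE_SHIFT) | index_and_offset;
        }
        return target;
    }

    int main(void) {
        printf("same page: 0x%x\n", branch_target(0x123, 0x9a));
        printf("new page:  0x%x\n", branch_target(0x456, 0x77));
        return 0;
    }
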
Abstract:
A branch prediction apparatus is provided which stores multiple branch selectors corresponding to instruction bytes within a cache line of instructions or portion thereof. The branch selectors identify a branch prediction to be selected if the corresponding instruction byte is the byte indicated by the offset of the fetch address used to fetch the cache line. Instead of comparing pointers to the branch instructions with the offset of the fetch address, the branch prediction is selected simply by decoding the offset of the fetch address and choosing the corresponding branch selector. The branch prediction apparatus may therefore operate at higher frequencies (i.e., shorter clock cycles) than if the pointers to the branch instructions and the fetch address were compared (a greater-than or less-than comparison). The branch selectors directly determine which branch prediction is appropriate according to the instructions being fetched, thereby decreasing the amount of logic employed to select the branch prediction.
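
A short C sketch of the selector lookup, assuming a 16-byte cache line, selector value 0 for "sequential", and nonzero values indexing stored predictions; the encoding is illustrative. The point is that selection is a direct table lookup by fetch offset rather than a magnitude comparison against branch pointers.

    #include <stdint.h>
    #include <stdio.h>

    #define LINE_BYTES 16

    /* Per-byte branch selectors for one cache line: 0 means "sequential",
       a nonzero value n selects stored branch prediction n-1. */
    static uint8_t  selectors[LINE_BYTES];
    static uint32_t predictions[2] = { 0x2000, 0x3000 };

    /* Selecting a prediction is a direct lookup by fetch offset; no
       greater-than or less-than comparison against branch pointers. */
    static void predict(uint32_t fetch_offset) {
        uint8_t sel = selectors[fetch_offset % LINE_BYTES];
        if (sel == 0) printf("offset %u: sequential fetch\n", fetch_offset);
        else          printf("offset %u: predicted target 0x%x\n",
                             fetch_offset, predictions[sel - 1]);
    }

    int main(void) {
        /* A taken branch ending at byte 5 covers bytes 0..5 with prediction 1;
           bytes 6..15 fall through to the next branch, prediction 2. */
        for (int b = 0; b <= 5; b++) selectors[b] = 1;
        for (int b = 6; b < 16; b++) selectors[b] = 2;
        predict(3);
        predict(9);
        return 0;
    }
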