Abstract:
A memory interface for use with a multiprocess memory system having a gating memory, the gating memory associating one or more memory access methods with each of a plurality of memory locations of the memory system, wherein the gating memory returns a particular one access method for a particular one memory location responsive to a memory access instruction relating to the particular one memory location, the interface including: a request storage for storing a plurality of concurrent memory access instructions for one or more of the particular memory locations, each memory access instruction issued from an associated independent thread context; an arbiter, coupled to the request storage, for selecting a particular one of the memory access instructions to apply to the gating memory; and a controller, coupled to the request storage and to the arbiter, for: storing the plurality of memory access instructions in the request storage; initiating application of the particular one memory access instruction selected by the arbiter to the gating memory; receiving the particular one access method associated with the particular one memory access instruction from the gating memory; and initiating a communication of the particular access method to the thread context associated with the particular one access instruction.
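
As a rough illustration of the control flow this abstract describes, the C sketch below models a request storage, a round-robin arbiter, and a controller that looks up an access method in a gating-memory table and reports it back to the issuing thread context. The structure names, the arbitration policy, and the access-method encoding are illustrative assumptions, not details taken from the patent.

    #include <stdio.h>
    #include <stdbool.h>

    #define NUM_LOCATIONS 16
    #define MAX_REQUESTS  8

    typedef struct {            /* one buffered memory access instruction */
        int      thread_ctx;    /* issuing thread context                 */
        unsigned location;      /* memory location being accessed         */
        bool     valid;
    } request_t;

    /* gating memory: one access method per location (hypothetical encoding) */
    static int gating_memory[NUM_LOCATIONS];

    static request_t request_storage[MAX_REQUESTS];

    /* simple round-robin arbiter over the request storage */
    static int arbitrate(int last_grant) {
        for (int i = 1; i <= MAX_REQUESTS; i++) {
            int slot = (last_grant + i) % MAX_REQUESTS;
            if (request_storage[slot].valid)
                return slot;
        }
        return -1;              /* nothing pending */
    }

    /* controller: apply the selected instruction to the gating memory and
     * return the associated access method to the issuing thread context. */
    static void service_one(int *last_grant) {
        int slot = arbitrate(*last_grant);
        if (slot < 0)
            return;
        request_t *req = &request_storage[slot];
        int method = gating_memory[req->location];
        printf("TC%d: location %u -> access method %d\n",
               req->thread_ctx, req->location, method);
        req->valid = false;
        *last_grant = slot;
    }

    int main(void) {
        for (unsigned i = 0; i < NUM_LOCATIONS; i++)
            gating_memory[i] = (int)(i % 3);      /* arbitrary methods */
        request_storage[0] = (request_t){ .thread_ctx = 0, .location = 5, .valid = true };
        request_storage[1] = (request_t){ .thread_ctx = 1, .location = 5, .valid = true };
        int last = MAX_REQUESTS - 1;
        service_one(&last);     /* two concurrent requests for the same location */
        service_one(&last);     /* are serviced one at a time                    */
        return 0;
    }
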
Abstract:
A multithreading processor for concurrently executing multiple threads is provided. The processor includes an execution pipeline and a thread scheduler that dispatches instructions of the threads to the execution pipeline. The execution pipeline is configured to generate a thread context (TC) flush indicator associated with a thread context when one or more instructions of the thread context would stall in the execution pipeline. One or more instructions in the pipeline belonging to the thread context associated with the TC flush indicator can be flushed or nullified.
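
A minimal software model of the flush mechanism described above: when any instruction of a thread context would stall, that context's TC flush indicator is raised and the context's instructions in the pipeline are nullified. The data layout and function names are hypothetical.

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_TC     4
    #define PIPE_DEPTH 6

    typedef struct {
        int  tc;            /* thread context that owns this pipeline slot */
        bool valid;
        bool would_stall;
    } pipe_slot_t;

    /* Raise the TC flush indicator for any context with a stalling
     * instruction, then nullify that context's instructions in the pipe. */
    static void evaluate_flush(pipe_slot_t pipe[PIPE_DEPTH], bool tc_flush[NUM_TC]) {
        for (int i = 0; i < PIPE_DEPTH; i++)
            if (pipe[i].valid && pipe[i].would_stall)
                tc_flush[pipe[i].tc] = true;

        for (int i = 0; i < PIPE_DEPTH; i++)
            if (pipe[i].valid && tc_flush[pipe[i].tc]) {
                pipe[i].valid = false;                  /* nullified */
                printf("flushed slot %d (TC%d)\n", i, pipe[i].tc);
            }
    }

    int main(void) {
        bool tc_flush[NUM_TC] = { false };
        pipe_slot_t pipe[PIPE_DEPTH] = {
            { .tc = 0, .valid = true, .would_stall = false },
            { .tc = 1, .valid = true, .would_stall = true  },  /* TC1 stalls */
            { .tc = 1, .valid = true, .would_stall = false },
            { .tc = 2, .valid = true, .would_stall = false },
        };
        evaluate_flush(pipe, tc_flush);   /* only TC1's slots are nullified */
        return 0;
    }
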
Abstract:
A bifurcated instruction scheduler for dispatching instructions of multiple threads concurrently executing in a multithreading processor is provided. The scheduler includes a first portion within a reusable core that is not customizable by a customer, a second portion outside the core that is customizable, and an interface coupling the second portion to the core. The second portion implements a thread scheduling policy that may be customized to the customer's particular application. The first portion may be scheduling policy-agnostic and issues instructions of the threads each clock cycle to execution units based on the scheduling policy communicated by the second portion. The second portion communicates the scheduling policy via a priority for each of the threads. When the core commits an instruction for execution, the core communicates to the second portion which thread the committed instruction belongs to, enabling the second portion to update the priorities in response.
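
The sketch below illustrates this division of labor under assumed names: a policy side (the customizable second portion) that maintains per-thread priorities and updates them on commit feedback, and a policy-agnostic core side that simply issues the dispatchable thread with the highest supplied priority. The rotating-priority policy is only one possible customization, not the patented policy.

    #include <stdio.h>

    #define NUM_THREADS 4

    /* Customer side (outside the core): a hypothetical policy that rotates
     * priorities whenever the core reports a committed instruction.       */
    static int priority[NUM_THREADS] = { 3, 2, 1, 0 };

    static void policy_on_commit(int thread) {
        /* demote the thread that just committed, promote everyone below it */
        int old = priority[thread];
        for (int t = 0; t < NUM_THREADS; t++)
            if (priority[t] < old)
                priority[t]++;
        priority[thread] = 0;
    }

    /* Core side (policy-agnostic): each cycle, issue the dispatchable
     * thread with the highest externally supplied priority.               */
    static int core_issue(const int dispatchable[NUM_THREADS]) {
        int best = -1;
        for (int t = 0; t < NUM_THREADS; t++)
            if (dispatchable[t] && (best < 0 || priority[t] > priority[best]))
                best = t;
        return best;
    }

    int main(void) {
        int dispatchable[NUM_THREADS] = { 1, 1, 0, 1 };
        for (int cycle = 0; cycle < 5; cycle++) {
            int t = core_issue(dispatchable);
            printf("cycle %d: issue thread %d\n", cycle, t);
            if (t >= 0)
                policy_on_commit(t);   /* commit feedback to the policy side */
        }
        return 0;
    }
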
Abstract:
An apparatus for reducing instruction re-fetching in a multithreading processor configured to concurrently execute a plurality of threads is disclosed. The apparatus includes a buffer for each thread that stores fetched instructions of the thread and has an indicator indicating which of the fetched instructions in the buffer have already been dispatched for execution. An input for each thread indicates that one or more of the already-dispatched instructions in the buffer have been flushed from execution. Control logic for each thread updates the indicator, in response to the input, to indicate that the flushed instructions are no longer already-dispatched. This enables the processor to re-dispatch the flushed instructions from the buffer and thereby avoid re-fetching them. In one embodiment, there are fewer buffers than threads, and the buffers are dynamically allocatable by the threads. In another embodiment, a single integrated buffer is shared by all the threads.
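
A small C model of the per-thread buffer: a dispatched counter serves as the indicator of which fetched instructions have already been dispatched, and the flush input rolls that counter back so the same instructions can be re-dispatched from the buffer rather than re-fetched. Names and sizes are illustrative.

    #include <stdint.h>
    #include <stdio.h>

    #define BUF_SIZE 8

    typedef struct {
        uint32_t instr[BUF_SIZE];  /* fetched instructions, oldest first     */
        int      fetched;          /* number of valid instructions in buffer */
        int      dispatched;       /* how many of them have been dispatched  */
    } thread_buf_t;

    /* Dispatch the next not-yet-dispatched instruction, if any. */
    static int dispatch(thread_buf_t *b, uint32_t *out) {
        if (b->dispatched >= b->fetched)
            return 0;
        *out = b->instr[b->dispatched++];
        return 1;
    }

    /* Flush input: n already-dispatched instructions were flushed from the
     * execution pipeline; mark them not-dispatched so they can be
     * re-dispatched from the buffer instead of being re-fetched.           */
    static void on_flush(thread_buf_t *b, int n) {
        b->dispatched -= n;
        if (b->dispatched < 0)
            b->dispatched = 0;
    }

    int main(void) {
        thread_buf_t b = { .instr = { 0x10, 0x14, 0x18 }, .fetched = 3 };
        uint32_t ins;
        while (dispatch(&b, &ins))
            printf("dispatch 0x%x\n", ins);
        on_flush(&b, 2);                        /* last two were flushed */
        while (dispatch(&b, &ins))
            printf("re-dispatch 0x%x\n", ins);  /* no re-fetch needed    */
        return 0;
    }
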
Abstract:
A leaky-bucket style thread scheduler for scheduling concurrent execution of multiple threads in a microprocessor is provided. The execution pipeline notifies the scheduler when it has completed instructions. The scheduler maintains a virtual water level for each thread and decreases it each time the execution pipeline executes an instruction of the thread. The scheduler also maintains a requested instruction execution rate for each thread and increases the thread's virtual water level according to the requested rate per a predetermined number of clock cycles. The scheduler includes virtual water pressure parameters that define a set of virtual water pressure ranges over the height of the virtual water bucket. When a thread's virtual water level moves from one virtual water pressure range up to the next higher range, the scheduler increases the instruction issue priority for the thread; conversely, when the level moves down to the next lower range, the scheduler decreases the instruction issue priority for the thread.
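
The leaky-bucket bookkeeping can be sketched as follows, with assumed parameter values: completions drain a thread's virtual water level, the requested rate refills it every fixed number of clocks, and the pressure range the level falls into determines the issue priority.

    #include <stdio.h>

    #define NUM_THREADS 3
    #define RATE_PERIOD 16          /* clocks between rate-based refills      */

    typedef struct {
        int level;                  /* virtual water level                    */
        int rate;                   /* requested instructions per RATE_PERIOD */
    } bucket_t;

    /* Pressure ranges over the bucket height; a higher range maps to a
     * higher instruction issue priority (thresholds are arbitrary here).    */
    static const int range_limit[] = { 4, 8, 12, 1 << 30 };

    static int priority_of(const bucket_t *b) {
        int p = 0;
        while (b->level >= range_limit[p])
            p++;
        return p;
    }

    int main(void) {
        bucket_t bucket[NUM_THREADS] = { { 0, 3 }, { 0, 6 }, { 0, 1 } };
        for (int clock = 1; clock <= 64; clock++) {
            if (clock % RATE_PERIOD == 0)                 /* periodic refill */
                for (int t = 0; t < NUM_THREADS; t++)
                    bucket[t].level += bucket[t].rate;

            /* pretend the pipeline completes one instruction of thread 0 */
            if (bucket[0].level > 0)
                bucket[0].level--;
        }
        for (int t = 0; t < NUM_THREADS; t++)
            printf("thread %d: level %d priority %d\n",
                   t, bucket[t].level, priority_of(&bucket[t]));
        return 0;
    }
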
Abstract:
An apparatus for scheduling dispatch of instructions among a plurality of threads being concurrently executed in a multithreading processor is provided. The apparatus includes an instruction decoder that generates register usage information for an instruction from each of the threads, a priority generator that generates a priority for each instruction based on the register usage information and state information of instructions currently executing in an execution pipeline, and selection logic that dispatches at least one instruction from at least one thread based on the priority of the instructions. The priority indicates the likelihood that the instruction will execute in the execution pipeline without stalling. For example, an instruction may have a high priority if it has little or no register dependencies or its data is known to be available; or may have a low priority if it has strong register dependencies or is an uncacheable or synchronized storage space load instruction.
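
As an illustration of priority generation from register usage, the sketch below scores an instruction by how many of its source registers are still busy in the pipeline and demotes loads from uncacheable storage space. The scoring scheme and priority levels are assumptions made for the example, not the patented encoding.

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_REGS 32

    typedef struct {
        int  src1, src2;          /* source registers read by the instruction */
        bool is_uncacheable_load; /* e.g. load from uncacheable storage space */
    } decoded_t;

    /* busy[r] is set while an in-flight instruction has register r as its
     * destination and its result is not yet known to be available.          */
    static int priority_of(const decoded_t *d, const bool busy[NUM_REGS]) {
        if (d->is_uncacheable_load)
            return 0;                       /* likely to stall: low priority  */
        int deps = (d->src1 >= 0 && busy[d->src1]) +
                   (d->src2 >= 0 && busy[d->src2]);
        if (deps == 0)
            return 3;                       /* no dependencies: high priority */
        return deps == 1 ? 2 : 1;
    }

    int main(void) {
        bool busy[NUM_REGS] = { false };
        busy[5] = true;                     /* r5 produced by an executing load */
        decoded_t add = { .src1 = 5, .src2 = 6 };
        decoded_t mul = { .src1 = 7, .src2 = 8 };
        printf("add priority %d, mul priority %d\n",
               priority_of(&add, busy), priority_of(&mul, busy));
        return 0;
    }
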
Abstract:
A multithreading processor for concurrently executing multiple threads is provided. The processor includes an execution pipeline and a thread scheduler that dispatches instructions of the threads to the execution pipeline. The execution pipeline detects a stalling event caused by a dispatched instruction, and flushes the execution pipeline to enable instructions of other threads to continue executing. The execution pipeline communicates to the scheduler which thread caused the stalling event, and the scheduler stops dispatching instructions for that thread until the stalling condition terminates. In one embodiment, the execution pipeline flushes only the thread that includes the instruction that caused the event. In one embodiment, the execution pipeline stalls rather than flushing if the thread is the only runnable thread. In one embodiment, the processor includes skid buffers to which the flushed instructions are rolled back so that only the execution pipeline, not the instruction fetch pipeline, need be flushed.
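
A simplified model of the stall-handling policy described above, with hypothetical names: a stalling instruction is rolled back into its thread's skid buffer and the thread is blocked from dispatch, unless it is the only runnable thread, in which case the pipeline stalls in place.

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_THREADS 4
    #define SKID_SIZE   8

    typedef struct {
        unsigned skid[SKID_SIZE];   /* flushed instructions rolled back here */
        int      skid_count;
        bool     blocked;           /* dispatch stopped until stall clears   */
    } thread_state_t;

    static int runnable_threads(const thread_state_t ts[NUM_THREADS]) {
        int n = 0;
        for (int t = 0; t < NUM_THREADS; t++)
            if (!ts[t].blocked)
                n++;
        return n;
    }

    /* The execution pipeline reports that 'instr' of thread t caused a
     * stalling event (e.g. a cache miss).  Flush only that thread's
     * instructions and stop dispatching it, unless it is the only
     * runnable thread.                                                      */
    static void on_stall(thread_state_t ts[NUM_THREADS], int t, unsigned instr) {
        if (runnable_threads(ts) == 1 && !ts[t].blocked) {
            printf("thread %d is the only runnable thread: stall in place\n", t);
            return;
        }
        ts[t].skid[ts[t].skid_count++] = instr;   /* roll back, no re-fetch */
        ts[t].blocked = true;
        printf("thread %d flushed and blocked; instr 0x%x in skid buffer\n", t, instr);
    }

    int main(void) {
        thread_state_t ts[NUM_THREADS] = { 0 };
        on_stall(ts, 1, 0x8c430000);   /* hypothetical stalling load, thread 1 */
        ts[1].blocked = false;         /* stalling condition terminated        */
        return 0;
    }
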
Abstract:
A method of extracting bits from a bit stream, including retrieving bits from the bit stream into an accumulator, specifying a size value that indicates the number of bits to extract, storing a position value into a control register, and executing a bit extraction instruction. The bit extraction instruction includes copying the number of bits specified by the size value from the accumulator, beginning at the position value, into a target register, setting any remaining bits of the target register to zero, and decrementing the position value by an amount based on the size value. The method may include loading bits from the bit stream into a register and moving the contents of the register into the accumulator to replenish the accumulator. The method may also include determining, based on the position value, whether the accumulator needs to be replenished and, if not, branching to bypass the replenishing functions.
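
The extraction step can be modeled in C as shown below. This is a behavioral sketch of the described semantics (copy the specified bits ending at the current position, zero the rest of the target, decrement the position, and replenish the accumulator when too few bits remain), not the actual instruction encoding or register layout.

    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint64_t acc;      /* accumulator holding bits of the stream        */
        int      pos;      /* bit position within the accumulator (control) */
    } extract_state_t;

    /* Copy 'size' bits of the accumulator ending at the current position
     * into the target, zero the remaining target bits, and decrement the
     * position by the size.                                                 */
    static uint32_t extract(extract_state_t *s, int size) {
        uint32_t mask   = (size >= 32) ? 0xffffffffu : ((1u << size) - 1u);
        uint32_t target = (uint32_t)(s->acc >> (s->pos + 1 - size)) & mask;
        s->pos -= size;
        return target;
    }

    /* Replenish the accumulator from the next word of the bit stream; a
     * caller that still has enough bits can branch around this step.        */
    static void replenish(extract_state_t *s, uint32_t next_word) {
        s->acc = (s->acc << 32) | next_word;
        s->pos += 32;
    }

    int main(void) {
        extract_state_t s = { .acc = 0xABCD1234u, .pos = 31 };
        printf("first 8 bits: 0x%02x\n", extract(&s, 8));   /* 0xAB  */
        printf("next 12 bits: 0x%03x\n", extract(&s, 12));  /* 0xCD1 */
        if (s.pos < 16)                              /* too few bits left?   */
            replenish(&s, 0x56789ABCu);
        printf("next 16 bits: 0x%04x\n", extract(&s, 16));
        return 0;
    }
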
Abstract:
A multiprocessor system maintains cache coherence among processors in a coherent domain. Within the coherent domain, a first processor can receive a command to perform a cache maintenance operation. The first processor can determine whether the cache maintenance operation is a coherent operation. For coherent operations, the first processor sends a coherent request message for distribution to other processors in the coherent domain and can cancel execution of the cache maintenance operation pending receipt of intervention messages corresponding to the coherent request. The intervention messages can reflect a global ordering of coherence traffic in the multiprocessor system and can include instructions for maintaining a data cache and an instruction cache of the first processor. Cache maintenance operations that are determined to be non-coherent can be executed at the first processor without sending the coherent request.
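
A schematic C model of the decision the first processor makes, with an assumed classification of operations: address-based maintenance operations are treated as coherent and broadcast as a coherent request, while index-based operations are executed locally. The operation names and the classification rule are illustrative, not taken from the patent.

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum {
        CACHE_OP_LOCAL_INDEX_INVALIDATE,  /* acts on one cache by index */
        CACHE_OP_ADDRESS_INVALIDATE,      /* acts on a shared address   */
    } cache_op_t;

    /* hypothetical rule: address-based maintenance ops are coherent */
    static bool is_coherent(cache_op_t op) {
        return op == CACHE_OP_ADDRESS_INVALIDATE;
    }

    static void send_coherent_request(cache_op_t op) {
        printf("broadcast coherent request for op %d; local execution deferred "
               "until the corresponding intervention message returns\n", op);
    }

    static void execute_locally(cache_op_t op) {
        printf("execute op %d on the local caches only\n", op);
    }

    /* The first processor receives a cache maintenance command. */
    static void handle_cache_op(cache_op_t op) {
        if (is_coherent(op))
            send_coherent_request(op);   /* global ordering via interventions */
        else
            execute_locally(op);         /* no coherent request needed        */
    }

    int main(void) {
        handle_cache_op(CACHE_OP_ADDRESS_INVALIDATE);
        handle_cache_op(CACHE_OP_LOCAL_INDEX_INVALIDATE);
        return 0;
    }
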
Abstract:
A modular subtraction instruction for execution on a microprocessor having at least one register. The instruction includes opcode bits for designating the instruction and operand bits for designating at least one register storing an offset index, a decrement value, and an address index. When the modular subtraction instruction is executed on the microprocessor, the address index is modified by the decrement value if the address index is not zero and is modified by the offset index if the address index is zero. For example, the address index is repeatedly decremented using the decrement value until it reaches zero, and then the address index is reset back to the offset index. The operand bits may include multiple fields identifying multiple registers selected from the general purpose registers of the microprocessor. The modular subtraction instruction enables access to a buffer in memory in circular fashion by virtue of its operation.
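
The described behavior is easy to model in software; the sketch below walks a buffer backward in circular fashion by subtracting the decrement value while the address index is non-zero and resetting the index to the offset index when it reaches zero. The specific stride, offset, and buffer contents are arbitrary.

    #include <stdio.h>

    /* Software model of the modular subtraction step: subtract the decrement
     * from the address index while it is non-zero; once it reaches zero,
     * reset it to the offset index, wrapping circularly through the buffer. */
    static int modsub(int address_index, int decrement, int offset_index) {
        return (address_index != 0) ? (address_index - decrement) : offset_index;
    }

    int main(void) {
        int buf[8]    = { 10, 11, 12, 13, 14, 15, 16, 17 };
        int decrement = 2;            /* element stride                 */
        int offset    = 6;            /* highest index to wrap back to  */
        int idx       = offset;

        /* Walk the buffer backward circularly: 6, 4, 2, 0, 6, 4, ...   */
        for (int i = 0; i < 10; i++) {
            printf("buf[%d] = %d\n", idx, buf[idx]);
            idx = modsub(idx, decrement, offset);
        }
        return 0;
    }
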