Abstract:
A method of operating a processor includes concatenating a first word and a second word to produce an intermediate result, shifting the intermediate result by a specified shift amount, and storing the shifted intermediate result in a third word to create an address.
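A minimal C sketch of this address-formation step, assuming 32-bit source words, a 64-bit intermediate result, and a right shift; the widths, the shift direction, and the name form_address are illustrative assumptions rather than details taken from the abstract.

#include <stdint.h>
#include <stdio.h>

/* Sketch: concatenate two 32-bit words into a 64-bit intermediate result,
 * shift by the specified amount, and keep the low 32 bits as the address.
 * Word widths and shift direction are assumptions for illustration. */
static uint32_t form_address(uint32_t first, uint32_t second, unsigned shift)
{
    uint64_t intermediate = ((uint64_t)first << 32) | second; /* concatenate */
    uint64_t shifted = intermediate >> shift;                 /* shift */
    return (uint32_t)shifted;                                 /* third word */
}

int main(void)
{
    printf("0x%08x\n", form_address(0x00000012u, 0x34567890u, 8));
    return 0;
}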
Abstract:
A parallel hardware-based multithreaded processor is described. The processor includes a general purpose processor that coordinates system functions and a plurality of microengines that support multiple program threads. The processor also includes a memory control system that has a first memory controller that sorts memory references based on whether the memory references are directed to an even bank or an odd bank of memory and a second memory controller that optimizes memory references based upon whether the memory references are read references or write references. A program thread communication scheme for packet processing is also described.
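For illustration only, a small C sketch of the even/odd sorting idea attributed to the first memory controller, assuming the bank is selected by a single address bit; BANK_BIT, QUEUE_LEN, and the structure names are invented for the example and are not part of the described design.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Sort incoming memory references into even-bank and odd-bank queues,
 * assuming one address bit selects the bank (an assumption). */
#define BANK_BIT  6
#define QUEUE_LEN 16

struct mem_ref  { uint32_t addr; bool is_read; };
struct ref_queue { struct mem_ref refs[QUEUE_LEN]; int count; };

static void sort_reference(struct ref_queue *even_q, struct ref_queue *odd_q,
                           struct mem_ref ref)
{
    struct ref_queue *q =
        ((ref.addr >> BANK_BIT) & 1u) ? odd_q : even_q; /* pick bank queue */
    if (q->count < QUEUE_LEN)
        q->refs[q->count++] = ref;
}

int main(void)
{
    struct ref_queue even_q = {0}, odd_q = {0};
    sort_reference(&even_q, &odd_q, (struct mem_ref){0x00000040u, true});
    sort_reference(&even_q, &odd_q, (struct mem_ref){0x00000080u, false});
    printf("even=%d odd=%d\n", even_q.count, odd_q.count);
    return 0;
}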
Abstract:
A method of and apparatus for associating units of data with threads of a multi-threaded processor for processing, and enabling each thread to perform processing for at least two of the data units during a thread execution period. The thread execution period is divided into phases, and each of the data units processed by a thread is processed by a different one of the phases.
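A hedged C sketch of the phased execution idea, assuming two phases per thread execution period and one data unit per phase; NUM_PHASES and the function names are assumptions made for the example.

#include <stdio.h>

/* A thread's execution period is split into NUM_PHASES phases, and each
 * data unit assigned to the thread is handled in a different phase. */
#define NUM_PHASES 2

struct data_unit { int id; };

static void process_phase(int phase, struct data_unit *unit)
{
    printf("phase %d processing unit %d\n", phase, unit->id);
}

static void thread_execution_period(struct data_unit units[NUM_PHASES])
{
    for (int phase = 0; phase < NUM_PHASES; phase++)
        process_phase(phase, &units[phase]); /* one unit per phase */
}

int main(void)
{
    struct data_unit units[NUM_PHASES] = { {101}, {102} };
    thread_execution_period(units);
    return 0;
}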
Abstract:
A mechanism to process units of data associated with a dependent data stream using different threads of execution and a common data structure in memory. Accessing the common data structure in memory for the processing uses a single read operation and a single write operation. The folding of multiple read-modify-write memory operations in such a manner for multiple multi-threaded stages of processing includes controlling a first stage, which operates on the same data unit as a second stage, to pass context state information to the second stage for coherency.
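The folding idea can be sketched in C as a single read of the shared structure, in-register modifications for several data units, and a single write back; the queue_descriptor structure and the enqueue-style update rule below are assumptions for illustration, not the structure used in the described mechanism.

#include <stdint.h>
#include <stdio.h>

/* Fold several read-modify-write updates into one read and one write. */
struct queue_descriptor { uint32_t count; uint32_t tail; };

static struct queue_descriptor shared_memory = { 0, 0 };

static struct queue_descriptor read_descriptor(void)    { return shared_memory; }
static void write_descriptor(struct queue_descriptor d) { shared_memory = d; }

static void folded_enqueue(const uint32_t *unit_sizes, int n)
{
    struct queue_descriptor d = read_descriptor();   /* single read  */
    for (int i = 0; i < n; i++) {                    /* modify for all units */
        d.count += 1;
        d.tail  += unit_sizes[i];
    }
    write_descriptor(d);                             /* single write */
}

int main(void)
{
    uint32_t sizes[3] = { 64, 128, 64 };
    folded_enqueue(sizes, 3);
    printf("count=%u tail=%u\n", shared_memory.count, shared_memory.tail);
    return 0;
}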
Abstract:
A parallel, multi-threaded processor system and technique for arbitrating command requests is described. The system includes a plurality of microengines, a plurality of shared system resources and a global command arbiter. The global command arbiter uses a command request protocol that is based on the shared system resources and command type to grant or deny a microengine command request for a shared resource.
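A speculative C sketch of a grant/deny decision that depends on both the targeted shared resource and the command type; the resource list, command types, and credit-based policy are invented stand-ins, since the abstract does not specify the protocol.

#include <stdbool.h>
#include <stdio.h>

/* Arbiter sketch: the grant decision looks at which shared resource is
 * targeted and at the command type.  Policy and numbers are assumptions. */
enum resource { RES_SRAM, RES_DRAM, RES_MEDIA, NUM_RESOURCES };
enum cmd_type { CMD_READ, CMD_WRITE };

static int credits[NUM_RESOURCES] = { 4, 8, 2 }; /* free slots per resource */

static bool arbiter_grant(enum resource res, enum cmd_type type)
{
    /* Example policy: writes need two free slots, reads need one. */
    int needed = (type == CMD_WRITE) ? 2 : 1;
    if (credits[res] >= needed) {
        credits[res] -= needed;
        return true;   /* grant the microengine's request */
    }
    return false;      /* deny; the requester retries later */
}

int main(void)
{
    printf("%d\n", arbiter_grant(RES_MEDIA, CMD_WRITE)); /* granted */
    printf("%d\n", arbiter_grant(RES_MEDIA, CMD_WRITE)); /* denied  */
    return 0;
}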
Abstract:
Managing memory access to random access memory includes fetching a read lock memory reference request and placing the read lock memory reference request at the end of a read lock miss queue if (1) the read lock memory reference request is requesting access to an unlocked memory location and (2) the read lock miss queue contains at least one read lock memory reference request.
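A small C sketch of this queueing rule, with invented types and queue size; the branches for an empty queue and for a locked location are assumptions added only to make the example self-contained.

#include <stdbool.h>
#include <stdio.h>

#define MAX_WAITING 8

struct read_lock_request { unsigned addr; };

struct read_lock_miss_queue {
    struct read_lock_request waiting[MAX_WAITING];
    int count;
};

static void handle_read_lock(struct read_lock_miss_queue *q,
                             struct read_lock_request req,
                             bool location_locked)
{
    if (!location_locked && q->count > 0 && q->count < MAX_WAITING) {
        /* Rule from the abstract: the location is unlocked but earlier
         * read-lock requests are still waiting, so append to the end. */
        q->waiting[q->count++] = req;
        printf("queued 0x%x at position %d\n", req.addr, q->count);
    } else if (!location_locked) {
        printf("service 0x%x immediately (assumed behavior)\n", req.addr);
    } else if (q->count < MAX_WAITING) {
        q->waiting[q->count++] = req;  /* assumed: locked location also waits */
        printf("queued 0x%x behind lock holder\n", req.addr);
    }
}

int main(void)
{
    struct read_lock_miss_queue q = { .count = 0 };
    handle_read_lock(&q, (struct read_lock_request){0x100}, false);
    handle_read_lock(&q, (struct read_lock_request){0x104}, true);
    handle_read_lock(&q, (struct read_lock_request){0x108}, false);
    return 0;
}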