Abstract:
A mechanism controls a multi-thread processor so that when a first thread encounters a latency event for a first predefined time interval, temporary control is transferred to an alternate execution thread for the duration of the first predefined time interval and then back to the original thread. The mechanism grants full control to the alternate execution thread when a latency event for a second predefined time interval is encountered. The first predefined time interval is termed a short latency event, whereas the second time interval is termed a long latency event.
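For illustration only, the policy this abstract describes might be modeled in C roughly as follows; the two-thread model and all names (latency_kind, thread_ctl, on_latency_event) are assumptions of this sketch, not elements of the patent.

/* Short event: the alternate thread borrows the pipeline and control
 * returns to the stalled thread when the event completes.
 * Long event: the alternate thread receives full control. */
typedef enum { EV_SHORT, EV_LONG } latency_kind;

typedef struct {
    int active;      /* thread currently issuing instructions            */
    int alternate;   /* the other execution thread                       */
    int return_to;   /* thread to resume after a short event; -1 if none */
} thread_ctl;

static void on_latency_event(thread_ctl *c, latency_kind k)
{
    int stalled = c->active;
    c->active = c->alternate;              /* alternate thread takes over */
    c->alternate = stalled;
    c->return_to = (k == EV_SHORT) ? stalled : -1;
}

static void on_event_complete(thread_ctl *c)
{
    if (c->return_to >= 0) {               /* a short event finished */
        c->alternate = c->active;
        c->active = c->return_to;          /* original thread resumes */
        c->return_to = -1;
    }
}

The key design point is that only a short event records a return thread; a long event leaves return_to unset, so the handover is permanent until the stalled thread re-arbitrates.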
Abstract:
A control mechanism is established between a network processor and a tree search coprocessor to deal with latencies in accessing data, such as information formatted in a tree structure. A plurality of independent instruction execution threads are queued to enable them to have rapid access to the shared memory. If execution of a thread becomes stalled due to a latency event, full control is granted to the next thread in the queue. The grant of control is temporary when a short latency event occurs, and full when a long latency event occurs. Control is returned to the original thread when a short latency event completes. Each execution thread utilizes an instruction prefetch buffer that collects instructions for idle execution threads when the instruction bandwidth is not fully utilized by an active execution thread. Thread execution control is governed by the collective functioning of a FIFO, an arbiter, and a thread control state machine.
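A minimal C sketch of the FIFO-and-arbiter interplay described above; the queue depth, the names, and the exact re-queueing rule are assumptions, and the prefetch buffers and thread control state machine are not modeled.

#define NTHREADS 4

typedef struct {
    int q[NTHREADS];   /* queued thread ids, oldest first */
    int head, count;
} thread_fifo;

static void fifo_push(thread_fifo *f, int t)
{
    f->q[(f->head + f->count) % NTHREADS] = t;
    f->count++;
}

static int fifo_pop(thread_fifo *f)
{
    int t = f->q[f->head];
    f->head = (f->head + 1) % NTHREADS;
    f->count--;
    return t;
}

/* Arbiter decision when the running thread stalls: on a long latency
 * event the stalled thread surrenders its turn and re-queues at the
 * tail; on a short event it keeps its claim, since control returns to
 * it once the event completes. */
static int arbitrate(thread_fifo *f, int running, int long_event)
{
    if (f->count == 0)
        return running;            /* no other thread queued */
    if (long_event)
        fifo_push(f, running);     /* full grant: go to the back */
    return fifo_pop(f);            /* next queued thread runs */
}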
Abstract:
A mechanism controls a multi-thread processor so that when a first thread encounters a latency event for a first predefined time interval, temporary control is transferred to an alternate execution thread for the duration of the first predefined time interval and then back to the original thread. The mechanism grants full control to the alternate execution thread when a latency event for a second predefined time interval is encountered. The first predefined time interval is termed a short latency event, whereas the second time interval is termed a long latency event.
Abstract:
A system and method of protocol and frame classification in a system for data processing is disclosed, including analyzing a portion of the packet or frame according to predetermined tests and storing characteristics of the packet for use in subsequent processing of the frame. The characteristics are preferably obtained with hardware, which does so quickly and in a uniform time period. The stored characteristics of the packet are then used by the network processing complexes in further processing of the frame. The processor is preconditioned with a starting instruction address, or code entry point, and the location of the beginning of the layer 3 header, as well as flags for the type of frame.
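As a concrete illustration, the stored characteristics could be packed into a small record such as the following C sketch; the field names, widths, and entry-point values are hypothetical, though the Ethernet/802.1Q offsets used in classify() are standard.

#include <stdint.h>

typedef struct {
    uint32_t code_entry;   /* starting instruction address (code entry point) */
    uint16_t l3_offset;    /* byte offset of the layer 3 header                */
    uint16_t flags;        /* frame-type flags                                 */
} frame_class;

#define FC_FLAG_VLAN 0x0001
#define FC_FLAG_IPV4 0x0002

static frame_class classify(const uint8_t *frame)
{
    frame_class fc = {0, 0, 0};
    uint16_t etype = (uint16_t)((frame[12] << 8) | frame[13]);
    if (etype == 0x8100) {                 /* 802.1Q VLAN tag present */
        fc.flags |= FC_FLAG_VLAN;
        etype = (uint16_t)((frame[16] << 8) | frame[17]);
        fc.l3_offset = 18;                 /* tagged header is 4 bytes longer */
    } else {
        fc.l3_offset = 14;                 /* untagged Ethernet II */
    }
    if (etype == 0x0800)
        fc.flags |= FC_FLAG_IPV4;
    /* Hypothetical entry points: one handler per protocol family. */
    fc.code_entry = (fc.flags & FC_FLAG_IPV4) ? 0x0100u : 0x0200u;
    return fc;
}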
Abstract:
A system and method of protocol and frame classification in a system for data processing (e.g., switching or routing data packets or frames). The present invention includes analyzing a portion of the packet or frame according to predetermined tests, then storing key characteristics of the packet for use in subsequent processing of the frame. The key characteristics of the frame (or input information unit), such as the type of layer 3 protocol used in the frame, the layer 2 encapsulation technique, the starting instruction address, and flags indicating whether the frame uses a virtual local area network, are preferably obtained using hardware, quickly and in a uniform time period. The stored key characteristics of the packet are then used by the network processing complex in its further processing of the frame. The processor is preconditioned with a starting instruction address and the location of the beginning of the layer 3 header, as well as flags for the type of frame. That is, the instruction address or code entry point is used by the processor to start processing a frame at the right place, based on the type of frame. Additionally, further instruction addresses can be stacked and used sequentially at branches to avoid additional tests and branching instructions.
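The stacking of instruction addresses can be pictured with the short sketch below; modeling it as a fixed array consumed in order, and the names push_entry and next_entry, are assumptions of this illustration.

#include <stdint.h>

#define MAX_ENTRIES 4

typedef struct {
    uint32_t addr[MAX_ENTRIES];  /* entry points, pushed during classification */
    int n, next;
} entry_seq;

/* Classification pushes the entry points in the order the code path
 * will need them. */
static void push_entry(entry_seq *s, uint32_t a)
{
    s->addr[s->n++] = a;
}

/* At each branch the processor consumes the next pre-computed entry
 * point instead of repeating tests and branch instructions. */
static uint32_t next_entry(entry_seq *s)
{
    return s->addr[s->next++];
}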
Abstract:
An embedded processor complex contains multiple protocol processor units (PPUs). Each unit includes at least one, and preferably two, independently functioning core language processors (CLPs). Each CLP supports dual threads, which interact through logical coprocessor execution or data interfaces with a plurality of special purpose coprocessors that serve each PPU. Operating instructions enable the PPU to identify long and short latency events and to control and shift priority for thread execution based on this identification. The instructions also enable the conditional execution of specific coprocessor operations upon the occurrence or non-occurrence of certain specified events.
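The conditional-execution idea might be sketched as follows in C; the operation names and the flag-mask encoding are assumptions, and the actual coprocessor interface is not modeled.

typedef enum { CP_TREE_SEARCH, CP_CHECKSUM, CP_ENQUEUE } coproc_op;

/* Issue a coprocessor operation only if a specified event has (or has
 * not) occurred, as indicated by a condition flag. Returns 1 if the
 * operation was issued, 0 if it was skipped. */
static int issue_if(unsigned cond_flags, unsigned mask, int expect_set,
                    coproc_op op)
{
    int set = (cond_flags & mask) != 0;
    if (set != expect_set)
        return 0;                /* condition not met: skip the op */
    (void)op;                    /* start_coprocessor(op) would go here */
    return 1;
}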
Abstract:
An ordered semaphore management subsystem and method for use in an application system which includes a plurality of processors competing for shared resources, each of which is controlled by a unique semaphore. The subsystem generates an ordered semaphore field (OSF) corresponding to each processor in a linked list of processors and assigns one of four states to the OSF depending on the position the processor occupies in the linked list of processors competing for the shared resources. The four states are (1) semaphore head (SH); (2) behind semaphore head (BSH); (3) semaphore head behind (SHB); and (4) skip (Skip). Only the SH processor is allocated the semaphore when requested. A processor not in the SH state will be denied the semaphore even if it is available, to assure sequential access.
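For illustration, the four OSF states and the grant rule might look like the following C sketch; the linked-list node layout and the helper name may_grant are assumptions.

typedef enum { OSF_SH, OSF_BSH, OSF_SHB, OSF_SKIP } osf_state;

typedef struct proc_node {
    osf_state osf;               /* position-dependent state     */
    struct proc_node *next;      /* linked list in arrival order */
} proc_node;

/* Only the semaphore head may be granted the semaphore; a free
 * semaphore is still withheld from any non-SH processor, which is
 * what preserves sequential (ordered) access. */
static int may_grant(const proc_node *p, int sem_free)
{
    return sem_free && p->osf == OSF_SH;
}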
Abstract:
A generic method and apparatus for managing semaphores in a multi-threaded processing system provides a storage area for each of the threads in the processing system. Each storage area includes a first part for storing at least one indicium identifying a unique semaphore from the plurality of semaphores utilized by the multi-threaded processing system, and a second part for storing an indicium indicating a locked status for the stored semaphore. A thread requiring a semaphore sends a semaphore lock request to the semaphore manager, which examines the contents of all of the storage areas to determine the status of the requested semaphore. If the requested semaphore is not locked, it is locked for the requesting thread by inserting the requested semaphore and locked status in the memory location assigned to the requesting thread.
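A minimal C sketch of such a semaphore manager, assuming one storage area per thread and a linear scan on each lock request; the sizes and names are illustrative.

#include <stdint.h>

#define NTHREADS 8

typedef struct {
    uint32_t sem_id;   /* first part: identifies the stored semaphore */
    int      locked;   /* second part: locked status                  */
} sem_area;

static sem_area areas[NTHREADS];

/* Handle a lock request from a thread: examine every storage area and
 * grant the lock only if no thread already holds this semaphore. */
static int sem_lock(int thread, uint32_t sem_id)
{
    for (int i = 0; i < NTHREADS; i++)
        if (areas[i].locked && areas[i].sem_id == sem_id)
            return 0;                      /* already locked elsewhere */
    areas[thread].sem_id = sem_id;         /* record the grant in the  */
    areas[thread].locked = 1;              /* requester's storage area */
    return 1;
}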
Abstract:
Systems and methods for distributing thread instructions in the pipeline of a multi-threading digital processor are disclosed. More particularly, hardware and software are disclosed for successively selecting threads in an ordered sequence for execution in the processor pipeline. If a thread to be selected cannot execute, then a complementary thread is selected for execution.
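A possible C rendering of the selection rule, assuming a four-thread round-robin order and that each thread's complement is its partner in a two-thread pair (t ^ 1); the ready[] array stands in for whatever stall detection the hardware provides.

#define NTHREADS 4

/* Pick the thread whose turn it is in the ordered sequence; if it
 * cannot execute, fall back to its complementary thread. Returns the
 * selected thread id, or -1 if neither can issue this cycle. */
static int select_thread(int *next, const int ready[NTHREADS])
{
    int t = *next;
    *next = (t + 1) % NTHREADS;  /* advance the ordered sequence */
    if (ready[t])
        return t;
    if (ready[t ^ 1])            /* complementary thread of the pair */
        return t ^ 1;
    return -1;                   /* pipeline bubble this cycle */
}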