Abstract:
A computer system having a plurality of processing resources, including a subsystem for scheduling and dispatching processing jobs to a plurality of hardware accelerators. The subsystem further comprises a job requestor for requesting jobs having bounded and varying latencies to be executed on the hardware accelerators; a queue controller to manage processing job requests directed to the plurality of hardware accelerators; multiple hardware queues for dispatching jobs to the plurality of hardware acceleration engines, each queue having a dedicated head-of-queue entry, dynamically sharing a pool of queue entries, and having configurable queue depth limits; and means for removing one or more jobs across all queues.
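As a rough illustration only (not part of the abstract), the C sketch below models the described queue organization in software: per-queue dedicated head entries, a dynamically shared pool of queue entries, configurable depth limits, and removal of a job across all queues. All names, sizes, and the software queue model are assumptions.

/* Illustrative sketch: a software model of the queue organization described
 * above; names and sizes are assumed, not taken from the patent. */
#include <stdbool.h>

#define NUM_QUEUES   4    /* assumed: one queue per accelerator           */
#define POOL_ENTRIES 16   /* entries dynamically shared by all queues     */

typedef struct { int job_id; } job_t;

typedef struct {
    job_t head;                 /* dedicated head-of-queue entry           */
    bool  head_valid;
    int   entry[POOL_ENTRIES];  /* indices into the shared pool, FIFO      */
    int   count;                /* pool entries currently held             */
    int   depth_limit;          /* configurable depth limit (set at init)  */
} hw_queue_t;

static job_t      pool[POOL_ENTRIES];   /* shared pool of queue entries */
static bool       pool_used[POOL_ENTRIES];
static hw_queue_t queues[NUM_QUEUES];

/* Enqueue a job; fails when the queue is at its depth limit or the shared
 * pool is exhausted. */
bool enqueue(int q, job_t job)
{
    hw_queue_t *hq = &queues[q];
    if (!hq->head_valid) { hq->head = job; hq->head_valid = true; return true; }
    if (1 + hq->count >= hq->depth_limit) return false;   /* depth limit hit */
    for (int i = 0; i < POOL_ENTRIES; i++) {
        if (!pool_used[i]) {
            pool_used[i] = true;
            pool[i] = job;
            hq->entry[hq->count++] = i;
            return true;
        }
    }
    return false;                                          /* pool exhausted */
}

/* Dispatch the job at the head of queue q and promote the next pooled entry. */
bool dispatch(int q, job_t *out)
{
    hw_queue_t *hq = &queues[q];
    if (!hq->head_valid) return false;
    *out = hq->head;
    if (hq->count > 0) {
        int idx = hq->entry[0];
        hq->head = pool[idx];
        pool_used[idx] = false;
        for (int i = 1; i < hq->count; i++) hq->entry[i - 1] = hq->entry[i];
        hq->count--;
    } else {
        hq->head_valid = false;
    }
    return true;
}

/* Remove a job wherever it sits, searching across all queues. */
bool remove_job(int job_id)
{
    for (int q = 0; q < NUM_QUEUES; q++) {
        hw_queue_t *hq = &queues[q];
        if (hq->head_valid && hq->head.job_id == job_id) {
            job_t dropped;
            return dispatch(q, &dropped);  /* pop the head, promote the next */
        }
        for (int i = 0; i < hq->count; i++) {
            int idx = hq->entry[i];
            if (pool[idx].job_id != job_id) continue;
            pool_used[idx] = false;
            for (int j = i + 1; j < hq->count; j++) hq->entry[j - 1] = hq->entry[j];
            hq->count--;
            return true;
        }
    }
    return false;
}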
Abstract:
A technique for maintaining input/output (I/O) command ordering on a bus includes assigning a channel identifier to I/O commands of an I/O stream. In this case, the channel identifier indicates the I/O commands belong to the I/O stream. A command location indicator is assigned to each of the I/O commands. The command location indicator provides an indication of which one of the I/O commands is a start command in the I/O stream and which of the I/O commands are continue commands in the I/O stream. The I/O commands are issued in a desired completion order. When a first one of the I/O commands does not complete successfully, the I/O commands in the I/O stream are reissued on the bus starting at the first one of the I/O commands that did not complete successfully.
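A minimal sketch (not from the patent) of how this ordering scheme could look in software, assuming invented types and a stubbed issue_on_bus(); the real mechanism operates on a hardware bus.

/* Illustrative only: channel identifier, start/continue indicator, and
 * reissue from the first command that failed to complete. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { CMD_START, CMD_CONTINUE } cmd_location_t;

typedef struct {
    unsigned       channel_id; /* identifies the I/O stream the command belongs to */
    cmd_location_t location;   /* start command vs. continue command               */
    unsigned       tag;        /* position of the command within the stream        */
} io_cmd_t;

/* Stub: pretend every command except tag 2 completes successfully on the
 * first attempt. */
static bool issue_on_bus(const io_cmd_t *cmd, bool retry)
{
    printf("issue ch=%u tag=%u %s%s\n", cmd->channel_id, cmd->tag,
           cmd->location == CMD_START ? "START" : "CONTINUE",
           retry ? " (reissue)" : "");
    return retry || cmd->tag != 2;
}

/* Issue the stream in the desired completion order; on a failure, reissue
 * starting at the command that did not complete successfully. */
static void issue_stream(io_cmd_t *cmds, int n)
{
    for (int i = 0; i < n; i++) {
        if (!issue_on_bus(&cmds[i], false)) {
            for (int j = i; j < n; j++)       /* reissue from the failed command */
                issue_on_bus(&cmds[j], true);
            return;
        }
    }
}

int main(void)
{
    io_cmd_t stream[4];
    for (unsigned t = 0; t < 4; t++)
        stream[t] = (io_cmd_t){ .channel_id = 7,
                                .location = t == 0 ? CMD_START : CMD_CONTINUE,
                                .tag = t };
    issue_stream(stream, 4);
    return 0;
}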
Abstract:
A network processor useful in network switch apparatus, and methods of operating such a processor, in which data flow handling and flexibility are enhanced by the cooperation of an embedded processor complex with a suite of peripherals, all formed on a common semiconductor substrate. The interface processors provide data paths for inbound and outbound data flow and operate under the control of instructions stored in an instruction store formed on the semiconductor substrate, while storage of transiting data flow portions is provided by memory peripherals and interfaces to external memory elements.
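Purely as an orientation aid (not part of the abstract), the C structs below group the components named above; all field names, sizes, and the two-processor split are assumptions.

/* Illustrative grouping of the named components; not a hardware description. */
#include <stdint.h>

#define INSTR_STORE_WORDS 4096   /* assumed on-substrate instruction store size */
#define ONCHIP_BUF_BYTES  2048   /* assumed on-chip buffering for transiting data */

typedef struct {                         /* memory peripheral for transiting data */
    uint8_t   buffer[ONCHIP_BUF_BYTES];
    uintptr_t external_memory_base;      /* interface to external memory elements */
} memory_peripheral_t;

typedef struct {                         /* interface processor for one data path */
    uint32_t instruction_store[INSTR_STORE_WORDS];
    memory_peripheral_t *data_store;
} interface_processor_t;

typedef struct {                         /* everything on one common substrate */
    interface_processor_t inbound;       /* inbound data path                  */
    interface_processor_t outbound;      /* outbound data path                 */
    memory_peripheral_t   buffers;
    /* embedded processor complex (protocol processors, etc.) omitted here */
} network_processor_t;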
Abstract:
An embedded processor complex contains multiple protocol processor units (PPUs). Each unit includes at least one, and preferably two, independently functioning core language processors (CLPs). Each CLP supports dual threads, which interact through logical coprocessor execution or data interfaces with a plurality of special-purpose coprocessors that serve each PPU. Operating instructions enable the PPU to identify long- and short-latency events and to control and shift priority for thread execution based on this identification. The instructions also enable the conditional execution of specific coprocessor operations upon the occurrence or non-occurrence of certain specified events.
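A minimal structural sketch of one PPU and of conditional coprocessor execution might look like the following C; the counts, names, and the software model are assumptions, not the patented hardware.

/* Illustrative only: assumed thread/CLP/coprocessor counts and a simplified
 * model of conditional coprocessor execution. */
#include <stdbool.h>

#define THREADS_PER_CLP 2
#define CLPS_PER_PPU    2
#define NUM_COPROCS     4   /* assumed number of special-purpose coprocessors */

typedef struct { int pc; bool waiting_on_coproc; } thread_ctx_t;

typedef struct {                        /* core language processor */
    thread_ctx_t threads[THREADS_PER_CLP];
    int active;                         /* thread currently executing */
} clp_t;

typedef struct {                        /* protocol processor unit */
    clp_t clps[CLPS_PER_PPU];
    bool  coproc_busy[NUM_COPROCS];     /* coprocessors shared within the PPU */
} ppu_t;

/* Conditional coprocessor execution: issue the operation only when the
 * specified event state matches the condition encoded in the instruction. */
bool issue_coproc_op(ppu_t *ppu, int coproc,
                     bool event_occurred, bool run_on_occurrence)
{
    if (event_occurred != run_on_occurrence) return false;  /* condition not met */
    if (ppu->coproc_busy[coproc]) return false;             /* coprocessor in use */
    ppu->coproc_busy[coproc] = true;                        /* start the operation */
    return true;
}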
Abstract:
A system and method of protocol and frame classification in a system for data processing is disclosed, including analyzing a portion of the packet or frame according to predetermined tests, and storing characteristics of the packet for use in subsequent processing of the frame. The characteristics are preferably obtained with hardware, which does so quickly and in a uniform time period. The stored characteristics of the packet are then used by the network processing complexes in further processing of the frame. The processor is preconditioned with a starting instruction address or code entry point and the location of the beginning of the layer 3 header, as well as flags for the type of frame.
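For illustration only, here is a C sketch of the kind of per-frame record this preconditioning implies; the field names and widths are assumptions, and the actual classifier is hardware.

/* Illustrative classification result and processor preconditioning. */
#include <stdint.h>

typedef struct {
    uint32_t code_entry_point;   /* starting instruction address for this frame type */
    uint16_t l3_header_offset;   /* location of the beginning of the layer 3 header  */
    uint8_t  frame_type_flags;   /* flags describing the type of frame               */
} classification_t;

typedef struct { uint32_t pc; uint16_t l3_offset; uint8_t flags; } proc_ctx_t;

/* Precondition a (software-modelled) processor context with the stored
 * characteristics before frame processing starts. */
void precondition(proc_ctx_t *ctx, const classification_t *c)
{
    ctx->pc        = c->code_entry_point;   /* start processing at the right place */
    ctx->l3_offset = c->l3_header_offset;
    ctx->flags     = c->frame_type_flags;
}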
Abstract:
A system and method of protocol and frame classification in a system for data processing (e.g., switching or routing data packets or frames) are disclosed. The present invention includes analyzing a portion of the packet or frame according to predetermined tests, then storing key characteristics of the packet for use in subsequent processing of the frame. The key characteristics of the frame (or input information unit), such as the type of layer 3 protocol used in the frame, the layer 2 encapsulation technique, the starting instruction address, and flags indicating whether the frame uses a virtual local area network, are preferably obtained using hardware, quickly and in a uniform time period. The stored key characteristics of the packet are then used by the network processing complexes in their further processing of the frame. The processor is preconditioned with a starting instruction address and the location of the beginning of the layer 3 header, as well as flags for the type of frame. That is, the instruction address or code entry point is used by the processor to start processing for a frame at the right place, based on the type of frame. Additionally, further instruction addresses can be stacked and used sequentially at branches to avoid additional tests and branching instructions.
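As a rough illustration of the stacked entry points mentioned above (the names and fixed depth are assumptions, not from the patent):

/* Illustrative: preclassified entry addresses consumed sequentially at branches. */
#include <stdint.h>

#define MAX_ENTRY_POINTS 4

typedef struct {
    uint32_t entry[MAX_ENTRY_POINTS]; /* entry[0] used first, the rest at branches */
    int      count;
    int      next;                    /* next stacked address to consume           */
} entry_stack_t;

/* At a branch point, take the next preclassified entry address instead of
 * executing further tests and branch instructions. */
uint32_t next_entry_point(entry_stack_t *s, uint32_t fallback)
{
    if (s->next < s->count)
        return s->entry[s->next++];
    return fallback;   /* no more stacked addresses: branch normally */
}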
Abstract:
A mechanism controls a multi-thread processor so that when a first thread encounters a latency event for a first predefined time interval, temporary control is transferred to an alternate execution thread for the duration of the first predefined time interval and then back to the original thread. The mechanism grants full control to the alternate execution thread when a latency event for a second predefined time interval is encountered. The first predefined time interval is termed a short latency event, whereas the second time interval is termed a long latency event.
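A minimal sketch of this thread-control policy, using an invented cycle-count model of the two predefined intervals; the scheduler itself is hardware, so everything below is assumption.

/* Illustrative: temporary transfer on a short latency event, full transfer
 * on a long latency event. */
#include <stdbool.h>

typedef struct {
    int owner;         /* thread holding full control                 */
    int running;       /* thread currently executing                  */
    int borrow_left;   /* remaining cycles of a temporary transfer    */
} sched_t;

enum { SHORT_LATENCY_CYCLES = 8 };   /* assumed first predefined interval */

/* Short latency event: lend the processor to the alternate thread for the
 * first predefined interval, then return it.  Long latency event: hand full
 * control to the alternate thread. */
void on_event(sched_t *s, bool long_latency)
{
    int alt = s->owner ^ 1;
    if (long_latency) {
        s->owner = s->running = alt;            /* full control transfer      */
        s->borrow_left = 0;
    } else {
        s->running = alt;                       /* temporary control transfer */
        s->borrow_left = SHORT_LATENCY_CYCLES;
    }
}

/* Called once per cycle: when the borrowed interval expires, control goes
 * back to the original thread. */
void on_cycle(sched_t *s)
{
    if (s->borrow_left > 0 && --s->borrow_left == 0)
        s->running = s->owner;
}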
Abstract:
In an ordered semaphore management system, a pending state allows threads not competing for a locked semaphore to bypass one or more threads waiting for that locked semaphore. The number of pending levels determines the number of consecutive threads vying for the same locked semaphore that can be bypassed. When more than one level is provided, the pending levels are prioritized in the queued order.
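A very small sketch of the idea follows; the structures, the fixed number of pending levels, and the simplified grant policy are all invented for illustration and do not come from the abstract.

/* Illustrative ordered-semaphore model with a pending state that lets
 * non-competing requests bypass a blocked head-of-queue request. */
#include <stdbool.h>

#define PENDING_LEVELS 2   /* how many blocked requests may be bypassed */
#define NUM_SEMAPHORES 8   /* assumed semaphore count                   */

typedef struct { int thread_id; int sem_id; } request_t;

typedef struct {
    bool      sem_locked[NUM_SEMAPHORES];
    request_t pending[PENDING_LEVELS];  /* kept in queued (priority) order */
    int       npending;
} osm_t;

/* Serve the request at the head of the ordered queue.  If its semaphore is
 * locked and a pending level is free, park it there so later, non-competing
 * requests can bypass it; otherwise the head must wait (return false). */
bool serve_head(osm_t *m, request_t head)
{
    if (!m->sem_locked[head.sem_id]) {
        m->sem_locked[head.sem_id] = true;    /* grant immediately       */
        return true;
    }
    if (m->npending < PENDING_LEVELS) {
        m->pending[m->npending++] = head;     /* park; queue may advance */
        return true;
    }
    return false;                             /* all pending levels used */
}

/* On release, the oldest pending request for that semaphore gets it first,
 * preserving the original queued order among pending levels. */
void release(osm_t *m, int sem_id)
{
    m->sem_locked[sem_id] = false;
    for (int i = 0; i < m->npending; i++) {
        if (m->pending[i].sem_id == sem_id) {
            m->sem_locked[sem_id] = true;     /* grant to oldest pending */
            for (int j = i + 1; j < m->npending; j++)
                m->pending[j - 1] = m->pending[j];
            m->npending--;
            return;
        }
    }
}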