Abstract:
In an ordered semaphore management system, a pending state allows threads not competing for a locked semaphore to bypass one or more threads waiting for that same locked semaphore. The number of pending levels determines how many consecutive threads vying for the same locked semaphore can be bypassed. When more than one level is provided, the pending levels are prioritized in queued order.
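As a minimal C sketch of this bypass behavior, the fragment below walks an in-order request queue: waiters on the locked semaphore are parked in pending levels, a thread requesting a free semaphore passes them, and the queue stalls once the pending levels are exhausted. The queue layout, the PEND_LEVELS constant, and the grant/park logic are illustrative assumptions, not the patented design.

/* Single-threaded sketch of the pending-state bypass idea. */
#include <stdbool.h>
#include <stdio.h>

#define QUEUE_LEN   5
#define PEND_LEVELS 2   /* how many consecutive waiters may be bypassed */

struct request { int thread_id; int sem_id; };

static bool sem_locked[4] = { false, true, false, false }; /* sem 1 locked */

int main(void)
{
    struct request q[QUEUE_LEN] = {
        { 0, 1 }, { 1, 1 }, { 2, 0 }, { 3, 1 }, { 4, 2 },
    };
    int pending = 0;

    for (int i = 0; i < QUEUE_LEN; i++) {
        if (!sem_locked[q[i].sem_id]) {
            sem_locked[q[i].sem_id] = true;          /* grant in order */
            printf("thread %d granted sem %d\n", q[i].thread_id, q[i].sem_id);
        } else if (pending < PEND_LEVELS) {
            pending++;                               /* park the waiter */
            printf("thread %d pending on sem %d (level %d)\n",
                   q[i].thread_id, q[i].sem_id, pending);
        } else {
            printf("thread %d blocks the queue (no pending level left)\n",
                   q[i].thread_id);
            break;                                   /* no more bypassing */
        }
    }
    return 0;
}

With two pending levels, threads 0 and 1 park on the locked semaphore, thread 2 bypasses them and takes a free semaphore, and thread 3 stalls the queue, matching the abstract's rule that the level count bounds the number of bypassable waiters.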
Abstract:
An ordered semaphore management subsystem and method for use in an application system that includes a plurality of processors competing for shared resources, each of which is controlled by a unique semaphore. The subsystem generates an ordered semaphore field (OSF) corresponding to each processor in a linked list of processors and assigns one of four states to the OSF, depending on the position the processor occupies in the linked list of processors competing for the shared resources. The four states are (1) semaphore head (SH); (2) behind semaphore head (BSH); (3) semaphore head behind (SHB); and (4) skip (Skip). Only the SH processor is allocated the semaphore when requested. A processor not in the SH state will be denied the semaphore even if it is available, to assure sequential access.
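The C sketch below mirrors the four OSF states named in the abstract; the enum values follow the abstract's names, while the linked-list layout and the may_lock check are illustrative assumptions.

/* Hedged sketch of the four ordered-semaphore-field (OSF) states. */
#include <stdio.h>

enum osf_state { SH, BSH, SHB, SKIP };  /* semaphore head, behind semaphore
                                           head, semaphore head behind, skip */

struct proc {
    int            id;
    enum osf_state osf;
    struct proc   *next;    /* linked-list order = arrival order */
};

/* Only the processor currently marked SH may take the semaphore,
 * even if the semaphore itself is free, preserving sequential access. */
static int may_lock(const struct proc *p) { return p->osf == SH; }

int main(void)
{
    struct proc c = { 3, SKIP, NULL };
    struct proc b = { 2, BSH,  &c };
    struct proc a = { 1, SH,   &b };

    for (struct proc *p = &a; p; p = p->next)
        printf("proc %d: %s\n", p->id,
               may_lock(p) ? "semaphore granted" : "semaphore denied");
    return 0;
}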
Abstract:
A generic method and apparatus for managing semaphores in a multi-threaded processing system provide a storage area for each thread in the processing system. Each storage area includes a first part for storing at least one indicator identifying a unique semaphore from the plurality of semaphores utilized by the multi-threaded processing system, and a second part for storing an indicator of the locked status of the stored semaphore. A thread requiring a semaphore sends a semaphore lock request to the semaphore manager, which examines the contents of all of the storage areas to determine the status of the requested semaphore. If the requested semaphore is not locked, it is locked for the requesting thread by inserting the requested semaphore and locked status into the memory location assigned to the requesting thread.
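A minimal C sketch of the described storage layout, one entry per thread holding a semaphore identifier and a lock bit, with the manager scanning every entry before granting a lock; the names (sem_store, sem_lock) and sizes are assumptions.

/* Illustrative sketch of the per-thread semaphore store. */
#include <stdbool.h>
#include <stdio.h>

#define NTHREADS 4

struct sem_store {
    unsigned sem_id;   /* which semaphore this thread requested */
    bool     locked;   /* lock-status bit                       */
};

static struct sem_store store[NTHREADS];

/* Scan every thread's store; grant the lock only if no other thread
 * already holds the requested semaphore. */
static bool sem_lock(int thread, unsigned sem_id)
{
    for (int t = 0; t < NTHREADS; t++)
        if (store[t].locked && store[t].sem_id == sem_id)
            return false;                   /* already held elsewhere */
    store[thread].sem_id = sem_id;
    store[thread].locked = true;
    return true;
}

int main(void)
{
    printf("thread 0 lock sem 7: %d\n", sem_lock(0, 7));  /* 1: granted */
    printf("thread 1 lock sem 7: %d\n", sem_lock(1, 7));  /* 0: denied  */
    return 0;
}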
Abstract:
Processor threads in a multi-processor system can concurrently lock multiple semaphores by means of a lock command which includes the semaphore value and a semaphore number. Each processor is allocated two or more addressable semaphore stores, each of which includes a multi-bit field identifying the requested semaphore and a one-bit field identifying the locked status of the requested semaphore. The semaphore number determines which of the allocated semaphore stores is to be used for processing the lock command.
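A short C sketch of the two-stores-per-thread arrangement: the lock command carries the semaphore value plus a store number, so one thread can hold two locks concurrently; all names and field widths are illustrative assumptions.

/* Sketch of multiple addressable semaphore stores per thread. */
#include <stdbool.h>
#include <stdio.h>

#define NTHREADS 2
#define NSTORES  2   /* semaphore stores allocated per thread */

struct sem_store { unsigned sem_val; bool locked; };

static struct sem_store store[NTHREADS][NSTORES];

/* The semaphore number sem_no selects which of the caller's stores
 * records the lock; the scan still covers every store of every thread. */
static bool sem_lock(int thread, unsigned sem_val, int sem_no)
{
    for (int t = 0; t < NTHREADS; t++)
        for (int s = 0; s < NSTORES; s++)
            if (store[t][s].locked && store[t][s].sem_val == sem_val)
                return false;
    store[thread][sem_no] = (struct sem_store){ sem_val, true };
    return true;
}

int main(void)
{
    /* Thread 0 holds two different semaphores concurrently. */
    printf("%d\n", sem_lock(0, 10, 0));   /* 1 */
    printf("%d\n", sem_lock(0, 11, 1));   /* 1 */
    printf("%d\n", sem_lock(1, 10, 0));   /* 0: sem 10 already held */
    return 0;
}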
Abstract:
A system and method for transmitting multiple output messages for a single input message in a system that keeps messages in order by correlating the output messages with the input messages. For each output message, an associated indicator shows whether that output message is the last message being generated for the given input message. This allows multicasting to occur in a system where output is matched to input, by allowing multiple output messages to be associated with a single input message.
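A small C sketch of the last-message indicator: each output carries its input correlator and a flag set only on the final output for that input; the struct and field names are assumptions.

/* Sketch of correlating multicast outputs back to one input. */
#include <stdbool.h>
#include <stdio.h>

struct out_msg {
    unsigned input_id;  /* correlates output back to its input message */
    bool     last;      /* true on the final output for this input     */
};

int main(void)
{
    /* Three outputs multicast from input 42; only the last is flagged,
     * so the ordering logic can retire input 42 afterwards. */
    struct out_msg out[] = {
        { 42, false }, { 42, false }, { 42, true },
    };
    for (unsigned i = 0; i < sizeof out / sizeof out[0]; i++)
        printf("input %u -> output %u%s\n", out[i].input_id, i,
               out[i].last ? " (last: input may be retired)" : "");
    return 0;
}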
Abstract:
A system and method of data flow management, particularly in a multiple-network-processor architecture where a plurality of independent processing units simultaneously process information from different frames of input information. The present invention includes first-in-first-out files that identify the individual frames and correlate each frame with the processor to which it has been assigned, as well as a first-in-first-out file of processed frames for each processor. This allows the frames to be processed independently and then reassembled into the same order in which they were received, without communication between the independent processors. Additionally, the present system supports newly created frames, as well as the concept of flushing the system without regard to frame order, whereby frames are sent out to the network as processing is completed, overriding the mechanism that puts the output frames in the same order in which the input frames were received from the network.
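The C sketch below illustrates the reordering mechanism under simplifying assumptions (fixed arrays stand in for the FIFOs, and each processor's completion FIFO already holds its frames in assignment order): replaying the dispatch FIFO reassembles the frames in arrival order without inter-processor communication.

/* Sketch of the order-restoring FIFOs. */
#include <stdio.h>

#define NPROC   2
#define NFRAMES 4

int main(void)
{
    /* Dispatch FIFO: frame i was assigned to processor assign[i]. */
    int assign[NFRAMES] = { 0, 1, 1, 0 };

    /* Per-processor completion FIFOs; each lists that processor's
     * frames in the order they were assigned, even though in time the
     * two processors finish frames in unpredictable interleavings.   */
    int done[NPROC][NFRAMES] = { { 0, 3 }, { 1, 2 } };
    int head[NPROC] = { 0, 0 };

    /* Drain in arrival order by following the dispatch FIFO; the
     * processors never need to talk to each other. */
    for (int i = 0; i < NFRAMES; i++) {
        int p = assign[i];
        printf("transmit frame %d (from processor %d)\n",
               done[p][head[p]++], p);
    }
    return 0;
}

A flush, as described above, would instead drain each completion FIFO as soon as its frames finish, skipping the dispatch-FIFO replay entirely.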
Abstract:
The ability of network processors to move data to and from dynamic random access memory (DRAM) chips used in computer systems is enhanced in several respects. In one aspect of the invention, two double-data-rate DRAMs are used in parallel to double the bandwidth for increased throughput of data. The movement of data is further improved by the network processor setting four banks of full ‘read’ and four banks of full ‘write’ for every repetition of the DRAM time clock. A scheme for randomized ‘read’ and ‘write’ access by the network processor is disclosed. This scheme is particularly applicable to networks, such as Ethernet, that utilize variable frame sizes.
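As a loosely hedged illustration of randomized bank access, the sketch below spreads accesses across the abstract's four banks using a small mixing hash; the hash itself and the frame-identifier input are pure assumptions, since the abstract does not disclose the scheme's details.

/* Illustrative randomized bank selection across 4 DRAM banks. */
#include <stdio.h>

#define NBANKS 4

/* Derive a pseudo-random bank from a frame identifier so that
 * consecutive accesses rarely collide on the same bank. */
static unsigned pick_bank(unsigned frame_id)
{
    frame_id ^= frame_id >> 3;                    /* mix low bits up   */
    return (frame_id * 2654435761u >> 28) % NBANKS; /* top bits -> bank */
}

int main(void)
{
    for (unsigned f = 0; f < 8; f++)
        printf("frame %u -> bank %u\n", f, pick_bank(f));
    return 0;
}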
Abstract:
A system and method for handling speculative read requests for a memory controller in a computer system are provided. In one example, a method includes the steps of providing a speculative read threshold corresponding to a selected percentage of the total number of reads that can be speculatively issued, and intermixing demand reads and speculative reads in accordance with the speculative read threshold. In another example, a computer system includes a CPU, a memory controller, memory, a bus connecting the CPU, memory controller and memory, circuitry for providing a speculative read threshold corresponding to a selected percentage of the total number of reads that can be speculatively issued, and circuitry for intermixing demand reads and speculative reads in accordance with the speculative read threshold. In another example, a method includes the steps of providing a speculative dispatch time threshold corresponding to a selected percentage of a period of time required to search a cache of the computer system, and intermixing demand reads and speculative reads in accordance with the speculative dispatch time threshold.
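A C sketch of one plausible reading of the threshold mechanism: a read is issued speculatively only while speculative reads stay within the selected percentage of all reads issued; the counters and the may_issue_speculative test are assumptions, not the patented circuitry.

/* Sketch of intermixing demand and speculative reads under a
 * percentage threshold. */
#include <stdbool.h>
#include <stdio.h>

static unsigned total_reads, spec_reads;
static const unsigned spec_threshold_pct = 25;  /* selected percentage */

/* Decide whether the next read may be issued speculatively (before
 * the cache lookup resolves) without pushing the speculative share
 * above the threshold. */
static bool may_issue_speculative(void)
{
    return (spec_reads + 1) * 100 <= (total_reads + 1) * spec_threshold_pct;
}

static void issue_read(bool want_speculative)
{
    bool spec = want_speculative && may_issue_speculative();
    total_reads++;
    if (spec) spec_reads++;
    printf("read %u issued as %s\n", total_reads,
           spec ? "speculative" : "demand");
}

int main(void)
{
    for (int i = 0; i < 8; i++)
        issue_read(true);   /* every request asks to be speculative */
    return 0;
}

With a 25% threshold, two of the eight reads go out speculatively and the rest are issued as demand reads, keeping the speculative share at the configured ceiling.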