Abstract:
A system and method of data flow management, particularly in a multiple-network-processor architecture in which a plurality of independent processing units simultaneously process information from different frames of input information. The present invention includes first-in-first-out (FIFO) files that identify the individual frames and correlate each frame with the processor to which it has been assigned, as well as a FIFO file of processed frames for each processor, so that the frames can be processed independently and then reassembled into the same order in which they were received, without communication between the independent processors. Additionally, the present system supports newly created frames as well as the concept of flushing the system without regard to frame order, whereby frames are sent out to the network as their processing is completed, overriding the mechanism that puts the output frames in the same order as the input frames were received from the network.
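The ordering mechanism can be illustrated with a short sketch; this is not the patented implementation, and the `Dispatcher` class, the round-robin assignment, and the method names are assumptions for illustration. An arrival FIFO records which processor each incoming frame was assigned to, each processor fills its own completion FIFO, and the reassembly step follows the arrival FIFO to emit frames in input order without any inter-processor communication.

```python
from collections import deque

class Dispatcher:
    def __init__(self, num_processors):
        self.arrival_fifo = deque()   # (frame_id, processor_id) in input order
        self.done_fifos = [deque() for _ in range(num_processors)]  # per-processor completed frames
        self.next_proc = 0

    def assign(self, frame_id):
        # Record which processor got the frame, in the order frames arrived.
        proc = self.next_proc
        self.arrival_fifo.append((frame_id, proc))
        self.next_proc = (self.next_proc + 1) % len(self.done_fifos)
        return proc

    def complete(self, proc, frame):
        # A processor finished a frame; queue it in that processor's FIFO.
        self.done_fifos[proc].append(frame)

    def reassemble(self):
        # Emit frames in original input order: the head of arrival_fifo names
        # the processor whose completion FIFO must supply the next output frame.
        out = []
        while self.arrival_fifo:
            frame_id, proc = self.arrival_fifo[0]
            if not self.done_fifos[proc]:
                break                              # next-in-order frame not ready yet
            self.arrival_fifo.popleft()
            out.append(self.done_fifos[proc].popleft())
        return out
```

In a flush, the reassembly step would simply be bypassed and completed frames sent out to the network as they finish, regardless of input order.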
Abstract:
The ability of network processors to move data to and from dynamic random access memory (DRAM) chips used in computer systems is enhanced in several respects. In one aspect of the invention, two double-data-rate DRAMs are used in parallel to double the bandwidth for increased data throughput. The movement of data is further improved by having the network processor set up four banks of full 'read' and four banks of full 'write' for every repetition of the DRAM time clock. A scheme for randomized 'read' and 'write' access by the network processor is also disclosed. This scheme is particularly applicable to networks, such as Ethernet, that utilize variable frame sizes.
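As a loose software illustration only (the patent describes hardware behavior), the sketch below spreads up to four 'read' and four 'write' operations across randomly chosen banks for each repetition of the DRAM clock; the bank counts come from the abstract, while the function name and window model are assumptions.

```python
import random

READ_BANKS = 4
WRITE_BANKS = 4

def schedule_window(read_requests, write_requests):
    """Pick up to four reads and four writes for one DRAM clock repetition,
    assigning each to a randomly chosen bank to avoid systematic bank
    conflicts when frame sizes (e.g. Ethernet) vary."""
    read_banks = random.sample(range(READ_BANKS), min(READ_BANKS, len(read_requests)))
    write_banks = random.sample(range(WRITE_BANKS), min(WRITE_BANKS, len(write_requests)))
    reads = list(zip(read_banks, read_requests))    # (bank, request) pairs issued this window
    writes = list(zip(write_banks, write_requests))
    return reads, writes
```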
Abstract:
A system and method of transmitting multiple output messages from a single input message in a system that keeps messages in order by correlating the output messages with the input messages. For each output message, an associated indicator signals whether that output message is the last message being generated for the given input message. This allows multicasting to occur in a system where output is matched to input, by allowing multiple output messages to be associated with a single input message.
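A minimal sketch of the last-message indicator follows; the message layout and field names are assumptions, not the claimed format. Each output carries the identifier of its originating input message, and only the final copy carries the last-message flag, which tells the ordering logic that the input message has been fully serviced.

```python
from dataclasses import dataclass

@dataclass
class OutputMessage:
    input_id: int    # correlates this output with its originating input message
    dest: str        # destination for this copy
    payload: bytes
    last: bool       # True only on the final output generated for input_id

def multicast(input_id, payload, destinations):
    msgs = []
    for i, dest in enumerate(destinations):
        msgs.append(OutputMessage(
            input_id=input_id,
            dest=dest,
            payload=payload,
            last=(i == len(destinations) - 1),   # last-message indicator
        ))
    return msgs
```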
Abstract:
An ordered semaphore management subsystem and method for use in an application system that includes a plurality of processors competing for shared resources, each of which is controlled by a unique semaphore. The subsystem generates an ordered semaphore field (OSF) corresponding to each processor in a linked list of processors and assigns one of four states to the OSF depending on the position the processor occupies in the linked list of processors competing for the shared resources. The four states are (1) semaphore head (SH); (2) behind semaphore head (BSH); (3) semaphore head behind (SHB); and (4) skip (Skip). Only the SH processor is allocated the semaphore when requested. A processor not in the SH state will be denied the semaphore, even if it is available, to assure sequential access.
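The four OSF states and the grant rule can be sketched as follows; the grant logic shown (allocate only when the requester is the semaphore head) is an interpretation of the abstract, not the patented circuit.

```python
from enum import Enum

class OSF(Enum):
    SH = "semaphore head"
    BSH = "behind semaphore head"
    SHB = "semaphore head behind"
    SKIP = "skip"

def request_semaphore(osf_state, semaphore_free):
    # Only the processor currently marked SH may take the semaphore,
    # even when the semaphore itself is free, so access stays sequential.
    return semaphore_free and osf_state is OSF.SH
```

Here BSH, SHB, and Skip only mark a processor's position in the linked list; grants go exclusively to the semaphore head, which is what enforces sequential access.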
Abstract:
In an ordered semaphore management system, a pending state allows threads not competing for a locked semaphore to bypass one or more threads waiting for that locked semaphore. The number of pending levels determines how many consecutive threads vying for the same locked semaphore can be bypassed. When more than one level is provided, the pending levels are prioritized in the queued order.
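A simplified sketch of the bypass idea follows; the bounded list of pending slots stands in for the pending levels, and the queue layout and function names are assumptions rather than the patented design. Threads stalled on a locked semaphore are parked (up to the configured number of pending levels) so that later threads wanting other semaphores can be serviced, and the parked threads are returned to the front of the queue in their original order.

```python
from collections import deque

def service_queue(queue, locked, pending_levels=2):
    """queue: deque of (thread_id, semaphore_id); locked: set of locked semaphore ids.
    Returns the threads serviced this pass, preserving queued order otherwise."""
    pending = []      # prioritized in queued order, at most `pending_levels` entries
    serviced = []
    while queue:
        thread, sem = queue[0]
        if sem in locked:
            if len(pending) >= pending_levels:
                break                             # cannot bypass any more stalled threads
            pending.append(queue.popleft())       # park it; later threads may pass
        else:
            queue.popleft()
            serviced.append((thread, sem))        # semaphore free: thread proceeds
    # Stalled threads go back to the front of the queue in their original order.
    for item in reversed(pending):
        queue.appendleft(item)
    return serviced
```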
Abstract:
A system and method for handling speculative read requests for a memory controller in a computer system are provided. In one example, a method includes the steps of providing a speculative read threshold corresponding to a selected percentage of the total number of reads that can be speculatively issued, and intermixing demand reads and speculative reads in accordance with the speculative read threshold. In another example, a computer system includes a CPU, a memory controller, memory, a bus connecting the CPU, memory controller and memory, circuitry for providing a speculative read threshold corresponding to a selected percentage of the total number of reads that can be speculatively issued, and circuitry for intermixing demand reads and speculative reads in accordance with the speculative read threshold. In another example, a method includes the steps of providing a speculative dispatch time threshold corresponding to a selected percentage of a period of time required to search a cache of the computer system, and intermixing demand reads and speculative reads in accordance with the speculative dispatch time threshold.
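A minimal sketch of the threshold idea in the first example, assuming a simple running-percentage counter; the class and method names are invented for illustration. Demand reads always issue, while a speculative read issues only if doing so keeps the speculative share of total reads at or below the selected percentage.

```python
class SpeculativeReadGate:
    def __init__(self, threshold_pct):
        self.threshold_pct = threshold_pct   # max % of reads allowed to be speculative
        self.total_reads = 0
        self.speculative_reads = 0

    def issue_read(self, is_demand):
        if is_demand:
            self.total_reads += 1
            return True                      # demand reads always issue
        # Would issuing this speculative read keep us under the threshold?
        projected = 100.0 * (self.speculative_reads + 1) / (self.total_reads + 1)
        if projected <= self.threshold_pct:
            self.total_reads += 1
            self.speculative_reads += 1
            return True
        return False                         # defer or drop the speculative read
```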
Abstract:
A method for handling speculative access requests for a storage device in a computer system is provided. The method includes the steps of providing a speculative access threshold corresponding to a selected percentage of the total number of accesses to be speculatively issued, and intermixing demand accesses and speculative accesses in accordance with the speculative access threshold. In another embodiment, a method for reducing data access latency experienced by a user in a computer network is provided. The method includes the steps of providing a web page comprising a link to a data file stored on a database connected to the computer network, selecting a speculative access threshold corresponding to a selected percentage of data accesses which are to be speculatively provided to the user, and speculatively providing the data file in accordance with the speculative access threshold.
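The second embodiment, speculatively providing a linked data file for a selected percentage of users, might look like the following sketch; the gate, the fetch callback, and the percentage test are assumptions for illustration only.

```python
import random

def maybe_prefetch(link_url, speculative_threshold_pct, fetch_fn):
    """Speculatively fetch the linked data file for the selected percentage of
    page views; otherwise wait for an explicit (demand) request from the user."""
    if random.uniform(0, 100) < speculative_threshold_pct:
        return fetch_fn(link_url)     # speculative access: data ready before the click
    return None                       # fall back to demand access on user request
```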