Abstract:
A mechanism controls a multi-thread processor so that, when a first thread encounters a latency event of a first predefined time interval, temporary control is transferred to an alternate execution thread for the duration of the first predefined time interval and then back to the original thread. The mechanism grants full control to the alternate execution thread when a latency event of a second predefined time interval is encountered. A latency event of the first predefined time interval is termed a short latency event, whereas a latency event of the second time interval is termed a long latency event.
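A minimal C sketch of the control-transfer policy this abstract describes, under assumed names and an illustrative cycle count: a short latency event lends the pipeline to the alternate thread for a fixed interval, while a long latency event hands over full control. Nothing here is taken from the patent's actual implementation.

```c
/* Hypothetical sketch of the latency-driven thread hand-off described above.
 * Thread ids, intervals, and the event model are assumptions for illustration. */
#include <stdio.h>

enum latency { NONE, SHORT_LATENCY, LONG_LATENCY };

static int active = 0;                   /* currently executing thread id      */
static int suspended_until[2] = { 0, 0 };/* cycle at which a lent slot returns */

static void on_latency_event(int thread, enum latency ev, int now)
{
    if (ev == SHORT_LATENCY) {
        /* lend control to the alternate thread for a fixed interval */
        active = 1 - thread;
        suspended_until[thread] = now + 8;   /* illustrative interval */
        printf("cycle %d: thread %d stalls briefly, thread %d runs\n",
               now, thread, active);
    } else if (ev == LONG_LATENCY) {
        /* grant full control; the stalled thread gets no automatic return */
        active = 1 - thread;
        suspended_until[thread] = -1;
        printf("cycle %d: thread %d stalls long, thread %d takes over\n",
               now, thread, active);
    }
}

static void tick(int now)
{
    /* return control when a short-latency interval expires */
    for (int t = 0; t < 2; t++)
        if (suspended_until[t] == now) {
            active = t;
            printf("cycle %d: control returns to thread %d\n", now, t);
        }
}

int main(void)
{
    on_latency_event(0, SHORT_LATENCY, 10);
    for (int c = 11; c <= 20; c++) tick(c);
    on_latency_event(0, LONG_LATENCY, 30);
    return 0;
}
```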
Abstract:
A network processor utilizes protocol processor units (PPUs) to provide instruction communication for the network. Each PPU includes a core language processor (CLP). Each CLP contains general purpose registers and includes a coprocessor that contains scalar registers and array registers. The CLP controls and instructs a plurality of coprocessors that run in parallel with the CLP. Each coprocessor is a specialized hardware assist engine having direct access to the CLP registers and arrays through two sets of interface signals, a coprocessor execution interface and a coprocessor data interface.
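The two interface signal groups named in this abstract can be pictured, very loosely, as two C structs. This is illustrative only; the field names are invented for the sketch and are not the patent's signal names.

```c
#include <stdio.h>

struct coproc_exec_if {      /* coprocessor execution interface (assumed fields) */
    unsigned start;          /* CLP asserts to dispatch an operation             */
    unsigned opcode;         /* selects the hardware-assist function to run      */
    unsigned busy;           /* coprocessor asserts while it runs in parallel    */
};

struct coproc_data_if {      /* coprocessor data interface (assumed fields)      */
    unsigned addr;           /* selects a CLP general purpose register or array entry */
    unsigned wdata;          /* data the coprocessor writes back to the CLP      */
    unsigned rdata;          /* data the coprocessor reads from the CLP          */
    unsigned write_enable;
};

int main(void)
{
    printf("exec interface: %zu bytes, data interface: %zu bytes\n",
           sizeof(struct coproc_exec_if), sizeof(struct coproc_data_if));
    return 0;
}
```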
Abstract:
A method and structure are provided for buffering data packets having a header and a remainder in a network processor system. The network processor system has a processor on a chip and at least one buffer on the chip. Each buffer on the chip is configured to buffer the headers of the packets in a preselected order before execution in the processor, while the remainder of each packet is stored in an external buffer apart from the chip. The method comprises utilizing the header information to identify the location and extent of the remainder of the packet. When the on-chip buffer for headers is full, an entire selected packet is stored in the external buffer; when the on-chip buffer has space, only the header of a selected packet stored in the external buffer is moved to the buffer on the chip.
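A minimal sketch, under assumed buffer sizes and names, of the buffering policy this abstract outlines: headers queue on chip, a full on-chip buffer causes the whole packet to stay in external memory, and the header alone is pulled back on chip once space frees up.

```c
/* Illustrative only; sizes, struct layout, and helper names are assumptions. */
#include <stdio.h>

#define ONCHIP_SLOTS 4

struct header {
    unsigned ext_offset;   /* where the remainder lives in external memory */
    unsigned ext_length;   /* extent of the remainder                      */
};

static struct header onchip[ONCHIP_SLOTS];
static int onchip_count = 0;

/* returns 1 if the header was buffered on chip, 0 if the whole packet
 * had to be kept in the external buffer for now */
static int enqueue_packet(struct header h)
{
    if (onchip_count < ONCHIP_SLOTS) {
        onchip[onchip_count++] = h;
        return 1;
    }
    return 0;
}

/* called when the processor consumes a header and on-chip space opens up */
static void refill_from_external(struct header waiting)
{
    if (onchip_count < ONCHIP_SLOTS)
        onchip[onchip_count++] = waiting;  /* move only the header on chip */
}

int main(void)
{
    for (unsigned i = 0; i < 6; i++) {
        struct header h = { i * 256, 256 };
        printf("packet %u: %s\n", i,
               enqueue_packet(h) ? "header on chip" : "held in external buffer");
    }
    onchip_count--;                        /* one header consumed */
    refill_from_external((struct header){ 4 * 256, 256 });
    printf("on-chip headers now: %d\n", onchip_count);
    return 0;
}
```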
Abstract:
Systems and methods for distributing thread instructions in the pipeline of a multi-threading digital processor are disclosed. More particularly, hardware and software are disclosed for successively selecting threads in an ordered sequence for execution in the processor pipeline. If a thread to be selected cannot execute, then a complementary thread is selected for execution.
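A hedged sketch of the selection rule described here: threads are tried in a fixed order each cycle, and if the scheduled thread cannot execute, its complementary thread is chosen instead. The specific thread pairing (0-1, 2-3) is an assumption of the sketch, not something the abstract specifies.

```c
#include <stdio.h>
#include <stdbool.h>

#define NTHREADS 4

/* stand-in for the real "can this thread issue?" test */
static bool ready[NTHREADS] = { true, false, true, true };

static int select_thread(int cycle)
{
    int primary    = cycle % NTHREADS;   /* ordered sequence             */
    int complement = primary ^ 1;        /* assumed pairing: 0-1 and 2-3 */
    if (ready[primary])    return primary;
    if (ready[complement]) return complement;
    return -1;                           /* no thread can issue          */
}

int main(void)
{
    for (int cycle = 0; cycle < 8; cycle++)
        printf("cycle %d -> thread %d\n", cycle, select_thread(cycle));
    return 0;
}
```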
Abstract:
Access arbiters are used to prioritize read and write access requests to individual memory banks in DRAM memory devices, particularly fast cycle DRAMs. This serves to optimize the memory bandwidth available for the read and the write operations by avoiding consecutive accesses to the same memory bank and by minimizing dead cycles. The arbiter first divides DRAM accesses into write accesses and read accesses. The access requests are divided into accesses per memory bank with a threshold limit imposed on the number of accesses to each memory bank. Received write packets are rotated among the banks based on the write queue status. The status of the write queue for each memory bank may also be used for system flow control. The arbiter also typically includes the ability to determine access windows based on the status of the command queues, and to perform arbitration on each access window.
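An illustrative arbiter sketch under assumed queue sizes and thresholds: requests are split into per-bank read and write queues, a per-bank threshold caps grants within one access window, and the same bank is never granted on consecutive cycles. This is a simplified model of the behavior described, not the patent's arbiter.

```c
#include <stdio.h>

#define NBANKS 4
#define BANK_THRESHOLD 2   /* max grants per bank per access window (assumed) */

static int read_q[NBANKS]  = { 3, 0, 1, 2 };  /* pending reads per bank  */
static int write_q[NBANKS] = { 1, 2, 0, 1 };  /* pending writes per bank */

static int grants_this_window[NBANKS];
static int last_bank = -1;

/* pick the next bank for the given queue set, rotating among banks,
 * avoiding back-to-back grants to one bank, and honoring the threshold */
static int arbitrate(int *q)
{
    for (int i = 1; i <= NBANKS; i++) {
        int b = (last_bank + i) % NBANKS;
        if (q[b] > 0 && b != last_bank &&
            grants_this_window[b] < BANK_THRESHOLD) {
            q[b]--;
            grants_this_window[b]++;
            last_bank = b;
            return b;
        }
    }
    return -1;   /* dead cycle: nothing eligible */
}

int main(void)
{
    /* one access window of reads followed by one of writes */
    for (int i = 0; i < 4; i++)
        printf("read grant -> bank %d\n", arbitrate(read_q));
    for (int b = 0; b < NBANKS; b++)
        grants_this_window[b] = 0;
    for (int i = 0; i < 4; i++)
        printf("write grant -> bank %d\n", arbitrate(write_q));
    return 0;
}
```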
Abstract:
A pipeline configuration is described for use in network traffic management for the hardware scheduling of events arranged in a hierarchical linkage. The configuration reduces costs by minimizing the use of external SRAM memory devices. This results in some external memory devices being shared by different types of control blocks, such as flow queue control blocks, frame control blocks and hierarchy control blocks. Both SRAM and DRAM memory devices are used, depending on the content of the control block (Read-Modify-Write or ‘read’ only) at enqueue and dequeue, or Read-Modify-Write solely at dequeue. The scheduler utilizes time-based calendars and weighted fair queueing calendars in the egress calendar design. Control blocks that are accessed infrequently are stored in DRAM memory while those accessed frequently are stored in SRAM.
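A sketch only: one way to express the placement rule this abstract describes, where control blocks touched on every enqueue/dequeue live in SRAM and infrequently accessed ones live in DRAM. The block names are taken from the abstract; the access-frequency measure and threshold are assumptions.

```c
#include <stdio.h>

enum memory { SRAM, DRAM };

struct control_block {
    const char *name;
    int accesses_per_frame;   /* assumed measure of access frequency */
};

static enum memory place(const struct control_block *cb)
{
    /* frequently accessed -> SRAM, infrequently accessed -> DRAM */
    return cb->accesses_per_frame >= 1 ? SRAM : DRAM;
}

int main(void)
{
    struct control_block blocks[] = {
        { "flow queue control block", 2 },
        { "frame control block",      1 },
        { "hierarchy control block",  0 },   /* rarely touched (assumed) */
    };
    for (unsigned i = 0; i < sizeof blocks / sizeof blocks[0]; i++)
        printf("%s -> %s\n", blocks[i].name,
               place(&blocks[i]) == SRAM ? "SRAM" : "DRAM");
    return 0;
}
```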
Abstract:
The present invention relates to a method and system for storing a plurality of multi-field classification rules in a computer system. Each multi-field classification rule includes a rule specification that itself includes a plurality of fields and a plurality of field definitions corresponding to the fields. The method of the present invention includes providing a virtual rule table, where the table stores a plurality of field definitions, and for each of the plurality of multi-field classification rules, compressing the rule specification by replacing at least one field definition with an associated index into the virtual rule table. The method also includes storing each of the compressed rule specifications and the virtual rule table in a shared segment of memory.
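A minimal sketch, with invented field definitions, of the compression idea in this abstract: repeated field definitions are interned once in a shared "virtual rule table", and each rule specification is stored as indices into that table.

```c
#include <stdio.h>
#include <string.h>

#define MAX_DEFS    16
#define RULE_FIELDS 3

/* shared virtual rule table: each entry is one field definition */
static char vrt[MAX_DEFS][32];
static int  vrt_count = 0;

/* return the index of a field definition, adding it if not yet present */
static int intern(const char *def)
{
    for (int i = 0; i < vrt_count; i++)
        if (strcmp(vrt[i], def) == 0) return i;
    strncpy(vrt[vrt_count], def, sizeof vrt[0] - 1);
    return vrt_count++;
}

struct compressed_rule { int field_index[RULE_FIELDS]; };

static struct compressed_rule compress(const char *defs[RULE_FIELDS])
{
    struct compressed_rule r;
    for (int f = 0; f < RULE_FIELDS; f++)
        r.field_index[f] = intern(defs[f]);
    return r;
}

int main(void)
{
    const char *rule1[] = { "src=10.0.0.0/8", "dst=any",         "proto=tcp" };
    const char *rule2[] = { "src=10.0.0.0/8", "dst=192.168.1.1", "proto=tcp" };
    struct compressed_rule a = compress(rule1), b = compress(rule2);
    printf("rule1 -> [%d %d %d], rule2 -> [%d %d %d], table entries: %d\n",
           a.field_index[0], a.field_index[1], a.field_index[2],
           b.field_index[0], b.field_index[1], b.field_index[2], vrt_count);
    return 0;
}
```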
Abstract:
A technique is provided to delete a leaf from a Patricia tree having a direct table and a plurality of PSCBs which decode portions of the pattern of a leaf in the tree, without shutting down the functioning of the tree. A leaf having a pattern is identified as a leaf to be deleted. Using the pattern, the tree is walked to identify the location of the leaf to be deleted. The leaf to be deleted is identified and deleted, and any relevant PSCB is modified, if necessary. The technique is also applicable to deleting a prefix of a prefix.
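A simplified sketch of the deletion walk: the leaf's own pattern is used to walk through PSCB-like test nodes to locate it, after which the parent test node is collapsed so the sibling subtree takes its place. This is a plain binary-trie approximation, not the patent's exact direct-table and PSCB structures.

```c
#include <stdio.h>
#include <stdlib.h>

struct node {
    int is_leaf;
    unsigned pattern;        /* valid for leaves               */
    int test_bit;            /* valid for PSCB-like test nodes */
    struct node *child[2];
};

static struct node *mk_leaf(unsigned p)
{
    struct node *n = calloc(1, sizeof *n);
    n->is_leaf = 1; n->pattern = p; return n;
}

static struct node *mk_pscb(int bit, struct node *l, struct node *r)
{
    struct node *n = calloc(1, sizeof *n);
    n->test_bit = bit; n->child[0] = l; n->child[1] = r; return n;
}

/* walk with the pattern; when the target leaf is found, splice out its
 * parent test node so the sibling subtree takes the parent's place */
static struct node *delete_leaf(struct node *root, unsigned pattern)
{
    if (root == NULL) return NULL;
    if (root->is_leaf)
        return root->pattern == pattern ? NULL : root;
    int dir = (pattern >> root->test_bit) & 1;
    struct node *child = root->child[dir];
    if (child && child->is_leaf && child->pattern == pattern) {
        struct node *sibling = root->child[1 - dir];
        free(child); free(root);
        return sibling;                 /* parent node modified (collapsed) */
    }
    root->child[dir] = delete_leaf(child, pattern);
    return root;
}

int main(void)
{
    /* tiny tree: bit 1 distinguishes patterns 0b00/0b01 from 0b10 */
    struct node *root = mk_pscb(1, mk_pscb(0, mk_leaf(0), mk_leaf(1)),
                                   mk_leaf(2));
    root = delete_leaf(root, 1);
    printf("left child is now %s\n",
           root->child[0]->is_leaf ? "a leaf" : "a test node");
    return 0;
}
```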
Abstract:
The present invention relates to a method and system for compressing a tree structure. The method of the present invention includes providing a compressed format block for representing a plurality of levels of the tree structure, where the plurality of levels comprises a set of nodes. The method also includes compressing each node in the set of nodes into the compressed format block, such that the plurality of levels is traversed in a single memory access.
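A hedged sketch of the idea above: two levels of a binary decision tree (three test nodes) are packed into one compressed format block, so the walk over both levels needs only a single fetch of that block. The field layout and node arity are assumptions of the sketch.

```c
#include <stdio.h>

/* one block = the root test plus its two children; four possible exits */
struct compressed_block {
    int root_bit;            /* bit tested at level 1           */
    int child_bit[2];        /* bits tested at level 2          */
    int exit_index[4];       /* where each of the 4 paths leads */
};

/* traverse both packed levels using the single block already fetched */
static int traverse(const struct compressed_block *blk, unsigned key)
{
    int d0 = (key >> blk->root_bit) & 1;
    int d1 = (key >> blk->child_bit[d0]) & 1;
    return blk->exit_index[d0 * 2 + d1];
}

int main(void)
{
    struct compressed_block blk = {
        .root_bit = 3, .child_bit = { 1, 0 },
        .exit_index = { 10, 11, 12, 13 },
    };
    printf("key 0x5 -> exit %d\n", traverse(&blk, 0x5));
    return 0;
}
```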