Queue manager for a buffer
    3.
    Invention grant (expired)

    Publication No.: US06557053B1

    Publication Date: 2003-04-29

    Application No.: US09477179

    Filing Date: 2000-01-04

    IPC Classification: G06F13/14

    CPC Classification: G06F13/1673

    Abstract: A bandwidth-conserving queue manager for a FIFO buffer is provided, preferably on an ASIC chip and preferably including separate DRAM storage that maintains a FIFO queue which can extend beyond the data storage space of the FIFO buffer to provide additional data storage space as needed. FIFO buffers are used on the ASIC chip to store and retrieve multiple queue entries. As long as the total size of the queue does not exceed the storage available in the buffers, no additional data storage is needed. However, when some predetermined amount of the buffer storage space in the FIFO buffers is exceeded, data are written to and read from the additional data storage, preferably in packets of optimum size for maintaining peak performance of the data storage device, and the packets are written to the data storage device in such a way that they are queued in a first-in, first-out (FIFO) sequence of addresses. Preferably, the data are written to and read from the DRAM in burst mode.

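    The spill-over scheme in the abstract can be modelled in a few dozen lines of software. The sketch below is illustrative only and is not the patented hardware: the names and sizes (spill_queue_t, ONCHIP_CAPACITY, SPILL_THRESHOLD, DRAM_SLOTS) are assumptions, and a single array store stands in for a burst-mode DRAM access.

        /* Entries stay in a small on-chip FIFO until a threshold is reached;
         * beyond that they spill to a larger backing store, and dequeues
         * refill the on-chip FIFO from that store, preserving FIFO order.  */
        #include <stdint.h>

        #define ONCHIP_CAPACITY 64        /* illustrative on-chip FIFO depth     */
        #define SPILL_THRESHOLD 48        /* spill once this many entries queued */
        #define DRAM_SLOTS      4096      /* stand-in for the external DRAM      */

        typedef struct {
            uint32_t onchip[ONCHIP_CAPACITY];
            int      head, count;             /* circular on-chip FIFO           */
            uint32_t dram[DRAM_SLOTS];        /* overflow area, FIFO-ordered     */
            int      dram_head, dram_count;
        } spill_queue_t;

        static void spillq_enqueue(spill_queue_t *q, uint32_t entry)
        {
            /* Fast path: keep the entry on chip while no overflow exists. */
            if (q->dram_count == 0 && q->count < SPILL_THRESHOLD) {
                q->onchip[(q->head + q->count) % ONCHIP_CAPACITY] = entry;
                q->count++;
                return;
            }
            /* Slow path: append to the overflow area.  In hardware this write
             * would be gathered into a burst of optimum size for the DRAM.    */
            q->dram[(q->dram_head + q->dram_count) % DRAM_SLOTS] = entry;
            q->dram_count++;
        }

        static int spillq_dequeue(spill_queue_t *q, uint32_t *entry)
        {
            if (q->count == 0)
                return 0;                     /* whole queue is empty            */
            *entry = q->onchip[q->head];
            q->head = (q->head + 1) % ONCHIP_CAPACITY;
            q->count--;
            /* Refill the on-chip FIFO from the overflow area (burst reads). */
            while (q->dram_count > 0 && q->count < ONCHIP_CAPACITY) {
                q->onchip[(q->head + q->count) % ONCHIP_CAPACITY] = q->dram[q->dram_head];
                q->dram_head = (q->dram_head + 1) % DRAM_SLOTS;
                q->dram_count--;
                q->count++;
            }
            return 1;
        }

    Because enqueues bypass the on-chip buffer whenever the overflow area is non-empty, the two regions together always form a single first-in, first-out sequence.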

    Cycle saving technique for managing linked lists
    4.
    Invention grant (expired)

    Publication No.: US06584518B1

    Publication Date: 2003-06-24

    Application No.: US09479751

    Filing Date: 2000-01-07

    IPC Classification: G06F13/14

    CPC Classification: G06F12/023

    Abstract: A method and system for queueing data within a data storage device that includes a set of storage blocks, each having an address, a pointer field, and a data field. This set of storage blocks comprises a linked list of associated storage blocks as well as a free pool of available storage blocks. The storage device further includes a tail register for tracking an empty tail block through which data objects are enqueued into the linked list. A request to enqueue a data object into the linked list is received within the data storage system. In response to the enqueue request, an available storage block is selected from the free pool and associated with the tail register. A single write operation is then required to write the data object into the data field of the current tail block and to write the address of the selected storage block into the pointer field of the current tail block, such that the selected storage block becomes the new tail block to which the tail register points.

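    The single-write enqueue can be illustrated with a short sketch. It is a software model under assumed names (llq_t, block_t, alloc_from_pool), not the patented implementation; the key point is that the tail register always designates an empty block, so one write to that block supplies both its data field and its pointer field.

        #include <stdint.h>
        #include <stddef.h>

        typedef struct block {
            uint32_t      data;       /* data field                               */
            struct block *next;       /* pointer field                            */
        } block_t;

        typedef struct {
            block_t *head;            /* oldest occupied block in the linked list */
            block_t *tail;            /* "tail register": always an empty block   */
            block_t *free_pool;       /* free pool of available storage blocks    */
        } llq_t;

        static block_t *alloc_from_pool(llq_t *q)
        {
            block_t *b = q->free_pool;    /* assumed non-empty for the sketch     */
            q->free_pool = b->next;
            b->next = NULL;
            return b;
        }

        static void llq_enqueue(llq_t *q, uint32_t value)
        {
            block_t *fresh = alloc_from_pool(q);  /* becomes the new empty tail   */

            /* In the patented scheme this is a single write covering both fields
             * of the current (empty) tail block; in C it is two stores to it.   */
            q->tail->data = value;
            q->tail->next = fresh;

            /* The tail register now points at the new empty block, which will
             * receive the next enqueued object.                                 */
            q->tail = fresh;
        }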

    Method and system for network data flow management with improved completion unit
    5.
    Invention grant (expired)

    Publication No.: US06633920B1

    Publication Date: 2003-10-14

    Application No.: US09479028

    Filing Date: 2000-01-07

    IPC Classification: G06F13/00

    CPC Classification: H04L29/06 H04L69/18 H04L69/22

    Abstract: A system and method of data flow management, particularly in a multiple network processor architecture where a plurality of independent processing units simultaneously process information from different frames of input information. The present invention includes first-in-first-out files that identify the individual frames and correlate each frame with the processor to which it has been assigned for processing, as well as a first-in-first-out file of processed frames for each processor. This allows the frames to be processed independently and then reassembled into the same order in which they were received, without communication between the independent processors. Additionally, the present system supports newly created frames as well as the concept of flushing the system without regard to frame order, whereby frames are sent out to the network as their processing is completed, overriding the mechanism that puts the output frames into the same order in which the input frames were received from the network.

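    The reordering mechanism can be sketched with one "order" FIFO plus one completion FIFO per processor. This is a simplified software model under assumed names (dispatch, complete, next_in_order, NPROC, DEPTH); frame ids stand in for the frames themselves.

        #define NPROC  4                   /* number of independent processors      */
        #define DEPTH 64                   /* illustrative FIFO depth               */

        typedef struct { int buf[DEPTH]; int head, count; } fifo_t;

        static void fifo_push(fifo_t *f, int v)
        {
            f->buf[(f->head + f->count) % DEPTH] = v;
            f->count++;
        }

        static int fifo_pop(fifo_t *f)
        {
            int v = f->buf[f->head];
            f->head = (f->head + 1) % DEPTH;
            f->count--;
            return v;
        }

        static fifo_t order_fifo;          /* processor id per frame, arrival order */
        static fifo_t completed[NPROC];    /* finished frame ids, per processor     */

        void dispatch(int proc)            /* an input frame is assigned to `proc`  */
        {
            fifo_push(&order_fifo, proc);
        }

        void complete(int proc, int frame_id)  /* `proc` finished one of its frames */
        {
            fifo_push(&completed[proc], frame_id);
        }

        /* Emit frames in arrival order: the order FIFO names the processor that
         * holds the next frame, so no processor-to-processor communication is
         * needed.  A "flush" mode would simply drain the completion FIFOs as
         * frames finish, ignoring the order FIFO.                                */
        int next_in_order(int *frame_id)
        {
            if (order_fifo.count == 0)
                return 0;
            int proc = order_fifo.buf[order_fifo.head];
            if (completed[proc].count == 0)
                return 0;                  /* the next-in-order frame is not done  */
            fifo_pop(&order_fifo);
            *frame_id = fifo_pop(&completed[proc]);
            return 1;
        }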

    Full match (FM) search algorithm implementation for a network processor
    7.
    Invention grant (expired)

    Publication No.: US07139753B2

    Publication Date: 2006-11-21

    Application No.: US10650327

    Filing Date: 2003-08-28

    IPC Classification: G06F7/00 H04L12/28

    Abstract: Novel data structures, methods and apparatus for finding a full match between a search pattern and a pattern stored in a leaf of a search tree. A key is input, a hash function is performed on the key, a direct table (DT) is accessed, and a tree is walked through pattern search control blocks (PSCBs) until a leaf is reached. The search mechanism uses a set of data structures that can be located in a few registers and regular memory, and then used to build a Patricia tree structure that can be manipulated by a relatively simple hardware macro. Both the keys and the corresponding information needed for retrieval are stored in the Patricia tree structure. The hash function provides an n->n mapping of the bits of the key to the bits of the hash key. The data structure that is used to store the hash key and the related information in the tree is called a leaf. Each leaf corresponds to a single key that matches exactly with the input key. The leaf contains the key as well as additional information. The length of the leaf is programmable, as is the length of the key. The leaf is stored in random access memory and is implemented as a single memory entry. If the key is located in the direct table, then it is called a direct leaf.

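    A simplified software model of the lookup path is sketched below. The structure names and the hash function are assumptions (pscb_t, leaf_t, dt_entry_t, hash_key), details such as programmable key and leaf lengths are omitted, and the final compare is done against the original key rather than the hashed key.

        #include <stdint.h>
        #include <stddef.h>

        #define DT_BITS 8                        /* direct table indexed by 8 hash bits */

        typedef struct {
            uint32_t key;                        /* full pattern stored in the leaf     */
            uint32_t info;                       /* information returned on a match     */
        } leaf_t;

        typedef struct pscb {                    /* pattern search control block        */
            int bit;                             /* which bit of the hashed key to test */
            struct pscb *child[2];               /* next PSCB on each branch, or NULL   */
            leaf_t      *leaf[2];                /* or a leaf terminating that branch   */
        } pscb_t;

        typedef struct { pscb_t *node; leaf_t *direct_leaf; } dt_entry_t;
        static dt_entry_t direct_table[1 << DT_BITS];

        static uint32_t hash_key(uint32_t key)   /* illustrative n->n mapping           */
        {
            key ^= key >> 16;
            key *= 0x45d9f3bu;
            key ^= key >> 16;
            return key;
        }

        leaf_t *full_match(uint32_t key)
        {
            uint32_t   h = hash_key(key);
            dt_entry_t e = direct_table[h & ((1u << DT_BITS) - 1u)];

            if (e.direct_leaf != NULL)           /* key resolved in the direct table    */
                return e.direct_leaf->key == key ? e.direct_leaf : NULL;

            for (pscb_t *n = e.node; n != NULL; ) {   /* walk the Patricia tree         */
                int b = (int)((h >> n->bit) & 1u);
                if (n->leaf[b] != NULL)          /* leaf reached: confirm the full match */
                    return n->leaf[b]->key == key ? n->leaf[b] : NULL;
                n = n->child[b];
            }
            return NULL;                         /* no full match for this key          */
        }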

    Method and apparatus for processing frame classification information between network processors
    8.
    Invention grant (expired)

    Publication No.: US07106730B1

    Publication Date: 2006-09-12

    Application No.: US09546833

    Filing Date: 2000-04-11

    IPC Classification: H04L12/56

    CPC Classification: H04L49/30

    Abstract: A network device including an ingress processor and an egress processor, which receives frames of data over the network on an input port and transfers them to an appropriate output port. A received frame is processed by the ingress processor, which prepares an intra-switch frame for delivery to the egress processor serving the relevant output port of the switch. The intra-switch frame includes a frame header carrying parameters that have been determined by the ingress processor, as well as data indicating an address at which the egress processor should begin processing the frame. By identifying to the egress processor the processing that has already taken place, the egress processor is relieved of any redundant processing of the frame. The egress processor provides a hardware frame classifier that decodes the information contained in the intra-switch frame header to derive the previously computed parameters as well as a starting address for the egress processor. By reducing the amount of redundant processing in the egress processor, total device throughput delay is reduced.

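    The idea can be illustrated with an assumed layout for the intra-switch frame header. The field names and sizes below (intra_switch_hdr_t, target_port, flow_class, l3_offset, start_address) are hypothetical, not taken from the patent; the point is that the egress side dispatches directly on ingress-supplied results instead of re-classifying the frame.

        #include <stdint.h>
        #include <stddef.h>

        typedef struct {
            uint16_t target_port;      /* output port chosen at ingress              */
            uint16_t flow_class;       /* classification result carried forward      */
            uint16_t l3_offset;        /* where the network-layer header begins      */
            uint16_t start_address;    /* entry point at which egress code starts    */
        } intra_switch_hdr_t;          /* prepended to the frame by the ingress side */

        typedef struct {
            intra_switch_hdr_t hdr;
            uint8_t frame[2048];       /* the frame itself, carried unchanged        */
        } intra_switch_frame_t;

        typedef void (*egress_fn)(const intra_switch_frame_t *);

        static void egress_unicast(const intra_switch_frame_t *f)   { (void)f; /* ... */ }
        static void egress_multicast(const intra_switch_frame_t *f) { (void)f; /* ... */ }

        static const egress_fn egress_routines[] = { egress_unicast, egress_multicast };

        /* Egress side: the hardware classifier has already decoded `hdr`, so none
         * of the ingress classification work is repeated; processing simply
         * begins at the entry point the ingress processor supplied.               */
        void egress_process(const intra_switch_frame_t *f)
        {
            size_t idx = f->hdr.start_address;
            if (idx < sizeof egress_routines / sizeof egress_routines[0])
                egress_routines[idx](f);
        }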

    Method and system for network processor scheduling outputs based on multiple calendars
    9.
    Invention grant (expired)

    Publication No.: US06862292B1

    Publication Date: 2005-03-01

    Application No.: US09548910

    Filing Date: 2000-04-13

    IPC Classification: H04L12/56 H04Q11/04 H04L12/28

    Abstract: A system and method of moving information units from a network processor toward a data transmission network in a prioritized sequence that accommodates several different levels of service. The present invention includes a method and system for scheduling the egress of processed information units (or frames) from a network processing unit according to stored priorities associated with the various sources of the information units. The priorities in the preferred embodiment include a low-latency service, a minimum bandwidth, weighted fair queueing, and a mechanism for preventing a user from continuing to exceed his service levels over an extended period. The present invention includes a plurality of calendars with different service rates to allow a user to select the service rate he desires. If a customer has chosen a high bandwidth for service, the customer will be included in a calendar that is serviced more often than if the customer had chosen a lower bandwidth.

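    A minimal model of the multiple-calendar idea is sketched below, under assumed names and rates (calendar_t, NCAL, SLOTS, the period values): each calendar is a ring of slots holding flow ids, and a flow that purchased more bandwidth sits in a calendar that is scanned more often.

        #include <stdint.h>

        #define NCAL    3
        #define SLOTS 512

        typedef struct {
            int      slot[SLOTS];      /* flow id (> 0) queued in each slot, 0 = empty */
            uint32_t period;           /* ticks between scans of this calendar         */
            uint32_t next_due;         /* next tick at which it may be scanned         */
            uint32_t current;          /* current scan position                        */
        } calendar_t;

        /* One calendar per service rate: a customer who purchased more bandwidth
         * is attached to a calendar that is serviced more often.                  */
        static calendar_t cal[NCAL] = {
            { .period = 1  },          /* low latency / highest rate                   */
            { .period = 4  },          /* guaranteed minimum bandwidth                 */
            { .period = 16 },          /* best effort                                  */
        };

        void attach_flow(int calendar, uint32_t offset, int flow_id)
        {
            cal[calendar].slot[(cal[calendar].current + offset) % SLOTS] = flow_id;
        }

        /* Pick the next flow to serve at time `now`: calendars are visited in
         * priority order, but each one only after its service interval elapses.
         * (A real scheduler would also move the chosen flow to a later slot
         * based on its rate; that step is omitted here.)                          */
        int schedule(uint32_t now)
        {
            for (int c = 0; c < NCAL; c++) {
                if (now < cal[c].next_due)
                    continue;
                cal[c].next_due = now + cal[c].period;
                for (int i = 0; i < SLOTS; i++) {
                    int s = (int)((cal[c].current + (uint32_t)i) % SLOTS);
                    if (cal[c].slot[s] != 0) {
                        cal[c].current = (uint32_t)((s + 1) % SLOTS);
                        return cal[c].slot[s];   /* flow id to transmit next          */
                    }
                }
            }
            return 0;                            /* nothing eligible at this tick     */
        }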