Method and structure for enqueuing data packets for processing
    11.
    Invention application (Expired)

    Publication number: US20060039376A1

    Publication date: 2006-02-23

    Application number: US10868725

    Filing date: 2004-06-15

    IPC classification: H04L12/56 H04L12/28

    Abstract: A method and structure are provided for buffering data packets having a header and a remainder in a network processor system. The network processor system has a processor on a chip and at least one buffer on the chip. Each on-chip buffer is configured to buffer the headers of packets in a preselected order before execution in the processor, while the remainder of each packet is stored in an external buffer apart from the chip. The method comprises utilizing the header information to identify the location and extent of the remainder of the packet. The entire selected packet is stored in the external buffer when the on-chip buffer for its header is full, and only the header of a selected packet stored in the external buffer is moved to the on-chip buffer when the on-chip buffer has space for it.

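    To make the header/remainder split concrete, the following C sketch models an on-chip header queue whose entries point at packet remainders held in external memory. The fixed header size, the queue depth, and all structure and function names are illustrative assumptions, not details taken from the patent.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define HDR_BYTES    64          /* assumed fixed header size kept on chip    */
#define ONCHIP_SLOTS 8           /* assumed depth of the on-chip header queue */

/* Header kept on chip; the location/extent fields point into external memory. */
typedef struct {
    uint8_t  header[HDR_BYTES];
    uint32_t ext_offset;         /* where the remainder (or whole packet) lives */
    uint32_t ext_length;         /* extent of the data held externally          */
} header_slot_t;

typedef struct {
    header_slot_t slot[ONCHIP_SLOTS];
    unsigned head, tail, count;  /* simple FIFO bookkeeping */
} onchip_hdr_queue_t;

/* Enqueue: keep the header on chip if there is room; otherwise the caller
 * stores the entire packet (header included) in the external buffer.       */
bool enqueue_packet(onchip_hdr_queue_t *q,
                    const uint8_t *hdr, uint32_t ext_offset, uint32_t ext_length)
{
    if (q->count == ONCHIP_SLOTS)
        return false;            /* on-chip buffer full: spill whole packet */

    header_slot_t *s = &q->slot[q->tail];
    memcpy(s->header, hdr, HDR_BYTES);
    s->ext_offset = ext_offset;
    s->ext_length = ext_length;

    q->tail = (q->tail + 1) % ONCHIP_SLOTS;
    q->count++;
    return true;
}

/* When a slot frees up, only the header of an externally stored packet is
 * copied back on chip; the remainder stays in the external buffer.         */
void refill_from_external(onchip_hdr_queue_t *q,
                          const uint8_t *ext_header,
                          uint32_t ext_offset, uint32_t ext_length)
{
    if (q->count < ONCHIP_SLOTS)
        (void)enqueue_packet(q, ext_header, ext_offset, ext_length);
}
```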

    Apparatus and method for efficiently modifying network data frames

    Publication number: US20060146881A1

    Publication date: 2006-07-06

    Application number: US11030344

    Filing date: 2005-01-06

    IPC classification: H04J3/00

    Abstract: Apparatus and method for storing network frame data that is to be modified. A plurality of buffers stores the network data, which is arranged in a data structure identified by a frame control block and buffer control blocks. A plurality of buffer control blocks, one associated with each buffer storing frame data, establishes a sequence of the buffers. Each buffer control block has data identifying the subsequent buffer within the sequence. The first buffer is identified by a field of the frame control block, as are the beginning and ending addresses of the frame data. The frame data can be modified by altering the buffer control block and/or frame control block contents, without having to copy or rewrite the data in memory.
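
    The frame control block / buffer control block linkage lends itself to a short C sketch: a frame is a linked chain of buffer control blocks, and editing their fields modifies the frame without touching the data itself. The field names and the front-trim operation shown are assumptions chosen for illustration.

```c
#include <stddef.h>
#include <stdint.h>

/* Buffer control block: one per data buffer, links to the next buffer in
 * the frame and records which bytes of that buffer are valid.             */
typedef struct bcb {
    struct bcb *next;        /* next buffer in the sequence, NULL at the end */
    uint16_t    start;       /* first valid byte within the buffer           */
    uint16_t    end;         /* last valid byte within the buffer            */
} bcb_t;

/* Frame control block: identifies the first buffer and the overall extent. */
typedef struct {
    bcb_t   *first;
    uint32_t frame_len;
} fcb_t;

/* Strip 'n' bytes from the front of the frame (e.g. removing a header)
 * purely by editing control blocks; the frame data itself is not copied.  */
void frame_strip_front(fcb_t *fcb, uint32_t n)
{
    while (n > 0 && fcb->first != NULL) {
        bcb_t   *b     = fcb->first;
        uint32_t avail = (uint32_t)(b->end - b->start + 1);

        if (n < avail) {
            b->start       += (uint16_t)n;
            fcb->frame_len -= n;
            return;
        }
        /* whole buffer consumed: unlink it from the sequence */
        fcb->first      = b->next;
        fcb->frame_len -= avail;
        n              -= avail;
    }
}
```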

    Systems and methods for rate-limited weighted best effort scheduling

    Publication number: US20060245443A1

    Publication date: 2006-11-02

    Application number: US11119329

    Filing date: 2005-04-29

    IPC classification: H04L12/28 G01R31/08

    Abstract: Systems and methods for scheduling data packets in a network processor are disclosed. Embodiments provide a network processor that comprises a best-effort scheduler with a minimal calendar structure for addressing schedule control blocks. In one embodiment, a four-entry calendar structure provides rate-limited weighted best effort scheduling. Each of a plurality of different flows has associated schedule control blocks. Schedule control blocks are stored as linked lists in a last-in-first-out buffer. Each calendar entry is associated with a different linked list by storing in the calendar entry the address of the first-out schedule control block in the linked list. Each schedule control block has a counter and is assigned a rate limit according to the bandwidth priority of the flow to which the corresponding packet belongs. Each time a schedule control block is accessed from the last-in-first-out buffer storing the linked list, the scheduler generates a scheduling event and the counter of the schedule control block is incremented. When the incremented counter of a schedule control block equals its rate limit, the schedule control block is temporarily removed from further scheduling until a time interval concludes.
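
    A minimal C sketch of this scheduling loop, under the assumption of a four-entry calendar whose entries point at LIFO lists of schedule control blocks; the names, the parked list, and the rollover handling are illustrative, not taken from the patent.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define CALENDAR_ENTRIES 4    /* the minimal calendar described in the abstract */

/* Schedule control block: one per flow, kept in a LIFO linked list whose
 * first-out element's address is stored in a calendar entry.              */
typedef struct scb {
    struct scb *next;
    uint32_t    flow_id;
    uint32_t    count;        /* scheduling events in the current interval */
    uint32_t    rate_limit;   /* events allowed per interval (weight)      */
} scb_t;

typedef struct {
    scb_t *head[CALENDAR_ENTRIES];   /* each entry addresses a LIFO list */
    scb_t *parked;                   /* SCBs that have hit their rate limit */
} scheduler_t;

/* Access the first-out SCB of one calendar entry, emit a scheduling event,
 * bump its counter, and either park it (limit reached) or push it back.   */
void schedule_entry(scheduler_t *s, unsigned entry)
{
    entry %= CALENDAR_ENTRIES;
    scb_t *scb = s->head[entry];
    if (scb == NULL)
        return;
    s->head[entry] = scb->next;

    printf("scheduling event for flow %u\n", scb->flow_id);
    scb->count++;

    if (scb->count >= scb->rate_limit) {   /* removed until the interval ends */
        scb->next = s->parked;
        s->parked = scb;
    } else {                               /* LIFO: push back onto the head   */
        scb->next = s->head[entry];
        s->head[entry] = scb;
    }
}

/* At the end of a time interval, reset counters and return parked SCBs.
 * In practice each SCB would rejoin its own calendar entry; one entry is
 * used here to keep the sketch short.                                      */
void interval_rollover(scheduler_t *s, unsigned entry)
{
    entry %= CALENDAR_ENTRIES;
    while (s->parked != NULL) {
        scb_t *scb = s->parked;
        s->parked  = scb->next;
        scb->count = 0;
        scb->next  = s->head[entry];
        s->head[entry] = scb;
    }
}
```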

    Structure and method for scheduler pipeline design for hierarchical link sharing
    14.
    Invention application (Expired)

    Publication number: US20050177644A1

    Publication date: 2005-08-11

    Application number: US10772737

    Filing date: 2004-02-05

    IPC classification: G06F15/16 H04L12/56

    Abstract: A pipeline configuration is described for use in network traffic management for the hardware scheduling of events arranged in a hierarchical linkage. The configuration reduces costs by minimizing the use of external SRAM memory devices. As a result, some external memory devices are shared by different types of control blocks, such as flow queue control blocks, frame control blocks and hierarchy control blocks. Both SRAM and DRAM memory devices are used, depending on the access required to the control block (Read-Modify-Write or read only) at enqueue and dequeue, or Read-Modify-Write solely at dequeue. The scheduler utilizes time-based calendars and weighted fair queueing calendars in the egress calendar design. Control blocks that are accessed infrequently are stored in DRAM memory, while those accessed frequently are stored in SRAM.

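    The SRAM/DRAM placement decision can be summarized with a small C sketch; the enum values and the placement rule below are an illustrative reading of the abstract, not the patent's actual pipeline design.

```c
#include <stdbool.h>

/* Access behaviour of a control block at enqueue/dequeue time. */
typedef enum {
    ACCESS_RMW_ENQ_AND_DEQ,   /* read-modify-write at both enqueue and dequeue */
    ACCESS_READ_ONLY,         /* read only                                     */
    ACCESS_RMW_DEQ_ONLY       /* read-modify-write solely at dequeue           */
} access_pattern_t;

typedef enum { MEM_SRAM, MEM_DRAM } memory_type_t;

/* Illustrative placement rule: control blocks touched on every enqueue and
 * dequeue, or otherwise accessed frequently, stay in (more expensive) SRAM;
 * infrequently accessed control blocks can live in cheaper DRAM.            */
memory_type_t place_control_block(access_pattern_t pattern, bool accessed_frequently)
{
    if (accessed_frequently || pattern == ACCESS_RMW_ENQ_AND_DEQ)
        return MEM_SRAM;
    return MEM_DRAM;
}
```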

    Method for sharing single data buffer by several packets
    15.
    Invention application (Pending, published)

    Publication number: US20060187963A1

    Publication date: 2006-08-24

    Application number: US11062036

    Filing date: 2005-02-18

    IPC classification: H04J3/24

    Abstract: Methods, computer-readable programs and network processor systems for IP fragmentation and reassembly on network processors comprising a plurality of buffers and buffer control blocks. Each buffer control block comprises a buffer usage field whose value is set responsive to the quantity of frame data fragments in the associated buffer. The network processor system associates a buffer control block with each buffer and frees a first buffer, after reading a frame data fragment, when the first buffer control block's usage field value indicates that only one frame data fragment is present in the first buffer.

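    A short C sketch of the buffer usage field as a per-buffer fragment count; the structure and function names are assumptions, and the free decision mirrors the "only one fragment present" condition in the abstract.

```c
#include <stdbool.h>
#include <stdint.h>

/* Buffer control block with a usage field counting how many frame data
 * fragments currently occupy the associated buffer.                      */
typedef struct {
    uint32_t buffer_addr;   /* address of the shared data buffer          */
    uint16_t usage;         /* number of fragments stored in the buffer   */
} buffer_cb_t;

/* A fragment is written into the buffer: bump the usage count. */
void bcb_add_fragment(buffer_cb_t *bcb)
{
    bcb->usage++;
}

/* A fragment is read out during reassembly.  The buffer is freed only when
 * the usage field shows this was the last fragment it still held.          */
bool bcb_read_fragment(buffer_cb_t *bcb)
{
    if (bcb->usage == 1) {
        bcb->usage = 0;
        return true;        /* caller returns the buffer to the free pool */
    }
    bcb->usage--;
    return false;           /* other fragments still share this buffer    */
}
```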

    Method and system for flexible network processor scheduler and data flow
    16.
    Invention application (Expired)

    Publication number: US20070011223A1

    Publication date: 2007-01-11

    Application number: US11133477

    Filing date: 2005-05-18

    IPC classification: G06F15/16

    Abstract: A network processor dataflow chip and a method for flexible dataflow are provided. The dataflow chip comprises a plurality of on-chip data transmission and scheduling circuit structures. The data transmission and scheduling circuit structures are selected responsive to indicators. Data transmission circuit structures may comprise selectable frame processing and data transmission functions. Selectable frame processing may comprise cut-and-paste, full-dispatch, and store-and-dispatch frame processing. Scheduling functions include full internal scheduling, calendar scheduling in communication with an external scheduler, and external calendar scheduling. In another aspect of the present invention, data transmission functions may comprise low-latency and normal-latency external processor interfaces for selectively providing privileged access to dataflow chip resources.

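    The selectable modes can be captured as configuration indicators in C; the enum names and the empty dispatch handlers below are placeholders assumed for illustration.

```c
/* Frame-processing modes selectable per flow or per port. */
typedef enum {
    FRAME_CUT_AND_PASTE,      /* header to processor, body stays in dataflow chip */
    FRAME_FULL_DISPATCH,      /* entire frame dispatched to the processor         */
    FRAME_STORE_AND_DISPATCH  /* frame stored first, then dispatched              */
} frame_mode_t;

/* Scheduling modes selectable by configuration indicators. */
typedef enum {
    SCHED_INTERNAL,           /* full internal scheduling                         */
    SCHED_EXTERNAL_CALENDAR,  /* calendar scheduling with an external scheduler   */
    SCHED_EXTERNAL            /* external calendar scheduling                     */
} sched_mode_t;

/* Indicators that select which on-chip circuit structures are used. */
typedef struct {
    frame_mode_t frame_mode;
    sched_mode_t sched_mode;
    int          low_latency_if;  /* nonzero: use the low-latency processor interface */
} dataflow_config_t;

/* Dispatch a frame according to the configured mode (handlers are assumed). */
void dispatch_frame(const dataflow_config_t *cfg, void *frame)
{
    switch (cfg->frame_mode) {
    case FRAME_CUT_AND_PASTE:      /* ... send header only ... */ break;
    case FRAME_FULL_DISPATCH:      /* ... send whole frame ... */ break;
    case FRAME_STORE_AND_DISPATCH: /* ... store, then send ... */ break;
    }
    (void)frame;
}
```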

    DRAM ACCESS COMMAND QUEUING METHOD
    17.
    Invention application (Granted)

    Publication number: US20070294471A1

    Publication date: 2007-12-20

    Application number: US11832220

    Filing date: 2007-08-01

    IPC classification: G06F12/00

    CPC classification: G06F13/1642

    Abstract: Access arbiters are used to prioritize read and write access requests to individual memory banks in DRAM memory devices, particularly fast cycle DRAMs. This serves to optimize the memory bandwidth available for read and write operations by avoiding consecutive accesses to the same memory bank and by minimizing dead cycles. The arbiter first divides DRAM accesses into write accesses and read accesses. The access requests are divided into accesses per memory bank, with a threshold limit imposed on the number of accesses to each memory bank. Received write packets are rotated among the banks based on the write queue status. The status of the write queue for each memory bank may also be used for system flow control. The arbiter also typically includes the ability to determine access windows based on the status of the command queues, and to perform arbitration on each access window.

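    A C sketch of the per-bank bookkeeping such an arbiter might keep: per-bank pending counts with a threshold, write rotation, and avoidance of back-to-back accesses to the same bank. The bank count, threshold value, and selection policy are assumed for illustration.

```c
#include <stdbool.h>

#define NUM_BANKS      8
#define BANK_THRESHOLD 4     /* assumed limit on queued accesses per bank */

typedef struct {
    unsigned read_pending[NUM_BANKS];
    unsigned write_pending[NUM_BANKS];
    unsigned last_bank;       /* bank used by the previous access         */
    unsigned next_write_bank; /* rotation pointer for received packets    */
} arbiter_t;

/* Queue a read request, honouring the per-bank threshold. */
bool queue_read(arbiter_t *a, unsigned bank)
{
    if (a->read_pending[bank] >= BANK_THRESHOLD)
        return false;                 /* back-pressure toward the requester */
    a->read_pending[bank]++;
    return true;
}

/* Received write packets are rotated among the banks, skipping banks whose
 * write queue is at the threshold (that status can also drive flow control). */
int queue_write_rotated(arbiter_t *a)
{
    for (unsigned i = 0; i < NUM_BANKS; i++) {
        unsigned bank = (a->next_write_bank + i) % NUM_BANKS;
        if (a->write_pending[bank] < BANK_THRESHOLD) {
            a->write_pending[bank]++;
            a->next_write_bank = (bank + 1) % NUM_BANKS;
            return (int)bank;
        }
    }
    return -1;                        /* all banks full: assert flow control */
}

/* Pick the next bank for an access window, avoiding back-to-back accesses
 * to the bank used last so no dead cycles are spent on bank recovery.      */
int pick_next_bank(const arbiter_t *a, bool write_window)
{
    const unsigned *pending = write_window ? a->write_pending : a->read_pending;
    for (unsigned bank = 0; bank < NUM_BANKS; bank++)
        if (bank != a->last_bank && pending[bank] > 0)
            return (int)bank;
    return -1;
}
```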

    Systems and methods for implementing counters in a network processor with cost effective memory
    18.
    Invention application (Expired)

    Publication number: US20060209827A1

    Publication date: 2006-09-21

    Application number: US11070060

    Filing date: 2005-03-02

    IPC classification: H04L12/56 H04L12/28

    CPC classification: H04L49/901 H04L49/90

    Abstract: Systems and methods for implementing counters in a network processor with cost-effective memory are disclosed. Embodiments include systems and methods for implementing counters in a network processor using less expensive memory such as DRAM. A network processor receives packets and implements accounting functions, including counting packets in each of a plurality of flow queues. Embodiments include a counter controller that may increment counter values more than once during a R-M-W cycle. Each time the counter controller receives a request to update a counter during a R-M-W cycle that has already been initiated for that counter, the counter controller increments the counter value received from memory. The incremented value is written to memory during the write cycle of the R-M-W cycle. A write disable unit disables the writes that would otherwise occur for R-M-W cycles initiated for the counter during the earlier-initiated R-M-W cycle.

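    A C sketch of folding multiple counter updates into one read-modify-write cycle; the DRAM read/write calls are left as comments because the real interface is hardware-specific, and all names here are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

/* State of one in-flight read-modify-write cycle on a DRAM-resident counter. */
typedef struct {
    bool     rmw_in_progress;
    uint32_t counter_addr;
    uint64_t working_value;   /* increments accumulated during this cycle   */
    unsigned pending_updates; /* update requests absorbed into this cycle   */
} counter_rmw_t;

/* A count request arrives.  If an R-M-W cycle for this counter is already in
 * flight, fold the increment into the working value instead of starting (and
 * later writing back) a second cycle; that second write is disabled.        */
void counter_update(counter_rmw_t *c, uint32_t addr)
{
    if (c->rmw_in_progress && c->counter_addr == addr) {
        c->working_value++;       /* absorbed: no extra DRAM write needed */
        c->pending_updates++;
        return;
    }
    /* Start a new R-M-W cycle.  In hardware the read phase returns the
     * stored value; here we only track the increments applied this cycle. */
    c->rmw_in_progress = true;
    c->counter_addr    = addr;
    c->pending_updates = 1;
    c->working_value   = 1;
}

/* Write phase of the R-M-W cycle: the accumulated value goes back to DRAM. */
void counter_writeback(counter_rmw_t *c)
{
    /* dram_write(c->counter_addr, stored_value + c->working_value);
       -- assumed hardware interface, shown only as a comment */
    c->rmw_in_progress = false;
    c->pending_updates = 0;
}
```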

    Merging Result from a Parser in a Network Processor with Result from an External Coprocessor
    19.
    Invention application (Expired)

    Publication number: US20120204190A1

    Publication date: 2012-08-09

    Application number: US13365778

    Filing date: 2012-02-03

    IPC classification: G06F9/46

    CPC classification: G06F9/546 G06F9/544

    Abstract: A mechanism is provided for merging, in a network processor, results from a parser with results from an external coprocessor that provides processing support requested by the parser. The mechanism enqueues in a result queue both parser results that need to be merged with a coprocessor result and parser results that do not. An additional queue is used to enqueue the addresses of the result queue entries where the parser results are stored. The result from the coprocessor is received in a simple response register. The coprocessor result is read from the response register by the result queue management logic and merged with the corresponding incomplete parser result, which is read from the result queue at the address enqueued in the additional queue.

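    The two-queue merge flow can be sketched in C: a result queue holding parser results in arrival order, plus an additional queue of addresses of entries still waiting for the coprocessor. The queue depth and field names are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define RESULT_QUEUE_DEPTH 16

/* A parser result; incomplete entries wait for the coprocessor response. */
typedef struct {
    uint64_t parser_data;
    uint64_t coproc_data;
    bool     needs_merge;     /* set when a coprocessor request was issued */
    bool     complete;
} result_entry_t;

typedef struct {
    result_entry_t result[RESULT_QUEUE_DEPTH];        /* results in arrival order */
    unsigned       pending_addr[RESULT_QUEUE_DEPTH];  /* addresses awaiting merge */
    unsigned       pending_head, pending_tail;
    uint64_t       response_register;                 /* coprocessor writes here  */
} merge_unit_t;

/* Enqueue a parser result; if it still needs the coprocessor, remember its
 * result-queue address in the additional (pending) queue.                  */
void enqueue_parser_result(merge_unit_t *m, unsigned addr,
                           uint64_t data, bool needs_merge)
{
    m->result[addr].parser_data = data;
    m->result[addr].needs_merge = needs_merge;
    m->result[addr].complete    = !needs_merge;
    if (needs_merge) {
        m->pending_addr[m->pending_tail] = addr;
        m->pending_tail = (m->pending_tail + 1) % RESULT_QUEUE_DEPTH;
    }
}

/* The coprocessor response lands in the response register; merge it into the
 * incomplete entry whose address sits at the head of the pending queue.     */
void merge_coprocessor_result(merge_unit_t *m)
{
    unsigned addr   = m->pending_addr[m->pending_head];
    m->pending_head = (m->pending_head + 1) % RESULT_QUEUE_DEPTH;

    m->result[addr].coproc_data = m->response_register;
    m->result[addr].complete    = true;
}
```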

    Host Ethernet Adapter for Handling Both Endpoint and Network Node Communications
    20.
    Invention application (Expired)

    Publication number: US20120192190A1

    Publication date: 2012-07-26

    Application number: US13011663

    Filing date: 2011-01-21

    IPC classification: G06F9/46

    CPC classification: G06F15/1735

    Abstract: A host Ethernet adapter (HEA) and a method of managing network communications are provided. The HEA includes a host interface configured for communication with a multi-core processor over a processor bus. The host interface comprises a receive processing element including a receive processor, a receive buffer and a scheduler for dispatching packets from the receive buffer to the receive processor; a send processing element including a send processor and a send buffer; and a completion queue scheduler (CQS) for dispatching completion queue elements (CQEs) from the head of the completion queue (CQ) to threads of the multi-core processor in a network node mode. The method comprises operatively coupling an Ethernet adapter to a multi-core processor system via a processor bus; selectively assigning a first plurality of packets to a first queue pair for servicing in an endpoint mode; running a device driver on the multi-core processing system, the device driver controlling the servicing of the first queue pair by dispatching the first plurality of packets to only one processor core of the multi-core processor system; selectively assigning a second plurality of packets to a second queue pair for servicing in a network node mode; and the Ethernet adapter controlling the servicing of the second queue pair by dispatching the second plurality of packets to multiple processor threads.

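    A C sketch contrasting the two servicing modes for a queue pair; the round-robin thread choice stands in for the completion queue scheduler's policy and, like the other names here, is an assumption for illustration.

```c
#define NUM_THREADS 8

typedef enum { MODE_ENDPOINT, MODE_NETWORK_NODE } qp_mode_t;

/* A queue pair: packets assigned to it are serviced either by a single core
 * (endpoint mode, controlled by the device driver) or spread across
 * processor threads (network node mode, controlled by the adapter).        */
typedef struct {
    qp_mode_t mode;
    unsigned  owner_core;   /* endpoint mode: the one core that services it */
    unsigned  next_thread;  /* network node mode: round-robin dispatch      */
} queue_pair_t;

/* Return the core or thread that should service the next completion for
 * this queue pair.                                                         */
unsigned dispatch_completion(queue_pair_t *qp)
{
    if (qp->mode == MODE_ENDPOINT)
        return qp->owner_core;                 /* driver funnels to one core */

    unsigned thread = qp->next_thread;         /* adapter spreads the work   */
    qp->next_thread = (qp->next_thread + 1) % NUM_THREADS;
    return thread;
}
```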