Split socket send queue apparatus and method with efficient queue flow control, retransmission and sack support mechanisms
    1.
    Invention Grant
    Split socket send queue apparatus and method with efficient queue flow control, retransmission and sack support mechanisms (Expired)

    Publication No.: US07818362B2

    Publication Date: 2010-10-19

    Application No.: US11418606

    Filing Date: 2006-05-05

    IPC Classification: G06F15/16 G06F3/00

    Abstract: A mechanism for offloading the management of send queues in a split socket stack environment, including efficient split socket queue flow control and TCP/IP retransmission support. An Upper Layer Protocol (ULP) creates send work queue entries (SWQEs) for writing to the send work queue (SWQ). The Internet Protocol Suite Offload Engine (IPSOE) is notified of a new entry to the SWQ and subsequently reads this entry, which contains pointers to the data that is to be transmitted. After the data is transmitted and acknowledgments are received, the IPSOE creates a completion queue entry (CQE) that is written into the completion queue (CQ). The flow control between the ULP and the IPSOE is credit based. The passing of CQ credits is the only explicit mechanism required to manage flow control of both the SWQ and the CQ between the ULP and the IPSOE.
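
The credit-based flow control summarized in the abstract can be sketched in a few lines. The class and method names below (SplitSocketChannel, post_swqe, reap_cqe) are illustrative stand-ins, not taken from the patent:

```python
class SplitSocketChannel:
    """Illustrative sketch of credit-based flow control between a ULP
    and an IPSOE: CQ credits alone bound outstanding work in both the
    send work queue (SWQ) and the completion queue (CQ)."""

    def __init__(self, cq_depth):
        self.credits = cq_depth      # one credit per free CQ slot
        self.swq = []                # pending send work queue entries
        self.cq = []                 # completion queue entries

    def post_swqe(self, data_pointer):
        """ULP side: posting a SWQE consumes one CQ credit."""
        if self.credits == 0:
            return False             # no credit: a completion must be reaped first
        self.credits -= 1
        self.swq.append(data_pointer)
        return True

    def transmit_and_complete(self):
        """IPSOE side: after the data is sent and acknowledged, write a CQE."""
        swqe = self.swq.pop(0)
        self.cq.append(("complete", swqe))

    def reap_cqe(self):
        """ULP side: processing a CQE frees a CQ slot, returning a credit."""
        cqe = self.cq.pop(0)
        self.credits += 1
        return cqe
```

Because a SWQE can only be posted when a CQ slot is guaranteed to be free for its eventual completion, the single credit count bounds occupancy of both queues, which is the property the abstract highlights.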


    Split socket send queue apparatus and method with efficient queue flow control, retransmission and sack support mechanisms
    2.
    Invention Grant
    Split socket send queue apparatus and method with efficient queue flow control, retransmission and sack support mechanisms (Expired)

    Publication No.: US07519650B2

    Publication Date: 2009-04-14

    Application No.: US10235689

    Filing Date: 2002-09-05

    IPC Classification: G06F15/16

    Abstract: A mechanism for offloading the management of send queues in a split socket stack environment, including efficient split socket queue flow control and TCP/IP retransmission support. As consumers initiate send operations, send work queue entries (SWQEs) are created by an Upper Layer Protocol (ULP) and written to the send work queue (SWQ). The Internet Protocol Suite Offload Engine (IPSOE) is notified of a new entry to the SWQ and subsequently reads this entry, which contains pointers to the data that is to be transmitted. After the data is transmitted and acknowledgments are received, the IPSOE creates a completion queue entry (CQE) that is written into the completion queue (CQ). After the CQE is written, the ULP processes the entry and removes it from the CQ, freeing up a slot in both the SWQ and the CQ. The number of entries available in the SWQ is monitored by the ULP so that it does not overwrite any valid entries. Likewise, the IPSOE monitors the number of entries available in the CQ so as not to overwrite valid CQ entries. The flow control between the ULP and the IPSOE is credit based. The passing of CQ credits is the only explicit mechanism required to manage flow control of both the SWQ and the CQ between the ULP and the IPSOE.
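
The occupancy monitoring described above, where the ULP avoids overwriting valid SWQ entries and the IPSOE avoids overwriting the CQ, amounts to producer-side free-slot tracking on a fixed-size circular queue. A minimal sketch (all names are illustrative, not from the patent):

```python
class RingQueue:
    """Illustrative fixed-size circular queue: the producer tracks free
    slots so it never overwrites an entry the consumer has not yet read,
    mirroring how the ULP monitors free SWQ entries (and the IPSOE, CQ)."""

    def __init__(self, depth):
        self.slots = [None] * depth
        self.head = 0                # next slot to consume
        self.tail = 0                # next slot to fill
        self.count = 0               # valid entries currently queued

    def free_entries(self):
        return len(self.slots) - self.count

    def enqueue(self, entry):
        if self.free_entries() == 0:
            return False             # would overwrite a valid entry
        self.slots[self.tail] = entry
        self.tail = (self.tail + 1) % len(self.slots)
        self.count += 1
        return True

    def dequeue(self):
        if self.count == 0:
            return None              # nothing valid to consume
        entry = self.slots[self.head]
        self.head = (self.head + 1) % len(self.slots)
        self.count -= 1
        return entry
```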


    Receive queue device with efficient queue flow control, segment placement and virtualization mechanisms
    3.
    Invention Grant
    Receive queue device with efficient queue flow control, segment placement and virtualization mechanisms (In Force)

    Publication No.: US07912988B2

    Publication Date: 2011-03-22

    Application No.: US11487265

    Filing Date: 2006-07-14

    IPC Classification: G06F15/16

    Abstract: A mechanism for offloading the management of receive queues in a split (e.g. split socket, split iSCSI, split DAFS) stack environment, including efficient queue flow control and TCP/IP retransmission support. An Upper Layer Protocol (ULP) creates receive work queues and completion queues that are utilized by an Internet Protocol Suite Offload Engine (IPSOE) and the ULP to transfer information and carry out receive operations. As consumers initiate receive operations, receive work queue entries (RWQEs) are created by the ULP and written to the receive work queue (RWQ). The IPSOE is notified of a new entry to the RWQ and subsequently reads this entry, which contains pointers to the buffers into which the data is to be received. After the data is received, the IPSOE creates a completion queue entry (CQE) that is written into the completion queue (CQ). After the CQE is written, the ULP processes the entry and removes it from the CQ, freeing up a slot in both the RWQ and the CQ. The number of entries available in the RWQ is monitored by the ULP so that it does not overwrite any valid entries. Likewise, the IPSOE monitors the number of entries available in the CQ so as not to overwrite valid CQ entries.
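
The receive-side cycle, where the ULP posts RWQEs pointing at buffers and the IPSOE places arriving data into them before writing a CQE, can be sketched as below. The names and the synchronous on_data callback are illustrative assumptions, not the patent's design:

```python
class ReceivePath:
    """Sketch of the receive flow: the ULP posts RWQEs pointing at
    buffers; when data arrives the IPSOE places it directly into the
    oldest posted buffer (segment placement) and writes a CQE."""

    def __init__(self):
        self.rwq = []    # posted receive buffers (RWQEs)
        self.cq = []     # CQEs: (buffer, bytes_received)

    def post_rwqe(self, buffer):
        """ULP side: make a buffer available for incoming data."""
        self.rwq.append(buffer)

    def on_data(self, payload):
        """IPSOE side: place a received segment and complete the RWQE.
        Assumes a buffer has been posted and the payload fits it."""
        buf = self.rwq.pop(0)            # consume the oldest posted buffer
        buf[: len(payload)] = payload    # segment placement into ULP memory
        self.cq.append((buf, len(payload)))
```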


    Scheduler pipeline design for hierarchical link sharing
    5.
    Invention Grant
    Scheduler pipeline design for hierarchical link sharing (Expired)

    Publication No.: US07929438B2

    Publication Date: 2011-04-19

    Application No.: US12175479

    Filing Date: 2008-07-18

    IPC Classification: H04J1/16

    Abstract: A pipeline configuration is described for use in network traffic management for the hardware scheduling of events arranged in a hierarchical linkage. The configuration reduces costs by minimizing the use of external SRAM memory devices. This results in some external memory devices being shared by different types of control blocks, such as flow queue control blocks, frame control blocks and hierarchy control blocks. Both SRAM and DRAM memory devices are used, depending on whether the control block is Read-Modify-Write or read-only at enqueue and dequeue, or Read-Modify-Write solely at dequeue. The scheduler utilizes time-based calendars and weighted fair queueing calendars in the egress calendar design. Control blocks that are accessed infrequently are stored in DRAM memory while those accessed frequently are stored in SRAM.
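
The abstract mentions weighted fair queueing calendars without detailing them. A common way to realize WFQ, shown below purely as an illustrative sketch (not the patent's hardware design), is to order flow queues by a virtual finish time inversely proportional to each flow's weight:

```python
import heapq

class WFQCalendar:
    """Minimal weighted-fair-queueing calendar sketch: each enqueued
    frame is stamped with a virtual finish time; heavier-weighted flows
    accumulate virtual time more slowly, so they are served more often."""

    def __init__(self):
        self.heap = []     # (finish_time, flow, length)
        self.vtime = 0.0   # scheduler's virtual clock
        self.finish = {}   # last finish time per flow

    def enqueue(self, flow, length, weight):
        # A flow's next frame starts at the later of the virtual clock
        # and the flow's own previous finish time (keeps flows ordered).
        start = max(self.vtime, self.finish.get(flow, 0.0))
        fin = start + length / weight
        self.finish[flow] = fin
        heapq.heappush(self.heap, (fin, flow, length))

    def dequeue(self):
        # Serve the frame with the earliest virtual finish time.
        fin, flow, length = heapq.heappop(self.heap)
        self.vtime = fin
        return flow
```

For two equal-length frames, the flow with twice the weight finishes at half the virtual time and is dequeued first.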


    Structure for scheduler pipeline design for hierarchical link sharing
    6.
    Invention Grant
    Structure for scheduler pipeline design for hierarchical link sharing (Expired)

    Publication No.: US07457241B2

    Publication Date: 2008-11-25

    Application No.: US10772737

    Filing Date: 2004-02-05

    IPC Classification: H04J1/16

    Abstract: A pipeline configuration is described for use in network traffic management for the hardware scheduling of events arranged in a hierarchical linkage. The configuration reduces costs by minimizing the use of external SRAM memory devices. This results in some external memory devices being shared by different types of control blocks, such as flow queue control blocks, frame control blocks and hierarchy control blocks. Both SRAM and DRAM memory devices are used, depending on whether the control block is Read-Modify-Write or read-only at enqueue and dequeue, or Read-Modify-Write solely at dequeue. The scheduler utilizes time-based calendars and weighted fair queueing calendars in the egress calendar design. Control blocks that are accessed infrequently are stored in DRAM memory while those accessed frequently are stored in SRAM.


    Method and structure for enqueuing data packets for processing
    7.
    Invention Grant
    Method and structure for enqueuing data packets for processing (Expired)

    Publication No.: US07406080B2

    Publication Date: 2008-07-29

    Application No.: US10868725

    Filing Date: 2004-06-15

    IPC Classification: H04L12/56

    Abstract: A method and structure are provided for buffering data packets having a header and a remainder in a network processor system. The network processor system has a processor on a chip and at least one buffer on the chip. Each on-chip buffer is configured to buffer the headers of packets in a preselected order before execution in the processor, while the remainder of each packet is stored in an external buffer apart from the chip. The method comprises utilizing the header information to identify the location and extent of the remainder of the packet. The entire selected packet is stored in the external buffer when the on-chip header buffer is full, and only the header of a selected packet stored in the external buffer is moved to the on-chip buffer when that buffer has space for it.
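
The split header/remainder buffering described above can be sketched as a spill-and-refill policy. All names and the exact refill behavior below are illustrative readings of the abstract, not the patent's precise design; headers are assumed unique for simplicity:

```python
class HeaderQueue:
    """Sketch of split buffering: headers wait in a small on-chip queue
    in arrival order while payload remainders always live in external
    memory. When the on-chip queue is full, the whole packet spills to
    external memory; headers are pulled back on-chip as slots free up."""

    def __init__(self, on_chip_depth):
        self.depth = on_chip_depth
        self.on_chip = []     # headers awaiting the processor
        self.external = []    # spilled headers, still in arrival order
        self.remainders = {}  # external payload store, keyed by header

    def enqueue(self, header, remainder):
        self.remainders[header] = remainder   # remainder is always external
        if len(self.on_chip) < self.depth:
            self.on_chip.append(header)
        else:
            self.external.append(header)      # on-chip full: spill header too

    def dequeue(self):
        """Hand the oldest header (plus its remainder) to the processor,
        then refill the freed on-chip slot from external memory."""
        header = self.on_chip.pop(0)
        if self.external:
            self.on_chip.append(self.external.pop(0))
        return header, self.remainders.pop(header)
```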


    Providing to a parser and processors in a network processor access to an external coprocessor
    8.
    Invention Grant
    Providing to a parser and processors in a network processor access to an external coprocessor (In Force)

    Publication No.: US09088594B2

    Publication Date: 2015-07-21

    Application No.: US13365679

    Filing Date: 2012-02-03

    IPC Classification: G06F9/30 H04L29/06

    CPC Classification: H04L69/12

    Abstract: A mechanism is provided for sharing the communication path used by a parser (the parser path) in a network adapter of a network processor for sending requests for a process to be executed by an external coprocessor. The parser path is shared by the processors of the network processor (the software path) to send requests to the external coprocessor. For the software path, the mechanism uses a request mailbox, comprising a control address and a data field accessed by MMIO, for sending two types of messages: one message type to read or write resources and one message type to trigger an external process in the coprocessor. A response mailbox, comprising a data field and a flag field, receives the response from the external coprocessor. The other processors of the network processor poll the flag until it is set and then read the coprocessor result from the data field.
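
The request/response mailbox protocol described above can be sketched as below. The simulation is synchronous, so the flag poll never actually spins; in hardware the MMIO store and the flag update would be asynchronous. All names and message-type encodings are illustrative assumptions:

```python
class CoprocessorMailbox:
    """Sketch of the shared mailbox pair: a processor writes a request
    (message type + data), then polls the response flag until the
    coprocessor sets it, and reads the result from the data field."""

    REQ_RW, REQ_PROC = 0, 1        # the two message types the abstract names

    def __init__(self, coprocessor):
        self.coprocessor = coprocessor   # callable standing in for the hardware
        self.resp_flag = 0               # response mailbox: flag field
        self.resp_data = None            # response mailbox: data field

    def send_request(self, msg_type, payload):
        """Issue a request. In hardware this would be an MMIO store to the
        control address; here the coprocessor runs and posts its response."""
        self.resp_flag = 0                # clear the flag before issuing
        self.resp_data = self.coprocessor(msg_type, payload)
        self.resp_flag = 1                # coprocessor signals completion

    def poll_response(self):
        """Spin on the flag until set, then return the data field."""
        while not self.resp_flag:
            pass
        return self.resp_data
```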


    Assigning work from multiple sources to multiple sinks given assignment constraints
    9.
    Invention Grant
    Assigning work from multiple sources to multiple sinks given assignment constraints (Expired)

    Publication No.: US08532129B2

    Publication Date: 2013-09-10

    Application No.: US12650120

    Filing Date: 2009-12-30

    IPC Classification: H04L12/28

    CPC Classification: H04L49/9047

    Abstract: Assigning work, such as data packets, from a plurality of sources, such as data queues in a network processing device, to a plurality of sinks, such as processor threads in the network processing device, is provided. In a given processing period, sinks that are available to receive work are identified, and sources qualified to send work to the available sinks are determined, taking into account any assignment constraints. A single source is selected from the overlap of the qualified sources and the sources having work available. This selection may be made using a hierarchical source scheduler that processes subsets of the supported sources simultaneously in parallel. A sink to which work from the selected source may be assigned is then selected from the available sinks qualified to receive work from the selected source.
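
The per-period selection logic, intersecting sources that have work with sources qualified for some available sink, can be sketched as a set computation. Here min() stands in for the hierarchical source scheduler, whose details the abstract leaves unspecified, and all names are illustrative:

```python
def assign_work(sources_with_work, available_sinks, allowed):
    """One processing period of constrained source-to-sink assignment.

    sources_with_work: set of source ids that currently have work
    available_sinks:   set of sink ids able to receive work
    allowed:           dict mapping each source to the set of sinks it
                       may feed (the assignment constraints)
    Returns (source, sink) or None if no qualified pairing exists."""
    # A source is qualified if some available sink can accept its work.
    qualified = [s for s in sources_with_work
                 if allowed.get(s, set()) & available_sinks]
    if not qualified:
        return None
    source = min(qualified)   # stand-in for the hierarchical source scheduler
    sink = min(allowed[source] & available_sinks)
    return source, sink
```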
