Optimized back-to-back enqueue/dequeue via physical queue parallelism
    1.
    Granted Patent
    Optimized back-to-back enqueue/dequeue via physical queue parallelism (Active)

    Publication No.: US07336675B2

    Publication Date: 2008-02-26

    Application No.: US10743392

    Filing Date: 2003-12-22

    IPC Classes: H04L12/28 H04L12/54

    CPC Classes: H04L47/6295 H04L49/90

    Abstract: A method and apparatus to receive a plurality of packets from an inflow of a single packet flow. In response to receiving the plurality of packets, a plurality of packet pointers is enqueued into multiple physical queues. Each of the plurality of packet pointers designates one of the plurality of packets from the single packet flow. The plurality of packet pointers are dequeued from the multiple physical queues to transmit the plurality of packets along an outflow of the single packet flow.

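The scheme in the abstract can be sketched as a single logical flow striped round-robin across several physical queues, so that back-to-back enqueues (and dequeues) hit different physical queues while flow order is preserved. This is a minimal illustration; the class and method names are not from the patent.

```python
from collections import deque

class ParallelFlowQueue:
    """One logical packet flow backed by several physical queues.

    Packet pointers are enqueued round-robin across the physical queues;
    dequeuing visits the queues in the same order, so the single flow's
    packet order is preserved while consecutive operations land on
    different physical queues (names are illustrative).
    """

    def __init__(self, num_queues=4):
        self.queues = [deque() for _ in range(num_queues)]
        self.enq_idx = 0  # next physical queue for an enqueue
        self.deq_idx = 0  # next physical queue for a dequeue

    def enqueue(self, packet_ptr):
        self.queues[self.enq_idx].append(packet_ptr)
        self.enq_idx = (self.enq_idx + 1) % len(self.queues)

    def dequeue(self):
        ptr = self.queues[self.deq_idx].popleft()
        self.deq_idx = (self.deq_idx + 1) % len(self.queues)
        return ptr
```

Because enqueue and dequeue advance through the queues in the same rotation, the flow's arrival order is exactly its departure order even though no single physical queue holds the whole flow.
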
    Free packet buffer allocation
    2.
    Granted Patent
    Free packet buffer allocation (Expired)

    Publication No.: US07159051B2

    Publication Date: 2007-01-02

    Application No.: US10668550

    Filing Date: 2003-09-23

    IPC Classes: G06F3/00

    CPC Classes: H04L49/3018 H04L49/103

    Abstract: According to some embodiments, systems and apparatuses may have a communication path to exchange information packets. A processor may process information packets. A buffer pool cache local to the processor may store free buffer handles for information packets when the buffer pool cache local to the processor is not full. A non-local memory may store the free buffer handles for information packets when the buffer pool cache local to the processor is full.

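The local-cache/non-local-memory split described above can be sketched as follows; the capacity, names, and LIFO hand-out order are illustrative assumptions, not details from the patent.

```python
class BufferPool:
    """Free-buffer-handle management with a small processor-local cache
    backed by non-local memory (all names illustrative).

    Freed handles go to the local cache while it has room; once it is
    full, further freed handles spill to the slower non-local store.
    Allocation prefers the fast local cache.
    """

    def __init__(self, cache_capacity=4):
        self.cache_capacity = cache_capacity
        self.local_cache = []   # fast, processor-local free handles
        self.non_local = []     # slower backing store for the overflow

    def free(self, handle):
        # Return a handle to the pool: local cache unless it is full.
        if len(self.local_cache) < self.cache_capacity:
            self.local_cache.append(handle)
        else:
            self.non_local.append(handle)

    def allocate(self):
        # Hand out a free handle: local cache first, then non-local memory.
        if self.local_cache:
            return self.local_cache.pop()
        if self.non_local:
            return self.non_local.pop()
        raise MemoryError("no free buffer handles")
```

The point of the split is that the common case (free followed by allocate) touches only processor-local storage and never crosses to the slower memory.
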
    Method for optimizing queuing performance
    3.
    Granted Patent
    Method for optimizing queuing performance (Active)

    Publication No.: US07433364B2

    Publication Date: 2008-10-07

    Application No.: US10746273

    Filing Date: 2003-12-24

    IPC Classes: H04L12/54

    CPC Classes: G06F13/128

    Abstract: Techniques for optimizing queuing performance include passing, from a ring having M slots, one or more enqueue requests and one or more dequeue requests to a queue manager, and determining whether the ring is full. If the ring is full, only an enqueue request is sent to the queue manager when one of the M slots is next available; otherwise, both an enqueue request and a dequeue request are sent to the queue manager.

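A rough model of the M-slot request ring: when only one slot remains (the ring had filled up), the producer sends just the enqueue request into that slot; with more room it sends an enqueue/dequeue pair. The request encoding and method names below are assumptions for illustration.

```python
from collections import deque

class RequestRing:
    """Sketch of an M-slot ring feeding enqueue/dequeue requests to a
    queue manager (names and tuple encoding are illustrative)."""

    def __init__(self, m):
        self.m = m
        self.slots = deque()

    def submit(self, enq_req, deq_req):
        """Offer a request pair; return the requests actually placed."""
        if len(self.slots) == self.m:
            return []  # ring full, no slot free yet: caller retries
        if len(self.slots) == self.m - 1:
            # Ring was full and one slot just became available:
            # send only the enqueue request so arrivals are not dropped.
            self.slots.append(("ENQ", enq_req))
            return [("ENQ", enq_req)]
        # Room for both: send an enqueue and a dequeue request.
        self.slots.append(("ENQ", enq_req))
        self.slots.append(("DEQ", deq_req))
        return [("ENQ", enq_req), ("DEQ", deq_req)]

    def drain_one(self):
        """Queue manager consumes one request, freeing a slot."""
        return self.slots.popleft()
```

Prioritizing the enqueue request under ring pressure reflects the idea that an arriving packet must be queued promptly, while a deferred dequeue merely delays a transmission.
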
    Buffer management for communication protocols
    4.
    Granted Patent
    Buffer management for communication protocols (Active)

    Publication No.: US07929536B2

    Publication Date: 2011-04-19

    Application No.: US11617439

    Filing Date: 2006-12-28

    IPC Classes: H04L12/28 H04L12/56

    Abstract: A method according to one embodiment may include storing data in a send buffer. A transmission header may be created, in which the transmission header may include a pointer to the data in the send buffer. Packets may be transmitted, in which the packets include the transmission header and the data linked to the transmission header by the pointer, wherein the packets are transmitted without copying the data to create the packets. Of course, many alternatives, variations and modifications are possible without materially departing from this embodiment.

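The header-plus-pointer idea can be illustrated with Python's `memoryview`, which references the send buffer in place so that building the packet copies no payload bytes. This is a sketch under assumed names, not the patent's implementation.

```python
class ZeroCopySender:
    """Sketch of transmission without payload copies: each packet is a
    header plus a memoryview into the send buffer (names illustrative)."""

    def __init__(self):
        self.send_buffer = bytearray()

    def store(self, data):
        """Place data in the send buffer; return its (offset, length)."""
        offset = len(self.send_buffer)
        self.send_buffer += data
        return offset, len(data)

    def make_packet(self, offset, length):
        """Build a packet as (header, payload-view). The memoryview
        points at the send buffer in place, so no payload bytes are
        copied to create the packet."""
        header = {"offset": offset, "length": length}
        payload = memoryview(self.send_buffer)[offset:offset + length]
        return header, payload
```

One practical consequence of the zero-copy view (in this Python sketch): while a `memoryview` into the buffer is outstanding, the `bytearray` cannot be resized, which mirrors the general constraint that a send buffer must stay pinned until transmission completes.
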
    Buffer Management for Communication Protocols
    5.
    Patent Application
    Buffer Management for Communication Protocols (Active)

    Publication No.: US20080062991A1

    Publication Date: 2008-03-13

    Application No.: US11617439

    Filing Date: 2006-12-28

    IPC Classes: H04L12/56

    Abstract: A method according to one embodiment may include storing data in a send buffer. A transmission header may be created, in which the transmission header may include a pointer to the data in the send buffer. Packets may be transmitted, in which the packets include the transmission header and the data linked to the transmission header by the pointer, wherein the packets are transmitted without copying the data to create the packets. Of course, many alternatives, variations and modifications are possible without materially departing from this embodiment.

    Method for parallel processing of events within multiple event contexts maintaining ordered mutual exclusion
    6.
    Granted Patent
    Method for parallel processing of events within multiple event contexts maintaining ordered mutual exclusion (Expired)

    Publication No.: US07730501B2

    Publication Date: 2010-06-01

    Application No.: US10718497

    Filing Date: 2003-11-19

    IPC Classes: G06F9/44

    CPC Classes: G06F9/542

    Abstract: Techniques for parallel processing of events within multiple event contexts are described. The techniques dynamically bind an event context to an execution context in response to receiving an event, by storing arriving events in a global event queue and then moving events from the global event queue into per-execution-context event queues. The techniques associate event queues with the execution contexts to temporarily store the events for the duration of the binding, and thus dynamically bind the events received, on a per-event basis, in the context queues.

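The global-queue/per-execution-context-queue flow can be sketched as a small dispatcher: each event context is bound to exactly one execution context, so a context's events are handled in arrival order and never by two execution contexts at once (the ordered mutual exclusion of the title). The round-robin binding policy and all names are illustrative assumptions.

```python
from collections import deque

class EventDispatcher:
    """Sketch of dynamic binding of event contexts to execution contexts.

    Arriving events enter a global queue; dispatch binds each event's
    context to an execution context (here: round-robin on first sight)
    and moves the event into that execution context's own queue."""

    def __init__(self, num_exec_contexts=2):
        self.global_queue = deque()
        self.exec_queues = [deque() for _ in range(num_exec_contexts)]
        self.binding = {}   # event context id -> execution context index
        self.next_exec = 0

    def arrive(self, ctx_id, event):
        self.global_queue.append((ctx_id, event))

    def dispatch(self):
        """Drain the global queue into per-execution-context queues."""
        while self.global_queue:
            ctx_id, event = self.global_queue.popleft()
            if ctx_id not in self.binding:
                # Dynamically bind this event context to an execution
                # context for the duration of its queued events.
                self.binding[ctx_id] = self.next_exec
                self.next_exec = (self.next_exec + 1) % len(self.exec_queues)
            self.exec_queues[self.binding[ctx_id]].append((ctx_id, event))
```

Because the binding is per event context, events of different contexts still run in parallel on different execution contexts; only events of the same context are serialized.
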
    Thread-based engine cache partitioning
    9.
    Granted Patent
    Thread-based engine cache partitioning (Active)

    Publication No.: US07536692B2

    Publication Date: 2009-05-19

    Application No.: US10704431

    Filing Date: 2003-11-06

    IPC Classes: G06F9/46 G06F12/00

    Abstract: In general, in one aspect, the disclosure describes a processor that includes an instruction store to store instructions of at least a portion of at least one program, and multiple engines coupled to the shared instruction store. The engines provide multiple execution threads and include an instruction cache to cache a subset of the at least the portion of the at least one program from the instruction store, with different respective portions of the engine's instruction cache being allocated to different respective ones of the engine's threads.

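The per-thread partitioning of an engine's instruction cache can be sketched as follows: each thread owns a fixed slice of cache lines, so one thread's misses and fills never evict another thread's cached instructions. The slice sizes and FIFO fill policy here are illustrative assumptions, not details from the patent.

```python
class ThreadPartitionedICache:
    """Sketch of an engine instruction cache statically partitioned
    among the engine's threads (names and policy illustrative)."""

    def __init__(self, num_threads, lines_per_thread):
        # Each thread owns a private list of (address, instruction) lines.
        self.partitions = [[] for _ in range(num_threads)]
        self.lines_per_thread = lines_per_thread

    def fetch(self, thread_id, addr, instruction_store):
        """Return the instruction at addr for this thread, filling the
        thread's own cache slice from the shared store on a miss."""
        part = self.partitions[thread_id]
        for cached_addr, insn in part:
            if cached_addr == addr:
                return insn                  # hit in this thread's slice
        insn = instruction_store[addr]       # miss: fill from shared store
        if len(part) >= self.lines_per_thread:
            part.pop(0)                      # evict within own slice only
        part.append((addr, insn))
        return insn
```

The isolation property is the point: a thread thrashing its own slice degrades only its own hit rate, which keeps the multiple threads of an engine from interfering with each other through the shared instruction cache.
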