Techniques for reducing thread overhead for systems with multiple multi-threaded processors
    1.
    Granted patent (in force)

    Publication No.: US08453147B2

    Publication Date: 2013-05-28

    Application No.: US11446609

    Filing Date: 2006-06-05

    IPC Class: G06F9/46

    CPC Class: G06F9/5027

    Abstract: Techniques for processing requests from a processing thread for a shared resource shared among threads on one or more processors include receiving a bundle of requests from a portion of a thread that is executed during a single wake interval on a particular processor. The bundle includes multiple commands for one or more shared resources. The bundle is processed at the shared resource(s) to produce a bundle result. The bundle result is sent to the particular processor. The thread undergoes no more than one wake-interval-to-sleep-interval cycle while the bundle commands are processed at the shared resource(s). These techniques allow a lock for the shared resource(s) to be obtained, used, and released while the particular thread is sleeping, so that locks are held for shorter times than in conventional approaches. Using these techniques, line-rate packet processing is more readily achieved in routers with multiple multi-threaded processors.

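The bundling idea in this abstract can be sketched as a minimal simulation: a worker assembles all of its commands during one wake interval, then sleeps exactly once while a service routine takes the resource lock, runs the whole bundle, and hands back a single result, so the lock is held only for the bundle's duration. All class and function names below are illustrative, not from the patent.

```python
import threading

class SharedCounter:
    """Shared resource guarded by a single lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def process_bundle(self, commands):
        # The lock is acquired, used, and released while the requesting
        # thread sleeps, so it is held only as long as the bundle runs.
        with self._lock:
            for op, arg in commands:
                if op == "add":
                    self._value += arg
                # a "read" command just falls through to the result below
            return self._value  # single bundle result

def worker(resource, results):
    # Build the whole bundle during one wake interval ...
    bundle = [("add", 1), ("add", 2), ("read", None)]
    done = threading.Event()
    holder = {}

    def service():
        holder["result"] = resource.process_bundle(bundle)
        done.set()

    threading.Thread(target=service).start()
    done.wait()  # ... then sleep exactly once until the bundle result arrives
    results.append(holder["result"])

resource = SharedCounter()
results = []
threads = [threading.Thread(target=worker, args=(resource, results)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # each bundle adds 3, serialized by the lock
```

Because each bundle is processed atomically, the four results are the cumulative values 3, 6, 9, and 12 in some interleaving order.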

Techniques for reducing thread overhead for systems with multiple multi-threaded processors
    2.
    Patent application (in force)

    Publication No.: US20070283357A1

    Publication Date: 2007-12-06

    Application No.: US11446609

    Filing Date: 2006-06-05

    IPC Class: G06F9/46

    CPC Class: G06F9/5027

    Abstract: Techniques for processing requests from a processing thread for a shared resource shared among threads on one or more processors include receiving a bundle of requests from a portion of a thread that is executed during a single wake interval on a particular processor. The bundle includes multiple commands for one or more shared resources. The bundle is processed at the shared resource(s) to produce a bundle result. The bundle result is sent to the particular processor. The thread undergoes no more than one wake-interval-to-sleep-interval cycle while the bundle commands are processed at the shared resource(s). These techniques allow a lock for the shared resource(s) to be obtained, used, and released while the particular thread is sleeping, so that locks are held for shorter times than in conventional approaches. Using these techniques, line-rate packet processing is more readily achieved in routers with multiple multi-threaded processors.


Multi-threaded Processing Using Path Locks
    3.
    Patent application (in force)

    Publication No.: US20080077926A1

    Publication Date: 2008-03-27

    Application No.: US11535956

    Filing Date: 2006-09-27

    IPC Class: G06F9/46

    CPC Class: G06F9/524 G06F9/4881

    Abstract: In one embodiment, a method includes receiving, at a thread scheduler, data that indicates a first thread is next to execute a particular instruction path in software to access a particular portion of a shared computational resource. The thread scheduler determines whether a different, second thread is exclusively eligible to execute the particular instruction path on any processor of a set of one or more processors to access the particular portion of the shared computational resource. If so, the thread scheduler prevents the first thread from executing any instruction from the particular instruction path on any processor of the set. This enables several threads of the same software to share a resource without obtaining locks on the resource, and without holding a lock on a resource while a thread is not running.

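The scheduler-side bookkeeping described here can be sketched in a few lines: the scheduler tracks, per instruction path, which thread (if any) is currently the exclusive eligible executor, and parks any other thread that asks for the same path until the holder releases it. This is a simplified model under assumed names; the patent's scheduler operates on instruction paths within hardware thread scheduling, not Python objects.

```python
class PathLockScheduler:
    """Toy model: one exclusively eligible thread per instruction path."""

    def __init__(self):
        self._eligible = {}  # path name -> thread id currently eligible
        self._waiting = {}   # path name -> list of parked thread ids

    def request_path(self, thread_id, path):
        """Return True if the thread may execute the path now; else park it."""
        holder = self._eligible.get(path)
        if holder is None or holder == thread_id:
            self._eligible[path] = thread_id
            return True
        self._waiting.setdefault(path, []).append(thread_id)
        return False

    def release_path(self, thread_id, path):
        """Holder leaves the path; the next parked thread becomes eligible."""
        assert self._eligible.get(path) == thread_id
        waiters = self._waiting.get(path, [])
        if waiters:
            self._eligible[path] = waiters.pop(0)
        else:
            del self._eligible[path]
        return self._eligible.get(path)

sched = PathLockScheduler()
assert sched.request_path("T1", "update_stats")      # T1 is eligible
assert not sched.request_path("T2", "update_stats")  # T2 is parked
assert sched.request_path("T2", "free_buffer")       # a different path is fine
next_up = sched.release_path("T1", "update_stats")   # T2 inherits eligibility
```

Note that no lock on the resource itself is ever taken: eligibility for the instruction path is granted and revoked entirely inside the scheduler, which is the point of the technique.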

Multi-threaded processing using path locks
    4.
    Granted patent (in force)

    Publication No.: US08010966B2

    Publication Date: 2011-08-30

    Application No.: US11535956

    Filing Date: 2006-09-27

    IPC Class: G06F9/46

    CPC Class: G06F9/524 G06F9/4881

    Abstract: In one embodiment, a method includes receiving, at a thread scheduler, data that indicates a first thread is next to execute a particular instruction path in software to access a particular portion of a shared computational resource. The thread scheduler determines whether a different, second thread is exclusively eligible to execute the particular instruction path on any processor of a set of one or more processors to access the particular portion of the shared computational resource. If so, the thread scheduler prevents the first thread from executing any instruction from the particular instruction path on any processor of the set. This enables several threads of the same software to share a resource without obtaining locks on the resource, and without holding a lock on a resource while a thread is not running.


Apparatus for hardware-software classification of data packet flows
    5.
    Patent application (in force)

    Publication No.: US20080013532A1

    Publication Date: 2008-01-17

    Application No.: US11484791

    Filing Date: 2006-07-11

    IPC Class: H04L12/56

    Abstract: An apparatus for routing data packets includes a network interface, a memory, a general-purpose processor, and a flow classifier. The memory stores a flow structure. Every packet in one flow has identical values for a set of data fields in the packet. The memory stores instructions that cause the processor to receive missing-flow data and to add the missing flow to the flow structure. The apparatus forwards a packet based on the flow. The flow classifier determines a particular flow and whether it is already stored in the flow structure. If not, the classifier determines whether that flow has already been sent to the processor as missing data. If not, the classifier stores, in a different data structure, data indicating that the flow has been sent to the processor but is not yet included in the flow data structure, and sends the missing data to the processor.

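The two-structure handoff in this abstract (a fast-path flow table plus a "pending" record of flows already reported to the CPU but not yet installed) can be sketched as follows. The flow key, field names, and actions are assumptions for illustration; the patent's classifier is hardware and the key would typically be a packet 5-tuple.

```python
class FlowClassifier:
    """Toy model of the classifier's miss-handling logic."""

    def __init__(self):
        self.flow_table = {}   # flow key -> forwarding action (installed flows)
        self.pending = set()   # keys reported to the CPU, not yet installed
        self.sent_to_cpu = []  # "missing flow" messages sent to the processor

    def classify(self, packet):
        key = (packet["src"], packet["dst"], packet["proto"])
        if key in self.flow_table:
            return self.flow_table[key]  # fast path: flow is installed
        if key not in self.pending:
            # Report each missing flow to the CPU exactly once.
            self.pending.add(key)
            self.sent_to_cpu.append(key)
        return "punt"                    # slow path until the CPU installs it

    def install_flow(self, key, action):
        # CPU software adds the missing flow; later packets hit the fast path.
        self.flow_table[key] = action
        self.pending.discard(key)

clf = FlowClassifier()
pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": 6}
assert clf.classify(pkt) == "punt"   # first packet of the flow misses
assert clf.classify(pkt) == "punt"   # still a miss, but not re-reported
assert len(clf.sent_to_cpu) == 1     # the pending set suppressed a duplicate
clf.install_flow(("10.0.0.1", "10.0.0.2", 6), "forward:port3")
```

The pending set is what keeps a burst of packets from the same unknown flow from flooding the processor with duplicate missing-flow messages.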

Memory efficient hashing algorithm
    6.
    Patent application (pending, published)

    Publication No.: US20050171937A1

    Publication Date: 2005-08-04

    Application No.: US10769941

    Filing Date: 2004-02-02

    IPC Class: G06F17/30

    CPC Class: G06F16/9014

    Abstract: A technique efficiently searches a hash table. Conventionally, a predetermined set of "signature" information is hashed to generate a hash-table index which, in turn, is associated with a corresponding linked list accessible through the hash table. The indexed list is searched sequentially, beginning with the first list entry, until a "matching" list entry containing the signature information is located. For long lists, this conventional approach may search a substantially large number of entries. In contrast, the inventive technique reduces, on average, the number of list entries searched to locate the matching entry. To that end, list entries are partitioned into different groups within each linked list. Thus, by searching only a selected group (e.g., subset) of entries in the indexed list, the technique consumes fewer resources, such as processor bandwidth and processing time, than previous implementations.

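One way to realize "groups within each linked list" is to spend extra bits of the same hash digest on selecting a group inside the bucket, so a lookup scans only that group rather than the whole chain. This is a minimal sketch under assumed parameters (bucket count, group count, and hash function are all illustrative; the patent does not prescribe these choices).

```python
import hashlib

NUM_BUCKETS = 8
GROUPS_PER_BUCKET = 4

def _digest(key):
    # One hash supplies both the bucket index and the group index.
    return int.from_bytes(
        hashlib.blake2b(key.encode(), digest_size=8).digest(), "big")

# table[bucket][group] is a short list of (key, value) entries.
table = [[[] for _ in range(GROUPS_PER_BUCKET)] for _ in range(NUM_BUCKETS)]

def _locate(key):
    h = _digest(key)
    bucket = table[h % NUM_BUCKETS]
    return bucket[(h // NUM_BUCKETS) % GROUPS_PER_BUCKET]

def insert(key, value):
    _locate(key).append((key, value))

def lookup(key):
    # Only one group is scanned -- on average 1/GROUPS_PER_BUCKET of
    # the entries a plain chained lookup in this bucket would visit.
    for k, v in _locate(key):
        if k == key:
            return v
    return None

for i in range(100):
    insert(f"flow-{i}", i)
assert lookup("flow-42") == 42
assert lookup("missing") is None
```

The memory layout is unchanged relative to plain chaining; only the partitioning of each chain into group sublists is added, which is where the "memory efficient" framing comes from.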

Apparatus for hardware-software classification of data packet flows
    7.
    Granted patent (in force)

    Publication No.: US08228908B2

    Publication Date: 2012-07-24

    Application No.: US11484791

    Filing Date: 2006-07-11

    IPC Class: H04L12/28

    Abstract: An apparatus for routing data packets includes a network interface, a memory, a general-purpose processor, and a flow classifier. The memory stores a flow structure. Every packet in one flow has identical values for a set of data fields in the packet. The memory stores instructions that cause the processor to receive missing-flow data and to add the missing flow to the flow structure. The apparatus forwards a packet based on the flow. The flow classifier determines a particular flow and whether it is already stored in the flow structure. If not, the classifier determines whether that flow has already been sent to the processor as missing data. If not, the classifier stores, in a different data structure, data indicating that the flow has been sent to the processor but is not yet included in the flow data structure, and sends the missing data to the processor.


Hardware filtering support for denial-of-service attacks
    8.
    Patent application (in force)

    Publication No.: US20050213570A1

    Publication Date: 2005-09-29

    Application No.: US10811195

    Filing Date: 2004-03-26

    IPC Class: H04L12/56 H04L29/06

    Abstract: A system and method is provided for automatically identifying and removing malicious data packets, such as denial-of-service (DoS) packets, in an intermediate network node before the packets can be forwarded to a central processing unit (CPU) in the node. The CPU's processing bandwidth is therefore not consumed identifying and removing the malicious packets from the system memory. As such, processing of the malicious packets is essentially "off-loaded" from the CPU, thereby enabling the CPU to process non-malicious packets more efficiently. Unlike prior implementations, the invention identifies malicious packets having complex encapsulations that cannot be identified using traditional techniques, such as ternary content-addressable memories (TCAMs) or lookup tables.


Multi processor enqueue packet circuit
    9.
    Granted patent (in force)

    Publication No.: US07174394B1

    Publication Date: 2007-02-06

    Application No.: US10171957

    Filing Date: 2002-06-14

    IPC Class: G06F3/00

    CPC Class: G06F5/065 G06F2205/064

    Abstract: The present invention provides a system and method for a plurality of independent processors to simultaneously assemble requests in a context memory coupled to a coprocessor. A write manager coupled to the context memory organizes segments received from multiple processors to form requests for the coprocessor. Each received segment indicates a location in the context memory, such as an indexed memory block, where the segment should be stored. Illustratively, the write manager parses the received segments to their appropriate blocks of the context memory and detects when the last segment for a request has been received. The last segment may be identified by a predetermined address bit that is set, e.g., an upper-order bit. When the write manager receives the last segment for a request, the write manager (1) finishes assembling the request in a block of the context memory, (2) enqueues an index associated with the memory block in an index FIFO, and (3) sets a valid bit associated with the memory block. By setting the valid bit, the write manager prevents newly received segments from overwriting an assembled request that has not yet been forwarded to the coprocessor. When an index reaches the head of the index FIFO, a request is dequeued from the indexed block of the context memory and forwarded to the coprocessor.

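The write manager's control flow can be sketched as follows: segments carry an address whose low bits select a context-memory block and whose set upper-order bit marks the final segment, which enqueues the block's index and sets its valid bit until the assembled request is dequeued for the coprocessor. Block count and bit position are arbitrary choices for the sketch, not values from the patent.

```python
from collections import deque

NUM_BLOCKS = 4               # context memory size (illustrative)
LAST_SEGMENT_BIT = 1 << 7    # assumed "upper order bit" marking the last segment

class WriteManager:
    def __init__(self):
        self.context = [[] for _ in range(NUM_BLOCKS)]  # indexed memory blocks
        self.valid = [False] * NUM_BLOCKS               # one valid bit per block
        self.index_fifo = deque()                       # indices of ready requests

    def write_segment(self, address, data):
        index = address & (NUM_BLOCKS - 1)
        if self.valid[index]:
            # Valid bit guards an assembled, not-yet-forwarded request.
            raise RuntimeError("block holds an unforwarded request")
        self.context[index].append(data)
        if address & LAST_SEGMENT_BIT:   # last segment: request is complete
            self.index_fifo.append(index)
            self.valid[index] = True

    def dequeue_request(self):
        index = self.index_fifo.popleft()
        request = self.context[index]
        self.context[index] = []
        self.valid[index] = False        # block may be reused for new segments
        return request                   # would be forwarded to the coprocessor

wm = WriteManager()
wm.write_segment(2, "hdr")                       # processor A writes to block 2
wm.write_segment(3, "hdr")                       # processor B writes to block 3
wm.write_segment(2 | LAST_SEGMENT_BIT, "tail")   # A's final segment
assert wm.dequeue_request() == ["hdr", "tail"]   # A's request reaches the head
```

Because each processor targets its own indexed block, the two partially assembled requests never interfere even though their segments interleave in time.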

Apparatus and technique for maintaining order among requests directed to a same address on an external bus of an intermediate network node
    10.
    Granted patent (in force)

    Publication No.: US06832279B1

    Publication Date: 2004-12-14

    Application No.: US09859709

    Filing Date: 2001-05-17

    IPC Class: G06F13/00

    CPC Class: G06F13/1621 G06F13/405

    Abstract: An apparatus and technique off-load responsibility for maintaining order among requests directed to a same address on a split-transaction bus from a processor to a split-transaction bus controller, thereby increasing the performance of the processor. The present invention comprises an ordering circuit that enables the controller to defer issuing a subsequent (write) request directed to an address on the bus until a previous (read) request directed to the same address completes. By off-loading responsibility for maintaining order among requests from the processor to the controller, the invention enhances processor performance, since the processor may proceed with program execution without stalling to ensure such ordering. The ordering circuit maintains ordering in an efficient manner that is transparent to the processor.

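The deferral rule the ordering circuit enforces can be modeled compactly: requests issue immediately unless a write targets an address with an outstanding read, in which case the write waits in a per-address queue until that read completes. This is a behavioral sketch with assumed names, not a description of the hardware.

```python
from collections import deque

class OrderingController:
    """Toy model of the split-transaction bus controller's ordering rule."""

    def __init__(self):
        self.outstanding_reads = {}  # address -> count of reads still in flight
        self.deferred_writes = {}    # address -> queued write payloads
        self.issued = []             # requests actually placed on the bus

    def read(self, addr):
        self.outstanding_reads[addr] = self.outstanding_reads.get(addr, 0) + 1
        self.issued.append(("read", addr))

    def write(self, addr, data):
        if self.outstanding_reads.get(addr, 0):
            # Defer: a prior read to the same address has not completed yet.
            self.deferred_writes.setdefault(addr, deque()).append(data)
        else:
            self.issued.append(("write", addr, data))

    def read_complete(self, addr):
        self.outstanding_reads[addr] -= 1
        if not self.outstanding_reads[addr]:
            # Release any writes that were held behind the read.
            for data in self.deferred_writes.pop(addr, deque()):
                self.issued.append(("write", addr, data))

bus = OrderingController()
bus.read(0x100)
bus.write(0x100, "new")    # deferred behind the outstanding read of 0x100
bus.write(0x200, "other")  # different address: issues immediately
bus.read_complete(0x100)   # the deferred write now goes out
assert bus.issued == [("read", 0x100), ("write", 0x200, "other"),
                      ("write", 0x100, "new")]
```

The processor in this model never blocks: it posts the write and moves on, and the controller alone guarantees the read observes the old value, which is the transparency claim in the abstract.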