1. Instruction breakpoints in a multi-core, multi-thread network communications processor architecture
    Granted patent (in force)

    Publication No.: US08868889B2

    Publication Date: 2014-10-21

    Application No.: US12976045

    Filing Date: 2010-12-22

    Abstract: Described embodiments provide a packet classifier for a network processor that generates tasks corresponding to each received packet. The packet classifier includes a scheduler to generate threads of contexts corresponding to tasks received by the packet classifier from a plurality of processing modules of the network processor. A multi-thread instruction engine processes instructions corresponding to threads received from the scheduler. The multi-thread instruction engine executes instructions by fetching an instruction of the thread from an instruction memory of the packet classifier and determining whether a breakpoint mode of the network processor is enabled. If the breakpoint mode is enabled and the breakpoint indicator of the fetched instruction is set, the packet classifier enters breakpoint mode. Otherwise, if the breakpoint indicator of the fetched instruction is not set, the multi-thread instruction engine executes the fetched instruction.
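
    A minimal C sketch of the fetch-time breakpoint check described in the abstract; the type, field, and function names (instruction_t, classifier_t, step_thread) are illustrative assumptions, not taken from the patent:

        #include <stdbool.h>
        #include <stdint.h>

        /* Hypothetical instruction layout: an opcode plus a per-instruction
         * breakpoint indicator bit (names and fields are assumptions). */
        typedef struct {
            uint32_t opcode;
            bool     breakpoint_set;
        } instruction_t;

        typedef struct {
            bool breakpoint_mode_enabled;   /* global breakpoint enable */
            bool halted_at_breakpoint;      /* set when a thread is stopped */
        } classifier_t;

        /* One fetch step for a thread, following the flow in the abstract:
         * fetch from instruction memory, test the global enable and the
         * instruction's breakpoint indicator, then halt or execute. */
        static void step_thread(classifier_t *cls,
                                const instruction_t *imem,
                                uint32_t pc,
                                void (*execute)(const instruction_t *))
        {
            const instruction_t *insn = &imem[pc];      /* fetch */

            if (cls->breakpoint_mode_enabled && insn->breakpoint_set) {
                cls->halted_at_breakpoint = true;       /* enter breakpoint mode */
                return;
            }
            execute(insn);                              /* normal execution */
        }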

2. Reducing data read latency in a network communications processor architecture
    Granted patent (in force)

    Publication No.: US08505013B2

    Publication Date: 2013-08-06

    Application No.: US12975823

    Filing Date: 2010-12-22

    IPC Classification: G06F9/46, G06F12/06

    Abstract: Described embodiments provide address translation for data stored in at least one shared memory of a network processor. A processing module of the network processor generates tasks corresponding to each of a plurality of received packets. A packet classifier generates contexts for each task, each context associated with a thread of instructions to apply to the corresponding packet. A first subset of instructions is stored in a tree memory within the at least one shared memory. A second subset of instructions is stored in a cache within a multi-thread engine of the packet classifier. The multi-thread engine maintains status indicators corresponding to the first and second subsets of instructions within the cache and the tree memory and, based on the status indicators, accesses a lookup table while processing a thread to translate between an instruction number and the physical address of that instruction in the first or second subset of instructions.
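
    A minimal C sketch of the lookup-table translation described in the abstract; the enumeration, lut_entry_t layout, and translate_instruction are illustrative assumptions, not taken from the patent:

        #include <stdint.h>

        /* Status indicator for where an instruction currently resides
         * (enumeration and field names are assumptions). */
        typedef enum { IN_ENGINE_CACHE, IN_TREE_MEMORY } insn_location_t;

        typedef struct {
            insn_location_t location;    /* cached subset or tree-memory subset */
            uint32_t        phys_addr;   /* physical address within that memory */
        } lut_entry_t;

        /* Translate a logical instruction number to a physical address via the
         * lookup table; the caller then reads the instruction from the engine
         * cache or the shared tree memory depending on *where_out. */
        static uint32_t translate_instruction(const lut_entry_t *lut,
                                              uint32_t insn_number,
                                              insn_location_t *where_out)
        {
            const lut_entry_t *e = &lut[insn_number];
            *where_out = e->location;
            return e->phys_addr;
        }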

3. INSTRUCTION BREAKPOINTS IN A MULTI-CORE, MULTI-THREAD NETWORK COMMUNICATIONS PROCESSOR ARCHITECTURE
    Patent application (in force)

    Publication No.: US20110225394A1

    Publication Date: 2011-09-15

    Application No.: US12976045

    Filing Date: 2010-12-22

    IPC Classification: G06F9/312

    Abstract: Described embodiments provide a packet classifier for a network processor that generates tasks corresponding to each received packet. The packet classifier includes a scheduler to generate threads of contexts corresponding to tasks received by the packet classifier from a plurality of processing modules of the network processor. A multi-thread instruction engine processes instructions corresponding to threads received from the scheduler. The multi-thread instruction engine executes instructions by fetching an instruction of the thread from an instruction memory of the packet classifier and determining whether a breakpoint mode of the network processor is enabled. If the breakpoint mode is enabled and the breakpoint indicator of the fetched instruction is set, the packet classifier enters breakpoint mode. Otherwise, if the breakpoint indicator of the fetched instruction is not set, the multi-thread instruction engine executes the fetched instruction.

4. REDUCING DATA READ LATENCY IN A NETWORK COMMUNICATIONS PROCESSOR ARCHITECTURE
    Patent application (in force)

    Publication No.: US20110225588A1

    Publication Date: 2011-09-15

    Application No.: US12975823

    Filing Date: 2010-12-22

    IPC Classification: G06F9/46

    Abstract: Described embodiments provide address translation for data stored in at least one shared memory of a network processor. A processing module of the network processor generates tasks corresponding to each of a plurality of received packets. A packet classifier generates contexts for each task, each context associated with a thread of instructions to apply to the corresponding packet. A first subset of instructions is stored in a tree memory within the at least one shared memory. A second subset of instructions is stored in a cache within a multi-thread engine of the packet classifier. The multi-thread engine maintains status indicators corresponding to the first and second subsets of instructions within the cache and the tree memory and, based on the status indicators, accesses a lookup table while processing a thread to translate between an instruction number and the physical address of that instruction in the first or second subset of instructions.

7. Multi-threaded processing with hardware accelerators
    Granted patent (in force)

    Publication No.: US08949838B2

    Publication Date: 2015-02-03

    Application No.: US13474114

    Filing Date: 2012-05-17

    Abstract: Described embodiments process multiple threads of commands in a network processor. One or more tasks are generated corresponding to each received packet, and the tasks are provided to a packet processor module (MPP). A scheduler associates each received task with a command flow. A thread updater writes state data corresponding to the flow to a context memory. The scheduler determines an order of processing of the command flows. When a processing thread of a multi-thread processor is available, the thread updater loads state data for at least one scheduled flow from the context memory to one of the multi-thread processors. The multi-thread processor processes the next command of the flow based on the loaded state data. If the processed command requires operation of a co-processor module, the multi-thread processor sends a co-processor request and switches command processing from the first flow to a second flow.
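
    A minimal C sketch of the flow-switching step described in the abstract; flow_t, the stub functions, and process_on_thread are illustrative assumptions, not taken from the patent:

        #include <stdbool.h>

        /* Per-flow state that the thread updater would load from context
         * memory (all names are assumptions for illustration). */
        typedef struct {
            int  id;
            bool next_cmd_needs_coprocessor;   /* next command needs an accelerator */
        } flow_t;

        static void send_coprocessor_request(const flow_t *f) { (void)f; /* stub */ }
        static void execute_command(flow_t *f)                { (void)f; /* stub */ }

        /* One step for a hardware thread: either execute the next command of
         * the current flow, or issue a co-processor request and switch the
         * thread to the next scheduled flow, as described in the abstract. */
        static flow_t *process_on_thread(flow_t *current, flow_t *next_scheduled)
        {
            if (current->next_cmd_needs_coprocessor) {
                send_coprocessor_request(current);   /* command continues on the accelerator */
                return next_scheduled;               /* this thread picks up another flow */
            }
            execute_command(current);
            return current;                          /* keep working on the same flow */
        }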

8. Memory manager for a network communications processor architecture
    Granted patent (in force)

    Publication No.: US08677075B2

    Publication Date: 2014-03-18

    Application No.: US13359690

    Filing Date: 2012-01-27

    IPC Classification: G06F12/00

    Abstract: Described embodiments provide a network processor having a plurality of processing modules coupled to a system cache and a shared memory. A memory manager allocates blocks of the shared memory to a requesting one of the processing modules. The allocated blocks store data corresponding to packets received by the network processor. The memory manager maintains a reference count for each allocated memory block indicating the number of processing modules accessing the block. One of the processing modules reads the data stored in the allocated memory blocks, stores the read data to corresponding entries of the system cache, and operates on the data stored in the system cache. Upon completion of operation on the data, the processing module requests to decrement the reference count of each memory block. Based on the reference count, the memory manager invalidates the entries of the system cache and deallocates the memory blocks.
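
    A minimal C sketch of the reference-count handling described in the abstract; mem_block_t, the stubbed cache hook, and release_block are illustrative assumptions, not taken from the patent:

        #include <stdbool.h>
        #include <stdint.h>

        /* One allocated block of shared memory (names are assumptions). */
        typedef struct {
            uint32_t base;       /* block address in shared memory */
            int      refcount;   /* processing modules still referencing the block */
            bool     allocated;
        } mem_block_t;

        static void invalidate_cache_entries(uint32_t base) { (void)base; /* stub */ }

        /* A processing module finishes with the block and asks the memory
         * manager to decrement its reference count; at zero the corresponding
         * system-cache entries are invalidated and the block is deallocated. */
        static void release_block(mem_block_t *blk)
        {
            if (blk->refcount > 0 && --blk->refcount == 0) {
                invalidate_cache_entries(blk->base);
                blk->allocated = false;   /* return the block to the free pool */
            }
        }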

9. Exception detection and thread rescheduling in a multi-core, multi-thread network processor
    Granted patent (in force)

    Publication No.: US08537832B2

    Publication Date: 2013-09-17

    Application No.: US13046726

    Filing Date: 2011-03-12

    IPC Classification: H04L12/28

    Abstract: Described embodiments provide a packet classifier of a network processor having a plurality of processing modules. A scheduler generates a thread of contexts for each task generated by the network processor corresponding to each received packet. The thread corresponds to an order of instructions applied to the corresponding packet. A multi-thread instruction engine processes the threads of instructions. A function bus interface inspects instructions received from the multi-thread instruction engine for one or more exception conditions. If the function bus interface detects an exception, the function bus interface reports the exception to the scheduler and the multi-thread instruction engine. The scheduler reschedules the thread corresponding to the instruction having the exception for processing in the multi-thread instruction engine. Otherwise, the function bus interface provides the instruction to a corresponding destination processing module of the network processor.
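
    A minimal C sketch of the exception check described in the abstract; the instruction layout, the stub functions, and the out-of-range exception condition are illustrative assumptions, not taken from the patent:

        #include <stdbool.h>
        #include <stdint.h>

        /* An instruction as seen by the function bus interface (assumed layout). */
        typedef struct {
            uint32_t thread_id;
            uint32_t destination;   /* destination processing module */
        } fbi_instruction_t;

        static void reschedule_thread(uint32_t thread_id)            { (void)thread_id; /* stub */ }
        static void forward_to_module(const fbi_instruction_t *insn) { (void)insn;      /* stub */ }

        /* Illustrative exception condition only: an out-of-range destination. */
        static bool has_exception(const fbi_instruction_t *insn)
        {
            return insn->destination >= 16;
        }

        /* Function-bus-interface check from the abstract: an exception causes
         * the originating thread to be rescheduled; otherwise the instruction
         * is forwarded to its destination processing module. */
        static void fbi_dispatch(const fbi_instruction_t *insn)
        {
            if (has_exception(insn))
                reschedule_thread(insn->thread_id);
            else
                forward_to_module(insn);
        }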

10. Hash processing in a network communications processor architecture
    Granted patent (expired)

    Publication No.: US08321385B2

    Publication Date: 2012-11-27

    Application No.: US13046719

    Filing Date: 2011-03-12

    IPC Classification: G06F7/00, G06F17/00, G06F17/30

    Abstract: Described embodiments provide coherent processing of hash operations of a network processor having a plurality of processing modules. A hash processor of the network processor receives hash operation requests from the plurality of processing modules. A hash table identifier and bucket index corresponding to the received hash operation request are determined. An active index list is maintained of active hash operations for each hash table identifier and bucket index. If the hash table identifier and bucket index of the received hash operation request are in the active index list, the received hash operation request is deferred until the hash table identifier and bucket index corresponding to the received hash operation request clear from the active index list. Otherwise, the active index list is updated with the hash table identifier and bucket index of the received hash operation request, and the received hash operation request is processed.
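
    A minimal C sketch of the active-index-list admission check described in the abstract; the fixed-size list, field names, and try_start_hash_op are illustrative assumptions, not taken from the patent:

        #include <stdbool.h>
        #include <stdint.h>

        #define MAX_ACTIVE 64   /* size of the active index list (assumed) */

        /* One in-flight (hash table, bucket) pair. */
        typedef struct {
            uint32_t table_id;
            uint32_t bucket_index;
            bool     in_use;
        } active_entry_t;

        static active_entry_t active_list[MAX_ACTIVE];

        static bool is_active(uint32_t table_id, uint32_t bucket_index)
        {
            for (int i = 0; i < MAX_ACTIVE; i++)
                if (active_list[i].in_use &&
                    active_list[i].table_id == table_id &&
                    active_list[i].bucket_index == bucket_index)
                    return true;
            return false;
        }

        /* Admission check described in the abstract: a request whose
         * table/bucket pair is already active is deferred (returns false);
         * otherwise the pair is recorded and the request may be processed. */
        static bool try_start_hash_op(uint32_t table_id, uint32_t bucket_index)
        {
            if (is_active(table_id, bucket_index))
                return false;
            for (int i = 0; i < MAX_ACTIVE; i++) {
                if (!active_list[i].in_use) {
                    active_list[i] = (active_entry_t){ table_id, bucket_index, true };
                    return true;
                }
            }
            return false;   /* list full: defer as well */
        }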
