Flexible mesh structure for hierarchical scheduling
    21.
    Invention application
    Status: Granted

    Publication No.: US20060140192A1

    Publication Date: 2006-06-29

    Application No.: US11024957

    Filing Date: 2004-12-29

    CPC Classification: H04L47/24 H04L47/50

    Abstract: Systems and methods employing a flexible mesh structure for hierarchical scheduling are disclosed. The method generally includes reading a packet grouping configured in a two-dimensional mesh structure of N columns, each containing M packets; selecting and promoting a column best packet from each column to a final row containing N packets; and reading, selecting, and promoting a final best packet from the final row to the next level in the hierarchy. Each time a final best packet is selected and promoted, the mesh structure can be refreshed by replacing the packet corresponding to the final best packet and then reading, selecting, and promoting a column best packet from the column containing the replacement packet to the final row. Because only the column containing the replacement packet and the final row are read and compared for each refresh, the mesh structure reduces the read and compare cycles needed for schedule determination.

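    A minimal C sketch of the column-then-row selection flow described in the abstract, assuming small fixed dimensions N_COLS and M_ROWS, a departure-time field as the comparison key, and a trivial stand-in for loading a replacement packet; these names and values are illustrative and not taken from the patent.

    #include <stdio.h>
    #include <stdint.h>

    #define N_COLS 4
    #define M_ROWS 4

    typedef struct {
        uint32_t departure_time;   /* lower value wins the comparison */
        uint32_t id;
    } packet_t;

    static packet_t mesh[N_COLS][M_ROWS];   /* N columns of M packets          */
    static int      final_row[N_COLS];      /* row index of each column's best */

    /* Compare the M entries of one column and record the winner in final_row. */
    static void promote_column_best(int col)
    {
        int best = 0;
        for (int row = 1; row < M_ROWS; row++)
            if (mesh[col][row].departure_time < mesh[col][best].departure_time)
                best = row;
        final_row[col] = best;
    }

    /* Compare the N column winners in the final row and return the winning column. */
    static int select_final_best(void)
    {
        int best_col = 0;
        for (int col = 1; col < N_COLS; col++)
            if (mesh[col][final_row[col]].departure_time <
                mesh[best_col][final_row[best_col]].departure_time)
                best_col = col;
        return best_col;
    }

    int main(void)
    {
        /* Fill the mesh with arbitrary example packets. */
        for (int c = 0; c < N_COLS; c++)
            for (int r = 0; r < M_ROWS; r++) {
                mesh[c][r].departure_time = (uint32_t)((c * 7 + r * 13) % 23);
                mesh[c][r].id = (uint32_t)(c * M_ROWS + r);
            }

        for (int c = 0; c < N_COLS; c++)    /* initial full pass: one winner per column */
            promote_column_best(c);

        for (int pick = 0; pick < 5; pick++) {
            int col = select_final_best();
            packet_t *won = &mesh[col][final_row[col]];
            printf("promoted packet %u (t=%u) from column %d\n",
                   (unsigned)won->id, (unsigned)won->departure_time, col);

            /* Refresh: replace only the slot that was promoted, then re-scan
               just that column; the other N-1 columns stay untouched. */
            won->departure_time += 100;     /* stand-in for loading a replacement packet */
            promote_column_best(col);
        }
        return 0;
    }

    The point of the refresh step is that only one column (M reads) plus the final row (N reads) are re-examined per selection, rather than all N x M entries.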

    Expansion of compute engine code space by sharing adjacent control stores using interleaved program addresses
    22.
    Invention application
    Status: Pending (published)

    Publication No.: US20060095730A1

    Publication Date: 2006-05-04

    Application No.: US10955643

    Filing Date: 2004-09-30

    IPC Classification: G06F9/30

    Abstract: Method and apparatus to support expansion of compute engine code space by sharing adjacent control stores using interleaved addressing schemes. Instructions corresponding to an original instruction thread are partitioned into multiple interleaved sequences that are stored in respective control stores. During thread execution, instructions are retrieved from the control stores in a repeated order based on the interleaving scheme. For example, in one embodiment two compute engines share two control stores. Thus, instructions for a given thread are sequentially loaded from the control stores in an alternating manner. In another embodiment, four control stores are shared by four compute engines. In this case, the instructions in a thread are interleaved across four stores, and each store is accessed for every fourth instruction in the code sequence. Schemes are also provided for handling branching operations to maintain synchronized access to the control stores.

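    A minimal C sketch of the interleaved-address partitioning described in the abstract, assuming two shared control stores modeled as flat arrays and simple 32-bit words standing in for instructions; the store depth and the modulo mapping details are illustrative assumptions.

    #include <stdio.h>
    #include <stdint.h>

    #define NUM_STORES  2      /* e.g. two compute engines sharing two control stores */
    #define STORE_DEPTH 16

    typedef uint32_t instr_t;

    static instr_t control_store[NUM_STORES][STORE_DEPTH];

    /* Partition: instruction i of the original thread lands in store (i % NUM_STORES)
       at local offset (i / NUM_STORES). */
    static void load_program(const instr_t *prog, int len)
    {
        for (int i = 0; i < len; i++)
            control_store[i % NUM_STORES][i / NUM_STORES] = prog[i];
    }

    /* Fetch applies the same mapping, so consecutive program addresses
       alternate between the stores. */
    static instr_t fetch(int pc)
    {
        return control_store[pc % NUM_STORES][pc / NUM_STORES];
    }

    int main(void)
    {
        instr_t prog[8] = { 0xA0, 0xA1, 0xA2, 0xA3, 0xA4, 0xA5, 0xA6, 0xA7 };
        load_program(prog, 8);

        /* A taken branch simply sets pc to the target address; the modulo mapping
           keeps the store accesses aligned with the new address. */
        for (int pc = 0; pc < 8; pc++)
            printf("pc=%d store=%d instr=0x%X\n",
                   pc, pc % NUM_STORES, (unsigned)fetch(pc));
        return 0;
    }

    With this mapping, consecutive program addresses alternate between the two stores, which matches the two-engine/two-store embodiment in the abstract; the four-store case uses the same scheme with NUM_STORES set to 4.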

    Method and apparatus providing efficient queue descriptor memory access
    23.
    Invention application
    Status: Granted

    Publication No.: US20060069854A1

    Publication Date: 2006-03-30

    Application No.: US10955969

    Filing Date: 2004-09-30

    IPC Classification: G06F12/00

    CPC Classification: G06F12/121

    Abstract: A system having queue control structures includes a conflict avoidance mechanism to prevent memory bank conflicts during queue descriptor accesses. In one embodiment, a queue descriptor bank table contains information identifying the memory bank in which each queue descriptor is stored.

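    A minimal C sketch of a queue-descriptor bank table as described in the abstract, assuming four memory banks, a round-robin placement of descriptors, and a simple check that flags when two consecutive accesses would hit the same bank; the table layout and policy are illustrative assumptions rather than the patent's design.

    #include <stdio.h>
    #include <stdint.h>

    #define NUM_BANKS  4
    #define NUM_QUEUES 16

    static uint8_t bank_table[NUM_QUEUES];   /* queue id -> memory bank holding its descriptor */
    static int     last_bank = -1;           /* bank touched by the previous access */

    /* Assign descriptors round-robin so consecutive queue ids land in different banks. */
    static void init_bank_table(void)
    {
        for (int q = 0; q < NUM_QUEUES; q++)
            bank_table[q] = (uint8_t)(q % NUM_BANKS);
    }

    /* Look up the bank for a queue and report whether this access would hit the
       same bank as the immediately preceding access (a bank conflict). */
    static int access_descriptor(int queue_id)
    {
        int bank = bank_table[queue_id];
        int conflict = (bank == last_bank);
        last_bank = bank;
        return conflict;
    }

    int main(void)
    {
        init_bank_table();
        int sequence[] = { 0, 1, 4, 5, 2, 2 };   /* only the last pair hits bank 2 twice in a row */
        for (int i = 0; i < 6; i++) {
            int q = sequence[i];
            int conflict = access_descriptor(q);
            printf("queue %2d -> bank %d%s\n", q, bank_table[q],
                   conflict ? "  (bank conflict)" : "");
        }
        return 0;
    }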

    Efficient sort scheme for a hierarchical scheduler
    25.
    Invention application
    Status: Granted

    Publication No.: US20070223504A1

    Publication Date: 2007-09-27

    Application No.: US11389650

    Filing Date: 2006-03-23

    IPC Classification: H04L12/56

    Abstract: Scheduling of packets is performed by a scheduler based on departure times. If wrap-around of the departure times is possible, the departure times are transposed based on a zone associated with the last departure time. By using the zone to transpose the departure times before sorting, cycles spent independently checking each departure time are avoided.

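    A minimal C sketch of the zone-based transposition idea, assuming an 8-bit departure-time counter split into two zones; the counter width, the two-zone split, and the offset choice are illustrative assumptions, not the patent's parameters.

    #include <stdio.h>
    #include <stdint.h>

    #define TIME_BITS  8
    #define TIME_MASK  ((1u << TIME_BITS) - 1u)
    #define HALF_RANGE (1u << (TIME_BITS - 1))

    /* Zone 0 = lower half of the counter range, zone 1 = upper half. */
    static unsigned zone_of(uint8_t t) { return t >= HALF_RANGE ? 1 : 0; }

    /* Transpose a departure time by an offset chosen from the zone of the last
       departure time, so values that wrapped past zero still compare as later. */
    static uint8_t transpose(uint8_t t, uint8_t last_departure)
    {
        unsigned offset = zone_of(last_departure) ? HALF_RANGE : 0;
        return (uint8_t)((t - offset) & TIME_MASK);
    }

    int main(void)
    {
        /* Last departure near the top of the range: the wrapped times 2 and 5
           must still sort after 250 and 253. */
        uint8_t last = 248;
        uint8_t times[] = { 2, 253, 5, 250 };

        printf("raw -> transposed sort key\n");
        for (int i = 0; i < 4; i++)
            printf("%3u -> %3u\n", (unsigned)times[i],
                   (unsigned)transpose(times[i], last));
        /* With last = 248 (zone 1) the offset is 128: 250->122, 253->125,
           2->130, 5->133, so a plain unsigned comparison gives the right order. */
        return 0;
    }

    After transposition, a single unsigned comparison orders wrapped and non-wrapped times correctly, so no per-entry wrap check is needed during the sort.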

    Inter-thread communication of lock protected data
    26.
    Invention application
    Status: Pending (published)

    Publication No.: US20070044103A1

    Publication Date: 2007-02-22

    Application No.: US11190115

    Filing Date: 2005-07-25

    IPC Classification: G06F9/46

    CPC Classification: G06F9/526

    Abstract: In general, in one aspect, the disclosure describes a method that includes issuing, by a first thread at a first programmable unit of a set of multiple multi-threaded programmable units integrated within a single die, a request for a lock associated with data. The method also includes receiving, by the first thread, a grant for the lock and identification of a second thread that will receive a grant for the lock after the lock is released by the first thread. The first thread initiates transfer of the data associated with the lock to the one of the multi-threaded programmable units executing the second thread and then releases the lock.

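    A minimal, single-threaded C simulation of the hand-off protocol sketched in the abstract, assuming a software lock manager with a FIFO of waiters and per-thread local copies standing in for each programmable unit's local memory; the structure names and the transfer mechanism are illustrative assumptions.

    #include <stdio.h>

    #define NUM_THREADS 3
    #define NO_THREAD   (-1)

    typedef struct {
        int holder;                 /* thread currently owning the lock     */
        int waiters[NUM_THREADS];   /* FIFO of threads waiting for the lock */
        int head, tail;
    } lock_mgr_t;

    static int local_copy[NUM_THREADS];   /* each thread's local copy of the protected data */

    static void request_lock(lock_mgr_t *m, int tid)
    {
        if (m->holder == NO_THREAD)
            m->holder = tid;                /* immediate grant                  */
        else
            m->waiters[m->tail++] = tid;    /* queue behind the current holder  */
    }

    /* The grant response tells the holder which thread (if any) is next in line,
       mirroring the "identification of a second thread" in the abstract. */
    static int next_waiter(const lock_mgr_t *m)
    {
        return (m->head < m->tail) ? m->waiters[m->head] : NO_THREAD;
    }

    /* The holder pushes the data straight to the next thread's unit, then releases. */
    static void transfer_and_release(lock_mgr_t *m, int tid)
    {
        int next = next_waiter(m);
        if (next != NO_THREAD) {
            local_copy[next] = local_copy[tid];   /* direct unit-to-unit transfer */
            printf("thread %d transferred data %d to thread %d along with the lock\n",
                   tid, local_copy[tid], next);
            m->head++;
            m->holder = next;
        } else {
            m->holder = NO_THREAD;                /* nobody waiting: just release */
        }
    }

    int main(void)
    {
        lock_mgr_t m = { NO_THREAD, { 0 }, 0, 0 };
        request_lock(&m, 0);          /* thread 0 gets the lock immediately  */
        request_lock(&m, 1);          /* thread 1 queues behind it           */

        local_copy[0] = 42;           /* thread 0 updates the protected data */
        transfer_and_release(&m, 0);  /* data moves to thread 1 with the lock */
        transfer_and_release(&m, 1);  /* no more waiters: lock is released    */
        return 0;
    }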

Method and apparatus to support efficient check-point and roll-back operations for flow-controlled queues in network devices
    27.
    Invention application
    Status: Granted

    Publication No.: US20070008985A1

    Publication Date: 2007-01-11

    Application No.: US11173005

    Filing Date: 2005-06-30

    IPC Classification: H04L12/56

    Abstract: Method and apparatus to support efficient check-point and roll-back operations for flow-controlled queues in network devices. The method and apparatus employ queue descriptors to manage the transfer of data from corresponding queues in memory into a switch fabric. In one embodiment, each queue descriptor includes an enqueue pointer identifying the tail cell of a segment of data scheduled to be transferred from the queue, a schedule pointer identifying the head cell of the segment of data, and a commit pointer identifying the most recent cell in the segment of data to be successfully transmitted into the switch fabric. In another embodiment, the queue descriptor further includes a scheduler sequence number and a committed sequence number that are employed in connection with transfers of data from queues containing multiple segments. The various pointers and sequence numbers are employed to facilitate efficient check-point and roll-back operations relating to unsuccessful transmissions into the switch fabric.

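    A minimal C sketch of the three-pointer queue descriptor described in the abstract, assuming an in-memory array of cells and a per-cell acknowledgment flag standing in for the switch fabric's success/failure indication; the field names and the retry loop are illustrative assumptions.

    #include <stdio.h>

    #define QUEUE_LEN 8

    typedef struct {
        int enqueue;    /* tail cell of the segment scheduled for transfer */
        int schedule;   /* head cell: next cell to send into the fabric    */
        int commit;     /* most recent cell successfully transmitted       */
    } queue_desc_t;

    /* Send up to 'count' cells; ack[cell] stands in for the fabric reporting
       success (1) or failure (0) for that cell. */
    static void send_cells(queue_desc_t *q, int count, const int *ack)
    {
        for (int i = 0; i < count && q->schedule <= q->enqueue; i++) {
            int cell = q->schedule++;
            if (ack[cell]) {
                q->commit = cell;              /* check-point: this cell made it        */
            } else {
                q->schedule = q->commit + 1;   /* roll back to the last committed cell  */
                printf("cell %d failed, schedule rolled back to %d\n",
                       cell, q->schedule);
                return;
            }
        }
    }

    int main(void)
    {
        queue_desc_t q = { 5, 0, -1 };               /* cells 0..5 are queued           */
        int ack[QUEUE_LEN] = { 1, 1, 1, 0, 1, 1 };   /* cell 3 is dropped by the fabric */

        send_cells(&q, 6, ack);
        printf("after roll-back: schedule=%d commit=%d enqueue=%d\n",
               q.schedule, q.commit, q.enqueue);

        ack[3] = 1;                                  /* retransmission succeeds */
        send_cells(&q, 6, ack);
        printf("after retry:     schedule=%d commit=%d enqueue=%d\n",
               q.schedule, q.commit, q.enqueue);
        return 0;
    }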

    Method and apparatus to enable DRAM to support low-latency access via vertical caching
    29.
    Invention application
    Status: Expired

    Publication No.: US20060090039A1

    Publication Date: 2006-04-27

    Application No.: US10974122

    Filing Date: 2004-10-27

    IPC Classification: G06F12/00

    Abstract: Method and apparatus to enable slower memory, such as dynamic random access memory (DRAM)-based memory, to support low-latency access using vertical caching. Related function metadata used for packet-processing functions, including metering and flow statistics, is stored in an external DRAM-based store. In one embodiment, the DRAM comprises double data-rate (DDR) DRAM. A network processor architecture is disclosed that includes a DDR assist with a data cache coupled to a DRAM controller. The architecture further includes multiple compute engines used to execute various packet-processing functions. One such function is a DDR assist function that is used to pre-fetch a set of function metadata for a current packet and store the function metadata in the data cache. Subsequently, one or more packet-processing functions may operate on the function metadata by accessing it from the cache. After the functions are completed, the function metadata is written back to the DRAM-based store. The scheme provides performance similar to SRAM-based schemes, but uses much cheaper DRAM-type memory.

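    A minimal C sketch of the vertical-caching flow described in the abstract, assuming a small direct-mapped data cache in front of a DRAM-resident table of per-flow metering and statistics metadata; the metadata layout, cache size, and indexing are illustrative assumptions rather than the patented architecture.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define NUM_FLOWS   64    /* entries in the DRAM-based metadata store */
    #define CACHE_LINES 4     /* small on-chip data cache                 */

    typedef struct {
        uint64_t packet_count;   /* flow statistics */
        uint64_t byte_count;
        uint32_t meter_tokens;   /* metering state  */
    } flow_meta_t;

    static flow_meta_t dram_store[NUM_FLOWS];   /* slow external DRAM            */
    static flow_meta_t cache[CACHE_LINES];      /* fast local copies             */
    static int         cache_tag[CACHE_LINES];  /* flow id per line, -1 = empty  */

    /* DDR-assist step: pre-fetch the flow's metadata into a cache line,
       writing back whatever was there before. */
    static int prefetch(int flow_id)
    {
        int line = flow_id % CACHE_LINES;
        if (cache_tag[line] != flow_id) {
            if (cache_tag[line] >= 0)
                dram_store[cache_tag[line]] = cache[line];   /* evict: write back */
            cache[line] = dram_store[flow_id];               /* load from DRAM    */
            cache_tag[line] = flow_id;
        }
        return line;
    }

    /* Packet-processing functions touch only the cached copy. */
    static void update_stats(int line, uint32_t pkt_len)
    {
        cache[line].packet_count++;
        cache[line].byte_count += pkt_len;
        if (cache[line].meter_tokens >= pkt_len)
            cache[line].meter_tokens -= pkt_len;
    }

    int main(void)
    {
        memset(cache_tag, -1, sizeof cache_tag);
        for (int f = 0; f < NUM_FLOWS; f++)
            dram_store[f].meter_tokens = 1500;

        int line = prefetch(7);        /* DDR assist loads flow 7's metadata   */
        update_stats(line, 64);        /* metering + statistics hit the cache  */
        update_stats(line, 128);
        dram_store[7] = cache[line];   /* write back when processing completes */

        printf("flow 7: packets=%llu bytes=%llu tokens=%u\n",
               (unsigned long long)dram_store[7].packet_count,
               (unsigned long long)dram_store[7].byte_count,
               (unsigned)dram_store[7].meter_tokens);
        return 0;
    }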