1. Using ordered locking mechanisms to maintain sequences of items such as packets
    Granted Patent (in force)

    Publication No.: US07626987B2

    Publication Date: 2009-12-01

    Application No.: US10706704

    Filing Date: 2003-11-12

    IPC Class: H04L12/56

    Abstract: Sequences of items may be maintained using ordered locks. These items may correspond to anything, but using ordered locks to maintain sequences of packets may be particularly useful. One implementation uses a locking request, acceptance, and release protocol. One implementation associates instructions with locking requests such that, when a lock is acquired, the locking mechanism executes (or causes to be executed) the associated instructions; an acceptance request for the lock is implied by the association of instructions, or may be explicitly requested. In some applications, the ordering of the entire sequence of packets need not be preserved, but only the ordering among certain sub-sequences of the entire sequence of items; this can be accomplished by converting an initial root ordered lock (maintaining the sequence of the entire stream of items) into various other locks (each maintaining the sequence of a different sub-stream of items).

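The request/acceptance/release protocol with associated instructions can be sketched as a queue-ordered lock. This is a minimal single-threaded illustration under assumed semantics, not the patented implementation; the class and method names are illustrative:

```python
from collections import deque

class OrderedLock:
    """Grants the lock strictly in request order (illustrative sketch)."""

    def __init__(self):
        self._pending = deque()   # queued (request_id, action) pairs
        self._held = False

    def request(self, request_id, action=None):
        # Associating instructions (`action`) with the request implies the
        # acceptance request: the action runs as soon as the lock is granted.
        self._pending.append((request_id, action))
        self._grant_next()

    def release(self):
        self._held = False
        self._grant_next()

    def _grant_next(self):
        if not self._held and self._pending:
            request_id, action = self._pending.popleft()
            self._held = True
            if action is not None:
                action(request_id)
```

Requests made in packet-arrival order are then serviced in that same order, regardless of when each packet's processing completes.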

2. Thread-aware instruction fetching in a multithreaded embedded processor
    Granted Patent (in force)

    Publication No.: US07441101B1

    Publication Date: 2008-10-21

    Application No.: US10773385

    Filing Date: 2004-02-05

    IPC Class: G06F9/312 G06F9/48

    CPC Class: G06F9/3851 G06F9/3802

    Abstract: The present invention provides a multithreaded processor, such as a network processor, that fetches instructions in a pipeline stage based on feedback signals from later stages. The multithreaded processor comprises a pipeline with an instruction unit in the early stage and an instruction queue, a thread interleaver, and an execution pipeline in the later stages. Feedback signals from the later stages cause the instruction unit to block fetching, immediately fetch, raise priority, or lower priority for a particular thread. The instruction queue generates a queue signal, on a per-thread basis, responsive to conditions such as a thread queue condition; the thread interleaver generates an interleaver signal responsive to conditions such as a thread condition; and the execution pipeline generates an execution signal responsive to conditions such as an execution stall.

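The mapping from later-stage feedback signals to per-thread fetch actions can be sketched as a small decision function. The signal and action names below are illustrative assumptions, not the patent's terminology:

```python
def fetch_decision(queue_almost_full, interleaver_starved, execution_stalled):
    """Map one thread's feedback signals to a fetch action (sketch)."""
    if queue_almost_full:
        return "block"        # instruction-queue signal: stop fetching
    if interleaver_starved:
        return "fetch_now"    # interleaver signal: fetch immediately
    if execution_stalled:
        return "lower"        # execution-pipeline signal: lower priority
    return "normal"
```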

3. Thread interleaving in a multithreaded embedded processor
    Granted Patent (in force)

    Publication No.: US07360064B1

    Publication Date: 2008-04-15

    Application No.: US10733153

    Filing Date: 2003-12-10

    IPC Class: G06F9/38

    Abstract: The present invention provides a multithreaded processor, such as a network processor, including a thread interleaver that implements fine-grained thread decisions to avoid underutilization of instruction execution resources despite large communication latencies. In an upper pipeline, an instruction unit determines an instruction fetch sequence responsive to instruction queue depth on a per-thread basis. In a lower pipeline, a thread interleaver determines a thread interleave sequence responsive to thread conditions, including thread latency conditions. The thread interleaver selects threads using two-level round-robin arbitration. Thread latency signals are asserted in response to thread latencies such as thread stalls, cache misses, and interlocks. During the subsequent clock cycle or cycles, the thread is ineligible for arbitration. In one embodiment, other thread conditions, such as local priority, global stalls, and late stalls, affect selection decisions.

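Two-level round-robin arbitration with a latency-based ineligibility set can be sketched as follows. This is a simplified software model under assumed semantics, not the hardware arbiter; all names are illustrative:

```python
from collections import deque

def select_thread(high, low, ineligible):
    """Two-level round-robin: serve high-priority threads round-robin first,
    falling back to low-priority ones. `ineligible` holds threads with an
    active latency signal (stall, cache miss, interlock) -- sketch only."""
    for group in (high, low):
        for _ in range(len(group)):
            t = group[0]
            group.rotate(-1)      # advance this level's round-robin pointer
            if t not in ineligible:
                return t
    return None                   # no thread eligible this cycle
```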

4. Dropping cells of a same packet sent among multiple paths within a packet switching device

    Publication No.: US10305787B2

    Publication Date: 2019-05-28

    Application No.: US14687425

    Filing Date: 2015-04-15

    Abstract: In one embodiment, cells of a same packet are sent among multiple paths within a packet switching device. Each of these cells is associated with a same drop value for use in determining whether to drop or forward the cell at multiple positions within a packet switching fabric of a packet switching device in light of a current congestion measurement. In one embodiment, the drop value is calculated at each of these multiple positions based on fields of the cell that are packet-variant, but not cell-variant, so the same drop value is calculated for each cell of a packet. In one embodiment, at least one of these fields provides entropy (e.g., a timestamp of the packet) such that a produced drop value has, or approximately has, an equal probability of being any value within a predetermined range, for fairness purposes.

5. DFA sequential matching of regular expression with divergent states
    Granted Patent (in force)

    Publication No.: US07689530B1

    Publication Date: 2010-03-30

    Application No.: US11144476

    Filing Date: 2005-06-03

    IPC Class: G06F15/18 G06F15/00

    CPC Class: G06F17/30985

    Abstract: Disclosed are, inter alia, methods, apparatus, data structures, computer-readable media, and mechanisms for identifying matches to a series of regular expressions, the series including a first regular expression followed by a second regular expression. The approach avoids the potential overlap of characters used in matching the first and second regular expressions, while allowing individual deterministic finite automata (DFAs) to be used, whether standalone or as a merged DFA, which decreases the number of states required to represent the series of regular expressions. This potential overlap of characters can be avoided by marking states in a merged DFA as "divergent" in order to mask (e.g., ignore) a matching of the second regular expression during the potential overlap, or by using another DFA corresponding to the second regular expression for use during this divergent period.

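The goal — matching a first regular expression followed by a second, with no character consumed by both — can be illustrated with Python's `re` engine rather than explicit DFAs. This sketch only demonstrates the overlap-masking behavior the patent's divergent-state technique achieves inside a merged DFA:

```python
import re

def sequential_match(first, second, text):
    """Match `first` then `second` with no character overlap (sketch).
    The second match may only begin after the first match ends, which
    masks any match of `second` inside the 'divergent' overlap region."""
    m1 = re.search(first, text)
    if not m1:
        return None
    m2 = re.compile(second).search(text, m1.end())  # start past the overlap
    if not m2:
        return None
    return (m1.span(), m2.span())
```

For example, against `"abc"`, `"ab"` followed by `"bc"` fails — the only `"bc"` match would reuse the `b` already consumed — while `"ab"` followed by `"c"` succeeds.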

6. Method and apparatus using a random indication to map items to paths and to recirculate or delay the sending of a particular item when a destination over its mapped path is unreachable
    Granted Patent (in force)

    Publication No.: US07613200B1

    Publication Date: 2009-11-03

    Application No.: US10051728

    Filing Date: 2002-01-15

    IPC Class: H04L12/54

    Abstract: Methods and apparatus are disclosed for using a random indication to map items to paths and to recirculate or delay the sending of a particular item when a destination over its mapped path is unreachable, including, but not limited to, the context of sending packets across multiple paths in a packet switching system. In one implementation, a set of items is buffered, the set including a first and a second set of items. The items in the first set are forwarded over a set of paths in a first configuration. The set of paths is reconfigured into a second configuration, and the items in the second set are forwarded over the set of paths in the second configuration. In one implementation, a recirculation buffer holds items not immediately sent. In one implementation, the paths are reconfigured in a random fashion.

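The random mapping plus recirculation buffer can be sketched as below. All names are illustrative, and the uniform random path choice is an assumption standing in for whatever random indication the implementation uses:

```python
import random
from collections import deque

def send_items(items, paths, reachable, recirc=None):
    """Map each item to a path by a random indication; items whose mapped
    destination is unreachable go to a recirculation buffer for a later
    attempt instead of being sent (sketch)."""
    if recirc is None:
        recirc = deque()
    sent = []
    for item in items:
        path = paths[random.randrange(len(paths))]   # random indication
        if reachable(path):
            sent.append((item, path))
        else:
            recirc.append(item)                      # delay and retry later
    return sent, recirc
```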

7. Method and apparatus for an adaptive rate control mechanism reactive to flow control messages in a packet switching system
    Granted Patent (in force)

    Publication No.: US07269139B1

    Publication Date: 2007-09-11

    Application No.: US09894199

    Filing Date: 2001-06-27

    IPC Class: H04L12/26

    Abstract: Methods and apparatus are disclosed for an adaptive rate control mechanism reactive to flow control messages in a packet switching system and other communications and computer systems. Typically, a multiplicative-increase and exponential-decrease technique is used to throttle traffic. Backpressure feedback is used to calculate the initial rate at which to allow traffic after backpressure is deasserted, which reduces the probability of buffer underrun (e.g., too little traffic being carried). The initial rate is adjusted by measuring the time between XON and XOFF in factor periods and subtracting a target XON time. If the result is positive (i.e., the measured XON time was too long), the rate is multiplicatively increased (e.g., by two to the power of the difference). If the result is negative (i.e., the measured XON time was too short), the rate is exponentially decreased (e.g., to its square root).

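The increase/decrease rule from the abstract can be written out directly. This is a sketch of the stated example policy (factor of two to the difference; square root on decrease); parameter names and units are illustrative assumptions:

```python
import math

def adjust_initial_rate(rate, xon_time, target_xon, factor_period=1.0):
    """Adjust the post-backpressure initial rate (sketch).
    Times are measured in factor periods."""
    diff = (xon_time - target_xon) / factor_period
    if diff > 0:
        # Measured XON was too long (traffic too slow): multiplicative
        # increase, e.g. by two to the power of the difference.
        return rate * (2.0 ** diff)
    if diff < 0:
        # Measured XON was too short (traffic too fast): exponential
        # decrease, e.g. to the square root of the rate.
        return math.sqrt(rate)
    return rate
```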

8. Methods and apparatus for communicating time and latency sensitive information
    Granted Patent (in force)

    Publication No.: US07051259B1

    Publication Date: 2006-05-23

    Application No.: US10266466

    Filing Date: 2002-10-08

    IPC Class: H03M13/00

    Abstract: Methods and apparatus are disclosed for communicating time- and latency-sensitive information in a system such as, but not limited to, a computer or communications system. A first block of data is identified and transmitted, and a check code is partially determined based on this first data. While the first data is being transmitted, the time-sensitive data (e.g., flow control or other control information) is identified. This identified time-sensitive data is then transmitted contiguously after the first data. The determination of the check code is completed based on the time-sensitive data, and the check code is transmitted contiguously after the time-sensitive data. One implementation receives the first data, the time-sensitive data, and the check code. If error correction is being used and is needed, the time-sensitive data is corrected first based on the check code, and the first data is corrected subsequently. In this manner, the latency of the availability of the time-sensitive data may be reduced.

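The deferred check-code idea can be sketched with an incrementally computed CRC: the check code is started over the first block while the time-sensitive data is still being identified, then completed over that data. CRC32 here is an illustrative stand-in for whatever check code the implementation uses:

```python
import zlib

def transmit(first_block, get_time_sensitive):
    """Sketch: begin the check code over the first block, identify the
    time-sensitive data 'during transmission', then finish the code over
    both and append it contiguously."""
    wire = bytearray(first_block)
    partial = zlib.crc32(first_block)            # check code partially determined
    time_sensitive = get_time_sensitive()        # identified while first data sent
    wire += time_sensitive                       # sent contiguously after first data
    check = zlib.crc32(time_sensitive, partial)  # completed over both blocks
    wire += check.to_bytes(4, "big")             # sent contiguously after
    return bytes(wire)
```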

9. Method and apparatus for distributing information within a packet switching system
    Granted Patent (in force)

    Publication No.: US07016305B1

    Publication Date: 2006-03-21

    Application No.: US09894200

    Filing Date: 2001-06-27

    IPC Class: H04L12/56

    Abstract: Methods and apparatus are disclosed for distributing flow control information in a packet switching system. In one packet switching system, flow control information is collected in a data structure in the first-stage switching elements. Each of these switching elements transmits data from the flow control data structure as small messages or in fields included in packets sent across multiple statically allocated paths. Flow control information is received by next-stage elements, which are programmed to forward only flow control information received from a limited number of components or over a limited number of paths. The first-stage switching elements may also periodically or occasionally delay sending flow control information, or send a dummy message or information, to accommodate bandwidth transmission differences between components of the packet switching system, including bandwidth variations caused by plesiochronous timing across the network.

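The next-stage filtering rule can be sketched as follows. The representation of flow-control updates as `(source, info)` pairs and the allow-set configuration are illustrative assumptions:

```python
def forward_flow_control(received, allowed_sources):
    """Next-stage element sketch: forward only flow control information
    that arrived from a limited, configured set of components, dropping
    duplicate copies received over the other paths."""
    return [(src, info) for src, info in received if src in allowed_sources]
```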

10. Method and apparatus for using barrier phases to limit packet disorder in a packet switching system

    Publication No.: US06967926B1

    Publication Date: 2005-11-22

    Application No.: US09752422

    Filing Date: 2000-12-31

    IPC Class: H04L12/00 H04L12/56

    Abstract: Methods and apparatus are disclosed for using barrier phases to limit the disorder of packets, which may be used in a computer or communications system. In one packet switching system, source nodes include an indication of their current barrier state in sent packets. For each barrier state, a predetermined range of sequence numbers may be used or a predetermined number of packets may be sent by a source node. The source, destination, and switching nodes are systematically switched between barrier phases, typically continuously in response to the flow of barrier request and barrier acknowledgement packets or signals. Each source node broadcasts to all forward-connected nodes a barrier request to change to the next barrier state. After a switching node has received a barrier request on all incoming links, it propagates the barrier request. Upon receiving barrier requests over all links, each destination stage relays an acknowledgement message to all connected source elements, which then send a barrier acknowledgement in much the same way; each source element then changes its barrier state, causing the sequence number or counting space to be reset and newly sent packets to indicate the new barrier state. Upon receiving all of its barrier acknowledgement messages, each destination stage changes its barrier state, and the destination can then manipulate (e.g., resequence, reassemble, send, or place in an output queue) packets marked with the previous barrier state, as it knows that every packet from the previous barrier state has been received. This transition of barrier phases, together with limiting the number of packets sent per barrier phase, may be used to limit the range of the sequence number space and the size of outgoing, resequencing, and reassembly buffers, as well as providing a packet time-out mechanism that may be especially useful when non-continuous sequence numbers or time-stamps are included in packets for resequencing and/or reassembly purposes.
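The source-side behavior — tagging packets with the current barrier state and resetting the counting space on a phase change — can be sketched as a small state machine. The two-phase toggle and packet layout are illustrative assumptions:

```python
class BarrierSource:
    """Source node sketch: packets carry the current barrier state, and the
    sequence-number space resets on each barrier phase change."""

    def __init__(self):
        self.barrier_state = 0   # assumed two-phase (0/1) barrier
        self.seq = 0

    def send(self, payload):
        # Each sent packet indicates the source's current barrier state.
        pkt = (self.barrier_state, self.seq, payload)
        self.seq += 1
        return pkt

    def barrier_ack(self):
        # On receiving all barrier acknowledgements, switch phase and
        # reset the sequence-number / counting space.
        self.barrier_state ^= 1
        self.seq = 0
```

A destination that has seen the phase flip knows every phase-0 packet has arrived and may safely resequence or reassemble them.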