Priority based bandwidth allocation within real-time and non-real-time traffic streams
    51.
    Granted invention (expired)

    Publication No.: US07385997B2

    Publication date: 2008-06-10

    Application No.: US10118493

    Filing date: 2002-04-08

    IPC classes: H04L12/28 H04L12/56

    Abstract: A method and system for transmitting packets in a packet switching network. Packets received by a packet processor may be prioritized based on how urgently they must be processed. Packets that must be processed urgently may be referred to as real-time packets, and packets whose processing is not urgent as non-real-time packets. Real-time packets have a higher processing priority than non-real-time packets. A real-time packet may either be discarded or transmitted into a real-time queue based upon its value priority, the minimum and maximum rates for that value priority, and the current real-time queue congestion conditions. A non-real-time packet may either be discarded or transmitted into a non-real-time queue based upon its value priority, the minimum and maximum rates for that value priority, and the current real-time and non-real-time queue congestion conditions.

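    The abstract describes a two-step admission decision: classify each arriving packet as real-time or non-real-time, then either enqueue or discard it based on its value priority, the minimum and maximum rates configured for that value priority, and the current congestion of the queues. The C sketch below is a minimal illustration of that decision logic only; the structure fields, thresholds and function name are assumptions rather than the patent's actual implementation.

    #include <stdbool.h>
    #include <stddef.h>

    /* Illustrative sketch only: names and thresholds are assumptions. */

    enum traffic_class { REAL_TIME, NON_REAL_TIME };

    struct value_priority_cfg {
        double min_rate;      /* guaranteed rate for this value priority (pkts/s) */
        double max_rate;      /* ceiling rate for this value priority (pkts/s)    */
        double measured_rate; /* currently measured rate for this value priority  */
    };

    struct queue_state {
        size_t depth;          /* current queue occupancy (packets)               */
        size_t congest_thresh; /* occupancy above which the queue is congested    */
    };

    /* Decide whether to enqueue or discard a packet of the given class and value
     * priority.  Real-time packets consult only the real-time queue; non-real-time
     * packets also back off when the real-time queue is congested. */
    static bool admit_packet(enum traffic_class cls,
                             const struct value_priority_cfg *vp,
                             const struct queue_state *rt_q,
                             const struct queue_state *nrt_q)
    {
        /* Traffic below its guaranteed minimum rate is always admitted. */
        if (vp->measured_rate <= vp->min_rate)
            return true;

        /* Traffic above its maximum rate is always discarded. */
        if (vp->measured_rate > vp->max_rate)
            return false;

        /* Between the minimum and maximum rates, admission depends on congestion. */
        bool rt_congested  = rt_q->depth  >= rt_q->congest_thresh;
        bool nrt_congested = nrt_q->depth >= nrt_q->congest_thresh;
        if (cls == REAL_TIME)
            return !rt_congested;
        return !rt_congested && !nrt_congested;
    }

    In a real packet processor the measured rate would come from a rate estimator or token bucket, and each value priority would likely carry its own congestion thresholds; those details are outside the abstract.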

    Eliminating memory corruption when performing tree functions on multiple threads
    52.
    Granted invention (in force)

    Publication No.: US07036125B2

    Publication date: 2006-04-25

    Application No.: US10217529

    Filing date: 2002-08-13

    IPC classes: G06F9/46 G06F12/00

    CPC classes: G06F9/52

    Abstract: A method, system and computer program product for eliminating memory corruption when performing multi-threaded tree operations. A network processor may receive a command to perform a tree operation on a tree on one or more of multiple threads. When performing the requested tree operation, the network processor may lock one or more resources during a portion of its execution using one or more semaphores. A semaphore refers to a flag used to indicate whether the resource associated with it is locked or available. Locking refers to preventing the resource from being available to other threads. Hence, by locking one or more resources during only a portion of the tree operation, memory corruption may be eliminated in a multi-threaded system while keeping those resources unavailable to other threads for a minimal amount of time.

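    The key point of the abstract is that only the part of a tree operation that touches shared state runs under a semaphore, so other threads are blocked for as short a time as possible. The C sketch below illustrates that pattern with a POSIX binary semaphore guarding only the walk-and-link step of an insert; the tree layout and function names are hypothetical and not taken from the patent.

    #include <semaphore.h>
    #include <stdlib.h>

    /* Hypothetical binary tree node; the patent's tree format is not specified here. */
    struct node {
        unsigned int key;
        struct node *left, *right;
    };

    static struct node *tree_root;
    static sem_t tree_sem;              /* binary semaphore guarding the tree */

    void tree_init(void)
    {
        tree_root = NULL;
        sem_init(&tree_sem, 0, 1);      /* initial value 1 -> acts as a lock */
    }

    /* Node allocation happens outside the lock; only the step that reads and
     * writes shared pointers is performed while holding the semaphore, keeping
     * the window in which other threads are blocked as small as possible. */
    void tree_insert(unsigned int key)
    {
        struct node *n = malloc(sizeof(*n));
        n->key = key;
        n->left = n->right = NULL;

        sem_wait(&tree_sem);            /* lock the shared structure */
        struct node **link = &tree_root;
        while (*link != NULL)
            link = (key < (*link)->key) ? &(*link)->left : &(*link)->right;
        *link = n;
        sem_post(&tree_sem);            /* unlock */
    }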

Fast routing and non-blocking switch which accommodates multicasting and variable length packets
    56.
    Granted invention (expired)

    Publication No.: US6144662A

    Publication date: 2000-11-07

    Application No.: US100

    Filing date: 1998-04-13

    Abstract: The invention relates to a switching device which transports data packets from input ports to selected output ports. The payload of the packets is stored in a storage means. A switching means is provided which has more switch outputs than switch inputs and which switches sequentially between one switch input and several switch outputs while the payloads are being stored. Furthermore, the invention relates to a storing method which uses the switching means to store payloads in sequential order, and to a switching apparatus comprising several such switching devices. Furthermore, the invention relates to systems using the switching device as a scalable module.

    PCT information: PCT No. PCT/IB96/00658; Sec. 371 date 1998-04-13; Sec. 102(e) date 1998-04-13; PCT filed 1996-07-09; PCT Pub. No. WO98/02013; PCT Pub. date 1998-01-15.
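    The abstract's central idea is that a payload entering on one switch input is written once into shared storage while the switch steps sequentially through the outputs it is destined for, so multicast and variable-length packets do not require per-output copies. The C sketch below models that behaviour in software purely as an illustration; the port counts, slot sizes and function name are assumptions, not the hardware described by the patent.

    #include <stdio.h>
    #include <string.h>

    #define N_INPUTS   4
    #define N_OUTPUTS  16          /* more switch outputs than switch inputs */
    #define N_SLOTS    64
    #define PAYLOAD_SZ 64

    /* Shared payload storage: each payload is written once and referenced by
     * every output it is destined for (multicast without copying). */
    static char storage[N_SLOTS][PAYLOAD_SZ];
    static int  next_slot;

    /* Per-output queue of slot indices (deliberately tiny and unchecked). */
    static int out_queue[N_OUTPUTS][N_SLOTS];
    static int out_len[N_OUTPUTS];

    /* Accept a (possibly variable length) payload on an input port and switch
     * it sequentially to every output named in the destination set. */
    int switch_packet(int input_port, const char *payload, size_t len,
                      const int *dest_outputs, int n_dest)
    {
        (void)input_port;                    /* not needed in this simple model */
        if (len > PAYLOAD_SZ || next_slot >= N_SLOTS)
            return -1;

        int slot = next_slot++;
        memcpy(storage[slot], payload, len); /* payload stored exactly once */

        for (int i = 0; i < n_dest; i++) {   /* sequential fan-out to outputs */
            int out = dest_outputs[i];
            out_queue[out][out_len[out]++] = slot;
        }
        return 0;
    }

    int main(void)
    {
        int dests[] = {0, 5, 9};             /* multicast to three outputs */
        switch_packet(1, "hello", 5, dests, 3);
        printf("output 5 holds %d packet(s)\n", out_len[5]);
        return 0;
    }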

High speed buffer management of share memory using linked lists and plural buffer managers for processing multiple requests concurrently
    57.
    Granted invention (expired)

    Publication No.: US5432908A

    Publication date: 1995-07-11

    Application No.: US313656

    Filing date: 1994-09-27

    CPC classes: G06F5/06 G06F2205/064

    Abstract: The present invention relates to the management of a large and fast memory. The memory is logically subdivided into several smaller parts called buffers. A buffer-control memory (11), having as many sections for buffer-control records as there are buffers, is employed together with a buffer manager (12). The buffer manager (12) organizes and controls the buffers by keeping the corresponding buffer-control records in linked lists. A request manager (20), as part of the buffer manager (12), does or does not grant the allocation of a buffer. A stack manager (21) controls the free buffers by keeping their buffer-control records in a stack (23.1), and a FIFO manager (22) keeps the buffer-control records of allocated buffers in FIFO linked lists (23.2-23.n). The stack and FIFO managers (21), (22) are parts of the buffer manager (12), too.

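    The abstract describes buffer-control records that are chained into a stack of free buffers and into FIFO linked lists of allocated buffers, so allocation and release only relink records rather than move data. The C sketch below is a minimal reading of that structure, using array indices as the links a dedicated buffer-control memory would hold; the field and function names are assumptions, not the patent's.

    #include <stdio.h>

    #define N_BUFFERS 8
    #define N_FIFOS   2
    #define NIL      -1

    /* One buffer-control record per buffer; 'next_rec' chains records into
     * either the free stack or one of the FIFO lists. */
    static int next_rec[N_BUFFERS];

    static int free_top = NIL;                          /* stack of free buffers          */
    static int fifo_head[N_FIFOS], fifo_tail[N_FIFOS];  /* FIFO lists of allocated buffers */

    void buffers_init(void)
    {
        for (int i = 0; i < N_BUFFERS; i++) {   /* push every buffer on the free stack */
            next_rec[i] = free_top;
            free_top = i;
        }
        for (int q = 0; q < N_FIFOS; q++)
            fifo_head[q] = fifo_tail[q] = NIL;
    }

    /* Request-manager role: grant a buffer if one is free, otherwise refuse (NIL). */
    int buffer_alloc(int fifo)
    {
        if (free_top == NIL)
            return NIL;
        int b = free_top;                       /* pop from the free stack   */
        free_top = next_rec[b];

        next_rec[b] = NIL;                      /* append to the FIFO's tail */
        if (fifo_tail[fifo] == NIL)
            fifo_head[fifo] = b;
        else
            next_rec[fifo_tail[fifo]] = b;
        fifo_tail[fifo] = b;
        return b;
    }

    /* Release the oldest buffer of a FIFO back onto the free stack. */
    int buffer_release(int fifo)
    {
        int b = fifo_head[fifo];
        if (b == NIL)
            return NIL;
        fifo_head[fifo] = next_rec[b];
        if (fifo_head[fifo] == NIL)
            fifo_tail[fifo] = NIL;

        next_rec[b] = free_top;                 /* push back on the free stack */
        free_top = b;
        return b;
    }

    int main(void)
    {
        buffers_init();
        int a = buffer_alloc(0), b = buffer_alloc(0);
        printf("allocated %d and %d, released %d\n", a, b, buffer_release(0));
        return 0;
    }

    A hardware implementation would run the request, stack and FIFO managers concurrently so that multiple requests can be served at once; the sketch only shows the list manipulation itself.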