Traffic management
    1.
    Granted Patent

    Publication No.: US10158578B2

    Publication Date: 2018-12-18

    Application No.: US15269295

    Filing Date: 2016-09-19

    Abstract: One embodiment provides a network device. The network device includes a processor having at least one processor core; a network interface configured to transmit and receive packets at a line rate; a memory configured to store a scheduler hierarchical data structure; and a scheduler module. The scheduler module is configured to prefetch a next active pipe structure included in the hierarchical data structure, update credits for a current pipe and an associated subport, identify a next active traffic class within the current pipe based, at least in part, on a current pipe data structure, select a next queue associated with the identified next active traffic class, and schedule a next packet from the selected next queue for transmission by the network interface if the available traffic shaping token bucket credits and the available traffic class credits are greater than or equal to the credits required by the next packet.
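
    The scheduling condition in this abstract (a packet is dequeued only when both the pipe's traffic-shaping token bucket and the packet's traffic class hold enough credits) can be illustrated with a minimal sketch in C. The pipe_credits layout and the can_schedule/consume names are assumptions made for illustration, not taken from the patent.

```c
/* Minimal sketch of the dual credit check; illustrative only. */
#include <stdbool.h>
#include <stdint.h>

#define NUM_TRAFFIC_CLASSES 4

/* Illustrative credit state for one pipe: a traffic-shaping token bucket
 * plus per-traffic-class credit counters. Field names are assumptions. */
struct pipe_credits {
    uint32_t tb_credits;                        /* token bucket credits  */
    uint32_t tc_credits[NUM_TRAFFIC_CLASSES];   /* traffic class credits */
};

/* A packet demanding pkt_credits (typically its length in credit units)
 * may be scheduled only when both credit pools can cover it. */
static bool can_schedule(const struct pipe_credits *p, unsigned tc,
                         uint32_t pkt_credits)
{
    return p->tb_credits >= pkt_credits &&
           p->tc_credits[tc] >= pkt_credits;
}

/* Scheduling the packet consumes credits from both pools. */
static void consume(struct pipe_credits *p, unsigned tc, uint32_t pkt_credits)
{
    p->tb_credits     -= pkt_credits;
    p->tc_credits[tc] -= pkt_credits;
}

int main(void)
{
    struct pipe_credits p = { .tb_credits = 1500,
                              .tc_credits = { 1000, 1000, 1000, 1000 } };
    uint32_t pkt = 640;             /* next packet's credit demand */
    if (can_schedule(&p, 0, pkt))
        consume(&p, 0, pkt);        /* packet is scheduled; credits drop */
    return 0;
}
```

    In a full hierarchical scheduler the subport typically carries its own credit counters as well; the credit update step named in the abstract refills these counters before the check is made.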

    Technologies for a least recently used cache replacement policy using vector instructions

    Publication No.: US10789176B2

    Publication Date: 2020-09-29

    Application No.: US16059147

    Filing Date: 2018-08-09

    Abstract: Technologies for least recently used (LRU) cache replacement include a computing device having a processor with vector instruction support. The computing device retrieves, from memory, a bucket of an associative cache that includes multiple entries arranged from front to back. The bucket may be a 256-bit array including eight 32-bit entries. For lookups, a matching entry is located at a position in the bucket. The computing device executes a vector permutation processor instruction that moves the matching entry to the front of the bucket while preserving the order of the other entries. For insertion, an inserted entry is written at the back of the bucket. The computing device executes a vector permutation processor instruction that moves the inserted entry to the front of the bucket while preserving the order of the other entries. The permuted bucket is stored back to memory. Other embodiments are described and claimed.
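
    A minimal sketch of the move-to-front step using AVX2 intrinsics is shown below. It assumes an eight-entry bucket of 32-bit tags and a precomputed permutation table; the bucket layout and helper names are illustrative, not the claimed implementation.

```c
/* Build with: gcc -O2 -mavx2 lru_bucket.c */
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative 8-way bucket: one 256-bit row of eight 32-bit tags, ordered
 * most recently used (index 0) to least recently used (index 7). */
typedef struct { uint32_t tag[8]; } bucket_t;

/* Row p moves element p to slot 0, shifts elements 0..p-1 back by one,
 * and leaves elements p+1..7 where they are. */
static const uint32_t move_to_front[8][8] = {
    {0,1,2,3,4,5,6,7}, {1,0,2,3,4,5,6,7},
    {2,0,1,3,4,5,6,7}, {3,0,1,2,4,5,6,7},
    {4,0,1,2,3,5,6,7}, {5,0,1,2,3,4,6,7},
    {6,0,1,2,3,4,5,7}, {7,0,1,2,3,4,5,6},
};

/* Look up `tag`; on a hit, a single vpermd makes the hit entry the MRU
 * entry while preserving the relative order of all other entries. */
static int bucket_lookup(bucket_t *b, uint32_t tag)
{
    __m256i row    = _mm256_loadu_si256((const __m256i *)b->tag);
    __m256i needle = _mm256_set1_epi32((int)tag);
    __m256i eq     = _mm256_cmpeq_epi32(row, needle);
    int hits = _mm256_movemask_ps(_mm256_castsi256_ps(eq));
    if (hits == 0)
        return 0;                     /* miss */
    int pos = __builtin_ctz(hits);    /* matching lane (GCC/Clang builtin) */
    __m256i idx = _mm256_loadu_si256((const __m256i *)move_to_front[pos]);
    _mm256_storeu_si256((__m256i *)b->tag,
                        _mm256_permutevar8x32_epi32(row, idx));
    return 1;
}

/* Insertion: write the new tag at the back (index 7, evicting the LRU
 * entry), then apply the same permutation to bring it to the front. */
static void bucket_insert(bucket_t *b, uint32_t tag)
{
    b->tag[7] = tag;
    __m256i row = _mm256_loadu_si256((const __m256i *)b->tag);
    __m256i idx = _mm256_loadu_si256((const __m256i *)move_to_front[7]);
    _mm256_storeu_si256((__m256i *)b->tag,
                        _mm256_permutevar8x32_epi32(row, idx));
}

int main(void)
{
    bucket_t b = { { 10, 11, 12, 13, 14, 15, 16, 17 } };
    bucket_lookup(&b, 13);          /* 13 becomes the MRU entry */
    bucket_insert(&b, 42);          /* evicts the LRU entry (17) */
    for (int i = 0; i < 8; i++)
        printf("%u ", b.tag[i]);    /* prints: 42 13 10 11 12 14 15 16 */
    printf("\n");
    return 0;
}
```

    A single permute (vpermd here) reorders all eight entries at once, avoiding the element-by-element shifting that a scalar move-to-front update would need.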

    TECHNOLOGIES FOR A LEAST RECENTLY USED CACHE REPLACEMENT POLICY USING VECTOR INSTRUCTIONS

    Publication No.: US20190042471A1

    Publication Date: 2019-02-07

    Application No.: US16059147

    Filing Date: 2018-08-09

    Abstract: Technologies for least recently used (LRU) cache replacement include a computing device having a processor with vector instruction support. The computing device retrieves, from memory, a bucket of an associative cache that includes multiple entries arranged from front to back. The bucket may be a 256-bit array including eight 32-bit entries. For lookups, a matching entry is located at a position in the bucket. The computing device executes a vector permutation processor instruction that moves the matching entry to the front of the bucket while preserving the order of the other entries. For insertion, an inserted entry is written at the back of the bucket. The computing device executes a vector permutation processor instruction that moves the inserted entry to the front of the bucket while preserving the order of the other entries. The permuted bucket is stored back to memory. Other embodiments are described and claimed.

    Mapping application functional blocks to multi-core processors

    Publication No.: US10354033B2

    Publication Date: 2019-07-16

    Application No.: US15711740

    Filing Date: 2017-09-21

    Abstract: One embodiment provides a system to identify a "best" usage of a given set of CPU cores that maximizes the performance of a given application. The given application is parsed into a number of functional blocks, and the system maps the functional blocks to the given set of CPU cores to maximize the application's performance. The system determines and then tests various mappings to measure their performance, generally preferring mappings that maximize throughput per physical core. Before testing a mapping, the system checks whether it is redundant with any previously tested mapping. In addition, given a performance target for the application, the system determines the minimum number of CPU cores needed to meet that target.
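
    One way to read the "redundant mapping" step is that two mappings which differ only by a relabeling of the cores perform identically, so only one of them needs to be benchmarked. The C sketch below enumerates mappings in such a canonical form; the block count, core count, and benchmark stub are hypothetical placeholders, not details from the patent.

```c
#include <stdio.h>

#define NUM_BLOCKS 4   /* functional blocks parsed from the application */
#define MAX_CORES  3   /* physical cores available for the mapping      */

/* Stand-in for a real measurement: run the application with this
 * block-to-core mapping and report throughput. The value returned here
 * is a placeholder, not measured data. */
static double benchmark_mapping(const int map[NUM_BLOCKS], int cores_used)
{
    (void)map;
    return 10.0 * (double)cores_used;
}

/* Enumerate mappings in canonical form: core IDs must appear in first-use
 * order, so mappings that differ only by relabeling the cores are
 * generated (and benchmarked) exactly once. */
static void search(int map[NUM_BLOCKS], int block, int max_used,
                   double *best_per_core)
{
    if (block == NUM_BLOCKS) {
        int cores_used = max_used + 1;
        double per_core =
            benchmark_mapping(map, cores_used) / (double)cores_used;
        if (per_core > *best_per_core)  /* prefer throughput per core */
            *best_per_core = per_core;
        printf("blocks -> cores:");
        for (int i = 0; i < NUM_BLOCKS; i++)
            printf(" %d", map[i]);
        printf("  (%d cores)\n", cores_used);
        return;
    }
    int limit = (max_used + 1 < MAX_CORES) ? max_used + 1 : MAX_CORES - 1;
    for (int core = 0; core <= limit; core++) {
        map[block] = core;
        search(map, block + 1, core > max_used ? core : max_used,
               best_per_core);
    }
}

int main(void)
{
    int map[NUM_BLOCKS];
    double best_per_core = 0.0;
    search(map, 0, -1, &best_per_core);
    return 0;
}
```

    Finding the minimum core count for a performance target then amounts to running the same search with increasing core budgets and stopping at the first budget whose best mapping meets the target.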

    Traffic management
    8.
    Granted Patent

    Publication No.: US10091122B2

    Publication Date: 2018-10-02

    Application No.: US15396488

    Filing Date: 2016-12-31

    Abstract: One embodiment provides a network device. The network device includes a processor having at least one processor core; a network interface configured to transmit and receive packets at a line rate; a memory configured to store a scheduler hierarchical data structure; and a scheduler module. The scheduler module is configured to prefetch a next active pipe structure included in the hierarchical data structure, update credits for a current pipe and an associated subport, identify a next active traffic class within the current pipe based, at least in part, on a current pipe data structure, select a next queue associated with the identified next active traffic class, and schedule a next packet from the selected next queue for transmission by the network interface if the available traffic shaping token bucket credits and the available traffic class credits are greater than or equal to the credits required by the next packet.

    TRAFFIC MANAGEMENT
    9.
    Patent Application
    Status: Pending (Published)

    Publication No.: US20170070356A1

    Publication Date: 2017-03-09

    Application No.: US15269295

    Filing Date: 2016-09-19

    Abstract: One embodiment provides a network device. The network device includes a processor having at least one processor core; a network interface configured to transmit and receive packets at a line rate; a memory configured to store a scheduler hierarchical data structure; and a scheduler module. The scheduler module is configured to prefetch a next active pipe structure included in the hierarchical data structure, update credits for a current pipe and an associated subport, identify a next active traffic class within the current pipe based, at least in part, on a current pipe data structure, select a next queue associated with the identified next active traffic class, and schedule a next packet from the selected next queue for transmission by the network interface if the available traffic shaping token bucket credits and the available traffic class credits are greater than or equal to the credits required by the next packet.

