Technologies for network device flow lookup management

    Publication Number: US10284470B2

    Publication Date: 2019-05-07

    Application Number: US14580801

    Application Date: 2014-12-23

    Abstract: Technologies for managing network flow lookups of a network device include a network controller and a target device, each communicatively coupled to the network device. The network device includes a cache for a processor of the network device and a main memory. The network device additionally includes a multi-level hash table having a first-level hash table stored in the cache of the network device and a second-level hash table stored in the main memory of the network device. The network device is configured to determine whether to store a network flow hash corresponding to a network flow indicating the target device in the first-level or second-level hash table based on a priority of the network flow provided to the network device by the network controller.
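
    A minimal sketch of the placement policy described above, assuming hypothetical names, sizes, and thresholds (the patent does not publish an implementation): high-priority flow hashes go to a small table modeling the cache-resident first level, and everything else falls back to a larger table modeling the main-memory second level.

```python
# Hypothetical sketch: steer flow hashes into a small "cache-resident"
# first-level table or a larger "memory-resident" second-level table
# based on a priority supplied by the network controller.

class TwoLevelFlowTable:
    def __init__(self, first_level_slots=1024, priority_threshold=10):
        self.first_level = {}             # models the cache-resident table
        self.second_level = {}            # models the main-memory table
        self.first_level_slots = first_level_slots
        self.priority_threshold = priority_threshold

    def insert(self, flow_hash, target, priority):
        # High-priority flows go to the first level while it has room;
        # everything else lands in the second level.
        if priority >= self.priority_threshold and len(self.first_level) < self.first_level_slots:
            self.first_level[flow_hash] = target
        else:
            self.second_level[flow_hash] = target

    def lookup(self, flow_hash):
        # A first-level hit models a cache hit; the second level models
        # a main-memory access.
        return self.first_level.get(flow_hash) or self.second_level.get(flow_hash)


table = TwoLevelFlowTable()
table.insert(flow_hash=0xBEEF, target="10.0.0.7", priority=42)  # first level
table.insert(flow_hash=0xCAFE, target="10.0.0.9", priority=1)   # second level
print(table.lookup(0xBEEF), table.lookup(0xCAFE))
```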

    DENIAL OF SERVICE MITIGATION WITH TWO-TIER HASH

    Publication Number: US20190104150A1

    Publication Date: 2019-04-04

    Application Number: US15720821

    Application Date: 2017-09-29

    Abstract: A computing apparatus for providing a node within a distributed network function, including: a hardware platform; a network interface to communicatively couple to at least one other peer node of the distributed network function; a distributor function including logic to operate on the hardware platform, including a hashing module configured to receive an incoming network packet via the network interface and perform on the incoming network packet a first-level hash of a two-level hash, the first level hash being a lightweight hash with respect to a second-level hash, the first level hash to deterministically direct a packet to one of the nodes of the distributed network function as a directed packet; and a denial of service (DoS) mitigation engine to receive notification of a DoS attack, identify a DoS packet via the first-level hash, and prevent the DoS packet from reaching the second-level hash.
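
    A rough sketch of the two-tier idea under assumed details (CRC32 standing in for the lightweight first-level hash, SHA-256 for the heavier second-level hash; all names are hypothetical): packets whose first-level hash matches a reported attack are dropped before the second-level hash ever runs.

```python
# Hypothetical sketch: a cheap first-level hash spreads packets across peer
# nodes; once a DoS attack is reported, packets sharing the attack traffic's
# first-level hash are dropped before the expensive second-level hash.

import hashlib
import zlib

NODES = ["node-a", "node-b", "node-c"]
blocked_hashes = set()                    # populated by the DoS mitigation engine

def first_level_hash(packet: bytes) -> int:
    # Lightweight hash: CRC32 over the packet header bytes.
    return zlib.crc32(packet[:20])

def second_level_hash(packet: bytes) -> str:
    # Heavier hash, only reached by packets that pass mitigation.
    return hashlib.sha256(packet).hexdigest()

def on_dos_notification(attack_packet: bytes) -> None:
    # The mitigation engine learns the first-level hash of the attack traffic.
    blocked_hashes.add(first_level_hash(attack_packet))

def distribute(packet: bytes):
    h = first_level_hash(packet)
    if h in blocked_hashes:
        return None                       # DoS packet dropped before second level
    return NODES[h % len(NODES)], second_level_hash(packet)

on_dos_notification(b"attack-header-bytes-....")
print(distribute(b"attack-header-bytes-...."))   # None: dropped
print(distribute(b"benign-header-bytes-...."))   # (node, digest)
```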

    COMPUTE NODE CLUSTER BASED ROUTING METHOD AND APPARATUS

    Publication Number: US20180234336A1

    Publication Date: 2018-08-16

    Application Number: US15433758

    Application Date: 2017-02-15

    CPC classification number: H04L45/46 H04L45/28 H04L45/745

    Abstract: Apparatus and method to facilitate networked compute node cluster routing are disclosed herein. In some embodiments, a compute node for cluster compute may include one or more input ports to receive data packets from first selected ones of a cluster of compute nodes; one or more output ports to route data packets to second selected ones of the cluster of compute nodes; and one or more processors, wherein the one or more processors include logic to determine a particular output port, of the one or more output ports, to which a data packet received at the one or more input ports is to be routed, and wherein the logic is to exclude, as candidates for the particular output port to which the data packet is to be routed, output ports associated with links indicated in fault status information as having a fault status.
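
    A short illustrative sketch, with hypothetical names and data, of excluding faulty links from output-port selection as described above:

```python
# Hypothetical sketch: pick an output port for a packet while excluding
# ports whose links are marked faulty in the fault status information.

def select_output_port(dest_hash: int, output_ports: list[int],
                       fault_status: dict[int, bool]) -> int:
    # Keep only ports whose links are not reported as faulty.
    healthy = [p for p in output_ports if not fault_status.get(p, False)]
    if not healthy:
        raise RuntimeError("no healthy output port available")
    # Deterministic choice among healthy ports, e.g. by hashing the destination.
    return healthy[dest_hash % len(healthy)]

ports = [0, 1, 2, 3]
faults = {1: True}                         # link on port 1 reported faulty
print(select_output_port(dest_hash=0xABCD, output_ports=ports, fault_status=faults))
```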

    TECHNOLOGIES FOR DISTRIBUTED ROUTING TABLE LOOKUP

    Publication Number: US20180019943A1

    Publication Date: 2018-01-18

    Application Number: US15717287

    Application Date: 2017-09-27

    Abstract: Technologies for distributed table lookup via a distributed router include an ingress computing node, an intermediate computing node, and an egress computing node. Each computing node of the distributed router includes a forwarding table to store a different set of network routing entries obtained from a routing table of the distributed router. The ingress computing node generates a hash key based on the destination address included in a received network packet. The hash key identifies the intermediate computing node of the distributed router that stores the forwarding table that includes a network routing entry corresponding to the destination address. The ingress computing node forwards the received network packet to the intermediate computing node for routing. The intermediate computing node receives the forwarded network packet, determines a destination address of the network packet, and determines the egress computing node for transmission of the network packet from the distributed router.
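
    The lookup flow might be sketched as follows (hypothetical names; the hash function and shard layout are assumptions, not the patented design): the destination address hashes to the intermediate node holding the relevant forwarding-table shard, which then resolves the egress node.

```python
# Hypothetical sketch: the ingress node hashes the destination address to
# pick the intermediate node whose forwarding-table shard holds the route;
# that node then resolves the egress node.

import hashlib

INTERMEDIATE_NODES = ["node-0", "node-1", "node-2", "node-3"]
FORWARDING_SHARDS = {n: {} for n in INTERMEDIATE_NODES}   # node -> {destination: egress}

def hash_key(destination: str) -> str:
    digest = int(hashlib.md5(destination.encode()).hexdigest(), 16)
    return INTERMEDIATE_NODES[digest % len(INTERMEDIATE_NODES)]

def install_route(destination: str, egress: str) -> None:
    # Each route lives only on the intermediate node selected by the hash key.
    FORWARDING_SHARDS[hash_key(destination)][destination] = egress

def route_packet(destination: str) -> str:
    # Ingress node: compute the hash key and "forward" to the intermediate node.
    intermediate = hash_key(destination)
    # Intermediate node: look up its shard to find the egress node.
    return FORWARDING_SHARDS[intermediate].get(destination, "default-egress")

install_route("203.0.113.42", "egress-b")
print(route_packet("203.0.113.42"))        # -> egress-b
```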

    Cache management (Invention Grant, In Force)

    Publication Number: US09390010B2

    Publication Date: 2016-07-12

    Application Number: US13715526

    Application Date: 2012-12-14

    CPC classification number: G06F12/0804 G06F12/0888 Y02D10/13

    Abstract: The present disclosure provides techniques for cache management. A data block may be received from an IO interface. After receiving the data block, the occupancy level of a cache memory may be determined. The data block may be directed to a main memory if the occupancy level exceeds a threshold. The data block may be directed to the cache memory if the occupancy level is below the threshold.
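
    A minimal sketch of the occupancy-threshold policy, with assumed capacity and threshold values (all names are hypothetical):

```python
# Hypothetical sketch: route an incoming IO data block to cache or main
# memory depending on current cache occupancy versus a threshold.

def place_block(block: bytes, cache: list, main_memory: list,
                cache_capacity: int = 64, threshold: float = 0.75) -> str:
    occupancy = len(cache) / cache_capacity
    if occupancy >= threshold:
        main_memory.append(block)          # cache too full: bypass it
        return "main_memory"
    cache.append(block)                    # cache has headroom: keep block hot
    return "cache"

cache, main_memory = [], []
for i in range(100):
    place_block(f"block-{i}".encode(), cache, main_memory)
print(len(cache), len(main_memory))        # 48 blocks cached, 52 sent to main memory
```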

    Energy-efficient application content update and sleep mode determination (Invention Grant, In Force)

    Publication Number: US09037887B2

    Publication Date: 2015-05-19

    Application Number: US13627822

    Application Date: 2012-09-26

    Abstract: Embodiments of methods, systems, and storage media associated with energy-efficient application content update and sleep mode determination are disclosed herein. In one instance, the method may include: first determining whether the computing device is connected to a network; based on a result of the first determining, monitoring data traffic between the computing device and the network, wherein the data traffic is associated with at least one application residing on the computing device; based on the monitoring, second determining whether the at least one application has been updated; and initiating a transition of the computing device to a sleep mode upon a result of the second determining indicating that the at least one application has been updated. Other embodiments may be described and/or claimed.
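
    An illustrative sketch of the described sequence, with placeholder checks standing in for the real connectivity and update detection (all names are hypothetical):

```python
# Hypothetical sketch: once the device is online, watch per-application
# traffic; when the monitored application has finished updating its
# content, trigger a transition to sleep mode.

import time

def is_connected() -> bool:
    return True                            # placeholder for a real connectivity check

def app_updated(app: str) -> bool:
    return True                            # placeholder: inspect traffic / update state

def enter_sleep_mode() -> None:
    print("transitioning to sleep mode")

def manage_power(app: str, poll_interval: float = 1.0) -> None:
    if not is_connected():
        return                             # nothing to monitor while offline
    while True:
        if app_updated(app):               # second determination: content refreshed
            enter_sleep_mode()
            return
        time.sleep(poll_interval)          # keep monitoring traffic for the app

manage_power("mail-client")
```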

    Packet processing load balancer (Invention Grant)

    Publication Number: US12293231B2

    Publication Date: 2025-05-06

    Application Number: US17471889

    Application Date: 2021-09-10

    Abstract: Examples described herein include a device interface; a first set of one or more processing units; and a second set of one or more processing units. In some examples, the first set of one or more processing units are to perform heavy flow detection for packets of a flow and the second set of one or more processing units are to perform processing of packets of a heavy flow. In some examples, the first set of one or more processing units and second set of one or more processing units are different. In some examples, the first set of one or more processing units is to allocate pointers to packets associated with the heavy flow to a first set of one or more queues of a load balancer and the load balancer is to allocate the packets associated with the heavy flow to one or more processing units of the second set of one or more processing units based, at least in part on a packet receive rate of the packets associated with the heavy flow.
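
    A simplified sketch, with assumed rate thresholds and names, of heavy-flow detection on one set of cores and rate-proportional queue assignment by the load balancer:

```python
# Hypothetical sketch: one set of cores detects heavy ("elephant") flows;
# pointers to their packets go into load-balancer queues, and the number of
# worker cores assigned scales with the flow's packet receive rate.

from collections import defaultdict

HEAVY_FLOW_PPS = 10_000                    # packets/sec above which a flow is "heavy"
worker_queues = defaultdict(list)          # worker core id -> packet pointers

def detect_heavy_flows(flow_rates: dict[str, int]) -> set[str]:
    # Runs on the first set of processing units.
    return {flow for flow, pps in flow_rates.items() if pps >= HEAVY_FLOW_PPS}

def workers_for_rate(pps: int, max_workers: int = 4) -> int:
    # More workers for faster flows, capped by the available second-set cores.
    return min(max_workers, max(1, pps // HEAVY_FLOW_PPS))

def enqueue(flow: str, packet_ptr: int, pps: int) -> None:
    # Load balancer spreads packet pointers across the chosen workers.
    n = workers_for_rate(pps)
    worker_queues[packet_ptr % n].append((flow, packet_ptr))

rates = {"flow-a": 25_000, "flow-b": 500}
for flow in detect_heavy_flows(rates):
    for ptr in range(8):
        enqueue(flow, ptr, rates[flow])
print({w: len(q) for w, q in worker_queues.items()})
```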

    EFFICIENT TOKEN PRUNING IN TRANSFORMER-BASED NEURAL NETWORKS

    Publication Number: US20250124105A1

    Publication Date: 2025-04-17

    Application Number: US19002132

    Application Date: 2024-12-26

    Abstract: Key-value (KV) caching accelerates inference in large language models (LLMs) by allowing the attention operation to scale linearly rather than quadratically with the total sequence length. Due to large context lengths in modern LLMs, KV cache size can exceed the model size, which can negatively impact throughput. To address this issue, KVCrush, which stands for KEY-VALUE CACHE SIZE REDUCTION USING SIMILARITY IN HEAD-BEHAVIOR, is implemented. KVCrush involves using binary vectors to represent tokens, where the vector indicates which attention heads attend to the token and which attention heads disregard the token. The binary vectors are used in a hardware-efficient, low-overhead process to produce representatives for unimportant tokens to be pruned, without having to implement k-means clustering techniques.
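
    A toy sketch of the binary head-behavior vectors, under assumed thresholds and shapes (not the patented algorithm itself): each token gets one bit per attention head, and unimportant tokens sharing a bit pattern collapse to a single representative.

```python
# Hypothetical sketch of the binary-vector idea: each token gets a bit per
# attention head (1 = the head attends to it above a threshold). Unimportant
# tokens with identical bit patterns are collapsed to one representative,
# avoiding k-means style clustering.

import numpy as np

def head_behavior_bits(attn_scores: np.ndarray, threshold: float = 0.01) -> np.ndarray:
    # attn_scores: [num_heads, num_tokens] attention mass per head per token.
    return (attn_scores > threshold).astype(np.uint8)

def pick_representatives(bits: np.ndarray, unimportant: list[int]) -> list[int]:
    # Group unimportant tokens by identical head-behavior pattern and keep
    # the first token seen for each pattern.
    reps, seen = [], set()
    for t in unimportant:
        pattern = bits[:, t].tobytes()
        if pattern not in seen:
            seen.add(pattern)
            reps.append(t)
    return reps

rng = np.random.default_rng(0)
scores = rng.random((8, 16)) * 0.02        # 8 heads, 16 tokens (toy data)
bits = head_behavior_bits(scores)
print(pick_representatives(bits, unimportant=list(range(8, 16))))
```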

    Denial of service mitigation with two-tier hash

    Publication Number: US11005884B2

    Publication Date: 2021-05-11

    Application Number: US15720821

    Application Date: 2017-09-29

    Abstract: A computing apparatus for providing a node within a distributed network function, including: a hardware platform; a network interface to communicatively couple to at least one other peer node of the distributed network function; a distributor function including logic to operate on the hardware platform, including a hashing module configured to receive an incoming network packet via the network interface and perform on the incoming network packet a first-level hash of a two-level hash, the first level hash being a lightweight hash with respect to a second-level hash, the first level hash to deterministically direct a packet to one of the nodes of the distributed network function as a directed packet; and a denial of service (DoS) mitigation engine to receive notification of a DoS attack, identify a DoS packet via the first-level hash, and prevent the DoS packet from reaching the second-level hash.
