CACHE ALLOCATION SYSTEM
    Invention Application

    Publication Number: US20210359955A1

    Publication Date: 2021-11-18

    Application Number: US17384627

    Filing Date: 2021-07-23

    Abstract: Examples described herein relate to a network interface device comprising a host interface, a direct memory access (DMA) engine, and circuitry to allocate a region in a cache to store a context of a connection. In some examples, the region is allocated based on connection reliability, where reliability reflects use or non-use of a reliable transport protocol. In some examples, the region is allocated based on the expected length of runtime of the connection, derived from a historic average amount of time the connection's context was stored in the cache. In some examples, the region is allocated based on the content transmitted, such as congestion messaging payload or acknowledgements. In some examples, the region is allocated based on an application-specified priority level, such as an application-specified traffic class level or class of service level.
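    The four allocation criteria in the abstract can be sketched as a simple scoring policy. This is a minimal illustrative sketch, not the patent's actual method: the `Connection` fields, `allocation_score` weights, and the threshold are all assumptions chosen for illustration.

```python
from dataclasses import dataclass

@dataclass
class Connection:
    # Hypothetical connection attributes mirroring the abstract's four criteria.
    reliable_transport: bool       # uses a reliable transport protocol
    avg_context_lifetime_s: float  # historic average time the context stayed cached
    carries_congestion_msgs: bool  # congestion messaging payload or acknowledgements
    app_priority: int              # application-specified traffic class (0 = lowest)

def allocation_score(conn: Connection) -> float:
    """Combine the abstract's four criteria into one score (weights are assumed)."""
    score = 0.0
    if conn.reliable_transport:
        score += 2.0                                        # reliability criterion
    score += min(conn.avg_context_lifetime_s, 10.0) * 0.5   # expected runtime criterion
    if conn.carries_congestion_msgs:
        score += 1.5                                        # content-transmitted criterion
    score += conn.app_priority * 1.0                        # application priority criterion
    return score

def allocate_region(conn: Connection, threshold: float = 3.0) -> bool:
    """Allocate a dedicated cache region only when the score clears the threshold."""
    return allocation_score(conn) >= threshold
```

    A reliable, higher-priority connection clears the threshold, while a short-lived best-effort connection does not, so cache space is reserved for contexts likely to stay resident.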

    PACKET BUFFERING TECHNOLOGIES
    Invention Publication

    Publication Number: US20240089219A1

    Publication Date: 2024-03-14

    Application Number: US18388780

    Filing Date: 2023-11-10

    CPC classification number: H04L49/206 H04L47/621 H04L49/9063

    Abstract: Examples described herein relate to a switch. In some examples, the switch includes circuitry configured to: based on receipt of a packet and a level of a first queue, select between a first memory and one of multiple second memory devices to store the packet; based on selection of the first memory, store the packet in the first memory; and based on selection of a second memory device, store the packet in the selected second memory device. In some examples, the packet is associated with an ingress port and an egress port, and the selected second memory device is associated with a third port that is different from the ingress port and the egress port associated with the packet.

    CONGESTION NOTIFICATION IN A MULTI-QUEUE ENVIRONMENT

    Publication Number: US20230403233A1

    Publication Date: 2023-12-14

    Application Number: US18239467

    Filing Date: 2023-08-29

    CPC classification number: H04L47/12 H04L47/2425 H04L47/30 H04L49/9047

    Abstract: Examples described herein relate to a network interface device. In some examples, the network interface device includes a host interface; direct memory access (DMA) circuitry; a network interface; and circuitry configured to select a next hop network interface device from among multiple network interface devices based on telemetry data received from at least one switch. In some examples, the telemetry data is based on congestion information of a first queue associated with a first traffic class and on per-network interface device hop-level congestion states from at least one network interface device; the first queue shares bandwidth of an egress port with a second queue; the first traffic class is associated with packet traffic subject to congestion control based on utilization of the first queue; and the utilization of the first queue is based on a drain rate of the first queue and a transmit rate from the egress port.
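    One plausible reading of the telemetry-driven selection is sketched below. The telemetry field names, the utilization formula, and the least-utilized selection rule are assumptions modeled on the abstract, not the patent's exact method.

```python
def queue_utilization(drain_rate: float, tx_rate: float) -> float:
    """Assumed utilization metric: the first queue's drain rate relative to the
    egress port's transmit rate (the abstract names both inputs but not the
    exact formula)."""
    return drain_rate / tx_rate if tx_rate > 0 else 1.0

def select_next_hop(telemetry: dict[str, dict[str, float]]) -> str:
    """Pick the candidate next hop network interface device whose first queue
    is least utilized according to the received telemetry."""
    return min(telemetry,
               key=lambda dev: queue_utilization(telemetry[dev]["drain_rate"],
                                                 telemetry[dev]["tx_rate"]))
```

    A usage example: given per-device telemetry reported by a switch, the device with the lowest utilization ratio is chosen as the next hop.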

    ACCELERATING MULTI-NODE PERFORMANCE OF MACHINE LEARNING WORKLOADS

    Publication Number: US20210092069A1

    Publication Date: 2021-03-25

    Application Number: US17118409

    Filing Date: 2020-12-10

    Abstract: Examples described herein relate to a network interface and at least one processor that indicates whether data is associated with a machine learning or non-machine learning operation, to manage traversal of the data through one or more network elements to a destination network element, and causes the network interface to include in a packet an indication of whether the packet carries machine learning data or non-machine learning data. In some examples, the indication comprises a priority level, where one or more higher priority levels identify machine learning data. In some examples, for machine learning data, the priority level is based on whether the data is associated with inference, training, or re-training operations. In some examples, for machine learning data, the priority level is based on whether the data is associated with real-time or time-insensitive inference operations.
