Abstract:
The present disclosure provides techniques for cache management. A data block may be received from an IO interface. After the data block is received, the occupancy level of a cache memory may be determined. The data block may be directed to a main memory if the occupancy level exceeds a threshold, or to the cache memory if the occupancy level is below the threshold.
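As a rough illustration of the placement decision described above, the following Python sketch directs an incoming block by cache occupancy. The Cache and MainMemory classes and the 0.8 threshold are illustrative assumptions, not details taken from the disclosure.

class Cache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = {}

    def occupancy(self):
        return len(self.blocks) / self.capacity

    def store(self, block_id, data):
        self.blocks[block_id] = data


class MainMemory:
    def __init__(self):
        self.blocks = {}

    def store(self, block_id, data):
        self.blocks[block_id] = data


def place_block(block_id, data, cache, memory, threshold=0.8):
    """Direct an incoming IO data block based on current cache occupancy."""
    if cache.occupancy() > threshold:
        memory.store(block_id, data)   # cache is too full: send the block to main memory
    else:
        cache.store(block_id, data)    # cache has headroom: keep the block cache-resident


cache = Cache(capacity=1024)
memory = MainMemory()
place_block(0x1A, b"payload", cache, memory)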
Abstract:
Technologies are disclosed for distributed table lookup via a distributed router that includes an ingress computing node, an intermediate computing node, and an egress computing node. Each computing node of the distributed router includes a forwarding table to store a different set of network routing entries obtained from a routing table of the distributed router. The ingress computing node generates a hash key based on the destination address included in a received network packet. The hash key identifies the intermediate computing node of the distributed router that stores the forwarding table containing a network routing entry corresponding to the destination address. The ingress computing node forwards the received network packet to the intermediate computing node for routing. The intermediate computing node receives the forwarded network packet, determines the destination address of the network packet, and determines the egress computing node for transmission of the network packet from the distributed router.
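The Python sketch below shows the two-stage lookup idea under stated assumptions: four computing nodes, SHA-256 used to shard the routing table, and a toy routing table. None of these specifics come from the abstract; they only illustrate how a hash key can identify the node that owns an entry.

import hashlib

NODES = ["node-0", "node-1", "node-2", "node-3"]   # computing nodes of the distributed router

def hash_key(dest_addr):
    """Ingress step: hash the destination address to the node that owns its entry."""
    digest = hashlib.sha256(dest_addr.encode()).digest()
    return NODES[digest[0] % len(NODES)]

# The router's full routing table, sharded so each computing node stores only
# the entries that hash to it ("a different set of network routing entries").
ROUTING_TABLE = {
    "10.0.0.7": "egress-a",
    "10.0.1.9": "egress-b",
    "10.0.2.3": "egress-c",
}
FORWARDING_TABLES = {node: {} for node in NODES}
for dest, egress in ROUTING_TABLE.items():
    FORWARDING_TABLES[hash_key(dest)][dest] = egress

def route(dest_addr):
    intermediate = hash_key(dest_addr)                       # ingress: pick the intermediate node
    egress = FORWARDING_TABLES[intermediate].get(dest_addr)  # intermediate: resolve the egress node
    return intermediate, egress

print(route("10.0.1.9"))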
Abstract:
Methods and apparatus for provision of adaptive packet deflection to achieve fair, low-cost, and/or energy-efficient Quality of Service (QoS) in Network-on-Chip (NoC) devices are described. In some embodiments, it is determined whether a target port of a packet has reached a threshold utilization value, and the packet is routed to an alternate port in response to a deflection probability value determined based on a utilization value of the target port and a priority level value of the packet. Other embodiments are also claimed and/or disclosed.
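A minimal Python sketch of probabilistic deflection follows. The abstract does not give the probability formula, so the linear weighting, the 0.5 threshold, and the 0-7 priority range are assumptions chosen only to show the shape of the decision.

import random

def deflection_probability(utilization, priority, max_priority=7):
    """Higher port utilization raises the chance of deflection; higher-priority
    packets are deflected less often. The weighting is purely illustrative."""
    congestion = max(0.0, utilization - 0.5) / 0.5      # 0 below 50% utilization, 1 at 100%
    return congestion * (1.0 - priority / max_priority)

def choose_port(target_port, alternate_port, utilization, priority, threshold=0.5):
    """Deflect to the alternate port only once the target port is past the threshold."""
    if utilization >= threshold and random.random() < deflection_probability(utilization, priority):
        return alternate_port
    return target_port

print(choose_port("port-east", "port-north", utilization=0.9, priority=1))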
Abstract:
Technologies are disclosed for identifying a cache line of a network packet for eviction from an on-processor cache of a network device communicatively coupled to a network controller. The network device is configured to determine whether a cache line of the cache corresponding to the network packet is to be evicted from the cache based on a determination that the network packet is not needed after it has been processed, and to provide an indication that the cache line is to be evicted from the cache based on an eviction policy received from the network controller.
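The following Python sketch illustrates the eviction hint in software terms, assuming a toy cache model and a one-field policy. The policy fields and class names are hypothetical; the abstract only states that the policy comes from the network controller.

from dataclasses import dataclass

@dataclass
class Packet:
    cache_line: int
    needed_after_processing: bool

@dataclass
class EvictionPolicy:
    evict_after_single_use: bool = True      # policy pushed down by the network controller

class PacketCache:
    def __init__(self):
        self.lines = {}
        self.eviction_candidates = set()

    def read(self, line):
        return self.lines.get(line)

    def mark_for_eviction(self, line):
        self.eviction_candidates.add(line)   # hint consumed by the replacement logic

def process_packet(packet, cache, policy):
    _payload = cache.read(packet.cache_line)
    # ... packet processing would happen here ...
    if policy.evict_after_single_use and not packet.needed_after_processing:
        cache.mark_for_eviction(packet.cache_line)   # data won't be touched again: evict early

cache = PacketCache()
cache.lines[42] = b"packet data"
process_packet(Packet(cache_line=42, needed_after_processing=False), cache, EvictionPolicy())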
Abstract:
Devices and methods for optimizing semi-active workloads are described herein. A network interface device may be configured to offload data packet acknowledgment responsibilities of a host platform by transmitting, to the sender of the packets, acknowledgements of packets received throughout a time duration. Upon completion of the time duration, the network interface device may trigger the host platform to perform batch processing of the data packets received during the time duration.
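A small Python sketch of the ACK-offload idea follows, with an in-process callback standing in for the host interrupt. The window length, callback signatures, and class name are assumptions for illustration only.

import time

class NicAckOffload:
    """Models a NIC that acknowledges packets on behalf of the host and defers delivery."""

    def __init__(self, window_seconds, ack_sender, host_batch_handler):
        self.window = window_seconds                  # duration during which the host stays idle
        self.ack_sender = ack_sender                  # callable(sender_addr, seq): transmits the ACK
        self.host_batch_handler = host_batch_handler  # callable(packets): wakes the host once
        self.buffer = []
        self.window_start = time.monotonic()

    def on_packet(self, packet, sender_addr):
        self.ack_sender(sender_addr, packet["seq"])   # NIC acknowledges immediately
        self.buffer.append(packet)                    # host is not interrupted per packet
        if time.monotonic() - self.window_start >= self.window:
            self.host_batch_handler(self.buffer)      # one wake-up, one batch of packets
            self.buffer = []
            self.window_start = time.monotonic()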
Abstract:
Technologies are disclosed for supporting concurrency of a flow lookup table at a network device. The flow lookup table includes a plurality of candidate buckets, each of which includes one or more entries. The network device includes a flow lookup table write module configured to perform a displacement operation that moves a key/value pair from one bucket to another via an atomic instruction and to increment a version counter associated with each bucket affected by the displacement operation. The network device additionally includes a flow lookup table read module to check the version counters during a lookup operation on the flow lookup table to determine whether a displacement operation is affecting the values presently being read from the buckets. Other embodiments are described herein and claimed.
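The Python sketch below shows version-counter validation around a bucket displacement in a seqlock-like style. A lock stands in for the atomic instruction, and the bucket layout and retry policy are assumptions beyond what the abstract states.

import threading

class FlowTable:
    def __init__(self, num_buckets=8):
        self.buckets = [dict() for _ in range(num_buckets)]
        self.versions = [0] * num_buckets
        self.write_lock = threading.Lock()    # stands in for the atomic displacement instruction

    def displace(self, key, src, dst):
        """Writer: move a key/value pair from bucket src to bucket dst."""
        with self.write_lock:
            self.versions[src] += 1           # odd version = displacement in progress
            self.versions[dst] += 1
            self.buckets[dst][key] = self.buckets[src].pop(key)
            self.versions[src] += 1           # even again = bucket stable
            self.versions[dst] += 1

    def lookup(self, key, candidates):
        """Reader: retry if any candidate bucket's version changed during the read."""
        while True:
            before = [self.versions[b] for b in candidates]
            if any(v % 2 for v in before):
                continue                      # a displacement is in flight; retry
            value = None
            for b in candidates:
                if key in self.buckets[b]:
                    value = self.buckets[b][key]
                    break
            if [self.versions[b] for b in candidates] == before:
                return value                  # no displacement observed; result is consistent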
Abstract:
Methods and apparatus for facilitating efficient Quality of Service (QoS) support for software-based packet processing by offloading QoS rate-limiting to NIC hardware are described. Software-based packet processing is performed on packet flows received at a compute platform, such as a general-purpose server, and/or packet flows generated by local applications running on the compute platform. The packet processing includes packet classification that associates packets with packet flows using flow IDs and identifies a QoS class for each packet and packet flow. NIC Tx queues are dynamically configured or pre-configured to effect rate limiting for forwarding packets enqueued in the NIC Tx queues. New packet flows are detected, and mapping data is created to map the flow IDs associated with those flows to the NIC Tx queues used to forward the packets associated with the flows.
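A minimal Python sketch of the software side follows: classify packets into flows, map each newly seen flow to a rate-limited NIC Tx queue by QoS class, and enqueue by flow ID. The queue IDs, class names, and toy classifier are assumptions; the NIC-side rate-limiter programming is not shown.

QOS_CLASS_TO_TX_QUEUE = {     # rate-limited NIC Tx queues, configured ahead of time
    "gold": 0,                # e.g. shaped to a high rate
    "silver": 1,              # e.g. shaped to a lower rate
    "best_effort": 2,         # unshaped
}

flow_to_queue = {}            # mapping data: flow ID -> NIC Tx queue

def classify(packet):
    """Toy classifier: derive a flow ID and a QoS class from packet fields."""
    flow_id = hash((packet["src"], packet["dst"], packet["proto"]))
    return flow_id, packet.get("qos", "best_effort")

def transmit(packet, nic_enqueue):
    flow_id, qos_class = classify(packet)
    if flow_id not in flow_to_queue:                    # new flow detected
        flow_to_queue[flow_id] = QOS_CLASS_TO_TX_QUEUE[qos_class]
    nic_enqueue(flow_to_queue[flow_id], packet)         # NIC hardware enforces the rate limit

tx_queues = {0: [], 1: [], 2: []}                       # stand-in for real NIC Tx queues
transmit({"src": "10.0.0.1", "dst": "10.0.0.2", "proto": 6, "qos": "gold"},
         lambda queue, pkt: tx_queues[queue].append(pkt))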
Abstract:
Technologies are disclosed for modular forwarding table scalability of a software cluster switch that includes a plurality of computing nodes. Each of the plurality of computing nodes includes a global partition table (GPT) to determine an egress computing node for a network packet received at an ingress computing node of the software cluster switch based on a flow identifier of the network packet. The GPT includes a set mapping index that corresponds to a result of a hash function applied to the flow identifier and a hash function index that identifies a hash function of a hash function family whose output results in a node identifier corresponding to the egress computing node to which the ingress computing node forwards the network packet. Other embodiments are described herein and claimed.
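The Python sketch below illustrates one reading of the two-level mapping: a first hash of the flow ID selects a GPT set, and the hash function index stored for that set selects the family member whose output gives the egress node ID. The family construction, set count, node count, and the trivial index assignment are assumptions made only for illustration.

import hashlib

NUM_SETS = 16
NUM_NODES = 4

def h(i, flow_id):
    """Member i of a hash function family applied to a flow identifier."""
    return int.from_bytes(hashlib.sha256(f"{i}:{flow_id}".encode()).digest()[:4], "big")

# GPT: set mapping index -> hash function index. In the scheme described, each
# index would be chosen so that the selected family member maps every flow in
# that set to its intended egress node; here the indices are simply set to 1.
gpt = {s: 1 for s in range(NUM_SETS)}

def egress_node(flow_id):
    set_index = h(0, flow_id) % NUM_SETS          # set mapping index derived from the flow ID
    func_index = gpt[set_index]                   # hash function index stored in the GPT
    return h(func_index, flow_id) % NUM_NODES     # node identifier of the egress computing node

print(egress_node("flow-1234"))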
Abstract:
Technologies for managing network flow lookups of a network device include a network controller and a target device, each communicatively coupled to the network device. The network device includes a cache for a processor of the network device and a main memory. The network device additionally includes a multi-level hash table having a first-level hash table stored in the cache of the network device and a second-level hash table stored in the main memory of the network device. The network device is configured to determine whether to store a network flow hash corresponding to a network flow indicating the target device in the first-level or second-level hash table based on a priority of the network flow provided to the network device by the network controller.
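A short Python sketch of priority-based placement across the two table levels follows, with a fixed-size dictionary standing in for the cache-resident first level. The capacity and priority cutoff are assumptions; the abstract only says the priority is provided by the network controller.

class MultiLevelFlowTable:
    def __init__(self, l1_capacity=256, priority_cutoff=5):
        self.l1 = {}                            # first-level table, sized to stay cache-resident
        self.l2 = {}                            # second-level table, backed by main memory
        self.l1_capacity = l1_capacity
        self.priority_cutoff = priority_cutoff  # derived from priorities supplied by the controller

    def insert(self, flow_hash, target, priority):
        if priority >= self.priority_cutoff and len(self.l1) < self.l1_capacity:
            self.l1[flow_hash] = target         # high-priority flow: keep its entry in the cache level
        else:
            self.l2[flow_hash] = target         # otherwise: store it in the main-memory level

    def lookup(self, flow_hash):
        if flow_hash in self.l1:                # fast path: cache-resident level
            return self.l1[flow_hash]
        return self.l2.get(flow_hash)           # slow path: main-memory level

table = MultiLevelFlowTable()
table.insert(flow_hash=0xBEEF, target="target-device-1", priority=7)
print(table.lookup(0xBEEF))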