Abstract:
Described are embodiments of mediums, methods, and systems for application-reserved use of cache for direct I/O. A method for using application-reserved cache may include reserving, by one of a plurality of cores of a processor, use of a first portion of one of a plurality of levels of cache for an application executed by the one of the plurality of cores, and transferring, by the one of the plurality of cores, data associated with the application from an input/output (I/O) device of a computing device directly to the first portion of the one of the plurality of levels of the cache. Other embodiments may be described and claimed.
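On Linux, one way such a reservation surfaces in practice is through the resctrl filesystem, which exposes hardware cache-allocation features so that a slice of a cache level can be dedicated to a process before device data is steered into it. The sketch below is a minimal illustration of that idea, not the claimed method; the group name "appgrp" and the L3 way mask are invented examples.

```c
/* Minimal sketch: dedicate a slice of L3 cache to this process via the
 * Linux resctrl interface, so that subsequent direct-to-cache I/O data
 * lands in the reserved ways. Requires root, a CPU with cache allocation
 * support, and resctrl mounted at /sys/fs/resctrl. The group name
 * "appgrp" and the way mask 0xf are illustrative only. */
#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static int write_str(const char *path, const char *s)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    int rc = (fputs(s, f) < 0) ? -1 : 0;
    if (fclose(f) != 0) rc = -1;
    return rc;
}

int main(void)
{
    /* Create a resource group for the application. */
    if (mkdir("/sys/fs/resctrl/appgrp", 0755) && errno != EEXIST) {
        perror("mkdir");
        return 1;
    }
    /* Reserve four L3 ways (mask 0xf) on cache domain 0 for the group;
     * mask width and meaning are CPU-specific. */
    if (write_str("/sys/fs/resctrl/appgrp/schemata", "L3:0=f\n"))
        return 1;
    /* Move this process into the group so its cache fills (including
     * device writes steered into the cache) use the reserved ways. */
    char pid[16];
    snprintf(pid, sizeof pid, "%d\n", (int)getpid());
    return write_str("/sys/fs/resctrl/appgrp/tasks", pid) ? 1 : 0;
}
```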
Abstract:
Methods, apparatus, and systems for implementing flow switching in a Network Interface Controller (NIC). Switching operations are effected via hardware-based forwarding mechanisms in apparatus such as NICs in a manner that does not consume computer system processor resources and is transparent to operating systems hosted by such computer systems. The forwarding mechanisms are configured to move or copy Media Access Control (MAC) frame data between receive (Rx) and transmit (Tx) queues associated with different NIC ports that may be on the same NIC or on separate NICs. The hardware-based switching operations effect forwarding of MAC frames between NIC ports using memory operations, thus reducing external network traffic, internal interconnect traffic, and the processor workload associated with packet processing.
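As a rough software model of that forwarding mechanism, the sketch below moves a received MAC frame from one port's Rx ring straight into another port's Tx ring using memory operations alone; every type and name in it is invented for illustration and does not reflect any particular NIC's hardware.

```c
/* Toy model of in-NIC flow switching: frames hop between per-port
 * Rx and Tx rings via struct copies alone; no host CPU thread
 * appears in the modeled data path. All names are hypothetical. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RING_SLOTS 8
#define FRAME_MAX  1518          /* max Ethernet frame size */

struct frame { uint16_t len; uint8_t data[FRAME_MAX]; };
struct ring  { struct frame slot[RING_SLOTS]; unsigned head, tail; };
struct port  { struct ring rx, tx; };

static int ring_put(struct ring *r, const struct frame *f)
{
    if (r->tail - r->head == RING_SLOTS) return -1;   /* ring full */
    r->slot[r->tail++ % RING_SLOTS] = *f;             /* memory copy */
    return 0;
}

static struct frame *ring_get(struct ring *r)
{
    return (r->head == r->tail) ? NULL : &r->slot[r->head++ % RING_SLOTS];
}

/* The hardware forwarding step: copy a frame from src's Rx ring into
 * dst's Tx ring, the two ports possibly living on different NICs. */
static int nic_forward(struct port *src, struct port *dst)
{
    struct frame *f = ring_get(&src->rx);
    return f ? ring_put(&dst->tx, f) : -1;
}

int main(void)
{
    struct port a = {0}, b = {0};
    struct frame f = { .len = 64 };
    memcpy(f.data, "\x01\x02\x03\x04\x05\x06", 6);    /* dest MAC stub */
    ring_put(&a.rx, &f);
    if (nic_forward(&a, &b) == 0)
        printf("frame of %u bytes switched port A -> port B\n",
               (unsigned)b.tx.slot[0].len);
    return 0;
}
```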
Abstract:
Methods, apparatus, and systems for routing information flows in networks based on spanning trees and network switching element resources. One or more controllers are used to assign information flows to network switching elements (NSEs) through use of spanning trees derived from link path costs. Each NSE sends the controller(s) status information relating to the resources it employs to facilitate information flows. The status information is used to derive link costs, which are then used to generate spanning trees that support routing between the NSEs without any path loops. Information flows are assigned to the NSEs such that the routing paths for the flows use the links in the spanning tree. The link costs and spanning trees are recomputed dynamically during ongoing operations, enabling the network routing and flow assignments to be reconfigured in response to dataplane events and changes in the information flow traffic.
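The loop-free routing property described here is what a minimum spanning tree provides. A hedged sketch of the controller-side computation follows, using Prim's algorithm over an invented link-cost matrix; the patent's actual derivation of costs from NSE status is not reproduced.

```c
/* Sketch: derive a loop-free routing tree (an MST) from link costs
 * that a controller might compute from NSE status reports.
 * The cost values below are made up for illustration. */
#include <stdio.h>

#define N 5                      /* number of NSEs */
#define INF 1000000

/* cost[i][j]: link cost between NSE i and NSE j; 0 = no link */
static const int cost[N][N] = {
    {0, 4, 0, 0, 8},
    {4, 0, 9, 0, 2},
    {0, 9, 0, 3, 0},
    {0, 0, 3, 0, 7},
    {8, 2, 0, 7, 0},
};

int main(void)
{
    int in_tree[N] = {1};        /* grow the tree from NSE 0 */
    int best[N], parent[N];

    for (int i = 0; i < N; i++) {
        best[i] = cost[0][i] ? cost[0][i] : INF;
        parent[i] = 0;
    }
    for (int added = 1; added < N; added++) {
        int u = -1;
        for (int i = 0; i < N; i++)          /* cheapest fringe node */
            if (!in_tree[i] && (u < 0 || best[i] < best[u]))
                u = i;
        in_tree[u] = 1;
        printf("tree link: NSE%d - NSE%d (cost %d)\n",
               parent[u], u, best[u]);
        for (int i = 0; i < N; i++)          /* relax via new node */
            if (!in_tree[i] && cost[u][i] && cost[u][i] < best[i]) {
                best[i] = cost[u][i];
                parent[i] = u;
            }
    }
    return 0;
}
```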
Abstract:
An embodiment may include circuitry to be included, at least in part, in at least one node in a network. The circuitry may generate, at least in part, and/or receive, at least in part, at least one packet. The packet may be received, at least in part, by at least one switch node in the network. The switch node may designate, in response at least in part to the packet, at least one port of the switch node to be used to facilitate, at least in part, establishment, at least in part, of at least one path for propagation of at least one flow between at least two other nodes in the network. The packet may be generated based at least in part upon (1) at least one application classification, (2) at least one allocation request, and (3) network resource availability information.
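Stripping away the claim language, the designation step takes three inputs: an application classification, an allocation request, and resource availability. The toy decision function below weighs all three; its structures, fields, and selection policy are hypothetical examples only.

```c
/* Toy port-designation logic: pick a switch port for a new flow based
 * on (1) application class, (2) requested bandwidth, and (3) what each
 * port has left. All structures and policies are hypothetical. */
#include <stdio.h>

#define NPORTS 4

enum app_class { APP_BULK, APP_LATENCY_SENSITIVE };

struct port_state {
    int avail_mbps;              /* remaining capacity */
    int low_latency;             /* 1 if the port is on a short path */
};

static int designate_port(const struct port_state p[NPORTS],
                          enum app_class cls, int req_mbps)
{
    int pick = -1;
    for (int i = 0; i < NPORTS; i++) {
        if (p[i].avail_mbps < req_mbps) continue;   /* can't host flow */
        if (cls == APP_LATENCY_SENSITIVE && !p[i].low_latency) continue;
        if (pick < 0 || p[i].avail_mbps > p[pick].avail_mbps)
            pick = i;                               /* most headroom */
    }
    return pick;                                    /* -1 = no port fits */
}

int main(void)
{
    struct port_state ports[NPORTS] = {
        {200, 0}, {800, 1}, {500, 0}, {100, 1},
    };
    int port = designate_port(ports, APP_LATENCY_SENSITIVE, 300);
    printf("designated port: %d\n", port);          /* expect port 1 */
    return 0;
}
```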
Abstract:
Methods and apparatus for accelerating VM-to-VM network traffic using CPU cache. A virtual queue manager (VQM) manages data that is to be kept in VM-VM shared data buffers in CPU cache. The VQM stores a list of VM-VM allow entries identifying data transfers between VMs that may use VM-VM cache “fast-path” forwarding. Packets are sent from VMs to the VQM for forwarding to destination VMs. Indicia in the packets (e.g., in a tag or header) are inspected to determine whether a packet is to be forwarded via a VM-VM cache fast path or via a virtual switch. The VQM tracks which VM data is already in the CPU cache domain while concurrently coordinating data movement to and from external shared memory, and also ensures coherency between the data kept in cache and the data kept in shared memory.
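A minimal sketch of that allow-list dispatch might look like the following, where a packet's (source VM, destination VM) pair is checked against the allow entries and routed to either a cache fast-path stub or a virtual-switch stub; all identifiers and the tag layout are invented.

```c
/* Toy VQM dispatch: packets between allow-listed VM pairs take the
 * cache "fast path"; everything else goes through the virtual switch.
 * All identifiers here are illustrative, not the patent's own. */
#include <stdio.h>

struct pkt { int src_vm, dst_vm, len; };
struct allow_entry { int src_vm, dst_vm; };

static const struct allow_entry allow[] = { {1, 2}, {2, 1}, {3, 4} };
#define NALLOW (int)(sizeof allow / sizeof allow[0])

static void fast_path(const struct pkt *p)    /* via shared cache buffer */
{
    printf("VM%d -> VM%d: %d bytes via cache fast path\n",
           p->src_vm, p->dst_vm, p->len);
}

static void vswitch_path(const struct pkt *p) /* normal vswitch route */
{
    printf("VM%d -> VM%d: %d bytes via virtual switch\n",
           p->src_vm, p->dst_vm, p->len);
}

static void vqm_forward(const struct pkt *p)
{
    for (int i = 0; i < NALLOW; i++)
        if (allow[i].src_vm == p->src_vm && allow[i].dst_vm == p->dst_vm) {
            fast_path(p);
            return;
        }
    vswitch_path(p);
}

int main(void)
{
    struct pkt a = {1, 2, 1500}, b = {1, 3, 512};
    vqm_forward(&a);   /* allow-listed: fast path */
    vqm_forward(&b);   /* not listed: vswitch */
    return 0;
}
```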
Abstract:
A system for network routing based on resource availability. A network switching element (NSE) may be configured to provide status information to a controller. The controller may be configured to utilize the status information in determining control information that may be provided to the NSE. The NSE may further be configured to assign processing of information flows to processors in the NSE based on the control information. For example, the control information may contain minimum and maximum percent utilization levels for the processors. Information flows may be reassigned to processors that have available processing capacity from processors whose operation is determined not to be in compliance with the minimum and maximum levels. Moreover, inactive processors may be deactivated and alerts may be sent to the controller when the NSE determines that no available processing capacity exists to reassign the flows of processors whose operation is determined to be noncompliant.
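A hedged sketch of that rebalancing rule: processors outside the controller-supplied minimum/maximum utilization band shed load to a peer with headroom, idle processors are deactivated, and an alert fires when no peer can absorb the flows. The percentages and data layout below are illustrative only.

```c
/* Toy NSE rebalancer: move flow load off processors that violate the
 * controller's min/max utilization band. All figures are invented. */
#include <stdio.h>

#define NPROC 4

struct proc { int util_pct; int active; };

static int find_headroom(struct proc p[], int max_pct, int skip)
{
    for (int i = 0; i < NPROC; i++)
        if (i != skip && p[i].active && p[i].util_pct < max_pct)
            return i;
    return -1;
}

int main(void)
{
    const int min_pct = 20, max_pct = 80;    /* from the controller */
    struct proc p[NPROC] = { {95, 1}, {40, 1}, {10, 1}, {0, 1} };

    for (int i = 0; i < NPROC; i++) {
        if (!p[i].active ||
            (p[i].util_pct >= min_pct && p[i].util_pct <= max_pct))
            continue;                        /* compliant processor */
        int dst = find_headroom(p, max_pct, i);
        if (dst < 0) {
            printf("alert: no capacity to offload processor %d\n", i);
            continue;
        }
        if (p[i].util_pct > max_pct) {       /* overloaded: shed excess */
            int move = p[i].util_pct - max_pct;
            p[i].util_pct -= move;
            p[dst].util_pct += move;
            printf("moved %d%% load: proc %d -> proc %d\n", move, i, dst);
        } else {                             /* underused: drain & sleep */
            p[dst].util_pct += p[i].util_pct;
            p[i].util_pct = 0;
            p[i].active = 0;
            printf("drained proc %d into proc %d, deactivated\n", i, dst);
        }
    }
    return 0;
}
```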
Abstract:
Examples are disclosed for a device having at least two media access controllers. In some examples, a first media access controller may be coupled to a host computing device. A second media access controller may be coupled to one or more processor circuits arranged to perform packet processing of data payloads for one or more data frames forwarded through the first media access controller and/or forwarded through the second media access controller. The first media access controller may be coupled to the second media access controller via a communication link. Other examples are described and claimed.
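Read as an architecture, this describes a frame path through two bridged MAC endpoints, with packet-processing circuits behind the second. The few structs below model that topology; every name in them is hypothetical.

```c
/* Toy model of the two-MAC device: MAC0 faces the host, MAC1 faces
 * the packet-processing circuits, and an internal communication link
 * joins them. All names are illustrative. */
#include <stdio.h>

struct frame { int id; int processed; };

/* Stand-in for the processor circuits coupled to MAC1. */
static void process_payload(struct frame *f) { f->processed = 1; }

struct mac { const char *name; };

struct device {
    struct mac mac0;             /* coupled to the host computing device */
    struct mac mac1;             /* coupled to processing circuits */
};

/* A frame forwarded through MAC0 crosses the internal link to MAC1,
 * where its data payload is processed before egress. */
static void forward(struct device *d, struct frame *f)
{
    printf("frame %d: host -> %s -> link -> %s\n",
           f->id, d->mac0.name, d->mac1.name);
    process_payload(f);
    printf("frame %d payload processed: %d\n", f->id, f->processed);
}

int main(void)
{
    struct device dev = { {"MAC0"}, {"MAC1"} };
    struct frame f = { 42, 0 };
    forward(&dev, &f);
    return 0;
}
```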