Abstract:
In one embodiment, a data movement module (DMM) may receive a command to copy data from a source buffer to a destination buffer. One or more cache lines corresponding to addresses of the source buffer and the destination buffer may be invalidated. Also, an entry may be added to a queue to indicate that the command to copy is completion pending.
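A minimal C sketch of how a driver-level view of this flow might look. All names (dmm_submit_copy, invalidate_range, pending_queue) and the queue layout are illustrative assumptions, not details taken from the disclosure:

```c
/* Hypothetical sketch of a DMM copy command handler: cache lines covering
 * both buffers are invalidated, then a completion-pending entry is queued. */
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

#define CACHE_LINE_SIZE     64
#define PENDING_QUEUE_DEPTH 32

struct copy_cmd {
    uintptr_t src;      /* source buffer address      */
    uintptr_t dst;      /* destination buffer address */
    size_t    len;      /* bytes to copy              */
    bool      pending;  /* completion-pending flag    */
};

static struct copy_cmd pending_queue[PENDING_QUEUE_DEPTH];
static unsigned int    pending_tail;

/* Placeholder for a platform cache-line invalidate (e.g. CLFLUSH on x86). */
static void invalidate_cache_line(uintptr_t addr)
{
    (void)addr; /* platform-specific instruction would go here */
}

/* Invalidate every cache line that overlaps [addr, addr + len). */
static void invalidate_range(uintptr_t addr, size_t len)
{
    uintptr_t line = addr & ~(uintptr_t)(CACHE_LINE_SIZE - 1);
    uintptr_t end  = addr + len;

    for (; line < end; line += CACHE_LINE_SIZE)
        invalidate_cache_line(line);
}

/* Accept a copy command: invalidate cached copies of both buffers,
 * then record the command as completion pending. */
int dmm_submit_copy(uintptr_t src, uintptr_t dst, size_t len)
{
    if (pending_tail >= PENDING_QUEUE_DEPTH)
        return -1; /* queue full */

    invalidate_range(src, len);
    invalidate_range(dst, len);

    pending_queue[pending_tail++] = (struct copy_cmd){
        .src = src, .dst = dst, .len = len, .pending = true,
    };
    return 0;
}
```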
Abstract:
Methods and apparatus to reduce the number of uncacheable write requests are described. In one embodiment, a single uncacheable write request is sent instead of a plurality of uncacheable write requests to an address.
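A small sketch of the idea under an assumed MMIO-doorbell scenario: updates accumulate in a cacheable shadow value and a single uncacheable write is issued when flushed. The names doorbell_update and doorbell_flush are hypothetical:

```c
/* Illustrative coalescing of uncacheable writes: rather than writing the
 * uncacheable register on every update, keep a shadow copy and write once. */
#include <stdint.h>
#include <stdbool.h>

static uint32_t doorbell_shadow;  /* cacheable shadow of the register value */
static bool     doorbell_dirty;

/* Record an update without touching the uncacheable address. */
static void doorbell_update(uint32_t value)
{
    doorbell_shadow = value;
    doorbell_dirty  = true;
}

/* Issue a single uncacheable write covering all accumulated updates. */
static void doorbell_flush(volatile uint32_t *reg)
{
    if (doorbell_dirty) {
        *reg           = doorbell_shadow;
        doorbell_dirty = false;
    }
}
```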
Abstract:
A dual-interface coherent and non-coherent network interface controller architecture is generally presented. In this regard, a network interface controller is introduced including a non-coherent bus interface to communicatively couple with devices of a system through a non-coherent protocol, the non-coherent bus interface to facilitate discovery of the network interface controller by an operating system, a coherent bus interface to communicatively couple with devices of the system through a coherent protocol, and a coherency engine to perform coherent transactions over the coherent interface, including snooping for writes to system memory. Other embodiments are also disclosed and claimed.
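A structural sketch, in C, of how such a dual-interface controller might be modeled: a non-coherent interface through which the operating system discovers the device, and a coherent interface whose coherency engine reacts to snooped writes on system memory. All types, fields, and callbacks are illustrative assumptions:

```c
/* Hypothetical model of a dual-interface NIC with a coherency engine. */
#include <stdint.h>
#include <stdbool.h>

struct noncoherent_if {
    uint16_t vendor_id, device_id;   /* exposed for OS enumeration   */
    bool     discovered;             /* set once the OS has found us */
};

struct coherency_engine {
    /* Invoked when a snooped write hits a watched system-memory range. */
    void (*on_snooped_write)(uint64_t phys_addr, uint32_t len);
};

struct coherent_if {
    uint64_t watch_base;             /* snooped system-memory window */
    uint64_t watch_len;
    struct coherency_engine engine;
};

struct dual_if_nic {
    struct noncoherent_if ncb;       /* discovery / non-coherent transactions */
    struct coherent_if    cb;        /* coherent transactions and snooping    */
};

/* Route a snooped write to the coherency engine if it falls in range. */
void nic_handle_snoop(struct dual_if_nic *nic, uint64_t addr, uint32_t len)
{
    struct coherent_if *cif = &nic->cb;

    if (addr >= cif->watch_base &&
        addr + len <= cif->watch_base + cif->watch_len &&
        cif->engine.on_snooped_write)
        cif->engine.on_snooped_write(addr, len);
}
```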
Abstract:
In one embodiment, it may be determined whether a processor is going to access a packet payload that is stored in a source buffer. If the processor is not going to access the packet payload, a data movement module (DMM) may move the packet payload from the source buffer to a destination buffer.
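A short sketch of this receive-path decision, assuming the dmm_submit_copy routine from the earlier sketch and illustrative packet metadata fields (cpu_will_touch is a made-up classification result):

```c
/* If the host CPU will not touch the payload, offload the move to a
 * data movement engine instead of copying it on the CPU. */
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

struct rx_packet {
    void   *payload;        /* source buffer holding the payload    */
    size_t  len;
    bool    cpu_will_touch; /* set by protocol / flow classification */
};

/* Assumed asynchronous copy offload (see the earlier DMM sketch). */
int dmm_submit_copy(uintptr_t src, uintptr_t dst, size_t len);

void deliver_payload(struct rx_packet *pkt, void *dst_buf)
{
    if (pkt->cpu_will_touch) {
        /* CPU will read the data anyway; a plain copy warms the cache. */
        memcpy(dst_buf, pkt->payload, pkt->len);
    } else {
        /* CPU never reads the payload: hand the move to the DMM. */
        dmm_submit_copy((uintptr_t)pkt->payload, (uintptr_t)dst_buf, pkt->len);
    }
}
```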
Abstract:
Provided are a method, system, and program for managing transmit throughput for a network controller. In one embodiment, transmit requests from an application may be posted by the device driver to the network controller of the network adapter in a pipeline of transmit requests without waiting for an acknowledgment of the transfer of the accompanying transmit data to the network controller. In another aspect, the device driver monitors the available buffer space of a network controller buffer to ensure that the network controller has sufficient available buffer space before posting the next transmit request to the network controller. In accordance with yet another aspect, the device driver can copy transmit data from an application buffer to a driver buffer if the size of the transmit data of a particular transmit request is below a programmable threshold. After such a copy, the device driver can notify the application so that the application buffer may be released.
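A compact sketch tying these three behaviors together: pipelined posting without waiting for data-transfer acknowledgments, a buffer-space check before each post, and copying small payloads into a driver buffer so the application buffer can be released early. Names, sizes, and the threshold value are illustrative assumptions:

```c
/* Hypothetical transmit path: check controller space, copy small payloads,
 * then post without waiting for the data transfer to complete. */
#include <stddef.h>
#include <string.h>
#include <stdbool.h>

#define COPY_THRESHOLD  256     /* copy payloads smaller than this */
#define DRIVER_BUF_SIZE 2048

struct nic {
    size_t buffer_space;        /* free bytes reported by the controller */
};

struct tx_request {
    const void *data;
    size_t      len;
    bool        app_buffer_released;
    char        driver_copy[DRIVER_BUF_SIZE];
};

/* Assumed hardware post routine; returns immediately (pipelined). */
static void nic_post_tx(struct nic *nic, struct tx_request *req)
{
    nic->buffer_space -= req->len;
}

/* Post a transmit request only if the controller has room for it. */
int driver_transmit(struct nic *nic, struct tx_request *req)
{
    if (nic->buffer_space < req->len)
        return -1;                        /* defer until space frees up */

    if (req->len < COPY_THRESHOLD) {
        /* Copy into the driver buffer and let the app buffer go early. */
        memcpy(req->driver_copy, req->data, req->len);
        req->data = req->driver_copy;
        req->app_buffer_released = true;  /* application notified here */
    }

    nic_post_tx(nic, req);                /* no wait for transfer completion */
    return 0;
}
```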
Abstract:
An embodiment may include at least one server processor that may control, at least in part, data plane and control plane processing of server switch circuitry. The at least one processor may include at least one cache memory that is capable of being involved in at least one data transfer that involves at least one component of the server. The at least one data transfer may be carried out in a manner that bypasses involvement of server system memory. The switch circuitry may be communicatively coupled to the at least one processor and to at least one node via communication links. The at least one processor may select, at least in part, at least one communication protocol to be used by the links. The switch circuitry may forward, at least in part, via at least one of the links at least one received packet. Many modifications are possible.
Abstract:
In an embodiment, a method is provided. The method of this embodiment provides determining a flow context associated with a receive packet; and if the flow context complies with a dynamic interrupt moderation policy having one or more rules, generating an interrupt to process the receive packet substantially independently of an interrupt generated in accordance with an interrupt coalescing scheme (“coalesced interrupt”). Other embodiments are disclosed and/or claimed.
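A brief sketch of this per-flow decision: a packet whose flow context satisfies a dynamic moderation rule (a latency-sensitive flow is used here purely as an example) raises an interrupt right away, independently of the normal coalescing path. The structures and function names are assumptions for illustration:

```c
/* Flow-based interrupt decision: matching flows bypass coalescing. */
#include <stdint.h>
#include <stdbool.h>

struct flow_context {
    uint32_t flow_id;
    bool     latency_sensitive;   /* example rule input */
};

/* Example policy rule: latency-sensitive flows get an immediate interrupt. */
static bool policy_allows_immediate_irq(const struct flow_context *fc)
{
    return fc->latency_sensitive;
}

static void raise_interrupt(void)        { /* assert MSI/MSI-X vector */ }
static void schedule_coalesced_irq(void) { /* arm the coalescing timer */ }

void on_receive_packet(const struct flow_context *fc)
{
    if (policy_allows_immediate_irq(fc))
        raise_interrupt();            /* immediate, per-flow interrupt  */
    else
        schedule_coalesced_irq();     /* fall back to coalescing scheme */
}
```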
Abstract:
Generally, this disclosure relates to adaptive interrupt moderation. A method may include determining, by a host device, a number of connections between the host device and one or more link partners based, at least in part, on a connection identifier associated with each connection; determining, by the host device, a new interrupt rate based, at least in part, on the number of connections; updating, by the host device, an interrupt moderation timer with a value related to the new interrupt rate; and configuring the interrupt moderation timer to allow interrupts to occur at the new interrupt rate.
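A small sketch of the adaptive sequence: count distinct connections, derive an interrupt rate from the count, and reprogram the moderation timer. The scaling formula (more connections, more batching per interrupt) and all names are illustrative assumptions, not the claimed method:

```c
/* Hypothetical adaptive interrupt moderation based on connection count. */
#include <stdint.h>

#define BASE_IRQ_RATE 8000u   /* interrupts per second at one connection */
#define MIN_IRQ_RATE  1000u
#define USEC_PER_SEC  1000000u

struct host_dev {
    uint32_t timer_usec;      /* interrupt moderation timer value */
};

/* Example scaling: more connections -> lower interrupt rate. */
static uint32_t compute_irq_rate(uint32_t num_connections)
{
    uint32_t rate = BASE_IRQ_RATE / (num_connections ? num_connections : 1);
    return rate < MIN_IRQ_RATE ? MIN_IRQ_RATE : rate;
}

void update_interrupt_moderation(struct host_dev *dev, uint32_t num_connections)
{
    uint32_t rate = compute_irq_rate(num_connections);

    /* Timer interval (microseconds) corresponding to the new rate. */
    dev->timer_usec = USEC_PER_SEC / rate;
}
```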
Abstract:
Methods for performing efficient receive interrupt signaling and associated apparatus, computing platform, software, and firmware. Receive (RX) queues in which descriptors associated with packets are enqueued are implemented in host memory and logically partitioned into pools, with each RX queue pool associated with a respective interrupt vector. Receive event queues (REQs) associated with respective RX queue pools and interrupt vectors are also implemented in host memory. Event generation is selectively enabled for some RX queues, while event generation is masked for others. In response to event causes for RX queues that are event generation-enabled, associated events are generated and enqueued in the REQs and interrupts on associated interrupt vectors are asserted. The events are serviced by accessing the events in the REQs, which identify the RX queue for the event and a next activity location at which a next descriptor to be processed is located. After asserting an interrupt, an RX queue may be auto-masked to prevent generation of additional events when new descriptors are enqueued in the RX queue.
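A condensed sketch of the event flow just described: an event-enabled RX queue pushes an event (queue id plus next descriptor location) into the receive event queue of its pool, the pool's interrupt vector is asserted, and the queue is auto-masked until software services and re-enables it. All structures, sizes, and names are illustrative:

```c
/* Hypothetical RX queue pool / receive event queue (REQ) interaction. */
#include <stdint.h>
#include <stdbool.h>

#define REQ_DEPTH 256

struct rx_event {
    uint16_t rxq_id;       /* which RX queue caused the event       */
    uint16_t next_index;   /* next descriptor location to process   */
};

struct event_queue {                  /* one REQ per RX queue pool */
    struct rx_event events[REQ_DEPTH];
    unsigned int    tail;
    int             irq_vector;       /* interrupt vector for this pool */
};

struct rx_queue {
    uint16_t            id;
    uint16_t            next_index;
    bool                event_enabled;  /* cleared by auto-mask */
    struct event_queue *req;            /* REQ of the owning pool */
};

static void assert_irq(int vector) { (void)vector; /* raise the MSI-X vector */ }

/* Called when a new descriptor is enqueued on an RX queue. */
void rxq_notify(struct rx_queue *rxq)
{
    if (!rxq->event_enabled)
        return;                       /* masked: no event, no interrupt */

    struct event_queue *req = rxq->req;
    req->events[req->tail % REQ_DEPTH] = (struct rx_event){
        .rxq_id = rxq->id, .next_index = rxq->next_index,
    };
    req->tail++;

    assert_irq(req->irq_vector);
    rxq->event_enabled = false;       /* auto-mask until serviced */
}
```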