Abstract:
Remote transactions using transactional memory are carried out over a data network between an initiator host and a remote target having a memory location that is accessed by a first process. Each transaction comprises a plurality of input-output (IO) operations between an initiator network interface controller and a target network interface controller. The IO operations are controlled by the initiator network interface controller and the target network interface controller to cause the first process to perform accesses to the memory location atomically.
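The following C sketch is purely illustrative of the kind of flow this abstract describes: an initiator frames a remote transaction as a short sequence of IO operations that the two network interface controllers apply to a target memory location as one atomic unit. All types and function names (tx_io_op, nic_post_op, remote_atomic_increment) are hypothetical assumptions, not part of any real NIC API.

/* Illustrative sketch only: hypothetical framing of a remote transaction
 * as a sequence of IO operations handed to the initiator NIC. */
#include <stdint.h>
#include <stdio.h>

typedef enum { TX_BEGIN, TX_READ, TX_WRITE, TX_COMMIT } tx_op_type;

typedef struct {
    tx_op_type type;
    uint64_t   remote_addr;   /* memory location on the remote target */
    uint64_t   value;         /* payload for TX_WRITE */
} tx_io_op;

/* Hypothetical primitive: hand one IO operation to the initiator NIC. */
static void nic_post_op(const tx_io_op *op)
{
    printf("posting op %d addr=0x%llx\n", (int)op->type,
           (unsigned long long)op->remote_addr);
}

/* Frame a read-modify-write of one remote location as a single transaction.
 * The NICs hold off other accesses to the location until TX_COMMIT, so the
 * update appears atomic to the first process. */
static void remote_atomic_update(uint64_t remote_addr)
{
    tx_io_op ops[] = {
        { TX_BEGIN,  remote_addr, 0 },
        { TX_READ,   remote_addr, 0 },
        { TX_WRITE,  remote_addr, 0 }, /* new value computed from the read */
        { TX_COMMIT, remote_addr, 0 },
    };
    for (size_t i = 0; i < sizeof(ops) / sizeof(ops[0]); i++)
        nic_post_op(&ops[i]);
}

int main(void)
{
    remote_atomic_update(0x1000);
    return 0;
}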
Abstract:
A computing system comprises one or more cores. Each core comprises a processor and a switch, with each processor coupled to a communication network among the cores. Also disclosed are techniques for implementing an adaptive allocation policy in a last-level cache of a multicore system: receiving one or more new blocks of data for allocation in the cache; accessing a selected profile from plural profiles that define allocation actions according to a least-recently-used type of allocation and based on a cache action, a state bit, and a traffic-pattern type for the new blocks; and handling each new block according to the selected profile, at a selected least-recently-used (LRU) position in the cache.
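As a rough illustration of the allocation-profile idea (not the patented implementation), the C sketch below keeps one profile per combination of cache action, state bit, and traffic-pattern type; each profile names an LRU insertion position or a bypass decision for the new block. The enums, the table shape, and the 16-way associativity are assumptions.

/* Rough illustration only: hypothetical profile table mapping
 * (cache action, state bit, traffic-pattern type) to an LRU insertion
 * position, or a bypass decision, for a newly allocated block. */
#include <stdint.h>
#include <stdio.h>

enum cache_action { ACT_READ_MISS, ACT_WRITE_MISS, ACT_NUM };
enum traffic_type { TRAFFIC_STREAMING, TRAFFIC_REUSE, TRAFFIC_NUM };

#define LRU_WAYS 16              /* assumed associativity of a cache set */

typedef struct {
    uint8_t insert_pos;          /* LRU stack position for the new block */
    uint8_t bypass;              /* 1 = do not allocate the block at all */
} alloc_profile;

/* One profile per (action, state bit, traffic type) combination. */
static const alloc_profile profiles[ACT_NUM][2][TRAFFIC_NUM] = {
    [ACT_READ_MISS] = {
        [0] = { [TRAFFIC_STREAMING] = { LRU_WAYS - 1, 1 },
                [TRAFFIC_REUSE]     = { 0, 0 } },
        [1] = { [TRAFFIC_STREAMING] = { LRU_WAYS / 2, 0 },
                [TRAFFIC_REUSE]     = { 0, 0 } },
    },
    [ACT_WRITE_MISS] = {
        [0] = { [TRAFFIC_STREAMING] = { LRU_WAYS - 1, 0 },
                [TRAFFIC_REUSE]     = { 1, 0 } },
        [1] = { [TRAFFIC_STREAMING] = { LRU_WAYS / 2, 0 },
                [TRAFFIC_REUSE]     = { 0, 0 } },
    },
};

/* Select how a new block is handled, per the active profile. */
static alloc_profile select_profile(enum cache_action a, int state_bit,
                                    enum traffic_type t)
{
    return profiles[a][state_bit & 1][t];
}

int main(void)
{
    alloc_profile p = select_profile(ACT_READ_MISS, 0, TRAFFIC_STREAMING);
    printf("insert at LRU position %u, bypass=%u\n",
           (unsigned)p.insert_pos, (unsigned)p.bypass);
    return 0;
}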
Abstract:
A network interface device includes a host interface for connection to a host processor and a network interface, which is configured to transmit and receive data packets over a network, and which comprises multiple distinct physical ports configured for connection to the network. Processing circuitry is configured to receive, via one of the physical ports, a data packet from the network and to decide, responsively to a destination identifier in the packet, whether to deliver a payload of the data packet to the host processor via the host interface or to forward the data packet to the network via another one of the physical ports.
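A minimal C sketch of the steering decision, with hypothetical identifiers: the packet's destination identifier is compared against the identifier assumed for the attached host, and the packet is either delivered over the host interface or forwarded out the other physical port.

/* Illustrative sketch; all names and the two-port layout are assumptions. */
#include <stdint.h>
#include <stdio.h>

#define NUM_PORTS 2

struct packet {
    uint32_t dest_id;        /* destination identifier carried in the packet */
    uint8_t  ingress_port;
    /* payload omitted */
};

static uint32_t local_host_id = 42;   /* assumed identifier of the attached host */

static void deliver_to_host(const struct packet *p)
{
    printf("payload for %u delivered via host interface\n", (unsigned)p->dest_id);
}

static void forward_to_port(const struct packet *p, uint8_t port)
{
    printf("packet for %u forwarded via physical port %u\n",
           (unsigned)p->dest_id, (unsigned)port);
}

/* Steering decision made per received packet. */
static void handle_ingress(const struct packet *p)
{
    if (p->dest_id == local_host_id)
        deliver_to_host(p);
    else
        /* forward out the other physical port toward the network */
        forward_to_port(p, (uint8_t)((p->ingress_port + 1) % NUM_PORTS));
}

int main(void)
{
    struct packet a = { .dest_id = 42, .ingress_port = 0 };
    struct packet b = { .dest_id = 7,  .ingress_port = 0 };
    handle_ingress(&a);
    handle_ingress(&b);
    return 0;
}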
Abstract:
A method for processing data includes receiving in a peripheral device, which is connected by a bus to a host processor having host resources, a notification of a sleep state of at least one of the host resources. While the at least one of the host resources is in the sleep state, when the peripheral device receives data from a data source for delivery to the host processor, the peripheral device sends a message to the data source, which causes the data source to defer conveying further data to the peripheral device until the at least one of the host resources has awakened from the sleep state.
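The deferral behavior can be pictured with the following C sketch, in which the peripheral tracks a single host resource's sleep state and answers incoming data with a hypothetical "defer" message while that resource sleeps. The function names and the message format are illustrative assumptions.

/* Minimal sketch under assumed names: while the tracked host resource is
 * asleep, incoming data is answered with a defer message instead of being
 * delivered to the host processor. */
#include <stdbool.h>
#include <stdio.h>

static bool host_resource_asleep;     /* set from a sleep-state notification */

static void send_defer_message(int source_id)
{
    /* Tells the data source to hold further data until wake-up. */
    printf("source %d: defer until host resource wakes\n", source_id);
}

static void deliver_to_host(int source_id)
{
    printf("source %d: data delivered to host processor\n", source_id);
}

void on_sleep_notification(bool asleep) { host_resource_asleep = asleep; }

void on_data_from_source(int source_id)
{
    if (host_resource_asleep)
        send_defer_message(source_id);
    else
        deliver_to_host(source_id);
}

int main(void)
{
    on_sleep_notification(true);
    on_data_from_source(1);   /* deferred */
    on_sleep_notification(false);
    on_data_from_source(1);   /* delivered */
    return 0;
}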
Abstract:
Peripheral apparatus for use with a host computer includes an add-on device, which includes a first network port, coupled to one end of a packet communication link, and add-on logic, which is configured to receive and transmit packets containing data over the packet communication link and to perform computational operations on the data. A network interface controller (NIC) includes a host bus interface, configured for connection to the host bus of the host computer, and a second network port, coupled to the other end of the packet communication link. Packet processing logic in the NIC is coupled between the host bus interface and the second network port and is configured to translate between the packets transmitted and received over the packet communication link and transactions executed on the host bus, so as to provide access between the add-on device and the resources of the host computer.
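The translation role of the packet processing logic might look roughly like the C sketch below, which unpacks an assumed on-link request format and replays it as a host bus read or write. The struct layout and the helper functions are hypothetical.

/* Sketch with hypothetical framing: a request arriving from the add-on
 * device over the packet link is converted into a host bus transaction. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

enum bus_op { BUS_READ, BUS_WRITE };

struct link_packet {                  /* assumed on-link request format */
    uint8_t  op;                      /* BUS_READ or BUS_WRITE */
    uint64_t host_addr;               /* target address on the host bus */
    uint32_t len;
    uint8_t  data[64];
};

/* Stand-ins for the host bus interface. */
static void host_bus_write(uint64_t addr, const void *buf, uint32_t len)
{
    (void)buf;
    printf("bus write: %u bytes to 0x%llx\n", (unsigned)len,
           (unsigned long long)addr);
}

static void host_bus_read(uint64_t addr, void *buf, uint32_t len)
{
    memset(buf, 0, len);
    printf("bus read: %u bytes from 0x%llx\n", (unsigned)len,
           (unsigned long long)addr);
}

/* Translate one packet from the add-on device into a host bus transaction. */
void on_link_packet(const struct link_packet *pkt)
{
    uint8_t reply[64];

    if (pkt->op == BUS_WRITE) {
        host_bus_write(pkt->host_addr, pkt->data, pkt->len);
    } else {
        host_bus_read(pkt->host_addr, reply, pkt->len);
        /* a real device would return 'reply' over the packet link */
    }
}

int main(void)
{
    struct link_packet p = { BUS_WRITE, 0x2000, 8, { 1, 2, 3, 4, 5, 6, 7, 8 } };
    on_link_packet(&p);
    return 0;
}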
Abstract:
Data processing apparatus includes a host processor and a network interface controller (NIC), which is configured to couple the host processor to a packet data network. A memory holds a flow state table containing context information with respect to computational operations to be performed on multiple packet flows conveyed between the host processor and the network. Acceleration logic is coupled to perform the computational operations on payloads of packets in the multiple packet flows using the context information in the flow state table.
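A minimal sketch of the flow state table, assuming a hash-indexed array of per-flow context entries that the acceleration logic reads and updates while processing each payload. The placeholder per-byte operation and all names are illustrative, not the apparatus's actual logic.

/* Minimal sketch: per-flow context kept in a hash-indexed table and used
 * by a stand-in acceleration step. Collision handling is omitted. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define FLOW_TABLE_SIZE 1024

struct flow_state {
    uint32_t flow_id;        /* key; 0 means the slot is unused */
    uint64_t bytes_seen;     /* example of per-flow context */
    uint32_t op_state;       /* opaque state carried across packets */
};

static struct flow_state flow_table[FLOW_TABLE_SIZE];

static struct flow_state *lookup_flow(uint32_t flow_id)
{
    struct flow_state *e = &flow_table[flow_id % FLOW_TABLE_SIZE];
    if (e->flow_id == 0)
        e->flow_id = flow_id;         /* allocate on first use */
    return e;
}

/* Acceleration step: consume a payload using (and updating) flow context. */
void accelerate_payload(uint32_t flow_id, const uint8_t *payload, size_t len)
{
    struct flow_state *ctx = lookup_flow(flow_id);

    for (size_t i = 0; i < len; i++)
        ctx->op_state = ctx->op_state * 31 + payload[i];  /* placeholder op */
    ctx->bytes_seen += len;

    printf("flow %u: %llu bytes processed so far\n",
           (unsigned)flow_id, (unsigned long long)ctx->bytes_seen);
}

int main(void)
{
    uint8_t data[] = { 0xde, 0xad, 0xbe, 0xef };
    accelerate_payload(7, data, sizeof data);
    accelerate_payload(7, data, sizeof data);
    return 0;
}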
Abstract:
A method is provided in a system that includes first and second devices that communicate with one another over a fabric operating in accordance with a fabric address space, and in which the second device accesses a local memory via a local connection and not over the fabric. The method includes sending from the first device to a translation agent (TA) a translation request that specifies an untranslated address, in an address space according to which the first device operates, for directly accessing the local memory of the second device. The first device receives a translation response that specifies a respective translated address in the fabric address space, which the first device is to use instead of the untranslated address. The first device then directly accesses the local memory of the second device over the fabric by converting the untranslated address to the translated address.
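The request/translate/access sequence can be sketched in C as below (loosely in the spirit of an address-translation-service exchange, though the abstract does not name one); ta_translate and fabric_read32 are stand-ins for the translation agent and the fabric access path, and the fixed offset is an arbitrary assumption.

/* Illustrative sketch only: hypothetical stand-ins for the translation
 * agent and the fabric access path. */
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the translation agent: here a fixed offset into the fabric
 * address space; a real TA would consult its translation tables. */
static uint64_t ta_translate(uint64_t untranslated)
{
    return untranslated + 0x8000000000ULL;
}

/* Stand-in for a fabric read targeting the second device's local memory. */
static uint32_t fabric_read32(uint64_t fabric_addr)
{
    printf("fabric read at translated address 0x%llx\n",
           (unsigned long long)fabric_addr);
    return 0;
}

uint32_t read_peer_local_memory(uint64_t untranslated_addr)
{
    /* 1. Translation request / response. */
    uint64_t translated = ta_translate(untranslated_addr);

    /* 2. Direct access over the fabric using the translated address. */
    return fabric_read32(translated);
}

int main(void)
{
    (void)read_peer_local_memory(0x1000);
    return 0;
}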
Abstract:
A memory device includes a target memory, having a memory address space, and a volatile buffer memory, which is coupled to receive data written over a bus to the memory device for storage in specified addresses within the memory address space. A memory controller is configured to receive, via the bus, a flush instruction and, in response to the flush instruction, to immediately flush the data held in the buffer memory with specified addresses within the memory address space to the target memory.
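A toy C model of the buffering and flush behavior, with assumed names and sizes: bus writes land in a volatile buffer, and a flush instruction commits everything held in the buffer to the target memory.

/* Toy model: writes are staged in a volatile buffer and committed to the
 * target memory only when a flush instruction arrives over the bus. */
#include <stdint.h>
#include <stdio.h>

#define BUF_ENTRIES 32

struct buffered_write {
    uint64_t addr;           /* address within the target memory space */
    uint32_t value;
};

static struct buffered_write write_buf[BUF_ENTRIES];
static unsigned buf_count;

/* Bus write: data lands in the volatile buffer first. */
void mem_write(uint64_t addr, uint32_t value)
{
    if (buf_count < BUF_ENTRIES)
        write_buf[buf_count++] = (struct buffered_write){ addr, value };
}

/* Bus flush instruction: immediately commit the buffered data. */
void mem_flush(void)
{
    for (unsigned i = 0; i < buf_count; i++)
        printf("committing 0x%x to target address 0x%llx\n",
               write_buf[i].value, (unsigned long long)write_buf[i].addr);
    buf_count = 0;
}

int main(void)
{
    mem_write(0x100, 0xabcd);
    mem_write(0x104, 0x1234);
    mem_flush();
    return 0;
}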
Abstract:
A Network Interface Controller (NIC) includes a network interface, a peer interface and steering logic. The network interface is configured to receive incoming packets from a communication network. The peer interface is configured to communicate with a peer NIC not via the communication network. The steering logic is configured to classify the packets received over the network interface into first incoming packets that are destined to a local Central Processing Unit (CPU) served by the NIC, and second incoming packets that are destined to a remote CPU served by the peer NIC, to forward the first incoming packets to the local CPU, and to forward the second incoming packets to the peer NIC over the peer interface not via the communication network.
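The classification step might be sketched in C as follows; the even/odd rule standing in for the real steering criterion is purely an assumption, as are the function names.

/* Sketch under assumed names: each incoming packet is classified as
 * destined for the local CPU or for the remote CPU behind the peer NIC,
 * and peer traffic is forwarded over the peer interface, not the network. */
#include <stdint.h>
#include <stdio.h>

enum destination { DEST_LOCAL_CPU, DEST_PEER_NIC };

/* Hypothetical steering rule: even identifiers belong to the local CPU. */
static enum destination classify(uint32_t dest_id)
{
    return (dest_id % 2 == 0) ? DEST_LOCAL_CPU : DEST_PEER_NIC;
}

static void to_local_cpu(uint32_t id)
{
    printf("pkt %u -> local CPU\n", (unsigned)id);
}

static void to_peer_interface(uint32_t id)
{
    printf("pkt %u -> peer NIC via peer interface\n", (unsigned)id);
}

void steer_incoming(uint32_t dest_id)
{
    if (classify(dest_id) == DEST_LOCAL_CPU)
        to_local_cpu(dest_id);
    else
        to_peer_interface(dest_id);
}

int main(void)
{
    steer_incoming(4);
    steer_incoming(5);
    return 0;
}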
Abstract:
A method for processing data includes receiving in a peripheral device, which is connected by a bus to a host processor having multiple host resources, information regarding respective power states of the host resources. The data are selectively directed from the peripheral device to the host resources responsively to the respective power states.
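As a rough sketch of power-state-aware steering (names and states assumed), the peripheral records each host resource's reported power state and prefers an active resource when directing incoming data.

/* Rough sketch: track reported power states and pick a destination
 * resource accordingly. Resource count and states are assumptions. */
#include <stdio.h>

#define NUM_RESOURCES 4

enum power_state { PWR_ACTIVE, PWR_IDLE, PWR_SLEEP };

static enum power_state resource_state[NUM_RESOURCES];

/* Called when the peripheral receives power-state information. */
void update_power_state(int resource, enum power_state s)
{
    if (resource >= 0 && resource < NUM_RESOURCES)
        resource_state[resource] = s;
}

/* Pick a destination resource for incoming data, avoiding sleeping ones. */
int select_resource(void)
{
    for (int i = 0; i < NUM_RESOURCES; i++)
        if (resource_state[i] == PWR_ACTIVE)
            return i;
    for (int i = 0; i < NUM_RESOURCES; i++)
        if (resource_state[i] != PWR_SLEEP)
            return i;
    return 0;   /* all asleep: fall back (a real device might defer instead) */
}

int main(void)
{
    update_power_state(0, PWR_SLEEP);
    update_power_state(1, PWR_ACTIVE);
    printf("data directed to host resource %d\n", select_resource());
    return 0;
}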