Abstract:
A method for computing includes running a plurality of virtual machines on a computer having one or more cores and a memory. Upon occurrence of an event pertaining to a given virtual machine during a period in which the given virtual machine is unable to receive an interrupt, an interrupt message is written to a pre-assigned interrupt address in the memory. When the given virtual machine is able to receive the interrupt, after writing of the interrupt message, a context of the given virtual machine is copied from the memory to a given core on which the given virtual machine is running, and a hardware interrupt is automatically raised on the given core responsively to the interrupt message in the memory.
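The short C sketch below illustrates the flow described in this abstract: an interrupt message is recorded at a pre-assigned memory location while the virtual machine cannot take interrupts, and a hardware interrupt is raised once the machine's context is restored to a core. All names (vm_ctx, post_interrupt, vm_resume, raise_hw_irq) are hypothetical illustrations, not the patent's or any hypervisor's actual API.

/* Minimal single-threaded sketch of the posted-interrupt idea above. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct vm_ctx {
    bool     can_take_irq;   /* VM currently able to receive interrupts?   */
    uint32_t irq_msg;        /* message written to the pre-assigned address */
    bool     irq_pending;    /* set when a message has been written         */
};

/* Hypervisor side: record the event at the pre-assigned memory location. */
static void post_interrupt(struct vm_ctx *vm, uint32_t msg)
{
    vm->irq_msg = msg;
    vm->irq_pending = true;  /* no hardware interrupt raised yet */
}

/* Stand-in for raising a real hardware interrupt on the given core. */
static void raise_hw_irq(uint32_t msg)
{
    printf("hardware interrupt raised, message=0x%x\n", msg);
}

/* Called when the VM's context is copied back onto a core and it can again
 * receive interrupts: convert the posted message into a real interrupt. */
static void vm_resume(struct vm_ctx *vm)
{
    vm->can_take_irq = true;
    if (vm->irq_pending) {
        raise_hw_irq(vm->irq_msg);
        vm->irq_pending = false;
    }
}

int main(void)
{
    struct vm_ctx vm = { .can_take_irq = false };
    post_interrupt(&vm, 0x24);  /* event arrives while the VM is descheduled */
    vm_resume(&vm);             /* interrupt delivered once the VM is back   */
    return 0;
}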
Abstract:
An interface device includes a first proxy interface configured to carry out first direct memory access (DMA) transactions initiated by an input/output (I/O) device and a second proxy interface configured to carry out second DMA transactions initiated by a storage drive. A buffer memory is coupled between the first and second proxy interfaces and configured to temporarily hold data transferred in the first and second DMA transactions. Control logic is configured to invoke the second DMA transactions in response to the first DMA transactions so as to cause the data to be transferred via the buffer between the I/O device and the storage drive.
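The following sketch models the bridging flow in plain C: a DMA transaction from the I/O device fills the buffer, and the control logic then invokes a second DMA transaction toward the storage drive over the same buffer. The functions dma_from_io, dma_to_drive, and on_io_dma_complete are hypothetical stand-ins for the two proxy interfaces and the control logic; in the actual device these transfers would be performed in hardware.

#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define BUF_SIZE 4096
static unsigned char buffer[BUF_SIZE];   /* buffer memory between the proxies */

/* First proxy: DMA transaction initiated by the I/O device. */
static size_t dma_from_io(const unsigned char *src, size_t len)
{
    if (len > BUF_SIZE)
        len = BUF_SIZE;
    memcpy(buffer, src, len);
    return len;
}

/* Second proxy: DMA transaction toward the storage drive. */
static void dma_to_drive(size_t len)
{
    printf("writing %zu bytes from buffer to drive\n", len);
}

/* Control logic: invoke the second DMA in response to the first, so data
 * flows I/O device -> buffer -> storage drive without touching host memory. */
static void on_io_dma_complete(const unsigned char *src, size_t len)
{
    size_t n = dma_from_io(src, len);
    dma_to_drive(n);
}

int main(void)
{
    unsigned char payload[512] = { 0 };
    on_io_dma_complete(payload, sizeof(payload));
    return 0;
}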
Abstract:
An apparatus includes a Silicon Photonics (SiP) device and a ferrule. The SiP device includes multiple optical waveguides. The ferrule includes multiple optical fibers for exchanging optical signals with the respective optical waveguides of the SiP device. In some embodiments, an array of micro-lenses is configured to couple the optical signals between the optical waveguides of the SiP device and the respective optical fibers of the ferrule. In some embodiments, a polymer layer is placed between the SiP device and the ferrule, and includes multiple polymer-based Spot-Size Converters (SSCs) that are configured to couple the optical signals between the optical waveguides of the SiP device and the respective optical fibers of the ferrule.
Abstract:
A network interface includes a host interface for communicating with a node, and circuitry that is configured to communicate with one or more other nodes over a communication network so as to carry out, jointly with the other nodes, a redundant storage operation that includes a redundancy calculation, the circuitry performing the redundancy calculation on behalf of the node.
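The abstract does not specify the redundancy code, but a common example of such a redundancy calculation is XOR parity over equal-sized data blocks, as in RAID-5-style schemes. The sketch below shows that calculation in C; the function name and block layout are illustrative assumptions, not the patent's implementation.

#include <stddef.h>
#include <stdio.h>

/* parity[i] = block0[i] ^ block1[i] ^ ... for every byte position i */
static void xor_parity(const unsigned char *blocks[], size_t nblocks,
                       size_t block_len, unsigned char *parity)
{
    for (size_t i = 0; i < block_len; i++) {
        unsigned char p = 0;
        for (size_t b = 0; b < nblocks; b++)
            p ^= blocks[b][i];
        parity[i] = p;
    }
}

int main(void)
{
    const unsigned char b0[4] = { 0x01, 0x02, 0x03, 0x04 };
    const unsigned char b1[4] = { 0x10, 0x20, 0x30, 0x40 };
    const unsigned char *blocks[] = { b0, b1 };
    unsigned char parity[4];

    xor_parity(blocks, 2, sizeof(parity), parity);
    for (size_t i = 0; i < sizeof(parity); i++)
        printf("%02x ", parity[i]);
    printf("\n");
    return 0;
}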
Abstract:
A connector cage includes a bezel, having a plurality of slots formed therein, and a cage structure including upper and lower sides and multiple partitions extending between the upper and lower sides to define receptacles for receiving cable connectors. Multiple tabs protrude out of at least one of the sides in locations at which the tabs fit into the slots in the bezel, and are folded over the slots so as to secure the cage structure to the bezel. The cage may also include multiple snap-on spring subassemblies, each spring subassembly secured to a front end of a respective partition and comprising leaves that bow outward to contact the shells of the connectors that are inserted into the receptacles adjacent to the partition.
Abstract:
An apparatus includes multiple data sources and arbitration circuitry. The data sources are configured to send data items and respective arbitration requests to a common destination, such that the data items are sent without waiting for any indication that they were served to the destination in response to the respective arbitration requests. The arbitration circuitry is configured to receive and buffer the data items, to perform arbitration on the buffered data items responsively to the arbitration requests, and to serve the buffered data items to the destination in accordance with the arbitration.
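A software analogy of this "send without waiting for a grant" scheme is sketched below: each source pushes its item into a per-source buffer, and the arbiter later serves buffered items to the destination, here using simple round-robin arbitration. The queue sizes, round-robin policy, and function names are illustrative assumptions; the abstract describes hardware arbitration circuitry, not this code.

#include <stdbool.h>
#include <stdio.h>

#define NUM_SOURCES 3
#define QUEUE_DEPTH 4

struct source_queue {
    int items[QUEUE_DEPTH];
    int head, tail, count;
};

static struct source_queue queues[NUM_SOURCES];

/* Data source side: buffer the item; no grant or indication is awaited. */
static bool send_item(int src, int item)
{
    struct source_queue *q = &queues[src];
    if (q->count == QUEUE_DEPTH)
        return false;               /* buffer full: item is back-pressured */
    q->items[q->tail] = item;
    q->tail = (q->tail + 1) % QUEUE_DEPTH;
    q->count++;
    return true;
}

/* Arbitration side: round-robin over sources with pending items, serving
 * one buffered item to the destination per call. */
static void arbitrate_once(void)
{
    static int rr = 0;
    for (int i = 0; i < NUM_SOURCES; i++) {
        int src = (rr + i) % NUM_SOURCES;
        struct source_queue *q = &queues[src];
        if (q->count > 0) {
            int item = q->items[q->head];
            q->head = (q->head + 1) % QUEUE_DEPTH;
            q->count--;
            rr = src + 1;
            printf("served item %d from source %d\n", item, src);
            return;
        }
    }
}

int main(void)
{
    send_item(0, 100);
    send_item(2, 300);
    send_item(0, 101);
    for (int i = 0; i < 3; i++)
        arbitrate_once();
    return 0;
}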
Abstract:
Communication apparatus includes a switch, which includes switching logic, multiple ports for connection to a network, and a management port, and which is configured to assign both a first link-layer address and a second link-layer address to the management port. A host processor includes a memory and a central processing unit (CPU), which is configured to run software implementing a management agent for managing functions of the switch. A network interface controller (NIC) is connected to the management port and is configured to convey incoming management packets, which are directed by the switch to the first link-layer address, to the CPU for processing by the management agent, and to write directly to the memory data contained in incoming remote direct memory access (RDMA) packets, which are directed by the switch to the second link-layer address.
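The sketch below illustrates how incoming frames at the management port could be demultiplexed by destination link-layer (MAC) address: one address steers frames to the management agent running on the CPU, the other to the path that writes RDMA payloads directly to host memory. The addresses, handler names, and dispatch function are placeholders, not the NIC's actual receive path.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static const uint8_t MGMT_MAC[6] = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 };
static const uint8_t RDMA_MAC[6] = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x02 };

static void deliver_to_management_agent(const uint8_t *frame, size_t len)
{
    printf("management packet (%zu bytes) -> CPU / management agent\n", len);
}

static void rdma_write_to_memory(const uint8_t *frame, size_t len)
{
    printf("RDMA packet (%zu bytes) -> written directly to host memory\n", len);
}

/* NIC receive path: dispatch on the destination MAC in the Ethernet header. */
static void nic_rx(const uint8_t *frame, size_t len)
{
    if (len < 6)
        return;
    if (memcmp(frame, MGMT_MAC, 6) == 0)
        deliver_to_management_agent(frame, len);
    else if (memcmp(frame, RDMA_MAC, 6) == 0)
        rdma_write_to_memory(frame, len);
}

int main(void)
{
    uint8_t frame[64] = { 0 };
    memcpy(frame, MGMT_MAC, 6);
    nic_rx(frame, sizeof(frame));
    memcpy(frame, RDMA_MAC, 6);
    nic_rx(frame, sizeof(frame));
    return 0;
}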
Abstract:
A method is provided for communication in a packet data network that includes at least first and second subnets interconnected by routers. The method includes defining at least first and second classes of link-layer traffic within the subnets, such that the link-layer traffic in the first class is transmitted among nodes in the network without loss of packets, while at least some of the packets in the second class are dropped in case of network congestion. The routers are configured by transmitting control traffic over the network in the packets of the second class. Data traffic is transmitted between the nodes in the first and second subnets via the configured routers in the packets of the first class.
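A toy classifier illustrating the split described above is shown below: control traffic (such as router configuration) is mapped to the lossy class, while data traffic between the subnets is mapped to the lossless class. The enum values and the classifier are illustrative assumptions only; a real network would map these classes to link-level priorities or virtual lanes with flow control.

#include <stdio.h>

enum traffic_class {
    CLASS_LOSSLESS = 0,   /* first class: transmitted without packet loss  */
    CLASS_LOSSY    = 1    /* second class: may be dropped under congestion */
};

enum packet_kind { PKT_CONTROL, PKT_DATA };

static enum traffic_class classify(enum packet_kind kind)
{
    return (kind == PKT_CONTROL) ? CLASS_LOSSY : CLASS_LOSSLESS;
}

int main(void)
{
    printf("control packet -> class %d\n", classify(PKT_CONTROL));
    printf("data packet    -> class %d\n", classify(PKT_DATA));
    return 0;
}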
Abstract:
A method for data transfer includes receiving, in an operating system of a host computer, an instruction initiated by a user application running on the host computer. The instruction identifies a page of virtual memory of the host computer that is to be used in receiving data in a message that is to be transmitted over a network to the host computer but has not yet been received. In response to the instruction, the page is loaded into the memory, and upon receiving the message, the data are written to the loaded page.
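From user space on a POSIX system, one way to approximate "loading the page in advance" is to map the buffer and pin it with mlock() before the network message arrives, so the eventual write does not trigger a page fault. The sketch below is an illustration of that idea under these assumptions, not the patent's specific mechanism.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    void *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return 1;

    /* Ask the OS to bring the page in and keep it resident before the
     * incoming message needs it. */
    if (mlock(buf, page) != 0) {
        perror("mlock");
        munmap(buf, page);
        return 1;
    }

    /* Later, when the message arrives, the data can be written to the page
     * without a page fault; here we simply simulate the write. */
    memcpy(buf, "incoming message data", 22);
    printf("wrote data into the pre-loaded page\n");

    munlock(buf, page);
    munmap(buf, page);
    return 0;
}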
Abstract:
A method for data transfer includes receiving in an input/output (I/O) operation a first segment of data to be written to a specified virtual address in a host memory. Upon receiving the first segment of the data, it is detected that a first page that contains the specified virtual address is swapped out of the host memory. At least one second page of the host memory is identified, to which a second segment of the data is expected to be written. Responsively to detecting that the first page is swapped out and to identifying the at least one second page, at least the first and second pages are swapped into the host memory. After swapping at least the first and second pages into the host memory, the data are written to the first and second pages.
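The sketch below illustrates the idea of swapping in both the faulting page and the page the next segment is expected to hit, using the POSIX call posix_madvise(POSIX_MADV_WILLNEED) as a user-space stand-in for the swap-in step. The address arithmetic and the "adjacent page" heuristic are illustrative assumptions, not the patent's method.

#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Ask the OS to bring in the page containing 'addr' and the following page,
 * anticipating that the next segment of the I/O will land there. */
static void prefetch_pages(void *addr, size_t page)
{
    uintptr_t base = (uintptr_t)addr & ~(uintptr_t)(page - 1);
    if (posix_madvise((void *)base, 2 * page, POSIX_MADV_WILLNEED) != 0)
        fprintf(stderr, "posix_madvise failed\n");
}

int main(void)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    void *buf = mmap(NULL, 4 * page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return 1;

    /* The first segment targets an address in the first page; prefetch it
     * and the adjacent page before writing the data. */
    prefetch_pages(buf, page);
    ((char *)buf)[0] = 1;        /* first segment written  */
    ((char *)buf)[page] = 2;     /* second segment written */

    munmap(buf, 4 * page);
    return 0;
}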