Abstract:
Examples are disclosed for forwarding or receiving data segments associated with a large data packet. In some examples, a large data packet may be segmented into a number of data segments having separate headers that include identifiers to associate the data segments with the large data packet. The data segments with separate headers may then be forwarded from a network node via a communication channel. In other examples, the data segments with separate headers may be received at another network node and then recombined to form the large data packet at the other network node. Other examples are described and claimed.
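The segment-and-reassemble flow can be pictured with a minimal sketch in Python. The header fields (packet_id, segment_index, total_segments) and the 1 KB payload size below are illustrative assumptions, not details taken from the disclosure.

```python
# Illustrative sketch: segment a large packet into headered segments, then
# reassemble it at the receiving node. Field names/sizes are assumptions.
SEGMENT_SIZE = 1024  # bytes of payload per segment (assumed)

def segment_packet(packet_id: int, payload: bytes) -> list:
    chunks = [payload[i:i + SEGMENT_SIZE]
              for i in range(0, len(payload), SEGMENT_SIZE)]
    return [
        {
            "packet_id": packet_id,         # ties segment to the large packet
            "segment_index": idx,           # position within the original packet
            "total_segments": len(chunks),  # lets the receiver know when it is done
            "payload": chunk,
        }
        for idx, chunk in enumerate(chunks)
    ]

def reassemble(segments: list) -> bytes:
    # Receiver side: order by segment_index and concatenate the payloads.
    ordered = sorted(segments, key=lambda s: s["segment_index"])
    assert len(ordered) == ordered[0]["total_segments"], "segments missing"
    return b"".join(s["payload"] for s in ordered)

big_packet = b"x" * 5000
assert reassemble(segment_packet(7, big_packet)) == big_packet
```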
Abstract:
Technologies for providing FPGA infrastructure-as-a-service include a computing device having an FPGA, scheduler logic, and design loader logic. The scheduler logic selects an FPGA application for execution and the design loader logic loads a design image into the FPGA. The scheduler logic receives a ready signal from the FPGA in response to loading the design and sends a start signal to the FPGA application. The FPGA executes the FPGA application in response to sending the start signal. The scheduler logic may time-share the FPGA among multiple FPGA applications. The computing device may include signaling logic to manage signals between a user process and the FPGA application and DMA logic to manage bulk data transfer between the user process and the FPGA application. The computing device may include a user process linked to an FPGA library executed by a processor of the computing device. Other embodiments are described and claimed.
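A rough sketch of the load/ready/start handshake, with the FPGA modeled as a plain object and the applications handled round-robin. The class and method names (MockFPGA, load_design, the "ready" token) are invented for illustration; they are not the disclosure's interfaces.

```python
import queue

class MockFPGA:
    """Stand-in for the FPGA: acknowledges a design load with a ready signal."""
    def __init__(self):
        self.signals = queue.Queue()
        self.design = None

    def load_design(self, image: bytes):
        self.design = image
        self.signals.put("ready")  # FPGA raises ready once the image is loaded

class Scheduler:
    """Time-shares the FPGA among queued applications (round-robin, assumed)."""
    def __init__(self, fpga: MockFPGA):
        self.fpga = fpga

    def run(self, apps: list):
        for app in apps:  # select the next FPGA application for execution
            self.fpga.load_design(app["design_image"])    # design loader step
            assert self.fpga.signals.get(timeout=1) == "ready"
            app["start"]()                                # send start signal

apps = [{"design_image": b"bitstream-a", "start": lambda: print("app A running")},
        {"design_image": b"bitstream-b", "start": lambda: print("app B running")}]
Scheduler(MockFPGA()).run(apps)
```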
Abstract:
Disclosed herein are systems, devices, and methods for simultaneous multithreading (SMT) with context associations. For example, in some embodiments, a computing device may include: one or more physical cores; and SMT logic to manage multiple logical cores per physical core such that operations of a first computing context are to be executed by a first logical core associated with the first computing context and operations of a second computing context are to be executed by a second logical core associated with the second computing context, wherein the first logical core and the second logical core share a common physical core.
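The context-to-logical-core association can be pictured as a small dispatch table. The sketch below is a scheduling model only; names like PhysicalCore and dispatch, and the two-way SMT assumption, are hypothetical rather than a hardware description.

```python
# Model: each physical core hosts two logical cores, and each computing
# context is pinned to its own logical core. Names are illustrative.
class PhysicalCore:
    def __init__(self, core_id: int, ways: int = 2):
        self.core_id = core_id
        # logical cores identified as (physical_id, thread_slot)
        self.logical_cores = [(core_id, slot) for slot in range(ways)]

phys = PhysicalCore(0)
# Associate each context with one logical core on the shared physical core.
context_to_logical = {"context_A": phys.logical_cores[0],
                      "context_B": phys.logical_cores[1]}

def dispatch(context: str, op: str):
    logical = context_to_logical[context]  # ops run only on the associated logical core
    print(f"{op} from {context} executes on logical core {logical}")

dispatch("context_A", "load")
dispatch("context_B", "add")
```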
Abstract:
Disclosed herein are systems and methods for multi-architecture computing. For example, in some embodiments, a computing device may include: a processor system including at least one first processing core having a first instruction set architecture (ISA), and at least one second processing core having a second ISA different from the first ISA; and a memory device coupled to the processor system, wherein the memory device has stored thereon a first binary representation of a program for the first ISA and a second binary representation of the program for the second ISA, and the memory device has stored thereon data for the program having an in-memory representation compatible with both the first ISA and the second ISA.
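One way to read this is: the loader picks the binary matching the executing core's ISA, while program data stays in a layout both ISAs can consume. The sketch below assumes fixed-width little-endian fields as that shared layout; the ISA names and record format are illustrative, not from the disclosure.

```python
import struct

# Two binary representations of the same program, one per ISA (contents fake).
binaries = {"isa_x86_64": b"\x7fELF...x86", "isa_arm64": b"\x7fELF...arm"}

def select_binary(current_isa: str) -> bytes:
    return binaries[current_isa]  # loader picks the image matching the core's ISA

# Program data kept in an ISA-neutral in-memory representation: here,
# fixed-width little-endian fields, readable by either core without translation.
def pack_record(counter: int, flag: int) -> bytes:
    return struct.pack("<Ib", counter, flag)  # 4-byte uint + 1-byte flag, little-endian

record = pack_record(42, 1)
assert struct.unpack("<Ib", record) == (42, 1)  # same bytes valid under either ISA
assert select_binary("isa_arm64") == binaries["isa_arm64"]
```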
Abstract:
Examples are disclosed for replicating data between storage servers. In some examples, a network input/output (I/O) device coupled either to a client device or to a storage server may exchange remote direct memory access (RDMA) commands or RDMA completion commands associated with replicating data received from the client device. The data may be replicated to a plurality of storage servers interconnected with each other and/or the client device via respective network communication links. Other examples are described and claimed.
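A minimal sketch of the replication pattern, with RDMA-style one-sided writes and completion commands modeled as plain method calls. The RdmaLink class and its method names are stand-ins for illustration, not a real verbs API.

```python
# Sketch: replicate a client write to several storage servers over
# RDMA-style links, collecting a completion command from each replica.
class RdmaLink:
    def __init__(self, name: str):
        self.name = name
        self.memory = {}

    def rdma_write(self, addr: int, data: bytes) -> str:
        self.memory[addr] = data          # one-sided write into remote memory
        return f"completion:{self.name}"  # RDMA completion command back to sender

def replicate(data: bytes, addr: int, replicas: list) -> bool:
    completions = [link.rdma_write(addr, data) for link in replicas]
    return len(completions) == len(replicas)  # every replica acknowledged

servers = [RdmaLink("storage-1"), RdmaLink("storage-2"), RdmaLink("storage-3")]
assert replicate(b"client block", addr=0x1000, replicas=servers)
```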
Abstract:
Embodiments of the disclosure are directed to controlling an endpoint device using a central control server. The central control server is configured to communicate with the endpoint device across a communications interface compliant with a remote direct memory access (RDMA) protocol. The central control server includes an RDMA network interface controller and a control process. The control process can execute an endpoint device algorithm to identify read and write commands to be sent across the RDMA protocol-compliant interface to the endpoint device. The RDMA network interface controller can convert messages into RDMA-compliant messages that include direct read or write commands and memory location information. The endpoint device can also include a network interface controller that can understand the RDMA message, identify the memory location from the message, and execute the direct read or write access command.
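The control path can be sketched in three pieces: the control process emits read/write commands, the NIC wraps them with memory-location information, and the endpoint executes them directly against its memory. All names below (control_process, to_rdma_message, EndpointNIC) are illustrative assumptions.

```python
# Sketch of the control path from central control server to endpoint device.
def control_process() -> list:
    # Endpoint-device algorithm deciding what to read and write (assumed).
    return [("write", 0x10, b"\x01"), ("read", 0x10, None)]

def to_rdma_message(cmd) -> dict:
    op, addr, data = cmd
    # NIC converts the command into an RDMA-style message carrying
    # the direct read/write opcode plus the target memory location.
    return {"op": op, "memory_location": addr, "payload": data}

class EndpointNIC:
    def __init__(self):
        self.device_memory = {}

    def execute(self, msg: dict):
        # Direct memory access on the endpoint; no endpoint CPU involvement.
        if msg["op"] == "write":
            self.device_memory[msg["memory_location"]] = msg["payload"]
            return None
        return self.device_memory.get(msg["memory_location"])

endpoint = EndpointNIC()
results = [endpoint.execute(to_rdma_message(c)) for c in control_process()]
assert results == [None, b"\x01"]  # write landed; read returned it
```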
Abstract:
Methods and apparatus for implementing notification by network elements of packet drops. In response to determining that a packet is to be dropped, a network element such as a switch or router determines the source of the packet and returns a dropped packet notification message to the source. Upon receipt of the notification, networking software or embedded hardware on the source causes the dropped packet to be retransmitted. The notification may also be sent from the network element to the destination computer to inform networking software or embedded logic implemented by the destination computer that the packet was dropped and that notification to the source has been sent, thus relieving the destination of the need to send a Selective ACKnowledge (SACK) message to inform the source that the packet was not delivered.
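The two notification legs can be modeled end to end in a few lines: the switch tells the source (triggering retransmission) and the destination (suppressing the SACK). The Source, Destination, and switch_forward names below are invented for the sketch.

```python
# Sketch of the drop-notification flow described above. All names assumed.
class Source:
    def __init__(self):
        self.sent = {}          # seq -> packet, retained for retransmission

    def send(self, seq: int, payload: bytes) -> dict:
        self.sent[seq] = payload
        return {"src": self, "seq": seq, "payload": payload}

    def on_drop_notification(self, seq: int) -> bytes:
        return self.sent[seq]   # retransmit the dropped packet immediately

class Destination:
    def __init__(self):
        self.suppress_sack = set()

    def on_drop_notification(self, seq: int):
        self.suppress_sack.add(seq)  # source already notified; no SACK needed

def switch_forward(packet: dict, dst: Destination, must_drop: bool):
    if must_drop:
        dst.on_drop_notification(packet["seq"])                   # notify destination
        return packet["src"].on_drop_notification(packet["seq"])  # notify source
    return packet["payload"]

src, dst = Source(), Destination()
pkt = src.send(5, b"hello")
assert switch_forward(pkt, dst, must_drop=True) == b"hello"  # retransmitted copy
assert 5 in dst.suppress_sack
```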
Abstract:
Network interface devices with remote storage control. In some embodiments, a network interface device may include receiver circuitry and remote storage device control circuitry. The remote storage device control circuitry may be coupled to the receiver circuitry and may share a physical support with the receiver circuitry. The remote storage device control circuitry may be configured to control writing of data from the receiver circuitry to a remote storage device that does not share a physical support with the remote storage device control circuitry.
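A compact way to picture the arrangement: receiver circuitry and storage-control circuitry share one device, while the storage device they write to sits on separate hardware. The class names below are illustrative stand-ins for circuitry, not the disclosure's structures.

```python
# Sketch: data moves from the receive path straight to remote storage,
# without traversing a host software storage stack. Names are assumed.
class RemoteStorageDevice:
    def __init__(self):
        self.blocks = {}

    def write_block(self, lba: int, data: bytes):
        self.blocks[lba] = data

class NetworkInterfaceDevice:
    """Receiver + storage-control circuitry sharing one physical support."""
    def __init__(self, remote: RemoteStorageDevice):
        self.remote = remote  # storage lives on separate hardware

    def receive(self, lba: int, data: bytes):
        # Storage-control circuitry writes received data to the remote device.
        self.remote.write_block(lba, data)

disk = RemoteStorageDevice()
NetworkInterfaceDevice(disk).receive(lba=3, data=b"payload")
assert disk.blocks[3] == b"payload"
```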
Abstract:
An embodiment may include circuitry to facilitate, at least in part, a first network interface controller (NIC) in a client to be capable of accessing, via a second NIC in a server that is remote from the client and in a manner that is independent of an operating system environment in the server, at least one command interface of another controller of the server. The command interface may include at least one controller command queue. Such accessing may include writing at least one queue element to the at least one command queue to command the another controller to perform at least one operation associated with the another controller. The another controller may perform the at least one operation in response, at least in part, to the at least one queue element. Many alternatives, variations, and modifications are possible.
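The core idea, writing a queue element directly into a remote controller's command queue with no server OS involvement, can be sketched as below. The RemoteController class and the shape of the queue element are assumptions for illustration.

```python
from collections import deque

# Sketch: the client NIC writes a queue element into the remote controller's
# command queue via the server NIC, bypassing the server's OS environment.
class RemoteController:
    def __init__(self):
        self.command_queue = deque()   # the controller's command interface

    def poll_and_execute(self) -> str:
        entry = self.command_queue.popleft()
        # Controller performs the operation described by the queue element.
        return f"performed {entry['opcode']} on {entry['target']}"

def client_nic_write_queue_element(ctrl: RemoteController, opcode: str, target: str):
    # Server NIC exposes the queue memory; the client appends an element directly.
    ctrl.command_queue.append({"opcode": opcode, "target": target})

ctrl = RemoteController()
client_nic_write_queue_element(ctrl, opcode="read", target="block-9")
assert ctrl.poll_and_execute() == "performed read on block-9"
```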
Abstract:
Technologies for providing efficient detection of idle poll loops include a compute device. The compute device has a compute engine that includes a plurality of cores and a memory. The compute engine is to determine a ratio of unsuccessful operations to successful operations over a predefined time period of a core of the plurality of cores that is assigned to continually poll, within the predefined time period, a memory address for a change in status and determine whether the determined ratio satisfies a reference ratio of unsuccessful operations to successful operations. The reference ratio is indicative of a change in the operation of the assigned core. The compute engine is further to selectively increase or decrease a power usage of the assigned core as a function of whether the determined ratio satisfies the reference ratio. Other embodiments are also described and claimed.
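The detector reduces to a ratio comparison plus a power decision, as in the sketch below. The reference ratio of 100, the frequency bounds, and the halving/doubling power model are all assumed values for illustration, not figures from the disclosure.

```python
# Sketch of the idle-poll detector: compare the polling core's ratio of
# unsuccessful to successful polls against a reference ratio, then scale
# the core's power usage accordingly. Thresholds and power model assumed.
REFERENCE_RATIO = 100.0  # unsuccessful:successful ratio signalling idleness

def adjust_core_power(unsuccessful: int, successful: int, freq_mhz: int) -> int:
    # Guard against division by zero when the poll loop never found work.
    ratio = unsuccessful / successful if successful else float("inf")
    if ratio >= REFERENCE_RATIO:
        return max(freq_mhz // 2, 400)   # mostly empty polls: lower power
    return min(freq_mhz * 2, 2400)       # useful work arriving: raise power

assert adjust_core_power(unsuccessful=10_000, successful=5, freq_mhz=1600) == 800
assert adjust_core_power(unsuccessful=50, successful=100, freq_mhz=800) == 1600
```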