Abstract:
Examples are disclosed for command validation for access to a storage device maintained at a server. In some examples, a network input/output device coupled to the server may receive a command from a client remote to the server. For these examples, elements or modules of the network input/output device may be capable of validating the command and reporting the status of the received command to the client. Other examples are described and claimed.
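As a rough illustration of the validation flow described above, the following Python sketch checks a received storage command against a simple policy and returns a status to report back to the client. The command fields (opcode, lba, length) and the policy itself are illustrative assumptions, not drawn from the example.

```python
# A minimal sketch of command validation at a network I/O device; the
# command fields and the allowed-opcode policy are hypothetical.
from dataclasses import dataclass

ALLOWED_OPCODES = {"READ", "WRITE"}
MAX_LBA = 1 << 32  # assumed device capacity in logical blocks

@dataclass
class StorageCommand:
    client_id: str
    opcode: str
    lba: int
    length: int

def validate(cmd: StorageCommand) -> str:
    """Return a status string to report to the remote client."""
    if cmd.opcode not in ALLOWED_OPCODES:
        return "REJECTED: unsupported opcode"
    if cmd.lba < 0 or cmd.lba + cmd.length > MAX_LBA:
        return "REJECTED: out-of-range access"
    return "ACCEPTED"

print(validate(StorageCommand("client-1", "WRITE", 100, 8)))  # ACCEPTED
print(validate(StorageCommand("client-1", "TRIM", 0, 1)))     # REJECTED
```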
Abstract:
Techniques to use descriptors for packet transmit scheduling include grouping a plurality of data descriptors associated with blocks of data with a single descriptor. The single descriptor includes information related to the plurality of data descriptors and is used to schedule transmission of the blocks of data from a computing platform.
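One plausible reading of this grouping is sketched below: a single group descriptor carries the count and total byte length of its data descriptors, which is all the scheduler consults. The (address, length) descriptor format and the budget-based scheduler are assumptions for illustration.

```python
# A minimal sketch: a single descriptor summarizing a group of data
# descriptors, consulted by a simple budget-based transmit scheduler.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class GroupDescriptor:
    data_descs: List[Tuple[int, int]] = field(default_factory=list)  # (addr, len)

    @property
    def total_bytes(self) -> int:
        return sum(length for _, length in self.data_descs)

def schedule(groups: List[GroupDescriptor], budget: int) -> List[GroupDescriptor]:
    """Pick whole groups whose combined size fits the transmit budget."""
    picked = []
    for g in groups:
        if g.total_bytes <= budget:
            picked.append(g)
            budget -= g.total_bytes
    return picked

g = GroupDescriptor([(0x1000, 1500), (0x2000, 1500)])
print(schedule([g], budget=4000))  # the whole group fits and is scheduled
```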
Abstract:
There is disclosed, in an example, a computer-implemented method of providing network function virtualization orchestration (NFVO), including: determining that a first virtual network function (VNF) instance, providing a virtual service appliance on a virtual network, is to be migrated; provisioning a second VNF instance of the virtual service appliance; cloning configuration data from the first VNF to the second VNF; starting the second VNF without copying traffic data; and halting the first VNF. There is also disclosed an apparatus for performing the method, and a computer-readable medium having instructions for performing the method.
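The migration sequence enumerated above maps directly onto a few steps, sketched here in Python. The VNF class and its methods are hypothetical stand-ins, not a real NFVO API; the point is the ordering: clone configuration only, start the new instance, then halt the old one.

```python
# A minimal sketch of the migration sequence; all names are illustrative.
class VNF:
    def __init__(self, name):
        self.name = name
        self.config = {}
        self.running = False

    def start(self): self.running = True
    def halt(self): self.running = False

def migrate(first: VNF) -> VNF:
    second = VNF(first.name + "-migrated")  # provision a second instance
    second.config = dict(first.config)      # clone configuration data only
    second.start()                          # start without copying traffic data
    first.halt()                            # then halt the original
    return second

old = VNF("firewall"); old.config = {"rules": ["allow tcp/443"]}; old.start()
new = migrate(old)
print(new.running, old.running)  # True False
```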
Abstract:
Examples described herein include configuration of a transmitting network device to identify a source queue-pair identifier in at least some of the packets that are transmitted to an endpoint destination. A network device that receives packets and experiences congestion can determine whether a congestion-causing packet includes a source queue-pair identifier. If the congestion-causing packet includes a source queue-pair identifier, the network device can form and transmit a congestion notification message with a copy of the source queue-pair identifier to the transmitting network device. The transmitting network device can access a context for the congestion-causing packet using the source queue-pair identifier without having to perform a lookup to identify the context.
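The round trip described above is short enough to sketch end to end. In the Python sketch below, packets are dicts and the transmitter keeps its contexts in a table keyed by queue-pair id, so the copied identifier in the notification indexes the context directly with no flow lookup; all field names are illustrative assumptions.

```python
# A minimal sketch: receiver copies the source QP id into a congestion
# notification; transmitter uses it for direct context access.
contexts = {7: {"rate_limit": None}}  # transmitter-side state, keyed by QP id

def on_congestion(packet: dict):
    """Receiver side: build a notification if the packet carries a QP id."""
    qp = packet.get("src_qp")
    if qp is not None:
        return {"type": "CNP", "src_qp": qp}  # copy the identifier verbatim
    return None  # no identifier: the sender's context cannot be targeted

def on_cnp(cnp: dict):
    """Transmitter side: direct context access, no flow-tuple lookup."""
    ctx = contexts[cnp["src_qp"]]
    ctx["rate_limit"] = "reduced"

cnp = on_congestion({"src_qp": 7, "payload": b"..."})
if cnp:
    on_cnp(cnp)
print(contexts[7])  # {'rate_limit': 'reduced'}
```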
Abstract:
Apparatuses and methods for managing jitter resulting from processing through a network interface pipeline are disclosed. In embodiments, a network traffic scheduler annotates packets to be transmitted over a bandwidth-limited network connection with time relationship information to ensure downstream bandwidth limitations are not violated. Following processing through a network interface pipeline, a jitter shaper inspects the annotated time relationship information and pipeline-imposed delays and, by imposing a variable delay, reestablishes bandwidth-compliant time relationships based upon the annotated time relationship information and configured tolerances.
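A simplified version of the shaper's decision is sketched below: each packet carries an annotated intended send time, and the shaper holds any packet the pipeline emitted ahead of schedule, within a configured tolerance. The timings and tolerance value are assumptions for illustration.

```python
# A minimal sketch of a jitter shaper imposing a variable delay so that
# departures track the annotated schedule within a tolerance.
def shape(packets, tolerance=0.0001):
    """packets: list of (intended_time, pipeline_exit_time) in seconds.
    Returns the actual transmit times after the variable delay."""
    sends = []
    for intended, exited in packets:
        # hold packets that left the pipeline early; a packet can never be
        # sent before it actually exits the pipeline
        sends.append(max(exited, intended - tolerance))
    return sends

# three packets annotated 1 ms apart that the pipeline emitted back-to-back
annotated = [(0.000, 0.0004), (0.001, 0.0005), (0.002, 0.0006)]
print(shape(annotated))  # spacing close to the annotated 1 ms is restored
```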
Abstract:
Systems and methods for multi-architecture computing. Some computing devices may include: a processor system including at least one first processing core having a first instruction set architecture (ISA), and at least one second processing core having a second ISA different from the first ISA; and a memory device coupled to the processor system, wherein the memory device has stored thereon a first binary representation of a program for the first ISA and a second binary representation of the program for the second ISA, and the memory device has stored thereon data for the program having an in-memory representation compatible with both the first ISA and the second ISA.
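The key constraint above is that the program data use an in-memory representation both ISAs can read. One common way to achieve that, sketched below, is to fix field sizes, byte order, and layout explicitly rather than relying on compiler defaults; the particular struct layout here is an illustrative assumption.

```python
# A minimal sketch of an ISA-neutral data layout: explicit sizes and byte
# order make the same bytes decode identically under either ISA's binary.
import struct

LAYOUT = "<iqd"  # little-endian: int32, int64, float64 -- fixed for all ISAs

def to_shared(a: int, b: int, x: float) -> bytes:
    return struct.pack(LAYOUT, a, b, x)

def from_shared(buf: bytes):
    return struct.unpack(LAYOUT, buf)

blob = to_shared(1, 2**40, 3.14)
print(from_shared(blob))  # identical result regardless of which core reads it
```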
Abstract:
Examples are disclosed for forwarding or receiving data segments associated with a large data packet. In some examples, a large data packet may be segmented into a number of data segments having separate headers that include identifiers to associate the data segments with the large data packet. The data segments with separate headers may then be forwarded from a network node via a communication channel. In other examples, the data segments with separate headers may be received at another network node and then recombined to form the large data packet at the other network node. Other examples are described and claimed.
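The segment-and-recombine round trip can be shown in a few lines. In the Python sketch below, each segment's header carries an identifier tying it back to the original packet plus an index and count; these header field names are illustrative, not the example's wire format.

```python
# A minimal sketch of segmenting a large packet and recombining it at the
# receiving node; header fields (packet_id, index, total) are hypothetical.
def segment(packet_id: int, payload: bytes, mss: int):
    chunks = [payload[i:i + mss] for i in range(0, len(payload), mss)]
    return [{"packet_id": packet_id, "index": i, "total": len(chunks), "data": c}
            for i, c in enumerate(chunks)]

def recombine(segments):
    segs = sorted(segments, key=lambda s: s["index"])
    assert len(segs) == segs[0]["total"], "missing segments"
    return b"".join(s["data"] for s in segs)

segs = segment(42, b"A" * 4000, mss=1500)
print(len(segs), recombine(segs) == b"A" * 4000)  # 3 True
```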
Abstract:
Technologies for providing FPGA infrastructure-as-a-service include a computing device having an FPGA, scheduler logic, and design loader logic. The scheduler logic selects an FPGA application for execution and the design loader logic loads a design image into the FPGA. The scheduler logic receives a ready signal from the FPGA in response to loading the design and sends a start signal to the FPGA application. The FPGA executes the FPGA application in response to the start signal. The scheduler logic may time-share the FPGA among multiple FPGA applications. The computing device may include signaling logic to manage signals between a user process and the FPGA application and DMA logic to manage bulk data transfer between the user process and the FPGA application. The computing device may include a user process linked to an FPGA library executed by a processor of the computing device. Other embodiments are described and claimed.
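The load/ready/start handshake and the time-sharing of the FPGA are sketched below with a software-simulated FPGA; every class and method name is an assumption for illustration, not a real FPGA toolchain API.

```python
# A minimal sketch of the scheduler handshake: load a design image, wait
# for the ready signal, send start, and round-robin among applications.
import itertools

class SimulatedFPGA:
    def load(self, image):
        self.image = image
        return "READY"  # ready signal once the design image is loaded

    def start(self):
        print(f"running {self.image}")

def time_share(fpga, apps, slices=4):
    """Round-robin the FPGA among applications for a fixed number of slices."""
    for app in itertools.islice(itertools.cycle(apps), slices):
        if fpga.load(app) == "READY":  # wait for ready before the start signal
            fpga.start()

time_share(SimulatedFPGA(), ["app-a.bit", "app-b.bit"])
```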
Abstract:
Technologies for I/O device virtualization include a computing device with an I/O device that includes a physical function, multiple virtual functions, and multiple assignable resources, such as I/O queues. The physical function assigns an assignable resource to a virtual function. The computing device configures a page table mapping from a virtual function memory page located in a configuration space of the virtual function to a physical function memory page located in a configuration space of the physical function. The virtual function memory page includes a control register for the assignable resource, and the physical function memory page includes another control register for the assignable resource. A value may be written to the control register in the virtual function memory page. A processor of the computing device translates the virtual function memory page to the physical function memory page using the page table mapping. Other embodiments are described and claimed.
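The effect of the page table mapping is that a register write into the virtual function's page lands in the physical function's page. The Python sketch below models the page table as a dict and memory as a dict; the addresses, page size, and register offsets are all illustrative assumptions.

```python
# A minimal sketch: a write to the virtual function's control-register page
# is translated to the physical function's page via a page-table mapping.
PAGE = 0x1000
page_table = {0x0000_A000: 0x0000_F000}  # VF page -> PF page
memory = {}                              # simulated physical memory

def write_register(vf_addr: int, value: int):
    page, offset = vf_addr & ~(PAGE - 1), vf_addr & (PAGE - 1)
    pf_page = page_table[page]           # processor translates via page table
    memory[pf_page + offset] = value     # lands in the PF's control register

write_register(0x0000_A008, 0x1)         # VF-side register write
print(hex(0x0000_F008), memory[0x0000_F008])  # 0xf008 1
```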
Abstract:
Disclosed herein are systems, devices, and methods for simultaneous multithreading (SMT) with context associations. For example, in some embodiments, a computing device may include: one or more physical cores; and SMT logic to manage multiple logical cores per physical core such that operations of a first computing context are to be executed by a first logical core associated with the first computing context and operations of a second computing context are to be executed by a second logical core associated with the second computing context, wherein the first logical core and the second logical core share a common physical core.
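The context association described above can be modeled as two per-context operation queues sharing one physical core, as in the Python sketch below. The queue-based scheduling model is an assumption for illustration, not the disclosed SMT logic itself.

```python
# A minimal sketch: two logical cores share one physical core, and each
# logical core only executes operations tagged with its own context.
from collections import deque

class PhysicalCore:
    def __init__(self):
        self.logical = {1: deque(), 2: deque()}  # context id -> op queue

    def submit(self, context_id, op):
        self.logical[context_id].append(op)      # ops stay with their context

    def run_cycle(self):
        for ctx, q in self.logical.items():      # interleave the logical cores
            if q:
                print(f"context {ctx}: {q.popleft()}")

core = PhysicalCore()
core.submit(1, "load A"); core.submit(2, "store B")
core.run_cycle()  # context 1: load A / context 2: store B
```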