Abstract:
An example computer system for transferring a packet includes a hypervisor to run a first virtual machine and a second virtual machine. The computer system also includes a first memory address space associated with the first virtual machine to store the packet. The computer system further includes a second memory address space associated with the second virtual machine to receive and store the packet. The computer system also includes a virtual switch coupled to the first virtual machine and the second virtual machine to detect that the packet is to be sent from the first virtual machine to the second virtual machine. The computer system further includes a direct memory access device to copy the packet from the first memory address space to the second memory address space.
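As a rough illustration of the data path this abstract describes (not the patented implementation), the following Python sketch models a virtual switch that detects a packet destined for a co-resident VM and has a DMA engine copy it between the two VMs' address spaces; all class and method names are invented for the example.

```python
# Illustrative sketch only: bytearrays stand in for VM address spaces and a simple
# object stands in for the DMA engine.

class DmaDevice:
    def copy(self, src_space, src_offset, dst_space, dst_offset, length):
        # A real DMA engine would be programmed with physical addresses; here we
        # simply move bytes between two bytearrays modeling the address spaces.
        dst_space[dst_offset:dst_offset + length] = src_space[src_offset:src_offset + length]

class VirtualSwitch:
    def __init__(self, dma):
        self.dma = dma
        self.vm_memory = {}   # vm_id -> bytearray modeling that VM's address space
        self.rx_offset = {}   # vm_id -> next free offset in its receive region

    def attach(self, vm_id, memory):
        self.vm_memory[vm_id] = memory
        self.rx_offset[vm_id] = 0

    def send(self, src_vm, dst_vm, packet_offset, packet_len):
        # Detect that both endpoints are local, then offload the copy to the DMA
        # device instead of copying through the hypervisor.
        if dst_vm in self.vm_memory:
            dst_off = self.rx_offset[dst_vm]
            self.dma.copy(self.vm_memory[src_vm], packet_offset,
                          self.vm_memory[dst_vm], dst_off, packet_len)
            self.rx_offset[dst_vm] += packet_len
            return True
        return False  # a non-local destination would go out a physical NIC instead

# Usage: VM 1 places a packet in its memory; the switch DMA-copies it into VM 2's memory.
switch = VirtualSwitch(DmaDevice())
vm1_mem, vm2_mem = bytearray(4096), bytearray(4096)
switch.attach(1, vm1_mem)
switch.attach(2, vm2_mem)
vm1_mem[0:5] = b"hello"
switch.send(src_vm=1, dst_vm=2, packet_offset=0, packet_len=5)
assert vm2_mem[0:5] == b"hello"
```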
Abstract:
Methods for implementing mini-mezzanine Open Compute Project (OCP) plug-and-play Network PHY Cards and associated apparatus. In accordance with one aspect, the MAC (Media Access Control) and PHY (Physical) layer functions in one or more communication protocol stacks are split between a MAC block in a Platform Controller Hub (PCH) or processor SoC and a PHY card installed in a mezzanine slot of a platform and including one or more ports. During platform initialization operations, configuration parameters are read from the PHY card, including a PHY card ID, and a corresponding configuration script is selected and executed to configure the PHY card for use in the platform. The configuration parameters are also used to enumerate PCIe devices associated with physical functions and ports supported by the PHY card.
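The initialization flow can be pictured with a short sketch, under assumed data structures: read the PHY card's configuration parameters (including its ID), select and run the matching configuration script, and enumerate PCIe functions for the card's ports. The tables, register names, and helper functions below are illustrative only.

```python
# Illustrative sketch of the platform-initialization flow described above.

CONFIG_SCRIPTS = {
    # phy_card_id -> list of (register, value) writes standing in for a config script
    0x10: [("LANE_MODE", 0x1), ("SPEED", 10_000)],
    0x20: [("LANE_MODE", 0x4), ("SPEED", 40_000)],
}

def read_phy_card_config(slot):
    # Stand-in for reading an ID ROM / EEPROM on the mezzanine card.
    return {"phy_card_id": 0x10, "num_ports": 2}

def run_config_script(script):
    for register, value in script:
        print(f"write {register} = {value}")  # placeholder for MMIO/MDIO writes

def enumerate_pcie_functions(num_ports):
    # One physical function per port in this simplified model.
    return [{"bus": 0, "device": 3, "function": port} for port in range(num_ports)]

def init_phy_card(slot=0):
    params = read_phy_card_config(slot)
    script = CONFIG_SCRIPTS[params["phy_card_id"]]
    run_config_script(script)
    return enumerate_pcie_functions(params["num_ports"])

print(init_phy_card())
```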
Abstract:
Systems, apparatuses, and/or methods to provide data processing offload. An apparatus may determine whether a task is to be processed locally at a client device or remotely off the client device and issue the task to a wireless network and/or a wired network when the task is to be processed remotely off the client device at a server device. An apparatus may identify the task from the wireless network and/or the wired network when the task is to be processed locally at the server device, distribute the task to a server resource at the server device when the task is to be processed locally at the server device, and provide a result of the task to the wireless network and/or the wired network when the result is to be consumed remotely at the client device.
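A minimal sketch of the client/server offload decision, using invented cost thresholds and an in-process queue standing in for the wireless or wired network:

```python
# Illustrative only: the offload policy, queues, and workload are all assumptions.
import queue
import threading

network = queue.Queue()   # stand-in for the wireless and/or wired network
results = queue.Queue()

def should_offload(task):
    # Invented policy: offload compute-heavy tasks with small payloads.
    return task["cycles"] > 1_000_000 and task["payload_bytes"] < 10_000

def run_task(task):
    return sum(task["data"])  # placeholder workload

def client_submit(task):
    if should_offload(task):
        network.put(task)               # issue the task to the network
        return results.get(timeout=2)   # consume the remotely produced result
    return run_task(task)               # process locally at the client device

def server_loop_once():
    task = network.get(timeout=2)       # identify the task from the network
    result = run_task(task)             # distribute to a server resource (inline here)
    results.put(result)                 # provide the result back toward the client

threading.Thread(target=server_loop_once, daemon=True).start()
print(client_submit({"cycles": 5_000_000, "payload_bytes": 64, "data": [1, 2, 3]}))  # -> 6
```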
Abstract:
Examples include techniques for a field programmable gate array (FPGA) to perform one or more functions for an application specific integrated circuit (ASIC). Example techniques include communication between the ASIC and the FPGA via a sideband communication link to enable the ASIC to indicate to the FPGA a need for the FPGA to perform a function to fulfill a request received by the ASIC.
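One way to picture the division of labor is the sketch below, with invented names: the ASIC fulfills requests it supports natively and forwards everything else over a sideband link object to an FPGA that performs the function on its behalf.

```python
# Illustrative sketch; the function set, link model, and payloads are assumptions.

class SidebandLink:
    def __init__(self, fpga):
        self.fpga = fpga
    def request(self, function, payload):
        return self.fpga.perform(function, payload)

class Fpga:
    def __init__(self):
        # Functions loaded into the FPGA fabric; here plain Python callables.
        self.functions = {"compress": lambda data: data[:len(data) // 2]}
    def perform(self, function, payload):
        return self.functions[function](payload)

class Asic:
    NATIVE = {"checksum"}
    def __init__(self, sideband):
        self.sideband = sideband
    def handle(self, function, payload):
        if function in self.NATIVE:
            return sum(payload) & 0xFF          # handled in the ASIC itself
        # Indicate over the sideband link that the FPGA should fulfill this request.
        return self.sideband.request(function, payload)

asic = Asic(SidebandLink(Fpga()))
print(asic.handle("checksum", b"abc"))
print(asic.handle("compress", b"abcdefgh"))
```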
Abstract:
Examples described herein relate to a switch device for a rack of two or more physical servers, wherein the switch device is coupled to the two or more physical servers and the switch device performs packet protocol processing termination for received packets and provides payload data from the received packets, without a received packet header, to a destination buffer of a destination physical server in the rack. In some examples, the switch device comprises at least one central processing unit, and the at least one central processing unit is to execute packet processing operations on the received packets. In some examples, a physical server executes at least one virtualized execution environment (VEE) and the at least one central processing unit executes a VEE for packet processing of packets whose data is to be accessed by the physical server that executes the VEE.
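A simplified sketch of the termination step, with an assumed fixed-size header format: the switch parses and strips the header on its own CPU and writes only the payload into the destination server's buffer.

```python
# Illustrative only: the header layout, buffer model, and routing rule are assumptions.

HEADER_LEN = 20  # simplified fixed-size header for illustration

def parse_header(packet):
    dst_server = packet[0]          # pretend the first byte selects the destination server
    return dst_server, packet[HEADER_LEN:]

class RackSwitch:
    def __init__(self):
        self.server_buffers = {}    # server_id -> bytearray destination buffer
        self.write_offsets = {}

    def register_server(self, server_id, buffer):
        self.server_buffers[server_id] = buffer
        self.write_offsets[server_id] = 0

    def on_packet(self, packet):
        # Protocol processing terminates here, on the switch CPU (or a VEE it runs);
        # the server only ever sees payload bytes.
        dst, payload = parse_header(packet)
        off = self.write_offsets[dst]
        self.server_buffers[dst][off:off + len(payload)] = payload
        self.write_offsets[dst] += len(payload)

switch = RackSwitch()
buf = bytearray(64)
switch.register_server(1, buf)
switch.on_packet(bytes([1]) + bytes(HEADER_LEN - 1) + b"payload-bytes")
print(bytes(buf[:13]))  # -> b'payload-bytes'
```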
Abstract:
Methods and Apparatus for Multi-Stage VM Virtual Network Function and Virtual Service Function Chain Acceleration for NFV and needs-based hardware acceleration. Compute platforms hosting virtualized environments, including virtual machines (VMs) running service applications performing network function virtualization (NFV), employ Field Programmable Gate Arrays (FPGAs) to provide a hardware-based fast path for performing VM-to-VM and NFV-to-NFV transfers. The FPGAs, along with associated configuration data, are also configured to support dynamic assignment and performance of hardware acceleration to offload processing tasks from processors in virtualized environments, such as cloud data centers and the like.
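The fast-path/slow-path choice can be sketched as below, with invented VM names and hop tables: hops the FPGA has been configured to accelerate take the hardware path, and all other VM-to-VM transfers fall back to the software vSwitch.

```python
# Illustrative sketch only; the chain, hop table, and transfer functions are assumptions.

class FpgaFastPath:
    def __init__(self, accelerated_hops):
        self.accelerated_hops = set(accelerated_hops)   # (src_vm, dst_vm) pairs
    def can_handle(self, src_vm, dst_vm):
        return (src_vm, dst_vm) in self.accelerated_hops
    def transfer(self, src_vm, dst_vm, packet):
        return f"FPGA fast path: {src_vm}->{dst_vm} ({len(packet)} bytes)"

def software_transfer(src_vm, dst_vm, packet):
    return f"software vSwitch: {src_vm}->{dst_vm} ({len(packet)} bytes)"

def run_service_chain(chain, packet, fpga):
    log = []
    for src_vm, dst_vm in zip(chain, chain[1:]):
        if fpga.can_handle(src_vm, dst_vm):
            log.append(fpga.transfer(src_vm, dst_vm, packet))
        else:
            log.append(software_transfer(src_vm, dst_vm, packet))
    return log

fpga = FpgaFastPath(accelerated_hops=[("firewall", "nat")])
for hop in run_service_chain(["firewall", "nat", "dpi"], b"x" * 128, fpga):
    print(hop)
```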
Abstract:
Devices and techniques for out-of-band platform tuning and configuration are described herein. A device can include a telemetry interface to a telemetry collection system and a network interface to network adapter hardware. The device can receive platform telemetry metrics from the telemetry collection system and network adapter silicon hardware statistics over the network interface, which together form the collected statistics. The device can apply a heuristic algorithm using the collected statistics to determine processing core workloads generated by operation of a plurality of software systems communicatively coupled to the device. The device can provide a reconfiguration message to instruct at least one software system to switch operations to a different processing core, responsive to detecting an overload state on at least one processing core, based on the processing core workloads. Other embodiments are also described.
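A toy version of such a heuristic, with made-up thresholds and message formats, might combine core utilization with NIC queue placement and emit a move request for a software system on an overloaded core:

```python
# Illustrative only: the threshold, weighting, and message format are assumptions.

OVERLOAD_THRESHOLD = 0.85

def estimate_core_workloads(platform_metrics, nic_stats):
    # Invented heuristic: core load = CPU utilization plus a small penalty per
    # NIC queue steered to that core.
    loads = {}
    for core, util in platform_metrics["core_utilization"].items():
        loads[core] = util + 0.05 * nic_stats["queues_per_core"].get(core, 0)
    return loads

def reconfiguration_messages(loads, software_by_core):
    messages = []
    for core, load in loads.items():
        if load > OVERLOAD_THRESHOLD and software_by_core.get(core):
            target = min(loads, key=loads.get)          # least-loaded core
            messages.append({"move": software_by_core[core][0], "to_core": target})
    return messages

loads = estimate_core_workloads(
    {"core_utilization": {0: 0.9, 1: 0.2}},
    {"queues_per_core": {0: 4}},
)
print(reconfiguration_messages(loads, {0: ["vswitchd"], 1: ["telemetryd"]}))
# -> [{'move': 'vswitchd', 'to_core': 1}]
```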
Abstract:
In one embodiment, a system comprises platform logic comprising a plurality of processor cores and resource allocation logic. The resource allocation logic may receive a processing request and direct the processing request to a processor core of the plurality of processor cores, wherein the processor core is selected based at least in part on telemetry data associated with the platform logic, the telemetry data indicating a topology of at least a portion of the platform logic.
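For illustration, a sketch of topology-aware core selection under assumed telemetry fields: prefer a lightly loaded core on the same socket as the device that originated the request, otherwise fall back to the least-loaded core overall.

```python
# Illustrative only: the telemetry schema and selection policy are assumptions.

def select_core(request, telemetry):
    topology = telemetry["topology"]          # cores -> socket, devices -> socket
    loads = telemetry["core_load"]            # core -> utilization in [0, 1]
    preferred_socket = topology["devices"][request["source_device"]]
    # Prefer cores on the same socket as the device that produced the request;
    # otherwise consider every core, and pick the least loaded candidate.
    local = [c for c, s in topology["cores"].items() if s == preferred_socket]
    candidates = local or list(loads)
    return min(candidates, key=lambda core: loads[core])

telemetry = {
    "topology": {"cores": {0: 0, 1: 0, 2: 1, 3: 1}, "devices": {"nic0": 1}},
    "core_load": {0: 0.1, 1: 0.2, 2: 0.7, 3: 0.3},
}
print(select_core({"source_device": "nic0"}, telemetry))   # -> 3
```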
Abstract:
An apparatus and method for using conductive adhesive fibers as a data interface are disclosed. A particular embodiment includes: a first array of conductive adhesive fiber fastener pads configured for attachment to a first item; a second array of conductive adhesive fiber fastener pads configured for attachment to a second item, each pad of the first and second array being fabricated with a hook or loop removable fastener, each removable fastener being electrically conductive, the first array of pads being arranged to align with the second array of pads to create a plurality of independent electrical connections when the first item is removably attached to the second item, the plurality of independent electrical connections establishing a data interface connection between the first item and the second item.
Abstract:
Examples described herein relate to a network interface receiving a firmware update from one or more packets. In some examples, the one or more packets indicate a start of a firmware update. In some examples, the network interface can also authenticate the start-of-firmware-update indication and, based on authentication of the firmware update, permit a firmware update of a device. In some examples, the device is one or more of: Board Management Controller (BMC), central processing unit (CPU), network interface, Ethernet controller, storage controller, memory controller, display engine, graphics processing unit (GPU), accelerator device, or peripheral device. In some examples, an end of firmware update indicator is received in the one or more packets. In some examples, communications are maintained through a port during a firmware change.
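A minimal sketch of such an update flow, with assumed packet and credential formats: authenticate the start indication, collect image chunks, finish on the end indication, and keep treating all other packets as ordinary traffic.

```python
# Illustrative only: the message types, HMAC-based check, and key are assumptions.
import hmac, hashlib

SHARED_KEY = b"provisioned-secret"            # placeholder for real credentials

class NicFirmwareUpdater:
    def __init__(self):
        self.in_progress = False
        self.image = bytearray()

    def on_packet(self, packet):
        kind = packet["type"]
        if kind == "fw_start":
            # Authenticate the start indication before permitting the update.
            expected = hmac.new(SHARED_KEY, packet["nonce"], hashlib.sha256).digest()
            if hmac.compare_digest(expected, packet["tag"]):
                self.in_progress = True
                self.image.clear()
        elif kind == "fw_chunk" and self.in_progress:
            self.image += packet["data"]
        elif kind == "fw_end" and self.in_progress:
            self.in_progress = False
            return f"apply {len(self.image)}-byte image to target device"
        else:
            return "forward as normal traffic"   # port stays in service during the update

nic = NicFirmwareUpdater()
nonce = b"n0"
tag = hmac.new(SHARED_KEY, nonce, hashlib.sha256).digest()
nic.on_packet({"type": "fw_start", "nonce": nonce, "tag": tag})
nic.on_packet({"type": "fw_chunk", "data": b"\x00" * 16})
print(nic.on_packet({"type": "fw_end"}))
```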