NETWORK FUNCTION VIRTUALIZATION ARCHITECTURE WITH DEVICE ISOLATION

    Publication No.: US20190042506A1

    Publication Date: 2019-02-07

    Application No.: US16027776

    Filing Date: 2018-07-05

    Abstract: A network system includes a central processing unit and a peripheral device in electrical communication with the central processing unit. The peripheral device has at least one power input and a data input. The network system also includes an out of band controller in electrical communication with the central processing unit, the peripheral device, and an external management interface. Responsive to an identified threat, the out of band controller is configured to disable the at least one power input and the data input to the peripheral device, where the disablement indicates to the central processing unit that a hot plug event has occurred with respect to the peripheral device. The out of band controller is also configured to enable auxiliary power to the peripheral device such that the out of band controller remains in communication with the peripheral device during remediation of the identified threat.
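
    As a rough illustration of the isolation flow described above, the sketch below shows how an out-of-band controller might gate the peripheral's main power and data inputs while keeping an auxiliary rail alive for remediation. All type names and register helpers (oob_dev_t, write_reg, and so on) are hypothetical placeholders, not the patent's actual interfaces.

```c
/*
 * Sketch of the out-of-band (OOB) controller's threat response. The types
 * and register helpers below are hypothetical placeholders, not the
 * patent's interfaces or any real BMC API.
 */
#include <stdint.h>

typedef struct {
    uint32_t main_power_ctl;   /* gates the peripheral's main power rail    */
    uint32_t data_lane_ctl;    /* gates the data input (e.g., PCIe lanes)   */
    uint32_t aux_power_ctl;    /* auxiliary rail reachable only by the OOB  */
} oob_dev_t;

/* Hypothetical memory-mapped register write. */
static void write_reg(volatile uint32_t *reg, uint32_t val)
{
    *reg = val;
}

/* Invoked when the external management interface reports a threat. */
void oob_isolate_peripheral(oob_dev_t *dev)
{
    /* 1. Disable main power and the data input. To the CPU this looks
     *    like a hot-plug (surprise removal) event for the device.      */
    write_reg(&dev->main_power_ctl, 0);
    write_reg(&dev->data_lane_ctl, 0);

    /* 2. Keep auxiliary power enabled so the OOB controller can still
     *    reach the peripheral while the threat is remediated.          */
    write_reg(&dev->aux_power_ctl, 1);
}
```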

    TECHNOLOGIES FOR DEMOTING CACHE LINES TO SHARED CACHE

    Publication No.: US20190042419A1

    Publication Date: 2019-02-07

    Application No.: US16024773

    Filing Date: 2018-06-30

    Abstract: Technologies for demoting cache lines to a shared cache include a compute device with at least one processor having multiple cores, a cache memory with a core-local cache and a shared cache, and a cache line demote device. A processor core of a processor of the compute device is configured to retrieve at least a portion of data of a received network packet and move the data into one or more core-local cache lines of the core-local cache. The processor core is further configured to perform a processing operation on the data and transmit a cache line demotion command to the cache line demote device subsequent to having completed the processing operation. The cache line demote device is configured to perform a cache line demotion operation to demote the data from the core-local cache lines to shared cache lines of the shared cache. Other embodiments are described herein.
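
    The core-side half of this flow resembles the x86 CLDEMOTE hint, exposed in C as the _mm_cldemote() intrinsic (compile with -mcldemote on GCC or Clang). The sketch below assumes that intrinsic and a 64-byte cache line; the patent's dedicated cache line demote device is abstracted behind the hint.

```c
/*
 * Producer-side pattern: process packet data in the core-local cache, then
 * demote the touched lines toward the shared cache (LLC). CLDEMOTE is only
 * a hint, encoded in the NOP hint space, so it is ignored on CPUs that do
 * not support it. Buffer layout and the 64-byte line size are assumptions.
 */
#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

#define CACHE_LINE 64u

void process_then_demote(uint8_t *pkt, size_t len)
{
    /* Stand-in for the "processing operation": a trivial in-place transform. */
    for (size_t i = 0; i < len; i++)
        pkt[i] ^= 0x5a;

    /* After processing completes, demote every touched cache line so a
     * consumer on another core can find the data in the shared cache.   */
    for (size_t off = 0; off < len; off += CACHE_LINE)
        _mm_cldemote(pkt + off);
}
```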

    Technologies for demoting cache lines to shared cache

    Publication No.: US10657056B2

    Publication Date: 2020-05-19

    Application No.: US16024773

    Filing Date: 2018-06-30

    Abstract: Technologies for demoting cache lines to a shared cache include a compute device with at least one processor having multiple cores, a cache memory with a core-local cache and a shared cache, and a cache line demote device. A processor core of a processor of the compute device is configured to retrieve at least a portion of data of a received network packet and move the data into one or more core-local cache lines of the core-local cache. The processor core is further configured to perform a processing operation on the data and transmit a cache line demotion command to the cache line demote device subsequent to having completed the processing operation. The cache line demote device is configured to perform a cache line demotion operation to demote the data from the core-local cache lines to shared cache lines of the shared cache. Other embodiments are described herein.
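
    This granted version covers the same mechanism; one way to picture the payoff is a core-to-core pipeline in which the producer demotes freshly written lines so a consumer on another core finds them in the shared LLC rather than pulling them from the producer's private cache. The ring below is an illustrative sketch, not part of the patent, and omits full/empty checks for brevity.

```c
/*
 * Illustrative core-to-core handoff built on the same idea: the producer
 * fills a slot, demotes it toward the shared cache, then publishes the
 * index for a consumer running on a different core. The ring structure
 * and names are hypothetical; full/empty checks are omitted for brevity.
 */
#include <immintrin.h>
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

#define RING_SLOTS 256u

struct slot { uint8_t data[64]; };            /* one cache line per slot  */

struct ring {
    struct slot slots[RING_SLOTS];
    _Atomic uint32_t head;                    /* producer publish index   */
};

/* Producer core: write the slot, demote it toward the LLC, then publish. */
void ring_produce(struct ring *r, const struct slot *s)
{
    uint32_t h = atomic_load_explicit(&r->head, memory_order_relaxed);
    memcpy(&r->slots[h % RING_SLOTS], s, sizeof *s);
    _mm_cldemote(&r->slots[h % RING_SLOTS]);
    atomic_store_explicit(&r->head, h + 1, memory_order_release);
}
```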

    POLICY-BASED NETWORK SERVICE FINGERPRINTING

    Publication No.: US20190104022A1

    Publication Date: 2019-04-04

    Application No.: US15721373

    Filing Date: 2017-09-29

    Abstract: A data center orchestrator, including: a hardware platform; a host fabric interface to communicatively couple the orchestrator to a network; an orchestrator engine to provide a data center orchestration function; and a data structure, including a network function virtualization definition (NFVD) instance, the NFVD instance including a definition for instantiating a virtual network function (VNF) on a host platform, including a telemetry fingerprint policy description (TFPD) for the VNF, wherein the TFPD includes information to collect telemetry data selected from a set of available telemetry data for the host platform.
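
    The abstract essentially describes a data-structure relationship: an NFV definition that carries, alongside the usual instantiation parameters, a telemetry fingerprint policy naming which of the host's available telemetry sources to collect. A minimal sketch of that relationship follows; every field, enum value, and the example VNF are illustrative assumptions, not the patent's schema.

```c
/*
 * Data-structure sketch: an NFVD instance that pairs VNF instantiation
 * parameters with a telemetry fingerprint policy description (TFPD)
 * selecting telemetry from what the host platform makes available.
 * Every field and value here is illustrative, not the patent's schema.
 */
#include <stddef.h>

/* Example set of telemetry sources a host platform might expose. */
enum telemetry_source {
    TELEM_CPU_UTIL,
    TELEM_LLC_OCCUPANCY,
    TELEM_MEM_BANDWIDTH,
    TELEM_NIC_RX_PPS,
};

/* TFPD: which telemetry to collect for this VNF, and how often. */
struct tfpd {
    const enum telemetry_source *sources;   /* subset of the host's sources */
    size_t   n_sources;
    unsigned sample_interval_ms;
};

/* NFVD instance: what the orchestrator needs to instantiate the VNF. */
struct nfvd_instance {
    const char *vnf_image;       /* VNF image/package reference              */
    unsigned    vcpus;           /* example sizing parameters                */
    unsigned    memory_mb;
    struct tfpd fingerprint;     /* telemetry fingerprint policy description */
};

/* Example: a firewall VNF whose fingerprint tracks CPU and NIC telemetry. */
const enum telemetry_source fw_sources[] = { TELEM_CPU_UTIL, TELEM_NIC_RX_PPS };
const struct nfvd_instance firewall_nfvd = {
    .vnf_image   = "vnf-firewall:1.0",        /* hypothetical image name     */
    .vcpus       = 4,
    .memory_mb   = 8192,
    .fingerprint = { fw_sources, 2, 1000 },
};
```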

    TECHNOLOGIES FOR REORDERING NETWORK PACKETS ON EGRESS

    Publication No.: US20190044879A1

    Publication Date: 2019-02-07

    Application No.: US16023743

    Filing Date: 2018-06-29

    Abstract: Technologies for reordering network packets on egress include a network interface controller (NIC) configured to associate a received network packet with a descriptor, generate a sequence identifier for the received network packet, and insert the generated sequence identifier into the associated descriptor. The NIC is further configured to determine whether the received network packet is to be transmitted from a compute device associated with the NIC to another compute device and, in response to a determination that it is, insert the descriptor into a transmission queue of descriptors. Additionally, the NIC is configured to transmit the network packet based on the position of the descriptor in the transmission queue, which is determined by the generated sequence identifier. Other embodiments are described herein.
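
    The mechanism amounts to a sequence-numbered transmission queue that is drained strictly in order, so packets processed out of order by different cores still leave the NIC in their arrival order. The sketch below follows that reading; the descriptor layout, queue depth, and helper names are assumptions, not the NIC's real format.

```c
/*
 * Egress reordering sketch: descriptors carry the sequence identifier
 * assigned at receive time, and the transmission queue is drained strictly
 * in sequence order, so packets processed out of order still egress in
 * order. Descriptor layout, queue depth, and names are assumptions.
 */
#include <stdbool.h>
#include <stdint.h>

#define TXQ_DEPTH 1024u

struct tx_desc {
    uint32_t seq;        /* sequence identifier generated on receive    */
    void    *pkt;        /* packet buffer to transmit                   */
    bool     ready;      /* processing finished, eligible for egress    */
};

struct tx_queue {
    struct tx_desc slots[TXQ_DEPTH];  /* indexed by seq % TXQ_DEPTH     */
    uint32_t next_seq;                /* next sequence number to egress */
};

/* A core finished processing a packet: park its descriptor at the slot
 * dictated by the sequence identifier embedded in that descriptor.     */
void txq_enqueue(struct tx_queue *q, uint32_t seq, void *pkt)
{
    struct tx_desc *d = &q->slots[seq % TXQ_DEPTH];
    d->seq   = seq;
    d->pkt   = pkt;
    d->ready = true;
}

/* Drain in sequence order, stopping at the first gap so no packet leaves
 * ahead of an earlier, still-in-flight sequence number.                 */
void txq_drain(struct tx_queue *q, void (*transmit)(void *pkt))
{
    struct tx_desc *d = &q->slots[q->next_seq % TXQ_DEPTH];
    while (d->ready && d->seq == q->next_seq) {
        transmit(d->pkt);
        d->ready = false;
        q->next_seq++;
        d = &q->slots[q->next_seq % TXQ_DEPTH];
    }
}
```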

    TECHNOLOGIES FOR POWER-AWARE SCHEDULING FOR NETWORK PACKET PROCESSING

    Publication No.: US20190042310A1

    Publication Date: 2019-02-07

    Application No.: US15951650

    Filing Date: 2018-04-12

    Abstract: Technologies for power-aware scheduling include a computing device that receives network packets. The computing device classifies the network packets by priority level and then assigns each network packet to a performance group bin. The packets are assigned based on priority level and other performance criteria. The computing device schedules the network packets assigned to each performance group for processing by a processing engine such as a processor core. Network packets assigned to performance groups having a high priority level are scheduled for processing by processing engines with a high performance level. The computing device may select performance levels for processing engines based on processing workload of the network packets. The computing device may control the performance level of the processing engines, for example by controlling the frequency of processor cores. The processing workload may include packet encryption. Other embodiments are described and claimed.
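
    In outline, the scheduler classifies each packet, maps it to a performance group bin, and matches that bin to a processing engine running at an appropriate performance level. The sketch below captures that binning step; the thresholds, group names, and the two placeholder hooks are illustrative assumptions, not the claimed implementation.

```c
/*
 * Binning sketch: classify a packet by priority and expected processing
 * cost (e.g., whether it needs encryption), map it to a performance group,
 * and hand it to a core running at a matching performance level. Thresholds,
 * group names, and the two placeholder hooks are illustrative assumptions.
 */
#include <stdbool.h>
#include <stdint.h>

enum perf_group { PERF_LOW, PERF_MID, PERF_HIGH };

struct pkt_meta {
    uint8_t priority;      /* e.g., derived from DSCP/VLAN-PCP classification */
    bool    needs_crypto;  /* encryption adds processing workload             */
};

/* Map priority plus workload hints onto a performance group bin. */
enum perf_group classify_packet(const struct pkt_meta *m)
{
    if (m->priority >= 6 || m->needs_crypto)
        return PERF_HIGH;       /* latency-critical or heavy traffic */
    if (m->priority >= 3)
        return PERF_MID;
    return PERF_LOW;            /* best-effort traffic               */
}

/* Placeholder hooks: a real implementation would push to a per-group queue
 * and program core frequency (e.g., P-states) for that group's engines.   */
static void enqueue_to_group_core(enum perf_group g, void *pkt) { (void)g; (void)pkt; }
static void set_core_frequency_for_group(enum perf_group g)     { (void)g; }

void schedule_packet(void *pkt, const struct pkt_meta *m)
{
    enum perf_group g = classify_packet(m);
    set_core_frequency_for_group(g);
    enqueue_to_group_core(g, pkt);
}
```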

    MULTI-TIMESCALE POWER CONTROL TECHNOLOGIES

    Publication No.: US20220326757A1

    Publication Date: 2022-10-13

    Application No.: US17853442

    Filing Date: 2022-06-29

    Abstract: The present disclosure relates to power control mechanisms for workload processing systems, in particular multi-timescale power control technologies that can be used to reduce the overhead of workload processing systems. The disclosed power control mechanisms operate on multiple timescales, including a slow timescale and a fast timescale. Separate control loops (or governors) are used for the slow and fast timescales, where each control loop includes its own trigger mechanisms and configurable operational policies. The operational policies for the slow timescale control loop can be trained separately using various machine learning techniques, while the operational policies for the fast timescale control loop can be simple, reactive heuristics.
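
    The structure can be pictured as two governors with separate periods and policies: a slow loop that refreshes an operational policy (potentially from a trained model) and a fast loop that reacts to instantaneous conditions within that policy. The sketch below is a minimal illustration of that split; the policy fields, timescales, and step sizes are assumptions, not values from the disclosure.

```c
/*
 * Two-timescale sketch: a slow governor refreshes an operational policy
 * (reduced here to a power budget plus thresholds), and a fast governor
 * applies a reactive heuristic within that policy. The policy fields,
 * periods, and step sizes are illustrative assumptions.
 */
#include <stdint.h>

struct power_policy {
    uint32_t power_budget_mw;   /* set by the slow-timescale loop         */
    uint32_t util_high_pct;     /* fast-loop reaction thresholds          */
    uint32_t util_low_pct;
};

/* Slow timescale (e.g., seconds or longer): refresh the policy. In the
 * disclosure this is where a separately trained model could plug in; the
 * values below are fixed placeholders.                                   */
void slow_governor_tick(struct power_policy *p)
{
    p->power_budget_mw = 15000;
    p->util_high_pct   = 80;
    p->util_low_pct    = 20;
}

/* Fast timescale (e.g., microseconds to milliseconds): simple reactive
 * heuristic that returns the next core frequency, honoring the budget.   */
uint32_t fast_governor_tick(const struct power_policy *p, uint32_t util_pct,
                            uint32_t cur_power_mw, uint32_t cur_freq_khz)
{
    if (cur_power_mw > p->power_budget_mw && cur_freq_khz > 800000)
        return cur_freq_khz - 100000;   /* the slow loop's budget wins     */
    if (util_pct > p->util_high_pct)
        return cur_freq_khz + 100000;   /* step up 100 MHz (example step)  */
    if (util_pct < p->util_low_pct && cur_freq_khz > 800000)
        return cur_freq_khz - 100000;   /* step down 100 MHz (example)     */
    return cur_freq_khz;
}
```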
