-
Publication Number: US12020068B2
Publication Date: 2024-06-25
Application Number: US17022332
Application Date: 2020-09-16
Applicant: Intel Corporation
Inventor: Chris MacNamara , Amruta Misra , John Browne
IPC: G06F9/48 , G06F9/455 , G06F12/0875 , G06F13/20
CPC classification number: G06F9/4893 , G06F9/45558 , G06F12/0875 , G06F13/20 , G06F2212/1016
Abstract: Methods to automatically prioritize input/output (I/O) for Network Function Virtualization (NFV) workloads under platform overload, and associated apparatus and mechanisms. During lab or runtime workload operations, various platform telemetry data are collected and analyzed to determine whether the current workload is uncore-sensitive, that is, sensitive to operations involving the uncore circuitry such as I/O operations, memory bandwidth utilization, LLC utilization, network traffic, and core-to-core traffic. For uncore-sensitive workloads, upon detection of a platform overload condition such as a thermal load approaching a TDP limit, the uncore circuitry is prioritized over the core circuitry such that the frequency of the core is reduced first. A closed-loop feedback mechanism is used to adjust the frequencies of the core and uncore under various workload conditions. The mechanism enables I/O throughput to be maintained for NFV workloads while reducing the processor thermal load.
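For illustration only, a minimal Python sketch of the closed-loop core/uncore prioritization described above. The TDP value, frequency steps, and the control_step helper are assumptions, not the patented implementation; a real controller would read package power (e.g. via RAPL) and program core and uncore frequencies through platform interfaces.

    # Minimal sketch of the described closed-loop control: when package power
    # approaches the TDP limit and the workload is uncore-sensitive, core
    # frequency is reduced first so that I/O (uncore) throughput is preserved.
    # All helpers and numeric values below are illustrative assumptions.

    TDP_WATTS = 185.0      # assumed package TDP limit
    HEADROOM = 0.95        # start throttling at 95% of TDP
    STEP_MHZ = 100         # assumed frequency adjustment granularity

    def control_step(pkg_power_w, core_mhz, uncore_mhz, uncore_sensitive,
                     core_min=800, core_max=3000, uncore_min=1200, uncore_max=2400):
        """One iteration of the feedback loop; returns new (core, uncore) targets."""
        if pkg_power_w > TDP_WATTS * HEADROOM:
            if uncore_sensitive and core_mhz > core_min:
                core_mhz -= STEP_MHZ        # prioritize uncore: cut core frequency first
            elif uncore_mhz > uncore_min:
                uncore_mhz -= STEP_MHZ      # otherwise (or once core is floored) cut uncore
        else:
            # Thermal headroom available: restore frequencies gradually.
            if uncore_sensitive and uncore_mhz < uncore_max:
                uncore_mhz += STEP_MHZ
            elif core_mhz < core_max:
                core_mhz += STEP_MHZ
        return core_mhz, uncore_mhz

    # Overloaded, uncore-sensitive NFV workload: the core is throttled first.
    print(control_step(182.0, 2800, 2400, uncore_sensitive=True))   # (2700, 2400)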
-
Publication Number: US12001932B2
Publication Date: 2024-06-04
Application Number: US16939237
Application Date: 2020-07-27
Applicant: Intel Corporation
Inventor: Zhu Zhou , Xiaotian Gao , Chris MacNamara , Stephen Doyle , Atul Kwatra
IPC: G06N3/006 , G06F1/3287 , G06N5/04 , G06N20/00
CPC classification number: G06N3/006 , G06F1/3287 , G06N5/04 , G06N20/00
Abstract: Methods and apparatus for a hierarchical reinforcement learning (RL) algorithm for network function virtualization (NFV) server power management. A first RL model at a first level of the RL hierarchy is trained by adjusting a frequency of a processor core while performing a workload, to obtain a first trained RL model. The first trained RL model is operated in an inference mode while training a second RL model at a second level of the RL hierarchy by adjusting a frequency of the core and a frequency of processor circuitry external to the core, to obtain a second trained RL model. Training may be performed online or offline. The first and second RL models are operated in inference modes during online operations to adjust the frequency of the core and the frequency of the circuitry external to the core while executing software on a plurality of cores to perform a workload, such as an NFV workload.
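A toy sketch of the two-level training order described in this abstract, using a stateless epsilon-greedy bandit as a crude stand-in for the actual RL models; the reward function, action set, and frequency ranges are invented for illustration and are not the claimed algorithm.

    import random

    # Toy sketch of the two-level order: an inner agent learns a core-frequency
    # action first; it then runs in inference mode while an outer agent learns an
    # uncore-frequency action on top of it. The reward below is a made-up proxy
    # for measured performance-per-watt, not real telemetry.

    ACTIONS = (-100, 0, 100)          # assumed frequency deltas in MHz

    class BanditAgent:
        """Minimal stateless epsilon-greedy learner (a stand-in, not the claimed RL model)."""
        def __init__(self, actions, epsilon=0.1, lr=0.2):
            self.q = {a: 0.0 for a in actions}
            self.epsilon, self.lr = epsilon, lr
        def act(self, explore=True):
            if explore and random.random() < self.epsilon:
                return random.choice(list(self.q))
            return max(self.q, key=self.q.get)
        def learn(self, action, reward):
            self.q[action] += self.lr * (reward - self.q[action])

    def reward(core_mhz, uncore_mhz):
        # Hypothetical objective: stay near an assumed perf-per-watt sweet spot.
        return -abs(core_mhz - 2200) / 1000 - abs(uncore_mhz - 1800) / 1000

    core_agent, uncore_agent = BanditAgent(ACTIONS), BanditAgent(ACTIONS)
    core_mhz, uncore_mhz = 2600, 2200

    # Level 1: train the core-frequency agent on its own.
    for _ in range(500):
        a = core_agent.act()
        core_mhz = min(3000, max(1000, core_mhz + a))
        core_agent.learn(a, reward(core_mhz, uncore_mhz))

    # Level 2: the core agent runs in inference mode while the uncore agent trains.
    for _ in range(500):
        core_mhz = min(3000, max(1000, core_mhz + core_agent.act(explore=False)))
        a = uncore_agent.act()
        uncore_mhz = min(2400, max(1200, uncore_mhz + a))
        uncore_agent.learn(a, reward(core_mhz, uncore_mhz))

    print(core_mhz, uncore_mhz)       # end state of the two-level loop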
-
Publication Number: US20230315143A1
Publication Date: 2023-10-05
Application Number: US18329492
Application Date: 2023-06-05
Applicant: Intel Corporation
Inventor: Vasudevan Srinivasan , Krishnakanth V. Sistla , Corey D. Gough , Ian M. Steiner , Nikhil Gupta , Vivek Garg , Ankush Varma , Sujal A. Vora , David P. Lerner , Joseph M. Sullivan , Nagasubramanian Gurumoorthy , William J. Bowhill , Venkatesh Ramamurthy , Chris MacNamara , John J. Browne , Ripan Das
IPC: G06F1/08 , G06F1/3203 , G06F9/30 , G06F9/455 , G06F1/324
CPC classification number: G06F1/08 , G06F1/3203 , G06F9/30101 , G06F9/45558 , G06F1/324 , G06F2009/45591
Abstract: A processing device includes a plurality of processing cores, a control register, associated with a first processing core of the plurality of processing cores, to store a first base clock frequency value at which the first processing core is to run, and a power management circuit to receive a base clock frequency request comprising a second base clock frequency value, store the second base clock frequency value in the control register to cause the first processing core to run at the second base clock frequency value, and expose the second base clock frequency value on a hardware interface associated with the power management circuit.
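A behavioural sketch (assumptions only, not real registers or MSRs) of the request flow described above: the power-management block stores the requested base clock value in the per-core control register and exposes it on its hardware interface.

    from dataclasses import dataclass, field

    # Behavioural model of the described flow: a per-core control register holds a
    # base clock frequency; the power-management block services a request by
    # storing the new value in that register and exposing it on its hardware
    # interface. Register and field names are illustrative, not real MSRs.

    @dataclass
    class Core:
        core_id: int
        base_clock_mhz: int = 2000                    # control register contents

    @dataclass
    class PowerManagementCircuit:
        cores: dict
        hw_interface: dict = field(default_factory=dict)   # stand-in for an exposed register space

        def handle_base_clock_request(self, core_id: int, requested_mhz: int) -> None:
            core = self.cores[core_id]
            core.base_clock_mhz = requested_mhz       # store the second base clock value
            # Expose the granted value so other agents can read it.
            self.hw_interface[f"core{core_id}_base_clock_mhz"] = requested_mhz

    pmc = PowerManagementCircuit(cores={0: Core(0), 1: Core(1)})
    pmc.handle_base_clock_request(core_id=0, requested_mhz=2600)
    print(pmc.cores[0].base_clock_mhz, pmc.hw_interface)    # 2600 {'core0_base_clock_mhz': 2600}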
-
Publication Number: US11703933B2
Publication Date: 2023-07-18
Application Number: US16747202
Application Date: 2020-01-20
Applicant: Intel Corporation
Inventor: Liang Ma , Weigang Li , Madhusudana Raghupatruni , Hongjun Ni , Xuekun Hu , Changzheng Wei , Chris MacNamara , John J. Browne
CPC classification number: G06F1/324 , G06F9/544 , G06F21/53 , G06F21/606 , G06F2221/032
Abstract: Examples described herein provide for a first core to map a measurement of packet processing activity and operating parameters so that a second core can access the measurement of packet processing activity and potentially modify an operating parameter of the first core. The second core can modify operating parameters of the first core based on the measurement of packet processing activity. The first and second cores can be provisioned on start-up with a common key. The first and second cores can use the common key to encrypt or decrypt measurement of packet processing activity and operating parameters that are shared between the first and second cores. Accordingly, operating parameters of the first core can be modified by a different core while providing for secure modification of operating parameters.
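A rough sketch of the shared, key-protected telemetry exchange described above. The shared_region dict stands in for the mapped memory, and the hash-based keystream is only a placeholder for a real cipher; names, nonces, and values are hypothetical.

    import hashlib
    import json
    from itertools import count

    # Core 0 publishes encrypted packet-processing telemetry into a shared region;
    # core 1 decrypts it with the key provisioned at start-up and writes back an
    # encrypted operating-parameter update. The hash-based keystream is only a
    # placeholder for a real cipher, and the dict stands in for mapped memory.

    COMMON_KEY = b"provisioned-at-startup"            # assumed shared secret
    shared_region = {}

    def keystream(key: bytes, nonce: int, length: int) -> bytes:
        out = b""
        for counter in count():
            block = key + nonce.to_bytes(8, "little") + counter.to_bytes(8, "little")
            out += hashlib.sha256(block).digest()
            if len(out) >= length:
                return out[:length]

    def seal(key: bytes, nonce: int, obj) -> bytes:
        data = json.dumps(obj).encode()
        return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

    def unseal(key: bytes, nonce: int, blob: bytes):
        return json.loads(bytes(a ^ b for a, b in zip(blob, keystream(key, nonce, len(blob)))))

    # Core 0: publish telemetry.
    shared_region["telemetry"] = seal(COMMON_KEY, 1, {"pps": 4_200_000, "busy_pct": 91})
    # Core 1: read the telemetry and publish a modified operating parameter.
    telemetry = unseal(COMMON_KEY, 1, shared_region["telemetry"])
    new_freq = 2400 if telemetry["busy_pct"] > 85 else 1800
    shared_region["params"] = seal(COMMON_KEY, 2, {"core0_freq_mhz": new_freq})
    # Core 0: apply the update.
    print(unseal(COMMON_KEY, 2, shared_region["params"]))    # {'core0_freq_mhz': 2400}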
-
Publication Number: US11665062B2
Publication Date: 2023-05-30
Application Number: US17464465
Application Date: 2021-09-01
Applicant: Intel Corporation
Inventor: Damien Power , Alan Carey , Chris MacNamara
IPC: H04L41/0893 , H04L67/51 , H04L67/52 , H04L47/70 , H04L67/61 , H04L41/5009 , H04L43/0882 , H04L47/762 , H04L47/80 , H04L67/1021 , H04W28/02 , H04W40/20
CPC classification number: H04L41/0893 , H04L41/5009 , H04L43/0882 , H04L47/762 , H04L47/805 , H04L47/822 , H04L47/829 , H04L67/1021 , H04L67/51 , H04L67/52 , H04L67/61 , H04W28/0226 , H04W40/20
Abstract: Methods, systems, and computer programs are presented for managing resources to deliver a network service in a distributed configuration. A method includes an operation for identifying resources for delivering a network service, the resources being classified by geographic area. Further, the method includes operations for selecting service agents to configure the identified resources, each service agent to manage service pools for delivering the network service across at least one geographic area, the service agents being selected to provide configurability for the service pools. The method further includes operations for sending configuration rules, to the service agents, configured to establish service pools for delivering the network service across the geographic areas. Service traffic information is collected from the service agents, and the resources are adjusted based on the collected service traffic information. Updated respective configuration rules are sent to each determined service agent based on the adjusting.
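An illustrative sketch of the configure/collect/adjust loop described in this abstract. The ServiceAgent class, region names, and the one-spare-instance sizing rule are assumptions made for the example, not the claimed method.

    # A coordinator sends configuration rules to per-region service agents,
    # collects traffic reports back from them, and resizes each region's service
    # pool accordingly. Class names, regions, and the sizing rule are invented.

    class ServiceAgent:
        def __init__(self, region, pool_size):
            self.region, self.pool_size = region, pool_size
            self.observed_rps = 0
        def apply_rules(self, rules):
            self.pool_size = rules["pool_size"]       # configuration rule from the coordinator
        def report_traffic(self):
            return {"region": self.region, "rps": self.observed_rps, "pool_size": self.pool_size}

    def adjust_pools(agents, rps_per_instance=1000):
        """Resize each regional pool to the observed load plus one spare instance."""
        for agent in agents:
            report = agent.report_traffic()
            target = max(1, report["rps"] // rps_per_instance + 1)
            agent.apply_rules({"pool_size": target})

    agents = [ServiceAgent("eu-west", 2), ServiceAgent("us-east", 2)]
    agents[0].observed_rps, agents[1].observed_rps = 5200, 700    # simulated traffic
    adjust_pools(agents)
    print([a.report_traffic() for a in agents])   # eu-west grows to 6 instances, us-east shrinks to 1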
-
Publication Number: US11144085B2
Publication Date: 2021-10-12
Application Number: US15632000
Application Date: 2017-06-23
Applicant: Intel Corporation
Inventor: Asma H. Al-Rawi , Federico Ardanaz , Jonathan M. Eastep , Dorit Shapira , Krishnakanth Sistla , Nikhil Gupta , Vasudevan Srinivasan , Chris MacNamara
Abstract: An apparatus is provided which comprises: a first component and a second component; a first circuitry to assign the first component to a first group of components, and to assign the second component to a second group of components; and a second circuitry to assign a first maximum frequency limit to the first group of components, and to assign a second maximum frequency limit to the second group of components, wherein the first component and the second component are to respectively operate in accordance with the first maximum frequency limit and the second maximum frequency limit.
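A minimal sketch of the grouping and per-group maximum frequency limits described above; group names, core identifiers, and frequency values are illustrative assumptions.

    # Components (here, cores) are assigned to groups, each group carries a
    # maximum frequency limit, and a component's requested frequency is clamped
    # to its group's limit. Group names and numbers are illustrative.

    group_of = {"core0": "latency_critical", "core1": "latency_critical", "core2": "best_effort"}
    max_freq_mhz = {"latency_critical": 3400, "best_effort": 2200}

    def effective_frequency(component: str, requested_mhz: int) -> int:
        """Clamp a requested frequency to the component's group limit."""
        return min(requested_mhz, max_freq_mhz[group_of[component]])

    print(effective_frequency("core0", 3600))   # 3400: capped by the latency_critical group
    print(effective_frequency("core2", 3600))   # 2200: capped by the best_effort group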
-
Publication Number: US20190044879A1
Publication Date: 2019-02-07
Application Number: US16023743
Application Date: 2018-06-29
Applicant: Intel Corporation
Inventor: Bruce Richardson , Andrew Cunningham , Alexander J. Leckey , Brendan Ryan , Patrick Fleming , Patrick Connor , David Hunt , Andrey Chilikin , Chris MacNamara
IPC: H04L12/863 , H04L12/935 , H04L12/861 , H04L12/801
Abstract: Technologies for reordering network packets on egress include a network interface controller (NIC) configured to associate a received network packet with a descriptor, generate a sequence identifier for the received network packet, and insert the generated sequence identifier into the associated descriptor. The NIC is further configured to determine whether the received network packet is to be transmitted from a compute device associated with the NIC to another compute device and, in response to a determination that the received network packet is to be transmitted to the other compute device, insert the descriptor into a transmission queue of descriptors. Additionally, the NIC is configured to transmit the network packet based on the position of the descriptor in the transmission queue of descriptors, which is determined by the generated sequence identifier. Other embodiments are described herein.
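A small Python sketch of sequence-identifier based egress reordering, with a heap standing in for the NIC's transmission queue of descriptors; the descriptor layout and TxQueue class are assumptions for illustration.

    import heapq

    # Each packet's descriptor carries a sequence identifier; descriptors sit in a
    # transmission queue and packets are emitted in sequence order even if they
    # were enqueued out of order. A heap stands in for the NIC queue.

    class TxQueue:
        def __init__(self):
            self._heap = []          # descriptors ordered by sequence identifier
            self._next_seq = 0       # next sequence identifier expected on the wire
        def enqueue(self, descriptor):
            heapq.heappush(self._heap, (descriptor["seq"], descriptor))
        def transmit_ready(self):
            """Emit every packet whose turn has come, in sequence order."""
            sent = []
            while self._heap and self._heap[0][0] == self._next_seq:
                _, descriptor = heapq.heappop(self._heap)
                sent.append(descriptor["payload"])
                self._next_seq += 1
            return sent

    q = TxQueue()
    for seq, payload in [(2, "pkt-C"), (0, "pkt-A"), (1, "pkt-B")]:
        q.enqueue({"seq": seq, "payload": payload})
    print(q.transmit_ready())        # ['pkt-A', 'pkt-B', 'pkt-C']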
-
Publication Number: US20190044860A1
Publication Date: 2019-02-07
Application Number: US16011103
Application Date: 2018-06-18
Applicant: Intel Corporation
Inventor: Chris MacNamara , John Browne , Tomasz Kantecki , Ciara Loftus , John Barry , Patrick Connor , Patrick Fleming
IPC: H04L12/801 , H04L12/861 , H04L12/841
Abstract: Technologies for providing adaptive polling of packet queues include a compute device. The compute device includes a network interface controller and a compute engine that includes a set of cores and a memory that includes a queue to store packets received by the network interface controller. The compute engine is configured to determine a predicted time period for the queue to receive packets without overflowing, execute a workload during that time period on a core that is assigned to periodically poll the queue for packets, and then poll the queue with the assigned core to remove the packets from it. Other embodiments are also described and claimed.
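A hedged sketch of the adaptive-polling idea: predict how long the queue can absorb packets without overflowing, run other work for that interval, then poll and drain. The queue capacity, arrival rate, and sleep-based simulation of "other work" are invented for the example.

    import time
    from collections import deque

    # From the recent arrival rate and the free queue capacity, predict how long
    # the queue can absorb packets without overflowing, run other work for that
    # interval, then poll and drain the queue. Capacity, rate, and the sleep-based
    # simulation are illustrative only.

    QUEUE_CAPACITY = 1024

    def predicted_safe_interval(arrival_rate_pps: float, queue_len: int) -> float:
        """Seconds until the queue would overflow at the current arrival rate."""
        free_slots = QUEUE_CAPACITY - queue_len
        return free_slots / arrival_rate_pps if arrival_rate_pps > 0 else 0.010

    queue = deque()
    arrival_rate = 200_000.0         # simulated packets per second

    for _ in range(3):
        interval = min(predicted_safe_interval(arrival_rate, len(queue)), 0.005)
        time.sleep(interval)                               # the core runs its workload meanwhile
        queue.extend(range(int(arrival_rate * interval)))  # packets that arrived in the interval
        drained = len(queue)
        queue.clear()                                      # the assigned core polls and drains
        print(f"worked {interval * 1e3:.2f} ms, then drained {drained} packets")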
-
Publication Number: US20190042310A1
Publication Date: 2019-02-07
Application Number: US15951650
Application Date: 2018-04-12
Applicant: Intel Corporation
Inventor: John Browne , Chris MacNamara , Tomasz Kantecki , Peter McCarthy , Ma Liang , Mairtin O'Loingsigh , Rory Sexton , John Griffin , Nemanja Marjanovic , David Hunt
IPC: G06F9/48 , G06F1/32 , H04L12/851
Abstract: Technologies for power-aware scheduling include a computing device that receives network packets. The computing device classifies the network packets by priority level and then assigns each network packet to a performance group bin. The packets are assigned based on priority level and other performance criteria. The computing device schedules the network packets assigned to each performance group for processing by a processing engine such as a processor core. Network packets assigned to performance groups having a high priority level are scheduled for processing by processing engines with a high performance level. The computing device may select performance levels for processing engines based on processing workload of the network packets. The computing device may control the performance level of the processing engines, for example by controlling the frequency of processor cores. The processing workload may include packet encryption. Other embodiments are described and claimed.
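An illustrative sketch of the classify/bin/dispatch flow described above; the DSCP-based priority rule, group names, and core assignments are assumptions, not the patent's actual policy.

    from collections import defaultdict

    # Packets are classified by priority, binned into performance groups, and each
    # group is dispatched to a processing engine (core) with a matching performance
    # level. The DSCP rule and group-to-core mapping are assumptions.

    GROUP_TO_CORE = {"high": "core0", "low": "core1"}   # core0 assumed to run at the higher frequency

    def classify(packet) -> str:
        # e.g. expedited-forwarding traffic treated as high priority
        return "high" if packet["dscp"] >= 40 else "low"

    def schedule(packets):
        bins = defaultdict(list)
        for pkt in packets:
            bins[classify(pkt)].append(pkt)             # assign to a performance group bin
        return {GROUP_TO_CORE[group]: pkts for group, pkts in bins.items()}

    packets = [{"id": 1, "dscp": 46}, {"id": 2, "dscp": 0}, {"id": 3, "dscp": 46}]
    print(schedule(packets))                            # packets 1 and 3 on core0, packet 2 on core1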
-
Publication Number: US20180331960A1
Publication Date: 2018-11-15
Application Number: US15594838
Application Date: 2017-05-15
Applicant: Intel Corporation
Inventor: John J. Browne , Chris MacNamara , Ronen Chayat
IPC: H04L12/851 , H04L29/08 , H04L29/06
CPC classification number: H04L47/2441 , H04L47/11 , H04L47/2433 , H04L47/32 , H04L63/0209 , H04L67/322
Abstract: A fabric interface, including: an ingress port to receive incoming network traffic; a host interface to forward the incoming network traffic to a host; and a virtualization-aware overload protection engine including: an overload detector to detect an overload condition on the incoming network traffic; a packet inspector to inspect packets of the incoming network traffic; and a prioritizer to identify low priority packets to be dropped, and high priority packets to be forwarded to the host.
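A minimal sketch of the overload-protection path described in this abstract; the packets-per-second threshold and DSCP-based priority check are illustrative assumptions.

    # An overload detector watches the ingress rate, a packet inspector reads each
    # packet's header, and a prioritizer drops low-priority packets while still
    # forwarding high-priority ones to the host. Threshold and DSCP rule are
    # illustrative assumptions.

    OVERLOAD_PPS = 1_000_000

    def is_high_priority(packet) -> bool:
        return packet.get("dscp", 0) >= 40            # packet inspection step

    def handle_ingress(packet, measured_pps: int, host: list) -> str:
        overloaded = measured_pps > OVERLOAD_PPS      # overload detection step
        if overloaded and not is_high_priority(packet):
            return "dropped"                          # prioritizer sheds low-priority traffic
        host.append(packet)                           # forward to the host interface
        return "forwarded"

    host = []
    print(handle_ingress({"id": 1, "dscp": 0}, measured_pps=1_400_000, host=host))    # dropped
    print(handle_ingress({"id": 2, "dscp": 46}, measured_pps=1_400_000, host=host))   # forwarded
    print(handle_ingress({"id": 3, "dscp": 0}, measured_pps=600_000, host=host))      # forwarded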