-
Publication Number: US11593210B2
Publication Date: 2023-02-28
Application Number: US17136563
Application Date: 2020-12-29
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Diman Zad Tootaghaj, Puneet Sharma, Faraz Ahmed, Michael Zayats
Abstract: Example implementations relate to consensus protocols in a stretched network. According to an example, network performance and/or network latency are continuously monitored among a cluster of a plurality of nodes in a distributed computer system. A leadership priority for each node is set based at least in part on the monitored network performance or network latency, and each node has a vote weight based at least in part on its leadership priority. Each node's vote is biased by its vote weight. The node whose number of biased votes exceeds the maximum possible number of biased votes that any other node in the cluster could receive is selected as the leader node.
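As a rough illustration only (not the patented implementation), the sketch below derives a leadership priority from hypothetical per-peer latency measurements and declares a leader only when its weighted tally exceeds the best tally any rival could still collect; the 1/(1 + mean latency) heuristic and all names are assumptions.

```python
# Hypothetical sketch of latency-biased leader election; illustrative only.

def leadership_priority(latencies_ms):
    """Lower average latency to peers -> higher priority (assumed heuristic)."""
    return {node: 1.0 / (1.0 + sum(peers.values()) / len(peers))
            for node, peers in latencies_ms.items()}

def elect_leader(votes, weights):
    """votes: {voter: candidate}; weights: {voter: vote weight}.
    A candidate wins only if its biased tally exceeds the maximum tally
    any other candidate could reach with all the remaining weight."""
    tally = {}
    for voter, candidate in votes.items():
        tally[candidate] = tally.get(candidate, 0.0) + weights[voter]
    total = sum(weights.values())
    for candidate, score in tally.items():
        if score > total - score:   # no rival can match this biased tally
            return candidate
    return None                     # no node holds a safe weighted majority

latencies = {"a": {"b": 5, "c": 7}, "b": {"a": 5, "c": 30}, "c": {"a": 7, "b": 30}}
weights = leadership_priority(latencies)
print(elect_leader({"a": "a", "b": "a", "c": "c"}, weights))  # -> 'a'
```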
-
Publication Number: US20220382593A1
Publication Date: 2022-12-01
Application Number: US17814895
Application Date: 2022-07-26
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Diman Zad Tootaghaj, Anu Mercian, Vivek Adarsh, Puneet Sharma
Abstract: Example implementations relate to edge acceleration by offloading network dependent applications to a hardware accelerator. According to one embodiment, queries are received at a cluster of a container orchestration platform. The cluster includes a host system and a hardware accelerator, each serving as an individual worker machine of the cluster. The cluster further includes multiple worker nodes and a master node executing on the host system or the hardware accelerator. A first worker node executes on the hardware accelerator and runs a first instance of an application. A distribution of the queries among the worker machines is determined based on a queuing model that takes into consideration the respective compute capacities of the worker machines. Responsive to receipt of the queries by the host system or the hardware accelerator, the queries are directed to the master node or one of the worker nodes in accordance with the distribution.
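The query-distribution step could be pictured, for instance, as a proportional split of the offered load across worker machines by service rate, as in the minimal sketch below; the arrival rate, per-worker service rates, and the proportional rule are illustrative assumptions rather than the queuing model of the abstract.

```python
# Illustrative proportional split of queries across heterogeneous workers.

def split_queries(arrival_rate, service_rates):
    """Return per-worker query shares proportional to capacity, refusing the
    assignment if the cluster would be overloaded (utilization >= 1)."""
    total_capacity = sum(service_rates.values())
    if arrival_rate >= total_capacity:
        raise ValueError("offered load exceeds aggregate capacity")
    return {worker: arrival_rate * mu / total_capacity
            for worker, mu in service_rates.items()}

# e.g. host CPU serves ~200 queries/s, the hardware accelerator ~800 queries/s
shares = split_queries(arrival_rate=600, service_rates={"host": 200.0, "accelerator": 800.0})
print(shares)  # {'host': 120.0, 'accelerator': 480.0}
```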
-
Publication Number: US11502965B2
Publication Date: 2022-11-15
Application Number: US17016329
Application Date: 2020-09-09
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Jean Tourrilhes, Puneet Sharma
Abstract: Systems and methods are provided for performing burst packet preloading for Available Bandwidth (ABW) estimation. These may include: preparing a chirp train to be used for ABW estimation, the chirp train comprising a quantity of original probe packets; determining a quantity of additional probe packets that will transition a network path from a short-term mode into a long-term mode; inserting the determined quantity of additional probe packets at the beginning of the chirp train; and transmitting the chirp train, including the determined quantity of additional probe packets, on the network path to a receiver that can perform ABW estimation of the network path.
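A toy rendering of the preloading step is sketched below, assuming the sender already has an estimate of the path's short-term burst capacity; the packet size, burst-capacity figure, and packet layout are hypothetical placeholders.

```python
# Hypothetical sketch: prepend extra probe packets so the path is pushed out
# of its short-term (burst-absorbing) mode before the original probes arrive.

def build_preloaded_chirp(original_probes, burst_capacity_bytes, probe_size_bytes=1200):
    # Enough extra packets to exhaust the assumed short-term burst capacity.
    extra = -(-burst_capacity_bytes // probe_size_bytes)  # ceiling division
    preload = [{"seq": -i, "size": probe_size_bytes, "role": "preload"}
               for i in range(extra, 0, -1)]
    return preload + original_probes

chirp = [{"seq": i, "size": 1200, "role": "probe"} for i in range(20)]
train = build_preloaded_chirp(chirp, burst_capacity_bytes=30000)
print(len(train) - len(chirp), "preload packets inserted")  # 25
```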
-
Publication Number: US11303534B2
Publication Date: 2022-04-12
Application Number: US16714637
Application Date: 2019-12-13
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Diman Zad Tootaghaj, Junguk Cho, Puneet Sharma
IPC: H04L12/24, G06N20/10, G05B6/02, H04L41/16, H04L41/5009, H04L41/5054, H04L41/147
Abstract: Example implementations relate to a proactive auto-scaling approach. According to an example, a target performance metric is received for an application running in a serverless framework of a private cloud. A machine-learning prediction model is trained to forecast future serverless workloads for the application during a window of time based on historical serverless workload information. The serverless framework is monitored to obtain serverless workload observations for the application. A future serverless workload for the application at a future time is predicted by the trained machine-learning prediction model based on the workload observations. A feedback control system is then used to output a new number of replicas based on a current value of the performance metric, the target performance metric, and the predicted future serverless workload. Finally, the serverless framework is caused to scale and pre-warm the number of replicas supporting the application to the new number.
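One way to picture the interplay of forecast and feedback is the sketch below, which assumes a workload forecast (requests per second) is already available from the trained model; the per-replica capacity, proportional gain, and choice of latency as the performance metric are illustrative assumptions.

```python
# Hypothetical feed-forward plus feedback sizing of the replica count.
import math

REQS_PER_REPLICA = 50.0   # assumed steady-state capacity of one warm replica
KP = 0.5                  # assumed proportional gain on the metric error

def new_replica_count(current_replicas, current_latency_ms, target_latency_ms,
                      predicted_reqs_per_s):
    # Feed-forward term: enough replicas for the forecast workload.
    feed_forward = math.ceil(predicted_reqs_per_s / REQS_PER_REPLICA)
    # Feedback term: nudge the count when the observed metric misses its target.
    error = (current_latency_ms - target_latency_ms) / target_latency_ms
    feedback = math.ceil(KP * error * current_replicas)
    return max(1, feed_forward, current_replicas + feedback)

print(new_replica_count(current_replicas=4, current_latency_ms=250,
                        target_latency_ms=200, predicted_reqs_per_s=420))  # 9
```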
-
Publication Number: US20220078127A1
Publication Date: 2022-03-10
Application Number: US17016329
Application Date: 2020-09-09
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Jean Tourrilhes, Puneet Sharma
IPC: H04L12/835, H04L12/841, H04Q11/00
Abstract: Systems and methods are provided for performing burst packet preloading for Available Bandwidth (ABW) estimation. These may include: preparing a chirp train to be used for ABW estimation, the chirp train comprising a quantity of original probe packets; determining a quantity of additional probe packets that will transition a network path from a short-term mode into a long-term mode; inserting the determined quantity of additional probe packets at the beginning of the chirp train; and transmitting the chirp train, including the determined quantity of additional probe packets, on the network path to a receiver that can perform ABW estimation of the network path.
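As a complement to the sender-side sketch accompanying the granted patent above, a receiver-side sketch under similar assumptions might locate the probe rate at which one-way delays begin to grow once the preload packets have been discarded; the slack threshold and input layout are hypothetical.

```python
# Hypothetical receiver-side estimate from the non-preload probes of a chirp.

def estimate_abw(probe_rates_mbps, one_way_delays_ms, slack_ms=0.2):
    """probe_rates_mbps[i] is the instantaneous rate at which probe i was sent;
    the delays are assumed to cover only the original (non-preload) probes."""
    baseline = one_way_delays_ms[0]
    for rate, delay in zip(probe_rates_mbps, one_way_delays_ms):
        if delay > baseline + slack_ms:
            return rate              # congestion onset: rate exceeds available bandwidth
        baseline = min(baseline, delay)
    return probe_rates_mbps[-1]      # no congestion observed within the chirp

rates = [10, 20, 40, 80, 160]
delays = [1.0, 1.0, 1.05, 1.6, 3.2]
print(estimate_abw(rates, delays), "Mbps")  # 80 Mbps
```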
-
Publication Number: US20210352001A1
Publication Date: 2021-11-11
Application Number: US17282838
Application Date: 2018-11-01
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Jean Tourrilhes, Puneet Sharma
IPC: H04L12/26
Abstract: Systems and methods are provided for available network bandwidth estimation using a one-way-delay noise filter with bump detection. The method includes receiving one-way delay measurements for each probe packet in a probe train sent over the telecommunications path; grouping the probe packets into a plurality of pairs based on the one-way delay measurements; for each pair, computing a respective noise threshold based on the one-way delay measurements of all the probe packets transmitted after a later-transmitted probe packet of the pair; selecting one of the pairs according to the noise thresholds and the one-way delay measurements for the probe packets of the pairs; and estimating the available bandwidth on the telecommunications path based on transmission times of the probe packets in the selected pair.
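A loose sketch of the pairing-and-threshold idea follows; the consecutive-pair rule, the use of a standard deviation over later probes as the noise threshold, and the sample delays are assumptions rather than the claimed method.

```python
# Hypothetical pair selection driven by per-pair noise thresholds.
import statistics

def select_pair(one_way_delays_ms):
    """Pair consecutive probes; take each pair's noise threshold from the
    spread of the delays of all probes sent after the pair, and pick the
    first pair whose delay bump clears its threshold."""
    candidates = []
    for i in range(len(one_way_delays_ms) - 1):
        tail = one_way_delays_ms[i + 2:]
        noise = statistics.pstdev(tail) if len(tail) > 1 else 0.0
        bump = one_way_delays_ms[i + 1] - one_way_delays_ms[i]
        candidates.append(((i, i + 1), bump, noise))
    for pair, bump, noise in candidates:
        if bump > noise:
            return pair
    return candidates[-1][0] if candidates else None

delays = [1.0, 1.02, 0.98, 1.01, 1.9, 2.6, 3.4]
print(select_pair(delays))  # (3, 4): the bump at the congestion onset
```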
-
Publication Number: US20210243133A1
Publication Date: 2021-08-05
Application Number: US16778491
Application Date: 2020-01-31
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Jean Tourrilhes, Puneet Sharma
IPC: H04L12/825, H04L12/801, H04L12/841
Abstract: Techniques and architectures for measuring available bandwidth. A train of probe packets is received from a remote electronic device. A network transmission delay is measured for at least two packets from the train of probe packets. Network congestion is estimated utilizing the at least two packets from the train of probe packets. An estimated available bandwidth is computed based on the network transmission delay and the estimated network congestion. One or more network transmission characteristics are modified based on the estimated available bandwidth.
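As a back-of-the-envelope illustration only, the dispersion of a packet pair bounds the bottleneck rate and the fraction of delayed probes can discount it; the numbers and the discount rule below are hypothetical, not the patented computation.

```python
# Hypothetical pair-dispersion estimate discounted by observed congestion.

def estimate_available_bw(packet_size_bits, recv_gap_s, congested_fraction):
    bottleneck_bps = packet_size_bits / recv_gap_s       # from pair dispersion
    return bottleneck_bps * (1.0 - congested_fraction)   # discount cross traffic

# 12 kbit probes arriving 120 microseconds apart, ~25% of probes showing extra delay
print(estimate_available_bw(12_000, 120e-6, 0.25) / 1e6, "Mbps")  # 75.0 Mbps
```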
-
Publication Number: US20210184941A1
Publication Date: 2021-06-17
Application Number: US16714637
Application Date: 2019-12-13
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Diman Zad Tootaghaj, Junguk Cho, Puneet Sharma
Abstract: Example implementations relate to a proactive auto-scaling approach. According to an example, a target performance metric is received for an application running in a serverless framework of a private cloud. A machine-learning prediction model is trained to forecast future serverless workloads for the application during a window of time based on historical serverless workload information. The serverless framework is monitored to obtain serverless workload observations for the application. A future serverless workload for the application at a future time is predicted by the trained machine-learning prediction model based on the workload observations. A feedback control system is then used to output a new number of replicas based on a current value of the performance metric, the target performance metric, and the predicted future serverless workload. Finally, the serverless framework is caused to scale and pre-warm the number of replicas supporting the application to the new number.
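As a companion to the control-loop sketch accompanying the granted patent above, a toy forecaster that could supply the predicted workload is sketched here; the sliding-window least-squares trend is an assumption standing in for the machine-learning prediction model.

```python
# Hypothetical one-step-ahead workload forecast from a sliding window.

def forecast_next(workload_window):
    """Fit y = a*t + b over the window and extrapolate one step ahead."""
    n = len(workload_window)
    mean_t = (n - 1) / 2
    mean_y = sum(workload_window) / n
    cov = sum((t - mean_t) * (y - mean_y) for t, y in enumerate(workload_window))
    var = sum((t - mean_t) ** 2 for t in range(n))
    slope = cov / var if var else 0.0
    intercept = mean_y - slope * mean_t
    return slope * n + intercept   # prediction for the next time step

print(forecast_next([100, 120, 135, 160, 180]))  # ~199 requests/s
```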
-
Publication Number: US20190312855A1
Publication Date: 2019-10-10
Application Number: US15947052
Application Date: 2018-04-06
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Puneet Sharma, Arun Raghuramu, David Lee
Abstract: In some examples, a secure compliance protocol may include a virtual computing instance (VCI) deployed on a hypervisor and provisioned with hardware computing resources. In some examples, the VCI may also include a cryptoprocessor to provide cryptoprocessing for secure communication with a plurality of nodes, and a plurality of agents to generate a plurality of compliance proofs. The VCI may communicate with a server corresponding to a node of the plurality of nodes and receive a time stamp corresponding to at least one compliance proof based on a metric of a connected device.
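A very rough software stand-in for the proof-and-timestamp flow is sketched below; a real design would rely on a cryptoprocessor (for example a TPM or vTPM) rather than an in-process HMAC key, and the key, field names, and local timestamp are placeholders.

```python
# Hypothetical software stand-in for agent-generated compliance proofs.
import hashlib, hmac, json, time

AGENT_KEY = b"demo-only-key"   # placeholder; a cryptoprocessor would hold this

def make_compliance_proof(device_id, metric_name, metric_value):
    payload = json.dumps({"device": device_id, "metric": metric_name,
                          "value": metric_value}, sort_keys=True).encode()
    return {"payload": payload.decode(),
            "proof": hmac.new(AGENT_KEY, payload, hashlib.sha256).hexdigest()}

def timestamp_proof(proof):
    # Stands in for the round-trip to the server corresponding to a node.
    return {**proof, "timestamp": time.time()}

print(timestamp_proof(make_compliance_proof("vci-42", "patch_level", "2024.07")))
```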
-
Publication Number: US20180121222A1
Publication Date: 2018-05-03
Application Number: US15339574
Application Date: 2016-10-31
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Puneet Sharma, Lianjie Cao, Vinay Saxena, Vasu Sasikanth Sankhavaram, Badrinath Natarajan
CPC classification number: G06F9/45558, G06F9/5077, G06F2009/45562, G06F2009/4557, G06F2009/45595
Abstract: Examples relate to determining virtual network function configurations. In one example, a computing device may receive a virtual network function specifying a particular function to be performed by at least one virtual machine; identify a particular performance metric for the virtual network function; determine, using the particular performance metric and a default resource configuration, a first infrastructure configuration specifying a value for each of a plurality of infrastructure options, each of the plurality of infrastructure options specifying a feature of the at least one virtual machine; and determine, using the particular performance metric and the first infrastructure configuration, a first resource configuration specifying a value for each of a plurality of virtualized hardware resources for the at least one virtual machine.
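The two-stage search could be pictured as below, where an infrastructure configuration is chosen first under a default resource configuration and the virtualized resources are then tuned under the chosen infrastructure; the option lists, the placeholder benchmark, and the exhaustive search are assumptions.

```python
# Hypothetical two-stage configuration search for a virtual network function.
from itertools import product

INFRA_OPTIONS = {"cpu_pinning": [False, True], "numa_aware": [False, True]}
RESOURCE_OPTIONS = {"vcpus": [2, 4, 8], "memory_gb": [4, 8, 16]}
DEFAULT_RESOURCES = {"vcpus": 2, "memory_gb": 4}

def measure_throughput(infra, resources):
    """Placeholder benchmark; a real system would deploy the VNF and measure it."""
    base = resources["vcpus"] * 1.5 + resources["memory_gb"] * 0.25
    return base * (1.2 if infra["cpu_pinning"] else 1.0) * (1.1 if infra["numa_aware"] else 1.0)

def best(options, score):
    keys = list(options)
    configs = [dict(zip(keys, values)) for values in product(*(options[k] for k in keys))]
    return max(configs, key=score)

infra = best(INFRA_OPTIONS, lambda c: measure_throughput(c, DEFAULT_RESOURCES))
resources = best(RESOURCE_OPTIONS, lambda c: measure_throughput(infra, c))
print(infra, resources)
```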