-
1.
Publication Number: US11914982B2
Publication Date: 2024-02-27
Application Number: US18328287
Filing Date: 2023-06-02
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Lianjie Cao , Anu Mercian , Diman Zad Tootaghaj , Faraz Ahmed , Puneet Sharma
Abstract: Embodiments described herein are generally directed to an edge-CaaS (eCaaS) framework for providing life-cycle management of containerized applications on the edge. According to an example, declarative intents are received that are indicative of a use case for which a cluster of a container orchestration platform is to be deployed within an edge site that is to be created based on infrastructure associated with a private network. A deployment template is created by performing intent translation on the declarative intents and based on a set of constraints. The deployment template identifies the container orchestration platform selected by the intent translation. The deployment template is then executed to deploy and configure the edge site, including provisioning and configuring the infrastructure, installing the container orchestration platform on the infrastructure, configuring the cluster within the container orchestration platform, and deploying a containerized application or portion thereof on the cluster.
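As a concrete illustration of the intent-translation step described in this abstract, the Python sketch below maps declarative intents and a constraint set to a deployment template that names the selected orchestration platform. The field names, the lightweight-distribution fallback rule, and the step list are illustrative assumptions, not the patented method.

```python
# Hypothetical sketch of intent translation: declarative intents plus
# constraints become a deployment template naming the chosen platform.
from dataclasses import dataclass

@dataclass
class Intent:
    use_case: str            # e.g. "video-analytics"
    node_count: int          # edge nodes available in the private network
    memory_gb_per_node: int

def translate(intent: Intent, constraints: dict) -> dict:
    """Select an orchestrator satisfying the constraints and emit a template."""
    # Assumed rule: fall back to a lightweight distribution on small nodes.
    platform = ("kubernetes"
                if intent.memory_gb_per_node >= constraints["full_k8s_min_mem_gb"]
                else "k3s")
    return {
        "platform": platform,
        "cluster": {"nodes": intent.node_count, "use_case": intent.use_case},
        "steps": ["provision_infrastructure", "install_platform",
                  "configure_cluster", "deploy_application"],
    }

if __name__ == "__main__":
    template = translate(Intent("video-analytics", 3, 4),
                         {"full_k8s_min_mem_gb": 8})
    print(template)   # small nodes -> {'platform': 'k3s', ...}
```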
-
2.
Publication Number: US20210392070A1
Publication Date: 2021-12-16
Application Number: US17282941
Filing Date: 2019-04-18
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Diman Zad Tootaghaj , Puneet Sharma , Faraz Ahmed
IPC: H04L12/729 , H04L12/721 , H04L12/715 , H04L12/26 , H04L12/801 , H04L12/727
Abstract: An example network orchestrator includes processing circuitry and a memory. The memory includes instructions that cause the network orchestrator to receive network probe information, including delay times of network probes associated with a set of flows between devices. The instructions further cause the network orchestrator to generate a correlation matrix including correlations representing shared congested links between pairs of flows, to determine, for each flow of the set of flows, a routing solution optimized for that flow, and to select a total minimum-cost solution from the determined routing solutions.
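As a rough sketch of the correlation step in this abstract, the Python fragment below correlates probe delay time series pairwise and thresholds the result to flag flow pairs that likely share a congested link. The Pearson statistic, the 0.8 threshold, and the toy minimum-cost selection are assumptions for illustration, not the patented algorithm.

```python
# Hypothetical sketch: correlated probe delays suggest a shared congested link.
import numpy as np

def shared_congestion_matrix(delays: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """delays: shape (num_flows, num_samples) of probe delay measurements."""
    corr = np.corrcoef(delays)          # pairwise Pearson correlations
    np.fill_diagonal(corr, 0.0)         # ignore self-correlation
    return corr > threshold             # True where flows likely share a link

def pick_total_min_cost(per_flow_solutions: list[dict]) -> dict:
    # Each candidate routing solution carries a total cost; choose the cheapest.
    return min(per_flow_solutions, key=lambda s: s["total_cost"])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(10, 1, 100)                        # shared congestion signal
    delays = np.stack([base + rng.normal(0, 0.1, 100),   # flow 0
                       base + rng.normal(0, 0.1, 100),   # flow 1 (shares link)
                       rng.normal(5, 1, 100)])           # flow 2 (independent)
    print(shared_congestion_matrix(delays))
```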
-
3.
Publication Number: US11698780B2
Publication Date: 2023-07-11
Application Number: US17236884
Filing Date: 2021-04-21
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Lianjie Cao , Anu Mercian , Diman Zad Tootaghaj , Faraz Ahmed , Puneet Sharma
Abstract: Embodiments described herein are generally directed to an edge-CaaS (eCaaS) framework for providing life-cycle management of containerized applications on the edge. According to an example, declarative intents are received that are indicative of a use case for which a cluster of a container orchestration platform is to be deployed within an edge site that is to be created based on infrastructure associated with a private network. A deployment template is created by performing intent translation on the declarative intents and based on a set of constraints. The deployment template identifies the container orchestration platform selected by the intent translation. The deployment template is then executed to deploy and configure the edge site, including provisioning and configuring the infrastructure, installing the container orchestration platform on the infrastructure, configuring the cluster within the container orchestration platform, and deploying a containerized application or portion thereof on the cluster.
-
4.
Publication Number: US20230089925A1
Publication Date: 2023-03-23
Application Number: US17448299
Filing Date: 2021-09-21
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Junguk Cho , Puneet Sharma , Diman Zad Tootaghaj
Abstract: Architectures and techniques for managing heterogeneous sets of physical GPUs. Functionality information is collected for one or more physical GPUs by a GPU device manager coupled with a heterogeneous set of physical GPUs. Based on the collected functionality information, the GPU device manager manages at least one of the physical GPUs as multiple virtual GPUs. The device manager classifies each of the physical GPUs as either a single physical GPU or as one or more virtual GPUs. Traffic representing processing jobs to be processed by at least a subset of the physical GPUs is received via a gateway programmed by a traffic manager. Received processing jobs are scheduled to a GPU application, and distributed into the scheduled GPU application, by a GPU scheduler communicatively coupled with the traffic manager and with the GPU device manager.
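A minimal sketch of the classification step is given below: the device manager inspects collected functionality information and exposes each physical GPU either as itself or as several virtual GPUs. The capability flag and the memory-slice partition rule are illustrative assumptions, not the claimed mechanism.

```python
# Hypothetical sketch: classify each physical GPU as one GPU or several vGPUs.
from dataclasses import dataclass

@dataclass
class PhysicalGPU:
    name: str
    memory_gb: int
    supports_partitioning: bool   # e.g. hardware slicing such as MIG

def classify(gpus: list[PhysicalGPU], slice_gb: int = 8) -> dict[str, list[str]]:
    """Map each physical GPU to the virtual GPU names it will be managed as."""
    inventory: dict[str, list[str]] = {}
    for gpu in gpus:
        if gpu.supports_partitioning and gpu.memory_gb >= 2 * slice_gb:
            n = gpu.memory_gb // slice_gb
            inventory[gpu.name] = [f"{gpu.name}/vgpu{i}" for i in range(n)]
        else:
            inventory[gpu.name] = [gpu.name]   # managed as a single GPU
    return inventory

if __name__ == "__main__":
    fleet = [PhysicalGPU("gpu-a", 40, True), PhysicalGPU("gpu-b", 16, False)]
    print(classify(fleet))   # gpu-a -> five vGPUs, gpu-b -> itself
```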
-
5.
Publication Number: US20210184942A1
Publication Date: 2021-06-17
Application Number: US16931850
Filing Date: 2020-07-17
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Diman Zad Tootaghaj , Junguk Cho , Puneet Sharma
Abstract: Example implementations relate to a proactive auto-scaling approach. According to an example, a machine-learning prediction model is trained to forecast future serverless workloads during a window of time for an application running in a public cloud, based on past serverless workload information associated with the application. During the window of time, serverless workload information associated with the application is monitored. A future serverless workload is predicted for the application at a future time within the window, based on the machine-learning prediction model. Prior to the future time, containers within the public cloud executing the application are pre-warmed to accommodate the predicted future serverless workload by issuing fake requests to the application to trigger auto-scaling functionality implemented by the public cloud.
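The proactive loop in this abstract can be sketched as follows. Ordinary least squares over recent request counts stands in for the trained machine-learning prediction model, and the number of synthetic requests needed to trigger scale-out is computed from the forecast; the per-container capacity and the pre-warming arithmetic are illustrative assumptions.

```python
# Hypothetical sketch: forecast the next interval's load, then issue enough
# fake requests that the platform's auto-scaler pre-warms containers.
import numpy as np

def forecast_next(history: list[int]) -> float:
    """Fit a line to recent request counts and extrapolate one step ahead."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    return max(0.0, slope * len(history) + intercept)

def prewarm_requests(predicted: float, current_containers: int,
                     per_container: int) -> int:
    """How many fake requests to issue so scale-out happens before real load."""
    needed = int(np.ceil(predicted / per_container))
    return max(0, (needed - current_containers) * per_container)

if __name__ == "__main__":
    history = [100, 140, 180, 230, 270]     # requests per interval
    predicted = forecast_next(history)      # ~313 for this toy series
    print(predicted, prewarm_requests(predicted, current_containers=2,
                                      per_container=100))
```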
-
6.
Publication Number: US20240333622A1
Publication Date: 2024-10-03
Application Number: US18193879
Filing Date: 2023-03-31
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Diman Zad Tootaghaj , Mehrnaz Sharifian , Puneet Sharma
IPC: H04L43/0876 , H04L43/045 , H04L43/16
CPC classification number: H04L43/0876 , H04L43/045 , H04L43/16
Abstract: A device and corresponding method are provided for determining that a consumed computing capacity of a first networking device exceeds a threshold of its total capacity for processing monitoring data for a monitoring metric. An optimization engine determines a second networking device with unused computing capacity sufficient for processing the monitoring data generated by the first networking device. The optimization engine automatically moves the monitoring data for the monitoring metric generated by the first networking device to the second networking device and causes the second networking device to process the monitoring data.
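A minimal sketch of the rebalancing decision follows: when a device's consumed capacity for a monitoring metric crosses its threshold, an optimization engine picks a peer with enough unused capacity and reassigns the metric's monitoring data there. The device fields, the 90% threshold, and the greedy most-headroom choice are illustrative assumptions.

```python
# Hypothetical sketch: move a metric's monitoring load off an overloaded device.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    capacity: float        # total capacity for processing monitoring data
    consumed: float        # capacity currently in use

    @property
    def unused(self) -> float:
        return self.capacity - self.consumed

def rebalance(first: Device, peers: list[Device], load: float,
              threshold: float = 0.9) -> str | None:
    """Return the peer that should take over the metric's data, if any."""
    if first.consumed <= threshold * first.capacity:
        return None                                   # no move needed
    candidates = [d for d in peers if d.unused >= load]
    if not candidates:
        return None
    target = max(candidates, key=lambda d: d.unused)  # most headroom
    first.consumed -= load
    target.consumed += load
    return target.name

if __name__ == "__main__":
    a = Device("switch-1", capacity=100, consumed=95)
    b = Device("switch-2", capacity=100, consumed=40)
    print(rebalance(a, [b], load=20))   # -> switch-2
```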
-
7.
Publication Number: US20240095870A1
Publication Date: 2024-03-21
Application Number: US18307728
Filing Date: 2023-04-26
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Diman Zad Tootaghaj , Junguk Cho , Puneet Sharma
IPC: G06T1/20
CPC classification number: G06T1/20
Abstract: Example implementations relate to scheduling of jobs for a plurality of graphics processing units (GPUs) providing concurrent processing by a plurality of virtual GPUs (vGPUs). According to an example, a computing system including one or more GPUs receives a request to schedule a new job to be executed by the computing system. The new job is allocated to one or more vGPUs, and allocations of existing jobs to one or more vGPUs are updated. The operational cost of operating the one or more GPUs and the migration cost of allocating the new job and updating the allocations of the existing jobs on the one or more vGPUs are minimized. The new job and the existing jobs are processed by the one or more GPUs in the computing system.
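The joint objective in this abstract can be illustrated with a toy solver: assign the new job and re-assign existing jobs to vGPUs so that operational cost (physical GPUs powered on) plus migration cost (existing jobs moved off their current vGPU) is minimized. The brute-force search and the unit costs below are illustrative stand-ins for the patented optimization.

```python
# Hypothetical sketch: minimize operational cost + migration cost over vGPUs.
from itertools import product

def schedule(jobs, current, vgpus, gpu_of, op_cost=10.0, mig_cost=1.0):
    """jobs: job names; current: job -> current vGPU or None (the new job);
    vgpus: vGPU names; gpu_of: vGPU -> physical GPU."""
    best, best_cost = None, float("inf")
    for assignment in product(vgpus, repeat=len(jobs)):
        plan = dict(zip(jobs, assignment))
        if len(set(plan.values())) < len(jobs):        # one job per vGPU
            continue
        gpus_on = {gpu_of[v] for v in plan.values()}   # GPUs kept powered on
        migrations = sum(1 for j in jobs
                         if current[j] is not None and plan[j] != current[j])
        cost = op_cost * len(gpus_on) + mig_cost * migrations
        if cost < best_cost:
            best, best_cost = plan, cost
    return best, best_cost

if __name__ == "__main__":
    gpu_of = {"g0/v0": "g0", "g0/v1": "g0", "g1/v0": "g1"}
    jobs = ["existing", "new"]
    current = {"existing": "g1/v0", "new": None}
    print(schedule(jobs, current, list(gpu_of), gpu_of))
    # Packing both jobs onto g0 pays one migration but powers one GPU, not two.
```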
-
8.
Publication Number: US20230325166A1
Publication Date: 2023-10-12
Application Number: US18328287
Filing Date: 2023-06-02
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Lianjie Cao , Anu Mercian , Diman Zad Tootaghaj , Faraz Ahmed , Puneet Sharma
Abstract: Embodiments described herein are generally directed to an edge-CaaS (eCaaS) framework for providing life-cycle management of containerized applications on the edge. According to an example, declarative intents are received that are indicative of a use case for which a cluster of a container orchestration platform is to be deployed within an edge site that is to be created based on infrastructure associated with a private network. A deployment template is created by performing intent translation on the declarative intents and based on a set of constraints. The deployment template identifies the container orchestration platform selected by the intent translation. The deployment template is then executed to deploy and configure the edge site, including provisioning and configuring the infrastructure, installing the container orchestration platform on the infrastructure, configuring the cluster within the container orchestration platform, and deploying a containerized application or portion thereof on the cluster.
-
9.
Publication Number: US20220206900A1
Publication Date: 2022-06-30
Application Number: US17136563
Filing Date: 2020-12-29
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Diman Zad Tootaghaj , Puneet Sharma , Faraz Ahmed , Michael Zayats
Abstract: Example implementations relate to consensus protocols in a stretched network. According to an example, network performance and/or network latency are continuously monitored among a cluster of a plurality of nodes in a distributed computer system. A leadership priority for each node is set based at least in part on the monitored network performance or network latency. Each node has a vote weight based at least in part on the leadership priority of the node, and each node's vote is biased by its vote weight. The node receiving a number of biased votes higher than the maximum possible number of biased votes received by any other node in the cluster is selected as the leader node.
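As a minimal sketch of the biased election described here, each node's vote is scaled by a weight derived from its leadership priority, and a candidate becomes leader only if its biased tally exceeds the maximum tally any other node could have received. The weights and the quorum arithmetic below are illustrative assumptions, not the patented protocol.

```python
# Hypothetical sketch: leader election with votes biased by per-node weights.
def elect(votes: dict[str, str], weight: dict[str, float]) -> str | None:
    """votes: voter -> candidate; weight: node -> vote weight."""
    tally: dict[str, float] = {}
    for voter, candidate in votes.items():
        tally[candidate] = tally.get(candidate, 0.0) + weight[voter]
    leader, leader_tally = max(tally.items(), key=lambda kv: kv[1])
    # Upper bound on the biased tally any *other* node could have received.
    others_max = sum(weight.values()) - leader_tally
    return leader if leader_tally > others_max else None

if __name__ == "__main__":
    # node-a sits on the lowest-latency links, so it carries the most weight.
    weight = {"node-a": 3.0, "node-b": 1.0, "node-c": 1.0}
    print(elect({"node-a": "node-a", "node-b": "node-a", "node-c": "node-c"},
                weight))   # -> node-a (4.0 > 1.0)
```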
-
10.
Publication Number: US11983074B2
Publication Date: 2024-05-14
Application Number: US18175091
Filing Date: 2023-02-27
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Diman Zad Tootaghaj , Puneet Sharma , Faraz Ahmed , Michael Zayats
IPC: G06F11/30 , G06F9/50 , G06F11/14 , G06F18/23213 , G06F11/18
CPC classification number: G06F11/1425 , G06F9/5072 , G06F9/5077 , G06F9/5083 , G06F18/23213 , G06F11/187 , G06F2209/505 , G06F2209/508
Abstract: Example implementations relate to consensus protocols in a stretched network. According to an example, network performance and/or network latency are continuously monitored among a cluster of a plurality of nodes in a distributed computer system. A leadership priority for each node is set based at least in part on the monitored network performance or network latency. Each node has a vote weight based at least in part on the leadership priority of the node, and each node's vote is biased by its vote weight. The node receiving a number of biased votes higher than the maximum possible number of biased votes received by any other node in the cluster is selected as the leader node.