-
1.
Publication No.: US11914982B2
Publication Date: 2024-02-27
Application No.: US18328287
Filing Date: 2023-06-02
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Lianjie Cao , Anu Mercian , Diman Zad Tootaghaj , Faraz Ahmed , Puneet Sharma
Abstract: Embodiments described herein are generally directed to an edge-CaaS (eCaaS) framework for providing life-cycle management of containerized applications on the edge. According to an example, declarative intents are received indicative of a use case for which a cluster of a container orchestration platform is to be deployed within an edge site that is to be created based on infrastructure associated with a private network. A deployment template is created by performing intent translation on the declarative intents and based on a set of constraints. The deployment template identifies the container orchestration platform selected by the intent translation. The deployment template is then executed to deploy and configure the edge site, including provisioning and configuring the infrastructure, installing the container orchestration platform on the infrastructure, configuring the cluster within the container orchestration platform, and deploying a containerized application or portion thereof on the cluster.
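The intent-translation step described above can be sketched as a small function that maps declarative intents plus infrastructure constraints to a deployment template. This is a minimal illustrative sketch, not the patented implementation; the platform names, capability table, and field names are all assumptions.

```python
# Hypothetical sketch of intent translation: all platform names, capability
# entries, and template fields are illustrative assumptions.
def translate_intents(intents, constraints):
    """Map declarative intents to a deployment template for an edge site."""
    # Candidate container orchestration platforms and their (assumed) needs.
    platforms = {
        "k8s": {"min_nodes": 3, "gpu": True},
        "k3s": {"min_nodes": 1, "gpu": False},
    }
    for name, caps in platforms.items():
        # Select the first platform satisfying the constraints and intents.
        if caps["min_nodes"] <= constraints["available_nodes"] and (
            not intents.get("needs_gpu") or caps["gpu"]
        ):
            return {
                "platform": name,
                "cluster_size": max(caps["min_nodes"], intents.get("replicas", 1)),
                "use_case": intents["use_case"],
            }
    raise ValueError("no platform satisfies the declared intents")
```

The returned template would then drive provisioning, platform installation, and cluster configuration.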
-
2.
Publication No.: US20240004710A1
Publication Date: 2024-01-04
Application No.: US18469695
Filing Date: 2023-09-19
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Lianjie Cao , Faraz Ahmed , Puneet Sharma
IPC: G06F9/50 , G06F9/30 , G06N20/00 , G06F11/34 , G06F18/214 , G06F18/2415
CPC classification number: G06F9/5005 , G06F9/505 , G06F9/5011 , G06F18/24155 , G06N20/00 , G06F11/3409 , G06F18/214 , G06F9/3009
Abstract: Systems and methods are provided for optimally allocating resources used to perform multiple tasks/jobs, e.g., machine learning training jobs. The possible resource configurations or candidates that can be used to perform such jobs are generated. A first batch of training jobs can be randomly selected and run using one of the possible resource configuration candidates. Subsequent batches of training jobs may be performed using other resource configuration candidates that have been selected using an optimization process, e.g., Bayesian optimization. Upon reaching a stopping criterion, the resource configuration resulting in a desired optimization metric, e.g., fastest job completion time, can be selected and used to execute the remaining training jobs.
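The search loop in this abstract can be sketched as follows. Note this is a simplified stand-in: the first candidate is chosen at random as described, but the subsequent-batch selection here is a trivial placeholder for the Bayesian optimization the abstract names, and the trial budget stands in for the stopping criterion.

```python
import random

def pick_config(candidates, run_job, budget=5, seed=0):
    """Simplified stand-in for the described search over resource configs.

    candidates: hashable resource-configuration candidates.
    run_job: callable returning the job completion time for a candidate.
    """
    rng = random.Random(seed)
    observed = {}
    first = rng.choice(candidates)            # first batch: random candidate
    observed[first] = run_job(first)
    while len(observed) < min(budget, len(candidates)):
        # Placeholder for Bayesian optimization: probe an untried candidate.
        nxt = min(c for c in candidates if c not in observed)
        observed[nxt] = run_job(nxt)
    # Stopping criterion reached: return the fastest configuration seen.
    return min(observed, key=observed.get)
```

With the budget covering all candidates, the fastest configuration is always found; a real Bayesian-optimization surrogate would aim to find it with far fewer trials.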
-
3.
Publication No.: US20210392070A1
Publication Date: 2021-12-16
Application No.: US17282941
Filing Date: 2019-04-18
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Diman Zad Tootaghaj , Puneet Sharma , Faraz Ahmed
IPC: H04L12/729 , H04L12/721 , H04L12/715 , H04L12/26 , H04L12/801 , H04L12/727
Abstract: An example network orchestrator includes processing circuitry and a memory. The memory includes instructions that cause the network orchestrator to receive network probe information including delay times of network probes associated with a set of flows between devices. The instructions further cause the network orchestrator to generate a correlation matrix including correlations representing shared congested links between pairs of flows. The instructions further cause the network orchestrator to determine, for each flow of the set of flows, a routing solution optimized for that flow, and to select a total minimum-cost solution from the determined routing solutions.
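The correlation-matrix step can be illustrated with a Pearson correlation over probe delay samples; this is a sketch of the general technique, not the orchestrator's actual inference method, and the data layout is assumed.

```python
import statistics

def correlation_matrix(delays):
    """delays: dict flow_id -> equal-length list of probe delay samples.

    Returns {(f1, f2): Pearson correlation}. A high correlation between two
    flows' delays suggests they share a congested link.
    """
    flows = sorted(delays)
    corr = {}
    for i, a in enumerate(flows):
        for b in flows[i + 1:]:
            xa, xb = delays[a], delays[b]
            ma, mb = statistics.fmean(xa), statistics.fmean(xb)
            cov = sum((x - ma) * (y - mb) for x, y in zip(xa, xb))
            norm = (sum((x - ma) ** 2 for x in xa)
                    * sum((y - mb) ** 2 for y in xb)) ** 0.5
            corr[(a, b)] = cov / norm if norm else 0.0
    return corr
```

Flows whose delay spikes move together score near 1.0, flagging a likely shared bottleneck for the routing optimizer.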
-
4.
Publication No.: US20250110883A1
Publication Date: 2025-04-03
Application No.: US18477557
Filing Date: 2023-09-29
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Lianjie Cao , Zhen Lin , Faraz Ahmed , Puneet Sharma
IPC: G06F12/0891 , G06F12/06
Abstract: In certain embodiments, a computer-implemented method includes: receiving, by a caching system plugin, a request to create a persistent volume for a container application instance; configuring, by the caching system plugin, a local cache volume on a host computing device; configuring, by the caching system plugin, a remote storage volume on a remote storage device; selecting, by a policy manager of the caching system plugin, a cache policy for the container application instance; creating, by the caching system plugin and from a cache manager, a virtual block device associated with the local cache volume, the remote storage volume, and the cache policy; and providing the virtual block device for use by the container application instance as the persistent volume.
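The plugin flow in the claims can be sketched as a small data model; every name here (paths, URI scheme, policy strings) is a hypothetical illustration, not the plugin's actual API.

```python
# Hypothetical sketch of the caching-plugin flow: the device paths, remote
# volume URI, and policy names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VirtualBlockDevice:
    local_cache: str    # local cache volume on the host computing device
    remote_volume: str  # remote storage volume on a remote storage device
    policy: str         # cache policy chosen by the policy manager

def create_persistent_volume(app_id, policy_manager):
    """Create the virtual block device backing a container's persistent volume."""
    local = f"/dev/cache/{app_id}"    # configure the local cache volume
    remote = f"rbd://pool/{app_id}"   # configure the remote storage volume
    policy = policy_manager(app_id)   # e.g. "write-back" or "write-through"
    return VirtualBlockDevice(local, remote, policy)
```

The container instance then mounts the returned virtual block device as its persistent volume, with the cache policy mediating between the local and remote tiers.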
-
5.
Publication No.: US12001511B2
Publication Date: 2024-06-04
Application No.: US17199294
Filing Date: 2021-03-11
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Lianjie Cao , Faraz Ahmed , Puneet Sharma , Ali Tariq
IPC: G06F18/214 , G06F9/50 , G06F11/30 , G06F11/34 , G06F18/2415 , G06N3/0464 , G06N3/063 , G06N3/0985 , G06N7/01 , G06N20/00 , G06V40/16
CPC classification number: G06F18/214 , G06F9/5022 , G06F9/5027 , G06F9/505 , G06F9/5061 , G06F11/3414 , G06F18/24155 , G06N20/00
Abstract: Systems and methods can be configured to determine a plurality of computing resource configurations used to perform machine learning model training jobs. A computing resource configuration can comprise: a first tuple including numbers of worker nodes and parameter server nodes, and a second tuple including resource allocations for the worker nodes and parameter server nodes. At least one machine learning training job can be executed using a first computing resource configuration having a first set of values associated with the first tuple. During execution of the machine learning training job, resource usage of the worker nodes and parameter server nodes caused by a second set of values associated with the second tuple can be monitored, and whether to adjust the second set of values can be determined. Whether a stopping criterion is satisfied can be determined. One of the plurality of computing resource configurations can be selected.
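The two-tuple configuration described above has a natural encoding; this sketch assumes CPU-only allocations for brevity, and the field names are illustrative.

```python
# Illustrative encoding of the two-tuple resource configuration.
from typing import NamedTuple

class NodeCounts(NamedTuple):       # first tuple: how many of each node type
    workers: int
    param_servers: int

class NodeResources(NamedTuple):    # second tuple: allocation per node type
    worker_cpu: float               # vCPUs per worker node
    ps_cpu: float                   # vCPUs per parameter server node

class ResourceConfig(NamedTuple):
    counts: NodeCounts
    resources: NodeResources

    def total_cpu(self) -> float:
        """Aggregate vCPUs the configuration would consume."""
        return (self.counts.workers * self.resources.worker_cpu
                + self.counts.param_servers * self.resources.ps_cpu)
```

Separating the two tuples mirrors the abstract's search: the first tuple fixes cluster shape across a job, while the second tuple's allocations can be adjusted as usage is monitored.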
-
6.
Publication No.: US11698780B2
Publication Date: 2023-07-11
Application No.: US17236884
Filing Date: 2021-04-21
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Lianjie Cao , Anu Mercian , Diman Zad Tootaghaj , Faraz Ahmed , Puneet Sharma
Abstract: Embodiments described herein are generally directed to an edge-CaaS (eCaaS) framework for providing life-cycle management of containerized applications on the edge. According to an example, declarative intents are received indicative of a use case for which a cluster of a container orchestration platform is to be deployed within an edge site that is to be created based on infrastructure associated with a private network. A deployment template is created by performing intent translation on the declarative intents and based on a set of constraints. The deployment template identifies the container orchestration platform selected by the intent translation. The deployment template is then executed to deploy and configure the edge site, including provisioning and configuring the infrastructure, installing the container orchestration platform on the infrastructure, configuring the cluster within the container orchestration platform, and deploying a containerized application or portion thereof on the cluster.
-
7.
Publication No.: US11665106B2
Publication Date: 2023-05-30
Application No.: US17468517
Filing Date: 2021-09-07
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Ali Tariq , Lianjie Cao , Faraz Ahmed , Puneet Sharma
IPC: H04L43/16 , H04L43/0882 , H04L47/80 , H04L47/78 , H04L47/70 , H04L47/762
CPC classification number: H04L47/803 , H04L43/0882 , H04L43/16 , H04L47/762 , H04L47/781 , H04L47/822
Abstract: Systems and methods are provided for updating resource allocation in a distributed network. For example, the method may comprise allocating a plurality of resource containers in a distributed network in accordance with a first distributed resource configuration. Upon determining that a processing workload value exceeds a stabilization threshold of the distributed network, a resource efficiency value of the plurality of resource containers in the distributed network is determined. When the resource efficiency value is greater than or equal to a threshold resource efficiency value, the method may generate a second distributed resource configuration that includes a resource upscaling process; when the resource efficiency value is less than the threshold resource efficiency value, the method may generate the second distributed resource configuration that includes a resource outscaling process. The second distributed resource configuration may then be transmitted to update the resource allocation.
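The upscale-versus-outscale decision reduces to a threshold comparison; this minimal sketch assumes normalized workload and efficiency values in [0, 1], and the threshold defaults are illustrative.

```python
# Minimal sketch of the scaling decision; thresholds are illustrative.
def next_configuration(workload, efficiency,
                       stabilization=0.8, eff_threshold=0.6):
    """Return the scaling action for the second resource configuration."""
    if workload <= stabilization:
        return "keep"      # workload within the stabilization threshold
    if efficiency >= eff_threshold:
        return "upscale"   # containers used efficiently: grow them (vertical)
    return "outscale"      # containers used inefficiently: add more (horizontal)
```

The intuition: if existing containers are already running efficiently, giving them more resources helps; if not, spreading load across additional containers is the better second configuration.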
-
8.
Publication No.: US12141608B2
Publication Date: 2024-11-12
Application No.: US18469695
Filing Date: 2023-09-19
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Lianjie Cao , Faraz Ahmed , Puneet Sharma
IPC: G06F9/50 , G06F9/30 , G06F11/34 , G06F18/214 , G06F18/2415 , G06N20/00
Abstract: Systems and methods are provided for optimally allocating resources used to perform multiple tasks/jobs, e.g., machine learning training jobs. The possible resource configurations or candidates that can be used to perform such jobs are generated. A first batch of training jobs can be randomly selected and run using one of the possible resource configuration candidates. Subsequent batches of training jobs may be performed using other resource configuration candidates that have been selected using an optimization process, e.g., Bayesian optimization. Upon reaching a stopping criterion, the resource configuration resulting in a desired optimization metric, e.g., fastest job completion time, can be selected and used to execute the remaining training jobs.
-
9.
Publication No.: US20240289421A1
Publication Date: 2024-08-29
Application No.: US18654953
Filing Date: 2024-05-03
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Lianjie Cao , Faraz Ahmed , Puneet Sharma , Ali Tariq
IPC: G06F18/214 , G06F9/50 , G06F11/34 , G06F18/2415 , G06N20/00
CPC classification number: G06F18/214 , G06F9/5022 , G06F9/5027 , G06F9/505 , G06F9/5061 , G06F11/3414 , G06F18/24155 , G06N20/00
Abstract: Systems and methods can be configured to determine a plurality of computing resource configurations used to perform machine learning model training jobs. A computing resource configuration can comprise: a first tuple including numbers of worker nodes and parameter server nodes, and a second tuple including resource allocations for the worker nodes and parameter server nodes. At least one machine learning training job can be executed using a first computing resource configuration having a first set of values associated with the first tuple. During execution of the machine learning training job, resource usage of the worker nodes and parameter server nodes caused by a second set of values associated with the second tuple can be monitored, and whether to adjust the second set of values can be determined. Whether a stopping criterion is satisfied can be determined. One of the plurality of computing resource configurations can be selected.
-
10.
Publication No.: US20230222034A1
Publication Date: 2023-07-13
Application No.: US18175091
Filing Date: 2023-02-27
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Diman Zad Tootaghaj , Puneet Sharma , Faraz Ahmed , Michael Zayats
CPC classification number: G06F11/1425 , G06F18/23213 , G06F9/5072 , G06F9/5077 , G06F9/5083 , G06F11/187 , G06F2209/508 , G06F2209/505
Abstract: Example implementations relate to consensus protocols in a stretched network. According to an example, network performance and/or network latency among a cluster of a plurality of nodes in a distributed computer system is continuously monitored. Leadership priority for each node is set based at least in part on the monitored network performance or network latency. Each node has a vote weight based at least in part on the leadership priority of the node. Each node's vote is biased by the node's vote weight. The node having a number of biased votes higher than a maximum possible number of votes biased by respective vote weights received by any other node in the cluster is selected as a leader node.
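The biased election can be sketched as a weighted tally; this simplifies the abstract's winner condition to a plain argmax and assumes an inverse-latency weighting, both of which are illustrative choices rather than the patented rule.

```python
# Sketch of a latency-biased election; the inverse-latency weighting and the
# argmax winner rule are illustrative simplifications.
def elect_leader(votes, latency_ms):
    """votes: dict voter -> candidate; latency_ms: dict node -> measured latency.

    Lower-latency nodes get higher leadership priority, hence a larger
    vote weight, so their votes count for more in the tally.
    """
    weight = {node: 1.0 / latency_ms[node] for node in latency_ms}
    tally = {}
    for voter, candidate in votes.items():
        tally[candidate] = tally.get(candidate, 0.0) + weight[voter]
    return max(tally, key=tally.get)
```

A well-connected node can thus win the election even with fewer raw votes, steering leadership toward the low-latency part of the stretched cluster.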
-