-
61.
Publication No.: US10009285B2
Publication Date: 2018-06-26
Application No.: US14908745
Application Date: 2013-07-30
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Jeffrey Clifford Mogul , Alvin Auyoung , Sujata Banerjee , Jung Gun Lee , Jean Tourrilhes , Michael Schlansker , Puneet Sharma , Lucian Popa
IPC: G06F15/173 , H04L12/911 , G06F9/50 , H04L12/24
CPC classification number: H04L47/70 , G06F9/50 , H04L41/0893 , Y02D10/22
Abstract: An example method for allocating resources in accordance with aspects of the present disclosure includes collecting proposals from a plurality of modules, the proposals assigning the resources to the plurality of modules and resulting in topology changes in a computer network environment; identifying, among the collected proposals, a set of proposals that comply with policies associated with the plurality of modules; instructing the plurality of modules to evaluate the set of proposals; selecting a proposal from the set of proposals; and instructing at least one module associated with the selected proposal to instantiate the selected proposal.
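The abstract describes a collect / filter / evaluate / select / instantiate loop over module proposals. The minimal Python sketch below shows that control flow only; the Proposal class and the module methods propose(), evaluate(), and instantiate() are hypothetical stand-ins, not names from the patent.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    owner: object          # module that submitted the proposal
    assignment: dict       # resource -> module assignment implied by the proposal
    score: float = 0.0     # filled in during the evaluation step

def allocate(modules, policies):
    # 1. Collect proposals from every module.
    proposals = [p for m in modules for p in m.propose()]
    # 2. Keep only proposals that comply with every policy.
    feasible = [p for p in proposals if all(policy(p) for policy in policies)]
    # 3. Instruct the modules to evaluate the feasible set.
    for p in feasible:
        p.score = sum(m.evaluate(p) for m in modules)
    # 4. Select a proposal and instruct its owner to instantiate it.
    chosen = max(feasible, key=lambda p: p.score)
    chosen.owner.instantiate(chosen)
    return chosen
```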
-
62.
Publication No.: US20170318097A1
Publication Date: 2017-11-02
Application No.: US15142141
Application Date: 2016-04-29
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Julie Ward Drew , Freddy Chua , Ying Zhang , Puneet Sharma , Bernardo Huberman
IPC: H04L29/08 , H04L12/24 , H04L12/911
CPC classification number: H04L67/16 , H04L41/0806 , H04L41/145 , H04L45/64 , H04L47/783 , H04L67/10 , H04L67/2833
Abstract: Example implementations relate to virtualized network function (VNF) placement. For example, VNF placement may include generating an initial mapping of a plurality of VNFs among a plurality of nodes of a network infrastructure, wherein the initial VNF mapping distributes each of a plurality of service chains associated with the plurality of VNFs to different top-of-rack switches. VNF placement may include generating an alternate VNF mapping of the plurality of VNFs among a portion of the plurality of nodes, wherein the alternate VNF mapping corresponds to a metric associated with node resource utilization and a particular number of servers utilized by distributing the plurality of service chains according to the alternate VNF mapping. VNF placement may include placing the plurality of VNFs according to a placement selected from the initial VNF mapping and the alternate VNF mapping.
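As a rough illustration of the two mappings the abstract contrasts, the sketch below first spreads chains across top-of-rack switches and then consolidates them onto as few servers as possible. The chain representation (lists of (vnf, demand) pairs) and the first-fit heuristic are assumptions for the example, not the patent's algorithm.

```python
def initial_mapping(chains, tor_switches):
    # Distribute successive service chains to different top-of-rack switches (round robin).
    placement = {}
    for i, chain in enumerate(chains):
        for vnf, _demand in chain:
            placement[vnf] = tor_switches[i % len(tor_switches)]
    return placement

def alternate_mapping(chains, servers, capacity):
    # Consolidate VNFs onto as few servers as possible (first fit within server capacity).
    load = {s: 0.0 for s in servers}
    placement = {}
    for chain in chains:
        for vnf, demand in chain:
            server = next(s for s in servers if load[s] + demand <= capacity)
            load[server] += demand
            placement[vnf] = server
    return placement

def servers_used(placement):
    # Stand-in for the utilization metric used to pick between the two mappings.
    return len(set(placement.values()))
```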
-
63.
Publication No.: US12141608B2
Publication Date: 2024-11-12
Application No.: US18469695
Application Date: 2023-09-19
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Lianjie Cao , Faraz Ahmed , Puneet Sharma
IPC: G06F9/50 , G06F9/30 , G06F11/34 , G06F18/214 , G06F18/2415 , G06N20/00
Abstract: Systems and methods are provided for optimally allocating resources used to perform multiple tasks/jobs, e.g., machine learning training jobs. The possible resource configurations or candidates that can be used to perform such jobs are generated. A first batch of training jobs can be randomly selected and run using one of the possible resource configuration candidates. Subsequent batches of training jobs may be performed using other resource configuration candidates that have been selected using an optimization process, e.g., Bayesian optimization. Upon reaching a stopping criterion, the resource configuration resulting in a desired optimization metric, e.g., the fastest job completion time, can be selected and used to execute the remaining training jobs.
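A compressed sketch of the batch-wise search loop follows. The run_job callback, the candidate list, and the greedy choice that stands in for the Bayesian-optimization step are all illustrative assumptions.

```python
import random

def tune(candidates, jobs, run_job, batch_size=4, patience=2):
    """Try resource configurations over batches of training jobs and keep the fastest.

    run_job(job, config) is assumed to return the job completion time in seconds.
    The first batch explores candidates at random; later batches reuse the best
    configuration seen so far (a greedy stand-in for Bayesian optimization).
    """
    best_time = {c: float("inf") for c in candidates}
    stale = 0
    for start in range(0, len(jobs), batch_size):
        previous_best = min(best_time.values())
        for job in jobs[start:start + batch_size]:
            config = random.choice(candidates) if start == 0 else min(best_time, key=best_time.get)
            best_time[config] = min(best_time[config], run_job(job, config))
        # Stopping criterion: the best observed completion time stops improving.
        stale = stale + 1 if min(best_time.values()) >= previous_best else 0
        if stale >= patience:
            break
    return min(best_time, key=best_time.get)
```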
-
64.
Publication No.: US20240345875A1
Publication Date: 2024-10-17
Application No.: US18299855
Application Date: 2023-04-13
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Diman Zad Tootaghaj , Yunming Xiao , Aditya Dhakal , Puneet Sharma
CPC classification number: G06F9/4881 , G06F9/4856 , G06F9/5027
Abstract: In some examples, a system including physical graphics processing units (GPUs) receives a request to schedule a new job to be executed in the system that is accessible by a plurality of tenants to use the physical GPUs. The system allocates the new job to a collection of vGPUs of the physical GPUs based on an operational cost reduction objective to reduce a cost associated with a usage of the physical GPUs, and based on a tenant isolation constraint to provide tenant isolation, wherein a single tenant of the plurality of tenants is to use a physical GPU at a time.
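The sketch below illustrates the two constraints the abstract combines: tenant isolation (a physical GPU serves one tenant at a time) and cost reduction (pack onto GPUs already serving the tenant before powering on idle ones). The data layout and the greedy order are hypothetical.

```python
def place_job(job_id, tenant, vgpus_needed, gpus):
    """gpus: dict gpu_id -> {"tenant": None or tenant name, "free": free vGPU slots}."""
    # Isolation constraint: only GPUs that are idle or already serving this tenant qualify.
    usable = [g for g, s in gpus.items() if s["tenant"] in (None, tenant)]
    # Cost objective: prefer GPUs already serving the tenant over powering on idle ones.
    usable.sort(key=lambda g: gpus[g]["tenant"] is None)
    allocation = {}
    for g in usable:
        if vgpus_needed == 0:
            break
        take = min(gpus[g]["free"], vgpus_needed)
        if take == 0:
            continue
        gpus[g]["tenant"] = tenant
        gpus[g]["free"] -= take
        allocation[g] = take
        vgpus_needed -= take
    if vgpus_needed > 0:
        raise RuntimeError("not enough isolated vGPU capacity for job %s" % job_id)
    return allocation
```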
-
65.
Publication No.: US20240289421A1
Publication Date: 2024-08-29
Application No.: US18654953
Application Date: 2024-05-03
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Lianjie Cao , Faraz Ahmed , Puneet Sharma , Ali Tariq
IPC: G06F18/214 , G06F9/50 , G06F11/34 , G06F18/2415 , G06N20/00
CPC classification number: G06F18/214 , G06F9/5022 , G06F9/5027 , G06F9/505 , G06F9/5061 , G06F11/3414 , G06F18/24155 , G06N20/00
Abstract: Systems and methods can be configured to determine a plurality of computing resource configurations used to perform machine learning model training jobs. A computing resource configuration can comprise: a first tuple including numbers of worker nodes and parameter server nodes, and a second tuple including resource allocations for the worker nodes and parameter server nodes. At least one machine learning training job can be executed using a first computing resource configuration having a first set of values associated with the first tuple. During execution of the machine learning training job, resource usage of the worker nodes and parameter server nodes caused by a second set of values associated with the second tuple can be monitored, and whether to adjust the second set of values can be determined. Whether a stopping criterion is satisfied can be determined. One of the plurality of computing resource configurations can be selected.
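To make the two-tuple configuration concrete, here is one possible data layout plus the kind of adjustment the abstract describes for the second tuple. The field names, units, and headroom rule are assumptions for the example.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ResourceConfig:
    # First tuple: node counts.
    num_workers: int
    num_param_servers: int
    # Second tuple: per-node allocations (illustrative units: vCPUs and GiB of memory).
    worker_cpu: float
    worker_mem: float
    ps_cpu: float
    ps_mem: float

def adjust_allocations(config, peak_usage, headroom=0.2):
    # Resize the second tuple toward observed peak usage plus a safety margin,
    # leaving the first tuple (node counts) unchanged.
    return replace(
        config,
        worker_cpu=peak_usage["worker_cpu"] * (1 + headroom),
        worker_mem=peak_usage["worker_mem"] * (1 + headroom),
        ps_cpu=peak_usage["ps_cpu"] * (1 + headroom),
        ps_mem=peak_usage["ps_mem"] * (1 + headroom),
    )
```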
-
66.
Publication No.: US11805060B2
Publication Date: 2023-10-31
Application No.: US17554935
Application Date: 2021-12-17
Applicant: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventor: Jean Tourrilhes , Puneet Sharma
IPC: H04L47/215 , H04L47/625 , H04L47/32
CPC classification number: H04L47/215 , H04L47/32 , H04L47/6255
Abstract: Systems and methods are provided for a new type of quality of service (QoS) primitive at a network device that has better performance than traditional QoS primitives. The QoS primitive may comprise a token bucket with active queue management (TBAQM). Particularly, the TBAQM may receive a data packet that is processed by the token bucket; adjust tokens associated with the token bucket, where the tokens are added based on a configured rate and subtracted in association with processing the data packet; determine a number of tokens associated with the token bucket, comprising: when the token bucket has zero tokens, initiating a first action with the data packet, and when the token bucket has more than zero tokens, determining a marking probability based on the number of tokens and initiating a second action based on the marking probability.
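A minimal sketch of a token bucket whose marking probability grows as the bucket drains is shown below. The linear marking function and the parameters are illustrative, not the TBAQM's exact behavior.

```python
import random
import time

class TokenBucketAQM:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # token fill rate in bytes per second
        self.burst = float(burst_bytes)   # bucket depth
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def on_packet(self, size_bytes):
        # Add tokens at the configured rate since the last packet.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens <= 0:
            return "drop"                 # first action: bucket is empty
        # Marking probability based on the number of tokens (emptier bucket -> higher probability).
        p_mark = 1.0 - self.tokens / self.burst
        self.tokens -= size_bytes         # subtract tokens for processing this packet
        return "mark" if random.random() < p_mark else "forward"
```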
-
67.
Publication No.: US11792207B2
Publication Date: 2023-10-17
Application No.: US17539831
Application Date: 2021-12-01
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Puneet Sharma , Arun Raghuramu , David Lee
IPC: H04L9/40 , G06F21/64 , H04L9/32 , H04W12/106 , H04W4/40 , H04L67/12 , H04W4/021 , G06F9/455 , G06F16/27 , G06F16/182 , H04W4/70 , H04L9/00
CPC classification number: H04L63/123 , G06F9/45558 , G06F16/1834 , G06F16/27 , G06F21/64 , H04L9/3297 , H04L63/0485 , H04L63/0492 , H04W4/70 , H04W12/106 , G06F2009/45587 , G06F2009/45595 , H04L9/50 , H04L67/12 , H04L2209/127 , H04W4/021 , H04W4/40
Abstract: In some examples, a secure compliance protocol may include a virtual computing instance (VCI) deployed on a hypervisor and provisioned with hardware computing resources. In some examples, the VCI may also include a cryptoprocessor to provide cryptoprocessing for securely communicating with a plurality of nodes, and a plurality of agents to generate a plurality of compliance proofs. The VCI may communicate with a server corresponding to a node of the plurality of nodes and receive a time stamp corresponding to at least one compliance proof based on a metric of a connected device.
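As a loose illustration of a timestamped compliance proof, the snippet below signs a connected-device metric. HMAC-SHA256 stands in for the VCI's cryptoprocessor and the record fields are invented for the example.

```python
import hashlib
import hmac
import json
import time

def compliance_proof(device_id, metric_name, metric_value, key):
    # Build a compliance proof for one connected-device metric and bind a time stamp to it.
    record = {
        "device": device_id,
        "metric": metric_name,
        "value": metric_value,
        "timestamp": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    # HMAC-SHA256 stands in for the cryptoprocessor in this sketch.
    record["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record

# Example: compliance_proof("sensor-17", "firmware_version", "2.4.1", key=b"shared-secret")
```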
-
68.
Publication No.: US20230281052A1
Publication Date: 2023-09-07
Application No.: US17683524
Application Date: 2022-03-01
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Diman Zad Tootaghaj , Anu Mercian , Puneet Sharma
CPC classification number: G06F9/505 , H04L47/805 , H04L47/823 , G06F2209/5019 , G06F2209/508
Abstract: Systems and methods are provided for strategically harvesting untapped compute capacity of hardware accelerators to manage transient workload spikes at computing systems. Examples provide a low-cost and scalable computing system which orchestrates seamless offloading of workloads to hardware accelerators during transient workload spikes. By utilizing hardware accelerators as short-term emergency buffers, examples improve upon existing approaches that deploy more expensive, and often significantly under-utilized, servers for these emergency purposes. Accordingly, examples may reduce the occurrence of SLA violations while minimizing capital expenditure in computing power.
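The sketch below captures the offloading idea in its simplest form: requests go to regular servers until fleet utilization crosses a spike threshold, after which spare accelerator capacity absorbs the overflow. The threshold, object attributes, and routing rule are assumptions, not the patented mechanism.

```python
def dispatch(request, servers, accelerators, spike_threshold=0.8):
    # Fleet utilization across the regular servers.
    utilization = sum(s.load for s in servers) / sum(s.capacity for s in servers)
    if utilization < spike_threshold:
        # Normal operation: route to the least-loaded regular server.
        target = min(servers, key=lambda s: s.load / s.capacity)
    else:
        # Transient spike: harvest the accelerator with the most untapped capacity
        # as a short-term emergency buffer.
        target = max(accelerators, key=lambda a: a.capacity - a.load)
    target.enqueue(request)
    return target
```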
-
69.
Publication No.: US20230222034A1
Publication Date: 2023-07-13
Application No.: US18175091
Application Date: 2023-02-27
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Diman Zad Tootaghaj , Puneet Sharma , Faraz Ahmed , Michael Zayats
CPC classification number: G06F11/1425 , G06F18/23213 , G06F9/5072 , G06F9/5077 , G06F9/5083 , G06F11/187 , G06F2209/508 , G06F2209/505
Abstract: Example implementations relate to consensus protocols in a stretched network. According to an example, network performance and/or network latency are continuously monitored among a cluster of a plurality of nodes in a distributed computer system. Leadership priority for each node is set based at least in part on the monitored network performance or network latency. Each node has a vote weight based at least in part on the leadership priority of the node, and each node's vote is biased by the node's vote weight. The node whose number of biased votes is higher than the maximum possible number of biased votes receivable by any other node in the cluster is selected as the leader node.
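A toy version of the weighted vote count might look like the following. The latency-to-weight mapping and the winning condition are simplified stand-ins for the patent's exact rules.

```python
def elect_leader(nodes, latency_ms, ballots):
    """nodes: node ids; latency_ms: node -> measured latency; ballots: voter -> candidate."""
    # Leadership priority (and hence vote weight) rises as measured latency falls.
    weight = {n: 1.0 / (1.0 + latency_ms[n]) for n in nodes}
    tally = {n: 0.0 for n in nodes}
    for voter, candidate in ballots.items():
        tally[candidate] += weight[voter]          # each vote is biased by the voter's weight
    leader, best = max(tally.items(), key=lambda kv: kv[1])
    # Require the winner's weighted tally to exceed what any rival could still collect.
    if best > sum(weight.values()) - best:
        return leader
    return None
```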
-
70.
Publication No.: US11651470B2
Publication Date: 2023-05-16
Application No.: US17360122
Application Date: 2021-06-28
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Diman Zad Tootaghaj , Junguk Cho , Puneet Sharma
IPC: G06T1/20
CPC classification number: G06T1/20
Abstract: Example implementations relate to scheduling of jobs for a plurality of graphics processing units (GPUs) providing concurrent processing by a plurality of virtual GPUs (vGPUs). According to an example, a computing system including one or more GPUs receives a request to schedule a new job to be executed by the computing system. The new job is allocated to one or more vGPUs, and allocations of existing jobs to one or more vGPUs are updated. The operational cost of operating the one or more GPUs and the migration cost of allocating the new job are minimized as the allocations of the existing jobs on the one or more vGPUs are updated. The new job and the existing jobs are then processed by the one or more GPUs in the computing system.
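The joint objective (operational cost plus migration cost) can be illustrated with a brute-force search over a small cluster. Capacity constraints are ignored here and the cost weights are invented for the example.

```python
from itertools import product

def schedule(new_job, existing_jobs, gpus, op_weight=1.0, mig_weight=0.5):
    """Jobs are dicts with an "id"; existing jobs also carry their current "gpu"."""
    jobs = existing_jobs + [new_job]
    best_placement, best_cost = None, float("inf")
    for placement in product(gpus, repeat=len(jobs)):
        operational = op_weight * len(set(placement))    # GPUs that must stay powered on
        migration = mig_weight * sum(                    # existing jobs moved off their GPU
            placement[i] != existing_jobs[i]["gpu"] for i in range(len(existing_jobs)))
        cost = operational + migration
        if cost < best_cost:
            best_placement, best_cost = placement, cost
    return {job["id"]: gpu for job, gpu in zip(jobs, best_placement)}, best_cost
```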
-