-
41.
Publication No.: US20180288101A1
Publication Date: 2018-10-04
Application No.: US15472939
Filing Date: 2017-03-29
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Puneet Sharma , Arun Raghuramu , David Lee
Abstract: In some examples, a method includes establishing, by a network device acting as an orchestrator host for a virtual network function (VNF), a trust relationship with a VNF vendor that dynamically specifies a set of usage right policies; determining, by the network device, allowed usage rights associated with the VNF based on the set of usage right policies; installing, by the network device, the VNF on a plurality of compute nodes based on the allowed usage rights; and auditing VNF usage right compliance by: issuing a proof quote request to the plurality of compute nodes; receiving a proof quote response comprising a plurality of resource states on the plurality of compute nodes; and verifying whether usage of the VNF on the plurality of compute nodes complies with the allowed usage rights based on the proof quote response.
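The audit step of this abstract can be sketched as follows. This is a minimal illustration, not the patented implementation; the names (`ProofQuote`, `request_quote`) and the flat dict of per-resource limits are assumptions for the sketch.

```python
from dataclasses import dataclass

@dataclass
class ProofQuote:
    node_id: str
    resource_states: dict  # e.g. {"vcpu": 4, "instances": 1}

def audit_vnf_usage(nodes, allowed_rights, request_quote):
    """Issue a proof quote request to each compute node and check the
    reported resource states against the allowed usage rights."""
    violations = []
    for node in nodes:
        quote = request_quote(node)          # proof quote response per node
        for resource, used in quote.resource_states.items():
            if used > allowed_rights.get(resource, 0):
                violations.append((quote.node_id, resource, used))
    return violations                        # empty list => compliant
```

An empty result indicates all compute nodes comply with the allowed usage rights; any tuple returned names the offending node and resource.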
-
42.
Publication No.: US20180145893A1
Publication Date: 2018-05-24
Application No.: US15571522
Filing Date: 2015-05-12
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Puneet Sharma , Mehdi Malboubi
CPC classification number: H04L43/0817 , G06F2009/45591 , H04L41/0213 , H04L41/142 , H04L43/062 , H04L43/0882 , H04L43/0894
Abstract: In some examples, a method can include receiving, at a network monitor, discrete side information from a first server at a first rack regarding a data flow between the first server and a second server at a rack other than the first rack. The discrete side information can, for example, include an indicator determined by the first server that indicates whether the data flow satisfies a reference criterion. The method can further include performing, with the network monitor, a network inference process partly based on the received discrete side information.
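The two roles in this abstract can be sketched as follows, under assumed simplifications: the reference criterion is a byte threshold, and the monitor's "inference" is reduced to clamping a prior estimate so it agrees with the one-bit indicator.

```python
def make_side_info(flow_bytes, threshold):
    """First server: reduce a flow measurement to one discrete indicator --
    does the flow satisfy the reference criterion (a byte threshold here)?"""
    return 1 if flow_bytes >= threshold else 0

def refine_estimate(prior_estimate, indicator, threshold):
    """Network monitor: clamp a prior flow estimate so it is consistent
    with the received indicator (a crude stand-in for network inference)."""
    if indicator and prior_estimate < threshold:
        return threshold            # flow is known to be at least threshold
    if not indicator and prior_estimate >= threshold:
        return threshold - 1        # flow is known to be below threshold
    return prior_estimate
```

The point of the design is bandwidth: the server ships a single bit per flow rather than full measurements, and the monitor folds that bit into its inference.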
-
43.
Publication No.: US12277080B2
Publication Date: 2025-04-15
Application No.: US18460043
Filing Date: 2023-09-01
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Yunming Xiao , Diman Zad Tootaghaj , Aditya Dhakal , Puneet Sharma
IPC: G06F13/36
Abstract: In certain embodiments, a method includes receiving, at an interface of a Smart network interface card (SmartNIC) of a computing device, via a network, a network data unit; processing, by a data allocator of a SmartNIC subsystem of the SmartNIC, the network data unit to make a determination that data included in the network data unit is intended for processing by an accelerator of the computing device, wherein the accelerator is configured to execute a machine learning algorithm; storing, by the data allocator and based on the determination, the data in a local buffer of the SmartNIC subsystem; identifying, by the data allocator, a memory resource associated with the accelerator; and transferring the data from the local buffer to the memory resource.
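The data allocator's decision-and-transfer path can be sketched as below. This is a toy model under assumed names: the network data unit is a dict with a `"target"` field, and buffer/memory are plain lists standing in for the SmartNIC local buffer and the accelerator's memory resource.

```python
def handle_network_data_unit(ndu, local_buffer, accelerator_memory):
    """Data allocator: if the payload is intended for the accelerator,
    stage it in the SmartNIC-local buffer and then transfer it to the
    memory resource associated with the accelerator."""
    if ndu.get("target") != "accelerator":
        return False                         # leave for the host path
    local_buffer.append(ndu["data"])         # store in local buffer
    accelerator_memory.extend(local_buffer)  # local buffer -> accel memory
    local_buffer.clear()
    return True
```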
-
44.
Publication No.: US20250110883A1
Publication Date: 2025-04-03
Application No.: US18477557
Filing Date: 2023-09-29
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Lianjie Cao , Zhen Lin , Faraz Ahmed , Puneet Sharma
IPC: G06F12/0891 , G06F12/06
Abstract: In certain embodiments, a computer-implemented method includes: receiving, by a caching system plugin, a request to create a persistent volume for a container application instance; configuring, by the caching system plugin, a local cache volume on a host computing device; configuring, by the caching system plugin, a remote storage volume on a remote storage device; selecting, by a policy manager of the caching system plugin, a cache policy for the container application instance; creating, by the caching system plugin and from a cache manager, a virtual block device associated with the local cache volume, the remote storage volume, and the cache policy; and providing the virtual block device for use by the container application instance as the persistent volume.
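The plugin's assembly of a persistent volume can be sketched as below. All paths and names (`/dev/cache/...`, `vbd-...`, `remote://store/...`) are hypothetical placeholders; the policy manager is modeled as a callable passed in.

```python
def create_persistent_volume(app_id, select_policy):
    """Caching-system plugin: back one persistent volume with a local
    cache volume, a remote storage volume, and a policy-manager choice,
    exposed to the container application as a virtual block device."""
    return {
        "virtual_block_device": f"vbd-{app_id}",       # from cache manager
        "local_cache": f"/dev/cache/{app_id}",         # on host device
        "remote_volume": f"remote://store/{app_id}",   # on remote storage
        "policy": select_policy(app_id),               # per-app cache policy
    }
```

The container application instance sees only the virtual block device; the cache policy governs how reads and writes split between the local cache and the remote volume behind it.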
-
45.
Publication No.: US20240289180A1
Publication Date: 2024-08-29
Application No.: US18175411
Filing Date: 2023-02-27
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Faraz Ahmed , Lianjie Cao , Puneet Sharma
IPC: G06F9/50
CPC classification number: G06F9/5083 , G06F9/5016 , G06F9/5027
Abstract: Systems and methods are provided for optimizing a serverless workflow. Given a directed acyclic graph ("DAG") defining functional relationships and a gamma tuning factor to indicate a preference between cost and performance, a serverless workflow corresponding to the DAG may be optimized. The optimization is carried out in accordance with the gamma tuning factor, and is carried out in sub-segments of the DAG called stages. In addition, systems for allowing disparate types of storage media to be utilized by a serverless platform to store data are disclosed. The serverless platform maintains visibility of the storage media types underlying persistent volumes, and may store data in partitions across disparate types of storage media. For instance, one item of data may be stored partially at a byte-addressed storage media and partially at a block-addressed storage media.
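The gamma tuning factor can be read as a per-stage blended objective, sketched below. The convention that gamma = 1 favours cost and gamma = 0 favours performance is an assumption of the sketch, as is representing a stage configuration as a `(cost, latency)` pair.

```python
def stage_score(cost, latency, gamma):
    """Blend cost against performance for one DAG stage; under this
    assumed convention gamma=1 optimises purely for cost, gamma=0 for
    latency (performance)."""
    return gamma * cost + (1.0 - gamma) * latency

def pick_stage_config(candidates, gamma):
    """Choose the (cost, latency) configuration minimising the score."""
    return min(candidates, key=lambda c: stage_score(c[0], c[1], gamma))
```

Optimising stage by stage rather than over the whole DAG keeps the search tractable; the same gamma is applied to every stage so the cost/performance preference is consistent across the workflow.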
-
46.
Publication No.: US12067420B2
Publication Date: 2024-08-20
Application No.: US17077962
Filing Date: 2020-10-22
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Junguk Cho , Puneet Sharma , Dominik Stiller
CPC classification number: G06F9/5011 , B60W50/00 , G05D1/0212 , G06N20/00 , G08G1/20 , G08G1/202
Abstract: Systems and methods are provided for improving autotuning procedures. For example, the system can implement a task launcher, a scheduler, and an agent to launch, schedule, and execute decomposed autotuning stages, respectively. The scheduling policy implemented by the scheduler may perform operations beyond a simple scheduling policy (e.g., a FIFO-based scheduling policy), which produces a high queuing delay. By leveraging autotuning specific domain knowledge, this may help reduce queuing delay and improve resource utilization that is otherwise found in traditional systems.
-
47.
Publication No.: US12001511B2
Publication Date: 2024-06-04
Application No.: US17199294
Filing Date: 2021-03-11
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Lianjie Cao , Faraz Ahmed , Puneet Sharma , Ali Tariq
IPC: G06F18/214 , G06F9/50 , G06F11/30 , G06F11/34 , G06F18/2415 , G06N3/0464 , G06N3/063 , G06N3/0985 , G06N7/01 , G06N20/00 , G06V40/16
CPC classification number: G06F18/214 , G06F9/5022 , G06F9/5027 , G06F9/505 , G06F9/5061 , G06F11/3414 , G06F18/24155 , G06N20/00
Abstract: Systems and methods can be configured to determine a plurality of computing resource configurations used to perform machine learning model training jobs. A computing resource configuration can comprise: a first tuple including numbers of worker nodes and parameter server nodes, and a second tuple including resource allocations for the worker nodes and parameter server nodes. At least one machine learning training job can be executed using a first computing resource configuration having a first set of values associated with the first tuple. During the executing the machine learning training job: resource usage of the worker nodes and parameter server nodes caused by a second set of values associated with the second tuple can be monitored, and whether to adjust the second set of values can be determined. Whether a stopping criterion is satisfied can be determined. One of the plurality of computing resource configurations can be selected.
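The two-tuple configuration and the monitor-and-adjust loop can be sketched as follows. The trial budget as the stopping criterion, the 0.5 utilisation cutoff, and halving the allocation are all assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class ResourceConfig:
    topology: tuple    # first tuple: (num_workers, num_parameter_servers)
    allocation: tuple  # second tuple: (worker_cpus, ps_cpus)

def tune(configs, run_job, max_trials):
    """Run a training job per candidate configuration, shrinking the
    allocation tuple when monitored usage is low, until the trial budget
    (the stopping criterion here) is exhausted; return the fastest."""
    best, best_time = None, float("inf")
    for trial, cfg in enumerate(configs):
        if trial >= max_trials:                 # stopping criterion
            break
        elapsed, usage = run_job(cfg)           # monitor during execution
        if usage < 0.5:                         # under-utilised resources
            cfg.allocation = tuple(a / 2 for a in cfg.allocation)
        if elapsed < best_time:
            best, best_time = cfg, elapsed
    return best
```

Separating the node-count tuple from the allocation tuple lets the tuner adjust resource sizes cheaply mid-search without re-enumerating topologies.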
-
48.
Publication No.: US11698780B2
Publication Date: 2023-07-11
Application No.: US17236884
Filing Date: 2021-04-21
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Lianjie Cao , Anu Mercian , Diman Zad Tootaghaj , Faraz Ahmed , Puneet Sharma
Abstract: Embodiments described herein are generally directed to an edge-CaaS (eCaaS) framework for providing life-cycle management of containerized applications on the edge. According to an example, declarative intents are received indicative of a use case for which a cluster of a container orchestration platform is to be deployed within an edge site that is to be created based on infrastructure associated with a private network. A deployment template is created by performing intent translation on the declarative intents and based on a set of constraints. The deployment template identifies the container orchestration platform selected by the intent translation. The deployment template is then executed to deploy and configure the edge site, including provisioning and configuring the infrastructure, installing the container orchestration platform on the infrastructure, configuring the cluster within the container orchestration platform, and deploying a containerized application or portion thereof on the cluster.
-
49.
Publication No.: US11665106B2
Publication Date: 2023-05-30
Application No.: US17468517
Filing Date: 2021-09-07
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Ali Tariq , Lianjie Cao , Faraz Ahmed , Puneet Sharma
IPC: H04L43/16 , H04L43/0882 , H04L47/80 , H04L47/78 , H04L47/70 , H04L47/762
CPC classification number: H04L47/803 , H04L43/0882 , H04L43/16 , H04L47/762 , H04L47/781 , H04L47/822
Abstract: Systems and methods are provided for updating resource allocation in a distributed network. For example, the method may comprise allocating a plurality of resource containers in a distributed network in accordance with a first distributed resource configuration. Upon determining that a processing workload value exceeds a stabilization threshold of the distributed network, the method may determine a resource efficiency value of the plurality of resource containers in the distributed network. When the resource efficiency value is greater than or equal to a threshold resource efficiency value, the method may generate a second distributed resource configuration that includes a resource upscaling process; when the resource efficiency value is less than the threshold resource efficiency value, the method may generate the second distributed resource configuration that includes a resource outscaling process. The method may then transmit the second distributed resource configuration to update the resource allocation.
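The decision between the two configurations can be sketched as a three-way branch. The string labels and the reading of "upscale" as growing existing containers versus "outscale" as adding containers are assumptions for the sketch.

```python
def next_configuration(workload, stabilization_threshold,
                       efficiency, efficiency_threshold):
    """Pick the second distributed resource configuration: upscale (grow
    existing containers) when efficiency is high enough, otherwise
    outscale (add more containers)."""
    if workload <= stabilization_threshold:
        return "keep"          # workload has not exceeded the threshold
    if efficiency >= efficiency_threshold:
        return "upscale"       # resource upscaling process
    return "outscale"          # resource outscaling process
```

Intuitively: high efficiency means the existing containers are well used, so giving them more resources pays off; low efficiency suggests a bottleneck that more resources per container will not fix, so the system spreads load across additional containers instead.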
-
50.
Publication No.: US20230089925A1
Publication Date: 2023-03-23
Application No.: US17448299
Filing Date: 2021-09-21
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Junguk Cho , Puneet Sharma , Diman Zad Tootaghaj
Abstract: Architectures and techniques for managing heterogeneous sets of physical GPUs. Functionality information is collected for one or more physical GPUs by a GPU device manager coupled with a heterogeneous set of physical GPUs. At least one of the physical GPUs is managed as multiple virtual GPUs by the GPU device manager based on the collected functionality information. Each of the physical GPUs is classified by the device manager as either a single physical GPU or as one or more virtual GPUs. Traffic representing processing jobs to be processed by at least a subset of the physical GPUs is received via a gateway programmed by a traffic manager. A GPU scheduler, communicatively coupled with the traffic manager and with the GPU device manager, schedules a GPU application to process the received processing jobs and distributes those jobs to the scheduled GPU application.
-