VERIFYING THAT USAGE OF A VIRTUAL NETWORK FUNCTION (VNF) BY A PLURALITY OF COMPUTE NODES COMPLIES WITH ALLOWED USAGE RIGHTS

    Publication No.: US20180288101A1

    Publication Date: 2018-10-04

    Application No.: US15472939

    Filing Date: 2017-03-29

    Abstract: In some examples, a method includes establishing, by a network device acting as an orchestrator host for a virtual network function (VNF), a trust relationship with a VNF vendor that dynamically specifies a set of usage right policies; determining, by the network device, allowed usage rights associated with the VNF based on the set of usage right policies; installing, by the network device, the VNF on a plurality of compute nodes based on the allowed usage rights; and auditing VNF usage right compliance by: issuing a proof quote request to the plurality of compute nodes; receiving a proof quote response comprising a plurality of resource states on the plurality of compute nodes; and verifying whether usage of the VNF on the plurality of compute nodes complies with the allowed usage rights based on the proof quote response.
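
    The audit loop in this abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the node interface, the `proof_quote` method, and the `max_instances` right are all assumed names used only to show the request/response/verify shape.

    ```python
    class ComputeNode:
        """Illustrative stand-in for a compute node answering a proof quote request."""

        def __init__(self, name, vnf_instances):
            self.name = name
            self._vnf_instances = vnf_instances

        def proof_quote(self):
            # Proof quote response: the node's current resource state
            # (here reduced to a single assumed field).
            return {"vnf_instances": self._vnf_instances}


    def audit_vnf_usage(compute_nodes, allowed_rights):
        """Issue a proof quote request to each node, collect the resource
        states, and verify them against the allowed usage rights."""
        responses = {node.name: node.proof_quote() for node in compute_nodes}
        violations = {
            name: state
            for name, state in responses.items()
            if state["vnf_instances"] > allowed_rights["max_instances"]
        }
        return len(violations) == 0, violations
    ```

    A compliant fleet yields an empty violation map; any node whose reported state exceeds the allowed rights is flagged by name.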

    Excess active queue management (AQM): a simple AQM to handle slow-start

    Publication No.: US12301473B2

    Publication Date: 2025-05-13

    Application No.: US18447753

    Filing Date: 2023-08-10

    Abstract: A system maintains a queue for storing packets, which are enqueued at a tail of the queue and dequeued at a head of the queue. The system computes a queue utilization value, based on the packets stored in the queue. The system computes an excess amount value, based on the packets stored in the queue and previously tagged as excess packets. The system receives a first packet at the tail of the queue and determines whether a difference between the queue utilization value and the excess amount value exceeds a predetermined threshold. Responsive to determining that the difference exceeds the predetermined threshold, the system tags the first packet as an excess packet. Responsive to tagging the first packet as an excess packet, the system performs an operation associated with the first packet or a second packet at the head of the queue to reduce congestion.
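    The tagging decision described above can be sketched in a few lines. This is a hedged sketch under assumptions the abstract does not fix: utilization and excess are counted in bytes, and the threshold value is illustrative.

    ```python
    from collections import deque


    class ExcessAQM:
        """Sketch of the excess-tagging decision from the abstract."""

        def __init__(self, threshold):
            self.threshold = threshold  # predetermined threshold (assumed bytes)
            self.queue = deque()        # (size, is_excess) per packet
            self.utilization = 0        # queue utilization value (bytes queued)
            self.excess = 0             # excess amount value (bytes tagged excess)

        def enqueue(self, size):
            # Tag the arriving packet as excess when the difference between
            # utilization and the already-tagged excess exceeds the threshold.
            is_excess = (self.utilization - self.excess) > self.threshold
            self.queue.append((size, is_excess))
            self.utilization += size
            if is_excess:
                self.excess += size
            return is_excess

        def dequeue(self):
            size, is_excess = self.queue.popleft()
            self.utilization -= size
            if is_excess:
                self.excess -= size
            return size, is_excess
    ```

    Because already-tagged bytes are subtracted before the comparison, a burst is tagged only once its untagged portion crosses the threshold, which is what lets the scheme tolerate slow-start bursts without tagging every packet.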

    Smart network interface card control plane for distributed machine learning workloads

    Publication No.: US12277080B2

    Publication Date: 2025-04-15

    Application No.: US18460043

    Filing Date: 2023-09-01

    Abstract: In certain embodiments, a method includes receiving, at an interface of a Smart network interface card (SmartNIC) of a computing device, via a network, a network data unit; processing, by a data allocator of a SmartNIC subsystem of the SmartNIC, the network data unit to make a determination that data included in the network data unit is intended for processing by an accelerator of the computing device, wherein the accelerator is configured to execute a machine learning algorithm; storing, by the data allocator and based on the determination, the data in a local buffer of the SmartNIC subsystem; identifying, by the data allocator, a memory resource associated with the accelerator; and transferring the data from the local buffer to the memory resource.
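    The data-allocator path in this abstract can be sketched as a simple dispatch. The dictionary fields (`target`, `payload`) and the list-based buffers are assumptions standing in for the SmartNIC subsystem's real structures.

    ```python
    def handle_network_data_unit(ndu, local_buffer, accelerator_memory):
        """Sketch: decide whether the payload is intended for the accelerator,
        stage it in the SmartNIC's local buffer, then transfer it to the
        memory resource associated with the accelerator."""
        if ndu.get("target") != "accelerator":
            # Not for the ML accelerator; leave it to the normal host path.
            return "host-path"
        local_buffer.append(ndu["payload"])      # store in local buffer
        accelerator_memory.extend(local_buffer)  # transfer to accelerator memory
        local_buffer.clear()
        return "accelerator-path"
    ```

    The point of the two-step store-then-transfer is that the host CPU never touches the data: the SmartNIC stages it and moves it directly to the accelerator's memory resource.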

    ACCELERATING CONTAINERIZED APPLICATIONS WITH CACHING

    Publication No.: US20250110883A1

    Publication Date: 2025-04-03

    Application No.: US18477557

    Filing Date: 2023-09-29

    Abstract: In certain embodiments, a computer-implemented method includes: receiving, by a caching system plugin, a request to create a persistent volume for a container application instance; configuring, by the caching system plugin, a local cache volume on a host computing device; configuring, by the caching system plugin, a remote storage volume on a remote storage device; selecting, by a policy manager of the caching system plugin, a cache policy for the container application instance; creating, by the caching system plugin and from a cache manager, a virtual block device associated with the local cache volume, the remote storage volume, and the cache policy; and providing the virtual block device for use by the container application instance as the persistent volume.
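    The plugin's provisioning steps can be sketched as one function. Every field name, the request shape, and the policy selector are illustrative assumptions; only the sequence (local cache volume, remote volume, policy, virtual block device) comes from the abstract.

    ```python
    def create_persistent_volume(request, select_policy):
        """Sketch of the caching system plugin flow: configure a local cache
        volume and a remote storage volume, select a cache policy, and tie
        them together as a virtual block device."""
        local_cache = {"host": request["host"], "size": request["cache_size"]}
        remote_volume = {"device": request["remote_device"], "size": request["size"]}
        policy = select_policy(request["app"])  # policy manager's choice
        # The virtual block device is what the container application instance
        # sees as its persistent volume.
        return {"cache": local_cache, "remote": remote_volume, "policy": policy}
    ```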

    OPTIMIZING COST AND PERFORMANCE FOR SERVERLESS DATA ANALYTICS WORKLOADS

    Publication No.: US20240289180A1

    Publication Date: 2024-08-29

    Application No.: US18175411

    Filing Date: 2023-02-27

    CPC classification number: G06F9/5083 G06F9/5016 G06F9/5027

    Abstract: Systems and methods are provided for optimizing a serverless workflow. Given a directed acyclic graph (“DAG”) defining functional relationships and a gamma tuning factor to indicate a preference between cost and performance, a serverless workflow corresponding to the DAG may be optimized. The optimization is carried out in accordance with the gamma tuning factor, and is carried out in sub-segments of the DAG called stages. In addition, systems for allowing disparate types of storage media to be utilized by a serverless platform to store data are disclosed. The serverless platforms maintain visibility of the storage media types underlying persistent volumes, and may store data in partitions across disparate types of storage media. For instance, one item of data may be stored partially at a byte addressed storage media and partially at a block addressed storage media.
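    One way to picture the gamma tuning factor is as a weight in a per-stage objective. The linear blend below is an assumption for illustration; the abstract specifies only that gamma expresses a preference between cost and performance, not a formula.

    ```python
    def stage_score(cost, performance, gamma):
        """Assumed blend: gamma near 1 favors performance, near 0 favors cost."""
        return gamma * performance - (1 - gamma) * cost


    def pick_stage_config(candidates, gamma):
        """Choose a configuration for one DAG stage.

        candidates: list of (name, cost, performance) tuples (illustrative).
        """
        return max(candidates, key=lambda c: stage_score(c[1], c[2], gamma))[0]
    ```

    With the same candidates, moving gamma from 0 to 1 flips the choice from the cheapest configuration to the fastest one, which is the stage-by-stage trade-off the optimizer makes across the DAG.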

    Deployment and configuration of an edge site based on declarative intents indicative of a use case

    Publication No.: US11698780B2

    Publication Date: 2023-07-11

    Application No.: US17236884

    Filing Date: 2021-04-21

    CPC classification number: G06F8/61 G06F40/30 H04L67/12

    Abstract: Embodiments described herein are generally directed to an edge-CaaS (eCaaS) framework for providing life-cycle management of containerized applications on the edge. According to an example, declarative intents are received indicative of a use case for which a cluster of a container orchestration platform is to be deployed within an edge site that is to be created based on infrastructure associated with a private network. A deployment template is created by performing intent translation on the declarative intents and based on a set of constraints. The deployment template identifies the container orchestration platform selected by the intent translation. The deployment template is then executed to deploy and configure the edge site, including provisioning and configuring the infrastructure, installing the container orchestration platform on the infrastructure, configuring the cluster within the container orchestration platform, and deploying a containerized application or portion thereof on the cluster.
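    The intent-translation step can be sketched as mapping declarative intents plus constraints to a deployment template. The platform names and constraint keys below are assumptions; the abstract only requires that translation select a container orchestration platform and emit an executable template.

    ```python
    def translate_intents(intents, constraints):
        """Sketch of intent translation: produce a deployment template that
        names the selected platform and the ordered deployment steps."""
        # Assumed selection rule: a lightweight distribution for small sites.
        platform = "k3s" if constraints.get("footprint") == "small" else "kubernetes"
        return {
            "platform": platform,
            "use_case": intents["use_case"],
            "steps": [
                "provision-infrastructure",
                "install-platform",
                "configure-cluster",
                "deploy-application",
            ],
        }
    ```

    Executing the template then walks the steps in order, which matches the abstract's sequence: provision and configure infrastructure, install the platform, configure the cluster, deploy the application.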

    Network-aware resource allocation
    Invention Grant

    Publication No.: US11665106B2

    Publication Date: 2023-05-30

    Application No.: US17468517

    Filing Date: 2021-09-07

    Abstract: Systems and methods are provided for updating resource allocation in a distributed network. For example, the method may comprise allocating a plurality of resource containers in a distributed network in accordance with a first distributed resource configuration. Upon determining that a processing workload value exceeds a stabilization threshold of the distributed network, the method may determine a resource efficiency value of the plurality of resource containers in the distributed network. When the resource efficiency value is greater than or equal to a threshold resource efficiency value, the method may generate a second distributed resource configuration that includes a resource upscaling process, or when the resource efficiency value is less than the threshold resource efficiency value, the method may generate the second distributed resource configuration that includes a resource outscaling process. The method may transmit the second distributed resource configuration to update the resource allocation.
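    The decision logic in this abstract reduces to two threshold comparisons. The function below is a minimal sketch; the return labels and parameter names are assumptions used only to make the branching explicit.

    ```python
    def next_configuration(workload, stabilization_threshold,
                           efficiency, efficiency_threshold):
        """Sketch of the reallocation decision: below the stabilization
        threshold nothing changes; above it, efficiency decides between
        upscaling (growing existing containers) and outscaling (adding
        containers elsewhere)."""
        if workload <= stabilization_threshold:
            return "keep-current"
        if efficiency >= efficiency_threshold:
            return "upscale"   # containers are used efficiently: grow them
        return "outscale"      # inefficient use: spread to more containers
    ```

    The intuition behind the branch: high efficiency means existing containers are well utilized, so adding capacity to them pays off; low efficiency suggests a placement problem that more of the same containers will not fix.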
