Deployment and configuration of an edge site based on declarative intents indicative of a use case

    Publication Number: US11914982B2

    Publication Date: 2024-02-27

    Application Number: US18328287

    Application Date: 2023-06-02

    CPC classification number: G06F8/61 G06F40/30 H04L67/12

    Abstract: Embodiments described herein are generally directed to an edge-CaaS (eCaaS) framework for providing life-cycle management of containerized applications on the edge. According to an example, declarative intents are received indicative of a use case for which a cluster of a container orchestration platform is to be deployed within an edge site that is to be created based on infrastructure associated with a private network. A deployment template is created by performing intent translation on the declarative intents and based on a set of constraints. The deployment template identifies the container orchestration platform selected by the intent translation. The deployment template is then executed to deploy and configure the edge site, including provisioning and configuring the infrastructure, installing the container orchestration platform on the infrastructure, configuring the cluster within the container orchestration platform, and deploying a containerized application or portion thereof on the cluster.
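
The abstract describes a pipeline from declarative intents to a deployment template that selects a container orchestration platform. Below is a minimal Python sketch of the intent-translation step; the intent fields, constraint names, and the k3s-vs-Kubernetes selection rule are invented for illustration and are not disclosed in the patent.

```python
# A hypothetical sketch of intent translation: declarative intents plus
# constraints are mapped onto a deployment template that names a platform
# and the ordered deployment steps the abstract lists.
from dataclasses import dataclass, field

@dataclass
class DeploymentTemplate:
    platform: str
    node_count: int
    steps: list = field(default_factory=list)

def translate_intents(intents: dict, constraints: dict) -> DeploymentTemplate:
    """Translate declarative intents into a deployment template."""
    # Invented rule: cap the node count, and give small edge footprints a
    # lightweight distribution.
    nodes = min(intents.get("node_count", 3), constraints.get("max_nodes", 10))
    platform = "k3s" if nodes <= constraints.get("lightweight_cutoff", 5) else "kubernetes"
    return DeploymentTemplate(
        platform=platform,
        node_count=nodes,
        steps=[
            "provision_infrastructure",
            "install_orchestration_platform",
            "configure_cluster",
            "deploy_containerized_application",
        ],
    )

template = translate_intents(
    intents={"use_case": "video-analytics", "node_count": 4},
    constraints={"max_nodes": 8, "lightweight_cutoff": 5},
)
print(template)
```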

    HEURISTIC-BASED SD-WAN ROUTE RECONFIGURATION

    Publication Number: US20210392070A1

    Publication Date: 2021-12-16

    Application Number: US17282941

    Application Date: 2019-04-18

    Abstract: An example network orchestrator includes processing circuitry and a memory. The memory includes instructions that cause the network orchestrator to receive network probe information including delay times of network probes associated with a set of flows between devices. The instructions further cause the network orchestrator to generate a correlation matrix including correlations representing shared congested links between pairs of flows. The instructions further cause the network orchestrator to, for each flow of the set of flows, determine a routing solution optimized for that flow, and to select a total minimum-cost solution from the determined routing solutions.
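
The core heuristic correlates probe delay series between flows to infer shared congested links, then selects a minimum-total-cost routing. A toy Python sketch follows; the flow data, the 0.9 correlation threshold, and the cost figures are invented for illustration.

```python
import numpy as np

# Probe delay samples (ms) per flow; invented data where flow_a and flow_b
# spike together, suggesting a shared congested link, while flow_c does not.
delays = {
    "flow_a": [10, 55, 12, 60, 11],
    "flow_b": [9, 50, 13, 58, 10],
    "flow_c": [20, 19, 21, 18, 22],
}
names = list(delays)
series = np.array([delays[n] for n in names], dtype=float)
corr = np.corrcoef(series)  # pairwise correlation matrix across flows

# Flag flow pairs whose delays correlate above a threshold as likely sharing
# a congested link (an assumed threshold, not the patent's exact test).
THRESHOLD = 0.9
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if corr[i, j] > THRESHOLD:
            print(f"{names[i]} and {names[j]} likely share a congested link")

# Per-flow routing solutions (computed elsewhere) would then be compared and
# the total minimum-cost solution selected:
solutions = [{"route": "r1", "total_cost": 7.0}, {"route": "r2", "total_cost": 5.5}]
best = min(solutions, key=lambda s: s["total_cost"])
print("selected routing solution:", best["route"])
```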

    Deployment and configuration of an edge site based on declarative intents indicative of a use case

    Publication Number: US11698780B2

    Publication Date: 2023-07-11

    Application Number: US17236884

    Application Date: 2021-04-21

    CPC classification number: G06F8/61 G06F40/30 H04L67/12

    Abstract: Embodiments described herein are generally directed to an edge-CaaS (eCaaS) framework for providing life-cycle management of containerized applications on the edge. According to an example, declarative intents are received indicative of a use case for which a cluster of a container orchestration platform is to be deployed within an edge site that is to be created based on infrastructure associated with a private network. A deployment template is created by performing intent translation on the declarative intents and based on a set of constraints. The deployment template identifies the container orchestration platform selected by the intent translation. The deployment template is then executed to deploy and configure the edge site, including provisioning and configuring the infrastructure, installing the container orchestration platform on the infrastructure, configuring the cluster within the container orchestration platform, and deploying a containerized application or portion thereof on the cluster.
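
This publication shares its abstract with US11914982B2 above; the sketch below illustrates the other half of the described flow, executing a deployment template as an ordered pipeline of phases. The step functions are placeholders, since the abstract names the phases but not their implementation.

```python
# Placeholder phase implementations; a real system would call provisioning
# and installation tooling here.
def provision_infrastructure(site): print(f"provisioning {site}")
def install_orchestration_platform(site): print(f"installing platform on {site}")
def configure_cluster(site): print(f"configuring cluster in {site}")
def deploy_containerized_application(site): print(f"deploying application on {site}")

# The abstract's four phases, run in order when the template is executed.
PIPELINE = [
    provision_infrastructure,
    install_orchestration_platform,
    configure_cluster,
    deploy_containerized_application,
]

def execute_template(site: str):
    for step in PIPELINE:
        step(site)

execute_template("edge-site-1")
```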

    ASSIGNING JOBS TO HETEROGENEOUS GRAPHICS PROCESSING UNITS

    Publication Number: US20230089925A1

    Publication Date: 2023-03-23

    Application Number: US17448299

    Application Date: 2021-09-21

    Abstract: Architectures and techniques for managing heterogeneous sets of physical GPUs. Functionality information is collected for one or more physical GPUs by a GPU device manager coupled with a heterogeneous set of physical GPUs. Based on the collected functionality information, the GPU device manager manages at least one of the physical GPUs as multiple virtual GPUs. Each of the physical GPUs is classified by the device manager as either a single physical GPU or as one or more virtual GPUs. Traffic representing processing jobs to be processed by at least a subset of the physical GPUs is received via a gateway programmed by a traffic manager. A GPU scheduler, communicatively coupled with the traffic manager and with the GPU device manager, schedules a GPU application to process the received processing jobs and distributes the jobs to the scheduled application.
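
A rough Python sketch of the classification step follows, assuming invented device fields (memory size, partitioning support) and an arbitrary slot-sizing rule; the patent does not specify how functionality information maps to virtual GPUs.

```python
# Hypothetical classification of a heterogeneous GPU fleet: partitionable,
# large-memory devices are exposed as several virtual GPU slots, the rest
# are managed as single physical GPUs.
from dataclasses import dataclass

@dataclass
class PhysicalGPU:
    name: str
    memory_gb: int
    supports_partitioning: bool

def classify(gpus):
    """Build an inventory mapping each physical GPU to its exposed units."""
    inventory = {}
    for gpu in gpus:
        if gpu.supports_partitioning and gpu.memory_gb >= 32:
            slots = gpu.memory_gb // 8  # invented rule: 8 GB per virtual GPU
            inventory[gpu.name] = [f"{gpu.name}-vgpu{i}" for i in range(slots)]
        else:
            inventory[gpu.name] = [gpu.name]  # managed as a single GPU
    return inventory

fleet = [PhysicalGPU("a100", 40, True), PhysicalGPU("t4", 16, False)]
print(classify(fleet))
```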

    PROACTIVELY ACCOMMODATING PREDICTED FUTURE SERVERLESS WORKLOADS USING A MACHINE LEARNING PREDICTION MODEL

    Publication Number: US20210184942A1

    Publication Date: 2021-06-17

    Application Number: US16931850

    Application Date: 2020-07-17

    Abstract: Example implementations relate to a proactive auto-scaling approach. According to an example, a machine-learning prediction model is trained to forecast future serverless workloads during a window of time for an application running in a public cloud based on past serverless workload information associated with the application by performing a training process. During the window of time, serverless workload information associated with the application is monitored. A future serverless workload is predicted for the application at a future time within the window, based on the machine learning prediction model. Prior to the future time, containers within the public cloud executing the application are pre-warmed to accommodate the predicted future serverless workload by issuing fake requests to the application to trigger auto-scaling functionality implemented by the public cloud.
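
The approach can be approximated in a few lines: fit a model to past request rates, forecast the next interval, and issue synthetic requests so the platform's auto-scaler warms containers ahead of demand. The linear trend model and the capacity figure below are illustrative stand-ins, not the patent's prediction model.

```python
# A toy sketch of proactive pre-warming with a trivial trend forecaster.
import numpy as np

history = np.array([100, 120, 150, 170, 210, 240])  # requests/min, past window
t = np.arange(len(history))
slope, intercept = np.polyfit(t, history, 1)  # stand-in for the ML model

def predict(minutes_ahead: int) -> float:
    """Forecast the request rate at a future time within the window."""
    return slope * (len(history) - 1 + minutes_ahead) + intercept

predicted = predict(5)
current_capacity = 200  # requests/min the currently warm containers handle
if predicted > current_capacity:
    extra = int(predicted - current_capacity)
    # In a real system these would be no-op "fake" HTTP requests to the
    # application, fired before the future time to trigger auto-scaling.
    print(f"issue ~{extra} synthetic requests to pre-warm containers")
```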

    DISTRIBUTED NETWORK MONITORING

    Publication Number: US20240333622A1

    Publication Date: 2024-10-03

    Application Number: US18193879

    Application Date: 2023-03-31

    CPC classification number: H04L43/0876 H04L43/045 H04L43/16

    Abstract: A device and corresponding method are provided for determining that a consumed computing capacity of a first networking device exceeds a threshold of total capacity for processing monitoring data for a monitoring metric. An optimization engine determines a second networking device with unused computing capacity sufficient for processing the monitoring data generated by the first networking device. The optimization engine automatically moves the monitoring data for the monitoring metric generated by the first networking device to the second networking device and causes the second networking device to process the monitoring data.
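
A minimal sketch of the offload decision follows; the device names, capacity figures, and single-metric load model are assumptions for illustration.

```python
# Hypothetical capacity model: when a device's consumed capacity exceeds the
# threshold, its monitoring load for a metric is moved to a peer with enough
# unused capacity.
devices = {
    "switch-1": {"capacity": 100, "used": 95, "monitoring_load": 20},
    "switch-2": {"capacity": 100, "used": 40, "monitoring_load": 10},
    "switch-3": {"capacity": 100, "used": 70, "monitoring_load": 15},
}
THRESHOLD = 0.9  # fraction of total capacity

def rebalance(devices):
    for name, d in devices.items():
        if d["used"] / d["capacity"] > THRESHOLD:
            load = d["monitoring_load"]
            # Pick any peer with enough unused capacity for this load.
            target = next(
                (n for n, p in devices.items()
                 if n != name and p["capacity"] - p["used"] >= load),
                None,
            )
            if target:
                d["used"] -= load
                d["monitoring_load"] -= load
                devices[target]["used"] += load
                devices[target]["monitoring_load"] += load
                print(f"moved monitoring for metric: {name} -> {target}")

rebalance(devices)
```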

    SCHEDULING JOBS ON GRAPHICAL PROCESSING UNITS

    Publication Number: US20240095870A1

    Publication Date: 2024-03-21

    Application Number: US18307728

    Application Date: 2023-04-26

    CPC classification number: G06T1/20

    Abstract: Example implementations relate to scheduling of jobs for a plurality of graphics processing units (GPUs) providing concurrent processing by a plurality of virtual GPUs (vGPUs). According to an example, a computing system including one or more GPUs receives a request to schedule a new job to be executed by the computing system. The new job is allocated to one or more vGPUs, and the allocations of existing jobs to the one or more vGPUs are updated. The operational cost of operating the one or more GPUs and the migration cost of allocating the new job are minimized as the allocations of the existing jobs on the one or more vGPUs are updated. The new job and the existing jobs are then processed by the one or more GPUs in the computing system.
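
The optimization can be illustrated by exhaustively scoring candidate placements against a combined operational-plus-migration cost. The cost functions and the per-vGPU capacity below are invented; the patent does not disclose its cost model.

```python
# Toy joint minimization of operational and migration cost when placing a
# new job alongside existing jobs on virtual GPUs.
from itertools import product

vgpus = ["vgpu0", "vgpu1"]
existing = {"job1": "vgpu0", "job2": "vgpu0"}  # current placements
new_job = "job3"
CAPACITY = 2  # hypothetical limit: at most two jobs per vGPU

def operational_cost(placement):
    # Invented: cost 1 per vGPU that hosts at least one job.
    return len(set(placement.values()))

def migration_cost(placement):
    # Invented: cost 2 per existing job moved off its current vGPU.
    return 2 * sum(placement[j] != v for j, v in existing.items())

def feasible(placement):
    return all(list(placement.values()).count(v) <= CAPACITY for v in vgpus)

jobs = list(existing) + [new_job]
best, best_cost = None, float("inf")
for assignment in product(vgpus, repeat=len(jobs)):
    placement = dict(zip(jobs, assignment))
    if not feasible(placement):
        continue
    cost = operational_cost(placement) + migration_cost(placement)
    if cost < best_cost:
        best, best_cost = placement, cost

print(best, "cost:", best_cost)  # existing jobs stay put; new job on vgpu1
```

Note how the migration term makes the optimizer prefer leaving existing jobs in place unless moving them saves more in operational cost than the move itself costs.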

    DEPLOYMENT AND CONFIGURATION OF AN EDGE SITE BASED ON DECLARATIVE INTENTS INDICATIVE OF A USE CASE

    Publication Number: US20230325166A1

    Publication Date: 2023-10-12

    Application Number: US18328287

    Application Date: 2023-06-02

    CPC classification number: G06F8/61 H04L67/12 G06F40/30

    Abstract: Embodiments described herein are generally directed to an edge-CaaS (eCaaS) framework for providing life-cycle management of containerized applications on the edge. According to an example, declarative intents are received indicative of a use case for which a cluster of a container orchestration platform is to be deployed within an edge site that is to be created based on infrastructure associated with a private network. A deployment template is created by performing intent translation on the declarative intents and based on a set of constraints. The deployment template identifies the container orchestration platform selected by the intent translation. The deployment template is then executed to deploy and configure the edge site, including provisioning and configuring the infrastructure, installing the container orchestration platform on the infrastructure, configuring the cluster within the container orchestration platform, and deploying a containerized application or portion thereof on the cluster.

    LEADER ELECTION IN A DISTRIBUTED SYSTEM

    Publication Number: US20220206900A1

    Publication Date: 2022-06-30

    Application Number: US17136563

    Application Date: 2020-12-29

    Abstract: Example implementations relate to consensus protocols in a stretched network. According to an example, network performance and/or network latency are continuously monitored among a cluster of a plurality of nodes in a distributed computer system. A leadership priority is set for each node based at least in part on the monitored network performance or network latency. Each node has a vote weight based at least in part on its leadership priority, and each node's vote is biased by that weight. The node having a number of biased votes higher than the maximum possible number of biased votes receivable by any other node in the cluster is selected as the leader node.
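
A small sketch of latency-weighted voting follows, assuming lower measured latency yields a higher leadership priority; the inverse-latency weighting and single-round tally are invented examples, not the patent's formula.

```python
# Hypothetical weighted election: vote weights derive from monitored latency,
# and each node's vote counts with its own weight.
latencies_ms = {"node-a": 5.0, "node-b": 20.0, "node-c": 8.0}

# Invented priority: inverse latency, so well-connected nodes weigh more.
weights = {n: 1.0 / ms for n, ms in latencies_ms.items()}

# Each node votes for the candidate it deems best; here, the lowest-latency
# peer in the cluster.
votes = {n: min(latencies_ms, key=latencies_ms.get) for n in latencies_ms}

tally: dict[str, float] = {}
for voter, candidate in votes.items():
    tally[candidate] = tally.get(candidate, 0.0) + weights[voter]

leader = max(tally, key=tally.get)
print(f"elected leader: {leader} with biased vote weight {tally[leader]:.2f}")
```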
