SCHEDULING JOBS ON GRAPHICAL PROCESSING UNITS

    Publication Number: US20240095870A1

    Publication Date: 2024-03-21

    Application Number: US18307728

    Application Date: 2023-04-26

    CPC classification number: G06T1/20

    Abstract: Example implementations relate to scheduling of jobs for a plurality of graphics processing units (GPUs) that provide concurrent processing via a plurality of virtual GPUs (vGPUs). According to an example, a computing system including one or more GPUs receives a request to schedule a new job to be executed by the computing system. The new job is allocated to one or more vGPUs, and the allocations of existing jobs to the one or more vGPUs are updated. The operational cost of operating the one or more GPUs and the migration cost of allocating the new job and updating the allocations of the existing jobs on the one or more vGPUs are minimized. The new job and the existing jobs are then processed by the one or more GPUs in the computing system.
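
    A minimal sketch of the cost-driven placement the abstract describes, assuming a brute-force search over placements, a fixed per-GPU operating cost, a fixed per-job migration penalty, and a vGPU slot capacity; the job names and all numeric values are illustrative assumptions, not taken from the patent:

```python
GPU_CAPACITY = 4        # assumed vGPU slots per physical GPU
OPERATING_COST = 10.0   # assumed cost of keeping one GPU powered
MIGRATION_COST = 3.0    # assumed cost of moving one existing job

def total_cost(allocation, previous):
    """allocation: {job: gpu_index}. Cost = active GPUs * operating cost
    plus a penalty for every existing job that changed GPUs."""
    active_gpus = len(set(allocation.values()))
    migrations = sum(1 for job, gpu in allocation.items()
                     if job in previous and previous[job] != gpu)
    return active_gpus * OPERATING_COST + migrations * MIGRATION_COST

def schedule(new_job, previous, num_gpus):
    """Enumerate placements of all jobs (existing + new) onto GPUs and
    return the cheapest feasible one. Brute force, for illustration only."""
    jobs = list(previous) + [new_job]
    best, best_cost = None, float("inf")

    def assign(i, current, load):
        nonlocal best, best_cost
        if i == len(jobs):
            cost = total_cost(current, previous)
            if cost < best_cost:
                best, best_cost = dict(current), cost
            return
        for gpu in range(num_gpus):
            if load[gpu] < GPU_CAPACITY:
                current[jobs[i]] = gpu
                load[gpu] += 1
                assign(i + 1, current, load)
                load[gpu] -= 1
                del current[jobs[i]]

    assign(0, {}, [0] * num_gpus)
    return best, best_cost

existing = {"job-a": 0, "job-b": 0, "job-c": 1}
plan, cost = schedule("job-new", existing, num_gpus=2)
print(plan, cost)   # consolidates jobs onto one GPU when that is cheaper than migration
```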

    DEPLOYMENT AND CONFIGURATION OF AN EDGE SITE BASED ON DECLARATIVE INTENTS INDICATIVE OF A USE CASE

    Publication Number: US20230325166A1

    Publication Date: 2023-10-12

    Application Number: US18328287

    Application Date: 2023-06-02

    CPC classification number: G06F8/61 H04L67/12 G06F40/30

    Abstract: Embodiments described herein are generally directed to an edge-CaaS (eCaaS) framework for providing life-cycle management of containerized applications on the edge. According to an example, declarative intents are received indicative of a use case for which a cluster of a container orchestration platform is to be deployed within an edge site that is to be created based on infrastructure associated with a private network. A deployment template is created by performing intent translation on the declarative intents and based on a set of constraints. The deployment template identifies the container orchestration platform selected by the intent translation. The deployment template is then executed to deploy and configure the edge site, including provisioning and configuring the infrastructure, installing the container orchestration platform on the infrastructure, configuring the cluster within the container orchestration platform, and deploying a containerized application or portion thereof on the cluster.
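
    A minimal sketch of the intent-translation step the abstract outlines: declarative intents plus site constraints are mapped to a deployment template that names a selected container orchestration platform. The platform catalog, intent fields, and constraint values below are illustrative assumptions:

```python
# Hypothetical catalog of container orchestration platforms and their limits.
PLATFORMS = {
    "k3s":        {"max_nodes": 5,   "min_memory_gb": 1},
    "kubernetes": {"max_nodes": 500, "min_memory_gb": 4},
}

def translate_intents(intents, constraints):
    """Pick a platform that satisfies both the declared use case and the
    site constraints, then emit a deployment template describing the edge site."""
    for name, limits in PLATFORMS.items():
        if (constraints["node_count"] <= limits["max_nodes"]
                and constraints["memory_gb_per_node"] >= limits["min_memory_gb"]):
            platform = name
            break
    else:
        raise ValueError("no platform satisfies the constraints")

    return {
        "platform": platform,
        "cluster": {
            "nodes": constraints["node_count"],
            "network": "private",
        },
        "application": intents["use_case"],
    }

template = translate_intents(
    intents={"use_case": "video-analytics"},
    constraints={"node_count": 3, "memory_gb_per_node": 2},
)
print(template)   # a template an executor would then provision and configure
```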

    MACHINE LEARNING-BASED APPROACHES FOR SERVICE FUNCTION CHAIN SELECTION

    Publication Number: US20230123074A1

    Publication Date: 2023-04-20

    Application Number: US17503232

    Application Date: 2021-10-15

    Abstract: Systems, methods, and computer-readable media are described for employing a machine learning-based approach such as adaptive Bayesian optimization to learn over time the most optimized assignments of incoming network requests to service function chains (SFCs) created within network slices of a 5G network. An optimized SFC assignment may be an assignment that minimizes an unknown objective function for a given set of incoming network service requests. For example, an optimized SFC assignment may be one that minimizes request response time or one that maximizes throughput for one or more network service requests corresponding to one or more network service types. The optimized SFC for a network request of a given network service type may change over time based on the dynamic nature of network performance. The machine learning-based approaches described herein train a model to dynamically determine optimized SFC assignments based on the dynamically changing network conditions.
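
    As a rough illustration of learning SFC assignments from observed performance, the sketch below substitutes a simple UCB bandit for the adaptive Bayesian optimization named in the abstract; the SFC names and latency distributions are assumptions made for the example:

```python
import math
import random
from collections import defaultdict

class SFCSelector:
    """Simplified stand-in for the learned selector: treats each candidate
    SFC as an arm and applies UCB to observed response times (lower is better)."""

    def __init__(self, sfcs):
        self.sfcs = sfcs
        self.counts = defaultdict(int)
        self.mean_latency = defaultdict(float)
        self.total = 0

    def select(self):
        self.total += 1
        for sfc in self.sfcs:                 # try every SFC once first
            if self.counts[sfc] == 0:
                return sfc
        # Minimize mean latency minus an exploration bonus.
        return min(self.sfcs, key=lambda s: self.mean_latency[s]
                   - math.sqrt(2 * math.log(self.total) / self.counts[s]))

    def update(self, sfc, latency):
        self.counts[sfc] += 1
        n = self.counts[sfc]
        self.mean_latency[sfc] += (latency - self.mean_latency[sfc]) / n

selector = SFCSelector(["sfc-1", "sfc-2", "sfc-3"])
for _ in range(100):
    sfc = selector.select()
    observed = random.gauss({"sfc-1": 30, "sfc-2": 20, "sfc-3": 25}[sfc], 5)
    selector.update(sfc, observed)            # real network conditions drift over time
print(min(selector.mean_latency, key=selector.mean_latency.get))
```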

    NETWORK-AWARE RESOURCE ALLOCATION

    Publication Number: US20230071281A1

    Publication Date: 2023-03-09

    Application Number: US17468517

    Application Date: 2021-09-07

    Abstract: Systems and methods are provided for updating resource allocation in a distributed network. For example, the method may comprise allocating a plurality of resource containers in a distributed network in accordance with a first distributed resource configuration. Upon determining that a processing workload value exceeds a stabilization threshold of the distributed network, a resource efficiency value of the plurality of resource containers in the distributed network is determined. When the resource efficiency value is greater than or equal to a threshold resource efficiency value, the method may generate a second distributed resource configuration that includes a resource upscaling process; when the resource efficiency value is less than the threshold resource efficiency value, the method may generate the second distributed resource configuration that includes a resource outscaling process. The second distributed resource configuration may then be transmitted to update the resource allocation.
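
    A minimal sketch of the upscale-versus-outscale decision the abstract describes, with assumed threshold values and configuration fields (none of the numbers come from the patent):

```python
STABILIZATION_THRESHOLD = 0.80   # assumed workload level that triggers re-evaluation
EFFICIENCY_THRESHOLD = 0.60      # assumed threshold resource efficiency value

def next_configuration(current, workload, efficiency):
    """Return a new distributed resource configuration when the workload exceeds
    the stabilization threshold: scale up (bigger containers) if the containers
    are being used efficiently, otherwise scale out (more containers)."""
    if workload <= STABILIZATION_THRESHOLD:
        return current                                   # keep the first configuration
    new_config = dict(current)
    if efficiency >= EFFICIENCY_THRESHOLD:
        new_config["cpu_per_container"] = current["cpu_per_container"] * 2   # upscaling
    else:
        new_config["container_count"] = current["container_count"] + 2       # outscaling
    return new_config

config = {"container_count": 4, "cpu_per_container": 2}
print(next_configuration(config, workload=0.9, efficiency=0.7))   # upscaling path
print(next_configuration(config, workload=0.9, efficiency=0.4))   # outscaling path
```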

    OPTICAL NETWORK HAVING COMBINED CIRCUIT-PACKET SWITCH ARCHITECTURE

    Publication Number: US20220210528A1

    Publication Date: 2022-06-30

    Application Number: US17655032

    Application Date: 2022-03-16

    Abstract: An optical network includes top networking ports coupled to a packet switch, first media converters, second media converters, and bottom networking ports. The first media converters are coupled to the top networking ports, each of the first media converters including a first ASIC transceiver that has a circuit switch function. The second media converters are coupled to the first media converters via optical cables to receive optical signals. Each of the second media converters includes a second ASIC transceiver that has a circuit switch function. The bottom networking ports are coupled to the second media converters. The first ASIC transceiver and the second ASIC transceiver are configured to transmit a signal from one of the top networking ports to any one of the bottom networking ports, and to transmit a signal from one of the bottom networking ports to any one of the top networking ports.
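
    A toy model of the two circuit-switching transceiver stages, showing how programming both stages routes a signal from any top networking port to any bottom networking port; the port counts and the mapping API are illustrative assumptions:

```python
class MediaConverter:
    """Toy model of one ASIC transceiver with a circuit switch function:
    a reprogrammable one-to-one mapping between its ports."""

    def __init__(self, ports):
        self.circuit = {p: p for p in ports}   # identity mapping by default

    def connect(self, in_port, out_port):
        self.circuit[in_port] = out_port       # rewire the circuit

    def forward(self, port):
        return self.circuit[port]

# Two converter stages joined by optical cables: any top networking port can
# reach any bottom networking port by programming both circuit switches.
top = MediaConverter(ports=range(4))       # first media converter stage
bottom = MediaConverter(ports=range(4))    # second media converter stage

top.connect(0, 2)      # top port 0 -> cable 2
bottom.connect(2, 3)   # cable 2 -> bottom port 3
print(bottom.forward(top.forward(0)))      # a signal from top port 0 exits bottom port 3
```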

    LEADER ELECTION IN A DISTRIBUTED SYSTEM

    Publication Number: US20220206900A1

    Publication Date: 2022-06-30

    Application Number: US17136563

    Application Date: 2020-12-29

    Abstract: Example implementations relate to consensus protocols in a stretched network. According to an example, network performance and/or network latency among a cluster of a plurality of nodes in a distributed computer system is continuously monitored. A leadership priority for each node is set based at least in part on the monitored network performance or network latency. Each node has a vote weight based at least in part on the leadership priority of the node, and each node's vote is biased by the node's vote weight. The node having a number of biased votes higher than the maximum possible number of biased votes that could be received by any other node in the cluster is selected as the leader node.
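
    One simplified reading of the weighted election the abstract describes, assuming an inverse-latency leadership priority and normalized vote weights; the weighting rule, node names, and latency figures are assumptions for illustration:

```python
def elect_leader(latencies_ms):
    """latencies_ms: {node: observed network latency}. Lower latency gives a
    higher leadership priority and therefore a larger vote weight."""
    priority = {node: 1.0 / ms for node, ms in latencies_ms.items()}
    total = sum(priority.values())
    weight = {node: p / total for node, p in priority.items()}

    # Every node casts one vote, biased by its own weight; in this toy example
    # each voter prefers the node with the highest leadership priority.
    tally = {node: 0.0 for node in latencies_ms}
    for voter in latencies_ms:
        choice = max(priority, key=priority.get)
        tally[choice] += weight[voter]

    # Elect only if no other node could still collect a larger biased tally.
    leader = max(tally, key=tally.get)
    outstanding = 1.0 - sum(tally.values())      # weight of uncounted votes (none here)
    runner_up = max(v for n, v in tally.items() if n != leader)
    return leader if tally[leader] > runner_up + outstanding else None

print(elect_leader({"node-a": 2.0, "node-b": 9.0, "node-c": 15.0}))   # node-a
```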

    Graph-based policy representation system for managing network devices

    Publication Number: US11374979B2

    Publication Date: 2022-06-28

    Application Number: US16452152

    Application Date: 2019-06-25

    Abstract: Systems and methods are provided for managing network devices using policy graph representations. In some embodiments, the method includes receiving configurations for a plurality of network devices; extracting one or more policies from the configurations; extracting a label hierarchy from the configurations, the label hierarchy describing an organization of nodes in a network comprising the network devices; generating a connectivity of a network comprising the network devices based on the one or more policies and the label hierarchy; generating a policy graph representation of the connectivity of the network; and displaying the policy graph representation of the connectivity to a user.
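
    A minimal sketch of turning extracted policies into a displayable policy graph; the (source, destination, action) tuple form of a policy and the tier labels are assumptions made for the example:

```python
from collections import defaultdict

def build_policy_graph(configs):
    """configs: list of (source_label, destination_label, action) tuples
    extracted from device configurations. Returns an adjacency map describing
    which groups of nodes are permitted to reach which."""
    graph = defaultdict(set)
    for src, dst, action in configs:
        if action == "permit":
            graph[src].add(dst)
    return graph

def render(graph):
    """Flatten the policy graph into printable edges for display to a user."""
    return [f"{src} -> {dst}" for src in sorted(graph) for dst in sorted(graph[src])]

policies = [
    ("web-tier", "app-tier", "permit"),
    ("app-tier", "db-tier", "permit"),
    ("web-tier", "db-tier", "deny"),
]
for edge in render(build_policy_graph(policies)):
    print(edge)
```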

    OPTICAL NETWORK HAVING COMBINED CIRCUIT-PACKET SWITCH ARCHITECTURE

    Publication Number: US20220021956A1

    Publication Date: 2022-01-20

    Application Number: US16931348

    Application Date: 2020-07-16

    Abstract: An optical network includes top networking ports coupled to a packet switch, first media converters, second media converters, and bottom networking ports. The first media converters are coupled to the top networking ports, each of the first media converters including a first ASIC transceiver that has a circuit switch function. The second media converters are coupled to the first media converters via optical cables to receive optical signals. Each of the second media converters includes a second ASIC transceiver that has a circuit switch function. The bottom networking ports are coupled to the second media converters. The first ASIC transceiver and the second ASIC transceiver are configured to transmit a signal from one of the top networking ports to any one of the bottom networking ports, and to transmit a signal from one of the bottom networking ports to any one of the top networking ports.

    Incremental intent checking for stateful networks

    Publication Number: US10938667B2

    Publication Date: 2021-03-02

    Application Number: US16227502

    Application Date: 2018-12-20

    Abstract: An example method includes identifying an intent-based stateful network having a first endpoint, a second endpoint, and one or more devices performing stateful network functions between the first endpoint and the second endpoint. The method further includes constructing a causality graph of the network, the causality graph having a plurality of nodes for each of the one or more devices performing stateful network functions, wherein the constructing comprises connecting the first endpoint, the second endpoint, and the one or more devices performing stateful network functions to show causal relationships between the first endpoint, the second endpoint, and the one or more devices performing stateful network functions. The method also includes determining whether the connections between the first endpoint, the second endpoint, and the one or more devices performing stateful network functions provide a path from the first endpoint to the second endpoint, and updating, incrementally, the causality graph as a change to the network occurs.
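
    A minimal sketch of a causality graph that is checked for a path between the two endpoints and updated incrementally as the network changes; the node names and the simple adjacency-set representation are assumptions for illustration:

```python
from collections import defaultdict, deque

class CausalityGraph:
    """Toy causality graph: nodes are endpoints and stateful network functions,
    directed edges are causal 'can forward to' relationships."""

    def __init__(self):
        self.edges = defaultdict(set)

    def connect(self, a, b):
        self.edges[a].add(b)

    def disconnect(self, a, b):
        self.edges[a].discard(b)     # incremental update when the network changes

    def has_path(self, src, dst):
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                return True
            for nxt in self.edges[node] - seen:
                seen.add(nxt)
                queue.append(nxt)
        return False

g = CausalityGraph()
g.connect("endpoint-1", "firewall")
g.connect("firewall", "load-balancer")
g.connect("load-balancer", "endpoint-2")
print(g.has_path("endpoint-1", "endpoint-2"))   # True: the reachability intent holds
g.disconnect("firewall", "load-balancer")       # a change to the network occurs
print(g.has_path("endpoint-1", "endpoint-2"))   # False: the intent is now violated
```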
