ACCELERATOR OR ACCELERATED FUNCTIONS AS A SERVICE USING NETWORKED PROCESSING UNITS

    Publication No.: US20230133020A1

    Publication Date: 2023-05-04

    Application No.: US18090653

    Filing Date: 2022-12-29

    IPC Classification: G06F9/50 G06F9/48

    Abstract: Various approaches for deploying and controlling distributed accelerated compute operations with the use of infrastructure processing units (IPUs) and similar networked processing units are disclosed. A system for orchestrating acceleration functions in a network compute mesh is configured to access a flowgraph, the flowgraph including data producer-consumer relationships between a plurality of tasks in a workload; identify available artifacts and resources to execute the artifacts to complete each of the plurality of tasks, wherein an artifact is an instance of a function to perform a task of the plurality of tasks; determine a configuration assigning artifacts and resources to each of the plurality of tasks in the flowgraph; and schedule, based on the configuration, the plurality of tasks to execute using the assigned artifacts and resources.
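
    A minimal sketch of the planning step described in this abstract, assuming hypothetical class names (Task, Artifact) and a toy decode/infer/aggregate flowgraph; none of these identifiers come from the published claims, and the greedy artifact/resource assignment stands in for whatever configuration logic the system actually uses.

        from dataclasses import dataclass, field
        from typing import Dict, List, Tuple

        @dataclass
        class Task:
            name: str
            consumes: List[str] = field(default_factory=list)  # upstream producer tasks

        @dataclass
        class Artifact:
            name: str        # instance of a function that performs a task
            performs: str    # name of the task it can complete

        def plan(flowgraph: List[Task], artifacts: List[Artifact],
                 resources: List[str]) -> Dict[str, Tuple[str, str]]:
            """Assign an (artifact, resource) pair to every task in the flowgraph."""
            config = {}
            for i, task in enumerate(flowgraph):
                artifact = next(a for a in artifacts if a.performs == task.name)
                config[task.name] = (artifact.name, resources[i % len(resources)])
            return config

        def schedule(flowgraph: List[Task], config: Dict[str, Tuple[str, str]]) -> List[str]:
            """Emit tasks in producer-before-consumer order (simple topological sort)."""
            done, order, pending = set(), [], list(flowgraph)
            while pending:
                ready = [t for t in pending if set(t.consumes) <= done]
                if not ready:
                    raise ValueError("flowgraph contains a cycle")
                for t in ready:
                    artifact, resource = config[t.name]
                    order.append(f"run {artifact} for {t.name} on {resource}")
                    done.add(t.name)
                    pending.remove(t)
            return order

        # Example: decode -> infer -> aggregate, spread over two accelerator resources.
        tasks = [Task("decode"), Task("infer", ["decode"]), Task("aggregate", ["infer"])]
        arts = [Artifact("decode_v2", "decode"), Artifact("resnet_fp16", "infer"),
                Artifact("reduce_sum", "aggregate")]
        print(schedule(tasks, plan(tasks, arts, ["ipu-0", "gpu-1"])))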

    VIRTUAL POOLS AND RESOURCES USING DISTRIBUTED NETWORKED PROCESSING UNITS

    Publication No.: US20230136615A1

    Publication Date: 2023-05-04

    Application No.: US18090701

    Filing Date: 2022-12-29

    IPC Classification: G06F9/50

    Abstract: Various approaches for deploying and using virtual pools of compute resources with the use of infrastructure processing units (IPUs) and similar networked processing units are disclosed. A host computing system may be configured to operate a virtual pool of resources, with operations including: identifying, at the host computing system, availability of a resource at the host computing system; transmitting, to a network infrastructure device, a notification that the resource at the host computing system is available for use in a virtual resource pool in the edge computing network; receiving a request for the resource in the virtual resource pool that is provided on behalf of a client computing system, based on the request being coordinated via the network infrastructure device and including at least one quality of service (QoS) requirement; and servicing the request for the resource, based on the at least one QoS requirement.
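
    A minimal sketch of the host-side flow summarized above, under assumed names (HostAgent, InfraDevice, PoolRequest) and a single throughput figure standing in for whatever QoS dimensions the real system negotiates: the host advertises a free resource to a network infrastructure device and services a brokered request only if its QoS requirement can be met.

        from dataclasses import dataclass

        @dataclass
        class Resource:
            name: str
            capacity_gbps: float      # crude capability figure used for the QoS check

        @dataclass
        class PoolRequest:
            client_id: str
            qos_min_gbps: float       # at least one QoS requirement carried with the request

        class InfraDevice:
            def __init__(self):
                self.pool = {}
            def register(self, name: str, capacity: float) -> None:
                self.pool[name] = capacity        # virtual resource pool directory

        class HostAgent:
            def __init__(self, infrastructure_device: InfraDevice):
                self.infra = infrastructure_device
                self.resources = {}

            def advertise(self, resource: Resource) -> None:
                """Notify the infrastructure device that a local resource joined the pool."""
                self.resources[resource.name] = resource
                self.infra.register(resource.name, resource.capacity_gbps)

            def service(self, name: str, request: PoolRequest) -> bool:
                """Serve a request brokered by the infrastructure device, honoring QoS."""
                resource = self.resources[name]
                if resource.capacity_gbps < request.qos_min_gbps:
                    return False                  # cannot meet the QoS requirement
                print(f"serving {request.client_id} on {name}")
                return True

        infra = InfraDevice()
        host = HostAgent(infra)
        host.advertise(Resource("smartnic-crypto", 40.0))
        host.service("smartnic-crypto", PoolRequest("client-7", qos_min_gbps=10.0))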

    UPGRADE OF NETWORK OBJECTS USING SECURITY ISLANDS

    Publication No.: US20230027152A1

    Publication Date: 2023-01-26

    Application No.: US17956517

    Filing Date: 2022-09-29

    Abstract: Systems and techniques to upgrade network objects using security islands are described herein. Security islands of node groupings are created based on trust relationships between nodes in an edge network. An upgrade request may be received to upgrade a target edge node in the edge network. Building blocks may be identified for a package installed on the target edge node to be upgraded. A state backup may be stored for the building blocks. An upgrade command and an upgrade payload may be transmitted to the target edge node. The target edge node may be queried to obtain a status of the target edge node. An upgrade action may be determined based on the status and the upgrade action may be executed.
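
    A minimal sketch of the sequence described here, with invented node and package names, and with the assumption that "security islands based on trust relationships" can be illustrated as connected components of pairwise trust links: group nodes into islands, back up the building-block state of the installed package, push the upgrade, check status, and roll back on failure.

        from typing import Dict, List, Set, Tuple

        def build_islands(trust: List[Tuple[str, str]]) -> List[Set[str]]:
            """Group nodes into security islands (connected components of trust links)."""
            islands: List[Set[str]] = []
            for a, b in trust:
                merged, keep = {a, b}, []
                for island in islands:
                    if island & merged:
                        merged |= island
                    else:
                        keep.append(island)
                keep.append(merged)
                islands = keep
            return islands

        def upgrade(node: str, package: str, payload: bytes,
                    state: Dict[str, Dict]) -> str:
            """Back up building-block state, apply the upgrade, then decide the action."""
            backup = dict(state.get(node, {}))            # state backup for building blocks
            state[node] = {"package": package, "payload_len": len(payload)}
            status = "healthy" if payload else "degraded"  # stand-in for querying the node
            if status != "healthy":
                state[node] = backup                       # upgrade action: roll back
                return "rolled-back"
            return "committed"                             # upgrade action: keep the upgrade

        islands = build_islands([("edge-a", "edge-b"), ("edge-b", "edge-c"), ("edge-x", "edge-y")])
        target_state = {"edge-b": {"package": "router-fw 1.0"}}
        print(islands)
        print(upgrade("edge-b", "router-fw 1.1", b"\x01\x02", target_state))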

    ORCHESTRATOR EXECUTION PLANNING USING A DISTRIBUTED LEDGER

    Publication No.: US20210014132A1

    Publication Date: 2021-01-14

    Application No.: US17028728

    Filing Date: 2020-09-22

    IPC Classification: H04L12/24 G06F9/455 H04L9/06

    Abstract: Methods, systems, and use cases for orchestrator execution planning using a distributed ledger are discussed, including an orchestration system with memory and at least one processing circuitry coupled to the memory. The processing circuitry is configured to perform operations to generate an execution plan for a workload based on an SLA. The execution plan includes state transitions associated with corresponding edge service instances. A distributed ledger record is retrieved from the ledger based on a reinforcement learning reward value specified by the record. The reward value is associated with a state transition of the plurality of state transitions. An edge node is selected based on the retrieved distributed ledger record. Execution of an edge service instance of the plurality of edge service instances by the edge node is scheduled. The execution of the edge service instance corresponds to the state transition associated with the reinforcement learning reward value.
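
    A minimal sketch of the planning loop summarized above, assuming a hypothetical LedgerRecord structure and a toy SLA-to-plan rule: derive a list of state transitions from an SLA, retrieve the ledger record with the highest reinforcement-learning reward for each transition, and schedule the corresponding edge service instance on the node that record names.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class LedgerRecord:
            state_transition: str    # e.g. "ingest->transcode"
            edge_node: str
            rl_reward: float         # reinforcement learning reward stored on the ledger

        def make_execution_plan(sla_latency_ms: int) -> List[str]:
            """Toy SLA-to-plan step: a tighter latency budget adds a caching transition."""
            plan = ["ingest->transcode", "transcode->deliver"]
            if sla_latency_ms < 50:
                plan.insert(1, "transcode->cache")
            return plan

        def schedule(plan: List[str], ledger: List[LedgerRecord]) -> List[str]:
            """Select an edge node per transition by the reward value on the ledger."""
            out = []
            for transition in plan:
                candidates = [r for r in ledger if r.state_transition == transition]
                if not candidates:
                    continue
                best = max(candidates, key=lambda r: r.rl_reward)  # retrieve by reward
                out.append(f"{transition}: service instance on {best.edge_node}")
            return out

        ledger = [
            LedgerRecord("ingest->transcode", "node-a", 0.71),
            LedgerRecord("ingest->transcode", "node-b", 0.93),
            LedgerRecord("transcode->deliver", "node-c", 0.64),
        ]
        for line in schedule(make_execution_plan(sla_latency_ms=80), ledger):
            print(line)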

    CONTINUOUS TESTING, INTEGRATION, AND DEPLOYMENT MANAGEMENT FOR EDGE COMPUTING

    Publication No.: US20210011823A1

    Publication Date: 2021-01-14

    Application No.: US17028844

    Filing Date: 2020-09-22

    IPC Classification: G06F11/263 G06F9/445

    Abstract: Various aspects of methods, systems, and use cases for testing, integration, and deployment of failure conditions in an edge computing environment are provided through use of perturbations. In an example, operations to implement controlled perturbations in an edge computing platform include: identifying at least one perturbation parameter available to be implemented with hardware components of an edge computing system that provides a service using the hardware components; determining values, which disrupt operation of the service, to implement the perturbation parameter among the hardware components; deploying the perturbation parameters to the hardware components, during operation of the service to process a computing workload, to cause perturbation effects on the service; collecting telemetry values associated with the hardware components, produced during operation of the service, that indicate the perturbation effects upon the operation of the service; and causing a computing operation to occur based on the collected telemetry values.
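
    A minimal sketch of the perturbation loop described above, with invented parameter names and a simulated service in place of real hardware: pick a perturbation parameter exposed per component, inject values meant to disrupt the service while it processes a workload, read back telemetry, and trigger a computing operation based on what the telemetry shows.

        import random
        from typing import Dict, List

        PERTURBATION_PARAMETERS = {          # parameters assumed available per component
            "nic0": ["packet_drop_pct"],
            "cpu0": ["frequency_cap_mhz"],
        }

        def disruptive_values(parameter: str) -> List[float]:
            """Pick values expected to disrupt normal operation of the service."""
            return {"packet_drop_pct": [5.0, 20.0], "frequency_cap_mhz": [800.0]}[parameter]

        def run_service_step(perturbations: Dict[str, float]) -> Dict[str, float]:
            """Simulated workload step returning telemetry that reflects the perturbations."""
            latency = 10.0 + perturbations.get("packet_drop_pct", 0.0) * 2.0
            latency += 500.0 / max(perturbations.get("frequency_cap_mhz", 3000.0), 1.0)
            return {"latency_ms": latency + random.random()}

        telemetry_log = []
        for component, parameters in PERTURBATION_PARAMETERS.items():
            for parameter in parameters:
                for value in disruptive_values(parameter):
                    telemetry = run_service_step({parameter: value})
                    telemetry_log.append((component, parameter, value, telemetry))

        # Computing operation triggered by the collected telemetry values.
        worst = max(telemetry_log, key=lambda entry: entry[3]["latency_ms"])
        print(f"flag {worst[0]} ({worst[1]}={worst[2]}) for remediation: {worst[3]}")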

    FEDERATED DISTRIBUTION OF COMPUTATION AND OPERATIONS USING NETWORKED PROCESSING UNITS

    Publication No.: US20230136048A1

    Publication Date: 2023-05-04

    Application No.: US18090686

    Filing Date: 2022-12-29

    IPC Classification: G06F9/48 G06F9/30

    Abstract: Various approaches for deploying and controlling distributed compute operations with the use of infrastructure processing units (IPUs) and similar network-addressable processing units are disclosed. A device for orchestrating functions in a network compute mesh is configured to receive, at a network-addressable processing unit of a network-addressable processing unit mesh from a requestor device, a computation request to execute a workflow with a set of objectives; query at least one other network-addressable processing unit of the network-addressable processing unit mesh using the set of objectives, to determine aspects of available resources and data in the network-addressable processing unit mesh to apply to the workflow; transmit a list of recommended resources available to execute the workflow to the requestor device, the list of recommended resources being ranked based on at least one dimension of the resources; obtain a compute chain from the requestor device, the compute chain describing resource control transitions and data flow provided from the recommended resources and data in the network-addressable processing unit mesh; and schedule the execution of the workflow at one or more network-addressable processing units in the network-addressable processing unit mesh in accordance with the compute chain.
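
    A minimal sketch of the request flow at one network-addressable processing unit, with invented peer and resource names and a single ranking dimension standing in for whatever dimensions the real mesh uses: take a workflow request with objectives, query peer units for matching resources, return a ranked recommendation list, and then schedule the workflow according to the compute chain the requestor sends back.

        from dataclasses import dataclass
        from typing import Dict, List

        @dataclass
        class Resource:
            peer: str
            kind: str
            free_tflops: float    # single ranking dimension used in this sketch

        MESH: Dict[str, List[Resource]] = {   # stand-in for querying peer processing units
            "npu-1": [Resource("npu-1", "gpu", 12.0)],
            "npu-2": [Resource("npu-2", "gpu", 30.0), Resource("npu-2", "fpga", 4.0)],
        }

        def recommend(objectives: Dict[str, str]) -> List[Resource]:
            """Query peers for resources matching the objectives and rank them."""
            matches = [r for peer in MESH.values() for r in peer
                       if r.kind == objectives["accelerator"]]
            return sorted(matches, key=lambda r: r.free_tflops, reverse=True)

        def schedule(compute_chain: List[str]) -> None:
            """Schedule workflow stages on the units named by the compute chain."""
            for stage, unit in enumerate(compute_chain):
                print(f"stage {stage}: execute on {unit}")

        ranked = recommend({"accelerator": "gpu"})
        # Requestor builds a compute chain (control transitions and data flow) from the list.
        schedule([r.peer for r in ranked])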