HARDWARE-ASSISTED TRACING SCHEMES FOR DISTRIBUTED AND SCALE-OUT APPLICATIONS

    Publication No.: US20200186592A1

    Publication Date: 2020-06-11

    Application No.: US16790342

    Filing Date: 2020-02-13

    IPC Classification: H04L29/08 H04L12/24

    Abstract: Methods and apparatus for scale-out hardware-assisted tracing schemes for distributed and scale-out applications. In connection with execution of one or more applications using a distributed processing environment including multiple compute nodes, telemetry and tracing data are obtained using hardware-based logic on the compute nodes. Processes associated with applications are identified, as well as the compute nodes on which instances of the processes are executed. Process instances are associated with process application space identifiers (PASIDs), while processes used for an application are associated with a global group identifier (GGID) that serves as an application ID. The PASIDs and GGIDs are used to store telemetry and/or tracing data on the compute nodes and/or forward such data to a tracing server in a manner that enables telemetry and/or tracing data to be aggregated on an application basis. Telemetry and/or tracing data may be obtained from processors on the compute nodes and (optionally) from additional elements such as network interface controllers (NICs). Tracing data may also be obtained from switches used for forwarding data between processes.
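
    As a rough illustration only, and not the claimed implementation, the following Python sketch shows how telemetry records tagged with a per-process PASID and an application-level GGID (as described above) might be rolled up per application on a tracing server. The record fields and the aggregation rule are assumptions made for this example.

        # Minimal sketch (assumed data model): aggregating per-node telemetry
        # records by application using PASID/GGID tags as described in the abstract.
        from collections import defaultdict
        from dataclasses import dataclass

        @dataclass
        class TraceRecord:
            node_id: str      # compute node that produced the record
            pasid: int        # process application space ID of the process instance
            ggid: int         # global group ID serving as the application ID
            metric: str       # e.g. "cycles", "llc_misses", "nic_tx_bytes"
            value: float

        def aggregate_by_application(records):
            """Group telemetry/tracing data on an application (GGID) basis."""
            per_app = defaultdict(lambda: defaultdict(float))
            for rec in records:
                per_app[rec.ggid][rec.metric] += rec.value
            return per_app

        # Example: two process instances (distinct PASIDs) on different nodes,
        # both belonging to application GGID 7, are aggregated together.
        records = [
            TraceRecord("node-0", pasid=11, ggid=7, metric="cycles", value=1.2e9),
            TraceRecord("node-1", pasid=42, ggid=7, metric="cycles", value=0.8e9),
        ]
        print(aggregate_by_application(records)[7]["cycles"])  # 2.0e9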

    AUTOMATIC LOCALIZATION OF ACCELERATION IN EDGE COMPUTING ENVIRONMENTS

    Publication No.: US20200026575A1

    Publication Date: 2020-01-23

    Application No.: US16586576

    Filing Date: 2019-09-27

    IPC Classification: G06F9/50

    Abstract: Methods, apparatus, systems, and machine-readable storage media of an edge computing device enabled to access and select the use of local or remote acceleration resources for edge computing processing are disclosed. In an example, an edge computing device obtains first telemetry information that indicates availability of local acceleration circuitry to execute a function, and obtains second telemetry information that indicates availability of a remote acceleration function to execute the function. An estimated time (and cost or other identifiable or estimable considerations) to execute the function at each respective location is identified. The use of the local acceleration circuitry or the remote acceleration resource is selected based on the estimated time and other appropriate factors in relation to a service level agreement.
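
    For illustration only, here is a minimal Python sketch of the kind of selection step described above: choosing between local and remote acceleration based on estimated execution time against an SLA deadline, with estimated cost as a tie-breaker. The option fields, the deadline check, and the cost rule are assumptions, not the claimed method.

        from dataclasses import dataclass

        @dataclass
        class AcceleratorOption:
            name: str            # e.g. "local-fpga" or "remote-gpu"
            available: bool      # derived from the first/second telemetry information
            est_time_ms: float   # estimated time to execute the function here
            est_cost: float      # monetary or energy cost estimate

        def select_acceleration(options, sla_deadline_ms):
            """Pick the cheapest available option that still meets the SLA deadline."""
            feasible = [o for o in options
                        if o.available and o.est_time_ms <= sla_deadline_ms]
            if not feasible:
                return None  # nothing satisfies the SLA; caller must degrade or reject
            return min(feasible, key=lambda o: o.est_cost)

        choice = select_acceleration(
            [AcceleratorOption("local-fpga", True, 12.0, 0.4),
             AcceleratorOption("remote-gpu", True, 8.0, 1.1)],
            sla_deadline_ms=10.0,
        )
        print(choice.name)  # "remote-gpu": the only option within the 10 ms deadline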

    ADAPTIVE POWER MANAGEMENT FOR EDGE DEVICE

    Publication No.: US20210109584A1

    Publication Date: 2021-04-15

    Application No.: US17132202

    Filing Date: 2020-12-23

    Abstract: Various aspects of methods, systems, and use cases include coordinating actions at an edge device based on power production in a distributed edge computing environment. A method may include identifying a long-term service level agreement (SLA) for a component of an edge device, and determining a list of resources related to the component using the long-term SLA. The method may include scheduling a task for the component based on the long-term SLA, a current battery level at the edge device, a current energy harvest rate at the edge device, or an amount of power required to complete the task. A resource of the list of resources may be used to initiate the task, such as according to the scheduling.
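
    As a hedged sketch only, the admission test below illustrates scheduling of the kind described above: a task is admitted only if the current battery level plus the energy expected to be harvested over the task's duration covers the energy the task requires, with a reserve margin. The threshold rule and all field names are assumptions for this example, not the patent's algorithm.

        def can_schedule(task_energy_wh, battery_wh, harvest_w, duration_h, reserve_wh=1.0):
            """Admit the task only if projected stored energy stays above a reserve."""
            projected = battery_wh + harvest_w * duration_h - task_energy_wh
            return projected >= reserve_wh

        def schedule_task(task, device):
            if can_schedule(task["energy_wh"], device["battery_wh"],
                            device["harvest_w"], task["duration_h"]):
                return {"task": task["name"], "start": "now"}
            # Otherwise defer until the harvest rate or battery level improves.
            return {"task": task["name"], "start": "deferred"}

        device = {"battery_wh": 5.0, "harvest_w": 2.0}   # current battery / harvest rate
        task = {"name": "inference", "energy_wh": 3.0, "duration_h": 0.5}
        print(schedule_task(task, device))               # admitted: 5.0 + 1.0 - 3.0 >= 1.0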

    SWITCH-BASED ADAPTIVE TRANSFORMATION FOR EDGE APPLIANCES

    Publication No.: US20210320875A1

    Publication Date: 2021-10-14

    Application No.: US17357358

    Filing Date: 2021-06-24

    IPC Classification: H04L12/851

    Abstract: A network switch includes a memory device to store stream information of a plurality of data streams being handled by the network switch, the stream information including a stream identifier, a stream service level agreement (SLA), and a stream traffic type; accelerator circuitry to apply stream transformation functions to data streams; telemetry circuitry to monitor egress ports of the network switch; and scheduler circuitry to: receive telemetry data from the telemetry circuitry to determine that a utilization of egress ports of the network switch is over a threshold utilization; determine a selected data stream of the plurality of data streams to transform; use the accelerator circuitry to transform the selected data stream to produce a transformed data stream, wherein the transformed data stream complies with a corresponding stream SLA; and transmit the transformed data stream on an egress port.
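
    The control loop described above can be illustrated with a short Python sketch: when egress-port utilization crosses a threshold, pick one stream to transform (for example, compress or transcode) while respecting its per-stream SLA. The stream fields, the threshold, and the victim-selection policy are assumptions made for this example.

        from dataclasses import dataclass

        @dataclass
        class Stream:
            stream_id: str
            sla_min_quality: float   # lowest quality level the stream SLA tolerates
            traffic_type: str        # e.g. "video", "bulk", "control"
            rate_mbps: float

        def maybe_transform(streams, egress_utilization, threshold=0.9):
            """Return the stream to transform, or None if utilization is acceptable."""
            if egress_utilization <= threshold:
                return None
            # Prefer the highest-rate stream whose SLA still allows a reduced form.
            candidates = [s for s in streams if s.sla_min_quality < 1.0]
            return max(candidates, key=lambda s: s.rate_mbps, default=None)

        streams = [Stream("cam-1", 0.7, "video", 25.0),
                   Stream("ctl-1", 1.0, "control", 0.2)]
        victim = maybe_transform(streams, egress_utilization=0.95)
        print(victim.stream_id if victim else "no transformation needed")  # "cam-1"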

    EDGE COMPUTING SERVICE GLOBAL VALIDATION

    Publication No.: US20190140919A1

    Publication Date: 2019-05-09

    Application No.: US16235159

    Filing Date: 2018-12-28

    IPC Classification: H04L12/24 H04L29/08

    Abstract: An architecture to enable verification, ranking, and identification of respective edge service properties and associated service level agreement (SLA) properties, such as in an edge cloud or other edge computing environment, is disclosed. In an example, management and use of service information for an edge service includes: providing SLA information for an edge service to an operational device, for accessing an edge service hosted in an edge computing environment, with the SLA information providing reputation information for computing functions of the edge service according to an identified SLA; receiving a service request for use of the computing functions of the edge service, under the identified SLA; requesting, from the edge service, performance of the computing functions of the edge service according to the service request; and tracking the performance of the computing functions of the edge service according to the service request and compliance with the identified SLA.
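
    As a rough illustration of the tracking step described above, the sketch below records whether observed invocations of an edge service stayed within the identified SLA and derives a simple reputation score from the compliance ratio. The class name, the latency-only SLA, and the scoring formula are assumptions for this example.

        class SlaTracker:
            def __init__(self, sla_latency_ms):
                self.sla_latency_ms = sla_latency_ms
                self.total = 0
                self.compliant = 0

            def record(self, observed_latency_ms):
                """Track one invocation of the edge service's computing functions."""
                self.total += 1
                if observed_latency_ms <= self.sla_latency_ms:
                    self.compliant += 1

            def reputation(self):
                """Fraction of tracked requests that met the identified SLA."""
                return self.compliant / self.total if self.total else None

        tracker = SlaTracker(sla_latency_ms=50)
        for latency_ms in (32, 47, 61):      # observed latencies for three requests
            tracker.record(latency_ms)
        print(tracker.reputation())          # 0.666...: two of three requests met the SLA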

    INFRASTRUCTURE-DELEGATED ORCHESTRATION BACKUP USING NETWORKED PROCESSING UNITS

    Publication No.: US20230132992A1

    Publication Date: 2023-05-04

    Application No.: US18090786

    Filing Date: 2022-12-29

    IPC Classification: H04L67/10 G06F11/07

    Abstract: Various approaches for monitoring and responding to orchestration or service failures with the use of infrastructure processing units (IPUs) and similar networked processing units are disclosed. A method performed by a computing device for deploying remedial actions in failure scenarios of an orchestrated edge computing environment may include: identifying an orchestration configuration of a controller entity (responsible for orchestration) and a worker entity (subject to the orchestration to provide at least one service); determining a failure scenario of the orchestration of the worker entity, such as at a networked processing unit implemented at a network interface located between the controller entity and the worker entity; and causing a remedial action to resolve the failure scenario and modify the orchestration configuration, such as replacing functionality of the controller entity or the worker entity with functionality at a replacement entity.
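
    Purely as an illustrative sketch of the failure-handling flow above, the code below shows a watchdog that could run on a networked processing unit between the controller and worker entities: missed heartbeats indicate a failure scenario, and the remedial action swaps in a replacement entity. The heartbeat mechanism, the timeout, and the entity names are assumptions, not the disclosed design.

        import time

        class OrchestrationWatchdog:
            def __init__(self, heartbeat_timeout_s=5.0):
                self.heartbeat_timeout_s = heartbeat_timeout_s
                self.last_seen = {}            # entity name -> last heartbeat time

            def heartbeat(self, entity):
                self.last_seen[entity] = time.monotonic()

            def failed_entities(self):
                now = time.monotonic()
                return [e for e, t in self.last_seen.items()
                        if now - t > self.heartbeat_timeout_s]

        def remediate(entity, replacements):
            """Modify the orchestration configuration to use a replacement entity."""
            backup = replacements.get(entity)
            return {"failed": entity, "replaced_by": backup} if backup else {"failed": entity}

        watchdog = OrchestrationWatchdog()
        watchdog.heartbeat("controller-a")
        # Later, if "controller-a" stops reporting, the loop below triggers remediation.
        for entity in watchdog.failed_entities():
            print(remediate(entity, {"controller-a": "ipu-backup-controller"}))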

    VIRTUAL POOLS AND RESOURCES USING DISTRIBUTED NETWORKED PROCESSING UNITS

    Publication No.: US20230136615A1

    Publication Date: 2023-05-04

    Application No.: US18090701

    Filing Date: 2022-12-29

    IPC Classification: G06F9/50

    Abstract: Various approaches for deploying and using virtual pools of compute resources with the use of infrastructure processing units (IPUs) and similar networked processing units are disclosed. A host computing system may be configured to operate a virtual pool of resources, with operations including: identifying, at the host computing system, availability of a resource at the host computing system; transmitting, to a network infrastructure device, a notification that the resource at the host computing system is available for use in a virtual resource pool in the edge computing network; receiving a request for the resource in the virtual resource pool that is provided on behalf of a client computing system, based on the request being coordinated via the network infrastructure device and including at least one quality of service (QoS) requirement; and servicing the request for the resource, based on the at least one QoS requirement.
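
    To illustrate the host-side flow described above, the sketch below advertises a local resource into a virtual pool via a network infrastructure device and then services a pooled request against a QoS requirement. The message shapes, the QoS check, and all names are assumptions made for this example.

        def advertise_resource(resource, notify):
            """Notify the network infrastructure device that this resource is pool-available."""
            notify({"event": "resource_available", "resource": resource})

        def service_request(request, resource):
            """Serve a pooled request only if the resource meets its QoS requirement."""
            qos = request["qos"]                  # e.g. minimum capacity of a given type
            if resource["type"] == qos["type"] and resource["capacity"] >= qos["min_capacity"]:
                return {"request_id": request["id"], "granted": True,
                        "resource": resource["name"]}
            return {"request_id": request["id"], "granted": False}

        resource = {"name": "host0-gpu0", "type": "gpu", "capacity": 16}   # 16 GB free
        advertise_resource(resource, notify=print)
        print(service_request({"id": 1, "qos": {"type": "gpu", "min_capacity": 8}}, resource))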