Dynamically augmenting edge resources

    Publication No.: US11625277B2

    Publication Date: 2023-04-11

    Application No.: US16878861

    Application Date: 2020-05-20

    Abstract: Systems and methods may be used to determine where to run a service based on workload-based conditions or system-level conditions. An example method may include determining whether power available to a resource of a compute device satisfies a target power, for example to satisfy a target performance for a workload. When the power available is insufficient, an additional resource may be provided, for example on a remote device from the compute device. The additional resource may be used as a replacement for the resource of the compute device or to augment the resource of the compute device.
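The placement decision this abstract describes can be sketched as a small function. This is a minimal illustration only, assuming hypothetical names (`Resource`, `place_workload`, `power_available_w`) that do not come from the patent itself:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    power_available_w: float  # power (watts) currently available to this resource

def place_workload(local: Resource, remote: Resource, target_power_w: float) -> list[str]:
    """Choose resources for a workload based on whether the power available
    to the local resource satisfies the target power for the workload."""
    if local.power_available_w >= target_power_w:
        return [local.name]                       # local resource suffices
    if local.power_available_w + remote.power_available_w >= target_power_w:
        return [local.name, remote.name]          # augment local with a remote resource
    return [remote.name]                          # replace local with the remote resource

# Example: 5 W locally is short of the 20 W target, so a remote resource augments it
print(place_workload(Resource("edge-cpu", 5.0), Resource("cloud-gpu", 50.0), 20.0))
```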

    SEPARATE NETWORK SLICING FOR SECURITY EVENTS PROPAGATION ACROSS LAYERS ON SPECIAL PACKET DATA PROTOCOL CONTEXT

    Publication No.: US20230095715A1

    Publication Date: 2023-03-30

    Application No.: US17484811

    Application Date: 2021-09-24

    Abstract: An apparatus and system to provide separate network slices for security events are described. A dedicated secure network slice is provided for PDP data from a UE. The network slice is used for detecting security issues and sending security-related information to clients. The communications in the dedicated network slice are associated with a special PDP context used by the UE to interface with the network slice. Once the UE has detected a security issue or has been notified of the security issue on the network or remote servers, the UE uses a special PDP service, and is able to stop uplink/downlink channels, close running applications and enter into a safe mode, cut off connections to the networks, and try to determine alternate available connectivity.
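The UE-side reaction sequence in the abstract can be sketched as an ordered mitigation routine. All class and step names here are illustrative assumptions, not from the patent:

```python
class UE:
    """Toy model of a UE reacting to a security event reported over the
    dedicated slice's special PDP context."""

    def __init__(self):
        self.actions: list[str] = []

    def handle_security_event(self) -> list[str]:
        """Run the mitigation steps in the order the abstract lists them."""
        for step in ("stop_uplink_downlink_channels",
                     "close_running_applications",
                     "enter_safe_mode",
                     "cut_network_connections",
                     "seek_alternate_connectivity"):
            self.actions.append(step)  # stand-in for the real per-step mitigation
        return self.actions

print(UE().handle_security_event())
```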

    Distributed and contextualized artificial intelligence inference service

    Publication No.: US11580428B2

    Publication Date: 2023-02-14

    Application No.: US17668844

    Application Date: 2022-02-10

    Abstract: Various systems and methods of initiating and performing contextualized AI inferencing are described herein. In an example, operations performed with a gateway computing device to invoke an inferencing model include receiving and processing a request for an inferencing operation, selecting an implementation of the inferencing model on a remote service based on a model specification and contextual data from the edge device, and executing the selected implementation of the inferencing model, such that results from the inferencing model are provided back to the edge device. Also in an example, operations performed with an edge computing device to request an inferencing model include collecting contextual data, generating an inferencing request, transmitting the inferencing request to a gateway device, and receiving and processing the results of execution. Further techniques for implementing a registration of the inference model, and invoking particular variants of an inference model, are also described.
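The gateway's variant-selection step can be illustrated with a small registry lookup. The registry contents, field names, and the latency-budget selection rule are assumptions for illustration only:

```python
# model name -> variants registered with the gateway (hypothetical data)
REGISTRY = {
    "detector": [
        {"variant": "detector-small", "max_latency_ms": 50},
        {"variant": "detector-large", "max_latency_ms": 500},
    ],
}

def select_variant(model: str, context: dict) -> str:
    """Select the inferencing-model variant whose latency bound fits the
    edge device's contextual data (here, a latency budget)."""
    budget = context["latency_budget_ms"]
    fitting = [v for v in REGISTRY[model] if v["max_latency_ms"] <= budget]
    # prefer the largest (slowest) variant that still meets the budget
    best = max(fitting, key=lambda v: v["max_latency_ms"])
    return best["variant"]

print(select_variant("detector", {"latency_budget_ms": 100}))  # detector-small
```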

    End-to-end quality of service in edge computing environments

    Publication No.: US11539596B2

    Publication Date: 2022-12-27

    Application No.: US17494511

    Application Date: 2021-10-05

    Abstract: Systems and techniques for end-to-end quality of service in edge computing environments are described herein. A set of telemetry measurements may be obtained for an ongoing dataflow between a device and a node of an edge computing system. A current key performance indicator (KPI) may be calculated for the ongoing dataflow. The current KPI may be compared to a target KPI to determine an urgency value. A set of resource quality metrics may be collected for resources of the network. The set of resource quality metrics may be evaluated with a resource adjustment model to determine available resource adjustments. A resource adjustment may be selected from the available resource adjustments based on an expected minimization of the urgency value. Delivery of the ongoing dataflow may be modified using the selected resource adjustment.
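The control loop in the abstract (compute urgency from current vs. target KPI, score available adjustments, pick the one minimizing expected urgency) can be sketched as follows. The function names and the trivial additive adjustment model are assumptions, not the patent's actual model:

```python
def urgency(current_kpi: float, target_kpi: float) -> float:
    """Shortfall of the current KPI relative to the target (0 if met)."""
    return max(0.0, target_kpi - current_kpi)

def select_adjustment(current_kpi: float, target_kpi: float,
                      adjustments: list[tuple[str, float]]) -> str:
    """adjustments: (name, expected KPI gain) pairs from the resource
    adjustment model; pick the one minimizing the expected urgency."""
    def expected_urgency(adj: tuple[str, float]) -> float:
        _, gain = adj
        return urgency(current_kpi + gain, target_kpi)
    return min(adjustments, key=expected_urgency)[0]

adjs = [("add-bandwidth", 5.0), ("reroute", 12.0), ("no-op", 0.0)]
print(select_adjustment(current_kpi=80.0, target_kpi=90.0, adjustments=adjs))  # reroute
```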
