GEOFENCE-BASED EDGE SERVICE CONTROL AND AUTHENTICATION

    Publication No.: US20210006972A1

    Publication Date: 2021-01-07

    Application No.: US17025519

    Filing Date: 2020-09-18

    Abstract: Methods, systems, and use cases for geofence-based edge service control and authentication are discussed, including an orchestration system with memory and at least one processing circuitry coupled to the memory. The processing circuitry is configured to perform operations to obtain, from a plurality of connectivity nodes providing edge services, physical location information and resource availability information associated with each of the plurality of connectivity nodes. An edge-to-edge location graph (ELG) is generated based on the physical location information and the resource availability information, the ELG indicating a subset of the plurality of connectivity nodes that are available for executing a plurality of services associated with an edge workload. The connectivity nodes are provisioned with the ELG and a workflow execution plan to execute the plurality of services, the workflow execution plan including metadata with a geofence policy. The geofence policy specifies geofence restrictions associated with each of the plurality of services.
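    The node-filtering step the abstract describes can be illustrated with a minimal sketch: build an ELG by keeping only nodes inside a circular geofence with enough free resources. All names (`Node`, `build_elg`, the radius-based geofence check) are hypothetical illustrations, not the patent's actual interfaces.

    ```python
    import math
    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        lat: float
        lon: float
        free_cpus: int

    def within_geofence(node, center, radius_km):
        # Equirectangular approximation; adequate for short distances.
        dlat = math.radians(node.lat - center[0])
        dlon = math.radians(node.lon - center[1])
        x = dlon * math.cos(math.radians((node.lat + center[0]) / 2))
        dist_km = 6371.0 * math.sqrt(x * x + dlat * dlat)
        return dist_km <= radius_km

    def build_elg(nodes, center, radius_km, min_cpus):
        """Return the eligible subset of nodes plus the edges of a
        simple fully connected edge-to-edge location graph."""
        eligible = [n for n in nodes
                    if within_geofence(n, center, radius_km)
                    and n.free_cpus >= min_cpus]
        edges = [(a.name, b.name) for i, a in enumerate(eligible)
                 for b in eligible[i + 1:]]
        return eligible, edges

    nodes = [
        Node("edge-a", 52.52, 13.40, 8),   # in fence, enough capacity
        Node("edge-b", 52.53, 13.41, 1),   # in fence, too few free CPUs
        Node("edge-c", 48.85, 2.35, 16),   # outside the geofence
    ]
    eligible, edges = build_elg(nodes, center=(52.52, 13.40),
                                radius_km=50, min_cpus=4)
    print([n.name for n in eligible])  # ['edge-a']
    ```

    A real orchestrator would also attach per-service geofence restrictions from the workflow-plan metadata; the sketch applies a single fence for brevity.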

    DISTRIBUTED MACHINE LEARNING IN AN INFORMATION CENTRIC NETWORK

    Publication No.: US20200027022A1

    Publication Date: 2020-01-23

    Application No.: US16586593

    Filing Date: 2019-09-27

    IPC Class: G06N20/00 H04L29/08

    Abstract: Systems and techniques for distributed machine learning (DML) in an information centric network (ICN) are described herein. Finite message exchanges, such as those used in many DML exercises, may be efficiently implemented by treating certain data packets as interest packets to reduce overall network overhead when performing the finite message exchange. Further, network efficiency in DML may be improved by using local coordinating nodes to manage devices participating in a distributed machine learning exercise. Additionally, modifying a round of DML training to accommodate available participant devices, such as by using a group quality of service metric to select the devices, or extending the round execution parameters to include additional devices, may have an impact on DML performance.
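    The participant-selection idea, choosing devices for a training round via a group quality-of-service metric, can be sketched as a greedy selection that keeps the group's mean QoS above a floor. The function name, the mean-QoS metric, and the device scores are assumptions for illustration only.

    ```python
    def select_round_participants(devices, min_group_qos, max_devices):
        """Greedily add the highest-QoS devices to the round until the
        group's mean QoS would fall below the threshold or the cap is hit."""
        ranked = sorted(devices.items(), key=lambda kv: kv[1], reverse=True)
        chosen = []
        for name, qos in ranked:
            candidate = chosen + [(name, qos)]
            mean_qos = sum(q for _, q in candidate) / len(candidate)
            if mean_qos < min_group_qos or len(candidate) > max_devices:
                break
            chosen = candidate
        return [name for name, _ in chosen]

    # Hypothetical per-device QoS scores (e.g. link quality x availability).
    devices = {"dev1": 0.9, "dev2": 0.8, "dev3": 0.4, "dev4": 0.95}
    print(select_round_participants(devices, min_group_qos=0.7, max_devices=3))
    # ['dev4', 'dev1', 'dev2']
    ```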

    ARTIFICIAL INTELLIGENCE INFERENCE ARCHITECTURE WITH HARDWARE ACCELERATION

    Publication No.: US20190138908A1

    Publication Date: 2019-05-09

    Application No.: US16235100

    Filing Date: 2018-12-28

    IPC Class: G06N3/10 H04L12/24

    Abstract: Various systems and methods of artificial intelligence (AI) processing using hardware acceleration within edge computing settings are described herein. In an example, processing performed at an edge computing device includes: obtaining a request for an AI operation using an AI model; identifying, based on the request, an AI hardware platform for execution of an instance of the AI model; and causing execution of the AI model instance using the AI hardware platform. Further operations to analyze input data, perform an inference operation with the AI model, and coordinate selection and operation of the hardware platform for execution of the AI model are also described.
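    The platform-identification step, matching an inference request to a suitable AI hardware platform, can be sketched as a simple filter over platform descriptors. The descriptor fields (`formats`, `latency_ms`, `available`) and platform names are hypothetical, chosen only to illustrate the selection logic the abstract describes.

    ```python
    def pick_platform(request, platforms):
        """Return the first available platform that supports the requested
        model format and meets the request's latency budget."""
        for p in platforms:
            if (request["format"] in p["formats"]
                    and p["latency_ms"] <= request["max_latency_ms"]
                    and p["available"]):
                return p["name"]
        return None  # no match: queue, reject, or fall back

    platforms = [
        {"name": "gpu-0", "formats": {"onnx", "tf"}, "latency_ms": 5,  "available": False},
        {"name": "vpu-0", "formats": {"onnx"},       "latency_ms": 12, "available": True},
        {"name": "cpu-0", "formats": {"onnx", "tf"}, "latency_ms": 40, "available": True},
    ]
    req = {"format": "onnx", "max_latency_ms": 20}
    print(pick_platform(req, platforms))  # vpu-0
    ```

    In practice the orchestration layer would also weigh cost, thermal headroom, and current queue depth; a first-fit scan keeps the example readable.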

    ONBOARDING AND ACCOUNTING OF DEVICES INTO AN HPC FABRIC

    Publication No.: US20180183796A1

    Publication Date: 2018-06-28

    Application No.: US15392379

    Filing Date: 2016-12-28

    Abstract: A method to onboard a slave node to a high performance computing system that includes a fabric switch network comprising a fabric switch master and a group of slave nodes, wherein the fabric switch master is configured to route messages between slave nodes of the group, the method comprising: receiving a fabric switch master address message, at an onboarding slave node, over an external network; providing an identification message, by the onboarding slave node, over the fabric switch network; receiving the identification message, at the fabric switch master, over the fabric switch network; providing a permission message, by the fabric switch master, over the fabric switch network; and receiving the permission message, at the onboarding slave node, over the fabric switch network.
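    The five-step exchange in the abstract (master address message, identification message, permission message) can be simulated with two small classes. The class and field names are illustrative assumptions, not the patent's terminology; real messages would travel over the external network and the fabric switch network rather than direct method calls.

    ```python
    class FabricSwitchMaster:
        def __init__(self, address):
            self.address = address
            self.registered = {}

        def advertise(self):
            # Step 1: master address message, sent over the external network.
            return {"type": "master_address", "address": self.address}

        def handle_identification(self, msg):
            # Steps 3-4: record the onboarding node, grant permission
            # over the fabric switch network.
            self.registered[msg["node_id"]] = msg["capabilities"]
            return {"type": "permission", "node_id": msg["node_id"],
                    "granted": True}

    class SlaveNode:
        def __init__(self, node_id, capabilities):
            self.node_id = node_id
            self.capabilities = capabilities
            self.onboarded = False

        def identify(self, master_address_msg):
            # Step 2: identification message over the fabric switch network.
            return {"type": "identification", "node_id": self.node_id,
                    "capabilities": self.capabilities}

        def handle_permission(self, msg):
            # Step 5: permission received; the node may join the fabric.
            self.onboarded = msg["granted"]

    master = FabricSwitchMaster("fab0:01")
    node = SlaveNode("slave-7", {"cores": 64})
    perm = master.handle_identification(node.identify(master.advertise()))
    node.handle_permission(perm)
    print(node.onboarded)  # True
    ```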