TECHNOLOGIES FOR DATA MIGRATION BETWEEN EDGE ACCELERATORS HOSTED ON DIFFERENT EDGE LOCATIONS

    Publication number: US20220237033A1

    Publication date: 2022-07-28

    Application number: US17666366

    Application date: 2022-02-07

    Abstract: Technologies for migrating data between edge accelerators hosted on different edge locations include a device hosted on a present edge location. The device includes one or more processors to: receive a workload from a requesting device, determine one or more accelerator devices hosted on the present edge location to perform the workload, and transmit the workload to the one or more accelerator devices to process the workload. The one or more processors are further to determine whether to perform data migration from the one or more accelerator devices to one or more different edge accelerator devices hosted on a different edge location, and send, in response to a determination to perform the data migration, a request to the one or more accelerator devices on the present edge location for transformed workload data to be processed by the one or more different edge accelerator devices.
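
    The migration decision described in this abstract can be sketched as follows. This is a minimal illustration only, not the patented implementation; all names (EdgeLocation, should_migrate, the latency threshold) are assumptions invented for the sketch.

    ```python
    from dataclasses import dataclass

    @dataclass
    class EdgeLocation:
        name: str
        latency_ms: float       # latency from the requesting device to this location
        free_accelerators: int  # accelerators currently available here

    def should_migrate(present: EdgeLocation, candidate: EdgeLocation,
                       max_latency_ms: float = 20.0) -> bool:
        """Migrate when the present location can no longer meet the latency
        target or has no free accelerators, and the candidate location can."""
        present_ok = (present.latency_ms <= max_latency_ms
                      and present.free_accelerators > 0)
        candidate_better = (candidate.latency_ms < present.latency_ms
                            and candidate.free_accelerators > 0)
        return (not present_ok) and candidate_better

    def request_transformed_data(workload_id: str, target: EdgeLocation) -> dict:
        """Model the request sent to the present accelerators for transformed
        workload data destined for the target location's accelerators."""
        return {"workload": workload_id,
                "action": "export_transformed_data",
                "destination": target.name}

    here = EdgeLocation("edge-a", latency_ms=35.0, free_accelerators=0)
    there = EdgeLocation("edge-b", latency_ms=12.0, free_accelerators=4)
    if should_migrate(here, there):
        msg = request_transformed_data("wl-42", there)
    ```
    
    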

    TECHNOLOGIES FOR PROVIDING DYNAMIC PERSISTENCE OF DATA IN EDGE COMPUTING

    Publication number: US20220222274A1

    Publication date: 2022-07-14

    Application number: US17580436

    Application date: 2022-01-20

    Abstract: Technologies for providing dynamic persistence of data in edge computing include a device including circuitry configured to determine multiple different logical domains of data storage resources for use in storing data from a client compute device at an edge of a network. Each logical domain has a different set of characteristics. The circuitry is also configured to receive, from the client compute device, a request to persist data. The request includes a target persistence objective indicative of an objective to be satisfied in the storage of the data. Additionally, the circuitry is configured to select, as a function of the characteristics of the logical domains and the target persistence objective, a logical domain into which to persist the data and provide the data to the selected logical domain.
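
    The selection "as a function of the characteristics of the logical domains and the target persistence objective" might look like the sketch below. The domain names, their characteristics, and the lowest-latency-that-qualifies rule are all assumptions for illustration.

    ```python
    # Hypothetical logical domains, each with a different set of characteristics.
    DOMAINS = {
        "local-nvme":   {"durability": 0.90,     "write_latency_us": 10},
        "edge-replica": {"durability": 0.99,     "write_latency_us": 200},
        "cloud-tier":   {"durability": 0.999999, "write_latency_us": 5000},
    }

    def select_domain(target_durability: float, max_latency_us: int) -> str:
        """Pick the lowest-latency domain that meets the persistence objective."""
        eligible = [(c["write_latency_us"], name)
                    for name, c in DOMAINS.items()
                    if c["durability"] >= target_durability
                    and c["write_latency_us"] <= max_latency_us]
        if not eligible:
            raise ValueError("no logical domain satisfies the persistence objective")
        return min(eligible)[1]

    choice = select_domain(target_durability=0.99, max_latency_us=1000)
    ```
    
    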

    System decoder for training accelerators

    Publication number: US11269801B2

    Publication date: 2022-03-08

    Application number: US17125439

    Application date: 2020-12-17

    Abstract: There is disclosed an example of an artificial intelligence (AI) system, including: a first hardware platform; a fabric interface configured to communicatively couple the first hardware platform to a second hardware platform; a processor hosted on the first hardware platform and programmed to operate on an AI problem; and a first training accelerator, including: an accelerator hardware; a platform inter-chip link (ICL) configured to communicatively couple the first training accelerator to a second training accelerator on the first hardware platform without aid of the processor; a fabric ICL to communicatively couple the first training accelerator to a third training accelerator on a second hardware platform without aid of the processor; and a system decoder configured to operate the fabric ICL and platform ICL to share data of the accelerator hardware between the first training accelerator and second and third training accelerators without aid of the processor.

    Distributed and contextualized artificial intelligence inference service

    Publication number: US11250336B2

    Publication date: 2022-02-15

    Application number: US15857087

    Application date: 2017-12-28

    Abstract: Various systems and methods of initiating and performing contextualized AI inferencing are described herein. In an example, operations performed with a gateway computing device to invoke an inferencing model include receiving and processing a request for an inferencing operation, selecting an implementation of the inferencing model on a remote service based on a model specification and contextual data from the edge device, and executing the selected implementation of the inferencing model, such that results from the inferencing model are provided back to the edge device. Also in an example, operations performed with an edge computing device to request an inferencing model include collecting contextual data, generating an inferencing request, transmitting the inferencing request to a gateway device, and receiving and processing the results of execution. Further techniques for implementing a registration of the inference model, and invoking particular variants of an inference model, are also described.
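
    A gateway's variant selection, driven by the model specification and the device's contextual data, can be illustrated as below. The registry contents, the bandwidth-based context field, and the "most capable variant that fits" rule are assumptions for the sketch, not the described method.

    ```python
    # Hypothetical registry of inferencing-model implementations on remote services.
    REGISTRY = [
        {"model": "detector", "variant": "int8-small", "min_bandwidth_mbps": 1},
        {"model": "detector", "variant": "fp16-large", "min_bandwidth_mbps": 50},
    ]

    def select_implementation(spec: str, context: dict) -> dict:
        """Pick the most capable registered variant the device's context supports."""
        candidates = [impl for impl in REGISTRY
                      if impl["model"] == spec
                      and context["bandwidth_mbps"] >= impl["min_bandwidth_mbps"]]
        if not candidates:
            raise LookupError(f"no implementation registered for {spec!r}")
        return max(candidates, key=lambda impl: impl["min_bandwidth_mbps"])

    # An edge device with limited bandwidth gets the lightweight variant.
    chosen = select_implementation("detector", {"bandwidth_mbps": 8})
    ```
    
    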

    AI model and data transforming techniques for cloud edge

    Publication number: US11095618B2

    Publication date: 2021-08-17

    Application number: US15941724

    Application date: 2018-03-30

    Abstract: Systems and techniques for AI model and data camouflaging techniques for cloud edge are described herein. In an example, a neural network transformation system is adapted to receive, from a client, camouflaged input data, the camouflaged input data resulting from application of a first encoding transformation to raw input data. The neural network transformation system may be further adapted to use the camouflaged input data as input to a neural network model, the neural network model created using a training data set created by applying the first encoding transformation on training data. The neural network transformation system may be further adapted to receive a result from the neural network model and transmit output data to the client, the output data based on the result.
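
    The camouflaging idea can be sketched with a toy encoding transformation: the client transforms raw input before it leaves the device, and the server-side model, trained on identically transformed data, never sees raw inputs. The permutation-plus-mask transform and the stand-in "model" below are illustrative assumptions only.

    ```python
    import random

    def make_transform(seed: int, n: int):
        """Build a deterministic encoding transformation: a fixed feature
        permutation followed by an additive mask (a toy stand-in for the
        first encoding transformation applied to raw input data)."""
        rng = random.Random(seed)
        perm = list(range(n))
        rng.shuffle(perm)
        mask = [rng.random() for _ in range(n)]
        def encode(x):
            return [x[perm[i]] + mask[i] for i in range(n)]
        return encode

    def camouflaged_model(encoded):
        # Stand-in for a neural network trained on encoded training data.
        return sum(encoded)

    encode = make_transform(seed=7, n=4)
    raw = [1.0, 2.0, 3.0, 4.0]
    result = camouflaged_model(encode(raw))  # server only ever sees encode(raw)
    ```

    Because the transform is deterministic for a given seed, client and training pipeline stay consistent without the server learning the raw data.
    
    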

    METHODS, SYSTEMS, ARTICLES OF MANUFACTURE AND APPARATUS TO BATCH FUNCTIONS

    Publication number: US20210109785A1

    Publication date: 2021-04-15

    Application number: US17132642

    Application date: 2020-12-23

    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to batch functions. An example apparatus includes a function evaluator to, in response to receiving a function request associated with a function and an input, flag the function for batching, a timing handler to determine a waiting threshold associated with the function, a queue handler to store the function, the input, and the waiting threshold in a queue, and a client interface to, in response to a time duration the function is stored in the queue satisfying the waiting threshold, send the function and the input to a client device to increase throughput to the client device.
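
    The queue-and-threshold flow in this abstract can be sketched as follows; the class, the per-function thresholds, and the drain rule are assumed names for illustration, not the disclosed apparatus.

    ```python
    import time
    from collections import deque

    # Assumed per-function waiting thresholds, in seconds.
    WAIT_THRESHOLDS = {"resize": 0.05, "classify": 0.10}

    class FunctionBatcher:
        def __init__(self):
            self.queue = deque()  # entries: (func_name, input_data, deadline)

        def enqueue(self, func_name, input_data, now=None):
            """Flag a requested function for batching and queue it with its
            waiting threshold."""
            now = time.monotonic() if now is None else now
            threshold = WAIT_THRESHOLDS.get(func_name, 0.0)
            self.queue.append((func_name, input_data, now + threshold))

        def drain_due(self, now=None):
            """Return every queued call whose waiting threshold has elapsed,
            i.e. the calls now sent to the client device."""
            now = time.monotonic() if now is None else now
            due, remaining = [], deque()
            for entry in self.queue:
                (due if entry[2] <= now else remaining).append(entry)
            self.queue = remaining
            return [(name, data) for name, data, _ in due]

    b = FunctionBatcher()
    b.enqueue("resize", "img1", now=0.0)
    b.enqueue("classify", "img2", now=0.0)
    sent = b.drain_due(now=0.06)  # resize (0.05 s) is due; classify (0.10 s) waits
    ```
    
    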

    TECHNOLOGIES FOR BATCHING REQUESTS IN AN EDGE INFRASTRUCTURE

    Publication number: US20190394096A1

    Publication date: 2019-12-26

    Application number: US16563175

    Application date: 2019-09-06

    Abstract: Technologies for batching requests in an edge infrastructure include a compute device including circuitry configured to obtain a request for an operation to be performed at an edge location. The circuitry is also configured to determine, as a function of a parameter of the obtained request, a batch that the obtained request is to be assigned to. The batch includes one or more requests for operations to be performed at an edge location. The circuitry is also configured to assign the batch to a cloudlet at an edge location. The cloudlet includes a set of resources usable to execute the operations requested in the batch.
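
    Grouping requests into batches "as a function of a parameter" and assigning each batch to a cloudlet can be sketched as below. Using the operation type as the batching key and round-robin placement over cloudlets are assumptions made for the sketch.

    ```python
    from collections import defaultdict

    def batch_key(request: dict) -> str:
        # The function of a request parameter; here, simply the operation type.
        return request["operation"]

    def build_batches(requests):
        """Group requests into batches keyed by batch_key."""
        batches = defaultdict(list)
        for req in requests:
            batches[batch_key(req)].append(req)
        return dict(batches)

    def assign_to_cloudlets(batches, cloudlets):
        """Assign each batch to a cloudlet at the edge location, round-robin."""
        return {key: cloudlets[i % len(cloudlets)]
                for i, key in enumerate(sorted(batches))}

    reqs = [{"operation": "transcode", "id": 1},
            {"operation": "infer",     "id": 2},
            {"operation": "transcode", "id": 3}]
    batches = build_batches(reqs)
    placement = assign_to_cloudlets(batches, ["cloudlet-0", "cloudlet-1"])
    ```
    
    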
