TECHNOLOGIES FOR PROVIDING EFFICIENT MIGRATION OF SERVICES AT A CLOUD EDGE

    Publication Number: US20210103481A1

    Publication Date: 2021-04-08

    Application Number: US16969728

    Application Date: 2018-06-29

    Abstract: Technologies for providing efficient migration of services include a server device. The server device includes compute engine circuitry to execute a set of services on behalf of a terminal device and migration accelerator circuitry. The migration accelerator circuitry is to determine whether execution of the services is to be migrated from an edge station in which the present server device is located to a second edge station in which a second server device is located, determine a prioritization of the services executed by the server device, and send, in response to a determination that the services are to be migrated and as a function of the determined prioritization, data utilized by each service to the second server device of the second edge station to migrate the services. Other embodiments are also described and claimed.
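
    A minimal Python sketch of the migration flow described above, shown only to illustrate the control flow: decide whether to migrate, order the services by the determined prioritization, then send each service's data to the second edge station. The signal-strength trigger, the numeric priority convention, and every name here (Service, MigrationAccelerator, send_state) are assumptions for the example; the patent describes dedicated migration accelerator circuitry, not this code.

    # Hypothetical sketch; all names and the migration trigger are illustrative.
    from dataclasses import dataclass
    from typing import Callable, List


    @dataclass
    class Service:
        name: str
        priority: int          # higher value = migrate earlier (assumed convention)
        state: bytes = b""     # data utilized by the service


    @dataclass
    class MigrationAccelerator:
        services: List[Service]
        send_state: Callable[[str, str, bytes], None]  # (target_station, service_name, state)

        def should_migrate(self, terminal_signal_dbm: float, threshold_dbm: float = -90.0) -> bool:
            # Assumed trigger: migrate when the terminal's link to this edge station degrades.
            return terminal_signal_dbm < threshold_dbm

        def migrate(self, target_station: str) -> None:
            # Send each service's data as a function of the determined prioritization,
            # so higher-priority services can resume first at the second edge station.
            for svc in sorted(self.services, key=lambda s: s.priority, reverse=True):
                self.send_state(target_station, svc.name, svc.state)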

    MICROSERVICE DATA PATH AND CONTROL PATH PROCESSING

    Publication Number: US20220321491A1

    Publication Date: 2022-10-06

    Application Number: US17844506

    Application Date: 2022-06-20

    Abstract: Examples described herein relate to a network interface device that includes circuitry to process data and circuitry to split a received flow of a mixture of control and data content, provide the control content to a control plane processor, and provide the data content for access to the circuitry to process data, wherein the mixture of control and data content is received as part of a Remote Procedure Call. In some examples, to provide the control content to the control plane processor, the circuitry is to remove data content from a received packet and include an indicator of a location of the removed data content in the received packet.
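
    As a rough illustration of the split described above, the sketch below separates a received RPC payload into control content and data content, keeps the data in a buffer on the data-path side, and appends an indicator of the removed data's location to the control content handed to the control plane. The length-prefixed framing and the data_buffers/handle scheme are assumptions made for this example, not the framing or interfaces of the described network interface device.

    # Illustrative only; assumed layout: 4-byte big-endian control length, control bytes, data bytes.
    import struct
    from typing import Dict, Tuple

    data_buffers: Dict[int, bytes] = {}   # stand-in for data-path-managed memory
    _next_handle = 0


    def split_rpc_payload(payload: bytes) -> Tuple[bytes, int]:
        """Split an RPC payload into control content plus a handle for the removed data."""
        global _next_handle
        (ctrl_len,) = struct.unpack_from(">I", payload, 0)
        control = payload[4:4 + ctrl_len]
        data = payload[4 + ctrl_len:]

        handle = _next_handle
        _next_handle += 1
        data_buffers[handle] = data          # data content stays with the data-processing circuitry

        # Control content goes to the control plane processor with an indicator of
        # where the removed data content can be found.
        return control + struct.pack(">I", handle), handle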

    TECHNOLOGIES FOR PRE-CONFIGURING ACCELERATORS BY PREDICTING BIT-STREAMS

    Publication Number: US20210334138A1

    Publication Date: 2021-10-28

    Application Number: US17365898

    Application Date: 2021-07-01

    Abstract: Technologies for pre-configuring accelerators by predicting bit-streams include communication circuitry and a compute device. The compute device includes a compute engine to determine one or more bit-streams registered on each accelerator of multiple accelerators. The compute engine is further to predict a next job to be requested for acceleration from an application of at least one compute sled of multiple compute sleds, predict a bit-stream from a bit-stream library that is to execute the predicted next job requested to be accelerated, and determine whether the predicted bit-stream is already registered on one of the accelerators. In response to a determination that the predicted bit-stream is not registered on one of the accelerators, the compute engine is to select an accelerator from the plurality of accelerators that satisfies characteristics of the predicted bit-stream and register the predicted bit-stream on the selected accelerator.
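
    The sketch below walks through the predict-then-preregister flow in plain Python. The predictor (most frequently requested recent job), the job-to-bit-stream mapping, and the characteristics check (free gate count) are stand-ins chosen for the example; the patent does not specify them, and all names here are hypothetical.

    # Hypothetical sketch of pre-configuring accelerators from a predicted bit-stream.
    from collections import Counter
    from dataclasses import dataclass, field
    from typing import Dict, List, Optional


    @dataclass
    class Accelerator:
        name: str
        free_gates: int
        registered: List[str] = field(default_factory=list)


    @dataclass
    class BitStream:
        name: str
        gates_needed: int


    def predict_next_job(recent_jobs: List[str]) -> Optional[str]:
        # Stand-in predictor: the most frequently requested recent job.
        return Counter(recent_jobs).most_common(1)[0][0] if recent_jobs else None


    def preconfigure(recent_jobs: List[str],
                     library: Dict[str, BitStream],      # job name -> bit-stream from the library
                     accelerators: List[Accelerator]) -> None:
        job = predict_next_job(recent_jobs)
        if job is None or job not in library:
            return
        bs = library[job]
        if any(bs.name in acc.registered for acc in accelerators):
            return  # the predicted bit-stream is already registered on an accelerator
        # Select an accelerator that satisfies the bit-stream's characteristics and register it.
        for acc in accelerators:
            if acc.free_gates >= bs.gates_needed:
                acc.registered.append(bs.name)
                acc.free_gates -= bs.gates_needed
                break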
