-
Publication No.: US11416295B2
Publication Date: 2022-08-16
Application No.: US16563171
Filing Date: 2019-09-06
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Suraj Prabhakaran , Timothy Verrall , Thomas Willhalm , Mark Schmisseur
IPC: G06F9/50 , G06F16/27 , G06F21/62 , G06F16/23 , H04L9/06 , H04L9/32 , H04L41/12 , H04L47/70 , H04L67/52 , H04L67/60 , G06F21/60 , H04L9/08
Abstract: Technologies for providing efficient data access in an edge infrastructure include a compute device comprising circuitry configured to identify pools of resources that are usable to access data at an edge location. The circuitry is also configured to receive a request to execute a function at an edge location. The request identifies a data access performance target for the function. The circuitry is also configured to map, based on a data access performance of each pool and the data access performance target of the function, the function to a set of the pools to satisfy the data access performance target.
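A minimal Python sketch of the pool-mapping idea in this abstract, assuming hypothetical ResourcePool and FunctionRequest records and a simple latency/capacity filter; the names and selection heuristic are illustrative, not the claimed mechanism.
```python
from dataclasses import dataclass

@dataclass
class ResourcePool:
    name: str
    access_latency_ms: float   # measured data-access performance of the pool
    free_capacity_gb: float

@dataclass
class FunctionRequest:
    function_id: str
    latency_target_ms: float   # data access performance target for the function
    data_size_gb: float

def map_function_to_pools(req: FunctionRequest, pools: list[ResourcePool]) -> list[ResourcePool]:
    """Pick the set of pools whose measured performance satisfies the target."""
    candidates = [p for p in pools
                  if p.access_latency_ms <= req.latency_target_ms
                  and p.free_capacity_gb >= req.data_size_gb]
    # Lowest-latency pools first, i.e. the most headroom relative to the target.
    return sorted(candidates, key=lambda p: p.access_latency_ms)

pools = [ResourcePool("local-nvme", 0.2, 100), ResourcePool("pooled-dram", 0.05, 16),
         ResourcePool("remote-hdd", 4.0, 2000)]
req = FunctionRequest("inference-fn", latency_target_ms=0.5, data_size_gb=8)
print([p.name for p in map_function_to_pools(req, pools)])
```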
-
Publication No.: US20220222274A1
Publication Date: 2022-07-14
Application No.: US17580436
Filing Date: 2022-01-20
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Suraj Prabhakaran , Ramanathan Sethuraman , Timothy Verrall , Ned Smith
Abstract: Technologies for providing dynamic persistence of data in edge computing include a device comprising circuitry configured to determine multiple different logical domains of data storage resources for use in storing data from a client compute device at an edge of a network. Each logical domain has a different set of characteristics. The circuitry is also configured to receive, from the client compute device, a request to persist data. The request includes a target persistence objective indicative of an objective to be satisfied in the storage of the data. Additionally, the circuitry is configured to select, as a function of the characteristics of the logical domains and the target persistence objective, a logical domain into which to persist the data and provide the data to the selected logical domain.
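A rough sketch of the selection step described above, assuming each logical domain is summarized by a durability level and a write latency, and that the target persistence objective is expressed as thresholds on both; all names are hypothetical.
```python
from dataclasses import dataclass

@dataclass
class LogicalDomain:
    name: str
    durability: float        # e.g. probability the data survives a failure
    write_latency_ms: float

def select_domain(domains, target_durability, max_write_latency_ms):
    """Choose the lowest-latency domain that still meets the persistence objective."""
    eligible = [d for d in domains if d.durability >= target_durability
                and d.write_latency_ms <= max_write_latency_ms]
    if not eligible:
        raise RuntimeError("no logical domain satisfies the target persistence objective")
    return min(eligible, key=lambda d: d.write_latency_ms)

def persist(data: bytes, domain: LogicalDomain, store: dict):
    store.setdefault(domain.name, []).append(data)   # stand-in for writing to the domain

domains = [LogicalDomain("local-pmem", durability=0.9, write_latency_ms=0.1),
           LogicalDomain("replicated-ssd", durability=0.999, write_latency_ms=2.0)]
store = {}
persist(b"sensor-batch", select_domain(domains, 0.99, 5.0), store)
print(store)
```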
-
Publication No.: US20220197729A1
Publication Date: 2022-06-23
Application No.: US17133112
Filing Date: 2020-12-23
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Patrick G. Kutch , Alexander Bachmutsky , Nicolae Octavian Popovici
Abstract: An apparatus comprising a network interface controller comprising a queue for messages for a thread executing on a host computing system, wherein the queue is dedicated to the thread; and circuitry to send a notification to the host computing system to resume execution of the thread when a monitoring rule for the queue has been triggered.
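A software analogue of the queue-monitoring behavior, assuming a queue-depth threshold as the monitoring rule and a threading event standing in for the resume notification; the mechanism the abstract describes lives in the network interface controller hardware, so everything here is an illustrative assumption.
```python
import queue
import threading
import time

class ThreadQueueMonitor:
    """Per-thread message queue with a monitoring rule that wakes the thread."""
    def __init__(self, depth_threshold: int):
        self.msgs = queue.Queue()
        self.depth_threshold = depth_threshold      # hypothetical rule: queue depth
        self.resume = threading.Event()             # stand-in for the resume notification

    def enqueue(self, msg):
        self.msgs.put(msg)
        if self.msgs.qsize() >= self.depth_threshold:
            self.resume.set()                       # "notify" the host to resume the thread

def worker(mon: ThreadQueueMonitor):
    while True:
        mon.resume.wait()                           # thread sleeps until the rule triggers
        mon.resume.clear()
        while not mon.msgs.empty():
            print("processing", mon.msgs.get())

mon = ThreadQueueMonitor(depth_threshold=3)
threading.Thread(target=worker, args=(mon,), daemon=True).start()
for i in range(3):
    mon.enqueue(f"msg-{i}")
time.sleep(0.2)                                     # give the daemon thread time to drain
```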
-
Publication No.: US20220138003A1
Publication Date: 2022-05-05
Application No.: US17504062
Filing Date: 2021-10-18
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Ned M. Smith , Thomas Willhalm , Timothy Verrall
IPC: G06F9/48 , G06F16/23 , H04L9/06 , G06F16/27 , H04L9/32 , H04L12/66 , H04L41/12 , H04L47/70 , H04L67/52 , H04L67/60 , G06F9/50 , G06F21/60 , H04L9/08 , G06F11/30 , G06F9/455
Abstract: Methods, apparatus, systems, and machine-readable storage media for an edge computing device that is enabled to access and select between local and remote acceleration resources for edge computing processing are disclosed. In an example, an edge computing device obtains first telemetry information that indicates availability of local acceleration circuitry to execute a function, and obtains second telemetry information that indicates availability of a remote acceleration function to execute the function. An estimated time (and cost or other identifiable or estimable considerations) to execute the function at the respective location is identified. The use of the local acceleration circuitry or the remote acceleration resource is selected based on the estimated time and other appropriate factors in relation to a service level agreement.
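A compact sketch of the local-versus-remote selection, assuming each option is reduced to an availability flag, an estimated execution time, and an illustrative cost weight compared against an SLA deadline; the field names and tie-breaking rule are assumptions, not the claimed method.
```python
from dataclasses import dataclass

@dataclass
class AccelOption:
    name: str
    available: bool
    est_time_ms: float   # estimated execution time derived from telemetry
    est_cost: float      # illustrative cost weight

def choose_accelerator(local: AccelOption, remote: AccelOption, sla_deadline_ms: float):
    """Pick local or remote acceleration based on estimated time/cost versus the SLA."""
    candidates = [o for o in (local, remote) if o.available and o.est_time_ms <= sla_deadline_ms]
    if not candidates:
        return None   # neither option can meet the service level agreement
    return min(candidates, key=lambda o: (o.est_time_ms, o.est_cost))

print(choose_accelerator(AccelOption("local-fpga", True, 12.0, 1.0),
                         AccelOption("remote-gpu", True, 8.0, 3.0),
                         sla_deadline_ms=10.0))
```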
-
Publication No.: US20220121566A1
Publication Date: 2022-04-21
Application No.: US17561167
Filing Date: 2021-12-23
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Alexander Bachmutsky , Marcos Carranza
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for network service management. An example apparatus includes microservice translation circuitry to query, at a first time, a memory address range corresponding to a plurality of services, and generate state information corresponding to the plurality of services at the first time. The example apparatus also includes microservice request circuitry to query, at a second time, the memory address range to identify a memory address state change, the memory address state change indicative of an instantiation request for at least one of the plurality of services, and microservice instantiation circuitry to cause a first compute device to instantiate the at least one of the plurality of services.
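A toy illustration of the memory-address-range polling flow, assuming one byte per service in a shared range and treating any changed byte as an instantiation request; the layout and function names are invented for the example.
```python
# Hypothetical shared "memory address range": one byte per service, non-zero = instantiation request.
SERVICES = ["auth", "billing", "telemetry"]
address_range = bytearray(len(SERVICES))

def snapshot(addr: bytearray) -> bytes:
    """First query: record the state of the range (the translation step)."""
    return bytes(addr)

def find_requests(addr: bytearray, baseline: bytes) -> list[str]:
    """Second query: any changed byte is treated as an instantiation request."""
    return [SERVICES[i] for i, b in enumerate(addr) if b != baseline[i]]

def instantiate(service: str):
    print(f"instantiating {service} on a compute device")   # stand-in for real orchestration

state_t1 = snapshot(address_range)
address_range[SERVICES.index("billing")] = 1                 # a writer requests "billing"
for svc in find_requests(address_range, state_t1):
    instantiate(svc)
```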
-
Publication No.: US20220121481A1
Publication Date: 2022-04-21
Application No.: US17561835
Filing Date: 2021-12-24
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Alexander Bachmutsky , Marcos E. Carranza , Cesar Ignacio Martinez Spessot
Abstract: Examples described herein relate to offloading, to a switch, service mesh management and the selection of a memory pool accessed by services associated with the service mesh. Based on telemetry data of one or more nodes and network traffic, one or more processes can be allocated to execute on the one or more nodes and a memory pool can be selected to store data generated by the one or more processes.
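A simplified sketch of the placement and memory-pool selection the abstract describes, assuming per-node CPU and traffic metrics and per-pool free-byte counts; the heuristic and data shapes are illustrative only and not what a switch would actually run.
```python
def place_services(nodes: dict, traffic: dict, processes: list, pools: dict):
    """Toy placement: run each process on the least-loaded node, then pick the pool
    with the most free bytes near that node. Names and metrics are assumptions."""
    placements = {}
    for proc in processes:
        node = min(nodes, key=lambda n: nodes[n]["cpu_util"] + traffic.get(n, 0.0))
        pool = max(pools, key=lambda p: pools[p]["free_bytes"]
                   if pools[p]["near_node"] == node else -1)
        placements[proc] = (node, pool)
    return placements

nodes = {"node-a": {"cpu_util": 0.7}, "node-b": {"cpu_util": 0.3}}
traffic = {"node-a": 0.2, "node-b": 0.1}
pools = {"pool-dram": {"free_bytes": 2**33, "near_node": "node-b"},
         "pool-cxl":  {"free_bytes": 2**35, "near_node": "node-a"}}
print(place_services(nodes, traffic, ["svc-frontend", "svc-cart"], pools))
```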
-
Publication No.: US20220038388A1
Publication Date: 2022-02-03
Application No.: US17500543
Filing Date: 2021-10-13
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Mark A. Schmisseur , Timothy Verrall
IPC: H04L12/919 , H04L12/911 , G06F9/50 , G06N20/00
Abstract: There is disclosed in one example an application-specific integrated circuit (ASIC), including: an artificial intelligence (AI) circuit; and circuitry to: identify a flow, the flow including traffic diverted from a core cloud service of a network to be serviced by an edge node closer to an edge of the network than to the core of the network; receive telemetry related to the flow, the telemetry including fine-grained and flow-level network monitoring data for the flow; operate the AI circuit to predict, from the telemetry, a future service-level demand for the edge node; and cause a service parameter of the edge node to be tuned according to the prediction.
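A toy stand-in for the prediction-and-tuning loop, using a moving average in place of the AI circuit and a worker count as the tuned service parameter; both substitutions are assumptions made purely for illustration.
```python
from collections import deque

class DemandPredictor:
    """Stand-in for the AI circuit: predicts next-interval demand for an edge node
    from recent flow telemetry samples."""
    def __init__(self, window: int = 8):
        self.samples = deque(maxlen=window)

    def observe(self, requests_per_sec: float):
        self.samples.append(requests_per_sec)

    def predict(self) -> float:
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

def tune_edge_node(predicted_rps: float, capacity_per_worker: float = 100.0) -> int:
    """The tuned service parameter here is simply a worker count sized to the prediction."""
    return max(1, round(predicted_rps / capacity_per_worker))

p = DemandPredictor()
for rps in (80, 120, 150, 170):   # flow-level telemetry samples
    p.observe(rps)
print("workers:", tune_edge_node(p.predict()))
```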
-
Publication No.: US11212085B2
Publication Date: 2021-12-28
Application No.: US16368982
Filing Date: 2019-03-29
Applicant: Intel Corporation
Inventor: Timothy Verrall , Thomas Willhalm , Francesc Guim Bernat , Karthik Kumar , Ned M. Smith , Rajesh Poornachandran , Kapil Sood , Tarun Viswanathan , John J. Browne , Patrick Kutch
IPC: H04L9/08
Abstract: Technologies for accelerated key caching in an edge hierarchy include multiple edge appliance devices organized in tiers. An edge appliance device receives a request for a key, such as a private key. The edge appliance device determines whether the key is included in a local key cache and, if not, requests the key from an edge appliance device included in an inner tier of the edge hierarchy. The edge appliance device may request the key from an edge appliance device included in a peer tier of the edge hierarchy. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys in the key cache for eviction. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys for pre-fetching. Those functions of the edge appliance device may be performed by an accelerator such as an FPGA. Other embodiments are described and claimed.
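A sketch of the tiered key lookup, assuming an LRU policy in place of the per-tenant accelerated eviction logic and an inner-tier object standing in for the inner edge appliance device; class and key names are hypothetical.
```python
from collections import OrderedDict

class EdgeKeyCache:
    """Tiered key lookup: check the local cache, then ask the inner tier, with LRU eviction
    standing in for the per-tenant accelerated eviction logic."""
    def __init__(self, capacity: int, inner_tier=None):
        self.cache = OrderedDict()
        self.capacity = capacity
        self.inner_tier = inner_tier   # another EdgeKeyCache, or None at the innermost tier

    def get(self, key_id: str) -> bytes:
        if key_id in self.cache:
            self.cache.move_to_end(key_id)           # refresh recency on a hit
            return self.cache[key_id]
        if self.inner_tier is None:
            raise KeyError(key_id)
        key = self.inner_tier.get(key_id)            # miss: request from the inner tier
        self.cache[key_id] = key
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)           # evict the least recently used key
        return key

core = EdgeKeyCache(capacity=1024)
core.cache["tenant1/private-key"] = b"...key material..."
edge = EdgeKeyCache(capacity=4, inner_tier=core)
print(edge.get("tenant1/private-key") is not None)
```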
-
Publication No.: US20210349840A1
Publication Date: 2021-11-11
Application No.: US17443379
Filing Date: 2021-07-26
Applicant: Intel Corporation
Inventor: Karthik Kumar , Francesc Guim Bernat
Abstract: In one embodiment, an apparatus includes: an interface to couple a plurality of devices of a system and enable communication according to a Compute Express Link (CXL) protocol. The interface may receive a consistent memory request having a type indicator to indicate a type of consistency to be applied to the consistent memory request. A request scheduler coupled to the interface may receive the consistent memory request and schedule it for execution according to the type of consistency, based at least in part on a priority of the consistent memory request and one or more pending consistent memory requests. Other embodiments are described and claimed.
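A minimal model of the request scheduling described above, assuming a priority heap ordered by priority and then arrival order, with a simple enum for the consistency-type indicator; this is an illustrative sketch, not the CXL protocol machinery itself.
```python
import heapq
from dataclasses import dataclass, field
from enum import Enum

class ConsistencyType(Enum):
    RELAXED = 0
    ORDERED = 1
    STRICT = 2

@dataclass(order=True)
class ConsistentRequest:
    priority: int                                  # lower value = scheduled first
    seq: int                                       # arrival order breaks ties
    ctype: ConsistencyType = field(compare=False)  # the request's type indicator
    addr: int = field(compare=False)

class RequestScheduler:
    """Toy scheduler: orders pending consistent-memory requests by priority, then arrival."""
    def __init__(self):
        self.pending, self.seq = [], 0

    def submit(self, ctype: ConsistencyType, addr: int, priority: int):
        heapq.heappush(self.pending, ConsistentRequest(priority, self.seq, ctype, addr))
        self.seq += 1

    def next(self) -> ConsistentRequest:
        return heapq.heappop(self.pending)

s = RequestScheduler()
s.submit(ConsistencyType.STRICT, 0x1000, priority=0)
s.submit(ConsistencyType.RELAXED, 0x2000, priority=5)
print(s.next().ctype)   # ConsistencyType.STRICT is scheduled first
```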
-
Publication No.: US20210328934A1
Publication Date: 2021-10-21
Application No.: US17359204
Filing Date: 2021-06-25
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Marcos Carranza , Rita Wouhaybi , Cesar Martinez-Spessot
IPC: H04L12/851 , H04L29/06
Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for edge data prioritization. An example apparatus includes at least one memory, instructions, and processor circuitry to at least one of execute or instantiate the instructions to identify an association of a data packet with a data stream based on one or more data stream parameters included in the data packet corresponding to the data stream, the data packet associated with a first priority, execute a model based on the one or more data stream parameters to generate a model output, determine a second priority of at least one of the data packet or the data stream based on the model output, the model output indicative of an adjustment of the first priority to the second priority, and cause transmission of at least one of the data packet or the data stream based on the second priority.
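A small sketch of the model-driven reprioritization, assuming a packet dictionary carrying stream parameters and a callable model returning a score in [0, 1]; the mapping from model output to adjusted priority is an invented example.
```python
def reprioritize(packet: dict, model) -> int:
    """Run a model on the packet's stream parameters and return the adjusted priority."""
    stream_params = packet["stream_params"]          # e.g. stream id, latency class
    score = model(stream_params)                     # model output in [0, 1]
    first_priority = packet["priority"]
    # Map the model score onto a small adjustment around the original priority.
    second_priority = max(0, first_priority - round(score * 2))
    return second_priority

def toy_model(params: dict) -> float:
    return 1.0 if params.get("latency_class") == "realtime" else 0.1

pkt = {"priority": 3, "stream_params": {"stream_id": 7, "latency_class": "realtime"}}
pkt["priority"] = reprioritize(pkt, toy_model)
print("transmit with priority", pkt["priority"])
```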