-
Publication No.: US11870669B2
Publication Date: 2024-01-09
Application No.: US17556051
Filing Date: 2021-12-20
Applicant: Intel Corporation
Inventor: Rajesh Poornachandran , Vincent Zimmer , Subrata Banik , Marcos Carranza , Kshitij Arun Doshi , Francesc Guim Bernat , Karthik Kumar
IPC: H04L43/0817 , H04L43/0894 , G06N20/00 , H04L41/5009 , H04L43/0864
CPC classification number: H04L43/0817 , G06N20/00 , H04L41/5009 , H04L43/0864 , H04L43/0894
Abstract: An apparatus to facilitate at-scale telemetry using an interactive matrix for deterministic microservices performance is disclosed. The apparatus includes one or more processors to: receive user input comprising an objective or task corresponding to scheduling a microservice for a service, wherein the objective or task may include QoS, SLO, or ML feedback; identify interaction matrix components in an interaction matrix that match the objective or task for the microservice; identify knowledgebase components in a knowledgebase that match the objective or task for the microservice; and determine a scheduling operation for the microservice, the scheduling operation to deploy the microservice in a configuration that is in accordance with the objective or task, wherein the configuration comprises a set of hardware devices and microservice interaction points determined based on the interaction matrix components and the knowledgebase components.
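To make the scheduling idea concrete, here is a minimal Python sketch, assuming a toy data model: an interaction matrix keyed by (caller, callee, device), a knowledgebase of per-device performance history, and an objective carrying QoS/SLO bounds. All names (Objective, INTERACTION_MATRIX, KNOWLEDGEBASE, schedule) are illustrative assumptions, not the patent's actual structures.

```python
# Hypothetical sketch: match a scheduling objective against an interaction
# matrix and a knowledgebase, then pick a deployment target for a microservice.
from dataclasses import dataclass

@dataclass
class Objective:
    max_latency_ms: float        # SLO: end-to-end latency bound
    min_throughput_rps: float    # QoS: requests per second

# Interaction matrix: observed interaction cost (ms) between microservices
# when deployed on a given hardware device.
INTERACTION_MATRIX = {
    ("auth", "billing", "cpu-node-0"): 4.2,
    ("auth", "billing", "gpu-node-1"): 7.9,
}

# Knowledgebase: historical per-device performance for each microservice.
KNOWLEDGEBASE = {
    ("billing", "cpu-node-0"): {"latency_ms": 3.1, "throughput_rps": 1200},
    ("billing", "gpu-node-1"): {"latency_ms": 2.4, "throughput_rps": 900},
}

def schedule(microservice, peer, objective):
    """Pick the device whose knowledgebase record and interaction cost
    together satisfy the objective; prefer the lowest total latency."""
    best = None
    for (svc, device), perf in KNOWLEDGEBASE.items():
        if svc != microservice:
            continue
        interaction = INTERACTION_MATRIX.get((peer, svc, device), float("inf"))
        total_latency = perf["latency_ms"] + interaction
        meets_slo = (total_latency <= objective.max_latency_ms
                     and perf["throughput_rps"] >= objective.min_throughput_rps)
        if meets_slo and (best is None or total_latency < best[1]):
            best = (device, total_latency)
    return best

if __name__ == "__main__":
    print(schedule("billing", "auth",
                   Objective(max_latency_ms=8.0, min_throughput_rps=1000)))
```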
-
Publication No.: US11789878B2
Publication Date: 2023-10-17
Application No.: US16721706
Filing Date: 2019-12-19
Applicant: Intel Corporation
Inventor: Benjamin Graniello , Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm
CPC classification number: G06F13/1663 , G06F3/061 , G06F3/067 , G06F3/0635 , G06F3/0685 , G06F9/5016 , G06F11/3037 , G06F12/0246 , G06F13/1678 , G06F15/7807
Abstract: Methods, apparatus, and systems for adaptive fabric allocation for local and remote emerging-memories-based prediction schemes. In conjunction with performing memory transfers between a compute host and a memory device connected via one or more interconnect segments, memory read and write traffic is monitored for at least one interconnect segment having reconfigurable upstream lanes and downstream lanes. Predictions of expected read and write bandwidths for the at least one interconnect segment are then made. Based on the expected read and write bandwidths, the upstream lanes and downstream lanes are dynamically reconfigured. The interconnect segments include interconnect links such as Compute Express Link (CXL) flex buses and memory channels for local memory implementations, and fabric links for remote memory implementations. For local memory, management messages may be used to provide telemetry information containing the expected read and write bandwidths. For remote memory, telemetry information is provided to a fabric management component that is used to dynamically reconfigure one or more fabric links.
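A minimal sketch of the idea, assuming a simple moving-average predictor and a fixed lane budget; the actual prediction scheme and reconfiguration mechanism are not specified here, and everything below is illustrative:

```python
# Illustrative sketch: predict upcoming read and write bandwidth with a moving
# average, then split a fixed number of reconfigurable lanes between the read
# (upstream) and write (downstream) directions in proportion to the prediction.
from collections import deque

TOTAL_LANES = 8          # reconfigurable lanes on the interconnect segment
WINDOW = 4               # number of samples in the moving-average predictor

read_samples = deque(maxlen=WINDOW)    # observed read bandwidth (GB/s)
write_samples = deque(maxlen=WINDOW)   # observed write bandwidth (GB/s)

def predict(samples):
    """Moving-average prediction of the next-interval bandwidth."""
    return sum(samples) / len(samples) if samples else 0.0

def reconfigure_lanes():
    """Split lanes by the predicted read:write ratio, keeping >=1 lane each way."""
    rd, wr = predict(read_samples), predict(write_samples)
    total = rd + wr
    if total == 0:
        return TOTAL_LANES // 2, TOTAL_LANES - TOTAL_LANES // 2
    read_lanes = max(1, min(TOTAL_LANES - 1, round(TOTAL_LANES * rd / total)))
    return read_lanes, TOTAL_LANES - read_lanes

if __name__ == "__main__":
    # Monitored traffic: reads dominate, so more lanes go to the read path.
    for rd, wr in [(12.0, 3.0), (14.0, 2.0), (13.0, 4.0), (15.0, 3.0)]:
        read_samples.append(rd)
        write_samples.append(wr)
    print(reconfigure_lanes())   # e.g. (7, 1) when reads dominate
```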
-
Publication No.: US11675326B2
Publication Date: 2023-06-13
Application No.: US17330738
Filing Date: 2021-05-26
Applicant: Intel Corporation
Inventor: Nicolas A. Salhuana , Karthik Kumar , Thomas Willhalm , Francesc Guim Bernat , Narayan Ranganathan
IPC: G06F9/06 , G05B19/042 , H03K19/17732 , G06F8/41 , H03K19/17728
CPC classification number: G05B19/0426 , G06F8/44 , G06F8/456 , H03K19/17728 , H03K19/17732 , G05B2219/21109
Abstract: In one embodiment, an apparatus comprises a fabric controller of a first computing node. The fabric controller is to receive, from a second computing node via a network fabric that couples the first computing node to the second computing node, a request to execute a kernel on a field-programmable gate array (FPGA) of the first computing node; instruct the FPGA to execute the kernel; and send a result of the execution of the kernel to the second computing node via the network fabric.
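A toy end-to-end simulation of that request/response flow, with a Python queue standing in for the network fabric and an ordinary function standing in for the FPGA kernel; the class and table names are assumptions for illustration:

```python
# Minimal simulation of the described flow: node 2 sends a kernel-execution
# request over the "fabric", node 1's fabric controller runs the kernel on its
# "FPGA" and returns the result.
import queue
from dataclasses import dataclass

fabric = queue.Queue()   # stands in for the network fabric between nodes

@dataclass
class KernelRequest:
    kernel_name: str
    args: tuple
    reply_to: queue.Queue

# Node 1: kernels registered with the fabric controller (FPGA stand-ins).
fpga_kernels = {"vector_add": lambda a, b: [x + y for x, y in zip(a, b)]}

def fabric_controller_node1():
    """Receive a kernel-execution request, run it, send back the result."""
    req = fabric.get()
    result = fpga_kernels[req.kernel_name](*req.args)
    req.reply_to.put(result)

def compute_node2():
    """Request execution of a kernel on node 1's FPGA via the fabric."""
    reply = queue.Queue()
    fabric.put(KernelRequest("vector_add", ([1, 2, 3], [10, 20, 30]), reply))
    fabric_controller_node1()          # in reality this runs on the other node
    return reply.get()

if __name__ == "__main__":
    print(compute_node2())   # [11, 22, 33]
```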
-
Publication No.: US11650951B2
Publication Date: 2023-05-16
Application No.: US17363867
Filing Date: 2021-06-30
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Mustafa Hajeer
IPC: G06F15/173 , H04L67/1097 , G06F15/167 , H04L9/40
CPC classification number: G06F15/17331 , G06F15/167 , H04L63/08 , H04L63/0892 , H04L63/102 , H04L67/1097
Abstract: An apparatus is described. The apparatus includes logic circuitry embedded in at least one of a memory controller, network interface and peripheral control hub to process a function as a service (FaaS) function call embedded in a request. The request is formatted according to a protocol. The protocol allows a remote computing system to access a memory that is coupled to the memory controller without invoking processing cores of a local computing system that the memory controller is a component of.
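A hedged sketch of the idea: a request that embeds a FaaS call is handled by logic co-located with the memory controller, so no host core is involved. The request format and function table are illustrative assumptions, not the protocol defined in the patent:

```python
# Hedged sketch: a remote request that embeds a FaaS function call is handled
# by logic attached to the memory controller, directly against local memory.
LOCAL_MEMORY = {0x1000: 7, 0x1008: 35}   # address -> value

# Functions the embedded logic can execute against local memory.
FAAS_TABLE = {
    "sum_two": lambda mem, a, b: mem[a] + mem[b],
}

def memory_controller_logic(request):
    """Process a protocol request; if it embeds a FaaS call, execute it here."""
    if request.get("faas"):
        fn = FAAS_TABLE[request["faas"]["name"]]
        return fn(LOCAL_MEMORY, *request["faas"]["args"])
    # Plain memory-access path (no FaaS call embedded).
    return LOCAL_MEMORY[request["addr"]]

if __name__ == "__main__":
    # The remote system reads memory and invokes a function without host cores.
    print(memory_controller_logic({"addr": 0x1000}))                       # 7
    print(memory_controller_logic(
        {"faas": {"name": "sum_two", "args": (0x1000, 0x1008)}}))          # 42
```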
-
Publication No.: US11611491B2
Publication Date: 2023-03-21
Application No.: US16235159
Filing Date: 2018-12-28
Applicant: Intel Corporation
Inventor: Ned M. Smith , Ben McCahill , Francesc Guim Bernat , Felipe Pastor Beneyto , Karthik Kumar , Timothy Verrall
IPC: H04L41/5009 , H04L67/10 , H04L41/5019 , H04L67/51 , H04L67/12
Abstract: An architecture to enable verification, ranking, and identification of respective edge service properties and associated service level agreement (SLA) properties, such as in an edge cloud or other edge computing environment, is disclosed. In an example, management and use of service information for an edge service includes: providing SLA information for an edge service to an operational device, for accessing an edge service hosted in an edge computing environment, with the SLA information providing reputation information for computing functions of the edge service according to an identified SLA; receiving a service request for use of the computing functions of the edge service, under the identified SLA; requesting, from the edge service, performance of the computing functions of the edge service according to the service request; and tracking the performance of the computing functions of the edge service according to the service request and compliance with the identified SLA.
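One way to picture the tracking step is a per-service SLA record that counts compliant and non-compliant requests and derives a reputation score; the data model and fields below are assumptions, not the patent's scheme:

```python
# Illustrative sketch: track each request's measured latency against the SLA
# bound and maintain a simple reputation score for the edge service.
from dataclasses import dataclass

@dataclass
class EdgeServiceSLA:
    max_latency_ms: float
    completed: int = 0
    violations: int = 0

    def record(self, measured_latency_ms: float) -> bool:
        """Track one request; return True if it complied with the SLA."""
        self.completed += 1
        compliant = measured_latency_ms <= self.max_latency_ms
        if not compliant:
            self.violations += 1
        return compliant

    @property
    def reputation(self) -> float:
        """Fraction of tracked requests that met the SLA (1.0 = perfect)."""
        if self.completed == 0:
            return 1.0
        return 1.0 - self.violations / self.completed

if __name__ == "__main__":
    sla = EdgeServiceSLA(max_latency_ms=20.0)
    for latency in [12.0, 18.0, 25.0, 9.0]:   # one request exceeds the SLA
        sla.record(latency)
    print(f"reputation = {sla.reputation:.2f}")   # 0.75
```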
-
Publication No.: US11609859B2
Publication Date: 2023-03-21
Application No.: US16950233
Filing Date: 2020-11-17
Applicant: Intel Corporation
Inventor: Karthik Kumar , Thomas Willhalm , Francesc Guim Bernat , Brian J. Slechta
IPC: G06F12/0891 , G06F9/30
Abstract: Embodiments of the invention include a machine-readable medium having stored thereon at least one instruction, which if performed by a machine causes the machine to perform a method that includes decoding, with a node, an invalidate instruction; and executing, with the node, the invalidate instruction for invalidating a memory range specified across a fabric interconnect.
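A simplified software model of such a range invalidation, assuming a 64-byte line size and a per-node cache map; in hardware the remote invalidations would travel as fabric messages rather than local deletions:

```python
# Simplified model (not the actual ISA): execute an "invalidate range"
# instruction by dropping every cached line that overlaps the specified range,
# including lines held by remote nodes reached over the fabric interconnect.
CACHE_LINE = 64

# node_id -> {cache-line base address -> data}; node 1 is remote over fabric.
caches = {
    0: {0x1000: b"local", 0x2000: b"keep"},
    1: {0x1040: b"remote"},
}

def execute_invalidate(start: int, length: int):
    """Invalidate every cached line overlapping [start, start + length)."""
    end = start + length
    for node_id, cache in caches.items():
        doomed = [base for base in cache
                  if base < end and base + CACHE_LINE > start]
        for base in doomed:
            del cache[base]   # for remote nodes this would be a fabric message

if __name__ == "__main__":
    execute_invalidate(0x1000, 0x100)   # covers 0x1000..0x10FF on both nodes
    print(caches)   # node 0 keeps only its 0x2000 line; node 1 is emptied
```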
-
Publication No.: US11570264B1
Publication Date: 2023-01-31
Application No.: US17557604
Filing Date: 2021-12-21
Applicant: Intel Corporation
Inventor: Rajesh Poornachandran , Vincent Zimmer , Subrata Banik , Marcos Carranza , Kshitij Arun Doshi , Francesc Guim Bernat , Karthik Kumar
IPC: H04L67/51 , H04L41/5009 , H04L9/32 , H04L67/562 , H04L9/00
Abstract: An apparatus to facilitate provenance audit trails for microservices architectures is disclosed. The apparatus includes one or more processors to: obtain, by a microservice of a service hosted in a datacenter, provisioned credentials for the microservice based on an attestation protocol; generate, for a task performed by the microservice, provenance metadata for the task, the provenance metadata including identification of the microservice, operating state of at least one of a hardware resource or a software resource used to execute the microservice and the task, and operating state of a sidecar of the microservice during the task; encrypt the provenance metadata with the provisioned credentials for the microservice; and record the encrypted provenance metadata in a local blockchain of provenance metadata maintained for the hardware resource executing the task and the microservice.
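A hedged sketch of the recording flow: provenance metadata is serialized, "encrypted" with the provisioned credentials (a toy XOR stand-in, not a real cipher), and appended to a hash-chained log standing in for the local blockchain. All structures are illustrative assumptions:

```python
# Hedged sketch: encrypt provenance metadata with provisioned credentials and
# append it to a hash-chained record standing in for the local blockchain.
import hashlib
import json

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for encryption with the provisioned credentials."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class ProvenanceChain:
    """Append-only hash chain of encrypted provenance records."""
    def __init__(self):
        self.blocks = []          # each block: (block_hash, ciphertext)

    def record(self, ciphertext: bytes):
        prev_hash = self.blocks[-1][0] if self.blocks else b"\x00" * 32
        block_hash = hashlib.sha256(prev_hash + ciphertext).digest()
        self.blocks.append((block_hash, ciphertext))

if __name__ == "__main__":
    credentials = b"provisioned-credential"     # from the attestation protocol
    metadata = {
        "microservice": "payments",
        "hw_state": {"cpu_util": 0.42},
        "sidecar_state": "healthy",
    }
    chain = ProvenanceChain()
    chain.record(xor_encrypt(json.dumps(metadata).encode(), credentials))
    print(len(chain.blocks), chain.blocks[0][0].hex()[:16])
```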
-
Publication No.: US20230022544A1
Publication Date: 2023-01-26
Application No.: US17957735
Filing Date: 2022-09-30
Applicant: Intel Corporation
Inventor: Thomas J. Willhalm , Francesc Guim Bernat , Karthik Kumar , Marcos E. Carranza
Abstract: In one embodiment, an apparatus couples to a host processor over a Compute Express Link (CXL)-based link. The apparatus includes a transaction queue to queue memory transactions to be completed in an addressable memory coupled to the apparatus, a transaction cache, conflict detection circuitry to determine whether a conflict exists between memory transactions, and transaction execution circuitry. The transaction execution circuitry may access a transaction from the transaction queue, the transaction to implement one or more memory operations in the memory, store data from the memory to be accessed by the transaction operations in the transaction cache, execute operations of the transaction, including modifying data from the memory location stored in the transaction cache, and based on completion of the transaction, cause the modified data from the transaction cache to be stored in the memory.
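A simplified software model of that pipeline, assuming address-overlap conflict detection and a write-back transaction cache; the classes below are illustrative, not the apparatus's actual interfaces:

```python
# Simplified model: transactions are queued, checked for address conflicts
# against in-flight transactions, staged in a transaction cache, and written
# back to "memory" on completion.
from collections import deque

memory = {0x10: 1, 0x20: 2}          # addressable memory behind the device

class Transaction:
    def __init__(self, name, ops):
        self.name = name
        self.ops = ops               # list of (addr, fn) read-modify-write ops
        self.addrs = {a for a, _ in ops}

class TransactionEngine:
    def __init__(self):
        self.queue = deque()
        self.in_flight = []          # transactions currently executing

    def conflicts(self, txn):
        """Conflict if any address overlaps an in-flight transaction."""
        return any(txn.addrs & other.addrs for other in self.in_flight)

    def run_next(self):
        txn = self.queue.popleft()
        if self.conflicts(txn):
            self.queue.append(txn)   # defer until the conflict clears
            return
        self.in_flight.append(txn)
        cache = {a: memory[a] for a in txn.addrs}     # transaction cache
        for addr, fn in txn.ops:
            cache[addr] = fn(cache[addr])             # modify the cached copy
        memory.update(cache)                          # write back on completion
        self.in_flight.remove(txn)

if __name__ == "__main__":
    engine = TransactionEngine()
    engine.queue.append(Transaction("t1", [(0x10, lambda v: v + 100)]))
    engine.run_next()
    print(memory)   # address 0x10 now holds 101; 0x20 is untouched
```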
-
Publication No.: US11537447B2
Publication Date: 2022-12-27
Application No.: US16969728
Filing Date: 2018-06-29
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Susanne M. Balle , Ignacio Astilleros Diez , Timothy Verrall , Ned M. Smith
Abstract: Technologies for providing efficient migration of services include a server device. The server device includes compute engine circuitry to execute a set of services on behalf of a terminal device and migration accelerator circuitry. The migration accelerator circuitry is to determine whether execution of the services is to be migrated from an edge station in which the present server device is located to a second edge station in which a second server device is located, determine a prioritization of the services executed by the server device, and send, in response to a determination that the services are to be migrated and as a function of the determined prioritization, data utilized by each service to the second server device of the second edge station to migrate the services. Other embodiments are also described and claimed.
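A minimal sketch of the migration decision and prioritized transfer, assuming a signal-strength handoff trigger and latency sensitivity as the prioritization key; both are illustrative choices, not the patent's criteria:

```python
# Hedged sketch: decide whether to migrate, prioritize the services, and send
# each service's data to the second edge station in priority order.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    latency_sensitivity: int   # higher = more urgent to keep running
    state_bytes: bytes

def should_migrate(signal_to_current_station: float) -> bool:
    """Assumed trigger: the terminal device is leaving the current station."""
    return signal_to_current_station < 0.3

def migrate(services, send):
    """Send each service's state, most latency-sensitive first."""
    for svc in sorted(services, key=lambda s: s.latency_sensitivity,
                      reverse=True):
        send(svc.name, svc.state_bytes)

if __name__ == "__main__":
    services = [
        Service("video-analytics", latency_sensitivity=9, state_bytes=b"..."),
        Service("log-uploader", latency_sensitivity=2, state_bytes=b"..."),
    ]
    if should_migrate(signal_to_current_station=0.2):
        migrate(services, send=lambda name, data: print("migrating", name))
```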
-
Publication No.: US11456966B2
Publication Date: 2022-09-27
Application No.: US17500543
Filing Date: 2021-10-13
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Mark A. Schmisseur , Timothy Verrall
IPC: G06F15/173 , H04L47/765 , H04L47/70 , G06F9/50 , G06N20/00
Abstract: There is disclosed in one example an application-specific integrated circuit (ASIC), including: an artificial intelligence (AI) circuit; and circuitry to: identify a flow, the flow including traffic diverted from a core cloud service of a network to be serviced by an edge node closer to an edge of the network than to the core of the network; receive telemetry related to the flow, the telemetry including fine-grained and flow-level network monitoring data for the flow; operate the AI circuit to predict, from the telemetry, a future service-level demand for the edge node; and cause a service parameter of the edge node to be tuned according to the prediction.
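As a rough illustration, a least-squares trend over recent flow telemetry can stand in for the AI circuit's prediction, with the edge node's provisioned bandwidth as the tuned service parameter; the headroom factor and telemetry fields are assumptions:

```python
# Illustrative sketch: extrapolate a linear trend from recent flow telemetry
# (standing in for the AI circuit's prediction) and tune the edge node's
# provisioned bandwidth accordingly.
def predict_next(demand_samples):
    """Least-squares linear-trend extrapolation of next-interval demand."""
    n = len(demand_samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(demand_samples) / n
    denom = sum((x - x_mean) ** 2 for x in xs) or 1.0
    slope = sum((x - x_mean) * (y - y_mean)
                for x, y in zip(xs, demand_samples)) / denom
    return y_mean + slope * (n - x_mean)    # value at the next interval

def tune_edge_node(telemetry_gbps, headroom=1.2):
    """Set the edge node's provisioned bandwidth from the predicted demand."""
    predicted = predict_next(telemetry_gbps)
    return round(predicted * headroom, 2)

if __name__ == "__main__":
    # Flow-level telemetry for traffic diverted from the core cloud service.
    samples = [4.0, 4.5, 5.1, 5.8, 6.2]        # Gb/s over recent intervals
    print(tune_edge_node(samples))              # provision above the trend
```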