-
Publication No.: US20220321434A1
Publication Date: 2022-10-06
Application No.: US17848898
Application Date: 2022-06-24
Applicant: Intel Corporation
Inventor: Andrzej KURIATA , Francesc GUIM BERNAT , Karthik KUMAR , Susanne M. BALLE , Alexander BACHMUTSKY , Duane E. GALBI , Nagabhushan CHITLUR , Sundar NADATHUR
IPC: H04L43/04 , G06F9/54 , H04L67/133 , H04L43/0852 , H04L67/51
Abstract: Reliability and performance of a data center is increased by processing telemetry data in a network device in the data center. A Telemetry Correlation Engine (TCE) in the network device correlates host level telemetry received from a compute node with low-level network device telemetry collected in the network device to identify performance bottlenecks for microservices based applications. The Telemetry Correlation Engine processes and analyzes the telemetry data from the compute node and network statistics available in the network device.
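A minimal Python sketch of how such a Telemetry Correlation Engine might join host-level and network-device telemetry to flag bottlenecks; the metric names (`cpu_util`, `pkt_drops`) and thresholds are illustrative assumptions, not taken from the patent.

```python
def correlate(host_telemetry, net_telemetry, cpu_threshold=0.9, drop_threshold=100):
    """Join per-microservice host metrics with switch-level network
    statistics and flag likely performance bottlenecks."""
    bottlenecks = []
    for svc, host in host_telemetry.items():
        net = net_telemetry.get(svc, {})
        if host.get("cpu_util", 0.0) >= cpu_threshold:
            # Host saturated: the service itself is the bottleneck.
            bottlenecks.append((svc, "compute-bound"))
        elif net.get("pkt_drops", 0) >= drop_threshold:
            # Host is fine but the network device is dropping traffic.
            bottlenecks.append((svc, "network-bound"))
    return bottlenecks
```

Running the correlation in the network device keeps both telemetry streams close to where the network statistics already live.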
-
Publication No.: US20220222010A1
Publication Date: 2022-07-14
Application No.: US17710657
Application Date: 2022-03-31
Applicant: Intel Corporation
Inventor: Alexander BACHMUTSKY , Francesc GUIM BERNAT , Karthik KUMAR , Marcos E. CARRANZA
IPC: G06F3/06
Abstract: Methods and apparatus for advanced interleaving techniques for fabric based pooling architectures. The method is implemented in an environment including a switch connected to host servers and to pooled memory nodes or memory servers hosting memory pools. Memory is interleaved across the memory pools using interleaving units, with the interleaved memory mapped into a global memory address space. Applications running on the host servers are enabled to access data stored in the memory pools via memory read and write requests issued by the applications specifying address endpoints within the global memory space. The switch generates multi-cast or multiple unicast messages associated with the memory read and write requests that are sent to the pooled memory nodes or memory servers. For memory reads, the data returned from multiple memory pools is aggregated at the switch and returned to the application using one or more packets as a single response.
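The address mapping described above can be sketched in a few lines of Python; the interleaving-unit size and pool count here are illustrative assumptions, not values from the patent.

```python
INTERLEAVE_UNIT = 4096  # bytes per interleaving unit (illustrative)
NUM_POOLS = 4           # memory pools behind the switch (illustrative)

def map_address(global_addr):
    """Map a global memory address to (pool_id, local_offset)."""
    unit = global_addr // INTERLEAVE_UNIT
    pool = unit % NUM_POOLS                 # units rotate across pools
    local_unit = unit // NUM_POOLS          # position within that pool
    return pool, local_unit * INTERLEAVE_UNIT + global_addr % INTERLEAVE_UNIT

def pools_for_read(global_addr, length):
    """Pools the switch must address (via multicast or multiple unicast
    messages) to satisfy one read spanning the global address space."""
    first = global_addr // INTERLEAVE_UNIT
    last = (global_addr + length - 1) // INTERLEAVE_UNIT
    return sorted({u % NUM_POOLS for u in range(first, last + 1)})
```

A read touching more than one interleaving unit fans out to several pools, and the switch aggregates the partial responses into a single reply.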
-
Publication No.: US20210120077A1
Publication Date: 2021-04-22
Application No.: US17134374
Application Date: 2020-12-26
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Karthik KUMAR , Alexander BACHMUTSKY
Abstract: A multi-tenant dynamic secure data region in which encryption keys can be shared by services running in nodes reduces the need for decrypting data as encrypted data is transferred between nodes in the data center. Instead of using a key per process/service that is created by a memory controller when the service is instantiated (for example, MKTME), a software stack can specify that a set of processes or compute entities (for example, bit-streams) share a private key that is created and provided by the data center.
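A minimal sketch of the idea of a data-center-provided key registry with one shared key per secure region; the class and method names are hypothetical, and real deployments would distribute keys through attested, hardware-backed channels rather than plain Python objects.

```python
import os

class SecureRegionKeyService:
    """Data-center-side registry: one shared private key per secure
    data region, handed only to enrolled member services."""
    def __init__(self):
        self._keys = {}

    def create_region(self, region_id):
        # One 256-bit key per region, created by the data center,
        # not by a per-service memory controller.
        self._keys[region_id] = os.urandom(32)

    def key_for(self, region_id, service_id, members):
        if service_id not in members.get(region_id, set()):
            raise PermissionError("service not enrolled in region")
        return self._keys[region_id]
```

Because every member of the region holds the same key, data moving between those services never has to be decrypted and re-encrypted in transit.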
-
Publication No.: US20230115259A1
Publication Date: 2023-04-13
Application No.: US17877647
Application Date: 2022-07-29
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Suraj PRABHAKARAN , Alexander BACHMUTSKY , Raghu KONDAPALLI , Kshitij A. DOSHI
IPC: G06F18/214 , H04L67/1097 , G06N3/082 , H04L67/125 , H04L67/12 , G06N3/08 , H04L67/10 , G06N3/063
Abstract: An apparatus for training artificial intelligence (AI) models is presented. In embodiments, the apparatus may include an input interface to receive in real time model training data from one or more sources to train one or more artificial neural networks (ANNs) associated with the one or more sources, each of the one or more sources associated with at least one of the ANNs; a load distributor coupled to the input interface to distribute in real time the model training data for the one or more ANNs to one or more AI appliances; and a resource manager coupled to the load distributor to dynamically assign one or more computing resources on ones of the AI appliances to each of the ANNs in view of amounts of the training data received in real time from the one or more sources for their associated ANNs.
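The resource manager's dynamic assignment can be sketched as a proportional allocation over observed data rates; the function name and the integer-unit granularity are illustrative assumptions.

```python
def assign_resources(data_rates, total_units):
    """Assign compute units on the AI appliances to each ANN in
    proportion to the amount of training data arriving for it."""
    total = sum(data_rates.values())
    # Each ANN gets a share of the appliance resources proportional
    # to its real-time inbound data rate.
    return {ann: int(total_units * rate / total)
            for ann, rate in data_rates.items()}
```

As the per-source data rates change, re-running the allocation rebalances appliance resources among the ANNs.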
-
Publication No.: US20220113911A1
Publication Date: 2022-04-14
Application No.: US17558268
Application Date: 2021-12-21
Applicant: Intel Corporation
Inventor: Andrzej KURIATA , Susanne M. BALLE , Duane E. GALBI , Sundar NADATHUR , Nagabhushan CHITLUR , Francesc GUIM BERNAT , Alexander BACHMUTSKY
Abstract: Methods, apparatus, and software for remote storage of hardware microservices hosted on other processing units (XPUs) and SOC-XPU platforms. The apparatus may be a platform including a System on Chip (SOC) and an XPU, such as a Field Programmable Gate Array (FPGA). Software, via execution on the SOC, enables the platform to pre-provision storage space on a remote storage node and assign the storage space to the platform, wherein the pre-provisioned storage space includes one or more container images to be implemented as one or more hardware (HW) microservice front-ends. The XPU/FPGA is configured to implement one or more accelerator functions used to accelerate HW microservice backend operations that are offloaded from the one or more HW microservice front-ends. The platform is also configured to pre-provision a remote storage volume containing worker node components, and to access and persistently store those components.
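A toy sketch of the pre-provisioning flow, assuming a simple in-memory stand-in for the remote storage node; the class names and the `boot` step are hypothetical, not the patent's terminology.

```python
class RemoteStorageNode:
    """Stand-in for the remote storage node holding pre-provisioned volumes."""
    def __init__(self):
        self.volumes = {}

    def provision(self, platform_id, images):
        # Storage space is created ahead of time and assigned to the platform.
        self.volumes[platform_id] = {"images": list(images)}

class Platform:
    """SOC-side software that pre-provisions remote storage and then
    pulls the container images to run as HW microservice front-ends."""
    def __init__(self, node, platform_id):
        self.node, self.pid = node, platform_id

    def boot(self, images):
        self.node.provision(self.pid, images)
        return self.node.volumes[self.pid]["images"]
```

Keeping the images and worker node components on remote, persistent storage means the SOC-XPU platform itself can stay stateless across restarts.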
-
Publication No.: US20200259763A1
Publication Date: 2020-08-13
Application No.: US16859792
Application Date: 2020-04-27
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Patrick CONNOR , Patrick G. KUTCH , John J. BROWNE , Alexander BACHMUTSKY
IPC: H04L12/911 , H04L12/26 , H04L12/24
Abstract: Examples described herein relate to a device configured to allocate memory resources for packets received by the network interface based on received configuration settings. In some examples, the device is a network interface. Received configuration settings can include one or more of: latency, memory bandwidth, timing of when the content is expected to be accessed, or encryption parameters. In some examples, memory resources include one or more of: a cache, a volatile memory device, a storage device, or persistent memory. In some examples, based on configuration settings not being available, the network interface is to perform one or more of: dropping a received packet, storing the received packet in a buffer that does not meet the configuration settings, or indicating an error. In some examples, configuration settings are conditional, where the settings are applied if one or more conditions are met.
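A sketch of the placement decision, assuming an illustrative latency figure per memory tier; the tier names, latencies, and fallback keywords are assumptions for the example, not from the patent.

```python
# Illustrative memory tiers, ordered fastest to slowest: (name, access ns).
TIERS = [("cache", 100), ("dram", 500), ("persistent", 2000)]

def place_packet(max_latency_ns, fallback="buffer"):
    """Pick the slowest (most abundant) tier that still meets the
    packet's configured latency; apply the fallback policy otherwise."""
    for name, latency in reversed(TIERS):
        if latency <= max_latency_ns:
            return name
    # No tier satisfies the configuration settings.
    if fallback == "drop":
        return None                     # drop the received packet
    if fallback == "error":
        raise RuntimeError("no tier meets configuration settings")
    return "best-effort-buffer"         # store in a non-matching buffer
```

Choosing the slowest tier that still meets the setting keeps the fastest resources (cache) free for packets that genuinely need them.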
-
Publication No.: US20200226009A1
Publication Date: 2020-07-16
Application No.: US16836650
Application Date: 2020-03-31
Applicant: Intel Corporation
Inventor: Alexander BACHMUTSKY , Raghu KONDAPALLI , Francesc GUIM BERNAT , Vadim SUKHOMLINOV
Abstract: Examples described herein relate to requesting execution of a workload by a next function with data transport overhead tailored based on memory sharing capability with the next function. In some examples, data transport overhead is one or more of: sending a memory address pointer, sending a virtual memory address pointer, or sending data to the next function. In some examples, the memory sharing capability with the next function is based on one or more of: whether the next function shares an enclave with a sender function, the next function shares a physical memory domain with a sender function, or the next function shares a virtual memory domain with a sender function. In some examples, selection of the next function from among multiple instances of the next function is based on one or more of: sharing of memory domain, throughput performance, latency, cost, load balancing, or service level agreement (SLA) requirements.
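The pointer-versus-copy decision can be sketched as follows; the `dispatch` function and the list standing in for a shared heap are illustrative assumptions.

```python
def dispatch(sender_domain, receiver_domain, data, shared_heap):
    """Tailor transport overhead to memory sharing capability:
    pass a reference when the functions share a memory domain,
    otherwise copy the data to the next function."""
    if sender_domain == receiver_domain:
        shared_heap.append(data)              # place once in shared memory
        return ("pointer", len(shared_heap) - 1)
    return ("copy", bytes(data))              # serialize across the boundary
```

When many instances of the next function are available, preferring one in the same memory domain turns an expensive copy into a cheap pointer hand-off.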
-
Publication No.: US20230185760A1
Publication Date: 2023-06-15
Application No.: US17549727
Application Date: 2021-12-13
Applicant: Intel Corporation
Inventor: Susanne M. BALLE , Duane E. GALBI , Andrzej KURIATA , Sundar NADATHUR , Nagabhushan CHITLUR , Francesc GUIM BERNAT , Alexander BACHMUTSKY
IPC: G06F15/78
CPC classification number: G06F15/7889 , G06F15/7821 , G06F15/7871 , G06F2015/768
Abstract: Methods, apparatus, and software for hardware microservices accelerated in other processing units (XPUs). The apparatus may be a platform including a System on Chip (SOC) and an XPU, such as a Field Programmable Gate Array (FPGA). The FPGA is configured to implement one or more Hardware (HW) accelerator functions associated with HW microservices. Execution of microservices is split between a software front-end that executes on the SOC and a hardware backend comprising the HW accelerator functions. The software front-end offloads a portion of a microservice and/or associated workload to the HW microservice backend implemented by the accelerator functions. An XPU or FPGA proxy is used to provide the microservice front-ends with shared access to HW accelerator functions, and schedules/multiplexes access to the HW accelerator functions using, e.g., telemetry data generated by the microservice front-ends and/or the HW accelerator functions. The platform may be an infrastructure processing unit (IPU) configured to accelerate infrastructure operations.
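A minimal sketch of the proxy's multiplexing role, using queue depth as a stand-in for the telemetry the patent mentions; the class name and least-loaded policy are assumptions for illustration.

```python
class XpuProxy:
    """Proxy giving microservice front-ends shared access to HW
    accelerator functions, scheduling by reported queue depth."""
    def __init__(self, accelerators):
        self.queues = {a: 0 for a in accelerators}  # telemetry: outstanding work

    def submit(self, request):
        # Multiplex: send the request to the least-loaded accelerator.
        accel = min(self.queues, key=self.queues.get)
        self.queues[accel] += 1
        return accel, request

    def complete(self, accel):
        self.queues[accel] -= 1
```

Centralizing scheduling in the proxy lets many front-ends share a small set of accelerator functions without coordinating with each other.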
-
Publication No.: US20230023229A1
Publication Date: 2023-01-26
Application No.: US17952835
Application Date: 2022-09-26
Applicant: Intel Corporation
Inventor: Karthik KUMAR , Francesc GUIM BERNAT , Alexander BACHMUTSKY , Susanne M. BALLE , Andrzej KURIATA , Nagabhushan CHITLUR
IPC: G06F3/06
Abstract: In a server system, a host computing platform can have a processing unit, separate from the host processor, that detects and responds to failure of the host processor. The host computing platform includes a memory to store data for the host processor. The processing unit has an interface to the host processor and the memory, an interface to a network external to the host processor, and access to the memory. In response to detection of failure of the host processor, the processing unit migrates data from the memory to another memory or storage.
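One simple way such a unit could detect host failure is a missed-heartbeat watchdog; the class, the heartbeat mechanism, and the three-strike threshold are illustrative assumptions, not details from the patent.

```python
class FailoverUnit:
    """Auxiliary processing unit with its own network path and access
    to host memory; migrates data off the host on detected failure."""
    def __init__(self, host_memory, max_missed=3):
        self.mem = host_memory
        self.missed = 0
        self.max_missed = max_missed

    def tick(self, heartbeat_seen, remote_store):
        # Count consecutive missed heartbeats from the host processor.
        self.missed = 0 if heartbeat_seen else self.missed + 1
        if self.missed >= self.max_missed:
            remote_store.update(self.mem)   # migrate data to another memory/storage
            return "migrated"
        return "ok"
```

Because the unit has its own network interface and memory access, the migration proceeds even though the host processor itself is unresponsive.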
-
Publication No.: US20220272012A1
Publication Date: 2022-08-25
Application No.: US17744034
Application Date: 2022-05-13
Applicant: Intel Corporation
Inventor: S M Iftekharul ALAM , Ned SMITH , Vesh Raj SHARMA BANJADE , Satish C. JHA , Christian MACIOCCO , Mona VIJ , Kshitij A. DOSHI , Srikathyayani SRIKANTESWARA , Francesc GUIM BERNAT , Maruti GUPTA HYDE , Alexander BACHMUTSKY
IPC: H04L43/0811 , H04L43/0882 , H04L43/091 , H04L43/062
Abstract: Examples described herein relate to dynamically composing an application as a monolithic implementation or as two or more microservices based on telemetry data. In some examples, when an application is composed as two or more microservices, at least one connection between the microservices is adjusted based on telemetry data. In some examples, a switch can be configured to perform forwarding of communications between microservices based on the adjusted at least one connection between microservices.
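The composition decision can be sketched as a simple telemetry-driven policy; the metric name and latency budget are assumptions for the example, not values from the patent.

```python
def choose_composition(telemetry, rpc_latency_budget_us=200):
    """Decide monolithic vs. microservice composition from telemetry."""
    if telemetry["inter_service_latency_us"] > rpc_latency_budget_us:
        # Network overhead dominates: co-locate as a monolith.
        return "monolith"
    # Communication is cheap: split so components scale independently.
    return "microservices"
```

In the microservice case, the same telemetry could then drive adjustments to individual inter-service connections, with a switch reprogrammed to forward along the adjusted paths.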