-
Publication No.: US11362904B2
Publication Date: 2022-06-14
Application No.: US16957626
Application Date: 2019-02-21
Applicant: INTEL CORPORATION
Inventor: Mrittika Ganguli , Dinesh Kumar , Robert Valiquette , Yadong Li , Mohan Kumar
Abstract: Technologies for enhanced network discovery and configuration include a network with a fabric manager and multiple network devices. A network device requests platform information from a management controller and receives the platform information via a sideband interface. The network device broadcasts a discovery message indicative of the platform information on a link layer network. The fabric manager discovers the network topology with an enhanced link layer discovery protocol and creates a vPOD in the network. The vPOD includes an application network with multiple racks. The fabric manager creates a tagged network domain for the vPOD. The fabric manager sends an out-of-band configuration command to the network device with a tag associated with the vPOD. After receiving the out-of-band configuration command, the network device receives a packet, compares domain metadata of the packet to the tag received from the fabric manager, and routes the packet. Other embodiments are described and claimed.
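The routing step in this abstract reduces to a tag comparison: the device stores the vPOD tag delivered out-of-band, then matches each packet's domain metadata against it. A minimal sketch, with all names (`NetworkDevice`, `vpod_tag`, the packet fields) as illustrative assumptions rather than anything specified in the patent:

```python
# Sketch of the tag-matching step: the fabric manager's out-of-band
# configuration command carries a tag for the vPOD; the device later
# compares each packet's domain metadata to that stored tag.

class NetworkDevice:
    def __init__(self):
        self.vpod_tag = None  # set by the fabric manager's config command

    def configure(self, tag):
        """Out-of-band configuration: remember the vPOD's tag."""
        self.vpod_tag = tag

    def route(self, packet):
        """Forward the packet only if its domain metadata matches the tag."""
        if packet.get("domain_metadata") == self.vpod_tag:
            return "forward"
        return "drop"

dev = NetworkDevice()
dev.configure(tag=0x2A)
print(dev.route({"domain_metadata": 0x2A}))  # forward
print(dev.route({"domain_metadata": 0x07}))  # drop
```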
-
Publication No.: US11182324B2
Publication Date: 2021-11-23
Application No.: US16905395
Application Date: 2020-06-18
Applicant: Intel Corporation
Inventor: Mohan Kumar , Murugasamy Nachimuthu
Abstract: Mechanisms for Field Programmable Gate Array (FPGA) chaining and unified FPGA views to composed system hosts, and associated methods, apparatus, systems, and software. A rack is populated with pooled system drawers, including pooled compute drawers and pooled FPGA drawers, communicatively coupled via input-output (IO) cables. The FPGA resources in the pooled system drawers are enumerated, identifying the location and type of each FPGA and whether it is a chainable FPGA. Intra-drawer chaining mechanisms are identified for the chainable FPGAs in each pooled compute and pooled FPGA drawer. Inter-drawer chaining mechanisms are also identified for chaining FPGAs in separate pooled system drawers. The enumerated FPGA and chaining mechanism data are aggregated to generate a unified system view of the FPGA resources and their chaining mechanisms. Based on the available compute nodes and FPGAs in the unified system view, new compute nodes are composed using chained FPGAs. The chained FPGAs are exposed to a hypervisor or operating system virtualization layer, or to an operating system hosted by the composed compute node, as a virtual monolithic FPGA or as multiple local FPGAs.
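The enumeration-and-aggregation step above can be sketched as collecting per-FPGA records (location, type, chainability, chaining mechanism) into a unified view. All field names and record values here are hypothetical, chosen only to mirror the abstract's description:

```python
# Aggregate enumerated FPGA records into a unified system view,
# separating chainable FPGAs (usable for composing new compute nodes
# with chained FPGAs) from standalone ones.

def build_unified_view(fpga_records):
    view = {"chainable": [], "standalone": []}
    for rec in fpga_records:
        bucket = "chainable" if rec["chainable"] else "standalone"
        view[bucket].append(rec)
    return view

records = [
    {"drawer": "compute-0", "type": "FPGA-A", "chainable": True,  "chain": "intra-drawer"},
    {"drawer": "fpga-1",    "type": "FPGA-B", "chainable": True,  "chain": "inter-drawer"},
    {"drawer": "compute-2", "type": "FPGA-A", "chainable": False, "chain": None},
]
view = build_unified_view(records)
print(len(view["chainable"]))  # 2
```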
-
Publication No.: US20190155620A1
Publication Date: 2019-05-23
Application No.: US16259608
Application Date: 2019-01-28
Applicant: Intel Corporation
Inventor: Meenakshi Arunachalam , Kushal Datta , Vikram Saletore , Vishal Verma , Deepthi Karkada , Vamsi Sripathi , Rahul Khanna , Mohan Kumar
Abstract: Systems, apparatuses and methods may provide for technology that identifies a first set of compute nodes and a second set of compute nodes, wherein the first set of compute nodes execute more slowly than the second set of compute nodes. The technology may also automatically determine a compute node configuration that results in a relatively low difference in completion time between the first set of compute nodes and the second set of compute nodes with respect to a neural network workload. In an example, the technology applies the compute node configuration to an execution of the neural network workload on one or more nodes in the first set of compute nodes and one or more nodes in the second set of compute nodes.
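One way to read "a compute node configuration that results in a relatively low difference in completion time" is a work split proportional to each node set's throughput, so slow and fast sets finish together. This is a minimal sketch under that assumption; the rates, item counts, and function name are illustrative, not from the patent:

```python
# Split a neural-network workload between a slow and a fast set of
# compute nodes in proportion to their measured throughput, so that
# both sets complete at roughly the same time.

def split_workload(total_items, slow_rate, fast_rate):
    """Return (slow_share, fast_share) so both sets finish together."""
    slow_share = round(total_items * slow_rate / (slow_rate + fast_rate))
    return slow_share, total_items - slow_share

slow, fast = split_workload(total_items=1000, slow_rate=40, fast_rate=60)
# completion times are balanced: slow/slow_rate == fast/fast_rate
print(slow, fast)  # 400 600
```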
-
Publication No.: US20190065211A1
Publication Date: 2019-02-28
Application No.: US16050240
Application Date: 2018-07-31
Applicant: Intel Corporation
Inventor: Mohan Kumar , Sarathy Jayakumar , Neelam Chandwani
IPC: G06F9/4401 , G06F11/30 , G06F17/30 , G06F1/26 , G06F1/28 , G06F11/36 , G06F1/32 , G06F9/22 , G06F9/445 , G06F9/44 , G06F11/34 , G06F1/20 , G06F9/30 , G06F9/38 , G06F15/78
Abstract: In some embodiments, a PPM interface may be provided with functionality to facilitate, to an OS, memory power state management for one or more memory nodes, regardless of the particular platform hardware configuration, as long as the platform hardware conforms to the PPM interface.
-
Publication No.: US11029971B2
Publication Date: 2021-06-08
Application No.: US16259608
Application Date: 2019-01-28
Applicant: Intel Corporation
Inventor: Meenakshi Arunachalam , Kushal Datta , Vikram Saletore , Vishal Verma , Deepthi Karkada , Vamsi Sripathi , Rahul Khanna , Mohan Kumar
Abstract: Systems, apparatuses and methods may provide for technology that identifies a first set of compute nodes and a second set of compute nodes, wherein the first set of compute nodes execute more slowly than the second set of compute nodes. The technology may also automatically determine a compute node configuration that results in a relatively low difference in completion time between the first set of compute nodes and the second set of compute nodes with respect to a neural network workload. In an example, the technology applies the compute node configuration to an execution of the neural network workload on one or more nodes in the first set of compute nodes and one or more nodes in the second set of compute nodes.
-
Publication No.: US10541942B2
Publication Date: 2020-01-21
Application No.: US15941943
Application Date: 2018-03-30
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Anil Rao , Suraj Prabhakaran , Mohan Kumar , Karthik Kumar
IPC: H04L12/947 , H04L12/66 , H04L12/801 , H04L12/931
Abstract: Technologies for accelerating edge device workloads at a device edge network include a network computing device with a processor platform that includes at least one processor supporting a plurality of non-accelerated function-as-a-service (FaaS) operations, and an accelerated platform that includes at least one accelerator supporting a plurality of accelerated FaaS (AFaaS) operations. The network computing device is configured to receive a request to perform a FaaS operation, determine whether the received request indicates that an AFaaS operation is to be performed, and identify compute requirements for the AFaaS operation. The network computing device is further configured to select an accelerator platform to perform the identified AFaaS operation and forward the received request to the selected accelerator platform. Other embodiments are described and claimed.
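The dispatch decision in this abstract can be sketched as: check whether the request names an AFaaS operation, identify its compute requirements, and pick an accelerator platform that satisfies them (falling back to the non-accelerated CPU path otherwise). The accelerator names, memory figures, and request fields below are all hypothetical:

```python
# Route a FaaS request either to the non-accelerated processor path or
# to an accelerator platform that meets the request's compute requirements.

ACCELERATORS = [
    {"name": "fpga-0", "mem_mb": 4096},
    {"name": "gpu-0",  "mem_mb": 16384},
]

def dispatch(request):
    if not request.get("afaas"):
        return "cpu"  # non-accelerated FaaS path
    needed = request.get("mem_mb", 0)  # identified compute requirement
    for acc in ACCELERATORS:
        if acc["mem_mb"] >= needed:
            return acc["name"]  # selected accelerator platform
    return "cpu"  # no accelerator satisfies the requirements

print(dispatch({"afaas": True, "mem_mb": 8000}))  # gpu-0
print(dispatch({"afaas": False}))                 # cpu
```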
-
Publication No.: US10055353B2
Publication Date: 2018-08-21
Application No.: US15250832
Application Date: 2016-08-29
Applicant: Intel Corporation
Inventor: Murugasamy K. Nachimuthu , Mohan Kumar
IPC: G06F12/08 , G06F12/0868 , G06F9/4401 , G06F12/00 , G06F13/14 , G06F12/0804 , G06F12/0866 , G06F12/02 , G06F12/0802 , G11C13/00
CPC classification number: G06F12/0868 , G06F9/4401 , G06F9/4403 , G06F12/00 , G06F12/0246 , G06F12/0802 , G06F12/0804 , G06F12/0866 , G06F13/14 , G06F2212/1041 , G06F2212/202 , G06F2212/214 , G06F2212/608 , G11C13/0004 , Y02D10/13
Abstract: A non-volatile random access memory (NVRAM) is used in a computer system to perform multiple roles in the platform storage hierarchy. The NVRAM is byte-rewritable and byte-erasable by the processor. The NVRAM is coupled to the processor to be directly accessed by the processor without going through an I/O subsystem. The NVRAM stores a Basic Input and Output System (BIOS). During a Pre-Extensible Firmware Interface (PEI) phase of the boot process, the cache within the processor can be used in a write-back mode for execution of the BIOS.
-
Publication No.: US09454380B2
Publication Date: 2016-09-27
Application No.: US13977593
Application Date: 2012-11-21
Applicant: Intel Corporation
Inventor: Mohan Kumar , Sarathy Jayakumar , Jose Andy Vargas
IPC: G06F9/00 , G06F15/177 , G06F9/44 , G06F17/30 , G06F9/445 , G06F1/28 , G06F11/36 , G06F1/26 , G06F9/22 , G06F11/34 , G06F9/30 , G06F1/20 , G06F15/78 , G06F1/32 , G06F9/38
CPC classification number: G06F9/4403 , G06F1/206 , G06F1/26 , G06F1/28 , G06F1/32 , G06F1/3203 , G06F1/3234 , G06F1/324 , G06F1/3275 , G06F1/3296 , G06F9/22 , G06F9/30098 , G06F9/3012 , G06F9/384 , G06F9/44 , G06F9/4401 , G06F9/4418 , G06F9/445 , G06F11/3024 , G06F11/3409 , G06F11/3447 , G06F11/3466 , G06F11/3664 , G06F11/3672 , G06F11/3688 , G06F15/7871 , G06F17/30339 , G06F2209/501 , G06F2217/78 , Y02D10/126 , Y02D10/172
Abstract: In some embodiments, a PPM interface may be provided with functionality to facilitate, to an OS, RAS services for one or more hardware components, regardless of the particular platform hardware configuration, as long as the platform hardware and OS conform to the PPM interface.
-
Publication No.: US11809878B2
Publication Date: 2023-11-07
Application No.: US16790203
Application Date: 2020-02-13
Applicant: Intel Corporation
Inventor: Sarathy Jayakumar , Mohan Kumar
IPC: G06F9/445 , G06F13/16 , G06F16/22 , G06F9/4401 , G06F8/65
CPC classification number: G06F9/44505 , G06F8/65 , G06F9/4401 , G06F9/4411 , G06F13/1668 , G06F16/2228
Abstract: Systems, apparatuses and methods may provide for technology that stores first hardware related data to a basic input output system (BIOS) memory area and generates a mailbox data structure, wherein the mailbox data structure includes a first identifier-pointer pair associated with the first hardware related data. Additionally, the technology may generate an operating system (OS) interface table, wherein the OS interface table includes a pointer to the mailbox data structure. In one example, the technology also stores second hardware related data to the BIOS memory area at runtime and adds a second identifier-pointer pair to the mailbox data structure at runtime, wherein the second identifier-pointer pair is associated with the second hardware related data.
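The structure this abstract describes (a BIOS memory area holding hardware-related data, a mailbox of identifier-pointer pairs into that area, and an OS interface table pointing at the mailbox) can be sketched as follows. Dictionaries stand in for memory regions, and every identifier and "address" is an illustrative assumption:

```python
# Model the mailbox data structure: the OS interface table points to the
# mailbox, and each mailbox entry is an identifier-pointer pair referencing
# hardware-related data stored in the BIOS memory area.

bios_area = {}                        # stands in for the BIOS memory area
mailbox = {}                          # identifier -> pointer (key into bios_area)
os_table = {"mailbox_ptr": mailbox}   # OS interface table holds a mailbox pointer

def store(identifier, data):
    """Store hardware-related data and add an identifier-pointer pair."""
    ptr = f"addr_{len(bios_area)}"
    bios_area[ptr] = data
    mailbox[identifier] = ptr

store("CPU_TOPOLOGY", b"\x01\x02")    # first pair, written at boot
store("HOTPLUG_EVENT", b"\x03")       # second pair, added at runtime

ptr = os_table["mailbox_ptr"]["HOTPLUG_EVENT"]
print(bios_area[ptr])  # b'\x03'
```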
-
Publication No.: US11159454B2
Publication Date: 2021-10-26
Application No.: US16748232
Application Date: 2020-01-21
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Anil Rao , Suraj Prabhakaran , Mohan Kumar , Karthik Kumar
IPC: H04L12/947 , H04L12/66 , H04L12/801 , H04L12/931 , H04L12/24
Abstract: Technologies for accelerating edge device workloads at a device edge network include a network computing device with a processor platform that includes at least one processor supporting a plurality of non-accelerated function-as-a-service (FaaS) operations, and an accelerated platform that includes at least one accelerator supporting a plurality of accelerated FaaS (AFaaS) operations. The network computing device is configured to receive a request to perform a FaaS operation, determine whether the received request indicates that an AFaaS operation is to be performed, and identify compute requirements for the AFaaS operation. The network computing device is further configured to select an accelerator platform to perform the identified AFaaS operation and forward the received request to the selected accelerator platform. Other embodiments are described and claimed.
-