-
Publication No.: US20240012459A1
Publication Date: 2024-01-11
Application No.: US18371949
Filing Date: 2023-09-22
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Karthik KUMAR , John J. BROWNE , Chris MACNAMARA , Patrick CONNOR
IPC: G06F1/26
CPC classification number: G06F1/266
Abstract: Examples described herein relate to receiving a configuration, wherein the configuration specifies a first level of renewable energy to be utilized by one or more devices based on telemetry, wherein the telemetry comprises a level of renewable energy supplied to the one or more devices. Based on a second level of available supplied renewable energy, a portion of the first level can be allocated to the one or more devices to perform a process. Based on a third level of available supplied renewable energy, the renewable energy allocated to the one or more devices to perform the process can be increased above the first level.
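The threshold-based allocation the abstract describes can be sketched as follows. This is an illustrative model only: the function name, the 50% portion, and the 1.5x cap are invented for the example and are not taken from the patent.

```python
def allocate_renewable(supplied: float, first_level: float) -> float:
    """Return the renewable energy to allocate to the devices.

    - Below the configured first level of supply, allocate only a
      portion of what is available (the "second level" case).
    - At or above the first level, the allocation may exceed the
      first level, up to a cap (the "third level" case).
    """
    if supplied < first_level:
        # Second-level case: only part of the first level is supplied.
        return supplied * 0.5            # illustrative portion
    # Third-level case: raise the allocation above the first level.
    return min(supplied, first_level * 1.5)  # illustrative cap

print(allocate_renewable(40.0, 100.0))   # partial supply: partial allocation
print(allocate_renewable(200.0, 100.0))  # surplus supply: above the first level
```

The two branches mirror the abstract's two telemetry-driven regimes: scarce supply yields a fractional allocation, surplus supply permits exceeding the configured first level.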
-
Publication No.: US20220329450A1
Publication Date: 2022-10-13
Application No.: US17852174
Filing Date: 2022-06-28
Applicant: Intel Corporation
Inventor: Harald SERVAT , Amruta MISRA , Mikko BYCKLING , Francesc GUIM BERNAT , Jaime ARTEAGA MOLINA , Karthik KUMAR
IPC: H04L12/12 , H04L67/1097 , G06F13/42 , G06F13/28
Abstract: Examples described herein relate to a network interface device that includes circuitry to perform switching and perform a received command in one or more packets while at least one of the at least one compute device is in a reduced power state, wherein the command is associated with operation of the at least one of the at least one compute device that is in a reduced power state. In some examples, the network interface device is able to control power available to at least one compute device.
-
Publication No.: US20220206849A1
Publication Date: 2022-06-30
Application No.: US17700313
Filing Date: 2022-03-21
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Karthik KUMAR , Alexander BACHMUTSKY
Abstract: Methods and apparatus for hardware support for low latency microservice deployments in switches. A switch is communicatively coupled via a network or fabric to a plurality of platforms configured to implement one or more microservices. The microservices are used to perform a distributed workload, job, or task as defined by a corresponding graph representation of the microservices including vertices (also referred to as nodes) associated with microservices and edges defining communication between microservices. The graph representation also defines dependencies between microservices. The switch is configured to schedule execution of the graph of microservices on the plurality of platforms, including generating an initial schedule that is dynamically revised during runtime in consideration of performance telemetry data for the microservices received from the platforms and network/fabric utilization monitored onboard the switch. The switch also may include memory in which graph representations, microservice tables, and node-to-microservice maps are stored.
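The graph representation in the abstract — vertices for microservices, edges for dependencies — admits a simple initial schedule: any topological order of the graph. The sketch below shows that step only; all names are illustrative, and the dynamic runtime revision driven by telemetry is not modeled here.

```python
from collections import deque

def schedule(graph: dict) -> list:
    """Return an execution order where every microservice runs after
    the microservices it depends on. `graph` maps each service name
    to the list of services it depends on."""
    indegree = {svc: len(deps) for svc, deps in graph.items()}
    dependents = {svc: [] for svc in graph}
    for svc, deps in graph.items():
        for dep in deps:
            dependents[dep].append(svc)
    # Services with no unmet dependencies are ready to run.
    ready = deque(svc for svc, d in indegree.items() if d == 0)
    order = []
    while ready:
        svc = ready.popleft()
        order.append(svc)
        for nxt in dependents[svc]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    return order

# "frontend" depends on "auth" and "db"; "auth" depends on "db".
print(schedule({"db": [], "auth": ["db"], "frontend": ["auth", "db"]}))
```

In the patent's setting this ordering would be computed on the switch and then revised as per-microservice telemetry and fabric utilization arrive.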
-
Publication No.: US20210314245A1
Publication Date: 2021-10-07
Application No.: US17235135
Filing Date: 2021-04-20
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Susanne M. BALLE , Rahul KHANNA , Sujoy SEN , Karthik KUMAR
IPC: H04L12/26 , G06F16/901 , H04B10/25 , G02B6/38 , G02B6/42 , G02B6/44 , G06F1/18 , G06F1/20 , G06F3/06 , G06F8/65 , G06F9/30 , G06F9/4401 , G06F9/54 , G06F12/109 , G06F12/14 , G06F13/16 , G06F13/40 , G08C17/02 , G11C5/02 , G11C7/10 , G11C11/56 , G11C14/00 , H03M7/30 , H03M7/40 , H04L12/24 , H04L12/931 , H04L12/947 , H04L29/08 , H04L29/06 , H04Q11/00 , H05K7/14 , G06F15/16
Abstract: Technologies for dynamically managing resources in disaggregated accelerators include an accelerator. The accelerator includes acceleration circuitry with multiple logic portions, each capable of executing a different workload. Additionally, the accelerator includes communication circuitry to receive a workload to be executed by a logic portion of the accelerator and a dynamic resource allocation logic unit to identify a resource utilization threshold associated with one or more shared resources of the accelerator to be used by a logic portion in the execution of the workload, limit, as a function of the resource utilization threshold, the utilization of the one or more shared resources by the logic portion as the logic portion executes the workload, and subsequently adjust the resource utilization threshold as the workload is executed. Other embodiments are also described and claimed.
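The core mechanism — capping a logic portion's use of a shared resource at a threshold that can be re-tuned mid-execution — can be sketched as below. The class and method names are invented for illustration; real hardware would enforce this in the accelerator's allocation logic, not in software.

```python
class SharedResourceLimiter:
    """Caps a logic portion's use of one shared resource per interval."""

    def __init__(self, threshold: int):
        self.threshold = threshold   # max units grantable this interval
        self.used = 0

    def request(self, units: int) -> int:
        """Grant up to `units`, clipped to the budget remaining under
        the current threshold."""
        granted = max(0, min(units, self.threshold - self.used))
        self.used += granted
        return granted

    def adjust_threshold(self, new_threshold: int) -> None:
        """Re-tune the cap while the workload is still executing,
        as the abstract describes."""
        self.threshold = new_threshold

limiter = SharedResourceLimiter(threshold=10)
print(limiter.request(8))    # full grant fits under the threshold
print(limiter.request(8))    # clipped: only 2 units of budget remain
limiter.adjust_threshold(20)
print(limiter.request(8))    # raised threshold restores full grants
```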
-
Publication No.: US20210103481A1
Publication Date: 2021-04-08
Application No.: US16969728
Filing Date: 2018-06-29
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Karthik KUMAR , Susanne M. BALLE , Ignacio ASTILLEROS DIEZ , Timothy VERRALL , Ned M. SMITH
Abstract: Technologies for providing efficient migration of services include a server device. The server device includes compute engine circuitry to execute a set of services on behalf of a terminal device and migration accelerator circuitry. The migration accelerator circuitry is to determine whether execution of the services is to be migrated from an edge station in which the present server device is located to a second edge station in which a second server device is located, determine a prioritization of the services executed by the server device, and send, in response to a determination that the services are to be migrated and as a function of the determined prioritization, data utilized by each service to the second server device of the second edge station to migrate the services. Other embodiments are also described and claimed.
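The prioritization step in the abstract — sending each service's data to the second edge station in priority order — reduces to ordering services by priority before transfer. The sketch below uses invented names and a plain dictionary of priorities.

```python
def migration_order(services: dict) -> list:
    """Return service names ordered highest-priority-first, i.e. the
    order in which their data would be sent to the target edge station.
    `services` maps a service name to a numeric priority."""
    return sorted(services, key=services.get, reverse=True)

# A latency-critical service migrates before a background batch job.
print(migration_order({"video_decode": 9, "log_upload": 1, "nav": 7}))
```

In the patent's framing this ordering would be computed by the migration accelerator circuitry once it determines that migration to the second edge station is required.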
-
Publication No.: US20190250916A1
Publication Date: 2019-08-15
Application No.: US16336884
Filing Date: 2016-09-30
Applicant: Intel Corporation
Inventor: Patrick LU , Karthik KUMAR , Thomas WILLHALM , Francesc GUIM BERNAT , Martin P. DIMITROV
IPC: G06F9/30 , G06F12/0862 , G06F12/0811
CPC classification number: G06F9/30047 , G06F9/30043 , G06F9/383 , G06F12/0811 , G06F12/0862 , G06F2212/1024 , G06F2212/2022 , G06F2212/2024 , G06F2212/205 , G06F2212/6028
Abstract: An apparatus is described. The apparatus includes main memory control logic circuitry comprising prefetch intelligence logic circuitry. The prefetch intelligence circuitry to determine, from a read result of a load instruction, an address for a dependent load that is dependent on the read result and direct a read request for the dependent load to a main memory to fetch the dependent load's data.
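The dependent-load pattern the abstract targets is pointer chasing: the first read returns a value that is itself the address of the next load, so memory-side logic can issue the second read without a round trip to the core. The sketch below models memory as a plain list; the function name is illustrative.

```python
def fetch_with_dependent_load(memory: list, addr: int) -> tuple:
    """Read memory[addr]; treat the read result as the address of a
    dependent load, fetch it too, and return (pointer, dependent_value).
    This mimics what the prefetch intelligence circuitry does at the
    memory controller instead of in the core."""
    pointer = memory[addr]        # read result of the first load
    dependent = memory[pointer]   # dependent load issued memory-side
    return pointer, dependent

mem = [0] * 8
mem[2] = 5    # the "node" at address 2 points to address 5
mem[5] = 42   # payload stored at the pointed-to address
print(fetch_with_dependent_load(mem, 2))
```

The saving in the hardware case is latency: the dependent address never has to travel back to the CPU before the second request is issued.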
-
Publication No.: US20190227978A1
Publication Date: 2019-07-25
Application No.: US16373339
Filing Date: 2019-04-02
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Karthik KUMAR , Mustafa HAJEER
IPC: G06F15/173 , H04L29/08 , G06F15/167 , H04L29/06
Abstract: An apparatus is described. The apparatus includes logic circuitry embedded in at least one of a memory controller, network interface and peripheral control hub to process a function as a service (FaaS) function call embedded in a request. The request is formatted according to a protocol. The protocol allows a remote computing system to access a memory that is coupled to the memory controller without invoking processing cores of a local computing system that the memory controller is a component of.
-
Publication No.: US20190227737A1
Publication Date: 2019-07-25
Application No.: US16221743
Filing Date: 2018-12-17
Applicant: Intel Corporation
Inventor: Ginger GILSDORF , Karthik KUMAR , Thomas WILLHALM , Mark SCHMISSEUR , Francesc GUIM BERNAT
IPC: G06F3/06
Abstract: Examples relate to a method for a memory module, a method for a memory controller, a method for a processor, to a memory module controller device or apparatus, to a memory controller device or apparatus, to a processor device or apparatus, a memory module, a memory controller, a processor, a computer system and a computer program. The method for the memory module comprises obtaining one or more memory write instructions of a group memory write instruction. The group memory write instruction comprises a plurality of memory write instructions to be executed atomically. The one or more memory write instructions relate to one or more memory addresses associated with memory of the memory module. The method comprises executing the one or more memory write instructions using previously unallocated memory of the memory module. The method comprises obtaining a commit instruction for the group memory write instruction. The method comprises updating the one or more memory addresses based on the previously unallocated memory used for executing the one or more memory write instructions after obtaining the commit instruction for the group memory write instruction.
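The commit protocol in the abstract — stage the group's writes in previously unallocated memory, then update the real addresses only when the commit instruction arrives — can be sketched as a shadow buffer. Class and method names are invented; real hardware would do this inside the memory module controller.

```python
class GroupWriteMemory:
    """Executes a group of writes atomically via shadow storage."""

    def __init__(self, size: int):
        self.cells = [0] * size
        self.shadow = {}   # staged writes, not yet visible at the real addresses

    def group_write(self, addr: int, value: int) -> None:
        """Execute one write of the group in 'unallocated' shadow memory."""
        self.shadow[addr] = value

    def commit(self) -> None:
        """On the commit instruction, publish all staged writes at once,
        then discard the shadow copies."""
        for addr, value in self.shadow.items():
            self.cells[addr] = value
        self.shadow.clear()

mem = GroupWriteMemory(4)
mem.group_write(1, 11)
mem.group_write(3, 33)
print(mem.cells)   # still all zeros: the group has not committed yet
mem.commit()
print(mem.cells)   # both writes become visible together
```

The atomicity property is visible in the example: before `commit()` none of the group's writes can be observed, and after it all of them can.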
-
Publication No.: US20240143505A1
Publication Date: 2024-05-02
Application No.: US18393793
Filing Date: 2023-12-22
Applicant: Intel Corporation
Inventor: Amruta MISRA , Ajay RAMJI , Rajendrakumar CHINNAIYAN , Chris MACNAMARA , Karan PUTTANNAIAH , Pushpendra KUMAR , Vrinda KHIRWADKAR , Sanjeevkumar Shankrappa ROKHADE , John J. BROWNE , Francesc GUIM BERNAT , Karthik KUMAR , Farheena Tazeen SYEDA
IPC: G06F12/0811
CPC classification number: G06F12/0811
Abstract: Methods and apparatus for dynamic selection of super queue size for CPUs with higher numbers of cores. An apparatus includes a plurality of compute modules, each module including a plurality of processor cores with integrated first level (L1) caches and a shared second level (L2) cache, a plurality of Last Level Caches (LLCs) or LLC blocks, and a plurality of memory interface blocks interconnected via a mesh interconnect. A compute module is configured to arbitrate access to the shared L2 cache and enqueue L2 cache misses in a super queue (XQ). The compute module further is configured to dynamically adjust the size of the XQ during runtime operations. The compute module tracks parameters comprising an L2 miss rate or count and LLC hit latency and adjusts the XQ size as a function of these parameters. A lookup table using the L2 miss rate/count and LLC hit latency may be implemented to dynamically select the XQ size.
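The lookup-table selection the abstract describes can be sketched by bucketing the two tracked parameters and mapping each bucket pair to an XQ depth. The thresholds, bucket scheme, and queue sizes below are invented for illustration and are not values from the patent.

```python
XQ_SIZE_TABLE = {
    # (high_l2_miss_rate, high_llc_hit_latency) -> XQ entries
    (False, False): 16,
    (False, True):  24,
    (True,  False): 32,
    (True,  True):  48,
}

def select_xq_size(l2_misses_per_1k: float, llc_hit_latency_ns: float) -> int:
    """Pick a super queue size from the table using simple threshold
    buckets on the L2 miss rate and the observed LLC hit latency."""
    key = (l2_misses_per_1k > 50.0, llc_hit_latency_ns > 40.0)
    return XQ_SIZE_TABLE[key]

print(select_xq_size(10.0, 20.0))   # light miss traffic: shallow queue
print(select_xq_size(80.0, 60.0))   # heavy misses and slow LLC: deep queue
```

A deeper XQ lets more outstanding L2 misses overlap when LLC hits are slow; a shallow one saves area and power when miss traffic is light, which is the trade-off the dynamic adjustment exploits.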
-
Publication No.: US20240134726A1
Publication Date: 2024-04-25
Application No.: US18537703
Filing Date: 2023-12-12
Applicant: Intel Corporation
Inventor: Akhilesh S. THYAGATURU , Francesc GUIM BERNAT , Karthik KUMAR , Adrian HOBAN , Marek PIOTROWSKI
Abstract: A method is described. The method includes invoking one or more functions from a set of API functions that expose the current respective cooling states of different, respective cooling devices for different components of a hardware platform. The method includes orchestrating concurrent execution of multiple applications on the hardware platform in view of the current respective cooling states. The method includes, in order to prepare the hardware platform for the concurrent execution of the multiple applications, prior to the concurrent execution of the multiple applications, sending one or more commands to the hardware platform to change a cooling state of at least one of the cooling devices.
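The flow in the abstract — query cooling states through an API, command a state change, then launch the applications concurrently — can be sketched as below. The `Platform` class, device names, and state strings are all invented stand-ins for whatever API the patent contemplates.

```python
class Platform:
    """Toy hardware platform exposing per-device cooling states."""

    def __init__(self):
        self._cooling = {"cpu_fan": "low", "liquid_loop": "off"}

    def get_cooling_state(self, device: str) -> str:
        # API function exposing the current cooling state
        return self._cooling[device]

    def set_cooling_state(self, device: str, state: str) -> None:
        # Command sent ahead of execution to change a cooling state
        self._cooling[device] = state

def prepare_and_launch(platform: Platform, apps: list) -> list:
    """Raise cooling before the apps start, then 'run' them concurrently
    (launch is stubbed out as a list of status strings)."""
    if platform.get_cooling_state("cpu_fan") != "high":
        platform.set_cooling_state("cpu_fan", "high")
    return ["running:" + app for app in apps]

p = Platform()
print(prepare_and_launch(p, ["render", "encode"]))
print(p.get_cooling_state("cpu_fan"))
```

The ordering matters: the cooling command is issued before the concurrent workloads start, so the platform is already in the needed thermal state when load arrives.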
-