-
Publication No.: US12130688B2
Publication Date: 2024-10-29
Application No.: US17133226
Application Date: 2020-12-23
Applicant: Intel Corporation
Inventor: Rahul Khanna , Xin Kang , Ali Taha , James Tschanz , William Zand , Robert Kwasnick
IPC: G06F1/324 , G06F1/3287 , G06F1/3296 , G06F9/50
CPC classification number: G06F1/324 , G06F1/3287 , G06F1/3296 , G06F9/5094
Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to optimize a guard band of a hardware resource. An example apparatus includes at least one storage device, and at least one processor to execute instructions to identify a phase of a workload based on an output from a machine-learning model, the phase based on a utilization of one or more hardware resources, and based on the phase, control a guard band of a first hardware resource of the one or more hardware resources.
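The mechanism this abstract describes — inferring a workload phase from hardware-resource utilization and controlling a guard band accordingly — can be sketched as follows. This is a minimal illustration, not the patent's implementation: the phase labels, thresholds, and guard-band values are all hypothetical, and the `classify_phase` function stands in for the machine-learning model.

```python
# Illustrative sketch: choose a voltage guard band from a workload phase
# inferred from utilization telemetry. All names and values are hypothetical.

def classify_phase(cpu_util: float, mem_util: float) -> str:
    """Stand-in for the patent's machine-learning model: bucket the
    workload into a phase based on hardware-resource utilization."""
    if cpu_util > 0.8:
        return "compute-bound"
    if mem_util > 0.8:
        return "memory-bound"
    return "idle"

# Wider guard bands for demanding phases, narrower when load is light.
GUARD_BAND_MV = {"compute-bound": 50, "memory-bound": 30, "idle": 10}

def guard_band_for(cpu_util: float, mem_util: float) -> int:
    """Return the guard band (in millivolts) for the current phase."""
    return GUARD_BAND_MV[classify_phase(cpu_util, mem_util)]
```

In a real system the guard band would be applied through the platform's voltage-regulation interface; the point of the sketch is only the phase-to-guard-band indirection.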
-
Publication No.: US20240345899A1
Publication Date: 2024-10-17
Application No.: US18634269
Application Date: 2024-04-12
Applicant: Media Tek Inc.
Inventor: Shih-Chieh Wang
CPC classification number: G06F9/5094 , G06F9/5038 , G06F11/3419
Abstract: A multi-core processor includes a plurality of cores and a central dynamic voltage and frequency scaling (DVFS) system coupled to the plurality of cores. The DVFS system is configured to receive power parameters and performance parameters for the plurality of cores. The power parameters may indicate power indices for each respective core, and the performance parameters may indicate performance for each respective core. The DVFS system may determine a power margin based on a target power budget for the multi-core processor and the power indices for the plurality of cores. For one or more cores of the plurality of cores, the DVFS system may dynamically allocate power to the core by determining an adjusted power index based on the power margin and the performance of the core. Accordingly, the DVFS system may dynamically balance performance and power of the cores.
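The margin-based allocation in this abstract can be sketched in a few lines: compute the headroom between the package budget and the sum of per-core power indices, then distribute that headroom across cores weighted by their performance. The proportional weighting is an assumption for illustration; the patent does not specify this particular allocation rule.

```python
# Illustrative sketch of margin-based per-core power allocation.
# The proportional-to-performance split is an assumed policy.

def power_margin(target_budget: float, power_indices: list[float]) -> float:
    """Headroom between the package power budget and current core power."""
    return target_budget - sum(power_indices)

def allocate(power_indices: list[float], performance: list[float],
             target_budget: float) -> list[float]:
    """Distribute the margin to cores in proportion to their performance,
    returning adjusted power indices that sum to the target budget."""
    margin = power_margin(target_budget, power_indices)
    total_perf = sum(performance)
    return [p + margin * perf / total_perf
            for p, perf in zip(power_indices, performance)]
```

For example, two cores at 2.0 W and 3.0 W under a 7.0 W budget leave a 2.0 W margin; with equal performance weights each core's adjusted index gains 1.0 W.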
-
Publication No.: US12111711B2
Publication Date: 2024-10-08
Application No.: US17402927
Application Date: 2021-08-16
Applicant: Daedalus Prime LLC
Inventor: Travis T. Schluessler , Russell J. Fenger
IPC: G06F1/32 , G06F1/20 , G06F1/3203 , G06F1/3206 , G06F1/3234 , G06F1/3287 , G06F1/329 , G06F9/50
CPC classification number: G06F1/3206 , G06F1/206 , G06F1/3203 , G06F1/3253 , G06F1/3287 , G06F1/329 , G06F9/5094 , G06F9/50 , Y02D10/00
Abstract: An apparatus, method, and system are described herein for efficiently balancing performance and power between processing elements based on measured workloads. If a workload of a processing element indicates that it is a bottleneck, then its performance may be increased. However, if a platform or integrated circuit including the processing element is already operating at a power or thermal limit, the increase in performance is counterbalanced by a reduction or cap in another processing element's performance to maintain compliance with the power or thermal limit. As a result, bottlenecks are identified and alleviated by balancing power allocation, even when multiple processing elements are operating at a power or thermal limit.
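The counterbalancing rule in this abstract — raise the bottleneck's cap, and if the platform is at its limit, take an equal amount from another element — can be sketched as below. Choosing the donor as the element with the highest current cap is an assumption made for the example; the patent does not prescribe a donor-selection policy.

```python
# Illustrative sketch: zero-sum rebalancing of per-element power caps
# when the platform is at a power/thermal limit. Donor selection is assumed.

def rebalance(caps: dict[str, float], bottleneck: str,
              at_limit: bool, step: float = 1.0) -> dict[str, float]:
    """Raise the bottlenecked element's cap by `step`. If the platform
    is at its limit, offset the increase with an equal reduction on the
    non-bottleneck element with the highest current cap."""
    new = dict(caps)
    new[bottleneck] += step
    if at_limit:
        donor = max((k for k in caps if k != bottleneck), key=caps.get)
        new[donor] -= step
    return new
```

When `at_limit` is true the total of all caps is unchanged, which is exactly the compliance property the abstract describes.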
-
Publication No.: US20240330076A1
Publication Date: 2024-10-03
Application No.: US18190521
Application Date: 2023-03-27
Applicant: Advanced Micro Devices, Inc.
Inventor: Jerry Anton Ahrens , William Robert Alverson , Joshua Taylor Knight , Amitabh Mehra , Anil Harwani , Grant Evan Ley
IPC: G06F9/50
CPC classification number: G06F9/5094 , G06F9/5016 , G06F9/5033
Abstract: Task allocation with chipset attached memory and an additional processing unit is described. In accordance with the described techniques, a computing device includes a main system and one or more sub-systems which are coupled to the main system via a chipset link. The main system includes at least a processing unit and a system memory. The one or more sub-systems each include at least a chipset attached processing unit and a chipset attached memory. Contents of the system memory are transferable to the chipset attached memory of the sub-system via the chipset link to enable the chipset attached processing unit to perform one or more tasks using the contents from the chipset attached memory.
-
Publication No.: US20240320060A1
Publication Date: 2024-09-26
Application No.: US18189118
Application Date: 2023-03-23
Applicant: International Business Machines Corporation
Inventor: Thomas Jefferson Sandridge , Omar E. Colmenares , Raghavendra Prahlada Manchi , Shreya Ayyagari , Mayuko Arikawa
IPC: H04L67/1008 , H04L41/16 , H04L67/1012
CPC classification number: G06F9/5094 , G06F9/5083 , G06F2009/4557 , G06F2209/501
Abstract: A method, computer program product, and computer system are provided for load balancing in a hybrid cloud environment through optimization of energy resources. Real-time and historic data corresponding to a computing workload, one or more servers at one or more locations, and one or more clean energy sources accessible by the one or more servers at the one or more locations are collected. One or more key performance indicators, thresholds, or targets of a business associated with the computing workload are determined. The computing workload is routed to one or more servers at a location from among the one or more locations based on maximizing usage of clean energy from the one or more clean energy sources without affecting the key performance indicators, thresholds, or targets of the business.
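The routing decision this abstract describes — maximize clean-energy usage subject to the business's performance constraints — can be sketched as a constrained selection. The location fields (`latency_ms`, `clean_fraction`) and the single latency KPI are simplifying assumptions; the patent contemplates richer real-time and historic data.

```python
# Illustrative sketch: route a workload to the server location with the
# highest clean-energy fraction among those meeting the latency KPI.
# Field names and the single-KPI constraint are assumptions.

def route(latency_kpi_ms: float, locations: list[dict]):
    """Return the name of the best eligible location, or None if no
    location satisfies the KPI."""
    eligible = [loc for loc in locations
                if loc["latency_ms"] <= latency_kpi_ms]
    if not eligible:
        return None
    return max(eligible, key=lambda loc: loc["clean_fraction"])["name"]
```

With a loose KPI the greener but slower location wins; as the KPI tightens, the choice falls back to closer locations — the "without affecting the key performance indicators" behavior in the abstract.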
-
Publication No.: US12093750B1
Publication Date: 2024-09-17
Application No.: US18584265
Application Date: 2024-02-22
Applicant: Greenlight AI LLC
Inventor: Karl Andersen , Vitaly Leokumovich
IPC: G06F1/00 , G05B13/04 , G06F1/3206 , G06F9/50 , G06F21/60 , G06F1/3203
CPC classification number: G06F9/5094 , G05B13/048 , G06F1/3206 , G06F9/5038 , G06F21/606 , G06F1/3203
Abstract: A GPU-on-demand system includes a computing device equipped with a graphics processing unit (GPU) and memory; an Energy Management System (EMS) and a distributed power resource; a database for storing energy metrics, including energy expenditure; a Large Language Model (LLM) for processing the energy metrics to generate an energy management plan that defines when each of the plurality of distributed power resources shall be used; an API gateway comprising an API coupled to a network connection, the API gateway configured to provide external systems secure, on-demand access to the GPU; and a software module executing on the computing device, the software module configured to manage the GPU-on-demand system according to the energy management plan.
-
Publication No.: US12056525B2
Publication Date: 2024-08-06
Application No.: US17197690
Application Date: 2021-03-10
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Peter Morris , Jinseong Kim , Hyesun Hong
CPC classification number: G06F9/5005 , G06F9/48 , G06F9/4881 , G06F9/50 , G06F9/5011 , G06F9/5038 , G06F9/5094 , G06N3/08 , G06F2209/501 , G06F2209/505 , G06N20/00
Abstract: A scheduling method performed by a computing apparatus includes: generating an input vector including a resource status and metadata of each of tasks for parallel execution; determining an action for the input vector by executing an actor network based on the input vector; performing first resource scheduling for each of the tasks based on the determined action; performing second resource scheduling for each of the tasks based on the input vector; evaluating performance of first resource scheduling results of the first resource scheduling and second resource scheduling results of the second resource scheduling, for each of the tasks, using a critic network; selecting one of the first and second resource scheduling results for each of the tasks based on a result of the evaluating; and allocating resources to each of the tasks based on a resource scheduling result selected for each of the tasks.
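The selection step at the heart of this abstract — score two candidate scheduling results per task with a critic and keep the better one — can be sketched as below. The `critic` callable stands in for the critic network, and the two candidate schedules stand in for the actor-network result and the second (e.g. heuristic) result; all of these names are illustrative.

```python
# Illustrative sketch: per-task selection between two candidate resource
# schedules using a critic score. `critic` stands in for the critic network.

def select_schedules(tasks, actor_schedule, second_schedule, critic):
    """For each task, keep whichever of the two candidate schedules
    the critic scores higher (ties go to the actor's result)."""
    chosen = {}
    for t in tasks:
        a, s = actor_schedule[t], second_schedule[t]
        chosen[t] = a if critic(t, a) >= critic(t, s) else s
    return chosen
```

A toy critic that prefers allocations near some target shows the mixing behavior: different tasks can end up with results from different schedulers, which is the point of the per-task evaluation in the abstract.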
-
Publication No.: US12034597B2
Publication Date: 2024-07-09
Application No.: US17497692
Application Date: 2021-10-08
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Kshitij Doshi , Ned Smith , Thijs Metsch
IPC: G06F9/46 , G06F1/20 , G06F9/48 , G06F9/50 , G06F9/54 , G06F11/30 , H04L9/06 , H04L9/32 , H04L41/084 , H04L41/0869 , H04L41/5054 , H04L47/78 , H04L49/00 , H04L67/10 , H04W4/08 , H04W12/04
CPC classification number: H04L41/0843 , G06F1/206 , G06F9/4881 , G06F9/505 , G06F9/5094 , G06F9/542 , G06F11/3006 , H04L9/0637 , H04L9/3213 , H04L9/3247 , H04L41/0869 , H04L41/5054 , H04L47/781 , H04L49/70 , H04L67/10 , H04W4/08 , H04W12/04 , G06F2209/5021
Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to control processing of telemetry data at an edge platform. An example apparatus includes an orchestrator interface to, responsive to an amount of resources allocated to an orchestrator to orchestrate a workload at the edge platform meeting a first threshold, transmit telemetry data associated with the orchestrator to a computer to obtain a first orchestration result at a first granularity; a resource management controller to determine a second orchestration result at a second granularity to orchestrate the workload at the edge platform, the second granularity finer than the first granularity; and a scheduler to schedule a workload assigned to the edge platform based on the second orchestration result.
-
Publication No.: US12019898B2
Publication Date: 2024-06-25
Application No.: US17490199
Application Date: 2021-09-30
Applicant: Seagate Technology LLC
Inventor: Stacey Secatch , David W. Claude , Daniel J. Benjamin , Thomas V. Spencer , Matthew B. Lovell
CPC classification number: G06F3/0652 , G06F3/0604 , G06F3/0619 , G06F3/0635 , G06F3/0659 , G06F3/0673 , G06F9/5094 , G06F2209/5022
Abstract: A data storage system may have a data storage device with a memory arranged into a plurality of logical namespaces. A power module can be connected to the plurality of logical namespaces and configured to transition at least one memory cell in response to a workload computed for a namespace of the plurality of logical namespaces to maintain a power consumption of 8 watts or less for the data storage device.
-
Publication No.: US11989076B2
Publication Date: 2024-05-21
Application No.: US16731885
Application Date: 2019-12-31
Applicant: Intel Corporation
Inventor: Balaji Vembu , Josh B. Mastronarde , Nikos Kaburlasos
IPC: G06F1/32 , G06F1/3287 , G06F1/3296 , G06F9/50 , G06F13/40 , G06F1/26
CPC classification number: G06F1/3287 , G06F1/3296 , G06F9/5083 , G06F9/5088 , G06F9/5094 , G06F13/4022 , G06F1/26
Abstract: In an example, an apparatus comprises logic, at least partially comprising hardware logic, to power on a first set of processing clusters, dispatch a workload to the first set of processing clusters, detect a full operating state of the first set of processing clusters, and in response to the detection of a full operating state of the first set of processing clusters, to power on a second set of processing clusters. Other embodiments are also disclosed and claimed.
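The staged power-up in this abstract — dispatch to a first set of processing clusters and power on a second set only once the first reaches a full operating state — can be sketched with a small controller. The utilization threshold and the latching behavior (the second set stays on once enabled) are assumptions for the example.

```python
# Illustrative sketch of staged cluster power-up: the second cluster set
# powers on only after the first set reaches a full operating state.
# The threshold and latching behavior are assumed for illustration.

class ClusterPowerControl:
    def __init__(self, full_threshold: float = 0.95):
        self.full_threshold = full_threshold
        self.second_set_on = False

    def observe(self, first_set_utilization: float) -> bool:
        """Feed in the first set's utilization; return whether the
        second set of clusters is powered on."""
        if first_set_utilization >= self.full_threshold:
            self.second_set_on = True
        return self.second_set_on
```

Deferring the second set's power-up until the first set is saturated keeps idle clusters gated, which is the power-saving rationale the abstract implies.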