Abstract:
An embodiment includes determining a first power metric (e.g., memory module temperature) corresponding to a group of computing nodes that includes first and second computing nodes; and distributing a computing task to a third computing node (e.g., load balancing) in response to the determined first power metric; wherein the third computing node is located remotely from the first and second computing nodes. The first power metric may be specific to the group of computing nodes rather than to either of the first and second computing nodes. Such an embodiment may leverage knowledge of computing node group behavior, such as power consumption, to manage power consumption in computing node groups more efficiently. This "power tuning" may rely on data taken at the "silicon level" (e.g., an individual computing node such as a server) and/or at a large group level (e.g., a data center). Other embodiments are described herein.
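A minimal Python sketch of the group-level decision described above; the node records, the temperature threshold, and all names and values are illustrative assumptions, not taken from the abstract:

def choose_target_node(group_metric_celsius, threshold_celsius, local_nodes, remote_node):
    # Offload to the remote node when the group-level metric (e.g., a rack's
    # memory module temperature) exceeds the threshold; otherwise keep the
    # task on the least-loaded local node.
    if group_metric_celsius > threshold_celsius:
        return remote_node
    return min(local_nodes, key=lambda n: n["load"])

local = [{"name": "node-a", "load": 0.7}, {"name": "node-b", "load": 0.4}]
remote = {"name": "node-z", "load": 0.1}
print(choose_target_node(78.0, 70.0, local, remote)["name"])  # -> node-z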
Abstract:
A device may receive information that identifies a first task to be processed, may determine a performance metric value indicative of a behavior of a processor while processing a second task, and may assign, based on the performance metric value, the first task to a bin for processing the first task, the bin including a set of processors that operate based on a power characteristic.
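The bin assignment can be pictured with a short Python sketch; the bin boundaries, the utilization-style metric, and the processor names are assumptions made only for illustration:

def assign_to_bin(performance_metric, bins):
    # Return the first bin whose metric range contains the value observed
    # while a processor handled the second task.
    for b in bins:
        if b["lo"] <= performance_metric < b["hi"]:
            return b
    return bins[-1]  # fall back to the last bin if no range matches

bins = [
    {"name": "low-power", "lo": 0.0, "hi": 0.5, "processors": ["cpu0", "cpu1"]},
    {"name": "high-power", "lo": 0.5, "hi": 1.0, "processors": ["cpu2", "cpu3"]},
]
print(assign_to_bin(0.62, bins)["name"])  # -> high-power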
Abstract:
The invention relates to a method for executing a process on a multiprocessor system (PE) whose clock frequencies are individually adjustable (DIV), comprising the following steps: upon compilation of the source code of the process, subdividing the process into elementary tasks (TO), each of which is executable as several occurrences (Oj); dynamically allocating different processors to successive occurrences of one and the same elementary task; associating with each elementary task a central counter of occurrences (CT) and a threshold (n) on the number of occurrences; incrementing the occurrence counter upon the execution of each occurrence of the associated elementary task, independently of the processor allocated to the occurrence; and switching the frequency of the processor allocated to a current occurrence of an elementary task to a first frequency if the count reached in the associated counter is below the threshold, and otherwise switching the frequency of that processor to a second frequency distinct from the first.
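A minimal Python sketch of the central occurrence counter and the two-frequency switch; the concrete frequencies, the threshold value, and the choice of which frequency is "first" versus "second" are illustrative assumptions:

LOW_FREQ_MHZ = 400    # assumed "first" frequency
HIGH_FREQ_MHZ = 1200  # assumed "second" frequency, distinct from the first

class ElementaryTask:
    def __init__(self, threshold):
        self.count = 0              # central occurrence counter (CT), shared by all processors
        self.threshold = threshold  # threshold (n) on the number of occurrences

    def frequency_for_next_occurrence(self):
        # Incremented on every occurrence, independently of the processor allocated to it.
        self.count += 1
        # Below the threshold the allocated processor runs at the first frequency;
        # otherwise it is switched to the second frequency.
        return LOW_FREQ_MHZ if self.count < self.threshold else HIGH_FREQ_MHZ

task = ElementaryTask(threshold=3)
print([task.frequency_for_next_occurrence() for _ in range(5)])  # -> [400, 400, 1200, 1200, 1200]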
Abstract:
Implementations disclosed herein relate to thermal-based prioritized computing application scheduling. For example, a processor may determine a prioritized computing application. The processor may schedule the prioritized computing application to transfer execution from a first processing unit to a second processing unit based on a thermal reserve energy associated with the second processing unit.
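As a hedged illustration, the transfer decision might reduce to a check against the target unit's thermal reserve energy; the function name, units, and figures below are hypothetical:

def should_transfer(thermal_reserve_joules, required_reserve_joules):
    # Transfer the prioritized application only if the second processing unit
    # has enough thermal reserve energy to absorb its execution.
    return thermal_reserve_joules >= required_reserve_joules

# Hypothetical figures: the second unit can absorb 5 J before reaching its
# thermal limit, and the prioritized application is expected to dissipate 3 J.
if should_transfer(thermal_reserve_joules=5.0, required_reserve_joules=3.0):
    print("transfer the prioritized application to the second processing unit")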
Abstract:
Aspects of the disclosure relate to computing technologies. In particular, aspects of the disclosure relate to mobile computing device technologies, such as systems, methods, apparatuses, and computer-readable media for scheduling the execution of a task, such as a non-real-time, non-latency-sensitive background task, on a computing device. The scheduling improves calibration data by increasing the diversity of orientations used to generate the calibration data and by taking into account the effects of temperature changes on motion sensors.
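One way to picture the orientation-diversity test is the following Python sketch; the 30-degree spacing, the vector representation of orientations, and the function name are assumptions made only for illustration (a fuller version would also account for sensor temperature):

import math

def adds_orientation_diversity(current, seen, min_angle_deg=30.0):
    # Schedule the background calibration task only when the current device
    # orientation differs enough from the orientations already used.
    def angle(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.dist(a, (0, 0, 0)) * math.dist(b, (0, 0, 0))
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return all(angle(current, s) >= min_angle_deg for s in seen)

seen = [(0.0, 0.0, 1.0)]                                  # e.g., device lying flat
print(adds_orientation_diversity((0.0, 1.0, 0.0), seen))  # True: upright adds diversity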
Abstract:
Power management for a processing system that has multiple processing units (e.g., multiple graphics processing units (GPUs)) is described herein. The processing system includes a power manager that obtains performance, power, operational, or environmental data from a power management unit associated with each processor (e.g., GPU). The power manager determines, for example, an average value with respect to at least one of the performance, power, operational, or environmental data. If the average value is below a predetermined threshold for a predetermined amount of time, the power manager notifies a configuration manager to alter the number of active processors (e.g., GPUs), if possible. The power may then be distributed among the remaining GPUs or other processors, if beneficial for the operating and environmental conditions.
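A minimal Python sketch of the averaging-and-threshold step; the window length, threshold, and utilization figures are hypothetical:

def adjust_active_gpus(samples, threshold, window, active_gpus, min_gpus=1):
    # If the average of the last `window` samples (e.g., per-GPU utilization
    # reported by each power management unit) stays below the threshold,
    # ask the configuration manager to drop one active GPU, if possible.
    recent = samples[-window:]
    if len(recent) == window and sum(recent) / window < threshold and active_gpus > min_gpus:
        return active_gpus - 1
    return active_gpus

# Hypothetical utilization samples averaged across four GPUs over the window.
print(adjust_active_gpus([0.32, 0.28, 0.25, 0.22], threshold=0.4, window=4, active_gpus=4))  # -> 3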
Abstract:
A processor may determine the actual residency time of a non-core domain residing in a power saving state and, based on the actual residency time, the processor may determine an optimal power saving state (P-state) for the processor. In response to the non-core domain entering a power saving state, an interrupt generator (IG) may generate a first interrupt, and the device drivers or an operating system may use the first interrupt to start a timer (first value). In response to the non-core domain exiting the power saving state, the IG may generate a second interrupt, and the device drivers or an operating system may use the second interrupt to stop the timer (final value). The power management unit may use the first and final values to determine the actual residency time.
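A minimal Python sketch of the interrupt-driven residency measurement; the class and method names are hypothetical stand-ins for the first and second interrupts described above:

import time

class ResidencyTimer:
    # Uses the entry/exit interrupts to start and stop a timer and derive the
    # actual residency time of the non-core domain in the power saving state.
    def __init__(self):
        self.first_value = None

    def on_entry_interrupt(self):   # first interrupt: domain entered the state
        self.first_value = time.monotonic()

    def on_exit_interrupt(self):    # second interrupt: domain exited the state
        final_value = time.monotonic()
        return final_value - self.first_value  # actual residency time

timer = ResidencyTimer()
timer.on_entry_interrupt()
time.sleep(0.01)                    # stand-in for time spent in the power saving state
print(f"actual residency: {timer.on_exit_interrupt():.3f} s")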
Abstract:
One or more tasks to be executed on one or more processors are formulated into a graph, with dependencies between the tasks defined as edges in the graph. In the case of a Radio Access Technology (RAT) application, the graph is iterative, whereby each task may be activated a number of times that may be unknown at compile time. A discrete set of allowable frequencies at which the processors may execute tasks is defined, and the power dissipation of the processors at those frequencies is determined. A linear programming problem is then formulated and solved, which minimizes the overall power dissipation across all processors executing all tasks, subject to several constraints that guarantee complete and proper functionality. The switching of processors executing the tasks between operating points (frequency, voltage) may be controlled by embedding instructions into the tasks at design or compile time, or by a local supervisor monitoring execution of the tasks.
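A hedged sketch of what such a linear program can look like; this is an illustrative formulation, not the claimed one, and it omits the iterative activation counts and the dependency (edge) constraints described above. With x_{t,f} the time task t spends at frequency f, P(f) the measured power dissipation at frequency f, C_t the cycle requirement of task t, and D_t its deadline, the program minimizes total energy (power times time) across all tasks:

\[
\min_{x \ge 0} \; \sum_{t \in T} \sum_{f \in F} P(f)\, x_{t,f}
\quad \text{subject to} \quad
\sum_{f \in F} f\, x_{t,f} \ge C_t \ \ \forall t \in T,
\qquad
\sum_{f \in F} x_{t,f} \le D_t \ \ \forall t \in T .
\]

The solution assigns each task a mix of run times at the allowed frequencies, which is what the embedded instructions or the local supervisor would then enact as operating-point switches.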