Abstract:
The present invention relates to a platform power management scheme. In some embodiments, a platform provides a relative performance scale, expressed through one or more parameters that an OSPM (operating system-directed power management) system can request.
Abstract:
In one embodiment, a processor includes at least one core to execute instructions and power control logic to receive power capability information from a plurality of devices coupled to the processor, allocate a platform power budget to the devices, set a first power level at which each device is to be powered, communicate the first power level to the devices, and, responsive to a request from a second device for a higher power level, dynamically reduce the power allocated to a first device and increase the power allocated to the second device. Other embodiments are described and claimed.
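As an illustration of the budgeting flow summarized above, the following sketch allocates a fixed platform budget across devices from their reported capabilities and then shifts headroom toward a device that requests more power. The structure and function names (device_power_info, allocate_budget, grant_higher_power) and the proportional-share policy are assumptions made for the example, not details of the claimed design.

```c
#include <stdio.h>

#define NUM_DEVICES 3

/* Hypothetical per-device power capability record reported to the
 * power control logic (names are illustrative, not from the abstract). */
struct device_power_info {
    const char *name;
    int min_mw;        /* minimum power the device can run at (milliwatts) */
    int max_mw;        /* maximum power the device can use (milliwatts) */
    int allocated_mw;  /* power level currently granted by the controller */
};

/* Allocate the platform budget proportionally to each device's maximum,
 * clamped to its reported range; this stands in for "set a first power
 * level ... and communicate the first power level to the devices". */
static void allocate_budget(struct device_power_info *devs, int n, int budget_mw)
{
    int total_max = 0;
    for (int i = 0; i < n; i++)
        total_max += devs[i].max_mw;
    for (int i = 0; i < n; i++) {
        int share = (int)((long long)budget_mw * devs[i].max_mw / total_max);
        if (share < devs[i].min_mw) share = devs[i].min_mw;
        if (share > devs[i].max_mw) share = devs[i].max_mw;
        devs[i].allocated_mw = share;
    }
}

/* Dynamic reallocation: shift headroom from a donor device to a device
 * that requested a higher power level, keeping the total budget fixed. */
static void grant_higher_power(struct device_power_info *donor,
                               struct device_power_info *requester,
                               int extra_mw)
{
    int donor_headroom = donor->allocated_mw - donor->min_mw;
    if (extra_mw > donor_headroom)
        extra_mw = donor_headroom;
    if (requester->allocated_mw + extra_mw > requester->max_mw)
        extra_mw = requester->max_mw - requester->allocated_mw;
    donor->allocated_mw -= extra_mw;
    requester->allocated_mw += extra_mw;
}

int main(void)
{
    struct device_power_info devs[NUM_DEVICES] = {
        { "storage", 500, 2000, 0 },
        { "display", 800, 4000, 0 },
        { "modem",   300, 1500, 0 },
    };
    allocate_budget(devs, NUM_DEVICES, 5000);
    /* The modem requests more power; take it from the display's headroom. */
    grant_higher_power(&devs[1], &devs[2], 400);
    for (int i = 0; i < NUM_DEVICES; i++)
        printf("%s: %d mW\n", devs[i].name, devs[i].allocated_mw);
    return 0;
}
```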
Abstract:
Methods and systems may provide for identifying a workload associated with a platform and determining a scalability of the workload. Additionally, a performance policy of the platform may be managed based at least in part on the scalability of the workload. In one example, determining the scalability includes determining a ratio of productive cycles to actual cycles.
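The scalability metric named above, the ratio of productive cycles to actual cycles, can be shown with a short sketch. The counter names, the 0.75 threshold, and the policy strings are illustrative assumptions, not values from the abstract.

```c
#include <stdio.h>

/* Hypothetical cycle counters sampled over a monitoring window; in a
 * real platform these would come from hardware performance counters. */
struct cycle_sample {
    unsigned long long productive_cycles; /* cycles doing useful work (not stalled) */
    unsigned long long actual_cycles;     /* total unhalted cycles in the window */
};

/* Scalability as described: the ratio of productive cycles to actual
 * cycles. A value near 1.0 suggests the workload would use a higher
 * frequency productively; a low value suggests it is stall-bound. */
static double workload_scalability(const struct cycle_sample *s)
{
    if (s->actual_cycles == 0)
        return 0.0;
    return (double)s->productive_cycles / (double)s->actual_cycles;
}

/* Illustrative policy decision: raise the performance target only when
 * the workload scales well enough to use the extra cycles. */
static const char *performance_policy(double scalability)
{
    return scalability > 0.75 ? "raise performance state"
                              : "hold or lower performance state";
}

int main(void)
{
    struct cycle_sample s = { .productive_cycles = 820000, .actual_cycles = 1000000 };
    double scal = workload_scalability(&s);
    printf("scalability = %.2f -> %s\n", scal, performance_policy(scal));
    return 0;
}
```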
Abstract:
A processor of an aspect includes at least one lower processing capability, lower power consumption physical compute element and at least one higher processing capability, higher power consumption physical compute element. Migration performance benefit evaluation logic is to evaluate the performance benefit of migrating a workload from the at least one lower processing capability compute element to the at least one higher processing capability compute element, and to determine whether or not to allow the migration based on the evaluated performance benefit. Available energy and thermal budget evaluation logic is to evaluate the available energy and thermal budgets and to allow the migration only if it fits within those budgets. Workload migration logic is to perform the migration when allowed by both the migration performance benefit evaluation logic and the available energy and thermal budget evaluation logic.
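A minimal sketch of the two-gate decision described above follows: one check for the estimated performance benefit and one for the energy and thermal budgets, with migration performed only when both pass. The threshold and field names are assumptions made for the example.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative inputs to the migration decision. */
struct migration_request {
    double perf_benefit;       /* estimated speedup from moving to the big element */
    double extra_energy_mj;    /* extra energy the big element would consume */
    double extra_thermal_mw;   /* extra sustained power (thermal load) */
};

struct platform_budget {
    double energy_budget_mj;   /* energy headroom currently available */
    double thermal_budget_mw;  /* thermal headroom currently available */
};

/* Gate 1: migration performance benefit evaluation. */
static bool benefit_worthwhile(const struct migration_request *r)
{
    const double min_benefit = 1.2;   /* assumed threshold: at least 20% speedup */
    return r->perf_benefit >= min_benefit;
}

/* Gate 2: available energy and thermal budget evaluation. */
static bool fits_budgets(const struct migration_request *r,
                         const struct platform_budget *b)
{
    return r->extra_energy_mj <= b->energy_budget_mj &&
           r->extra_thermal_mw <= b->thermal_budget_mw;
}

/* The migration is performed only when both gates allow it. */
static bool allow_migration(const struct migration_request *r,
                            const struct platform_budget *b)
{
    return benefit_worthwhile(r) && fits_budgets(r, b);
}

int main(void)
{
    struct migration_request req = { 1.35, 40.0, 900.0 };
    struct platform_budget budget = { 120.0, 1500.0 };
    printf("migrate to higher-capability element: %s\n",
           allow_migration(&req, &budget) ? "yes" : "no");
    return 0;
}
```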
Abstract:
A method and system for determining an energy-efficient operating point of a platform or system. The platform has logic to dynamically manage the settings of processing cores and/or platform components to achieve maximum system energy efficiency. By using the characteristics of the workload and/or platform to determine the optimal settings, the logic upholds the platform's performance guarantees while minimizing the energy consumption of the processor cores and/or platform. The logic identifies opportunities to run the processing cores at higher performance levels, which shortens the execution time of the workload, and transitions the platform to a low-power system idle state once the workload completes. Because the execution time of the workload is reduced, the platform spends more time in the low-power system idle state, and overall system energy consumption therefore decreases.
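The race-to-idle reasoning above can be made concrete with a small energy calculation over a fixed window: a faster operating point draws more power while active but finishes sooner, leaving more of the window in the low-power idle state. All power and timing numbers below are illustrative assumptions.

```c
#include <stdio.h>

/* Total energy over a fixed window: run the workload at some active
 * power for its execution time, then spend the rest of the window in a
 * low-power idle state. 1 mW * 1 s = 1 mJ. */
static double window_energy_mj(double active_mw, double exec_s,
                               double idle_mw, double window_s)
{
    double idle_s = window_s - exec_s;
    return active_mw * exec_s + idle_mw * idle_s;
}

int main(void)
{
    const double window_s = 10.0;  /* fixed observation window */
    const double idle_mw  = 50.0;  /* deep system idle power */

    /* Slow operating point: lower power, but the work takes the whole window. */
    double slow = window_energy_mj(1000.0, 10.0, idle_mw, window_s);

    /* Fast operating point: ~60% more active power, but the work finishes in
     * half the time, so the platform idles for the remaining 5 seconds. */
    double fast = window_energy_mj(1600.0, 5.0, idle_mw, window_s);

    printf("slow point: %.0f mJ, fast point: %.0f mJ\n", slow, fast);
    return 0;
}
```

With these illustrative numbers the faster point uses 8250 mJ versus 10000 mJ for the slower one, showing how finishing early and idling longer can lower total energy.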
Abstract:
Methods and apparatuses relating to circuitry to spawn multiple virtual serial bus hub instances on the same physical serial bus hub are described. In one embodiment, an apparatus includes a serial bus hub to electrically couple a plurality of hosts and a plurality of devices, and a circuit to spawn a first virtual hub instance that is bound to a first host of the plurality of hosts and a first device of the plurality of devices, and to spawn a concurrently usable, second virtual hub instance that is bound to a second host of the plurality of hosts and a second device of the plurality of devices.
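A rough model of the binding described above might look like the following, where each spawned virtual hub instance records which upstream host and downstream device it is bound to on the shared physical hub. The identifiers and the fixed instance limit are assumptions made for the sketch.

```c
#include <stdio.h>

/* Illustrative model of one physical hub hosting multiple virtual hub
 * instances, each bound to one host and one device. */
struct virtual_hub_instance {
    int instance_id;
    int host_port;    /* upstream host this instance is bound to */
    int device_port;  /* downstream device this instance is bound to */
};

#define MAX_INSTANCES 4

struct physical_hub {
    struct virtual_hub_instance instances[MAX_INSTANCES];
    int count;
};

/* "Spawn" a virtual hub instance by recording a host/device binding on
 * the shared physical hub; returns the instance id, or -1 if full. */
static int spawn_virtual_hub(struct physical_hub *hub, int host_port, int device_port)
{
    if (hub->count >= MAX_INSTANCES)
        return -1;
    struct virtual_hub_instance *vh = &hub->instances[hub->count];
    vh->instance_id = hub->count;
    vh->host_port = host_port;
    vh->device_port = device_port;
    return hub->count++;
}

int main(void)
{
    struct physical_hub hub = { .count = 0 };
    /* Two concurrently usable instances on the same physical hub:
     * host 0 <-> device 2, and host 1 <-> device 5. */
    int a = spawn_virtual_hub(&hub, 0, 2);
    int b = spawn_virtual_hub(&hub, 1, 5);
    printf("instance %d: host %d <-> device %d\n",
           a, hub.instances[a].host_port, hub.instances[a].device_port);
    printf("instance %d: host %d <-> device %d\n",
           b, hub.instances[b].host_port, hub.instances[b].device_port);
    return 0;
}
```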
Abstract:
A heterogeneous processor architecture is described. For example, a processor according to one embodiment of the invention comprises: a set of large physical processor cores; a set of small physical processor cores having lower performance processing capabilities and lower power usage relative to the large physical processor cores; and virtual-to-physical (V-P) mapping logic to expose the set of large physical processor cores to software through a corresponding set of virtual cores and to hide the set of small physical processor cores from the software.
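One way to picture the V-P mapping logic is a table that maps each software-visible virtual core to the physical core currently backing it; small cores never appear as virtual cores, so software never sees them directly. The core ids and the remapping trigger below are illustrative assumptions.

```c
#include <stdio.h>

#define NUM_VIRTUAL_CORES 4

enum core_type { LARGE_CORE, SMALL_CORE };

struct physical_core {
    int id;
    enum core_type type;
};

/* Each exposed virtual core currently maps to one physical core; small
 * cores are absent from the initial table, so they stay hidden from
 * software until the mapping logic chooses to use them. */
static struct physical_core vp_map[NUM_VIRTUAL_CORES] = {
    { 0, LARGE_CORE }, { 1, LARGE_CORE }, { 2, LARGE_CORE }, { 3, LARGE_CORE },
};

/* Transparently remap a virtual core onto a hidden small core, e.g. for
 * a low-activity phase; software keeps using the same virtual core id. */
static void remap_to_small(int virtual_core, int small_core_id)
{
    vp_map[virtual_core].id = small_core_id;
    vp_map[virtual_core].type = SMALL_CORE;
}

int main(void)
{
    remap_to_small(2, 6); /* virtual core 2 now runs on hidden small core 6 */
    for (int v = 0; v < NUM_VIRTUAL_CORES; v++)
        printf("virtual core %d -> physical %s core %d\n", v,
               vp_map[v].type == LARGE_CORE ? "large" : "small", vp_map[v].id);
    return 0;
}
```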
Abstract:
According to one embodiment, a processor includes a plurality of processor cores for executing a plurality of threads, a shared storage communicatively coupled to the plurality of processor cores, a power control unit (PCU) communicatively coupled to the plurality of processor cores to determine, without any software (SW) intervention, whether a thread being executed by a first processor core should be migrated to a second processor core, and a migration unit that, in response to receiving an instruction from the PCU to migrate the thread, stores at least a portion of the architectural state of the first processor core in the shared storage and migrates the thread to the second processor core, without any SW intervention, such that the second processor core can continue executing the thread based on the architectural state from the shared storage, without the SW's knowledge.
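The save/restore flow implied above can be sketched as follows: the source core's architectural state is written to the shared storage and then restored on the destination core, with no OS involvement. The state fields shown are a small illustrative subset, not the actual state a real core would save.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative subset of per-core architectural state; a real processor
 * would save far more (all registers, flags, control state, etc.). */
struct arch_state {
    unsigned long long gpr[4];      /* a few general-purpose registers */
    unsigned long long instr_ptr;   /* instruction pointer of the thread */
};

/* Shared storage visible to both cores, as described in the abstract. */
static struct arch_state shared_storage;

/* Hypothetical migration-unit steps: save the source core's state to
 * the shared storage, then restore it on the destination core. The OS
 * is never involved and keeps scheduling the same logical processor. */
static void save_state(const struct arch_state *src_core)
{
    memcpy(&shared_storage, src_core, sizeof(shared_storage));
}

static void restore_state(struct arch_state *dst_core)
{
    memcpy(dst_core, &shared_storage, sizeof(*dst_core));
}

int main(void)
{
    struct arch_state core0 = { { 1, 2, 3, 4 }, 0x401000 };
    struct arch_state core1 = { { 0 }, 0 };

    /* The PCU decides to migrate the thread from core 0 to core 1. */
    save_state(&core0);
    restore_state(&core1);

    printf("core 1 resumes at 0x%llx with r0=%llu\n",
           core1.instr_ptr, core1.gpr[0]);
    return 0;
}
```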