Abstract:
Embodiments of the present invention relate to a system and method for providing processing capacity on demand. According to the embodiments, a processor package has a plurality of processing elements. One or more of the processing elements may be made active in response to increased demand for processing capacity based on modifiable authorization information.
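The following is a minimal sketch of the capacity-on-demand idea described above: a package descriptor holds a modifiable authorization limit, and additional processing elements are activated only while demand stays within that limit. The names (processor_package, pkg_activate_on_demand) and the fixed element count are illustrative assumptions, not taken from the embodiments themselves.

    /* Sketch only: a hypothetical package with an authorization limit. */
    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_ELEMENTS 8

    struct processor_package {
        bool active[MAX_ELEMENTS];   /* which processing elements are powered on */
        int  authorized_max;         /* modifiable authorization: elements allowed */
    };

    /* Try to bring one more processing element online in response to demand.
     * Returns true if an element was activated, false if the authorization
     * limit (or the physical element count) has been reached. */
    static bool pkg_activate_on_demand(struct processor_package *pkg)
    {
        int active_count = 0;
        for (int i = 0; i < MAX_ELEMENTS; i++)
            if (pkg->active[i])
                active_count++;

        if (active_count >= pkg->authorized_max || active_count >= MAX_ELEMENTS)
            return false;                    /* not authorized to add capacity */

        for (int i = 0; i < MAX_ELEMENTS; i++) {
            if (!pkg->active[i]) {
                pkg->active[i] = true;       /* model powering on the element */
                return true;
            }
        }
        return false;
    }

    int main(void)
    {
        struct processor_package pkg = { .active = { true }, .authorized_max = 2 };
        printf("first request:  %s\n", pkg_activate_on_demand(&pkg) ? "activated" : "denied");
        printf("second request: %s\n", pkg_activate_on_demand(&pkg) ? "activated" : "denied");
        return 0;
    }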
Abstract:
A method and machine-readable medium measure requests by threads requesting a lock to differentiate “hot” and “cold” locks in accordance with the level of contention for the locks. A hardware accelerator manages access to hot locks to improve performance.
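A small user-space sketch of the hot/cold classification follows: each lock carries a contention counter, and once contention crosses a threshold the lock is marked hot. The threshold HOT_THRESHOLD and the identifiers are assumptions for illustration, and the hardware accelerator is modeled only as a flag, not implemented.

    /* Sketch only: contention counting to separate hot and cold locks. */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define HOT_THRESHOLD 1000   /* illustrative contention level */

    struct tracked_lock {
        atomic_flag flag;        /* the lock itself */
        atomic_uint contended;   /* times a requester found the lock held */
        bool        hot;         /* true once contention crosses the threshold */
    };

    static void tracked_lock_acquire(struct tracked_lock *l)
    {
        /* First attempt; if it fails, the lock is contended. */
        if (atomic_flag_test_and_set(&l->flag)) {
            unsigned c = atomic_fetch_add(&l->contended, 1) + 1;
            if (c >= HOT_THRESHOLD)
                l->hot = true;   /* candidate for accelerator-managed access */
            while (atomic_flag_test_and_set(&l->flag))
                ;                /* spin until the holder releases */
        }
    }

    static void tracked_lock_release(struct tracked_lock *l)
    {
        atomic_flag_clear(&l->flag);
    }

    int main(void)
    {
        struct tracked_lock l = { .flag = ATOMIC_FLAG_INIT };
        tracked_lock_acquire(&l);
        tracked_lock_release(&l);
        printf("lock is %s\n", l.hot ? "hot" : "cold");
        return 0;
    }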
Abstract:
A method of and apparatus for selective delivery of an interrupt to one of multiple processors having independent operating systems is described. The interrupts are generated from various platform devices in the computer system. Depending on the mode of operation of the system, a controller is configured to deliver interrupts to a co-processor when the host processor is off, without turning on the host processor. The interrupt may be delivered to the correct processor using either a bus-based message or a dedicated interrupt line.
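The routing decision can be sketched as follows: when the host processor is off, the interrupt is steered to the co-processor, and the delivery path depends on whether a dedicated interrupt line is available. The enums and struct here are hypothetical names introduced for illustration, not identifiers from the described apparatus.

    /* Sketch only: choose the interrupt target and delivery path. */
    #include <stdio.h>

    enum target { TARGET_HOST, TARGET_COPROCESSOR };
    enum path   { PATH_BUS_MESSAGE, PATH_DEDICATED_LINE };

    struct platform_state {
        int host_powered;        /* 1 if the host processor is on, 0 if off */
        int has_dedicated_line;  /* 1 if a dedicated interrupt line exists */
    };

    /* Pick the processor that should receive a platform device interrupt
     * without waking the host when it is powered down. */
    static enum target route_interrupt(const struct platform_state *s, enum path *p)
    {
        *p = s->has_dedicated_line ? PATH_DEDICATED_LINE : PATH_BUS_MESSAGE;
        return s->host_powered ? TARGET_HOST : TARGET_COPROCESSOR;
    }

    int main(void)
    {
        struct platform_state s = { .host_powered = 0, .has_dedicated_line = 1 };
        enum path p;
        enum target t = route_interrupt(&s, &p);
        printf("deliver to %s via %s\n",
               t == TARGET_HOST ? "host" : "co-processor",
               p == PATH_DEDICATED_LINE ? "dedicated line" : "bus message");
        return 0;
    }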
Abstract:
A multiprocessor-scalable streaming data server arrangement in a multiprocessor data server having N processors, N being an integer greater than or equal to 2, includes implementing N NICs (Network Interface Cards), a first one of the N NICs being dedicated to receiving an incoming data stream. An interrupt from the first one of the N NICs is bound to a first one of the N processors and an interrupt for an nth NIC is bound to an nth processor, 0
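As one illustrative way to realize the per-NIC interrupt binding, the sketch below writes a single-CPU mask to the Linux /proc/irq/<irq>/smp_affinity interface so that the nth NIC's interrupt is serviced by the nth processor. The IRQ numbers in main() are placeholders, and this Linux mechanism is an assumption for illustration rather than the arrangement claimed above.

    /* Sketch only: bind NIC interrupt n to processor n via procfs. */
    #include <stdio.h>

    /* Write a single-CPU mask for `cpu` into the affinity file of `irq`.
     * Returns 0 on success, -1 if the file could not be written. */
    static int bind_irq_to_cpu(int irq, int cpu)
    {
        char path[64];
        snprintf(path, sizeof path, "/proc/irq/%d/smp_affinity", irq);

        FILE *f = fopen(path, "w");
        if (!f)
            return -1;
        fprintf(f, "%lx\n", 1UL << cpu);   /* hex bitmask with only `cpu` set */
        fclose(f);
        return 0;
    }

    int main(void)
    {
        /* Placeholder IRQ numbers for N = 4 NICs; NIC n goes to processor n. */
        int nic_irq[] = { 40, 41, 42, 43 };
        int n_nics = sizeof nic_irq / sizeof nic_irq[0];

        for (int n = 0; n < n_nics; n++) {
            if (bind_irq_to_cpu(nic_irq[n], n) != 0)
                fprintf(stderr, "could not bind IRQ %d to CPU %d\n", nic_irq[n], n);
        }
        return 0;
    }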
Abstract:
In one aspect of the invention is a method to synchronize accesses by multiple threads to shared resources. The method entails a first thread entering a processing queue to contend for a lock on a shared resource. If a second thread exists and is currently executing code, the first thread may execute the critical section of code as long as the second thread is not currently executing that critical section; if the second thread is currently executing the critical section, the first thread continues to contend for ownership of the shared resource until the second thread relinquishes ownership or until a yield count expires.
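The contention loop with a yield budget might look like the sketch below, using C11 atomics and POSIX sched_yield(). YIELD_COUNT and the function names are illustrative assumptions, not identifiers from the method itself.

    /* Sketch only: contend for the critical section until a yield budget expires. */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <sched.h>

    #define YIELD_COUNT 100   /* how long to keep contending before giving up */

    static atomic_flag critical_section_lock = ATOMIC_FLAG_INIT;

    /* Returns true if the calling thread acquired the lock and may execute the
     * critical section, false if the yield count expired while another thread
     * still held it. */
    static bool try_enter_critical_section(void)
    {
        for (int yields = 0; yields < YIELD_COUNT; yields++) {
            if (!atomic_flag_test_and_set(&critical_section_lock))
                return true;      /* no other thread is in the critical section */
            sched_yield();        /* let the current owner make progress */
        }
        return false;             /* budget exhausted; caller may retry later */
    }

    static void leave_critical_section(void)
    {
        atomic_flag_clear(&critical_section_lock);   /* relinquish ownership */
    }

    int main(void)
    {
        if (try_enter_critical_section()) {
            /* ... critical section work on the shared resource ... */
            leave_critical_section();
        }
        return 0;
    }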
Abstract:
A method and apparatus for power management is disclosed. The invention reduces power consumption in multiprocessing systems by dynamically adjusting processor power based on system workload. In particular, the method and apparatus determine the number of required processors based on the number of active threads and set a processor affinity to run the active threads on the determined number of required processors, thereby allowing the free processors to enter a low-power state.
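One illustrative way to confine the active threads to a subset of processors is shown below, using the Linux sched_setaffinity() call; count_active_threads() is a placeholder for whatever workload measurement the system actually uses, and the choice of Linux as the mechanism is an assumption for the sketch.

    /* Sketch only: restrict a process to as many CPUs as it has active threads. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    static int count_active_threads(void)
    {
        return 2;   /* placeholder: a real system would sample its run queues */
    }

    int main(void)
    {
        int online = (int)sysconf(_SC_NPROCESSORS_ONLN);   /* processors present */
        int needed = count_active_threads();
        if (needed > online)
            needed = online;
        if (needed < 1)
            needed = 1;

        /* Restrict this process (and its threads) to the first `needed` CPUs so
         * the remaining CPUs can idle into a low-power state. */
        cpu_set_t mask;
        CPU_ZERO(&mask);
        for (int cpu = 0; cpu < needed; cpu++)
            CPU_SET(cpu, &mask);

        if (sched_setaffinity(0, sizeof mask, &mask) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("running on %d of %d processors\n", needed, online);
        return 0;
    }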
Abstract:
In an embodiment, a processor includes at least one core, a power management unit having a first test register including a first field to store a test patch identifier associated with a test patch and a second field to store a test mode indicator to request a core functionality test, and a microcode storage to store microcode to be executed by the at least one core. Responsive to the test patch identifier, the microcode may access a firmware interface table and obtain the test patch from a non-volatile storage according to an address obtained from the firmware interface table. Other embodiments are described and claimed.
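To make the two-field test register concrete, the sketch below packs a test patch identifier and a test mode indicator into one 64-bit value. The field widths and bit positions are invented for illustration only and do not describe the layout of any real register or MSR.

    /* Sketch only: a hypothetical two-field test register encoding. */
    #include <stdint.h>
    #include <stdio.h>

    #define TEST_PATCH_ID_MASK  0xFFFFFFFFULL   /* bits 31:0, assumed width */
    #define TEST_MODE_REQUEST   (1ULL << 32)    /* bit 32, assumed position */

    static uint64_t encode_test_register(uint32_t patch_id, int request_core_test)
    {
        uint64_t value = (uint64_t)patch_id & TEST_PATCH_ID_MASK;
        if (request_core_test)
            value |= TEST_MODE_REQUEST;   /* ask microcode to run the core test */
        return value;
    }

    int main(void)
    {
        uint64_t reg = encode_test_register(0x1234, 1);
        printf("patch id: 0x%llx, test requested: %s\n",
               (unsigned long long)(reg & TEST_PATCH_ID_MASK),
               (reg & TEST_MODE_REQUEST) ? "yes" : "no");
        return 0;
    }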
Abstract:
Mechanisms for handling multiple data errors that occur simultaneously are provided. A processing device may determine whether multiple data errors occur in memory locations that are within a range of memory locations. If the multiple memory locations are within the range of memory locations, the processing device may continue with a recovery process. If one of the multiple memory locations is outside of the range of memory locations, the processing device may halt the recovery process.
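The range check that gates recovery can be sketched as follows, treating the protected range and the faulting locations as plain addresses. The function name and the inputs in main() are illustrative assumptions.

    /* Sketch only: continue recovery only if every error lies in the range. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Returns true (continue recovery) only if every faulting location lies
     * inside [range_start, range_end); a single outlier halts recovery. */
    static bool can_continue_recovery(const uintptr_t *errors, size_t n,
                                      uintptr_t range_start, uintptr_t range_end)
    {
        for (size_t i = 0; i < n; i++)
            if (errors[i] < range_start || errors[i] >= range_end)
                return false;
        return true;
    }

    int main(void)
    {
        uintptr_t errors[] = { 0x1000, 0x1040, 0x1080 };
        bool ok = can_continue_recovery(errors, 3, 0x1000, 0x2000);
        printf("%s recovery\n", ok ? "continue" : "halt");
        return 0;
    }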