Abstract:
A virtual multithreading hardware mechanism provides multithreading on a single-threaded processor. Thread switches are initiated by user-defined triggers. Synchronous triggers may be defined in the form of special trigger instructions. Asynchronous triggers may be defined via special marking instructions that identify an asynchronous trigger condition. The asynchronous trigger condition may be based on a plurality of atomic processor events. Minimal context information, such as only an instruction pointer address, is maintained by the hardware upon a thread switch. In contrast to traditional simultaneous multithreading schemes, the virtual multithreading hardware provides thread switches that are transparent to the operating system and may be performed without operating system intervention.
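As an illustration only, the sketch below models the minimal per-thread state and the two trigger classes in C; the type names, struct layout, and switch function are assumptions, since the abstract only requires that minimal context such as the instruction pointer be kept and that the switch stay invisible to the operating system.

    /* Hypothetical model of virtual-multithreading state: only the
     * instruction pointer of the suspended thread is retained, and the
     * switch is performed by hardware without operating system involvement. */
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uintptr_t saved_ip;    /* the only context kept across a switch */
        bool      runnable;
    } vmt_thread_ctx;

    typedef enum {
        VMT_TRIGGER_SYNC,      /* explicit trigger instruction in the code */
        VMT_TRIGGER_ASYNC      /* condition over atomic processor events,  */
                               /* armed earlier by a marking instruction   */
    } vmt_trigger_kind;

    /* Model of the hardware action on a trigger of either kind: save the
     * current IP and resume fetch at the other thread's saved IP. */
    static uintptr_t vmt_switch(vmt_thread_ctx *from, vmt_thread_ctx *to,
                                uintptr_t current_ip)
    {
        from->saved_ip = current_ip;
        return to->saved_ip;   /* next fetch address */
    }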
Abstract:
The latencies associated with retrieving instruction information for a main thread are decreased through the use of a simultaneous helper thread. The helper thread is a speculative prefetch thread that performs instruction prefetch and/or trace pre-build for the main thread.
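A purely software approximation of such a helper is sketched below in C; the trace representation and the trick of touching one byte per cache line are assumptions used only to make the idea concrete, since the prefetch and trace pre-build described in the abstract rely on hardware support.

    /* Software approximation of an instruction-prefetching helper thread:
     * it walks the predicted control-flow path ahead of the main thread and
     * touches the code bytes so they are cache-resident before the main
     * thread fetches them. Producing the predicted trace is outside this
     * sketch; real trace pre-build would populate a trace cache in hardware. */
    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        const uint8_t *addr;   /* start of a predicted basic block */
        size_t         size;   /* its length in bytes              */
    } code_block;

    void instruction_prefetch_helper(const code_block *trace, size_t nblocks)
    {
        volatile uint8_t sink = 0;
        for (size_t i = 0; i < nblocks; i++)
            for (size_t off = 0; off < trace[i].size; off += 64)
                sink ^= trace[i].addr[off];   /* touch one byte per 64-byte line */
        (void)sink;
    }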
Abstract:
The latencies associated with cache misses or other long-latency instructions in a main thread are decreased through the use of a simultaneous helper thread. The helper thread is a speculative prefetch thread that performs memory prefetch for the main thread. The instructions for the helper thread are dynamically incorporated into the main thread binary during the post-pass operation of a compiler.
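The flavor of helper code such a post-pass could emit is illustrated below with a pointer-chasing loop; the linked-list workload and the use of the GCC/Clang __builtin_prefetch builtin are assumptions for this sketch, not the mechanism claimed in the abstract.

    /* Sketch of a "prefetch slice": a stripped copy of the main loop that
     * keeps only the address computation and issues prefetches, intended to
     * run ahead of the main thread as a helper thread. */
    #include <stddef.h>

    struct node {
        struct node *next;
        long payload[15];              /* data the main thread actually uses */
    };

    /* Main thread loop: misses the cache on nearly every node. */
    long main_loop(const struct node *head)
    {
        long sum = 0;
        for (const struct node *n = head; n != NULL; n = n->next)
            sum += n->payload[0];
        return sum;
    }

    /* Helper thread slice: same traversal with the computation removed; only
     * the loads needed to form the next address remain, plus the prefetch. */
    void prefetch_slice(const struct node *head)
    {
        for (const struct node *n = head; n != NULL; n = n->next)
            __builtin_prefetch(n->next, 0 /* read */, 1 /* low locality */);
    }

In the scheme the abstract describes, a compiler post-pass would splice code like prefetch_slice into the existing main-thread binary and arrange for it to be spawned as a helper thread.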
Abstract:
Embodiments of an apparatus, system and method enhance the efficiency of processor resource utilization during instruction prefetching via one or more speculative threads. Renamer logic and a map table are used to filter instructions in a speculative thread's instruction stream. The map table includes a yes-a-thing bit that indicates whether the associated physical register's content reflects the value that would be computed by the main thread. A thread progress beacon table is used to track the relative progress of the main thread and a speculative helper thread. Based upon information in the thread progress beacon table, the main thread may terminate a helper thread that is unlikely to provide a performance benefit for the main thread.
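One way to picture the thread progress beacon table is the toy model below; the entry fields, the slot-based interface, and the exact run-behind test are assumptions, as the abstract says only that the table tracks relative progress and lets the main thread terminate an unhelpful helper.

    /* Toy model of a thread progress beacon table (TPBT): each entry marks a
     * program address the helper still intends to cover. If the main thread
     * reaches that address while the beacon is still armed, the helper has
     * fallen behind and is unlikely to provide a benefit, so it is killed. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define TPBT_ENTRIES 8

    typedef struct {
        uintptr_t target_ip;   /* beacon address planted by the helper */
        bool      armed;
    } tpbt_entry;

    static tpbt_entry tpbt[TPBT_ENTRIES];

    /* Helper thread: plant a beacon for the region it is about to prefetch,
     * and disarm it once prefetching for that region is complete. */
    void tpbt_arm(size_t slot, uintptr_t target_ip)
    {
        tpbt[slot].target_ip = target_ip;
        tpbt[slot].armed = true;
    }

    void tpbt_disarm(size_t slot)
    {
        tpbt[slot].armed = false;
    }

    /* Main thread: returns true if a still-armed beacon matches its current
     * IP, i.e. the helper is running behind and should be terminated. */
    bool tpbt_helper_is_behind(size_t slot, uintptr_t main_ip)
    {
        return tpbt[slot].armed && tpbt[slot].target_ip == main_ip;
    }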
Abstract:
Apparatus, systems and methods are provided for performing speculative data prefetching in a chip multiprocessor (CMP). Data is prefetched by a helper thread that runs on one core of the CMP while the main program runs concurrently on another core of the CMP. The prefetched data is then made available to the main core. For one embodiment, the data prefetched by the helper thread is pushed to the main core and may or may not be placed in the helper core's cache as well; such a push may occur during a broadcast of the data to all cores of an affinity group. For at least one other embodiment, the data prefetched by the helper thread is provided to the main core, upon the main core's request, from the helper core's local cache.
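A software analogue of the two delivery policies is sketched below; the staging buffer standing in for the main core's cache and the function names are assumptions, since the push and the cache-to-cache transfer in the abstract are hardware actions.

    /* Software analogue of the two delivery policies. In "push" mode the
     * helper forwards each prefetched line toward the main core (hardware
     * would inject it into the main core's cache, possibly broadcasting to
     * every core in the affinity group). In "pull" mode the data only lands
     * in the helper core's cache, and the main core's later demand load is
     * serviced from there by an ordinary cache-to-cache transfer. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define LINE_BYTES 64

    struct staging_buffer {            /* stands in for the main core's cache */
        uint8_t      line[LINE_BYTES];
        volatile int full;
    };

    /* Helper core, push policy: prefetch locally, then forward the line.
     * addr must point at a readable 64-byte region in this model. */
    void helper_push(const void *addr, struct staging_buffer *to_main)
    {
        __builtin_prefetch(addr, 0, 3);           /* warm the helper's cache */
        memcpy(to_main->line, addr, LINE_BYTES);  /* "push" toward main core */
        to_main->full = 1;
    }

    /* Helper core, pull policy: only warm the local cache and let the main
     * core fetch the line on demand later. */
    void helper_pull_only(const void *addr)
    {
        __builtin_prefetch(addr, 0, 3);
    }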