Abstract:
A processor may comprise a core area, a control unit, and an uncore area. The core area may comprise multiple processing cores and line-fill buffers. A first processing core of the core area may store a first weakly ordered transaction in a first line-fill buffer. The first processing core may offload the first weakly ordered transaction to extended buffer space provisioned in the uncore area after receiving a request from the uncore area. The first processing core may then de-allocate the first line-fill buffer after the first weakly ordered transaction is offloaded to the extended buffer space. The uncore area may then post the first weakly ordered transaction to a memory or a memory system. The control unit may track the first weakly ordered transaction to ensure that it is posted to the memory or the memory system.
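As a rough illustration only, the Python sketch below models the offload flow just described; the names (Core, Uncore, ControlUnit, Transaction) and their methods are invented for this example and do not correspond to actual hardware interfaces.

```python
# Illustrative model of the offload flow in the abstract; all names are hypothetical.
from dataclasses import dataclass


@dataclass
class Transaction:
    addr: int
    data: bytes
    posted: bool = False  # set once the uncore writes it to memory


class ControlUnit:
    """Tracks offloaded weakly ordered transactions until they are posted."""

    def __init__(self):
        self.pending = []

    def track(self, txn):
        self.pending.append(txn)

    def confirm_posted(self, txn):
        txn.posted = True
        self.pending.remove(txn)


class Uncore:
    """Holds the extended buffer space and posts transactions to memory."""

    def __init__(self, control_unit, memory):
        self.extended_buffer = []
        self.control_unit = control_unit
        self.memory = memory

    def accept_offload(self, txn):
        self.extended_buffer.append(txn)
        self.control_unit.track(txn)

    def drain_to_memory(self):
        while self.extended_buffer:
            txn = self.extended_buffer.pop(0)
            self.memory[txn.addr] = txn.data        # post to memory
            self.control_unit.confirm_posted(txn)   # control unit sees completion


class Core:
    """A processing core with a small set of line-fill buffers."""

    def __init__(self, num_lfbs=4):
        self.line_fill_buffers = [None] * num_lfbs

    def store_weakly_ordered(self, txn):
        idx = self.line_fill_buffers.index(None)    # allocate a free LFB
        self.line_fill_buffers[idx] = txn
        return idx

    def offload(self, idx, uncore):
        txn = self.line_fill_buffers[idx]
        uncore.accept_offload(txn)                  # move to extended buffer space
        self.line_fill_buffers[idx] = None          # de-allocate the LFB early


memory = {}
cu = ControlUnit()
uncore = Uncore(cu, memory)
core = Core()

idx = core.store_weakly_ordered(Transaction(addr=0x1000, data=b"\x2a"))
core.offload(idx, uncore)       # triggered by a request from the uncore
uncore.drain_to_memory()
assert not cu.pending           # every offloaded transaction was posted
```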
Abstract:
Systems, apparatuses, and methods for a hardware and software system to automatically decompose a program into multiple parallel threads are described. In some embodiments, the systems and apparatuses execute a method of original code decomposition and/or generated thread execution.
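The abstract gives no implementation detail, but a minimal, software-only sketch of the general idea of decomposing one piece of original code into parallel threads might look like the following; the chunking scheme and names are assumptions, not the patented mechanism.

```python
# Simplified, software-only illustration of splitting one function's work
# across multiple threads; the patent describes a hardware/software system,
# and this sketch only mirrors the general idea.
from concurrent.futures import ThreadPoolExecutor


def original_code(data):
    """The single-threaded program to be decomposed."""
    return [x * x for x in data]


def decomposed(data, num_threads=4):
    """Split the iteration space into chunks and run them as parallel threads."""
    chunk = (len(data) + num_threads - 1) // num_threads
    slices = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        parts = list(pool.map(original_code, slices))
    out = []
    for part in parts:
        out.extend(part)
    return out


data = list(range(16))
assert decomposed(data) == original_code(data)
```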
Abstract:
Disclosed are embodiments of a system, methods and mechanism for using idle thread units to perform acceleration threads that are transparent to the operating system. When the operating system scheduler has no work to schedule on the idle thread units, the operating system may issue a halt or monitor/mwait or other instruction to place the thread unit into an idle state. While the thread unit is idle, from the operating system perspective, the thread unit may be utilized to perform speculative acceleration threads in order to accelerate threads running on non-idle thread units. The context of the idle thread unit is saved prior to execution of the acceleration thread and is restored when the operating system requires use of the thread unit. The acceleration threads are transparent to the operating system. Other embodiments are also described and claimed.
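A conceptual sketch of that save/run/restore flow follows; all names are invented for illustration, and real hardware would perform these steps below the operating system's view.

```python
# Conceptual sketch: when the OS parks a thread unit (e.g. via HLT or
# MONITOR/MWAIT), its context is saved, a speculative acceleration thread
# runs, and the context is restored when the OS reclaims the unit.
import copy


class ThreadUnit:
    def __init__(self, unit_id):
        self.unit_id = unit_id
        self.context = {"regs": [0] * 16}
        self.saved_context = None
        self.running_acceleration = False

    def os_idle(self, acceleration_thread):
        """OS has no work: save context, then run a transparent acceleration thread."""
        self.saved_context = copy.deepcopy(self.context)
        self.running_acceleration = True
        acceleration_thread(self)          # speculative work on behalf of busy units

    def os_reclaim(self):
        """OS schedules real work again: restore the saved context first."""
        if self.running_acceleration:
            self.context = self.saved_context
            self.saved_context = None
            self.running_acceleration = False


def prefetch_helper(unit):
    """Example acceleration thread: scratch work discarded on reclaim."""
    unit.context["regs"][0] = 0xDEAD


tu = ThreadUnit(unit_id=1)
tu.os_idle(prefetch_helper)
tu.os_reclaim()
assert tu.context["regs"][0] == 0     # acceleration work left no OS-visible trace
```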
Abstract:
In an embodiment, the present invention includes an execution unit to execute instructions of a first type, a local power gate circuit coupled to the execution unit to power gate the execution unit while a second execution unit is to execute instructions of a second type, and a controller coupled to the local power gate circuit to cause it to power gate the execution unit when an instruction stream does not include the first type of instructions. Other embodiments are described and claimed.
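A toy model of the control decision is sketched below: gate the unit that executes a first instruction type whenever the observed stream contains none of those instructions. The window-based check and the type labels are assumptions made for this example.

```python
# Rough model of power gating an execution unit when the instruction
# stream holds no instructions of its type; names and thresholds are illustrative.
class ExecutionUnit:
    def __init__(self, name):
        self.name = name
        self.powered = True


class PowerGateController:
    def __init__(self, unit):
        self.unit = unit

    def observe(self, instruction_window, gated_type="A"):
        """Gate the unit if the window holds no instructions of its type."""
        if any(insn_type == gated_type for insn_type in instruction_window):
            self.unit.powered = True          # type-A work present: keep or restore power
        else:
            self.unit.powered = False         # no type-A work: locally power gate


unit_a = ExecutionUnit("type-A unit")
ctrl = PowerGateController(unit_a)

ctrl.observe(["B", "B", "B"])
assert not unit_a.powered     # stream has only type-B instructions

ctrl.observe(["B", "A"])
assert unit_a.powered         # a type-A instruction arrives, power is restored
```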
Abstract:
In one embodiment, the present invention includes an instruction decoder that can receive an incoming instruction and a path select signal and decode the incoming instruction into a first instruction code or a second instruction code responsive to the path select signal. The two different instruction codes, both representing the same incoming instruction, may be used by an execution unit to perform an operation optimized for different data lengths. Other embodiments are described and claimed.
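A minimal sketch of such a two-path decoder follows; the micro-op tables are invented stand-ins for the first and second instruction codes.

```python
# Map the same incoming instruction to one of two internal instruction codes
# depending on a path-select signal (e.g. a code tuned for narrow data versus
# one tuned for wide data). Opcode tables are illustrative only.
NARROW_PATH = {"add": "uop_add_64", "mul": "uop_mul_64"}
WIDE_PATH = {"add": "uop_add_128", "mul": "uop_mul_128"}


def decode(incoming_instruction, path_select):
    """Return the first or second instruction code for the same instruction."""
    table = WIDE_PATH if path_select else NARROW_PATH
    return table[incoming_instruction]


assert decode("add", path_select=0) == "uop_add_64"
assert decode("add", path_select=1) == "uop_add_128"
```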
Abstract:
A method includes forming a plurality of pipeline orderings, each pipeline ordering comprising a sequential, a parallel, or a combined sequential and parallel arrangement of a plurality of stages of a pipeline; analyzing the plurality of pipeline orderings to determine a total power of each ordering; and selecting one of the plurality of pipeline orderings based on the determined total power of each of the plurality of pipeline orderings.
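A toy version of that selection loop is shown below, assuming a simple additive power model with a fixed per-stage overhead for parallel groups; both the model and the constants are assumptions, not the patent's.

```python
# Enumerate candidate orderings of the pipeline stages, estimate a total
# power for each, and pick the cheapest. Power numbers are invented.
from itertools import permutations

STAGE_POWER = {"fetch": 1.0, "decode": 0.8, "execute": 2.5, "writeback": 0.7}
PARALLEL_OVERHEAD = 0.3  # assumed extra power per stage run in a parallel group


def total_power(ordering):
    """ordering: list of groups; stages within a group run in parallel."""
    power = 0.0
    for group in ordering:
        for stage in group:
            power += STAGE_POWER[stage]
            if len(group) > 1:
                power += PARALLEL_OVERHEAD
    return power


def candidate_orderings(stages):
    """A few sequential / parallel / mixed arrangements of the stages."""
    yield [[s] for s in stages]                       # fully sequential
    yield [list(stages)]                              # fully parallel
    for a, b, c, d in permutations(stages):
        yield [[a], [b, c], [d]]                      # mixed combinations


best = min(candidate_orderings(list(STAGE_POWER)), key=total_power)
print(best, total_power(best))
```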
Abstract:
An embodiment of the present invention is a technique to simulate performance of a multi-core system. A micro-architecture effect is estimated for each core in the multi-core system. A model of the memory hierarchy associated with each core is simulated. The simulated model of the memory hierarchy is superposed on the estimated micro-architecture effect to produce a performance figure for the multi-core system.
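A rough sketch of the superposition idea follows; the CPI-based formulas and constants are assumptions chosen only to make the combination step concrete.

```python
# Combine a per-core micro-architecture contribution with a simulated
# memory-hierarchy contribution to get a whole-system performance figure.
def microarch_cpi(core):
    """Estimated micro-architecture effect, e.g. from a core-only model."""
    return core["base_cpi"]


def memory_cpi(core, miss_penalty=100):
    """Simulated memory-hierarchy effect: misses per instruction * penalty."""
    return core["misses_per_instruction"] * miss_penalty


def system_performance(cores):
    """Superpose the two effects per core and report an average CPI."""
    per_core_cpi = [microarch_cpi(c) + memory_cpi(c) for c in cores]
    return sum(per_core_cpi) / len(per_core_cpi)


cores = [
    {"base_cpi": 0.8, "misses_per_instruction": 0.002},
    {"base_cpi": 1.1, "misses_per_instruction": 0.005},
]
print("average CPI:", system_performance(cores))
```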