Abstract:
In one embodiment, the present invention includes a method for providing a cache block in an exclusive state to a first cache and providing the same cache block in the exclusive state to a second cache when cores accessing the two caches are executing redundant threads. Other embodiments are described and claimed.
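As a rough illustration of the coherence policy described above (not the patented hardware), the Python sketch below models a hypothetical directory that normally invalidates the previous exclusive holder on a new request, but lets both caches keep the block in the exclusive state when the requesting cores are registered as a redundant pair. The names Directory, redundant_pairs, and request_exclusive are assumptions made for this sketch.

class Directory:
    def __init__(self, redundant_pairs):
        # redundant_pairs: set of frozensets of core ids running redundant threads
        self.redundant_pairs = redundant_pairs
        self.owners = {}  # block address -> set of core ids holding it exclusive

    def request_exclusive(self, core, block):
        holders = self.owners.setdefault(block, set())
        if not holders:
            holders.add(core)                   # normal exclusive grant
        elif all(frozenset({core, h}) in self.redundant_pairs for h in holders):
            holders.add(core)                   # redundant pair: both caches keep E
        else:
            holders.clear()                     # conventional path: invalidate holder
            holders.add(core)                   # and transfer ownership
        return sorted(holders)

d = Directory(redundant_pairs={frozenset({0, 1})})
print(d.request_exclusive(0, 0x1000))  # [0]
print(d.request_exclusive(1, 0x1000))  # [0, 1]  both caches hold the block exclusive
print(d.request_exclusive(2, 0x1000))  # [2]     non-redundant core invalidates holders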
Abstract:
A redundant multithreading processor is presented. In one embodiment, the processor performs execution of a thread and its duplicate thread in parallel and determines, when in a redundant multithreading mode, whether or not to synchronize an operation of the thread and an operation of the duplicate thread.
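The toy model below is only a sketch of one possible synchronization rule consistent with that description: while in redundant multithreading mode, externally visible operations (stores in this model) are synchronized and compared between the thread and its duplicate, while other operations proceed independently. The operation tuples and function names are hypothetical.

def needs_sync(op, rmt_mode):
    # assumed rule: only externally visible operations are synchronized in RMT mode
    return rmt_mode and op[0] == "store"

def run_redundant(trace, dup_trace, rmt_mode=True):
    for op, dup_op in zip(trace, dup_trace):   # thread and duplicate run in parallel
        if needs_sync(op, rmt_mode):
            assert op == dup_op, f"divergence detected: {op} vs {dup_op}"
    return "ok"

trace     = [("add", "r1"), ("store", 0x40, 7), ("add", "r2")]
dup_trace = [("add", "r1"), ("store", 0x40, 7), ("add", "r2")]
print(run_redundant(trace, dup_trace))  # ok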
Abstract:
In one embodiment, the present invention includes a method for controlling redundant execution such that if an exceptional event occurs, the redundant execution is stopped, non-redundant execution is performed in one of the threads until the exceptional event has been resolved, after which the state of the threads is synchronized and redundant execution is continued. Other embodiments are described and claimed.
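The Python sketch below traces that control flow in simplified form: on an exceptional event, one thread runs alone, the other thread's state is then copied from it, and redundant execution resumes. The ThreadCtx class, the operation tuples, and the stand-in "resolution" step are assumptions for illustration, not the claimed mechanism.

class ThreadCtx:
    def __init__(self):
        self.regs = {"r0": 0}

    def step(self, op):
        if op[0] == "add":
            self.regs[op[1]] = self.regs.get(op[1], 0) + op[2]

def run(ops):
    leading, trailing = ThreadCtx(), ThreadCtx()
    for op in ops:
        if op[0] == "exception":
            # 1. redundant execution stops; the leading thread runs alone
            leading.step(("add", "r0", 100))        # stand-in for resolving the event
            # 2. once resolved, the threads' state is synchronized
            trailing.regs = dict(leading.regs)
            # 3. redundant execution continues with the next operation
        else:
            leading.step(op)
            trailing.step(op)                        # redundant execution
            assert leading.regs == trailing.regs     # results compared while redundant
    return leading.regs, trailing.regs

print(run([("add", "r0", 1), ("exception",), ("add", "r0", 2)]))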
Abstract:
Embodiments of the present invention provide a method, apparatus and system for memory renaming. In one embodiment, a decode unit may decode a load instruction. If the load instruction is predicted to be memory renamed, it may have a predicted store identifier associated with it. The decode unit may transform the load instruction that is predicted to be memory renamed into a data move instruction and a load check instruction. The data move instruction may read data from the cache based on the predicted store identifier, and the load check instruction may compare an identifier associated with an identified source store against the predicted store identifier. A retirement unit may retire the load instruction if the predicted store identifier matches the identifier associated with the identified source store. In another embodiment of the present invention, the processor may re-execute the load instruction without memory renaming if the predicted store identifier does not match the identifier associated with the identified source store.
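A software model of that decode-time transformation is sketched below, purely as an illustration: the predicted-renamed load becomes a data move that forwards the predicted store's data plus a load check that validates the prediction, and a misprediction falls back to a normal load. The store_buffer structure, the "youngest store to the same address" rule, and all function names are simplifying assumptions.

store_buffer = {}   # store_id -> (address, data); illustrative structure

def data_move(predicted_store_id):
    # forward the predicted store's data instead of performing a memory access
    return store_buffer[predicted_store_id][1]

def load_check(predicted_store_id, actual_source_store_id):
    # compare the prediction against the store actually sourcing the load
    return predicted_store_id == actual_source_store_id

def execute_load(addr, predicted_store_id, memory):
    # identify the source store: youngest store to the same address (simplified)
    actual = max((sid for sid, (a, _) in store_buffer.items() if a == addr),
                 default=None)
    value = data_move(predicted_store_id)
    if load_check(predicted_store_id, actual):
        return value, "retired with memory renaming"
    # misprediction: re-execute as a normal load from this toy memory model
    return memory[addr], "re-executed without memory renaming"

memory = {0x10: 5}
store_buffer[1] = (0x10, 42)
print(execute_load(0x10, 1, memory))   # (42, 'retired with memory renaming')
store_buffer[2] = (0x10, 99)
print(execute_load(0x10, 1, memory))   # (5, 're-executed without memory renaming')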
Abstract:
Microarchitecture policies and structures partition execution resource clusters. In disclosed microarchitecture embodiments, micro-operations representing a sequential instruction ordering are partitioned into two sets. To one set of micro-operations, execution resources are allocated from a cluster that can perform memory access operations but not branching operations. To the other set, execution resources are allocated from a cluster that can perform branching operations but not memory access operations. The first and second sets of micro-operations may be executed out of sequential order but are retired so as to preserve their sequential instruction ordering.
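The toy scheduler below sketches only the partitioning and in-order retirement rule stated above: memory micro-ops go to one cluster, branch micro-ops to the other, and retirement re-imposes the original sequential order. Sending every non-memory micro-op to the branch cluster is a simplification of this sketch, and the names are assumptions.

MEM_OPS = {"load", "store"}

def dispatch(uops):
    mem_cluster, branch_cluster = [], []
    for seq, op in enumerate(uops):
        # memory micro-ops to the memory cluster; everything else to the other
        (mem_cluster if op in MEM_OPS else branch_cluster).append((seq, op))
    return mem_cluster, branch_cluster

def retire(mem_cluster, branch_cluster):
    # clusters may complete out of order; retirement restores sequential order
    return [op for seq, op in sorted(mem_cluster + branch_cluster)]

uops = ["load", "br", "store", "jmp", "load"]
mem, br = dispatch(uops)
print(mem)               # [(0, 'load'), (2, 'store'), (4, 'load')]
print(br)                # [(1, 'br'), (3, 'jmp')]
print(retire(mem, br))   # original sequential instruction ordering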
Abstract:
A method is disclosed. The method includes scheduling a load operation that is at least twice the size of the maximum access supported by a memory device, dividing the load operation into a plurality of separate load operation segments, each having a size equivalent to the maximum access supported by the memory device, and performing each of the plurality of load operation segments. A further method is disclosed in which a temporary register is used to minimize the number of memory accesses needed to support unaligned accesses.
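A minimal sketch of the splitting idea and the temporary-register optimization, under the assumption of a byte-addressable memory with a fixed maximum access size: each unaligned segment spans two aligned words, so keeping the previously loaded word in a temporary register lets N segments be served with N+1 aligned accesses instead of 2N. The MAX constant and function names are illustrative, not taken from the disclosure.

MAX = 8  # bytes per aligned memory access (assumed maximum access size)

def aligned_load(mem, base):
    return mem[base:base + MAX]

def unaligned_load(mem, addr, size):
    # serve an unaligned load of `size` (a multiple of MAX, at least 2*MAX)
    # as MAX-byte segments, reusing the last aligned word via a temporary register
    assert size % MAX == 0 and size >= 2 * MAX
    offset = addr % MAX
    base = addr - offset
    temp = aligned_load(mem, base)              # temporary register
    out, accesses = bytearray(), 1
    for _ in range(size // MAX):
        base += MAX
        word = aligned_load(mem, base)          # one new access per segment
        accesses += 1
        out += (temp + word)[offset:offset + MAX]   # combine previous and new words
        temp = word                              # reuse for the next segment
    return bytes(out), accesses

mem = bytes(range(64))
data, accesses = unaligned_load(mem, 3, 16)
print(data == mem[3:19], accesses)  # True 3  (two segments served by three accesses)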