Abstract:
In one embodiment, the present invention includes a method for executing an operation on low order portions of first and second source operands using a first execution stack of a processor and executing the operation on high order portions of the first and second source operands using a second execution stack of the processor, where the operation in the second execution stack is staggered by one or more cycles from the operation in the first execution stack. Other embodiments are described and claimed.
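A minimal C sketch of the staggering idea follows, assuming a 128-bit add split into 64-bit low and high halves; the one-cycle skew, carry forwarding, and all names (u128, staggered_add) are illustrative assumptions rather than the claimed circuit.

#include <stdint.h>
#include <stdio.h>

/* Assumed model: the low 64 bits execute on execution stack 0 in cycle N,
 * the high 64 bits execute on execution stack 1 in cycle N+1, and the
 * carry produced by the first stack is forwarded to the second. */
typedef struct { uint64_t lo, hi; } u128;

static u128 staggered_add(u128 a, u128 b) {
    u128 r;
    /* Cycle N, execution stack 0: low-order portions of the operands. */
    r.lo = a.lo + b.lo;
    unsigned carry = r.lo < a.lo;          /* carry out of the low half */
    /* Cycle N+1, execution stack 1: high-order portions, staggered by
     * one cycle, consuming the carry from the first stack. */
    r.hi = a.hi + b.hi + carry;
    return r;
}

int main(void) {
    u128 a = { 0xFFFFFFFFFFFFFFFFull, 0x1 };
    u128 b = { 0x1, 0x2 };
    u128 r = staggered_add(a, b);
    printf("hi=%016llx lo=%016llx\n",
           (unsigned long long)r.hi, (unsigned long long)r.lo);
    return 0;
}

The stagger is what makes the carry forwarding possible here: by the time the second stack operates on the high-order portions, the low-order result is already available.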
Abstract:
A method is disclosed. The method includes scheduling a load operation that is at least twice the size of the maximum access supported by a memory device, dividing the load operation into a plurality of separate load operation segments each having a size equivalent to the maximum access supported by the memory device, and performing each of the plurality of load operation segments. A further method is disclosed where a temporary register is used to minimize the number of memory accesses needed to support unaligned accesses.
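A brief C sketch of the segmentation step, assuming an 8-byte maximum access and a 16-byte load; the constant MAX_ACCESS, the function segmented_load, and the use of memcpy to stand in for a hardware access are assumptions for illustration only, and the temporary-register variant for unaligned accesses is not modeled.

#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define MAX_ACCESS 8   /* assumed maximum access size of the memory device, in bytes */

/* Assumed model: a load of 'size' bytes (here 2x MAX_ACCESS) is divided into
 * MAX_ACCESS-sized segments, each performed as a separate access. */
static void segmented_load(void *dst, const void *src, size_t size) {
    uint8_t *d = dst;
    const uint8_t *s = src;
    for (size_t off = 0; off < size; off += MAX_ACCESS) {
        size_t seg = size - off < MAX_ACCESS ? size - off : MAX_ACCESS;
        memcpy(d + off, s + off, seg);     /* one load operation segment */
    }
}

int main(void) {
    uint8_t memory[16];
    for (int i = 0; i < 16; i++) memory[i] = (uint8_t)i;
    uint8_t reg[16];                       /* 16-byte destination register */
    segmented_load(reg, memory, sizeof reg);
    printf("%d ... %d\n", reg[0], reg[15]);
    return 0;
}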
Abstract:
Embodiments of the present invention relate to selectively re-executing instructions in a computer processor based on their association with a particular cache miss.
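A small C sketch of selective re-execution, assuming each in-flight instruction records the identifier of the cache miss it depends on; the structure names (instr, replay_for_miss) and the miss-id tagging scheme are hypothetical and only illustrate the selection step.

#include <stdio.h>

#define NO_MISS (-1)

/* Assumed model: when a particular cache miss is resolved, only the
 * instructions associated with that miss are selected for re-execution;
 * independent instructions are left alone. */
typedef struct {
    const char *name;
    int miss_id;            /* id of the cache miss this instruction depends on */
} instr;

static void replay_for_miss(instr *insts, int n, int resolved_miss) {
    for (int i = 0; i < n; i++) {
        if (insts[i].miss_id == resolved_miss) {
            printf("re-executing %s (waited on miss %d)\n",
                   insts[i].name, resolved_miss);
            insts[i].miss_id = NO_MISS;    /* dependency now satisfied */
        }
    }
}

int main(void) {
    instr window[] = {
        { "load r1", 0       },            /* caused miss 0 */
        { "add r2",  0       },            /* consumer of r1 */
        { "load r3", 1       },            /* depends on a different miss */
        { "mul r4",  NO_MISS },            /* independent, never replayed */
    };
    replay_for_miss(window, 4, 0);         /* data for miss 0 returned */
    return 0;
}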
Abstract:
In one embodiment, the present invention includes a method for receiving a cache access request for data present in a lower-level cache line of a lower-level cache, and sending recency information regarding the lower-level cache line to a higher-level cache. The higher-level cache may be inclusive with the lower-level cache and may update age data associated with the cache line, thus reducing the likelihood of eviction of the cache line. Other embodiments are described and claimed.
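A minimal C sketch of the recency hint, assuming the inclusive higher-level cache tracks a simple per-line age counter; the 4-line cache, the age-reset policy, and the names recency_hint and pick_victim are assumptions standing in for whatever replacement state the real cache keeps.

#include <stdio.h>

#define L2_LINES 4

/* Assumed model of an inclusive higher-level cache: a recency hint from the
 * lower-level cache resets the age of the hinted line, making it less likely
 * to be chosen as the eviction victim. */
typedef struct {
    unsigned tag;
    unsigned age;           /* larger age = better eviction candidate */
} l2_line;

static void recency_hint(l2_line *l2, unsigned tag) {
    for (int i = 0; i < L2_LINES; i++) {
        if (l2[i].tag == tag)
            l2[i].age = 0;  /* refresh: the line is hot in the lower-level cache */
        else
            l2[i].age++;    /* everything else ages */
    }
}

static int pick_victim(const l2_line *l2) {
    int victim = 0;
    for (int i = 1; i < L2_LINES; i++)
        if (l2[i].age > l2[victim].age) victim = i;
    return victim;
}

int main(void) {
    l2_line l2[L2_LINES] = { {0x10, 3}, {0x20, 2}, {0x30, 1}, {0x40, 0} };
    recency_hint(l2, 0x10);                /* lower-level hit on tag 0x10 */
    printf("victim tag: %x\n", l2[pick_victim(l2)].tag);  /* no longer 0x10 */
    return 0;
}

Without the hint, an inclusive higher-level cache could evict a line that is still heavily used in the lower-level cache, forcing a back-invalidation; refreshing the age avoids that.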
Abstract:
In one embodiment, the present invention includes a method for providing a cache block in an exclusive state to a first cache and providing the same cache block in the exclusive state to a second cache when cores accessing the two caches are executing redundant threads. Other embodiments are described and claimed.
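A C sketch of the state decision, assuming a MESI-style protocol in which a second requester would normally force the block to Shared; the two-core block structure, the redundant_pair flag, and request_read are illustrative assumptions, not the claimed coherence logic.

#include <stdbool.h>
#include <stdio.h>

typedef enum { INVALID, SHARED, EXCLUSIVE } state;

/* Assumed model: when the two requesting cores execute redundant threads,
 * both copies of the block are granted the exclusive state; otherwise the
 * conventional downgrade to Shared applies. */
typedef struct {
    state st[2];            /* per-core cache state of one block */
    bool redundant_pair;    /* cores 0 and 1 execute redundant threads */
} block;

static void request_read(block *b, int core) {
    int other = core ^ 1;
    if (b->st[other] == EXCLUSIVE && !b->redundant_pair) {
        b->st[other] = SHARED;             /* conventional path */
        b->st[core]  = SHARED;
    } else {
        b->st[core] = EXCLUSIVE;           /* redundant pair or no other copy */
    }
}

int main(void) {
    block b = { { INVALID, INVALID }, true };
    request_read(&b, 0);
    request_read(&b, 1);
    printf("core0=%d core1=%d (2 = Exclusive)\n", b.st[0], b.st[1]);
    return 0;
}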
Abstract:
In one embodiment, the present invention includes a method for controlling redundant execution such that if an exceptional event occurs, the redundant execution is stopped, non-redundant execution is performed in one of the threads until the exceptional event has been resolved, after which the state of the threads is synchronized and redundant execution is continued. Other embodiments are described and claimed.
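A C sketch of that control flow, assuming the exceptional event is handled entirely by the leading thread and the state copy is a plain structure assignment; the step counter, the checksum field, and exceptional_event_pending are placeholders for whatever event detection and state the processor actually uses.

#include <stdbool.h>
#include <stdio.h>

typedef struct { long regs_checksum; } thread_state;

static bool exceptional_event_pending(int step) { return step == 2; }

int main(void) {
    thread_state leading = { 0 }, trailing = { 0 };
    bool redundant = true;

    for (int step = 0; step < 5; step++) {
        if (redundant && exceptional_event_pending(step)) {
            redundant = false;             /* stop redundant execution */
            printf("step %d: handling event non-redundantly\n", step);
            leading.regs_checksum += 100;  /* only one thread runs here */
            trailing = leading;            /* synchronize thread state */
            redundant = true;              /* continue redundant execution */
            continue;
        }
        leading.regs_checksum += 1;        /* both threads execute normally */
        trailing.regs_checksum += 1;
    }
    printf("states match: %d\n", leading.regs_checksum == trailing.regs_checksum);
    return 0;
}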
Abstract:
A redundant multithreading processor is presented. In one embodiment, the processor performs execution of a thread and its duplicate thread in parallel and determines, when in a redundant multithreading mode, whether or not to synchronize an operation of the thread and an operation of the duplicate thread.
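A short C sketch of the synchronization decision, under the assumption that only externally visible operations (stores leaving the sphere of replication) are synchronized and compared between the thread and its duplicate; the op_kind enum, needs_sync, and that policy are assumptions chosen only to make the decision concrete.

#include <stdbool.h>
#include <stdio.h>

typedef enum { OP_ADD, OP_LOAD, OP_STORE } op_kind;

/* Assumed policy: in redundant multithreading (RMT) mode, synchronize and
 * compare only operations whose results leave the core. */
static bool needs_sync(bool rmt_mode, op_kind op) {
    return rmt_mode && op == OP_STORE;
}

static void execute(op_kind op, long main_val, long dup_val, bool rmt_mode) {
    if (needs_sync(rmt_mode, op)) {
        if (main_val != dup_val)
            printf("mismatch detected: %ld vs %ld\n", main_val, dup_val);
        else
            printf("store synchronized and checked: %ld\n", main_val);
    } else {
        printf("executed without synchronization\n");
    }
}

int main(void) {
    bool rmt_mode = true;
    execute(OP_ADD,   7, 7, rmt_mode);     /* runs independently in each thread */
    execute(OP_STORE, 7, 7, rmt_mode);     /* compared before leaving the core */
    return 0;
}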