Abstract:
The present invention is in the field of memory architecture and management. More particularly, the present invention provides a method, apparatus, system, and machine-readable medium to hide refresh cycles of a memory array such as dynamic random access memory.
Abstract:
A modified least recently allocated cache enables a computer to use a modified least recently allocated cache block replacement policy. In a first embodiment, an indicator of the least recently allocated cache block is tracked. When a cache block is referenced, the referenced cache block is compared with the least recently allocated cache block indicator. If the two identify the same cache block, the least recently allocated cache block indicator is adjusted to identify a different cache block. This adjustment prevents the most recently referenced cache block from being replaced. In an alternative embodiment, the most recently referenced cache block is similarly tracked, but the least recently allocated cache block indicator is not immediately adjusted. Only when a new cache block is to be allocated are the least recently allocated cache block indicator and the most recently referenced cache block indicator compared. If the two indicators identify the same block, a different cache block is selected for allocating the new cache block.
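To make the first embodiment concrete, the following is a minimal sketch of one cache set that keeps a least-recently-allocated (round-robin) pointer and advances it whenever it points at the block that was just referenced. The class and method names are illustrative and are not drawn from the patent.

```python
# Sketch of the first embodiment: a single cache set using a
# least-recently-allocated (round-robin) pointer that is nudged forward
# whenever it identifies the block that was just referenced.

class ModifiedLRASet:
    def __init__(self, num_ways):
        self.tags = [None] * num_ways   # tag stored in each way (None = empty)
        self.lra = 0                    # index of the least recently allocated way

    def access(self, tag):
        """Look up a tag; on a miss, allocate into the LRA way."""
        for way, stored in enumerate(self.tags):
            if stored == tag:
                # Hit: if the LRA pointer identifies the block just referenced,
                # advance it so the most recently referenced block is not the
                # next replacement victim.
                if way == self.lra:
                    self.lra = (self.lra + 1) % len(self.tags)
                return True
        # Miss: replace the least recently allocated block.
        self.tags[self.lra] = tag
        self.lra = (self.lra + 1) % len(self.tags)
        return False


if __name__ == "__main__":
    s = ModifiedLRASet(num_ways=4)
    for t in ["A", "B", "C", "D", "A", "E"]:
        print(t, "hit" if s.access(t) else "miss")
```

In the short run above, block "A" is re-referenced just as the round-robin pointer reaches it, so the pointer skips ahead and the subsequent miss evicts "B" rather than the most recently referenced block.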
Abstract:
A circuit including a plurality of latches having feedback control circuitry, and a plurality of data input terminals and data output terminals respectively coupled to alternative sides of the plurality of latches.
Abstract:
In accordance with embodiments disclosed herein, systems and methods are provided for tracking the mode of processing devices in an instruction tracing (IT) system. The method may include receiving an indication of a change in a current execution mode of the processing device. The method may also include determining that the current execution mode of the received indication is different from the value of the execution mode of a first execution mode (EM) packet previously generated by the IT module. The method may also include generating, based on the determination that the current execution mode is different, a second EM packet that provides the value of the current execution mode of the processing device to indicate the change in execution mode for an instruction in a trace generated by the IT module. The method may further include generating transactional memory (TMX) packets having an n-bit mode pattern in the packet log, where n is at least two and the n-bit mode indicates the transaction status of the TMX operation.
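A hedged sketch of the packet-generation logic described above: an EM packet is emitted only when the reported execution mode differs from the mode carried by the previously generated EM packet, and TMX packets carry an n-bit (n >= 2) transaction-status field. The packet layouts and names below are invented for illustration and do not reflect the actual trace format.

```python
# Illustrative model of mode-change (EM) and transaction-status (TMX) packets
# appended to a trace log.

from dataclasses import dataclass


@dataclass
class EMPacket:
    mode: int            # e.g. 0 = 16-bit, 1 = 32-bit, 2 = 64-bit (illustrative encoding)


@dataclass
class TMXPacket:
    status: int          # n-bit transaction status, n >= 2


class InstructionTracer:
    def __init__(self):
        self.log = []
        self.last_em_mode = None   # mode carried by the last EM packet, if any

    def report_mode(self, current_mode):
        # Generate a new EM packet only when the mode differs from the one
        # recorded in the previously generated EM packet.
        if current_mode != self.last_em_mode:
            self.log.append(EMPacket(mode=current_mode))
            self.last_em_mode = current_mode

    def report_transaction(self, status, n_bits=2):
        # The n-bit mode pattern must fit in at least two bits.
        assert n_bits >= 2 and 0 <= status < (1 << n_bits)
        self.log.append(TMXPacket(status=status))


if __name__ == "__main__":
    tr = InstructionTracer()
    tr.report_mode(2)                    # first mode report -> EM packet
    tr.report_mode(2)                    # unchanged -> no packet
    tr.report_mode(1)                    # changed -> second EM packet
    tr.report_transaction(status=0b01)   # e.g. "transaction begun" (illustrative)
    print(tr.log)
```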
Abstract:
Methods and apparatus to provide transactional memory execution in out-of-order processors are described. In one embodiment, a stored value corresponds to the number of transactional memory access requests that are uncommitted. The stored value may be used to provide nested recovery in case of an error, fault, or similar condition, in accordance with a described embodiment.
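As a rough illustration of the stored value described above, the sketch below keeps a counter of uncommitted (nested) transactions together with a per-level checkpoint, so an abort recovers to the innermost uncommitted level. The model and names are assumptions for illustration, not the processor mechanism.

```python
# Bookkeeping-only model: a counter of uncommitted transactional regions plus
# one checkpoint per nesting level for recovery on an error or abort.

class NestedTransactions:
    def __init__(self, memory):
        self.memory = memory          # dict acting as the live memory state
        self.uncommitted = 0          # stored value: depth of uncommitted transactions
        self.checkpoints = []         # snapshot taken at the start of each level

    def begin(self):
        self.checkpoints.append(dict(self.memory))
        self.uncommitted += 1

    def commit(self):
        assert self.uncommitted > 0
        self.checkpoints.pop()        # enclosing levels still guard the state
        self.uncommitted -= 1

    def abort(self):
        # Recover to the checkpoint of the innermost uncommitted transaction.
        assert self.uncommitted > 0
        self.memory.clear()
        self.memory.update(self.checkpoints.pop())
        self.uncommitted -= 1


if __name__ == "__main__":
    tx = NestedTransactions({"x": 0})
    tx.begin();  tx.memory["x"] = 1
    tx.begin();  tx.memory["x"] = 2
    tx.abort()                        # inner rollback: x back to 1
    tx.commit()                       # outer commit keeps x == 1
    print(tx.memory)
```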
Abstract:
An embodiment of the present invention provides an apparatus for memory access demarcation. Data is accessed from a first cache, which comprises a first set of addresses and corresponding data at each of the addresses in the first set. A plurality of addresses is generated for a second set of addresses. The second set of addresses follows the first set of addresses and is calculated based on a fixed stride; the second set of addresses is associated with data from a first stream. A plurality of addresses is also generated for a third set of addresses. The third set of addresses likewise follows the first set of addresses. Each address in the third set is generated by tracing a link associated with another address in the third set. The third set of addresses is associated with data from a second stream.
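The two address-generation schemes can be sketched as follows: one stream of addresses computed from a fixed stride, and a second stream produced by tracing the link stored at each visited address (pointer chasing). The toy memory model and function names are assumptions for illustration only.

```python
# Illustrative address generation for a stride-based stream and a
# link-traced (pointer-chasing) stream.

def stride_addresses(base, stride, count):
    """Second set: addresses that follow the base at a fixed stride."""
    return [base + stride * i for i in range(1, count + 1)]


def linked_addresses(memory, head, count):
    """Third set: each address is found by tracing the link stored at the
    previously visited address (e.g. a linked-list 'next' pointer)."""
    out, addr = [], head
    for _ in range(count):
        addr = memory.get(addr)       # follow the link stored at addr
        if addr is None:
            break
        out.append(addr)
    return out


if __name__ == "__main__":
    # Toy memory: address -> link to the next node of a list.
    memory = {0x1000: 0x1800, 0x1800: 0x2400, 0x2400: None}
    print([hex(a) for a in stride_addresses(base=0x4000, stride=64, count=4)])
    print([hex(a) for a in linked_addresses(memory, head=0x1000, count=4)])
```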
Abstract:
A shared-memory system includes processing modules communicating with each other through a network. Each of the processing modules includes a processor, a cache, and a memory unit that is locally accessible by the processor and remotely accessible via the network by all other processors. A home directory records the states and locations of data blocks in the memory unit. A prediction facility that contains reference history information for the data blocks predicts the next requester of each of a number of recently referenced data blocks. The prediction facility informs the predicted next requester of the current owner of the data block. As a result, the next requester can issue a request to the current owner directly, without an additional hop through the home directory.
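A simplified sketch of the prediction step: the home node records a short reference history per block, predicts the next requester from that history, and sends it a hint naming the block's current owner so a direct request can bypass the home-directory hop. The repeat-previous-requester heuristic is an assumption for illustration, not the patented prediction facility.

```python
# Home directory with an illustrative next-requester predictor.

from collections import defaultdict


class HomeDirectory:
    def __init__(self):
        self.owner = {}                       # block -> current owner node
        self.history = defaultdict(list)      # block -> recent requester nodes
        self.hints = defaultdict(dict)        # node -> {block: owner} hints delivered

    def handle_request(self, block, requester):
        self.history[block].append(requester)
        previous_owner = self.owner.get(block)
        self.owner[block] = requester         # requester becomes the new owner
        # Predict who asks next and tell it where the block now lives.
        predicted = self.predict_next(block, requester)
        if predicted is not None:
            self.hints[predicted][block] = requester
        return previous_owner

    def predict_next(self, block, current_requester):
        # Toy heuristic: the node that requested the block before the current
        # requester is assumed likely to request it again.
        for node in reversed(self.history[block][:-1]):
            if node != current_requester:
                return node
        return None


if __name__ == "__main__":
    home = HomeDirectory()
    home.handle_request("blk0", requester="P1")
    home.handle_request("blk0", requester="P2")   # P1 predicted as next requester
    print(home.hints["P1"])                       # {'blk0': 'P2'} -> direct request to P2
```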
Abstract:
A computer system incorporating a pipelined bus that maintains data coherency, supports long-latency transactions, and provides processor order is described. The computer system includes bus agents having in-order queues that track multiple outstanding transactions across a system bus and that perform snoops in response to transaction requests, providing snoop results and modified data within one transaction. Additionally, the system supports long-latency transactions by providing deferred identifiers during transaction requests, which are later used to restart the deferred transactions.
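The deferred-transaction handshake can be sketched as below: an agent tracks its outstanding requests in an in-order queue, a long-latency request is answered with a deferred identifier instead of data, and the later reply matching that identifier restarts (completes) the transaction. All structures and field names are illustrative, not the bus protocol's actual encoding.

```python
# Illustrative bus agent with an in-order queue and deferred-reply matching.

from collections import deque
from dataclasses import dataclass
from typing import Optional


@dataclass
class Transaction:
    txn_id: int
    address: int
    deferred_id: Optional[int] = None
    data: Optional[int] = None


class BusAgent:
    def __init__(self):
        self.in_order_queue = deque()       # outstanding transactions, in request order

    def issue(self, txn):
        self.in_order_queue.append(txn)

    def receive_deferred(self, txn_id, deferred_id):
        # The request phase completed, but data will arrive later under deferred_id.
        for txn in self.in_order_queue:
            if txn.txn_id == txn_id:
                txn.deferred_id = deferred_id
                return

    def receive_deferred_reply(self, deferred_id, data):
        # Restart of the deferred transaction: match by deferred identifier.
        for txn in list(self.in_order_queue):
            if txn.deferred_id == deferred_id:
                txn.data = data
                self.in_order_queue.remove(txn)
                return txn
        return None


if __name__ == "__main__":
    agent = BusAgent()
    agent.issue(Transaction(txn_id=1, address=0x80))
    agent.receive_deferred(txn_id=1, deferred_id=7)       # long-latency: deferred
    done = agent.receive_deferred_reply(deferred_id=7, data=0xAB)
    print(done)
```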