Abstract:
A method of controlling memory operations in a transactional shared memory system having a plurality of nodes connected through an interconnect network. The method includes initiating a memory operation at a first node including a first memory controller and a transaction table where the transaction table is configured to store a list of nodes affected by the memory operation, transmitting a store request signal through the interconnect network to a second node including a second memory controller and an access table where the store request signal includes memory operation data from the first memory controller, storing memory operation data to the access table in entries corresponding to one or more memory addresses affected by the memory operation, identifying a memory conflict with one or more nodes in the list of nodes when the one or more memory addresses affected by the memory operation are also affected by one or more conflicting transactions listed in the access table, transmitting an abort signal from the second node to each of the one or more nodes corresponding to the memory conflict, and transmitting an intent to commit signal from the first node to the second node.
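As a rough illustration of the conflict check described above, the following Go sketch models the second node's access table keyed by memory address; the type and field names (node, accessTable, recordStore) are assumptions introduced for illustration and are not terms from the claimed method.

// Minimal sketch (not the claimed implementation): an access table that
// records which transactions touched which addresses and reports conflicts.
package main

import "fmt"

type TxID int
type Addr uint64

type node struct {
	id          int
	accessTable map[Addr][]TxID // per-address list of transactions that touched it
}

// recordStore stores the incoming operation's addresses in the access table
// and returns any conflicting transactions already listed for those addresses.
func (n *node) recordStore(tx TxID, addrs []Addr) []TxID {
	var conflicts []TxID
	for _, a := range addrs {
		for _, other := range n.accessTable[a] {
			if other != tx {
				conflicts = append(conflicts, other)
			}
		}
		n.accessTable[a] = append(n.accessTable[a], tx)
	}
	return conflicts
}

func main() {
	second := &node{id: 2, accessTable: map[Addr][]TxID{}}
	second.recordStore(7, []Addr{0x100, 0x140})       // earlier transaction recorded at the second node
	conflicts := second.recordStore(9, []Addr{0x140}) // store request arriving from the first node
	for _, c := range conflicts {
		// In the described method, the second node would signal an abort to each
		// conflicting node; the first node may then signal its intent to commit.
		fmt.Printf("abort signal -> node that issued tx %d\n", c)
	}
}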
Abstract:
A proximity interconnect module includes a plurality of processors operatively connected to a plurality of off-chip cache memories by proximity communication. Because of the high bandwidth of the proximity interconnect, enhancements to the cache protocol that improve latency may be made despite the resulting increase in bandwidth consumption.
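One illustrative way to spend interconnect bandwidth for lower latency, sketched below in Go, is to fan a read request out to every off-chip cache speculatively rather than to a single target; this particular enhancement is an assumption chosen for illustration, not taken from the abstract.

// Minimal sketch: speculative fan-out of a read to all off-chip caches,
// trading extra bandwidth for lower worst-case latency.
package main

import "fmt"

type request struct{ addr uint64 }

func main() {
	caches := []string{"cache0", "cache1", "cache2", "cache3"}
	req := request{addr: 0x2000}

	for _, c := range caches {
		fmt.Printf("send read of %#x to %s (speculative; consumes extra bandwidth)\n", req.addr, c)
	}
}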
Abstract:
The invention relates to a method for reducing cache flush time of a cache in a computer system. The method includes populating at least one of a plurality of directory entries of a dirty line directory based on modification of the cache to form at least one populated directory entry, and de-populating a pre-determined number of the plurality of directory entries according to a dirty line limiter protocol causing a write-back from the cache to a main memory, where the dirty line limiter protocol is based on a number of the at least one populated directory entry exceeding a pre-defined limit.
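A minimal Go sketch of the dirty line limiter behavior follows, under assumed names (dirtyLineDirectory, markDirty): once the number of populated entries exceeds the pre-defined limit, a pre-determined number of entries is de-populated and written back.

// Minimal sketch, not the claimed implementation.
package main

import "fmt"

type dirtyLineDirectory struct {
	entries map[uint64]bool // populated entries, keyed by cache-line address
	limit   int             // pre-defined limit on populated entries
	batch   int             // pre-determined number to de-populate at once
}

// markDirty populates an entry on cache modification; if the limit is
// exceeded, a batch of entries is de-populated and written back.
func (d *dirtyLineDirectory) markDirty(line uint64) {
	d.entries[line] = true
	if len(d.entries) > d.limit {
		n := 0
		for addr := range d.entries {
			fmt.Printf("write back line %#x to main memory\n", addr)
			delete(d.entries, addr)
			n++
			if n == d.batch {
				break
			}
		}
	}
}

func main() {
	dir := &dirtyLineDirectory{entries: map[uint64]bool{}, limit: 4, batch: 2}
	for i := 0; i < 6; i++ {
		dir.markDirty(0x1000 + uint64(i)*64)
	}
	fmt.Println("populated entries remaining:", len(dir.entries))
}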
Abstract:
A multiprocessing node in a snooping-based cache-coherent cluster of processing nodes maintains a cache-to-cache transfer prediction directory of addresses of data last transferred by cache-to-cache transfers. In response to a local cache miss, the multiprocessing node may use the cache-to-cache transfer prediction directory to predict a cache-to-cache transfer and issue a restricted broadcast for the requested data that allows only cache memories in the cluster to return copies of the requested data to the requesting multiprocessing node. This reduces the bandwidth that would otherwise be consumed by having a home memory return a copy of the requested data in response to an unrestricted broadcast, which allows both cache memories and home memories in the cluster to return copies of the requested data to the requesting multiprocessing node.
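The prediction step can be sketched roughly as follows in Go, with assumed names (predictionDirectory, broadcastKind): a hit in the directory selects the restricted broadcast, and a miss falls back to the unrestricted one.

// Minimal sketch, not the claimed implementation.
package main

import "fmt"

// predictionDirectory holds addresses last supplied by cache-to-cache transfer.
type predictionDirectory map[uint64]bool

// recordCacheToCacheTransfer remembers that addr was last served cache-to-cache.
func (p predictionDirectory) recordCacheToCacheTransfer(addr uint64) { p[addr] = true }

// broadcastKind chooses the broadcast type for a local cache miss on addr.
func (p predictionDirectory) broadcastKind(addr uint64) string {
	if p[addr] {
		return "restricted broadcast (cache memories only)"
	}
	return "unrestricted broadcast (cache memories and home memory)"
}

func main() {
	dir := predictionDirectory{}
	dir.recordCacheToCacheTransfer(0x40)

	for _, miss := range []uint64{0x40, 0x80} {
		fmt.Printf("miss on %#x -> %s\n", miss, dir.broadcastKind(miss))
	}
}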
Abstract:
Approaches for processing an event in an object store, such as a MySQL database management system or a memcached caching system, that is maintained on one or more solid state devices. A plurality of threads may be instantiated. Each of the threads may be configured to retrieve items from a queue of items. Each item in the queue may be associated with a particular event occurring within the object store. Each event is a message indicating that an activity requiring work has occurred within the object store. When a particular thread retrieves an item from the queue, that thread processes the event associated with the item it retrieved. In this way, event handling in object stores such as MySQL and memcached may be performed more efficiently on a solid state device.
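A rough Go sketch of this worker model follows, using goroutines and a channel as stand-ins for the threads and the queue of items; the names and the sample events are illustrative and do not come from MySQL or memcached.

// Minimal sketch, not the claimed implementation.
package main

import (
	"fmt"
	"sync"
)

type event struct{ description string } // an activity requiring work in the object store
type item struct{ ev event }

func main() {
	queue := make(chan item, 8)
	var wg sync.WaitGroup

	// Instantiate a plurality of workers.
	for w := 0; w < 3; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for it := range queue { // each worker retrieves items from the queue
				fmt.Printf("worker %d handles event: %s\n", id, it.ev.description)
			}
		}(w)
	}

	// Enqueue items for events occurring within the object store.
	for _, msg := range []string{"row inserted", "key evicted", "index updated"} {
		queue <- item{ev: event{description: msg}}
	}
	close(queue)
	wg.Wait()
}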
Abstract:
Approaches for replicating data in a distributed transactional system. At a first node of a cluster, a per-transaction write set is committed; the write set comprises a plurality of write operations performed against a first data store maintained by the first node. The per-transaction write set is replicated from the first node to a second node of the cluster. At the second node, the plurality of write operations specified by the per-transaction write set may be performed in parallel against a second data store maintained by the second node. At the second node, two or more threads may each perform a portion of the plurality of write operations against data blocks stored within an in-memory buffer.
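A minimal Go sketch of the parallel apply step at the second node follows, using goroutines in place of threads and a byte slice as the in-memory buffer of data blocks; the partitioning scheme and all names are assumptions made for illustration.

// Minimal sketch, not the claimed implementation.
package main

import (
	"fmt"
	"sync"
)

type writeOp struct {
	block int
	data  byte
}

func main() {
	// Write set committed at the first node, then replicated to the second node.
	writeSet := []writeOp{{0, 'a'}, {1, 'b'}, {2, 'c'}, {3, 'd'}}

	buffer := make([]byte, 4) // in-memory buffer of data blocks on the second node
	var wg sync.WaitGroup

	// Two workers each apply a disjoint portion of the write set in parallel.
	for w := 0; w < 2; w++ {
		wg.Add(1)
		go func(worker int) {
			defer wg.Done()
			for i := worker; i < len(writeSet); i += 2 { // interleaved partition of the writes
				op := writeSet[i]
				buffer[op.block] = op.data
			}
		}(w)
	}
	wg.Wait()
	fmt.Printf("second node buffer after parallel apply: %q\n", buffer)
}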
Abstract:
A shared cache is point-to-point connected to a plurality of point-to-point connected processing nodes, wherein the processing nodes may be integrated circuits or multiprocessing systems. In response to a local cache miss, a requesting processing node issues a broadcast for requested data which is observed by the shared cache. If the shared cache has a copy of the requested data, the shared cache forwards the copy of the requested data to the requesting processing node.
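A rough Go sketch of the shared cache's role follows, with assumed names (sharedCache, observeBroadcast): on observing a broadcast for an address it holds, the shared cache forwards a copy of the data to the requesting node.

// Minimal sketch, not the claimed implementation.
package main

import "fmt"

type sharedCache struct {
	lines map[uint64][]byte // cached data keyed by address
}

// observeBroadcast returns a copy of the requested line if the shared cache holds one.
func (s *sharedCache) observeBroadcast(addr uint64) ([]byte, bool) {
	data, ok := s.lines[addr]
	if !ok {
		return nil, false
	}
	return append([]byte(nil), data...), true
}

func main() {
	sc := &sharedCache{lines: map[uint64][]byte{0x200: {1, 2, 3, 4}}}

	// A requesting processing node misses locally and broadcasts for 0x200.
	if data, ok := sc.observeBroadcast(0x200); ok {
		fmt.Printf("shared cache forwards copy of %#x: %v\n", uint64(0x200), data)
	}
}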