Abstract:
A multilevel cache system comprises a first data array, a second data array coupled to the first data array, and a merged tag array coupled to the second data array.
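As a rough illustration of this organization, the C sketch below keeps a single (merged) tag array that records, per set, which of the two data arrays currently holds a line, so one tag lookup steers the access. All sizes, field widths, and names (e.g. `merged_tag_t`, `in_l1`) are illustrative assumptions, not taken from the patent.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define NUM_SETS 256

/* One merged tag entry serves both data arrays: it records whether
 * the line is resident and in which level it lives. */
typedef struct {
    uint32_t tag;
    bool     valid;
    bool     in_l1;   /* line resident in the first data array */
} merged_tag_t;

typedef struct {
    merged_tag_t tags[NUM_SETS];        /* merged tag array  */
    uint8_t      l1_data[NUM_SETS][64]; /* first data array  */
    uint8_t      l2_data[NUM_SETS][64]; /* second data array */
} multilevel_cache_t;

/* A single tag lookup steers the access to the right data array. */
uint8_t *lookup(multilevel_cache_t *c, uint32_t addr) {
    uint32_t set = (addr >> 6) & (NUM_SETS - 1); /* 64-byte lines */
    uint32_t tag = addr >> 14;                   /* 6 offset + 8 index bits */
    merged_tag_t *e = &c->tags[set];
    if (!e->valid || e->tag != tag)
        return 0; /* miss */
    return e->in_l1 ? c->l1_data[set] : c->l2_data[set];
}

int main(void) {
    multilevel_cache_t c = {0};
    c.tags[1].valid = true; c.tags[1].tag = 0; c.tags[1].in_l1 = true;
    printf("%s\n", lookup(&c, 0x40) ? "hit in first data array" : "miss");
    return 0;
}
```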
Abstract:
In a cache memory whose addresses are split into tag, index and offset parts, a hardware transformation device is provided for performing a transformation between the respective tag part of an address and a coded tag address, the mapping being unambiguous in both directions. In addition, the index field of the cache memory's addresses can be encoded by a further mapping procedure that maps the index field onto a coded index field and is likewise unambiguous in both directions. A suitably configured hardware unit is used for this purpose as well.
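A minimal C sketch of such a scheme, assuming one of the simplest mappings that is unambiguous in both directions: an XOR with a fixed key. The key values, field widths, and function names here are hypothetical; the patent covers invertible mappings in general.

```c
#include <stdint.h>
#include <stdio.h>
#include <assert.h>

/* Address split (illustrative widths): tag | index | offset */
#define OFFSET_BITS 6
#define INDEX_BITS  8
#define TAG_KEY     0xA5F1u   /* hypothetical key for the tag mapping   */
#define INDEX_KEY   0x3Cu     /* hypothetical key for the index mapping */

/* XOR with a fixed key is unambiguous in both directions:
 * applying it twice recovers the original field. */
static uint32_t encode_tag(uint32_t tag)     { return tag ^ TAG_KEY; }
static uint32_t decode_tag(uint32_t coded)   { return coded ^ TAG_KEY; }
static uint32_t encode_index(uint32_t idx)   { return idx ^ INDEX_KEY; }
static uint32_t decode_index(uint32_t coded) { return coded ^ INDEX_KEY; }

int main(void) {
    uint32_t addr   = 0xDEADBEEF;
    uint32_t offset = addr & ((1u << OFFSET_BITS) - 1);
    uint32_t index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);

    /* Round-trip through the coded forms reproduces the fields. */
    assert(decode_tag(encode_tag(tag)) == tag);
    assert(decode_index(encode_index(index)) == index);
    printf("tag=%x index=%x offset=%x\n", tag, index, offset);
    return 0;
}
```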
Abstract:
Transactions are granted concurrent access to a data item through the use of an optimistic concurrency algorithm. Each transaction gets its own instance of the data item, such as in a cache or in an entity bean, so that it is not necessary to lock the data. The instances can come from the database or from other instances. When a transaction updates the data item, the optimistic concurrency algorithm ensures that the other instances are notified that the data item has changed and that a new instance must be read, from the database or from an updated instance. This description is not intended to be a complete description of, or to limit the scope of, the invention. Other features, aspects, and objects of the invention can be obtained from a review of the specification, the figures, and the claims.
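One common way to realize such an algorithm is with a version stamp on the shared item: each transaction snapshots the item without locking, and a commit succeeds only if the version is unchanged, the bumped version signalling other instances that their copy is stale. The C sketch below assumes this version-based variant; the structure and function names are illustrative, and synchronization between real threads is omitted.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Shared data item with a version stamp (one way to realize optimistic
 * concurrency; the abstract does not mandate versions specifically). */
typedef struct {
    uint64_t version;
    int      value;
} data_item_t;

/* Per-transaction instance: a private copy plus the version it was read at. */
typedef struct {
    uint64_t read_version;
    int      value;
} txn_instance_t;

void txn_begin(txn_instance_t *t, const data_item_t *d) {
    t->read_version = d->version;   /* snapshot without locking */
    t->value = d->value;
}

/* Commit succeeds only if nobody updated the item in the meantime;
 * bumping the version tells other instances their copy is stale. */
bool txn_commit(txn_instance_t *t, data_item_t *d) {
    if (d->version != t->read_version)
        return false;               /* conflict: caller must read a new instance */
    d->value = t->value;
    d->version++;
    return true;
}

int main(void) {
    data_item_t item = { .version = 1, .value = 10 };
    txn_instance_t a, b;
    txn_begin(&a, &item);
    txn_begin(&b, &item);
    a.value = 11;
    printf("a commits: %d\n", txn_commit(&a, &item)); /* 1: succeeds       */
    b.value = 12;
    printf("b commits: %d\n", txn_commit(&b, &item)); /* 0: stale, re-read */
    return 0;
}
```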
Abstract:
A data processing system (20) is able to perform parameter-selectable prefetch instructions to prefetch data for a cache (38). To remain backward compatible with previously written code, executing such an instruction can sometimes prefetch redundant data, fetching the same data twice. To prevent this, the parameters of the instruction are analyzed to determine whether redundant data would be prefetched; if so, the parameters are altered to avoid the redundancy. For some parameter combinations, altering the parameters would require significant circuitry, so an alternative approach is used. This alternative but slower approach, which can coexist with the first in the same system, detects whether the cache line currently being requested is the same as the previous request; if so, the current request is not executed.
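The slower alternative path can be sketched as a simple filter that remembers the cache line of the previous request and suppresses a prefetch to the same line. The C below assumes 64-byte lines; all names are illustrative, not from the patent.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define LINE_BITS 6  /* 64-byte cache lines (illustrative) */

/* Sketch of the slower fallback path: suppress a prefetch when it
 * targets the same cache line as the immediately preceding request. */
typedef struct {
    uint64_t last_line;
    bool     have_last;
} prefetch_filter_t;

bool should_issue_prefetch(prefetch_filter_t *f, uint64_t addr) {
    uint64_t line = addr >> LINE_BITS;
    if (f->have_last && line == f->last_line)
        return false;          /* redundant: same line as previous request */
    f->last_line = line;
    f->have_last = true;
    return true;
}

int main(void) {
    prefetch_filter_t f = {0};
    /* 0x1000 and 0x1020 fall in the same 64-byte line, as do the
     * two requests to 0x1040, so every second request is skipped. */
    uint64_t reqs[] = { 0x1000, 0x1020, 0x1040, 0x1040 };
    for (int i = 0; i < 4; i++)
        printf("addr %#llx -> %s\n", (unsigned long long)reqs[i],
               should_issue_prefetch(&f, reqs[i]) ? "issue" : "skip");
    return 0;
}
```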
Abstract:
A memory circuit (14) having features specifically adapted to permit the memory circuit (14) to serve as a video frame memory is disclosed. The memory circuit (14) contains a dynamic random access memory array (24) with buffers (18, 20) on input and output data ports (22) thereof to permit asynchronous read, write and refresh accesses to the memory array (24). The memory circuit (14) is accessed both serially and randomly. An address generator (28) contains an address buffer register (36) which stores a random access address and an address sequencer (40) which provides a stream of addresses to the memory array (24). An initial address for the stream of addresses is the random access address stored in the address buffer register (36).
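A minimal C sketch of the address generator's behavior, assuming a simple incrementing sequencer seeded from the buffer register; the names and the increment-by-one policy are illustrative assumptions.

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of the address generator: a buffer register latches a randomly
 * supplied address, and a sequencer streams consecutive addresses
 * starting from that latched value. */
typedef struct {
    uint32_t buffer_reg;  /* address buffer register (random access address) */
    uint32_t next;        /* address sequencer state */
} addr_gen_t;

void load_random_address(addr_gen_t *g, uint32_t addr) {
    g->buffer_reg = addr;
    g->next = addr;       /* initial address of the stream */
}

uint32_t sequence_next(addr_gen_t *g) {
    return g->next++;     /* serial access: one address per step */
}

int main(void) {
    addr_gen_t g;
    load_random_address(&g, 0x0400);  /* random jump, e.g. start of a scan line */
    for (int i = 0; i < 4; i++)
        printf("%#x\n", sequence_next(&g)); /* 0x400, 0x401, 0x402, 0x403 */
    return 0;
}
```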
Abstract:
An apparatus and a method for the implementation of a skip-list based cache are shown. Whereas a traditional cache is a fixed-length line based or fixed-size block based structure, which causes performance problems for certain applications, the skip-list based cache provides a variable-size line or block that enables a higher level of flexibility in cache usage.
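As a sketch of the idea, the C code below keys a skip list by block start address and stores a variable length with each node, so a lookup hits whenever the requested address falls inside a cached block's range. The level count, randomization, and names are illustrative, and eviction and overlap handling are omitted.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

#define MAX_LEVEL 4

/* Cache entry: a variable-size block keyed by its start address. The
 * skip list keeps entries sorted by address, so blocks of any length
 * can coexist (unlike fixed-line caches). */
typedef struct node {
    uint64_t addr;             /* block start address (key)      */
    size_t   len;              /* variable block length in bytes */
    uint8_t *data;
    struct node *fwd[MAX_LEVEL];
} node_t;

typedef struct {
    node_t head;               /* sentinel node */
    int    level;
} skiplist_cache_t;

static int random_level(void) {
    int lvl = 1;
    while (lvl < MAX_LEVEL && (rand() & 1)) lvl++;
    return lvl;
}

void cache_insert(skiplist_cache_t *c, uint64_t addr, const uint8_t *src, size_t len) {
    node_t *update[MAX_LEVEL], *x = &c->head;
    for (int i = c->level - 1; i >= 0; i--) {
        while (x->fwd[i] && x->fwd[i]->addr < addr) x = x->fwd[i];
        update[i] = x;
    }
    int lvl = random_level();
    while (c->level < lvl) update[c->level++] = &c->head;
    node_t *n = calloc(1, sizeof *n);
    n->addr = addr; n->len = len;
    n->data = malloc(len); memcpy(n->data, src, len);
    for (int i = 0; i < lvl; i++) { n->fwd[i] = update[i]->fwd[i]; update[i]->fwd[i] = n; }
}

/* Lookup: find the block whose [addr, addr+len) range covers the request. */
node_t *cache_lookup(skiplist_cache_t *c, uint64_t addr) {
    node_t *x = &c->head;
    for (int i = c->level - 1; i >= 0; i--)
        while (x->fwd[i] && x->fwd[i]->addr <= addr) x = x->fwd[i];
    return (x != &c->head && addr < x->addr + x->len) ? x : NULL;
}

int main(void) {
    skiplist_cache_t c = { .level = 1 };
    uint8_t buf[100] = {0};
    cache_insert(&c, 0x1000, buf, 100);   /* 100-byte block */
    cache_insert(&c, 0x5000, buf, 37);    /* 37-byte block  */
    printf("%s\n", cache_lookup(&c, 0x1050) ? "hit" : "miss"); /* hit  */
    printf("%s\n", cache_lookup(&c, 0x5040) ? "hit" : "miss"); /* miss */
    return 0;
}
```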
Abstract:
A method and system for performing variable-sized memory coherency transactions. A bus interface unit coupled between a slave and a master may be configured to receive a request (master request) comprising a plurality of coherency granules from the master. Each snooping unit in the system may be configured to snoop a different number of coherency granules in the master request at a time. Once the bus interface unit has received, from each snooping unit, indications that the associated collection of coherency granules in the master request has been snooped and that the data at the addresses of those coherency granules has not been updated, the bus interface unit may allow the data at those addresses to be transferred between the requesting master and the slave.
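A toy C model of the granule bookkeeping, assuming two snoopers that process different batch sizes and report every granule clean; the batch sizes, counts, and names are invented for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

#define GRANULES  8   /* coherency granules in one master request (illustrative) */
#define SNOOPERS  2

/* Per-snooper progress: how many granules it snoops per step may differ. */
typedef struct {
    int granules_per_snoop;        /* e.g. snooper 0 does 1, snooper 1 does 4 */
    int done;                      /* granules snooped so far                 */
    bool clean[GRANULES];          /* true = not updated, transfer allowed    */
} snooper_t;

/* One snoop step: the snooper reports a set of indications for its next
 * batch of granules (all reported "clean" here for simplicity). */
void snoop_step(snooper_t *s) {
    for (int i = 0; i < s->granules_per_snoop && s->done < GRANULES; i++)
        s->clean[s->done++] = true;
}

/* The bus interface unit may release granule g for transfer once every
 * snooper has reported it snooped and not updated. */
bool can_transfer(snooper_t *s, int g) {
    for (int i = 0; i < SNOOPERS; i++)
        if (s[i].done <= g || !s[i].clean[g]) return false;
    return true;
}

int main(void) {
    snooper_t s[SNOOPERS] = { { .granules_per_snoop = 1 },
                              { .granules_per_snoop = 4 } };
    snoop_step(&s[0]); snoop_step(&s[1]);
    for (int g = 0; g < GRANULES; g++)
        printf("granule %d: %s\n", g, can_transfer(s, g) ? "transfer" : "wait");
    return 0;
}
```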
Abstract:
A method is described for protecting dirty data in cache memories in a cost-effective manner. When an instruction to write data to a memory location is received, and that memory location is being cached, the data is written to a plurality of cache lines, which are referred to as duplicate cache lines. When the data is written back to memory, one of the duplicate cache lines is read. If the cache line is not corrupt, it is written back to the appropriate memory location and marked available. In one embodiment, if more duplicate cache lines exist, they are invalidated. In another embodiment, the other corresponding cache lines may be read for the highest confidence of reliability, and then marked clean or invalid.
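A C sketch of the writeback path described above, assuming a simple per-line parity byte as the corruption check and the embodiment in which the remaining duplicates are invalidated after a successful writeback; the check code and names are illustrative.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>
#include <stdio.h>

#define LINE_SIZE 64
#define COPIES     2   /* duplicate cache lines per dirty write */

typedef struct {
    uint8_t  data[LINE_SIZE];
    uint8_t  parity;      /* simple per-line check code (illustrative) */
    bool     valid;
} cache_line_t;

static uint8_t parity_of(const uint8_t *d) {
    uint8_t p = 0;
    for (int i = 0; i < LINE_SIZE; i++) p ^= d[i];
    return p;
}

/* A dirty write lands in every duplicate line. */
void write_dirty(cache_line_t dup[COPIES], const uint8_t *src) {
    for (int c = 0; c < COPIES; c++) {
        memcpy(dup[c].data, src, LINE_SIZE);
        dup[c].parity = parity_of(src);
        dup[c].valid = true;
    }
}

/* Writeback: read one duplicate; if it checks out, write it to memory
 * and invalidate the remaining copies. A corrupt copy falls through to
 * the next duplicate. */
bool write_back(cache_line_t dup[COPIES], uint8_t *memory) {
    for (int c = 0; c < COPIES; c++) {
        if (!dup[c].valid) continue;
        if (parity_of(dup[c].data) == dup[c].parity) {
            memcpy(memory, dup[c].data, LINE_SIZE);
            for (int k = 0; k < COPIES; k++) dup[k].valid = false;
            return true;
        }
    }
    return false;  /* all duplicates corrupt */
}

int main(void) {
    cache_line_t dup[COPIES] = {0};
    uint8_t src[LINE_SIZE] = "hello", mem[LINE_SIZE] = {0};
    write_dirty(dup, src);
    dup[0].data[0] ^= 0xFF;   /* simulate corruption of the first copy */
    printf("writeback ok: %d, mem: %s\n", write_back(dup, mem), mem);
    return 0;
}
```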
Abstract:
A network of PCs includes an I/O channel adapter and a network adapter, and is configured for management of a distributed cache memory stored in the plurality of PCs interconnected by the network. The use of standard PCs reduces the cost of the data storage system. The use of the network of PCs permits building large, high-performance data storage systems.
Abstract:
Method and apparatus for transferring data between a host device and a data storage device having a first memory space and a second memory space. The host issues access commands to store and retrieve data. The device stores commands in the first memory space pending transfer to the second memory space. An interface circuit evaluates the relative proximity of first and second sets of LBAs associated with pending first and second commands, and delays promotion of later pending commands in front of earlier pending commands during an overlap condition. If the overlap is caused by performance enhancing features (PEFs), the PEFs are disabled so that the commands can be scheduled for disc access. Indicators are set in the commands to signal that a PEF has caused the overlap and that the PEF can be disabled. Values are added to indicators in the commands so that the PEFs can be modified to avoid overlaps.
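The overlap test and PEF handling might look like the following C sketch, which treats each command's LBA set as a half-open interval; the idea that disabling a PEF trims a speculative extension of the range (and the halving used to model it) is an assumption for illustration.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Pending access command: a range of logical block addresses. */
typedef struct {
    uint64_t lba_start;
    uint64_t lba_count;
    bool     pef_extended;  /* range grown by a performance enhancing feature */
} command_t;

/* Overlap test between the LBA sets of two pending commands. */
bool lba_overlap(const command_t *a, const command_t *b) {
    return a->lba_start < b->lba_start + b->lba_count &&
           b->lba_start < a->lba_start + a->lba_count;
}

/* The interface circuit delays promoting a later command ahead of an
 * earlier one while their LBA ranges overlap; if the overlap comes only
 * from a PEF (e.g. read look-ahead), the PEF is disabled instead. */
bool may_promote(command_t *later, const command_t *earlier) {
    if (!lba_overlap(later, earlier))
        return true;
    if (later->pef_extended) {
        later->pef_extended = false;   /* disable the PEF-added extension   */
        later->lba_count /= 2;         /* hypothetical: drop speculative half */
        return !lba_overlap(later, earlier);
    }
    return false;                      /* genuine overlap: keep ordering */
}

int main(void) {
    command_t earlier = { .lba_start = 1100, .lba_count = 64 };
    /* Base range [1036,1100) is clear; PEF look-ahead doubled it to
     * [1036,1164), which collides with the earlier command. */
    command_t later = { .lba_start = 1036, .lba_count = 128, .pef_extended = true };
    printf("promote later: %s\n", may_promote(&later, &earlier) ? "yes" : "no");
    return 0;
}
```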