Abstract:
According to the present invention, an apparatus and method for improving reads from and writes to shared memory locations are disclosed. By giving writes priority over reads, the invention can decrease the time associated with certain sequences of reads from and writes to shared memory locations. In particular, load-invalidate-load sequences are reduced to load-load sequences. Furthermore, contention for a shared memory location is reduced in particular situations.
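A minimal sketch (in Python, with invented names; not the patented circuit) of the write-priority idea: an arbiter drains pending writes to shared memory before servicing reads, so a reader observes fresh data with a single load rather than a load, an invalidate, and a second load.

from collections import deque

class WritePriorityArbiter:
    def __init__(self):
        self.memory = {}
        self.writes = deque()   # pending (addr, value) writes
        self.reads = deque()    # pending read addresses

    def post_write(self, addr, value):
        self.writes.append((addr, value))

    def post_read(self, addr):
        self.reads.append(addr)

    def step(self):
        # Writes win arbitration: commit every queued write first.
        while self.writes:
            addr, value = self.writes.popleft()
            self.memory[addr] = value
        if self.reads:
            return self.memory.get(self.reads.popleft())
        return None

arb = WritePriorityArbiter()
arb.post_read(0x100)        # read arrives first...
arb.post_write(0x100, 42)   # ...but the write is serviced ahead of it
assert arb.step() == 42     # the load sees fresh data: no invalidate/reload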
Abstract:
The present invention is a cache system comprising a data memory for storing data from an external memory, and a tag memory for storing address information for data held in the data memory and a valid bit indicating whether the data controlled by that address information is valid; wherein the address information in the tag memory commonly controls a plurality of data items with consecutive addresses; wherein reading from the tag memory is prohibited in a case where the address to be accessed corresponds to data controlled by tag-memory address information that matches that of the preceding access; and wherein the tag memory is read and a cache hit determination is performed in a case where the address to be accessed corresponds to data controlled by tag-memory address information that does not match that of the preceding access.
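As a rough illustration of the tag-read suppression, here is a sketch assuming a direct-mapped cache whose single tag entry covers a multi-word line; all names are invented. When the current access falls under the same tag entry as the preceding one, the tag lookup is skipped because its outcome cannot have changed.

WORDS_PER_LINE = 4

class TagSkipCache:
    def __init__(self, num_lines=256):
        self.tags = [None] * num_lines      # tag memory
        self.valid = [False] * num_lines
        self.last_line_addr = None          # line address of preceding access
        self.tag_reads = 0                  # counts actual tag-memory reads

    def access(self, addr):
        line_addr = addr // WORDS_PER_LINE
        if line_addr == self.last_line_addr:
            return True                     # tag read prohibited: known hit
        self.last_line_addr = line_addr
        self.tag_reads += 1                 # tag memory actually read
        index = line_addr % len(self.tags)
        hit = self.valid[index] and self.tags[index] == line_addr
        if not hit:                         # fill the line on a miss
            self.tags[index] = line_addr
            self.valid[index] = True
        return hit

cache = TagSkipCache()
for addr in (0, 1, 2, 3, 4):                # four same-line accesses, one new line
    cache.access(addr)
assert cache.tag_reads == 2                 # only two tag-memory reads occurred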
Abstract:
In a system including a collection of cooperating cache servers, such as proxy cache servers, a request can be forwarded to a cooperating cache server if the requested object cannot be found locally. An overload condition is detected when, for example, due to reference skew, some objects are in high demand by all the clients and the cache servers that contain those hot objects become overloaded by forwarded requests. In response, the load is balanced by shifting some or all of the forwarded requests from an overloaded cache server to a less loaded one. Both centralized and distributed load-balancing environments are described.
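A small sketch of the load-shifting behavior, with hypothetical server and request names: when the cooperating server that holds a hot object reports overload, the forwarded request is redirected to the least-loaded peer that also holds the object.

class CacheServer:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.active_requests = 0

    def overloaded(self):
        return self.active_requests >= self.capacity

def route(request_obj, owner, replicas):
    """Forward to the owner unless it is overloaded; then pick the
    least-loaded cooperating server that also holds the object."""
    target = owner
    if owner.overloaded():
        candidates = [s for s in replicas if not s.overloaded()]
        if candidates:
            target = min(candidates, key=lambda s: s.active_requests)
    target.active_requests += 1
    return target.name

a = CacheServer("A", capacity=2)
b = CacheServer("B", capacity=2)
print(route("hot.gif", a, [b]))  # A
print(route("hot.gif", a, [b]))  # A
print(route("hot.gif", a, [b]))  # A is overloaded -> shifted to B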
Abstract:
A memory device includes a data array, a tag array, and control logic. The data array is adapted to store a plurality of data array entries. The tag array is adapted to store a plurality of tag array entries corresponding to the data array entries. The control logic is adapted to access a subset of the data array entries in the data array using a burst access and to access the tag array during the burst access. A method for accessing a memory device is provided. The memory device includes a data array and a tag array. The method includes receiving a data array burst access command. The data array is accessed in response to the data array burst access command. A tag array access command is received. The tag array is accessed in response to the tag array access command while the data array is being accessed.
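To make the overlap concrete, here is a sketch with invented timing: a burst read occupies the data array for several consecutive cycles, and a tag lookup is serviced from a separate tag bank during one of those same cycles instead of waiting for the burst to complete.

def burst_with_tag_access(data_array, start, burst_len, tag_array, tag_index):
    out, tag_value = [], None
    for cycle in range(burst_len):
        out.append(data_array[start + cycle])   # data array busy every cycle
        if cycle == 0:                           # tag array is a separate bank,
            tag_value = tag_array[tag_index]     # so it is read concurrently
    return out, tag_value

data = list(range(100, 164))
tags = {5: 0xBEEF}
words, tag = burst_with_tag_access(data, 8, 4, tags, 5)
print(words)     # [108, 109, 110, 111] streamed during the burst
print(hex(tag))  # 0xbeef, fetched without stalling the burst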
Abstract:
For a method inherited among a plurality of classes having a hierarchical relationship, and for a method overriding the inherited method, a method table stores method information including the starting addresses of the storage locations of the respective methods, in which the respective method information entries are linked in series along the hierarchical relationship between the classes in which the respective methods are defined. When the storage location of a method called by a method-call message is retrieved, the method table is searched with the class designated by the message as the key. If the designated method is not found by this search, the search is repeated, based on the class table, with a superclass of the designated class as the key.
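The lookup can be pictured with a short sketch (names invented): method entries are recorded per class, and a failed search is retried with the superclass as the key, walking up the hierarchy until the overriding or inherited method is found.

class ClassEntry:
    def __init__(self, name, superclass=None):
        self.name = name
        self.superclass = superclass
        self.methods = {}        # selector -> starting address

def define(cls, selector, address):
    cls.methods[selector] = address

def lookup(cls, selector):
    """Search the method table keyed by class; on failure, rekey with the
    superclass and search again, walking up the hierarchy."""
    while cls is not None:
        if selector in cls.methods:
            return cls.methods[selector]
        cls = cls.superclass     # retry with the superclass as the key
    raise LookupError(selector)

base = ClassEntry("Base")
derived = ClassEntry("Derived", superclass=base)
define(base, "draw", 0x1000)
define(derived, "draw", 0x2000)      # overrides the inherited method
define(base, "size", 0x1040)

print(hex(lookup(derived, "draw")))  # 0x2000: the overriding method wins
print(hex(lookup(derived, "size")))  # 0x1040: found via the superclass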
Abstract:
An information processing system and a multi-level hierarchical storage device for use in the information processing system having a plurality of instruction processors and a plurality of main storage devices. The multi-level hierarchical storage device includes a first-cache storage device of a write-through type provided for each instruction processor, a second-cache storage device of a write-back type provided for each main storage device, and a third-cache storage device of a write-through type provided between the first-cache storage device and the second-cache storage device.
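A behavioral sketch of the write path under one reading of this hierarchy (structure and names invented): stores pass through the write-through first and third caches, are absorbed by the write-back second cache, and reach a main storage device only when a dirty line is evicted.

class WriteBackCache:
    def __init__(self, backing):
        self.lines, self.backing = {}, backing
    def write(self, addr, value):
        self.lines[addr] = value             # dirty line kept here
    def evict(self, addr):
        if addr in self.lines:
            self.backing[addr] = self.lines.pop(addr)

class WriteThroughCache:
    def __init__(self, next_level):
        self.lines, self.next_level = {}, next_level
    def write(self, addr, value):
        self.lines[addr] = value
        self.next_level.write(addr, value)   # always forwarded downward

main_storage = {}
l2 = WriteBackCache(main_storage)            # write-back, per main storage device
l3 = WriteThroughCache(l2)                   # write-through, between first and second
l1 = WriteThroughCache(l3)                   # write-through, per instruction processor

l1.write(0x40, 7)
print(main_storage)        # {}: the store stopped at the write-back second cache
l2.evict(0x40)
print(main_storage)        # {64: 7}: data reaches main storage on eviction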
Abstract:
An upgradeable cache circuit is described which automatically routes those control signals necessary to maintain cache coherency in a computer system having a processor (with integrated L1 cache) coupled with main memory by a controller. The cache circuit includes an L2 cache module connector and a high speed multiplexer having minimal propagation delay. The multiplexer selects one of two sets of control signals to route to and from the processor, controller and cache circuit, corresponding to the presence or absence of an L2 cache module in the cache module connector.
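The selection itself amounts to a two-way multiplex keyed on module presence; a trivial sketch with hypothetical signal names:

def route_control_signals(module_present, with_l2, without_l2):
    """2:1 multiplexer: pick the control-signal set that matches whether
    an L2 module is seated in the connector."""
    return with_l2 if module_present else without_l2

signals_with_l2 = {"snoop": "L2", "hitm": "L2"}
signals_without_l2 = {"snoop": "MEM", "hitm": "MEM"}

print(route_control_signals(True, signals_with_l2, signals_without_l2))
print(route_control_signals(False, signals_with_l2, signals_without_l2))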
Abstract:
A set-associative cache-management method combines one-cycle reads and two-cycle pipelined writes. The one-cycle reads involve accessing data from multiple sets in parallel before a tag match is determined. Once a tag match is determined, it is used to select the one of the accessed cache memory locations to be coupled to the processor for the read operation. The two-cycle write involves finding a match in a first cycle and performing the write in the second cycle. During the write, the first stage of the write pipeline is available to begin another write operation. Also, the first stage of the pipeline can be used to begin a two-cycle read operation, which results in a power saving relative to the one-cycle read operation. Due to the pipeline, there is no time penalty for a two-cycle read performed after a pipelined write. Also, instead of a wait, a no-op can be executed in the first stage of the write pipeline while the second stage of the pipeline is fulfilling a write request.
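A sketch of the two-stage write pipeline (structure invented): stage one performs the tag match, stage two performs the write, and stage one accepts the next operation while stage two completes the previous one, so N writes retire in N+1 cycles rather than 2N.

def run_pipeline(ops):
    """ops: list of ('write', set_id) requests. Returns per-cycle activity."""
    timeline, stage2 = [], None
    for op in ops + [None]:            # flush with one trailing bubble
        cycle = {
            "stage1_tag_match": op,    # new op starts its tag match
            "stage2_write": stage2,    # previous op's data write completes
        }
        timeline.append(cycle)
        stage2 = op
    return timeline

for i, c in enumerate(run_pipeline([("write", 0), ("write", 3), ("write", 1)])):
    print(f"cycle {i}: {c}")
# Three writes retire in four cycles instead of six: each write's tag-match
# cycle overlaps the preceding write's data cycle.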
Abstract:
A cache DRAM includes a main memory, a main cache memory for storing data that is accessed at high frequency from among the data stored in the main memory, a main tag memory for storing the main-memory address of the data stored in the main cache memory, a subcache memory for always receiving data withdrawn from the main cache memory and supplying the stored data to the main memory when the main memory is in a ready state, and a subtag memory for storing the main-memory address of the data stored in the subcache memory. Since the subcache memory serves as a buffer for data to be transferred from the main cache memory to the main memory, the main cache memory can withdraw data to the subcache memory even if the main memory is in a busy state.
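A sketch of the buffering behavior (names invented): lines withdrawn from the main cache always enter the subcache, which drains to main memory only when main memory is ready, so evictions never stall on a busy memory.

from collections import deque

class CacheDram:
    def __init__(self):
        self.main_memory = {}
        self.memory_busy = False
        self.subcache = deque()          # lines withdrawn from the main cache

    def evict_from_main_cache(self, addr, data):
        self.subcache.append((addr, data))   # never blocks on busy memory
        self.drain()

    def drain(self):
        while self.subcache and not self.memory_busy:
            addr, data = self.subcache.popleft()
            self.main_memory[addr] = data

c = CacheDram()
c.memory_busy = True
c.evict_from_main_cache(0x10, "lineA")  # accepted despite busy memory
print(c.main_memory)                    # {}
c.memory_busy = False
c.drain()
print(c.main_memory)                    # {16: 'lineA'}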
Abstract:
A set-associative cache-management method utilizes both parallel reads and single-cycle single-set reads. The parallel reads involve accessing data from all cache sets in parallel before a tag match is determined. Once a tag match is determined, it is used to select the one of the accessed cache memory locations to be coupled to the processor for the read operation. Single-cycle single-set reads occur when the line address of one read operation matches the line address of an immediately preceding read operation satisfied from the cache. In such a case, only the set from which the previous read request was satisfied is accessed in the present read operation. If a sequential read operation is indicated, the same set can also be accessed to the exclusion of the other sets, provided the requested address does not correspond to the beginning of a line address. (In that case, the sequential read crosses a cache-line boundary.) However, the invention further provides for comparing the tag stored in the same set at the successor index with the tag associated with the location from which the previous read request was satisfied. If the next read request matches that tag and the index of the successor location, a single-set read is also used. The single-set reads save power relative to the parallel reads, while maintaining the speed advantage of the parallel reads over serial "tag-then-data" reads.
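A sketch of the single-set optimization, assuming a hypothetical 4-way organization: reads normally probe all ways in parallel, but when the line address repeats, only the way that satisfied the previous read is probed, cutting the number of data-array accesses.

NUM_WAYS, WORDS_PER_LINE, NUM_INDEXES = 4, 4, 64

class SameSetCache:
    def __init__(self):
        self.tags = [[None] * NUM_INDEXES for _ in range(NUM_WAYS)]
        self.last = None          # (line_addr, way) of the previous hit
        self.ways_probed = 0      # proxy for power spent on data access

    def read(self, addr):
        line = addr // WORDS_PER_LINE
        index = line % NUM_INDEXES
        if self.last and self.last[0] == line:
            self.ways_probed += 1            # single-set read: one way only
            return True
        self.ways_probed += NUM_WAYS         # parallel read: all ways probed
        for way in range(NUM_WAYS):
            if self.tags[way][index] == line:
                self.last = (line, way)
                return True
        self.tags[0][index] = line           # simple fill on miss
        self.last = (line, 0)
        return False

c = SameSetCache()
for a in (16, 17, 18, 19):                   # one line, four sequential words
    c.read(a)
print(c.ways_probed)  # 7 = 4 (first access, parallel) + 3 single-way reads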