Abstract:
A computer method and apparatus causes the load-store instruction grouping in a microprocessor instruction pipeline to be disrupted at appropriate times. The computer method and apparatus employs a memory access member which periodically stalls the issuance of store instructions when there are prior store instructions pending in the store queue. The periodic stalls bias the issue stage toward issuing load instruction groups and store instruction groups. In the latter case, the store queue is free to update the data cache with the data from previous store instructions. Thus, the memory access member of the invention biases the issuance of store instructions in a manner that prevents the store queue from becoming full, and as such enables the store queue to write to the data cache before it becomes full.
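As a rough illustration of this biasing, the following C++ sketch models an issue stage that periodically withholds store instructions while earlier stores are still pending, letting the store queue drain into the data cache. The queue capacity, the stall interval, and the names (StoreQueueModel, kStallInterval) are assumptions made for illustration; they are not details taken from the invention.

```cpp
// A minimal, hypothetical model of the store-issue biasing described above.
// StoreQueueModel, kCapacity, and kStallInterval are illustrative names and
// values; the invention does not specify them.
#include <cstdio>
#include <deque>

struct StoreQueueModel {
    std::deque<int> pending;                 // stores awaiting a data cache write
    static constexpr unsigned kCapacity = 8;

    bool full() const { return pending.size() >= kCapacity; }
    void drainOne() {                        // write one pending store to the data cache
        if (!pending.empty()) pending.pop_front();
    }
};

int main() {
    StoreQueueModel sq;
    constexpr int kStallInterval = 4;        // assumed period of the store-issue stall

    for (int cycle = 0; cycle < 32; ++cycle) {
        bool wantsStore = (cycle % 2 == 0);  // toy instruction mix: every other slot is a store
        // Periodically stall store issuance while prior stores are pending,
        // which frees the store queue to update the data cache.
        bool stallStores = (cycle % kStallInterval == 0) && !sq.pending.empty();
        if (wantsStore && !stallStores && !sq.full())
            sq.pending.push_back(cycle);     // issue the store into the queue
        else
            sq.drainOne();                   // queue drains while stores are held back
        std::printf("cycle %2d: store queue depth = %zu\n", cycle, sq.pending.size());
    }
    return 0;
}
```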
Abstract:
A computer system has a set-associative, multi-way cache system in which at least one way is designated as a fast lane and the remaining way(s) are designated slow lanes. Any data that needs to be loaded into the cache, but is not likely to be needed again in the future, preferably is loaded into the fast lane. Data loaded into the fast lane is earmarked for immediate replacement. Data loaded into the slow lanes preferably is data that may be needed again in the near future. Slow data is kept in the cache to permit it to be reused if necessary. The high-performance mechanism of data access in a modern microprocessor is the prefetch: data is moved into the cache, with a special prefetch instruction, prior to its intended use. The prefetch instruction requires fewer machine resources than carrying out the same intent with an ordinary load instruction. So the slow-lane/fast-lane decision is accomplished by having a multiplicity of prefetch instructions. By loading “not likely to be needed again” data into the fast lane and designating such data for immediate replacement, data in other cache blocks, in the other ways, need not be evicted, and overall system performance is increased.
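The following sketch shows one way the fast-lane/slow-lane fill decision could be modeled for a single 4-way set, assuming way 0 is the fast lane and that a distinct "evict-next" prefetch flavor selects it. The PrefetchKind names and the victim-selection policy are illustrative assumptions rather than the patented design.

```cpp
// A rough sketch (not the patented hardware) of the fast-lane / slow-lane fill
// decision for one 4-way set, with way 0 assumed to be the fast lane.
#include <array>
#include <cstdint>
#include <cstdio>

enum class PrefetchKind { Normal, EvictNext };   // two prefetch flavors select the lane

struct Line { std::uint64_t tag = 0; bool valid = false; bool evictNext = false; };

struct Set {
    std::array<Line, 4> ways;                    // way 0 is the designated fast lane

    int pickSlowLaneVictim() const {
        // Slow-lane fills: prefer an empty slow way, then one marked evict-next.
        for (int w = 1; w < 4; ++w) if (!ways[w].valid) return w;
        for (int w = 1; w < 4; ++w) if (ways[w].evictNext) return w;
        return 1;                                // toy fallback; real hardware would use LRU
    }

    void fill(std::uint64_t tag, PrefetchKind kind) {
        int w = (kind == PrefetchKind::EvictNext) ? 0 : pickSlowLaneVictim();
        ways[w].tag = tag;
        ways[w].valid = true;
        ways[w].evictNext = (kind == PrefetchKind::EvictNext);  // earmark for replacement
        std::printf("tag %llu filled into way %d (evictNext=%d)\n",
                    (unsigned long long)tag, w, (int)ways[w].evictNext);
    }
};

int main() {
    Set s;
    s.fill(100, PrefetchKind::Normal);      // reusable data goes to a slow lane
    s.fill(200, PrefetchKind::EvictNext);   // streaming data goes to the fast lane
    s.fill(300, PrefetchKind::EvictNext);   // replaces the fast-lane block, not the slow-lane data
    return 0;
}
```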
Abstract:
A data caching system comprises a hashing function, a data store, a tag array, a page translator, a comparator and a duplicate tag array. The hashing function combines an index portion of a virtual address with a virtual page portion of the virtual address to form a cache index. The data store comprises a plurality of data blocks for holding data. The tag array comprises a plurality of tag entries corresponding to the data blocks, and both the data store and tag array are addressed with the cache index. The tag array provides a plurality of physical address tags corresponding to physical addresses of data resident within corresponding data blocks in the data store addressed by the cache index. The page translator translates a tag portion of the virtual address to a corresponding physical address tag. The comparator verifies a match between the physical address tag from the page translator and the plurality of physical address tags from the tag array, a match indicating that data addressed by the virtual address is resident within the data store. Finally, the duplicate tag array resolves synonym issues caused by hashing. The hashing function is such that addresses which are equivalent mod 2^13 are pseudo-randomly displaced within the cache. The preferred hashing function maps VA to bits of the cache index.
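A minimal sketch of this kind of index hashing is given below, assuming 64-byte blocks, 8 KB pages, a 7-bit cache index, and a simple XOR combination. These values and the XOR are illustrative assumptions; the abstract's actual bit fields are not reproduced here.

```cpp
// A minimal sketch of this kind of index hashing, assuming 64-byte blocks,
// 8 KB pages, a 7-bit cache index, and a simple XOR combination. These values
// are illustrative; the abstract's actual bit fields are not reproduced here.
#include <cstdint>
#include <cstdio>

constexpr unsigned      kBlockShift = 6;     // assumed 64-byte cache blocks
constexpr unsigned      kPageShift  = 13;    // 8 KB pages (mod 2^13 equivalence)
constexpr std::uint64_t kIndexMask  = 0x7F;  // assumed 7-bit cache index

// Combine the index portion (within the page offset) with low bits of the
// virtual page number, so addresses equal mod 2^13 land in different sets.
std::uint64_t cacheIndex(std::uint64_t va) {
    std::uint64_t indexPortion = (va >> kBlockShift) & kIndexMask;
    std::uint64_t pagePortion  = (va >> kPageShift)  & kIndexMask;
    return indexPortion ^ pagePortion;
}

int main() {
    std::uint64_t a = 0x40080;                       // two virtual addresses that are
    std::uint64_t b = a + (1ull << kPageShift);      // equivalent mod 2^13
    std::printf("index(a)=%llu, index(b)=%llu\n",    // different sets after hashing
                (unsigned long long)cacheIndex(a),
                (unsigned long long)cacheIndex(b));
    return 0;
}
```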
Abstract:
According to the present invention, a cache within a multiprocessor system is speculatively filled. To speculatively fill a designated cache, the present invention first determines an address which identifies information located in a main memory. The address may also identify one or more other versions of the information located in one or more caches. The process of filling the designated cache with the information is started by locating the information in the main memory and locating other versions of the information identified by the address in the caches. The validity of the information located in the main memory is determined after locating the other versions of the information. The process of filling the designated cache with the information located in the main memory is initiated before determining the validity of the information located in the main memory. Thus, the memory reference is speculative.
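A highly simplified model of this ordering is sketched below: the fill from main memory is started immediately and flagged speculative, and the later validity determination either confirms or discards it. The types and the toy validity test are assumptions made only for illustration.

```cpp
// A highly simplified model of the speculative fill: the fill from main memory
// starts before the probes of other caches have determined whether the memory
// copy is valid. All names and the toy validity check are illustrative.
#include <cstdio>
#include <vector>

struct CacheLine { unsigned addr; unsigned data; bool speculative; };

unsigned readMainMemory(unsigned addr) { return addr * 10; }     // toy memory contents

// Probe the other caches; a differing (dirty) copy elsewhere means memory is stale.
bool memoryCopyIsValid(unsigned addr, const std::vector<CacheLine>& otherCaches) {
    for (const CacheLine& line : otherCaches)
        if (line.addr == addr && line.data != readMainMemory(addr)) return false;
    return true;
}

int main() {
    std::vector<CacheLine> otherCaches = { { 0x40, 999, false } };   // a dirty copy of 0x40
    unsigned addr = 0x40;

    // Step 1: initiate the fill from main memory right away, flagged as speculative.
    CacheLine fill{ addr, readMainMemory(addr), true };

    // Step 2: the validity determination completes later; confirm or discard the fill.
    if (memoryCopyIsValid(addr, otherCaches)) {
        fill.speculative = false;
        std::printf("fill confirmed: data=%u\n", fill.data);
    } else {
        std::printf("fill discarded: the memory copy was stale\n");
    }
    return 0;
}
```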
Abstract:
A memory management system couples processors to each other and to a main memory. Each processor may have one or more associated caches local to that processor. A system port of the memory management system receives a request from a source processor among the processors to access a block of data from the main memory. A memory manager of the memory management system then converts the request into a probe command having a data movement part, identifying a condition for movement of the block out of a cache of a target processor, and a next coherence state part, indicating a next state of the block in the cache of the target processor.
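The two parts of the probe command can be pictured with a small struct, as in the hypothetical encoding below; the enumerator names and the conversion policy shown are assumptions, not the patent's actual command format.

```cpp
// An illustrative encoding (not the patent's actual command format) of a probe
// command with a data-movement part and a next-coherence-state part.
#include <cstdint>
#include <cstdio>

enum class DataMovement : std::uint8_t { NoTransfer, TransferIfDirty, TransferAlways };
enum class NextState    : std::uint8_t { Invalid, Shared, Exclusive, Dirty };

struct ProbeCommand {
    std::uint64_t blockAddress;  // block requested by the source processor
    DataMovement  movement;      // condition for moving the block out of the target's cache
    NextState     nextState;     // state the block takes in the target's cache afterwards
};

// A toy "memory manager" converting a read request into a probe of the target processor.
ProbeCommand convertReadRequest(std::uint64_t blockAddress) {
    return { blockAddress, DataMovement::TransferIfDirty, NextState::Shared };
}

int main() {
    ProbeCommand p = convertReadRequest(0x1000);
    std::printf("probe addr=0x%llx movement=%d nextState=%d\n",
                (unsigned long long)p.blockAddress,
                (int)p.movement, (int)p.nextState);
    return 0;
}
```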
Abstract:
A computing apparatus connectable to a cache and a memory includes a system port configured to receive an atomic probe command or a system data control response command having an address part, identifying data stored in the cache which is associated with data stored in the memory, and a next coherence state part, indicating a next state of the data in the cache. The computing apparatus further includes an execution unit configured to execute the command to change the state of the data stored in the cache according to the next coherence state part of the command.
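The receiving side might be modeled as below: a command carrying an address part and a next-coherence-state part is applied to the matching cache entry. The coherence state names and the tiny direct-mapped cache are assumptions used only to make the flow concrete.

```cpp
// A minimal sketch of the receiving side: a command carrying an address part and
// a next-coherence-state part is applied to the matching cache entry. The state
// names and the tiny direct-mapped cache are assumptions used for illustration.
#include <array>
#include <cstdint>
#include <cstdio>

enum class CoherenceState : std::uint8_t { Invalid, Shared, Exclusive, Dirty };

struct ProbeOrResponse {
    std::uint64_t  address;      // address part: identifies the cached data
    CoherenceState nextState;    // next coherence state part
};

struct TinyCache {
    struct Entry { std::uint64_t tag; CoherenceState state; bool valid; };
    std::array<Entry, 16> entries{};                 // assumed 16-entry direct-mapped cache

    // "Execute" the command: find the matching entry and install the next state.
    void apply(const ProbeOrResponse& cmd) {
        Entry& e = entries[(cmd.address >> 6) % 16];
        if (e.valid && e.tag == cmd.address >> 6) {
            e.state = cmd.nextState;
            std::printf("addr 0x%llx -> state %d\n",
                        (unsigned long long)cmd.address, (int)e.state);
        }
    }
};

int main() {
    TinyCache cache;
    cache.entries[1] = { 0x1, CoherenceState::Dirty, true };   // block at address 0x40
    cache.apply({ 0x40, CoherenceState::Shared });             // downgrade it to Shared
    return 0;
}
```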
Abstract:
A computer system includes an external unit, governing a cache, that generates a set-dirty request as a function of the coherence state of a block in the cache that is to be modified. The external unit modifies the block of the cache only if an acknowledgment granting permission is received from a memory management system responsive to the set-dirty request. The memory management system receives the set-dirty request, determines the acknowledgment based on the contents of the caches and the main memory according to a cache protocol, and sends the acknowledgment to the external unit in response to the set-dirty request. The acknowledgment will either grant or deny permission to set the block to the dirty state.
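A hedged sketch of the request/acknowledge handshake follows; the coherence states and the grant policy (grant only when the block is held exclusively and by no other cache) are illustrative assumptions, not the protocol defined by the invention.

```cpp
// A hedged sketch of the set-dirty handshake. The coherence states and the grant
// policy shown are illustrative assumptions.
#include <cstdio>

enum class BlockState { Invalid, CleanExclusive, Shared, Dirty };

struct SetDirtyRequest { unsigned blockAddress; BlockState currentState; };
struct Acknowledgment  { bool permissionGranted; };

// Toy memory-management policy for answering a set-dirty request.
Acknowledgment handleSetDirty(const SetDirtyRequest& req, bool otherCachesHoldBlock) {
    bool grant = (req.currentState == BlockState::CleanExclusive) && !otherCachesHoldBlock;
    return { grant };
}

int main() {
    SetDirtyRequest req{ 0x200, BlockState::CleanExclusive };
    Acknowledgment ack = handleSetDirty(req, /*otherCachesHoldBlock=*/false);
    if (ack.permissionGranted)
        std::printf("granted: block 0x%x may be set dirty and modified in place\n",
                    req.blockAddress);
    else
        std::printf("denied: the block must not be modified yet\n");
    return 0;
}
```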
Abstract:
A multiprocessor system includes a plurality of processors, each processor having one or more caches local to the processor, and a memory controller connectable to the plurality of processors and a main memory. The memory controller manages the caches and the main memory of the multiprocessor system. A processor of the multiprocessor system is configurable to evict a selected block of data from its cache. The selected block may have a clean coherence state or a dirty coherence state. The processor communicates a notify signal indicating eviction of the selected block to the memory controller. In addition to sending a write victim notify signal when the selected block has a dirty coherence state, the processor sends a clean victim notify signal when the selected block has a clean coherence state.
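The two notification paths could be sketched as follows, assuming message names WriteVictim and CleanVictim and that only dirty victims carry write-back data; these details are illustrative rather than taken from the claims.

```cpp
// A simplified sketch of the eviction notifications. The message names and the
// assumption that only dirty victims carry write-back data are illustrative.
#include <cstdint>
#include <cstdio>

enum class VictimKind { WriteVictim, CleanVictim };

struct VictimNotify {
    VictimKind    kind;
    std::uint64_t blockAddress;
    bool          carriesData;   // only dirty victims carry write-back data
};

// Build the notification the processor sends to the memory controller on eviction.
VictimNotify evict(std::uint64_t blockAddress, bool blockIsDirty) {
    if (blockIsDirty)
        return { VictimKind::WriteVictim, blockAddress, true };   // dirty: write data back
    return     { VictimKind::CleanVictim, blockAddress, false };  // clean: notification only
}

int main() {
    VictimNotify a = evict(0x1000, /*blockIsDirty=*/true);
    VictimNotify b = evict(0x2000, /*blockIsDirty=*/false);
    std::printf("victim 0x%llx kind=%d carriesData=%d\n",
                (unsigned long long)a.blockAddress, (int)a.kind, (int)a.carriesData);
    std::printf("victim 0x%llx kind=%d carriesData=%d\n",
                (unsigned long long)b.blockAddress, (int)b.kind, (int)b.carriesData);
    return 0;
}
```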
Abstract:
A processor of a multiprocessor system is configured to transmit a full probe to a cache associated with the processor to transfer data from the data stored in the cache. The data corresponding to the full probe is transferred during a time period. A first tag-only probe is also transmitted to the cache during the same time period to determine whether the data corresponding to the tag-only probe is part of the data stored in the cache. A stream of probes accesses the cache in two stages. The cache is composed of a tag structure and a data structure. In the first stage, a probe is designated a tag-only probe and accesses the tag structure, but not the data structure, to determine tag information indicating a hit or a miss. In the second stage, if the probe returns tag information indicating a cache hit, the probe is designated to be a full probe and accesses the data structure of the cache. If the probe returns tag information indicating a cache miss, the probe does not proceed to the second stage.
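The two-stage flow might look like the sketch below, in which stage one touches only the tag structure and only tag hits are promoted to full probes that read the data structure; the cache geometry and function names are assumptions for illustration.

```cpp
// A small sketch of the two-stage probe flow, assuming a 16-entry direct-mapped
// cache with 64-byte blocks; the structure names and geometry are illustrative.
#include <array>
#include <cstdint>
#include <cstdio>
#include <optional>

struct TagEntry { std::uint64_t tag; bool valid; };

struct ProbedCache {
    std::array<TagEntry, 16>      tags{};    // tag structure
    std::array<std::uint32_t, 16> data{};    // data structure
    static unsigned indexOf(std::uint64_t addr) { return (addr >> 6) % 16; }

    // Stage 1: a tag-only probe touches only the tag structure.
    bool tagOnlyProbe(std::uint64_t addr) const {
        const TagEntry& t = tags[indexOf(addr)];
        return t.valid && t.tag == addr >> 6;
    }
    // Stage 2: a full probe reads the data structure, done only after a tag hit.
    std::uint32_t fullProbe(std::uint64_t addr) const { return data[indexOf(addr)]; }
};

std::optional<std::uint32_t> probe(const ProbedCache& c, std::uint64_t addr) {
    if (!c.tagOnlyProbe(addr)) return std::nullopt;   // miss: never touch the data array
    return c.fullProbe(addr);                         // hit: promote to a full probe
}

int main() {
    ProbedCache c;
    c.tags[2] = { 0x2, true };                        // block for address 0x80 is resident
    c.data[2] = 0xABCD;
    if (auto v = probe(c, 0x80))  std::printf("hit: data=0x%x\n", *v);
    if (!probe(c, 0x400))         std::printf("miss: data structure not accessed\n");
    return 0;
}
```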
Abstract:
A computing apparatus has a mode selector configured to select one of a long-bus mode corresponding to a first memory size and a short-bus mode corresponding to a second memory size which is less than the first memory size. An address bus of the computing apparatus is configured to transmit an address consisting of address bits defining the first memory size and a subset of the address bits defining the second memory size. The address bus has N communication lines, each configured to transmit one of a first number of the address bits defining the first memory size in the long-bus mode, with M of the N communication lines each configured to transmit one of a second number of the address bits defining the second memory size in the short-bus mode, where M is less than N.
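One way to picture the two modes is the sketch below, which assumes N = 40 and M = 32 communication lines; the widths are arbitrary assumptions, and the point is only that short-bus mode drives a subset of the address bits over fewer lines.

```cpp
// An illustrative sketch of the two bus modes, assuming N = 40 and M = 32
// communication lines; the widths are arbitrary assumptions.
#include <cstdint>
#include <cstdio>

enum class BusMode { LongBus, ShortBus };

constexpr unsigned N = 40;   // lines used in long-bus mode (first, larger memory size)
constexpr unsigned M = 32;   // lines used in short-bus mode (second, smaller memory size)

// Return the bit pattern actually driven onto the address lines for one transfer.
std::uint64_t driveAddressLines(std::uint64_t address, BusMode mode) {
    unsigned lines = (mode == BusMode::LongBus) ? N : M;
    std::uint64_t mask = (lines >= 64) ? ~0ull : ((1ull << lines) - 1);
    return address & mask;   // only the low `lines` bits are transmitted
}

int main() {
    std::uint64_t addr = 0x00FF00FF00ull;            // an address wider than 32 bits
    std::printf("long-bus  transfer: 0x%010llx\n",
                (unsigned long long)driveAddressLines(addr, BusMode::LongBus));
    std::printf("short-bus transfer: 0x%010llx\n",
                (unsigned long long)driveAddressLines(addr, BusMode::ShortBus));
    return 0;
}
```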