Abstract:
A data processing system is provided. The data processing system includes a plurality of processors and a cache memory shared by the plurality of processors, in which cache memory a cache line is divided into a plurality of partial writable regions. The plurality of processors are given exclusive access rights to the respective partial writable regions.
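A minimal software model of this arrangement, sketched in C under assumed sizes; the line size, region count, and names such as cache_line_t and region_owner are illustrative and not taken from the abstract.

    #include <stdint.h>
    #include <stdbool.h>
    #include <string.h>

    #define LINE_SIZE   64          /* bytes per cache line (assumed)     */
    #define NUM_REGIONS  4          /* partial writable regions per line  */
    #define REGION_SIZE (LINE_SIZE / NUM_REGIONS)
    #define NO_OWNER    (-1)

    /* One shared cache line split into independently ownable regions. */
    typedef struct {
        uint8_t data[LINE_SIZE];
        int     region_owner[NUM_REGIONS];  /* processor id holding exclusive
                                               write rights, or NO_OWNER   */
    } cache_line_t;

    /* Grant one processor exclusive write access to one region without
       taking the rest of the line away from the other processors. */
    static bool acquire_region(cache_line_t *line, int cpu, int region)
    {
        if (line->region_owner[region] != NO_OWNER)
            return false;
        line->region_owner[region] = cpu;
        return true;
    }

    /* A processor may write a region only while it owns it exclusively. */
    static bool write_region(cache_line_t *line, int cpu, int region,
                             const uint8_t *src)
    {
        if (line->region_owner[region] != cpu)
            return false;                    /* no exclusive right: reject */
        memcpy(&line->data[region * REGION_SIZE], src, REGION_SIZE);
        return true;
    }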
Abstract:
A cache-memory control apparatus controls a level-1 (L1) cache and a level-2 (L2) cache having a cache line divided into a plurality of sub-lines for storing data from the L1 cache. The cache-memory control apparatus includes a control-flag adding unit, an L1-cache control unit, and an L2-cache control unit. The control-flag adding unit adds an SP flag to each of the sub-lines. The L1-cache control unit acquires an access virtual address and, when there is no data at the access virtual address, outputs an L2-cache access address to the L2-cache control unit. The L2-cache control unit switches the SP flag based on a virtual page number in an L1 index and a physical page number in an L2 index. Based on the SP flag, a corresponding one of the sub-lines is written back to the L1 cache.
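One possible reading of the sub-line/SP-flag mechanism, sketched in C. The two-sub-line split, the XOR of the index bits, and every name below are assumptions made only for illustration, not the apparatus's actual logic.

    #include <stdint.h>
    #include <stdbool.h>

    #define SUB_LINES 2   /* sub-lines per L2 cache line (assumed) */

    /* One L2 line whose sub-lines each carry an SP flag. */
    typedef struct {
        uint8_t data[SUB_LINES][64];
        bool    sp[SUB_LINES];     /* SP flag: which sub-line the virtually
                                      indexed L1 copy currently maps to   */
    } l2_line_t;

    /* Switch the SP flags from the low virtual-page bit used in the L1
       index and the low physical-page bit used in the L2 index.  When the
       two bits differ, the other sub-line is the one the L1 access maps to. */
    static void switch_sp(l2_line_t *line, unsigned vpn_bit, unsigned ppn_bit)
    {
        unsigned target = (vpn_bit ^ ppn_bit) & 1u;
        for (unsigned i = 0; i < SUB_LINES; i++)
            line->sp[i] = (i == target);
    }

    /* On an L1 miss, write back only the sub-line whose SP flag is set. */
    static const uint8_t *subline_for_l1(const l2_line_t *line)
    {
        for (unsigned i = 0; i < SUB_LINES; i++)
            if (line->sp[i])
                return line->data[i];
        return line->data[0];
    }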
Abstract:
A cache memory control unit that controls a cache memory comprises: a PF-PORT 22 and an MI-PORT 21 that receive a prefetch request and a demand fetch request, respectively, issued from a primary cache; and a processing pipeline 27 that performs swap processing when the MI-PORT 21 receives a demand fetch request designating the same memory address as that designated by a prefetch request already received by the PF-PORT 22, the swap processing being performed so that an MIB 28 secured for replying to the prefetch request is used instead for the reply to the demand fetch request that follows the prefetch request.
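A C sketch of the swap processing, assuming a small array of MIB (move-in buffer) entries; the entry layout and function names are illustrative only.

    #include <stdint.h>
    #include <stdbool.h>

    typedef enum { REQ_NONE, REQ_PREFETCH, REQ_DEMAND } req_kind_t;

    /* A move-in buffer (MIB) entry holding an outstanding request. */
    typedef struct {
        bool       valid;
        uint64_t   addr;
        req_kind_t kind;     /* which requester the reply will be routed to */
    } mib_entry_t;

    /* When a demand fetch hits the address of a pending prefetch, reuse the
       MIB already secured for the prefetch instead of allocating a new one:
       the reply is simply re-routed to the demand fetch (swap processing). */
    static bool try_swap(mib_entry_t *mib, int n, uint64_t demand_addr)
    {
        for (int i = 0; i < n; i++) {
            if (mib[i].valid && mib[i].kind == REQ_PREFETCH &&
                mib[i].addr == demand_addr) {
                mib[i].kind = REQ_DEMAND;  /* reply now answers the fetch  */
                return true;
            }
        }
        return false;                      /* no pending prefetch to reuse */
    }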
Abstract:
A transmitting side device (10) and a receiving side device (20) are connected to each other via a bus (30) comprising TAG bits (31), data bits (32), and error detection/correction ECC bits (33). The transmitting side device (10) uses a redundant-bit inversion circuit (14) to invert different bits of the ECC bits (33) in correspondence with trigger signals (41, 42). In the receiving side device (20), a determination circuit (24), which has received an error report signal (26) from an error detection/correction circuit (22), determines from the position of the error bit in the ECC bits (33) which one of the trigger signals (41, 42) was transmitted from the transmitting side device (10).
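The signaling idea can be modeled compactly in C. The real bus carries a SECDED code; the byte-parity "ECC" below is only a stand-in so that the flipped-bit position can be recovered, and all names, widths, and trigger numbers are assumptions for illustration.

    #include <stdint.h>

    /* Toy ECC: byte-wise XOR parity over the data word. */
    static uint8_t compute_ecc(uint64_t data)
    {
        uint8_t ecc = 0;
        for (int i = 0; i < 8; i++)
            ecc ^= (uint8_t)(data >> (8 * i));
        return ecc;
    }

    /* Transmitter: signal trigger 41 by inverting ECC bit 0, trigger 42 by
       inverting ECC bit 1, leaving the data bits untouched. */
    static uint8_t send_with_trigger(uint64_t data, int trigger)
    {
        uint8_t ecc = compute_ecc(data);
        if (trigger == 41) ecc ^= 1u << 0;
        if (trigger == 42) ecc ^= 1u << 1;
        return ecc;
    }

    /* Receiver: the error-detection circuit flags a mismatch; the position
       of the differing ECC bit tells which trigger was sent (0 = none). */
    static int detect_trigger(uint64_t data, uint8_t ecc_received)
    {
        uint8_t syndrome = compute_ecc(data) ^ ecc_received;
        if (syndrome == (1u << 0)) return 41;
        if (syndrome == (1u << 1)) return 42;
        return 0;
    }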
Abstract:
To reduce the number of bits required for LRU control when the number of target entries is large, while achieving complete LRU control. Each time an entry is used, the ID of the used entry is stored to build LRU information so that storage data 0, stored in the leftmost position, indicates the ID of the entry with the oldest last-use time (that is, the LRU entry), as shown, for example, in FIG. 1(1). An LRU control apparatus according to a first embodiment of the present invention refers to the LRU information and, taking storage data 0 as the ID of the entry with the oldest last-use time, selects the entry corresponding to storage data 0 (for example, entry 1) as the candidate for LRU replacement.
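A minimal C sketch of LRU information kept as an ordered list of entry IDs, with storage position 0 holding the ID of the least recently used entry; the entry count and names are assumed.

    #include <stdint.h>
    #include <string.h>

    #define NUM_ENTRIES 8   /* number of managed entries (assumed) */

    /* LRU information: entry IDs ordered from least recently used
       (index 0, "storage data 0") to most recently used. */
    typedef struct {
        uint8_t order[NUM_ENTRIES];
    } lru_info_t;

    /* Each time an entry is used, move its ID to the most-recent end. */
    static void touch(lru_info_t *lru, uint8_t id)
    {
        int pos = 0;
        while (pos < NUM_ENTRIES && lru->order[pos] != id)
            pos++;
        if (pos == NUM_ENTRIES)
            return;                          /* unknown ID: ignore */
        memmove(&lru->order[pos], &lru->order[pos + 1],
                (size_t)(NUM_ENTRIES - pos - 1));
        lru->order[NUM_ENTRIES - 1] = id;
    }

    /* The replacement candidate is simply the ID in the leftmost position. */
    static uint8_t lru_victim(const lru_info_t *lru)
    {
        return lru->order[0];
    }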
Abstract:
A cache controller that writes data to a cache memory includes a first buffer unit that retains data flowing in from outside to be written to the cache memory, a second buffer unit that retains, from among the pieces of data retained in the first buffer unit, the data piece that is currently to be written to the cache memory, and a write controlling unit that controls writing of the data piece retained in the second buffer unit to the cache memory.
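A C sketch of the two buffer units and one write-control step, under assumed piece sizes and buffer depths; cache_write stands in for the actual cache-array write and is purely illustrative.

    #include <stdint.h>
    #include <stdbool.h>
    #include <string.h>

    #define PIECE_SIZE  16   /* bytes written to the cache per step (assumed) */
    #define FIRST_DEPTH  4   /* pieces the inflow buffer can hold (assumed)   */

    typedef struct {
        /* First buffer: holds data flowing in from outside. */
        uint8_t first[FIRST_DEPTH][PIECE_SIZE];
        int     head, count;
        /* Second buffer: the single piece currently being written. */
        uint8_t second[PIECE_SIZE];
        bool    second_valid;
    } write_buffers_t;

    /* Write-control step: move one piece from the first buffer into the
       second buffer, then commit the second buffer to the cache array. */
    static void write_step(write_buffers_t *b,
                           void (*cache_write)(const uint8_t *piece))
    {
        if (!b->second_valid && b->count > 0) {
            memcpy(b->second, b->first[b->head], PIECE_SIZE);
            b->head = (b->head + 1) % FIRST_DEPTH;
            b->count--;
            b->second_valid = true;
        }
        if (b->second_valid) {
            cache_write(b->second);   /* controlled write of the held piece */
            b->second_valid = false;
        }
    }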
Abstract:
A cache receives a request from an instruction execution unit, searches for the necessary data, outputs the data to the instruction execution unit if there is a cache hit, and instructs a request storage unit to request a move-in of the data if a cache miss occurs. The request storage unit stores the request corresponding to the instruction from the cache while the requested process is being executed. A REQID assignment unit reads the request stored in the request storage unit, selects an unused REQID from a REQID table, and assigns the unused REQID to the read request. The REQID is an identification number of the request, drawn from a pool whose size equals the maximum number of requests that the system controller on the response side can accept simultaneously.
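A C sketch of REQID assignment from a table of unused IDs, where MAX_REQIDS is an assumed value standing in for the maximum number of simultaneous requests the responding system controller accepts.

    #include <stdbool.h>

    #define MAX_REQIDS 16   /* max requests accepted simultaneously by the
                               system controller (assumed value)          */

    typedef struct {
        bool in_use[MAX_REQIDS];  /* REQID table: which IDs are outstanding */
    } reqid_table_t;

    /* Pick an unused REQID for a stored move-in request, or return -1 if
       every ID is outstanding (the request then has to wait). */
    static int assign_reqid(reqid_table_t *t)
    {
        for (int id = 0; id < MAX_REQIDS; id++) {
            if (!t->in_use[id]) {
                t->in_use[id] = true;
                return id;
            }
        }
        return -1;
    }

    /* Release the REQID when the system controller's response arrives. */
    static void release_reqid(reqid_table_t *t, int id)
    {
        if (id >= 0 && id < MAX_REQIDS)
            t->in_use[id] = false;
    }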
Abstract:
A cache controller controls at least one cache. The cache includes ways including a plurality of blocks that store entry data. A writing unit writes degradation data to a failed block. The degradation data indicates that the failed block is in a degradation state. A reading unit reads entry data from a block. A determining unit determines, if the entry data obtained by the reading unit includes the degradation data, that the block is in the degradation state.
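A C sketch of the degradation marking; the abstract does not specify how the degradation data is encoded, so the pattern and names below are assumptions.

    #include <stdint.h>
    #include <stdbool.h>

    /* Reserved entry-data pattern meaning "this block is degraded"
       (assumed encoding, for illustration only). */
    #define DEGRADED_PATTERN 0xDEADDEADu

    typedef struct {
        uint32_t entry;      /* entry data stored in the block */
    } cache_block_t;

    /* Writing unit: mark a failed block by storing the degradation data. */
    static void degrade_block(cache_block_t *blk)
    {
        blk->entry = DEGRADED_PATTERN;
    }

    /* Determining unit: a block whose entry data matches the degradation
       pattern is treated as unusable by the cache controller. */
    static bool is_degraded(const cache_block_t *blk)
    {
        return blk->entry == DEGRADED_PATTERN;
    }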
Abstract:
A cache memory device comprises a secondary tag RAM that partially constitutes a secondary cache memory and employs a set-associative scheme having a plurality of ways, and a secondary cache access controller that, when the number of ways in the secondary tag RAM is changed, allocates tags to the respective entries so that the total number of entries constituting the secondary tag RAM remains constant before and after the number of ways is changed.
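A C sketch of why the allocation can keep the entry total constant: with a fixed number of entries, changing the way count only shifts how an address splits between set index and tag; the sizes and names here are assumed.

    #include <stdint.h>

    #define TOTAL_ENTRIES 4096   /* entries in the secondary tag RAM (assumed) */
    #define LINE_BITS        6   /* 64-byte lines (assumed)                    */

    typedef struct { uint32_t index; uint64_t tag; } placement_t;

    /* sets = TOTAL_ENTRIES / ways, so halving the ways doubles the sets
       while the total number of tag entries stays the same. */
    static placement_t place(uint64_t addr, unsigned ways)
    {
        unsigned    sets = TOTAL_ENTRIES / ways;
        placement_t p;
        p.index = (uint32_t)((addr >> LINE_BITS) % sets);
        p.tag   = addr >> LINE_BITS;   /* full line address kept as the tag */
        return p;
    }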