Abstract:
A method of handling a stuck bit in a directory of a cache memory, by defining multiple binary encodings to indicate a defective cache state, detecting an error in a tag stored in a member of the directory (wherein the tag at least includes an address field, a state field and an error-correction field), determining that the error is associated with a stuck bit of the directory member, and writing new state information to the directory member which is selected from one of the binary encodings based on a field location of the stuck bit within the directory member. The multiple binary encodings may include a first binary encoding when the stuck bit is in the address field, a second binary encoding when the stuck bit is in the state field, and a third binary encoding when the stuck bit is in the error-correction field. The new state information may also further be selected based on the value of the stuck bit, e.g., a state bit corresponding to the stuck bit is assigned a bit value from the new state information which matches the value of the stuck bit.
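A minimal C sketch of the selection step described above: the new "defective" state encoding is chosen by where the stuck bit falls (address, state, or ECC field) and, for the state field, by the stuck bit's value. The field widths, encoding values, and function names are illustrative assumptions, not the actual hardware values.

```c
#include <stdint.h>
#include <stdio.h>

#define ADDR_BITS  40   /* assumed address-field width          */
#define STATE_BITS 4    /* assumed state-field width            */
#define ECC_BITS   7    /* assumed error-correction-field width */

enum defective_encoding {
    DEFECTIVE_ADDR  = 0x8,  /* first encoding: stuck bit in the address field */
    DEFECTIVE_STATE = 0x9,  /* second encoding: stuck bit in the state field  */
    DEFECTIVE_ECC   = 0xA   /* third encoding: stuck bit in the ECC field     */
};

/* Pick the new state information for the directory member.  bit_pos is the
 * stuck bit's position within the whole tag; stuck_value is the value the
 * cell is stuck at (0 or 1). */
static uint8_t select_defective_state(unsigned bit_pos, unsigned stuck_value)
{
    uint8_t state;

    if (bit_pos < ADDR_BITS) {
        state = DEFECTIVE_ADDR;
    } else if (bit_pos < ADDR_BITS + STATE_BITS) {
        state = DEFECTIVE_STATE;
        /* Simplification: force the state bit that maps onto the stuck cell
         * to match the stuck value so the write-back can succeed. */
        unsigned state_bit = bit_pos - ADDR_BITS;
        if (stuck_value)
            state |= (uint8_t)(1u << state_bit);
        else
            state &= (uint8_t)~(1u << state_bit);
    } else {
        state = DEFECTIVE_ECC;
    }
    return state;
}

int main(void)
{
    printf("stuck addr bit 12          -> state 0x%X\n", select_defective_state(12, 1));
    printf("stuck state bit 42, val 1  -> state 0x%X\n", select_defective_state(42, 1));
    printf("stuck ECC bit 45           -> state 0x%X\n", select_defective_state(45, 0));
    return 0;
}
```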
Abstract:
A method of handling a stuck bit in a directory of a cache memory which detects an error in a stored tag having an address field, a state field and an error-correction field, determines that the error is associated with a stuck bit of the directory member, marks the directory member as defective, and casts out corrected address information. The error is detected during processing of a cache directory access request, and is determined to be associated with a stuck bit of the directory member by attempting to correct a first error and then detecting a second error after the first correction attempt. The address information is cast out by routing a surrogate tag contained in a surrogate member of the cache directory through error-correction pipeline circuitry while transmitting the address information from the surrogate member to a cast-out machine.
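A minimal C sketch of the "correct once, then re-check" classification described above: a first error is corrected and written back, and a second error detected on the same member after the correction attempt marks the fault as a stuck bit. A compare against a known-good tag stands in for the real ECC syndrome check, and the cell model and tag width are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct dir_member {
    uint64_t cells;       /* stored tag bits                   */
    uint64_t stuck_mask;  /* which cell (if any) is stuck      */
    uint64_t stuck_value; /* value the stuck cell always reads */
};

static uint64_t member_read(const struct dir_member *m)
{
    /* A stuck cell overrides whatever was written to it. */
    return (m->cells & ~m->stuck_mask) | (m->stuck_value & m->stuck_mask);
}

static void member_write(struct dir_member *m, uint64_t tag)
{
    m->cells = tag;
}

/* Classify the error: correct the tag, write it back, and re-read.  A second
 * miscompare after the correction attempt indicates a stuck (hard) bit
 * rather than a soft error. */
static bool error_is_stuck_bit(struct dir_member *m, uint64_t good_tag)
{
    if (member_read(m) == good_tag)
        return false;                       /* no first error */

    member_write(m, good_tag);              /* correction attempt */
    return member_read(m) != good_tag;      /* second error => stuck bit */
}

int main(void)
{
    struct dir_member m = {
        .cells       = 0x12345678,
        .stuck_mask  = 1ull << 5,  /* bit 5 is stuck... */
        .stuck_value = 1ull << 5,  /* ...at 1           */
    };
    printf("stuck bit? %s\n", error_is_stuck_bit(&m, 0x12345658) ? "yes" : "no");
    return 0;
}
```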
Abstract:
A method and apparatus for preventing selection of a Deleted (D) member as an LRU victim during LRU victim selection. During each cache access targeting the particular congruence class, the deleted cache line is identified from information in the cache directory. The location of the deleted cache line is pipelined through the cache architecture during LRU victim selection. The information is latched and then passed to MRU vector generation logic. An MRU vector is generated and passed to the MRU update logic, which selects/tags the deleted member as an MRU member. The make-MRU operation affects only the lower-level LRU state bits arranged in a tree-based structure, so that it only negates selection of the specific member in the D state without affecting LRU victim selection of the other members.
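A minimal C sketch of protecting a deleted member in an 8-way, tree-based pseudo-LRU: a make-MRU of the deleted member updates only the lowest-level tree bit covering that member, so the deleted way cannot be the victim within its pair while the upper-level choices for the other members are left untouched. The bit layout (1 root bit, 2 mid-level bits, 4 leaf bits) is a common convention assumed here for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* lru[0]    : root   (0 = ways 0-3, 1 = ways 4-7)
 * lru[1..2] : middle (pick a pair within the chosen half)
 * lru[3..6] : leaves (pick a way within the chosen pair)
 * Each bit points toward the less-recently-used side.                    */
struct plru8 { uint8_t lru[7]; };

/* Walk the tree in the direction the bits point to find the LRU victim. */
static unsigned select_victim(const struct plru8 *t)
{
    unsigned half        = t->lru[0];
    unsigned pair        = t->lru[1 + half];
    unsigned leaf_idx    = 3 + 2 * half + pair;
    unsigned way_in_pair = t->lru[leaf_idx];
    return 4 * half + 2 * pair + way_in_pair;
}

/* Make-MRU restricted to the lowest level: flip only the leaf bit covering
 * `way` so it points at the other member of the pair. */
static void make_mru_deleted(struct plru8 *t, unsigned way)
{
    unsigned half     = way >> 2;
    unsigned pair     = (way >> 1) & 1;
    unsigned leaf_idx = 3 + 2 * half + pair;
    t->lru[leaf_idx]  = (uint8_t)(~way & 1);   /* point away from `way` */
}

int main(void)
{
    struct plru8 t = { .lru = {1, 0, 0, 0, 0, 0, 0} }; /* would pick way 4 */
    printf("victim before: %u\n", select_victim(&t));
    make_mru_deleted(&t, 4);                           /* way 4 is Deleted */
    printf("victim after : %u\n", select_victim(&t));  /* now picks way 5  */
    return 0;
}
```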
Abstract:
A memory lock mechanism within a multiprocessor system is disclosed. A lock control section is initially assigned to a data block within a system memory of the multiprocessor system. In response to a request for accessing the data block by a processing unit within the multiprocessor system, a determination is made by a memory controller whether or not the lock control section of the data block has been set. If the lock control section of the data block has been set, the request for accessing the data block is denied. Otherwise, if the lock control section of the data block has not been set, the lock control section of the data block is set, and the request for accessing the data block is allowed.
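A minimal C sketch of the memory-controller decision described above: each data block carries a lock-control section that is tested and set before access is granted, and a set lock causes the request to be denied. The block size, structure names, and the use of a C11 atomic flag are illustrative assumptions.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE 128            /* assumed data-block size in bytes */

struct mem_block {
    atomic_flag lock_control;     /* the lock control section          */
    uint8_t data[BLOCK_SIZE];
};

/* Memory-controller side of an access request: deny if the lock control
 * section is already set, otherwise set it and allow the access. */
static bool request_access(struct mem_block *blk)
{
    if (atomic_flag_test_and_set(&blk->lock_control))
        return false;             /* already locked: deny the request */
    return true;                  /* lock acquired: access allowed    */
}

static void release_access(struct mem_block *blk)
{
    atomic_flag_clear(&blk->lock_control);
}

int main(void)
{
    struct mem_block blk = { .lock_control = ATOMIC_FLAG_INIT };

    printf("first request : %s\n", request_access(&blk) ? "allowed" : "denied");
    printf("second request: %s\n", request_access(&blk) ? "allowed" : "denied");
    release_access(&blk);
    printf("after release : %s\n", request_access(&blk) ? "allowed" : "denied");
    return 0;
}
```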
Abstract:
A technique for processing an instruction sequence that includes a barrier instruction, a load instruction preceding the barrier instruction, and a subsequent memory access instruction following the barrier instruction includes determining that the load instruction is resolved based upon receipt of the earlier of a good combined response for a read operation corresponding to the load instruction and data for the load instruction. The technique also includes, if execution of the subsequent memory access instruction is not initiated prior to completion of the barrier instruction, initiating execution of the subsequent memory access instruction in response to determining that the barrier instruction has completed. The technique further includes, if execution of the subsequent memory access instruction is initiated prior to completion of the barrier instruction, discontinuing tracking of the subsequent memory access instruction with respect to invalidation in response to determining that the barrier instruction has completed.
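A minimal C sketch of the resolution rule described above: the load ahead of the barrier counts as resolved on the earlier of a good combined response or data return, and the post-barrier access is either started when the barrier completes or, if it was started early, merely stops being tracked for invalidation. The event names and tracking flag are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdio.h>

struct barrier_seq {
    bool load_cresp_good;     /* good combined response seen for the load  */
    bool load_data_arrived;   /* data returned for the load                */
    bool subsequent_started;  /* later access speculatively started early  */
    bool tracking_invalidate; /* later access tracked w.r.t. invalidation  */
};

/* The load is resolved on the earliest of the two events. */
static bool load_resolved(const struct barrier_seq *s)
{
    return s->load_cresp_good || s->load_data_arrived;
}

/* Called once every load preceding the barrier is resolved, i.e. the
 * barrier instruction completes. */
static void on_barrier_complete(struct barrier_seq *s)
{
    if (!s->subsequent_started) {
        s->subsequent_started = true;        /* initiate the later access   */
        printf("subsequent access initiated\n");
    } else {
        s->tracking_invalidate = false;      /* drop invalidation tracking  */
        printf("invalidation tracking discontinued\n");
    }
}

int main(void)
{
    struct barrier_seq s = { .load_cresp_good = true,
                             .subsequent_started = true,
                             .tracking_invalidate = true };
    if (load_resolved(&s))
        on_barrier_complete(&s);
    return 0;
}
```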
Abstract:
A multiprocessor data processing system includes at least first and second coherency domains, where the first coherency domain includes a system memory and a cache memory. According to a method of data processing, a cache line is buffered in a data array of the cache memory and a state field in a cache directory of the cache memory is set to a coherency state to indicate that the cache line is valid in the data array, that the cache line is held in the cache memory non-exclusively, and that another cache in said second coherency domain may hold a copy of the cache line.
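A minimal C sketch of recording the directory state described above: a state field in the cache directory encodes that the line is valid, held non-exclusively, and possibly also cached in the second coherency domain. The state names (loosely modeled on MESI-style shared states) and the directory-entry layout are illustrative assumptions.

```c
#include <stdint.h>
#include <stdio.h>

enum coh_state {
    COH_INVALID = 0,
    COH_SHARED_LOCAL,   /* valid, non-exclusive, copies only in this domain     */
    COH_SHARED_GLOBAL   /* valid, non-exclusive, copy may exist in other domain */
};

struct dir_entry {
    uint64_t tag;
    enum coh_state state;
};

/* Install a line that may also be cached in the second coherency domain. */
static void install_shared_global(struct dir_entry *e, uint64_t tag)
{
    e->tag = tag;
    e->state = COH_SHARED_GLOBAL;
}

int main(void)
{
    struct dir_entry e = {0};
    install_shared_global(&e, 0xABCDE);
    printf("tag=0x%llx state=%d\n", (unsigned long long)e.tag, (int)e.state);
    return 0;
}
```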
Abstract:
In response to a data request of a first of a plurality of processing units, the first processing unit selects a victim cache line to be castout from the lower level cache of the first processing unit and determines whether a mode is set. If not, the first processing unit issues on the interconnect fabric an LCO command identifying the victim cache line and indicating that a lower level cache is the intended destination. If the mode is set, the first processing unit issues a castout command with an alternative intended destination. In response to a coherence response to the LCO command indicating success of the LCO command, the first processing unit removes the victim cache line from its lower level cache, and the victim cache line is held elsewhere in the data processing system. The mode can be set to inhibit castouts to system memory, for example during testing.
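A minimal C sketch of the destination decision described above: the mode bit selects between a lateral castout (LCO) aimed at another lower-level cache and a castout command with an alternative intended destination, and a successful coherence response removes the victim from the local lower-level cache. The command structure and function names are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum co_dest { DEST_LATERAL_L3, DEST_ALTERNATIVE };

struct castout_cmd {
    uint64_t victim_addr;
    enum co_dest dest;
};

/* Build the castout command for a victim line based on the mode bit. */
static struct castout_cmd build_castout(uint64_t victim_addr, bool mode_set)
{
    struct castout_cmd cmd = { .victim_addr = victim_addr };
    cmd.dest = mode_set ? DEST_ALTERNATIVE : DEST_LATERAL_L3;
    return cmd;
}

/* On a coherence response reporting success, the victim is removed from the
 * local lower-level cache; the line is now held at the destination. */
static void on_lco_success(uint64_t victim_addr)
{
    printf("victim 0x%llx removed from local lower-level cache\n",
           (unsigned long long)victim_addr);
}

int main(void)
{
    struct castout_cmd c = build_castout(0x1000, false);  /* mode not set */
    if (c.dest == DEST_LATERAL_L3)
        on_lco_success(c.victim_addr);
    return 0;
}
```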
Abstract:
A processing unit for a multiprocessor data processing system includes a processor core and a cache hierarchy coupled to the processor core to provide low latency data access. The cache hierarchy includes an upper level cache coupled to the processor core and a lower level victim cache coupled to the upper level cache. In response to a prefetch request of the processor core that misses in the upper level cache, the lower level victim cache determines whether the prefetch request misses in the directory of the lower level victim cache and, if so, allocates a state machine in the lower level victim cache that services the prefetch request by issuing the prefetch request to at least one other processing unit of the multiprocessor data processing system.
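A minimal C sketch of the victim-cache path described above: a prefetch that already missed in the upper-level cache is looked up in the lower-level victim cache's directory, and on a miss there a free state machine is allocated to issue the prefetch toward other processing units. The machine-pool size, directory stub, and function names are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_MACHINES 8

struct victim_cache {
    bool machine_busy[NUM_MACHINES];
};

/* Assumed helper: directory lookup in the victim cache (stub always misses). */
static bool victim_dir_hit(const struct victim_cache *vc, uint64_t addr)
{
    (void)vc; (void)addr;
    return false;
}

/* Handle a prefetch request that missed in the upper-level cache. */
static int handle_prefetch(struct victim_cache *vc, uint64_t addr)
{
    if (victim_dir_hit(vc, addr))
        return -1;                       /* hit: serviced locally, no machine */

    for (int i = 0; i < NUM_MACHINES; i++) {
        if (!vc->machine_busy[i]) {
            vc->machine_busy[i] = true;  /* allocate the state machine */
            printf("machine %d issues prefetch for 0x%llx to other units\n",
                   i, (unsigned long long)addr);
            return i;
        }
    }
    return -1;                           /* no machine free: retry later */
}

int main(void)
{
    struct victim_cache vc = {0};
    handle_prefetch(&vc, 0x2000);
    return 0;
}
```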