Abstract:
A set of correctable-error and uncorrectable-error syndrome code points is defined, and an error correction code (ECC) syndrome decode is generated. The uncorrectable-error syndrome code points are treated as "don't cares," and the ECC syndrome decode used to determine the correctable-error syndrome code points is logically minimized on that basis, since output data can be ignored for the uncorrectable-error syndrome code points.
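A minimal sketch in C of the idea, assuming a hypothetical code with a 5-bit syndrome and eight illustrative single-error (correctable) syndrome values; the names, geometry, and syndrome values are not from the source. Every syndrome that is not a correctable-error code point is left marked as a don't care, which is exactly the freedom a downstream logic minimizer exploits when deriving the correctable-error decode.

```c
#include <stdint.h>
#include <stdio.h>

#define SYNDROME_POINTS 32     /* 5-bit syndrome -> 32 code points (assumed) */
#define DONT_CARE       0xFF   /* output is ignored for UE syndrome points  */

/* Illustrative correctable-error syndromes, one per protected data bit. */
static const uint8_t ce_syndrome[8] = { 0x01, 0x02, 0x04, 0x08, 0x10, 0x03, 0x05, 0x06 };

/* Build the full syndrome decode table: syndrome -> data bit to correct.
 * Points never assigned a bit are uncorrectable and stay as don't cares. */
static void build_decode(uint8_t decode[SYNDROME_POINTS])
{
    for (int s = 0; s < SYNDROME_POINTS; ++s)
        decode[s] = DONT_CARE;                       /* UE point: don't care */
    for (int bit = 0; bit < 8; ++bit)
        decode[ce_syndrome[bit]] = (uint8_t)bit;     /* CE point: flip this bit */
}

int main(void)
{
    uint8_t decode[SYNDROME_POINTS];
    build_decode(decode);
    for (int s = 0; s < SYNDROME_POINTS; ++s)
        if (decode[s] != DONT_CARE)
            printf("syndrome 0x%02x corrects data bit %u\n", s, decode[s]);
    return 0;
}
```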
Abstract:
User-controlled targeted cache purging includes receiving a request to perform an operation to purge data from a cache, the request including an index identifier identifying an index associated with the cache. The index specifies a portion of the cache to be purged. The user-controlled targeted cache purging also includes purging the data from the cache, and providing notification of successful completion of the operation.
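As a rough illustration, the sketch below models the purge-by-index request in C; the cache geometry, structure names, and return-value convention are assumptions rather than the patent's interface. The index identifier selects one set, only that portion of the cache is invalidated, and the boolean return stands in for the completion notification.

```c
#include <stdbool.h>

#define NUM_SETS 256          /* assumed cache geometry */
#define NUM_WAYS 8

struct cache_line { bool valid; unsigned tag; /* data omitted */ };
struct cache      { struct cache_line set[NUM_SETS][NUM_WAYS]; };

/* Hypothetical request: the index identifier names the portion (set) to purge. */
struct purge_request { unsigned index_id; };

/* Purge only the lines in the identified index, then report completion. */
static bool purge_by_index(struct cache *c, const struct purge_request *req)
{
    if (req->index_id >= NUM_SETS)
        return false;                                 /* invalid index */
    for (int way = 0; way < NUM_WAYS; ++way)
        c->set[req->index_id][way].valid = false;     /* purge the data */
    return true;   /* stands in for the "successful completion" notification */
}
```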
Abstract:
The method includes initiating a processor request to a cache in a requesting node and broadcasting the processor request to remote nodes when the processor request encounters a local cache miss, performing a directory search of each remote cache to determine a state of a target line's address and an ownership state of a specified address, returning the state of the target line to the requesting node and forming a combined response, and broadcasting the combined response to each remote node. During a fetch operation, when the directory search indicates an IM or a Target Memory node on a remote node, data is sourced from the respective remote cache and forwarded to the requesting node while protecting the data, and during a store operation, the data is sourced from the requesting node and protected while being forwarded to the IM or the Target Memory node after coherency has been established.
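The broadcast and combined-response flow might be sketched as below; the state encoding, the reading of IM as simply a remote cache that owns the line, and the function names are assumptions for illustration only.

```c
/* Per-remote-node directory search result (hypothetical encoding). */
enum dir_state { DIR_MISS, DIR_IM, DIR_TARGET_MEMORY };

struct node_response { int node_id; enum dir_state state; };

/* Form the combined response from the states returned by the remote nodes.
 * For a fetch, the IM or Target Memory node sources the data; for a store,
 * that node is the destination once coherency has been established. */
static int form_combined_response(const struct node_response *resp, int nodes,
                                  int *chosen_node)
{
    for (int i = 0; i < nodes; ++i) {
        if (resp[i].state == DIR_IM || resp[i].state == DIR_TARGET_MEMORY) {
            *chosen_node = resp[i].node_id;   /* source (fetch) or target (store) */
            return 1;                         /* combined response: remote node involved */
        }
    }
    return 0;                                 /* all remote misses */
}
```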
Abstract:
Dynamic pipeline cache error correction includes receiving a request to perform an operation that requires a storage cache slot, the storage cache slot residing in a cache. The dynamic pipeline cache error correction also includes accessing the storage cache slot, determining a cache hit for the storage cache slot, and identifying and correcting any correctable soft errors associated with the storage cache slot. The dynamic pipeline cache error correction further includes updating the cache with the corrected data.
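A compact sketch of the hit, check, correct, and update sequence, using a Hamming(7,4)-style code over a 4-bit payload purely to keep the example self-contained; a real design would protect full cache lines, and the structure names here are invented.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hamming(7,4)-style check bits over a 4-bit payload (illustration only). */
static uint8_t ecc_encode(uint8_t d)
{
    uint8_t d1 = d & 1, d2 = (d >> 1) & 1, d3 = (d >> 2) & 1, d4 = (d >> 3) & 1;
    uint8_t p1 = d1 ^ d2 ^ d4, p2 = d1 ^ d3 ^ d4, p3 = d2 ^ d3 ^ d4;
    return (uint8_t)(p1 | (p2 << 1) | (p3 << 2));
}

struct cache_slot { unsigned tag; uint8_t data; uint8_t ecc; bool valid; };

/* Access a storage cache slot: on a hit, recompute the check bits; a nonzero
 * syndrome marks a correctable soft error, which is fixed in place so the
 * cache is updated with the corrected data. */
static bool access_slot(struct cache_slot *slot, unsigned tag, uint8_t *out)
{
    if (!slot->valid || slot->tag != tag)
        return false;                                   /* cache miss */
    uint8_t syndrome = (uint8_t)(slot->ecc ^ ecc_encode(slot->data));
    if (syndrome) {
        /* Syndromes 3,5,6,7 point at data bits 0..3; 1,2,4 are check-bit errors. */
        static const int8_t data_bit[8] = { -1, -1, -1, 0, -1, 1, 2, 3 };
        if (data_bit[syndrome] >= 0)
            slot->data ^= (uint8_t)(1u << data_bit[syndrome]);  /* correct it */
        slot->ecc = ecc_encode(slot->data);             /* update the cache */
    }
    *out = slot->data;
    return true;
}
```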
Abstract:
A method of managing a temporary memory includes: receiving a request to transfer data from a source location to a destination location, the data transfer request associated with an operation to be performed, the operation being either an input into an intermediate temporary memory or an output from it; checking a two-state indicator associated with the temporary memory, the two-state indicator having a first state indicating that an immediately preceding operation on the temporary memory was an input to the temporary memory and a second state indicating that the immediately preceding operation was an output from the temporary memory; and performing the operation responsive to one of: the operation being an input operation and the two-state indicator being in the second state, indicating that the immediately preceding operation was an output; and the operation being an output operation and the two-state indicator being in the first state, indicating that the immediately preceding operation was an input.
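One way to picture the two-state indicator is as a toggle guarding a ping-pong style buffer, as sketched below; the buffer size, names, and reject-on-violation behavior are assumptions rather than the claimed design.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

enum last_op { LAST_WAS_OUTPUT, LAST_WAS_INPUT };   /* the two-state indicator */

struct temp_mem {
    unsigned char buf[64];   /* intermediate temporary memory (size assumed) */
    enum last_op  last;      /* first state: last op was input; second: output */
};

/* An input is allowed only if the preceding operation was an output, and an
 * output only if the preceding operation was an input, so the buffer is never
 * overwritten before it is drained nor read before it is filled. */
static bool transfer(struct temp_mem *tm, bool is_input,
                     const void *src, void *dst, size_t n)
{
    if (n > sizeof tm->buf)
        return false;
    if (is_input && tm->last == LAST_WAS_OUTPUT) {
        memcpy(tm->buf, src, n);          /* source -> temporary memory */
        tm->last = LAST_WAS_INPUT;
        return true;
    }
    if (!is_input && tm->last == LAST_WAS_INPUT) {
        memcpy(dst, tm->buf, n);          /* temporary memory -> destination */
        tm->last = LAST_WAS_OUTPUT;
        return true;
    }
    return false;                         /* ordering violated: reject the request */
}
```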
Abstract:
A pipelined processing device includes: a device controller configured to receive a request to perform an operation; a plurality of subcontrollers configured to receive at least one instruction associated with the operation, each of the plurality of subcontrollers including a counter configured to generate an active time value indicating at least a portion of a time taken to process the at least one instruction; a pipeline processor configured to receive and process the at least one instruction, the pipeline processor configured to receive the active time value; and a shared pipeline storage area configured to store the active time value for each of the plurality of subcontrollers.
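A rough model of the per-subcontroller counters and the shared pipeline storage area; the subcontroller count and the once-per-cycle update policy are assumed for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_SUBCONTROLLERS 4   /* assumed */

/* Each subcontroller carries a counter that accumulates the cycles it spends
 * actively processing its instruction (the "active time value"). */
struct subcontroller { uint32_t active_cycles; bool busy; };

/* Shared pipeline storage area holding every subcontroller's active time value. */
static uint32_t shared_pipeline_storage[NUM_SUBCONTROLLERS];

/* Called once per cycle: advance the counter while busy and publish the value
 * so the pipeline processor can read it from the shared storage area. */
static void subcontroller_tick(struct subcontroller *sc, int id)
{
    if (sc->busy)
        sc->active_cycles++;
    shared_pipeline_storage[id] = sc->active_cycles;
}
```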
Abstract:
A pipelined processing device includes: a processor configured to receive a request to perform an operation; a plurality of processing controllers configured to receive at least one instruction associated with the operation, each of the plurality of processing controllers including a memory to store at least one instruction therein; a pipeline processor configured to receive and process the at least one instruction, the pipeline processor including shared error detection logic configured to detect a parity error in the at least one instruction as the at least one instruction is processed in a pipeline and generate an error signal; and a pipeline bus connected to each of the plurality of processing controllers and configured to communicate the error signal from the error detection logic.
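The shared parity check and the error signal on the pipeline bus might look roughly like the sketch below; the instruction format, controller count, and bus representation are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_CONTROLLERS 4   /* assumed */

/* Instruction as it travels the pipeline: word, appended parity bit, and the
 * processing controller that issued it. */
struct pipe_instr { uint32_t word; uint8_t parity; int owner; };

/* One error line per controller stands in for the pipeline bus that carries
 * the error signal back from the shared error detection logic. */
static bool error_bus[NUM_CONTROLLERS];

static uint8_t parity32(uint32_t w)
{
    w ^= w >> 16; w ^= w >> 8; w ^= w >> 4; w ^= w >> 2; w ^= w >> 1;
    return (uint8_t)(w & 1u);
}

/* Shared error detection logic: every instruction is checked once, in the
 * pipeline, instead of each controller duplicating the check. */
static void pipeline_check(const struct pipe_instr *in)
{
    if (in->owner >= 0 && in->owner < NUM_CONTROLLERS &&
        parity32(in->word) != in->parity)
        error_bus[in->owner] = true;        /* drive the error signal on the bus */
}
```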