Abstract:
A method of debugging a memory element is provided. The method includes initializing a line fetch controller with at least one of write data and read data; utilizing at least two separate clocks for performing at least one of write requests and read requests based on the at least one of the write data and the read data; and debugging the memory element based on the at least one of the write requests and the read requests.
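A minimal sketch of this idea in Python, with the two clocks modeled as independent tick periods; the names LineFetchController and debug_memory, and the fixed clock periods, are assumptions for illustration, not details taken from the abstract.

class LineFetchController:
    """Holds the write data to drive into the memory element and the read data expected back."""
    def __init__(self, write_data, expected_read_data):
        self.write_data = write_data
        self.expected_read_data = expected_read_data

def debug_memory(memory, controller, write_period=2, read_period=3, cycles=24):
    """Issue write requests on one clock and read requests on another, reporting mismatches."""
    errors, w, r = [], 0, 0
    for t in range(cycles):
        if t % write_period == 0 and w < len(controller.write_data):
            memory[w] = controller.write_data[w]      # write request on the write clock edge
            w += 1
        if t % read_period == 0 and r < w:
            observed = memory[r]                      # read request on the read clock edge
            expected = controller.expected_read_data[r]
            if observed != expected:
                errors.append((r, observed, expected))
            r += 1
    return errors

mem = [0] * 8
ctrl = LineFetchController([3, 1, 4, 1, 5], [3, 1, 4, 1, 5])
print(debug_memory(mem, ctrl))    # [] when the memory element behaves correctly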
Abstract:
A computer-implemented method for managing data transfer in a multi-level memory hierarchy is provided. The method includes receiving a fetch request for allocation of data in a higher level memory; determining whether a data bus between the higher level memory and a lower level memory is available; bypassing an intervening memory between the higher level memory and the lower level memory when the data bus is determined to be available; and transferring the requested data directly from the higher level memory to the lower level memory.
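The bus-availability check and the bypass path could look roughly like the following Python sketch; FetchRequest, handle_fetch, and the dictionary-backed memories are hypothetical stand-ins for the hardware described.

from collections import namedtuple

FetchRequest = namedtuple("FetchRequest", "address")

def handle_fetch(request, higher_level, intervening, lower_level, bus_is_available):
    """Move the requested line toward the lower level memory, skipping the
    intervening memory whenever the direct data bus is free."""
    line = higher_level[request.address]
    if bus_is_available():
        lower_level[request.address] = line            # direct transfer over the data bus
    else:
        intervening[request.address] = line            # stage through the intervening memory
        lower_level[request.address] = intervening[request.address]
    return line

higher, middle, lower = {0x40: "line data"}, {}, {}
handle_fetch(FetchRequest(0x40), higher, middle, lower, bus_is_available=lambda: True)
print(lower, middle)    # {64: 'line data'} {}  -- the intervening memory was bypassed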
Abstract:
A pipelined processing device includes: a processor configured to receive a request to perform an operation; a plurality of processing controllers configured to receive at least one instruction associated with the operation, each of the plurality of processing controllers including a memory to store at least one instruction therein; a pipeline processor configured to receive and process the at least one instruction, the pipeline processor including shared error detection logic configured to detect a parity error in the at least one instruction as the at least one instruction is processed in a pipeline and generate an error signal; and a pipeline bus connected to each of the plurality of processing controllers and configured to communicate the error signal from the error detection logic.
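One way to picture the shared parity check and the broadcast error signal is the Python sketch below; parity, run_pipeline, the ErrorLog controller, and the single-bit fault model are illustrative assumptions rather than details from the abstract.

def parity(word):
    """Even parity over the bits of an instruction word."""
    return bin(word).count("1") & 1

class ErrorLog:
    """Stands in for a processing controller listening on the pipeline bus."""
    def __init__(self):
        self.errors = []
    def on_error(self, stage, word):
        self.errors.append((stage, word))

def run_pipeline(word, parity_bit, n_stages, controllers, flip_bit_at=None):
    """Carry an instruction word through n pipeline stages; one shared parity
    check runs at every stage and broadcasts an error signal to all controllers."""
    for stage in range(n_stages):
        if flip_bit_at == stage:
            word ^= 1                                  # model a soft error corrupting this stage
        if parity(word) != parity_bit:                 # shared error detection logic
            for c in controllers:                      # pipeline bus: signal every controller
                c.on_error(stage, word)
            return None
        # otherwise the word latches unchanged into the next stage
    return word

log = ErrorLog()
print(run_pipeline(0b1011, parity(0b1011), 4, [log]))                  # 11, no error
print(run_pipeline(0b1011, parity(0b1011), 4, [log], flip_bit_at=2))   # None, error signaled
print(log.errors)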
Abstract:
A memory refresh requestor, a memory request interpreter, a cache memory, and a cache controller are provided on a single chip. The cache controller is configured to receive a memory access request for a memory address range in the cache memory, detect that the cache memory located at the memory address range is available, and send the memory access request to the memory request interpreter when the memory address range is available. The memory request interpreter is configured to receive the memory access request from the cache controller, determine whether the memory access request is a request to refresh the contents of the memory address range, and refresh the data in the memory address range when the memory access request is a refresh request.
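A rough Python sketch of the controller-to-interpreter handoff; CacheController, MemoryRequestInterpreter, and the dictionary-backed cache are hypothetical names, and "available" is modeled simply as "not in a busy set".

REFRESH = "refresh"

class MemoryRequestInterpreter:
    """Refreshes the contents of a memory address range when asked to."""
    def __init__(self, cache_memory):
        self.cache = cache_memory

    def handle(self, request):
        if request["kind"] != REFRESH:
            return False
        lo, hi = request["range"]
        for addr in range(lo, hi):
            self.cache[addr] = self.cache[addr]        # rewrite (refresh) the stored data
        return True

class CacheController:
    """Forwards a memory access request to the interpreter only when the range is available."""
    def __init__(self, interpreter, busy_ranges=()):
        self.interpreter = interpreter
        self.busy_ranges = set(busy_ranges)

    def access(self, request):
        if request["range"] in self.busy_ranges:
            return False                               # range not available; do not forward
        return self.interpreter.handle(request)

cache = {addr: addr * 2 for addr in range(16)}
ctrl = CacheController(MemoryRequestInterpreter(cache))
print(ctrl.access({"kind": REFRESH, "range": (0, 8)}))   # True: refresh performed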
Abstract:
A method of managing a temporary memory includes: receiving a request to transfer data from a source location to a destination location, the data transfer request associated with an operation to be performed, the operation being one of an input to an intermediate temporary memory and an output from the temporary memory; checking a two-state indicator associated with the temporary memory, the two-state indicator having a first state indicating that an immediately preceding operation on the temporary memory was an input to the temporary memory and a second state indicating that the immediately preceding operation was an output from the temporary memory; and performing the operation responsive to one of: the operation being an input operation and the two-state indicator being in the second state, indicating that the immediately preceding operation was an output; and the operation being an output operation and the two-state indicator being in the first state, indicating that the immediately preceding operation was an input.
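The two-state indicator behaves like a toggle that forces inputs and outputs to alternate, roughly as in this Python sketch; TemporaryMemory and its one-deep slot are assumptions made for illustration.

INPUT, OUTPUT = "input", "output"

class TemporaryMemory:
    """A one-deep staging buffer whose two-state indicator records whether the
    last operation was an input (first state) or an output (second state)."""
    def __init__(self):
        self.slot = None
        self.last_was_input = False      # second state: last operation was an output (or nothing yet)

    def request(self, operation, data=None):
        if operation == INPUT and not self.last_was_input:
            self.slot = data             # allowed: the previous operation was an output
            self.last_was_input = True
            return True
        if operation == OUTPUT and self.last_was_input:
            self.last_was_input = False  # allowed: the previous operation was an input
            return self.slot
        return None                      # blocked: would repeat the previous operation

buf = TemporaryMemory()
print(buf.request(INPUT, "block A"))     # True
print(buf.request(INPUT, "block B"))     # None: two inputs in a row are refused
print(buf.request(OUTPUT))               # 'block A'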
Abstract:
A pipelined processing device includes: a device controller configured to receive a request to perform an operation; a plurality of subcontrollers configured to receive at least one instruction associated with the operation, each of the plurality of subcontrollers including a counter configured to generate an active time value indicating at least a portion of a time taken to process the at least one instruction; a pipeline processor configured to receive and process the at least one instruction, the pipeline processor configured to receive the active time value; and a shared pipeline storage area configured to store the active time value for each of the plurality of subcontrollers.
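A small Python sketch of the per-subcontroller counters and the shared storage of their active time values; Subcontroller, run_operation, and the cycle schedule are hypothetical and only illustrate the counting behavior.

class Subcontroller:
    """Counts the cycles during which its instruction is active in the pipeline."""
    def __init__(self, name):
        self.name = name
        self.active_time = 0

    def tick(self, active):
        if active:
            self.active_time += 1

def run_operation(subcontrollers, schedule):
    """schedule[cycle] is the set of subcontroller names active in that cycle.
    Returns the shared storage area mapping each subcontroller to its active time value."""
    for active_names in schedule:
        for sub in subcontrollers:
            sub.tick(sub.name in active_names)
    return {sub.name: sub.active_time for sub in subcontrollers}   # shared pipeline storage area

subs = [Subcontroller("fetch"), Subcontroller("store")]
print(run_operation(subs, [{"fetch"}, {"fetch", "store"}, {"store"}, {"store"}]))
# {'fetch': 2, 'store': 3}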
Abstract:
Dynamic re-allocation of cache buffer slots includes moving data out of a reserved buffer slot upon detecting an error in the reserved buffer slot, creating a new buffer slot, and storing the data moved out of the reserved buffer slot in the new buffer slot.
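The re-allocation step could be sketched in Python as below; the list-of-slots model, reallocate_on_error, and the error check callback are illustrative assumptions.

def reallocate_on_error(buffer_slots, reserved_index, error_detected):
    """When an error is detected in the reserved slot, move its data into a
    newly created slot and retire the faulty one."""
    if not error_detected(buffer_slots[reserved_index]):
        return reserved_index
    data = buffer_slots[reserved_index]          # move data out of the failing reserved slot
    buffer_slots[reserved_index] = None          # take the faulty slot out of service
    buffer_slots.append(data)                    # create a new slot and store the data there
    return len(buffer_slots) - 1

slots = ["directory state", "cast-out line", "reserved line"]
new_index = reallocate_on_error(slots, 2, lambda slot: True)   # pretend the check found an error
print(new_index, slots)   # 3 ['directory state', 'cast-out line', None, 'reserved line']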
Abstract:
A cache includes a cache pipeline, a request receiver configured to receive off-chip coherency requests from an off-chip cache, and a plurality of state machines coupled to the request receiver. The cache also includes an arbiter coupled between the plurality of state machines and the cache pipeline and configured to give priority to off-chip coherency requests, as well as a counter configured to count the number of coherency requests sent from the cache pipeline to a lower level cache. The cache pipeline is halted from sending coherency requests when the counter exceeds a predetermined limit.
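A Python sketch of the arbitration and throttling behavior; CacheArbiter, the two queues, and the outstanding-request counter are hypothetical stand-ins for the state machines and counter described.

from collections import deque

class CacheArbiter:
    """Feeds the cache pipeline, preferring off-chip coherency requests, and
    stops the pipeline from sending coherency requests downstream once the
    counter reaches a predetermined limit."""
    def __init__(self, limit):
        self.off_chip = deque()
        self.on_chip = deque()
        self.outstanding = 0
        self.limit = limit

    def submit(self, request, off_chip=False):
        (self.off_chip if off_chip else self.on_chip).append(request)

    def next_for_pipeline(self):
        if self.off_chip:                      # off-chip coherency requests win arbitration
            return self.off_chip.popleft()
        if self.on_chip and self.outstanding < self.limit:
            self.outstanding += 1              # pipeline will send a coherency request downstream
            return self.on_chip.popleft()
        return None                            # halted: counter at the predetermined limit

    def downstream_response(self):
        self.outstanding -= 1                  # lower level cache answered; free a slot

arb = CacheArbiter(limit=2)
arb.submit("local-1"); arb.submit("remote-1", off_chip=True); arb.submit("local-2"); arb.submit("local-3")
print([arb.next_for_pipeline() for _ in range(4)])   # ['remote-1', 'local-1', 'local-2', None]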
Abstract:
Dynamic pipeline cache error correction includes receiving a request to perform an operation that requires a storage cache slot, the storage cache slot residing in a cache. The dynamic pipeline cache error correction also includes accessing the storage cache slot, determining a cache hit for the storage cache slot, and identifying and correcting any correctable soft errors associated with the storage cache slot. The dynamic pipeline cache error correction further includes updating the cache with the corrected data.
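A Python sketch of the hit-check-correct-update flow; it models the correctable-error check with a stored duplicate of the data rather than real ECC, and handle_request and the tuple-valued cache are assumptions for illustration.

def handle_request(cache, slot_id):
    """On a cache hit, detect a correctable soft error in the slot (modeled here
    as a mismatch against a stored duplicate), correct it, and update the cache
    with the corrected data."""
    if slot_id not in cache:
        return None                              # cache miss: nothing to correct
    data, duplicate = cache[slot_id]
    if data != duplicate:                        # correctable soft error detected
        data = duplicate                         # correct the data
        cache[slot_id] = (data, duplicate)       # update the cache with the corrected result
    return data

cache = {7: (0b1010_0001, 0b1010_0011)}          # slot 7 has a flipped bit in its data copy
print(bin(handle_request(cache, 7)))             # corrected value written back and returned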
Abstract:
A computer implemented method of optimizing sequential data fetches in a computer system is provided. The method includes fetching a data segment from a main memory, the data segment having a plurality of target data entries; extracting a first portion of the data segment and storing the first portion into a target data cache, the first portion having a first target data entry; and storing the data segment into an intermediate cache line buffer in communication with the target data cache to enable subsequent fetches of a number of target data entries in the data segment.
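A Python sketch of serving sequential entries from the intermediate line buffer so that only the first access to a segment goes to main memory; SequentialFetcher, the segment size, and the slice-based fetch are illustrative assumptions.

class SequentialFetcher:
    """Serves target data entries from an intermediate cache line buffer."""
    def __init__(self, main_memory, entries_per_segment=4):
        self.main_memory = main_memory
        self.n = entries_per_segment
        self.line_buffer = None          # intermediate cache line buffer
        self.line_base = None
        self.target_cache = {}           # target data cache

    def fetch(self, index):
        base = (index // self.n) * self.n
        if self.line_base != base:
            # Fetch the whole data segment once and keep it in the line buffer.
            self.line_buffer = self.main_memory[base:base + self.n]
            self.line_base = base
        entry = self.line_buffer[index - base]
        self.target_cache[index] = entry          # store the requested entry in the target cache
        return entry

memory = list(range(100, 132))
f = SequentialFetcher(memory)
print([f.fetch(i) for i in (8, 9, 10, 11)])   # one memory fetch, three line buffer hits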