Abstract:
A method of managing a temporary memory includes: receiving a request to transfer data from a source location to a destination location, the data transfer request being associated with an operation to be performed, the operation being either an input into the intermediate temporary memory or an output from it; checking a two-state indicator associated with the temporary memory, the two-state indicator having a first state indicating that the immediately preceding operation on the temporary memory was an input to the temporary memory and a second state indicating that the immediately preceding operation was an output from the temporary memory; and performing the operation responsive to one of: the operation being an input operation and the two-state indicator being in the second state, indicating that the immediately preceding operation was an output; and the operation being an output operation and the two-state indicator being in the first state, indicating that the immediately preceding operation was an input.
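Below is a minimal C sketch of the two-state indicator described above; the buffer size and the names (temp_buffer, try_input, try_output) are illustrative assumptions, not taken from the claimed design.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch: a temporary buffer guarded by a two-state
 * indicator.  LAST_WAS_OUTPUT means the buffer is free to accept an
 * input; LAST_WAS_INPUT means it holds data waiting to be output. */
enum last_op { LAST_WAS_OUTPUT, LAST_WAS_INPUT };

struct temp_buffer {
    enum last_op indicator;   /* the two-state indicator */
    char data[64];            /* the intermediate temporary memory */
};

/* Input is performed only if the immediately preceding operation
 * was an output (indicator in the second state). */
bool try_input(struct temp_buffer *b, const char *src)
{
    if (b->indicator != LAST_WAS_OUTPUT)
        return false;                    /* previous op was an input: reject */
    strncpy(b->data, src, sizeof b->data - 1);
    b->data[sizeof b->data - 1] = '\0';
    b->indicator = LAST_WAS_INPUT;       /* flip the indicator */
    return true;
}

/* Output is performed only if the immediately preceding operation
 * was an input (indicator in the first state). */
bool try_output(struct temp_buffer *b, char *dst, size_t len)
{
    if (b->indicator != LAST_WAS_INPUT)
        return false;                    /* nothing new to drain: reject */
    strncpy(dst, b->data, len - 1);
    dst[len - 1] = '\0';
    b->indicator = LAST_WAS_OUTPUT;      /* flip the indicator */
    return true;
}

int main(void)
{
    struct temp_buffer buf = { .indicator = LAST_WAS_OUTPUT };
    char out[64];

    printf("input accepted:  %d\n", try_input(&buf, "payload"));        /* 1 */
    printf("input accepted:  %d\n", try_input(&buf, "payload2"));       /* 0: back-to-back input */
    printf("output accepted: %d\n", try_output(&buf, out, sizeof out)); /* 1 */
    return 0;
}
```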
Abstract:
A pipelined processing device includes: a device controller configured to receive a request to perform an operation; a plurality of subcontrollers configured to receive at least one instruction associated with the operation, each of the plurality of subcontrollers including a counter configured to generate an active time value indicating at least a portion of a time taken to process the at least one instruction; a pipeline processor configured to receive and process the at least one instruction, the pipeline processor configured to receive the active time value; and a shared pipeline storage area configured to store the active time value for each of the plurality of subcontrollers.
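A rough C model of the per-subcontroller counters and the shared storage area might look like the following; the cycle counts, the number of subcontrollers, and all identifiers are assumptions made for illustration.

```c
#include <stdio.h>

#define NUM_SUBCONTROLLERS 4

/* Hypothetical sketch: each subcontroller counts the cycles during
 * which its instruction is active in the pipeline, and the result is
 * written into a storage area shared by all subcontrollers. */
struct subcontroller {
    int id;
    unsigned active_cycles;   /* counter: time spent processing the instruction */
};

/* Shared pipeline storage area: one active-time slot per subcontroller. */
static unsigned shared_active_time[NUM_SUBCONTROLLERS];

/* Model one pipeline pass: the subcontroller's counter ticks for every
 * cycle its instruction occupies a pipeline stage. */
void process_instruction(struct subcontroller *sc, unsigned pipeline_stages)
{
    for (unsigned cycle = 0; cycle < pipeline_stages; cycle++)
        sc->active_cycles++;                         /* counter increments while active */

    shared_active_time[sc->id] = sc->active_cycles;  /* publish to the shared area */
}

int main(void)
{
    struct subcontroller subs[NUM_SUBCONTROLLERS] = {
        { 0, 0 }, { 1, 0 }, { 2, 0 }, { 3, 0 }
    };

    /* Assume different instructions occupy the pipeline for different times. */
    process_instruction(&subs[0], 5);
    process_instruction(&subs[1], 9);
    process_instruction(&subs[2], 3);
    process_instruction(&subs[3], 7);

    for (int i = 0; i < NUM_SUBCONTROLLERS; i++)
        printf("subcontroller %d active time: %u cycles\n",
               i, shared_active_time[i]);
    return 0;
}
```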
Abstract:
A pipelined processing device includes: a processor configured to receive a request to perform an operation; a plurality of processing controllers configured to receive at least one instruction associated with the operation, each of the plurality of processing controllers including a memory to store at least one instruction therein; a pipeline processor configured to receive and process the at least one instruction, the pipeline processor including shared error detection logic configured to detect a parity error in the at least one instruction as the at least one instruction is processed in a pipeline and generate an error signal; and a pipeline bus connected to each of the plurality of processing controllers and configured to communicate the error signal from the error detection logic.
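The shared parity check could be modeled roughly as below, assuming even parity over a 32-bit instruction word and a single boolean standing in for the pipeline bus; these details are illustrative, not specified by the abstract.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch: shared error-detection logic checks even parity
 * on each instruction as it moves through the pipeline and, on a
 * mismatch, raises an error signal seen by every processing controller
 * on a common pipeline bus. */
struct instruction {
    uint32_t word;      /* instruction bits */
    unsigned parity;    /* stored parity bit (even parity assumed) */
};

static bool pipeline_error_bus;  /* error signal broadcast to all controllers */

static unsigned even_parity(uint32_t w)
{
    unsigned p = 0;
    while (w) {
        p ^= (w & 1u);
        w >>= 1;
    }
    return p;
}

/* Shared error detection logic: one checker serves the whole pipeline
 * instead of a separate checker per controller. */
void pipeline_stage_check(const struct instruction *insn)
{
    if (even_parity(insn->word) != insn->parity)
        pipeline_error_bus = true;    /* drive the error signal on the bus */
}

int main(void)
{
    struct instruction good = { 0x0000000Fu, 0 };  /* 4 one-bits -> even parity 0 */
    struct instruction bad  = { 0x0000000Eu, 0 };  /* 3 one-bits -> parity should be 1 */

    pipeline_stage_check(&good);
    printf("error signal after good instruction: %d\n", pipeline_error_bus);

    pipeline_stage_check(&bad);
    printf("error signal after bad instruction:  %d\n", pipeline_error_bus);

    /* Each processing controller on the pipeline bus would observe
     * pipeline_error_bus and take its own recovery action. */
    return 0;
}
```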
Abstract:
A computer-implemented method for managing data transfer in a multi-level memory hierarchy includes receiving a fetch request for allocation of data in a higher level memory, determining whether a data bus between the higher level memory and a lower level memory is available, bypassing an intervening memory between the higher level memory and the lower level memory when the data bus is determined to be available, and transferring the requested data directly from the higher level memory to the lower level memory.
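A simplified sketch of the bypass decision is shown below, assuming a boolean bus-availability probe and made-up function names (direct_bus_available, handle_fetch_request); it only illustrates the routing choice, not the data movement itself.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical sketch of the bypass decision: on a fetch request, data
 * is transferred directly from the higher level memory to the lower
 * level memory when the direct bus is free, and otherwise is routed
 * through the intervening memory level as usual. */
enum path { PATH_DIRECT, PATH_THROUGH_INTERVENING };

/* Stand-in for a hardware bus-availability check. */
bool direct_bus_available(void)
{
    /* In hardware this would sample an arbitration/busy signal;
     * here we simply assume the bus is idle. */
    return true;
}

enum path handle_fetch_request(unsigned address)
{
    if (direct_bus_available()) {
        printf("fetch 0x%x: bypassing intervening memory, direct transfer\n", address);
        return PATH_DIRECT;
    }
    printf("fetch 0x%x: bus busy, staging data in intervening memory\n", address);
    return PATH_THROUGH_INTERVENING;
}

int main(void)
{
    handle_fetch_request(0x1000u);
    return 0;
}
```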
Abstract:
A cache includes a cache pipeline, a request receiver configured to receive off-chip coherency requests from an off-chip cache, and a plurality of state machines coupled to the request receiver. The cache also includes an arbiter coupled between the plurality of state machines and the cache pipeline and configured to give priority to off-chip coherency requests, as well as a counter configured to count the number of coherency requests sent from the cache pipeline to a lower level cache. The cache pipeline is halted from sending coherency requests when the counter exceeds a predetermined limit.
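The counter-based throttle could look roughly like the following in C, with an assumed limit of 8 outstanding requests and illustrative names; the arbiter that prioritizes off-chip requests is omitted for brevity.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical sketch of the throttling counter: the cache pipeline may
 * send coherency requests to the lower level cache only until the count
 * of requests exceeds a predetermined limit. */
#define COHERENCY_REQUEST_LIMIT 8

static unsigned coherency_request_count;

/* Called when the pipeline wants to send a coherency request downstream. */
bool try_send_coherency_request(void)
{
    if (coherency_request_count > COHERENCY_REQUEST_LIMIT)
        return false;                /* pipeline is halted from sending */
    coherency_request_count++;       /* count the request */
    return true;
}

/* Called when the lower level cache acknowledges completion. */
void coherency_request_done(void)
{
    if (coherency_request_count > 0)
        coherency_request_count--;
}

int main(void)
{
    unsigned sent = 0, halted = 0;
    for (int i = 0; i < 12; i++) {
        if (try_send_coherency_request())
            sent++;
        else
            halted++;
    }
    printf("sent %u requests, halted %u after the counter exceeded the limit\n",
           sent, halted);
    return 0;
}
```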
Abstract:
A memory refresh requestor, a memory request interpreter, a cache memory, and a cache controller are provided on a single chip. The cache controller is configured to receive a memory access request for a memory address range in the cache memory, detect that the cache memory located at the memory address range is available, and send the memory access request to the memory request interpreter when the memory address range is available. The memory request interpreter is configured to receive the memory access request from the cache controller, determine whether the memory access request is a request to refresh the contents of the memory address range, and refresh the data in the memory address range when the memory access request is a request to refresh memory.
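A minimal sketch of the controller-to-interpreter hand-off, assuming a simple request struct and a stubbed availability check; all names and the address range are illustrative.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical sketch of the request flow: the cache controller forwards
 * a memory access request to the memory request interpreter only when
 * the addressed range of the cache memory is available; the interpreter
 * then refreshes the range if the request is a refresh request. */
enum request_type { REQ_READ, REQ_WRITE, REQ_REFRESH };

struct memory_request {
    enum request_type type;
    unsigned start_addr;
    unsigned end_addr;
};

/* Stand-in for the controller's availability check on the address range. */
static bool range_available(unsigned start, unsigned end)
{
    (void)start;
    (void)end;
    return true;   /* assume no conflicting access is in flight */
}

/* Memory request interpreter: refreshes the range for refresh requests. */
static void interpret_request(const struct memory_request *req)
{
    if (req->type == REQ_REFRESH)
        printf("refreshing contents of 0x%x-0x%x\n",
               req->start_addr, req->end_addr);
    else
        printf("servicing read/write at 0x%x-0x%x\n",
               req->start_addr, req->end_addr);
}

/* Cache controller: gate on availability, then hand off to the interpreter. */
void cache_controller_handle(const struct memory_request *req)
{
    if (!range_available(req->start_addr, req->end_addr)) {
        printf("range 0x%x-0x%x busy, request deferred\n",
               req->start_addr, req->end_addr);
        return;
    }
    interpret_request(req);
}

int main(void)
{
    struct memory_request refresh = { REQ_REFRESH, 0x2000u, 0x2FFFu };
    cache_controller_handle(&refresh);
    return 0;
}
```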
Abstract:
A method of debugging an embedded dynamic random access memory (eDRAM) element of a processor core is provided. An aspect includes, based on an error occurring in the eDRAM element, stopping a functional clock while not stopping a refresh clock. Another aspect includes, based on the functional clock being stopped, creating a fence signal that prevents all commands other than a refresh command, the refresh command being based on the refresh clock, from entering the eDRAM element. Another aspect includes initializing a line fetch controller of the processor core with at least one of write data and read data. Another aspect includes restarting the functional clock. Another aspect includes performing at least one of write requests and read requests to the eDRAM element, based on the functional clock, using the at least one of the write data and the read data from the line fetch controller.
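The debug sequence could be sketched as the state transitions below; the struct fields and functions are assumptions chosen to mirror the steps in the abstract, not the actual hardware interface.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical sketch of the debug sequence: on an eDRAM error the
 * functional clock stops while the refresh clock keeps running, a fence
 * blocks everything except refresh commands, the line fetch controller
 * is seeded with debug data, and the functional clock is restarted. */
enum command { CMD_REFRESH, CMD_READ, CMD_WRITE };

struct edram_debug_state {
    bool functional_clock_running;
    bool refresh_clock_running;
    bool fence_active;        /* blocks all commands except refresh */
    int  line_fetch_data;     /* data staged in the line fetch controller */
};

/* Only refresh commands pass while the fence is active. */
bool command_allowed(const struct edram_debug_state *s, enum command c)
{
    if (s->fence_active && c != CMD_REFRESH)
        return false;
    return s->functional_clock_running || c == CMD_REFRESH;
}

void on_edram_error(struct edram_debug_state *s)
{
    s->functional_clock_running = false;  /* stop the functional clock ...        */
    /* ... but leave refresh_clock_running untouched so cells keep their contents */
    s->fence_active = true;               /* fence off non-refresh commands       */
}

void resume_with_debug_data(struct edram_debug_state *s, int seed)
{
    s->line_fetch_data = seed;            /* initialize the line fetch controller */
    s->functional_clock_running = true;   /* restart the functional clock         */
    s->fence_active = false;              /* reads/writes may proceed again       */
}

int main(void)
{
    struct edram_debug_state s = { true, true, false, 0 };

    on_edram_error(&s);
    printf("write allowed during fence:   %d\n", command_allowed(&s, CMD_WRITE));
    printf("refresh allowed during fence: %d\n", command_allowed(&s, CMD_REFRESH));

    resume_with_debug_data(&s, 0xAB);
    printf("write allowed after restart:  %d\n", command_allowed(&s, CMD_WRITE));
    return 0;
}
```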
Abstract:
A system for controlling refresh request transmission rates in a cache includes: a refresh requestor configured to transmit a refresh request to a cache memory at a first refresh rate, the first refresh rate comprising an interval defined by receipt of a plurality of first signals, the first refresh rate corresponding to a maximum refresh rate; and a refresh counter operatively coupled to the refresh requestor and configured to reset in response to receiving a second signal, increment in response to receiving each of a plurality of refresh requests from the refresh requestor, and reset and transmit a current count to the refresh requestor in response to receiving a third signal, wherein the refresh requestor is configured to transmit a refresh request at a second refresh rate in response to receiving the current count from the refresh counter and determining that the current count is greater than a refresh threshold.
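A small C sketch of the refresh counter's reset/increment/report behavior, with the second and third signals modeled as function calls; the names are illustrative, not taken from the design.

```c
#include <stdio.h>

/* Hypothetical sketch of the refresh counter bookkeeping: the counter
 * resets on a "second signal", increments for every refresh request it
 * sees from the refresh requestor, and on a "third signal" resets again
 * and reports the current count back to the requestor. */
struct refresh_counter {
    unsigned count;
};

void counter_on_second_signal(struct refresh_counter *c)
{
    c->count = 0;                         /* reset the counting window */
}

void counter_on_refresh_request(struct refresh_counter *c)
{
    c->count++;                           /* one more refresh this window */
}

/* Returns the count for the window and starts a new one. */
unsigned counter_on_third_signal(struct refresh_counter *c)
{
    unsigned current = c->count;
    c->count = 0;
    return current;                       /* reported to the refresh requestor */
}

int main(void)
{
    struct refresh_counter rc = { 0 };

    counter_on_second_signal(&rc);
    for (int i = 0; i < 5; i++)
        counter_on_refresh_request(&rc);

    printf("count reported to requestor: %u\n", counter_on_third_signal(&rc));
    return 0;
}
```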
Abstract:
Dynamic pipeline cache error correction includes receiving a request to perform an operation that requires a storage cache slot, the storage cache slot residing in a cache. The dynamic pipeline cache error correction also includes accessing the storage cache slot, determining a cache hit for the storage cache slot, and identifying and correcting any correctable soft errors associated with the storage cache slot. The dynamic pipeline cache error correction further includes updating the cache with the corrected data.
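As a stand-in for the correction step, the sketch below majority-votes three copies of a slot; real caches typically use ECC rather than redundancy, so this only shows the access, correct, and write-back flow, and all names and sizes are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch of correcting a soft error on a cache hit.  Three
 * redundant copies of the slot's byte are kept and majority-voted,
 * which corrects any single flipped copy; the corrected value is then
 * written back so the cache is updated with the corrected data. */
struct cache_slot {
    uint8_t copy[3];     /* triple-redundant storage for the slot's byte */
};

/* Bitwise majority vote across the three copies. */
static uint8_t majority(uint8_t a, uint8_t b, uint8_t c)
{
    return (a & b) | (a & c) | (b & c);
}

/* Access on a cache hit: correct any correctable soft error and update
 * the cache with the corrected data. */
uint8_t access_slot(struct cache_slot *s)
{
    uint8_t corrected = majority(s->copy[0], s->copy[1], s->copy[2]);

    /* Write the corrected value back so the error does not persist. */
    s->copy[0] = s->copy[1] = s->copy[2] = corrected;
    return corrected;
}

int main(void)
{
    struct cache_slot slot = { { 0x5A, 0x5A, 0x5A } };

    slot.copy[1] ^= 0x08;   /* inject a single-bit soft error */
    printf("corrected value: 0x%02X\n", access_slot(&slot));
    return 0;
}
```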
Abstract:
Embodiments relate to embedded Dynamic Random Access Memory (eDRAM) refresh rates in a high performance cache architecture. An aspect includes receiving a plurality of first signals. A refresh request is transmitted via a refresh requestor to a cache memory at a first refresh rate, the first refresh rate having an interval that includes a subset of the first signals. The first refresh rate corresponds to a maximum refresh rate. A refresh counter is reset based on receiving a second signal. The refresh counter is incremented after receiving each of a number of refresh requests. A current count is transmitted from the refresh counter to the refresh requestor based on receiving a third signal. The refresh request is transmitted at a second refresh rate, which is less than the first refresh rate, based on receiving the current count from the refresh counter and determining that the current count is greater than a refresh threshold.
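Complementing the counter sketch above, the following is a rough model of the rate decision itself; the interval lengths and threshold value are assumptions chosen only for illustration.

```c
#include <stdio.h>

/* Hypothetical sketch of the rate decision: the requestor normally
 * issues refreshes at the maximum (first) rate, i.e. one refresh every
 * MAX_RATE_INTERVAL first signals, and falls back to the slower (second)
 * rate once the reported count exceeds the refresh threshold. */
#define MAX_RATE_INTERVAL   4u   /* first refresh rate: refresh every 4 first signals */
#define SLOW_RATE_INTERVAL 16u   /* second, lower refresh rate: longer interval        */
#define REFRESH_THRESHOLD   8u

/* Choose the interval (in first signals) for the next refresh window,
 * given the current count reported by the refresh counter. */
unsigned next_refresh_interval(unsigned reported_count)
{
    if (reported_count > REFRESH_THRESHOLD)
        return SLOW_RATE_INTERVAL;    /* enough refreshes happened: slow down */
    return MAX_RATE_INTERVAL;         /* stay at the maximum refresh rate     */
}

int main(void)
{
    printf("interval when count = 3:  %u first signals\n", next_refresh_interval(3));
    printf("interval when count = 12: %u first signals\n", next_refresh_interval(12));
    return 0;
}
```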