Abstract:
A stream of data is accessed from a memory system using a stream of addresses generated in a first mode of operating a streaming engine in response to executing a first stream instruction. A block cache preload operation is performed on a cache in the memory system using a block of addresses generated in a second mode of operating the streaming engine in response to executing a second stream instruction.
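A minimal sketch of the two operating modes follows, assuming a simple linear stream template and 64-byte cache lines; the names, the address iterator, and the line size are illustrative assumptions, not details from the abstract:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical operating modes of the streaming engine's address generator. */
typedef enum { SE_MODE_DATA_STREAM, SE_MODE_BLOCK_PRELOAD } se_mode_t;

/* Hypothetical stream template: base address, element count, stride in bytes. */
typedef struct { uint64_t base; size_t count; size_t stride; } se_stream_t;

/* Stand-ins for the memory-system operations the two modes would drive. */
static void fetch_data(uint64_t addr)   { printf("fetch   0x%llx\n", (unsigned long long)addr); }
static void preload_line(uint64_t addr) { printf("preload 0x%llx\n", (unsigned long long)addr); }

/* One generator walks the same address sequence; the mode selects whether each
 * address feeds a data access or a cache preload of the containing line. */
static void se_run(const se_stream_t *s, se_mode_t mode)
{
    for (size_t i = 0; i < s->count; i++) {
        uint64_t addr = s->base + i * s->stride;
        if (mode == SE_MODE_DATA_STREAM)
            fetch_data(addr);
        else
            preload_line(addr & ~63ull);   /* assume 64-byte cache lines */
    }
}

int main(void)
{
    se_stream_t s = { 0x8000, 8, 64 };
    se_run(&s, SE_MODE_DATA_STREAM);   /* first stream instruction: data stream    */
    se_run(&s, SE_MODE_BLOCK_PRELOAD); /* second stream instruction: block preload */
    return 0;
}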
Abstract:
A stream of data is accessed from a memory system using a stream of addresses generated in a first mode of operating a streaming engine in response to executing a first stream instruction. A block cache management operation is performed on a cache in the memory system using a block of addresses generated in a second mode of operating the streaming engine in response to executing a second stream instruction.
Abstract:
A prefetch unit generates a prefetch address in response to an address associated with a memory read request received from the first or second cache. The prefetch unit includes a prefetch buffer that is arranged to store the prefetch address in an address buffer of a selected slot of the prefetch buffer, where each slot of the prefetch buffer includes a buffer for storing a prefetch address and two sub-slots. Each sub-slot includes a data buffer for storing data that is prefetched using the prefetch address stored in the slot, and one of the two sub-slots of the slot is selected in response to a portion of the generated prefetch address. A subsequent hit on the prefetch unit returns the prefetched data to the requestor in response to a memory read request received after the initial memory read request.
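The slot and sub-slot organization can be pictured with the following C sketch; the slot count, line size, and the choice of address bit used for sub-slot selection are assumptions made for illustration only:

#include <stdint.h>
#include <stdbool.h>

/* Illustrative sizes only; the abstract does not give widths or slot counts. */
#define PF_SLOTS      8
#define PF_LINE_BYTES 64

/* Each sub-slot holds data prefetched with the slot's address. */
typedef struct {
    bool    valid;
    uint8_t data[PF_LINE_BYTES];
} pf_subslot_t;

/* Each slot pairs one prefetch-address buffer with two sub-slots. */
typedef struct {
    bool         addr_valid;
    uint64_t     addr;      /* address buffer of the slot */
    pf_subslot_t sub[2];    /* two sub-slots per slot     */
} pf_slot_t;

typedef struct { pf_slot_t slot[PF_SLOTS]; } pf_buffer_t;

/* A portion of the generated prefetch address selects one of the two sub-slots;
 * here that portion is taken to be the line-granularity bit (an assumption). */
static unsigned pf_subslot_select(uint64_t prefetch_addr)
{
    return (prefetch_addr >> 6) & 1u;
}

/* Store a generated prefetch address in a selected slot; a later hit on this
 * address returns the sub-slot data directly to the requestor. */
static void pf_allocate(pf_buffer_t *pf, unsigned slot, uint64_t prefetch_addr)
{
    pf_slot_t *s = &pf->slot[slot];
    unsigned sel = pf_subslot_select(prefetch_addr);
    s->addr_valid = true;
    s->addr = prefetch_addr;
    s->sub[sel].valid = false;  /* data buffer fills when the prefetch returns */
}

int main(void)
{
    pf_buffer_t pf = {0};
    pf_allocate(&pf, 0, 0x1040);   /* address bit 6 set: selects sub-slot 1 of slot 0 */
    return 0;
}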
Abstract:
This invention implements a range of technologies in a single block. Each DSP CPU has a streaming engine. The streaming engines include: an SE to L2 interface that can request 512 bits/cycle from L2; a loose binding between the SE and L2 interface to allow a single stream to peak at 1024 bits/cycle; one-way coherence, where the SE sees all earlier writes cached in the system but not writes that occur after the stream opens; and full protection against single-bit data errors within its internal storage via single-bit parity with semi-automatic restart on parity error.
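As a hedged illustration of the parity protection mentioned above, the sketch below computes single-bit (even) parity over a 64-bit storage word, which is an assumed width, and flags a restart when a stored word no longer matches its parity bit:

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Even parity over a 64-bit internal-storage word (the width is an assumption). */
static unsigned parity64(uint64_t w)
{
    w ^= w >> 32; w ^= w >> 16; w ^= w >> 8;
    w ^= w >> 4;  w ^= w >> 2;  w ^= w >> 1;
    return (unsigned)(w & 1u);
}

/* On a parity mismatch, the stored word cannot be trusted; "semi-automatic
 * restart" is modeled here simply as signaling that the stream must be
 * re-fetched rather than consumed from internal storage. */
static bool check_and_flag_restart(uint64_t stored, unsigned stored_parity)
{
    if (parity64(stored) != stored_parity) {
        fprintf(stderr, "parity error: restart stream fetch\n");
        return true;   /* restart required */
    }
    return false;      /* word is usable */
}

int main(void)
{
    uint64_t w = 0xDEADBEEF;
    unsigned p = parity64(w);
    check_and_flag_restart(w, p);        /* clean word       */
    check_and_flag_restart(w ^ 1u, p);   /* single-bit error */
    return 0;
}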
Abstract:
This invention mitigates these deadlocking issues by adding a separate non-blocking pipeline for snoop returns. This separate pipeline is not blocked behind coherent requests. This invention also repartitions the master-initiated traffic to move cache evictions (both with and without data) and non-coherent writes to the new non-blocking channel. This non-blocking pipeline removes the need for any coherent requests to complete before the snoop request can reach the memory controller. Repartitioning cache-initiated evictions to the non-blocking pipeline prevents deadlock when a snoop and an eviction occur concurrently. The non-blocking channel of this invention combines snoop responses from memory-controller-initiated requests with master-initiated evictions and non-coherent writes.
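A rough model of the repartitioning is sketched below; the transaction classes are taken from the abstract, but the routing function itself is only an illustrative assumption:

#include <stdio.h>

/* Master-initiated transaction classes named in the abstract. */
typedef enum {
    COHERENT_READ,
    COHERENT_WRITE,
    SNOOP_RESPONSE,     /* response to a memory-controller-initiated snoop */
    CACHE_EVICTION,     /* with or without data */
    NONCOHERENT_WRITE
} txn_t;

typedef enum { CHAN_BLOCKING, CHAN_NONBLOCKING } chan_t;

/* Repartitioning described in the abstract: snoop responses, evictions, and
 * non-coherent writes share the separate non-blocking pipeline, so none of
 * them can stall behind an outstanding coherent request. */
static chan_t route(txn_t t)
{
    switch (t) {
    case SNOOP_RESPONSE:
    case CACHE_EVICTION:
    case NONCOHERENT_WRITE:
        return CHAN_NONBLOCKING;
    default:
        return CHAN_BLOCKING;
    }
}

int main(void)
{
    printf("eviction -> %s\n",
           route(CACHE_EVICTION) == CHAN_NONBLOCKING ? "non-blocking" : "blocking");
    return 0;
}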
Abstract:
To enable efficient tracking of transactions, an acknowledgement expected signal is used to give the cache coherent interconnect a hint as to whether a transaction requires coherent ownership tracking. This signal informs the cache coherent interconnect to expect an ownership transfer acknowledgement signal from the initiating master upon read/write transfer completion. The cache coherent interconnect therefore continues tracking the transaction at its point of coherency until it receives the acknowledgement from the initiating master, but does so only when that tracking is necessary.
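One way to picture the tracking rule is the hedged sketch below; the field names and the release condition are modeled from the abstract, not taken from any specification:

#include <stdbool.h>
#include <stdio.h>

/* Per-transaction tracking state held at the interconnect's point of coherency. */
typedef struct {
    bool ack_expected;   /* hint from the initiating master                  */
    bool data_done;      /* read/write transfer completed                    */
    bool ack_received;   /* ownership-transfer acknowledgement has been seen */
} cc_txn_t;

/* Tracking may be released at transfer completion unless the hint says an
 * ownership-transfer acknowledgement must still arrive from the master. */
static bool tracking_can_release(const cc_txn_t *t)
{
    if (!t->data_done)
        return false;
    return t->ack_expected ? t->ack_received : true;
}

int main(void)
{
    cc_txn_t t = { .ack_expected = true, .data_done = true, .ack_received = false };
    printf("release? %d\n", tracking_can_release(&t));  /* 0: still waiting for ack */
    t.ack_received = true;
    printf("release? %d\n", tracking_can_release(&t));  /* 1: tracking may end      */
    return 0;
}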
Abstract:
The MSMC (Multicore Shared Memory Controller) described is a module designed to manage traffic between multiple processor cores, other mastering peripherals or DMA, and the EMIF (External Memory InterFace) in a multicore SoC. Each processor has an associated return buffer that allows out-of-order responses of memory read data and cache snoop responses to ensure maximum bandwidth at the endpoints, and all endpoints receive status messages to simplify return queue management.
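A simplified sketch of one per-processor return buffer follows; the buffer depth, line size, and the use of a transaction ID to match out-of-order responses are assumptions made for illustration:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Illustrative sizes; the abstract does not specify buffer depth or line size. */
#define RB_ENTRIES    16
#define RB_LINE_BYTES 64

/* One entry per outstanding request; responses (read data or snoop responses)
 * may complete in any order and are matched to entries by transaction ID. */
typedef struct {
    bool    valid;
    bool    is_snoop;               /* read return vs. cache snoop response */
    uint8_t data[RB_LINE_BYTES];
} rb_entry_t;

/* One return buffer per processor core. */
typedef struct { rb_entry_t e[RB_ENTRIES]; } return_buffer_t;

/* Record an out-of-order response when it arrives from an endpoint. */
static void rb_complete(return_buffer_t *rb, unsigned txn_id,
                        bool is_snoop, const uint8_t *line)
{
    rb_entry_t *ent = &rb->e[txn_id % RB_ENTRIES];
    ent->valid = true;
    ent->is_snoop = is_snoop;
    memcpy(ent->data, line, RB_LINE_BYTES);
}

int main(void)
{
    return_buffer_t rb = {0};
    uint8_t line[RB_LINE_BYTES] = {0};
    rb_complete(&rb, 7, false, line);  /* read data for transaction 7 arrives first   */
    rb_complete(&rb, 3, true,  line);  /* snoop response for transaction 3 comes later */
    return 0;
}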
Abstract:
The MSMC (Multicore Shared Memory Controller) described is a module designed to manage traffic between multiple processor cores, other mastering peripherals or DMA, and the EMIF (External Memory InterFace) in a multicore SoC. The invention unifies all transaction sizes belonging to a slave prior to arbitrating the transactions, in order to reduce the complexity of the arbitration process and to provide optimum bandwidth management among all masters. Two consecutive slots are assigned per cache line access to automatically guarantee the atomicity of all transactions within a single cache line. The need for synchronization among all the banks of a particular SRAM is eliminated, as synchronization is accomplished by assigning back-to-back slots.
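The back-to-back slot assignment can be shown with a small sketch; the slot numbering scheme below is an assumed illustration of giving each cache line access two consecutive arbitration slots:

#include <stdio.h>

/* Each cache-line access receives two consecutive arbitration slots, so both
 * halves of the line complete back to back and the access is atomic without
 * cross-bank synchronization. The numbering here is purely illustrative. */
static void assign_slots(unsigned access_index, unsigned *slot0, unsigned *slot1)
{
    *slot0 = 2u * access_index;       /* first half of the cache line      */
    *slot1 = 2u * access_index + 1u;  /* second half, guaranteed to follow */
}

int main(void)
{
    for (unsigned i = 0; i < 3; i++) {
        unsigned a, b;
        assign_slots(i, &a, &b);
        printf("cache line access %u -> slots %u,%u\n", i, a, b);
    }
    return 0;
}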
Abstract:
This invention combines a multicore shared memory controller and an asynchronous protocol-converting bridge to create a very efficient heterogeneous multi-processor system. After traversing the protocol-converting bridge, commands travel through the regular processor port. This allows the interconnect to remain unchanged while supporting any combination of different processors. This invention tightly integrates all of the processors into the same memory controller/interconnect.
Abstract:
This invention speeds operation for coherence writes to shared memory. This invention immediately commits coherence write data to the memory endpoint. Thus this data is available earlier than if the memory controller stalled the write pending snoop responses. This invention computes write enable strobes for the coherence write data based upon the cache dirty tags. This invention initiates a snoop cycle based upon the address of the coherence write. The stored write enable strobes enable determination of which data to write to the endpoint memory upon a cached-and-dirty snoop response.
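One reading of how the stored strobes are used is sketched below, under the assumption of 64-byte lines and one byte-enable bit per byte: bytes already committed by the coherence write are skipped when dirty snoop data is merged into the endpoint memory.

#include <stdint.h>
#include <stdio.h>

#define LINE_BYTES 64

/* Byte-enable strobes for one cache line: bit i set means byte i was already
 * written by the coherence write committed to the memory endpoint. */
typedef uint64_t strobes_t;

/* Hedged model: when the snoop response returns the line cached and dirty,
 * only the bytes NOT covered by the stored strobes are merged into memory,
 * so the earlier committed coherence write data is not overwritten. */
static void merge_snoop_data(uint8_t *endpoint_line,
                             const uint8_t *snoop_line,
                             strobes_t committed_strobes)
{
    for (unsigned i = 0; i < LINE_BYTES; i++)
        if (!((committed_strobes >> i) & 1u))
            endpoint_line[i] = snoop_line[i];
}

int main(void)
{
    uint8_t endpoint[LINE_BYTES] = {0}, snoop[LINE_BYTES];
    for (unsigned i = 0; i < LINE_BYTES; i++)
        snoop[i] = (uint8_t)i;
    merge_snoop_data(endpoint, snoop, 0xFFull);  /* first 8 bytes already written */
    printf("byte 0=%u byte 8=%u\n", endpoint[0], endpoint[8]);  /* prints 0 and 8 */
    return 0;
}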