Abstract:
This invention implements a range of technologies in a single block. Each DSP CPU has a streaming engine. The streaming engines include: an SE-to-L2 interface that can request 512 bits/cycle from L2; a loose binding between the SE and the L2 interface that allows a single stream to peak at 1024 bits/cycle; one-way coherence in which the SE sees all writes cached in the system before the stream opens, but not writes that occur after the stream opens; and full protection against single-bit data errors within its internal storage via single-bit parity with semi-automatic restart on parity error.
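A minimal C sketch of the parity protection described above, assuming per-word even parity over the SE's internal storage; the names and the software-style restart are stand-ins for hardware logic:

    #include <stdint.h>

    /* Hypothetical model of one parity-protected word of SE internal storage. */
    typedef struct {
        uint64_t data;
        uint8_t  parity;   /* single parity bit covering all 64 data bits */
    } se_word_t;

    /* Even parity of a 64-bit word, folded down to one bit. */
    static uint8_t parity64(uint64_t v) {
        v ^= v >> 32; v ^= v >> 16; v ^= v >> 8;
        v ^= v >> 4;  v ^= v >> 2;  v ^= v >> 1;
        return (uint8_t)(v & 1);
    }

    /* Placeholder for re-requesting the word from L2 after a parity error. */
    static uint64_t refetch_from_l2(uint64_t address) {
        return address;   /* dummy payload; real hardware re-reads L2 */
    }

    /* Write path: parity is computed once and stored alongside the data. */
    static void se_store(se_word_t *w, uint64_t data) {
        w->data = data;
        w->parity = parity64(data);
    }

    /* Read path: a mismatch triggers the semi-automatic restart. */
    uint64_t se_load(se_word_t *w, uint64_t address) {
        if (parity64(w->data) != w->parity)
            se_store(w, refetch_from_l2(address));
        return w->data;
    }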
Abstract:
A power supply for an electronic circuit enables a low effort retention mode. During a normal mode, a circuit module is supplied a first voltage sufficient for a controlled circuit to operate. During the low effort retention mode, the circuit module is supplied a second voltage lower than the first voltage. The second voltage is sufficient for flip-flops to retain their state but not sufficient to guarantee proper circuit operation. The second voltage is produced by a voltage drop (droop) from the first voltage. The preferred embodiment includes a system on chip (SoC), one external voltage regulator, and an on-chip droop circuit for each circuit module.
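A first-order numeric sketch in C of how a droop circuit could produce the second voltage, assuming a simple IR drop model; the structure and numbers are illustrative, not from the source:

    #include <stdio.h>

    /* Hypothetical first-order model of the droop circuit: in retention mode
     * the module sees the regulator voltage minus an IR drop across a droop
     * element. */
    typedef struct {
        double v_regulator;   /* first voltage, volts (normal mode supply) */
        double r_droop_ohms;  /* resistance of the on-chip droop circuit   */
        double i_leak_amps;   /* module leakage current during retention   */
    } droop_model_t;

    /* Second voltage produced in low effort retention mode. */
    double retention_voltage(const droop_model_t *m) {
        return m->v_regulator - m->i_leak_amps * m->r_droop_ohms;
    }

    int main(void) {
        /* Illustrative numbers only. */
        droop_model_t m = { .v_regulator = 1.0, .r_droop_ohms = 20.0,
                            .i_leak_amps = 0.010 };
        printf("retention voltage: %.2f V\n", retention_voltage(&m));
        return 0;   /* prints 0.80 V: enough to hold state, not to operate */
    }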
Abstract:
In an embodiment of the invention, an integrated circuit includes a pipelined memory array and a memory control circuit. The pipelined memory array contains a plurality of memory banks. The memory control circuit is configured to select the number of clock cycles of read latency based in part on read access time information for a memory bank.
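A one-line C sketch of that selection, assuming the choice reduces to rounding the bank's access time up to whole clock cycles (an assumption; the source only says the selection is based partially on access time):

    #include <stdint.h>

    /* Hypothetical latency selection: round the bank's read access time up
     * to a whole number of clock cycles (ceiling division). */
    uint32_t read_latency_cycles(uint32_t t_access_ps, uint32_t t_clk_ps) {
        return (t_access_ps + t_clk_ps - 1) / t_clk_ps;
    }

    /* e.g. a 3800 ps bank read at a 1250 ps clock -> 4 cycles of latency */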
Abstract:
One example includes an integrated circuit (IC). The IC includes non-volatile memory and logic. The logic is configured to receive repair code associated with a memory instance and assign a compression parameter to the repair code based on a configuration of the memory instance. The logic is also configured to compress the repair code based on the compression parameter to produce compressed repair code and to provide compressed repair data that includes the compressed repair code and compression control data that identifies the compression parameter. A non-volatile memory controller is coupled between the non-volatile memory and the logic. The non-volatile memory controller is configured to transfer the compressed repair data to and/or from the non-volatile memory.
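A minimal C sketch of one plausible compression scheme, assuming the parameter is simply the number of address bits a given memory instance needs; the scheme and all names are illustrative, not taken from the source:

    #include <stdint.h>

    /* Hypothetical compression: the parameter is the number of significant
     * bits retained from the repair code, chosen from the memory instance's
     * geometry. */
    typedef struct {
        uint8_t  param_bits;   /* compression control data: retained bit count */
        uint32_t payload;      /* compressed repair code                       */
    } compressed_repair_t;

    /* Assign a compression parameter from the instance configuration: a
     * repair address for a memory with n rows needs only ceil(log2(n)) bits. */
    static uint8_t assign_param(uint32_t instance_rows) {
        uint8_t bits = 1;
        while ((1u << bits) < instance_rows) bits++;
        return bits;
    }

    /* Compress one repair code and bundle it with its control data, forming
     * the compressed repair data handed to the non-volatile memory side. */
    compressed_repair_t compress_repair(uint32_t repair_code,
                                        uint32_t instance_rows) {
        compressed_repair_t out;
        out.param_bits = assign_param(instance_rows);
        out.payload = repair_code & ((1u << out.param_bits) - 1);
        return out;
    }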
Abstract:
In described examples, an SoC includes at least two voltage domains interconnected with a communication bus. Detection logic in a first voltage domain determines when a voltage error occurs in a second voltage domain and isolates communication via the communication bus when a voltage error or a timing error is detected.
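A small C sketch of the isolation behavior, assuming a per-word gating cell driven by the detection logic; the safe idle value and names are hypothetical:

    #include <stdbool.h>
    #include <stdint.h>

    /* Status reported by the first domain's detection logic about domain 2. */
    typedef struct {
        bool voltage_error;
        bool timing_error;
    } domain_status_t;

    /* Gate one bus word: drive a safe idle value while the second domain's
     * voltage or timing is suspect, isolating the communication bus. */
    uint32_t isolate_bus(uint32_t bus_word, const domain_status_t *s) {
        const uint32_t SAFE_IDLE = 0;   /* assumed idle pattern */
        if (s->voltage_error || s->timing_error)
            return SAFE_IDLE;           /* isolated: domain 2 data ignored */
        return bus_word;
    }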
Abstract:
Disclosed embodiments include a data processing apparatus having a processing core, a memory, and a streaming engine. The streaming engine is configured to receive a plurality of data elements stored in the memory and to provide the plurality of data elements as a data stream to the processing core, and includes an address generator to generate addresses corresponding to locations in the memory, a buffer to store the data elements received from the locations in the memory corresponding to the generated addresses, and an output to supply the data elements received from the memory to the processing core as the data stream.
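As a sketch of the address generator idea, the following C models a hypothetical two-dimensional stream descriptor (real streaming engines support more dimensions and element sizes):

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical two-level stream descriptor: icnt0 elements at stride
     * dim0, repeated icnt1 times at stride dim1 (strides in bytes). */
    typedef struct {
        uint64_t base;
        uint32_t icnt0, icnt1;
        int64_t  dim0,  dim1;
    } stream_desc_t;

    /* The address generator: emit the stream's address sequence into addrs[],
     * corresponding to the memory locations fed to the buffer. */
    size_t generate_addresses(const stream_desc_t *d,
                              uint64_t *addrs, size_t max) {
        size_t n = 0;
        for (uint32_t j = 0; j < d->icnt1 && n < max; j++)
            for (uint32_t i = 0; i < d->icnt0 && n < max; i++)
                addrs[n++] = d->base + (int64_t)j * d->dim1
                                     + (int64_t)i * d->dim0;
        return n;
    }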
Abstract:
This invention involves a particular cache hazard: an instruction request that misses in the cache may arrive while the cache system is servicing a pending prefetch of the same instructions. In the prior art, this hazard is detected by comparing the request addresses of all entries in a scoreboard. In this invention, the program memory controller stores the allocated way in the scoreboard and compares the allocated way of the demand request to the allocated way of each scoreboard entry. The cache hazard can occur only when the allocated ways match. Following the way compare, the demand request address is compared to the request addresses of only those scoreboard entries having matching ways. The other address comparators are not powered during this time, reducing the electrical power required to detect this cache hazard.
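A minimal C sketch of the two-stage compare, assuming an 8-entry scoreboard; all names are hypothetical, and the early continue models the gating that keeps unneeded address comparators unpowered:

    #include <stdint.h>
    #include <stdbool.h>

    #define SCOREBOARD_ENTRIES 8

    /* Hypothetical scoreboard entry for an in-flight prefetch. */
    typedef struct {
        bool     valid;
        uint8_t  way;       /* cache way allocated to the prefetch */
        uint64_t address;   /* request address of the prefetch     */
    } sb_entry_t;

    /* Detect the hazard: a demand miss for a line already being prefetched.
     * The cheap way compare runs first; the wider, costlier address
     * comparators are exercised only for entries whose allocated way
     * matches, which is the power saving the abstract describes. */
    int find_hazard(const sb_entry_t sb[SCOREBOARD_ENTRIES],
                    uint8_t demand_way, uint64_t demand_addr) {
        for (int i = 0; i < SCOREBOARD_ENTRIES; i++) {
            if (!sb[i].valid || sb[i].way != demand_way)
                continue;                 /* address comparator stays idle */
            if (sb[i].address == demand_addr)
                return i;                 /* hazard: same line in flight   */
        }
        return -1;                        /* no hazard */
    }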
Abstract:
This invention hides the page miss translation latency for program fetches. Whenever the CPU requests an access that crosses a memory page boundary, the L1I cache controller requests the next page translation along with the current page translation. This pipelines requests to the μTLB without waiting for the L1I cache controller to begin processing next page requests, making the second page translation request a deterministic prefetch. The translation information for the second page is stored locally in the L1I cache controller and used when the access crosses the next page boundary.
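A rough C model of the scheme, assuming 4 KiB pages and a single locally stored next-page translation; utlb_translate and its identity mapping are placeholders:

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SHIFT 12   /* assumed 4 KiB pages */

    /* Placeholder uTLB lookup; identity mapping stands in for real hardware. */
    static uint64_t utlb_translate(uint64_t vpage) {
        return vpage;
    }

    /* Locally stored translation for the page after the current one. */
    static struct { bool valid; uint64_t vpage, ppage; } next_page;

    /* Translate a fetch; when it crosses a page boundary, also request the
     * next page's translation now, so the second request is a deterministic
     * prefetch rather than a later stall. */
    uint64_t translate_fetch(uint64_t vaddr, uint32_t fetch_bytes) {
        uint64_t vpage = vaddr >> PAGE_SHIFT;
        uint64_t last  = (vaddr + fetch_bytes - 1) >> PAGE_SHIFT;
        uint64_t ppage = (next_page.valid && next_page.vpage == vpage)
                       ? next_page.ppage          /* use stored translation  */
                       : utlb_translate(vpage);   /* normal current lookup   */
        if (last != vpage) {                      /* crosses a page boundary */
            next_page.vpage = vpage + 1;
            next_page.ppage = utlb_translate(vpage + 1);
            next_page.valid = true;
        }
        return (ppage << PAGE_SHIFT) | (vaddr & ((1u << PAGE_SHIFT) - 1));
    }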
Abstract:
This invention is a data processing apparatus and method. Data is protected from corruption by generating an error correction code corresponding to the data. In this invention the data and the corresponding error correction code are carried forward to another set of registers without regenerating the error correction code and without using it for error detection or correction. Only later are error detection and correction actions taken. The differing data/error correction code registers may be in differing pipeline phases in the data processing apparatus. This invention forwards the error correction code with the data through the entire datapath that carries the data, providing error protection to the whole datapath without requiring extensive hardware or additional time.
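A minimal C sketch of carrying the code with the data, with single-bit parity standing in (as an assumption) for a real error correction code:

    #include <stdint.h>
    #include <stdbool.h>

    /* Data travels with the code generated once at datapath entry;
     * intermediate stages forward both fields unmodified, and checking
     * happens only when the value is finally consumed. */
    typedef struct {
        uint64_t data;
        uint8_t  ecc;
    } protected_word_t;

    static uint8_t gen_ecc(uint64_t v) {   /* placeholder code generator */
        v ^= v >> 32; v ^= v >> 16; v ^= v >> 8;
        v ^= v >> 4;  v ^= v >> 2;  v ^= v >> 1;
        return (uint8_t)(v & 1);
    }

    /* Generate the code once, at the start of the datapath. */
    protected_word_t ecc_attach(uint64_t data) {
        return (protected_word_t){ .data = data, .ecc = gen_ecc(data) };
    }

    /* A pipeline stage: forwards data and code without regenerating
     * or checking either. */
    protected_word_t pipeline_stage(protected_word_t w) {
        return w;   /* register-to-register copy; the code rides along */
    }

    /* Only at the point of use is the code checked. */
    bool ecc_check(protected_word_t w) {
        return gen_ecc(w.data) == w.ecc;
    }

Because intermediate stages never touch the code, a bit flipped anywhere along the datapath is still caught at the final check.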