Abstract:
This invention involves a cache system in a digital data processing apparatus including: a central processing unit core; a level one instruction cache; and a level two cache. The cache lines in the level two cache are twice the size of the cache lines in the level one instruction cache. The central processing unit core requests additional program instructions when needed via a request address. Upon a miss in the level one instruction cache that causes a hit in the upper half of a level two cache line, the level two cache supplies the upper half of that cache line to the level one instruction cache. On a following level two cache memory cycle, the level two cache supplies the lower half of the cache line to the level one instruction cache. This technique thus prefetches the lower half of the level two cache line while employing fewer resources than an ordinary prefetch.
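A minimal C sketch of the half-line return order described above, assuming 64-byte level one lines and 128-byte level two lines; the structure and function names are illustrative assumptions, not the patented implementation:

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define L1I_LINE_BYTES  64                   /* assumed L1I line size       */
#define L2_LINE_BYTES   (2 * L1I_LINE_BYTES) /* L2 lines are twice as wide  */

/* Hypothetical model of one L2 line split into two L1I-sized halves. */
typedef struct {
    uint8_t lower[L1I_LINE_BYTES]; /* first (lower-addressed) half  */
    uint8_t upper[L1I_LINE_BYTES]; /* second (upper-addressed) half */
} l2_line_t;

/*
 * On an L1I miss whose request address falls in the upper half of an
 * L2 line, return the upper half immediately and schedule the lower
 * half for the next L2 cycle -- an implicit prefetch that needs no
 * separate prefetch request.
 */
void l2_service_miss(const l2_line_t *line, uint32_t req_addr,
                     uint8_t out[L1I_LINE_BYTES], bool *prefetch_lower)
{
    bool upper_half = (req_addr & L1I_LINE_BYTES) != 0; /* bit 6 selects half */

    if (upper_half) {
        memcpy(out, line->upper, L1I_LINE_BYTES);
        *prefetch_lower = true;   /* supply lower half next L2 cycle */
    } else {
        memcpy(out, line->lower, L1I_LINE_BYTES);
        *prefetch_lower = false;
    }
}
```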
Abstract:
Disclosed embodiments provide a technique in which a memory controller determines whether a fetch address is a miss in an L1 cache and, when a miss occurs, allocates a way of the L1 cache, determines whether the allocated way matches a scoreboard entry of pending service requests, and, when such a match is found, determines whether a request address of the matching scoreboard entry matches the fetch address. When the matching scoreboard entry also has a request address matching the fetch address, the scoreboard entry is modified to a demand request.
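A C sketch of the scoreboard check described above, under assumed entry fields and scoreboard depth; names such as `upgrade_to_demand` are hypothetical:

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_SCOREBOARD_ENTRIES 8   /* assumed scoreboard depth */

/* Hypothetical scoreboard entry for a pending service request. */
typedef struct {
    bool     valid;
    uint32_t way;        /* L1 way the pending fill targets */
    uint32_t req_addr;   /* address of the pending request  */
    bool     is_demand;  /* false while still a prefetch    */
} sb_entry_t;

/*
 * After an L1 miss allocates a way, scan the scoreboard: if a pending
 * entry targets the same way AND carries the same request address,
 * upgrade it to a demand request instead of issuing a new one.
 * Returns true if an entry was upgraded.
 */
bool upgrade_to_demand(sb_entry_t sb[], uint32_t alloc_way, uint32_t fetch_addr)
{
    for (int i = 0; i < NUM_SCOREBOARD_ENTRIES; i++) {
        if (sb[i].valid && sb[i].way == alloc_way &&
            sb[i].req_addr == fetch_addr) {
            sb[i].is_demand = true;   /* promote the pending request */
            return true;
        }
    }
    return false;  /* no match: a fresh service request is needed */
}
```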
Abstract:
Disclosed embodiments include an electronic device having a processor core, a memory, a register, and a data load unit to receive a plurality of data elements stored in the memory in response to an instruction. All of the data elements have the same data size, which is specified by one or more coding bits. The data load unit includes an address generator to generate addresses corresponding to locations in the memory at which the data elements are located, and a formatting unit to format the data elements. The register is configured to store the formatted data elements, and the processor core is configured to receive the formatted data elements from the register.
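A C sketch of the address generation and formatting, assuming a unit-stride access pattern and a power-of-two size encoding (both assumptions, since the abstract does not specify them):

```c
#include <stdint.h>
#include <stddef.h>

/* Assumed encoding: the coding bits select the element size in bytes. */
typedef enum { ELEM_1B = 0, ELEM_2B = 1, ELEM_4B = 2, ELEM_8B = 3 } elem_size_t;

/* Address generator: element i lives at base + i * size (unit stride assumed). */
static inline uintptr_t gen_addr(uintptr_t base, size_t i, size_t elem_bytes)
{
    return base + i * elem_bytes;
}

/* Formatting unit sketch: gather count elements of the coded size into a
 * packed register image that the processor core then reads. */
void load_and_format(uintptr_t base, size_t count, elem_size_t code,
                     uint8_t reg_image[], size_t reg_bytes)
{
    size_t elem_bytes = (size_t)1 << code;   /* decode the coding bits */
    for (size_t i = 0; i < count && (i + 1) * elem_bytes <= reg_bytes; i++) {
        const uint8_t *src = (const uint8_t *)gen_addr(base, i, elem_bytes);
        for (size_t b = 0; b < elem_bytes; b++)
            reg_image[i * elem_bytes + b] = src[b]; /* pack into register */
    }
}
```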
Abstract:
In described examples, an SoC includes at least two voltage domains interconnected with a communication bus. Detection logic in a first voltage domain determines when a voltage error occurs in a second voltage domain and isolates communication via the communication bus when a voltage error or a timing error is detected.
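A minimal sketch of the isolation decision, with the status flags as assumed inputs to the first domain's detection logic:

```c
#include <stdbool.h>

/* Hypothetical status flags the detection logic samples each cycle. */
typedef struct {
    bool voltage_error;  /* second-domain supply out of range */
    bool timing_error;   /* cross-domain timing check failed  */
} domain_status_t;

/*
 * Detection logic in the first voltage domain: if the second domain
 * shows a voltage or timing error, isolate (gate off) the shared
 * communication bus until the error condition clears.
 */
bool bus_enabled(const domain_status_t *s)
{
    return !(s->voltage_error || s->timing_error);
}
```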
Abstract:
A power supply for an electronic circuit enables a low effort retention mode. During a normal mode a circuit module is supplied a first voltage sufficient for a controlled circuit to operate. During the low effort retention mode the circuit module is supplied with a second voltage lower than the first voltage. The second voltage is sufficient for flip-flops to retain their state but not sufficient to guarantee proper circuit operation. The second voltage is produced by a voltage drop (droop) from the first voltage. The preferred embodiment includes a System on Chip with one external voltage regulator and an on-chip droop circuit for each circuit module.
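A minimal sketch of the mode-to-voltage mapping; the millivolt values are placeholders for illustration, not values from the disclosure:

```c
/* Illustrative voltages; real values depend on the process and design. */
#define V_NORMAL_MV   1000  /* full operating voltage                  */
#define V_RETAIN_MV    700  /* drooped: flip-flops retain state only   */

typedef enum { MODE_NORMAL, MODE_RETENTION } power_mode_t;

/*
 * Per-module droop control: in retention mode the on-chip droop
 * circuit drops the externally regulated first voltage to a second,
 * lower voltage that preserves flip-flop state but does not guarantee
 * correct logic operation.
 */
int module_supply_mv(power_mode_t mode)
{
    return (mode == MODE_RETENTION) ? V_RETAIN_MV : V_NORMAL_MV;
}
```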
Abstract:
This invention implements a range of technologies in a single block. Each DSP CPU has a streaming engine (SE). The streaming engines include: an SE-to-L2 interface that can request 512 bits/cycle from L2; a loose binding between the SE and the L2 interface, allowing a single stream to peak at 1024 bits/cycle; one-way coherence in which the SE sees all earlier writes cached in the system, but not writes that occur after the stream opens; and full protection against single-bit data errors within its internal storage via single-bit parity, with semi-automatic restart on parity error.
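A minimal sketch of the parity protection mentioned last: one even-parity bit per 64-bit word of internal storage is an assumption for illustration, as is the restart callback:

```c
#include <stdint.h>
#include <stdbool.h>

/* Even parity over a 64-bit word of the streaming engine's internal
 * storage; one parity bit per word is assumed for illustration. */
static inline bool parity64(uint64_t w)
{
    w ^= w >> 32; w ^= w >> 16; w ^= w >> 8;
    w ^= w >> 4;  w ^= w >> 2;  w ^= w >> 1;
    return (bool)(w & 1);
}

/* On a parity mismatch the stream is restarted from the faulting
 * element rather than silently consuming corrupted data. */
bool check_word(uint64_t data, bool stored_parity, void (*restart_stream)(void))
{
    if (parity64(data) != stored_parity) {
        restart_stream();   /* semi-automatic restart on parity error */
        return false;
    }
    return true;
}
```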
Abstract:
A clock divider is provided that is configured to divide a high speed input clock signal by an odd, even or fractional divide ratio. The input clock may have a frequency of 1 GHz or higher, for example. The input clock signal is divided to produce an output clock signal by first receiving a divide factor value F representative of a divide ratio N, wherein N may be an odd or an even integer. A fractional indicator indicates the divide ratio is N.5 when the fractional indicator is one and indicates the divide ratio is N when the fractional indicator is zero. F is set to 2(N.5)/2 for a fractional divide ratio and to N/2 for an integer divide ratio. A count indicator is asserted every N/2 input clock cycles when N is even. The count indicator is asserted alternately after N/2 input clock cycles and then after 1+N/2 input clock cycles when N is odd. One period of an output clock signal is synthesized in response to each assertion of the count indicator when the fractional indicator indicates the divide ratio is N.5. One period of the output clock signal is synthesized in response to two assertions of the count indicator when the fractional indicator indicates the divide ratio is an integer.
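A behavioral C sketch of this counting scheme, not the patented circuit: it stores 2F so that half-integer divide factors are exact, fires the count indicator alternately after floor(F) and ceil(F) input cycles, and completes one output period per assertion (fractional ratio) or per two assertions (integer ratio). The 2F encoding and the function names are assumptions for illustration:

```c
#include <stdio.h>
#include <stdbool.h>

/*
 * twice_F holds 2*F:  integer ratio N  -> twice_F = N      (F = N/2)
 *                     fractional N.5   -> twice_F = 2N + 1 (F = N.5)
 * The count indicator fires after floor(F) and ceil(F) input cycles
 * alternately (the two intervals are equal when F is an integer).
 */
void simulate(int twice_F, bool fractional, int input_cycles)
{
    int lo = twice_F / 2, hi = (twice_F + 1) / 2; /* floor(F), ceil(F) */
    int counter = 0, interval = lo, asserts = 0, periods = 0;
    bool use_hi = false;

    for (int cyc = 1; cyc <= input_cycles; cyc++) {
        if (++counter == interval) {            /* count indicator fires */
            counter = 0;
            use_hi = !use_hi;
            interval = use_hi ? hi : lo;
            asserts++;
            /* fractional: one output period per assertion;
             * integer: one output period per two assertions. */
            if (fractional || (asserts % 2) == 0)
                printf("output period %d ends at input cycle %d\n",
                       ++periods, cyc);
        }
    }
}

int main(void)
{
    simulate(6, false, 18);  /* divide by 3:   periods end at 6, 12, 18  */
    simulate(7, true,  14);  /* divide by 3.5: periods end at 3, 7, 10, 14 */
    return 0;
}
```

With twice_F = 7 and the fractional flag set, the output periods alternate between 3 and 4 input cycles, averaging the 3.5 divide ratio.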
Abstract:
In an embodiment of the invention, an integrated circuit includes a pipelined memory array and a memory control circuit. The pipelined memory array contains a plurality of memory banks. Based partially on the read access time information of a memory bank, the memory control circuit is configured to select the number of clock cycles of read latency.
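A sketch of one plausible selection rule, choosing enough whole clock cycles to cover a bank's read access time; the ceiling-division rule and the picosecond units are assumptions, not taken from the disclosure:

```c
#include <stdint.h>

#define CLOCK_PERIOD_PS  1000   /* assumed 1 GHz clock, period in ps */

/*
 * Select the read latency (in clock cycles) for a bank from its read
 * access time: the smallest whole number of cycles covering the
 * access (ceiling division).  Faster banks get fewer latency cycles.
 */
uint32_t read_latency_cycles(uint32_t access_time_ps)
{
    return (access_time_ps + CLOCK_PERIOD_PS - 1) / CLOCK_PERIOD_PS;
}
```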