Abstract:
A nested loop controller includes a first register having a first value initialized to an initial first value, a second register having a second value initialized to an initial second value, and a third register configured as a predicate FIFO, initialized to have a third value. The second value is advanced in response to a tick instruction during execution of a loop. In response to the second value reaching a second threshold, the second register is reset to the initial second value. The nested loop controller further includes a comparator coupled to the second register and to the predicate FIFO and configured to provide an outer loop indicator value as input to the predicate FIFO when the second value is equal to the second threshold, and provide an inner loop indicator value as input to the predicate FIFO when the second value is not equal to the second threshold.
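A minimal software sketch of the counter-and-FIFO behavior described above, assuming a fixed-depth FIFO and hypothetical names (nlc_tick, PRED_INNER, PRED_OUTER); the hardware structure itself is not modeled.

#include <stdint.h>

#define FIFO_DEPTH 8
#define PRED_INNER 0u  /* inner loop indicator value (assumed encoding) */
#define PRED_OUTER 1u  /* outer loop indicator value (assumed encoding) */

typedef struct {
    uint32_t outer_count;        /* first register */
    uint32_t inner_count;        /* second register */
    uint32_t inner_threshold;    /* second threshold */
    uint8_t  fifo[FIFO_DEPTH];   /* third register: predicate FIFO */
    unsigned fifo_len;
} nested_loop_ctrl;

/* Advance the inner count on a tick instruction; the comparator pushes an
   outer or inner indicator into the predicate FIFO, and the inner count is
   reset to its initial value when it reaches the threshold. */
static void nlc_tick(nested_loop_ctrl *c)
{
    c->inner_count++;
    uint8_t pred = (c->inner_count == c->inner_threshold) ? PRED_OUTER : PRED_INNER;
    if (c->fifo_len < FIFO_DEPTH)
        c->fifo[c->fifo_len++] = pred;
    if (pred == PRED_OUTER) {
        c->inner_count = 0;   /* reset second register to initial second value */
        c->outer_count++;     /* advance the outer loop count */
    }
}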
Abstract:
Techniques related to executing a plurality of instructions by a processor include receiving a first instruction for execution on an instruction execution pipeline while the instruction execution pipeline is in a first execution mode, beginning execution of the first instruction on the instruction execution pipeline, receiving an execution mode instruction to switch the instruction execution pipeline to a second execution mode, switching the instruction execution pipeline to the second execution mode based on the received execution mode instruction, annulling the first instruction based on the execution mode instruction, receiving a second instruction for execution on the instruction execution pipeline, and executing the second instruction.
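The sequence can be pictured with a simple pipeline model; the structure and names below (pipeline_t, issue, switch_mode) are illustrative assumptions rather than the patented hardware.

#include <stdbool.h>
#include <stdio.h>

typedef enum { MODE_FIRST, MODE_SECOND } exec_mode;

typedef struct {
    exec_mode mode;
    int       in_flight;   /* id of the instruction currently executing, -1 if none */
    bool      annulled;    /* set when the in-flight instruction is annulled */
} pipeline_t;

static void issue(pipeline_t *p, int insn_id)
{
    p->in_flight = insn_id;
    p->annulled = false;
}

/* An execution mode instruction switches the pipeline mode and annuls the
   instruction that began executing under the previous mode. */
static void switch_mode(pipeline_t *p, exec_mode new_mode)
{
    p->mode = new_mode;
    if (p->in_flight >= 0)
        p->annulled = true;   /* first instruction produces no architectural result */
}

int main(void)
{
    pipeline_t p = { MODE_FIRST, -1, false };
    issue(&p, 1);                  /* first instruction begins execution */
    switch_mode(&p, MODE_SECOND);  /* mode instruction: switch and annul */
    issue(&p, 2);                  /* second instruction executes in the new mode */
    printf("mode=%d in_flight=%d annulled=%d\n", p.mode, p.in_flight, p.annulled);
    return 0;
}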
Abstract:
Disclosed embodiments include a data processing apparatus having a processing core, a memory, and a streaming engine. The streaming engine is configured to receive a plurality of data elements stored in the memory and to provide the plurality of data elements as a data stream to the processing core, and includes an address generator to generate addresses corresponding to locations in the memory, a buffer to store the data elements received from the locations in the memory corresponding to the generated addresses, and an output to supply the data elements received from the memory to the processing core as the data stream.
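As a rough functional model, the address generator, buffer, and output can be sketched as follows; the strided addressing and names (streaming_engine, se_fetch, se_pop) are assumptions for illustration.

#include <stdint.h>
#include <stddef.h>

#define SE_BUF_DEPTH 16

typedef struct {
    const uint8_t *mem_base;   /* backing memory */
    size_t  next_addr;         /* address generator state */
    size_t  stride;            /* bytes between consecutive elements (assumed pattern) */
    uint8_t buffer[SE_BUF_DEPTH];
    unsigned head, count;
} streaming_engine;

/* Address generator plus buffer fill: fetch one data element from the
   generated address into the internal buffer. */
static void se_fetch(streaming_engine *se)
{
    if (se->count < SE_BUF_DEPTH) {
        se->buffer[(se->head + se->count) % SE_BUF_DEPTH] = se->mem_base[se->next_addr];
        se->next_addr += se->stride;
        se->count++;
    }
}

/* Output side: supply the next buffered element to the processing core
   as part of the data stream. */
static uint8_t se_pop(streaming_engine *se)
{
    uint8_t v = se->buffer[se->head];
    se->head = (se->head + 1) % SE_BUF_DEPTH;
    se->count--;
    return v;
}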
Abstract:
A method to compare first and second source data in a processor in response to a vector maximum with indexing instruction includes specifying first and second source registers containing first and second source data, a destination register to store the compared data, and a predicate register. Each of the registers includes a plurality of lanes. The method includes executing the instruction by, for each lane of the first and second source registers, comparing a value in the lane of the first source register to a value in the corresponding lane of the second source register to identify a maximum value, storing the maximum value in a corresponding lane of the destination register, asserting a corresponding lane of the predicate register if the maximum value is from the first source register, and de-asserting the corresponding lane of the predicate register if the maximum value is from the second source register.
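A scalar C sketch of the per-lane behavior, assuming 8 lanes of 32-bit data and a one-bit-per-lane predicate; the tie-breaking choice (>= favors the first source) and the function name are assumptions, and the instruction encoding itself is not modeled.

#include <stdint.h>

#define LANES 8

/* For each lane, keep the maximum of src1 and src2 in dst and record in the
   predicate which source supplied it: bit set = from src1, bit clear = from src2. */
static void vmax_with_index(const int32_t src1[LANES], const int32_t src2[LANES],
                            int32_t dst[LANES], uint8_t *predicate)
{
    uint8_t pred = 0;
    for (int lane = 0; lane < LANES; lane++) {
        if (src1[lane] >= src2[lane]) {
            dst[lane] = src1[lane];
            pred |= (uint8_t)(1u << lane);   /* assert: maximum came from src1 */
        } else {
            dst[lane] = src2[lane];          /* predicate bit stays de-asserted */
        }
    }
    *predicate = pred;
}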
Abstract:
This invention is a bus communication protocol. A master device stores bus credits. The master device may transmit a bus transaction only if it holds a sufficient number and type of bus credits. Upon transmission, the master device decrements the number of stored bus credits. The bus credits correspond to resources on a slave device for receiving bus transactions. The slave device must receive the bus transaction if it is accompanied by the proper credits. The slave device services the transaction and then transmits a credit return. The master device adds the corresponding number and types of credits to its stored amount. The slave device is then ready to accept another bus transaction, and the master device is re-enabled to initiate another bus transaction. In many types of interactions a bus agent may act as both master and slave depending upon the state of the process.
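The credit handshake can be sketched as below; the structures and function names (credit_pool, master_send_write, slave_return_write_credit) are illustrative assumptions, not part of the bus specification.

#include <stdbool.h>

typedef struct {
    int read_credits;    /* credits for read-type transactions */
    int write_credits;   /* credits for write-type transactions */
} credit_pool;

/* Master side: a transaction may be issued only when a credit of the required
   type is held; issuing it decrements the stored credit count. */
static bool master_send_write(credit_pool *master)
{
    if (master->write_credits <= 0)
        return false;            /* stall: no credit of the required type */
    master->write_credits--;
    /* ...drive the bus transaction here... */
    return true;
}

/* Slave side: after servicing the transaction and freeing its resource, the
   slave sends a credit return, which the master adds back to its pool. */
static void slave_return_write_credit(credit_pool *master)
{
    master->write_credits++;     /* master is re-enabled for another transaction */
}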
Abstract:
A streaming engine employed in a digital data processor specifies a fixed read only data stream defined by plural nested loops. An address generator produces addresses of data elements. A stream head register stores data elements next to be supplied to functional units for use as operands. Stream metadata is stored in response to a stream store instruction. Stored stream metadata is restored to the streaming engine in response to a stream restore instruction. An interrupt changes an open stream to a frozen state, discarding stored stream data. A return from interrupt changes a frozen stream to an active state.
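One way to picture the state handling is a small state machine; the enum values and functions below (STREAM_ACTIVE, stream_store, stream_interrupt) are assumed names used only for illustration.

typedef enum { STREAM_CLOSED, STREAM_ACTIVE, STREAM_FROZEN } stream_state;

typedef struct {
    stream_state state;
    unsigned     metadata;   /* loop counts, addresses, etc. (simplified) */
    unsigned     buffered;   /* count of fetched-but-unconsumed data elements */
} stream_ctx;

/* Stream store/restore instructions move the metadata out of and back into
   the streaming engine. */
static unsigned stream_store(const stream_ctx *s)            { return s->metadata; }
static void     stream_restore(stream_ctx *s, unsigned saved){ s->metadata = saved; }

/* An interrupt freezes an open stream and discards buffered stream data;
   return from interrupt makes the frozen stream active again. */
static void stream_interrupt(stream_ctx *s)
{
    if (s->state == STREAM_ACTIVE) { s->state = STREAM_FROZEN; s->buffered = 0; }
}

static void stream_return_from_interrupt(stream_ctx *s)
{
    if (s->state == STREAM_FROZEN) s->state = STREAM_ACTIVE;
}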
Abstract:
This invention implements a range of technologies in a single block. Each DSP CPU has a streaming engine. The streaming engines include: an SE-to-L2 interface that can request 512 bits/cycle from L2; a loose binding between the SE and the L2 interface that allows a single stream to peak at 1024 bits/cycle; one-way coherence in which the SE sees all writes cached in the system before the stream opens, but not writes that occur after the stream opens; and full protection against single-bit data errors within its internal storage via single-bit parity with semi-automatic restart on a parity error.
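The single-bit parity protection mentioned above can be illustrated with an even-parity check over a stored word; this is a generic sketch and does not reflect the SE's actual internal storage layout.

#include <stdint.h>
#include <stdbool.h>

/* Compute even parity over a 64-bit word of internal storage. */
static uint8_t parity64(uint64_t w)
{
    w ^= w >> 32; w ^= w >> 16; w ^= w >> 8;
    w ^= w >> 4;  w ^= w >> 2;  w ^= w >> 1;
    return (uint8_t)(w & 1u);
}

/* On read-back, a parity mismatch flags a single-bit error; the stream can
   then be restarted (the semi-automatic restart on parity error). */
static bool parity_error(uint64_t stored_word, uint8_t stored_parity)
{
    return parity64(stored_word) != stored_parity;
}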
Abstract:
This invention is a digital signal processor forming plural sums of absolute values (SAD) in a single operation. An operational unit performing the sum of absolute value operation comprises two sets of a plurality of rows, each row producing a SAD output. Plural absolute value difference units receive corresponding packed candidate pixel data and packed reference pixel data. A row summer sums the output of the absolute value difference units in the row. The candidate pixels are offset relative to the reference pixels by one pixel for each succeeding row in a set of rows. The two sets of rows operate on opposite halves of the candidate pixels packed within an instruction specified operand. The SAD operations can be performed on differing data widths employing carry chain control in the absolute difference units and the row summers.
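A scalar reference for one set of rows, assuming 8-bit pixels, 8 reference pixels per row, 4 rows per set, and the one-pixel candidate offset per succeeding row; the counts and widths are assumptions for illustration, and the carry chain control is not modeled.

#include <stdint.h>
#include <stdlib.h>

#define ROW_WIDTH 8   /* reference pixels summed per row (assumed) */
#define NUM_ROWS  4   /* rows per set (assumed) */

/* Each row compares the reference pixels against candidate pixels offset by
   one pixel per succeeding row; a row summer produces one SAD output per row. */
static void sad_rows(const uint8_t *candidate,      /* at least ROW_WIDTH + NUM_ROWS - 1 pixels */
                     const uint8_t reference[ROW_WIDTH],
                     uint32_t sad_out[NUM_ROWS])
{
    for (int row = 0; row < NUM_ROWS; row++) {
        uint32_t sum = 0;
        for (int i = 0; i < ROW_WIDTH; i++)
            sum += (uint32_t)abs((int)candidate[row + i] - (int)reference[i]);
        sad_out[row] = sum;   /* row summer output */
    }
}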
Abstract:
This invention is an integrated memory controller/interconnect that provides very high bandwidth access to both on-chip memory and externally connected off-chip memory. This invention includes arbitration for all memory endpoints with priority, fairness, and starvation bounds; virtualization; and error detection and correction hardware, including automated scrubbing, to protect the on-chip SRAM banks.
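A simplified arbitration sketch combining priority with a starvation bound; the age-based promotion policy shown here is only an assumed way such bounds might be enforced, not the controller's actual scheme.

#include <stdbool.h>

#define NUM_REQUESTERS   4
#define STARVATION_BOUND 16   /* assumed maximum number of rounds a requester may lose */

typedef struct {
    bool pending;
    int  priority;   /* higher value wins */
    int  wait;       /* arbitration rounds lost while pending */
} requester;

/* Pick a winner: highest priority wins, but any requester that has waited
   past the starvation bound is granted first. */
static int arbitrate(requester req[NUM_REQUESTERS])
{
    int winner = -1;
    for (int i = 0; i < NUM_REQUESTERS; i++) {
        if (!req[i].pending) continue;
        if (req[i].wait >= STARVATION_BOUND) { winner = i; break; }
        if (winner < 0 || req[i].priority > req[winner].priority) winner = i;
    }
    for (int i = 0; i < NUM_REQUESTERS; i++)
        if (i != winner && req[i].pending) req[i].wait++;
    if (winner >= 0) { req[winner].pending = false; req[winner].wait = 0; }
    return winner;   /* -1 when nothing is pending */
}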