Abstract:
A method to compare first and second source data in a processor in response to a vector maximum with indexing instruction includes specifying first and second source registers containing first and second source data, a destination register storing compared data, and a predicate register. Each of the registers includes a plurality of lanes. The method includes executing the instruction by, for each lane in the first and second source register, comparing a value in the lane of the first source register to a value in the corresponding lane of the second source register to identify a maximum value, storing the maximum value in a corresponding lane of the destination register, asserting a corresponding lane of the predicate register if the maximum value is from the first source register, and de-asserting the corresponding lane of the predicate register if the maximum value is from the second source register.
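For illustration, a minimal C model of the lane-wise behavior follows; the lane count (8), element width (32 bits), and tie-breaking in favor of the first source are illustrative assumptions, not taken from the abstract.

```c
#include <stdint.h>
#include <stddef.h>

#define NUM_LANES 8   /* assumed lane count for the sketch */

/* Lane-by-lane maximum with an indexing predicate: the destination gets the
 * larger value, and the predicate bit for a lane is asserted only when the
 * maximum came from the first source register. */
void vmax_with_index(const int32_t src1[NUM_LANES],
                     const int32_t src2[NUM_LANES],
                     int32_t dst[NUM_LANES],
                     uint8_t *predicate)   /* one bit per lane */
{
    *predicate = 0;
    for (size_t lane = 0; lane < NUM_LANES; lane++) {
        if (src1[lane] >= src2[lane]) {          /* tie-break toward src1: an assumption */
            dst[lane] = src1[lane];
            *predicate |= (uint8_t)(1u << lane); /* assert: maximum came from src1 */
        } else {
            dst[lane] = src2[lane];              /* predicate bit stays de-asserted */
        }
    }
}
```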
Abstract:
This invention is a bus communication protocol. A master device stores bus credits. The master device may transmit a bus transaction only if it holds a sufficient number and type of bus credits. Upon transmission, the master device decrements the number of stored bus credits. The bus credits correspond to resources on a slave device for receiving bus transactions. The slave device must receive the bus transaction if it is accompanied by the proper credits. The slave device services the transaction. The slave device then transmits a credit return. The master device adds the corresponding number and type of credits to the stored amount. The slave device is then ready to accept another bus transaction, and the master device is re-enabled to initiate another bus transaction. In many types of interactions a bus agent may act as both master and slave depending upon the state of the process.
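A minimal C sketch of the master-side credit accounting follows; the credit types, counts, and function names are illustrative assumptions rather than part of the protocol described above.

```c
#include <stdbool.h>

/* Hypothetical credit types; a real protocol defines its own categories. */
enum credit_type { CREDIT_READ, CREDIT_WRITE, CREDIT_TYPE_COUNT };

struct master_state {
    int credits[CREDIT_TYPE_COUNT];   /* credits currently held by the master */
};

/* The master may issue a transaction only while it holds a credit of the
 * required type; issuing the transaction spends (decrements) that credit. */
bool try_issue_transaction(struct master_state *m, enum credit_type t)
{
    if (m->credits[t] == 0)
        return false;            /* must wait for a credit return */
    m->credits[t]--;             /* credit accompanies the transaction */
    /* ... drive the transaction onto the bus here ... */
    return true;
}

/* The slave sends a credit return after servicing; the master adds the
 * returned credits back and is re-enabled to transmit. */
void on_credit_return(struct master_state *m, enum credit_type t, int count)
{
    m->credits[t] += count;
}
```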
Abstract:
Disclosed embodiments include a data processing apparatus having a processing core, a memory, and a streaming engine. The streaming engine is configured to receive a plurality of data elements stored in the memory and to provide the plurality of data elements as a data stream to the processing core, and includes an address generator to generate addresses corresponding to locations in the memory, a buffer to store the data elements received from the locations in the memory corresponding to the generated addresses, and an output to supply the data elements received from the memory to the processing core as the data stream.
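A simplified C model of the three named components (address generator, buffer, output) follows; the stride-based addressing, buffer depth, and element width are illustrative assumptions for the sketch.

```c
#include <stdint.h>
#include <stddef.h>

#define STREAM_BUF_DEPTH 16   /* assumed buffer depth */

struct streaming_engine {
    const uint32_t *base;     /* start of the stream in memory */
    size_t stride;            /* element stride (address generator state) */
    size_t next_index;        /* next element the address generator will fetch */
    uint32_t buffer[STREAM_BUF_DEPTH];
    size_t head, count;       /* buffered elements not yet consumed */
};

/* Address generator plus fetch: fill the buffer from memory ahead of use. */
void stream_prefetch(struct streaming_engine *se)
{
    while (se->count < STREAM_BUF_DEPTH) {
        size_t element = se->next_index * se->stride;
        se->buffer[(se->head + se->count) % STREAM_BUF_DEPTH] = se->base[element];
        se->next_index++;
        se->count++;
    }
}

/* Output side: the processing core consumes the next element of the stream.
 * The caller is assumed to have prefetched, so count > 0 here. */
uint32_t stream_next(struct streaming_engine *se)
{
    uint32_t value = se->buffer[se->head];
    se->head = (se->head + 1) % STREAM_BUF_DEPTH;
    se->count--;
    return value;
}
```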
Abstract:
This invention is a streaming engine employed in a digital signal processor. A fixed data stream sequence is specified by a control register. The streaming engine fetches stream data ahead of use by a central processing unit and stores it in a stream buffer. Upon occurrence of a fault while reading data from memory, the streaming engine identifies the data element triggering the fault, preferably storing its address in a fault address register. The streaming engine defers signaling the fault to the central processing unit until this data element is used as an operand. If the data element is never used by the central processing unit, the streaming engine never signals the fault. The streaming engine preferably stores data identifying the fault in a fault source register. The fault address register and the fault source register are preferably extended control registers accessible only via a debugger.
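A simplified C sketch of the deferred-fault bookkeeping follows; the register layout and field names are illustrative assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

struct fault_state {
    bool     pending;        /* a fetch faulted but has not yet been reported */
    uint64_t fault_address;  /* fault address register */
    uint32_t fault_source;   /* fault source register (debugger-visible) */
};

/* Called when a memory fetch for the stream faults: record it, do not signal. */
void record_fetch_fault(struct fault_state *fs, uint64_t addr, uint32_t source)
{
    fs->pending = true;
    fs->fault_address = addr;
    fs->fault_source  = source;
}

/* Called when the CPU consumes a stream element as an operand: only now is a
 * previously recorded fault signaled. An element that is never consumed
 * never signals a fault. */
bool signal_fault_if_used(struct fault_state *fs, uint64_t element_addr)
{
    if (fs->pending && element_addr == fs->fault_address) {
        fs->pending = false;
        return true;   /* raise the fault to the CPU here */
    }
    return false;
}
```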
Abstract:
This invention is a data processing apparatus and method. Data is protected from corruption by generating an error correction code corresponding to the data. In this invention the data and the corresponding error correction code are carried forward to another set of registers without regenerating the error correction code or using it for error detection or correction. Only later are error detection and correction actions taken. The differing data/error correction code registers may be in differing pipeline phases in the data processing apparatus. This invention forwards the error correction code with the data through the entire datapath that carries the data. This invention provides error protection to the whole datapath without requiring extensive hardware or additional time.
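A simplified C sketch of forwarding the error correction code with the data through pipeline stages follows; the parity-based code here is only a stand-in for whatever code the hardware actually uses, and the function names are illustrative.

```c
#include <stdint.h>

struct protected_word {
    uint32_t data;
    uint8_t  ecc;      /* generated once, forwarded with the data */
};

static uint8_t compute_ecc(uint32_t data)
{
    /* Placeholder code: simple parity. A real design would use a SECDED code. */
    uint8_t p = 0;
    while (data) {
        p ^= (uint8_t)(data & 1u);
        data >>= 1;
    }
    return p;
}

/* Generate the code once when the data enters the datapath. */
struct protected_word protect(uint32_t data)
{
    struct protected_word w = { data, compute_ecc(data) };
    return w;
}

/* Each pipeline stage carries data and ECC forward unchanged: no
 * regeneration and no checking in intermediate stages. */
struct protected_word forward_stage(struct protected_word in)
{
    return in;
}

/* Detection happens only at the point of use. Returns 1 if consistent. */
int check_at_use(struct protected_word w)
{
    return compute_ecc(w.data) == w.ecc;
}
```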
Abstract:
A predictive adder produces the result of incrementing and/or decrementing a sum of A and B by a one-bit constant of the form 2^k, where k is the bit position at which the sum is to be incremented or decremented. The predictive adder predicts the ripple portion of bits in the potential sum of the first operand A and the second operand B that would be toggled by incrementing or decrementing the sum A+B by the one-bit constant, to generate an indication of the ripple portion of bits in the potential sum. The predictive adder uses the indication of the ripple portion of bits in the potential sum and the carry output generated by evaluating A+B to produce the result of at least one of A+B+2^k and A+B−2^k.
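A small C reference model of the ripple idea follows. In hardware the prediction would be derived from A and B in parallel with the addition; here it is derived from the completed sum purely to demonstrate that XORing the predicted toggle mask into A+B yields A+B+2^k. The operand width and test values are illustrative assumptions.

```c
#include <stdint.h>
#include <stdio.h>
#include <assert.h>

/* Mask of bits toggled when 2^k is added to s: the run of 1s in s starting
 * at bit k propagates the carry (those bits flip to 0), and the 0 that
 * terminates the run flips to 1. */
uint32_t increment_ripple_mask(uint32_t s, unsigned k)
{
    uint32_t run = 0;
    unsigned i = k;
    while (i < 32 && ((s >> i) & 1u)) {
        run |= 1u << i;
        i++;
    }
    if (i < 32)
        run |= 1u << i;
    return run;
}

int main(void)
{
    uint32_t a = 0x12345678, b = 0x0000FF88;   /* illustrative operands */
    unsigned k = 3;
    uint32_t s = a + b;

    /* A+B+2^k obtained by XORing the predicted ripple mask into the sum. */
    uint32_t predicted = s ^ increment_ripple_mask(s, k);
    assert(predicted == s + (1u << k));
    printf("A+B = 0x%08X, A+B+2^k = 0x%08X\n", (unsigned)s, (unsigned)predicted);
    return 0;
}
```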
Abstract:
An asynchronous dual domain bridge is implemented between the cache coherent master and the coherent system interconnect. The bridge has two halves, one in each clock/powerdown domain (master and interconnect). The powerdown mechanism is isolated to just the asynchronous bridge implemented between the master and the interconnect, with a basic request/acknowledge handshake between the master subsystem and the asynchronous bridge.
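A small C sketch of one plausible reading of the request/acknowledge handshake follows; the drain-before-acknowledge behavior, state names, and signal names are assumptions, not stated in the abstract.

```c
#include <stdbool.h>

enum bridge_state { BRIDGE_ACTIVE, BRIDGE_DRAINING, BRIDGE_IDLE };

struct bridge_half {
    enum bridge_state state;
    bool powerdown_ack;      /* acknowledge returned to the master subsystem */
    int  outstanding;        /* transactions still crossing the bridge */
};

/* The master subsystem asserts a powerdown request; in this sketch the
 * bridge acknowledges only once everything in flight has drained. */
void on_powerdown_request(struct bridge_half *b)
{
    b->state = (b->outstanding == 0) ? BRIDGE_IDLE : BRIDGE_DRAINING;
    b->powerdown_ack = (b->state == BRIDGE_IDLE);
}

/* Called as in-flight transactions complete while draining. */
void on_transaction_done(struct bridge_half *b)
{
    if (b->outstanding > 0 && --b->outstanding == 0 && b->state == BRIDGE_DRAINING) {
        b->state = BRIDGE_IDLE;
        b->powerdown_ack = true;   /* complete the handshake */
    }
}
```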
Abstract:
Disclosed herein are systems and methods for executing multiple instruction set architectures (ISAs) on a single processing unit. In an implementation, a processor that includes a first decoder, a second decoder, instruction fetch circuitry, and instruction dispatch circuitry is configured to execute two separate instruction set architectures. In an implementation, the instruction fetch circuitry is configured to fetch instructions from an associated memory. In an implementation, the instruction dispatch circuitry is coupled to the instruction fetch circuitry, the first decoder, and the second decoder, and is configured to route instructions associated with a first ISA to the first decoder and instructions associated with a second ISA to the second decoder.
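A minimal C sketch of the dispatch routing follows; identifying an instruction's ISA by a mode flag is an illustrative assumption, as are the type and function names.

```c
#include <stdint.h>

enum isa { ISA_FIRST, ISA_SECOND };

typedef void (*decoder_fn)(uint32_t raw_instruction);

struct dispatch_unit {
    decoder_fn first_decoder;    /* decodes instructions of the first ISA */
    decoder_fn second_decoder;   /* decodes instructions of the second ISA */
};

/* Route a fetched instruction to the decoder for the ISA it belongs to. */
void dispatch(const struct dispatch_unit *d, uint32_t raw, enum isa current_isa)
{
    if (current_isa == ISA_FIRST)
        d->first_decoder(raw);
    else
        d->second_decoder(raw);
}
```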