Abstract:
Disclosed approaches for processing a circuit design include identifying duplicate instances of a module in a representation of the circuit design. A processor circuit performs folding operations for at least one pair of the duplicate instances of the module. One of the duplicate instances is removed from the circuit design, and a multiplexer is inserted. The multiplexer receives the input signals of the duplicate instances, selects one of them, and provides the selected input signal to the remaining instance. For each flip-flop in the remaining instance, a pipelined flip-flop is inserted. Connections to a first clock signal in the remaining instance are replaced with connections to a second clock signal having twice the frequency of the first clock signal. An alignment circuit is inserted to receive the output signal from the remaining instance and provide concurrent first and second output signals.
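A minimal behavioral sketch of the folding idea in Python follows; the module body, the two-fast-cycles-per-slow-cycle loop, and all names are illustrative assumptions, not the disclosed circuit.

```python
# Illustrative sketch only: folding two duplicate module instances into one
# instance run at twice the clock rate. The module body and signal names are
# assumptions for demonstration, not the patented implementation.

def module(x):
    # stand-in for the duplicated module's logic
    return x * 3 + 1

def folded(inputs_a, inputs_b):
    """Process two input streams with a single module instance clocked at 2x."""
    out_a, out_b = [], []
    hold = None                      # alignment register for the first result
    for a, b in zip(inputs_a, inputs_b):
        for fast_cycle in range(2):  # two fast-clock cycles per slow-clock cycle
            sel = fast_cycle         # multiplexer select toggles each fast cycle
            x = a if sel == 0 else b
            y = module(x)
            if sel == 0:
                hold = y             # holds the first result until the second is ready
            else:
                # alignment stage presents both outputs concurrently
                out_a.append(hold)
                out_b.append(y)
    return out_a, out_b

ins_a, ins_b = [1, 2, 3], [10, 20, 30]
print(folded(ins_a, ins_b))                                    # ([4, 7, 10], [31, 61, 91])
print([module(x) for x in ins_a], [module(x) for x in ins_b])  # same results as two instances
```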
Abstract:
Compiling a circuit design includes receiving the circuit design specified in a hardware description language, detecting, using a processor, a slice of a vector within the circuit design, and determining that the slice is defined by a left slice boundary variable and a right slice boundary variable. A hardware description is generated from the circuit design using the processor by including a first shifter circuit receiving the left slice boundary variable as an input signal, a second shifter circuit receiving the right slice boundary variable as an input signal, a control signal generator coupled to the first and second shifter circuits, and an output stage. The output stage, responsive to a control signal dependent upon an output from the first shifter circuit and an output from the second shifter circuit, generates an output signal including newly received values from a data signal only for bit locations of the output signal corresponding to the slice.
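A small sketch of how two shifts driven by the slice boundary variables can form a mask that confines the update to the slice; the bit ordering, vector width, and function name are assumptions.

```python
# Illustrative sketch: updating only the bits of a vector that fall inside a
# slice whose boundaries are runtime variables. The two shifts mirror the role
# of the first and second shifter circuits; exact bit ordering is an assumption.

WIDTH = 16

def slice_write(old, data, left, right):
    """Write `data` into old[left:right] (left >= right, bit 0 = LSB)."""
    upper = (1 << (left + 1)) - 1                  # first shifter: ones through the left boundary
    lower = (1 << right) - 1                       # second shifter: ones below the right boundary
    mask = (upper & ~lower) & ((1 << WIDTH) - 1)   # control-signal role: ones only inside the slice
    # output stage: take new values only inside the slice, keep old values elsewhere
    return (old & ~mask) | ((data << right) & mask)

old = 0b1111_0000_1111_0000
print(bin(slice_write(old, 0b1010, left=7, right=4)))  # 0b1111000010100000
```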
Abstract:
Time-multiplexing implementation of hardware-accelerated functions includes associating each function of a plurality of functions from program code with an accelerator binary image specifying a hardware-accelerated version of the associated function, and determining which accelerator binary images are data independent. Using the computer hardware, the accelerator binary images can be scheduled for implementation in a programmable integrated circuit within each of a plurality of partial reconfiguration regions based on data independence.
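A sketch of one way to schedule data-independent accelerator binary images into time slots across a fixed number of partial reconfiguration regions; the greedy strategy, dependence encoding, and names are assumptions, not the disclosed scheduler.

```python
# Illustrative sketch: greedily schedule accelerator binary images into time
# slots, placing only images whose prerequisites have already run, up to the
# number of available partial reconfiguration (PR) regions. The dependence
# encoding and region count are assumptions for demonstration.

def schedule(images, depends_on, num_pr_regions):
    """images: list of names; depends_on: dict name -> set of prerequisite names."""
    scheduled, slots = set(), []
    while len(scheduled) < len(images):
        # images whose data dependencies have already been scheduled
        ready = [i for i in images
                 if i not in scheduled and depends_on.get(i, set()) <= scheduled]
        slot = ready[:num_pr_regions]        # one image per PR region per time slot
        slots.append(slot)
        scheduled.update(slot)
    return slots

deps = {"b": {"a"}, "d": {"b", "c"}}
print(schedule(["a", "b", "c", "d"], deps, num_pr_regions=2))
# [['a', 'c'], ['b'], ['d']]
```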
Abstract:
At least one neural network accelerator performs operations of a first subset of layers of a neural network on an input data set, generates an intermediate data set, and stores the intermediate data set in a shared memory queue in a shared memory. A first processor element of a host computer system provides input data to the neural network accelerator and signals the neural network accelerator to perform the operations of the first subset of layers of the neural network on the input data set. A second processor element of the host computer system reads the intermediate data set from the shared memory queue, performs operations of a second subset of layers of the neural network on the intermediate data set, and generates an output data set while the neural network accelerator is performing the operations of the first subset of layers of the neural network on another input data set.
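A sketch of the producer/consumer overlap described above, with threads standing in for the accelerator and the second processor element and a bounded queue standing in for the shared memory queue; the layer functions and types are assumptions.

```python
# Illustrative sketch: the accelerator role runs the first subset of layers and
# pushes intermediate data sets into a shared queue; a host-side worker pops
# them and runs the second subset, overlapping with the next accelerator batch.
# The layer functions and queue type are stand-ins, not the disclosed system.
import queue, threading

def first_subset(x):       # stand-in for the accelerator's layers
    return [v * 2 for v in x]

def second_subset(x):      # stand-in for the host-side layers
    return sum(x)

shared_q = queue.Queue(maxsize=2)    # shared memory queue stand-in
results = []

def accelerator(batches):
    for batch in batches:
        shared_q.put(first_subset(batch))   # store intermediate data set
    shared_q.put(None)                      # end-of-stream marker

def host_consumer():
    while (item := shared_q.get()) is not None:
        results.append(second_subset(item))

batches = [[1, 2], [3, 4], [5, 6]]
t = threading.Thread(target=accelerator, args=(batches,))
t.start()
host_consumer()        # consumes while the accelerator works on later batches
t.join()
print(results)         # [6, 14, 22]
```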
Abstract:
Examples described herein relate to dynamically structured single instruction, multiple data (SIMD) instructions, and systems and circuits implementing such dynamically structured SIMD instructions. An example is a method for processing data. A first SIMD structure is determined by a processor. A characteristic of the first SIMD structure is altered by the processor to obtain a second SIMD structure. An indication of the second SIMD structure is communicated from the processor to a numerical engine. Data is packed by the numerical engine into an SIMD instruction according to the second SIMD structure. The SIMD instruction is transmitted from the numerical engine.
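A sketch reducing a "SIMD structure" to a lane width and lane count, with the processor altering a characteristic and a numerical-engine role packing operands accordingly; the field names and word width are assumptions.

```python
# Illustrative sketch: a "SIMD structure" is reduced here to (lane_width, lanes).
# The processor alters the lane width to obtain a second structure, and the
# numerical-engine role packs data into one SIMD word accordingly.
# Field names and the 64-bit word width are assumptions.
from dataclasses import dataclass

@dataclass
class SimdStructure:
    lane_width: int   # bits per packed operand
    lanes: int        # number of operands per SIMD word

def alter(structure, new_lane_width, word_bits=64):
    """Processor-side change of a characteristic of the SIMD structure."""
    return SimdStructure(new_lane_width, word_bits // new_lane_width)

def pack(values, structure):
    """Numerical-engine-side packing of values into one SIMD word."""
    word = 0
    for i, v in enumerate(values[:structure.lanes]):
        word |= (v & ((1 << structure.lane_width) - 1)) << (i * structure.lane_width)
    return word

s1 = SimdStructure(lane_width=16, lanes=4)
s2 = alter(s1, new_lane_width=8)                 # second SIMD structure
print(hex(pack([1, 2, 3, 4], s1)))               # 0x4000300020001
print(hex(pack([1, 2, 3, 4, 5, 6, 7, 8], s2)))   # 0x807060504030201
```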
Abstract:
Local retiming for a circuit design includes determining, using computer hardware, a load of a synchronous circuit element within the circuit design tagged for forward retiming, traversing, using the computer hardware, each input of the load backward through the circuit design until a sequential circuit element or a primary input is reached, and adding, using the computer hardware, each synchronous circuit element encountered in the traversing to a forward retiming list. In response to determining that forward retiming criteria are met for the forward retiming list, the computer hardware modifies the circuit design by creating a new synchronous circuit element at an output of the load.
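A sketch of the backward traversal that builds the forward retiming list; the netlist encoding, node kinds, and the criteria check are assumptions for demonstration.

```python
# Illustrative sketch of the backward traversal: walk each input of the load
# backward until a sequential element or primary input is reached, collecting
# sequential elements into a forward retiming list. Netlist encoding is assumed.

def build_forward_retiming_list(load, drivers, kind):
    """drivers: node -> list of driver nodes; kind: node -> 'seq' | 'comb' | 'pi'."""
    retiming_list, stack, seen = [], list(drivers.get(load, [])), set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if kind[node] == "seq":          # sequential element: stop and record it
            retiming_list.append(node)
        elif kind[node] == "comb":       # keep traversing through combinational logic
            stack.extend(drivers.get(node, []))
        # 'pi' (primary input): stop without recording
    return retiming_list

drivers = {"load": ["and1"], "and1": ["ff1", "inv1"], "inv1": ["ff2"]}
kind = {"load": "comb", "and1": "comb", "inv1": "comb", "ff1": "seq", "ff2": "seq"}
print(sorted(build_forward_retiming_list("load", drivers, kind)))
# ['ff1', 'ff2'] -> if the retiming criteria are met, create a new register at the load's output
```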
Abstract:
An example multiply-accumulate (MACC) circuit includes a multiply-accumulator having an accumulator output register, a scaler coupled to the multiply-accumulator, and a control circuit coupled to the multiply-accumulator and the scaler. The control circuit is configured to provide control data to the scaler, the control data indicative of: a most-significant bit (MSB) to least-significant bit (LSB) range for selecting bit indices from the accumulator output register for implementing a first right shift; a multiplier; and a second right shift.
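A small arithmetic sketch of the scaler's data path: select an MSB-to-LSB bit range from the accumulator output (a first right shift), multiply, then apply a second right shift; bit widths, truncation behavior, and names are assumptions.

```python
# Illustrative sketch: the scaler selects an MSB:LSB bit range from the
# accumulator output register (a first right shift), multiplies, and applies a
# second right shift. Bit widths, truncation, and names are assumptions.

def scale(acc, msb, lsb, multiplier, second_shift):
    selected = (acc >> lsb) & ((1 << (msb - lsb + 1)) - 1)   # range select / first right shift
    return (selected * multiplier) >> second_shift           # multiply, then second right shift

acc = 0b10110110000000     # example accumulator output register value
print(scale(acc, msb=13, lsb=6, multiplier=3, second_shift=4))   # (0b10110110 * 3) >> 4 = 34
```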
Abstract:
A request generator circuit is configured to read data elements of a three-dimensional (3-D) input feature map (IFM) from a memory and store a subset of the data elements in one of a plurality of N line buffers. Each line buffer is configured for storage of M data elements. A pixel iterator circuit is coupled to the line buffers and is configured to generate a sequence of addresses for reading the stored data elements from the line buffers based on a sequence of IFM height values and a sequence of IFM width values.
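A sketch of the pixel iterator role: generating line-buffer read addresses from sequences of IFM height and width values; the mapping of rows to buffers and columns to offsets is an assumption.

```python
# Illustrative sketch: generate read addresses into N line buffers (each holding
# M data elements) from sequences of IFM height (row) and width (column) values.
# The row->buffer and column->offset mapping is an assumption.

def pixel_iterator(height_seq, width_seq, n_buffers, m_elements):
    for h in height_seq:
        for w in width_seq:
            buffer_id = h % n_buffers        # which line buffer holds this row
            offset = w % m_elements          # element offset within that buffer
            yield buffer_id, offset

# e.g. a walk over rows 0..2 and columns 4..6 with N=4 line buffers of M=8 elements
for addr in pixel_iterator(range(3), range(4, 7), n_buffers=4, m_elements=8):
    print(addr)
# (0, 4) (0, 5) (0, 6) (1, 4) ... (2, 6)
```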
Abstract:
An example preprocessor circuit for formatting image data into a plurality of streams of image samples includes: a plurality of memory banks configured to store the image data; multiplexer circuitry coupled to the memory banks; a first plurality of registers coupled to the multiplexer circuitry; a second plurality of registers coupled to the first plurality of registers, outputs of the second plurality of registers configured to provide the plurality of streams of image samples; and control circuitry configured to generate addresses for the plurality of memory banks, control the multiplexer circuitry to select among outputs of the plurality of memory banks, control the first plurality of registers to store outputs of the multiplexer circuitry, and control the second plurality of registers to store outputs of the first plurality of registers.
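A sketch of the control flow: stripe samples across banks, select a bank output with a multiplexer stage, and move values through two register stages whose second stage drives the output streams; the bank count, striping, stream count, and two-deep pipeline are assumptions.

```python
# Illustrative sketch: image samples striped across memory banks, selected by a
# multiplexer stage, and moved through two register stages that drive the output
# streams. Bank count, striping, and stream count are assumptions.

NUM_BANKS, NUM_STREAMS = 4, 2

def preprocess(image_row):
    banks = [image_row[b::NUM_BANKS] for b in range(NUM_BANKS)]   # banked storage
    stage1 = [None] * NUM_STREAMS    # first plurality of registers
    stage2 = [None] * NUM_STREAMS    # second plurality of registers (stream outputs)
    streams = [[] for _ in range(NUM_STREAMS)]
    for i in range(0, len(image_row), NUM_STREAMS):
        for s in range(NUM_STREAMS):
            idx = i + s
            sel, addr = idx % NUM_BANKS, idx // NUM_BANKS   # control circuitry: bank select + address
            stage2[s] = stage1[s]                           # shift second-stage register
            stage1[s] = banks[sel][addr]                    # mux-selected bank output
            if stage2[s] is not None:
                streams[s].append(stage2[s])
    return streams   # note: final stage-1 values are not flushed in this sketch

print(preprocess(list(range(8))))   # [[0, 2, 4], [1, 3, 5]]
```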
Abstract:
Disclosed circuits and methods include N line buffers. Each line buffer is configured for storage of M data elements of a three-dimensional (3-D) input feature map (IFM). A request generator circuit is coupled to the N line buffers and to a memory configured for storage of the 3-D IFM. The request generator circuit divides the 3-D IFM into a plurality of IFM sub-volumes based on the values of N and M and the dimensions of the 3-D IFM. The request generator circuit reads, from the memory, data elements at addresses of an unprocessed one of the IFM sub-volumes and stores the data elements of the unprocessed one of the IFM sub-volumes in the N line buffers. In response to a completion signal, the request generator circuit repeats the reading of an unprocessed one of the IFM sub-volumes and the storing of the data elements in the N line buffers.
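A sketch of one way to divide a 3-D IFM into sub-volumes sized to fit N line buffers of M elements each; the assumption here is that one buffer holds one partial row, and the tiling order is illustrative.

```python
# Illustrative sketch: divide a 3-D IFM (depth x height x width) into
# sub-volumes sized so each fits in N line buffers of M elements apiece.
# Here one buffer holds one (partial) row, so a sub-volume covers up to N rows
# of up to M elements; the tiling order is an assumption.

def divide_ifm(depth, height, width, n, m):
    sub_volumes = []
    for d in range(depth):
        for h0 in range(0, height, n):          # up to N rows per sub-volume
            for w0 in range(0, width, m):       # up to M elements per row
                sub_volumes.append((d, h0, min(n, height - h0),
                                    w0, min(m, width - w0)))
    return sub_volumes   # (channel, row start, rows, col start, cols)

subs = divide_ifm(depth=2, height=5, width=10, n=4, m=8)
print(len(subs))    # 2 channels * ceil(5/4) * ceil(10/8) = 8 sub-volumes
print(subs[0])      # (0, 0, 4, 0, 8)
```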