MACHINE LEARNING RUNTIME LIBRARY FOR NEURAL NETWORK ACCELERATION

    Publication Number: US20190114533A1

    Publication Date: 2019-04-18

    Application Number: US15785679

    Application Date: 2017-10-17

    Applicant: Xilinx, Inc.

    Abstract: Embodiments herein describe techniques for interfacing a neural network application with a neural network accelerator using a library. The neural network application may execute on a host computing system while the neural network accelerator executes on a massively parallel hardware system, e.g., an FPGA. The library operates a pipeline for submitting tasks received from the neural network application to the neural network accelerator. In one embodiment, the pipeline includes a pre-processing stage, an FPGA execution stage, and a post-processing stage, each of which corresponds to a different thread. When receiving a task from the neural network application, the library generates a packet that includes the information required for the different stages in the pipeline to perform the task. Because the stages correspond to different threads, the library can process multiple packets in parallel, which can increase the utilization of the neural network accelerator on the hardware system.
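
    As a rough illustration of the pipelining idea, the C++ sketch below runs three stages on separate threads connected by blocking queues, so several task packets are in flight at once. The Packet and BlockingQueue names are hypothetical, not the library's actual API.

        // Minimal sketch of a three-stage, one-thread-per-stage pipeline.
        #include <condition_variable>
        #include <iostream>
        #include <mutex>
        #include <queue>
        #include <thread>
        #include <utility>

        // One packet carries everything the downstream stages need for a task.
        struct Packet { int task_id; };

        // A small thread-safe queue connecting adjacent pipeline stages.
        template <typename T>
        class BlockingQueue {
            std::queue<T> q_;
            std::mutex m_;
            std::condition_variable cv_;
        public:
            void push(T v) {
                { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
                cv_.notify_one();
            }
            T pop() {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [&] { return !q_.empty(); });
                T v = std::move(q_.front()); q_.pop();
                return v;
            }
        };

        int main() {
            BlockingQueue<Packet> to_fpga, to_post;
            const int kTasks = 8;
            // FPGA execution stage: pop a packet, run it, pass it downstream.
            std::thread fpga([&] {
                for (int i = 0; i < kTasks; ++i) {
                    Packet p = to_fpga.pop();
                    // submit to the accelerator and wait for completion here
                    to_post.push(p);
                }
            });
            // Post-processing stage: convert results back for the application.
            std::thread post([&] {
                for (int i = 0; i < kTasks; ++i)
                    std::cout << "task " << to_post.pop().task_id << " done\n";
            });
            // Pre-processing stage (here, the main thread): build and enqueue packets.
            for (int i = 0; i < kTasks; ++i) to_fpga.push(Packet{i});
            fpga.join();
            post.join();
        }

    Because each stage blocks only on its own queue, pre-processing of task n+1 overlaps FPGA execution of task n, which is what keeps the accelerator busy.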

    Software-defined buffer/transposer for general matrix multiplication in a programmable IC

    Publication Number: US11036827B1

    Publication Date: 2021-06-15

    Application Number: US15786346

    Application Date: 2017-10-17

    Applicant: Xilinx, Inc.

    Abstract: Methods and apparatus are described for simultaneously buffering and reformatting (e.g., transposing) a matrix for high-speed data streaming in general matrix multiplication (GEMM), which may be implemented by a programmable integrated circuit (IC). Examples of the present disclosure increase the effective double data rate (DDR) memory throughput for streaming data into a GEMM digital signal processing (DSP) engine severalfold, and eliminate slow data reformatting on a host central processing unit (CPU). This may be accomplished through software-defined (e.g., C++) data structures and access patterns that result in hardware logic that simultaneously buffers and reorganizes the data to achieve linear DDR addressing.
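
    The core access pattern can be sketched in plain C++ as follows. This is an assumption-laden software model, not the disclosed hardware logic: reads from DDR proceed in strictly linear row-major order, while the store into the on-chip tile buffer performs the transpose.

        #include <cstddef>
        #include <iostream>

        constexpr std::size_t TILE = 4;

        // Walk a TILE x TILE block of a row-major matrix in purely sequential
        // (linear) address order; the write into the local buffer is what
        // transposes, so the reformatting cost moves off the DDR read path.
        void buffer_and_transpose(const float* ddr, std::size_t ld,
                                  std::size_t row0, std::size_t col0,
                                  float (&local)[TILE][TILE]) {
            for (std::size_t r = 0; r < TILE; ++r)       // linear walk over DDR
                for (std::size_t c = 0; c < TILE; ++c)
                    local[c][r] = ddr[(row0 + r) * ld + (col0 + c)];  // transposed store
        }

        int main() {
            float m[8 * 8];
            for (int i = 0; i < 64; ++i) m[i] = static_cast<float>(i);
            float tile[TILE][TILE];
            buffer_and_transpose(m, 8, 0, 4, tile);
            // tile[0][0], tile[1][0] hold (row 0, col 4) and (row 0, col 5): prints 4 5
            std::cout << tile[0][0] << ' ' << tile[1][0] << '\n';
        }

    Synthesized to hardware (e.g., via HLS), a nested loop of this shape becomes buffering logic whose DDR side always issues linear addresses, which is the throughput win the abstract describes.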

    Inline image preprocessing for convolution operations using a matrix multiplier on an integrated circuit

    Publication Number: US10984500B1

    Publication Date: 2021-04-20

    Application Number: US16576365

    Application Date: 2019-09-19

    Applicant: Xilinx, Inc.

    Abstract: An example preprocessor circuit for formatting image data into a plurality of streams of image samples includes: a plurality of memory banks configured to store the image data; multiplexer circuitry coupled to the memory banks; a first plurality of registers coupled to the multiplexer circuitry; a second plurality of registers coupled to the first plurality of registers, outputs of the second plurality of registers configured to provide the plurality of streams of image samples; bank address and control circuitry coupled to control inputs of the plurality of memory banks, the multiplexer circuitry, and the first plurality of registers; output control circuitry coupled to control inputs of the second plurality of registers; and a control state machine coupled to the bank address and control circuitry and the output control circuitry.
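
    A software model can make the banked layout concrete. In this hypothetical sketch (the BankedImage name and the BANKS parameter are illustrative, not from the patent), image rows are striped across BANKS memory banks so one vertically adjacent sample per output stream can be fetched in the same cycle; the modulo bank select stands in for the multiplexer circuitry and register stages.

        #include <array>
        #include <cstddef>
        #include <iostream>
        #include <vector>

        constexpr std::size_t BANKS = 4;

        struct BankedImage {
            std::array<std::vector<float>, BANKS> bank;  // bank[i] holds rows i, i+BANKS, ...
            std::size_t width = 0;

            // Stripe the image row-by-row across the banks.
            void load(const std::vector<float>& img, std::size_t w, std::size_t h) {
                width = w;
                for (std::size_t r = 0; r < h; ++r)
                    for (std::size_t c = 0; c < w; ++c)
                        bank[r % BANKS].push_back(img[r * w + c]);
            }
            // One "cycle": emit the sample at (row0 + s, col) on stream s.
            // Each of the BANKS reads hits a different bank, so no conflicts.
            std::array<float, BANKS> read_column(std::size_t row0, std::size_t col) const {
                std::array<float, BANKS> out{};
                for (std::size_t s = 0; s < BANKS; ++s) {
                    std::size_t r = row0 + s;
                    out[s] = bank[r % BANKS][(r / BANKS) * width + col];  // mux + address
                }
                return out;
            }
        };

        int main() {
            std::vector<float> img(6 * 6);
            for (std::size_t i = 0; i < img.size(); ++i) img[i] = static_cast<float>(i);
            BankedImage b;
            b.load(img, 6, 6);
            auto col = b.read_column(1, 2);              // samples (1,2)..(4,2)
            std::cout << col[0] << ' ' << col[3] << '\n'; // prints 8 26
        }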

    NEURAL NETWORK PROCESSING SYSTEM HAVING HOST CONTROLLED KERNEL ACCELERATORS

    Publication Number: US20190114535A1

    Publication Date: 2019-04-18

    Application Number: US15786288

    Application Date: 2017-10-17

    Applicant: Xilinx, Inc.

    Abstract: A disclosed neural network processing system includes a host computer system, RAMs coupled to the host computer system, and neural network accelerators coupled to the respective RAMs. The host computer system is configured with software that, when executed, causes the host computer system to write input data and work requests to the RAMs. Each work request specifies a subset of neural network operations to perform and the memory locations in a RAM of the input data and parameters. A graph of dependencies among the neural network operations is built, and additional dependencies are added. The operations are partitioned into coarse-grain tasks and fine-grain subtasks for optimal scheduling for parallel execution. The subtasks are scheduled to accelerator kernels of matching capabilities. Each neural network accelerator is configured to read a work request from the respective RAM and perform the subset of neural network operations on the input data using the parameters.
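
    The sketch below shows one plausible shape for this scheme; the WorkRequest layout is hypothetical, and the dependency-ordered dispatch is ordinary topological sorting (Kahn's algorithm) rather than the patent's specific scheduler.

        #include <cstdint>
        #include <iostream>
        #include <queue>
        #include <vector>

        // Hypothetical layout of a work request as the host might write it into
        // an accelerator's RAM: which operations to run, where data lives.
        struct WorkRequest {
            std::uint32_t op_mask;      // subset of neural network operations to run
            std::uint64_t input_addr;   // RAM offset of the input data
            std::uint64_t param_addr;   // RAM offset of weights/parameters
            std::uint64_t output_addr;  // RAM offset for results
        };

        // succ[i] lists subtasks that depend on subtask i. A subtask is issued
        // to a matching kernel only once all of its predecessors have finished.
        std::vector<int> schedule(const std::vector<std::vector<int>>& succ) {
            std::vector<int> indeg(succ.size(), 0), order;
            for (const auto& s : succ)
                for (int v : s) ++indeg[v];
            std::queue<int> ready;
            for (int i = 0; i < static_cast<int>(succ.size()); ++i)
                if (indeg[i] == 0) ready.push(i);
            while (!ready.empty()) {
                int u = ready.front(); ready.pop();
                order.push_back(u);                 // write a WorkRequest for u here
                for (int v : succ[u])
                    if (--indeg[v] == 0) ready.push(v);
            }
            return order;  // a dependency-respecting, parallel-friendly issue order
        }

        int main() {
            // 0 -> {1, 2} -> 3: subtasks 1 and 2 may run on kernels in parallel.
            std::vector<std::vector<int>> succ = {{1, 2}, {3}, {3}, {}};
            for (int u : schedule(succ)) std::cout << u << ' ';  // 0 1 2 3
        }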

    PROGRAMMABLE NON-LINEAR ACTIVATION ENGINE FOR NEURAL NETWORK ACCELERATION

    Publication Number: US20230297824A1

    Publication Date: 2023-09-21

    Application Number: US17655489

    Application Date: 2022-03-18

    Applicant: Xilinx, Inc.

    CPC classification number: G06N3/08

    Abstract: A programmable, non-linear (PNL) activation engine for a neural network is capable of receiving input data within a circuit. In response to receiving an instruction corresponding to the input data, the PNL activation engine is capable of selecting a first non-linear activation function from a plurality of non-linear activation functions by decoding the instruction. The PNL activation engine is capable of fetching a first set of coefficients corresponding to the first non-linear activation function from a memory. The PNL activation engine is capable of performing a polynomial approximation of the first non-linear activation function on the input data using the first set of coefficients. The PNL activation engine is capable of outputting a result from the polynomial approximation of the first non-linear activation function.
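
    A C++ model of the decode-fetch-evaluate flow might look like the following; the opcodes and coefficient values are illustrative low-order fits, not the engine's actual tables.

        #include <array>
        #include <cmath>
        #include <cstddef>
        #include <cstdint>
        #include <iostream>

        // Hypothetical coefficient memory: one row of polynomial coefficients per
        // activation function, indexed by the opcode decoded from the instruction.
        enum class Op : std::uint8_t { kTanh = 0, kSigmoid = 1 };

        constexpr std::array<std::array<float, 4>, 2> kCoeffs = {{
            {0.0f, 1.0f, 0.0f, -1.0f / 3.0f},    // tanh(x)    ~ x - x^3/3 near 0
            {0.5f, 0.25f, 0.0f, -1.0f / 48.0f},  // sigmoid(x) ~ 0.5 + x/4 - x^3/48
        }};

        // Evaluate c0 + c1*x + c2*x^2 + c3*x^3 with Horner's method, as a
        // pipelined multiply-add chain would in hardware.
        float activate(Op op, float x) {
            const auto& c = kCoeffs[static_cast<std::size_t>(op)];  // "fetch coefficients"
            return ((c[3] * x + c[2]) * x + c[1]) * x + c[0];
        }

        int main() {
            std::cout << activate(Op::kTanh, 0.5f) << " vs " << std::tanh(0.5f) << '\n';
        }

    Swapping activation functions then costs only a coefficient fetch, not a hardware change, which is the programmability the abstract claims.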

    Neural network processing system having multiple processors and a neural network accelerator

    Publication Number: US11222256B2

    Publication Date: 2022-01-11

    Application Number: US15785685

    Application Date: 2017-10-17

    Applicant: Xilinx, Inc.

    Abstract: At least one neural network accelerator performs operations of a first subset of layers of a neural network on an input data set, generates an intermediate data set, and stores the intermediate data set in a shared memory queue in a shared memory. A first processor element of a host computer system provides input data to the neural network accelerator and signals the neural network accelerator to perform the operations of the first subset of layers of the neural network on the input data set. A second processor element of the host computer system reads the intermediate data set from the shared memory queue, performs operations of a second subset of layers of the neural network on the intermediate data set, and generates an output data set while the neural network accelerator is performing the operations of the first subset of layers of the neural network on another input data set.
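
    The shared memory queue that decouples the two processor elements can be modeled as a single-producer/single-consumer ring buffer; the sketch below is an assumption about the mechanism, not the disclosed implementation.

        #include <array>
        #include <atomic>
        #include <cstddef>
        #include <thread>

        // Fixed-size SPSC ring buffer standing in for the shared memory queue:
        // the accelerator-facing thread pushes intermediate data sets, the host
        // thread pops them, so the two layer subsets run concurrently on
        // different input data sets.
        template <typename T, std::size_t N>
        class SpscQueue {
            std::array<T, N> buf_;
            std::atomic<std::size_t> head_{0}, tail_{0};  // monotonic counters
        public:
            bool push(const T& v) {                        // producer side only
                std::size_t t = tail_.load(std::memory_order_relaxed);
                if (t - head_.load(std::memory_order_acquire) == N) return false;  // full
                buf_[t % N] = v;
                tail_.store(t + 1, std::memory_order_release);
                return true;
            }
            bool pop(T& v) {                               // consumer side only
                std::size_t h = head_.load(std::memory_order_relaxed);
                if (h == tail_.load(std::memory_order_acquire)) return false;      // empty
                v = buf_[h % N];
                head_.store(h + 1, std::memory_order_release);
                return true;
            }
        };

        int main() {
            SpscQueue<int, 4> q;           // stands in for the shared-memory queue
            std::thread accel([&] {        // first subset of layers (accelerator)
                for (int i = 0; i < 8; ++i)
                    while (!q.push(i)) std::this_thread::yield();
            });
            std::thread host([&] {         // second subset of layers (host CPU)
                for (int i = 0; i < 8; ++i) {
                    int v;
                    while (!q.pop(v)) std::this_thread::yield();
                }
            });
            accel.join(); host.join();
        }

    The bounded queue also provides natural backpressure: the accelerator stalls only when the host falls more than a queue's depth behind.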
