-
Publication Number: US12067484B2
Publication Date: 2024-08-20
Application Number: US16449264
Filing Date: 2019-06-21
Applicant: Xilinx, Inc.
Inventor: Yaman Umuroglu , Nicholas Fraser , Michaela Blott , Kristof Denolf , Kornelis A. Vissers
Abstract: An example method of training a neural network includes defining hardware building blocks (HBBs), neuron equivalents (NEQs), and conversion procedures from NEQs to HBBs; defining the neural network using the NEQs in a machine learning framework; training the neural network on a training platform; and converting the neural network as trained into a netlist of HBBs using the conversion procedures to convert the NEQs in the neural network to the HBBs of the netlist.
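The conversion described above can be sketched in miniature: a trained neuron with a small, quantized fan-in can be exhaustively enumerated into a truth table, which maps directly onto a lookup-table style hardware building block. The names (`neq_forward`, `neq_to_hbb`), the 2-bit quantization, and the toy weights below are illustrative assumptions, not the patented procedure.

```python
import itertools

def quantize(x, bits=2):
    # clamp to the representable unsigned range for the given bit width
    return max(0, min(x, (1 << bits) - 1))

def neq_forward(inputs, weights, bits=2):
    # a "neuron equivalent": quantized weighted sum + quantized activation
    acc = sum(w * x for w, x in zip(weights, inputs))
    return quantize(acc, bits)

def neq_to_hbb(weights, fan_in, bits=2):
    # exhaustively evaluate the NEQ over its (small) input space;
    # the resulting truth table is directly implementable as a LUT
    table = {}
    for combo in itertools.product(range(1 << bits), repeat=fan_in):
        table[combo] = neq_forward(combo, weights, bits)
    return table

lut = neq_to_hbb(weights=[1, 2], fan_in=2)
print(len(lut))  # 16 entries: (2^2)^2 input combinations
```

Because the inputs and activations are few-bit, the enumeration stays small; this is what makes the netlist-of-HBBs conversion tractable.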
-
Publication Number: US11615300B1
Publication Date: 2023-03-28
Application Number: US16007884
Filing Date: 2018-06-13
Applicant: Xilinx, Inc.
Inventor: Julian Faraone , Michaela Blott , Nicholas Fraser
Abstract: A neural network system includes an input layer, one or more hidden layers, and an output layer. A first layer circuit implements a first layer of the one or more hidden layers. The first layer includes a first weight space including one or more subgroups. A forward path circuit of the first layer circuit includes a multiply-and-accumulate circuit to receive an input from a layer preceding the first layer and provide a first subgroup weighted sum using the input and a first plurality of weights associated with a first subgroup. A scaling coefficient circuit provides a first scaling coefficient associated with the first subgroup and applies the first scaling coefficient to the first subgroup weighted sum to generate a first subgroup scaled weighted sum. An activation circuit generates an activation based on the first subgroup scaled weighted sum and provides the activation to a layer following the first layer.
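As a rough illustration of the forward path described above, the sketch below splits a layer's weights into subgroups, scales each subgroup's weighted sum by its own coefficient, and applies an activation to the combined result. All names, shapes, and values are illustrative assumptions, not the claimed circuit.

```python
import numpy as np

def subgroup_forward(x, weight_subgroups, scales):
    # one weighted sum per subgroup, each scaled by its own coefficient
    total = 0.0
    for w, alpha in zip(weight_subgroups, scales):
        total += alpha * np.dot(w, x)   # scaled subgroup weighted sum
    return np.maximum(total, 0.0)       # stand-in ReLU-style activation

x = np.array([1.0, 2.0, 3.0])
subgroups = [np.array([1.0, 0.0, -1.0]),   # e.g. coarsely quantized weights
             np.array([0.0, 1.0, 1.0])]
scales = [0.5, 2.0]
y = subgroup_forward(x, subgroups, scales)
print(y)  # 0.5*(1-3) + 2.0*(2+3) = 9.0
```

Per-subgroup scaling of this kind lets very coarse (e.g. binary or ternary) weights recover dynamic range without widening the multiply-and-accumulate path.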
-
Publication Number: US11934932B1
Publication Date: 2024-03-19
Application Number: US17094598
Filing Date: 2020-11-10
Applicant: XILINX, INC.
Inventor: Giulio Gambardella , Nicholas Fraser , Ussama Zahid , Michaela Blott , Kornelis A. Vissers
Abstract: Examples herein propose operating redundant ML models that have been trained using a boosting technique that accounts for hardware faults. The embodiments herein describe an evaluation process in which the performance of a first ML model is measured in the presence of a hardware fault. The errors introduced by the hardware fault are then used to train a second ML model. In one embodiment, a second evaluation process measures the combined performance of the first and second trained ML models in the presence of a hardware fault, and the resulting errors are used when training a third ML model. In this manner, all three ML models are trained to be error aware. As a result, if a hardware fault occurs during operation, the three ML models perform better than three ML models that were not trained to be error aware.
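The boosting loop described above can be sketched with toy threshold "models": after each model is trained, a simulated hardware fault perturbs its predictions, and the resulting errors re-weight the samples used to train the next model. The fault model, the threshold classifier, and all parameters below are illustrative assumptions, not the patented method.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=200)
y = (X > 0).astype(int)

def train_threshold(X, y, sample_weight):
    # pick the threshold minimizing weighted error (toy stand-in "model")
    candidates = np.linspace(-2, 2, 81)
    errs = [np.sum(sample_weight * ((X > t).astype(int) != y))
            for t in candidates]
    return candidates[int(np.argmin(errs))]

def predict_with_fault(X, t, fault_mask):
    pred = (X > t).astype(int)
    pred[fault_mask] ^= 1            # simulated stuck-bit flips the output
    return pred

weights = np.ones_like(X) / len(X)
models = []
for _ in range(3):
    t = train_threshold(X, y, weights)
    models.append(t)
    fault_mask = rng.random(len(X)) < 0.05   # injected hardware fault
    errors = predict_with_fault(X, t, fault_mask) != y
    weights = np.where(errors, weights * 2, weights)  # emphasize errors
    weights /= weights.sum()

# majority vote across the three error-aware models
votes = sum((X > t).astype(int) for t in models)
final = (votes >= 2).astype(int)
print((final == y).mean())
```

The key difference from ordinary boosting is that the re-weighting errors come from fault-injected predictions, so later models learn to compensate for the ensemble's behavior under faults.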
-
Publication Number: US20190080223A1
Publication Date: 2019-03-14
Application Number: US15705033
Filing Date: 2017-09-14
Applicant: Xilinx, Inc.
Inventor: Nicholas Fraser , Michaela Blott
Abstract: A neural network system includes an input layer, one or more hidden layers, and an output layer. The input layer receives a training set including a sequence of batches and provides to its following layer output activations associated respectively with the sequence of batches. A first hidden layer receives, from its preceding layer, a first input activation associated with a first batch, receives a first input gradient associated with a second batch preceding the first batch, and provides, to its following layer, a first output activation associated with the first batch based on the first input activation and the first input gradient. The first and second batches are separated by a delay factor of at least two batches. The output layer receives, from its preceding layer, a second input activation and provides, to its preceding layer, a first output gradient based on the second input activation and the training set.
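The pipelining idea above can be sketched as follows: while the layer forwards the activation for batch t, it consumes a gradient belonging to a batch that is DELAY batches older, so forward and backward work on different batches overlap in time. The toy linear layer, the stand-in gradient, and the delay of 2 are illustrative assumptions, not the claimed system.

```python
from collections import deque

DELAY = 2

def forward(w, x):
    return w * x                     # toy one-weight linear "layer"

w = 0.5
pending = deque()                    # activations awaiting their gradient
for t, x in enumerate([1.0, 2.0, 3.0, 4.0, 5.0]):
    act = forward(w, x)              # forward pass for the current batch
    pending.append((t, x))
    if len(pending) > DELAY:
        # the gradient for batch t - DELAY arrives only now
        tb, xb = pending.popleft()
        grad = 0.1 * xb              # stand-in for a real gradient signal
        w -= 0.01 * grad             # update uses the delayed batch
        print(f"forward batch {t}, update from batch {tb}")
```

Because updates lag the forward pass by a fixed delay factor, no layer ever stalls waiting for the full backward pass to finish, which is the basis of the pipelined training scheme.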