    ACCELERATING LONG SHORT-TERM MEMORY NETWORKS VIA SELECTIVE PRUNING

    Publication Number: US20200234089A1

    Publication Date: 2020-07-23

    Application Number: US16844572

    Application Date: 2020-04-09

    Abstract: A system and method for pruning. A neural network includes a plurality of long short-term memory cells, each of which includes an input having a weight matrix Wc, an input gate having a weight matrix Wi, a forget gate having a weight matrix Wf, and an output gate having a weight matrix Wo. In some embodiments, after initial training, one or more of the weight matrices Wi, Wf, and Wo are pruned, and the weight matrix Wc is left unchanged. The neural network is then retrained, the pruned weights being constrained to remain zero during retraining.
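    A minimal sketch of the selective-pruning idea described in the abstract, written in plain NumPy with illustrative names (magnitude_prune, selectively_prune_lstm) and an assumed 80% sparsity target: the gate matrices Wi, Wf, and Wo are pruned by magnitude, Wc is kept dense, and the resulting masks are reapplied after every retraining step so pruned weights remain zero. This is an assumption-laden illustration, not the patented implementation.

```python
# Sketch only: selective magnitude pruning of one LSTM cell's weight matrices.
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float):
    """Zero out the smallest-magnitude entries of w; return (pruned_w, mask)."""
    k = int(sparsity * w.size)
    if k == 0:
        return w, np.ones_like(w, dtype=bool)
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    mask = np.abs(w) > threshold
    return w * mask, mask

def selectively_prune_lstm(weights: dict, sparsity: float = 0.8):
    """Prune the gate matrices Wi, Wf, Wo; leave the cell-input matrix Wc dense."""
    masks = {}
    for name in ("Wi", "Wf", "Wo"):                          # gate matrices: pruned
        weights[name], masks[name] = magnitude_prune(weights[name], sparsity)
    masks["Wc"] = np.ones_like(weights["Wc"], dtype=bool)    # Wc left unchanged
    return weights, masks

def reapply_masks(weights: dict, masks: dict):
    """Call after each retraining step so pruned weights stay exactly zero."""
    for name, mask in masks.items():
        weights[name] *= mask
    return weights
```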

    Jointly pruning and quantizing deep neural networks

    Publication Number: US11475308B2

    Publication Date: 2022-10-18

    Application Number: US16396619

    Application Date: 2019-04-26

    Abstract: A system and a method generate a neural network that includes at least one layer having weights and output feature maps that have been jointly pruned and quantized. The weights of the layer are pruned using an analytic threshold function. Each weight remaining after pruning is quantized based on a weighted average of a quantization and dequantization of the weight for all quantization levels to form quantized weights for the layer. Output feature maps of the layer are generated based on the quantized weights of the layer. Each output feature map of the layer is quantized based on a weighted average of a quantization and dequantization of the output feature map for all quantization levels. Parameters of the analytic threshold function, the weighted average of all quantization levels of the weights and the weighted average of each output feature map of the layer are updated using a cost function.
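    A minimal sketch of the two differentiable building blocks the abstract combines: an analytic (smooth) threshold for pruning and a weighted-average ("soft") quantizer over all quantization levels. The sigmoid-shaped threshold, the softmax weighting, and the 4-bit level grid are assumptions for illustration, not the patent's exact formulas.

```python
# Sketch only: smooth pruning gate plus soft quantization over all levels,
# so both stay differentiable and their parameters can be updated by the cost.
import numpy as np

def analytic_threshold(w, alpha=10.0, beta=0.05):
    """Smooth gate: ~0 for |w| << beta, ~1 for |w| >> beta (alpha = sharpness)."""
    return 1.0 / (1.0 + np.exp(-alpha * (np.abs(w) - beta)))

def soft_quantize(x, levels, temperature=0.1):
    """Weighted average of the dequantized values of all quantization levels.

    Each level's weight is a softmax of its negative distance to x, so the
    result approaches hard quantization as temperature -> 0.
    """
    levels = np.asarray(levels, dtype=np.float64)
    dist = -np.abs(x[..., None] - levels) / temperature       # (..., n_levels)
    dist -= dist.max(axis=-1, keepdims=True)                  # numerical stability
    p = np.exp(dist)
    p /= p.sum(axis=-1, keepdims=True)
    return (p * levels).sum(axis=-1)

# Joint application to a layer's weights: prune softly, then quantize softly.
w = np.random.randn(64, 32) * 0.1
levels = np.linspace(-0.5, 0.5, 16)                           # assumed 4-bit grid
w_quant = soft_quantize(w * analytic_threshold(w), levels)
# The same soft_quantize can be applied to the layer's output feature maps.
```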

    Self-pruning neural networks for weight parameter reduction

    Publication Number: US11250325B2

    Publication Date: 2022-02-15

    Application Number: US15894921

    Application Date: 2018-02-12

    Abstract: A technique to prune weights of a neural network using an analytic threshold function h(w) provides a neural network having weights that have been optimally pruned. The neural network includes a plurality of layers in which each layer includes a set of weights w associated with the layer that enhance a speed performance of the neural network, an accuracy of the neural network, or a combination thereof. Each set of weights is based on a cost function C that has been minimized by back-propagating an output of the neural network in response to input training data. The cost function C is also minimized based on a derivative of the cost function C with respect to a first parameter of the analytic threshold function h(w) and on a derivative of the cost function C with respect to a second parameter of the analytic threshold function h(w).
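    A minimal sketch of how the threshold parameters can be trained, assuming a sigmoid-shaped h(w) = sigmoid(alpha * (|w| - beta)) with a learnable sharpness alpha and cut-off beta (the abstract does not disclose the exact form of h). Because h is smooth, the derivatives of the cost C with respect to alpha and beta exist and can be accumulated by the chain rule during back-propagation.

```python
# Sketch only: analytic threshold h(w) and chain-rule gradients of the cost
# with respect to its two parameters, given dC/dw_eff from back-propagation,
# where the effective (self-pruned) weight is w_eff = w * h(w).
import numpy as np

def h(w, alpha, beta):
    return 1.0 / (1.0 + np.exp(-alpha * (np.abs(w) - beta)))

def dh_dalpha(w, alpha, beta):
    s = h(w, alpha, beta)
    return s * (1.0 - s) * (np.abs(w) - beta)

def dh_dbeta(w, alpha, beta):
    s = h(w, alpha, beta)
    return s * (1.0 - s) * (-alpha)

def threshold_param_grads(w, dC_dweff, alpha, beta):
    """dC/dalpha and dC/dbeta for one layer, via the chain rule through w_eff."""
    dC_dalpha = np.sum(dC_dweff * w * dh_dalpha(w, alpha, beta))
    dC_dbeta = np.sum(dC_dweff * w * dh_dbeta(w, alpha, beta))
    return dC_dalpha, dC_dbeta
```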

    Accelerating long short-term memory networks via selective pruning

    Publication Number: US10657426B2

    Publication Date: 2020-05-19

    Application Number: US15937558

    Application Date: 2018-03-27

    Abstract: A system and method for pruning. A neural network includes a plurality of long short-term memory cells, each of which includes an input having a weight matrix Wc, an input gate having a weight matrix Wi, a forget gate having a weight matrix Wf, and an output gate having a weight matrix Wo. In some embodiments, after initial training, one or more of the weight matrices Wi, Wf, and Wo are pruned, and the weight matrix Wc is left unchanged. The neural network is then retrained, the pruned weights being constrained to remain zero during retraining.

    ACCELERATING LONG SHORT-TERM MEMORY NETWORKS VIA SELECTIVE PRUNING

    Publication Number: US20190228274A1

    Publication Date: 2019-07-25

    Application Number: US15937558

    Application Date: 2018-03-27

    Abstract: A system and method for pruning. A neural network includes a plurality of long short-term memory cells, each of which includes an input having a weight matrix Wc, an input gate having a weight matrix Wi, a forget gate having a weight matrix Wf, and an output gate having a weight matrix Wo. In some embodiments, after initial training, one or more of the weight matrices Wi, Wf, and Wo are pruned, and the weight matrix Wc is left unchanged. The neural network is then retrained, the pruned weights being constrained to remain zero during retraining.

    Lossless compression of neural network weights

    Publication Number: US11588499B2

    Publication Date: 2023-02-21

    Application Number: US16223105

    Application Date: 2018-12-17

    Abstract: A system and a method provide compression and decompression of weights of a layer of a neural network. For compression, the values of the weights are pruned and the weights of a layer are configured as a tensor having a tensor size of H×W×C in which H represents a height of the tensor, W represents a width of the tensor, and C represents a number of channels of the tensor. The tensor is formatted into at least one block of values. Each block is encoded independently from other blocks of the tensor using at least one lossless compression mode. For decoding, each block is decoded independently from other blocks using at least one decompression mode corresponding to the at least one compression mode used to compress the block; and deformatted into a tensor having the size of H×W×C.
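    A minimal sketch of block-wise compression of an H×W×C weight tensor, assuming a fixed block size and a single lossless mode (a zero bitmap plus the nonzero values, which suits pruned weights); the patent describes choosing among compression modes per block, which is not reproduced here.

```python
# Sketch only: format a weight tensor into fixed-size blocks, encode each
# block independently and losslessly, then decode and deformat it back.
import numpy as np

BLOCK = 64  # values per block (assumed)

def format_blocks(tensor: np.ndarray):
    """Flatten an H x W x C tensor into fixed-size blocks, padding the tail."""
    flat = tensor.reshape(-1)
    pad = (-flat.size) % BLOCK
    flat = np.concatenate([flat, np.zeros(pad, dtype=flat.dtype)])
    return flat.reshape(-1, BLOCK), tensor.shape, pad

def encode_block(block: np.ndarray):
    """Lossless zero-run mode: (bitmap of nonzero positions, nonzero values)."""
    mask = block != 0
    return np.packbits(mask), block[mask]

def decode_block(bitmap: np.ndarray, values: np.ndarray):
    mask = np.unpackbits(bitmap)[:BLOCK].astype(bool)
    block = np.zeros(BLOCK, dtype=values.dtype)
    block[mask] = values
    return block

def compress(tensor):
    blocks, shape, pad = format_blocks(tensor)
    return [encode_block(b) for b in blocks], shape, pad

def decompress(encoded, shape, pad):
    flat = np.concatenate([decode_block(bm, v) for bm, v in encoded])
    flat = flat[:flat.size - pad] if pad else flat
    return flat.reshape(shape)            # deformat back to H x W x C
```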

    Accelerating long short-term memory networks via selective pruning

    Publication Number: US11151428B2

    Publication Date: 2021-10-19

    Application Number: US16844572

    Application Date: 2020-04-09

    Abstract: A system and method for pruning. A neural network includes a plurality of long short-term memory cells, each of which includes an input having a weight matrix Wc, an input gate having a weight matrix Wi, a forget gate having a weight matrix Wf, and an output gate having a weight matrix Wo. In some embodiments, after initial training, one or more of the weight matrices Wi, Wf, and Wo are pruned, and the weight matrix Wc is left unchanged. The neural network is then retrained, the pruned weights being constrained to remain zero during retraining.
