Universal Loss-Error-Aware Quantization for Deep Neural Networks with Flexible Ultra-Low-Bit Weights and Activations

    Publication Number: US20220129759A1

    Publication Date: 2022-04-28

    Application Number: US17441622

    Application Date: 2019-06-26

Abstract: Apparatuses, methods, and GPUs are disclosed for universal loss-error-aware quantization (ULQ) of a neural network (NN). In one example, an apparatus includes data storage to store data including activation sets and weight sets, and a network processor coupled to the data storage. The network processor is configured to implement the ULQ by constraining a low-precision NN model based on a full-precision NN model, performing a loss-error-aware activation quantization to quantize the activation sets into ultra-low-bit versions with given bit-width values, optimizing the NN with respect to a loss function that is based on the full-precision NN model, and performing a loss-error-aware weight quantization to quantize the weight sets into ultra-low-bit versions.
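To illustrate the general idea of loss-error-aware quantization at flexible ultra-low bit-widths, the following is a minimal sketch, not the patented method: it assumes symmetric uniform quantization and uses gradient-weighted squared error as a stand-in "loss-error" criterion when searching for a quantization scale. The function names (quantize_uniform, loss_aware_scale_search) and the gradient proxy are hypothetical and introduced only for this example.

# Minimal, illustrative sketch (assumptions, not the patented ULQ method):
# symmetric uniform quantization to a given ultra-low bit-width, with the
# scale chosen to minimize a loss-aware error proxy g**2 * (w - q(w))**2,
# where g stands in for per-weight gradients of the full-precision loss.
import numpy as np

def quantize_uniform(w: np.ndarray, scale: float, bits: int) -> np.ndarray:
    """Symmetric uniform quantization of w to the given bit-width."""
    qmax = 2 ** (bits - 1) - 1                       # e.g. 1 for 2-bit, 3 for 3-bit
    codes = np.clip(np.round(w / scale), -qmax, qmax)  # integer codes
    return codes * scale                              # dequantized values

def loss_aware_scale_search(w, g, bits, num_candidates=100):
    """Pick the scale minimizing the gradient-weighted quantization error."""
    best_scale, best_err = None, np.inf
    max_abs = np.abs(w).max()
    for s in np.linspace(max_abs / num_candidates, max_abs, num_candidates):
        w_q = quantize_uniform(w, s, bits)
        err = np.sum((g ** 2) * (w - w_q) ** 2)       # loss-aware error proxy
        if err < best_err:
            best_scale, best_err = s, err
    return best_scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=1024)          # full-precision weights
    g = rng.normal(size=1024)          # stand-in loss gradients (hypothetical)
    for bits in (2, 3, 4):             # flexible ultra-low bit-widths
        s = loss_aware_scale_search(w, g, bits)
        w_q = quantize_uniform(w, s, bits)
        err = np.sum((g ** 2) * (w - w_q) ** 2)
        print(f"{bits}-bit: scale={s:.4f}, weighted error={err:.4f}")

The same weighting idea can be applied per layer to activations, with the bit-width supplied as a parameter; the claimed apparatus additionally alternates such quantization steps with optimization of the NN against a loss function derived from the full-precision model.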
