STOCHASTIC ROUNDING OF NUMERICAL VALUES
    Invention Application

    Publication Number: US20190377549A1

    Publication Date: 2019-12-12

    Application Number: US16001838

    Application Date: 2018-06-06

    Abstract: A method, computer readable medium, and system are disclosed for rounding numerical values. A set of bits from an input value is identified as a rounding value. A second set of bits representing a second value is extracted from the input value and added with the rounding value to produce a sum. The sum is truncated to produce the rounded output value. Thus, the present invention provides a stochastic rounding technique that rounds up an input value as a function of a second value and a rounding value, both of which are obtained from the input value. When the second value and the rounding value are obtained from consistent bit locations of the input value, the resulting output value is deterministic. This deterministic form of stochastic rounding is advantageously applicable in deep learning applications.
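
    The rounding step above can be made concrete with a short sketch. The following Python function assumes fixed-point integer inputs and, purely as an illustrative choice, takes the rounding value from the bits just above the field being truncated; the abstract only requires that the rounding value come from consistent bit locations of the input, so both the bit selection and the function name are assumptions here.

```python
def stochastic_round(x: int, k: int) -> int:
    """Drop the k low-order bits of a non-negative fixed-point
    integer x using input-derived stochastic rounding.

    The rounding value is pulled from bits of x itself (here,
    illustratively, bits k..2k-1), added to the input, and the sum
    is truncated. Because the source bits are fixed, the result is
    deterministic for any given input.
    """
    mask = (1 << k) - 1
    rounding_value = (x >> k) & mask  # input-derived rounding value (assumed bit field)
    total = x + rounding_value        # add rounding value to the input
    return total >> k                 # truncate the k low bits

# Example: 0b101101 (45) with k=2 keeps 0b1011 and may round up.
# Here the rounding value is 0b11, so 45 + 3 = 48 and 48 >> 2 = 12.
```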

    TRANSPOSED SPARSE MATRIX MULTIPLY BY DENSE MATRIX FOR NEURAL NETWORK TRAINING

    Publication Number: US20250148286A1

    Publication Date: 2025-05-08

    Application Number: US18740361

    Application Date: 2024-06-11

    Inventor: Hao Wu

    Abstract: Machine learning systems that implement neural networks typically operate in an inference mode or a training mode. In the training mode, inference operations are performed to help guide the training process. Inference mode operation typically involves forward propagation and intensive access to certain sparse matrices, encoded as a set of vectors. Backpropagation and intensive access to transposed versions of the same sparse matrices provide training refinements. Generating a transposed version of a sparse matrix can consume significant additional memory and computation resources. In one embodiment, two additional encoding vectors are generated, providing efficient operations on sparse matrices and also on transposed representations of the same sparse matrices. In a neural network, the efficient operations can reduce the amount of memory needed for backpropagation and reduce power consumption.
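
    One plausible reading of the two additional encoding vectors, sketched below in Python with NumPy: starting from a CSR encoding (row pointers, column indices, values), a column-pointer vector and a permutation vector together expose the transpose without materializing it. The vector names and this exact pair are assumptions; the abstract only states that two additional encoding vectors are generated.

```python
import numpy as np

def build_transpose_vectors(indptr, indices):
    """Build two extra vectors for a CSR matrix (indptr, indices, data):

    t_indptr - column-pointer array, i.e. CSC-style row starts of A^T
    perm     - permutation mapping column-sorted nonzero order back to
               positions in the original CSR data array

    Both names and this particular encoding are illustrative.
    """
    n_cols = int(indices.max()) + 1 if len(indices) else 0
    counts = np.bincount(indices, minlength=n_cols)      # nonzeros per column
    t_indptr = np.concatenate(([0], np.cumsum(counts)))  # prefix sums -> pointers
    perm = np.argsort(indices, kind="stable")            # column-sorted order
    return t_indptr, perm
```

    The original values array is never copied or reordered; the transpose is reached indirectly through perm, which is where the memory saving would come from.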

    LOSS-SCALING FOR DEEP NEURAL NETWORK TRAINING WITH REDUCED PRECISION

    Publication Number: US20240078433A1

    Publication Date: 2024-03-07

    Application Number: US18385871

    Application Date: 2023-10-31

    CPC classification number: G06N3/084 G06N3/04 G06N3/063

    Abstract: In training a deep neural network using reduced precision, gradient computation operates on larger values without affecting the rest of the training procedure. One technique trains the deep neural network to develop loss, scales the loss, computes gradients at a reduced precision, and reduces the magnitude of the computed gradients to compensate for scaling of the loss. In one example non-limiting arrangement, the training forward pass scales a loss value by some factor S and the weight update reduces the weight gradient contribution by 1/S. Several techniques can be used for selecting scaling factor S and adjusting the weight update.
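
    A minimal sketch of one such arrangement in Python with PyTorch, assuming model, optimizer, and loss_fn are supplied by the caller; S = 1024 is an illustrative constant, not a value prescribed by the patent.

```python
import torch

S = 1024.0  # illustrative scaling factor; its selection is left open

def train_step(model, optimizer, loss_fn, inputs, targets):
    """One reduced-precision training step with loss scaling.

    The forward pass scales the loss by S so small gradient values
    survive the limited dynamic range of reduced-precision formats;
    the weight update multiplies gradients by 1/S to compensate.
    """
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    (loss * S).backward()                 # gradients computed on the scaled loss
    for p in model.parameters():
        if p.grad is not None:
            p.grad.mul_(1.0 / S)          # undo the scaling before the update
    optimizer.step()
    return loss.item()
```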

    Loss-scaling for deep neural network training with reduced precision

    Publication Number: US11842280B2

    Publication Date: 2023-12-12

    Application Number: US15971884

    Application Date: 2018-05-04

    CPC classification number: G06N3/084 G06N3/04 G06N3/063

    Abstract: In training a deep neural network using reduced precision, gradient computation operates on larger values without affecting the rest of the training procedure. One technique trains the deep neural network to develop loss, scales the loss, computes gradients at a reduced precision, and reduces the magnitude of the computed gradients to compensate for scaling of the loss. In one example non-limiting arrangement, the training forward pass scales a loss value by some factor S and the weight update reduces the weight gradient contribution by 1/S. Several techniques can be used for selecting scaling factor S and adjusting the weight update.
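
    For the open question of how to select S, one widely used technique is dynamic adjustment: back off when scaled gradients overflow and grow again after a run of clean steps. The sketch below is a generic illustration of that idea, not the patent's prescribed procedure; all constants are assumptions.

```python
class DynamicLossScaler:
    """Pick S adaptively: halve it whenever the scaled gradients
    overflow (become inf/NaN) and double it after a stretch of
    overflow-free steps. Initial scale and growth interval are
    illustrative constants.
    """
    def __init__(self, init_scale=2.0 ** 15, growth_interval=2000):
        self.scale = init_scale
        self.growth_interval = growth_interval
        self._good_steps = 0

    def update(self, grads_finite: bool) -> float:
        if grads_finite:
            self._good_steps += 1
            if self._good_steps >= self.growth_interval:
                self.scale *= 2.0                    # grow S cautiously
                self._good_steps = 0
        else:
            self.scale = max(self.scale / 2.0, 1.0)  # back off on overflow
            self._good_steps = 0                     # (and skip this update)
        return self.scale
```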

    TRAINING A NEURAL NETWORK USING SELECTIVE WEIGHT UPDATES

    Publication Number: US20200380369A1

    Publication Date: 2020-12-03

    Application Number: US16428760

    Application Date: 2019-05-31

    Inventors: Carl Case, Hao Wu

    Abstract: Training one or more neural networks using selective updates to weight information of the one or more neural networks. In at least one embodiment, one or more neural networks are trained by at least updating one or more portions of weight information of the one or more neural networks based, at least in part, on metadata that indicates how recently the one or more portions of weight information have been updated.
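
    A rough sketch of the recency gating in Python with NumPy: per-block metadata records the training step at which each block of weights was last updated, and a block is refreshed only once it has gone stale. The blocking scheme, the threshold, and all names are assumptions added for illustration.

```python
import numpy as np

class SelectiveUpdater:
    """Gate weight updates on recency metadata.

    last_updated holds, per weight block, the step of its most recent
    update; blocks_to_update returns only the blocks whose staleness
    meets an (assumed) threshold.
    """
    def __init__(self, num_blocks, max_staleness=4):
        self.last_updated = np.zeros(num_blocks, dtype=np.int64)  # the metadata
        self.max_staleness = max_staleness

    def blocks_to_update(self, step):
        stale = (step - self.last_updated) >= self.max_staleness
        return np.flatnonzero(stale)          # indices of blocks to refresh

    def mark_updated(self, block_ids, step):
        self.last_updated[block_ids] = step   # record the update in metadata
```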

    Transposed sparse matrix multiply by dense matrix for neural network training

    Publication Number: US12008475B2

    Publication Date: 2024-06-11

    Application Number: US16191201

    Application Date: 2018-11-14

    Inventor: Hao Wu

    CPC classification number: G06N3/084

    Abstract: Machine learning systems that implement neural networks typically operate in an inference mode or a training mode. In the training mode, inference operations are performed to help guide the training process. Inference mode operation typically involves forward propagation and intensive access to certain sparse matrices, encoded as a set of vectors. Backpropagation and intensive access to transposed versions of the same sparse matrices provide training refinements. Generating a transposed version of a sparse matrix can consume significant additional memory and computation resources. In one embodiment, two additional encoding vectors are generated, providing efficient operations on sparse matrices and also on transposed representations of the same sparse matrices. In a neural network, the efficient operations can reduce the amount of memory needed for backpropagation and reduce power consumption.
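
    Continuing the hypothetical encoding sketched earlier (build_transpose_vectors), the transposed product A^T X can then be computed directly from the original CSR arrays plus the two extra vectors, without ever materializing A^T:

```python
import numpy as np

def transposed_spmm(indptr, indices, data, t_indptr, perm, X):
    """Compute A^T @ X for a CSR-encoded A using the two extra
    encoding vectors; a sketch paired with build_transpose_vectors
    above, with all names assumed for illustration.
    """
    n_rows = len(indptr) - 1
    n_cols = len(t_indptr) - 1
    # Row index of each stored nonzero, expanded from the row pointers.
    rows = np.repeat(np.arange(n_rows), np.diff(indptr))
    Y = np.zeros((n_cols, X.shape[1]), dtype=X.dtype)
    for col in range(n_cols):
        # perm selects this column's nonzeros inside the original data array.
        for p in perm[t_indptr[col]:t_indptr[col + 1]]:
            Y[col] += data[p] * X[rows[p]]
    return Y
```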

    Stochastic rounding of numerical values

    Publication Number: US10684824B2

    Publication Date: 2020-06-16

    Application Number: US16001838

    Application Date: 2018-06-06

    Abstract: A method, computer readable medium, and system are disclosed for rounding numerical values. A set of bits from an input value is identified as a rounding value. A second set of bits representing a second value is extracted from the input value and added with the rounding value to produce a sum. The sum is truncated to produce the rounded output value. Thus, the present invention provides a stochastic rounding technique that rounds up an input value as a function of a second value and a rounding value, both of which are obtained from the input value. When the second value and the rounding value are obtained from consistent bit locations of the input value, the resulting output value is deterministic. This deterministic form of stochastic rounding is advantageously applicable in deep learning applications.
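
    As a floating-point illustration of the same idea, the sketch below rounds FP32 to a bfloat16 bit pattern. The patent leaves the choice of source bits open; here, as an assumption, the 16 mantissa bits about to be discarded are bit-reversed to serve as the rounding value, which keeps the result deterministic per input while making the round-up probability track the discarded fraction.

```python
import struct

def f32_to_bf16_stochastic(x: float) -> int:
    """Round an FP32 value to a bfloat16 bit pattern using
    input-derived stochastic rounding; returns the 16-bit pattern.
    """
    bits = struct.unpack("<I", struct.pack("<f", x))[0]  # raw FP32 bits
    dropped = bits & 0xFFFF                              # bits to be truncated
    rounding_value = int(f"{dropped:016b}"[::-1], 2)     # derived from the input (assumed choice)
    total = bits + rounding_value                        # add, then truncate
    return (total >> 16) & 0xFFFF                        # top 16 bits = bfloat16
```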
