DEEP LEARNING ACCELERATION OF PHYSICS-BASED MODELING

    Publication No.: US20210319312A1

    Publication Date: 2021-10-14

    Application No.: US17007489

    Filing Date: 2020-08-31

    Abstract: Values of physical variables that represent a first state of a first physical system are estimated using a deep learning (DL) algorithm that is trained based on values of physical variables that represent states of other physical systems that are determined by one or more physical equations and subject to one or more conservation laws. A physics-based model modifies the estimated values based on the one or more physical equations so that the resulting modified values satisfy the one or more conservation laws.
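The correction step described above can be sketched as a simple post-hoc projection: a fast (here, stand-in) DL estimate is rescaled so that a conserved quantity matches its known value. The `dl_estimate` function and the choice of a simple sum as the conserved quantity are illustrative assumptions, not the patented method.

```python
import numpy as np

def dl_estimate(state):
    """Stand-in for a trained DL model's prediction of the next state.
    (Hypothetical: a real system would run a trained network here.)"""
    return state * 1.02 + 0.01  # fast, but slightly violates conservation

def enforce_conservation(estimate, conserved_total):
    """Physics-based correction: rescale the estimated values so the
    conserved quantity (here, a simple sum, e.g. total mass) matches
    the known total required by the conservation law."""
    return estimate * (conserved_total / estimate.sum())

state = np.array([1.0, 2.0, 3.0])
conserved_total = state.sum()              # quantity that must be conserved
raw = dl_estimate(state)                   # DL estimate of the next state
corrected = enforce_conservation(raw, conserved_total)
assert np.isclose(corrected.sum(), conserved_total)
```

A multiplicative rescaling is only one possible projection; the abstract leaves the exact physics-based modification to the one or more physical equations of the system.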

    DROPOUT FOR ACCELERATED DEEP LEARNING IN HETEROGENEOUS ARCHITECTURES

    Publication No.: US20200097822A1

    Publication Date: 2020-03-26

    Application No.: US16141648

    Filing Date: 2018-09-25

    Inventor: Abhinav VISHNU

    Abstract: A heterogeneous processing system includes at least one central processing unit (CPU) core and at least one graphics processing unit (GPU) core. The heterogeneous processing system is configured to compute an activation for each one of a plurality of neurons for a first network layer of a neural network. The heterogeneous processing system randomly drops a first subset of the plurality of neurons for the first network layer and keeps a second subset of the plurality of neurons for the first network layer. Activation for each one of the second subset of the plurality of neurons is forwarded to the CPU core and coalesced to generate a set of coalesced activation sub-matrices.
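The drop-and-coalesce step can be illustrated in NumPy: after randomly dropping a subset of neurons, the surviving activation rows are packed into a dense sub-matrix so later work operates on a smaller, contiguous operand. This is a minimal single-device sketch; the CPU/GPU split and the exact coalescing scheme are left abstract here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Activations for one layer: rows are neurons, columns are batch samples.
activations = rng.standard_normal((8, 4))

# Randomly drop a first subset of neurons and keep the second subset.
keep_mask = rng.random(activations.shape[0]) >= 0.5   # ~50% dropout
kept = np.flatnonzero(keep_mask)

# Coalesce the kept rows into a dense activation sub-matrix, as would be
# forwarded to the CPU core in the heterogeneous system described above.
coalesced = activations[kept]
assert coalesced.shape == (kept.size, activations.shape[1])
```

Coalescing matters because a dense sub-matrix keeps subsequent matrix multiplies on contiguous memory rather than masked or sparse operands.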

    DYNAMIC PRECISION SCALING AT EPOCH GRANULARITY IN NEURAL NETWORKS

    Publication No.: US20200151573A1

    Publication Date: 2020-05-14

    Application No.: US16425403

    Filing Date: 2019-05-29

    Abstract: A processor determines losses of samples within an input volume that is provided to a neural network during a first epoch, groups the samples into subsets based on the losses, and assigns the subsets to operands in the neural network that represent the samples at different precisions. Each subset is associated with a different precision. The processor then processes the subsets in the neural network at the different precisions during the first epoch. In some cases, the samples in the subsets are used in a forward pass and a backward pass through the neural network. A memory is configured to store information representing the samples in the subsets at the different precisions. In some cases, the processor stores information representing model parameters of the neural network in the memory at the different precisions of the subsets of the corresponding samples.
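The loss-based grouping can be sketched as follows: samples whose losses are above a threshold stay at full precision, while low-loss samples are cast down. The median threshold and the float32/float16 pairing are illustrative assumptions; the abstract does not fix a particular grouping rule or precision set.

```python
import numpy as np

# Per-sample losses from the most recent epoch, and the samples themselves.
losses = np.array([0.9, 0.05, 0.4, 0.02, 0.7, 0.1])
samples = np.random.default_rng(1).standard_normal((6, 3)).astype(np.float64)

# Group samples by loss: high-loss samples keep higher precision,
# low-loss samples are represented at reduced precision.
threshold = np.median(losses)           # hypothetical grouping rule
high = losses >= threshold
subsets = {
    np.float32: samples[high],          # hypothetical precision assignment
    np.float16: samples[~high],
}
# Cast each subset to its assigned precision for this epoch's passes.
cast = {dtype: arr.astype(dtype) for dtype, arr in subsets.items()}
assert cast[np.float16].dtype == np.float16
```

Because the grouping happens at epoch granularity, the subsets (and hence each sample's precision) can change from one epoch to the next as losses evolve.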

    ADAPTIVE BATCH REUSE ON DEEP MEMORIES
    Type: Invention Application

    Publication No.: US20200151510A1

    Publication Date: 2020-05-14

    Application No.: US16424115

    Filing Date: 2019-05-28

    Inventor: Abhinav VISHNU

    Abstract: A method of adaptive batch reuse includes prefetching, from a CPU to a GPU, a first plurality of mini-batches comprising a subset of a training dataset. The GPU trains a neural network for the current epoch by reusing, without discard, the first plurality of mini-batches based on a reuse count value. The GPU also runs a validation set to identify a validation error for the current epoch. If the validation error for the current epoch is less than the validation error of the previous epoch, the reuse count value is incremented for the next epoch. However, if the validation error for the current epoch is greater than the validation error of the previous epoch, the reuse count value is decremented for the next epoch.
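The reuse-count update rule described above can be sketched as a small controller. The floor of 1 (never reusing fewer than once) and the tie-handling behavior are assumptions not specified in the abstract.

```python
def update_reuse_count(reuse_count, val_err, prev_val_err):
    """Adjust how many times prefetched mini-batches are reused next epoch:
    increment when validation error improved, decrement when it worsened.
    The floor of 1 and the no-change tie case are illustrative assumptions."""
    if val_err < prev_val_err:
        return reuse_count + 1          # improving: reuse batches more
    if val_err > prev_val_err:
        return max(1, reuse_count - 1)  # worsening: reuse batches less
    return reuse_count                  # unchanged: keep current count

assert update_reuse_count(2, 0.10, 0.12) == 3  # improved -> reuse more
assert update_reuse_count(2, 0.15, 0.12) == 1  # worsened -> reuse less
```

Higher reuse counts amortize the CPU-to-GPU prefetch cost over more training steps, at the risk of overfitting to the resident mini-batches, which is what the validation-error feedback guards against.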
