SHAPING A NEURAL NETWORK ARCHITECTURE UTILIZING LEARNABLE SAMPLING LAYERS

    Publication number: US20210241111A1

    Publication date: 2021-08-05

    Application number: US16782793

    Filing date: 2020-02-05

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to shaping the architecture of a neural network. For example, the disclosed systems can provide a neural network shaping mechanism for at least one sampling layer of a neural network. The neural network shaping mechanism can include a learnable scaling factor between a sampling rate of the at least one sampling layer and an additional sampling function. The disclosed systems can learn the scaling factor based on a dataset while jointly learning the network weights of the neural network. Based on the learned scaling factor, the disclosed systems can shape the architecture of the neural network by modifying the sampling rate of the at least one sampling layer.
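The abstract above describes a learnable scaling factor that blends a sampling layer's output with an alternative path, trained jointly with the network weights, after which the learned factor determines the layer's sampling rate. Below is a minimal, dependency-free sketch of that idea on a toy 1-D signal. The sigmoid gate, the stride-2 average pool, and the finite-difference training loop are all illustrative choices for this sketch, not the patented implementation.

```python
import math

def avg_pool2_upsample(x):
    # Stride-2 average pool, then nearest-neighbor upsample back to len(x)
    # so both paths of the blend have matching shapes.
    out = []
    for i in range(0, len(x) - 1, 2):
        m = (x[i] + x[i + 1]) / 2.0
        out += [m, m]
    if len(out) < len(x):          # odd length: carry the last sample through
        out.append(x[-1])
    return out

def forward(x, w, alpha):
    # A sigmoid of the learnable factor gates between the pooled path
    # and the identity path; w stands in for the network weights.
    g = 1.0 / (1.0 + math.exp(-alpha))
    pooled = avg_pool2_upsample(x)
    return [w * (g * p + (1.0 - g) * xi) for p, xi in zip(pooled, x)]

def loss(x, target, w, alpha):
    y = forward(x, w, alpha)
    return sum((yi - ti) ** 2 for yi, ti in zip(y, target)) / len(y)

def train(x, target, steps=4000, lr=0.05, eps=1e-5):
    # Jointly learn the weight w and the scaling factor alpha.
    # Finite-difference gradients keep the sketch dependency-free.
    w, alpha = 1.0, 0.0
    for _ in range(steps):
        dw = (loss(x, target, w + eps, alpha) - loss(x, target, w - eps, alpha)) / (2 * eps)
        da = (loss(x, target, w, alpha + eps) - loss(x, target, w, alpha - eps)) / (2 * eps)
        w, alpha = w - lr * dw, alpha - lr * da
    return w, alpha

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
# The target is the pooled-and-doubled signal, so the gate should open
# toward the pooled path (g near 1) and w should approach 2.
target = [2.0 * v for v in avg_pool2_upsample(x)]
w, alpha = train(x, target)
gate = 1.0 / (1.0 + math.exp(-alpha))
```

After training, the learned gate value would be thresholded or rounded to decide the layer's final sampling rate, which is the "shaping" step the abstract refers to.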

    NEURAL NETWORK ARCHITECTURE PRUNING

    Publication number: US20210264278A1

    Publication date: 2021-08-26

    Application number: US16799191

    Filing date: 2020-02-24

    Applicant: Adobe Inc.

    Abstract: The disclosure describes one or more implementations of a neural network architecture pruning system that automatically and progressively prunes neural networks. For instance, the neural network architecture pruning system can automatically reduce the size of an untrained or previously-trained neural network without reducing the accuracy of the neural network. For example, the neural network architecture pruning system jointly trains portions of a neural network while progressively pruning redundant subsets of the neural network at each training iteration. In many instances, the neural network architecture pruning system increases the accuracy of the neural network by progressively removing excess or redundant portions (e.g., channels or layers) of the neural network. Further, by removing portions of a neural network, the neural network architecture pruning system can increase the efficiency of the neural network.
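The abstract above describes training a network while progressively removing its most redundant channels, one pruning decision per training iteration. The sketch below illustrates that loop in dependency-free Python using L1 weight magnitude as a stand-in importance score; the two-matrix network, the scoring rule, and `dummy_step` are assumptions for illustration, not the claimed system (a real system would run an actual gradient update where `train_step` is called).

```python
import random

def channel_l1_norms(w_in, w_out):
    # Importance of hidden channel j: L1 norm of its incoming row in w_in
    # plus the L1 norm of its outgoing column in w_out.
    norms = []
    for j in range(len(w_in)):
        incoming = sum(abs(v) for v in w_in[j])
        outgoing = sum(abs(w_out[k][j]) for k in range(len(w_out)))
        norms.append(incoming + outgoing)
    return norms

def prune_channel(w_in, w_out, j):
    # Removing hidden channel j deletes its row in w_in and its
    # column in every row of w_out, shrinking the architecture.
    del w_in[j]
    for row in w_out:
        del row[j]

def progressive_prune(w_in, w_out, target_channels, train_step):
    # One pruning decision per training iteration: train the remaining
    # weights, score the channels, drop the weakest one.
    while len(w_in) > target_channels:
        train_step(w_in, w_out)
        norms = channel_l1_norms(w_in, w_out)
        weakest = norms.index(min(norms))
        prune_channel(w_in, w_out, weakest)
    return w_in, w_out

random.seed(0)
n_in, n_hidden, n_out = 4, 8, 2
w_in = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
w_out = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]

def dummy_step(w_in, w_out):
    # Placeholder for a real joint-training gradient update.
    pass

w_in, w_out = progressive_prune(w_in, w_out, target_channels=3,
                                train_step=dummy_step)
```

Pruning gradually rather than all at once lets the surviving weights retrain around each removal, which is how the abstract's claim of preserving (or improving) accuracy while shrinking the network is usually pursued.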

    GENERATING AND UTILIZING PRUNED NEURAL NETWORKS

    Publication number: US20230259778A1

    Publication date: 2023-08-17

    Application number: US18309367

    Filing date: 2023-04-28

    Applicant: Adobe Inc.

    CPC classification number: G06N3/082 G06N3/04

    Abstract: The disclosure describes one or more implementations of a neural network architecture pruning system that automatically and progressively prunes neural networks. For instance, the neural network architecture pruning system can automatically reduce the size of an untrained or previously-trained neural network without reducing the accuracy of the neural network. For example, the neural network architecture pruning system jointly trains portions of a neural network while progressively pruning redundant subsets of the neural network at each training iteration. In many instances, the neural network architecture pruning system increases the accuracy of the neural network by progressively removing excess or redundant portions (e.g., channels or layers) of the neural network. Further, by removing portions of a neural network, the neural network architecture pruning system can increase the efficiency of the neural network.

    Generating and utilizing pruned neural networks

    Publication number: US11983632B2

    Publication date: 2024-05-14

    Application number: US18309367

    Filing date: 2023-04-28

    Applicant: Adobe Inc.

    CPC classification number: G06N3/082 G06N3/04

    Abstract: The disclosure describes one or more implementations of a neural network architecture pruning system that automatically and progressively prunes neural networks. For instance, the neural network architecture pruning system can automatically reduce the size of an untrained or previously-trained neural network without reducing the accuracy of the neural network. For example, the neural network architecture pruning system jointly trains portions of a neural network while progressively pruning redundant subsets of the neural network at each training iteration. In many instances, the neural network architecture pruning system increases the accuracy of the neural network by progressively removing excess or redundant portions (e.g., channels or layers) of the neural network. Further, by removing portions of a neural network, the neural network architecture pruning system can increase the efficiency of the neural network.

    Shaping a neural network architecture utilizing learnable sampling layers

    Publication number: US11710042B2

    Publication date: 2023-07-25

    Application number: US16782793

    Filing date: 2020-02-05

    Applicant: Adobe Inc.

    CPC classification number: G06N3/082 G06N3/04

    Abstract: The present disclosure relates to shaping the architecture of a neural network. For example, the disclosed systems can provide a neural network shaping mechanism for at least one sampling layer of a neural network. The neural network shaping mechanism can include a learnable scaling factor between a sampling rate of the at least one sampling layer and an additional sampling function. The disclosed systems can learn the scaling factor based on a dataset while jointly learning the network weights of the neural network. Based on the learned scaling factor, the disclosed systems can shape the architecture of the neural network by modifying the sampling rate of the at least one sampling layer.

    Neural network architecture pruning

    Publication number: US11663481B2

    Publication date: 2023-05-30

    Application number: US16799191

    Filing date: 2020-02-24

    Applicant: Adobe Inc.

    CPC classification number: G06N3/082 G06N3/04

    Abstract: The disclosure describes one or more implementations of a neural network architecture pruning system that automatically and progressively prunes neural networks. For instance, the neural network architecture pruning system can automatically reduce the size of an untrained or previously-trained neural network without reducing the accuracy of the neural network. For example, the neural network architecture pruning system jointly trains portions of a neural network while progressively pruning redundant subsets of the neural network at each training iteration. In many instances, the neural network architecture pruning system increases the accuracy of the neural network by progressively removing excess or redundant portions (e.g., channels or layers) of the neural network. Further, by removing portions of a neural network, the neural network architecture pruning system can increase the efficiency of the neural network.
