-
Publication No.: US20210241111A1
Publication Date: 2021-08-05
Application No.: US16782793
Application Date: 2020-02-05
Applicant: Adobe Inc.
Inventor: Shikun Liu, Zhe Lin, Yilin Wang, Jianming Zhang, Federico Perazzi
Abstract: The present disclosure relates to shaping the architecture of a neural network. For example, the disclosed systems can provide a neural network shaping mechanism for at least one sampling layer of a neural network. The neural network shaping mechanism can include a learnable scaling factor between a sampling rate of the at least one sampling layer and an additional sampling function. The disclosed systems can learn the scaling factor based on a dataset while jointly learning the network weights of the neural network. Based on the learned scaling factor, the disclosed systems can shape the architecture of the neural network by modifying the sampling rate of the at least one sampling layer.
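The mechanism described in the abstract can be sketched numerically. Below is a minimal NumPy illustration, not the patented implementation: it assumes 1-D average pooling as the sampling layer and pooling at double the rate as a stand-in for the "additional sampling function," with `alpha` playing the role of the learnable scaling factor (in the disclosed system, alpha would be trained jointly with the network weights, and the layer's sampling rate then modified based on its learned value).

```python
import numpy as np

def avg_pool_1d(x, rate):
    """Average-pool a 1-D signal by an integer sampling rate."""
    n = (len(x) // rate) * rate
    return x[:n].reshape(-1, rate).mean(axis=1)

def shaped_sampling(x, rate, alpha):
    """Mix the layer's native sampling path with an additional sampling
    function (here, hypothetically: pooling at twice the rate), weighted
    by the scalar alpha in [0, 1]. alpha stands in for the learnable
    scaling factor described in the abstract."""
    a = avg_pool_1d(x, rate)        # native sampling-rate path
    b = avg_pool_1d(x, rate * 2)    # additional sampling function
    # upsample b back to a's length so the two paths can be blended
    b_up = np.repeat(b, 2)[: len(a)]
    return alpha * a + (1.0 - alpha) * b_up
```

With `alpha = 1.0` the layer behaves as plain pooling at its native rate; with `alpha = 0.0` it behaves as the coarser sampling function, so rounding a learned intermediate value effectively rewrites the layer's sampling rate.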
-
Publication No.: US20210264278A1
Publication Date: 2021-08-26
Application No.: US16799191
Application Date: 2020-02-24
Applicant: Adobe Inc.
Inventor: Shikun Liu, Zhe Lin, Yilin Wang, Jianming Zhang, Federico Perazzi
Abstract: The disclosure describes one or more implementations of a neural network architecture pruning system that automatically and progressively prunes neural networks. For instance, the neural network architecture pruning system can automatically reduce the size of an untrained or previously-trained neural network without reducing the accuracy of the neural network. For example, the neural network architecture pruning system jointly trains portions of a neural network while progressively pruning redundant subsets of the neural network at each training iteration. In many instances, the neural network architecture pruning system increases the accuracy of the neural network by progressively removing excess or redundant portions (e.g., channels or layers) of the neural network. Further, by removing portions of a neural network, the neural network architecture pruning system can increase the efficiency of the neural network.
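The progressive-pruning loop in the abstract can be sketched as follows. This is a hedged NumPy sketch, not the claimed system: it assumes the smallest-L1-norm channels are the "redundant subsets" to remove (the patent does not specify this criterion here), and it omits the joint weight training that would occur between pruning steps.

```python
import numpy as np

def prune_step(weights, prune_frac):
    """Drop the prune_frac fraction of output channels (rows) with the
    smallest L1 norm -- the redundancy proxy assumed in this sketch."""
    norms = np.abs(weights).sum(axis=1)
    n_keep = max(1, int(weights.shape[0] * (1.0 - prune_frac)))
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])  # preserve row order
    return weights[keep]

def progressive_prune(weights, prune_frac, n_iters):
    """Prune once per training iteration; in the disclosed system each
    step would be interleaved with gradient updates to the weights."""
    for _ in range(n_iters):
        weights = prune_step(weights, prune_frac)
    return weights
```

Pruning a small fixed fraction per iteration, rather than all at once, is what lets the network's remaining weights adapt between removals.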
-
Publication No.: US20230259778A1
Publication Date: 2023-08-17
Application No.: US18309367
Application Date: 2023-04-28
Applicant: Adobe Inc.
Inventor: Shikun Liu, Zhe Lin, Yilin Wang, Jianming Zhang, Federico Perazzi
Abstract: The disclosure describes one or more implementations of a neural network architecture pruning system that automatically and progressively prunes neural networks. For instance, the neural network architecture pruning system can automatically reduce the size of an untrained or previously-trained neural network without reducing the accuracy of the neural network. For example, the neural network architecture pruning system jointly trains portions of a neural network while progressively pruning redundant subsets of the neural network at each training iteration. In many instances, the neural network architecture pruning system increases the accuracy of the neural network by progressively removing excess or redundant portions (e.g., channels or layers) of the neural network. Further, by removing portions of a neural network, the neural network architecture pruning system can increase the efficiency of the neural network.
-
Publication No.: US11983632B2
Publication Date: 2024-05-14
Application No.: US18309367
Application Date: 2023-04-28
Applicant: Adobe Inc.
Inventor: Shikun Liu, Zhe Lin, Yilin Wang, Jianming Zhang, Federico Perazzi
Abstract: The disclosure describes one or more implementations of a neural network architecture pruning system that automatically and progressively prunes neural networks. For instance, the neural network architecture pruning system can automatically reduce the size of an untrained or previously-trained neural network without reducing the accuracy of the neural network. For example, the neural network architecture pruning system jointly trains portions of a neural network while progressively pruning redundant subsets of the neural network at each training iteration. In many instances, the neural network architecture pruning system increases the accuracy of the neural network by progressively removing excess or redundant portions (e.g., channels or layers) of the neural network. Further, by removing portions of a neural network, the neural network architecture pruning system can increase the efficiency of the neural network.
-
Publication No.: US11710042B2
Publication Date: 2023-07-25
Application No.: US16782793
Application Date: 2020-02-05
Applicant: Adobe Inc.
Inventor: Shikun Liu, Zhe Lin, Yilin Wang, Jianming Zhang, Federico Perazzi
Abstract: The present disclosure relates to shaping the architecture of a neural network. For example, the disclosed systems can provide a neural network shaping mechanism for at least one sampling layer of a neural network. The neural network shaping mechanism can include a learnable scaling factor between a sampling rate of the at least one sampling layer and an additional sampling function. The disclosed systems can learn the scaling factor based on a dataset while jointly learning the network weights of the neural network. Based on the learned scaling factor, the disclosed systems can shape the architecture of the neural network by modifying the sampling rate of the at least one sampling layer.
-
Publication No.: US11663481B2
Publication Date: 2023-05-30
Application No.: US16799191
Application Date: 2020-02-24
Applicant: Adobe Inc.
Inventor: Shikun Liu, Zhe Lin, Yilin Wang, Jianming Zhang, Federico Perazzi
Abstract: The disclosure describes one or more implementations of a neural network architecture pruning system that automatically and progressively prunes neural networks. For instance, the neural network architecture pruning system can automatically reduce the size of an untrained or previously-trained neural network without reducing the accuracy of the neural network. For example, the neural network architecture pruning system jointly trains portions of a neural network while progressively pruning redundant subsets of the neural network at each training iteration. In many instances, the neural network architecture pruning system increases the accuracy of the neural network by progressively removing excess or redundant portions (e.g., channels or layers) of the neural network. Further, by removing portions of a neural network, the neural network architecture pruning system can increase the efficiency of the neural network.