-
Publication No.: US20230316591A1
Publication Date: 2023-10-05
Application No.: US17709895
Filing Date: 2022-03-31
Applicant: Adobe Inc.
Inventor: Zhixin Shu , Zhe Lin , Yuchen Liu , Yijun Li , Richard Zhang
IPC: G06T11/00 , G06V10/40 , G06V10/774
CPC classification number: G06T11/00 , G06V10/40 , G06V10/7747
Abstract: Techniques for identity preserved controllable facial image manipulation are described that support generation of a manipulated digital image based on a facial image and a render image. For instance, a facial image depicting a facial representation of an individual is received as input. A feature space including an identity parameter and at least one other visual parameter is extracted from the facial image. An editing module edits one or more of the visual parameters and preserves the identity parameter. A renderer generates a render image depicting a morphable model reconstruction of the facial image based on the edit. The render image and facial image are encoded, and a generator of a neural network is implemented to generate a manipulated digital image based on the encoded facial image and the encoded render image.
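The abstract describes a pipeline: extract a feature space split into identity and visual parameters, edit only the visual parameters, render a morphable-model reconstruction from the edited parameters, then condition a generator on encodings of both the original face and the render. The sketch below illustrates that data flow with toy NumPy stand-ins; every function here (the hash-seeded "encoder", the outer-product "renderer", the averaging "generator") is an illustrative assumption, not Adobe's learned implementation.

```python
import numpy as np

def extract_features(face):
    """Toy encoder: derive an (identity, visual) parameter split from the image.
    A real system uses a learned encoder; this image-seeded RNG is a stand-in."""
    rng = np.random.default_rng(int(face.sum() * 1000) % (2**32))
    feats = rng.standard_normal(8)
    return feats[:4].copy(), feats[4:].copy()  # identity params, visual params

def edit_visual(visual, delta):
    """Edit pose/expression/illumination parameters; identity is never touched."""
    return visual + delta

def render_model(identity, visual):
    """Stand-in for the morphable-model renderer: a deterministic
    image-shaped function of the (preserved) identity and edited visuals."""
    return np.outer(identity, visual)

def generate(face_enc, render_enc):
    """Stand-in for the GAN generator conditioned on both encodings."""
    return 0.5 * face_enc + 0.5 * render_enc

face = np.full((4, 4), 0.5)
identity, visual = extract_features(face)
edited = edit_visual(visual, np.array([0.2, 0.0, 0.0, 0.0]))  # edit one visual param
render_img = render_model(identity, edited)  # identity carried through unchanged
output = generate(face, render_img)          # manipulated image, same identity
```

The point of the structure, mirrored from the abstract, is that the edit operates only on the visual-parameter half of the feature space, so the identity parameters reaching the renderer and generator are byte-for-byte those of the input face.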
-
Publication No.: US11934958B2
Publication Date: 2024-03-19
Application No.: US17147912
Filing Date: 2021-01-13
Applicant: Adobe Inc.
Inventor: Zhixin Shu , Zhe Lin , Yuchen Liu , Yijun Li
Abstract: This disclosure describes one or more embodiments of systems, non-transitory computer-readable media, and methods that utilize channel pruning and knowledge distillation to generate a compact noise-to-image GAN. For example, the disclosed systems prune less informative channels via outgoing channel weights of the GAN. In some implementations, the disclosed systems further utilize content-aware pruning by utilizing a differentiable loss between an image generated by the GAN and a modified version of the image to identify sensitive channels within the GAN during channel pruning. In some embodiments, the disclosed systems utilize knowledge distillation to learn parameters for the pruned GAN to mimic a full-size GAN. In certain implementations, the disclosed systems utilize content-aware knowledge distillation by applying content masks on images generated by both the pruned GAN and its full-size counterpart to obtain knowledge distillation losses between the images for use in learning the parameters for the pruned GAN.
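Two ideas from this abstract lend themselves to a compact sketch: ranking channels by their outgoing weights to decide what to prune, and a content-masked distillation loss that compares pruned and full-size outputs only in salient regions. The NumPy code below is a minimal illustration under assumed shapes and scoring (L1 norm of outgoing weights, squared-error masked loss); the patent's actual pruning criterion, mask source, and losses may differ.

```python
import numpy as np

def channel_importance(W_next):
    """Score each channel by the L1 norm of its outgoing weights.
    W_next has shape (out_channels, in_channels); column j holds
    the weights leaving channel j into the next layer."""
    return np.abs(W_next).sum(axis=0)

def prune_channels(W_next, keep_ratio=0.5):
    """Keep the top fraction of channels by outgoing-weight importance."""
    scores = channel_importance(W_next)
    n_keep = max(1, int(round(len(scores) * keep_ratio)))
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])  # indices of kept channels
    return keep, W_next[:, keep]

def masked_distill_loss(student_img, teacher_img, mask):
    """Content-aware distillation: penalize student/teacher disagreement
    only inside the content mask (e.g. the salient region of the image)."""
    diff = (student_img - teacher_img) * mask
    return float((diff ** 2).sum() / max(mask.sum(), 1.0))

W = np.array([[0.1, 2.0, 0.0, 1.5],
              [0.2, 1.0, 0.1, 2.5]])
keep, W_pruned = prune_channels(W, keep_ratio=0.5)  # keeps channels 1 and 3
loss = masked_distill_loss(np.ones((2, 2)), np.zeros((2, 2)), np.eye(2))
```

Here channels 1 and 3 survive because their columns carry the largest outgoing-weight mass; in the abstract's content-aware variant, a differentiable loss between a generated image and a modified copy would further flag channels the output is sensitive to.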
-
Publication No.: US20250117971A1
Publication Date: 2025-04-10
Application No.: US18816693
Filing Date: 2024-08-27
Applicant: Adobe Inc.
Inventor: Feng Liu , Zhengang Li , Yan Kang , Yuchen Liu , Difan Liu , Tobias Hinz
IPC: G06T11/00 , G06T3/4046 , G06T3/4053 , G06V10/774 , G06V10/776 , G06V10/82
Abstract: A method, apparatus, non-transitory computer readable medium, and system for video generation include first obtaining a training set including a training video. Then, embodiments initialize a video generation model, sample a subnet architecture from an architecture search space, and identify a subset of the weights of the video generation model based on the sampled subnet architecture. Subsequently, embodiments train, based on the training video, a subnet of the video generation model to generate synthetic video data. The subnet includes the identified subset of the weights of the video generation model.
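The two steps named in the abstract, sampling a subnet architecture from a search space and identifying the corresponding subset of weights, can be sketched in a few lines of stdlib Python. The search space here (per-layer width multipliers) and the "leading units" weight-selection rule are common conventions from the neural-architecture-search literature, assumed for illustration; the patent does not commit to this particular parameterization.

```python
import random

def sample_subnet(search_space, rng):
    """Sample one width multiplier per layer from the architecture search space."""
    return [rng.choice(widths) for widths in search_space]

def subnet_weight_counts(layer_sizes, subnet):
    """Identify the weight subset: keep the leading fraction of each layer's
    units, a typical convention so that every subnet shares the full model's
    weights rather than owning a separate copy."""
    return [max(1, int(n * ratio)) for n, ratio in zip(layer_sizes, subnet)]

rng = random.Random(0)
search_space = [[0.25, 0.5, 1.0]] * 3  # per-layer width choices
layer_sizes = [64, 128, 256]           # full-model units per layer
subnet = sample_subnet(search_space, rng)
kept = subnet_weight_counts(layer_sizes, subnet)
# Training then updates only the kept units' weights on the training video,
# so the trained subnet is literally a subset of the full model's weights.
```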
-
Publication No.: US20220222532A1
Publication Date: 2022-07-14
Application No.: US17147912
Filing Date: 2021-01-13
Applicant: Adobe Inc.
Inventor: Zhixin Shu , Zhe Lin , Yuchen Liu , Yijun Li
Abstract: This disclosure describes one or more embodiments of systems, non-transitory computer-readable media, and methods that utilize channel pruning and knowledge distillation to generate a compact noise-to-image GAN. For example, the disclosed systems prune less informative channels via outgoing channel weights of the GAN. In some implementations, the disclosed systems further utilize content-aware pruning by utilizing a differentiable loss between an image generated by the GAN and a modified version of the image to identify sensitive channels within the GAN during channel pruning. In some embodiments, the disclosed systems utilize knowledge distillation to learn parameters for the pruned GAN to mimic a full-size GAN. In certain implementations, the disclosed systems utilize content-aware knowledge distillation by applying content masks on images generated by both the pruned GAN and its full-size counterpart to obtain knowledge distillation losses between the images for use in learning the parameters for the pruned GAN.
-