-
Publication No.: US10521892B2
Publication Date: 2019-12-31
Application No.: US15253655
Filing Date: 2016-08-31
Applicant: ADOBE INC.
Inventor: Kalyan K. Sunkavalli , Sunil Hadap , Elya Shechtman , Zhixin Shu
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed at relighting a target image based on a lighting effect from a reference image. In one embodiment, a target image and a reference image are received, where the reference image includes a lighting effect desired to be applied to the target image. A lighting transfer is performed using color data and geometrical data associated with the reference image and color data and geometrical data associated with the target image. The lighting transfer causes generation of a relit image that corresponds to the target image with the lighting effect of the reference image applied. The relit image is provided for display to a user via one or more output devices. Other embodiments may be described and/or claimed.
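As a rough illustration of the kind of lighting transfer the abstract describes, the sketch below reweights a target image's luminance per coarse surface-normal direction to match a reference image. The function name, binning scheme, and gain computation are assumptions for illustration, not the patented method.

```python
import numpy as np

def transfer_lighting(target_rgb, target_normals, reference_rgb, reference_normals, bins=16):
    """Illustrative lighting transfer: rescale target brightness so that its
    distribution over coarse surface-normal directions matches the reference.
    Images are float RGB in [0, 1]; normals are unit vectors per pixel."""
    def luminance(img):
        return 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]

    def bin_index(normals):
        # Quantize the z component of the normals into coarse bins.
        nz = np.clip((normals[..., 2] + 1.0) / 2.0, 0.0, 1.0)
        return np.minimum((nz * bins).astype(int), bins - 1)

    t_lum, r_lum = luminance(target_rgb), luminance(reference_rgb)
    t_bins, r_bins = bin_index(target_normals), bin_index(reference_normals)

    relit = target_rgb.copy()
    for b in range(bins):
        t_mask, r_mask = t_bins == b, r_bins == b
        if not t_mask.any() or not r_mask.any():
            continue
        # Scale target pixels in this normal bin toward the reference brightness.
        gain = (r_lum[r_mask].mean() + 1e-6) / (t_lum[t_mask].mean() + 1e-6)
        relit[t_mask] = np.clip(target_rgb[t_mask] * gain, 0.0, 1.0)
    return relit
```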
-
Publication No.: US20190340419A1
Publication Date: 2019-11-07
Application No.: US15970831
Filing Date: 2018-05-03
Applicant: Adobe Inc.
Inventor: Rebecca Ilene Milman , Jose Ignacio Echevarria Vallespi , Jingwan Lu , Elya Shechtman , Duygu Ceylan Aksit , David P. Simons
Abstract: Generation of parameterized avatars is described. An avatar generation system uses a trained machine-learning model to generate a parameterized avatar, from which digital visual content (e.g., images, videos, augmented and/or virtual reality (AR/VR) content) can be generated. The machine-learning model is trained to identify cartoon features of a particular style—from a library of these cartoon features—that correspond to features of a person depicted in a digital photograph. The parameterized avatar is data (e.g., a feature vector) that indicates the cartoon features identified from the library by the trained machine-learning model for the depicted person. This parameterization enables the avatar to be animated. The parameterization also enables the avatar generation system to generate avatars in non-photorealistic (relatively cartoony) styles such that, despite the style, the avatars preserve identities and expressions of persons depicted in input digital photographs.
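A minimal sketch of what a parameterized avatar could look like as data, assuming a hypothetical cartoon-feature library and a hypothetical per-category score output from the trained model; none of these names come from the patent.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical cartoon-feature library: each category maps to a list of style assets.
FEATURE_LIBRARY = {
    "hair": ["short", "long", "curly", "bald"],
    "eyes": ["round", "narrow", "wide"],
    "mouth": ["smile", "neutral", "open"],
}

@dataclass
class ParameterizedAvatar:
    """Avatar stored as data rather than pixels: one library index per category."""
    indices: dict  # e.g. {"hair": 2, "eyes": 0, "mouth": 1}

    def to_vector(self):
        # Flatten to a feature vector so the avatar can be stored or animated.
        return np.array([self.indices[k] for k in sorted(FEATURE_LIBRARY)], dtype=np.int64)

def select_features(model_scores):
    """Stand-in for the trained model's output head: model_scores maps each
    category to per-option scores; pick the best-matching library entry."""
    return ParameterizedAvatar({cat: int(np.argmax(scores))
                                for cat, scores in model_scores.items()})
```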
-
Publication No.: US10402948B2
Publication Date: 2019-09-03
Application No.: US16009714
Filing Date: 2018-06-15
Applicant: ADOBE INC.
Inventor: Sylvain Paris , Sohrab Amirghodsi , Aliakbar Darabi , Elya Shechtman
Abstract: Embodiments described herein are directed to methods and systems for facilitating control of the smoothness of transitions between images. In embodiments, a difference in the color values of pixels between a foreground image and a background image is identified along a boundary associated with a location at which to paste the foreground image relative to the background image. Thereafter, recursive downsampling of a region of pixels within the boundary by a sampling factor is performed to produce a plurality of downsampled images having color difference indicators associated with each pixel of the downsampled images. Such color difference indicators indicate whether a difference in color value exists for the corresponding pixel. To effectuate a seamless transition, the color difference indicators are normalized in association with each recursively downsampled image.
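The sketch below illustrates the general idea of diffusing boundary color differences into the pasted region via recursive downsampling with an indicator (weight) channel that is normalized at each level. The pyramid construction, level cap, and normalization details are assumptions, not the claimed algorithm.

```python
import numpy as np

def spread_boundary_difference(diff, mask, levels=5):
    """diff: single-channel color difference, valid only where mask is True
    (the paste boundary). Returns a smooth field covering the whole region."""
    def down(x):
        # 2x average pooling after cropping to even dimensions.
        h, w = (x.shape[0] // 2) * 2, (x.shape[1] // 2) * 2
        x = x[:h, :w]
        return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])

    def up(x, shape):
        # Nearest-neighbor upsampling, cropped/padded back to the finer shape.
        y = np.kron(x, np.ones((2, 2)))[: shape[0], : shape[1]]
        pad = ((0, shape[0] - y.shape[0]), (0, shape[1] - y.shape[1]))
        return np.pad(y, pad, mode="edge")

    # Keep at least a 2x2 coarsest level (assumed cap, for robustness).
    levels = min(levels, max(1, int(np.log2(min(diff.shape))) - 1))

    d, w = diff * mask, mask.astype(float)
    pyramid = []
    for _ in range(levels):
        pyramid.append((d, w))
        d, w = down(d), down(w)

    # Normalize by the indicator at the coarsest level, then refine back up:
    # pixels with a known difference keep it, others keep the diffused estimate.
    smooth = d / np.maximum(w, 1e-6)
    for d_l, w_l in reversed(pyramid):
        smooth = up(smooth, d_l.shape)
        known = w_l > 0
        smooth[known] = d_l[known] / w_l[known]
    return smooth
```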
-
Publication No.: US12249132B2
Publication Date: 2025-03-11
Application No.: US17815451
Filing Date: 2022-07-27
Applicant: Adobe Inc.
Inventor: Yijun Li , Nicholas Kolkin , Jingwan Lu , Elya Shechtman
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for adapting generative neural networks to target domains utilizing an image translation neural network. In particular, in one or more embodiments, the disclosed systems utilize an image translation neural network to translate target results to a source domain for input in target neural network adaptation. For instance, in some embodiments, the disclosed systems compare a translated target result with a source result from a pretrained source generative neural network to adjust parameters of a target generative neural network to produce results corresponding in features to source results and corresponding in style to the target domain.
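A hedged sketch of one adaptation step consistent with the abstract, assuming PyTorch-style modules (`source_G`, `target_G`, and `translator` are hypothetical names) and an L1 content loss; the actual loss and training loop may differ.

```python
import torch

def adaptation_step(source_G, target_G, translator, optimizer, z):
    """One illustrative adaptation step: the target generator's output is
    translated back to the source domain and pulled toward the frozen source
    generator's output for the same latent code, so content features are
    shared while style comes from the target domain."""
    with torch.no_grad():
        source_img = source_G(z)          # frozen pretrained source generator
    target_img = target_G(z)              # generator being adapted
    translated = translator(target_img)   # image translation network: target -> source domain

    # Content alignment loss between the translated target result and the source result.
    loss = torch.nn.functional.l1_loss(translated, source_img)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```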
-
Publication No.: US20250069203A1
Publication Date: 2025-02-27
Application No.: US18454850
Filing Date: 2023-08-24
Applicant: ADOBE INC.
Inventor: Yuqian Zhou , Krishna Kumar Singh , Benjamin Delarre , Zhe Lin , Jingwan Lu , Taesung Park , Sohrab Amirghodsi , Elya Shechtman
Abstract: A method, non-transitory computer readable medium, apparatus, and system for image generation are described. An embodiment of the present disclosure includes obtaining an input image, an inpainting mask, and a plurality of content preservation values corresponding to different regions of the inpainting mask, and identifying a plurality of mask bands of the inpainting mask based on the plurality of content preservation values. An image generation model generates an output image based on the input image and the inpainting mask. The output image is generated in a plurality of phases. Each of the plurality of phases uses a corresponding mask band of the plurality of mask bands as an input.
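A minimal sketch of splitting an inpainting mask into bands by content preservation value; the thresholds and the boolean-mask representation are assumptions. Each resulting band could then serve as the input mask for one generation phase.

```python
import numpy as np

def mask_bands(inpainting_mask, preservation, thresholds=(0.25, 0.5, 0.75)):
    """inpainting_mask: boolean array, True inside the region to regenerate.
    preservation: per-pixel content preservation values in [0, 1].
    Returns one boolean band per preservation interval (assumed thresholds)."""
    edges = (0.0,) + tuple(thresholds) + (1.0 + 1e-6,)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Pixels inside the mask whose preservation value falls in [lo, hi).
        band = inpainting_mask & (preservation >= lo) & (preservation < hi)
        bands.append(band)
    return bands  # bands[0] = lowest preservation (most freedom to generate)
```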
-
Publication No.: US12204610B2
Publication Date: 2025-01-21
Application No.: US17650967
Filing Date: 2022-02-14
Applicant: Adobe Inc.
Inventor: Zhe Lin , Haitian Zheng , Jingwan Lu , Scott Cohen , Jianming Zhang , Ning Xu , Elya Shechtman , Connelly Barnes , Sohrab Amirghodsi
IPC: G06K9/00 , G06F18/214 , G06N3/08 , G06T5/77 , G06T7/11
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for training a generative inpainting neural network to accurately generate inpainted digital images via object-aware training and/or masked regularization. For example, the disclosed systems utilize an object-aware training technique to learn parameters for a generative inpainting neural network based on masking individual object instances depicted within sample digital images of a training dataset. In some embodiments, the disclosed systems also (or alternatively) utilize a masked regularization technique as part of training to prevent overfitting by penalizing a discriminator neural network utilizing a regularization term that is based on an object mask. In certain cases, the disclosed systems further generate an inpainted digital image utilizing a trained generative inpainting model with parameters learned via the object-aware training and/or the masked regularization.
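The sketch below shows a masked variant of an R1-style gradient penalty on the discriminator, which is one plausible reading of "masked regularization"; the placement of the mask and the `gamma` weight are assumptions rather than the patented formulation.

```python
import torch

def masked_r1_penalty(discriminator, real_images, object_mask, gamma=10.0):
    """Masked gradient penalty: penalize the discriminator's gradients with
    respect to real images only inside the object mask, so the regularization
    focuses on the masked region and helps prevent overfitting.
    real_images: (B, C, H, W); object_mask: broadcastable to the same shape."""
    real_images = real_images.detach().requires_grad_(True)
    scores = discriminator(real_images)
    grads, = torch.autograd.grad(outputs=scores.sum(), inputs=real_images,
                                 create_graph=True)
    # Zero out gradients outside the object mask before computing the penalty.
    masked_grads = grads * object_mask
    return 0.5 * gamma * masked_grads.pow(2).sum(dim=(1, 2, 3)).mean()
```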
-
Publication No.: US20240404013A1
Publication Date: 2024-12-05
Application No.: US18515378
Filing Date: 2023-11-21
Applicant: ADOBE INC.
Inventor: Yuqian Zhou , Krishna Kumar Singh , Zhe Lin , Qing Liu , Zhifei Zhang , Sohrab Amirghodsi , Elya Shechtman , Jingwan Lu
Abstract: Embodiments include systems and methods for generative image filling based on text and a reference image. In one aspect, the system obtains an input image, a reference image, and a text prompt. Then, the system encodes the reference image to obtain an image embedding and encodes the text prompt to obtain a text embedding. Subsequently, a composite image is generated based on the input image, the image embedding, and the text embedding.
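A hedged sketch of the described flow, assuming hypothetical `image_encoder`, `text_encoder`, and `generator` interfaces; it only illustrates that the reference image and the text prompt are encoded separately and that both embeddings condition the fill.

```python
import torch

def generative_fill(input_image, reference_image, text_prompt,
                    image_encoder, text_encoder, generator, mask):
    """Encode the reference image and the text prompt, then generate a
    composite conditioned on both embeddings. mask is 1 inside the fill region
    and broadcastable to the image shape."""
    with torch.no_grad():
        image_embedding = image_encoder(reference_image)  # reference appearance
        text_embedding = text_encoder(text_prompt)        # textual guidance
        filled = generator(input_image, mask,
                           image_embedding=image_embedding,
                           text_embedding=text_embedding)
    # Composite: keep original pixels outside the mask, generated pixels inside.
    return input_image * (1 - mask) + filled * mask
```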
-
Publication No.: US20240362757A1
Publication Date: 2024-10-31
Application No.: US18307546
Filing Date: 2023-04-26
Applicant: Adobe Inc.
Inventor: Sohrab Amirghodsi , Lingzhi Zhang , Connelly Barnes , Elya Shechtman , Yuqian Zhou , Zhe Lin
CPC classification number: G06T5/77 , G06T5/30 , G06T5/50 , G06T7/11 , G06T2207/20076 , G06T2207/20081 , G06T2207/20084
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for inpainting digital images utilizing mask-robust machine-learning models. In particular, in one or more embodiments, the disclosed systems obtain an initial mask for an object depicted in a digital image. Additionally, in some embodiments, the disclosed systems generate, utilizing a mask-robust inpainting machine-learning model, an inpainted image from the digital image and the initial mask. Moreover, in some implementations, the disclosed systems generate a relaxed mask that expands the initial mask. Furthermore, in some embodiments, the disclosed systems generate a modified image by compositing the inpainted image and the digital image utilizing the relaxed mask.
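A minimal sketch of the relaxed-mask compositing step, assuming a simple 4-neighborhood dilation and an assumed radius; the patent's mask relaxation may be defined differently.

```python
import numpy as np

def relax_mask(mask, radius=8):
    """Expand (dilate) the initial object mask by `radius` pixels using a
    4-neighborhood dilation per iteration (radius is an assumed value)."""
    relaxed = mask.astype(bool).copy()
    for _ in range(radius):
        shifted = np.zeros_like(relaxed)
        shifted[1:, :] |= relaxed[:-1, :]
        shifted[:-1, :] |= relaxed[1:, :]
        shifted[:, 1:] |= relaxed[:, :-1]
        shifted[:, :-1] |= relaxed[:, 1:]
        relaxed |= shifted
    return relaxed

def composite_with_relaxed_mask(inpainted, original, relaxed_mask):
    """Use inpainted pixels inside the relaxed mask and original pixels
    outside it, so small errors in the initial mask do not leave residue
    at the seam. Images are (H, W, 3); relaxed_mask is (H, W) boolean."""
    m = relaxed_mask[..., None].astype(inpainted.dtype)
    return inpainted * m + original * (1 - m)
```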
-
Publication No.: US20240281924A1
Publication Date: 2024-08-22
Application No.: US18171046
Filing Date: 2023-02-17
Applicant: ADOBE INC.
Inventor: Taesung Park , Minguk Kang , Richard Zhang , Junyan Zhu , Elya Shechtman , Sylvain Paris
IPC: G06T3/40
CPC classification number: G06T3/4046 , G06T3/4053
Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure obtain a low-resolution image and a text description of the low-resolution image. A mapping network generates a style vector representing the text description of the low-resolution image. An adaptive convolution component generates an adaptive convolution filter based on the style vector. An image generation network generates a high-resolution image corresponding to the low-resolution image based on the adaptive convolution filter.
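The sketch below shows one common way to realize an adaptive convolution whose kernel is predicted from a style vector, using a grouped convolution so each sample in the batch gets its own filter; the layer sizes and the linear mapping are assumptions for illustration, not the patented architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveConv(nn.Module):
    """Predict a convolution kernel from the style vector so the filtering
    depends on the text description encoded in that vector."""
    def __init__(self, style_dim, channels, kernel_size=3):
        super().__init__()
        self.channels, self.k = channels, kernel_size
        self.to_weight = nn.Linear(style_dim, channels * channels * kernel_size ** 2)

    def forward(self, features, style_vector):
        # features: (B, channels, H, W); style_vector: (B, style_dim).
        b, c, h, w = features.shape
        weight = self.to_weight(style_vector).view(b * self.channels, self.channels,
                                                   self.k, self.k)
        # Grouped convolution applies each sample's own predicted kernel.
        out = F.conv2d(features.reshape(1, b * c, h, w), weight,
                       padding=self.k // 2, groups=b)
        return out.view(b, self.channels, h, w)
```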
-
Publication No.: US20240169621A1
Publication Date: 2024-05-23
Application No.: US18056579
Filing Date: 2022-11-17
Applicant: ADOBE INC.
Inventor: Yotam Nitzan , Taesung Park , Michaël Gharbi , Richard Zhang , Junyan Zhu , Elya Shechtman
IPC: G06T11/60 , G06V10/774
CPC classification number: G06T11/60 , G06V10/774 , G06T2200/24 , G06V10/82
Abstract: Systems and methods for image generation include obtaining an input image and an attribute value representing an attribute of the input image to be modified; computing a modified latent vector for the input image by applying the attribute value to a basis vector corresponding to the attribute in a latent space of an image generation network; and generating a modified image based on the modified latent vector using the image generation network, wherein the modified image includes the attribute based on the attribute value.
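A minimal sketch of the latent-space edit the abstract describes: shift the image's latent code along the attribute's basis direction by the requested attribute value and regenerate. The `encoder` and `generator` interfaces are assumptions for illustration.

```python
import torch

def edit_attribute(generator, encoder, input_image, basis_vector, attribute_value):
    """Compute a modified latent vector for the input image by applying the
    attribute value to the attribute's basis vector, then generate the
    modified image from that latent vector."""
    with torch.no_grad():
        latent = encoder(input_image)                       # latent code of the input image
        modified = latent + attribute_value * basis_vector  # shift along the attribute axis
        return generator(modified)                          # image with the modified attribute
```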
-