-
Publication No.: US20220067992A1
Publication Date: 2022-03-03
Application No.: US17007693
Filing Date: 2020-08-31
Applicant: Adobe Inc.
Inventor: Ning XU , Trung Bui , Jing Shi , Franck Dernoncourt
Abstract: This disclosure involves executing artificial intelligence models that infer image editing operations from natural language requests spoken by a user. Further, the inferred image editing operations are performed using inferred parameters for those operations. Systems and methods may be provided that infer one or more image editing operations from a natural language request associated with a source image, locate areas of the source image that are relevant to the one or more image editing operations to generate image masks, and perform the one or more image editing operations to generate a modified source image.
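A minimal Python sketch of the pipeline the abstract describes: a natural language request is parsed into an operation with parameters, a mask is predicted for the region the request refers to, and the edit is blended into the source image through that mask. The helper names (EditRequest, parse_request, predict_mask, OPERATIONS) and their trivial stand-in bodies are hypothetical placeholders, not the patented implementation.

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class EditRequest:
    operation: str      # e.g. "brighten"
    parameters: dict    # inferred parameters, e.g. {"amount": 0.3}
    target_phrase: str  # phrase naming the region to edit, e.g. "the sky"

def parse_request(text: str) -> EditRequest:
    # Stands in for the learned model that infers the operation, its
    # parameters, and the phrase naming the region to edit.
    return EditRequest(operation="brighten", parameters={"amount": 0.3},
                       target_phrase="the sky")

def predict_mask(image: np.ndarray, phrase: str) -> np.ndarray:
    # Stands in for the grounding model that produces an image mask for the
    # phrase; a trivial full-image mask here so the sketch runs end to end.
    return np.ones(image.shape[:2], dtype=np.float32)

OPERATIONS: dict[str, Callable[[np.ndarray, dict], np.ndarray]] = {
    "brighten": lambda img, p: np.clip(img + p.get("amount", 0.2), 0.0, 1.0),
}

def edit_image(image: np.ndarray, request_text: str) -> np.ndarray:
    req = parse_request(request_text)              # infer operation and parameters
    mask = predict_mask(image, req.target_phrase)  # locate the relevant region
    edited = OPERATIONS[req.operation](image, req.parameters)
    # Blend the edited pixels into the source only where the mask is active.
    return mask[..., None] * edited + (1.0 - mask[..., None]) * image

modified = edit_image(np.random.rand(64, 64, 3), "make the sky brighter")
```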
-
Publication No.: US20220207751A1
Publication Date: 2022-06-30
Application No.: US17696377
Filing Date: 2022-03-16
Applicant: ADOBE INC.
Inventor: Ning XU
Abstract: Methods and systems are provided for generating mattes for input images. A neural network system is trained to generate a matte for an input image using contextual information within the image. Patches are extracted from the image and a corresponding trimap, and alpha values for each individual image patch are predicted based on correlations of features in different regions within that patch. Predicting alpha values for an image patch may also be based on contextual information from other patches extracted from the same image; this contextual information may be determined by computing correlations between features in the patch being predicted (the query patch) and the context patches. The predicted alpha values for an image patch form a matte patch, and the matte patches generated for all extracted patches are stitched together to form an overall matte for the input image.
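A minimal sketch of the patch-then-stitch flow described above, assuming a simple non-overlapping tiling; predict_patch_alpha and stitch_matte are hypothetical placeholders for the trained matting network and its patch aggregation, not the claimed method.

```python
import numpy as np

PATCH = 64  # illustrative patch size

def predict_patch_alpha(image_patch: np.ndarray, trimap_patch: np.ndarray,
                        context_patches: list) -> np.ndarray:
    # Placeholder for the network that predicts per-pixel alpha for one query
    # patch from its own features plus correlations with context patches.
    # Trivial stand-in: reuse the trimap values as the alpha estimate.
    return trimap_patch.astype(np.float32) / 255.0

def stitch_matte(image: np.ndarray, trimap: np.ndarray) -> np.ndarray:
    h, w = trimap.shape
    matte = np.zeros((h, w), dtype=np.float32)
    tiles = [(y, x) for y in range(0, h, PATCH) for x in range(0, w, PATCH)]
    # Context patches could be sampled anywhere in the image; this sketch
    # simply reuses every tile as context for every query tile.
    context = [image[y:y + PATCH, x:x + PATCH] for y, x in tiles]
    for y, x in tiles:
        alpha = predict_patch_alpha(image[y:y + PATCH, x:x + PATCH],
                                    trimap[y:y + PATCH, x:x + PATCH], context)
        matte[y:y + PATCH, x:x + PATCH] = alpha  # place the matte patch
    return matte

img = np.zeros((100, 150, 3), dtype=np.uint8)
tri = np.full((100, 150), 128, dtype=np.uint8)  # all-unknown trimap
full_matte = stitch_matte(img, tri)
```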
-
Publication No.: US20210264236A1
Publication Date: 2021-08-26
Application No.: US16802440
Filing Date: 2020-02-26
Applicant: ADOBE INC.
Inventor: Ning XU , Bayram Safa CICEK , Hailin JIN , Zhaowen WANG
Abstract: Embodiments of the present disclosure are directed towards improved models trained using unsupervised domain adaptation. In particular, a style-content adaptation system provides improved translation during unsupervised domain adaptation by controlling the alignment of a model's conditional distributions during training such that content (e.g., a class) from a target domain is correctly mapped to content (e.g., the same class) in a source domain. The style-content adaptation system improves unsupervised domain adaptation by using independent control over content (e.g., related to a class) as well as style (e.g., related to a domain) to control alignment when translating between the source and target domains. This independent control over content and style can also allow the style-content adaptation system to generate images that contain desired content and/or style.
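A minimal PyTorch sketch, assuming the representation is split into a content code (class-related) and a style code (domain-related) so the two can be controlled independently; the module layout, dimensions, losses mentioned in the comments, and the toy usage are illustrative assumptions, not the patented model.

```python
import torch
import torch.nn as nn

class StyleContentTranslator(nn.Module):
    def __init__(self, content_dim=128, style_dim=8, num_classes=10):
        super().__init__()
        self.content_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, content_dim))
        self.style_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, style_dim))
        self.decoder = nn.Linear(content_dim + style_dim, 3 * 32 * 32)
        self.classifier = nn.Linear(content_dim, num_classes)  # ties the content code to a class

    def translate(self, x_target, x_source_style):
        content = self.content_enc(x_target)    # class-related code from the target-domain image
        style = self.style_enc(x_source_style)  # domain-related code from a source-domain image
        out = self.decoder(torch.cat([content, style], dim=1))
        return out.view(-1, 3, 32, 32), self.classifier(content)

# Training would pair a reconstruction/adversarial loss on the translated image
# with a classification loss on the content code, so that a given class in the
# target domain is aligned with the same class in the source domain.
model = StyleContentTranslator()
target_batch = torch.randn(4, 3, 32, 32)
source_batch = torch.randn(4, 3, 32, 32)
translated, class_logits = model.translate(target_batch, source_batch)
```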
-