-
Publication No.: US10672164B2
Publication Date: 2020-06-02
Application No.: US15785386
Application Date: 2017-10-16
Applicant: Adobe Inc.
Inventor: Zhe Lin , Xin Lu , Xiaohui Shen , Jimei Yang , Jiahui Yu
Abstract: Predicting patch displacement maps using a neural network is described. Initially, a digital image on which an image editing operation is to be performed is provided as input to a patch matcher having an offset prediction neural network. From this image, and based on the image editing operation for which the network is trained, the offset prediction neural network generates an offset prediction formed as a displacement map, which has offset vectors that represent a displacement of pixels of the digital image to different locations for performing the image editing operation. Pixel values of the digital image are copied to the image pixels affected by the operation by determining the offset vectors that correspond to the image pixels affected by the image editing operation and mapping the pixel values of the image pixels represented by the determined offset vectors to the affected pixels. According to this mapping, the pixel values of the affected pixels are set, effective to perform the image editing operation.
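For illustration, the copy-and-map step described above can be sketched in a few lines of Python. This is a minimal numpy sketch, not the patented implementation; the function name, array shapes, and the assumption of integer-valued offsets are hypothetical.

import numpy as np

def apply_displacement_map(image, offsets, affected_mask):
    """Copy pixel values into the affected pixels according to offset vectors.

    image:         (H, W, C) array of pixel values.
    offsets:       (H, W, 2) integer offset vectors (dy, dx), standing in for the
                   displacement map predicted by the offset prediction network.
    affected_mask: (H, W) boolean mask of pixels affected by the editing
                   operation (e.g. a region to fill).
    """
    height, width, _ = image.shape
    result = image.copy()
    ys, xs = np.nonzero(affected_mask)
    # Source coordinates each affected pixel copies its value from.
    src_y = np.clip(ys + offsets[ys, xs, 0], 0, height - 1)
    src_x = np.clip(xs + offsets[ys, xs, 1], 0, width - 1)
    result[ys, xs] = image[src_y, src_x]
    return result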
-
Publication No.: US20200151947A1
Publication Date: 2020-05-14
Application No.: US16185587
Application Date: 2018-11-09
Applicant: Adobe Inc.
Inventor: Zhili Chen , Qingyang Li , Jimei Yang
Abstract: A 3D fluid volume generation system obtains a 2D sketch of the outline of a fluid for which a 3D fluid volume is to be generated, and generates a 3D fluid volume that matches the user's sketch. The 3D fluid volume generation system implements a coarse volume generation stage followed by a refinement stage. In the coarse volume generation stage, the system generates a coarse 3D fluid volume based on the 2D sketch; the volume is referred to as "coarse" because its contour only roughly matches the 2D sketch. In the refinement stage, the coarse 3D fluid volume is refined to better match the 2D sketch, and the resulting 3D fluid volume is output.
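As a rough illustration of the two-stage idea, the following PyTorch sketch treats the volume as a voxel occupancy grid, with a 2D network producing the coarse volume from the sketch and a 3D network refining it. The class names, depth resolution, and layer choices are assumptions made for this sketch, not the patented architecture.

import torch
import torch.nn as nn

class CoarseVolumeNet(nn.Module):
    """Hypothetical coarse stage: a 2D sketch (1 x H x W) to a coarse voxel grid (D x H x W)."""
    def __init__(self, depth=32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, depth, 3, padding=1), nn.Sigmoid(),  # output channels act as depth slices
        )
    def forward(self, sketch):
        return self.encode(sketch)                 # (N, D, H, W) coarse occupancy

class RefineVolumeNet(nn.Module):
    """Hypothetical refinement stage: 3D convolutions refine the coarse volume."""
    def __init__(self):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, coarse):
        return self.refine(coarse.unsqueeze(1)).squeeze(1)

sketch = torch.rand(1, 1, 64, 64)                  # 2D outline of the fluid
coarse = CoarseVolumeNet()(sketch)                 # coarse 3D fluid volume
volume = RefineVolumeNet()(coarse)                 # refined 3D fluid volume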
-
Publication No.: US20200151508A1
Publication Date: 2020-05-14
Application No.: US16186382
Application Date: 2018-11-09
Applicant: Adobe Inc.
Inventor: Jimei Yang , Jianming Zhang , Aaron Phillip Hertzmann , Jianan Li
Abstract: Digital image layout training using wireframe rendering within a generative adversarial network (GAN) system is described. The GAN system is employed to train a generator module to refine digital image layouts. To do so, a wireframe rendering discriminator module rasterizes a refined training digital image layout received from the generator module into a wireframe digital image layout. The wireframe digital image layout is then compared with at least one ground truth digital image layout using a loss function as part of machine learning by the wireframe rendering discriminator module. The generator module is then trained by backpropagating a result of the comparison.
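The rasterization step can be pictured with a small helper that draws each layout element's bounding box as a one-pixel outline on a blank canvas. The function name and the (x0, y0, x1, y1) box format are assumptions made for this sketch, and the discriminator and loss around it are omitted.

import torch

def rasterize_wireframe(boxes, size=64):
    """Rasterize a layout into a wireframe image.

    boxes: (N, 4) tensor of normalized (x0, y0, x1, y1) element coordinates,
           assuming x0 <= x1 and y0 <= y1 (a simplification for this sketch).
    """
    canvas = torch.zeros(size, size)
    for x0, y0, x1, y1 in (boxes.clamp(0, 1) * (size - 1)).round().long().tolist():
        canvas[y0, x0:x1 + 1] = 1.0        # top edge
        canvas[y1, x0:x1 + 1] = 1.0        # bottom edge
        canvas[y0:y1 + 1, x0] = 1.0        # left edge
        canvas[y0:y1 + 1, x1] = 1.0        # right edge
    return canvas

# Wireframe images rasterized from refined and ground-truth layouts would then be
# compared by the discriminator's loss, and the result backpropagated to the generator.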
-
Publication No.: US20200082610A1
Publication Date: 2020-03-12
Application No.: US16126552
Application Date: 2018-09-10
Applicant: Adobe Inc.
Inventor: Xin Sun , Zhili Chen , Nathan Carr , Julio Marco Murria , Jimei Yang
Abstract: According to one general aspect, systems and techniques for rendering a painting stroke of a three-dimensional digital painting include receiving a painting stroke input on a canvas, where the painting stroke includes a plurality of pixels. For each pixel in the plurality of pixels, a neighborhood patch of pixels is selected and input into a neural network, which outputs a shading function. The painting stroke is rendered on the canvas using the shading function.
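A compact way to picture the per-pixel patch pipeline is the following PyTorch sketch, which gathers a k x k neighborhood around every stroke pixel and feeds it through a small network standing in for the shading model. The patch size, network shape, and single-value shading output are placeholders, not the disclosed shading function.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical per-pixel shading network: a k x k neighborhood patch of the
# stroke goes in, a single shading value for the centre pixel comes out.
k = 7
shading_net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(k * k, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

stroke = torch.rand(1, 1, 128, 128)                          # rasterized stroke on the canvas
patches = F.unfold(stroke, kernel_size=k, padding=k // 2)    # (1, k*k, H*W): one patch per pixel
patches = patches.transpose(1, 2).reshape(-1, 1, k, k)
shading = shading_net(patches).reshape(1, 1, 128, 128)       # per-pixel shading used for rendering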
-
Publication No.: US10482639B2
Publication Date: 2019-11-19
Application No.: US15438147
Application Date: 2017-02-21
Applicant: Adobe Inc.
Inventor: Yijun Li , Chen Fang , Jimei Yang , Zhaowen Wang , Xin Lu
Abstract: In some embodiments, techniques for synthesizing an image style based on a plurality of neural networks are described. A computer system selects a style image based on user input that identifies the style image. The computer system generates a synthesized image based on a generator neural network and a loss neural network. The generator neural network outputs the synthesized image based on a noise vector and the style image, and is trained based on style features generated from the loss neural network. The loss neural network outputs the style features based on a training image. The training image and the style image have the same resolution, and the style features are generated at different resolutions of the training image. The computer system provides the synthesized image to a user device in response to the user input.
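One common way to realize a loss network that produces style features at multiple resolutions is a Gram-matrix loss over feature maps of downscaled images. The sketch below is only an illustration along those lines; the tiny stand-in loss network and the chosen scales are assumptions, not the patented training setup.

import torch
import torch.nn as nn
import torch.nn.functional as F

def gram(features):
    """Gram matrix of a (N, C, H, W) feature map, a common style representation."""
    n, c, h, w = features.shape
    f = features.reshape(n, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

# Stand-in loss network: any pretrained CNN whose intermediate activations
# serve as style features could be used here instead.
loss_net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())

def style_loss(synthesized, style_image, scales=(1.0, 0.5)):
    """Compare style features computed at different resolutions of the two images."""
    loss = 0.0
    for s in scales:
        a = F.interpolate(synthesized, scale_factor=s, mode='bilinear', align_corners=False)
        b = F.interpolate(style_image, scale_factor=s, mode='bilinear', align_corners=False)
        loss = loss + F.mse_loss(gram(loss_net(a)), gram(loss_net(b)))
    return loss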
-
Publication No.: US10445921B1
Publication Date: 2019-10-15
Application No.: US16007898
Application Date: 2018-06-13
Applicant: Adobe Inc.
Inventor: Yijun Li , Chen Fang , Jimei Yang , Zhaowen Wang , Xin Lu
Abstract: Transferring motion from consecutive digital video frames to a digital image is leveraged in a digital medium environment. A digital image and at least a portion of a digital video are exposed to a motion transfer model. The portion of the digital video includes at least a first digital video frame and a second digital video frame that is consecutive to the first digital video frame. Flow data between the first digital video frame and the second digital video frame is extracted, and the flow data is then processed to generate motion features representing motion between the first digital video frame and the second digital video frame. The digital image is processed to generate image features of the digital image. Motion of the digital video is then transferred to the digital image by combining the motion features with the image features to generate a next digital image frame for the digital image.
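The pipeline in the abstract (flow in, motion features and image features combined, next frame out) can be sketched as a single PyTorch module. Layer sizes are placeholders, and the optical flow is assumed to come from any off-the-shelf flow estimator rather than being computed here.

import torch
import torch.nn as nn

class MotionTransfer(nn.Module):
    """Hypothetical sketch: encode flow into motion features, encode the image
    into image features, and combine them to predict the next frame."""
    def __init__(self):
        super().__init__()
        self.motion_enc = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU())
        self.image_enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(32, 3, 3, padding=1)

    def forward(self, image, flow):
        # flow: (N, 2, H, W) optical flow between two consecutive video frames.
        motion = self.motion_enc(flow)
        content = self.image_enc(image)
        return self.decoder(torch.cat([motion, content], dim=1))  # next frame for the image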
-
Publication No.: US10424086B2
Publication Date: 2019-09-24
Application No.: US15814751
Application Date: 2017-11-16
Applicant: Adobe Inc.
Inventor: Zhili Chen , Zhaowen Wang , Rundong Wu , Jimei Yang
Abstract: Oil painting simulation techniques are disclosed which simulate painting brush strokes using a trained neural network. In some examples, a method may include inferring a new height map of existing paint on a canvas after a new painting brush stroke is applied, based on a bristle trajectory map that represents the new painting brush stroke and a height map of existing paint on the canvas prior to application of the new painting brush stroke. A rendering of the new painting brush stroke is then generated based on a color map and the new height map of existing paint on the canvas after the new painting brush stroke is applied.
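As a rough sketch of the inference-and-render loop, the following code feeds the prior height map and a bristle trajectory map into a small stand-in network and shades the result with a toy height-gradient lighting model. The network, the lighting model, and all shapes are assumptions, not the disclosed simulation.

import torch
import torch.nn as nn

# Stand-in height-map inference net: previous paint height map plus a
# bristle trajectory map in, updated height map out.
height_net = nn.Sequential(
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

def render_stroke(new_height, color_map, light=(0.5, 0.5)):
    """Toy shading: brighten paint facing the light using height-map gradients."""
    dy = new_height[..., 1:, :] - new_height[..., :-1, :]
    dx = new_height[..., :, 1:] - new_height[..., :, :-1]
    shade = 1.0 + light[0] * dx[..., :-1, :] + light[1] * dy[..., :, :-1]
    return color_map[..., :-1, :-1] * shade

prev_height = torch.zeros(1, 1, 64, 64)          # existing paint before the stroke
bristle_map = torch.rand(1, 1, 64, 64)           # rasterized bristle trajectory map
color_map = torch.rand(1, 3, 64, 64)
new_height = height_net(torch.cat([prev_height, bristle_map], dim=1))
rendered = render_stroke(new_height, color_map)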
-
Publication No.: US10410351B2
Publication Date: 2019-09-10
Application No.: US16116609
Application Date: 2018-08-29
Applicant: Adobe Inc.
Inventor: Zhe Lin , Xin Lu , Xiaohui Shen , Jimei Yang , Chenxi Liu
Abstract: The invention is directed towards segmenting images based on natural language phrases. An image and an n-gram, including a sequence of tokens, are received. An encoding of image features and a sequence of token vectors are generated. A fully convolutional neural network identifies and encodes the image features. A word embedding model generates the token vectors. A recurrent neural network (RNN) iteratively updates a segmentation map based on combinations of the image feature encoding and the token vectors. The segmentation map identifies which pixels are included in an image region referenced by the n-gram. A segmented image is generated based on the segmentation map. The RNN may be a convolutional multimodal RNN. A separate RNN, such as a long short-term memory network, may iteratively update an encoding of semantic features based on the order of tokens. The first RNN may update the segmentation map based on the semantic feature encoding.
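A heavily simplified, single-step sketch of the idea is shown below: convolutional image features, embedded tokens summarized by an LSTM, and a combination head that predicts which pixels the phrase refers to. The iterative, convolutional multimodal RNN updates described in the abstract are collapsed into one step here, and all names and sizes are assumptions.

import torch
import torch.nn as nn

class PhraseSegmenter(nn.Module):
    """Hypothetical sketch: conv image features + token embeddings + an LSTM
    over the phrase, combined into a per-pixel segmentation map."""
    def __init__(self, vocab_size=1000, dim=32):
        super().__init__()
        self.fcn = nn.Sequential(nn.Conv2d(3, dim, 3, padding=1), nn.ReLU())
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Conv2d(2 * dim, 1, 1)

    def forward(self, image, token_ids):
        feats = self.fcn(image)                         # (N, dim, H, W) image feature encoding
        _, (h, _) = self.lstm(self.embed(token_ids))    # semantic encoding of the token sequence
        h = h[-1][:, :, None, None].expand(-1, -1, *feats.shape[2:])
        seg = self.head(torch.cat([feats, h], dim=1))   # which pixels the n-gram refers to
        return torch.sigmoid(seg)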
-
Publication No.: US12260530B2
Publication Date: 2025-03-25
Application No.: US18190544
Application Date: 2023-03-27
Applicant: Adobe Inc.
Inventor: Krishna Kumar Singh , Yijun Li , Jingwan Lu , Duygu Ceylan Aksit , Yangtuanfeng Wang , Jimei Yang , Tobias Hinz , Qing Liu , Jianming Zhang , Zhe Lin
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portray a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, the disclosed systems in some embodiments perform facial expression transfer and facial expression animations to generate modified digital images or animations.
-
Publication No.: US12033261B2
Publication Date: 2024-07-09
Application No.: US17385559
Application Date: 2021-07-26
Applicant: Adobe Inc.
Inventor: Ruben Villegas , Jun Saito , Jimei Yang , Duygu Ceylan Aksit , Aaron Hertzmann
Abstract: One example method involves a processing device that performs operations that include receiving a request to retarget a source motion into a target object. Operations further include providing the target object to a contact-aware motion retargeting neural network trained to retarget the source motion into the target object. The contact-aware motion retargeting neural network is trained by accessing training data that includes a source object performing the source motion. The contact-aware motion retargeting neural network generates retargeted motion for the target object, based on a self-contact having a pair of input vertices. The retargeted motion is subject to motion constraints that: (i) preserve a relative location of the self-contact and (ii) prevent self-penetration of the target object.
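The two motion constraints can be pictured as loss terms on the retargeted mesh. The sketch below is only illustrative: the contact-preservation term keeps contacting vertex pairs together, and the crude all-pairs penetration proxy merely stands in for a real collision or signed-distance test; the function name and thresholds are assumptions.

import torch

def contact_losses(vertices, contact_pairs, min_dist=0.01):
    """Hypothetical loss terms for the two constraints described above.

    vertices:      (V, 3) mesh vertices of the retargeted pose.
    contact_pairs: (P, 2) indices of vertex pairs that are in self-contact
                   in the source motion.
    """
    a = vertices[contact_pairs[:, 0]]
    b = vertices[contact_pairs[:, 1]]
    preserve_contact = (a - b).norm(dim=-1).mean()       # keep contacting vertices together
    # Crude self-penetration proxy: penalize any vertex pair closer than a
    # minimum separation (the diagonal is masked out with a large constant).
    pairwise = torch.cdist(vertices, vertices) + torch.eye(len(vertices)) * 1e6
    penetration = torch.relu(min_dist - pairwise).sum()
    return preserve_contact, penetration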