-
Publication No.: US20210158495A1
Publication Date: 2021-05-27
Application No.: US16692843
Filing Date: 2019-11-22
Applicant: Adobe Inc.
Inventor: Connelly Barnes , Utkarsh Singhal , Elya Shechtman , Michael Gharbi
Abstract: A method for manipulating a target image includes generating a query of the target image and keys and values of a first reference image. The method also includes generating matching costs by comparing the query of the target image with each key of the reference image and generating a set of weights from the matching costs. Further, the method includes generating a set of weighted values by applying each weight of the set of weights to a corresponding value of the values of the reference image and generating a weighted patch by adding each weighted value of the set of weighted values together. Additionally, the method includes generating a combined weighted patch by combining the weighted patch with additional weighted patches associated with additional queries of the target image and generating a manipulated image by applying the combined weighted patch to an image processing algorithm.
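The query/key/value matching described above can be sketched as a softmax-weighted blend of reference patch values. This is a minimal NumPy illustration, not the patented formulation: the flattened 8-dimensional patch vectors, the dot-product matching cost, and the softmax weighting are all assumptions.

```python
import numpy as np

def weighted_patch(query, keys, values):
    """Blend reference patch values into one weighted patch.

    Matching costs are dot products of the target query with each
    reference key; a softmax turns the costs into a set of weights,
    and the weighted values are summed into a single patch.
    """
    costs = keys @ query                   # one matching cost per reference key
    weights = np.exp(costs - costs.max())  # shift for numerical stability
    weights /= weights.sum()               # normalize into a weight set
    return weights @ values                # sum of weighted values

rng = np.random.default_rng(0)
q = rng.standard_normal(8)        # query from one target patch
k = rng.standard_normal((5, 8))   # keys of 5 reference patches
v = rng.standard_normal((5, 8))   # values of 5 reference patches
patch = weighted_patch(q, k, v)   # shape (8,)
```

In the full method, one such weighted patch is produced per target query and the combined result feeds the downstream image processing algorithm.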
-
Publication No.: US20210142042A1
Publication Date: 2021-05-13
Application No.: US17154830
Filing Date: 2021-01-21
Applicant: Adobe Inc.
Inventor: Kartik Sethi , Oliver Wang , Tharun Mohandoss , Elya Shechtman , Chetan Nanda
Abstract: In implementations of skin tone assisted digital image color matching, a device implements a color editing system, which includes a facial detection module to detect faces in an input image and in a reference image, and includes a skin tone model to determine a skin tone value reflective of a skin tone of each of the faces. A color matching module can be implemented to group the faces into one or more face groups based on the skin tone value of each of the faces, match a face group pair as an input image face group paired with a reference image face group, and generate a modified image from the input image based on color features of the reference image, the color features including face skin tones of the respective faces in the face group pair as part of the color features applied to modify the input image.
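The grouping and pairing steps can be sketched with scalar skin-tone values. The greedy threshold grouping and nearest-mean pairing below are illustrative stand-ins, not the patent's skin tone model:

```python
def group_faces_by_tone(tones, threshold=0.1):
    """Greedily group scalar skin-tone values that lie within `threshold`."""
    groups = []
    for tone in sorted(tones):
        if groups and tone - groups[-1][-1] <= threshold:
            groups[-1].append(tone)   # close enough: join the last group
        else:
            groups.append([tone])     # start a new face group
    return groups

def pair_groups(input_groups, reference_groups):
    """Pair each input face group with the reference group of closest mean tone."""
    def mean(g):
        return sum(g) / len(g)
    return [(g, min(reference_groups, key=lambda r: abs(mean(r) - mean(g))))
            for g in input_groups]

inp = group_faces_by_tone([0.32, 0.35, 0.70])   # two groups in the input image
ref = group_faces_by_tone([0.30, 0.72, 0.74])   # two groups in the reference
pairs = pair_groups(inp, ref)
```

Each matched pair then contributes its face skin tones to the color features used to modify the input image.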
-
Publication No.: US20210012189A1
Publication Date: 2021-01-14
Application No.: US16507675
Filing Date: 2019-07-10
Applicant: Adobe Inc.
Inventor: Oliver Wang , Kevin Wampler , Kalyan Krishna Sunkavalli , Elya Shechtman , Siddhant Jain
Abstract: Techniques for incorporating a black-box function into a neural network are described. For example, an image editing function may be the black-box function and may be wrapped into a layer of the neural network. A set of parameters and a source image are provided to the black-box function, and the output image that represents the source image with the set of parameters applied to the source image is output from the black-box function. To address the issue that the black-box function may not be differentiable, a loss optimization may calculate the gradients of the function using, for example, a finite differences calculation, and the gradients are used to train the neural network to ensure the output image is representative of an expected ground truth image.
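The finite-differences idea the abstract mentions can be sketched directly: the black-box function is only ever evaluated, never differentiated. The toy quadratic loss below is an illustrative stand-in for an image-editing function:

```python
import numpy as np

def finite_diff_grad(f, params, eps=1e-4):
    """Estimate df/dparams by central finite differences.

    `f` is treated as a black box: it is only evaluated, so a
    non-differentiable editing function can sit inside a network layer
    and still supply gradients for training.
    """
    grad = np.zeros_like(params)
    for i in range(params.size):
        step = np.zeros_like(params)
        step[i] = eps
        grad[i] = (f(params + step) - f(params - step)) / (2 * eps)
    return grad

# toy black box: squared distance of the parameters to a target setting
target = np.array([0.5, -1.0])
loss = lambda p: float(np.sum((p - target) ** 2))
g = finite_diff_grad(loss, np.zeros(2))
# analytic gradient at zero is 2 * (0 - target) = [-1.0, 2.0]
```

In the patented setup these estimated gradients are backpropagated so the network's output matches the expected ground-truth image.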
-
Publication No.: US20200372710A1
Publication Date: 2020-11-26
Application No.: US16985402
Filing Date: 2020-08-05
Applicant: Adobe, Inc.
Inventor: Oliver Wang , Vladimir Kim , Matthew Fisher , Elya Shechtman , Chen-Hsuan Lin , Bryan Russell
Abstract: Techniques are disclosed for 3D object reconstruction using photometric mesh representations. A decoder is pretrained to transform points sampled from 2D patches of representative objects into 3D polygonal meshes. An image frame of the object is fed into an encoder to get an initial latent code vector. For each frame and camera pair from the sequence, a polygonal mesh is rendered at the given viewpoints. The mesh is optimized by creating a virtual viewpoint and rasterizing it to obtain a depth map. The 3D mesh projections are aligned by projecting the coordinates corresponding to the polygonal face vertices of the rasterized mesh to both selected viewpoints. The photometric error is determined from RGB pixel intensities sampled from both frames. Gradients from the photometric error are backpropagated into the vertices of the assigned polygonal indices by relating the barycentric coordinates of each image to update the latent code vector.
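The barycentric relation that routes photometric gradients back to face vertices can be sketched for a single 2D triangle. This is a minimal illustration of barycentric coordinates only, not the full mesh optimization:

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c).

    These weights tie a rasterized pixel back to its three face
    vertices, which is how a per-pixel photometric error can be
    distributed onto the mesh vertices.
    """
    m = np.column_stack([b - a, c - a])
    u, v = np.linalg.solve(m, p - a)
    return np.array([1.0 - u - v, u, v])

w = barycentric(np.array([0.25, 0.25]),
                np.array([0.0, 0.0]),
                np.array([1.0, 0.0]),
                np.array([0.0, 1.0]))
# weights sum to 1; a pixel gradient scaled by w spreads over the vertices
```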
-
Publication No.: US10740881B2
Publication Date: 2020-08-11
Application No.: US15935994
Filing Date: 2018-03-26
Applicant: Adobe Inc.
Inventor: Oliver Wang , Michal Lukac , Elya Shechtman , Mahyar Najibikohnehshahri
Abstract: Techniques for using deep learning to facilitate patch-based image inpainting are described. In an example, a computer system hosts a neural network trained to generate, from an image, code vectors including features learned by the neural network and descriptive of patches. The image is received and contains a region of interest (e.g., a hole missing content). The computer system inputs it to the network and, in response, receives the code vectors. Each code vector is associated with a pixel in the image. Rather than comparing RGB values between patches, the computer system compares the code vector of a pixel inside the region to code vectors of pixels outside the region to find the best match based on a feature similarity measure (e.g., a cosine similarity). The pixel value of the pixel inside the region is set based on the pixel value of the matched pixel outside this region.
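The feature-similarity matching step can be sketched with cosine similarity over code vectors, as the abstract suggests. The 2-dimensional codes below are illustrative; real learned code vectors are much higher dimensional:

```python
import numpy as np

def best_match(hole_code, outside_codes):
    """Index of the outside code vector most similar, by cosine
    similarity, to the code vector of a pixel inside the hole."""
    a = hole_code / np.linalg.norm(hole_code)
    b = outside_codes / np.linalg.norm(outside_codes, axis=1, keepdims=True)
    return int(np.argmax(b @ a))  # largest cosine = best feature match

codes_outside = np.array([[1.0, 0.0],
                          [0.0, 1.0],
                          [0.7, 0.7]])
idx = best_match(np.array([0.6, 0.8]), codes_outside)
```

The pixel value inside the hole is then set from the pixel whose code vector won the match, rather than from raw RGB comparisons.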
-
Publication No.: US10719913B2
Publication Date: 2020-07-21
Application No.: US16160855
Filing Date: 2018-10-15
Applicant: ADOBE INC.
Inventor: Sohrab Amirghodsi , Aliakbar Darabi , Elya Shechtman
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed at image synthesis utilizing sampling of patch correspondence information between iterations at different scales. A patch synthesis technique can be performed to synthesize a target region at a first image scale based on portions of a source region that are identified by the patch synthesis technique. The image can then be sampled to generate an image at a second image scale. The sampling can include generating patch correspondence information for the image at the second image scale. Invalid patch assignments in the patch correspondence information at the second image scale can then be identified, and valid patches can be assigned to the pixels having invalid patch assignments. Other embodiments may be described and/or claimed.
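The cross-scale sampling step can be sketched as upsampling a patch-correspondence field and flagging assignments that fall outside the valid source region. The dictionary representation and bounds test are illustrative assumptions:

```python
def upsample_nnf(nnf, scale, source_h, source_w, patch=3):
    """Upsample a correspondence field (target pixel -> source pixel)
    to the next image scale.

    Scaled-up assignments whose source patch would leave the valid
    sampling region are marked invalid (None) so that valid patches
    can be reassigned to those pixels afterwards.
    """
    up = {}
    for (ty, tx), (sy, sx) in nnf.items():
        for dy in range(scale):
            for dx in range(scale):
                cy, cx = sy * scale + dy, sx * scale + dx
                valid = cy <= source_h - patch and cx <= source_w - patch
                up[(ty * scale + dy, tx * scale + dx)] = (cy, cx) if valid else None
    return up

up = upsample_nnf({(0, 0): (1, 1)}, scale=2, source_h=8, source_w=8)
bad = upsample_nnf({(0, 0): (3, 3)}, scale=2, source_h=8, source_w=8)
```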
-
Publication No.: US20200151938A1
Publication Date: 2020-05-14
Application No.: US16184289
Filing Date: 2018-11-08
Applicant: Adobe Inc.
Inventor: Elya Shechtman , Yijun Li , Chen Fang , Aaron Hertzmann
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that integrate (or embed) a non-photorealistic rendering (“NPR”) generator with a style-transfer-neural network to generate stylized images that both correspond to a source image and resemble a stroke style. By integrating an NPR generator with a style-transfer-neural network, the disclosed methods, non-transitory computer readable media, and systems can accurately capture a stroke style resembling one or both of stylized edges or stylized shadings. When training such a style-transfer-neural network, the integrated NPR generator can enable the disclosed methods, non-transitory computer readable media, and systems to use real-stroke drawings (instead of conventional paired-ground-truth drawings) for training the network to accurately portray a stroke style. In some implementations, the disclosed methods, non-transitory computer readable media, and systems can either train or apply a style-transfer-neural network that captures a variety of stroke styles, such as different edge-stroke styles or shading-stroke styles.
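A stand-in for the NPR generator's edge output can be sketched with a gradient-magnitude edge extractor. This is only an illustration of producing a stroke-like edge drawing; the patented NPR generator and the style-transfer network it conditions are not reproduced here:

```python
import numpy as np

def npr_edges(image, threshold=0.2):
    """Extract a binary edge drawing from a grayscale image.

    Gradient magnitude stands in for an NPR edge generator; in the
    disclosed system such an edge map would condition a style-transfer
    network that renders the edges in a learned stroke style.
    """
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    return (mag > threshold * mag.max()).astype(np.uint8)

img = np.zeros((8, 8))
img[:, 4:] = 1.0          # a vertical step edge
edges = npr_edges(img)    # fires along the step, zero elsewhere
```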
-
Publication No.: US10607065B2
Publication Date: 2020-03-31
Application No.: US15970831
Filing Date: 2018-05-03
Applicant: Adobe Inc.
Inventor: Rebecca Ilene Milman , Jose Ignacio Echevarria Vallespi , Jingwan Lu , Elya Shechtman , Duygu Ceylan Aksit , David P. Simons
Abstract: Generation of parameterized avatars is described. An avatar generation system uses a trained machine-learning model to generate a parameterized avatar, from which digital visual content (e.g., images, videos, augmented and/or virtual reality (AR/VR) content) can be generated. The machine-learning model is trained to identify cartoon features of a particular style—from a library of these cartoon features—that correspond to features of a person depicted in a digital photograph. The parameterized avatar is data (e.g., a feature vector) that indicates the cartoon features identified from the library by the trained machine-learning model for the depicted person. This parameterization enables the avatar to be animated. The parameterization also enables the avatar generation system to generate avatars in non-photorealistic (relatively cartoony) styles such that, despite the style, the avatars preserve identities and expressions of persons depicted in input digital photographs.
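The parameterization idea can be sketched as picking, per facial attribute, the closest cartoon feature from a library; the resulting index vector is the avatar's parameterization. The scalar attribute values and library contents below are purely illustrative, not the trained model's features:

```python
def parameterize_avatar(face_features, library):
    """For each detected facial attribute, choose the index of the
    closest cartoon feature in the style library. The index map is
    the parameterized avatar, which downstream code can animate."""
    avatar = {}
    for attr, value in face_features.items():
        options = library[attr]
        avatar[attr] = min(range(len(options)),
                           key=lambda i: abs(options[i] - value))
    return avatar

library = {"eye_size": [0.2, 0.5, 0.8],   # hypothetical cartoon options
           "jaw_width": [0.3, 0.6]}
avatar = parameterize_avatar({"eye_size": 0.55, "jaw_width": 0.25}, library)
```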
-
Publication No.: US10565758B2
Publication Date: 2020-02-18
Application No.: US15622711
Filing Date: 2017-06-14
Applicant: Adobe Inc.
Inventor: Sunil Hadap , Elya Shechtman , Zhixin Shu , Kalyan Sunkavalli , Mehmet Yumer
Abstract: Techniques are disclosed for performing manipulation of facial images using an artificial neural network. A facial rendering and generation network and method learns one or more compact, meaningful manifolds of facial appearance, by disentanglement of a facial image into intrinsic facial properties, and enables facial edits by traversing paths of such manifold(s). The facial rendering and generation network is able to handle a much wider range of manipulations including changes to, for example, viewpoint, lighting, expression, and even higher-level attributes like facial hair and age—aspects that cannot be represented using previous models.
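The manifold-traversal edit can be sketched as walking a latent code along a learned attribute direction. The 4-dimensional code and the attribute direction below are illustrative placeholders, not a trained manifold:

```python
import numpy as np

def traverse_manifold(latent, direction, steps=5, step_size=0.2):
    """Edit a face by stepping its latent code along a learned
    attribute direction (e.g. age or lighting); in the full system
    each intermediate code is decoded back into an edited image."""
    d = direction / np.linalg.norm(direction)
    return [latent + step_size * s * d for s in range(1, steps + 1)]

codes = traverse_manifold(np.zeros(4), np.array([0.0, 2.0, 0.0, 0.0]))
# five codes, each a further 0.2 along the (normalized) direction
```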
-
Publication No.: US20190287225A1
Publication Date: 2019-09-19
Application No.: US15921457
Filing Date: 2018-03-14
Applicant: ADOBE INC.
Inventor: Sohrab Amirghodsi , Kevin Wampler , Elya Shechtman , Aliakbar Darabi
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for improved patch validity testing for patch-based synthesis applications using similarity transforms. The improved patch validity tests are used to validate (or invalidate) candidate patches as valid patches falling within a sampling region of a source image. The improved patch validity tests include a hole dilation test for patch validity, a no-dilation test for patch invalidity, and a comprehensive pixel test for patch invalidity. A fringe test for range invalidity can be used to identify pixels with an invalid range and invalidate corresponding candidate patches. The fringe test for range invalidity can be performed as a precursor to any or all of the improved patch validity tests. In this manner, validated candidate patches are used to automatically reconstruct a target image.
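The hole dilation test for patch validity can be sketched on a binary hole mask: a candidate is valid only if it lies outside the hole grown by the patch radius. The square structuring element and top-left-anchored patch convention are simplifying assumptions:

```python
import numpy as np

def dilate(mask, r):
    """Binary dilation of a hole mask by a (2r+1)-square element."""
    out = np.zeros_like(mask)
    for y, x in zip(*np.nonzero(mask)):
        out[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1] = 1
    return out

def patch_valid(hole, y, x, patch=3):
    """Hole-dilation validity test: the candidate patch anchored at
    (y, x) is valid if that pixel lies outside the hole dilated by
    the patch radius, so the patch cannot overlap the hole."""
    return dilate(hole, patch - 1)[y, x] == 0

hole = np.zeros((8, 8), dtype=int)
hole[4, 4] = 1                      # a one-pixel hole to fill
```

The no-dilation and comprehensive pixel tests in the abstract refine this same idea with different invalidity criteria.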
-