-
Publication No.: US20220148326A1
Publication Date: 2022-05-12
Application No.: US17648718
Filing Date: 2022-01-24
Applicant: Adobe Inc.
Inventor: Christopher Alan Tensmeyer , Rajiv Jain , Curtis Michael Wigington , Brian Lynn Price , Brian Lafayette Davis
IPC: G06V30/32 , G06F3/04883 , G06N3/04 , G06N3/08 , G06V30/228 , G06V30/226
Abstract: Techniques are provided for generating a digital image of simulated handwriting using an encoder-decoder neural network trained on images of natural handwriting samples. The simulated handwriting image can be generated based on a style of a handwriting sample and a variable-length coded text input. The style represents visually distinctive characteristics of the handwriting sample, such as the shape, size, slope, and spacing of the letters, characters, or other markings in the handwriting sample. The resulting simulated handwriting image can include the text input rendered in the style of the handwriting sample. The distinctive visual appearance of the letters or words in the simulated handwriting image mimics that of the handwriting sample image, whether or not the text in the simulated image matches the text in the sample.
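The abstract above describes an encoder-decoder network conditioned on a handwriting style and a coded text input. The PyTorch sketch below illustrates one plausible shape of such a model; every module, layer size, and the fixed 64×256 output canvas are illustrative assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn

class StyleEncoder(nn.Module):
    """Summarizes a grayscale handwriting sample into a fixed-size style vector."""
    def __init__(self, style_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, style_dim)

    def forward(self, sample_img):                         # (B, 1, H, W)
        return self.fc(self.conv(sample_img).flatten(1))   # (B, style_dim)

class HandwritingDecoder(nn.Module):
    """Renders a variable-length coded text input in the extracted style."""
    def __init__(self, vocab_size=80, style_dim=128):
        super().__init__()
        self.char_embed = nn.Embedding(vocab_size, 64)
        self.text_rnn = nn.GRU(64, 128, batch_first=True)
        self.to_canvas = nn.Linear(128 + style_dim, 32 * 16 * 64)
        self.upsample = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, text_ids, style):                    # text_ids: (B, T) int64
        _, h = self.text_rnn(self.char_embed(text_ids))
        z = torch.cat([h[-1], style], dim=1)               # fuse text and style codes
        canvas = self.to_canvas(z).view(-1, 32, 16, 64)
        return self.upsample(canvas)                       # (B, 1, 64, 256) image

# Toy usage: one handwriting sample sets the style, arbitrary coded text is rendered.
encoder, decoder = StyleEncoder(), HandwritingDecoder()
sample = torch.rand(2, 1, 64, 256)                         # handwriting sample images
text = torch.randint(0, 80, (2, 12))                       # variable-length coded text
simulated = decoder(text, encoder(sample))                 # (2, 1, 64, 256)
```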
-
Publication No.: US20200349189A1
Publication Date: 2020-11-05
Application No.: US16929429
Filing Date: 2020-07-15
Applicant: Adobe Inc.
Inventor: Xiaohui Shen , Zhe Lin , Kalyan Krishna Sunkavalli , Hengshuang Zhao , Brian Lynn Price
Abstract: Compositing aware digital image search techniques and systems are described that leverage machine learning. In one example, a compositing aware image search system employs a two-stream convolutional neural network (CNN) to jointly learn feature embeddings from foreground digital images that capture a foreground object and background digital images that capture a background scene. To train the models of the two-stream convolutional neural network, triplets of training digital images are used. Each triplet may include a positive foreground digital image and a positive background digital image taken from the same digital image. The triplet also contains a negative foreground or background digital image that is dissimilar to the corresponding positive image in the triplet.
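As a rough illustration of the triplet training described above, the PyTorch sketch below embeds foreground and background crops with two small CNN streams and applies a standard triplet margin loss, treating the foreground/background pair from the same image as the positive pair and a background from another image as the negative. The backbone, embedding size, and margin are assumptions, not the patented configuration.

```python
import torch
import torch.nn as nn

def make_stream(embed_dim=128):
    # One stream: a tiny CNN that maps a 3-channel crop to an embedding vector.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, embed_dim),
    )

fg_stream = make_stream()   # embeds foreground objects
bg_stream = make_stream()   # embeds background scenes
triplet_loss = nn.TripletMarginLoss(margin=0.5)

# One batch of triplets: the foreground and background cropped from the same
# image form the positive pair; the negative is a background from another image.
pos_fg = torch.rand(8, 3, 128, 128)
pos_bg = torch.rand(8, 3, 128, 128)
neg_bg = torch.rand(8, 3, 128, 128)

loss = triplet_loss(fg_stream(pos_fg), bg_stream(pos_bg), bg_stream(neg_bg))
loss.backward()   # gradients flow into both streams so the embeddings co-adapt
```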
-
Publication No.: US20200242822A1
Publication Date: 2020-07-30
Application No.: US16841246
Filing Date: 2020-04-06
Applicant: Adobe Inc.
Inventor: Hailin Jin , John Philip Collomosse , Brian Lynn Price
Abstract: Techniques and systems are described for style-aware patching of a digital image in a digital medium environment. For example, a digital image creation system generates style data for a portion of a digital image that is to be filled, indicating the style of the area surrounding that portion. The system also generates content data for the portion, indicating the content of the digital image in the surrounding area. The system selects a source digital image based on the similarity of both the style and the content of the source digital image, at the location of the patch, to the style data and content data. The system then transforms the style of the source digital image based on the style data and generates the patch from the source digital image in the transformed style for incorporation into the portion of the digital image to be filled.
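One plausible way to score candidate source images by joint style and content similarity, in the spirit of the abstract above, is sketched below using VGG-16 features: a Gram matrix of shallow features stands in for style and pooled deeper features for content. The chosen layers, the Gram-matrix style descriptor, and the weighted sum are illustrative assumptions (a pretrained VGG would be used in practice).

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

features = vgg16(weights=None).features.eval()  # pretrained weights assumed in practice

@torch.no_grad()
def descriptors(img):                    # img: (1, 3, H, W), normalized
    h, style, content = img, None, None
    for i, layer in enumerate(features):
        h = layer(h)
        if i == 8:                       # shallow features -> style (Gram matrix)
            _, c, hh, ww = h.shape
            f = h.reshape(c, hh * ww)
            style = (f @ f.t()) / (c * hh * ww)
        if i == 22:                      # deeper features -> content summary
            content = h.mean(dim=(2, 3))
            break
    return style, content

@torch.no_grad()
def score(query_region, candidate_img, w_style=1.0, w_content=1.0):
    # Lower score = candidate better matches both the style and the content
    # of the area surrounding the portion to be filled.
    qs, qc = descriptors(query_region)
    cs, cc = descriptors(candidate_img)
    return w_style * F.mse_loss(qs, cs) + w_content * F.mse_loss(qc, cc)
```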
-
Publication No.: US11756208B2
Publication Date: 2023-09-12
Application No.: US17544048
Filing Date: 2021-12-07
Applicant: Adobe Inc.
Inventor: Brian Lynn Price , Peng Zhou , Scott David Cohen , Gregg Darryl Wilensky
CPC classification number: G06T7/13 , G06T2207/20081 , G06T2207/20084
Abstract: In implementations of object boundary generation, a computing device implements a boundary system to receive a mask defining a contour of an object depicted in a digital image, where the mask has a lower resolution than the digital image. The boundary system maps a curve to the contour of the object and extracts strips of pixels from the digital image that are normal to points of the curve. A sample of the digital image is generated from the extracted strips of pixels, and this sample is input to a machine learning model. The machine learning model outputs a representation of a boundary of the object by processing the sample of the digital image.
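The strip-extraction step described above can be pictured with the small NumPy/SciPy sketch below, which estimates a normal at each curve point and bilinearly samples a short strip of pixels along it. The strip length, spacing, and handling of the curve endpoints are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_normal_strips(image, curve, strip_len=21, spacing=1.0):
    """image: (H, W) array; curve: (N, 2) array of (row, col) points along the contour."""
    curve = np.asarray(curve, dtype=float)

    # Tangent via finite differences along the curve, rotated 90 degrees to get normals.
    tangents = np.gradient(curve, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True) + 1e-8
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)

    # Sampling offsets along the normal, centered on each curve point.
    offsets = (np.arange(strip_len) - strip_len // 2) * spacing               # (L,)
    coords = curve[:, None, :] + offsets[None, :, None] * normals[:, None, :]  # (N, L, 2)

    # Bilinear sampling of the image at the strip coordinates.
    strips = map_coordinates(image,
                             [coords[..., 0].ravel(), coords[..., 1].ravel()],
                             order=1, mode='nearest')
    return strips.reshape(len(curve), strip_len)   # one pixel strip per curve point
```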
-
Publication No.: US11631162B2
Publication Date: 2023-04-18
Application No.: US17557431
Filing Date: 2021-12-21
Applicant: Adobe Inc.
Inventor: Brian Lynn Price , Yinan Zhao , Scott David Cohen
Abstract: Fill techniques as implemented by a computing device are described to perform hole filling of a digital image. In one example, a computing device uses deeply learned features of the digital image, obtained via machine learning, as a basis to search a digital image repository and locate a guidance digital image. Once located, machine learning techniques are used to align the guidance digital image with the hole to be filled in the digital image. Once aligned, the guidance digital image guides generation of fill for the hole in the digital image. Machine learning techniques are also used to determine which parts of the guidance digital image are to be blended into the hole and which parts of the hole are to receive new content synthesized by the computing device.
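A hedged sketch of the retrieval step described above: deep features of the image to be filled are compared against a precomputed feature index of a repository to choose a guidance image. The ResNet-18 encoder and cosine similarity used here are illustrative assumptions, not the patented pipeline, and a pretrained encoder would be used in practice.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

encoder = resnet18(weights=None)        # pretrained weights assumed in practice
encoder.fc = torch.nn.Identity()        # keep the 512-d pooled features
encoder.eval()

@torch.no_grad()
def embed(images):                      # images: (B, 3, 224, 224)
    return F.normalize(encoder(images), dim=1)

@torch.no_grad()
def pick_guidance(query_img, repo_feats):
    """repo_feats: (N, 512) precomputed, L2-normalized repository features."""
    q = embed(query_img.unsqueeze(0))   # (1, 512) features of the image with the hole
    sims = repo_feats @ q.t()           # cosine similarity to every repository image
    return int(sims.argmax())           # index of the chosen guidance image

# Toy usage with random stand-ins for the repository index and the query image.
repo_feats = F.normalize(torch.randn(1000, 512), dim=1)
best = pick_guidance(torch.rand(3, 224, 224), repo_feats)
```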
-
Publication No.: US11514252B2
Publication Date: 2022-11-29
Application No.: US16004395
Filing Date: 2018-06-10
Applicant: Adobe Inc.
Inventor: Brian Lynn Price , Ruotian Luo , Scott David Cohen
Abstract: A discriminative captioning system generates captions for digital images that can be used to tell two digital images apart. The discriminative captioning system includes a caption generation machine learning system that is trained by a discriminative captioning training system, which in turn includes a retrieval machine learning system. For training, a digital image is input to the caption generation machine learning system, which generates a caption for the digital image. The digital image and the generated caption, as well as a set of additional images, are input to the retrieval machine learning system. The retrieval machine learning system generates a discriminability loss that indicates how well the retrieval machine learning system is able to use the caption to discriminate between the digital image and each image in the set of additional digital images. This discriminability loss is used to train the caption generation machine learning system.
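The discriminability loss can be pictured with the minimal sketch below: a retrieval-style score measures how well a generated caption singles out its source image from a set of distractors, and that score becomes a training signal for the caption generator. The shared embedding space and the cross-entropy formulation are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def discriminability_loss(caption_emb, image_embs, target_idx):
    """caption_emb: (D,) embedding of the generated caption.
    image_embs: (N, D) embeddings of the target image plus distractor images.
    target_idx: position of the target image within image_embs."""
    # The retrieval model should score the target image highest if the caption
    # is discriminative; cross-entropy over the scores penalizes captions that
    # fail to pick the target out of the set.
    logits = image_embs @ caption_emb                 # (N,) similarity scores
    return F.cross_entropy(logits.unsqueeze(0), torch.tensor([target_idx]))

# Toy usage: one target plus four distractors in a 256-d shared embedding space.
caption_emb = F.normalize(torch.randn(256, requires_grad=True), dim=0)
image_embs = F.normalize(torch.randn(5, 256), dim=1)
loss = discriminability_loss(caption_emb, image_embs, target_idx=0)
loss.backward()   # would propagate back into the caption generator via caption_emb
```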
-
Publication No.: US11250252B2
Publication Date: 2022-02-15
Application No.: US16701586
Filing Date: 2019-12-03
Applicant: ADOBE INC.
Inventor: Christopher Alan Tensmeyer , Rajiv Jain , Curtis Michael Wigington , Brian Lynn Price , Brian Lafayette Davis
IPC: G06K9/00 , G06F3/0488 , G06N3/04 , G06K9/22 , G06N3/08
Abstract: Techniques are provided for generating a digital image of simulated handwriting using an encoder-decoder neural network trained on images of natural handwriting samples. The simulated handwriting image can be generated based on a style of a handwriting sample and a variable-length coded text input. The style represents visually distinctive characteristics of the handwriting sample, such as the shape, size, slope, and spacing of the letters, characters, or other markings in the handwriting sample. The resulting simulated handwriting image can include the text input rendered in the style of the handwriting sample. The distinctive visual appearance of the letters or words in the simulated handwriting image mimics that of the handwriting sample image, whether or not the text in the simulated image matches the text in the sample.
-
Publication No.: US20190361994A1
Publication Date: 2019-11-28
Application No.: US15986401
Filing Date: 2018-05-22
Applicant: Adobe Inc.
Inventor: Xiaohui Shen , Zhe Lin , Kalyan Krishna Sunkavalli , Hengshuang Zhao , Brian Lynn Price
Abstract: Compositing aware digital image search techniques and systems are described that leverage machine learning. In one example, a compositing aware image search system employs a two-stream convolutional neural network (CNN) to jointly learn feature embeddings from foreground digital images that capture a foreground object and background digital images that capture a background scene. To train the models of the two-stream convolutional neural network, triplets of training digital images are used. Each triplet may include a positive foreground digital image and a positive background digital image taken from the same digital image. The triplet also contains a negative foreground or background digital image that is dissimilar to the corresponding positive image in the triplet.
-
Publication No.: US20190196698A1
Publication Date: 2019-06-27
Application No.: US15852253
Filing Date: 2017-12-22
Applicant: Adobe Inc.
Inventor: Scott David Cohen , Brian Lynn Price , Abhinav Gupta
IPC: G06F3/0484 , G06T11/60 , G10L15/22 , G06K9/46 , G06F17/30
CPC classification number: G06F3/04845 , G06F16/532 , G06F16/58 , G06K9/4609 , G06K2009/363 , G06K2009/366 , G06T11/60 , G10L15/22
Abstract: Systems and techniques are described herein for directing a user conversation to obtain an editing query, and removing and replacing objects in an image based on the editing query. Pixels corresponding to an object in the image indicated by the editing query are ascertained. The editing query is processed to determine whether it includes a remove request or a replace request. A search query is constructed to obtain images, such as from a database of stock images, including fill material or replacement material to fulfill the remove request or replace request, respectively. Composite images are generated from the fill material or the replacement material and the image to be edited. Composite images are harmonized to remove editing artifacts and make the images look natural. A user interface exposes the images and accepts multi-modal user input during the directed user conversation.
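The remove-versus-replace routing and search-query construction described above might look, in toy form, like the sketch below. The keyword rules and output fields are purely illustrative assumptions; the described system relies on a directed conversation and multi-modal input rather than simple pattern matching.

```python
import re

def parse_editing_query(query):
    """Toy classifier: returns the action, the target object, and a search query
    for stock images providing fill or replacement material."""
    q = query.lower()
    m = re.search(r"replace (?:the )?(?P<obj>[\w ]+?) with (?:a |an )?(?P<new>[\w ]+)", q)
    if m:
        return {"action": "replace", "object": m.group("obj").strip(),
                "search_query": m.group("new").strip()}   # look for replacement material
    m = re.search(r"remove (?:the )?(?P<obj>[\w ]+)", q)
    if m:
        # For removal, the search targets fill material resembling the surrounding scene.
        return {"action": "remove", "object": m.group("obj").strip(),
                "search_query": f"background without {m.group('obj').strip()}"}
    return {"action": "unknown"}

print(parse_editing_query("Please remove the trash can"))   # remove request
print(parse_editing_query("Replace the dog with a cat"))    # replace request
```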
-
Publication No.: US11663467B2
Publication Date: 2023-05-30
Application No.: US16691110
Filing Date: 2019-11-21
Applicant: ADOBE INC.
Inventor: Long Mai , Yannick Hold-Geoffroy , Naoto Inoue , Daichi Ito , Brian Lynn Price
CPC classification number: G06N3/08 , G06T5/50 , G06T15/506 , G06T15/80 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084
Abstract: Embodiments of the present invention provide systems, methods, and non-transitory computer storage media for generating an ambient occlusion (AO) map for a 2D image that can be combined with the image to adjust its contrast based on the geometric information in the image. In embodiments, a trained neural network automatically generates an AO map for a 2D image without any predefined 3D scene information. Optimizing the neural network to estimate an AO map for a 2D image requires training, testing, and validating the network on a synthetic dataset of paired images and ground-truth AO maps rendered from 3D scenes. By using an estimated AO map to adjust the contrast of a 2D image, the image can be made to appear more lifelike through modified shadows and shading that reflect the ambient lighting present in the image.
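The final compositing step, combining an estimated AO map with the 2D image to adjust its contrast, might be sketched as below. The linear blending formula and the strength parameter are assumptions for illustration, not the method described in the patent.

```python
import numpy as np

def apply_ao(image, ao_map, strength=0.7):
    """image: (H, W, 3) float array in [0, 1].
    ao_map: (H, W) float array in [0, 1], where 1 = unoccluded and 0 = fully occluded."""
    # Darken the image where the AO map indicates occlusion, then blend with the
    # original so `strength` controls how strongly the shading is modified.
    shaded = image * ao_map[..., None]
    return (1.0 - strength) * image + strength * shaded

# Toy usage with random stand-ins for the image and its estimated AO map.
img = np.random.rand(64, 64, 3)
ao = np.random.rand(64, 64)
out = apply_ao(img, ao)          # contrast-adjusted image, same shape as img
```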