-
Publication Number: US20250078386A1
Publication Date: 2025-03-06
Application Number: US18238719
Filing Date: 2023-08-28
Applicant: Adobe Inc.
Inventor: Yangtuanfeng Wang
Abstract: Retexturing items depicted in digital image data is described. An image retexturing system receives image data that depicts an item featuring a pattern. The image retexturing system identifies coarse correspondences between regions in the image data and a two-dimensional image of the pattern. Using the coarse correspondences, the image retexturing system establishes, for each pixel in the image data depicting the item, a pair of coordinates for a surface of the item featuring the pattern. The coordinate pairs are then used to generate a mesh that represents the surface of the item. The image retexturing system then applies a new texture to the item by mapping the new texture to a surface of the mesh. A shading layer and item mask are generated for the image data, which are combined with the retextured image to generate a synthesized image that depicts the retextured item.
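As an illustration of the final compositing step this abstract describes, the following is a minimal sketch, not the patented implementation: per-pixel UV coordinates for the item are used to sample a new texture, which is then combined with a shading layer and an item mask. All array names and shapes are assumptions.

```python
import numpy as np

def retexture(image, uv, new_texture, shading, item_mask):
    """image: (H, W, 3) original photo in [0, 1]
    uv: (H, W, 2) per-pixel texture coordinates in [0, 1] for the item surface
    new_texture: (Th, Tw, 3) replacement pattern
    shading: (H, W, 1) shading layer extracted from the original image
    item_mask: (H, W, 1) soft mask of the item, 1 inside the item
    """
    th, tw = new_texture.shape[:2]
    # Nearest-neighbour texture lookup via the per-pixel UV map.
    xs = np.clip((uv[..., 0] * (tw - 1)).astype(int), 0, tw - 1)
    ys = np.clip((uv[..., 1] * (th - 1)).astype(int), 0, th - 1)
    retextured = new_texture[ys, xs]                  # (H, W, 3)
    # Re-apply the shading layer so the new pattern keeps the original lighting.
    shaded = np.clip(retextured * shading, 0.0, 1.0)
    # Composite: retextured item inside the mask, original image elsewhere.
    return item_mask * shaded + (1.0 - item_mask) * image

# Example with random data just to exercise the function.
H, W = 64, 64
out = retexture(
    image=np.random.rand(H, W, 3),
    uv=np.random.rand(H, W, 2),
    new_texture=np.random.rand(128, 128, 3),
    shading=np.random.rand(H, W, 1),
    item_mask=(np.random.rand(H, W, 1) > 0.5).astype(float),
)
print(out.shape)  # (64, 64, 3)
```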
-
Publication Number: US12165260B2
Publication Date: 2024-12-10
Application Number: US17715646
Filing Date: 2022-04-07
Applicant: Adobe Inc. , University College London
Inventor: Duygu Ceylan Aksit , Yangtuanfeng Wang , Niloy J. Mitra , Meng Zhang
Abstract: Systems and methods are described for rendering garments. The system includes a first machine learning model trained to generate coarse garment templates of a garment and a second machine learning model trained to render garment images. The first machine learning model generates a coarse garment template based on position data. The system produces a neural texture for the garment, the neural texture comprising a multi-dimensional feature map characterizing detail of the garment. The system provides the coarse garment template and the neural texture to the second machine learning model trained to render garment images. The second machine learning model generates a rendered garment image of the garment based on the coarse garment template of the garment and the neural texture.
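A minimal PyTorch sketch of the two-stage pipeline described above: one network produces a coarse garment template from position data, a learned multi-channel neural texture is sampled over the template's UV layout, and a second network decodes those features into an RGB image. All module sizes and names are illustrative assumptions, not the patented models.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseTemplateNet(nn.Module):
    """Predicts offsets for a fixed-topology garment template from pose data."""
    def __init__(self, pose_dim=72, num_verts=1000):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(pose_dim, 256), nn.ReLU(),
            nn.Linear(256, num_verts * 3),
        )
        self.num_verts = num_verts

    def forward(self, pose):
        return self.mlp(pose).view(-1, self.num_verts, 3)

class NeuralRenderer(nn.Module):
    """Decodes rasterized neural-texture features into an RGB garment image."""
    def __init__(self, feat_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, feat_image):
        return self.net(feat_image)

# Learnable neural texture: a multi-dimensional feature map over UV space.
feat_dim, tex_res = 16, 256
neural_texture = nn.Parameter(torch.randn(1, feat_dim, tex_res, tex_res))

pose = torch.randn(1, 72)
template = CoarseTemplateNet()(pose)              # (1, V, 3) coarse geometry

# Stand-in for rasterization: a per-pixel UV map of the posed template.
uv_map = torch.rand(1, 128, 128, 2) * 2 - 1       # grid_sample expects [-1, 1]
feat_image = F.grid_sample(neural_texture, uv_map, align_corners=True)

rgb = NeuralRenderer(feat_dim)(feat_image)        # (1, 3, 128, 128)
print(template.shape, rgb.shape)
```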
-
Publication Number: US20240378809A1
Publication Date: 2024-11-14
Application Number: US18316490
Filing Date: 2023-05-12
Applicant: Adobe Inc.
Inventor: Yangtuanfeng Wang , Yi Zhou , Yasamin Jafarian , Nathan Aaron Carr , Jimei Yang , Duygu Ceylan Aksit
IPC: G06T17/20
Abstract: Decal application techniques as implemented by a computing device are described to perform decaling of a digital image. In one example, features of a digital image learned using machine learning are used by a computing device as a basis to predict the surface geometry of an object in the digital image. Once the surface geometry of the object is predicted, machine learning techniques are then used by the computing device to configure an overlay object to be applied onto the digital image according to the predicted surface geometry of the object.
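A minimal PyTorch sketch of the idea above: a network predicts per-pixel surface geometry (here, reduced to a warp field) from the image, and the decal is warped by that field before being composited onto the image. The network, warp parameterization, and names are illustrative assumptions rather than the patented method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeometryNet(nn.Module):
    """Predicts a per-pixel sampling grid (a surface-aware warp) for the decal."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1), nn.Tanh(),  # (x, y) in [-1, 1]
        )

    def forward(self, image):
        return self.net(image).permute(0, 2, 3, 1)      # (N, H, W, 2)

def apply_decal(image, decal, decal_alpha, geometry_net):
    grid = geometry_net(image)                           # surface-aware warp
    warped = F.grid_sample(decal, grid, align_corners=True)
    alpha = F.grid_sample(decal_alpha, grid, align_corners=True)
    return alpha * warped + (1 - alpha) * image

image = torch.rand(1, 3, 128, 128)
decal = torch.rand(1, 3, 64, 64)
decal_alpha = (torch.rand(1, 1, 64, 64) > 0.5).float()
out = apply_decal(image, decal, decal_alpha, GeometryNet())
print(out.shape)  # torch.Size([1, 3, 128, 128])
```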
-
Publication Number: US20240169553A1
Publication Date: 2024-05-23
Application Number: US18057436
Filing Date: 2022-11-21
Applicant: Adobe Inc.
Inventor: Jae shin Yoon , Zhixin Shu , Yangtuanfeng Wang , Jingwan Lu , Jimei Yang , Duygu Ceylan Aksit
CPC classification number: G06T7/20 , G06T13/40 , G06T15/04 , G06T17/00 , G06T2207/10016 , G06T2207/20081 , G06T2207/20084 , G06T2207/30244
Abstract: Techniques for modeling secondary motion based on three-dimensional models are described as implemented by a secondary motion modeling system, which is configured to receive a plurality of three-dimensional object models representing an object. Based on the three-dimensional object models, the secondary motion modeling system determines three-dimensional motion descriptors of a particular three-dimensional object model using one or more machine learning models. Based on the three-dimensional motion descriptors, the secondary motion modeling system models at least one feature subjected to secondary motion using the one or more machine learning models. The particular three-dimensional object model having the at least one feature is rendered by the secondary motion modeling system.
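A minimal PyTorch sketch of the flow this abstract describes: per-vertex motion descriptors are computed from a sequence of three-dimensional object models, and a small network predicts secondary-motion offsets for features such as loose clothing or hair. The descriptor choice and network are illustrative assumptions, not the patented models.

```python
import torch
import torch.nn as nn

class SecondaryMotionNet(nn.Module):
    """Maps per-vertex position + motion descriptors to a displacement."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(9, 64), nn.ReLU(),   # position, velocity, acceleration
            nn.Linear(64, 3),
        )

    def forward(self, pos, vel, acc):
        return self.mlp(torch.cat([pos, vel, acc], dim=-1))

# A sequence of three-dimensional object models: (T frames, V vertices, 3).
T, V = 10, 500
sequence = torch.cumsum(torch.randn(T, V, 3) * 0.01, dim=0)

# Simple 3D motion descriptors: finite-difference velocity and acceleration.
vel = sequence[1:] - sequence[:-1]
acc = vel[1:] - vel[:-1]

net = SecondaryMotionNet()
t = 5  # model secondary motion for frame t
offsets = net(sequence[t], vel[t - 1], acc[t - 2])
deformed = sequence[t] + offsets       # vertices with secondary motion applied
print(deformed.shape)                  # torch.Size([500, 3])
```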
-
Publication Number: US20240135513A1
Publication Date: 2024-04-25
Application Number: US18190654
Filing Date: 2023-03-27
Applicant: Adobe Inc.
Inventor: Krishna Kumar Singh , Yijun Li , Jingwan Lu , Duygu Ceylan Aksit , Yangtuanfeng Wang , Jimei Yang , Tobias Hinz
CPC classification number: G06T5/005 , G06T3/0093 , G06T7/40 , G06T7/70 , G06V10/44 , G06V10/771 , G06V10/806 , G06V10/82 , G06T2207/30196
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portrays a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, the disclosed systems in some embodiments perform facial expression transfer and facial expression animations to generate modified digital images or animations.
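A minimal PyTorch sketch of the masked-generation contract behind the infill and human-inpainting modifications mentioned above: a stub generative model synthesizes content for a masked region, and only that region is replaced in the output image. The stub generator and all names are illustrative assumptions, not the patented models.

```python
import torch
import torch.nn as nn

class StubInpaintingGenerator(nn.Module):
    """Stand-in for a generative model conditioned on the image and mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, mask):
        # Condition on the masked-out image plus the mask channel.
        return self.net(torch.cat([image * (1 - mask), mask], dim=1))

def inpaint(image, mask, generator):
    """Replace only the masked region (e.g. a person to re-synthesize)."""
    generated = generator(image, mask)
    return mask * generated + (1 - mask) * image

image = torch.rand(1, 3, 128, 128)                # the digital image to modify
mask = torch.zeros(1, 1, 128, 128)
mask[:, :, 40:100, 50:90] = 1.0                   # region portraying the human
out = inpaint(image, mask, StubInpaintingGenerator())
print(out.shape)  # torch.Size([1, 3, 128, 128])
```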
-
Publication Number: US20230123820A1
Publication Date: 2023-04-20
Application Number: US17502714
Filing Date: 2021-10-15
Applicant: Adobe Inc.
Inventor: Yangtuanfeng Wang , Duygu Ceylan Aksit , Krishna Kumar Singh , Niloy J Mitra
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that utilize a character animation neural network informed by motion and pose signatures to generate a digital video through person-specific appearance modeling and motion retargeting. In particular embodiments, the disclosed systems implement a character animation neural network that includes a pose embedding model to encode a pose signature into spatial pose features. The character animation neural network further includes a motion embedding model to encode a motion signature into motion features. In some embodiments, the disclosed systems utilize the motion features to refine per-frame pose features and improve temporal coherency. In certain implementations, the disclosed systems also utilize the motion features to demodulate neural network weights used to generate an image frame of a character in motion based on the refined pose features.
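A minimal PyTorch sketch (batch size 1) of the idea above: spatial pose features are refined with motion features, and the motion features also modulate and demodulate the convolution weights of the image generator, in the spirit of StyleGAN2-style weight demodulation. Shapes and module sizes are illustrative assumptions, not the patented network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def demodulated_conv2d(x, weight, style, eps=1e-8):
    """x: (1, Cin, H, W); weight: (Cout, Cin, k, k); style: (Cin,)."""
    w = weight * style.view(1, -1, 1, 1)                    # modulate
    demod = torch.rsqrt((w ** 2).sum(dim=(1, 2, 3)) + eps)  # per output channel
    w = w * demod.view(-1, 1, 1, 1)                         # demodulate
    return F.conv2d(x, w, padding=weight.shape[-1] // 2)

pose_embed = nn.Conv2d(3, 32, 3, padding=1)        # pose signature -> pose features
motion_embed = nn.Sequential(nn.Linear(3 * 72, 64), nn.ReLU(), nn.Linear(64, 32))
to_rgb_weight = nn.Parameter(torch.randn(3, 32, 3, 3) * 0.1)

pose_signature = torch.rand(1, 3, 64, 64)          # rendered pose map for the frame
motion_signature = torch.randn(1, 3 * 72)          # short window of past poses

pose_feat = pose_embed(pose_signature)             # (1, 32, 64, 64)
motion_feat = motion_embed(motion_signature)       # (1, 32)

# Refine per-frame pose features with motion features for temporal coherency.
refined = pose_feat + motion_feat.view(1, -1, 1, 1)

# Motion features also demodulate the generator weights for this frame.
frame = demodulated_conv2d(refined, to_rgb_weight, motion_feat[0])
print(frame.shape)  # torch.Size([1, 3, 64, 64])
```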
-
Publication Number: US20250078406A1
Publication Date: 2025-03-06
Application Number: US18242380
Filing Date: 2023-09-05
Applicant: Adobe Inc.
Inventor: Jae Shin Yoon , Yangtuanfeng Wang , Krishna Kumar Singh , Junying Wang , Jingwan Lu
Abstract: A modeling system accesses a two-dimensional (2D) input image displayed via a user interface, the 2D input image depicting, at a first view, a first object. At least a first region of the first object is not represented by pixel values of the 2D input image. The modeling system generates, by applying a 3D representation generation model to the 2D input image, a three-dimensional (3D) representation of the first object that depicts an entirety of the first object including the first region. The modeling system displays, via the user interface, the 3D representation, wherein the 3D representation is viewable via the user interface from a plurality of views including the first view.
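A minimal NumPy sketch of the workflow above: a stand-in "3D representation generation model" lifts a 2D image to a colored point cloud, which can then be re-projected from other viewpoints. A real model would also synthesize the regions not visible in the input view; the stub below only lifts the visible pixels. Names, the camera model, and the depth stub are illustrative assumptions.

```python
import numpy as np

def generate_3d_representation(image):
    """Stub model: lift each pixel to 3D using a (placeholder) depth map."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    depth = 1.0 + 0.1 * np.random.rand(h, w)      # stand-in for predicted depth
    x = (xs - w / 2) / w * depth
    y = (ys - h / 2) / h * depth
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    colors = image.reshape(-1, 3)
    return points, colors

def render_view(points, colors, yaw_deg, size=64):
    """Re-project the 3D representation from a rotated camera."""
    t = np.deg2rad(yaw_deg)
    rot = np.array([[np.cos(t), 0, np.sin(t)],
                    [0, 1, 0],
                    [-np.sin(t), 0, np.cos(t)]])
    p = points @ rot.T
    u = ((p[:, 0] / p[:, 2]) * size + size / 2).astype(int)
    v = ((p[:, 1] / p[:, 2]) * size + size / 2).astype(int)
    canvas = np.zeros((size, size, 3))
    ok = (u >= 0) & (u < size) & (v >= 0) & (v < size)
    canvas[v[ok], u[ok]] = colors[ok]
    return canvas

image = np.random.rand(64, 64, 3)                  # the 2D input image
points, colors = generate_3d_representation(image)
for yaw in (0, 20, -20):                           # view from several angles
    view = render_view(points, colors, yaw)
    print(yaw, view.shape)
```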
-
Publication Number: US20240428491A1
Publication Date: 2024-12-26
Application Number: US18340445
Filing Date: 2023-06-23
Applicant: Adobe Inc.
Inventor: Jae Shin Yoon , Duygu Ceylan Aksit , Yangtuanfeng Wang , Jingwan Lu , Jimei Yang , Zhixin Shu , Chengan He , Yi Zhou , Jun Saito , James Zachary
IPC: G06T13/40
Abstract: The present disclosure relates to a system that utilizes neural networks to generate looping animations from still images. The system fits a 3D model to a pose of a person in a digital image. The system receives a 3D animation sequence that transitions between a starting pose and an ending pose. The system generates, utilizing an animation transition neural network, first and second 3D animation transition sequences that respectively transition between the pose of the person and the starting pose and between the ending pose and the pose of the person. The system modifies each of the 3D animation sequence, the first 3D animation transition sequence, and the second 3D animation transition sequence by applying a texture map. The system generates a looping 3D animation by combining the modified 3D animation sequence, the modified first 3D animation transition sequence, and the modified second 3D animation transition sequence.
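A minimal NumPy sketch of how the pieces above fit together: a stand-in for the animation transition neural network (simple linear interpolation here) produces transitions from the person's pose into the animation's starting pose and from its ending pose back to the person's pose, and the three sequences are concatenated into a seamless loop. The pose dimensionality and the interpolation stand-in are illustrative assumptions.

```python
import numpy as np

POSE_DIM = 72  # e.g. axis-angle parameters of a body model

def transition(pose_a, pose_b, num_frames=15):
    """Stand-in for the transition network: blend pose_a into pose_b."""
    w = np.linspace(0.0, 1.0, num_frames)[:, None]
    return (1 - w) * pose_a + w * pose_b

person_pose = np.random.randn(POSE_DIM)          # pose fitted to the still image
animation = np.random.randn(40, POSE_DIM)        # received 3D animation sequence

intro = transition(person_pose, animation[0])    # person pose -> starting pose
outro = transition(animation[-1], person_pose)   # ending pose -> person pose

# Concatenating the three (textured) sequences yields a looping 3D animation:
# the last frame returns to the pose of the person in the original image.
loop = np.concatenate([intro, animation, outro], axis=0)

def frame_at(t):
    """Play the animation forever by indexing modulo the loop length."""
    return loop[t % len(loop)]

print(loop.shape, frame_at(125).shape)
```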
-
Publication Number: US12067659B2
Publication Date: 2024-08-20
Application Number: US17502714
Filing Date: 2021-10-15
Applicant: Adobe Inc.
Inventor: Yangtuanfeng Wang , Duygu Ceylan Aksit , Krishna Kumar Singh , Niloy J Mitra
CPC classification number: G06T13/40 , G06N3/045 , G06N3/08 , G06N3/088 , G06T7/20 , G06T7/73 , G06T2207/10016 , G06T2207/20081 , G06T2207/20084 , G06T2207/30196
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that utilize a character animation neural network informed by motion and pose signatures to generate a digital video through person-specific appearance modeling and motion retargeting. In particular embodiments, the disclosed systems implement a character animation neural network that includes a pose embedding model to encode a pose signature into spatial pose features. The character animation neural network further includes a motion embedding model to encode a motion signature into motion features. In some embodiments, the disclosed systems utilize the motion features to refine per-frame pose features and improve temporal coherency. In certain implementations, the disclosed systems also utilize the motion features to demodulate neural network weights used to generate an image frame of a character in motion based on the refined pose features.
-
Publication Number: US20240144520A1
Publication Date: 2024-05-02
Application Number: US18304144
Filing Date: 2023-04-20
Applicant: Adobe Inc.
Inventor: Giorgio Gori , Yi Zhou , Yangtuanfeng Wang , Yang Zhou , Krishna Kumar Singh , Jae Shin Yoon , Duygu Ceylan Aksit
IPC: G06T7/73
CPC classification number: G06T7/73 , G06T2207/20084 , G06T2207/30196
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify two-dimensional images via scene-based editing using three-dimensional representations of the two-dimensional images. For instance, in one or more embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and modify shadows in the two-dimensional images according to various shadow maps. Additionally, the disclosed systems utilize three-dimensional representations of two-dimensional images to modify humans in the two-dimensional images. The disclosed systems also utilize three-dimensional representations of two-dimensional images to provide scene scale estimation via scale fields of the two-dimensional images. In some embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and visualize 3D planar surfaces for modifying objects in two-dimensional images. The disclosed systems further use three-dimensional representations of two-dimensional images to customize focal points for the two-dimensional images.
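A minimal NumPy sketch of one of the operations above: using a 3D representation of an object in the image to generate a shadow, by projecting the object's 3D points onto a ground plane along the light direction and rasterizing the hits into a shadow mask for the 2D image. The camera model, ground plane, and light direction are illustrative assumptions, not the patented shadow-map pipeline.

```python
import numpy as np

def cast_shadow_mask(points, light_dir, ground_y=0.0, size=128):
    """points: (N, 3) object points; light_dir: (3,) direction the light travels."""
    light_dir = light_dir / np.linalg.norm(light_dir)
    # Distance along the light ray from each point to the ground plane y = ground_y.
    t = (ground_y - points[:, 1]) / light_dir[1]
    hits = points + t[:, None] * light_dir            # shadow points on the ground
    # Orthographic rasterization of the (x, z) ground coordinates.
    u = ((hits[:, 0] + 1) * 0.5 * (size - 1)).astype(int)
    v = ((hits[:, 2] + 1) * 0.5 * (size - 1)).astype(int)
    mask = np.zeros((size, size))
    ok = (u >= 0) & (u < size) & (v >= 0) & (v < size)
    mask[v[ok], u[ok]] = 1.0
    return mask

# A toy object: a blob of points floating above the ground plane.
pts = np.random.randn(2000, 3) * 0.2 + np.array([0.0, 0.8, 0.0])
shadow = cast_shadow_mask(pts, light_dir=np.array([0.3, -1.0, 0.2]))
print(shadow.sum() > 0, shadow.shape)   # some pixels are in shadow
```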
-