-
Publication Number: US20240338915A1
Publication Date: 2024-10-10
Application Number: US18132272
Filing Date: 2023-04-07
Applicant: Adobe Inc.
Inventor: Zhixin Shu , Zexiang Xu , Shahrukh Athar , Sai Bi , Kalyan Sunkavalli , Fujun Luan
CPC classification number: G06T19/20 , G06N3/08 , G06T15/80 , G06T17/20 , G06T2210/44 , G06T2219/2012 , G06T2219/2021
Abstract: Certain aspects and features of this disclosure relate to providing a controllable, dynamic appearance for neural 3D portraits. For example, a method involves projecting a color at points in a digital video portrait based on location, surface normal, and viewing direction for each respective point in a canonical space. The method also involves projecting, using the color, dynamic face normals for the points as changing according to an articulated head pose and facial expression in the digital video portrait. The method further involves disentangling, based on the dynamic face normals, a facial appearance in the digital video portrait into intrinsic components in the canonical space. The method additionally involves storing and/or rendering at least a portion of a head pose as a controllable, neural 3D portrait based on the digital video portrait using the intrinsic components.
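The disentanglement into intrinsic components can be illustrated with the classic Lambertian recombination below. This is a minimal numpy sketch, not the patent's neural model; the function names and the diffuse-only shading assumption are ours.

```python
import numpy as np

def lambert_shading(normals, light_dir):
    """Diffuse shading from per-point surface normals (N x 3) and one light direction."""
    light_dir = np.asarray(light_dir, dtype=float)
    light_dir /= np.linalg.norm(light_dir)
    # Clamp backfacing points to zero shading.
    return np.clip(normals @ light_dir, 0.0, None)

def compose_appearance(albedo, normals, light_dir):
    """Recombine intrinsic components: appearance = albedo * diffuse shading."""
    shading = lambert_shading(normals, light_dir)
    return albedo * shading[:, None]
```

Inverting this composition per point (recovering albedo and normals from observed color) is the sense in which an appearance is "disentangled" into intrinsic components.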
-
Publication Number: US11887241B2
Publication Date: 2024-01-30
Application Number: US17559867
Filing Date: 2021-12-22
Applicant: Adobe Inc.
Inventor: Zexiang Xu , Yannick Hold-Geoffroy , Milos Hasan , Kalyan Sunkavalli , Fanbo Xiang
Abstract: Embodiments are disclosed for neural texture mapping. In some embodiments, a method of neural texture mapping includes obtaining a plurality of images of an object, determining a volumetric representation of a scene of the object using a first neural network, mapping 3D points of the scene to a 2D texture space using a second neural network, and determining radiance values for each 2D point in the 2D texture space from a plurality of viewpoints using a third neural network to generate a 3D appearance representation of the object.
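The "3D points to 2D texture space" step can be pictured as a small network whose output is squashed into the unit UV square. The sketch below is a toy stand-in with random fixed weights, not the trained network from the patent; all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the texture-mapping network: one hidden layer,
# sigmoid output so every 3D point lands at a UV coordinate in [0, 1]^2.
W1 = rng.standard_normal((3, 16))
W2 = rng.standard_normal((16, 2))

def texture_mapping(points_3d):
    h = np.tanh(points_3d @ W1)
    uv = 1.0 / (1.0 + np.exp(-(h @ W2)))  # sigmoid keeps UVs inside the unit square
    return uv
```

In the actual method such a mapping is learned jointly with the volumetric representation, so that radiance predicted per UV point yields a consistent texture over the surface.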
-
Publication Number: US20250104349A1
Publication Date: 2025-03-27
Application Number: US18894176
Filing Date: 2024-09-24
Applicant: ADOBE INC.
Inventor: Sai Bi , Jiahao Li , Hao Tan , Kai Zhang , Zexiang Xu , Fujun Luan , Yinghao Xu , Yicong Hong , Kalyan K. Sunkavalli
Abstract: A method, apparatus, non-transitory computer readable medium, and system for 3D model generation include obtaining a plurality of input images depicting an object and a set of 3D position embeddings, where each of the plurality of input images depicts the object from a different perspective, encoding the plurality of input images to obtain a plurality of 2D features corresponding to the plurality of input images, respectively, generating 3D features based on the plurality of 2D features and the set of 3D position embeddings, and generating a 3D model of the object based on the 3D features.
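A common way to realize a "set of 3D position embeddings" is a sinusoidal encoding of each coordinate at several frequencies. The sketch below shows that standard construction; the patent does not specify this exact scheme, so treat it as an illustrative assumption.

```python
import numpy as np

def positional_embedding_3d(points, n_freqs=4):
    """Sinusoidal 3D position embeddings: for each of x, y, z, emit
    sin/cos at n_freqs octaves -> (N, 3 * 2 * n_freqs) features."""
    freqs = 2.0 ** np.arange(n_freqs) * np.pi   # (F,)
    angles = points[:, :, None] * freqs         # (N, 3, F)
    emb = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)  # (N, 3, 2F)
    return emb.reshape(points.shape[0], -1)
```

Embeddings like these give the 3D-feature generator a high-frequency description of where each feature lives, which is what lets per-image 2D features be lifted into a shared 3D representation.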
-
Publication Number: US20240062495A1
Publication Date: 2024-02-22
Application Number: US17892097
Filing Date: 2022-08-21
Applicant: Adobe Inc.
Inventor: Zhixin Shu , Zexiang Xu , Shahrukh Athar , Kalyan Sunkavalli , Elya Shechtman
CPC classification number: G06T19/20 , G06T17/00 , G06T2200/08 , G06T2219/2021
Abstract: A scene modeling system receives a video including a plurality of frames corresponding to views of an object and a request to display an editable three-dimensional (3D) scene that corresponds to a particular frame of the plurality of frames. The scene modeling system applies a scene representation model to the particular frame, and includes a deformation model configured to generate, for each pixel of the particular frame based on a pose and an expression of the object, a deformation point using a 3D morphable model (3DMM) guided deformation field. The scene representation model includes a color model configured to determine, for the deformation point, color and volume density values. The scene modeling system receives a modification to one or more of the pose or the expression of the object including a modification to a location of the deformation point and renders an updated video based on the received modification.
-
Publication Number: US20200273237A1
Publication Date: 2020-08-27
Application Number: US15930925
Filing Date: 2020-05-13
Applicant: Adobe Inc.
Inventor: Kalyan Sunkavalli , Zexiang Xu , Sunil Hadap
Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
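The final step relies on light being additive: images relit under single directions can be linearly combined into an image lit by the weighted mixture of those directions. A minimal numpy sketch of that combination (function name ours):

```python
import numpy as np

def combine_lighting(target_images, weights):
    """Linearly combine single-direction relit images (L, H, W, 3) with
    per-direction weights (L,) into one image lit by the weighted mixture.
    Valid because light transport is additive in the lights."""
    target_images = np.asarray(target_images, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return np.tensordot(weights, target_images, axes=1)  # (H, W, 3)
```

This is why a handful of network-generated target images suffices to synthesize arbitrary lighting configurations.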
-
Publication Number: US11488342B1
Publication Date: 2022-11-01
Application Number: US17332708
Filing Date: 2021-05-27
Applicant: ADOBE INC.
Inventor: Kalyan Krishna Sunkavalli , Yannick Hold-Geoffroy , Milos Hasan , Zexiang Xu , Yu-Ying Yeh , Stefano Corazza
Abstract: Embodiments of the technology described herein make unknown material maps in a Physically Based Rendering (PBR) asset usable through an identification process that relies, at least in part, on image analysis. In addition, when a desired material-map type is completely missing from a PBR asset, the technology described herein may generate a suitable synthetic material map for use in rendering. In one aspect, the correct map type is assigned using a machine classifier, such as a convolutional neural network, which analyzes the image content of the unknown material map and produces a classification. The technology described herein also correlates material maps into material definitions using a combination of the material-map type and similarity analysis. The technology described herein may generate synthetic maps to be used in place of the missing material maps. The synthetic maps may be generated using a Generative Adversarial Network (GAN).
-
Publication Number: US20220343522A1
Publication Date: 2022-10-27
Application Number: US17233122
Filing Date: 2021-04-16
Applicant: ADOBE INC.
Inventor: Sai Bi , Zexiang Xu , Kalyan Krishna Sunkavalli , David Jay Kriegman , Ravi Ramamoorthi
IPC: G06T7/514 , G06T17/20 , H04N13/111 , H04N13/128 , H04N13/282
Abstract: Enhanced methods and systems for generating both a geometry model and an optical-reflectance model (together, an object reconstruction model) for a physical object, based on a sparse set of images of the object under a sparse set of viewpoints. The geometry model is a mesh model that includes a set of vertices representing the object's surface. The reflectance model is a spatially-varying bidirectional reflectance distribution function (SVBRDF) parameterized via multiple channels (e.g., diffuse albedo, surface roughness, specular albedo, and surface normals). For each vertex of the geometry model, the reflectance model includes a value for each of the multiple channels. The object reconstruction model is employed to render graphical representations of a virtualized object (a VO based on the physical object) within a computation-based (e.g., a virtual or immersive) environment. Via the reconstruction model, the VO may be rendered from arbitrary viewpoints and under arbitrary lighting conditions.
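To see how the four per-vertex channels combine at render time, the sketch below evaluates a simple diffuse-plus-specular shade from one vertex's channel values. Blinn-Phong stands in for whatever microfacet model the patent actually uses, and the roughness-to-shininess mapping is an assumption of ours.

```python
import numpy as np

def shade_vertex(diffuse_albedo, specular_albedo, roughness,
                 normal, light_dir, view_dir):
    """Shade one vertex from its four SVBRDF channels.
    Blinn-Phong specular stands in for a microfacet lobe; larger
    roughness -> smaller shininess exponent -> broader highlight."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    n_dot_l = max(n @ l, 0.0)
    spec = 0.0
    if n_dot_l > 0.0:
        h = (l + v) / np.linalg.norm(l + v)  # half vector
        shininess = max(2.0 / max(roughness, 1e-4) ** 2 - 2.0, 0.0)
        spec = max(n @ h, 0.0) ** shininess
    return diffuse_albedo * n_dot_l + specular_albedo * spec
```

Storing these channel values per vertex is what lets the reconstruction relight the virtualized object under arbitrary lighting.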
-
Publication Number: US20220335636A1
Publication Date: 2022-10-20
Application Number: US17231833
Filing Date: 2021-04-15
Applicant: ADOBE INC.
Inventor: Sai Bi , Zexiang Xu , Kalyan Krishna Sunkavalli , Milos Hasan , Yannick Hold-Geoffroy , David Jay Kriegman , Ravi Ramamoorthi
Abstract: A scene reconstruction system renders images of a scene with high-quality geometry and appearance and supports view synthesis, relighting, and scene editing. Given a set of input images of a scene, the scene reconstruction system trains a network to learn a volume representation of the scene that includes separate geometry and reflectance parameters. Using the volume representation, the scene reconstruction system can render images of the scene under arbitrary viewing (view synthesis) and lighting (relighting) locations. Additionally, the scene reconstruction system can render images that change the reflectance of objects in the scene (scene editing).
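Rendering from a learned volume representation uses the standard volume-rendering quadrature: convert per-sample density to opacity, then accumulate transmittance-weighted colors along each ray. A minimal single-ray sketch (function name ours; the patent's networks supply the densities and colors):

```python
import numpy as np

def composite_ray(densities, colors, step):
    """Volume-rendering quadrature along one ray.
    densities: (S,) per-sample volume density; colors: (S, 3); step: sample spacing."""
    alphas = 1.0 - np.exp(-densities * step)     # per-sample opacity
    # Transmittance: fraction of light surviving all earlier samples.
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alphas)[:-1]])
    weights = trans * alphas
    return weights @ colors                       # final RGB for the ray
```

Because geometry (density) and reflectance (color) enter this sum separately, either can be edited or relit independently before compositing.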
-
Publication Number: US20240169653A1
Publication Date: 2024-05-23
Application Number: US17993854
Filing Date: 2022-11-23
Applicant: Adobe Inc. , The Regents of the University of California
Inventor: Krishna Bhargava Mullia Lakshminarayana , Zexiang Xu , Milos Hasan , Fujun Luan , Alexandr Kuznetsov , Xuezheng Wang , Ravi Ramamoorthi
CPC classification number: G06T15/06 , G06T15/04 , G06T15/506 , G06T2215/12
Abstract: A scene modeling system accesses a three-dimensional (3D) scene including a 3D object. The scene modeling system applies a silhouette bidirectional texture function (SBTF) model to the 3D object to generate an output image of a textured material rendered as a surface of the 3D object. Applying the SBTF model includes determining a bounding geometry for the surface of the 3D object. Applying the SBTF model includes determining, for each pixel of the output image, a pixel value based on the bounding geometry. The scene modeling system displays, via a user interface, the output image based on the determined pixel values.
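"Determining a pixel value based on the bounding geometry" starts with testing whether the pixel's ray intersects that geometry at all. The sketch below assumes an axis-aligned bounding box and shows the standard slab test; the patent's bounding geometry need not be a box, so this is purely illustrative.

```python
import numpy as np

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: does a ray hit an axis-aligned bounding box?"""
    origin = np.asarray(origin, dtype=float)
    inv = 1.0 / np.asarray(direction, dtype=float)
    t1 = (np.asarray(box_min) - origin) * inv
    t2 = (np.asarray(box_max) - origin) * inv
    t_near = np.minimum(t1, t2).max()   # latest entry across the three slabs
    t_far = np.maximum(t1, t2).min()    # earliest exit
    return bool(t_far >= max(t_near, 0.0))
```

Pixels whose rays miss the bounding geometry can skip texture evaluation entirely, which is one practical payoff of computing the bound first.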
-
Publication Number: US20240013477A1
Publication Date: 2024-01-11
Application Number: US17861199
Filing Date: 2022-07-09
Applicant: Adobe Inc.
Inventor: Zexiang Xu , Zhixin Shu , Sai Bi , Qiangeng Xu , Kalyan Sunkavalli , Julien Philip
CPC classification number: G06T15/205 , G06T15/80 , G06T15/06 , G06T2207/10028
Abstract: A scene modeling system receives a plurality of input two-dimensional (2D) images corresponding to a plurality of views of an object and a request to display a three-dimensional (3D) scene that includes the object. The scene modeling system generates an output 2D image for a view of the 3D scene by applying a scene representation model to the input 2D images. The scene representation model includes a point cloud generation model configured to generate, based on the input 2D images, a neural point cloud representing the 3D scene. The scene representation model includes a neural point volume rendering model configured to determine, for each pixel of the output image and using the neural point cloud and a volume rendering process, a color value. The scene modeling system transmits, responsive to the request, the output 2D image. Each pixel of the output image includes the respective determined color value.
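One common ingredient of point-based rendering like this is aggregating the features of nearby neural points at each shading location. The sketch below uses inverse-distance weighting over the k nearest points; the patent's aggregation is learned, so treat this as an assumed stand-in with names of our choosing.

```python
import numpy as np

def aggregate_point_features(query, point_xyz, point_feats, k=3, eps=1e-8):
    """Inverse-distance weighted blend of the k nearest neural points.
    query: (3,) location; point_xyz: (P, 3); point_feats: (P, F)."""
    d = np.linalg.norm(point_xyz - query, axis=1)  # distance to every point
    idx = np.argsort(d)[:k]                        # k nearest neighbors
    w = 1.0 / (d[idx] + eps)
    w /= w.sum()                                   # normalize weights
    return w @ point_feats[idx]                    # blended (F,) feature
```

The blended feature at each ray sample is then decoded to the density and color consumed by the volume rendering process the abstract describes.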
-