CONTROLLABLE DYNAMIC APPEARANCE FOR NEURAL 3D PORTRAITS

    Publication No.: US20240338915A1

    Publication Date: 2024-10-10

    Application No.: US18132272

    Application Date: 2023-04-07

    Applicant: Adobe Inc.

    Abstract: Certain aspects and features of this disclosure relate to providing a controllable, dynamic appearance for neural 3D portraits. For example, a method involves projecting a color at points in a digital video portrait based on location, surface normal, and viewing direction for each respective point in a canonical space. The method also involves projecting, using the color, dynamic face normals for the points as changing according to an articulated head pose and facial expression in the digital video portrait. The method further involves disentangling, based on the dynamic face normals, a facial appearance in the digital video portrait into intrinsic components in the canonical space. The method additionally involves storing and/or rendering at least a portion of a head pose as a controllable, neural 3D portrait based on the digital video portrait using the intrinsic components.
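The abstract's core step, disentangling facial appearance into intrinsic components driven by dynamic face normals, can be illustrated with a minimal intrinsic-decomposition sketch: a view-independent albedo is combined with a shading term computed from the (changing) surface normals. The function name and Lambertian shading model below are illustrative assumptions, not the patent's actual pipeline.

```python
import numpy as np

def lambertian_shade(albedo, normals, light_dir):
    """Illustrative intrinsic decomposition: final color = albedo
    (view-independent component) * shading driven by dynamic normals."""
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    shading = np.clip(normals @ l, 0.0, None)   # (N,) cosine shading term
    return albedo * shading[:, None]            # (N, 3) shaded RGB color

albedo = np.array([[0.8, 0.5, 0.4]])            # one surface point
normal = np.array([[0.0, 0.0, 1.0]])            # normal facing the light
color = lambertian_shade(albedo, normal, [0.0, 0.0, 1.0])
```

With the normal aligned to the light, the shading term is 1 and the shaded color equals the albedo; as the head pose rotates the normal away, the shading term, and only that term, changes.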

    GENERATING ENHANCED THREE-DIMENSIONAL OBJECT RECONSTRUCTION MODELS FROM SPARSE SET OF OBJECT IMAGES

    Publication No.: US20220343522A1

    Publication Date: 2022-10-27

    Application No.: US17233122

    Application Date: 2021-04-16

    Applicant: Adobe Inc.

    Abstract: Enhanced methods and systems for generating both a geometry model and an optical-reflectance model (an object reconstruction model) for a physical object, based on a sparse set of images of the object under a sparse set of viewpoints. The geometry model is a mesh model that includes a set of vertices representing the object's surface. The reflectance model is an SVBRDF that is parameterized via multiple channels (e.g., diffuse albedo, surface-roughness, specular albedo, and surface-normals). For each vertex of the geometry model, the reflectance model includes a value for each of the multiple channels. The object reconstruction model is employed to render graphical representations of a virtualized object (a VO based on the physical object) within a computation-based (e.g., a virtual or immersive) environment. Via the reconstruction model, the VO may be rendered from arbitrary viewpoints and under arbitrary lighting conditions.
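The per-vertex SVBRDF described above stores a value per channel (diffuse albedo, specular albedo, roughness, surface normal) at every mesh vertex. The sketch below shows such a vertex record and shades it with a simplified Blinn-Phong stand-in; the class name, `shade` function, and the roughness-to-shininess mapping are assumptions for illustration, not the patented reflectance model.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class VertexReflectance:
    """Per-vertex SVBRDF channels as named in the abstract."""
    diffuse_albedo: np.ndarray   # (3,) RGB
    specular_albedo: np.ndarray  # (3,) RGB
    roughness: float             # in (0, 1]
    normal: np.ndarray           # (3,) unit surface normal

def shade(v, light_dir, view_dir):
    """Simplified Blinn-Phong evaluation of one vertex's channels."""
    l = light_dir / np.linalg.norm(light_dir)
    w = view_dir / np.linalg.norm(view_dir)
    h = (l + w) / np.linalg.norm(l + w)              # half vector
    n_dot_l = max(float(v.normal @ l), 0.0)
    shininess = 2.0 / max(v.roughness ** 2, 1e-6) - 2.0  # rough -> broad lobe
    spec = max(float(v.normal @ h), 0.0) ** shininess
    return (v.diffuse_albedo + v.specular_albedo * spec) * n_dot_l

v = VertexReflectance(np.array([0.5, 0.5, 0.5]), np.array([0.1, 0.1, 0.1]),
                      1.0, np.array([0.0, 0.0, 1.0]))
color = shade(v, np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
```

Because every vertex carries its own channel values, the same evaluation re-renders the virtualized object under any light and view direction, which is the point of fitting an SVBRDF rather than baking fixed colors.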

    Editing neural radiance fields with neural basis decomposition

    Publication No.: US12211138B2

    Publication Date: 2025-01-28

    Application No.: US18065456

    Application Date: 2022-12-13

    Applicant: Adobe Inc.

    Abstract: Embodiments of the present disclosure provide systems, methods, and computer storage media for generating editable synthesized views of scenes by inputting image rays into neural networks using neural basis decomposition. In embodiments, a set of input images of a scene depicting at least one object are collected and used to generate a plurality of rays of the scene. The rays each correspond to three-dimensional coordinates and viewing angles taken from the images. A volume density of the scene is determined by inputting the three-dimensional coordinates from the neural radiance fields into a first neural network to generate a 3D geometric representation of the object. An appearance decomposition is produced by inputting the three-dimensional coordinates and the viewing angles of the rays into a second neural network.
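The rays described in this abstract, one per pixel, each pairing 3D coordinates with a viewing angle, are what get fed into the density and appearance networks. A minimal pinhole-camera ray generator is sketched below; the function name, the camera looking down -z, and the parameter choices are assumptions for illustration.

```python
import numpy as np

def generate_rays(height, width, focal, cam_origin):
    """Build one (origin, direction) ray per pixel of a pinhole camera.
    The camera looks down -z from cam_origin; directions are normalized."""
    i, j = np.meshgrid(np.arange(width), np.arange(height))
    dirs = np.stack([(i - width * 0.5) / focal,       # x offset per pixel
                     -(j - height * 0.5) / focal,     # y offset (image y flips)
                     -np.ones_like(i, dtype=float)],  # forward axis
                    axis=-1)
    dirs = dirs / np.linalg.norm(dirs, axis=-1, keepdims=True)
    origins = np.broadcast_to(np.asarray(cam_origin, dtype=float), dirs.shape)
    return origins.reshape(-1, 3), dirs.reshape(-1, 3)

origins, dirs = generate_rays(4, 4, focal=2.0, cam_origin=[0.0, 0.0, 1.0])
```

Each returned row is one ray: the 3D coordinates sampled along it go to the first (density) network, while the direction additionally conditions the second (appearance) network.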

    POINT-BASED NEURAL RADIANCE FIELD FOR THREE DIMENSIONAL SCENE REPRESENTATION

    Publication No.: US20240013477A1

    Publication Date: 2024-01-11

    Application No.: US17861199

    Application Date: 2022-07-09

    Applicant: Adobe Inc.

    CPC classification number: G06T15/205 G06T15/80 G06T15/06 G06T2207/10028

    Abstract: A scene modeling system receives a plurality of input two-dimensional (2D) images corresponding to a plurality of views of an object and a request to display a three-dimensional (3D) scene that includes the object. The scene modeling system generates an output 2D image for a view of the 3D scene by applying a scene representation model to the input 2D images. The scene representation model includes a point cloud generation model configured to generate, based on the input 2D images, a neural point cloud representing the 3D scene. The scene representation model includes a neural point volume rendering model configured to determine, for each pixel of the output image and using the neural point cloud and a volume rendering process, a color value. The scene modeling system transmits, responsive to the request, the output 2D image. Each pixel of the output image includes the respective determined color value.
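The volume rendering process named in the abstract, determining one color value per output pixel from samples along a ray, is conventionally the emission-absorption compositing rule. The sketch below shows that standard accumulation; the sample densities and colors here are toy inputs, not outputs of the patent's neural point cloud.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Standard emission-absorption volume rendering: turn per-sample
    densities/colors along one ray into a single pixel color."""
    alpha = 1.0 - np.exp(-densities * deltas)             # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # survival
    weights = alpha * trans                               # compositing weights
    return (weights[:, None] * colors).sum(axis=0)

densities = np.array([0.0, 50.0, 0.0])   # one nearly opaque middle sample
colors = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
pixel = composite_ray(densities, colors, deltas=np.ones(3))
```

The opaque middle sample absorbs the ray, so the pixel takes (almost exactly) that sample's green color while the sample behind it contributes nothing; running this per pixel yields the output 2D image the system transmits.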

    GENERATING THREE-DIMENSIONAL REPRESENTATIONS FOR DIGITAL OBJECTS UTILIZING MESH-BASED THIN VOLUMES

    Publication No.: US20230360327A1

    Publication Date: 2023-11-09

    Application No.: US17661878

    Application Date: 2022-05-03

    Applicant: Adobe Inc.

    CPC classification number: G06T17/205 G06T13/20 G06T2210/21

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate three-dimensional hybrid mesh-volumetric representations for digital objects. For instance, in one or more embodiments, the disclosed systems generate a mesh for a digital object from a plurality of digital images that portray the digital object using a multi-view stereo model. Additionally, the disclosed systems determine a set of sample points for a thin volume around the mesh. Using a neural network, the disclosed systems further generate a three-dimensional hybrid mesh-volumetric representation for the digital object utilizing the set of sample points for the thin volume and the mesh.
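A simple way to picture the "set of sample points for a thin volume around the mesh" is to offset each mesh vertex along its normal through a few shell layers. The sketch below is a stand-in under that assumption; the function name and layering scheme are illustrative, not the disclosed sampling method.

```python
import numpy as np

def thin_volume_samples(vertices, normals, thickness, n_layers):
    """Sample a thin shell around a mesh: one offset copy of the
    vertex set per layer, shifted along the per-vertex normals."""
    offsets = np.linspace(-thickness / 2, thickness / 2, n_layers)
    # result shape (n_layers, n_vertices, 3)
    return vertices[None, :, :] + offsets[:, None, None] * normals[None, :, :]

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
norms = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
samples = thin_volume_samples(verts, norms, thickness=0.1, n_layers=3)
```

Confining the volumetric samples to this thin shell keeps the neural network's queries near the multi-view-stereo surface instead of filling all of space, which is the efficiency argument for the hybrid mesh-volumetric representation.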

    Generating three-dimensional representations for digital objects utilizing mesh-based thin volumes

    Publication No.: US12254570B2

    Publication Date: 2025-03-18

    Application No.: US17661878

    Application Date: 2022-05-03

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate three-dimensional hybrid mesh-volumetric representations for digital objects. For instance, in one or more embodiments, the disclosed systems generate a mesh for a digital object from a plurality of digital images that portray the digital object using a multi-view stereo model. Additionally, the disclosed systems determine a set of sample points for a thin volume around the mesh. Using a neural network, the disclosed systems further generate a three-dimensional hybrid mesh-volumetric representation for the digital object utilizing the set of sample points for the thin volume and the mesh.

    POINT-BASED NEURAL RADIANCE FIELD FOR THREE DIMENSIONAL SCENE REPRESENTATION

    Publication No.: US20240404181A1

    Publication Date: 2024-12-05

    Application No.: US18799247

    Application Date: 2024-08-09

    Applicant: Adobe Inc.

    Abstract: A scene modeling system receives a plurality of input two-dimensional (2D) images corresponding to a plurality of views of an object and a request to display a three-dimensional (3D) scene that includes the object. The scene modeling system generates an output 2D image for a view of the 3D scene by applying a scene representation model to the input 2D images. The scene representation model includes a point cloud generation model configured to generate, based on the input 2D images, a neural point cloud representing the 3D scene. The scene representation model includes a neural point volume rendering model configured to determine, for each pixel of the output image and using the neural point cloud and a volume rendering process, a color value. The scene modeling system transmits, responsive to the request, the output 2D image. Each pixel of the output image includes the respective determined color value.
