CONTROLLABLE DYNAMIC APPEARANCE FOR NEURAL 3D PORTRAITS

    Publication Number: US20240338915A1

    Publication Date: 2024-10-10

    Application Number: US18132272

    Application Date: 2023-04-07

    Applicant: Adobe Inc.

    Abstract: Certain aspects and features of this disclosure relate to providing a controllable, dynamic appearance for neural 3D portraits. For example, a method involves projecting a color at points in a digital video portrait based on the location, surface normal, and viewing direction of each respective point in a canonical space. The method also involves projecting, using the color, dynamic face normals for the points as they change according to the articulated head pose and facial expression in the digital video portrait. The method further involves disentangling, based on the dynamic face normals, the facial appearance in the digital video portrait into intrinsic components in the canonical space. The method additionally involves storing and/or rendering at least a portion of a head pose as a controllable neural 3D portrait based on the digital video portrait using the intrinsic components.
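
    Illustrative sketch (not the patented method): the disentanglement step can be pictured as a small network that maps a canonical-space point, its dynamic surface normal, and the viewing direction to intrinsic components whose product recomposes the color. All names and layer sizes below are hypothetical; the example uses PyTorch.

        import torch
        import torch.nn as nn

        class IntrinsicDecomposition(nn.Module):
            """Hypothetical split of portrait appearance into albedo and shading."""
            def __init__(self, hidden=128):
                super().__init__()
                # Albedo depends only on the canonical-space point (view-independent).
                self.albedo = nn.Sequential(
                    nn.Linear(3, hidden), nn.ReLU(),
                    nn.Linear(hidden, 3), nn.Sigmoid())
                # Shading depends on the point, its dynamic normal, and the view direction.
                self.shading = nn.Sequential(
                    nn.Linear(9, hidden), nn.ReLU(),
                    nn.Linear(hidden, 1), nn.Softplus())

            def forward(self, x_canonical, normal, view_dir):
                a = self.albedo(x_canonical)
                s = self.shading(torch.cat([x_canonical, normal, view_dir], dim=-1))
                return a * s  # recomposed color at the sample point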

    DEFORMABLE NEURAL RADIANCE FIELD FOR EDITING FACIAL POSE AND FACIAL EXPRESSION IN NEURAL 3D SCENES

    Publication Number: US20240062495A1

    Publication Date: 2024-02-22

    Application Number: US17892097

    Application Date: 2022-08-21

    Applicant: Adobe Inc.

    CPC classification number: G06T19/20 G06T17/00 G06T2200/08 G06T2219/2021

    Abstract: A scene modeling system receives a video including a plurality of frames corresponding to views of an object and a request to display an editable three-dimensional (3D) scene that corresponds to a particular frame of the plurality of frames. The scene modeling system applies a scene representation model to the particular frame. The scene representation model includes a deformation model configured to generate, for each pixel of the particular frame and based on a pose and an expression of the object, a deformation point using a 3D morphable model (3DMM)-guided deformation field. The scene representation model also includes a color model configured to determine, for the deformation point, color and volume density values. The scene modeling system receives a modification to one or more of the pose or the expression of the object, including a modification to a location of the deformation point, and renders an updated video based on the received modification.
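
    A minimal sketch, assuming a per-frame code derived from 3DMM pose and expression parameters, of what a 3DMM-guided deformation field could look like; all names here are hypothetical, not the claimed implementation:

        import torch
        import torch.nn as nn

        class DeformationField(nn.Module):
            """Hypothetical deformation from observed space to canonical space."""
            def __init__(self, code_dim=64, hidden=128):
                super().__init__()
                self.mlp = nn.Sequential(
                    nn.Linear(3 + code_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, 3))

            def forward(self, x, pose_expr_code):
                # pose_expr_code: per-frame code derived from 3DMM pose and
                # expression parameters; editing the code moves deformation points.
                offset = self.mlp(torch.cat([x, pose_expr_code], dim=-1))
                return x + offset  # canonical point queried by the color model

    Editing the pose or expression then amounts to changing pose_expr_code (and hence the deformation points) and re-rendering.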

    RELIGHTING DIGITAL IMAGES ILLUMINATED FROM A TARGET LIGHTING DIRECTION

    Publication Number: US20200273237A1

    Publication Date: 2020-08-27

    Application Number: US15930925

    Application Date: 2020-05-13

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
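
    Because light transport is additive, the final combination step can be sketched as a weighted sum of the per-direction relit images; this is a plausible reading of the abstract, not the patented procedure, and the function name is hypothetical:

        import numpy as np

        def combine_relit_images(relit_images, weights):
            """Combine per-direction relit images into one target configuration.

            relit_images: (L, H, W, 3) array, one image per target lighting direction
            weights: (L,) intensities of each direction in the target configuration
            """
            weights = np.asarray(weights, dtype=np.float32)
            combined = np.tensordot(weights, relit_images, axes=1)  # (H, W, 3)
            return np.clip(combined, 0.0, 1.0)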

    MATERIAL MAP IDENTIFICATION AND AUGMENTATION

    Publication Number: US11488342B1

    Publication Date: 2022-11-01

    Application Number: US17332708

    Application Date: 2021-05-27

    Applicant: Adobe Inc.

    Abstract: Embodiments of the technology described herein make unknown material maps in a Physically Based Rendering (PBR) asset usable through an identification process that relies, at least in part, on image analysis. In addition, when a desired material-map type is completely missing from a PBR asset, the technology described herein may generate a suitable synthetic material map for use in rendering. In one aspect, the correct map type is assigned using a machine classifier, such as a convolutional neural network, which analyzes the image content of the unknown material map and produces a classification. The technology described herein also correlates material maps into material definitions using a combination of the material-map type and similarity analysis. Synthetic maps, generated for example with a Generative Adversarial Network (GAN), may be used in place of the missing material maps.
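
    A minimal sketch of the classification step, assuming a small convolutional network and an illustrative label set (the actual map types and architecture are not specified in the abstract):

        import torch
        import torch.nn as nn

        MAP_TYPES = ["albedo", "normal", "roughness", "metallic", "height"]  # assumed labels

        class MaterialMapClassifier(nn.Module):
            """Hypothetical CNN that predicts a material-map type from image content."""
            def __init__(self, num_types=len(MAP_TYPES)):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1))
                self.head = nn.Linear(32, num_types)

            def forward(self, image):  # image: (B, 3, H, W)
                f = self.features(image).flatten(1)
                return self.head(f)    # logits over material-map types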

    GENERATING ENHANCED THREE-DIMENSIONAL OBJECT RECONSTRUCTION MODELS FROM SPARSE SET OF OBJECT IMAGES

    Publication Number: US20220343522A1

    Publication Date: 2022-10-27

    Application Number: US17233122

    Application Date: 2021-04-16

    Applicant: Adobe Inc.

    Abstract: Enhanced methods and systems for generating both a geometry model and an optical-reflectance model (together, an object reconstruction model) for a physical object, based on a sparse set of images of the object captured from a sparse set of viewpoints. The geometry model is a mesh model that includes a set of vertices representing the object's surface. The reflectance model is a spatially varying bidirectional reflectance distribution function (SVBRDF) parameterized via multiple channels (e.g., diffuse albedo, surface roughness, specular albedo, and surface normals). For each vertex of the geometry model, the reflectance model includes a value for each of the multiple channels. The object reconstruction model is employed to render graphical representations of a virtualized object (VO), based on the physical object, within a computation-based (e.g., virtual or immersive) environment. Via the reconstruction model, the VO may be rendered from arbitrary viewpoints and under arbitrary lighting conditions.
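
    The data layout described above (mesh vertices plus per-vertex SVBRDF channels) might be organized as below; the field names are illustrative, not taken from the patent:

        import numpy as np
        from dataclasses import dataclass

        @dataclass
        class ReconstructionModel:
            """Hypothetical container: mesh geometry plus per-vertex SVBRDF channels."""
            vertices: np.ndarray         # (V, 3) surface positions
            faces: np.ndarray            # (F, 3) triangle vertex indices
            diffuse_albedo: np.ndarray   # (V, 3) per-vertex diffuse color
            specular_albedo: np.ndarray  # (V, 3) per-vertex specular color
            roughness: np.ndarray        # (V, 1) per-vertex surface roughness
            normals: np.ndarray          # (V, 3) per-vertex surface normals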

    POINT-BASED NEURAL RADIANCE FIELD FOR THREE DIMENSIONAL SCENE REPRESENTATION

    Publication Number: US20240013477A1

    Publication Date: 2024-01-11

    Application Number: US17861199

    Application Date: 2022-07-09

    Applicant: Adobe Inc.

    CPC classification number: G06T15/205 G06T15/80 G06T15/06 G06T2207/10028

    Abstract: A scene modeling system receives a plurality of input two-dimensional (2D) images corresponding to a plurality of views of an object and a request to display a three-dimensional (3D) scene that includes the object. The scene modeling system generates an output 2D image for a view of the 3D scene by applying a scene representation model to the input 2D images. The scene representation model includes a point cloud generation model configured to generate, based on the input 2D images, a neural point cloud representing the 3D scene. The scene representation model includes a neural point volume rendering model configured to determine, for each pixel of the output image and using the neural point cloud and a volume rendering process, a color value. The scene modeling system transmits, responsive to the request, the output 2D image. Each pixel of the output image includes the respective determined color value.
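
    The volume rendering step that turns per-sample color and density values into a pixel color typically follows the standard NeRF-style quadrature; a minimal sketch (variable names assumed, not the claimed implementation):

        import numpy as np

        def composite_ray(colors, densities, deltas):
            """Standard volume-rendering quadrature along one camera ray.

            colors: (S, 3) per-sample radiance from the neural point cloud
            densities: (S,) per-sample volume density; deltas: (S,) step sizes
            """
            alpha = 1.0 - np.exp(-densities * deltas)  # per-sample opacity
            trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
            weights = alpha * trans                    # contribution of each sample
            return (weights[:, None] * colors).sum(axis=0)  # final pixel color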
