Unified shape representation
    Granted invention patent

    Publication number: US11551038B2

    Publication date: 2023-01-10

    Application number: US16459420

    Filing date: 2019-07-01

    Applicant: Adobe Inc.

    Abstract: Techniques are described herein for generating and using a unified shape representation that encompasses features of different types of shape representations. In some embodiments, the unified shape representation is a unicode comprising a vector of embeddings and values for the embeddings. The embedding values are inferred, using a neural network that has been trained on different types of shape representations, based on a first representation of a three-dimensional (3D) shape. The first representation is received as input to the trained neural network and corresponds to a first type of shape representation. At least one embedding has a value dependent on a feature provided by a second type of shape representation and not provided by the first type of shape representation. The value of the at least one embedding is inferred based upon the first representation and in the absence of the second type of shape representation for the 3D shape.
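    A minimal sketch of the abstract's core idea in PyTorch: one encoder per representation type maps its input into a shared embedding space, so that after joint training an embedding inferred from, say, a point cloud alone also carries features a voxel grid would provide. Every name and dimension here (PointCloudEncoder, VoxelEncoder, EMBED_DIM) is an illustrative assumption, not the patent's architecture.

    ```python
    # Minimal sketch (PyTorch) of a shared embedding space for shapes.
    # All names and dimensions are illustrative assumptions.
    import torch
    import torch.nn as nn

    EMBED_DIM = 256  # assumed length of the unified embedding vector

    class PointCloudEncoder(nn.Module):
        """Maps a point cloud (B, N, 3) into the unified embedding space."""
        def __init__(self):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, EMBED_DIM))

        def forward(self, points):
            # Per-point features, max-pooled into one shape-level embedding.
            return self.mlp(points).max(dim=1).values

    class VoxelEncoder(nn.Module):
        """Maps a voxel grid (B, 1, 32, 32, 32) into the same space."""
        def __init__(self):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv3d(1, 16, 4, stride=2), nn.ReLU(),
                nn.Conv3d(16, 32, 4, stride=2), nn.ReLU())
            self.fc = nn.LazyLinear(EMBED_DIM)

        def forward(self, voxels):
            return self.fc(self.conv(voxels).flatten(start_dim=1))

    # Joint training would pull embeddings of the same shape, seen under
    # different representation types, together, so an embedding inferred
    # from a point cloud alone also carries voxel-derived features.
    ```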

    Generating suggested edits for three-dimensional graphics based on deformations of prior edits

    Publication number: US11282290B1

    Publication date: 2022-03-22

    Application number: US16953227

    Filing date: 2020-11-19

    Applicant: Adobe Inc.

    Abstract: A prediction engine generates, based on deformations of prior editing operations performed with a graphics editing tool, suggested editing operations that augment current editing operations applied to a graphical object. The prediction engine accesses first samples defining first positions along first paths of previous editing operations applied to a mesh object in a previous frame, and second samples defining second positions along second paths of executed editing operations applied in a current frame. The prediction engine identifies, from a comparison of the first samples and the second samples, a matching component set from the previous editing operations that corresponds to the executed editing operations. The prediction engine deforms the first samples toward the second samples and determines suggested editing operations that comprise a non-matching component set as modified based on the deformed first samples. The prediction engine updates an interface to provide the suggested editing operations.
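    As a hedged illustration of the matching-and-deformation step, the NumPy sketch below matches a partially drawn edit path against a prior edit path and offsets the prior path's remainder into a suggested continuation. The prefix match and constant-offset deformation are deliberate simplifications of the patent's component matching, chosen only to make the data flow concrete.

    ```python
    # Illustrative sketch (NumPy): match a partially drawn edit path against
    # a prior edit path and offset the remainder into a suggestion.
    import numpy as np

    def suggest_remainder(prev_path, current_path):
        """prev_path: (N, 2) samples of a prior edit; current_path: (M, 2), M < N."""
        m = len(current_path)
        matched = prev_path[:m]                  # assumed matching component set
        offsets = current_path - matched         # deform prior samples toward current
        remainder = prev_path[m:] + offsets[-1]  # carry the last offset forward
        return remainder                         # candidate continuation for the UI

    prev = np.column_stack([np.linspace(0, 10, 50), np.sin(np.linspace(0, 3, 50))])
    cur = prev[:20] + np.array([0.5, 0.1])       # user repeats the stroke, shifted
    print(suggest_remainder(prev, cur)[:3])
    ```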

    LEARNING HYBRID (SURFACE-BASED AND VOLUME-BASED) SHAPE REPRESENTATION

    Publication number: US20210264659A1

    Publication date: 2021-08-26

    Application number: US16799664

    Filing date: 2020-02-24

    Applicant: Adobe Inc.

    Abstract: Certain embodiments involve techniques for generating a 3D representation based on a provided 2D image of an object. An image generation system receives the 2D image and generates a multi-dimensional vector that represents the input image. The image generation system samples a set of points and provides the set of points and the multi-dimensional vector to a neural network trained to predict a 3D surface representing the image, such that the predicted 3D surface is consistent with a 3D surface of the object calculated using an implicit function for representing the image. The neural network predicts, based on the multi-dimensional vector and the set of points, the 3D surface representing the object.
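    A sketch of the prediction step under stated assumptions: an implicit-surface network conditioned on the image embedding maps (latent code, 3D point) pairs to signed distances. The layer sizes, dimensions, and the choice of signed distance over occupancy are assumptions for illustration, not the patent's exact model.

    ```python
    # Hedged sketch (PyTorch): an implicit-surface network conditioned on an
    # image embedding maps (latent code, 3D point) -> signed distance.
    import torch
    import torch.nn as nn

    class ImplicitSurfaceNet(nn.Module):
        def __init__(self, latent_dim=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(latent_dim + 3, 256), nn.ReLU(),
                nn.Linear(256, 256), nn.ReLU(),
                nn.Linear(256, 1))  # signed distance (or occupancy logit)

        def forward(self, latent, points):
            # latent: (B, latent_dim); points: (B, P, 3)
            z = latent.unsqueeze(1).expand(-1, points.shape[1], -1)
            return self.net(torch.cat([z, points], dim=-1)).squeeze(-1)

    net = ImplicitSurfaceNet()
    latent = torch.randn(2, 256)           # embedding of the input 2D image
    pts = torch.rand(2, 1024, 3) * 2 - 1   # points sampled in [-1, 1]^3
    sdf = net(latent, pts)                 # (2, 1024) predicted distances
    ```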

    UNIFIED SHAPE REPRESENTATION
    Invention application

    Publication number: US20210004645A1

    Publication date: 2021-01-07

    Application number: US16459420

    Filing date: 2019-07-01

    Applicant: Adobe Inc.

    Abstract: Techniques are described herein for generating and using a unified shape representation that encompasses features of different types of shape representations. In some embodiments, the unified shape representation is a unicode comprising a vector of embeddings and values for the embeddings. The embedding values are inferred, using a neural network that has been trained on different types of shape representations, based on a first representation of a three-dimensional (3D) shape. The first representation is received as input to the trained neural network and corresponds to a first type of shape representation. At least one embedding has a value dependent on a feature provided by a second type of shape representation and not provided by the first type of shape representation. The value of the at least one embedding is inferred based upon the first representation and in the absence of the second type of shape representation for the 3D shape.

    3D OBJECT RECONSTRUCTION USING PHOTOMETRIC MESH REPRESENTATION

    Publication number: US20200372710A1

    Publication date: 2020-11-26

    Application number: US16985402

    Filing date: 2020-08-05

    Applicant: Adobe Inc.

    Abstract: Techniques are disclosed for 3D object reconstruction using photometric mesh representations. A decoder is pretrained to transform points sampled from 2D patches of representative objects into 3D polygonal meshes. An image frame of the object is fed into an encoder to get an initial latent code vector. For each frame and camera pair from the sequence, a polygonal mesh is rendered at the given viewpoints. The mesh is optimized by creating a virtual viewpoint and rasterizing the mesh to obtain a depth map. The 3D mesh projections are aligned by projecting the coordinates corresponding to the polygonal face vertices of the rasterized mesh to both selected viewpoints. The photometric error is determined from RGB pixel intensities sampled from both frames. Gradients from the photometric error are backpropagated into the vertices of the assigned polygonal indices, via the barycentric coordinates of each image, to update the latent code vector.
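    The photometric step can be illustrated with a toy PyTorch sketch: project surface points into two frames, sample RGB intensities at the projections with bilinear interpolation, and take a differentiable error. The lambda projections are placeholders, not a real camera model; in the described method the points would come from rasterized face vertices with barycentric interpolation.

    ```python
    # Toy sketch (PyTorch) of a photometric error between two frames: project
    # points into both images, sample RGB bilinearly, and compare.
    import torch
    import torch.nn.functional as F

    def sample_colors(image, uv):
        """image: (1, 3, H, W); uv: (P, 2) in normalized [-1, 1] coordinates."""
        grid = uv.view(1, 1, -1, 2)
        return F.grid_sample(image, grid, align_corners=True).view(3, -1)

    def photometric_error(points, frame_a, frame_b, project_a, project_b):
        """points: (P, 3); project_*: callable mapping points to image coords."""
        rgb_a = sample_colors(frame_a, project_a(points))
        rgb_b = sample_colors(frame_b, project_b(points))
        return (rgb_a - rgb_b).abs().mean()   # differentiable w.r.t. points

    img_a, img_b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    pts = torch.rand(100, 3, requires_grad=True)
    proj = lambda p: p[:, :2] * 2 - 1         # drop depth, map to [-1, 1] (toy)
    photometric_error(pts, img_a, img_b, proj, proj).backward()
    print(pts.grad.shape)                     # gradients flow back to the points
    ```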

    Realistically illuminated virtual objects embedded within immersive environments

    Publication number: US10600239B2

    Publication date: 2020-03-24

    Application number: US15877142

    Filing date: 2018-01-22

    Applicant: Adobe Inc.

    Abstract: Matching an illumination of an embedded virtual object (VO) with current environment illumination conditions provides an enhanced immersive experience to a user. To match the VO and environment illuminations, illumination basis functions are determined based on preprocessing image data, captured as a first combination of intensities of direct illumination sources illuminates the environment. Each basis function corresponds to one of the direct illumination sources. During the capture of runtime image data, a second combination of intensities illuminates the environment. An illumination-weighting vector is determined based on the runtime image data. The determination of the weighting vector accounts for indirect illumination sources, such as surface reflections. The weighting vector encodes a superposition of the basis functions that corresponds to the second combination of intensities. The method illuminates the VO based on the weighting vector. The resulting illumination of the VO matches the second combination of the intensities and surface reflections.
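    The weighting step reduces to a linear inverse problem: the runtime image is modeled as a superposition of the per-source basis images, and the weights are recovered by least squares. A minimal NumPy sketch, assuming the basis images come from the preprocessing captures described above:

    ```python
    # Minimal sketch (NumPy): recover the illumination-weighting vector by
    # solving runtime_image ≈ sum_i w_i * basis_i in the least-squares sense.
    import numpy as np

    def solve_illumination_weights(basis_images, runtime_image):
        """basis_images: list of (H, W, 3) arrays; runtime_image: (H, W, 3)."""
        A = np.stack([b.ravel() for b in basis_images], axis=1)  # (H*W*3, K)
        w, *_ = np.linalg.lstsq(A, runtime_image.ravel(), rcond=None)
        return w  # one weight per direct illumination source

    basis = [np.random.rand(4, 4, 3) for _ in range(3)]
    runtime = 0.2 * basis[0] + 0.7 * basis[1] + 0.1 * basis[2]
    print(solve_illumination_weights(basis, runtime))  # ~ [0.2, 0.7, 0.1]
    # The virtual object is then re-lit with the same weights applied to its
    # renders under each basis illumination.
    ```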

    GENERATING TARGET-CHARACTER-ANIMATION SEQUENCES BASED ON STYLE-AWARE PUPPETS PATTERNED AFTER SOURCE-CHARACTER-ANIMATION SEQUENCES

    Publication number: US20200035010A1

    Publication date: 2020-01-30

    Application number: US16047839

    Filing date: 2018-07-27

    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use style-aware puppets patterned after a source-character-animation sequence to generate a target-character-animation sequence. In particular, the disclosed systems can generate style-aware puppets based on an animation character drawn or otherwise created (e.g., by an artist) for the source-character-animation sequence. The style-aware puppets can include, for instance, a character-deformational model, a skeletal-difference map, and a visual-texture representation of an animation character from a source-character-animation sequence. By using style-aware puppets, the disclosed systems can both preserve and transfer a detailed visual appearance and stylized motion of an animation character from a source-character-animation sequence to a target-character-animation sequence.
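    As a purely illustrative data structure, a style-aware puppet as described above might bundle its three parts as follows; the field types and the toy transfer function are assumptions, not the disclosed model.

    ```python
    # Purely illustrative data structure for a style-aware puppet.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class StyleAwarePuppet:
        deformation_handles: np.ndarray  # character-deformational model: (H, 2) control points
        skeletal_difference: np.ndarray  # skeletal-difference map: (J, 2) per-joint offsets
        texture_patches: list            # visual-texture representation: per-part image patches

    def transfer_style(source: StyleAwarePuppet, target: StyleAwarePuppet) -> StyleAwarePuppet:
        """Toy transfer: reuse the target's rig, carry over the source's style."""
        return StyleAwarePuppet(
            deformation_handles=target.deformation_handles,
            skeletal_difference=source.skeletal_difference,
            texture_patches=source.texture_patches)
    ```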

    Segmenting three-dimensional shapes into labeled component shapes

    Publication number: US10467760B2

    Publication date: 2019-11-05

    Application number: US15440572

    Filing date: 2017-02-23

    Applicant: Adobe Inc.

    Abstract: This disclosure involves generating and outputting a segmentation model using 3D models having user-provided labels and scene graphs. For example, a system uses a neural network learned from the user-provided labels to transform feature vectors, which represent component shapes of the 3D models, into transformed feature vectors identifying points in a feature space. The system identifies component-shape groups from clusters of the points in the feature space. The system determines, from the scene graphs, parent-child relationships for the component-shape groups. The system generates a segmentation hierarchy with nodes corresponding to the component-shape groups and links corresponding to the parent-child relationships. The system trains a point classifier to assign feature points, which are sampled from an input 3D shape, to nodes of the segmentation hierarchy, and thereby segment the input 3D shape into component shapes.
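    A hedged sketch of the grouping step using scikit-learn: cluster the transformed feature vectors into component-shape groups, then attach parent-child links taken from the scene graphs. The cluster count and the dictionary-based hierarchy are illustrative assumptions.

    ```python
    # Hedged sketch (scikit-learn): cluster transformed feature vectors into
    # component-shape groups and attach scene-graph parent-child links.
    import numpy as np
    from sklearn.cluster import KMeans

    def build_segmentation_hierarchy(transformed_features, scene_graph_edges, n_groups=8):
        """transformed_features: (N, D) outputs of the learned transform network;
        scene_graph_edges: iterable of (parent_group, child_group) id pairs."""
        labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(transformed_features)
        nodes = {g: np.flatnonzero(labels == g) for g in range(n_groups)}
        return {"nodes": nodes, "links": list(scene_graph_edges)}

    feats = np.random.rand(200, 16)          # stand-in transformed feature vectors
    hierarchy = build_segmentation_hierarchy(feats, [(0, 1), (0, 2)], n_groups=4)
    print(len(hierarchy["nodes"]), hierarchy["links"])
    ```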
