Generating styles for neural style transfer in three-dimensional shapes

    Publication No.: US12223611B2

    Publication Date: 2025-02-11

    Application No.: US18149609

    Filing Date: 2023-01-03

    Applicant: AUTODESK, INC.

    Abstract: One embodiment of the present invention sets forth a technique for performing style transfer. The technique includes determining a distribution associated with a plurality of style codes for a plurality of three-dimensional (3D) shapes, where each style code included in the plurality of style codes represents a difference between a first 3D shape and a second 3D shape, and where the second 3D shape is generated by applying one or more augmentations to the first 3D shape. The technique also includes sampling from the distribution to generate an additional style code and executing a trained machine learning model based on the additional style code to generate an output 3D shape having style-based attributes associated with the additional style code and content-based attributes associated with an object. The technique further includes generating a 3D model of the object based on the output 3D shape.
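
    A minimal illustrative sketch of the sampling-and-generation idea described in the abstract, assuming a PyTorch-style setup: a diagonal Gaussian (an assumption; the patent only says "a distribution") is fitted to precomputed style codes, a new code is sampled from it, and a trained decoder with a hypothetical interface maps the sampled code plus a content latent to an output 3D shape representation. This is not the patented implementation.

import torch

def sample_style_code(style_codes: torch.Tensor) -> torch.Tensor:
    """Fit a diagonal Gaussian to existing style codes and draw one new code.

    style_codes: (N, D) tensor, one code per (shape, augmented shape) pair.
    """
    mean = style_codes.mean(dim=0)
    std = style_codes.std(dim=0)
    return mean + std * torch.randn_like(mean)

def stylize(decoder: torch.nn.Module,
            content_latent: torch.Tensor,
            style_codes: torch.Tensor) -> torch.Tensor:
    """Generate an output shape representation from a content latent and a sampled style code."""
    new_style = sample_style_code(style_codes)
    # Assumption: the trained decoder accepts a concatenated [content | style] vector.
    return decoder(torch.cat([content_latent, new_style], dim=-1))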

    Graph alignment techniques for dimensioning drawings automatically

    Publication No.: US11954820B2

    Publication Date: 2024-04-09

    Application No.: US17374722

    Filing Date: 2021-07-13

    Applicant: AUTODESK, INC.

    CPC classification number: G06T3/40 G06T2207/20081

    Abstract: One embodiment of the present invention sets forth a technique for adding dimensions to a target drawing. The technique includes generating a first set of node embeddings for a first set of nodes included in a target graph that represents the target drawing. The technique also includes receiving a second set of node embeddings for a second set of nodes included in a source graph that represents a source drawing, where one or more nodes included in the second set of nodes are associated with one or more source dimensions included in the source drawing. The technique further includes generating a set of mappings between the first and second sets of nodes based on similarities between the first set of node embeddings and the second set of node embeddings, and automatically placing the one or more source dimensions within the target drawing based on the set of mappings.
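
    A minimal illustrative sketch of the mapping step in the abstract, assuming cosine similarity between node embeddings (one plausible similarity measure; the patent does not specify it): each source node is matched to its most similar target node, and dimensions attached to source nodes are carried over to the mapped target nodes. All tensor shapes and names are hypothetical.

import torch

def match_nodes(target_emb: torch.Tensor, source_emb: torch.Tensor) -> torch.Tensor:
    """Return, for each source node, the index of the most similar target node.

    target_emb: (Nt, D) embeddings of target-graph nodes.
    source_emb: (Ns, D) embeddings of source-graph nodes.
    """
    t = torch.nn.functional.normalize(target_emb, dim=-1)
    s = torch.nn.functional.normalize(source_emb, dim=-1)
    similarity = s @ t.T              # (Ns, Nt) cosine similarities
    return similarity.argmax(dim=-1)  # mapping from source index to target index

def transfer_dimensions(mapping: torch.Tensor, source_dims: dict) -> dict:
    """Place each source dimension on its mapped target node (keys are node indices)."""
    return {int(mapping[src]): dim for src, dim in source_dims.items()}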

    Training machine learning models to perform neural style transfer in three-dimensional shapes

    Publication No.: US12182958B2

    Publication Date: 2024-12-31

    Application No.: US18149605

    Filing Date: 2023-01-03

    Applicant: AUTODESK, INC.

    Abstract: One embodiment of the present invention sets forth a technique for training a machine learning model to perform style transfer. The technique includes applying one or more augmentations to a first input three-dimensional (3D) shape to generate a second input 3D shape. The technique also includes generating, via a first set of neural network layers, a style code based on a first latent representation of the first input 3D shape and a second latent representation of the second input 3D shape. The technique further includes generating, via a second set of neural network layers, a first output 3D shape based on the style code and the second latent representation, and performing one or more operations on the first and second sets of neural network layers based on a first loss associated with the first output 3D shape to generate a trained machine learning model.
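
    A minimal illustrative training-step sketch of the procedure in the abstract, assuming a PyTorch-style setup: an augmented copy of the input shape is encoded, a style code is derived from the two latent representations, and a decoder reconstructs an output shape from the style code and the second latent. The encoder, style network, decoder, augmentation, and loss are hypothetical stand-ins for the patent's neural network layers and loss, and the reconstruction target shown here is one possible choice.

import torch

def train_step(encoder, style_net, decoder, optimizer, shape, augment, loss_fn):
    augmented = augment(shape)                        # second input 3D shape
    z1, z2 = encoder(shape), encoder(augmented)       # first and second latent representations
    style = style_net(torch.cat([z1, z2], dim=-1))    # style code from both latents
    output = decoder(torch.cat([z2, style], dim=-1))  # first output 3D shape
    loss = loss_fn(output, augmented)                 # assumption: reconstruction-style loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()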

    Neural style transfer in three-dimensional shapes

    Publication No.: US12182957B2

    Publication Date: 2024-12-31

    Application No.: US18149601

    Filing Date: 2023-01-03

    Applicant: AUTODESK, INC.

    Abstract: One embodiment of the present invention sets forth a technique for performing style transfer. The technique includes generating an input shape representation that includes a plurality of points near a surface of an input three-dimensional (3D) shape, where the input 3D shape includes content-based attributes associated with an object. The technique also includes determining a style code based on a difference between a first latent representation of a first 3D shape and a second latent representation of a second 3D shape, where the second 3D shape is generated by applying one or more augmentations to the first 3D shape. The technique further includes generating, based on the input shape representation and style code, an output 3D shape having the content-based attributes of the input 3D shape and style-based attributes associated with the style code, and generating a 3D model of the object based on the output 3D shape.
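
    A minimal illustrative inference sketch of the abstract's flow, assuming a PyTorch-style setup: the input shape is represented by points sampled near its surface, the style code is taken as the difference between latent representations of a reference shape and its augmented version, and a generator conditions the point-based content representation on that code. The encoder, generator, and surface sampler are hypothetical, not the patented modules.

import torch

def style_code_from_pair(encoder, ref_shape, augmented_ref) -> torch.Tensor:
    """Style code as the difference between two latent representations."""
    return encoder(augmented_ref) - encoder(ref_shape)

def stylize_shape(generator, surface_points: torch.Tensor, style: torch.Tensor):
    """surface_points: (N, 3) points sampled near the input shape's surface.

    Assumption: the generator takes the point-based content representation
    and a style code and returns the output 3D shape representation.
    """
    return generator(surface_points, style)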
