ENERGY-BASED VARIATIONAL AUTOENCODERS

    Publication Number: US20220101122A1

    Publication Date: 2022-03-31

    Application Number: US17357728

    Application Date: 2021-06-24

    Abstract: One embodiment of the present invention sets forth a technique for generating data using a generative model. The technique includes sampling from one or more distributions of one or more variables to generate a first set of values for the one or more variables, where the one or more distributions are used during operation of one or more portions of the generative model. The technique also includes applying one or more energy values generated via an energy-based model to the first set of values to produce a second set of values for the one or more variables. The technique further includes either outputting the second set of values as output data or performing one or more operations based on the second set of values to generate output data.
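
    The abstract suggests a sample-then-refine pipeline: draw initial latent values from a distribution used by the generative model, adjust them with an energy-based model, and then decode. The sketch below (PyTorch) illustrates that general idea only; the Langevin-style update, the EnergyNet and decoder modules, and all dimensions and step sizes are assumptions rather than the patented method.

        # Minimal illustration, not the patented method: sample latents, refine them
        # with gradient steps on a learned energy function, then decode.
        import torch
        import torch.nn as nn

        latent_dim = 64

        class EnergyNet(nn.Module):
            """Hypothetical energy-based model that scores latent samples."""
            def __init__(self, dim: int):
                super().__init__()
                self.net = nn.Sequential(nn.Linear(dim, 256), nn.SiLU(), nn.Linear(256, 1))

            def forward(self, z: torch.Tensor) -> torch.Tensor:
                return self.net(z).squeeze(-1)  # one scalar energy per sample

        decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.SiLU(), nn.Linear(256, 784))
        energy_model = EnergyNet(latent_dim)

        def sample(num_samples: int, steps: int = 20, step_size: float = 0.01) -> torch.Tensor:
            # First set of values: samples drawn from the base distribution.
            z = torch.randn(num_samples, latent_dim)
            # Apply the energy values: Langevin-style refinement toward low energy.
            for _ in range(steps):
                z = z.detach().requires_grad_(True)
                energy = energy_model(z).sum()
                grad, = torch.autograd.grad(energy, z)
                z = z - 0.5 * step_size * grad + (step_size ** 0.5) * torch.randn_like(z)
            # Second set of values: decode them into output data.
            return decoder(z.detach())

        samples = sample(8)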

    DEEP HIERARCHICAL VARIATIONAL AUTOENCODER

    Publication Number: US20210397945A1

    Publication Date: 2021-12-23

    Application Number: US17089492

    Application Date: 2020-11-04

    Abstract: One embodiment of the present invention sets forth a technique for performing machine learning. The technique includes inputting a training dataset into a variational autoencoder (VAE) comprising an encoder network, a prior network, and a decoder network. The technique also includes training the VAE by updating one or more parameters of the VAE based on a smoothness of one or more outputs produced by the VAE from the training dataset. The technique further includes producing generative output that reflects a first distribution of the training dataset by applying the decoder network to one or more values sampled from a second distribution of latent variables generated by the prior network.
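
    Read at face value, the abstract describes a VAE trained with an additional smoothness criterion on its outputs and sampled by decoding latents drawn from a prior. The toy sketch below shows one way such a smoothness term could enter a training step; the single latent level, the perturbation-based penalty, and the fixed standard-normal prior are assumptions made for brevity and do not reflect the disclosed hierarchical architecture.

        # Toy sketch, not the disclosed system: ELBO training plus a hypothetical
        # smoothness penalty on outputs under small input perturbations.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        enc = nn.Sequential(nn.Linear(784, 256), nn.SiLU(), nn.Linear(256, 2 * 32))  # mean, log-variance
        dec = nn.Sequential(nn.Linear(32, 256), nn.SiLU(), nn.Linear(256, 784))
        opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

        def train_step(x: torch.Tensor, smooth_weight: float = 0.1, noise_scale: float = 0.01) -> float:
            mean, logvar = enc(x).chunk(2, dim=-1)
            z = mean + torch.randn_like(mean) * torch.exp(0.5 * logvar)  # reparameterized sample
            recon = dec(z)
            recon_loss = F.mse_loss(recon, x)
            kl = -0.5 * torch.mean(1 + logvar - mean.pow(2) - logvar.exp())
            # Hypothetical smoothness term: outputs should change little for nearby inputs.
            perturbed_mean = enc(x + noise_scale * torch.randn_like(x)).chunk(2, dim=-1)[0]
            smooth = F.mse_loss(dec(perturbed_mean), recon.detach())
            loss = recon_loss + kl + smooth_weight * smooth
            opt.zero_grad()
            loss.backward()
            opt.step()
            return loss.item()

        # Generative output: decode samples from the latent prior (a standard normal here,
        # standing in for the abstract's separate prior network).
        with torch.no_grad():
            generated = dec(torch.randn(4, 32))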

    ANIMATABLE CHARACTER GENERATION USING 3D REPRESENTATIONS

    Publication Number: US20250157114A1

    Publication Date: 2025-05-15

    Application Number: US18623745

    Application Date: 2024-04-01

    Abstract: In various examples, systems and methods are disclosed relating to generating animatable characters or avatars. The system can assign a plurality of first elements of a three-dimensional (3D) model of a subject to a plurality of locations on a surface of the subject in an initial pose. Further, the system can assign a plurality of second elements to the plurality of first elements, each second element of the plurality of second elements having an opacity corresponding to a distance between the second element and the surface of the subject. Further, the system can update the plurality of second elements based at least on a target pose for the subject and one or more attributes of the subject to determine a plurality of updated second elements. Further, the system can render a representation of the subject based at least on the plurality of updated second elements.
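
    One plausible reading is that primitives anchored to the subject's surface carry nearby volumetric samples whose opacity decays with distance to the surface, and that those samples are transformed to a target pose before rendering. The heavily simplified sketch below stands in for that idea; the Gaussian opacity falloff, the placeholder identity pose transforms, and the orthographic splat used as a "renderer" are illustrative assumptions, not the disclosed system.

        # Simplified stand-in, not the disclosed system: surface-anchored first elements,
        # nearby second elements with distance-based opacity, reposing, and a trivial splat.
        import torch

        num_anchors, samples_per_anchor = 1024, 4
        anchor_pos = torch.rand(num_anchors, 3)                  # first elements (random stand-in for surface points)
        offsets = 0.05 * torch.randn(num_anchors, samples_per_anchor, 3)
        sample_pos = anchor_pos[:, None, :] + offsets            # second elements near the surface

        # Opacity falls off with distance from the surface (approximated by distance to the anchor).
        sigma = 0.05
        opacity = torch.exp(-(offsets.norm(dim=-1) / sigma) ** 2)

        # Repose: per-anchor rotation/translation derived from the target pose (identity placeholders here).
        pose_rot = torch.eye(3).expand(num_anchors, 3, 3)
        pose_trans = torch.zeros(num_anchors, 3)
        reposed = torch.einsum('nij,nsj->nsi', pose_rot, sample_pos) + pose_trans[:, None, :]

        # Trivial orthographic splat: accumulate opacity into a 2D grid as a stand-in renderer.
        H = W = 64
        px = (reposed[..., 0].clamp(0, 1) * (W - 1)).long().flatten()
        py = (reposed[..., 1].clamp(0, 1) * (H - 1)).long().flatten()
        image = torch.zeros(H, W)
        image.index_put_((py, px), opacity.flatten(), accumulate=True)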

    META-LEARNING OF REPRESENTATIONS USING SELF-SUPERVISED TASKS

    Publication Number: US20250103906A1

    Publication Date: 2025-03-27

    Application Number: US18471196

    Application Date: 2023-09-20

    Abstract: One embodiment of the present invention sets forth a technique for performing meta-learning. The technique includes performing a first set of training iterations to convert a prediction learning network into a first trained prediction learning network based on a first support set of training data and executing a representation learning network and the first trained prediction learning network to generate a first set of supervised training output and a first set of self-supervised training output based on a first query set of training data corresponding to the first support set of training data. The technique also includes performing a first training iteration to convert the representation learning network into a first trained representation learning network based on a first loss associated with the first set of supervised training output and a second loss associated with the first set of self-supervised training output.
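
    The described flow resembles a two-level loop: an inner loop adapts a prediction head on a support set, and an outer step updates the representation network with a supervised query-set loss combined with a self-supervised loss. The compact sketch below approximates that structure; the rotation-prediction pretext task, the network shapes, and the loss weighting are assumptions, not the claimed method.

        # Compact approximation, not the claimed method: inner-loop head adaptation,
        # outer-loop representation update from supervised + self-supervised losses.
        import copy
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        repr_net = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
        pred_head = nn.Linear(128, 5)   # 5-way classification head (initialization only)
        rot_head = nn.Linear(128, 4)    # self-supervised head: predict one of 4 rotations
        outer_opt = torch.optim.Adam(list(repr_net.parameters()) + list(rot_head.parameters()), lr=1e-3)

        def meta_train_step(support_x, support_y, query_x, query_y, inner_steps=5, ssl_weight=0.5):
            # Inner loop: adapt a fresh copy of the prediction head on the support set.
            head = copy.deepcopy(pred_head)
            inner_opt = torch.optim.SGD(head.parameters(), lr=0.01)
            for _ in range(inner_steps):
                loss = F.cross_entropy(head(repr_net(support_x).detach()), support_y)
                inner_opt.zero_grad()
                loss.backward()
                inner_opt.step()

            # Outer step: supervised loss on the query set plus a self-supervised rotation loss.
            sup_loss = F.cross_entropy(head(repr_net(query_x)), query_y)
            rot_labels = torch.randint(0, 4, (query_x.shape[0],))
            rotated = torch.stack([torch.rot90(img.view(28, 28), int(k)).reshape(-1)
                                   for img, k in zip(query_x, rot_labels)])  # query_x assumed to be flattened 28x28 images
            ssl_loss = F.cross_entropy(rot_head(repr_net(rotated)), rot_labels)
            total = sup_loss + ssl_weight * ssl_loss
            outer_opt.zero_grad()
            total.backward()
            outer_opt.step()
            return total.item()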

    META-TESTING OF REPRESENTATIONS LEARNED USING SELF-SUPERVISED TASKS

    Publication Number: US20250095350A1

    Publication Date: 2025-03-20

    Application Number: US18471209

    Application Date: 2023-09-20

    Abstract: One embodiment of the present invention sets forth a technique for executing a machine learning model. The technique includes performing a first set of training iterations to convert a prediction learning network into a first trained prediction learning network based on a first support set associated with a first set of classes. The technique also includes executing a first trained representation learning network to convert a first data sample into a first latent representation, where the first trained representation learning network is generated by training a representation learning network using a first query set associated with a second set of classes, a first set of self-supervised losses, and a first set of supervised losses. The technique further includes executing the first trained prediction learning network to convert the first latent representation into a first prediction of a first class that is not included in the second set of classes.
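
    This companion filing appears to describe the inference side: the previously trained representation network stays frozen while a fresh prediction head is fit on a support set of novel classes and then used to classify new samples. The minimal sketch below follows that pattern under assumed shapes and step counts; it is not the claimed procedure.

        # Minimal sketch, not the claimed procedure: freeze the representation network,
        # fit a new prediction head on the support set, then classify query samples.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        repr_net = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())  # assumed pre-trained
        repr_net.requires_grad_(False)

        def meta_test(support_x, support_y, test_x, num_classes=5, steps=50):
            head = nn.Linear(128, num_classes)
            opt = torch.optim.SGD(head.parameters(), lr=0.05)
            with torch.no_grad():
                support_feats = repr_net(support_x)           # frozen latent representations
            for _ in range(steps):                            # first set of training iterations
                loss = F.cross_entropy(head(support_feats), support_y)
                opt.zero_grad()
                loss.backward()
                opt.step()
            with torch.no_grad():
                return head(repr_net(test_x)).argmax(dim=-1)  # predictions over the novel classes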

    TRAINING A TRANSFORMER NEURAL NETWORK TO PERFORM TASK-SPECIFIC PARAMETER SELECTION

    Publication Number: US20250094813A1

    Publication Date: 2025-03-20

    Application Number: US18471204

    Application Date: 2023-09-20

    Abstract: One embodiment of the present invention sets forth a technique for training a transformer neural network. The technique includes inputting a first task token and a first set of samples into the transformer neural network and training the transformer neural network using a first set of losses between predictions generated by the transformer neural network from the first task token and first set of samples as well as a first set of labels. The technique also includes converting the first task token into a second task token that is larger than the first task token, inputting the second task token and a second set of samples into the transformer neural network, and training the transformer neural network using a second set of losses between predictions generated by the transformer neural network from the second task token and the second set of samples as well as a second set of labels.
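
    The abstract outlines a curriculum in which a learnable task token is trained alongside the transformer and is then enlarged before training continues on further data. The sketch below shows one way such a growing token could be wired in; the toy transformer, the classification read-out from the first token position, and the grow-by-appending-rows rule are assumptions rather than the patented procedure.

        # Schematic sketch, not the patented procedure: prepend a learnable task token,
        # train, then grow the token (keep trained rows, append fresh ones) and train again.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        d_model, num_classes = 64, 10
        encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        classifier = nn.Linear(d_model, num_classes)
        embed = nn.Linear(32, d_model)   # projects raw sample features into tokens

        def train_task(task_token, samples, labels, steps=100):
            params = [task_token] + list(encoder.parameters()) + list(classifier.parameters()) + list(embed.parameters())
            opt = torch.optim.Adam(params, lr=1e-4)
            for _ in range(steps):
                tokens = embed(samples)                                         # (batch, seq, d_model)
                task = task_token.unsqueeze(0).expand(tokens.shape[0], -1, -1)  # (batch, token_rows, d_model)
                out = encoder(torch.cat([task, tokens], dim=1))
                loss = F.cross_entropy(classifier(out[:, 0]), labels)           # predict from the first task slot
                opt.zero_grad()
                loss.backward()
                opt.step()

        # First task: a small (single-row) task token.
        task_token_1 = nn.Parameter(0.02 * torch.randn(1, d_model))
        train_task(task_token_1, torch.randn(16, 8, 32), torch.randint(0, num_classes, (16,)))

        # Convert into a larger task token: keep the trained rows and append a new one.
        task_token_2 = nn.Parameter(torch.cat([task_token_1.detach(), 0.02 * torch.randn(1, d_model)], dim=0))
        train_task(task_token_2, torch.randn(16, 8, 32), torch.randint(0, num_classes, (16,)))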

    MACHINE-LEARNING TECHNIQUES FOR REPRESENTING ITEMS IN A SPECTRAL DOMAIN

    Publication Number: US20230267306A1

    Publication Date: 2023-08-24

    Application Number: US17933806

    Application Date: 2022-09-20

    CPC classification number: G06N3/0454 G06T5/10 G06T2207/20056

    Abstract: In various embodiments, a training application generates a trained machine learning model that represents items in a spectral domain. The training application executes a first neural network on a first set of data points associated with both a first item and the spectral domain to generate a second neural network. Subsequently, the training application generates a set of predicted data points that are associated with both the first item and the spectral domain via the second neural network. The training application generates the trained machine learning model based on the first neural network, the second neural network, and the set of predicted data points. The trained machine learning model maps one or more positions within the spectral domain to one or more values associated with an item based on a set of data points associated with both the item and the spectral domain.
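
    A natural reading is that the first neural network acts as a hypernetwork that emits the weights of a second, item-specific network mapping positions in the spectral domain to values. The rough sketch below illustrates that hypernetwork pattern with assumed layer sizes and a single-hidden-layer generated MLP; it is not the disclosed training application.

        # Rough sketch, not the disclosed application: a hypernetwork (first network)
        # produces the weights of a small MLP (second network) that maps spectral
        # positions to predicted values for one item.
        import torch
        import torch.nn as nn

        hidden = 32
        # Parameter count of the generated MLP: (1 -> hidden) plus (hidden -> 1), with biases.
        target_numel = (1 * hidden + hidden) + (hidden * 1 + 1)

        hyper_net = nn.Sequential(nn.Linear(2 * 16, 128), nn.ReLU(), nn.Linear(128, target_numel))

        def generated_mlp(weights: torch.Tensor, freq: torch.Tensor) -> torch.Tensor:
            """Evaluate the second network defined by `weights` at spectral positions `freq`."""
            w1 = weights[:hidden].view(hidden, 1)
            b1 = weights[hidden:2 * hidden]
            w2 = weights[2 * hidden:3 * hidden].view(1, hidden)
            b2 = weights[3 * hidden:]
            h = torch.relu(freq @ w1.t() + b1)
            return h @ w2.t() + b2

        # Observed data points for one item: 16 (frequency, magnitude) pairs, flattened.
        obs = torch.rand(16, 2)
        weights = hyper_net(obs.flatten())                 # first network generates the second network
        query = torch.linspace(0, 1, 100).unsqueeze(-1)    # positions within the spectral domain
        predicted = generated_mlp(weights, query)          # predicted values at those positions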
