LEARNING AND PROPAGATING VISUAL ATTRIBUTES

    Publication Number: US20220076128A1

    Publication Date: 2022-03-10

    Application Number: US17017597

    Filing Date: 2020-09-10

    Abstract: One embodiment of the present invention sets forth a technique for performing spatial propagation. The technique includes generating a first directed acyclic graph (DAG) by connecting spatially adjacent points included in a set of unstructured points via directed edges along a first direction. The technique also includes applying a first set of neural network layers to one or more images associated with the set of unstructured points to generate (i) a set of features for the set of unstructured points and (ii) a set of pairwise affinities between the spatially adjacent points connected by the directed edges. The technique further includes generating a set of labels for the set of unstructured points by propagating the set of features across the first DAG based on the set of pairwise affinities.
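
    To make the DAG-based propagation concrete, the sketch below is a minimal, non-neural illustration in Python/NumPy: it links each unstructured point to its nearest neighbor behind it along a chosen direction and blends features along the resulting edges using supplied pairwise affinities. In the claimed technique the features and affinities are produced by neural network layers applied to the associated images; here they are given directly, and names such as build_dag and propagate are purely illustrative.

    # Minimal sketch (not the patented implementation): propagate per-point
    # features across a DAG built by linking each unstructured point to its
    # nearest neighbor that lies behind it along a chosen direction, weighting
    # each edge by a supplied pairwise affinity.
    import numpy as np

    def build_dag(points, direction):
        """Connect each point to its nearest spatially adjacent point that lies
        strictly behind it along `direction`, yielding a cycle-free graph."""
        proj = points @ direction                      # scalar position along the direction
        order = np.argsort(proj)                       # topological order for the DAG
        edges = []                                     # (parent, child) pairs
        for i, child in enumerate(order[1:], start=1):
            parents = order[:i]                        # candidates strictly behind `child`
            dists = np.linalg.norm(points[parents] - points[child], axis=1)
            edges.append((parents[np.argmin(dists)], child))
        return order, edges

    def propagate(features, affinities, order, edges):
        """Blend each point's feature with its parent's propagated feature,
        using the corresponding edge affinity as the blending weight."""
        out = features.copy()
        parent_of = {child: parent for parent, child in edges}
        for node in order:
            if node in parent_of:
                parent = parent_of[node]
                a = affinities[(parent, node)]
                out[node] = a * out[parent] + (1.0 - a) * features[node]
        return out

    # Toy usage: four 2D points, scalar features, uniform affinities, propagation along +x.
    pts = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, -0.1], [3.0, 0.3]])
    feats = np.array([[1.0], [0.0], [0.0], [0.0]])
    order, edges = build_dag(pts, direction=np.array([1.0, 0.0]))
    affs = {edge: 0.5 for edge in edges}
    print(propagate(feats, affs, order, edges))        # feature spreads along the DAG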

    FEW-SHOT CONTINUAL LEARNING WITH TASK-SPECIFIC PARAMETER SELECTION

    Publication Number: US20250094819A1

    Publication Date: 2025-03-20

    Application Number: US18471184

    Filing Date: 2023-09-20

    Abstract: One embodiment of the present invention sets forth a technique for executing a transformer neural network. The technique includes executing a first attention unit included in the transformer neural network to convert a first input token into a first query, a first key, and a first plurality of values, where each value included in the first plurality of values represents a sub-task associated with the transformer neural network. The technique also includes computing a first plurality of outputs associated with the first input token based on the first query, the first key, and the first plurality of values. The technique further includes performing a task associated with an input corresponding to the first input token based on the first input token and the first plurality of outputs.
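
    As a rough illustration of an attention unit that emits one value per sub-task, the PyTorch sketch below maps each token to a single query, a single key, and K value vectors, then applies the same attention weights to every sub-task's values to produce K outputs per token. The module name MultiValueAttention, the single-head formulation, and the toy shapes are assumptions made for illustration, not the patented design.

    # Illustrative sketch only: an attention unit producing one query, one key,
    # and one value per sub-task for each input token.
    import torch
    import torch.nn as nn

    class MultiValueAttention(nn.Module):
        def __init__(self, dim: int, num_subtasks: int):
            super().__init__()
            self.num_subtasks = num_subtasks
            self.to_q = nn.Linear(dim, dim)
            self.to_k = nn.Linear(dim, dim)
            # One value projection per sub-task, packed into a single linear layer.
            self.to_v = nn.Linear(dim, dim * num_subtasks)
            self.scale = dim ** -0.5

        def forward(self, tokens: torch.Tensor) -> torch.Tensor:
            b, n, d = tokens.shape                                   # (batch, seq, dim)
            q = self.to_q(tokens)                                    # (b, n, d)
            k = self.to_k(tokens)                                    # (b, n, d)
            v = self.to_v(tokens).view(b, n, self.num_subtasks, d)   # (b, n, K, d)
            attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)  # (b, n, n)
            # The same attention weights are applied to every sub-task's values.
            return torch.einsum('bij,bjkd->bikd', attn, v)           # (b, n, K, d)

    # Toy usage: a batch of 2 tokens, 4 sub-task outputs per token.
    unit = MultiValueAttention(dim=8, num_subtasks=4)
    x = torch.randn(1, 2, 8)
    print(unit(x).shape)   # torch.Size([1, 2, 4, 8])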

    TECHNIQUES FOR TRAINING A MACHINE LEARNING MODEL TO RECONSTRUCT DIFFERENT THREE-DIMENSIONAL SCENES

    Publication Number: US20240161404A1

    Publication Date: 2024-05-16

    Application Number: US18497938

    Filing Date: 2023-10-30

    CPC classification number: G06T17/20

    Abstract: In various embodiments, a training application trains a machine learning model to generate three-dimensional (3D) representations of two-dimensional images. The training application maps a depth image and a viewpoint to signed distance function (SDF) values associated with 3D query points. The training application maps a red, green, and blue (RGB) image to radiance values associated with the 3D query points. The training application computes a red, green, blue, and depth (RGBD) reconstruction loss based on at least the SDF values and the radiance values. The training application modifies at least one of a pre-trained geometry encoder, a pre-trained geometry decoder, an untrained texture encoder, or an untrained texture decoder based on the RGBD reconstruction loss to generate a trained machine learning model that generates 3D representations of RGBD images.
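
    The sketch below shows one simplified way such an RGBD reconstruction loss could be assembled in PyTorch: an L1 term on predicted signed-distance values plus an L1 term on predicted radiance, backpropagated into the upstream encoders and decoders. The function rgbd_reconstruction_loss, the choice of L1 terms, and the direct SDF supervision are placeholder assumptions, not the training application described in the patent.

    # Simplified sketch: combine a geometry term on predicted SDF values with a
    # color term on predicted radiance to form an RGBD reconstruction loss.
    import torch
    import torch.nn.functional as F

    def rgbd_reconstruction_loss(pred_sdf, target_sdf, pred_radiance, target_rgb,
                                 depth_weight: float = 1.0, rgb_weight: float = 1.0):
        """pred_sdf / target_sdf: (num_points,) signed distances at 3D query points.
        pred_radiance / target_rgb: (num_points, 3) color values at those points."""
        geometry_term = F.l1_loss(pred_sdf, target_sdf)
        color_term = F.l1_loss(pred_radiance, target_rgb)
        return depth_weight * geometry_term + rgb_weight * color_term

    # Toy usage with random tensors standing in for model outputs and supervision.
    n = 1024
    pred_sdf, target_sdf = torch.randn(n, requires_grad=True), torch.randn(n)
    pred_rad, target_rgb = torch.rand(n, 3, requires_grad=True), torch.rand(n, 3)
    loss = rgbd_reconstruction_loss(pred_sdf, target_sdf, pred_rad, target_rgb)
    loss.backward()   # gradients would flow into the geometry/texture networks
    print(float(loss))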

    ANIMATABLE CHARACTER GENERATION USING 3D REPRESENTATIONS

    Publication Number: US20250157114A1

    Publication Date: 2025-05-15

    Application Number: US18623745

    Filing Date: 2024-04-01

    Abstract: In various examples, systems and methods are disclosed relating to generating animatable characters or avatars. The system can assign a plurality of first elements of a three-dimensional (3D) model of a subject to a plurality of locations on a surface of the subject in an initial pose. Further, the system can assign a plurality of second elements to the plurality of first elements, each second element of the plurality of second elements having an opacity corresponding to a distance between the second element and the surface of the subject. Further, the system can update the plurality of second elements based at least on a target pose for the subject and one or more attributes of the subject to determine a plurality of updated second elements. Further, the system can render a representation of the subject based at least on the plurality of updated second elements.
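
    A minimal NumPy sketch of the general idea follows, under the assumption that the first elements are anchor points on the subject's surface and the second elements are offset primitives whose opacity decays with distance from that surface; a single rigid transform stands in for the target pose. Function names such as attach_second_elements and repose, and the Gaussian-style opacity falloff, are illustrative assumptions rather than the disclosed system.

    # Minimal sketch under stated assumptions: surface anchors ("first elements")
    # each carry a small set of offset primitives ("second elements") whose
    # opacity decays with distance to the surface.
    import numpy as np

    def attach_second_elements(anchors, offsets, sigma=0.05):
        """anchors: (N, 3) surface points; offsets: (N, M, 3) per-anchor offsets.
        Returns element positions (N, M, 3) and opacities (N, M)."""
        positions = anchors[:, None, :] + offsets
        dist_to_surface = np.linalg.norm(offsets, axis=-1)      # distance from the anchor
        opacities = np.exp(-(dist_to_surface ** 2) / (2 * sigma ** 2))
        return positions, opacities

    def repose(positions, rotation, translation):
        """Apply a single rigid transform as a stand-in for a target pose."""
        return positions @ rotation.T + translation

    # Toy usage: 4 anchors with 3 second elements each, then a 90-degree yaw.
    anchors = np.random.rand(4, 3)
    offsets = 0.02 * np.random.randn(4, 3, 3)
    positions, opacities = attach_second_elements(anchors, offsets)
    yaw = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    print(repose(positions, yaw, np.zeros(3)).shape, opacities.shape)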

    META-LEARNING OF REPRESENTATIONS USING SELF-SUPERVISED TASKS

    Publication Number: US20250103906A1

    Publication Date: 2025-03-27

    Application Number: US18471196

    Filing Date: 2023-09-20

    Abstract: One embodiment of the present invention sets forth a technique for performing meta-learning. The technique includes performing a first set of training iterations to convert a prediction learning network into a first trained prediction learning network based on a first support set of training data and executing a representation learning network and the first trained prediction learning network to generate a first set of supervised training output and a first set of self-supervised training output based on a first query set of training data corresponding to the first support set of training data. The technique also includes performing a first training iteration to convert the representation learning network into a first trained representation learning network based on a first loss associated with the first set of supervised training output and a second loss associated with the first set of self-supervised training output.
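
    The bi-level pattern described above can be sketched in PyTorch as follows, under several assumptions: the inner loop adapts only the prediction head on the support set, and the outer step updates the representation network (and an auxiliary self-supervised head) using a supervised query loss plus a self-supervised query loss. The concrete networks, the reconstruction-style self-supervised loss, and names such as episode are placeholders, not the claimed procedure.

    # Hedged sketch of the general bi-level pattern: inner loop fits the
    # prediction head on the support set; outer step updates the representation
    # with supervised + self-supervised losses on the query set.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    rep_net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
    pred_head = nn.Linear(64, 5)            # 5-way classification head
    ssl_head = nn.Linear(64, 32)            # self-supervised reconstruction head

    inner_opt = torch.optim.SGD(pred_head.parameters(), lr=1e-2)
    outer_opt = torch.optim.Adam(list(rep_net.parameters()) + list(ssl_head.parameters()), lr=1e-3)

    def episode(support_x, support_y, query_x, query_y, inner_steps=5):
        # Inner loop: adapt only the prediction head on the support set.
        for _ in range(inner_steps):
            inner_opt.zero_grad()
            loss = F.cross_entropy(pred_head(rep_net(support_x).detach()), support_y)
            loss.backward()
            inner_opt.step()
        # Outer step: supervised + self-supervised losses on the query set.
        outer_opt.zero_grad()
        z = rep_net(query_x)
        supervised = F.cross_entropy(pred_head(z), query_y)
        self_supervised = F.mse_loss(ssl_head(z), query_x)   # e.g. input reconstruction
        (supervised + self_supervised).backward()
        outer_opt.step()

    # Toy episode with random data: 5 classes, 32-dimensional inputs.
    episode(torch.randn(25, 32), torch.randint(0, 5, (25,)),
            torch.randn(25, 32), torch.randint(0, 5, (25,)))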

    META-TESTING OF REPRESENTATIONS LEARNED USING SELF-SUPERVISED TASKS

    Publication Number: US20250095350A1

    Publication Date: 2025-03-20

    Application Number: US18471209

    Filing Date: 2023-09-20

    Abstract: One embodiment of the present invention sets forth a technique for executing a machine learning model. The technique includes performing a first set of training iterations to convert a prediction learning network into a first trained prediction learning network based on a first support set associated with a first set of classes. The technique also includes executing a first trained representation learning network to convert a first data sample into a first latent representation, where the first trained representation learning network is generated by training a representation learning network using a first query set, a first set of self-supervised losses, and a first set of supervised losses. The technique further includes executing the first trained prediction learning network to convert the first latent representation into a first prediction of a first class that is not included in a second set of classes.
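
    As a hedged illustration of the meta-testing flow, the PyTorch sketch below freezes a previously trained representation network, fits a fresh prediction head on the new support set, and then classifies held-out samples with the adapted head. The function meta_test, the frozen-feature assumption, and all shapes are illustrative choices, not the claimed method.

    # Illustrative meta-testing sketch: the representation network is frozen;
    # only a fresh prediction head is fitted on the new support set.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def meta_test(rep_net, support_x, support_y, test_x, num_classes, steps=20):
        rep_net.eval()
        with torch.no_grad():                               # representation stays fixed
            support_z = rep_net(support_x)
            test_z = rep_net(test_x)
        head = nn.Linear(support_z.shape[1], num_classes)   # fresh prediction head
        opt = torch.optim.SGD(head.parameters(), lr=1e-1)
        for _ in range(steps):                              # adapt on the support set only
            opt.zero_grad()
            F.cross_entropy(head(support_z), support_y).backward()
            opt.step()
        return head(test_z).argmax(dim=-1)                  # predicted class per test sample

    # Toy usage: a frozen 32->64 representation and 5 novel classes.
    rep = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
    preds = meta_test(rep, torch.randn(25, 32), torch.randint(0, 5, (25,)),
                      torch.randn(3, 32), num_classes=5)
    print(preds)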

    TRAINING A TRANSFORMER NEURAL NETWORK TO PERFORM TASK-SPECIFIC PARAMETER SELECTION

    Publication Number: US20250094813A1

    Publication Date: 2025-03-20

    Application Number: US18471204

    Filing Date: 2023-09-20

    Abstract: One embodiment of the present invention sets forth a technique for training a transformer neural network. The technique includes inputting a first task token and a first set of samples into the transformer neural network and training the transformer neural network using a first set of losses computed between (i) predictions generated by the transformer neural network from the first task token and the first set of samples and (ii) a first set of labels. The technique also includes converting the first task token into a second task token that is larger than the first task token, inputting the second task token and a second set of samples into the transformer neural network, and training the transformer neural network using a second set of losses computed between (i) predictions generated by the transformer neural network from the second task token and the second set of samples and (ii) a second set of labels.
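
    The sketch below shows one way the growing task token could look in PyTorch, assuming the task token is a learnable block of embeddings prepended to the sample tokens: a first phase trains with a small token, the token is then enlarged by appending new embeddings, and a second phase trains with the larger token on new samples. The stock TransformerEncoder, the classification loss, and names such as train_phase are stand-ins, not the patented training procedure.

    # Rough sketch under assumptions: the "task token" is a learnable block of
    # embeddings prepended to the sample tokens; it is enlarged between phases.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    dim, num_classes = 32, 10
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True), num_layers=2)
    classifier = nn.Linear(dim, num_classes)
    task_token = nn.Parameter(torch.randn(1, 1, dim))       # first (small) task token

    def train_phase(task_token, samples, labels, steps=10):
        params = list(encoder.parameters()) + list(classifier.parameters()) + [task_token]
        opt = torch.optim.Adam(params, lr=1e-3)
        for _ in range(steps):
            opt.zero_grad()
            tokens = torch.cat([task_token.expand(samples.shape[0], -1, -1), samples], dim=1)
            logits = classifier(encoder(tokens)[:, 0])       # read out at the first task slot
            F.cross_entropy(logits, labels).backward()
            opt.step()

    # Phase 1: train with the first task token.
    x1, y1 = torch.randn(8, 5, dim), torch.randint(0, num_classes, (8,))
    train_phase(task_token, x1, y1)

    # Convert to a second, larger task token (old embeddings kept, new ones appended).
    task_token = nn.Parameter(torch.cat([task_token.data, torch.randn(1, 1, dim)], dim=1))

    # Phase 2: train with the second task token on a second set of samples.
    x2, y2 = torch.randn(8, 5, dim), torch.randint(0, num_classes, (8,))
    train_phase(task_token, x2, y2)
    print(task_token.shape)   # torch.Size([1, 2, 32])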
