UNSUPERVISED DOMAIN ADAPTATION WITH NEURAL NETWORKS

    Publication Number: US20240296205A1

    Publication Date: 2024-09-05

    Application Number: US18656298

    Filing Date: 2024-05-06

    Abstract: Approaches presented herein provide for unsupervised domain transfer learning. In particular, three neural networks can be trained together using at least labeled data from a first domain and unlabeled data from a second domain. Features of the data are extracted using a feature extraction network. A first classifier network uses these features to classify the data, while a second classifier network uses these features to determine the relevant domain. A combined loss function is used to optimize the networks, with the goal that the feature extraction network extract features that the first classifier network can use to accurately classify the data while preventing the second classifier network from determining the domain of the image. Such optimization enables object classification to be performed with high accuracy for either domain, even though there may have been little to no labeled training data for the second domain.
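
The combined objective described above can be sketched with a single weighting term: the task-classification loss is minimized while the domain-classification loss is subtracted, so the extractor is pushed toward features the domain classifier cannot separate. A minimal NumPy sketch, where the function names, the linear classifiers, and the weighting factor `lam` are illustrative assumptions, not the patent's actual formulation:

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true labels
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def combined_loss(features, class_logits_fn, domain_logits_fn,
                  class_labels, domain_labels, lam=0.1):
    """Combined objective: reward features the task classifier can use
    (low class loss), penalize features the domain classifier can
    separate (so the domain loss is subtracted)."""
    task_loss = cross_entropy(softmax(class_logits_fn(features)), class_labels)
    domain_loss = cross_entropy(softmax(domain_logits_fn(features)), domain_labels)
    return task_loss - lam * domain_loss
```

In a full training loop the extractor would be updated to minimize this quantity while the domain classifier separately minimizes its own loss (as in gradient-reversal-style adversarial training); the sketch only shows how the two losses combine.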

    UNSUPERVISED DOMAIN ADAPTATION WITH NEURAL NETWORKS

    Publication Number: US11989262B2

    Publication Date: 2024-05-21

    Application Number: US17226534

    Filing Date: 2021-04-09

    Abstract: Approaches presented herein provide for unsupervised domain transfer learning. In particular, three neural networks can be trained together using at least labeled data from a first domain and unlabeled data from a second domain. Features of the data are extracted using a feature extraction network. A first classifier network uses these features to classify the data, while a second classifier network uses these features to determine the relevant domain. A combined loss function is used to optimize the networks, with the goal that the feature extraction network extract features that the first classifier network can use to accurately classify the data while preventing the second classifier network from determining the domain of the image. Such optimization enables object classification to be performed with high accuracy for either domain, even though there may have been little to no labeled training data for the second domain.

    PROCESSING ULTRAHYPERBOLIC REPRESENTATIONS USING NEURAL NETWORKS

    Publication Number: US20220391667A1

    Publication Date: 2022-12-08

    Application Number: US17827132

    Filing Date: 2022-05-27

    Inventor: Marc Law

    Abstract: Approaches presented herein use ultrahyperbolic representations (e.g., non-Riemannian manifolds) in inferencing tasks—such as classification—performed by machine learning models (e.g., neural networks). For example, a machine learning model may receive, as input, a graph including data on which to perform an inferencing task. This input can be in the form of, for example, a set of nodes and an adjacency matrix, where the nodes can each correspond to a vector in the graph. The neural network can take this input and perform mapping in order to generate a representation of this graph using an ultrahyperbolic (e.g., non-parametric, pseudo- or semi-Riemannian) manifold. This manifold can be of constant non-zero curvature, generalizing to at least hyperbolic and elliptical geometries. Once such a manifold-based representation is obtained, the neural network can perform one or more inferencing tasks using this representation, such as for classification or animation.
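
The constant-curvature constraint can be made concrete via the pseudo-Riemannian scalar product the abstract alludes to. Below is a small sketch assuming one common convention, in which the first q coordinates carry a minus sign and points x on the manifold satisfy ⟨x, x⟩ = −β (for q = 1 this recovers the hyperboloid model of hyperbolic space); signs and conventions vary across references, so this is illustrative only:

```python
import numpy as np

def pseudo_inner(x, y, q):
    """Scalar product of signature (q, d-q): the first q coordinates
    enter with a minus sign, the remaining ones with a plus sign."""
    s = np.ones_like(x)
    s[:q] = -1.0
    return float(np.sum(s * x * y))

def on_ultrahyperbolic(x, q, beta=1.0, tol=1e-9):
    """A point lies on the ultrahyperbolic manifold of 'radius' beta when
    its self scalar product equals -beta (one common convention)."""
    return abs(pseudo_inner(x, x, q) - (-beta)) < tol
```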

    ESTIMATING OPTIMAL TRAINING DATA SET SIZES FOR MACHINE LEARNING MODEL SYSTEMS AND APPLICATIONS

    Publication Number: US20230376849A1

    Publication Date: 2023-11-23

    Application Number: US18318212

    Filing Date: 2023-05-16

    CPC classification number: G06N20/00

    Abstract: In various examples, optimal training data set sizes are estimated for machine learning model systems and applications. Systems and methods are disclosed that estimate an amount of data to include in a training data set, where the training data set is then used to train one or more machine learning models to reach a target validation performance. To estimate the amount of training data, subsets of an initial training data set may be used to train the machine learning model(s) in order to determine estimates for the minimum amount of training data needed to train the machine learning model(s) to reach the target validation performance. The estimates may then be used to generate one or more functions, such as a cumulative density function and/or a probability density function, which may then be used to estimate the amount of training data needed to train the machine learning model(s).
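
One simple way to realize the "train on subsets, then extrapolate" idea is to fit a power law to validation error versus subset size and invert it for the target error. This is only an illustrative stand-in for the CDF/PDF-based estimation the abstract describes; the function name and the power-law form are assumptions:

```python
import numpy as np

def estimate_required_size(sizes, errors, target_error):
    """Fit validation error ~ a * n**b in log-log space (b < 0 when
    error falls with more data), then invert the fit to estimate the
    training-set size needed to reach target_error."""
    b, log_a = np.polyfit(np.log(sizes), np.log(errors), 1)
    a = np.exp(log_a)
    # err = a * n**b  =>  n = (err / a)**(1 / b)
    return (target_error / a) ** (1.0 / b)
```

For example, if error follows 10 · n^(−1/2) on the measured subsets, reaching an error of 0.1 requires roughly n = 10,000 samples.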

    OPTIMIZED ACTIVE LEARNING USING INTEGER PROGRAMMING

    Publication Number: US20230244985A1

    Publication Date: 2023-08-03

    Application Number: US17591039

    Filing Date: 2022-02-02

    CPC classification number: G06N20/00

    Abstract: In various examples, a representative subset of data points are queried or selected using integer programming to minimize the Wasserstein distance between the selected data points and the data set from which they were selected. A Generalized Benders Decomposition (GBD) may be used to decompose and iteratively solve the minimization problem, providing a globally optimal solution (an identified subset of data points that match the distribution of their data set) within a threshold tolerance. Data selection may be accelerated by applying one or more constraints while iterating, such as optimality cuts that leverage properties of the Wasserstein distance and/or pruning constraints that reduce the search space of candidate data points. In an active learning implementation, a representative subset of unlabeled data points may be selected using GBD, labeled, and used to train machine learning model(s) over one or more cycles of active learning.
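
A full Generalized Benders Decomposition is beyond a short example, but the objective it optimizes can be illustrated in one dimension: measure the Wasserstein distance between a candidate subset and the full data set, and grow the subset to keep that distance small. The greedy loop below is an illustrative, locally optimal stand-in for the globally optimal GBD solve in the abstract:

```python
import numpy as np

def wasserstein_1d(a, b):
    """W1 between two 1-D empirical distributions, approximated by
    comparing the two samples at a grid of matching quantiles."""
    qs = np.linspace(0.0, 1.0, 200, endpoint=False)
    return float(np.mean(np.abs(np.quantile(a, qs) - np.quantile(b, qs))))

def greedy_representative_subset(data, k):
    """Greedily grow a subset of k indices whose empirical distribution
    stays close to that of the full data set."""
    chosen, remaining = [], list(range(len(data)))
    for _ in range(k):
        best = min(
            remaining,
            key=lambda i: wasserstein_1d(
                np.array([data[j] for j in chosen + [i]]), data))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

In an active-learning cycle, the selected indices would be sent for labeling and the labeled subset used for the next round of training.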

    DOMAIN ADAPTATION USING DOMAIN-ADVERSARIAL LEARNING IN SYNTHETIC DATA SYSTEMS AND APPLICATIONS

    Publication Number: US20220383073A1

    Publication Date: 2022-12-01

    Application Number: US17827141

    Filing Date: 2022-05-27

    Abstract: In various examples, machine learning models (MLMs) may be updated using multi-order gradients in order to train the MLMs, such as at least a first order gradient and any number of higher-order gradients. At least a first of the MLMs may be trained to generate a representation of features that is invariant to a first domain corresponding to a first dataset and a second domain corresponding to a second dataset. At least a second of the MLMs may be trained to classify whether the representation corresponds to the first domain or the second domain. At least a third of the MLMs may be trained to perform a task. The first dataset may correspond to a labeled source domain and the second dataset may correspond to an unlabeled target domain. The training may include transferring knowledge from the first domain to the second domain in a representation space.

    UNSUPERVISED DOMAIN ADAPTATION WITH NEURAL NETWORKS

    Publication Number: US20220108134A1

    Publication Date: 2022-04-07

    Application Number: US17226534

    Filing Date: 2021-04-09

    Abstract: Approaches presented herein provide for unsupervised domain transfer learning. In particular, three neural networks can be trained together using at least labeled data from a first domain and unlabeled data from a second domain. Features of the data are extracted using a feature extraction network. A first classifier network uses these features to classify the data, while a second classifier network uses these features to determine the relevant domain. A combined loss function is used to optimize the networks, with the goal that the feature extraction network extract features that the first classifier network can use to accurately classify the data while preventing the second classifier network from determining the domain of the image. Such optimization enables object classification to be performed with high accuracy for either domain, even though there may have been little to no labeled training data for the second domain.
