LEARNING TO GENERATE SYNTHETIC DATASETS FOR TRAINING NEURAL NETWORKS

    Publication No.: US20200160178A1

    Publication Date: 2020-05-21

    Application No.: US16685795

    Filing Date: 2019-11-15

    Abstract: In various examples, a generative model is used to synthesize datasets for use in training a downstream machine learning model to perform an associated task. The synthesized datasets may be generated by sampling a scene graph from a scene grammar—such as a probabilistic grammar—and applying the scene graph to the generative model to compute updated scene graphs more representative of object attribute distributions of real-world datasets. The downstream machine learning model may be validated against a real-world validation dataset, and the performance of the model on the real-world validation dataset may be used as an additional factor in further training or fine-tuning the generative model for generating the synthesized datasets specific to the task of the downstream machine learning model.
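The pipeline in the abstract has two stages: sample a scene graph from a probabilistic grammar, then let a generative model shift object attributes toward real-world statistics. The following is a minimal, dependency-free sketch of that idea; the grammar rules, attribute names, and the linear "refinement" stand-in for the generative model are all illustrative assumptions, not the patented method itself.

```python
import random

# Hypothetical probabilistic grammar: each rule maps a non-terminal to
# weighted productions (here, possible object lists for a "scene").
GRAMMAR = {
    "scene": [(["road", "car"], 0.6), (["road", "car", "car"], 0.4)],
}

def sample_scene(rng):
    """Sample a scene graph (a flat object list, for simplicity)."""
    productions, weights = zip(*GRAMMAR["scene"])
    objects = rng.choices(productions, weights=weights, k=1)[0]
    # Each object gets an attribute drawn from the grammar's prior.
    return [{"type": o, "x": rng.uniform(0.0, 1.0)} for o in objects]

def refine(scene, target_mean=0.7, step=0.5):
    """Stand-in for the generative model: nudge the attribute
    distribution toward statistics observed in real-world data."""
    for obj in scene:
        obj["x"] += step * (target_mean - obj["x"])
    return scene

rng = random.Random(0)
scene = refine(sample_scene(rng))
```

In the actual system the refinement step would be a learned model trained (and fine-tuned against downstream validation performance) rather than a fixed interpolation, but the data flow — grammar sample in, adjusted scene graph out — is the same.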

    NEURAL RENDERING FOR INVERSE GRAPHICS GENERATION

    Publication No.: US20210279952A1

    Publication Date: 2021-09-09

    Application No.: US17193405

    Filing Date: 2021-03-05

    Abstract: Approaches are presented for training an inverse graphics network. An image synthesis network can generate training data for an inverse graphics network. In turn, the inverse graphics network can teach the synthesis network about the physical three-dimensional (3D) controls. Such an approach can provide for accurate 3D reconstruction of objects from 2D images using the trained inverse graphics network, while requiring little annotation of the provided training data. Such an approach can extract and disentangle 3D knowledge learned by generative models by utilizing differentiable renderers, enabling a disentangled generative model to function as a controllable 3D “neural renderer,” complementing traditional graphics renderers.
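The training cycle described here — a synthesis network generates images with known 3D parameters, and an inverse network learns to recover those parameters — can be sketched with a toy differentiable "renderer". Everything below is a deliberately scalar, dependency-free illustration under assumed names; the real approach uses a GAN-based image synthesis network and a full differentiable graphics renderer.

```python
# Toy stand-in renderer: "image" is a known differentiable function of
# the scene parameter theta (here simply image = 2 * theta).
def render(theta):
    return 2.0 * theta

def train_inverse(images, thetas, lr=0.1, steps=200):
    """Fit an inverse model theta_hat = w * image by gradient descent
    on squared error against the renderer's known parameters."""
    w = 0.0
    for _ in range(steps):
        grad = 0.0
        for img, theta in zip(images, thetas):
            grad += 2.0 * (w * img - theta) * img
        w -= lr * grad / len(images)
    return w

# Synthesis side: generate training pairs with known 3D controls.
thetas = [0.1, 0.5, 0.9]
images = [render(t) for t in thetas]

# Inverse side: learn to recover theta from a rendered image.
w = train_inverse(images, thetas)
```

Because the renderer is `image = 2 * theta`, the inverse model should converge to `w = 0.5`. The point of the sketch is the feedback structure: the renderer supplies labeled training data cheaply, so the inverse network needs little manual annotation.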

    NEURAL RENDERING FOR INVERSE GRAPHICS GENERATION

    Publication No.: US20230134690A1

    Publication Date: 2023-05-04

    Application No.: US17981770

    Filing Date: 2022-11-07

    Abstract: Approaches are presented for training an inverse graphics network. An image synthesis network can generate training data for an inverse graphics network. In turn, the inverse graphics network can teach the synthesis network about the physical three-dimensional (3D) controls. Such an approach can provide for accurate 3D reconstruction of objects from 2D images using the trained inverse graphics network, while requiring little annotation of the provided training data. Such an approach can extract and disentangle 3D knowledge learned by generative models by utilizing differentiable renderers, enabling a disentangled generative model to function as a controllable 3D “neural renderer,” complementing traditional graphics renderers.

    Learning to generate synthetic datasets for training neural networks

    Publication No.: US11610115B2

    Publication Date: 2023-03-21

    Application No.: US16685795

    Filing Date: 2019-11-15

    Abstract: In various examples, a generative model is used to synthesize datasets for use in training a downstream machine learning model to perform an associated task. The synthesized datasets may be generated by sampling a scene graph from a scene grammar—such as a probabilistic grammar—and applying the scene graph to the generative model to compute updated scene graphs more representative of object attribute distributions of real-world datasets. The downstream machine learning model may be validated against a real-world validation dataset, and the performance of the model on the real-world validation dataset may be used as an additional factor in further training or fine-tuning the generative model for generating the synthesized datasets specific to the task of the downstream machine learning model.

    HIGH-PRECISION SEMANTIC IMAGE EDITING USING NEURAL NETWORKS FOR SYNTHETIC DATA GENERATION SYSTEMS AND APPLICATIONS

    Publication No.: US20220383570A1

    Publication Date: 2022-12-01

    Application No.: US17827394

    Filing Date: 2022-05-27

    Abstract: In various examples, high-precision semantic image editing for machine learning systems and applications are described. For example, a generative adversarial network (GAN) may be used to jointly model images and their semantic segmentations based on a same underlying latent code. Image editing may be achieved by using segmentation mask modifications (e.g., provided by a user, or otherwise) to optimize the latent code to be consistent with the updated segmentation, thus effectively changing the original, e.g., RGB image. To improve efficiency of the system, and to not require optimizations for each edit on each image, editing vectors may be learned in latent space that realize the edits, and that can be directly applied on other images with or without additional optimizations. As a result, a GAN in combination with the optimization approaches described herein may simultaneously allow for high precision editing in real-time with straightforward compositionality of multiple edits.
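The edit-by-optimization idea in this abstract — optimize the latent code until the generated segmentation matches a user's edited mask, which changes the image as a side effect — can be sketched with a toy joint "generator". The linear image branch, the soft mask branch, and the closed-form gradient below are simplifying assumptions so the example stays self-contained; the patent's system uses a GAN and learned editing vectors.

```python
def generate(z):
    """Toy joint generator: one latent code produces both branches."""
    image = [2.0 * v for v in z]  # stand-in image branch
    mask = list(z)                # stand-in soft segmentation branch
    return image, mask

def optimize_latent(z, target_mask, lr=0.5, steps=100):
    """Gradient descent on ||mask(z) - target||^2. With mask(z) = z
    the gradient is simply 2 * (z - target)."""
    z = list(z)
    for _ in range(steps):
        z = [v - lr * 2.0 * (v - t) for v, t in zip(z, target_mask)]
    return z

z0 = [0.2, 0.8, 0.4]
edited_mask = [0.2, 0.1, 0.4]  # user lowers the second region
z1 = optimize_latent(z0, edited_mask)
image1, mask1 = generate(z1)   # image changes to stay consistent
```

Note that only the edited region's latent coordinate moves, so the corresponding image value changes while the others are untouched — the same consistency property the abstract describes. Learning a reusable editing vector would amount to caching the displacement `z1 - z0` and applying it to other latents directly.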
