DENOISING DIFFUSION GENERATIVE ADVERSARIAL NETWORKS

    Publication No.: US20230095092A1

    Publication Date: 2023-03-30

    Application No.: US17957143

    Filing Date: 2022-09-30

    IPC Class: G06T5/00

    Abstract: Apparatuses, systems, and techniques are presented to train and utilize one or more neural networks. A denoising diffusion generative adversarial network (denoising diffusion GAN) reduces the number of denoising steps in the reverse process. The denoising diffusion GAN does not assume a Gaussian distribution for large denoising steps and instead applies a multimodal model, permitting denoising with fewer steps. Systems and methods further minimize a divergence between the diffused real data distribution and the diffused generator distribution over several timesteps. Accordingly, various embodiments may enable faster sample generation, in which samples are generated from noise using the denoising diffusion GAN.
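    The reverse process described above can be sketched in a toy form. This is a minimal illustration, not the patented method: the `generator` below is a hypothetical placeholder for the adversarially trained network, which in a real denoising diffusion GAN predicts a denoised sample from a noisy one so that each reverse step can be large and non-Gaussian. All names and the noise schedule are assumptions for the sketch.

    ```python
    import numpy as np

    # Toy 1-D denoising sketch with only a few reverse steps
    # (versus ~1000 in a standard DDPM).
    T = 4                                # number of denoising steps
    betas = np.linspace(0.1, 0.4, T)     # illustrative noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    def generator(x_t, t):
        """Hypothetical trained generator: predicts x0 from (x_t, t).
        A real denoising diffusion GAN learns this mapping adversarially;
        here we simply shrink toward zero as a stand-in."""
        return x_t * np.sqrt(alpha_bars[t])

    def reverse_step(x_t, t, rng):
        """Sample x_{t-1} using the predicted x0 and the diffusion posterior
        q(x_{t-1} | x_t, x0)."""
        x0_hat = generator(x_t, t)
        if t == 0:
            return x0_hat
        ab_prev = alpha_bars[t - 1]
        coef0 = np.sqrt(ab_prev) * betas[t] / (1.0 - alpha_bars[t])
        coeft = np.sqrt(alphas[t]) * (1.0 - ab_prev) / (1.0 - alpha_bars[t])
        var = betas[t] * (1.0 - ab_prev) / (1.0 - alpha_bars[t])
        mean = coef0 * x0_hat + coeft * x_t
        return mean + np.sqrt(var) * rng.standard_normal(x_t.shape)

    rng = np.random.default_rng(0)
    x = rng.standard_normal(8)           # start from pure noise x_T
    for t in reversed(range(T)):
        x = reverse_step(x, t, rng)
    print(x.shape)                       # (8,)
    ```

    The point of the sketch is structural: because the generator models the denoising distribution directly rather than assuming it Gaussian, `T` can be small, which is the source of the faster sampling the abstract claims.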

    HIGH-PRECISION SEMANTIC IMAGE EDITING USING NEURAL NETWORKS FOR SYNTHETIC DATA GENERATION SYSTEMS AND APPLICATIONS

    Publication No.: US20220383570A1

    Publication Date: 2022-12-01

    Application No.: US17827394

    Filing Date: 2022-05-27

    Abstract: In various examples, high-precision semantic image editing for machine learning systems and applications is described. For example, a generative adversarial network (GAN) may be used to jointly model images and their semantic segmentations based on the same underlying latent code. Image editing may be achieved by using segmentation mask modifications (e.g., provided by a user or otherwise) to optimize the latent code to be consistent with the updated segmentation, thus effectively changing the original (e.g., RGB) image. To improve the efficiency of the system, and to avoid a separate optimization for each edit on each image, editing vectors that realize the edits may be learned in latent space and applied directly to other images, with or without additional optimization. As a result, a GAN combined with the optimization approaches described herein may allow high-precision, real-time editing with straightforward compositionality of multiple edits.
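    The optimize-the-latent-code idea can be sketched with a toy generator. This is an assumption-laden illustration, not the patented system: a real implementation uses a GAN whose image and segmentation branches share one latent code, whereas here `generate_mask` is a fixed random linear map so the optimization loop stays self-contained, and all names (`edit_latent`, `edit_vector`, sizes, learning rate) are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    D_Z, D_PIX = 16, 32                           # latent size, flattened mask size
    W = rng.standard_normal((D_PIX, D_Z)) * 0.3   # stand-in for the seg branch

    def generate_mask(z):
        """Soft segmentation logits produced from latent z (GAN stand-in)."""
        return W @ z

    def edit_latent(z, target_mask, lr=0.05, steps=200):
        """Gradient-descend z so the generated mask matches the edited mask."""
        z = z.copy()
        for _ in range(steps):
            resid = generate_mask(z) - target_mask   # L2 residual
            grad = W.T @ resid                       # gradient of 0.5 * ||resid||^2
            z -= lr * grad
        return z

    z0 = rng.standard_normal(D_Z)
    edited = generate_mask(z0).copy()
    edited[:8] += 1.0                    # user-style modification of part of the mask
    z1 = edit_latent(z0, edited)         # latent code consistent with the edit
    edit_vector = z1 - z0                # reusable "editing vector" in latent space
    ```

    The last line mirrors the abstract's amortization step: the latent-space difference realizing an edit can be stored and applied to other images' latent codes, with or without a further round of optimization.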