PHYSICS-GUIDED MOTION DIFFUSION MODEL
    Invention Publication

    Publication No.: US20240169636A1

    Publication Date: 2024-05-23

    Application No.: US18317378

    Application Date: 2023-05-15

    IPC Classes: G06T13/40 G06T5/00 G06T13/80

    Abstract: Systems and methods are disclosed that improve the performance of synthesized motion generated by a diffusion neural network model. A physics-guided motion diffusion model incorporates physical constraints into the diffusion process to model the complex dynamics induced by forces and contact. Specifically, a physics-based motion projection module uses motion imitation in a physics simulator to project the denoised motion of a diffusion step to a physically plausible motion. The projected motion is further used in the next diffusion iteration to guide the denoising diffusion process. The use of physical constraints in the physics-guided motion diffusion model iteratively pulls the motion toward a physically plausible space, reducing artifacts such as floating, foot sliding, and ground penetration.
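    The denoise-then-project loop described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the patented method: `project_to_physical` stands in for motion imitation in a physics simulator (here it merely clamps joint heights above the ground plane), and `denoise_step` is a toy denoiser that shrinks toward an illustrative clean motion. All function names are hypothetical.

```python
def denoise_step(noisy, step, total_steps):
    # Toy stand-in for one diffusion denoising step: move the sample a
    # fraction of the way toward a (purely illustrative) clean motion in
    # which every joint sits at height 1.0.
    target = [1.0] * len(noisy)
    alpha = 1.0 / (total_steps - step)
    return [n + alpha * (t - n) for n, t in zip(noisy, target)]

def project_to_physical(motion, ground=0.0):
    # Hypothetical projection: clamp joint heights above the ground plane,
    # standing in for motion imitation in a physics simulator.
    return [max(h, ground) for h in motion]

def physics_guided_sampling(noisy_motion, steps=10):
    # Each denoised estimate is projected to a physically plausible motion,
    # and the projected motion seeds the next diffusion iteration.
    x = list(noisy_motion)
    for step in range(steps):
        x = denoise_step(x, step, steps)
        x = project_to_physical(x)
    return x
```

    Iterating the projection inside the sampling loop is what pulls samples toward the plausible space, the mechanism the abstract credits with reducing floating and ground penetration.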

    DIFFUSION-BASED OPEN-VOCABULARY SEGMENTATION

    Publication No.: US20240153093A1

    Publication Date: 2024-05-09

    Application No.: US18310414

    Application Date: 2023-05-01

    IPC Classes: G06T7/10 G06V10/40

    Abstract: An open-vocabulary diffusion-based panoptic segmentation system is not limited to performing segmentation using only object categories seen during training; it can also successfully segment object categories seen only during testing and inference. In contrast with conventional techniques, a text-conditioned diffusion (generative) model is used to perform the segmentation. The text-conditioned diffusion model is pre-trained to generate images from text captions, computing internal representations that provide spatially well-differentiated object features. The internal representations computed within the diffusion model comprise object masks and a semantic visual representation of each object. The semantic visual representation may be extracted from the diffusion model and used in conjunction with a text representation of a category label to classify the object. Objects are classified by associating the text representations of category labels with the object masks and their semantic visual representations to produce panoptic segmentation data.
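    The final classification step, associating text representations of category labels with a mask's semantic visual representation, can be sketched as a nearest-neighbor lookup in a shared embedding space. This is a simplified sketch: the feature vectors and labels are illustrative, and the names `classify_mask` and `cosine` are not from the disclosure. Because classification is a similarity lookup over label embeddings, any label that can be embedded works, including ones never seen during training.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def classify_mask(mask_feature, label_embeddings):
    # Associate the mask's semantic visual representation with the closest
    # text embedding among the candidate category labels.
    scores = {label: cosine(mask_feature, emb)
              for label, emb in label_embeddings.items()}
    return max(scores, key=scores.get)
```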

    LANDMARK DETECTION WITH AN ITERATIVE NEURAL NETWORK

    Publication No.: US20240096115A1

    Publication Date: 2024-03-21

    Application No.: US18243555

    Application Date: 2023-09-07

    Abstract: Landmark detection refers to the detection of landmarks within an image or a video, and is used in many computer vision tasks such as emotion recognition, face identity verification, hand tracking, gesture recognition, and eye gaze tracking. Current landmark detection methods rely on cascaded computation through cascaded networks or an ensemble of multiple models, which starts with an initial guess of the landmarks and iteratively produces corrected landmarks that match the input more finely. However, the iterations required by current methods typically increase the training memory cost linearly, and lack an obvious stopping criterion. Moreover, these methods tend to exhibit jitter in landmark detection results for video. The present disclosure improves on current landmark detection methods by providing landmark detection using an iterative neural network. Furthermore, when detecting landmarks in video, the present disclosure reduces jitter by reusing hidden states from previous frames.
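    The iterative refinement pattern can be sketched as follows. This is a toy corrector, not the disclosed network: the learned update is replaced by a fixed fraction of the residual, and the hidden state simply carries the previous correction. For video, passing the returned hidden state into the call for the next frame is the mechanism that smooths per-frame estimates; the name `refine_landmarks` and the stopping tolerance are assumptions.

```python
def refine_landmarks(initial, target, hidden=None, iters=5, tol=1e-3):
    # Toy iterative corrector: each iteration moves the estimate a fixed
    # fraction toward the target plus a small contribution from the hidden
    # state, and stops early once the correction is small -- a stand-in for
    # the learned update and a stopping criterion.
    x = list(initial)
    h = hidden if hidden is not None else [0.0] * len(x)
    for _ in range(iters):
        correction = [0.5 * (t - xi) + 0.1 * hi
                      for t, xi, hi in zip(target, x, h)]
        x = [xi + c for xi, c in zip(x, correction)]
        h = correction  # hidden state: reused across frames to reduce jitter
        if max(abs(c) for c in correction) < tol:
            break
    return x, h
```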

    JOINT REPRESENTATION LEARNING FROM IMAGES AND TEXT

    Publication No.: US20210056353A1

    Publication Date: 2021-02-25

    Application No.: US17000048

    Application Date: 2020-08-21

    IPC Classes: G06K9/62 G06N3/08

    Abstract: The disclosure provides a framework or system for learning visual representations using a large set of image/text pairs. The disclosure provides, for example, a method of visual representation learning, a joint representation learning system, and an artificial intelligence (AI) system that employs one or more of the trained models from the method or system. The AI system can be used, for example, in autonomous or semi-autonomous vehicles. In one example, the method of visual representation learning includes: (1) receiving a set of image embeddings from an image representation model and a set of text embeddings from a text representation model, and (2) training, employing mutual information, a critic function by learning relationships between the set of image embeddings and the set of text embeddings.
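    Training a critic with mutual information, as in step (2), is commonly realized with an InfoNCE-style contrastive bound; the sketch below assumes that formulation, since the abstract does not name a specific estimator. Each image embedding should score highest with its paired text embedding among all texts in the batch; `dot_critic` is one simple choice of critic function.

```python
import math

def dot_critic(img, txt):
    # Simplest critic: the dot product of an image and a text embedding.
    return sum(a * b for a, b in zip(img, txt))

def infonce_loss(image_embs, text_embs, critic):
    # Contrastive (InfoNCE) lower bound on mutual information: for each
    # image, the paired text should out-score all other texts in the batch.
    loss = 0.0
    n = len(image_embs)
    for i in range(n):
        scores = [critic(image_embs[i], t) for t in text_embs]
        log_prob = scores[i] - math.log(sum(math.exp(s) for s in scores))
        loss -= log_prob
    return loss / n
```

    Minimizing this loss tightens the bound, which is how the critic "learns relationships" between the two embedding sets.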

    FAIRNESS-BASED NEURAL NETWORK MODEL TRAINING USING REAL AND GENERATED DATA

    Publication No.: US20240144000A1

    Publication Date: 2024-05-02

    Application No.: US18307227

    Application Date: 2023-04-26

    IPC Classes: G06N3/08

    CPC Classes: G06N3/08

    Abstract: A neural network model is trained for fairness and accuracy using both real and synthesized training data, such as images. During training, a first sampling ratio between the real and synthesized training data is optimized. The first sampling ratio may comprise a value for each group (or attribute), where each value is optimized. A second sampling ratio defines the relative amounts of training data used for each of the groups. Furthermore, both the neural network model's accuracy and a fairness metric are used to update the first and second sampling ratios during training iterations. The neural network model may be trained using different classes of training data, and the second sampling ratio may vary for each class.
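    How the two sampling ratios interact when assembling a batch can be sketched as below. This is a minimal illustration, not the disclosed optimization: the ratios are given as fixed inputs (in the disclosure they are updated during training using accuracy and a fairness metric), and the name `sample_counts` is hypothetical.

```python
def sample_counts(batch_size, group_ratio, real_ratio):
    # group_ratio: relative amount of training data per group
    #              (the second sampling ratio).
    # real_ratio:  per-group fraction of real vs. synthesized data
    #              (the first sampling ratio, one value per group).
    total = sum(group_ratio.values())
    counts = {}
    for group, share in group_ratio.items():
        n_group = round(batch_size * share / total)
        n_real = round(n_group * real_ratio[group])
        counts[group] = {"real": n_real, "synthetic": n_group - n_real}
    return counts
```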

    DENOISING DIFFUSION GENERATIVE ADVERSARIAL NETWORKS

    Publication No.: US20230095092A1

    Publication Date: 2023-03-30

    Application No.: US17957143

    Application Date: 2022-09-30

    IPC Classes: G06T5/00

    Abstract: Apparatuses, systems, and techniques are presented to train and utilize one or more neural networks. A denoising diffusion generative adversarial network (denoising diffusion GAN) reduces the number of denoising steps during the reverse process. The denoising diffusion GAN does not assume a Gaussian distribution for large steps of the denoising process and applies a multimodal model to permit denoising with fewer steps. Systems and methods further minimize a divergence between a diffused real data distribution and a diffused generator distribution over several timesteps. Accordingly, various embodiments may enable faster sample generation, in which the samples are generated from noise using the denoising diffusion GAN.
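    The few-step reverse process has a simple shape: at each of a handful of timesteps, a generator proposes the next, less noisy sample, with no per-step Gaussian assumption. The sketch below shows only that control flow; the generator here is a placeholder argument (the real one is an adversarially trained network), and `reverse_process` is an assumed name.

```python
def reverse_process(x_T, generator, steps=4):
    # Few-step reverse process: the generator maps the current noisy sample
    # and the timestep to a denoised proposal. Because each step's
    # distribution is modeled by a GAN rather than assumed Gaussian, a small
    # number of steps can suffice (vs. hundreds in conventional diffusion).
    x = x_T
    for t in reversed(range(steps)):
        x = generator(x, t)
    return x
```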

    CONDITIONAL DIFFUSION MODEL FOR DATA-TO-DATA TRANSLATION

    Publication No.: US20240273682A1

    Publication Date: 2024-08-15

    Application No.: US18431527

    Application Date: 2024-02-02

    IPC Classes: G06T5/60 G06T5/50

    CPC Classes: G06T5/60 G06T5/50

    Abstract: Image restoration generally involves recovering a target clean image from a given image having noise, blurring, or other degraded features. Current image restoration solutions typically include a diffusion model that is trained for image restoration by a forward process that progressively diffuses data to noise, and then by learning, in a reverse process, to generate the data from the noise. However, the forward process relies on Gaussian noise to diffuse the original data; random Gaussian noise carries little or no structural information about the original data, whereas the degraded image itself is far more structurally informative. Similar problems exist for other data-to-data translation tasks. The present disclosure trains a conditional diffusion model for data translation from diffusion bridge(s) computed between a first version of the data and a second version of the data, which can yield a model that provides interpretable generation, sampling efficiency, and reduced processing time.
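    A diffusion bridge between two versions of the data, rather than between data and pure noise, can be sketched as a Brownian bridge: a linear interpolation between the endpoints plus noise that vanishes at both ends. This is a toy formulation under that assumption (the disclosure does not specify the exact bridge here), and `bridge_sample` and `sigma` are illustrative names.

```python
import math
import random

def bridge_sample(x_clean, x_degraded, t, sigma=0.1, rng=random):
    # Brownian-bridge interpolation: at t=0 the sample equals the clean data,
    # at t=1 it equals the degraded data, and in between it carries noise
    # scaled by sqrt(t * (1 - t)), which is zero at both endpoints.
    noise_scale = sigma * math.sqrt(t * (1.0 - t))
    return [(1.0 - t) * c + t * d + noise_scale * rng.gauss(0.0, 1.0)
            for c, d in zip(x_clean, x_degraded)]
```

    Because every intermediate state interpolates the structurally informative degraded input rather than unstructured Gaussian noise, each point along the bridge remains interpretable.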