Generating images for virtual try-on and pose transfer

    Publication number: US11861772B2

    Publication date: 2024-01-02

    Application number: US17678237

    Application date: 2022-02-23

    Applicant: Adobe Inc.

    CPC classification number: G06T11/60 G06N3/045 G06T7/11 G06T7/70

    Abstract: In implementations of systems for generating images for virtual try-on and pose transfer, a computing device implements a generator system to receive input data describing a first digital image that depicts a person in a pose and a second digital image that depicts a garment. Candidate appearance flow maps are computed that warp the garment based on the pose at different pixel-block sizes using a first machine learning model. The generator system generates a warped garment image by combining the candidate appearance flow maps as an aggregate per-pixel displacement map using a convolutional gated recurrent network. A conditional segmentation mask is predicted that segments portions of a geometry of the person using a second machine learning model. The generator system outputs a digital image that depicts the person in the pose wearing the garment based on the warped garment image and the conditional segmentation mask using a third machine learning model.
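
    The sketch below is a minimal PyTorch illustration of the aggregation and warping step described in the abstract, not Adobe's implementation: the ConvGRU cell, tensor shapes, and toy inputs are assumptions. It only shows how multi-scale candidate flow maps could be fused by a convolutional gated recurrent cell into a single per-pixel displacement map that then warps the garment image via grid sampling.

```python
# Hedged sketch: fuse candidate appearance flow maps with a small ConvGRU,
# then warp the garment image with the aggregated displacement map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvGRUCell(nn.Module):
    """A single convolutional GRU cell operating on 2-channel flow maps."""
    def __init__(self, channels=2, hidden=2, kernel=3):
        super().__init__()
        pad = kernel // 2
        self.gates = nn.Conv2d(channels + hidden, 2 * hidden, kernel, padding=pad)
        self.cand = nn.Conv2d(channels + hidden, hidden, kernel, padding=pad)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde

def aggregate_flows(candidate_flows):
    """Fuse per-scale candidate flow maps into one per-pixel displacement map."""
    cell = ConvGRUCell()
    h = torch.zeros_like(candidate_flows[0])
    for flow in candidate_flows:          # coarse-to-fine candidate flows
        h = cell(flow, h)
    return h                              # aggregate displacement map (B, 2, H, W)

def warp(garment, flow):
    """Warp the garment image by the displacement map using grid_sample."""
    b, _, hgt, wid = garment.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, hgt),
                            torch.linspace(-1, 1, wid), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    grid = base + flow.permute(0, 2, 3, 1)        # offsets in normalized coords
    return F.grid_sample(garment, grid, align_corners=True)

# Toy usage: three candidate flows (e.g. from different pixel-block sizes),
# all assumed to be upsampled to the garment resolution before aggregation.
garment = torch.rand(1, 3, 64, 48)
candidates = [torch.randn(1, 2, 64, 48) * 0.05 for _ in range(3)]
warped = warp(garment, aggregate_flows(candidates))
print(warped.shape)  # torch.Size([1, 3, 64, 48])
```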

    Model training with retrospective loss

    Publication number: US11797823B2

    Publication date: 2023-10-24

    Application number: US16793551

    Application date: 2020-02-18

    Applicant: Adobe Inc.

    Abstract: Generating a machine learning model that is trained using retrospective loss is described. A retrospective loss system receives an untrained machine learning model and a task for training the model. The retrospective loss system initially trains the model over warm-up iterations using task-specific loss that is determined based on a difference between predictions output by the model during training on input data and a ground truth dataset for the input data. Following the warm-up training iterations, the retrospective loss system continues to train the model using retrospective loss, which is model-agnostic and constrains the model such that a subsequently output prediction is more similar to the ground truth dataset than the previously output prediction. After determining that the model's outputs are within a threshold similarity to the ground truth dataset, the model is output with its current parameters as a trained model.
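
    A hedged sketch of the idea behind the retrospective term follows, assuming an L1 distance and a margin weight kappa; the exact patented formulation may differ. It contrasts the current prediction against a detached prediction from an earlier model snapshot, pulling the new output toward the ground truth and away from the older output.

```python
# Sketch only: task loss plus a retrospective-style term that rewards predictions
# closer to the ground truth than a past prediction. `kappa` is an assumed weight.
import torch
import torch.nn.functional as F

def retrospective_loss(current_pred, past_pred, target, kappa=2.0):
    """Task loss plus a term pulling the current prediction toward the target
    and away from the (detached) prediction of an earlier model snapshot."""
    task_loss = F.mse_loss(current_pred, target)
    retro = (kappa + 1.0) * F.l1_loss(current_pred, target) \
            - kappa * F.l1_loss(current_pred, past_pred.detach())
    return task_loss + retro

# Toy usage: during warm-up iterations only the task loss would be used;
# afterwards `past_pred` comes from a model snapshot at a previous training step.
target = torch.randn(8, 10)
past_pred = target + 0.5 * torch.randn(8, 10)
current_pred = (target + 0.1 * torch.randn(8, 10)).requires_grad_()
loss = retrospective_loss(current_pred, past_pred, target)
loss.backward()
print(float(loss))
```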

    FORM STRUCTURE EXTRACTION BY PREDICTING ASSOCIATIONS

    Publication number: US20230267345A1

    Publication date: 2023-08-24

    Application number: US18135948

    Application date: 2023-04-18

    Applicant: Adobe Inc.

    CPC classification number: G06N5/04 G06N3/08 G06N20/00 G06N20/10 G06V10/82

    Abstract: Techniques described herein extract form structures from a static form to facilitate making that static form reflowable. A method described herein includes accessing low-level form elements extracted from a static form. The method includes determining, using a first set of prediction models, second-level form elements based on the low-level form elements. Each second-level form element includes a respective one or more low-level form elements. The method further includes determining, using a second set of prediction models, high-level form elements based on the second-level form elements and the low-level form elements. Each high-level form element includes a respective one or more second-level form elements or low-level form elements. The method further includes generating a reflowable form based on the static form by, for each high-level form element, linking together the respective one or more second-level form elements or low-level form elements.
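
    The sketch below illustrates the level-by-level grouping idea in plain Python; the element types, the greedy grouping rule, and the proximity heuristic standing in for the learned association models are all assumptions, not the patented prediction models.

```python
# Illustrative sketch: predict pairwise associations between form elements and
# group them level by level (low-level -> second-level -> high-level).
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class FormElement:
    kind: str                      # e.g. "textrun", "widget", "field", "section"
    bbox: tuple                    # (x0, y0, x1, y1) on the static page
    children: List["FormElement"] = field(default_factory=list)

def group_elements(elements: List[FormElement],
                   associate: Callable[[FormElement, FormElement], float],
                   new_kind: str,
                   threshold: float = 0.5) -> List[FormElement]:
    """Greedily merge elements whose predicted association score passes a threshold."""
    groups: List[FormElement] = []
    for el in elements:
        for g in groups:
            if any(associate(el, child) > threshold for child in g.children):
                g.children.append(el)
                break
        else:
            groups.append(FormElement(new_kind, el.bbox, [el]))
    return groups

# Stand-in for the learned association models: a simple vertical-proximity
# heuristic plays the role of the prediction models here.
def proximity_score(a: FormElement, b: FormElement) -> float:
    gap = abs(a.bbox[1] - b.bbox[3])
    return 1.0 if gap < 12 else 0.0

low_level = [FormElement("textrun", (10, 10, 200, 22)),
             FormElement("widget",  (10, 26, 200, 40)),
             FormElement("textrun", (10, 120, 200, 132))]
second_level = group_elements(low_level, proximity_score, "field")
high_level = group_elements(second_level, proximity_score, "section")
print([(g.kind, len(g.children)) for g in high_level])
```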

    Form structure extraction by predicting associations

    Publication number: US11657306B2

    Publication date: 2023-05-23

    Application number: US16904263

    Application date: 2020-06-17

    Applicant: Adobe Inc.

    CPC classification number: G06N5/04 G06N3/08 G06N20/00 G06N20/10 G06V10/82

    Abstract: Techniques described herein extract form structures from a static form to facilitate making that static form reflowable. A method described herein includes accessing low-level form elements extracted from a static form. The method includes determining, using a first set of prediction models, second-level form elements based on the low-level form elements. Each second-level form element includes a respective one or more low-level form elements. The method further includes determining, using a second set of prediction models, high-level form elements based on the second-level form elements and the low-level form elements. Each high-level form element includes a respective one or more second-level form elements or low-level form elements. The method further includes generating a reflowable form based on the static form by, for each high-level form element, linking together the respective one or more second-level form elements or low-level form elements.

    Generating combined feature embedding for minority class upsampling in training machine learning models with imbalanced samples

    Publication number: US11631029B2

    Publication date: 2023-04-18

    Application number: US16564531

    Application date: 2019-09-09

    Applicant: Adobe, Inc.

    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for generating combined feature embeddings for minority class upsampling in training machine learning models with imbalanced training samples. For example, the disclosed systems can select training sample values from a set of training samples and a combination ratio value from a continuous probability distribution. Additionally, the disclosed systems can generate a combined synthetic training sample value by modifying the selected training sample values using the combination ratio value and combining the modified training sample values. Moreover, the disclosed systems can generate a combined synthetic ground truth label based on the combination ratio value. In addition, the disclosed systems can utilize the combined synthetic training sample value and the combined synthetic ground truth label to generate a combined synthetic training sample and utilize the combined synthetic training sample to train a machine learning model.
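
    The combination step reads much like mixup applied to minority-class samples; the sketch below, with an assumed Beta distribution for the combination ratio, shows how a combined synthetic sample and its blended label could be produced. It is an illustration of the idea, not the patented system.

```python
# Minimal numpy sketch: draw a combination ratio from a continuous distribution,
# blend two samples with it, and blend their labels the same way.
import numpy as np

rng = np.random.default_rng(0)

def combined_synthetic_sample(x_a, x_b, y_a, y_b, alpha=0.4):
    """Return a blended sample value and its blended (soft) ground-truth label."""
    lam = rng.beta(alpha, alpha)                 # combination ratio in (0, 1)
    x_new = lam * x_a + (1.0 - lam) * x_b        # combined synthetic sample value
    y_new = lam * y_a + (1.0 - lam) * y_b        # combined synthetic label
    return x_new, y_new

# Toy usage: upsample the minority class of an imbalanced set of embeddings.
minority = rng.normal(size=(5, 16))              # 5 minority-class embeddings
label = np.array([0.0, 1.0])                     # one-hot label for that class
i, j = rng.choice(len(minority), size=2, replace=False)
x_syn, y_syn = combined_synthetic_sample(minority[i], minority[j], label, label)
print(x_syn.shape, y_syn)
```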

    COMPRESSING DIGITAL IMAGES UTILIZING DEEP PERCEPTUAL SIMILARITY

    Publication number: US20220198717A1

    Publication date: 2022-06-23

    Application number: US17654529

    Application date: 2022-03-11

    Applicant: Adobe Inc.

    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing deep learning to intelligently determine compression settings for compressing a digital image. For instance, the disclosed system utilizes a neural network to generate predicted perceptual quality values for compression settings on a compression quality scale. The disclosed system fits the predicted compression distortions to a perceptual distortion characteristic curve for interpolating predicted perceptual quality values across the compression settings on the compression quality scale. Additionally, the disclosed system then performs a search over the predicted perceptual quality values for the compression settings along the compression quality scale to select a compression setting based on a perceptual quality threshold. The disclosed system generates a compressed digital image according to compression parameters for the selected compression setting.
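
    A small sketch of the curve-fitting and search steps follows, with the neural network's predictions stubbed out and a polynomial standing in for the perceptual distortion characteristic curve (both assumptions): fit a curve through sparse predicted quality values, interpolate over the full quality scale, then pick the lowest compression setting whose predicted quality clears the threshold.

```python
# Hedged sketch of the selection step only; the prediction values are stubs.
import numpy as np

def fit_characteristic_curve(settings, predicted_quality, degree=3):
    """Fit a smooth curve through sparse (setting, predicted quality) points."""
    coeffs = np.polyfit(settings, predicted_quality, deg=degree)
    return np.poly1d(coeffs)

def select_setting(curve, quality_scale, perceptual_threshold):
    """Search the interpolated curve for the smallest acceptable setting."""
    for q in quality_scale:                       # ascending quality settings
        if curve(q) >= perceptual_threshold:
            return q
    return quality_scale[-1]                      # fall back to highest quality

# Stub for the network's sparse predictions at a handful of settings.
sparse_settings = np.array([10, 30, 50, 70, 90])
sparse_predictions = np.array([0.42, 0.71, 0.84, 0.92, 0.97])

curve = fit_characteristic_curve(sparse_settings, sparse_predictions)
setting = select_setting(curve, np.arange(1, 101), perceptual_threshold=0.9)
print("selected compression quality setting:", setting)
```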
