IMAGE RESAMPLING FOR DCT BASED IMAGE ENCODING FORMATS USING MEMORY EFFICIENT TECHNIQUES

    Publication number: US20210067782A1

    Publication date: 2021-03-04

    Application number: US16556744

    Application date: 2019-08-30

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating digital images of modified resolution by filtering in the frequency domain. For example, the disclosed systems can utilize a tiling procedure to generate discrete cosine transform blocks. The disclosed systems can further filter the quantized data of the discrete cosine transform blocks within the frequency domain using, for example, a Lanczos resampling kernel. In addition, the digital image resolution modification system can utilize sub-band approximation and block composition to generate a modified digital image.
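
    The frequency-domain resampling described in this abstract can be illustrated with a short sketch: halving resolution by keeping a Lanczos-tapered low-frequency sub-band of each 8x8 DCT block and inverting it as a 4x4 block. The block size, window shape, and fixed 2x factor below are assumptions for illustration only, not the claimed memory-efficient pipeline.

        # Illustrative sketch of DCT-domain sub-band resampling (not the patented method).
        import numpy as np
        from scipy.fft import dctn, idctn

        def lanczos_window(n, a=2):
            # 1-D Lanczos taper over n frequency taps; w[0] = 1 so the DC term is preserved.
            x = np.linspace(0, a, n, endpoint=False)
            return np.sinc(x) * np.sinc(x / a)

        def downscale_block_2x(block8):
            coeffs = dctn(block8, norm="ortho")           # forward 2-D DCT of one 8x8 spatial block
            w = lanczos_window(4)
            sub = coeffs[:4, :4] * np.outer(w, w)         # keep and taper the low-frequency 4x4 sub-band
            return idctn(sub * 0.5, norm="ortho")         # 0.5 rescales orthonormal coefficients for a 4x4 block

        def downscale_image_2x(img):
            # Tile the image into 8x8 blocks, filter each in the frequency domain, compose the 4x4 outputs.
            h, w = img.shape
            out = np.zeros((h // 2, w // 2))
            for i in range(0, h - 7, 8):
                for j in range(0, w - 7, 8):
                    out[i // 2:i // 2 + 4, j // 2:j // 2 + 4] = downscale_block_2x(img[i:i + 8, j:j + 8])
            return out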

    TRANSFERRING HAIRSTYLES BETWEEN PORTRAIT IMAGES UTILIZING DEEP LATENT REPRESENTATIONS

    Publication number: US20240005578A1

    Publication date: 2024-01-04

    Application number: US18467397

    Application date: 2023-09-14

    Applicant: Adobe Inc.

    CPC classification number: G06T11/60 G06N3/08 G06T5/50 G06V40/165 G06V40/171

    Abstract: The disclosure describes one or more embodiments of systems, methods, and non-transitory computer-readable media that generate a transferred hairstyle image that depicts a person from a source image having a hairstyle from a target image. For example, the disclosed systems utilize a face-generative neural network to project the source and target images into latent vectors. In addition, in some embodiments, the disclosed systems quantify (or identify) activation values that control hair features for the projected latent vectors of the target and source image. Furthermore, in some instances, the disclosed systems selectively combine (e.g., via splicing) the projected latent vectors of the target and source image to generate a hairstyle-transfer latent vector by using the quantified activation values. Then, in one or more embodiments, the disclosed systems generate a transferred hairstyle image that depicts the person from the source image having the hairstyle from the target image by synthesizing the hairstyle-transfer latent vector.
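
    A minimal sketch of the latent splicing step follows, assuming the source and target latents have already been projected by a face-generative network. The threshold-based mask rule and the 512-dimensional toy latents are hypothetical illustrations, not the disclosed algorithm.

        import numpy as np

        def hair_activation_mask(hair_scores, threshold=0.5):
            # Identify latent dimensions whose (pre-computed) hair activation score is high.
            return hair_scores > threshold

        def splice_latents(source_latent, target_latent, hair_mask):
            # Take hair-controlling dimensions from the target latent, everything else from the source.
            return np.where(hair_mask, target_latent, source_latent)

        # Toy usage with 512-d latents and random stand-in activation scores.
        rng = np.random.default_rng(0)
        source_latent = rng.normal(size=512)
        target_latent = rng.normal(size=512)
        hair_scores = rng.uniform(size=512)
        transfer_latent = splice_latents(source_latent, target_latent, hair_activation_mask(hair_scores))
        # transfer_latent would then be synthesized into the hairstyle-transfer image by the generator.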

    TRANSFERRING HAIRSTYLES BETWEEN PORTRAIT IMAGES UTILIZING DEEP LATENT REPRESENTATIONS

    Publication number: US20220101577A1

    Publication date: 2022-03-31

    Application number: US17034845

    Application date: 2020-09-28

    Applicant: Adobe Inc.

    Abstract: The disclosure describes one or more embodiments of systems, methods, and non-transitory computer-readable media that generate a transferred hairstyle image that depicts a person from a source image having a hairstyle from a target image. For example, the disclosed systems utilize a face-generative neural network to project the source and target images into latent vectors. In addition, in some embodiments, the disclosed systems quantify (or identify) activation values that control hair features for the projected latent vectors of the target and source image. Furthermore, in some instances, the disclosed systems selectively combine (e.g., via splicing) the projected latent vectors of the target and source image to generate a hairstyle-transfer latent vector by using the quantified activation values. Then, in one or more embodiments, the disclosed systems generate a transferred hairstyle image that depicts the person from the source image having the hairstyle from the target image by synthesizing the hairstyle-transfer latent vector.

    Transferring hairstyles between portrait images utilizing deep latent representations

    Publication number: US11790581B2

    Publication date: 2023-10-17

    Application number: US17034845

    Application date: 2020-09-28

    Applicant: Adobe Inc.

    CPC classification number: G06T11/60 G06N3/08 G06T5/50 G06V40/165 G06V40/171

    Abstract: The disclosure describes one or more embodiments of systems, methods, and non-transitory computer-readable media that generate a transferred hairstyle image that depicts a person from a source image having a hairstyle from a target image. For example, the disclosed systems utilize a face-generative neural network to project the source and target images into latent vectors. In addition, in some embodiments, the disclosed systems quantify (or identify) activation values that control hair features for the projected latent vectors of the target and source image. Furthermore, in some instances, the disclosed systems selectively combine (e.g., via splicing) the projected latent vectors of the target and source image to generate a hairstyle-transfer latent vector by using the quantified activation values. Then, in one or more embodiments, the disclosed systems generate a transferred hairstyle image that depicts the person from the source image having the hairstyle from the target image by synthesizing the hairstyle-transfer latent vector.
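
    For the "quantify activation values" step mentioned in the abstract, one illustrative (hypothetical) approach is a perturbation test: nudge each latent dimension and measure how much the generated image changes inside the hair region. The generate() and hair_mask() callables below are stand-ins for a face-generative network and a hair segmenter, not the disclosed method.

        import numpy as np

        def quantify_hair_activations(latent, generate, hair_mask, eps=1.0):
            base = generate(latent)                       # H x W x 3 image from the generator stand-in
            mask = hair_mask(base)                        # boolean H x W hair-region mask
            scores = np.zeros(latent.shape[0])
            for d in range(latent.shape[0]):
                perturbed = latent.copy()
                perturbed[d] += eps
                diff = np.abs(generate(perturbed) - base)
                scores[d] = diff[mask].mean()             # average change restricted to the hair region
            return scores                                 # higher score -> dimension influences hair features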
