TRANSFERRING HAIRSTYLES BETWEEN PORTRAIT IMAGES UTILIZING DEEP LATENT REPRESENTATIONS
Abstract:
The disclosure describes one or more embodiments of systems, methods, and non-transitory computer-readable media that generate a transferred hairstyle image that depicts a person from a source image having a hairstyle from a target image. For example, the disclosed systems utilize a face-generative neural network to project the source and target images into latent vectors. In addition, in some embodiments, the disclosed systems quantify (or identify) activation values that control hair features within the projected latent vectors of the target and source images. Furthermore, in some instances, the disclosed systems selectively combine (e.g., via splicing) the projected latent vectors of the target and source images to generate a hairstyle-transfer latent vector by using the quantified activation values. Then, in one or more embodiments, the disclosed systems generate a transferred hairstyle image that depicts the person from the source image having the hairstyle from the target image by synthesizing the hairstyle-transfer latent vector.
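The splicing step described above can be illustrated with a minimal sketch. The sketch below assumes a pretrained StyleGAN-like face-generative network whose synthesis stage accepts a layer-wise latent code, an assumed GAN-inversion routine `project` that optimizes a latent code to reconstruct an input image, and a set of hypothetical `hair_indices` standing in for the latent entries whose activation values are found to control hair features; none of these names come from the disclosure, and the abstract does not specify how the activation values are quantified, so this is an interpretive sketch rather than the claimed method.

```python
import torch

def transfer_hairstyle(generator, project, source_img, target_img, hair_indices):
    """Sketch: render the source person with the target image's hairstyle.

    generator:    assumed StyleGAN-like network; generator.synthesis maps a
                  layer-wise latent code of shape (1, num_layers, latent_dim)
                  to an image.
    project:      assumed GAN-inversion routine returning a latent code of
                  shape (num_layers, latent_dim) for an input image.
    hair_indices: hypothetical layer indices whose activation values are
                  taken to control hair features.
    """
    # Project both portraits into the generator's latent space.
    w_source = project(generator, source_img)
    w_target = project(generator, target_img)

    # Splice: keep the source latent everywhere except the hair-controlling
    # entries, which are copied from the target latent.
    w_transfer = w_source.clone()
    w_transfer[hair_indices] = w_target[hair_indices]

    # Synthesize the hairstyle-transfer latent vector into an image.
    return generator.synthesis(w_transfer.unsqueeze(0))
```

In this reading, the "quantified activation values" would determine which entries belong in `hair_indices`, so the splice transfers hair features while leaving identity-controlling entries of the source latent untouched.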