DIVERSITY-PRESERVED DOMAIN ADAPTATION USING TEXT-TO-IMAGE DIFFUSION FOR 3D GENERATIVE MODEL
Abstract:
An embodiment of the present disclosure provides a three-dimensional image generation method capable of domain adaptation, performed by a server. The method includes: generating N target images corresponding to a second domain by converting the styles of N previously collected source images corresponding to a first domain according to the instructions of an input text; selecting, from among the N target images, only the target images that satisfy a preset condition; and training a previously built three-dimensional generative model on the selected target images so that it generates multiple three-dimensional images corresponding to a specific domain from given noise data and a preset camera pose parameter.
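The abstract describes a three-step pipeline: text-guided style conversion, filtering by a preset condition, and fine-tuning a pose-conditioned 3D generator. Below is a minimal sketch of that pipeline in Python. The diffusion checkpoint (`runwayml/stable-diffusion-v1-5`), the `satisfies_condition` filter, the `Toy3DGenerator` class, and the L1 fine-tuning loss are all illustrative assumptions; the patent names none of these, and a real system would likely use an EG3D-style generator with an adversarial or score-distillation objective.

```python
import torch
import torch.nn as nn
from diffusers import StableDiffusionImg2ImgPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# ---- Step 1: text-guided style conversion (first domain -> second domain) ----
# Assumed checkpoint; the patent does not name a specific diffusion model.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"
).to(device)

def stylize(source_images, text_instruction, strength=0.6):
    """Convert each of the N source images to the target style per the text."""
    return [pipe(prompt=text_instruction, image=img, strength=strength).images[0]
            for img in source_images]

# ---- Step 2: keep only target images that satisfy a preset condition ----
def satisfies_condition(image) -> bool:
    # Placeholder: the abstract only says "a preset condition"; a real system
    # might threshold an identity-preservation or text-alignment score here.
    return True

def select(target_images):
    return [img for img in target_images if satisfies_condition(img)]

# ---- Step 3: fine-tune a pre-built pose-conditioned 3D generative model ----
class Toy3DGenerator(nn.Module):
    """Stand-in for a previously built 3D generator G(z, camera_pose)."""
    def __init__(self, z_dim=64, pose_dim=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + pose_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * 32 * 32), nn.Tanh(),
        )

    def forward(self, z, pose):
        # Map noise data + camera pose parameter to an image.
        return self.net(torch.cat([z, pose], dim=-1)).view(-1, 3, 32, 32)

def finetune(generator, selected_images, steps=100, z_dim=64, pose_dim=6):
    """Pull samples G(z, pose) toward the selected target images.
    The L1 loss is a stand-in for the (unspecified) training objective."""
    opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
    for _ in range(steps):
        z = torch.randn(selected_images.size(0), z_dim)
        pose = torch.randn(selected_images.size(0), pose_dim)  # preset poses in practice
        loss = torch.nn.functional.l1_loss(generator(z, pose), selected_images)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator
```

After fine-tuning, sampling fresh noise vectors with preset camera pose parameters yields multiple views in the adapted domain, e.g. `generator(torch.randn(4, 64), fixed_poses)`.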