Generative neural network distillation
Abstract:
A compact generative neural network can be distilled from a teacher generative neural network using a training network. The compact (student) network can be trained on the input data and output data of the teacher network. The training network trains the student network using a discrimination layer and one or more types of losses, such as perceptual loss and adversarial loss.
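The combined objective described above (a perceptual term matching the student's output to the teacher's, plus an adversarial term from a discrimination layer) could be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; the function names, the use of mean-squared feature distance as the perceptual term, the non-saturating adversarial term, and the `adv_weight` parameter are all assumptions.

```python
import numpy as np

def perceptual_loss(student_feats, teacher_feats):
    # Mean-squared distance between feature representations of the
    # student's output and the teacher's output (stand-in perceptual term).
    return float(np.mean((student_feats - teacher_feats) ** 2))

def adversarial_loss(disc_score_on_student):
    # Non-saturating GAN generator loss: -log D(student output),
    # where D is the discrimination layer's probability that the
    # student's output is "real" (i.e., teacher-like).
    return float(-np.log(disc_score_on_student))

def distillation_loss(student_feats, teacher_feats, disc_score, adv_weight=0.01):
    # Total student objective: perceptual term plus a weighted
    # adversarial term, as the abstract's loss combination suggests.
    return (perceptual_loss(student_feats, teacher_feats)
            + adv_weight * adversarial_loss(disc_score))
```

In a training loop, the student's parameters would be updated to minimize this combined loss on the teacher's input/output pairs, while the discrimination layer is trained to separate student outputs from teacher outputs.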