Publication No.: US11003865B1
Publication Date: 2021-05-11
Application No.: US16879457
Filing Date: 2020-05-20
Applicant: Google LLC
Inventor: Kenton Chiu Tsun Lee , Kelvin Gu , Zora Tung , Panupong Pasupat , Ming-Wei Chang
Abstract: Systems and methods for pre-training and fine-tuning of neural-network-based language models are disclosed in which a neural-network-based textual knowledge retriever is trained along with the language model. In some examples, the knowledge retriever obtains documents from an unlabeled pre-training corpus, generates its own training tasks, and learns to retrieve documents relevant to those tasks. In some examples, the knowledge retriever is further refined using supervised open-QA questions. The framework of the present technology provides models that can intelligently retrieve helpful information from a large unlabeled corpus, rather than requiring all potentially relevant information to be stored implicitly in the parameters of the neural network. This framework may thus reduce the storage space and complexity of the neural network, and also enable the model to handle new tasks more effectively, including tasks that differ from those on which it was pre-trained.
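The abstract describes jointly training a retriever with a language model so that relevant documents are fetched rather than memorized in the network's parameters. A common way to realize this (used here purely as an illustrative sketch, not as the patent's specific claims) is to score each corpus document against the query by embedding inner product, form a retrieval distribution with a softmax, and marginalize the language model's prediction over the retrieved documents so that gradients reach both components. The embedding sizes, corpus, and per-document likelihoods below are toy stand-ins:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

# Toy embeddings: a query vector q and a tiny "corpus" of document vectors.
rng = np.random.default_rng(0)
q = rng.normal(size=4)
docs = rng.normal(size=(3, 4))

# Retrieval distribution p(z|x) ∝ exp(q · d_z): the retriever scores each
# document by inner product with the query and normalizes with a softmax.
p_retrieve = softmax(docs @ q)

# Hypothetical per-document answer likelihoods p(y|x, z), standing in for
# the language model's prediction conditioned on each retrieved document.
p_answer_given_doc = np.array([0.9, 0.2, 0.1])

# Marginal likelihood p(y|x) = sum_z p(z|x) * p(y|x, z). Maximizing this
# lets the training signal flow into both the retriever (which documents
# to fetch) and the language model (what to predict given a document).
p_answer = float(p_retrieve @ p_answer_given_doc)
```

Because `p_answer` is a convex combination of the per-document likelihoods, improving retrieval of the most helpful document directly increases the training objective, which is the mechanism by which the retriever "learns to retrieve documents relevant to those tasks."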