SYSTEMS AND METHODS FOR CROSS-LINGUAL TRANSFER LEARNING
ABSTRACT
Embodiments described herein provide a method of training a language model by tuning a prompt. The method comprises masking tokens of first and second conversational texts which have the same semantic meaning but are in different languages (e.g., one being a translation of the other). The masked texts are input to a language model with a prepended soft prompt. The language model generates respective predicted outputs. A loss objective, including a masked language model loss, is computed. The prompt is updated based on the computed loss objective via backpropagation while the language model is kept frozen.
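The following is a minimal sketch of the soft-prompt tuning step described above, assuming a HuggingFace-style multilingual masked language model in PyTorch. The model name, prompt length, masking rate, and helper names (e.g., mask_tokens, mlm_loss) are illustrative assumptions, not taken from the embodiments themselves.

```python
# Sketch: soft-prompt tuning with a masked language model (MLM).
# Only the soft prompt receives gradient updates; the LM stays frozen.
# Model name, prompt length, and masking rate are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "xlm-roberta-base"              # assumption: any multilingual MLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
for p in model.parameters():                 # freeze the language model
    p.requires_grad = False

prompt_len = 20                              # assumed soft-prompt length
embed_dim = model.get_input_embeddings().embedding_dim
soft_prompt = torch.nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)
optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)

def mask_tokens(input_ids, mask_prob=0.15):
    """Randomly replace tokens with [MASK]; labels are -100 at unmasked positions."""
    labels = input_ids.clone()
    special = torch.tensor(
        tokenizer.get_special_tokens_mask(
            input_ids[0].tolist(), already_has_special_tokens=True),
        dtype=torch.bool).unsqueeze(0)
    mask = (torch.rand(input_ids.shape) < mask_prob) & ~special
    labels[~mask] = -100
    masked = input_ids.clone()
    masked[mask] = tokenizer.mask_token_id
    return masked, labels

def mlm_loss(text):
    """Prepend the soft prompt to the masked text embeddings and compute MLM loss."""
    enc = tokenizer(text, return_tensors="pt")
    masked_ids, labels = mask_tokens(enc["input_ids"])
    token_embeds = model.get_input_embeddings()(masked_ids)
    inputs_embeds = torch.cat([soft_prompt.unsqueeze(0), token_embeds], dim=1)
    # Prompt positions carry no MLM targets, so pad their labels with -100.
    pad = torch.full((1, prompt_len), -100, dtype=torch.long)
    labels = torch.cat([pad, labels], dim=1)
    attn = torch.ones(inputs_embeds.shape[:2], dtype=torch.long)
    out = model(inputs_embeds=inputs_embeds, attention_mask=attn, labels=labels)
    return out.loss

# One training step on a parallel pair (texts with the same meaning in two languages).
en_text = "How can I reset my password?"
fr_text = "Comment puis-je réinitialiser mon mot de passe ?"
loss = mlm_loss(en_text) + mlm_loss(fr_text)   # combined loss objective
loss.backward()                                 # gradients flow only to soft_prompt
optimizer.step()
optimizer.zero_grad()
```

In this sketch, freezing the model parameters and passing only soft_prompt to the optimizer realizes the update of the prompt via backpropagation while keeping the language model frozen.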