-
Publication No.: US20240220732A1
Publication Date: 2024-07-04
Application No.: US18148045
Filing Date: 2022-12-29
Applicant: Google LLC
Inventor: Raghav Gupta , Yuan Cao , Abhinav Kumar Rastogi , Harrison J. Lee , Jeffrey Liangjie Zhao
CPC classification number: G06F40/35 , G06F16/367
Abstract: Example methods include determining an input schema representation for a task. The schema representation comprises natural language slot descriptions and intent descriptions, wherein a respective index is associated with each of the slot descriptions and each of the intent descriptions. The methods include determining a contextual representation comprising a concatenation of a history of dialog sequences exchanged between a user and a service agent, wherein the dialog sequences describe a context for the task. The methods include training, based on a concatenation of the input schema representation and the contextual representation, a sequence-to-sequence language model to predict a sequence of dialog states for an input task, wherein the sequence of dialog states comprises an assignment of values to slots for which the user has indicated a preference in dialog sequences corresponding to the input task. The methods include providing the trained sequence-to-sequence language model.
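A minimal sketch of how the described training input might be assembled: indexed natural-language slot and intent descriptions are concatenated with the dialog history to form the model input, and the target is the indexed slot assignments. The serialization format, helper names, and the restaurant-booking example below are illustrative assumptions, not the patent's exact scheme.

```python
# Illustrative sketch only: the concatenation format and examples are assumed,
# not taken from the patent claims.

def build_schema_representation(slot_descriptions, intent_descriptions):
    """Associate an index with each natural-language slot/intent description."""
    slot_part = " ".join(f"{i}={desc}" for i, desc in enumerate(slot_descriptions))
    intent_part = " ".join(f"i{i}={desc}" for i, desc in enumerate(intent_descriptions))
    return f"{slot_part} {intent_part}"

def build_context_representation(dialog_history):
    """Concatenate the history of dialog sequences between user and service agent."""
    return " ".join(f"[{speaker}] {utterance}" for speaker, utterance in dialog_history)

def build_training_example(slot_descriptions, intent_descriptions,
                           dialog_history, dialog_state):
    """Model input = schema representation + context; target = indexed slot values."""
    source = (build_schema_representation(slot_descriptions, intent_descriptions)
              + " " + build_context_representation(dialog_history))
    target = " ".join(f"{slot_index}={value}" for slot_index, value in dialog_state)
    return source, target

# Hypothetical restaurant-booking task.
source, target = build_training_example(
    slot_descriptions=["number of people for the reservation", "time of the reservation"],
    intent_descriptions=["make a restaurant reservation"],
    dialog_history=[("user", "Table for two at 7 pm, please."),
                    ("system", "Sure, booking a table for two at 7 pm.")],
    dialog_state=[(0, "2"), (1, "7 pm")],
)
print(source)  # schema + dialog history, fed to the sequence-to-sequence model
print(target)  # e.g. "0=2 1=7 pm", the predicted dialog state
```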
-
Publication No.: US20240221731A1
Publication Date: 2024-07-04
Application No.: US18148037
Filing Date: 2022-12-29
Applicant: Google LLC
Inventor: Raghav Gupta , Yuan Cao , Abhinav Kumar Rastogi , Harrison J. Lee , Jeffrey Liangjie Zhao
CPC classification number: G10L15/1815 , G06F40/35 , G10L15/063 , G10L2015/0633
Abstract: Example methods include determining an input prompt comprising an utterance labeled with a sequence of slot-value pairs, wherein the sequence of slot-value pairs indicates possible slots and values in the utterance, and wherein the utterance relates to a task. The methods include determining a contextual representation comprising a concatenation of a history of utterances exchanged between a user and a service agent. The utterances describe a context for the task. The methods include training, based on a concatenation of the input prompt and the contextual representation, a sequence-to-sequence language model to predict a sequence of dialog states for an input task. The sequence of dialog states comprises an assignment of values to slots for which the user has indicated a preference in dialog sequences. The methods include providing the trained sequence-to-sequence language model.
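A minimal sketch of how this variant's training input might be assembled: here the prompt is an example utterance labeled with its slot-value pairs, concatenated with the dialog history, and the target is again the dialog state. The labeling format, helper names, and the flight-booking example are illustrative assumptions rather than the patent's exact serialization.

```python
# Illustrative sketch only: prompt format and example values are assumed,
# not taken from the patent claims.

def build_input_prompt(example_utterance, slot_value_pairs):
    """Label an example utterance with the possible slots and values it contains."""
    labels = " ".join(f"{slot}={value}" for slot, value in slot_value_pairs)
    return f"[example] {example_utterance} [slots] {labels}"

def build_context_representation(dialog_history):
    """Concatenate the history of utterances between user and service agent."""
    return " ".join(f"[{speaker}] {utterance}" for speaker, utterance in dialog_history)

def build_training_example(example_utterance, slot_value_pairs,
                           dialog_history, dialog_state):
    """Model input = labeled example prompt + context; target = the user's preferences."""
    source = (build_input_prompt(example_utterance, slot_value_pairs)
              + " " + build_context_representation(dialog_history))
    target = " ".join(f"{slot}={value}" for slot, value in dialog_state)
    return source, target

# Hypothetical flight-booking task.
source, target = build_training_example(
    example_utterance="I need a flight to Boston on Friday for one passenger.",
    slot_value_pairs=[("destination", "Boston"),
                      ("departure_day", "Friday"),
                      ("passengers", "1")],
    dialog_history=[("user", "Book me a flight to Seattle next Monday."),
                    ("system", "How many passengers?"),
                    ("user", "Just one.")],
    dialog_state=[("destination", "Seattle"),
                  ("departure_day", "next Monday"),
                  ("passengers", "1")],
)
print(source)  # labeled example prompt + dialog history
print(target)  # slot-value assignments the user has expressed a preference for
```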
-