-
Publication No.: US20250086405A1
Publication Date: 2025-03-13
Application No.: US18481803
Filing Date: 2023-10-05
Applicant: GOOGLE LLC
Inventor: Swaroop Mishra , Ragha Kotikalapudi , Obaid Sarvana , Sahitya Potluri , YaGuang Li , Taylor Bos , Steven Zheng , Hanzhao Lin , Chenkai Kuang , Heng-Tze Cheng , Ed H. Chi , Quoc Le
Abstract: Some implementations relate to generating a training and/or evaluation dataset with LLM prompts (e.g., derived from user queries) based on a prompt complexity. An input prompt, for example derived from a user query, is received. The input prompt is decomposed into a prompt tree comprising a plurality of nodes. The plurality of nodes comprise: a plurality of leaf nodes corresponding to simple sub-prompts of the input prompt; a plurality of branch nodes, each corresponding to multiple simple sub-prompts; and a root node corresponding to the input prompt. A prompt complexity is determined based on a path length of the prompt tree. The prompt complexity is compared to a threshold complexity. If the prompt complexity is above the threshold complexity, the input prompt is included in a set of training prompts and/or a set of evaluation prompts.
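The selection logic described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration (the node class, the example prompts, and the use of longest root-to-leaf path as the complexity measure are assumptions, not details from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class PromptNode:
    """A node in the prompt tree; a node with no children is a leaf (simple sub-prompt)."""
    text: str
    children: list = field(default_factory=list)

def path_length(node: PromptNode) -> int:
    """Longest root-to-leaf path length of the prompt tree rooted at `node`."""
    if not node.children:
        return 1
    return 1 + max(path_length(c) for c in node.children)

def include_in_dataset(root: PromptNode, threshold: int) -> bool:
    """Include the input prompt in the training/evaluation set only if its
    complexity (here: tree path length) exceeds the threshold."""
    return path_length(root) > threshold

# Hypothetical example: root -> one branch node -> two leaf sub-prompts
leaf_a = PromptNode("find the capital of France")
leaf_b = PromptNode("find that city's population")
branch = PromptNode("capital and its population", [leaf_a, leaf_b])
root = PromptNode("What is the population of the capital of France?", [branch])

selected = include_in_dataset(root, threshold=2)  # path length 3 > 2
```

Note that a deeper tree (a longer root-to-leaf path) directly corresponds to a prompt that decomposes into more layers of sub-prompts, which is why path length serves as the complexity proxy here.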
-
Publication No.: US20250045534A1
Publication Date: 2025-02-06
Application No.: US18378434
Filing Date: 2023-10-10
Applicant: GOOGLE LLC
Inventor: Swaroop Mishra , Ragha Kotikalapudi , Sahitya Potluri , Taylor Bos , YaGuang Li , Hanzhao Lin , Steven Zheng , Yu Du , Chen Zhu , Chenkai Kuang , Xinying Song , Heng-Tze Cheng , Ed H. Chi , Quoc Le
IPC: G06F40/40
Abstract: Implementations relate to a method implemented by one or more processors, the method including: receiving natural language (NL) based input associated with a client device; generating, using a large language model (LLM) and based on processing the NL based input, LLM output; determining, based on the LLM output, a sequence of LLM responses, the sequence of LLM responses including at least one intermediate LLM response and a final LLM response. In some implementations, the method may further include causing the final LLM response to be rendered at the client device. In additional or alternative implementations, the method may further include storing, as an instance of training data for fine-tuning the LLM or an additional LLM, the NL based input along with the final LLM response.
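The response-sequence flow in this abstract can be sketched as below. Everything here is an assumed illustration: the `llm` callable, its dict-shaped output, and the stand-in model are hypothetical, standing in for whatever LLM interface an actual implementation would use:

```python
def generate_response_sequence(nl_input: str, llm) -> list:
    """Process NL-based input with an LLM, collecting intermediate responses
    until a final response is produced."""
    responses = []
    output = llm(nl_input)
    while output["kind"] == "intermediate":
        responses.append(output["text"])
        output = llm(output["text"])  # feed the intermediate response back in
    responses.append(output["text"])  # the final LLM response
    return responses

def fake_llm(text: str) -> dict:
    """Hypothetical stand-in model: emits one intermediate step, then a final answer."""
    if text.startswith("Q:"):
        return {"kind": "intermediate", "text": "step: reasoning"}
    return {"kind": "final", "text": "final answer"}

seq = generate_response_sequence("Q: example question", fake_llm)
final_response = seq[-1]  # rendered at the client device

# Stored as an instance of training data for fine-tuning the LLM (or another LLM):
training_instance = {"input": "Q: example question", "target": final_response}
```

Pairing the original NL input with only the final response, as the abstract describes, lets the intermediate steps be discarded at training time while still capturing the end-to-end behavior.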
-