PROMPT COMPLEXITY FOR LARGE LANGUAGE MODELS

    Publication Number: US20250086405A1

    Publication Date: 2025-03-13

    Application Number: US18481803

    Filing Date: 2023-10-05

    Applicant: GOOGLE LLC

    Abstract: Some implementations relate to generating a training and/or evaluation dataset with LLM prompts (e.g., derived from user queries) based on a prompt complexity. An input prompt, for example derived from a user query, is received. The input prompt is decomposed into a prompt tree comprising a plurality of nodes. The plurality of nodes comprise: a plurality of leaf nodes corresponding to simple sub-prompts of the input prompt; a plurality of branch nodes, each corresponding to a sub-prompt composed of multiple simple sub-prompts; and a root node corresponding to the input prompt. A prompt complexity is determined based on a path length of the prompt tree. The prompt complexity is compared to a threshold complexity. If the prompt complexity is above the threshold complexity, the input prompt is included in a set of training prompts and/or a set of evaluation prompts.
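    The sketch below illustrates the workflow the abstract describes: decompose a prompt into a tree, measure complexity as the longest root-to-leaf path, and keep prompts above a threshold. All names (PromptNode, decompose_prompt, max_path_length) are hypothetical, and the string-split decomposition is a stand-in; in the actual system the decomposition itself could be performed by an LLM.

```python
# Minimal sketch, not the patent's implementation. Names and the decomposition
# heuristic are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class PromptNode:
    """A node in the prompt tree: the root is the full prompt, leaves are simple sub-prompts."""
    text: str
    children: List["PromptNode"] = field(default_factory=list)


def decompose_prompt(input_prompt: str) -> PromptNode:
    """Placeholder decomposition: split on ' and then ' into simple sub-prompts."""
    parts = [p.strip() for p in input_prompt.split(" and then ") if p.strip()]
    root = PromptNode(text=input_prompt)
    if len(parts) > 1:
        root.children = [PromptNode(text=p) for p in parts]
    return root


def max_path_length(node: PromptNode) -> int:
    """Prompt complexity: the longest root-to-leaf path length (in edges) of the tree."""
    if not node.children:
        return 0
    return 1 + max(max_path_length(child) for child in node.children)


def select_training_prompts(prompts: List[str], threshold: int) -> List[str]:
    """Keep only prompts whose complexity exceeds the threshold complexity."""
    selected = []
    for prompt in prompts:
        tree = decompose_prompt(prompt)
        if max_path_length(tree) > threshold:
            selected.append(prompt)
    return selected


if __name__ == "__main__":
    candidates = [
        "Summarize this article",
        "Translate the report to French and then list its three key findings",
    ]
    # Only the multi-step prompt survives the complexity filter.
    print(select_training_prompts(candidates, threshold=0))
```

    Prompts that pass the filter would then be added to the training and/or evaluation set as described in the abstract.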

    INSTRUCTION FOLLOWING IN LARGE LANGUAGE MODELS TO REDUCE COMPUTATIONAL RESOURCE CONSUMPTION

    Publication Number: US20240394471A1

    Publication Date: 2024-11-28

    Application Number: US18231586

    Filing Date: 2023-08-08

    Applicant: GOOGLE LLC

    Abstract: Implementations relate to improving instruction following capabilities of large language models (LLMs) using instruction decomposition, self-evaluation, and optionally progressive refinement. Processor(s) of a system can: obtain natural language (NL) based input; generate a plurality of candidate responses and evaluate the candidate responses, using an LLM, based on instructions included in the NL based input; and progressively refine the candidate responses until it is determined that one or more termination criteria are satisfied. In some implementations, the NL based input can be received from a client device. In these implementations, a given candidate response that is progressively refined can be rendered for presentation at the client device, responsive to the NL based input. In additional or alternative implementations, the NL based input can be obtained from database(s). In these implementations, a given candidate response that is progressively refined can be utilized in fine-tuning of the LLM.
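    Below is a minimal sketch of the decompose / self-evaluate / progressively-refine loop described above. The LLM calls are stubbed behind a generic callable, and the function names, prompts, and scoring scheme are illustrative assumptions rather than the patent's actual interfaces.

```python
# Minimal sketch under stated assumptions; not the patent's implementation.
from typing import Callable, List, Tuple

LLMFn = Callable[[str], str]  # any text-in / text-out LLM client


def extract_instructions(nl_input: str) -> List[str]:
    """Naive instruction decomposition: treat each sentence as one instruction."""
    return [s.strip() for s in nl_input.split(".") if s.strip()]


def self_evaluate(llm: LLMFn, response: str, instructions: List[str]) -> float:
    """Ask the LLM whether each instruction is satisfied; return the fraction satisfied."""
    satisfied = 0
    for instruction in instructions:
        verdict = llm(
            f"Instruction: {instruction}\nResponse: {response}\n"
            "Is the instruction satisfied? Answer yes or no."
        )
        if verdict.strip().lower().startswith("yes"):
            satisfied += 1
    return satisfied / max(len(instructions), 1)


def progressively_refine(
    llm: LLMFn,
    nl_input: str,
    num_candidates: int = 3,
    max_rounds: int = 4,
    target_score: float = 1.0,
) -> Tuple[str, float]:
    """Generate candidates, score them against the instructions, and refine the best
    candidate until a termination criterion (target score or round budget) is met."""
    instructions = extract_instructions(nl_input)
    candidates = [
        llm(f"Respond to: {nl_input}\n(candidate {i + 1})") for i in range(num_candidates)
    ]
    best, best_score = "", -1.0
    for _ in range(max_rounds):
        for candidate in candidates:
            score = self_evaluate(llm, candidate, instructions)
            if score > best_score:
                best, best_score = candidate, score
        # Termination criterion: all decomposed instructions are satisfied.
        if best_score >= target_score:
            break
        # Progressive refinement: ask the LLM to revise the best candidate so far.
        candidates = [
            llm(
                f"Original request: {nl_input}\nPrevious response: {best}\n"
                "Revise the response so that every instruction is followed."
            )
        ]
    return best, best_score
```

    With a real LLM client substituted for `llm`, the refined response could either be rendered at the client device or stored as a fine-tuning example, matching the two uses described in the abstract.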
