REDUCING COMPUTATIONAL RESOURCE USAGE VIA TRAINING AND/OR UTILIZING LARGE LANGUAGE MODELS

    Publication number: US20240378394A1

    Publication date: 2024-11-14

    Application number: US18231650

    Filing date: 2023-08-08

    Applicant: GOOGLE LLC

    Abstract: Implementations described herein relate to using self-evaluation when utilizing a large language model (LLM) to generate a response to a natural language (NL) based input. The LLM can be used to process the NL based input to generate a plurality of responses, and to generate a critique of those responses by comparing the responses to a set of response evaluation criteria. One of the responses can then be selected based on the comparison with the set of response evaluation criteria, which can vary from one NL based input to another. If the NL based input was obtained from a user of a client device during an inference stage, then the selected response can be rendered for presentation to the user. If the NL based input was obtained during a training stage, then the selected response can be stored as a training instance, optionally in association with additional data.
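
    A minimal Python sketch of the self-evaluation flow the abstract describes, shown only for illustration: the llm_generate callable, the number of candidates, the criteria list, and the prompt wording are assumptions, not the patent's actual implementation.

    ```python
    from typing import Callable, List

    def select_response_via_self_evaluation(
        llm_generate: Callable[[str], str],   # assumed text-in/text-out LLM interface
        nl_input: str,
        criteria: List[str],                  # evaluation criteria; may vary per NL input
        num_candidates: int = 4,
    ) -> str:
        # 1. Use the LLM to produce several candidate responses to the NL based input.
        candidates = [llm_generate(nl_input) for _ in range(num_candidates)]

        # 2. Ask the same LLM to critique the candidates against the criteria.
        critique_prompt = (
            "Evaluate each candidate response against these criteria: "
            + "; ".join(criteria)
            + "\n\n"
            + "\n\n".join(f"Candidate {i}:\n{c}" for i, c in enumerate(candidates))
            + "\n\nReply with the number of the best candidate."
        )
        critique = llm_generate(critique_prompt)

        # 3. Select the candidate the critique points to (fall back to the first one).
        for i in range(num_candidates):
            if str(i) in critique:
                return candidates[i]
        return candidates[0]
    ```

    At inference time the selected response would be rendered to the user; during training, the NL input and selected response could instead be stored together as a training instance.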

    INSTRUCTION FOLLOWING IN LARGE LANGUAGE MODELS TO REDUCE COMPUTATIONAL RESOURCE CONSUMPTION

    Publication number: US20240394471A1

    Publication date: 2024-11-28

    Application number: US18231586

    Filing date: 2023-08-08

    Applicant: GOOGLE LLC

    Abstract: Implementations relate to improving instruction following capabilities of large language models (LLMs) using instruction decomposition, self-evaluation, and optionally progressive refinement. Processor(s) of a system can: obtain natural language (NL) based input; generate a plurality of candidate responses and evaluate the candidate responses, using an LLM, based on instructions included in the NL based input; and progressively refine the candidate responses until it is determined that one or more termination criteria are satisfied. In some implementations, the NL based input can be received from a client device. In these implementations, a given candidate response that is progressively refined can be rendered for presentation at the client device responsive to the NL based input. In additional or alternative implementations, the NL based input can be obtained from database(s). In these implementations, a given candidate response that is progressively refined can be utilized in fine-tuning of the LLM.
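
    A minimal Python sketch of instruction decomposition with progressive refinement, under the same assumptions as above: the llm_generate interface, the prompts, and the iteration cap are illustrative, and the iteration cap stands in for one possible termination criterion.

    ```python
    from typing import Callable

    def refine_until_satisfied(
        llm_generate: Callable[[str], str],   # assumed LLM interface
        nl_input: str,
        max_iterations: int = 3,              # assumed termination criterion
    ) -> str:
        # Decompose the NL based input into the individual instructions it contains.
        instructions = llm_generate(
            f"List, one per line, each instruction contained in:\n{nl_input}"
        ).splitlines()

        # Start from an initial candidate response.
        candidate = llm_generate(nl_input)

        for _ in range(max_iterations):
            # Self-evaluation: which instructions does the candidate fail to follow?
            violations = llm_generate(
                "Which of these instructions does the response fail to follow? "
                "Reply NONE if all are satisfied.\n"
                "Instructions:\n" + "\n".join(instructions)
                + f"\nResponse:\n{candidate}"
            )
            if violations.strip().upper().startswith("NONE"):
                break  # termination criterion: all instructions satisfied

            # Progressive refinement: revise the candidate to address the violations.
            candidate = llm_generate(
                f"Revise the response below so it also satisfies: {violations}\n"
                f"Response:\n{candidate}"
            )
        return candidate
    ```

    The refined candidate could then either be rendered at the client device or, when the input came from a database, used as a fine-tuning example for the LLM.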

    QUERY RESPONSE USING A CUSTOM CORPUS

    Publication number: US20240362093A1

    Publication date: 2024-10-31

    Application number: US18231606

    Filing date: 2023-08-08

    Applicant: GOOGLE LLC

    CPC classification number: G06F9/547 G06F16/243

    Abstract: Implementations described herein relate to at least utilizing a custom corpus of documents to condition a large language model (LLM) when generating a response to a user query. In some implementations, a user query associated with a client device is received. An API query for an external application is generated by an LLM based on the user query. The external application has access to a custom corpus of documents comprising a plurality of documents. The external application is queried using the API query. Data representative of one or more documents in the custom corpus of documents is received from the external application in response to the API query. The LLM generates a response to the query that is conditioned on the data representing one or more of the documents in the custom corpus received from the external application. The response to the user query is caused to be rendered on the client device.
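
    A minimal Python sketch of the retrieval-and-conditioning flow this abstract outlines. The endpoint, payload shape, prompts, and llm_generate callable are all assumptions made for illustration; the patent does not specify a particular API or format.

    ```python
    import json
    import urllib.request
    from typing import Callable

    def answer_with_custom_corpus(
        llm_generate: Callable[[str], str],   # assumed LLM interface
        user_query: str,
        corpus_api_url: str,                  # external application with corpus access
    ) -> str:
        # 1. Have the LLM translate the user query into an API query.
        api_query = llm_generate(
            f"Write a JSON search query for a document API that answers: {user_query}"
        )

        # 2. Query the external application, which searches the custom corpus of documents.
        request = urllib.request.Request(
            corpus_api_url,
            data=api_query.encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as resp:
            documents = json.loads(resp.read())   # data representing matching documents

        # 3. Generate the final response conditioned on the retrieved documents.
        return llm_generate(
            "Answer the question using only the documents provided.\n"
            f"Documents: {json.dumps(documents)}\n"
            f"Question: {user_query}"
        )
    ```

    The returned string would then be rendered on the client device as the response to the user query.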
