-
Publication No.: US20250086405A1
Publication Date: 2025-03-13
Application No.: US18481803
Filing Date: 2023-10-05
Applicant: GOOGLE LLC
Inventor: Swaroop Mishra , Ragha Kotikalapudi , Obaid Sarvana , Sahitya Potluri , YaGuang Li , Taylor Bos , Steven Zheng , Hanzhao Lin , Chenkai Kuang , Heng-Tze Cheng , Ed H. Chi , Quoc Le
Abstract: Some implementations relate to generating a training and/or evaluation dataset with LLM prompts (e.g., derived from user queries) based on a prompt complexity. An input prompt, for example derived from a user query, is received. The input prompt is decomposed into a prompt tree comprising a plurality of nodes. The plurality of nodes comprise: a plurality of leaf nodes corresponding to simple sub-prompts of the input query; a plurality of branch nodes of sub-prompts each corresponding to multiple simple sub-prompts; and a root node corresponding to the input prompt. A prompt complexity is determined based on a path length of the prompt tree. The prompt complexity is compared to a threshold complexity. If the prompt complexity is above the threshold complexity, the input prompt is included in a set of training prompts and/or a set of evaluation prompts.
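The decompose-score-filter procedure in this abstract can be sketched as follows. This is a minimal illustration, not the patent's implementation: the node structure, the example prompts, and the choice of longest root-to-leaf path as the complexity measure are all assumptions for demonstration.

```python
from dataclasses import dataclass, field

@dataclass
class PromptNode:
    """One node of the prompt tree; no children means a leaf (simple sub-prompt)."""
    text: str
    children: list = field(default_factory=list)

def path_length(node: PromptNode) -> int:
    """Longest root-to-leaf path in the prompt tree (used as prompt complexity)."""
    if not node.children:
        return 1
    return 1 + max(path_length(c) for c in node.children)

def select_for_training(root: PromptNode, threshold: int) -> bool:
    """Include the input prompt only if its complexity exceeds the threshold."""
    return path_length(root) > threshold

# Hypothetical example: root -> one branch node -> two leaf sub-prompts.
leaf_a = PromptNode("find flights to Tokyo")
leaf_b = PromptNode("filter results by price")
branch = PromptNode("find cheap flights to Tokyo", [leaf_a, leaf_b])
root = PromptNode("plan a budget trip to Tokyo next month", [branch])
```

With a threshold of 2, this three-level tree would be selected for the training/evaluation set; a flat single-level prompt would not.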
-
Publication No.: US20250045534A1
Publication Date: 2025-02-06
Application No.: US18378434
Filing Date: 2023-10-10
Applicant: GOOGLE LLC
Inventor: Swaroop Mishra , Ragha Kotikalapudi , Sahitya Potluri , Taylor Bos , YaGuang Li , Hanzhao Lin , Steven Zheng , Yu Du , Chen Zhu , Chenkai Kuang , Xinying Song , Heng-Tze Cheng , Ed H. Chi , Quoc Le
IPC: G06F40/40
Abstract: Implementations relate to a method implemented by one or more processors, the method including: receiving natural language (NL) based input associated with a client device; generating, using a large language model (LLM) and based on processing the NL based input, LLM output; determining, based on the LLM output, a sequence of LLM responses, the sequence of LLM responses including at least one intermediate LLM response and a final LLM response. In some implementations, the method may further include causing the final LLM response to be rendered at the client device. In additional or alternative implementations, the method may further include storing, as an instance of training data for fine-tuning the LLM or an additional LLM, the NL based input along with the final LLM response.
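The intermediate-plus-final response flow described here can be sketched as below. The delimiter-based decomposition, the toy stand-in LLM, and all function names are hypothetical; the patent does not specify how the sequence of responses is delimited in the LLM output.

```python
def decompose_llm_output(llm_output: str) -> list:
    """Split raw LLM output into a sequence of responses (delimiter assumed)."""
    return [part.strip() for part in llm_output.split("###") if part.strip()]

def respond(nl_input: str, llm, training_data: list) -> str:
    """Generate a response sequence, store a fine-tuning example, return the final response."""
    llm_output = llm(nl_input)
    responses = decompose_llm_output(llm_output)
    *intermediate, final = responses          # at least one intermediate, then a final
    training_data.append((nl_input, final))   # stored as an instance of training data
    return final                              # rendered at the client device

# Toy function standing in for the real LLM:
fake_llm = lambda x: "draft: reasoning it through ### Paris is the capital of France."
data = []
answer = respond("What is the capital of France?", fake_llm, data)
```

Only the final response reaches the client; the (input, final response) pair is retained for fine-tuning the LLM or an additional LLM.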
-
Publication No.: US20250053751A1
Publication Date: 2025-02-13
Application No.: US18413495
Filing Date: 2024-01-16
Applicant: GOOGLE LLC
Inventor: Oscar Akerlund , Evgeny Sluzhaev , Golnaz Ghiasi , Thang Luong , Yifeng Lu , Igor Petrovski , Agoston Weisz , Wei Yu , Rakesh Shivanna , Michael Andrew Goodman , Apoorv Kulshreshtha , Yu Du , Amin Ghafouri , Sanil Jain , Dustin Tran , Vikas Peswani , YaGuang Li
IPC: G06F40/40
Abstract: Implementations relate to generating multi-modal response(s) through utilization of large language model(s) (LLM(s)). Processor(s) of a system can: receive natural language (NL) based input, generate a multi-modal response that is responsive to the NL based input, and cause the multi-modal response to be rendered. In some implementations, and in generating the multi-modal response, the processor(s) can process, using an LLM, LLM input (e.g., that includes at least the NL based input) to generate LLM output, and determine, based on the LLM output, textual content for inclusion in the multi-modal response and multimedia content for inclusion in the multi-modal response. In some implementations, the multimedia content can be obtained based on a multimedia content tag that is included in the LLM output and that is indicative of the multimedia content. In various implementations, the multimedia content can be interleaved between segments of the textual content.
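The tag-based interleaving described in this abstract can be sketched as follows. The `[IMG:key]` tag format, the in-memory media store, and all names are assumptions for illustration; the patent does not disclose a concrete tag syntax.

```python
import re

# Stand-in for a store that resolves multimedia content tags to actual media.
MEDIA_LIBRARY = {"eiffel": "eiffel_tower.jpg"}

def build_multimodal_response(llm_output: str) -> list:
    """Split LLM output on multimedia tags, interleaving media between text segments."""
    parts = re.split(r"\[IMG:(\w+)\]", llm_output)
    response = []
    for i, part in enumerate(parts):
        if i % 2 == 0:                # even indices: textual content segments
            if part.strip():
                response.append(("text", part.strip()))
        else:                         # odd indices: captured multimedia content tags
            response.append(("image", MEDIA_LIBRARY.get(part, part)))
    return response

resp = build_multimodal_response(
    "The Eiffel Tower at night. [IMG:eiffel] It sparkles every hour."
)
```

The resulting list alternates text and media items in the order they should be rendered, matching the "interleaved between segments of the textual content" behavior.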
-
Publication No.: US20240428006A1
Publication Date: 2024-12-26
Application No.: US18211967
Filing Date: 2023-06-20
Applicant: GOOGLE LLC
Inventor: Jian Li , Zhifeng Chen , Yanping Huang , Yuanzhong Xu , Tao Wang , YaGuang Li
IPC: G06F40/40
Abstract: Implementations relate to asymmetric quantization of large language models (LLMs). Processor(s) of a system can: obtain a trained LLM, wherein the trained LLM includes a plurality of layers, each layer comprising a respective plurality of weights; for each layer of the plurality of layers: calculate an optimal clipping range for the respective plurality of weights, and clip one or more weights of the respective plurality of weights that lie outside of the optimal clipping range to produce a clipped layer; quantize the LLM to generate a quantized LLM, wherein quantizing includes mapping weights of the plurality of clipped layers of the LLM from continuous values to discrete values; and provide the quantized LLM for downstream processing.
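The clip-then-quantize step per layer can be sketched as below. This is a minimal illustration: the clipping range is passed in directly here, standing in for the patent's calculated optimal range, and 8-bit asymmetric quantization is assumed.

```python
def clip(weights, lo, hi):
    """Clip weights that lie outside the chosen clipping range."""
    return [min(max(w, lo), hi) for w in weights]

def quantize_layer(weights, lo, hi, bits=8):
    """Map clipped continuous weights to discrete integer levels (asymmetric)."""
    levels = (1 << bits) - 1
    scale = (hi - lo) / levels
    clipped = clip(weights, lo, hi)
    q = [round((w - lo) / scale) for w in clipped]
    return q, scale, lo           # integers plus (scale, zero point) for dequantization

def dequantize(q, scale, lo):
    """Recover approximate continuous weights from the discrete values."""
    return [v * scale + lo for v in q]

layer = [-1.5, -0.2, 0.0, 0.4, 3.7]              # raw weights, with two outliers
q, scale, zero_point = quantize_layer(layer, lo=-1.0, hi=1.0)
recovered = dequantize(q, scale, zero_point)     # outliers saturate at the clip range
```

Because the range is asymmetric in general (`lo != -hi`), a zero point is stored alongside the scale; clipping outliers first keeps the scale small, so the in-range weights lose less precision.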
-
Publication No.: US20240330334A1
Publication Date: 2024-10-03
Application No.: US18225990
Filing Date: 2023-07-25
Applicant: GOOGLE LLC
Inventor: Sidharth Mudgal , Ahmad Beirami , Jilin Chen , Alex Beutel , Harish Ganapathy , YaGuang Li , Tao Wang , Yanping Huang , Trevor Strohman
IPC: G06F16/332 , G06F40/284
CPC classification number: G06F16/3329 , G06F40/284
Abstract: Implementations relate to reducing latency in generating and/or rendering a given stream of natural language (NL) based output generated using a large language model (LLM). Processor(s) of a system can: receive NL based input associated with a client device, generate the stream of NL based output utilizing the LLM that is responsive to the NL based input and that is for a given dialog context of an ongoing dialog, and cause the stream of NL based output to be rendered at the client device. Notably, the processor(s) can employ attribute classifier(s) and a multi-objective scorer to implement a blockwise controlled decoding technique in generating the stream of NL based output utilizing the LLM. By implementing the blockwise controlled decoding technique, the processor(s) can reduce latency in generating and/or rendering the stream of NL based output generated utilizing the LLM.
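The blockwise controlled decoding idea can be sketched as below. Everything here is hypothetical: the candidate sampler is a toy stand-in for the LLM, and the classifiers are placeholders for real attribute classifiers. The point is structural: candidates are scored per block rather than per token, so the scorer runs far fewer times.

```python
import random

def propose_blocks(context: str, n_candidates: int, block_len: int) -> list:
    """Toy stand-in for sampling candidate continuation blocks from the LLM."""
    vocab = ["the", "cat", "sat", "on", "a", "mat", "quietly", "today"]
    return [" ".join(random.choices(vocab, k=block_len)) for _ in range(n_candidates)]

def multi_objective_score(block: str, classifiers) -> float:
    """Multi-objective scorer: weighted sum of attribute-classifier scores."""
    return sum(weight * clf(block) for clf, weight in classifiers)

def blockwise_decode(prompt: str, classifiers, n_blocks=3, n_candidates=4, block_len=4) -> str:
    """Pick the best-scoring candidate block at each step and stream it out."""
    output = prompt
    for _ in range(n_blocks):
        candidates = propose_blocks(output, n_candidates, block_len)
        best = max(candidates, key=lambda b: multi_objective_score(b, classifiers))
        output += " " + best      # this block is streamed to the client as it is chosen
    return output

# Toy attribute classifiers: prefer shorter blocks; penalize a disallowed word.
classifiers = [(lambda b: -len(b), 1.0), (lambda b: -10.0 if "cat" in b else 0.0, 1.0)]
random.seed(0)
text = blockwise_decode("Once", classifiers)
```

Scoring once per block of several tokens, instead of once per token, is what yields the latency reduction the abstract describes.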
-
Publication No.: US11907674B1
Publication Date: 2024-02-20
Application No.: US18370683
Filing Date: 2023-09-20
Applicant: GOOGLE LLC
Inventor: Oscar Akerlund , Evgeny Sluzhaev , Golnaz Ghiasi , Thang Luong , Yifeng Lu , Igor Petrovski , Ágoston Weisz , Wei Yu , Rakesh Shivanna , Michael Andrew Goodman , Apoorv Kulshreshtha , Yu Du , Amin Ghafouri , Sanil Jain , Dustin Tran , Vikas Peswani , YaGuang Li
CPC classification number: G06F40/40
Abstract: Implementations relate to generating multi-modal response(s) through utilization of large language model(s) (LLM(s)). Processor(s) of a system can: receive natural language (NL) based input, generate a multi-modal response that is responsive to the NL based input, and cause the multi-modal response to be rendered. In some implementations, and in generating the multi-modal response, the processor(s) can process, using an LLM, LLM input (e.g., that includes at least the NL based input) to generate LLM output, and determine, based on the LLM output, textual content for inclusion in the multi-modal response and multimedia content for inclusion in the multi-modal response. In some implementations, the multimedia content can be obtained based on a multimedia content tag that is included in the LLM output and that is indicative of the multimedia content. In various implementations, the multimedia content can be interleaved between segments of the textual content.
-