-
Publication No.: US20240205174A1
Publication Date: 2024-06-20
Application No.: US18081541
Filing Date: 2022-12-14
Applicant: GOOGLE LLC
Inventor: Alexander Bailey
IPC: H04L51/02 , G10L13/033 , H04L51/06
CPC classification number: H04L51/02 , G10L13/0335 , H04L51/06
Abstract: Implementations relate to processing, utilizing a large language model (LLM), input that is based on sensor data from sensor(s) of a client device to generate LLM output, and causing output that is based on the generated LLM output to be rendered by an interactive chatbot. The input that is processed by the LLM in generating the LLM output can be, or can include, non-acoustic input based on non-acoustic sensor data. For example, an instance of LLM output can be generated by processing non-acoustic input using the LLM, without any processing of acoustic input (based on acoustic sensor data) using the LLM. As another example, an instance of LLM output can be generated by processing, using the LLM, both non-acoustic input based on non-acoustic sensor data and acoustic input based on acoustic sensor data.
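The mechanism the abstract describes could be sketched as follows. This is a minimal illustration, not the patent's implementation: `SensorSnapshot`, `build_llm_input`, and `chatbot_turn`, along with all field names, are hypothetical, and the LLM is represented by a plain callable.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class SensorSnapshot:
    """Illustrative bundle of client-device sensor signals (hypothetical names)."""
    location: str                              # non-acoustic, e.g. from GPS
    motion_state: str                          # non-acoustic, e.g. "walking"
    transcribed_speech: Optional[str] = None   # acoustic; may be absent

def build_llm_input(s: SensorSnapshot) -> str:
    """Serialize sensor-derived signals into a single LLM input string.
    Acoustic input is optional: an instance of LLM output may be
    generated from non-acoustic input alone."""
    parts = [f"[location: {s.location}]", f"[motion: {s.motion_state}]"]
    if s.transcribed_speech is not None:
        parts.append(f"[user said: {s.transcribed_speech}]")
    return " ".join(parts)

def chatbot_turn(s: SensorSnapshot, llm: Callable[[str], str]) -> str:
    """Produce the chatbot's rendered output based on the generated LLM output."""
    return llm(build_llm_input(s))
```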
-
Publication No.: US20240311575A1
Publication Date: 2024-09-19
Application No.: US18123141
Filing Date: 2023-03-17
Applicant: GOOGLE LLC
Inventor: Martin Baeuml , Alexander Bailey , Jonas Bragagnolo , Florent D'Halluin , Trevor Strohman
Abstract: Implementations relate to dialog management of a large language model (LLM) utilized in generating natural language (NL) output during an ongoing dialog. Processor(s) of a system can: receive NL based input as part of the ongoing dialog, generate NL based output utilizing the LLM, and cause the NL based output to be rendered. Further, the processor(s) can receive subsequent NL based input as part of the ongoing dialog. In some implementations, the processor(s) can determine whether to modify a corresponding dialog context in generating subsequent NL based output, and modify the corresponding dialog context accordingly. For example, the processor(s) can restrict the corresponding dialog context, or supplant the corresponding dialog context with a corresponding curated dialog context. In additional or alternative implementations, the processor(s) can modify a corresponding NL based output threshold utilized in generating the subsequent NL based response to ensure the resulting NL based output is desirable.
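The two context modifications the abstract names (restricting the dialog context, or supplanting it with a curated one) and the modifiable output threshold could be sketched like this. All class, method, and parameter names are illustrative assumptions, not taken from the patent.

```python
from typing import List, Optional, Tuple

class DialogContext:
    """Minimal holder for an ongoing dialog's context (hypothetical API)."""

    def __init__(self) -> None:
        self.turns: List[Tuple[str, str]] = []  # (role, text) pairs

    def add(self, role: str, text: str) -> None:
        self.turns.append((role, text))

    def restrict(self, keep_last: int) -> None:
        # Restrict the corresponding dialog context to the most recent turns.
        self.turns = self.turns[-keep_last:]

    def supplant(self, curated: List[Tuple[str, str]]) -> None:
        # Supplant the dialog context with a corresponding curated dialog context.
        self.turns = list(curated)

def select_output(candidates: List[Tuple[str, float]],
                  threshold: float) -> Optional[str]:
    """Return the first candidate NL based output whose score meets the
    (modifiable) output threshold, or None if no candidate qualifies."""
    for text, score in candidates:
        if score >= threshold:
            return text
    return None
```

Raising the threshold passed to `select_output` on a subsequent turn is one simple way the system could ensure the resulting output is desirable.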
-
Publication No.: US20240311402A1
Publication Date: 2024-09-19
Application No.: US18136634
Filing Date: 2023-04-19
Applicant: GOOGLE LLC
Inventor: Martin Baeuml , Yanping Huang , Wenhao Jia , Chang Lan , Yuanzhong Xu , Junwhan Ahn , Alexander Bailey , Leif Schelin , Trevor Strohman , Emanuel Taropa , Sidharth Mudgal , Yanyan Zheng , Zhifeng Chen , Ahmad Beirami
IPC: G06F16/332 , G06F40/40
CPC classification number: G06F16/3322 , G06F16/3329 , G06F40/40
Abstract: Implementations relate to reducing latency in generating and/or rendering natural language (NL) output generated using a large language model (LLM). Processor(s) of a system can: receive NL based input associated with a client device, and generate the NL based output utilizing the LLM. The NL based output can be a stream of NL based output in that it includes a plurality of segments, and is generated on a segment-by-segment basis. In some implementations, a first segment of the stream of NL based output is selected for inclusion in the stream of NL based output as a second segment (and any subsequent segment) is being generated to reduce latency in evaluating the NL based output as a whole prior to rendering thereof. In some versions of those implementations, the first segment is rendered as the second segment (and any subsequent segment) is being generated to further reduce latency in rendering thereof.
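The pipelining the abstract describes (rendering the first segment while later segments are still being generated, rather than evaluating the whole response before rendering) could be sketched with a streaming generator. The segment contents and function names here are illustrative stand-ins, not the patent's implementation.

```python
from typing import Callable, Iterator, List

def generate_segments(prompt: str) -> Iterator[str]:
    # Stand-in for an LLM that produces a stream of NL based output,
    # generated on a segment-by-segment basis.
    for segment in ["Sure, ", "here is ", "the answer."]:
        yield segment

def stream_and_render(prompt: str, render: Callable[[str], None]) -> str:
    """Render each segment as soon as it is selected for inclusion in the
    stream, while subsequent segments are still being generated, instead
    of waiting for the full response."""
    rendered: List[str] = []
    for segment in generate_segments(prompt):
        render(segment)           # earlier segments render before later
        rendered.append(segment)  # segments have finished generating
    return "".join(rendered)
```

Because `generate_segments` is a lazy generator, each call to `render` happens before the next segment is produced, which is the latency saving the abstract targets.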