Universal Monolingual Output Layer for Multilingual Speech Recognition

    Publication Number: US20240135923A1

    Publication Date: 2024-04-25

    Application Number: US18485271

    Filing Date: 2023-10-11

    Applicant: Google LLC

    Abstract: A method includes receiving a sequence of acoustic frames as input to a multilingual automated speech recognition (ASR) model configured to recognize speech in a plurality of different supported languages and generating, by an audio encoder of the multilingual ASR model, a higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames. The method also includes generating, by a language identification (LID) predictor of the multilingual ASR model, a language prediction representation for a corresponding higher order feature representation. The method also includes generating, by a decoder of the multilingual ASR model, a probability distribution over possible speech recognition results based on the corresponding higher order feature representation, a sequence of non-blank symbols, and a corresponding language prediction representation. The decoder includes a monolingual output layer having a plurality of output nodes, each sharing a plurality of language-specific wordpiece models.
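
    The abstract describes a transducer-style model: an audio encoder, a per-frame language identification (LID) predictor, and a decoder whose single monolingual output layer is shared across all supported languages. Below is a minimal PyTorch sketch of that data flow; the layer types, dimensions, and the concatenation used to fuse the language prediction into the decoder are illustrative assumptions, not the patented design.

        import torch
        import torch.nn as nn

        class MultilingualASR(nn.Module):
            def __init__(self, feat_dim=80, hidden=512, vocab=4096, num_langs=12):
                super().__init__()
                # Audio encoder: acoustic frames -> higher order feature representations.
                self.encoder = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
                # LID predictor: a language prediction representation per frame.
                self.lid = nn.Linear(hidden, num_langs)
                # Prediction network over the sequence of previous non-blank symbols.
                self.pred = nn.Embedding(vocab, hidden)
                # Shared "monolingual" output layer: one set of output nodes covering
                # the union of the language-specific wordpiece inventories, plus blank.
                self.output = nn.Linear(hidden + num_langs + hidden, vocab + 1)

            def forward(self, frames, prev_symbols):
                enc, _ = self.encoder(frames)                     # (B, T, hidden)
                lang = torch.softmax(self.lid(enc), dim=-1)       # (B, T, num_langs)
                sym = self.pred(prev_symbols).mean(dim=1)         # (B, hidden)
                sym = sym.unsqueeze(1).expand(-1, enc.size(1), -1)
                joint = torch.cat([enc, lang, sym], dim=-1)
                # Probability distribution over possible speech recognition results.
                return torch.log_softmax(self.output(joint), dim=-1)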

    Emitting Word Timings with End-to-End Models

    Publication Number: US20210350794A1

    Publication Date: 2021-11-11

    Application Number: US17204852

    Filing Date: 2021-03-17

    Applicant: Google LLC

    IPC Classification: G10L15/06 G10L25/30 G10L25/78

    Abstract: A method includes receiving a training example that includes audio data representing a spoken utterance and a ground truth transcription. For each word in the spoken utterance, the method also includes inserting a placeholder symbol before the respective word identifying a respective ground truth alignment for a beginning and an end of the respective word, determining a beginning word piece and an ending word piece, and generating a first constrained alignment for the beginning word piece and a second constrained alignment for the ending word piece. The first constrained alignment is aligned with the ground truth alignment for the beginning of the respective word, and the second constrained alignment is aligned with the ground truth alignment for the end of the respective word. The method also includes constraining an attention head of a second-pass decoder by applying the first and second constrained alignments.
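
    As a rough illustration of the training-data preparation the abstract describes, the sketch below (plain Python; the tokenize helper, frame indices, and tolerance window are assumed, not taken from the patent) inserts a placeholder symbol before each word and pins the beginning and ending word pieces to windows around the ground truth word boundaries.

        PLACEHOLDER = "<w>"

        def build_constrained_alignments(words, word_frames, tokenize, tolerance=2):
            """words: word strings; word_frames: (start, end) ground truth frame
            indices per word; tokenize: word -> word pieces (assumed helper).
            Note: a one-piece word keeps only its start constraint in this sketch."""
            pieces, constraints = [], []
            for word, (start, end) in zip(words, word_frames):
                pieces.append(PLACEHOLDER)   # placeholder marks the word boundary
                constraints.append(None)
                wordpieces = tokenize(word)
                for i, wp in enumerate(wordpieces):
                    pieces.append(wp)
                    if i == 0:                        # beginning word piece
                        constraints.append((max(0, start - tolerance), start + tolerance))
                    elif i == len(wordpieces) - 1:    # ending word piece
                        constraints.append((max(0, end - tolerance), end + tolerance))
                    else:
                        constraints.append(None)      # interior pieces unconstrained
            return pieces, constraints

    During training, these constraints would be turned into an attention mask so the second-pass decoder's constrained attention head can only attend to encoder frames inside each word piece's window.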

    DIALOG MANAGEMENT FOR LARGE LANGUAGE MODEL-BASED (LLM-BASED) DIALOGS

    Publication Number: US20240311575A1

    Publication Date: 2024-09-19

    Application Number: US18123141

    Filing Date: 2023-03-17

    Applicant: GOOGLE LLC

    IPC Classification: G06F40/35 G06N20/00

    CPC Classification: G06F40/35 G06N20/00

    Abstract: Implementations relate to dialog management of a large language model (LLM) utilized in generating natural language (NL) output during an ongoing dialog. Processor(s) of a system can: receive NL-based input as part of the ongoing dialog, generate NL-based output utilizing the LLM, and cause the NL-based output to be rendered. Further, the processor(s) can receive subsequent NL-based input as part of the ongoing dialog. In some implementations, the processor(s) can determine whether to modify a corresponding dialog context in generating subsequent NL-based output, and modify the corresponding dialog context accordingly. For example, the processor(s) can restrict the corresponding dialog context, or supplant the corresponding dialog context with a corresponding curated dialog context. In additional or alternative implementations, the processor(s) can modify a corresponding NL-based output threshold utilized in generating the subsequent NL-based response to ensure the resulting NL-based output is desirable.
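
    The sketch below illustrates the two management strategies from the abstract in plain Python; llm_generate, score, and the curated/keep_last parameters are hypothetical stand-ins, and the retry-on-low-score loop is only one possible reading of the output-threshold mechanism.

        from dataclasses import dataclass, field

        @dataclass
        class DialogManager:
            context: list = field(default_factory=list)  # corresponding dialog context
            threshold: float = 0.5                       # NL-based output threshold

            def turn(self, nl_input, llm_generate, score, curated=None, keep_last=None):
                if curated is not None:
                    self.context = list(curated)              # supplant with a curated context
                elif keep_last is not None:
                    self.context = self.context[-keep_last:]  # restrict the context
                self.context.append(("user", nl_input))
                output = llm_generate(self.context)
                for _ in range(3):  # regenerate while output falls below the threshold
                    if score(output) >= self.threshold:
                        break
                    output = llm_generate(self.context)
                self.context.append(("model", output))
                return output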

    Joint Speech and Text Streaming Model for ASR

    Publication Number: US20240028829A1

    Publication Date: 2024-01-25

    Application Number: US18346232

    Filing Date: 2023-07-01

    Applicant: Google LLC

    IPC Classification: G06F40/284 G06F40/40

    CPC Classification: G06F40/284 G06F40/40

    Abstract: A method includes receiving training data that includes a set of unspoken textual utterances. For each respective unspoken textual utterance, the method includes tokenizing the respective unspoken textual utterance into a sequence of sub-word units, generating a first higher order textual feature representation for a corresponding sub-word unit tokenized from the respective unspoken textual utterance, receiving the first higher order textual feature representation generated by a text encoder, and generating a first probability distribution over possible text units. The method also includes training an encoder based on the first probability distribution over possible text units generated by a first-pass decoder for each respective unspoken textual utterance in the set of unspoken textual utterances.
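
    The text branch the abstract walks through (tokenize, encode, first-pass decode, train on the resulting distribution) can be sketched in PyTorch as follows; the LSTM text encoder, the linear first-pass decoder, and the next-unit prediction loss are assumptions chosen for brevity, not the patented training objective.

        import torch
        import torch.nn as nn

        class TextBranch(nn.Module):
            def __init__(self, vocab=4096, hidden=512):
                super().__init__()
                self.embed = nn.Embedding(vocab, hidden)
                # Text encoder: sub-word units -> first higher order textual features.
                self.text_encoder = nn.LSTM(hidden, hidden, batch_first=True)
                # First-pass decoder: features -> distribution over possible text units.
                self.first_pass = nn.Linear(hidden, vocab)

            def forward(self, subword_ids):
                feats, _ = self.text_encoder(self.embed(subword_ids))
                return torch.log_softmax(self.first_pass(feats), dim=-1)

        def train_step(branch, optimizer, subword_ids):
            # Predict each next sub-word unit from the units before it; the loss
            # on the first-pass distribution is what trains the shared encoder.
            log_probs = branch(subword_ids[:, :-1])
            targets = subword_ids[:, 1:]
            loss = nn.functional.nll_loss(log_probs.transpose(1, 2), targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return loss.item()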