-
Publication Number: US20230274728A1
Publication Date: 2023-08-31
Application Number: US18314556
Filing Date: 2023-05-09
Applicant: Google LLC
Inventor: Daisy Stanton, Eric Dean Battenberg, Russell John Wyatt Skerry-Ryan, Soroosh Mariooryad, David Teh-hwa Kao, Thomas Edward Bagby, Sean Matthew Shannon
Abstract: A system for generating an output audio signal includes a context encoder, a text-prediction network, and a text-to-speech (TTS) model. The context encoder is configured to receive one or more context features associated with current input text and process the one or more context features to generate a context embedding associated with the current input text. The text-prediction network is configured to process the current input text and the context embedding to predict, as output, a style embedding for the current input text. The style embedding specifies a specific prosody and/or style for synthesizing the current input text into expressive speech. The TTS model is configured to process the current input text and the style embedding to generate an output audio signal of expressive speech of the current input text. The output audio signal has the specific prosody and/or style specified by the style embedding.
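To make the described arrangement concrete, the sketch below wires up the three components named in the abstract as toy PyTorch modules. All module names, layer choices, and tensor shapes are assumptions for illustration only, not the patented implementation: a context encoder turns context features into a context embedding, a text-prediction network combines the input text with that embedding to predict a style embedding, and a TTS stand-in conditions on the text and style embedding to emit audio-feature frames.

```python
# Minimal structural sketch (hypothetical shapes and names, not the patented system).
import torch
import torch.nn as nn


class ContextEncoder(nn.Module):
    """Maps context features for the current input text to a context embedding."""
    def __init__(self, context_dim=32, embed_dim=64):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(context_dim, embed_dim), nn.Tanh())

    def forward(self, context_features):           # (batch, context_dim)
        return self.proj(context_features)          # (batch, embed_dim)


class TextPredictionNetwork(nn.Module):
    """Predicts a style embedding from the current input text plus the context embedding."""
    def __init__(self, vocab_size=256, embed_dim=64, style_dim=128):
        super().__init__()
        self.char_embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, embed_dim, batch_first=True)
        self.style_head = nn.Linear(2 * embed_dim, style_dim)

    def forward(self, text_ids, context_embedding):  # (batch, T), (batch, embed_dim)
        _, h = self.encoder(self.char_embed(text_ids))
        summary = torch.cat([h[-1], context_embedding], dim=-1)
        return self.style_head(summary)              # (batch, style_dim)


class TTSModel(nn.Module):
    """Stand-in decoder: conditions on text and the style embedding to emit audio frames."""
    def __init__(self, vocab_size=256, embed_dim=64, style_dim=128, n_mels=80):
        super().__init__()
        self.char_embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.GRU(embed_dim + style_dim, 256, batch_first=True)
        self.to_audio = nn.Linear(256, n_mels)

    def forward(self, text_ids, style_embedding):
        x = self.char_embed(text_ids)                                 # (batch, T, embed_dim)
        style = style_embedding.unsqueeze(1).expand(-1, x.size(1), -1)
        frames, _ = self.decoder(torch.cat([x, style], dim=-1))
        return self.to_audio(frames)                                  # (batch, T, n_mels)


# Toy end-to-end pass with random inputs.
text_ids = torch.randint(0, 256, (1, 20))
context_features = torch.randn(1, 32)
context_emb = ContextEncoder()(context_features)
style_emb = TextPredictionNetwork()(text_ids, context_emb)
audio = TTSModel()(text_ids, style_emb)
print(audio.shape)  # torch.Size([1, 20, 80])
```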
-
Publication Number: US20230260504A1
Publication Date: 2023-08-17
Application Number: US18302764
Filing Date: 2023-04-18
Applicant: Google LLC
Inventor: Eric Dean Battenberg, Daisy Stanton, Russell John Wyatt Skerry-Ryan, Soroosh Mariooryad, David Teh-hwa Kao, Thomas Edward Bagby, Sean Matthew Shannon
IPC: G10L13/047, G10L13/10
CPC classification number: G10L13/047, G10L13/10
Abstract: A method for estimating an embedding capacity includes receiving, at a deterministic reference encoder, a reference audio signal, and determining a reference embedding corresponding to the reference audio signal, the reference embedding having a corresponding embedding dimensionality. The method also includes measuring a first reconstruction loss as a function of the corresponding embedding dimensionality of the reference embedding and obtaining a variational embedding from a variational posterior. The variational embedding has a corresponding embedding dimensionality and a specified capacity. The method also includes measuring a second reconstruction loss as a function of the corresponding embedding dimensionality of the variational embedding and estimating a capacity of the reference embedding by comparing the first measured reconstruction loss for the reference embedding relative to the second measured reconstruction loss for the variational embedding having the specified capacity.
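The comparison step at the end of the abstract can be illustrated with a small interpolation sketch. The capacities, loss values, and the use of linear interpolation below are assumptions for illustration, not figures or procedures taken from the patent: given reconstruction losses measured for variational embeddings at several specified capacities, the reference embedding's capacity is estimated by locating its own measured reconstruction loss on that capacity-versus-loss curve.

```python
# Hypothetical measurements: specified capacity of each variational embedding
# and the reconstruction loss obtained with it (loss decreases as capacity grows).
import numpy as np

specified_capacity = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
variational_recon_loss = np.array([0.92, 0.71, 0.55, 0.43, 0.36])


def estimate_capacity(reference_recon_loss):
    """Interpolate the capacity at which a variational embedding matches the
    reference embedding's reconstruction loss; the arrays are reversed so the
    x-values passed to np.interp are increasing."""
    return float(np.interp(reference_recon_loss,
                           variational_recon_loss[::-1],
                           specified_capacity[::-1]))


# Reconstruction loss measured for a deterministic reference embedding
# of some dimensionality (hypothetical value).
print(estimate_capacity(0.60))  # ~42, i.e. between the 25- and 50-capacity points
```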
-
Publication Number: US11676573B2
Publication Date: 2023-06-13
Application Number: US16931336
Filing Date: 2020-07-16
Applicant: Google LLC
Inventor: Daisy Stanton, Eric Dean Battenberg, Russell John Wyatt Skerry-Ryan, Soroosh Mariooryad, David Teh-Hwa Kao, Thomas Edward Bagby, Sean Matthew Shannon
Abstract: A system for generating an output audio signal includes a context encoder, a text-prediction network, and a text-to-speech (TTS) model. The context encoder is configured to receive one or more context features associated with current input text and process the one or more context features to generate a context embedding associated with the current input text. The text-prediction network is configured to process the current input text and the context embedding to predict, as output, a style embedding for the current input text. The style embedding specifies a specific prosody and/or style for synthesizing the current input text into expressive speech. The TTS model is configured to process the current input text and the style embedding to generate an output audio signal of expressive speech of the current input text. The output audio signal has the specific prosody and/or style specified by the style embedding.
-
Publication Number: US20210035551A1
Publication Date: 2021-02-04
Application Number: US16931336
Filing Date: 2020-07-16
Applicant: Google LLC
Inventor: Daisy Stanton, Eric Dean Battenberg, Russell John Wyatt Skerry-Ryan, Soroosh Mariooryad, David Teh-Hwa Kao, Thomas Edward Bagby, Sean Matthew Shannon
IPC: G10L13/10
Abstract: A system for generating an output audio signal includes a context encoder, a text-prediction network, and a text-to-speech (TTS) model. The context encoder is configured to receive one or more context features associated with current input text and process the one or more context features to generate a context embedding associated with the current input text. The text-prediction network is configured to process the current input text and the context embedding to predict, as output, a style embedding for the current input text. The style embedding specifies a specific prosody and/or style for synthesizing the current input text into expressive speech. The TTS model is configured to process the current input text and the style embedding to generate an output audio signal of expressive speech of the current input text. The output audio signal has the specific prosody and/or style specified by the style embedding.
-
Publication Number: US20200168242A1
Publication Date: 2020-05-28
Application Number: US16778222
Filing Date: 2020-01-31
Applicant: Google LLC
Inventor: Gabor Simko, Maria Carolina Parada San Martin, Sean Matthew Shannon
IPC: G10L25/78, G10L15/18, G10L15/065, G10L15/187, G10L15/22
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for detecting an end of a query are disclosed. In one aspect, a method includes the actions of receiving audio data that corresponds to an utterance spoken by a user. The actions further include applying, to the audio data, an end of query model. The actions further include determining the confidence score that reflects a likelihood that the utterance is a complete utterance. The actions further include comparing the confidence score that reflects the likelihood that the utterance is a complete utterance to a confidence score threshold. The actions further include determining whether the utterance is likely complete or likely incomplete. The actions further include providing, for output, an instruction to (i) maintain a microphone that is receiving the utterance in an active state or (ii) deactivate the microphone that is receiving the utterance.
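The decision logic in the abstract reduces to a threshold comparison, sketched below. The end-of-query model is represented by a placeholder callable, and the function name, threshold value, and instruction strings are assumptions for illustration only, not the claimed method.

```python
# Hedged sketch of the abstract's decision step: apply an end-of-query model to
# the utterance audio, compare its confidence to a threshold, and choose a
# microphone instruction.
from typing import Callable


def microphone_instruction(audio_data: bytes,
                           end_of_query_model: Callable[[bytes], float],
                           confidence_threshold: float = 0.8) -> str:
    """Return the microphone instruction implied by the end-of-query confidence."""
    confidence = end_of_query_model(audio_data)  # likelihood the utterance is complete
    if confidence >= confidence_threshold:
        # Utterance is likely complete: stop listening.
        return "deactivate_microphone"
    # Utterance is likely incomplete: keep the microphone in an active state.
    return "maintain_microphone_active"


# Toy usage with a stand-in model that always reports 0.9 confidence.
print(microphone_instruction(b"\x00" * 1600, lambda audio: 0.9))  # deactivate_microphone
```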