-
Publication No.: US20230173657A1
Publication Date: 2023-06-08
Application No.: US17544117
Application Date: 2021-12-07
Applicant: GOOGLE LLC
Inventor: Matthew Sharifi , Victor Carbune
CPC classification number: B25J9/0003 , G10L15/22 , G05D1/0088 , G05D1/12 , G06V20/50 , G06F3/165 , H04R1/323 , G06F3/167 , G05D2201/0207
Abstract: Implementations set forth herein relate to a robotic computing device that can seek additional information from other nearby device(s) for fulfilling a request and/or delegating certain operations to the other nearby device(s). Delegating certain operations can involve the robotic computing device maneuvering to a location of a nearby device and soliciting the nearby device for assistance by providing an input from the robotic computing device to the nearby device. In some instances, the input can include an audible rendering of an invocation phrase and a command phrase for invoking an automated assistant that is accessible via the nearby device. A determination of whether to delegate certain operations or seek additional information can be based on a variety of factors such as predicted efficiency and estimated accuracy of performance for performing certain operations.
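As a rough illustration of the delegation decision the abstract describes, the Python sketch below uses hypothetical names throughout (NearbyDevice, should_delegate, navigate_to, speak); it is not taken from the patent itself.

```python
from dataclasses import dataclass

@dataclass
class NearbyDevice:
    name: str
    location: tuple          # (x, y) position in the robot's map
    invocation_phrase: str   # phrase that invokes the device's automated assistant

def should_delegate(local_efficiency, local_accuracy,
                    remote_efficiency, remote_accuracy):
    """Delegate when the nearby device is predicted to perform better overall."""
    return remote_efficiency * remote_accuracy > local_efficiency * local_accuracy

def delegate(robot, device, command_phrase):
    # Maneuver to the nearby device, then audibly render the invocation
    # phrase followed by the command phrase so its assistant handles it.
    robot.navigate_to(device.location)
    robot.speak(f"{device.invocation_phrase}, {command_phrase}")
```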
-
Publication No.: US20230059469A1
Publication Date: 2023-02-23
Application No.: US17982834
Application Date: 2022-11-08
Applicant: GOOGLE LLC
Inventor: Matthew Sharifi , Victor Carbune
Abstract: Implementations can receive audio data corresponding to a spoken utterance of a user, process the audio data to generate a plurality of speech hypotheses, determine an action to be performed by an automated assistant based on the speech hypotheses, and cause the computing device to render an indication of the action. In response to the computing device rendering the indication, implementations can receive additional audio data corresponding to an additional spoken utterance of the user, process the additional audio data to determine that a portion of the spoken utterance is similar to an additional portion of the additional spoken utterance, supplant the action with an alternate action, and cause the automated assistant to initiate performance of the alternate action. Some implementations can determine whether to render the indication of the action based on a confidence level associated with the action.
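A minimal sketch of this correction flow, assuming hypothetical assistant methods (transcribe_hypotheses, resolve_action, render_indication, portions_similar, alternate_action):

```python
def handle_utterance(assistant, audio_data, confidence_threshold=0.8):
    # Generate several speech hypotheses and pick an action from them.
    hypotheses = assistant.transcribe_hypotheses(audio_data)
    action, confidence = assistant.resolve_action(hypotheses)

    if confidence < confidence_threshold:
        # Render an indication of the chosen action (e.g. display or speak it).
        assistant.render_indication(action)
        followup_audio = assistant.listen()
        # If the user repeats part of the original utterance, treat that as a
        # correction and supplant the action with an alternate one.
        if assistant.portions_similar(audio_data, followup_audio):
            action = assistant.alternate_action(hypotheses, followup_audio)

    assistant.perform(action)
```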
-
Publication No.: US20230055608A1
Publication Date: 2023-02-23
Application No.: US17982863
Application Date: 2022-11-08
Applicant: GOOGLE LLC
Inventor: Matthew Sharifi , Victor Carbune
Abstract: Some implementations relate to performing speech biasing, NLU biasing, and/or other biasing based on historical assistant interaction(s). It can be determined, for one or more given historical interactions of a given user, whether to affect future biasing for (1) the given user account, (2) additional user account(s), and/or (3) the shared assistant device as a whole. Some implementations disclosed herein additionally and/or alternatively relate to: determining, based on utterance(s) of a given user to a shared assistant device, an association of first data and second data; storing the association as accessible to a given user account of the given user; and determining whether to store the association as also accessible by additional user account(s) and/or the shared assistant device.
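The scoping decision could look roughly like the sketch below; Scope, store.save, other_accounts, and save_device_wide are assumed names for illustration only.

```python
from enum import Enum, auto

class Scope(Enum):
    USER_ACCOUNT = auto()
    ADDITIONAL_ACCOUNTS = auto()
    SHARED_DEVICE = auto()

def store_association(store, user_account, first_data, second_data, scopes):
    """Persist an association derived from a user's utterances at the chosen
    visibility levels: the user's account, other accounts, the whole device."""
    association = (first_data, second_data)
    store.save(user_account, association)
    if Scope.ADDITIONAL_ACCOUNTS in scopes:
        for account in store.other_accounts(user_account):
            store.save(account, association)
    if Scope.SHARED_DEVICE in scopes:
        store.save_device_wide(association)
```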
-
Publication No.: US11557293B2
Publication Date: 2023-01-17
Application No.: US17321994
Application Date: 2021-05-17
Applicant: GOOGLE LLC
Inventor: Victor Carbune , Matthew Sharifi , Ondrej Skopek , Justin Lu , Daniel Valcarce , Kevin Kilgour , Mohamad Hassan Rom , Nicolo D'Ercole , Michael Golikov
Abstract: Some implementations process, using warm word model(s), a stream of audio data to determine a portion of the audio data that corresponds to particular word(s) and/or phrase(s) (e.g., a warm word) associated with an assistant command, process, using an automatic speech recognition (ASR) model, a preamble portion of the audio data (e.g., that precedes the warm word) and/or a postamble portion of the audio data (e.g., that follows the warm word) to generate ASR output, and determine, based on processing the ASR output, whether a user intended the assistant command to be performed. Additional or alternative implementations can process the stream of audio data using a speaker identification (SID) model to determine whether the audio data is sufficient to identify the user that provided a spoken utterance captured in the stream of audio data, and determine if that user is authorized to cause performance of the assistant command.
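A compressed sketch of that pipeline, with warm_word_model, asr_model, sid_model, and intent_classifier standing in for whatever models an implementation would actually use:

```python
def process_stream(audio, warm_word_model, asr_model, sid_model,
                   intent_classifier, authorized_speakers):
    # Locate the warm word (and its associated assistant command) in the stream.
    span = warm_word_model.detect(audio)
    if span is None:
        return

    # Recognize the speech just before (preamble) and after (postamble) it.
    preamble, postamble = audio[:span.start], audio[span.end:]
    asr_output = asr_model.recognize(preamble) + asr_model.recognize(postamble)

    # Only act if the surrounding words show the command was really intended
    # and the identified speaker is authorized to trigger it.
    if not intent_classifier.command_intended(asr_output, span.command):
        return
    if sid_model.identify(audio) in authorized_speakers:
        span.command.execute()
```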
-
Publication No.: US11514109B2
Publication Date: 2022-11-29
Application No.: US17083613
Application Date: 2020-10-29
Applicant: Google LLC
Inventor: Matthew Sharifi , Victor Carbune
IPC: G06F15/16 , G06F16/9032 , G16Y10/80 , G16Y40/35 , G10L15/30
Abstract: Implementations can identify a given assistant device from among a plurality of assistant devices in an ecosystem, obtain device-specific signal(s) that are generated by the given assistant device, process the device-specific signal(s) to generate candidate semantic label(s) for the given assistant device, select a given semantic label for the given assistant device from among the candidate semantic label(s), and assign, in a device topology representation of the ecosystem, the given semantic label to the given assistant device. Implementations can optionally receive a spoken utterance that includes a query or command at the assistant device(s), determine that a semantic property of the query or command matches the given semantic label assigned to the given assistant device, and cause the given assistant device to satisfy the query or command.
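One way to picture the labeling and routing steps; label_generator, topology, and the device/query methods below are hypothetical stand-ins, not names from the patent.

```python
def assign_semantic_label(device, label_generator, topology):
    signals = device.collect_signals()                 # device-specific signal(s)
    candidates = label_generator.candidates(signals)   # e.g. "kitchen display"
    best = max(candidates, key=lambda c: c.score)      # select a given label
    topology.assign(device.id, best.text)              # store in the device topology
    return best.text

def route_query(query, devices, topology):
    # Satisfy the query on the device whose semantic label matches a
    # semantic property of the query (e.g. "play music in the kitchen").
    prop = query.semantic_property()
    for device in devices:
        if topology.label(device.id) == prop:
            return device.satisfy(query)
```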
-
Publication No.: US20220366911A1
Publication Date: 2022-11-17
Application No.: US17337804
Application Date: 2021-06-03
Applicant: GOOGLE LLC
Inventor: Victor Carbune , Krishna Sapkota , Behshad Behzadi , Julia Proskurnia , Jacopo Sannazzaro Natta , Justin Lu , Magali Boizot-Roche , Márius Sajgalík , Nicolo D'Ercole , Zaheed Sabur , Luv Kothari
Abstract: Implementations described herein relate to an application and/or automated assistant that can identify arrangement operations to perform for arranging text during speech-to-text operations—without a user having to expressly identify the arrangement operations. In some instances, a user that is dictating a document (e.g., an email, a text message, etc.) can provide a spoken utterance to an application in order to incorporate textual content. However, in some of these instances, certain corresponding arrangements are needed for the textual content in the document. The textual content that is derived from the spoken utterance can be arranged by the application based on an intent, vocalization features, and/or contextual features associated with the spoken utterance and/or a type of the application associated with the document, without the user expressly identifying the corresponding arrangements. In this way, the application can infer content arrangement operations from a spoken utterance that only specifies the textual content.
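A minimal sketch of inferring arrangement operations from the signals listed above; dictate, infer_arrangement, and every attribute they touch are assumptions made for illustration.

```python
def dictate(app, utterance, stt):
    text = stt.transcribe(utterance)
    arrangement = infer_arrangement(
        intent=stt.intent(utterance),           # e.g. "compose email"
        vocal=stt.vocal_features(utterance),    # pauses, emphasis, etc.
        context=app.document_context(),         # e.g. cursor at document start
        app_type=app.type,                      # e.g. "email", "text message"
    )
    app.insert(text, arrangement)

def infer_arrangement(intent, vocal, context, app_type):
    # Infer arrangement operations without the user spelling them out.
    if app_type == "email" and context.at_document_start:
        return "greeting_line_then_paragraph"
    if vocal.long_pause_detected:
        return "new_paragraph"
    return "inline"
```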
-
Publication No.: US20220355814A1
Publication Date: 2022-11-10
Application No.: US17273673
Application Date: 2020-11-18
Applicant: GOOGLE LLC
Inventor: Matthew Sharifi , Victor Carbune
Abstract: To identify driving event sounds during navigation, a client device in a vehicle provides a set of navigation directions for traversing from a starting location to a destination location along a route. During navigation to the destination location, the client device identifies audio that includes a driving event sound from within the vehicle or an area surrounding the vehicle. In response to determining that the audio includes the driving event sound, the client device determines whether the driving event sound is artificial. In response to determining that the driving event sound is artificial, the client device presents a notification to the driver indicating that the driving event sound is artificial or masks the driving event sound to prevent the driver from hearing the driving event sound.
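A sketch of the per-frame decision, assuming a hypothetical detector object and client methods (notify_driver, mask_sound):

```python
def on_audio_frame(client, frame, detector):
    # Ignore audio that contains no driving event sound (siren, horn, ...).
    if not detector.contains_driving_event_sound(frame):
        return
    # A siren coming from a song, podcast, or ad is "artificial": it does not
    # originate from the area surrounding the vehicle.
    if detector.is_artificial(frame):
        client.notify_driver("That siren is from your media, not the road.")
        client.mask_sound(frame)    # e.g. duck or filter the offending audio
```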
-
Publication No.: US11462219B2
Publication Date: 2022-10-04
Application No.: US17086296
Application Date: 2020-10-30
Applicant: Google LLC
Inventor: Matthew Sharifi , Victor Carbune
IPC: G10L15/00 , G10L15/22 , G10L15/02 , G10L21/0208 , G10L25/78 , G10L25/87 , G10L21/0272
Abstract: A method includes receiving a first instance of raw audio data corresponding to a voice-based command and receiving a second instance of the raw audio data corresponding to an utterance of audible contents for an audio-based communication spoken by a user. When a voice filtering recognition routine determines to activate voice filtering for at least the voice of the user, the method also includes obtaining a respective speaker embedding of the user and processing, using the respective speaker embedding, the second instance of the raw audio data to generate enhanced audio data for the audio-based communication that isolates the utterance of the audible contents spoken by the user and excludes at least a portion of one or more additional sounds that are not spoken by the user. The method also includes executing the voice-based command.
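A minimal sketch of the voice-filtering step, assuming hypothetical speaker_encoder and voice_filter components:

```python
def enhance_for_communication(user, content_audio, speaker_encoder,
                              voice_filter, filtering_active):
    # When voice filtering is active for this user, isolate their speech and
    # drop other sounds; otherwise pass the raw audio through unchanged.
    if filtering_active:
        embedding = speaker_encoder.embed(user)    # respective speaker embedding
        return voice_filter.isolate(content_audio, embedding)
    return content_audio
```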
-
Publication No.: US11366812B2
Publication Date: 2022-06-21
Application No.: US16621109
Application Date: 2019-06-25
Applicant: Google LLC
Inventor: Victor Carbune , Sandro Feuz
IPC: G06F17/00 , G06F16/2455 , G06F16/953 , G06N20/00 , G06F16/901
Abstract: Techniques and a framework are described herein for gathering information about developing events from multiple live data streams and pushing new pieces of information to interested individuals as those pieces of information are learned. In various implementations, a plurality of live data streams may be monitored. Based on the monitoring, a data structure that models diffusion of information through a population may be generated and applied as input across a machine learning model to generate output. The output may be indicative of a likelihood of occurrence of a developing event and/or a predicted measure of relevancy of the developing event to a particular user. Based on a determination that the likelihood and/or measure of relevancy satisfies a criterion, one or more computing devices may render, as output, information about the developing event.
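A sketch of the monitoring loop under assumed names (merged_stream, diffusion_builder, model, device); the thresholds are illustrative values, not parameters from the patent.

```python
def push_developing_events(merged_stream, diffusion_builder, model, user, device,
                           likelihood_threshold=0.7, relevance_threshold=0.5):
    for item in merged_stream:                    # items from the monitored live streams
        graph = diffusion_builder.update(item)    # models diffusion through a population
        likelihood, relevance = model.predict(graph.features(), user.profile())
        # Push only new information that is likely a real developing event
        # and predicted to be relevant to this particular user.
        if likelihood >= likelihood_threshold and relevance >= relevance_threshold:
            device.render(item.summary())
```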
-
Publication No.: US20220189465A1
Publication Date: 2022-06-16
Application No.: US17117799
Application Date: 2020-12-10
Applicant: Google LLC
Inventor: Matthew Sharifi , Victor Carbune
Abstract: A method includes receiving audio data corresponding to an utterance spoken by a user that includes a command for a digital assistant to perform a long-standing operation, activating a set of one or more warm words associated with a respective action for controlling the long-standing operation, and associating the activated set of one or more warm words with only the user. While the digital assistant is performing the long-standing operation, the method includes receiving additional audio data corresponding to an additional utterance, identifying one of the warm words from the activated set of warm words, and performing speaker verification on the additional audio data. The method further includes performing the respective action associated with the identified one of the warm words for controlling the long-standing operation when the additional utterance was spoken by the same user that is associated with the activated set of one or more warm words.
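A sketch of how the warm-word binding and speaker verification could fit together; start_operation, activate_warm_words, detect_warm_word, and verify_speaker are hypothetical helpers.

```python
def run_long_standing_operation(assistant, command_audio, user_id):
    operation = assistant.start_operation(command_audio)   # e.g. a timer or media playback
    warm_words = operation.control_words()                 # e.g. {"pause", "stop", "resume"}
    assistant.activate_warm_words(warm_words, owner=user_id)

    while operation.running():
        audio = assistant.listen()
        word = assistant.detect_warm_word(audio, warm_words)
        # Speaker verification: only the user who started the operation
        # (and owns the activated warm words) may control it.
        if word and assistant.verify_speaker(audio, user_id):
            operation.perform(word)
```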