Abstract:
Methods, systems, and apparatus for voice authentication and command. In an aspect, a method comprises: receiving, by a data processing apparatus that is operating in a locked mode, audio data that encodes an utterance of a user, wherein the locked mode prevents the data processing apparatus from performing at least one action; providing, while the data processing apparatus is operating in the locked mode, the audio data to a voice biometric engine and a voice action engine; receiving, while the data processing apparatus is operating in the locked mode, an indication from the voice biometric engine that the user has been biometrically authenticated; and in response to receiving the indication, triggering the voice action engine to process a voice action that is associated with the utterance.
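The flow in the abstract above, in which audio is provided to both engines while the device remains locked and the action fires only after biometric authentication, can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; the `VoiceBiometricEngine` and `VoiceActionEngine` classes here are toy stand-ins named after the roles in the abstract, and the dict-based audio representation is an assumption.

```python
class VoiceBiometricEngine:
    """Toy stand-in: 'authenticates' when the speaker id matches enrollment."""
    def __init__(self, enrolled_speaker):
        self.enrolled_speaker = enrolled_speaker

    def authenticate(self, audio):
        return audio.get("speaker") == self.enrolled_speaker


class VoiceActionEngine:
    """Toy stand-in: holds a pending voice action parsed from the utterance."""
    def __init__(self):
        self.pending = None

    def prepare(self, audio):
        # Speech processing can begin while authentication is still running.
        self.pending = audio.get("utterance")

    def process(self):
        return f"performed action: {self.pending}"


def handle_utterance_while_locked(audio, biometric, action_engine):
    # Provide the audio data to both engines while still in locked mode.
    action_engine.prepare(audio)
    if biometric.authenticate(audio):
        # Indication received from the biometric engine: trigger the action.
        return action_engine.process()
    # Authentication failed; the locked mode still prevents the action.
    return "device remains locked"
```

Note that handing the audio to both engines up front lets recognition and authentication overlap, so the action can be triggered immediately once the biometric indication arrives.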
Abstract:
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for receiving, from a user device, data indicating a user performed a user input gesture combining a first display object in a plurality of display objects with a second display object in the plurality of display objects; identifying attributes that are associated with both the first display object and the second display object; and performing a search based on the attributes.
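The combine-gesture search described above can be sketched as a set intersection over the two display objects' attributes, followed by a lookup against an index. This is a hedged sketch under stated assumptions: the attribute sets, the dict-of-sets index, and the function names are all illustrative, not drawn from the patent.

```python
def shared_attributes(obj_a, obj_b):
    # Attributes associated with BOTH display objects: set intersection.
    return obj_a["attributes"] & obj_b["attributes"]

def search(index, attributes):
    # Toy search: return every indexed item tagged with all shared attributes.
    return [item for item, tags in index.items() if attributes <= tags]

def handle_combine_gesture(obj_a, obj_b, index):
    # User input gesture combined the two display objects; search on the
    # attributes they have in common.
    return search(index, shared_attributes(obj_a, obj_b))
```

For example, combining a display object tagged `{"beach", "sunset", "2021"}` with one tagged `{"beach", "dog", "2021"}` would search on `{"beach", "2021"}`.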
Abstract:
A method performed by one or more processing devices includes receiving data indicative of dictated speech that has been spoken by a user during speech dictation; causing speech recognition to be performed on the data to obtain units of text; selecting a unit from the units, wherein the unit selected corresponds to a portion of the data received at a time that is more recent relative to times at which others of the units were received; and generating, based on an output of the speech recognition, data for a graphical user interface that, when rendered on a display device, causes the graphical user interface to display: a visual representation of the dictated speech, wherein the visual representation includes a visual indicator of the unit selected; and a control for performing dictation correction on the unit selected in real time during the speech dictation.
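The selection and GUI-data steps above can be sketched as follows: pick the unit whose receipt time is most recent, then emit data that marks that unit visually and attaches a correction control to it. This is an illustrative sketch only; the `(text, timestamp)` unit representation, the dict-shaped GUI payload, and the control's `"edit"`/`"replace"` actions are assumptions, not details from the patent.

```python
def select_most_recent_unit(units):
    # units: list of (text, receipt_time) pairs produced by speech
    # recognition; select the unit received most recently.
    return max(units, key=lambda unit: unit[1])

def build_gui_data(units):
    selected_text, _ = select_most_recent_unit(units)
    return {
        # Visual representation of the dictated speech so far.
        "transcript": " ".join(text for text, _ in units),
        # Visual indicator of the unit selected (e.g. a highlight target).
        "highlighted_unit": selected_text,
        # Control for correcting the selected unit during dictation.
        "correction_control": {
            "target": selected_text,
            "actions": ["edit", "replace"],
        },
    }
```

Keeping the correction control bound to the most recent unit is what makes in-dictation correction practical: the user can fix a misrecognized word immediately without stopping the dictation session.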