Abstract:
In one implementation, a computer-implemented method includes receiving, at a mobile computing device, ambiguous user input that indicates more than one of a plurality of commands; and determining a current context associated with the mobile computing device that indicates where the mobile computing device is currently located. The method can further include disambiguating the ambiguous user input by selecting a command from the plurality of commands based on the current context associated with the mobile computing device; and causing output associated with performance of the selected command to be provided by the mobile computing device.
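For illustration only, a minimal Python sketch of the disambiguation step described above, assuming a hypothetical table that maps coarse location contexts to preferred commands; the context labels, command names, and the function disambiguate are invented for this sketch and are not taken from the abstract.

# Minimal sketch of context-based disambiguation (illustrative only).
# The ambiguous input maps to several candidate commands; the device's
# current context (here, a coarse location label) picks one of them.
CONTEXT_PREFERENCES = {
    "in_car": "navigate",       # while driving, prefer navigation
    "at_home": "play_media",    # at home, prefer media playback
    "at_work": "send_email",    # at work, prefer messaging
}

def disambiguate(candidate_commands, current_context):
    """Select one command from the candidates using the current context."""
    preferred = CONTEXT_PREFERENCES.get(current_context)
    if preferred in candidate_commands:
        return preferred
    # Fall back to the first candidate when the context gives no hint.
    return candidate_commands[0]

if __name__ == "__main__":
    ambiguous_input_candidates = ["navigate", "play_media"]
    print(disambiguate(ambiguous_input_candidates, "in_car"))  # -> navigate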
Abstract:
A system and machine-implemented method for providing a visual cue of overscrolling displayed content on an electronic device. When the end of a scrollable page or content has been reached, the visual cue corresponds to the user's physical scrolling input. The content in the window is effectively attached to the window so that when the end of the content is reached in one direction, the window containing the content is pulled in the same scrolling direction. The pulling in the scrolling direction occurs in a logarithmically decreasing manner, providing a tactile-like visual effect that the outer frame of the window is resisting the attempt to scroll further in the scrolling direction. The visual resistance effect may include squishing or stretching portions of the window without distorting the content within the window.
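The "logarithmically decreasing" pull can be sketched with a simple function in which each additional unit of scroll input past the end of the content moves the window less than the previous one; the scale constant and the name overscroll_offset below are illustrative assumptions, not details from the abstract.

import math

def overscroll_offset(overscroll_px, scale=20.0):
    """Return how far the window is pulled for a given overscroll distance.

    The offset grows logarithmically, so each extra pixel of scrolling past
    the end of the content moves the window less than the last one, giving
    the visual impression that the frame resists further scrolling.
    """
    return scale * math.log1p(overscroll_px / scale)

if __name__ == "__main__":
    for px in (0, 10, 50, 200, 800):
        print(px, round(overscroll_offset(px), 1))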
Abstract:
In general, the subject matter described in this specification can be embodied in methods, systems, and program products for receiving user input that defines a search query, and providing the search query to a server system. Information that a search engine system determined was responsive to the search query is received at a computing device. The computing device is identified as being in a first state, and a first output mode for audibly outputting at least a portion of the information is selected. The first output mode is selected from a collection of the first output mode and a second output mode. The second output mode is selected in response to the computing device being in a second state and is for visually outputting at least the portion of the information and not audibly outputting at least the portion of the information. At least the portion of the information is audibly output.
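A minimal sketch of the output-mode selection, assuming two hypothetical device states (a docked-in-car state that favors the audible mode, and any other state that favors the visual mode); the state labels and function names are invented for illustration.

AUDIBLE, VISUAL = "audible", "visual"

def select_output_mode(device_state):
    """Pick how search results are presented based on the device's state.

    In the first state (e.g., the device is docked in a car), results are
    read aloud; in the second state they are shown on screen only.
    """
    return AUDIBLE if device_state == "docked_in_car" else VISUAL

def present(results_snippet, device_state):
    if select_output_mode(device_state) == AUDIBLE:
        print(f"[speaking] {results_snippet}")   # stand-in for text-to-speech
    else:
        print(f"[displaying] {results_snippet}")

if __name__ == "__main__":
    present("Weather today: sunny, 72F", "docked_in_car")
    present("Weather today: sunny, 72F", "handheld")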
Abstract:
A system and method for processing a touch input are provided. An initial press action that is associated with a number of simultaneous touches is detected on a touch interface. One or more commands that are mapped to one or more sequences of user interaction are determined based on the number of simultaneous touches, where each of the one or more sequences of user interaction is initiated by the initial press action. One or more graphical interface components are provided for display, where each of the one or more graphical interface components corresponds to a different one of the one or more sequences of user interaction. Each of the one or more graphical interface components indicates at least part of the corresponding sequence of user interaction and indicates the respective command mapped to the corresponding sequence of user interaction.
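One way to picture the mapping from an initial press to candidate interaction sequences is the toy table below; the gesture sequences, commands, and the function hints_for_initial_press are hypothetical examples, not taken from the abstract.

# Illustrative mapping from the number of simultaneous touches in the initial
# press to the interaction sequences (and commands) that can follow it.
GESTURE_TABLE = {
    1: [("press, swipe up", "scroll"), ("press, hold", "context menu")],
    2: [("press, pinch", "zoom out"), ("press, spread", "zoom in")],
    3: [("press, swipe left", "switch app")],
}

def hints_for_initial_press(touch_count):
    """Return the on-screen hints for sequences starting with this press."""
    return [
        f"{sequence} -> {command}"
        for sequence, command in GESTURE_TABLE.get(touch_count, [])
    ]

if __name__ == "__main__":
    for hint in hints_for_initial_press(2):
        print(hint)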
Abstract:
A system and machine-implemented method for facilitating an application launcher providing direct access to one or more items. The method includes identifying one or more items, maintained at one or more sources accessible by a user at the computing device, that meet search criteria specified by the user; determining an application associated with each of the one or more items that facilitates access to the item; generating an instance of each of the one or more items that facilitates direct user interaction with the item, where the user is able to interact with the item directly from the instance of the item; and providing the instance of each of the one or more items for display to the user at the computing device in response to the request.
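A rough sketch of the launcher flow, assuming a hypothetical in-memory index of items, each already tagged with its source and the application that opens it; the Item fields and launcher_instances are illustrative only.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    source: str        # e.g., "mail", "drive", "calendar"
    app: str           # application that opens the item

# Hypothetical index of items from sources the user can access.
INDEX = [
    Item("Quarterly report.doc", "drive", "docs_app"),
    Item("Lunch with Sam", "calendar", "calendar_app"),
    Item("Re: quarterly numbers", "mail", "mail_app"),
]

def launcher_instances(query):
    """Return displayable instances for items matching the search criteria."""
    matches = [item for item in INDEX if query.lower() in item.title.lower()]
    # Each "instance" pairs the item with the action that opens it directly.
    return [{"label": item.title, "open_with": item.app} for item in matches]

if __name__ == "__main__":
    for instance in launcher_instances("quarterly"):
        print(instance)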
Abstract:
An example method includes, responsive to receiving an indication of an incoming communication, identifying, by a computing device, first and second portions of an image that are associated with respective first and second portions of a face of a human user, wherein the human user has been determined to be an originator of the incoming communication. The example method further includes outputting, by the computing device and for display, the first and second portions of the image that are associated with the respective first and second portions of the face of the human user, and outputting, by the computing device and for display, message content associated with the incoming communication, such that the message content as displayed at least partially overlays the second portion of the image.
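A minimal sketch of the image-split and overlay bookkeeping, assuming the face region is divided at a fixed fraction of the image height rather than by real face detection; the Rect type, the fraction, and split_portrait are invented for illustration.

from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

def split_portrait(image_rect, face_top_fraction=0.55):
    """Split the caller's image into a portion kept clear (upper face) and a
    portion that incoming message text may overlay (lower face area).

    A real implementation would locate the face; here the split point is a
    fixed fraction of the image height, purely for illustration.
    """
    split_y = image_rect.y + int(image_rect.h * face_top_fraction)
    first = Rect(image_rect.x, image_rect.y, image_rect.w, split_y - image_rect.y)
    second = Rect(image_rect.x, split_y, image_rect.w,
                  image_rect.y + image_rect.h - split_y)
    return first, second

if __name__ == "__main__":
    first, second = split_portrait(Rect(0, 0, 400, 400))
    print("keep clear:", first)
    print("overlay message here:", second)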
Abstract:
The subject matter of this specification can be implemented in, among other things, a computer-implemented method for correcting words in transcribed text including receiving speech audio data from a microphone. The method further includes sending the speech audio data to a transcription system. The method further includes receiving a word lattice transcribed from the speech audio data by the transcription system. The method further includes presenting one or more transcribed words from the word lattice. The method further includes receiving a user selection of at least one of the presented transcribed words. The method further includes presenting one or more alternate words from the word lattice for the selected transcribed word. The method further includes receiving a user selection of at least one of the alternate words. The method further includes replacing the selected transcribed word in the presented transcribed words with the selected alternate word.
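The word-lattice correction flow can be sketched with a toy lattice in which each position lists the best hypothesis followed by its alternates; the lattice contents and helper functions below are illustrative, not the transcription system's actual representation.

# Toy "word lattice": for each position in the transcript, the recognizer's
# best hypothesis first, followed by alternate words for that position.
lattice = [
    ["I"],
    ["want", "went", "won't"],
    ["to", "two"],
    ["meet", "eat", "meat"],
    ["you"],
]

def best_transcript(lattice):
    return [alternatives[0] for alternatives in lattice]

def alternates_for(lattice, position):
    """Alternate words the user can choose for the selected position."""
    return lattice[position][1:]

def replace_word(transcript, position, replacement):
    corrected = list(transcript)
    corrected[position] = replacement
    return corrected

if __name__ == "__main__":
    words = best_transcript(lattice)
    print(" ".join(words))                    # I want to meet you
    print(alternates_for(lattice, 3))         # ['eat', 'meat']
    print(" ".join(replace_word(words, 3, "eat")))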
Abstract:
A computer-implemented method of multisensory speech detection is disclosed. The method comprises determining an orientation of a mobile device and determining an operating mode of the mobile device based on the orientation of the mobile device. The method further includes identifying speech detection parameters that specify when speech detection begins or ends based on the determined operating mode and detecting speech from a user of the mobile device based on the speech detection parameters.
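A minimal sketch of orientation-driven mode selection, assuming made-up pitch thresholds, mode names ("phone", "walkie_talkie", "idle"), and a small table of per-mode start/end parameters; none of these values come from the abstract.

def operating_mode(pitch_degrees):
    """Map device orientation (pitch angle) to an operating mode.

    The angle thresholds are illustrative: roughly vertical near the ear is
    treated as "phone" mode, tilted in front of the mouth as "walkie_talkie".
    """
    if pitch_degrees > 60:
        return "phone"
    if pitch_degrees > 20:
        return "walkie_talkie"
    return "idle"

# Per-mode parameters that say when speech detection begins or ends.
DETECTION_PARAMS = {
    "phone":         {"start": "on_pose_reached", "end": "on_pose_left"},
    "walkie_talkie": {"start": "on_button_press", "end": "on_button_release"},
    "idle":          {"start": None, "end": None},
}

def speech_detection_parameters(pitch_degrees):
    return DETECTION_PARAMS[operating_mode(pitch_degrees)]

if __name__ == "__main__":
    print(speech_detection_parameters(75))   # phone-style endpointing
    print(speech_detection_parameters(35))   # push-to-talk style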
Abstract:
In one implementation, a computer-implemented method includes detecting a current context associated with a mobile computing device and determining, based on the current context, whether to switch the mobile computing device from a current mode of operation to a second mode of operation during which the mobile computing device monitors ambient sounds for voice input that indicates a request to perform an operation. The method can further include, in response to determining whether to switch to the second mode of operation, activating one or more microphones and a speech analysis subsystem associated with the mobile computing device so that the mobile computing device receives a stream of audio data. The method can also include providing output on the mobile computing device that is responsive to voice input that is detected in the stream of audio data and that indicates a request to perform an operation.
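A minimal sketch of the context-triggered listening mode, assuming hypothetical context labels and substituting an iterable of already-recognized phrases for the microphone and speech-analysis subsystem.

def should_listen(context):
    """Decide from the current context whether to enter the listening mode.

    The contexts and rule below are illustrative, e.g. listen hands-free
    while the device is charging in a car dock or sitting face-up at home.
    """
    return context in {"car_dock_charging", "home_face_up"}

def run_listening_mode(audio_stream):
    """Consume an audio stream and respond to recognized voice requests.

    `audio_stream` here is just an iterable of already-recognized phrases,
    standing in for a microphone plus speech-analysis subsystem.
    """
    for phrase in audio_stream:
        if phrase.startswith("please "):      # toy trigger for a request
            operation = phrase[len("please "):]
            print("performing operation:", operation)

if __name__ == "__main__":
    if should_listen("car_dock_charging"):
        run_listening_mode(["background chatter", "please play music"])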