DETERMINING RESPONSIVE CONTENT FOR A COMPOUND QUERY BASED ON A SET OF GENERATED SUB-QUERIES

    Publication Number: US20200342018A1

    Publication Date: 2020-10-29

    Application Number: US16617360

    Application Date: 2018-05-07

    Applicant: Google LLC

    Abstract: Implementations are directed to determining, based on a submitted query that is a compound query, that a set of multiple sub-queries are collectively an appropriate interpretation of the compound query. Those implementations are further directed to providing, in response to such a determination, a corresponding command for each of the sub-queries of the determined set. Each of the commands is to a corresponding agent (of one or more agents), and causes the agent to generate and provide corresponding responsive content. Those implementations are further directed to causing content to be rendered in response to the submitted query, where the content is based on the corresponding responsive content received in response to the commands.
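
    A minimal Python sketch of the flow this abstract describes: split a compound query into sub-queries, send one command per sub-query to a corresponding agent, and render content based on all of the responses. The names (split_compound_query, route, the agent callables) are hypothetical placeholders, not anything from the patent.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Command:
    agent_name: str   # which agent should handle the sub-query
    sub_query: str    # the sub-query derived from the compound query


def split_compound_query(query: str) -> List[str]:
    """Naive stand-in for the interpretation step: split on 'and'."""
    return [part.strip() for part in query.split(" and ") if part.strip()]


def route(sub_query: str) -> str:
    """Pick an agent for a sub-query (placeholder keyword routing)."""
    return "weather_agent" if "weather" in sub_query else "general_agent"


def respond_to_compound_query(query: str,
                              agents: Dict[str, Callable[[str], str]]) -> str:
    sub_queries = split_compound_query(query)
    commands = [Command(route(sq), sq) for sq in sub_queries]
    # Each command causes the corresponding agent to generate responsive
    # content; the rendered output is based on all of the responses.
    responses = [agents[c.agent_name](c.sub_query) for c in commands]
    return " ".join(responses)


if __name__ == "__main__":
    agents = {
        "weather_agent": lambda q: "It will be sunny tomorrow.",
        "general_agent": lambda q: "Your first meeting is at 9 AM.",
    }
    print(respond_to_compound_query(
        "what's the weather tomorrow and what's on my calendar", agents))
```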

    SYSTEMS, METHODS, AND APPARATUS FOR PROVIDING IMAGE SHORTCUTS FOR AN ASSISTANT APPLICATION

    Publication Number: US20200250433A1

    Publication Date: 2020-08-06

    Application Number: US16850294

    Application Date: 2020-04-16

    Applicant: Google LLC

    Abstract: Methods, apparatus, systems, and computer-readable media are set forth for generating and/or utilizing image shortcuts that cause one or more corresponding computer actions to be performed in response to determining that one or more features are present in image(s) from a camera of a computing device of a user (e.g., present in a real-time image feed from the camera). An image shortcut can be generated in response to user interface input, such as a spoken command. For example, the user interface input can direct the automated assistant to perform one or more actions in response to object(s) having certain feature(s) being present in a field of view of the camera. Subsequently, when the user directs their camera at object(s) having such feature(s), the assistant application can cause the action(s) to be automatically performed. For example, the assistant application can cause data to be presented and/or can control a remote device in accordance with the image shortcut.
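
    A short Python sketch of the registration-and-triggering flow the abstract describes: user input ties a set of image features to an action, and the action runs when those features are detected in a camera frame. ShortcutRegistry, create_from_command, and the feature labels are hypothetical stand-ins.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ImageShortcut:
    required_features: List[str]   # e.g. labels from an image classifier
    action: Callable[[], None]     # action the assistant should perform


class ShortcutRegistry:
    def __init__(self) -> None:
        self._shortcuts: List[ImageShortcut] = []

    def create_from_command(self, features: List[str],
                            action: Callable[[], None]) -> None:
        """Create a shortcut, e.g. 'when I point at the front door, lock it'."""
        self._shortcuts.append(ImageShortcut(features, action))

    def on_camera_frame(self, detected_features: List[str]) -> None:
        """Run every shortcut whose features are all present in the frame."""
        for shortcut in self._shortcuts:
            if all(f in detected_features for f in shortcut.required_features):
                shortcut.action()


if __name__ == "__main__":
    registry = ShortcutRegistry()
    registry.create_from_command(["front_door"],
                                 lambda: print("Locking the smart lock."))
    # A later camera frame whose classifier output includes "front_door"
    # triggers the registered action automatically.
    registry.on_camera_frame(["front_door", "porch_light"])
```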

    AUTOMATED ASSISTANT PERFORMANCE OF A NON-ASSISTANT APPLICATION OPERATION(S) IN RESPONSE TO A USER INPUT THAT CAN BE LIMITED TO A PARAMETER(S)

    Publication Number: US20240361982A1

    Publication Date: 2024-10-31

    Application Number: US18765101

    Application Date: 2024-07-05

    Applicant: GOOGLE LLC

    CPC classification number: G06F3/167 G10L15/22 G10L15/28 G10L2015/223

    Abstract: Implementations set forth herein relate to an automated assistant that can provide a selectable action intent suggestion when a user is accessing a third party application that is controllable via the automated assistant. The action intent can be initialized by the user without explicitly invoking the automated assistant using, for example, an invocation phrase (e.g., “Assistant . . . ”). Rather, the user can initialize performance of the corresponding action by identifying one or more action parameters. In some implementations, the selectable suggestion can indicate that a microphone is active for the user to provide a spoken utterance that identifies a parameter(s). When the action intent is initialized in response to the spoken utterance from the user, the automated assistant can control the third party application according to the action intent and any identified parameter(s).
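
    A hedged Python sketch of the suggestion-to-fulfillment flow: a suggestion is surfaced while the third-party app is in the foreground, the microphone stays open so the user only speaks the parameter, and the assistant then drives the app. ThirdPartyApp, ActionIntent, and suggest_and_fulfill are illustrative names, not APIs from the patent.

```python
from dataclasses import dataclass
from typing import Dict


@dataclass
class ActionIntent:
    name: str                  # e.g. "order_item"
    required_parameter: str    # e.g. "item_name"


class ThirdPartyApp:
    """Stand-in for an app that exposes assistant-controllable intents."""
    def perform(self, intent: ActionIntent, params: Dict[str, str]) -> None:
        print(f"App performing {intent.name} with {params}")


def suggest_and_fulfill(app: ThirdPartyApp, intent: ActionIntent,
                        spoken_utterance: str) -> None:
    # 1. Show a selectable suggestion and keep the microphone active, so the
    #    user does not have to say an invocation phrase like "Assistant ...".
    print(f"Suggestion: say an {intent.required_parameter} to {intent.name}")
    # 2. The spoken utterance only needs to identify the parameter value.
    params = {intent.required_parameter: spoken_utterance.strip()}
    # 3. Initializing the intent lets the assistant control the app directly.
    app.perform(intent, params)


if __name__ == "__main__":
    suggest_and_fulfill(ThirdPartyApp(),
                        ActionIntent("order_item", "item_name"),
                        "large iced coffee")
```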

    ORCHESTRATING EXECUTION OF A SERIES OF ACTIONS REQUESTED TO BE PERFORMED VIA AN AUTOMATED ASSISTANT

    Publication Number: US20230377572A1

    Publication Date: 2023-11-23

    Application Number: US18231112

    Application Date: 2023-08-07

    Applicant: GOOGLE LLC

    CPC classification number: G10L15/22 G06N3/08 G10L15/02 G10L2015/223

    Abstract: Implementations are set forth herein for creating an order of execution for actions that were requested by a user, via a spoken utterance to an automated assistant. The order of execution for the requested actions can be based on how each requested action can, or is predicted to, affect other requested actions. In some implementations, an order of execution for a series of actions can be determined based on an output of a machine learning model, such as a model that has been trained according to supervised learning. A particular order of execution can be selected to mitigate waste of processing, memory, and network resources—at least relative to other possible orders of execution. Using interaction data that characterizes past performances of automated assistants, certain orders of execution can be adapted over time, thereby allowing the automated assistant to learn from past interactions with one or more users.
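
    A minimal Python sketch of one way to realize the described ordering, assuming a placeholder predictor in place of the trained model: pairwise predictions of how one action affects another define precedence edges, and a topological sort yields the execution order.

```python
from graphlib import TopologicalSorter
from typing import List


def predicted_to_precede(first: str, second: str) -> bool:
    """Placeholder for a learned model scoring inter-action effects."""
    # Toy rule: actions that change device state run before media playback.
    return "volume" in first and "play" in second


def order_actions(actions: List[str]) -> List[str]:
    sorter: TopologicalSorter = TopologicalSorter()
    for action in actions:
        sorter.add(action)
    for a in actions:
        for b in actions:
            if a != b and predicted_to_precede(a, b):
                sorter.add(b, a)   # b depends on a, so a executes first
    return list(sorter.static_order())


if __name__ == "__main__":
    requested = ["play my podcast", "set volume to 50%"]
    print(order_actions(requested))  # the volume change is ordered first
```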

    Systems, methods, and apparatus for providing image shortcuts for an assistant application

    Publication Number: US11600065B2

    Publication Date: 2023-03-07

    Application Number: US17838914

    Application Date: 2022-06-13

    Applicant: Google LLC

    Abstract: Methods, apparatus, systems, and computer-readable media are set forth for generating and/or utilizing image shortcuts that cause one or more corresponding computer actions to be performed in response to determining that one or more features are present in image(s) from a camera of a computing device of a user (e.g., present in a real-time image feed from the camera). An image shortcut can be generated in response to user interface input, such as a spoken command. For example, the user interface input can direct the automated assistant to perform one or more actions in response to object(s) having certain feature(s) being present in a field of view of the camera. Subsequently, when the user directs their camera at object(s) having such feature(s), the assistant application can cause the action(s) to be automatically performed. For example, the assistant application can cause data to be presented and/or can control a remote device in accordance with the image shortcut.
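
    Because this abstract repeats the earlier image-shortcut description, the sketch below covers a different slice of it: scanning a real-time camera feed and firing the shortcut's remote-device action only when the required features first come into view, so it does not repeat on every frame. All helpers are hypothetical.

```python
from typing import Callable, Iterable, List, Set


def run_feed(frames: Iterable[List[str]], required: Set[str],
             control_remote_device: Callable[[], None]) -> None:
    was_present = False
    for detected in frames:                 # detected labels per frame
        present = required.issubset(detected)
        if present and not was_present:     # rising edge: object came into view
            control_remote_device()
        was_present = present


if __name__ == "__main__":
    frames = [["street"], ["front_door"], ["front_door"], ["street"]]
    run_feed(frames, {"front_door"},
             lambda: print("Turning on the porch light."))  # fires once
```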

    AUTOMATED ASSISTANTS WITH CONFERENCE CAPABILITIES

    Publication Number: US20230013581A1

    Publication Date: 2023-01-19

    Application Number: US17944712

    Application Date: 2022-09-14

    Applicant: GOOGLE LLC

    Abstract: Techniques are described related to enabling automated assistants to enter into a “conference mode” in which they can “participate” in meetings between multiple human participants and perform various functions described herein. In various implementations, an automated assistant implemented at least in part on conference computing device(s) may be set to a conference mode in which the automated assistant performs speech-to-text processing on multiple distinct spoken utterances, provided by multiple meeting participants, without requiring explicit invocation prior to each utterance. The automated assistant may perform semantic processing on first text generated from the speech-to-text processing of one or more of the spoken utterances, and generate, based on the semantic processing, data that is pertinent to the first text. The data may be output to the participants at conference computing device(s). The automated assistant may later determine that the meeting has concluded, and may be set to a non-conference mode.
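
    A small Python sketch of the conference-mode loop described above: while the mode is on, each utterance is transcribed and semantically processed without an invocation phrase, pertinent data is output to the conference device, and the mode ends once the meeting is detected to be over. Every helper passed in (speech_to_text, find_pertinent_data, meeting_concluded) is a hypothetical stand-in.

```python
from typing import Callable, Iterable, Optional


def conference_mode(utterances: Iterable[bytes],
                    speech_to_text: Callable[[bytes], str],
                    find_pertinent_data: Callable[[str], Optional[str]],
                    output: Callable[[str], None],
                    meeting_concluded: Callable[[str], bool]) -> None:
    for audio in utterances:                 # no per-utterance invocation needed
        text = speech_to_text(audio)
        if meeting_concluded(text):          # e.g. "let's adjourn"
            break                            # return to non-conference mode
        data = find_pertinent_data(text)     # semantic processing of the transcript
        if data:
            output(data)                     # render on the conference device(s)


if __name__ == "__main__":
    conference_mode(
        [b"audio1", b"audio2", b"audio3"],
        speech_to_text=lambda a: {b"audio1": "what was Q3 revenue?",
                                  b"audio2": "let's adjourn"}.get(a, ""),
        find_pertinent_data=lambda t: "Q3 revenue: $1.2M" if "revenue" in t else None,
        output=print,
        meeting_concluded=lambda t: "adjourn" in t)
```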
