-
11.
Publication No.: US12067040B2
Publication Date: 2024-08-20
Application No.: US18103291
Filing Date: 2023-01-30
Applicant: Google LLC
Inventor: Joseph Lange , Mugurel Ionut Andreica , Marcin Nowak-Przygodzki
IPC: G06F16/33 , G06F16/215 , G06F16/332 , G06F16/338 , G10L15/22 , G10L15/26
CPC classification number: G06F16/3344 , G06F16/215 , G06F16/3329 , G06F16/338 , G10L15/22 , G10L15/26
Abstract: Implementations are directed to determining, based on a submitted query that is a compound query, that a set of multiple sub-queries are collectively an appropriate interpretation of the compound query. Those implementations are further directed to providing, in response to such a determination, a corresponding command for each of the sub-queries of the determined set. Each of the commands is to a corresponding agent (of one or more agents), and causes the agent to generate and provide corresponding responsive content. Those implementations are further directed to causing content to be rendered in response to the submitted query, where the content is based on the corresponding responsive content received in response to the commands.
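As a rough sketch of the flow this abstract describes (the splitting heuristic, function names, and keyword-based agent routing are illustrative assumptions, not the patented method):

```python
# Hypothetical sketch: interpret a compound query as a set of sub-queries,
# issue a command per sub-query to a matching agent, and collect the
# responsive content. All heuristics below are illustrative only.

def split_compound_query(query: str) -> list[str]:
    """Naively split a compound query on the conjunction 'and'."""
    parts = [p.strip() for p in query.split(" and ")]
    return [p for p in parts if p]

def dispatch(sub_queries: list[str], agents: dict) -> list[str]:
    """Send each sub-query to the first matching agent; collect responses."""
    responses = []
    for sq in sub_queries:
        agent = next((fn for kw, fn in agents.items() if kw in sq),
                     lambda q: f"no agent for: {q}")
        responses.append(agent(sq))
    return responses

agents = {
    "weather": lambda q: "It is sunny.",
    "timer": lambda q: "Timer set.",
}
subs = split_compound_query("what's the weather and set a timer")
print(dispatch(subs, agents))  # ['It is sunny.', 'Timer set.']
```

The rendered content would then be composed from the per-agent responses, as the abstract notes.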
-
12.
Publication No.: US11830487B2
Publication Date: 2023-11-28
Application No.: US17235104
Filing Date: 2021-04-20
Applicant: GOOGLE LLC
Inventor: Joseph Lange , Marcin Nowak-Przygodzki
CPC classification number: G10L15/22
Abstract: Implementations set forth herein relate to an automated assistant that can operate as an interface between a user and a separate application to search application content of the separate application. The automated assistant can interact with existing search filter features of another application and can also adapt in circumstances when certain filter parameters are not directly controllable at a search interface of the application. For instance, when a user requests that a search operation be performed using certain terms, those terms may refer to content filters that may not be available at a search interface of the application. However, the automated assistant can generate an assistant input based on those content filters in order to ensure that any resulting search results will be filtered accordingly. The assistant input can then be submitted into a search field of the application and a search operation can be executed.
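A minimal sketch of the adaptation the abstract describes (all names are hypothetical; the patent does not specify an implementation): filter parameters the application's search interface does not expose directly can be folded into the text submitted to its search field:

```python
# Hypothetical sketch: split requested filters into those the search UI
# supports directly and those that must be folded into the query string.

def build_assistant_input(terms: str, filters: dict, supported: set):
    """Return (search text, UI filters). Filters the search interface
    cannot apply directly are appended to the query text instead."""
    ui_filters = {k: v for k, v in filters.items() if k in supported}
    inline = " ".join(f"{k}:{v}" for k, v in filters.items()
                      if k not in supported)
    text = f"{terms} {inline}".strip()
    return text, ui_filters

text, ui = build_assistant_input(
    "running shoes",
    {"color": "blue", "max_price": "50"},
    supported={"color"},
)
print(text)  # running shoes max_price:50
print(ui)    # {'color': 'blue'}
```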
-
13.
Publication No.: US20230206628A1
Publication Date: 2023-06-29
Application No.: US18117798
Filing Date: 2023-03-06
Applicant: GOOGLE LLC
Inventor: Marcin Nowak-Przygodzki , Gökhan Bakir
IPC: G06V20/20 , G06F16/9032 , G06F3/01 , G06F3/03 , G06F3/00 , H04N23/63 , G06F9/451 , G06F16/58 , G06F3/0481 , G06F3/16
CPC classification number: G06V20/20 , G06F16/9032 , G06F3/017 , G06F3/0304 , G06F3/005 , H04N23/63 , G06F9/453 , G06F16/5866 , G06F3/0481 , G06F3/167
Abstract: Methods, apparatus, systems, and computer-readable media are set forth for generating and/or utilizing image shortcuts that cause one or more corresponding computer actions to be performed in response to determining that one or more features are present in image(s) from a camera of a computing device of a user (e.g., present in a real-time image feed from the camera). An image shortcut can be generated in response to user interface input, such as a spoken command. For example, the user interface input can direct the automated assistant to perform one or more actions in response to object(s) having certain feature(s) being present in a field of view of the camera. Subsequently, when the user directs their camera at object(s) having such feature(s), the assistant application can cause the action(s) to be automatically performed. For example, the assistant application can cause data to be presented and/or can control a remote device in accordance with the image shortcut.
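The image-shortcut mechanism might be sketched as follows (feature names and the detection interface are illustrative assumptions; real feature detection would come from a vision model):

```python
# Hypothetical sketch of an "image shortcut": when a registered feature
# is reported as present in a camera frame, run the associated action.

shortcuts: dict = {}

def register_shortcut(feature: str, action) -> None:
    """Associate a camera-detectable feature with an action."""
    shortcuts[feature] = action

def on_camera_frame(detected_features: list[str]) -> list[str]:
    """Run the action for every registered feature seen in the frame."""
    return [shortcuts[f]() for f in detected_features if f in shortcuts]

# E.g., a spoken command could register this mapping.
register_shortcut("front_door", lambda: "Porch light on")
print(on_camera_frame(["front_door", "tree"]))  # ['Porch light on']
```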
-
14.
Publication No.: US20230050054A1
Publication Date: 2023-02-16
Application No.: US17974086
Filing Date: 2022-10-26
Applicant: GOOGLE LLC
Inventor: Gökhan Bakir , Andre Elisseeff , Torsten Marek , João Paulo Pagaime da Silva , Mathias Carlen , Dana Ritter , Lukasz Suder , Ernest Galbrun , Matthew Stokes , Marcin Nowak-Przygodzki , Mugurel-Ionut Andreica , Marius Dumitran
IPC: G06F16/9535 , G06F16/9032
Abstract: Implementations are described herein for analyzing existing interactive websites to facilitate automatic engagement with those websites, e.g., by automated assistants or via other user interfaces, with minimal effort from the hosts of those websites. For example, in various implementations, techniques described herein may be used to abstract, validate, maintain, generalize, extend and/or distribute individual actions and “traces” of actions that are useable to navigate through various interactive websites. Additionally, techniques are described herein for leveraging these actions and/or traces to automate aspects of interaction with a third party website. For example, in some implementations, techniques described herein may enable users to engage with an automated assistant (via a spoken or typed dialog session) to interact with the third party website without requiring the user to visually interact with the third party website directly and without requiring the third party to implement their own third party agent.
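The "trace" replay idea might look roughly like this (the action vocabulary and selector scheme are invented for illustration; the patent describes the concept, not this code):

```python
# Hypothetical sketch: replay a recorded trace of website actions,
# substituting user-provided values into typed fields.

from dataclasses import dataclass

@dataclass
class Action:
    kind: str       # "click", "type", or "submit" (illustrative vocabulary)
    target: str     # element selector (illustrative)
    value: str = ""

def replay(trace: list, fill: dict) -> list[str]:
    """Replay a trace; 'type' steps take their value from the user's input."""
    log = []
    for a in trace:
        value = fill.get(a.target, a.value) if a.kind == "type" else a.value
        log.append(f"{a.kind} {a.target} {value}".strip())
    return log

trace = [Action("click", "#search"),
         Action("type", "#query"),
         Action("submit", "#form")]
print(replay(trace, {"#query": "table for two"}))
# ['click #search', 'type #query table for two', 'submit #form']
```

A dialog session with the assistant would supply the `fill` values, so the user never touches the site directly.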
-
15.
Publication No.: US11557119B2
Publication Date: 2023-01-17
Application No.: US17838914
Filing Date: 2022-06-13
Applicant: Google LLC
Inventor: Marcin Nowak-Przygodzki , Gökhan Bakir
IPC: G06F3/01 , G06F9/451 , G06F3/0481 , G06F3/16 , H04N5/232 , G06V20/20 , G06F16/9032 , G06F3/03 , G06F3/00 , G06F16/58
Abstract: Methods, apparatus, systems, and computer-readable media are set forth for generating and/or utilizing image shortcuts that cause one or more corresponding computer actions to be performed in response to determining that one or more features are present in image(s) from a camera of a computing device of a user (e.g., present in a real-time image feed from the camera). An image shortcut can be generated in response to user interface input, such as a spoken command. For example, the user interface input can direct the automated assistant to perform one or more actions in response to object(s) having certain feature(s) being present in a field of view of the camera. Subsequently, when the user directs their camera at object(s) having such feature(s), the assistant application can cause the action(s) to be automatically performed. For example, the assistant application can cause data to be presented and/or can control a remote device in accordance with the image shortcut.
-
16.
Publication No.: US20220335935A1
Publication Date: 2022-10-20
Application No.: US17235104
Filing Date: 2021-04-20
Applicant: GOOGLE LLC
Inventor: Joseph Lange , Marcin Nowak-Przygodzki
IPC: G10L15/22
Abstract: Implementations set forth herein relate to an automated assistant that can operate as an interface between a user and a separate application to search application content of the separate application. The automated assistant can interact with existing search filter features of another application and can also adapt in circumstances when certain filter parameters are not directly controllable at a search interface of the application. For instance, when a user requests that a search operation be performed using certain terms, those terms may refer to content filters that may not be available at a search interface of the application. However, the automated assistant can generate an assistant input based on those content filters in order to ensure that any resulting search results will be filtered accordingly. The assistant input can then be submitted into a search field of the application and a search operation can be executed.
-
17.
Publication No.: US20220334794A1
Publication Date: 2022-10-20
Application No.: US17233223
Filing Date: 2021-04-16
Applicant: Google LLC
Inventor: Joseph Lange , Marcin Nowak-Przygodzki
Abstract: Implementations set forth herein relate to an automated assistant that can provide a selectable action intent suggestion when a user is accessing a third party application that is controllable via the automated assistant. The action intent can be initialized by the user without explicitly invoking the automated assistant using, for example, an invocation phrase (e.g., “Assistant . . . ”). Rather, the user can initialize performance of the corresponding action by identifying one or more action parameters. In some implementations, the selectable suggestion can indicate that a microphone is active for the user to provide a spoken utterance that identifies a parameter(s). When the action intent is initialized in response to the spoken utterance from the user, the automated assistant can control the third party application according to the action intent and any identified parameter(s).
-
18.
Publication No.: US20220309788A1
Publication Date: 2022-09-29
Application No.: US17838914
Filing Date: 2022-06-13
Applicant: Google LLC
Inventor: Marcin Nowak-Przygodzki , Gökhan Bakir
IPC: G06V20/20 , G06F16/9032 , G06F3/01 , G06F3/03 , G06F3/00 , G06F9/451 , G06F16/58 , G06F3/0481 , G06F3/16 , H04N5/232
Abstract: Methods, apparatus, systems, and computer-readable media are set forth for generating and/or utilizing image shortcuts that cause one or more corresponding computer actions to be performed in response to determining that one or more features are present in image(s) from a camera of a computing device of a user (e.g., present in a real-time image feed from the camera). An image shortcut can be generated in response to user interface input, such as a spoken command. For example, the user interface input can direct the automated assistant to perform one or more actions in response to object(s) having certain feature(s) being present in a field of view of the camera. Subsequently, when the user directs their camera at object(s) having such feature(s), the assistant application can cause the action(s) to be automatically performed. For example, the assistant application can cause data to be presented and/or can control a remote device in accordance with the image shortcut.
-
19.
Publication No.: US20190132265A1
Publication Date: 2019-05-02
Application No.: US15833454
Filing Date: 2017-12-06
Applicant: Google LLC
Inventor: Marcin Nowak-Przygodzki , Jan Lamecki , Behshad Behzadi
Abstract: Techniques are described related to enabling automated assistants to enter into a “conference mode” in which they can “participate” in meetings between multiple human participants and perform various functions described herein. In various implementations, an automated assistant implemented at least in part on conference computing device(s) may be set to a conference mode in which the automated assistant performs speech-to-text processing on multiple distinct spoken utterances, provided by multiple meeting participants, without requiring explicit invocation prior to each utterance. The automated assistant may perform semantic processing on first text generated from the speech-to-text processing of one or more of the spoken utterances, and generate, based on the semantic processing, data that is pertinent to the first text. The data may be output to the participants at conference computing device(s). The automated assistant may later determine that the meeting has concluded, and may be set to a non-conference mode.
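The conference-mode state machine might be sketched as below (class and method names are hypothetical, and the end-of-meeting trigger is a stand-in for whatever signal the implementation actually uses):

```python
# Hypothetical sketch of conference mode: while the mode is on, every
# utterance is transcribed without a per-utterance wake word; a closing
# phrase (illustrative trigger) returns the assistant to non-conference mode.

class ConferenceAssistant:
    def __init__(self):
        self.conference_mode = False
        self.transcript = []

    def set_conference_mode(self, on: bool) -> None:
        self.conference_mode = on

    def on_utterance(self, speaker: str, text: str) -> None:
        if self.conference_mode:
            # No explicit invocation required before each utterance.
            self.transcript.append((speaker, text))
        if "meeting adjourned" in text.lower():
            self.set_conference_mode(False)

a = ConferenceAssistant()
a.set_conference_mode(True)
a.on_utterance("Alice", "Let's review the budget.")
a.on_utterance("Bob", "Meeting adjourned.")
print(a.conference_mode)   # False
print(len(a.transcript))   # 2
```

Semantic processing of the accumulated transcript (to surface pertinent data to participants) would sit on top of this loop.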
-
20.
Publication No.: US20250124707A1
Publication Date: 2025-04-17
Application No.: US18999889
Filing Date: 2024-12-23
Applicant: GOOGLE LLC
Inventor: Marcin Nowak-Przygodzki , Gökhan Bakir
IPC: G06V20/20 , G06F3/01 , G06F3/0482 , G06F3/04886 , G06F16/487 , G06F16/9032 , G06Q30/02 , G06V20/68 , H04N23/63 , H04N23/667
Abstract: Techniques described herein enable a user to interact with an automated assistant and obtain relevant output from the automated assistant without requiring arduous typed input to be provided by the user and/or without requiring the user to provide spoken input that could cause privacy concerns (e.g., if other individuals are nearby). The assistant application can operate in multiple different image conversation modes in which the assistant application is responsive to various objects in a field of view of the camera. The image conversation modes can be suggested to the user when a particular object is detected in the field of view of the camera. When the user selects an image conversation mode, the assistant application can thereafter provide output, for presentation, that is based on the selected image conversation mode and that is based on object(s) captured by image(s) of the camera.
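One way to picture the mode-suggestion loop (the mode names, detected-object categories, and lookup scheme are all invented for this sketch):

```python
# Hypothetical sketch: suggest an image conversation mode when a matching
# object category is detected, then render output for the selected mode.

MODES = {
    "food": ("calorie mode", lambda obj: f"Estimated calories for {obj}"),
    "landmark": ("info mode", lambda obj: f"Facts about {obj}"),
}

def suggest_modes(detected: list[str]) -> list[str]:
    """Offer a mode for every detected object category we know about."""
    return [MODES[o][0] for o in detected if o in MODES]

def run_mode(mode_name: str, obj: str) -> str:
    """Produce output for the user-selected mode."""
    for label, fn in MODES.values():
        if label == mode_name:
            return fn(obj)
    return "no such mode"

print(suggest_modes(["food"]))               # ['calorie mode']
print(run_mode("calorie mode", "an apple"))  # Estimated calories for an apple
```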
-