-
Publication Number: US20230298585A1
Publication Date: 2023-09-21
Application Number: US18202236
Application Date: 2023-05-25
Applicant: GOOGLE LLC
Inventor: Lucas Mirelmann , Zaheed Sabur , Bohdan Vlasyuk , Marie Patriarche Bledowski , Sergey Nazarov , Denis Burakov , Behshad Behzadi , Michael Golikov , Steve Cheng , Daniel Cotting , Mario Bertschler
CPC classification number: G10L15/22 , G10L15/083
Abstract: Implementations herein relate to pre-caching data, corresponding to predicted interactions between a user and an automated assistant, using data characterizing previous interactions between the user and the automated assistant. An interaction can be predicted based on details of a current interaction between the user and an automated assistant. One or more predicted interactions can be initialized, and/or any corresponding data pre-cached, prior to the user commanding the automated assistant in furtherance of the predicted interaction. Interaction predictions can be generated using a user-parameterized machine learning model, which can be used when processing input(s) that characterize a recent user interaction with the automated assistant. The predicted interaction(s) can include action(s) to be performed by third-party application(s).
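As a rough illustration of the idea, the sketch below pairs a toy transition-count predictor (standing in for the user-parameterized machine learning model) with a simple in-memory cache. Every name and the counting heuristic are assumptions made for illustration, not the disclosed implementation.

```python
# Hypothetical sketch: predicting the next assistant interaction from the
# current one and pre-caching its fulfillment data. All names are illustrative.
from collections import Counter, defaultdict


class InteractionPredictor:
    """Stands in for the user-parameterized model described in the abstract."""

    def __init__(self):
        # Per-user co-occurrence counts of (current action -> next action).
        self.transitions = defaultdict(Counter)

    def observe(self, user_id, current_action, next_action):
        self.transitions[user_id][(current_action, next_action)] += 1

    def predict(self, user_id, current_action, top_k=2):
        counts = self.transitions[user_id]
        candidates = Counter(
            {nxt: n for (cur, nxt), n in counts.items() if cur == current_action}
        )
        return [action for action, _ in candidates.most_common(top_k)]


def precache_predicted_interactions(predictor, cache, user_id, current_action, fetch):
    """Fetch and store data for predicted follow-up actions before the user asks."""
    for action in predictor.predict(user_id, current_action):
        if action not in cache:
            cache[action] = fetch(action)  # e.g. a third-party app response


if __name__ == "__main__":
    predictor = InteractionPredictor()
    predictor.observe("u1", "play_music", "set_volume")
    predictor.observe("u1", "play_music", "set_volume")
    predictor.observe("u1", "play_music", "next_track")

    cache = {}
    precache_predicted_interactions(
        predictor, cache, "u1", "play_music", fetch=lambda a: f"prefetched:{a}"
    )
    print(cache)  # both predicted follow-ups are now cached locally
```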
-
Publication Number: US20230252989A1
Publication Date: 2023-08-10
Application Number: US18136189
Application Date: 2023-04-18
Applicant: GOOGLE LLC
Inventor: Daniel Cotting , Zaheed Sabur , Lan Huo , Bryan Christopher Horling , Behshad Behzadi , Lucas Mirelmann , Michael Golikov , Denis Burakov , Steve Cheng , Bohdan Vlasyuk , Sergey Nazarov , Mario Bertschler , Luv Kothari
IPC: G10L15/22 , G06F3/16 , G10L15/18 , G10L15/30 , H04L67/568
CPC classification number: G10L15/22 , G06F3/165 , G06F3/167 , G10L15/1815 , G10L15/30 , H04L67/568 , G10L2015/223 , H04L67/01
Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries—and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
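The selection step can be pictured with the short sketch below: a remote system greedily picks the highest-relevance candidate entries that fit a per-device size budget. The CacheEntry fields, the relevance scores, and the greedy budgeted selection are illustrative assumptions, not the claimed method.

```python
# Illustrative sketch (not the patented method): a remote system ranking a
# superset of candidate proactive cache entries and selecting a per-device
# subset under a size budget. Field names are assumptions.
from dataclasses import dataclass


@dataclass
class CacheEntry:
    key: str            # e.g. normalized query text
    payload: bytes      # pre-rendered assistant response
    relevance: float    # device-specific relevance score


def select_entries_for_device(candidates, budget_bytes):
    """Greedily keep the highest-relevance entries that fit the budget."""
    chosen, used = [], 0
    for entry in sorted(candidates, key=lambda e: e.relevance, reverse=True):
        if used + len(entry.payload) <= budget_bytes:
            chosen.append(entry)
            used += len(entry.payload)
    return chosen


if __name__ == "__main__":
    superset = [
        CacheEntry("weather today", b"x" * 300, 0.9),
        CacheEntry("commute time", b"x" * 500, 0.7),
        CacheEntry("sports scores", b"x" * 400, 0.2),
    ]
    subset = select_entries_for_device(superset, budget_bytes=800)
    print([e.key for e in subset])  # ['weather today', 'commute time']
```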
-
Publication Number: US11664028B2
Publication Date: 2023-05-30
Application Number: US17569811
Application Date: 2022-01-06
Applicant: GOOGLE LLC
Inventor: Lucas Mirelmann , Zaheed Sabur , Bohdan Vlasyuk , Marie Patriarche Bledowski , Sergey Nazarov , Denis Burakov , Behshad Behzadi , Michael Golikov , Steve Cheng , Daniel Cotting , Mario Bertschler
CPC classification number: G10L15/22 , G10L15/083
Abstract: Implementations herein relate to pre-caching data, corresponding to predicted interactions between a user and an automated assistant, using data characterizing previous interactions between the user and the automated assistant. An interaction can be predicted based on details of a current interaction between the user and an automated assistant. One or more predicted interactions can be initialized, and/or any corresponding data pre-cached, prior to the user commanding the automated assistant in furtherance of the predicted interaction. Interaction predictions can be generated using a user-parameterized machine learning model, which can be used when processing input(s) that characterize a recent user interaction with the automated assistant. The predicted interaction(s) can include action(s) to be performed by third-party application(s).
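To complement the prediction sketch given after the first occurrence of this abstract above, the snippet below shows the serving side under the same assumptions: a command that was predicted and pre-cached is answered from local data, while an unpredicted command pays the full fulfillment cost. The cache layout and fetch function are invented for illustration.

```python
# Complementary sketch: serving an actual command from the pre-cache when it
# was predicted, and falling back to a normal fetch otherwise.
import time


def handle_command(action, cache, fetch):
    """Return (response, latency_seconds), preferring pre-cached data."""
    start = time.monotonic()
    if action in cache:
        response = cache.pop(action)  # pre-cached during the prior interaction
    else:
        response = fetch(action)      # cold path: fulfill the action now
    return response, time.monotonic() - start


def slow_fetch(action):
    time.sleep(0.05)                  # simulate a network/fulfillment round trip
    return f"fetched:{action}"


if __name__ == "__main__":
    cache = {"set_volume": "prefetched:set_volume"}
    print(handle_command("set_volume", cache, slow_fetch))  # near-zero latency
    print(handle_command("next_track", cache, slow_fetch))  # pays the fetch cost
```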
-
Publication Number: US20230047212A1
Publication Date: 2023-02-16
Application Number: US17977601
Application Date: 2022-10-31
Applicant: GOOGLE LLC
Inventor: Gokhan H. Bakir , Behshad Behzadi , Marcin M. Nowak-Przygodzki
IPC: G06F16/2453 , H04L67/02 , G06F16/248 , G06F16/9535 , G06F40/143
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving a query provided by a user and comprising one or more terms; obtaining context data based on at least a portion of a first resource displayed to the user at the time the query is received; obtaining a revised query that is based on the query and the context data; receiving a plurality of search results responsive to the revised query; automatically selecting a search result that represents a second resource from the plurality of search results; and providing the second resource for display to the user.
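A minimal sketch of the described flow, assuming a term-overlap heuristic for query revision and a stubbed search backend (both invented for illustration): context terms from the displayed resource extend the query, and the top result is auto-selected as the second resource.

```python
# Hedged sketch of the flow in the abstract: revise a user query with context
# from the currently displayed resource, search, and auto-select one result.
def revise_query(query, context_terms, max_extra=2):
    """Append a few context terms that the query does not already contain."""
    extra = [t for t in context_terms if t.lower() not in query.lower()]
    return " ".join([query] + extra[:max_extra])


def select_result(results):
    """Auto-select the top-ranked result as the 'second resource' to display."""
    return results[0] if results else None


if __name__ == "__main__":
    query = "battery life"
    context_terms = ["Pixel", "8", "review"]      # extracted from displayed page
    revised = revise_query(query, context_terms)  # "battery life Pixel 8"
    results = [{"url": "https://example.com/pixel-8-battery"}]  # stub search
    print(revised, select_result(results))
```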
-
Publication Number: US20230041517A1
Publication Date: 2023-02-09
Application Number: US17970894
Application Date: 2022-10-21
Applicant: GOOGLE LLC
Inventor: Michael Golikov , Zaheed Sabur , Denis Burakov , Behshad Behzadi , Sergey Nazarov , Daniel Cotting , Mario Bertschler , Lucas Mirelmann , Steve Cheng , Bohdan Vlasyuk , Jonathan Lee , Lucia Terrenghi , Adrian Zumbrunnen
Abstract: Implementations can reduce the time required to obtain responses from an automated assistant by, for example, obviating the need to provide an explicit invocation to the automated assistant, such as by saying a hot-word/phrase or performing a specific user input, prior to speaking a command or query. In addition, the automated assistant can optionally receive, understand, and/or respond to the command or query without communicating with a server, thereby further reducing the time in which a response can be provided. Implementations selectively initiate on-device speech recognition only responsive to determining that one or more condition(s) are satisfied. Further, in some implementations, on-device NLU, on-device fulfillment, and/or resulting execution occur only responsive to determining, based on recognized text from the on-device speech recognition, that such further processing should occur. Thus, through selective activation of on-device speech processing, and/or selective activation of on-device NLU and/or on-device fulfillment, various client device resources are conserved.
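The gating can be sketched as below, with invented trigger signals and an invented intent check standing in for the condition(s) and the recognized-text analysis described in the abstract.

```python
# Rough sketch: start on-device speech recognition only when a trigger
# condition holds, and run NLU/fulfillment only if the recognized text looks
# like an assistant-directed request. Signals and checks are stand-ins.
def should_activate_asr(signals):
    """e.g. recent assistant interaction or the device being raised to talk."""
    return signals.get("recent_interaction") or signals.get("device_raised")


def looks_like_assistant_request(text):
    return text.lower().startswith(("turn", "set", "play", "what"))


def handle_audio(signals, run_asr, run_nlu, fulfill):
    if not should_activate_asr(signals):
        return None                      # ASR never starts: resources saved
    text = run_asr()
    if not looks_like_assistant_request(text):
        return None                      # discard: no NLU, no fulfillment
    return fulfill(run_nlu(text))


if __name__ == "__main__":
    out = handle_audio(
        {"device_raised": True},
        run_asr=lambda: "turn on the lights",
        run_nlu=lambda t: {"intent": "lights_on"},
        fulfill=lambda intent: f"done: {intent['intent']}",
    )
    print(out)  # done: lights_on
```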
-
Publication Number: US11574013B1
Publication Date: 2023-02-07
Application Number: US17471877
Application Date: 2021-09-10
Applicant: GOOGLE LLC
Inventor: Michal Jastrzebski , Aurelien Boffy , Gokhan H. Bakir , Behshad Behzadi , Marcin M. Nowak-Przygodzki
IPC: G06F16/9032 , G06F16/25
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for providing contextual information to a user. In one aspect, a method includes receiving, from a user device, a query-independent request for contextual information relevant to an active resource displayed in an application environment on the user device, generating multiple queries from displayed content from the resource, determining a quality score for each of the multiple queries, selecting one or more of the multiple queries based on their respective quality scores, and providing, to the user device for each of the selected one or more queries, a respective user interface element for display with the active resource, wherein each user interface element includes contextual information regarding the respective query and includes the respective query.
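A hedged sketch of the pipeline: candidate queries are generated from displayed content, scored, and the best are returned with contextual information attached. The capitalized-span extraction and the toy quality score are assumptions, not the disclosed scoring.

```python
# Illustrative pipeline: generate candidate queries from displayed content,
# score them, keep the best, and attach contextual information to each.
def generate_queries(displayed_text, max_len=3):
    """Candidate queries: runs of capitalized words (rough entity-like spans)."""
    words = [w.strip(".,") for w in displayed_text.split()]
    spans, current = [], []
    for word in words:
        if word[:1].isupper():
            current.append(word)
        elif current:
            spans.append(" ".join(current[:max_len]))
            current = []
    if current:
        spans.append(" ".join(current[:max_len]))
    return spans


def quality_score(query, displayed_text):
    """Toy score: longer spans that recur in the page score higher."""
    return len(query.split()) + displayed_text.count(query) * 0.5


def contextual_cards(displayed_text, top_n=2, lookup=lambda q: f"info about {q}"):
    queries = set(generate_queries(displayed_text))
    ranked = sorted(queries, key=lambda q: quality_score(q, displayed_text), reverse=True)
    return [{"query": q, "context": lookup(q)} for q in ranked[:top_n]]


if __name__ == "__main__":
    page = "The Eiffel Tower in Paris was designed by Gustave Eiffel."
    print(contextual_cards(page))
```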
-
Publication Number: US11568146B2
Publication Date: 2023-01-31
Application Number: US16605838
Application Date: 2019-09-10
Applicant: Google LLC
Inventor: Sharon Stovezky , Yariv Adan , Radu Voroneanu , Behshad Behzadi , Ragnar Groot Koerkamp , Marcin Nowak-Przygodzki
IPC: G06F40/295 , G06F16/9537
Abstract: Implementations set forth herein relate to an automated assistant that operates according to a variety of different location-based biasing modes for rendering responsive content for a user and/or proactively suggesting content for the user. The user can provide condensed spoken utterances to the automated assistant, when the automated assistant is operating according to one or more location-based biasing modes, but nonetheless receive accurate responsive outputs from the automated assistant. A responsive output can be generated by biasing toward a subset of location characteristic data that has been prioritized over other subsets of location characteristic data. The biasing allows the automated assistant to compensate for any details that may be missing from a spoken utterance, while allowing the user to provide shorter spoken utterances, thereby reducing the amount of language processing performed on inputs from the user.
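One way to picture the biasing, under an invented representation of location characteristic data: candidates whose type falls in the prioritized subset for the active mode receive a score boost, so a condensed utterance still resolves to the intended place. This is a sketch, not the patented ranking.

```python
# Minimal sketch: bias interpretation of a terse utterance toward a prioritized
# subset of nearby-place data. Candidate fields and the boost are assumptions.
def bias_candidates(candidates, prioritized_types):
    """Boost candidates whose type is in the prioritized subset for this mode."""
    def score(candidate):
        boost = 2.0 if candidate["type"] in prioritized_types else 1.0
        return candidate["base_score"] * boost
    return max(candidates, key=score)


if __name__ == "__main__":
    # Condensed utterance: "directions to the museum" while a 'tourism' biasing
    # mode is active for the user's current location.
    candidates = [
        {"name": "City Art Museum", "type": "tourism", "base_score": 0.6},
        {"name": "Museum Cafe", "type": "restaurant", "base_score": 0.7},
    ]
    print(bias_candidates(candidates, prioritized_types={"tourism"})["name"])
```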
-
Publication Number: US11521037B2
Publication Date: 2022-12-06
Application Number: US17330892
Application Date: 2021-05-26
Applicant: GOOGLE LLC
Inventor: Yariv Adan , Vladimir Vuskovic , Behshad Behzadi
Abstract: An example method includes receiving, by a computational assistant executing at one or more processors, a representation of an utterance spoken at a computing device; identifying, based on the utterance, a task to be performed by the computational assistant; responsive to determining, by the computational assistant, that complete performance of the task will take more than a threshold amount of time, outputting, for playback by one or more speakers operably connected to the computing device, synthesized voice data that informs a user of the computing device that complete performance of the task will not be immediate; and performing, by the computational assistant, the task.
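The control flow reads roughly as below; the duration estimator, the text-to-speech call, and the task runner are placeholders for whatever the assistant actually uses.

```python
# Hedged sketch of the control flow in the abstract: estimate task duration,
# tell the user up front when completion will not be immediate, then perform
# the task. Estimator, TTS call, and task runner are placeholders.
THRESHOLD_SECONDS = 5.0


def handle_task(task, estimate, speak, perform):
    if estimate(task) > THRESHOLD_SECONDS:
        speak(f"This will take a little while; I'll let you know when '{task}' is done.")
    return perform(task)


if __name__ == "__main__":
    result = handle_task(
        "book a table for four tonight",
        estimate=lambda t: 30.0,                  # e.g. requires calling a service
        speak=lambda text: print("[TTS]", text),  # stands in for speech synthesis
        perform=lambda t: f"completed: {t}",
    )
    print(result)
```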
-
Publication Number: US11436411B2
Publication Date: 2022-09-06
Application Number: US16698350
Application Date: 2019-11-27
Applicant: Google LLC
Inventor: Nathan David Howard , Gabor Simko , Andrei Giurgiu , Behshad Behzadi , Marcin M. Nowak-Przygodzki
IPC: G06F40/284 , G06F16/903 , G06F16/901 , G06N5/02 , G10L15/22 , G10L25/51 , G10L15/08
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for detecting a continued conversation are disclosed. In one aspect, a method includes the actions of receiving first audio data of a first utterance. The actions further include obtaining a first transcription of the first utterance. The actions further include receiving second audio data of a second utterance. The actions further include obtaining a second transcription of the second utterance. The actions further include determining whether the second utterance includes a query directed to a query processing system based on analysis of the second transcription and the first transcription or a response to the first query. The actions further include configuring the data routing component to provide the second transcription of the second utterance to the query processing system as a second query or bypass routing the second transcription.
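A rough sketch of the routing decision, using a simple overlap heuristic in place of the disclosed analysis: the second transcription is forwarded to the query processor only if it appears to continue the first query or its response.

```python
# Illustrative routing: compare the follow-up transcription against the first
# query and its response, and only forward it if it looks like a continuation.
# The overlap heuristic is an assumption, not the disclosed classifier.
def is_continued_conversation(second_text, first_text, first_response):
    second = set(second_text.lower().split())
    context = set(first_text.lower().split()) | set(first_response.lower().split())
    overlap = len(second & context) / max(len(second), 1)
    return overlap >= 0.3 or second_text.lower().startswith(
        ("and ", "what about", "how about")
    )


def route(second_text, first_text, first_response, query_processor):
    if is_continued_conversation(second_text, first_text, first_response):
        return query_processor(second_text)  # treat as a second query
    return None                              # bypass: likely not directed at the assistant


if __name__ == "__main__":
    answer = lambda q: f"answering: {q}"
    print(route("what about tomorrow", "weather today", "It is sunny today.", answer))
    print(route("let's grab lunch later", "weather today", "It is sunny today.", answer))
```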
-
Publication Number: US20220059093A1
Publication Date: 2022-02-24
Application Number: US17521131
Application Date: 2021-11-08
Applicant: Google LLC
Inventor: Daniel Cotting , Zaheed Sabur , Lan Huo , Bryan Christopher Horling , Behshad Behzadi , Lucas Mirelmann , Michael Golikov , Denis Burakov , Steve Cheng , Bohdan Vlasyuk , Sergey Nazarov , Mario Bertschler , Luv Kothari
Abstract: Implementations can reduce the time required to obtain responses from an automated assistant through proactive caching, locally at a client device, of proactive assistant cache entries—and through on-device utilization of the proactive assistant cache entries. Different proactive cache entries can be provided to different client devices, and various implementations relate to technique(s) utilized in determining which proactive cache entries to provide to which client devices. In some of those implementations, in determining which proactive cache entries to provide (proactively or in response to a request) to a given client device, a remote system selects, from a superset of candidate proactive cache entries, a subset of the cache entries for providing to the given client device.
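Complementing the server-side selection sketch shown earlier for this same abstract, the snippet below illustrates the on-device side of the scheme: a normalized query is looked up in the local proactive cache and only falls back to the server on a miss. The normalization and fallback call are illustrative assumptions.

```python
# Complementary sketch: on-device lookup of a proactive cache entry so a
# response can be served without a round trip to the remote system.
import string


def normalize(query):
    cleaned = query.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(cleaned.split())


def respond(query, proactive_cache, remote_fallback):
    entry = proactive_cache.get(normalize(query))
    if entry is not None:
        return entry, "served-from-cache"
    return remote_fallback(query), "served-from-server"


if __name__ == "__main__":
    cache = {"weather today": "Sunny, 22°C"}
    print(respond("Weather today?", cache, remote_fallback=lambda q: "..."))
```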
-