-
11.
Publication Number: US20200329140A1
Publication Date: 2020-10-15
Application Number: US16339235
Application Date: 2019-01-16
Applicant: Google LLC
Inventor: Sandro Feuz , Thomas Deselaers
Abstract: Implementations set forth herein relate to generating a pre-call analysis for one or more users that are receiving and/or initializing a call with one or more other users, and/or prioritizing pre-call content according to whether security-related value was gleaned from provisioning certain pre-call content. One or more machine learning models can be employed for determining the pre-call content to be cached and/or presented prior to a user accepting a call from another user. Feedback provided before, during, and/or after the call can be used as a basis from which to prioritize certain content and/or sources of content when generating pre-call content for a subsequent call. Other information, such as contextual data (e.g., calendar entries, available peripheral devices, location, etc.) corresponding to the previous call and/or the subsequent call, can also be used as a basis from which to provide a pre-call analysis.
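The abstract leaves the models and ranking mechanics unspecified; purely as a hypothetical sketch of the feedback loop it describes (the class names, source names, and simple weight update below are invented, not the patent's method), pre-call content sources could be re-prioritized from post-call feedback like this:

```python
from dataclasses import dataclass


@dataclass
class ContentSource:
    """A source of pre-call content, e.g. calendar entries or shared documents."""
    name: str
    weight: float = 1.0  # priority; higher-weight sources are cached/presented first


class PreCallPrioritizer:
    """Keeps a per-source priority weight that is nudged by feedback gathered
    before, during, and/or after a call."""

    def __init__(self, sources, learning_rate=0.1):
        self.sources = {s.name: s for s in sources}
        self.lr = learning_rate

    def rank(self):
        """Source names ordered by current priority, for the next pre-call analysis."""
        return sorted(self.sources, key=lambda n: self.sources[n].weight, reverse=True)

    def update(self, feedback):
        """feedback maps a source name to a usefulness score in [-1, 1]."""
        for name, score in feedback.items():
            if name in self.sources:
                self.sources[name].weight += self.lr * score


prioritizer = PreCallPrioritizer(
    [ContentSource("calendar"), ContentSource("recent_email"), ContentSource("shared_docs")]
)
# After a call: calendar context was useful, shared documents were not.
prioritizer.update({"calendar": 1.0, "shared_docs": -0.5})
print(prioritizer.rank())  # ['calendar', 'recent_email', 'shared_docs']
```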
-
12.
Publication Number: US20190342282A1
Publication Date: 2019-11-07
Application Number: US16477062
Application Date: 2017-01-20
Applicant: Google LLC
Inventor: Victor Carbune , Daniel Keysers , Thomas Deselaers
IPC: H04L29/06
Abstract: An example method includes establishing a single-user login session associated with a first user-account such that the single-user login session has read and/or write access to first user data associated with the first user-account. The method further includes accepting, within the single-user login session, a further login associated with a second user-account to convert the single-user login session to a multi-user login session having read and/or write access to second user data associated with the second user-account in addition to having read and/or write access to the first user data. Computer readable media and computing devices related to the example method are disclosed herein as well.
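A minimal sketch of the session conversion described above, assuming a hypothetical LoginSession class and leaving out the real authentication and access-control machinery:

```python
class LoginSession:
    """Tracks which user accounts have joined a login session and whose data it may access."""

    def __init__(self, first_account, first_user_data):
        self.accounts = [first_account]
        self.accessible_data = {first_account: first_user_data}

    @property
    def is_multi_user(self):
        return len(self.accounts) > 1

    def accept_further_login(self, account, user_data):
        """Convert a single-user session into a multi-user session: the session gains
        read/write access to the second account's data in addition to the first's."""
        if account not in self.accounts:
            self.accounts.append(account)
            self.accessible_data[account] = user_data


session = LoginSession("alice@example.com", {"playlists": ["road trip"]})
print(session.is_multi_user)  # False
session.accept_further_login("bob@example.com", {"playlists": ["workout"]})
print(session.is_multi_user)  # True
print(sorted(session.accessible_data))  # ['alice@example.com', 'bob@example.com']
```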
-
13.
Publication Number: US20190179938A1
Publication Date: 2019-06-13
Application Number: US15840103
Application Date: 2017-12-13
Applicant: Google LLC
Inventor: Sandro Feuz , Thomas Deselaers
Abstract: Implementations are related to observing user interactions in association with searching for various files, and modifying a model and/or index based on such observations in order to improve the search process. In some implementations, a reinforcement learning model is utilized to adapt one or more search actions of the search process. Such search action(s) can include, for example, updating an index, reweighting terms in an index, modifying a search query, and/or modifying one or more ranking signal(s) utilized in ranking search results. A policy of the reinforcement learning model can be utilized to generate action parameters that dictate performance of search action(s) for a search query, dependent on an observed state that is based on the search query. The policy can be iteratively updated in view of a reward function, and observed user interactions across multiple search sessions, to generate a learned policy that reduces duration of search sessions.
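The abstract does not commit to a particular policy representation; the toy tabular, epsilon-greedy policy below is only a hedged illustration of the loop it describes (the state features, action names, and update rule are assumptions), with shorter observed search sessions treated as higher reward:

```python
import random
from collections import defaultdict

# Illustrative search actions, loosely mirroring those named in the abstract.
ACTIONS = ["update_index", "reweight_index_terms", "modify_query", "adjust_ranking_signal"]


class SearchPolicy:
    """Tabular epsilon-greedy policy over a coarse query-derived state."""

    def __init__(self, epsilon=0.1, lr=0.5):
        self.q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})
        self.epsilon = epsilon
        self.lr = lr

    def state(self, query):
        return "short_query" if len(query.split()) <= 2 else "long_query"

    def act(self, query):
        s = self.state(query)
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(self.q[s], key=self.q[s].get)

    def learn(self, query, action, session_seconds):
        """Observed interactions yield the reward; shorter sessions are better."""
        s, reward = self.state(query), -session_seconds
        self.q[s][action] += self.lr * (reward - self.q[s][action])


policy = SearchPolicy()
action = policy.act("quarterly report 2017")
policy.learn("quarterly report 2017", action, session_seconds=42.0)
print(action, policy.q["long_query"][action])
```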
-
14.
Publication Number: US10192551B2
Publication Date: 2019-01-29
Application Number: US15252049
Application Date: 2016-08-30
Applicant: Google LLC
Inventor: Victor Carbune , Daniel Keysers , Thomas Deselaers
IPC: G10L15/26 , G10L15/00 , G10L21/00 , G10L15/18 , G06F17/27 , G10L15/22 , G06F17/30 , G10L15/06 , G10L15/16 , G06Q10/10 , G06F3/0482
Abstract: Methods, apparatus, and computer readable media related to receiving textual input of a user during a dialog between the user and an automated assistant (and optionally one or more additional users), and generating responsive reply content based on the textual input and based on user state information. The reply content is provided for inclusion in the dialog. In some implementations, the reply content is provided as a reply, by the automated assistant, to the user's textual input and may optionally be automatically incorporated in the dialog between the user and the automated assistant. In some implementations, the reply content is suggested by the automated assistant for inclusion in the dialog and is only included in the dialog in response to further user interface input.
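As a purely hypothetical illustration of conditioning a reply on user state and of the suggest-versus-auto-incorporate distinction (the rule-based logic and field names here are invented stand-ins for the patent's models):

```python
def generate_reply(textual_input, user_state, auto_incorporate=False):
    """Produce reply content from the user's message and a coarse user-state signal.
    When auto_incorporate is False the reply is only suggested, and should be added
    to the dialog only after further user interface input confirming it."""
    if user_state.get("sentiment") == "frustrated":
        reply = "Sorry about that. Want me to try a different approach?"
    else:
        reply = f"Sure, I can help with: {textual_input}"
    return {"reply": reply, "incorporated": auto_incorporate}


turn = generate_reply(
    "book a table for two tonight",
    user_state={"sentiment": "neutral", "location": "home"},
)
print(turn)  # {'reply': 'Sure, I can help with: book a table for two tonight', 'incorporated': False}
```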
-
15.
Publication Number: US10185872B2
Publication Date: 2019-01-22
Application Number: US14967901
Application Date: 2015-12-14
Applicant: Google LLC
Inventor: Daniel M. Keysers , Thomas Deselaers , Henry Allan Rowley
Abstract: An optimal recognition for handwritten input, based on receiving a touch input from a user, may be selected by applying both a delayed stroke recognizer and an overlapping recognizer to the handwritten input. A score may be generated for both the delayed stroke recognition and the overlapping recognition, and the recognition corresponding to the highest score may be presented as the overall recognition.
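The selection step is straightforward to sketch; below, stub functions stand in for the real delayed-stroke and overlapping recognizers (the texts and scores are placeholders):

```python
def recognize(strokes, recognizers):
    """Run each recognizer on the same handwritten input, score each hypothesis,
    and return the hypothesis with the highest score as the overall recognition."""
    best_text, best_score = None, float("-inf")
    for recognizer in recognizers:
        text, score = recognizer(strokes)
        if score > best_score:
            best_text, best_score = text, score
    return best_text


# Stand-ins: a real delayed-stroke recognizer would handle strokes added later
# (such as a t-crossing), while an overlapping recognizer would handle characters
# written on top of one another.
def delayed_stroke_recognizer(strokes):
    return "hello", 0.87


def overlapping_recognizer(strokes):
    return "hcllo", 0.41


print(recognize([], [delayed_stroke_recognizer, overlapping_recognizer]))  # hello
```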
-
16.
Publication Number: US20240296290A1
Publication Date: 2024-09-05
Application Number: US18659956
Application Date: 2024-05-09
Applicant: GOOGLE LLC
Inventor: Victor Carbune , Thomas Deselaers
IPC: G06F40/30 , G06F16/93 , G06F40/169 , G06F40/20 , G06F40/216 , G06F40/284 , G06F40/295 , G06N3/042
CPC classification number: G06F40/30 , G06F16/93 , G06F40/169 , G06F40/20 , G06F40/216 , G06F40/284 , G06F40/295 , G06N3/042
Abstract: Implementations described herein determine, for a given document generated by a given source, one or more portions of content (e.g., phrase(s), image(s), paragraph(s), etc.) of the given document that may be influenced by a source perspective of the given source. Further, implementations determine one or more additional resources that are related to the given source and that are related to the portion(s) of content of the given document. Yet further, implementations utilize the additional resource(s) to determine additional content that provides context for the portion(s) that may be influenced by a source perspective. A relationship, between the additional resource(s) and the portions of the given document, can be defined. Based on the relationship being defined, the additional content can be caused to be rendered at a client device in response to the client device accessing the given document.
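As a hedged, keyword-based stand-in for the learned analysis described above (the helper functions, corpus fields, and perspective terms are all invented for illustration), the pipeline of flagging portions, finding related resources, and attaching contextual content might look like this:

```python
def find_influenced_portions(document, perspective_terms):
    """Flag sentences of the document that contain perspective-laden terms."""
    return [s for s in document.split(". ") if any(t in s.lower() for t in perspective_terms)]


def related_resources(source, corpus):
    """Additional resources related to the given source."""
    return [doc for doc in corpus if doc["source"] == source]


def contextual_content(portions, resources):
    """Pair each flagged portion with snippets that provide context for it."""
    return [{"portion": p, "context": [r["snippet"] for r in resources]} for p in portions]


corpus = [{"source": "example-news", "snippet": "The outlet has previously endorsed the policy."}]
document = "The policy is clearly the only sensible option. Parliament votes on Tuesday."

portions = find_influenced_portions(document, perspective_terms=["clearly", "only sensible"])
print(contextual_content(portions, related_resources("example-news", corpus)))
```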
-
17.
Publication Number: US12079954B2
Publication Date: 2024-09-03
Application Number: US17603362
Application Date: 2019-06-10
Applicant: Google LLC
Inventor: Victor Carbune , Daniel M. Keysers , Thomas Deselaers
IPC: G06K9/40 , G06T3/4046 , G06T5/00 , G06T5/50
CPC classification number: G06T3/4046 , G06T5/50 , G06T2207/20081 , G06T2207/20084
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, that use generative adversarial models to increase the quality of sensor data generated by a first environmental sensor to resemble the quality of sensor data generated by another sensor having a higher quality than the first environmental sensor. A set of first and second training data generated by a first environmental sensor having a first quality and a second sensor having a target quality, respectively, is received. A generative adversarial model is trained, using the set of first training data and the set of second training data, to modify sensor data from the first environmental sensor by reducing a difference in quality between the sensor data generated by the first environmental sensor and sensor data generated by the target environmental sensor.
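The abstract does not disclose the network architecture, so the PyTorch sketch below is only a generic stand-in: assumed 64-dimensional sensor readings, small fully connected generator and discriminator, and a standard adversarial loss, with the generator mapping first-sensor readings toward the target sensor's distribution.

```python
import torch
from torch import nn

# Assumed shapes: 64-dimensional readings from the first (lower-quality) sensor.
generator = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
discriminator = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()


def train_step(first_sensor_batch, target_sensor_batch):
    # Discriminator: target-quality data is "real", generator output is "fake".
    enhanced = generator(first_sensor_batch)
    d_loss = bce(discriminator(target_sensor_batch), torch.ones(len(target_sensor_batch), 1)) \
        + bce(discriminator(enhanced.detach()), torch.zeros(len(enhanced), 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: make enhanced first-sensor data look like target-quality data.
    g_loss = bce(discriminator(generator(first_sensor_batch)), torch.ones(len(first_sensor_batch), 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()


first = torch.randn(32, 64)   # placeholder first-sensor training data
target = torch.randn(32, 64)  # placeholder target-sensor training data
print(train_step(first, target))
```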
-
18.
Publication Number: US20240040037A1
Publication Date: 2024-02-01
Application Number: US18378080
Application Date: 2023-10-09
Applicant: GOOGLE LLC
Inventor: Sandro Feuz , Thomas Deselaers
CPC classification number: H04M3/4365 , G06N20/00 , G06F3/017 , H04M3/42042 , H04M2203/25
Abstract: Implementations set forth herein relate to generating a pre-call analysis for one or more users that are receiving and/or initializing a call with one or more other users, and/or prioritizing pre-call content according to whether security-related value was gleaned from provisioning certain pre-call content. One or more machine learning models can be employed for determining the pre-call content to be cached and/or presented prior to a user accepting a call from another user. Feedback provided before, during, and/or after the call can be used as a basis from which to prioritize certain content and/or sources of content when generating pre-call content for a subsequent call. Other information, such as contextual data (e.g., calendar entries, available peripheral devices, location, etc.) corresponding to the previous call and/or the subsequent call, can also be used as a basis from which to provide a pre-call analysis.
-
19.
Publication Number: US20230206923A1
Publication Date: 2023-06-29
Application Number: US18074758
Application Date: 2022-12-05
Applicant: GOOGLE LLC
Inventor: Victor Carbune , Pedro Gonnet Andres , Thomas Deselaers , Sandro Feuz
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for collaboration between multiple voice controlled devices are disclosed. In one aspect, a method includes the actions of identifying, by a first computing device, a second computing device that is configured to respond to a particular, predefined hotword; receiving audio data that corresponds to an utterance; receiving a transcription of additional audio data outputted by the second computing device in response to the utterance; based on the transcription of the additional audio data and based on the utterance, generating a transcription that corresponds to a response to the additional audio data; and providing, for output, the transcription that corresponds to the response.
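A deliberately small, hypothetical sketch of the message flow in that method, with canned strings standing in for real speech recognition and response generation (the device names and replies are invented):

```python
from dataclasses import dataclass


@dataclass
class Device:
    name: str
    hotword: str = "ok assistant"  # both devices respond to the same predefined hotword

    def respond(self, utterance, other_response=None):
        """Generate a response transcription; if a transcription of the other device's
        output is available, build on it instead of repeating it."""
        if other_response is None:
            return f"{self.name}: Today it will be sunny."
        return f"{self.name}: Adding to that, tomorrow looks rainy."


first = Device("kitchen-speaker")
second = Device("hallway-display")
utterance = "ok assistant, what's the weather?"

second_reply = second.respond(utterance)                              # second device answers first
first_reply = first.respond(utterance, other_response=second_reply)   # first device responds to it
print(second_reply)
print(first_reply)
```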
-
20.
Publication Number: US11651196B2
Publication Date: 2023-05-16
Application Number: US16617949
Application Date: 2019-03-06
Applicant: Google LLC
Inventor: Victor Carbune , Thomas Deselaers
CPC classification number: G06N3/0454 , G06N3/00 , G06N3/02 , G10L15/00 , G10L15/20 , G10L15/22 , G10L15/222 , G10L2015/223
Abstract: Techniques are disclosed that enable automating user interface input by generating a sequence of actions to perform a task utilizing a multi-agent reinforcement learning framework. Various implementations process an intent associated with received user interface input using a holistic reinforcement policy network to select a software reinforcement learning policy network. The sequence of actions can be generated by processing the intent, as well as a sequence of software client state data, using the selected software reinforcement learning policy network. The sequence of actions is utilized to control the software client corresponding to the selected software reinforcement learning policy network.
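The two-level policy structure can be illustrated with a deliberately simplified, non-learned sketch; the client names, intents, and rule-based state handling below are invented, where the patent describes learned reinforcement learning policy networks instead:

```python
# Hypothetical software clients and the intents each of their policies can handle.
SOFTWARE_POLICIES = {
    "email_client": ["compose_email", "archive_email"],
    "calendar_client": ["create_event", "cancel_event"],
}


def holistic_policy(intent):
    """Top-level policy: select which software-specific policy should handle the intent."""
    for client, intents in SOFTWARE_POLICIES.items():
        if intent in intents:
            return client
    raise ValueError(f"no policy registered for intent {intent!r}")


def software_policy(client, intent, state):
    """Client-specific policy: map (intent, client state) to the next UI action.
    A learned network over sequences of client state data would sit here instead."""
    if state == "home_screen":
        return f"open:{client}"
    return f"{client}:perform:{intent}"


def run(intent):
    client = holistic_policy(intent)
    state, actions = "home_screen", []
    for _ in range(2):  # generate a short sequence of actions for the task
        actions.append(software_policy(client, intent, state))
        state = "app_open"
    return actions


print(run("create_event"))  # ['open:calendar_client', 'calendar_client:perform:create_event']
```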
-