Hybrid fetching using an on-device cache

    Publication No.: US11853381B2

    Publication Date: 2023-12-26

    Application No.: US17098016

    Filing Date: 2020-11-13

    Applicant: Google LLC

    Abstract: Techniques of this disclosure are directed to enabling a computing device to process voice queries and provide query answers even when the computing device and vehicle do not have internet connectivity. According to the disclosed techniques, a computing device may detect a query via input devices of the computing device and output a query answer determined based on the detected query. Rather than directly querying a remote computing system, various aspects of the techniques of this disclosure may enable the computing device to use a query answer cache to generate the query answer. The query answer cache may include predicted queries and query answers retrieved from a query answer cache of a remote computing system, thereby enabling the computing device to respond to detected queries while experiencing an unreliable internet connection.
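The fallback flow described in this abstract can be illustrated with a minimal sketch. All class and method names here are hypothetical illustrations, not the patented implementation:

```python
# Minimal sketch of hybrid fetching with an on-device query answer cache.
# All names are hypothetical; this is not the patented implementation.

class HybridQueryClient:
    def __init__(self, remote_lookup, cache=None):
        # remote_lookup: callable querying the remote system; may raise ConnectionError
        self.remote_lookup = remote_lookup
        self.cache = cache or {}  # predicted query -> prefetched answer

    def prefetch(self, predicted):
        # Populate the on-device cache with predicted query/answer pairs
        # retrieved from the remote system's cache while connectivity is good.
        self.cache.update(predicted)

    def answer(self, query):
        # Prefer the remote system when reachable; fall back to the
        # on-device cache when the connection is unavailable.
        try:
            result = self.remote_lookup(query)
            self.cache[query] = result  # refresh the local copy
            return result
        except ConnectionError:
            return self.cache.get(query)  # None if never prefetched
```

The key design point is that the cache is filled *ahead of time* with predicted queries, so the device can still answer after connectivity drops.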

    CONDITIONAL RESPONSE FULFILLMENT CACHE FOR LOCALLY RESPONDING TO AUTOMATED ASSISTANT INPUTS

    Publication No.: US20220318248A1

    Publication Date: 2022-10-06

    Application No.: US17217671

    Filing Date: 2021-03-30

    Applicant: Google LLC

    Abstract: Implementations set forth herein relate to conditionally caching responses to automated assistant queries according to certain contextual data that may be associated with each automated assistant query. Each query can be identified based on historical interactions between a user and an automated assistant, and, depending on the query, fulfillment data can be cached according to certain contextual data that influences the query response. Depending on how the contextual data changes, a cached response stored at a client device can be discarded and/or replaced with an updated cached response. For example, a query that users commonly ask prior to leaving for work can have a corresponding assistant response that depends on features of an environment of the users. This unique assistant response can be cached, before the users provide the query, to minimize latency that can occur when network or processing bandwidth is unpredictable.
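The conditional invalidation described here — a cached response tied to the contextual data it assumes, discarded when that context changes — can be sketched as follows. Names are hypothetical, not the patented implementation:

```python
# Sketch of a conditional response fulfillment cache: each cached assistant
# response is stored with the contextual snapshot it depends on and is
# discarded when the current context no longer matches.
# Names are hypothetical; this is not the patented implementation.

class ConditionalResponseCache:
    def __init__(self):
        self._entries = {}  # query -> (context_snapshot, response)

    def store(self, query, context, response):
        # Cache a pre-computed response together with the context it assumes.
        self._entries[query] = (dict(context), response)

    def lookup(self, query, current_context):
        entry = self._entries.get(query)
        if entry is None:
            return None
        cached_context, response = entry
        if cached_context != dict(current_context):
            # Context drifted: the cached response may be stale, so discard it.
            del self._entries[query]
            return None
        return response
```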

    HYBRID FETCHING USING AN ON-DEVICE CACHE

    Publication No.: US20220156340A1

    Publication Date: 2022-05-19

    Application No.: US17098016

    Filing Date: 2020-11-13

    Applicant: Google LLC

    Abstract: Techniques of this disclosure are directed to enabling a computing device to process voice queries and provide query answers even when the computing device and vehicle do not have internet connectivity. According to the disclosed techniques, a computing device may detect a query via input devices of the computing device and output a query answer determined based on the detected query. Rather than directly querying a remote computing system, various aspects of the techniques of this disclosure may enable the computing device to use a query answer cache to generate the query answer. The query answer cache may include predicted queries and query answers retrieved from a query answer cache of a remote computing system, thereby enabling the computing device to respond to detected queries while experiencing an unreliable internet connection.

    SELECTIVE DELAYING OF PROVISIONING, TO ASSISTANT DEVICE(S), ASSISTANT DATA THAT IS LOCALLY UTILIZABLE BY A CORRESPONDING LOCAL ASSISTANT CLIENT

    Publication No.: US20220272048A1

    Publication Date: 2022-08-25

    Application No.: US17187199

    Filing Date: 2021-02-26

    Applicant: Google LLC

    Abstract: Implementations set forth herein relate to conditionally delaying fulfillment of client update requests in order to preserve network bandwidth and other resources that may be consumed when an ecosystem of linked assistant devices is repeatedly pinging servers for updates. In some implementations, a server device can delay and/or bypass fulfillment of a client request based on one or more indications that certain requested data is currently, or is expected to be, expired. For example, a user who is modifying assistant settings via a cellular device can cause an update notification to be pushed to several other assistant devices before the user finishes editing the assistant settings. Implementations herein can limit fulfillment of update requests from the client devices according to certain criteria, such as whether the user is continuing to modify the assistant settings from their cellular device.
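The gating logic described here — withholding updates while settings edits are still in flight — can be sketched with a simple quiet-period check. The class name and the settle threshold are hypothetical, not the patented implementation:

```python
import time

# Sketch of conditionally delaying update fulfillment: the server declines to
# push fresh assistant data while the user is still actively editing settings,
# since that data would be expired almost immediately.
# Names and the threshold are hypothetical; not the patented implementation.

class UpdateGate:
    def __init__(self, settle_seconds=5.0, now=time.monotonic):
        self.settle_seconds = settle_seconds  # quiet period required before fulfillment
        self.now = now  # injectable clock, eases testing
        self._last_edit = None

    def record_edit(self):
        # Called whenever the user modifies assistant settings.
        self._last_edit = self.now()

    def should_fulfill(self):
        # Fulfill client update requests only once edits have settled.
        if self._last_edit is None:
            return True
        return (self.now() - self._last_edit) >= self.settle_seconds
```

Injecting the clock (`now`) keeps the gate deterministic under test; in production the default monotonic clock is used.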

    Conditional response fulfillment cache for locally responding to automated assistant inputs

    Publication No.: US11567935B2

    Publication Date: 2023-01-31

    Application No.: US17217671

    Filing Date: 2021-03-30

    Applicant: Google LLC

    Abstract: Implementations set forth herein relate to conditionally caching responses to automated assistant queries according to certain contextual data that may be associated with each automated assistant query. Each query can be identified based on historical interactions between a user and an automated assistant, and, depending on the query, fulfillment data can be cached according to certain contextual data that influences the query response. Depending on how the contextual data changes, a cached response stored at a client device can be discarded and/or replaced with an updated cached response. For example, a query that users commonly ask prior to leaving for work can have a corresponding assistant response that depends on features of an environment of the users. This unique assistant response can be cached, before the users provide the query, to minimize latency that can occur when network or processing bandwidth is unpredictable.
