CONDITIONAL RESPONSE FULFILLMENT CACHE FOR LOCALLY RESPONDING TO AUTOMATED ASSISTANT INPUTS

    Publication Number: US20220318248A1

    Publication Date: 2022-10-06

    Application Number: US17217671

    Filing Date: 2021-03-30

    Applicant: Google LLC

    Abstract: Implementations set forth herein relate to conditionally caching responses to automated assistant queries according to certain contextual data that may be associated with each automated assistant query. Each query can be identified based on historical interactions between a user and an automated assistant, and, depending on the query, fulfillment data can be cached according to certain contextual data that influences the query response. Depending on how the contextual data changes, a cached response stored at a client device can be discarded and/or replaced with an updated cached response. For example, a query that users commonly ask prior to leaving for work can have a corresponding assistant response that depends on features of an environment of the users. This unique assistant response can be cached, before the users provide the query, to minimize latency that can occur when network or processing bandwidth is unpredictable.
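
    To illustrate the mechanism described in the abstract, the following is a minimal sketch (not taken from the patent) of a client-side cache that stores a pre-computed assistant response together with the contextual conditions it was generated under, and discards the entry once that context changes or the entry ages out. All names here (ConditionalResponseCache, CachedFulfillment, lookup, store) are hypothetical and chosen only for illustration.

    ```python
    from dataclasses import dataclass
    from time import time


    @dataclass
    class CachedFulfillment:
        """A pre-computed assistant response plus the context it assumed."""
        response_text: str
        context_snapshot: dict   # contextual data the response depends on
        expires_at: float        # wall-clock expiry as a fallback


    class ConditionalResponseCache:
        """Client-side cache of assistant responses keyed by query text.

        An entry is served only while the current context still matches the
        context the response was generated under; otherwise it is discarded
        so an updated response can be cached before the user asks again.
        """

        def __init__(self) -> None:
            self._entries: dict[str, CachedFulfillment] = {}

        def store(self, query: str, response_text: str,
                  context_snapshot: dict, ttl_seconds: float = 3600.0) -> None:
            self._entries[query] = CachedFulfillment(
                response_text=response_text,
                context_snapshot=context_snapshot,
                expires_at=time() + ttl_seconds,
            )

        def lookup(self, query: str, current_context: dict) -> str | None:
            """Return a cached response if its context conditions still hold."""
            entry = self._entries.get(query)
            if entry is None:
                return None
            context_changed = any(
                current_context.get(key) != value
                for key, value in entry.context_snapshot.items()
            )
            if context_changed or time() > entry.expires_at:
                # Contextual data changed (or the entry expired): drop it so
                # a replacement response can be cached later.
                del self._entries[query]
                return None
            return entry.response_text


    # Example: a commute query cached before the user leaves for work.
    cache = ConditionalResponseCache()
    cache.store(
        "how long is my commute",
        "About 25 minutes via the highway.",
        context_snapshot={"traffic_level": "light", "location": "home"},
    )
    print(cache.lookup("how long is my commute",
                       {"traffic_level": "light", "location": "home"}))  # hit
    print(cache.lookup("how long is my commute",
                       {"traffic_level": "heavy", "location": "home"}))  # miss
    ```

    In this sketch the cache is populated proactively (before the query is asked), which reflects the latency-hiding goal described above; the invalidation test over the stored context snapshot stands in for whatever condition-checking the patent's implementations actually perform.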

    Conditional response fulfillment cache for locally responding to automated assistant inputs

    Publication Number: US11567935B2

    Publication Date: 2023-01-31

    Application Number: US17217671

    Filing Date: 2021-03-30

    Applicant: Google LLC

    Abstract: Implementations set forth herein relate to conditionally caching responses to automated assistant queries according to certain contextual data that may be associated with each automated assistant query. Each query can be identified based on historical interactions between a user and an automated assistant, and, depending on the query, fulfillment data can be cached according to certain contextual data that influences the query response. Depending on how the contextual data changes, a cached response stored at a client device can be discarded and/or replaced with an updated cached response. For example, a query that users commonly ask prior to leaving for work can have a corresponding assistant response that depends on features of an environment of the users. This unique assistant response can be cached, before the users provide the query, to minimize latency that can occur when network or processing bandwidth is unpredictable.
