MULTITIER CACHE FRAMEWORK
    32.
    Invention Application

    Publication Number: US20190387071A1

    Publication Date: 2019-12-19

    Application Number: US16554028

    Application Date: 2019-08-28

    Abstract: The described technology is directed towards a cache framework that accesses a tier of ordered caches, in tier order, to satisfy requests for data. The cache framework may be implemented at a front-end service level server, and/or a back end service level server, or both. The cache framework handles read-through and write-through operations, including handling batch requests for multiple data items. The cache framework also facilitates dynamically changing the tier structure, e.g., for adding, removing, replacing and/or reordering caches in the tier, e.g., by re-declaring a data structure such as an array that identifies the tiered cache configuration.
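The tier-ordered read-through/write-through behavior described in the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation; the `TieredCache`/`DictCache` names and the `get`/`set` cache interface are assumptions for the example. Note how the tier structure is just a list that can be re-declared to add, remove, replace, or reorder caches.

```python
class DictCache:
    """Minimal dict-backed cache used only for this sketch."""
    def __init__(self):
        self.data = {}
    def get(self, key):
        return self.data.get(key)
    def set(self, key, value):
        self.data[key] = value


class TieredCache:
    """Read-through / write-through access over an ordered tier of caches."""

    def __init__(self, tiers):
        # Re-declaring this list changes the tier structure dynamically.
        self.tiers = list(tiers)

    def get(self, key, loader):
        missed = []
        for cache in self.tiers:      # access caches in tier order
            value = cache.get(key)
            if value is not None:
                break
            missed.append(cache)
        else:
            value = loader(key)       # read-through to the backing data service
        for cache in missed:          # populate the faster tiers that missed
            cache.set(key, value)
        return value

    def set(self, key, value):
        for cache in self.tiers:      # write-through every tier
            cache.set(key, value)
```

A hit in a slower tier repopulates the faster tiers that missed, so subsequent requests are served from the front of the tier order.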

    Multitier cache framework
    33.
    Granted Patent

    Publication Number: US10404823B2

    Publication Date: 2019-09-03

    Application Number: US15167321

    Application Date: 2016-05-27

    Abstract: The described technology is directed towards a cache framework that accesses a tier of ordered caches, in tier order, to satisfy requests for data. The cache framework may be implemented at a front-end service level server, and/or a back end service level server, or both. The cache framework handles read-through and write-through operations, including handling batch requests for multiple data items. The cache framework also facilitates dynamically changing the tier structure, e.g., for adding, removing, replacing and/or reordering caches in the tier, e.g., by re-declaring a data structure such as an array that identifies the tiered cache configuration.

    DATA REQUEST MULTIPLEXING
    34.
    Invention Application

    Publication Number: US20180063280A1

    Publication Date: 2018-03-01

    Application Number: US15252166

    Application Date: 2016-08-30

    CPC classification number: H04L67/32 H04L67/10 H04L67/2833 H04L67/2842

    Abstract: The described technology is generally directed towards combining (multiplexing) two or more pending data requests for the same data item into a single request that is sent to a data providing entity such as a back-end data service. Described is maintaining a mapping of the requests to requesting entities so that a single response to a multiplexed request having data for a requested data item may be re-associated (de-multiplexed) to each requesting entity that requested that data item. Also described is batching a plurality of requests, which may include one or more multiplexed requests, into a batch request sent to a data providing entity.
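The multiplexing scheme above, in which pending requests for the same data item share a single back-end call and the one response is de-multiplexed to every requester, can be sketched with a mapping of in-flight keys to waiting requesters. This is an illustrative sketch under assumed names (`RequestMultiplexer`, a `fetch` callable standing in for the back-end data service), not the patented implementation.

```python
import threading

class RequestMultiplexer:
    """Coalesces concurrent requests for the same key into one back-end call."""

    def __init__(self, fetch):
        self.fetch = fetch      # the single call to the data providing entity
        self.pending = {}       # key -> shared entry mapping request to waiters
        self.lock = threading.Lock()

    def get(self, key):
        with self.lock:
            entry = self.pending.get(key)
            if entry is None:
                # First requester: record the in-flight request and own it.
                entry = {"event": threading.Event(), "result": None}
                self.pending[key] = entry
                owner = True
            else:
                owner = False   # piggyback on the already in-flight request
        if owner:
            entry["result"] = self.fetch(key)
            with self.lock:
                del self.pending[key]
            entry["event"].set()        # de-multiplex: wake every waiter
        else:
            entry["event"].wait()       # wait for the shared response
        return entry["result"]
```

Several of these coalesced requests could then be gathered into a single batch request to the back-end, as the abstract also describes.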

    CACHED DATA EXPIRATION AND REFRESH
    35.
    Invention Application

    Publication Number: US20170353577A1

    Publication Date: 2017-12-07

    Application Number: US15170668

    Application Date: 2016-06-01

    Abstract: The described technology is directed towards maintaining a cache of data items, with cached data items having current value subsets and next value subsets. The cache is accessed for data item requests, to return a cache miss if a requested data item is not cached, to return data from the current value subset if not expired, to return data from the next value subset if the current value subset is expired and the next value subset is not expired, or to return a cache miss (or expired data) if both subsets are expired. Cached data items are refreshed, (e.g., periodically), when a data item's current value subset is expired by replacing the data item's current value subset with the next value subset and caching a new next value subset, or caching a new next value subset when the next value subset will expire within a threshold time.
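The current/next value-subset lookup and refresh logic described above can be sketched as follows. This is a minimal illustration under assumed names (`DualValueCache`, a fixed `ttl`, an injectable `clock`), not the patented implementation; the expiry bookkeeping is simplified.

```python
import time

class DualValueCache:
    """Cache entries holding a current value subset and a next value subset."""

    def __init__(self, ttl, clock=time.time):
        self.ttl = ttl
        self.clock = clock
        self.items = {}  # key -> {"current": (value, expiry), "next": (value, expiry)}

    def put(self, key, current_value, next_value):
        now = self.clock()
        self.items[key] = {
            "current": (current_value, now + self.ttl),
            "next": (next_value, now + 2 * self.ttl),
        }

    def get(self, key):
        item = self.items.get(key)
        if item is None:
            return None              # cache miss: item not cached
        now = self.clock()
        value, expiry = item["current"]
        if now < expiry:
            return value             # current value subset still valid
        value, expiry = item["next"]
        if now < expiry:
            return value             # serve next value while current is expired
        return None                  # both subsets expired: treat as a miss

    def refresh(self, key, load_next):
        """Promote next -> current and cache a freshly loaded next value."""
        item = self.items.get(key)
        if item is None:
            return
        now = self.clock()
        item["current"] = (item["next"][0], now + self.ttl)
        item["next"] = (load_next(key), now + 2 * self.ttl)
```

A periodic refresher would call `refresh` whenever an item's current value subset has expired (or its next value subset is within a threshold of expiring), as the abstract describes.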

    TIME OFFSET DATA REQUEST HANDLING
    36.
    Invention Application

    Publication Number: US20170324986A1

    Publication Date: 2017-11-09

    Application Number: US15148943

    Application Date: 2016-05-06

    Abstract: The described technology is directed towards obtaining and returning time offset data instead of current data in response to a data request. The time offset data may be limited to privileged clients only, and only provided thereto when desired, using a time offset value set by the client, for example. For example, a privileged user may request time offset data corresponding to a future time so as to preview how the data may be presented at a future time. Time offset data may be used by a system entity to fill a cache, e.g., as secondary cached data that may be used once primary cached data expires.
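The privileged time-offset lookup described above reduces to shifting the effective request time before consulting the data store. A minimal sketch, assuming a `store(key, timestamp)` callable standing in for the backing data lookup and a simple dict for client attributes; none of these names come from the patent.

```python
import time

def handle_request(key, store, client, offset_seconds=0, clock=time.time):
    """Return current data, or time offset data for privileged clients.

    A non-privileged client always receives data for the current time; a
    privileged client may set an offset (e.g. into the future) to preview
    how the data would be presented at that time.
    """
    effective_time = clock()
    if client.get("privileged") and offset_seconds:
        effective_time += offset_seconds   # apply the client-set time offset
    return store(key, effective_time)
```

The same mechanism lets a system entity pre-fill a cache: requesting data at a future offset yields the secondary value to serve once the primary cached value expires.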

    Data delivery architecture for transforming client response data

    Publication Number: US11200251B2

    Publication Date: 2021-12-14

    Application Number: US16709089

    Application Date: 2019-12-10

    Abstract: The described technology is directed towards a data transformation pipeline architecture of a data service that processes generalized datasets into datasets (e.g., video data or graph nodes) customized for a particular client device. Described herein is maintaining a set of data transformation models at a data service, and upon receiving a client request for data, selecting a relevant subset of the transformation models and arranging the subset into a data transformation pipeline. In general, the pipeline of transformation models transforms the generalized data into the format and shape that each client device expects. The subset may be selected based upon device type, device class and/or software version information (and possibly state data) sent with each data request. The transformation models may be maintained in a hierarchical data store such as files in a file system to facilitate retrieval by searching the hierarchy for appropriate transformation models.
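The select-then-arrange step at the heart of this pipeline architecture can be sketched as follows. An illustrative sketch only: `models` is assumed to be a list of (match-criteria, order, transform) entries, with matching driven by the device type, device class, and software version sent with the request; the real system stores models hierarchically (e.g. as files in a file system).

```python
def build_pipeline(models, device_type, device_class, version):
    """Select the transformation models relevant to a client and order them."""
    selected = [
        (order, transform)
        for (criteria, order, transform) in models
        if criteria(device_type, device_class, version)
    ]
    selected.sort(key=lambda pair: pair[0])   # arrange into pipeline order

    def pipeline(data):
        for _, transform in selected:
            data = transform(data)            # each stage reshapes the data
        return data

    return pipeline
```

Each client request thus gets a pipeline composed only of the transforms its device type, class, and version require, turning the generalized dataset into the shape that particular client expects.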

    Multitier cache framework
    40.
    Granted Patent

    Publication Number: US11146654B2

    Publication Date: 2021-10-12

    Application Number: US16554028

    Application Date: 2019-08-28

    Abstract: The described technology is directed towards a cache framework that accesses a tier of ordered caches, in tier order, to satisfy requests for data. The cache framework may be implemented at a front-end service level server, and/or a back end service level server, or both. The cache framework handles read-through and write-through operations, including handling batch requests for multiple data items. The cache framework also facilitates dynamically changing the tier structure, e.g., for adding, removing, replacing and/or reordering caches in the tier, e.g., by re-declaring a data structure such as an array that identifies the tiered cache configuration.
