Multi-tenant caching service in a hosted computing environment

    Publication Number: US12050534B1

    Publication Date: 2024-07-30

    Application Number: US17657557

    Application Date: 2022-03-31

    CPC classification number: G06F12/0806 G06F2212/62

    Abstract: Systems and methods are described for implementing a multi-tenant caching service. The multi-tenant caching service provides a scalable infrastructure with dedicated per-tenant cache widths for tenants of a hosted computing environment, and allows tenants to implement a caching layer between cloud-based services that would otherwise need to scale up in response to load. Tenants may also use the service as a public facing endpoint that caches content provided by backend servers. Content provided by the tenants may be distributed and cached across a cell-based architecture, each cell of which may include a set of storage volumes that are partitioned into caches for individual tenants and configured to store a portion of the content provided by that tenant. Eviction policies may be implemented based on tenant cache usage across multiple cells, and geolocation policies may be implemented to ensure that cached content remains within a particular geographic region.
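
    The sketch below illustrates, under stated assumptions, how the cell-based layout described in this abstract could be organized: each cell holds per-tenant partitions of a storage volume, content is routed only to cells that satisfy a tenant's geolocation policy, and a simple per-partition LRU stands in for the cross-cell eviction policy. All class names and the hash-based routing scheme are hypothetical, not taken from the patent.

```python
import hashlib
from collections import OrderedDict

class TenantCachePartition:
    """A slice of a cell's storage volume dedicated to one tenant (simple LRU)."""
    def __init__(self, capacity_bytes):
        self.capacity_bytes = capacity_bytes
        self.used_bytes = 0
        self.entries = OrderedDict()  # key -> (value, size)

    def put(self, key, value):
        size = len(value)
        # Evict least-recently-used entries until the new item fits.
        while self.used_bytes + size > self.capacity_bytes and self.entries:
            _, (_, evicted_size) = self.entries.popitem(last=False)
            self.used_bytes -= evicted_size
        self.entries[key] = (value, size)
        self.used_bytes += size

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)  # refresh recency
            return self.entries[key][0]
        return None

class Cell:
    """One cell of the architecture: a region tag plus per-tenant partitions."""
    def __init__(self, cell_id, region, partition_bytes):
        self.cell_id = cell_id
        self.region = region
        self.partition_bytes = partition_bytes
        self.partitions = {}  # tenant_id -> TenantCachePartition

    def partition_for(self, tenant_id):
        return self.partitions.setdefault(
            tenant_id, TenantCachePartition(self.partition_bytes))

class MultiTenantCachingService:
    def __init__(self, cells):
        self.cells = cells

    def _route(self, tenant_id, key, allowed_regions):
        # Geolocation policy: only consider cells inside the tenant's allowed regions.
        eligible = [c for c in self.cells if c.region in allowed_regions]
        digest = hashlib.sha256(f"{tenant_id}:{key}".encode()).hexdigest()
        return eligible[int(digest, 16) % len(eligible)]

    def put(self, tenant_id, key, value, allowed_regions):
        cell = self._route(tenant_id, key, allowed_regions)
        cell.partition_for(tenant_id).put(key, value)

    def get(self, tenant_id, key, allowed_regions):
        cell = self._route(tenant_id, key, allowed_regions)
        return cell.partition_for(tenant_id).get(key)

# Content for "tenant-a" is spread across two cells but stays inside "eu-west".
svc = MultiTenantCachingService([Cell("cell-1", "eu-west", 1024),
                                 Cell("cell-2", "eu-west", 1024)])
svc.put("tenant-a", "/index.html", b"<html>...</html>", {"eu-west"})
print(svc.get("tenant-a", "/index.html", {"eu-west"}))
```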

    Distributed metric collection for dynamic content delivery network selection using DNS

    Publication Number: US11641410B1

    Publication Date: 2023-05-02

    Application Number: US17030759

    Application Date: 2020-09-24

    Abstract: Techniques for dynamic content delivery network (CDN) selection using the domain name service (DNS) protocol are described. A DNS resolver utilizes a network identifier provided within a DNS query seeking to resolve a domain to select between different CDNs. The selection can be based on an analysis of network metric summary data corresponding to the CDNs from the perspective of an approximate location of the requesting client, as determined via the network identifier as a proxy. The selection process and involved network metric types can be configured by the user associated with the domain via a selection policy. Network metrics can be provided by the user or collected based on reported data generated by remote clients through provided metric-generating code, and thereafter transformed into network metric summary data that is used for resolution.
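
    As a rough illustration of the resolution flow described above, the sketch below keys per-CDN latency summaries by client network prefix (using the network identifier carried in the query, such as an EDNS Client Subnet value, as a location proxy) and ranks candidates by a metric named in the domain owner's selection policy. The metric table, policy fields, and function names are assumptions for illustration, not the patent's data model.

```python
import ipaddress

# Per-CDN metric summaries keyed by client network prefix (assumed layout).
METRIC_SUMMARY = {
    ("203.0.113.0/24", "cdn-a"): {"p50_latency_ms": 38, "error_rate": 0.002},
    ("203.0.113.0/24", "cdn-b"): {"p50_latency_ms": 55, "error_rate": 0.001},
}

CDN_TARGETS = {"cdn-a": "d111.cdn-a.example.net.", "cdn-b": "d222.cdn-b.example.net."}

def select_cdn(client_subnet, selection_policy):
    """Pick a CDN for this query using the domain owner's selection policy."""
    candidates = []
    for (prefix, cdn), metrics in METRIC_SUMMARY.items():
        if ipaddress.ip_network(client_subnet).subnet_of(ipaddress.ip_network(prefix)):
            candidates.append((cdn, metrics))
    if not candidates:
        return selection_policy["default_cdn"]
    # The policy names which network metric type to rank on (e.g. median latency).
    metric = selection_policy["rank_by"]
    best_cdn, _ = min(candidates, key=lambda item: item[1][metric])
    return best_cdn

def resolve(qname, client_subnet, selection_policy):
    """Answer the DNS query with a CNAME pointing at the selected CDN."""
    cdn = select_cdn(client_subnet, selection_policy)
    return {"qname": qname, "type": "CNAME", "target": CDN_TARGETS[cdn]}

policy = {"rank_by": "p50_latency_ms", "default_cdn": "cdn-b"}
print(resolve("www.example.com.", "203.0.113.0/26", policy))
```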

    Low latency query processing and data retrieval at the edge

    Publication Number: US11550800B1

    Publication Date: 2023-01-10

    Application Number: US17039992

    Application Date: 2020-09-30

    Abstract: A datastore engine at an edge location of a content delivery network (CDN) may perform low-latency query processing and data retrieval for multiple types of databases at one or more origin servers. When a client sends a query to the edge location, the datastore engine translates the query from a back-end database format into a native format of the local edge datastore. If the requested data is not present locally, the datastore engine retrieves the data from the back-end table and inserts it into the local edge datastore. By using multiple queries over time to re-construct data from the backend database tables at the edge, the datastore engine may provide low-latency access to data from the backend database tables (avoiding the need to retrieve data from the back-end tables to serve subsequent queries).
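
    The sketch below shows one way the translate-then-fetch flow could look, assuming a simple key-value edge store and a DynamoDB-style GetItem request as the back-end query format; the class names, key layout, and the FakeOrigin stand-in are hypothetical.

```python
class EdgeDatastoreEngine:
    """Serves queries at the edge, falling back to the origin on a miss."""
    def __init__(self, origin_client):
        self.local_store = {}          # native edge format: key -> item
        self.origin_client = origin_client

    def _translate(self, backend_query):
        # A GetItem-style request becomes a flat key in the edge store's format.
        table = backend_query["TableName"]
        key = backend_query["Key"]["id"]
        return f"{table}#{key}"

    def query(self, backend_query):
        native_key = self._translate(backend_query)
        item = self.local_store.get(native_key)
        if item is not None:
            return item                                       # low-latency edge hit
        item = self.origin_client.get_item(backend_query)     # fall back to origin
        self.local_store[native_key] = item                   # re-construct the table at the edge
        return item

class FakeOrigin:
    """Stands in for the back-end database at the origin server."""
    def get_item(self, backend_query):
        return {"id": backend_query["Key"]["id"], "payload": "from-origin"}

engine = EdgeDatastoreEngine(FakeOrigin())
q = {"TableName": "products", "Key": {"id": "sku-42"}}
engine.query(q)          # miss: fetched from the origin, inserted at the edge
print(engine.query(q))   # hit: served from the local edge datastore
```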

Protecting data integrity in a content distribution network

    Publication Number: US20220207184A1

    Publication Date: 2022-06-30

    Application Number: US17698302

    Application Date: 2022-03-18

    Abstract: Various embodiments of apparatuses and methods for protecting data integrity in a content distribution network (“CDN”) are described. Code or data in one of the servers or instances of a CDN might sometimes become incorrect or corrupt. One corrupted server or instance can potentially impact a considerable portion of the CDN. To solve these and other problems, various embodiments of a CDN can designate one or more parameters, which are then identified in a request for content to another entity. In these embodiments, the CDN can generate an encoding of the expected values of the designated parameters. The CDN can then compare, in these embodiments, its encoding of the expected values to an encoding of the values received from the other entity in response to the request. The CDN can validate the content of the response, as well as the identity of the other entity, in some embodiments.
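
    The request side of this flow might look like the sketch below: the CDN names the designated parameters in its request to the other entity and keeps an encoding of the values it expects to see back. The HMAC-over-sorted-JSON construction, the shared key, and the parameter names are assumptions, not the encoding specified by the patent.

```python
import hashlib
import hmac
import json

SHARED_KEY = b"example-shared-key"  # assumed key material for the sketch

def encode_values(values):
    """Deterministic encoding of parameter values (sorted JSON, then HMAC-SHA256)."""
    canonical = json.dumps(values, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, canonical, hashlib.sha256).hexdigest()

def build_request(url, designated_params, expected_values):
    """Build the content request plus the encoding the CDN will check later."""
    request = {
        "url": url,
        # Tell the other entity which parameters it must report back.
        "designated_params": sorted(designated_params),
    }
    expected_encoding = encode_values(
        {p: expected_values[p] for p in designated_params})
    return request, expected_encoding

request, expected = build_request(
    "https://origin.example.com/asset.js",
    designated_params=["content-length", "etag"],
    expected_values={"content-length": "1842", "etag": '"abc123"'},
)
print(request, expected)
```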

    Stateful server-less multi-tenant computing at the edge

    Publication Number: US10805652B1

    Publication Date: 2020-10-13

    Application Number: US16370712

    Application Date: 2019-03-29

    Abstract: Techniques for stateful computing at the edge of a content delivery network are described. In some embodiments, a point of presence of the content delivery network includes proxy servers, function execution units, and function state cache servers executing on computer systems within the point of presence. A proxy server checks for requests for resources, hosted on behalf of customers of the content delivery network, that trigger a customer-specified function. When a function is triggered, the proxy server selects an execution unit and sends a function execution request to the execution unit. The execution unit executes functions of many different customers of the provider network. Upon receiving a request to execute a function that is stateful, the execution unit retrieves the function state from a function state cache server, executes the function, and returns a result to the proxy server.
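
    A minimal sketch of the point-of-presence components described above appears below: a proxy server that matches requests against customer-specified triggers, an execution unit shared by many customers, and a function state cache that stateful functions read from and write back to. The trigger format, the hash-based unit selection, and all names are illustrative assumptions.

```python
class FunctionStateCache:
    """Stands in for the function state cache servers in the point of presence."""
    def __init__(self):
        self._state = {}

    def load(self, function_id):
        return self._state.get(function_id, {})

    def store(self, function_id, state):
        self._state[function_id] = state

class ExecutionUnit:
    """Runs customer functions; stateful ones get their state from the cache."""
    def __init__(self, state_cache):
        self.state_cache = state_cache

    def execute(self, function_id, handler, request, stateful):
        state = self.state_cache.load(function_id) if stateful else {}
        result, new_state = handler(request, state)
        if stateful:
            self.state_cache.store(function_id, new_state)
        return result

class ProxyServer:
    def __init__(self, execution_units, triggers):
        self.execution_units = execution_units
        self.triggers = triggers  # path prefix -> (function_id, handler, stateful)

    def handle(self, request):
        for prefix, (fid, handler, stateful) in self.triggers.items():
            if request["path"].startswith(prefix):
                unit = self.execution_units[hash(fid) % len(self.execution_units)]
                return unit.execute(fid, handler, request, stateful)
        return {"status": 200, "body": "served from cache"}

# Example stateful function: counts requests per path across invocations.
def count_hits(request, state):
    state[request["path"]] = state.get(request["path"], 0) + 1
    return {"status": 200, "hits": state[request["path"]]}, state

cache = FunctionStateCache()
proxy = ProxyServer([ExecutionUnit(cache)], {"/api/": ("fn-1", count_hits, True)})
print(proxy.handle({"path": "/api/items"}))   # {'status': 200, 'hits': 1}
print(proxy.handle({"path": "/api/items"}))   # {'status': 200, 'hits': 2}
```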

    Using forensic trails to mitigate effects of a poisoned cache

    Publication Number: US11463535B1

    Publication Date: 2022-10-04

    Application Number: US17489581

    Application Date: 2021-09-29

    Abstract: A content delivery network may store forensic trail metadata for cache entries in order to identify and evict poisoned cache entries, mitigating the effects of a poisoned cache due to corrupted cache servers. Each entry of a cache server may include the cached item as well as forensic metadata. The forensic metadata includes identifiers for cache servers that the item was served from, as well as a timestamp for the time that the item was served. The cache server also maintains a list of corrupted servers, as well as a time window for each corrupted server. The cache server determines, based on the list of corrupted servers and the forensic metadata, whether to evict cache entries.
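
    The sketch below models the forensic-trail check under stated assumptions: each cache entry carries the chain of servers it was served from along with timestamps, and an entry is evicted when any hop falls inside a corrupted server's time window. The data layout and all names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ForensicHop:
    server_id: str
    served_at: float     # epoch seconds when this server served the item

@dataclass
class CacheEntry:
    item: bytes
    trail: list = field(default_factory=list)   # list of ForensicHop

class CacheServer:
    def __init__(self):
        self.entries = {}      # key -> CacheEntry
        self.corrupted = {}    # server_id -> (window_start, window_end)

    def mark_corrupted(self, server_id, window_start, window_end):
        """Record a corrupted server and its time window, then purge tainted entries."""
        self.corrupted[server_id] = (window_start, window_end)
        self._evict_poisoned()

    def _is_poisoned(self, entry):
        for hop in entry.trail:
            window = self.corrupted.get(hop.server_id)
            if window and window[0] <= hop.served_at <= window[1]:
                return True
        return False

    def _evict_poisoned(self):
        for key in [k for k, e in self.entries.items() if self._is_poisoned(e)]:
            del self.entries[key]

cache = CacheServer()
cache.entries["/logo.png"] = CacheEntry(
    b"...", trail=[ForensicHop("edge-7", 1000.0), ForensicHop("mid-3", 1005.0)])
cache.mark_corrupted("mid-3", window_start=990.0, window_end=1010.0)
print("/logo.png" in cache.entries)   # False: served through a corrupted server
```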

    Protecting data integrity in a content distribution network

    Publication Number: US11281804B1

    Publication Date: 2022-03-22

    Application Number: US16368705

    Application Date: 2019-03-28

    Abstract: Various embodiments of apparatuses and methods for protecting data integrity in a content distribution network (“CDN”) are described. Code or data in one of the servers or instances of a CDN might sometimes become incorrect or corrupt. One corrupted server or instance can potentially impact a considerable portion of the CDN. To solve these and other problems, various embodiments of a CDN can designate one or more parameters, which are then identified in a request for content to another entity. In these embodiments, the CDN can generate an encoding of the expected values of the designated parameters. The CDN can then compare, in these embodiments, its encoding of the expected values to an encoding of the values received from the other entity in response to the request. The CDN can validate the content of the response, as well as the identity of the other entity, in some embodiments.
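
    Complementing the request-side sketch shown with the related publication above, the sketch below covers the response side: the CDN encodes the designated parameter values it actually received and compares that encoding against the expected one, validating both the content of the response and the identity of the responding entity. The same assumed HMAC construction is used; it is not the patent's specified encoding.

```python
import hashlib
import hmac
import json

SHARED_KEY = b"example-shared-key"  # assumed key material for the sketch

def encode_values(values):
    """Deterministic encoding of parameter values (sorted JSON, then HMAC-SHA256)."""
    canonical = json.dumps(values, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, canonical, hashlib.sha256).hexdigest()

def validate_response(designated_params, response_headers, expected_encoding):
    """Return True only if the received values match what the CDN expected."""
    received = {p: response_headers.get(p) for p in designated_params}
    received_encoding = encode_values(received)
    # Constant-time comparison, so the check itself does not leak information.
    return hmac.compare_digest(received_encoding, expected_encoding)

expected = encode_values({"content-length": "1842", "etag": '"abc123"'})
headers = {"content-length": "1842", "etag": '"abc123"', "server": "origin"}
ok = validate_response(["content-length", "etag"], headers, expected)
print("serve response" if ok else "discard response as corrupt")
```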

    Intelligent hierarchical caching based on metrics for objects in different cache levels

    Publication Number: US11216382B1

    Publication Date: 2022-01-04

    Application Number: US16820414

    Application Date: 2020-03-16

    Abstract: A cache system may maintain size and/or request rate metrics for objects in a lower level cache and for objects in a higher level cache. When an L1 cache does not have an object, it requests the object from an L2 cache and sends to the L2 cache aggregate size and request rate metrics for objects in the L1 cache. The L2 cache may obtain a size metric and a request rate metric for the requested object and then determine, based on the aggregate size and request rate metrics for the objects in the L1 cache and the size metric and the request rate metric for the requested object in the L2 cache, an indication of whether or not the L1 cache should cache the requested object. The L2 cache provides the object and the indication to the L1 cache.
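
    One way the L2-side decision could be made is sketched below: on an L1 miss, the L1 cache forwards its aggregate size and request-rate metrics, and the L2 cache compares the requested object's request rate per byte against that aggregate to produce the caching indication returned with the object. The specific density heuristic and all names are assumptions for illustration.

```python
def should_l1_cache(l1_aggregate, object_size, object_request_rate):
    """Return True if the object looks at least as 'dense' as L1's current contents."""
    if object_size == 0:
        return True
    # Hits-per-byte density of what L1 already holds vs. the candidate object.
    l1_density = l1_aggregate["total_request_rate"] / max(l1_aggregate["total_size"], 1)
    object_density = object_request_rate / object_size
    return object_density >= l1_density

class L2Cache:
    def __init__(self):
        self.objects = {}   # key -> bytes
        self.metrics = {}   # key -> {"size": int, "request_rate": float}

    def get(self, key, l1_aggregate):
        """Handle an L1 miss: return the object plus an indication to cache it or not."""
        obj = self.objects[key]
        m = self.metrics[key]
        indication = should_l1_cache(l1_aggregate, m["size"], m["request_rate"])
        return obj, indication

l2 = L2Cache()
l2.objects["/video.ts"] = b"\x00" * 4096
l2.metrics["/video.ts"] = {"size": 4096, "request_rate": 0.5}
l1_aggregate = {"total_size": 1_000_000, "total_request_rate": 200.0}
obj, cache_it = l2.get("/video.ts", l1_aggregate)
print(cache_it)   # False here: the object is requested too rarely for its size
```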
