Abstract:
A system for dictionary generation can generate a set of candidate dictionaries based at least in part on subsets of content, where each candidate dictionary is generated based at least in part on a different subset of the content. The system can further use the candidate dictionaries to compress the content and can identify one or more dictionary quality metrics for each candidate dictionary based at least in part on how well that candidate dictionary compresses the content.
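The flow described above can be sketched in Python using zlib's preset-dictionary support; the chunking scheme, the compression-ratio metric, and all names here are illustrative assumptions, not details from the disclosure:

```python
import zlib

def candidate_dictionaries(content: bytes, chunk_size: int = 1024) -> list[bytes]:
    """Derive one candidate dictionary from each subset (chunk) of the content."""
    return [content[i:i + chunk_size]
            for i in range(0, len(content), chunk_size)]

def quality_metric(content: bytes, dictionary: bytes) -> float:
    """Dictionary quality as the compression ratio achieved on the content."""
    comp = zlib.compressobj(zdict=dictionary)
    compressed = comp.compress(content) + comp.flush()
    return len(content) / len(compressed)

content = b"the quick brown fox jumps over the lazy dog " * 50
scores = {i: quality_metric(content, d)
          for i, d in enumerate(candidate_dictionaries(content))}
best = max(scores, key=scores.get)  # highest-ratio candidate wins
```

A production system would likely use per-chunk sampling and richer metrics (latency, dictionary size), but the ratio comparison captures the evaluation loop.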
Abstract:
In one aspect, the disclosure teaches a system configured to receive from a device a request for content, the request including an identifier of a first set of dictionaries available locally at the device. The system is also configured to select a second set of dictionaries to compress the content requested by the device based at least in part on the first set of dictionaries available at the device, the second set of dictionaries being selected from a third set of local system dictionaries available at the system.
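One plausible reading of this selection step is an intersection between the dictionaries the client advertises and those the server holds locally; the identifier scheme below is a hypothetical sketch, not the disclosed protocol:

```python
def select_dictionaries(client_dict_ids: set[str],
                        server_dicts: dict[str, bytes]) -> dict[str, bytes]:
    """Pick, from the server's local dictionaries (the 'third set'),
    those the client already holds, so compressed responses can be
    decoded without shipping a dictionary first."""
    return {d_id: server_dicts[d_id]
            for d_id in client_dict_ids if d_id in server_dicts}

# Hypothetical identifiers and contents for illustration only.
server_dicts = {"d1": b"common-headers", "d2": b"html-boilerplate", "d3": b"json-keys"}
client_has = {"d2", "d3", "d9"}          # "d9" is unknown to the server
selected = select_dictionaries(client_has, server_dicts)
```

The server can then compress the response with any dictionary in `selected`, knowing the device can decompress it.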
Abstract:
At a rule processing unit of an evolving, self-organized machine learning-based resource management service, a rule of a first rule set is applied to a value of a first collected metric, resulting in the initiation of a first corrective action. A set of metadata indicating the metric value and the corrective action is transmitted to a repository, and is used as part of an input data set for a machine learning model trained to generate rule modification recommendations. In response to determining that the corrective action did not meet a success criterion, an escalation message is transmitted to another rule processing unit.
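A minimal sketch of the apply-record-escalate loop, assuming threshold rules and boolean success criteria (the rule shape, metric names, and actions are all invented for illustration):

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class RuleProcessingUnit:
    # metric name -> (threshold, corrective action returning True on success)
    rules: dict[str, tuple[float, Callable[[float], bool]]]
    repository: list[dict] = field(default_factory=list)  # input data for the ML model
    escalation_target: Optional["RuleProcessingUnit"] = None

    def apply(self, metric: str, value: float) -> None:
        threshold, action = self.rules[metric]
        if value <= threshold:
            return
        succeeded = action(value)
        # Transmit metadata (metric value, action, outcome) to the repository.
        self.repository.append({"metric": metric, "value": value,
                                "action": action.__name__, "succeeded": succeeded})
        if not succeeded and self.escalation_target is not None:
            # Success criterion not met: escalate to another rule processing unit.
            self.escalation_target.apply(metric, value)

def restart_worker(value: float) -> bool:
    return value < 95.0  # pretend a restart only helps below 95% utilization

def add_capacity(value: float) -> bool:
    return True

tier2 = RuleProcessingUnit(rules={"cpu": (80.0, add_capacity)})
tier1 = RuleProcessingUnit(rules={"cpu": (80.0, restart_worker)},
                           escalation_target=tier2)
tier1.apply("cpu", 97.0)  # fails locally, escalates to tier2
```

The recorded metadata in `repository` is what the abstract's ML model would consume to recommend rule modifications.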
Abstract:
A resource delivery network and method for distributing content in the network is disclosed herein. The network comprises a plurality of servers arranged in tiers and partitioned. Each server includes a resource store with a set of resources for distribution to a successive tier. Updates to each successive tier are provided by a pull-forward client on servers in the tier. This forward propagation mechanism maximizes resource availability at edge servers in the network. Resources transmitted to the edge tier servers may be transformed, combined, and rendered without taxing lower tier servers. Transformation and pre-rendering of data can be performed by low priority CPU tasks at each layer of the system.
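The pull-forward propagation can be illustrated with a small sketch, assuming versioned resources and a simple tier chain (class names and the versioning scheme are assumptions, not from the disclosure):

```python
class TierServer:
    """One server in a tiered resource delivery network."""

    def __init__(self, name: str, upstream: "TierServer | None" = None):
        self.name = name
        self.upstream = upstream  # server in the preceding (lower) tier, if any
        self.store: dict[str, tuple[int, bytes]] = {}  # id -> (version, data)

    def pull_forward(self) -> None:
        """Pull-forward client: copy any resources that are newer upstream,
        so updates propagate toward the edge without the lower tier pushing."""
        if self.upstream is None:
            return
        for rid, (version, data) in self.upstream.store.items():
            if rid not in self.store or self.store[rid][0] < version:
                self.store[rid] = (version, data)

origin = TierServer("origin")
mid = TierServer("mid", upstream=origin)
edge = TierServer("edge", upstream=mid)

origin.store["logo.png"] = (2, b"v2-bytes")
mid.pull_forward()   # mid pulls from origin
edge.pull_forward()  # edge pulls from mid
```

Transformation or pre-rendering would slot in at each tier's `pull_forward` as a low-priority task before storing, per the abstract.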
Abstract:
A system configured to generate a set of compression dictionary snapshots. The system can determine a subset of a set of compression dictionary definitions, the subset comprising a first portion of one or more definitions that have changed since the time of a previous snapshot and a second portion of one or more definitions associated with a predetermined portion of the dictionary. The system can further generate and store snapshots based at least in part on the determined subset of definitions, and can determine a plurality of active snapshots from the set of snapshots such that the full set of definitions is included in the plurality of active snapshots.
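One way to read this: each snapshot holds the definitions changed since the last snapshot plus a predetermined (e.g. rotating) slice, and the active set is the newest run of snapshots that together cover every definition. The sketch below encodes that reading; all names and the coverage rule are assumptions:

```python
def snapshot_subset(definitions: dict[str, str], last_modified: dict[str, float],
                    previous_snapshot_time: float,
                    rotation_keys: set[str]) -> dict[str, str]:
    """Definitions changed since the previous snapshot, plus a
    predetermined portion of the dictionary (here, a rotating key set)."""
    return {k: v for k, v in definitions.items()
            if last_modified[k] > previous_snapshot_time or k in rotation_keys}

def active_snapshots(snapshots: list[dict[str, str]],
                     all_keys: set[str]) -> list[dict[str, str]]:
    """Walk snapshots newest-first until every definition is covered."""
    active: list[dict[str, str]] = []
    covered: set[str] = set()
    for snap in reversed(snapshots):
        active.append(snap)
        covered |= snap.keys()
        if covered >= all_keys:
            break
    return active

defs = {"a": "A1", "b": "B0", "c": "C0"}
mods = {"a": 10.0, "b": 1.0, "c": 1.0}   # only "a" changed after t=5.0
snap = snapshot_subset(defs, mods, previous_snapshot_time=5.0,
                       rotation_keys={"c"})
snapshots = [{"a": "A0", "b": "B0", "c": "C0"}, snap]  # oldest first
active = active_snapshots(snapshots, set(defs))
```

The rotating portion guarantees that even unchanged definitions eventually reappear, bounding how far back the active set must reach.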
Abstract:
Non-blocking processing of federated transactions may be implemented for distributed data partitions. A transaction may be received that specifies keys at data nodes to lock in order to perform the transaction. Lock requests are generated and sent to the data nodes, identifying sibling keys to be locked at other data nodes for the transaction. In response to receiving the lock requests, data nodes may send back lock queues indicating other lock requests for the keys at the data node. An evaluation of the lock queues, based at least in part on the ordering of the lock requests in the lock queues, may be performed to identify a particular transaction to commit. Once identified, a request to commit the identified transaction may be sent to the particular data nodes indicated by the sibling keys in a lock request for the identified transaction.
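The queue-ordering evaluation can be sketched as follows, assuming a transaction is committable when it heads the lock queue for every key it needs (the data structures and commit rule here are one simple interpretation, not the disclosed protocol):

```python
from collections import defaultdict

class DataNode:
    """Holds per-key lock queues in arrival order."""

    def __init__(self) -> None:
        self.lock_queues: dict[str, list[str]] = defaultdict(list)

    def request_lock(self, txn_id: str, key: str) -> list[str]:
        # Enqueue the request and return a copy of the queue so the
        # coordinator can evaluate the ordering without blocking.
        self.lock_queues[key].append(txn_id)
        return list(self.lock_queues[key])

def can_commit(txn_id: str, queues: dict[str, list[str]]) -> bool:
    """Non-blocking evaluation: a transaction is identified for commit
    only if it heads the lock queue for every key (including sibling
    keys at other data nodes) that it specified."""
    return all(q and q[0] == txn_id for q in queues.values())

node = DataNode()
q_x = node.request_lock("t1", "x")
q_x = node.request_lock("t2", "x")   # t2 queued behind t1 on "x"
q_y = node.request_lock("t2", "y")
```

Here `t1` heads the queue for `x` and can commit, while `t2` must wait on `x` even though it heads `y`; no node ever blocks while the coordinator decides.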