Abstract:
A system and method that enables quick access to large volumes of data on a real-time basis and is totally transparent to the application programs that use the data. This is accomplished by placing information extracted from the database into a master file that is stored in a data storage device and then loaded into memory for access by application programs. When information in the database changes, the corresponding information is updated using an incremental file and an index file that are then loaded into memory for access by application programs. The master file, index file, and incremental file are linked in such a fashion as to enable quick access to the desired data.
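The master/index/incremental linkage above can be illustrated with a minimal in-memory sketch. The class and field names (`master`, `incremental`, `index`) are assumptions for illustration, not the patent's actual file layout:

```python
# Illustrative sketch of the abstract's master/index/incremental linkage,
# modeled with in-memory dicts; names and layout are assumptions.

class LinkedStore:
    def __init__(self, master_records):
        # Master file: bulk snapshot extracted from the database.
        self.master = dict(master_records)
        # Incremental file: updates applied since the snapshot.
        self.incremental = {}
        # Index file: maps each key to the file holding its current value.
        self.index = {k: "master" for k in self.master}

    def apply_update(self, key, value):
        self.incremental[key] = value
        self.index[key] = "incremental"  # redirect future lookups

    def get(self, key):
        src = self.index.get(key)
        if src == "incremental":
            return self.incremental[key]
        if src == "master":
            return self.master[key]
        return None

store = LinkedStore({"acct:1": "alice", "acct:2": "bob"})
store.apply_update("acct:2", "bob-updated")
print(store.get("acct:1"))  # "alice", served from the master file
print(store.get("acct:2"))  # "bob-updated", served from the incremental file
```

Because the index redirects lookups, readers never need to know whether a value lives in the master snapshot or in the incremental updates.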
Abstract:
An office landscape directory system stores information as entries in a global database and as views in a local database, each with a timestamp indicating when the view was originally written or last modified. Subsets of selected global entries are stored as views in a local database together with the timestamp of the last creation or modification of that view in the global database. The entries of the global database are periodically polled by the local database in order to compare the timestamps of the local views with the respective global database entries. A non-match results in updating the local view, including its timestamp, so as to agree with the global database entries.
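The periodic timestamp comparison described above can be sketched as a simple polling function. The field names (`data`, `ts`) and the `poll` helper are illustrative assumptions:

```python
# Illustrative sketch of the periodic poll: refresh any local view whose
# timestamp trails its global entry. Field names are assumptions.

def poll(local_views, global_entries):
    """Update every local view that is older than its global entry."""
    for key, entry in global_entries.items():
        view = local_views.get(key)
        if view is not None and view["ts"] < entry["ts"]:
            # Non-match: refresh the view, including its timestamp.
            local_views[key] = {"data": entry["data"], "ts": entry["ts"]}

local = {"room-101": {"data": "old layout", "ts": 5}}
globl = {"room-101": {"data": "new layout", "ts": 9}}
poll(local, globl)
print(local["room-101"])  # now agrees with the global entry
```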
Abstract:
A data cache platform maintains pre-computed database query results computed by a computation platform based on data maintained in the computation platform. The data cache platform is configured to determine probabilities of the pre-computed database query results being outdated, to automatically issue re-computation orders to the computation platform for updating pre-computed database query results on the basis of the determined probabilities, and to receive the updated pre-computed database query results as results of the re-computation orders. The probability determination depends on a probabilistic model and on the occurrence of asynchronous real-time events. The real-time events are nondeterministic with regard to the expiration of the cached database query results and have only a probabilistic influence on the discrepancies between the database query results maintained in the data cache platform and the presumed actual database query results.
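A minimal sketch of such probability-driven re-computation, assuming an exponential-decay outdatedness model nudged upward by real-time events. The half-life, event weights, and threshold are illustrative assumptions, not the patent's actual probabilistic model:

```python
# Illustrative sketch: select cached results for re-computation when their
# estimated probability of being outdated crosses a threshold. The decay
# model and event weights are assumptions for illustration only.
import math

def outdated_probability(age_s, half_life_s, event_weight=0.0):
    """Base decay probability, nudged upward by asynchronous events."""
    base = 1.0 - math.exp(-math.log(2) * age_s / half_life_s)
    return min(1.0, base + event_weight)

def select_for_recomputation(cache, threshold=0.5):
    # cache maps key -> (age in seconds, accumulated event weight)
    return [key for key, (age, events) in cache.items()
            if outdated_probability(age, 3600.0, events) >= threshold]

cache = {"fareA": (600, 0.0), "fareB": (5400, 0.0), "fareC": (600, 0.6)}
print(select_for_recomputation(cache))  # ['fareB', 'fareC']
```

Here `fareB` is selected because of age alone, while `fareC` is young but was hit by a real-time event that raised its outdatedness probability.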
Abstract:
Embodiments of the present invention provide a data operation method and a data management server. The method includes: obtaining an identifier of a tenant and a data operation request, where the data operation request is used to request a data operation on data of the tenant and includes a first table name corresponding to the data; determining, according to the identifier of the tenant and the first table name, a second table name of a data table in a first database that corresponds to the data, where the first database stores the respective data tables of multiple tenants, the data table of each tenant corresponds to that tenant's identifier, and the tenant is one of the multiple tenants; replacing the first table name in a first SQL statement corresponding to the data operation request with the second table name; and requesting the first database to execute the first SQL statement obtained after the replacement, so as to complete the data operation on the data. In the embodiments of the present invention, a complex SQL rewriting process can be avoided, data operation efficiency can be improved, and, in addition, data interference between tenants can be prevented to ensure data security.
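The table-name substitution step can be sketched as follows. The physical naming scheme `{table}_{tenant}` and the helper names are illustrative assumptions, not the patent's mapping:

```python
# Illustrative sketch of the abstract's table-name substitution: map a
# logical table name plus a tenant identifier to the tenant's physical
# table, then swap it into the SQL text. The "{table}_{tenant}" naming
# scheme is an assumption for illustration.
import re

def physical_table(tenant_id, logical_name):
    return f"{logical_name}_{tenant_id}"

def rewrite_sql(sql, tenant_id, logical_name):
    # Replace only whole-word occurrences of the logical table name.
    target = physical_table(tenant_id, logical_name)
    return re.sub(rf"\b{re.escape(logical_name)}\b", target, sql)

sql = "SELECT id, total FROM orders WHERE total > 100"
print(rewrite_sql(sql, "t42", "orders"))
# SELECT id, total FROM orders_t42 WHERE total > 100
```

Because each tenant's statements resolve to that tenant's own physical table, one tenant's SQL cannot touch another tenant's data.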
Abstract:
Pre-computed search results are re-computed within a given time interval by a computation platform. The number of pre-computed search results re-computed by the computation platform is limited by the computation platform's computation resources available for re-computation within the given time interval. The computation resources needed to re-compute a pre-computed search result i depend on whether or not other pre-computed search results related to the pre-computed search result i are re-computed during the given time interval. A re-computation controller dynamically estimates the computation resources needed to re-compute pre-computed search result i depending on which other pre-computed search results related to the pre-computed search result i are selected for re-computation during the given time interval.
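The dependency-aware cost estimate can be sketched as a function over the current selection: if a related result is also selected for the interval, part of result i's computation can be shared, lowering i's marginal cost. The cost figures and the sharing discount are illustrative assumptions:

```python
# Illustrative sketch of the re-computation controller's dynamic cost
# estimate. Costs and the shared-portion discount are assumptions.

def recomputation_cost(i, selected, base_cost, shared_cost, related):
    """Marginal resources to re-compute i, given what else is selected."""
    # The shared portion comes for free if any related result is
    # already selected for re-computation in the same interval.
    if any(r in selected for r in related.get(i, ())):
        return base_cost[i] - shared_cost[i]
    return base_cost[i]

base = {"A": 10.0, "B": 10.0}
shared = {"A": 4.0, "B": 4.0}
related = {"A": ["B"], "B": ["A"]}

print(recomputation_cost("A", set(), base, shared, related))  # 10.0
print(recomputation_cost("A", {"B"}, base, shared, related))  # 6.0
```

A controller can re-run this estimate as the selection grows, so each candidate's cost reflects the sharing opportunities created by results already chosen.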
Abstract:
Embodiments of the present invention provide a cache method, a cache edge server, and a cache core server, where the cache method includes: receiving, from the cache core server over a channel, information about a Transmission Control Protocol flow; determining, according to the information, whether the cache edge server stores content corresponding to the information; sending a migrate-out request to the cache core server when the cache edge server stores the content corresponding to the information; receiving a migrate-out response from the cache core server after sending the migrate-out request; establishing a Transmission Control Protocol connection to the user equipment according to the migrate-out response; and reading the content corresponding to the connection from the storage of the cache edge server according to a byte quantity of the content sent by the cache core server, and sending the content to the user equipment. In this way, the network cache system in the embodiments of the present invention can still implement the existing function without a cache token server. In addition, because the cache token server is omitted, the download speed is improved and the costs of deploying a cache token server are reduced.
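The steps the edge server takes can be sketched as a simple decision flow. The message names mirror the abstract; the data structures and action labels are illustrative assumptions:

```python
# Illustrative sketch of the edge server's decision flow, reduced to a
# pure function; message names follow the abstract, structures are assumed.

def handle_flow_info(info, edge_store):
    """Return the sequence of actions the edge server takes for a flow."""
    actions = ["receive_flow_info"]
    if info["content_id"] in edge_store:
        # Edge server holds the content: take over serving the flow.
        actions += ["send_migrate_out_request",
                    "receive_migrate_out_response",
                    "establish_tcp_to_ue",
                    "serve_from_local_storage"]
    return actions

edge_store = {"video-7": b"..."}
print(handle_flow_info({"content_id": "video-7"}, edge_store))
```

When the edge server lacks the content, the flow simply stays with the core server, which is why no separate cache token server is needed to broker the handover.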
Abstract:
A bit mask is associated with each record of a database. A query condition is attributed a bit position in the bit mask. For queries that have a query condition for which a bit in the bit mask exists, search processing speed is improved because checking the query condition for a record of the database is reduced to verifying a bit in the bit mask associated with the record.
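A minimal sketch of this technique: each condition assigned a bit position is evaluated once per record when the mask is built, and the query-time check collapses to a single bit test. The specific conditions and bit assignments below are illustrative assumptions:

```python
# Illustrative sketch of the bit-mask technique. Assumed bit assignments:
# bit 0 = "price < 100", bit 1 = "in_stock".
CONDITIONS = [lambda r: r["price"] < 100, lambda r: r["in_stock"]]

def build_mask(record):
    """Evaluate every registered condition once and pack the results."""
    mask = 0
    for pos, cond in enumerate(CONDITIONS):
        if cond(record):
            mask |= 1 << pos
    return mask

def matches(mask, bit_pos):
    # Query time: a single bit test instead of re-evaluating the condition.
    return bool(mask & (1 << bit_pos))

rec = {"price": 50, "in_stock": False}
m = build_mask(rec)
print(matches(m, 0))  # True: price < 100
print(matches(m, 1))  # False: not in stock
```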