Abstract:
Systems and methods for data sharing include generating at least one sharing plan with a cheapest cost and/or a shortest execution time for one or more sharing arrangements. Admissibility of the one or more sharing arrangements is determined such that a critical time path of the at least one sharing plan does not exceed a staleness level and a cost of the at least one sharing plan does not exceed a capacity. Sharing plans of admissible sharing arrangements are executed while maintaining the staleness level.
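A minimal sketch of the admissibility test described above, assuming a sharing plan is represented as a task DAG with per-task durations and costs; all names below are illustrative, not taken from the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class SharingPlan:
    # Hypothetical representation: a plan is a DAG of tasks, each with an
    # execution time and a cost. Field names are assumptions for illustration.
    durations: dict                      # task -> execution time
    costs: dict                          # task -> cost
    edges: dict = field(default_factory=dict)  # task -> list of successor tasks

def critical_path_length(plan: SharingPlan) -> float:
    """Longest time-weighted path through the plan DAG (the critical time path)."""
    memo = {}
    def longest_from(task):
        if task not in memo:
            succ = plan.edges.get(task, [])
            memo[task] = plan.durations[task] + (max(map(longest_from, succ)) if succ else 0.0)
        return memo[task]
    return max(longest_from(t) for t in plan.durations)

def is_admissible(plan: SharingPlan, staleness_level: float, capacity: float) -> bool:
    """A sharing arrangement is admissible when the plan's critical time path
    stays within the staleness level and its total cost stays within the capacity."""
    return (critical_path_length(plan) <= staleness_level
            and sum(plan.costs.values()) <= capacity)
```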
Abstract:
A system includes first and second data stores, each store having a set of materialized views of the base data, the views comprising a multistore physical design; an execution layer coupled to the data stores; a query optimizer coupled to the execution layer; and a tuner coupled to the query optimizer and the execution layer, wherein the tuner determines a placement of the materialized views across the stores to improve workload performance by considering each store's view storage budget and a transfer budget when moving views across the stores.
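As a hedged illustration, the tuner's placement step could be approximated by a greedy pass over candidate (view, store) assignments under the per-store storage budgets and the transfer budget; the inputs and scoring below are assumptions, not the patented algorithm:

```python
def tune_view_placement(views, benefit, size, current_store,
                        storage_budget, transfer_budget):
    """Greedy sketch: assign each view to the store where it benefits the
    workload most, subject to each store's remaining storage budget and a
    global transfer budget for moving views between stores.

    benefit[(view, store)], size[view], current_store[view], and
    storage_budget[store] are illustrative inputs, not the disclosure's API.
    """
    remaining = dict(storage_budget)
    placement = {}
    # Consider the most beneficial (view, store) pairs first.
    candidates = sorted(((benefit[(v, s)], v, s) for v in views for s in remaining),
                        reverse=True)
    for gain, v, s in candidates:
        if v in placement:
            continue
        move_cost = 0 if current_store.get(v) == s else size[v]
        if size[v] <= remaining[s] and move_cost <= transfer_budget:
            placement[v] = s
            remaining[s] -= size[v]
            transfer_budget -= move_cost
    return placement
```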
Abstract:
Systems and methods for data sharing include merging sharing plans of admissible sharing arrangements to provide a merged sharing plan. A set of all possible plumbings are determined for the merged sharing plan. A plumbing with a maximum profit is iteratively applied, using a processor, to the merged sharing plan for each plumbing of the set such that a staleness level is maintained to provide an optimized sharing plan.
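A rough sketch of the iterative step, assuming hypothetical helpers `profit`, `apply_plumbing`, and `critical_path` that score a plumbing, rewrite the merged plan, and measure its critical time path:

```python
def optimize_merged_plan(merged_plan, plumbings, profit, apply_plumbing,
                         critical_path, staleness_level):
    """Greedy sketch: from the set of all possible plumbings, repeatedly apply
    the one with the largest profit, keeping a rewrite only if the resulting
    plan's critical time path still respects the staleness level."""
    candidates = set(plumbings)
    while candidates:
        best = max(candidates, key=profit)
        if profit(best) <= 0:
            break
        candidate_plan = apply_plumbing(merged_plan, best)
        if critical_path(candidate_plan) <= staleness_level:
            merged_plan = candidate_plan
        candidates.remove(best)
    return merged_plan
```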
Abstract:
Systems and methods are disclosed for managing a multi-store execution environment by applying opportunistic materialized views to improve workload performance and executing a plan on multiple database engines to increase query processing speed by leveraging the unique capabilities of each engine, both by enabling stages of a query to execute on multiple engines and by moving materialized views across engines.
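One way to picture stage-level placement across engines is a simple per-stage cost comparison; `estimate_runtime` is a hypothetical cost-model hook, not an API from the disclosure:

```python
def assign_stages(stages, engines, estimate_runtime):
    """Illustrative sketch: let each stage of a query plan run on whichever
    engine is estimated to execute it fastest, so that a single plan can span
    multiple engines."""
    return {stage: min(engines, key=lambda e: estimate_runtime(stage, e))
            for stage in stages}
```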
Abstract:
Systems and methods are disclosed to run a multistore system by receiving by-products of query processing in the multistore system, wherein the by-products include views or materializations of intermediate data; placing the views or materializations across the stores based on recently observed queries as indicative of a future query workload; determining a benefit score for each view based on a predicted future query workload, wherein each store has an allotted view storage budget, and there is a view transfer budget for transferring views between the stores; and tuning a physical design of the multistore system.
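The per-view benefit score could, for example, be framed as the expected runtime saved over the predicted future workload (here, the recently observed queries); the helpers and weighting below are illustrative assumptions:

```python
def view_benefit(view, predicted_workload, runtime_without, runtime_with, query_weight):
    """Sketch of a benefit score for one view: the weighted runtime improvement
    the view is expected to provide across the predicted future workload.
    runtime_without(q), runtime_with(q, view), and query_weight(q) are
    hypothetical cost-model hooks."""
    return sum(query_weight(q) * max(0.0, runtime_without(q) - runtime_with(q, view))
               for q in predicted_workload)
```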
Abstract:
Methods and systems for seamless context transfers include receiving a context object from one or more applications, wherein the context object includes updated context information for a user with an associated timestamp; entering the updated context information into a context information database; determining, using a processor, entries of the context information database for the user having a timestamp older than a predetermined threshold; purging the determined entries from the context information database; and sending an updated context object to one or more applications that reflects the current state of the context information for the user.
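A minimal sketch of the described flow (ingest a context object, purge entries older than a threshold, emit the updated context object), with illustrative structure and field names:

```python
import time

class ContextStore:
    """Sketch of the context-transfer flow; class and field names are
    assumptions made for illustration, not the disclosed design."""

    def __init__(self, max_age_seconds: float):
        self.max_age = max_age_seconds
        self.entries = {}          # user -> {key: (value, timestamp)}

    def ingest(self, user, context_object):
        # context_object: {key: (value, timestamp)} received from an application.
        self.entries.setdefault(user, {}).update(context_object)

    def purge_stale(self, user, now=None):
        # Drop entries whose timestamps are older than the predetermined threshold.
        now = time.time() if now is None else now
        self.entries[user] = {k: (v, ts)
                              for k, (v, ts) in self.entries.get(user, {}).items()
                              if now - ts <= self.max_age}

    def current_context(self, user):
        # Updated context object reflecting the user's current state.
        return {k: v for k, (v, _) in self.entries.get(user, {}).items()}
```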