Abstract:
Systems and methods are disclosed for operating a multistore system by receiving by-products of query processing in the multistore system, wherein the by-products include views or materializations of intermediate data; placing the views or materializations across the stores based on recently observed queries as indicative of the future query workload; determining a benefit score for each view based on the predicted future query workload, wherein each store has an allotted view storage budget and there is a view transfer budget for transferring views between the stores; and tuning the physical design of the multistore system.
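A minimal sketch of how such view placement could work, assuming a simple greedy strategy: each view gets a per-store benefit score derived from the predicted workload, and the most beneficial (view, store) pairs are admitted first, subject to each store's view storage budget and a global transfer budget for moving views between stores. The names, data layout, and greedy policy below are illustrative assumptions, not the disclosed method.

```python
from dataclasses import dataclass, field

@dataclass
class View:
    name: str
    size_gb: float
    current_store: str                            # store where the by-product was produced
    benefit: dict = field(default_factory=dict)   # per-store benefit score for the predicted workload

def place_views(views, storage_budget_gb, transfer_budget_gb):
    """Greedy placement sketch: assign each view to the store where its
    predicted-workload benefit is highest, subject to each store's view
    storage budget and a shared view transfer budget."""
    used = {store: 0.0 for store in storage_budget_gb}
    transferred = 0.0
    placement = {}
    # Consider the most beneficial (view, store) pairs first.
    candidates = sorted(
        ((v.benefit[s], v, s) for v in views for s in storage_budget_gb),
        key=lambda t: t[0], reverse=True)
    for benefit, view, store in candidates:
        if view.name in placement or benefit <= 0:
            continue
        move_cost = 0.0 if store == view.current_store else view.size_gb
        if (used[store] + view.size_gb <= storage_budget_gb[store]
                and transferred + move_cost <= transfer_budget_gb):
            placement[view.name] = store
            used[store] += view.size_gb
            transferred += move_cost
    return placement
```

For example, with two stores and two views, the view whose benefit gain justifies a transfer moves, while the larger view stays where it was produced:

```python
views = [
    View("v_orders_agg", 4.0, "hdfs", {"hdfs": 2.0, "rdbms": 9.0}),
    View("v_clicks_join", 12.0, "hdfs", {"hdfs": 6.0, "rdbms": 7.0}),
]
print(place_views(views, {"hdfs": 50.0, "rdbms": 10.0}, transfer_budget_gb=5.0))
# {'v_orders_agg': 'rdbms', 'v_clicks_join': 'hdfs'}
```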
Abstract:
A system includes first and second data stores, each store having a set of materialized views of the base data, the views comprising a multistore physical design; an execution layer coupled to the data stores; a query optimizer coupled to the execution layer; and a tuner coupled to the query optimizer and the execution layer, wherein the tuner determines a placement of the materialized views across the stores to improve workload performance while respecting each store's view storage budget and a transfer budget when moving views across the stores.
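Purely as an illustration of how the named components might be coupled (the class and method names are hypothetical, not taken from the disclosure), the architecture could be sketched as follows, with the tuner sitting alongside the optimizer and execution layer and driving view placement:

```python
from dataclasses import dataclass

@dataclass
class DataStore:
    """One store in the multistore system: base data plus a set of materialized
    views; the views across all stores form the multistore physical design."""
    name: str
    view_storage_budget_gb: float
    views: set

class ExecutionLayer:
    """Dispatches query plan fragments to the underlying stores."""
    def __init__(self, stores):
        self.stores = {s.name: s for s in stores}

class QueryOptimizer:
    """Plans queries against the current multistore physical design."""
    def __init__(self, executor):
        self.executor = executor

class Tuner:
    """Observes recent queries via the optimizer and, within each store's view
    storage budget and a cross-store transfer budget, decides where the
    materialized views should live to improve workload performance."""
    def __init__(self, optimizer, executor, transfer_budget_gb):
        self.optimizer = optimizer
        self.executor = executor
        self.transfer_budget_gb = transfer_budget_gb

    def retune(self, recent_queries):
        # Score views against the predicted workload, then choose a placement,
        # e.g. with a routine like place_views() in the earlier sketch.
        ...
```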