Abstract:
Example embodiments relate to predicting execution times of concurrent queries. In example embodiments, historical data is iteratively generated for a machine learning model by varying a concurrency level of query executions in a database, determining a query execution plan for a pending concurrent query, extracting query features from the query execution plan, and executing the pending concurrent query to determine a query execution time. The machine learning model may then be created based on the query features, variation in the concurrency level, and the query execution time. The machine learning model is used to generate an execution schedule for production queries, where the execution schedule satisfies service level agreements of the production queries.
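Below is a minimal sketch of the training loop the abstract describes, assuming synthetic plan features, a hypothetical run_query helper, and an off-the-shelf linear regression model; none of these specifics come from the claimed embodiments, they only illustrate varying a concurrency level, extracting plan features, recording execution times, and fitting a predictor that an SLA-aware scheduler could consult.

```python
# Sketch: build historical data by varying concurrency, then fit a model
# that predicts execution time from (query features, concurrency level).
# Feature names, run_query, and the model choice are illustrative assumptions.
import random
from sklearn.linear_model import LinearRegression

def extract_features(plan):
    # Hypothetical features taken from a query execution plan.
    return [plan["est_rows"], plan["num_joins"], plan["num_scans"]]

def run_query(plan, concurrency):
    # Stand-in for executing the pending query at the given concurrency
    # level and measuring wall-clock time; here the timing is synthesized.
    base = 0.001 * plan["est_rows"] + 0.5 * plan["num_joins"]
    return base * (1.0 + 0.3 * (concurrency - 1)) + random.uniform(0.0, 0.1)

plans = [{"est_rows": random.randint(1_000, 100_000),
          "num_joins": random.randint(0, 5),
          "num_scans": random.randint(1, 4)} for _ in range(50)]

X, y = [], []
for concurrency in (1, 2, 4, 8, 16):          # vary the concurrency level
    for plan in plans:
        X.append(extract_features(plan) + [concurrency])
        y.append(run_query(plan, concurrency))

model = LinearRegression().fit(X, y)

# Predicted time for a pending query at concurrency 8; a scheduler could
# compare this prediction against the query's service level agreement.
pending = {"est_rows": 20_000, "num_joins": 2, "num_scans": 1}
print(model.predict([extract_features(pending) + [8]]))
```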
Abstract:
Described herein are techniques for modifying an analytic flow. A flow may be associated with an execution engine. A flow graph representative of the flow may be obtained. The flow graph may be modified using a logical language. For example, a new flow graph expressed in the logical language may be generated. A program may be generated from the modified flow graph.
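The following sketch illustrates the idea of rewriting a flow graph and then generating a program from it, under the assumption of a simple dict-based graph, a single illustrative rewrite rule (merging adjacent filters), and a toy code generator; the data layout and rule are assumptions for illustration, not the described logical language.

```python
# Sketch: represent an analytic flow as a graph, modify it with a rewrite
# rule, and emit a program for the target execution engine.
# The graph shape, rule, and generator below are illustrative assumptions.

flow_graph = {
    "nodes": {
        "src":  {"op": "read",   "arg": "events"},
        "f1":   {"op": "filter", "arg": "country = 'US'"},
        "f2":   {"op": "filter", "arg": "amount > 100"},
        "sink": {"op": "write",  "arg": "report"},
    },
    "edges": [("src", "f1"), ("f1", "f2"), ("f2", "sink")],
}

def merge_adjacent_filters(graph):
    """Rewrite rule: filter(a) -> filter(b) becomes filter(a AND b)."""
    for u, v in list(graph["edges"]):
        nu, nv = graph["nodes"].get(u), graph["nodes"].get(v)
        if nu and nv and nu["op"] == "filter" and nv["op"] == "filter":
            nu["arg"] = f"({nu['arg']}) AND ({nv['arg']})"
            # Drop the merged node and re-point its outgoing edges.
            graph["edges"] = [(u if a == v else a, u if b == v else b)
                              for a, b in graph["edges"] if (a, b) != (u, v)]
            del graph["nodes"][v]
    return graph

def topo_order(graph):
    indeg = {n: 0 for n in graph["nodes"]}
    for _, b in graph["edges"]:
        indeg[b] += 1
    order, ready = [], [n for n, d in indeg.items() if d == 0]
    while ready:
        n = ready.pop()
        order.append(n)
        for a, b in graph["edges"]:
            if a == n:
                indeg[b] -= 1
                if indeg[b] == 0:
                    ready.append(b)
    return order

def generate_program(graph):
    """Emit a toy program from the (modified) flow graph."""
    return "\n".join(f"{graph['nodes'][n]['op'].upper()} {graph['nodes'][n]['arg']}"
                     for n in topo_order(graph))

print(generate_program(merge_adjacent_filters(flow_graph)))
```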
Abstract:
Described herein are techniques for identifying a path in a workload that may be associated with a deviation. A workload may be associated with multiple measurements of a plurality of metrics generated during execution of the workload. The multiple measurements may be aggregated at multiple levels of execution. One or more measurements may be compared to one or more other measurements or estimates to determine whether there is a deviation from an expected correlation. If it is determined that there is a deviation, a path can be identified in the workload that may be associated with the deviation.
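A minimal sketch of deviation-based path identification follows, assuming a tree-shaped workload where each node carries two metrics (rows processed and CPU time), an expected correlation of roughly one millisecond per thousand rows, and a fixed tolerance; the metrics, constants, and tree layout are assumptions made for illustration only.

```python
# Sketch: flag paths in a workload whose measurements deviate from an
# expected correlation (here, CPU time roughly proportional to rows).
# The tree layout, metrics, and threshold are illustrative assumptions.

workload = {
    "name": "query_7", "rows": 1_000_000, "cpu_ms": 5200,
    "children": [
        {"name": "join", "rows": 800_000, "cpu_ms": 4800, "children": [
            {"name": "scan_a", "rows": 500_000, "cpu_ms": 400,  "children": []},
            {"name": "scan_b", "rows": 300_000, "cpu_ms": 4100, "children": []},
        ]},
        {"name": "sort", "rows": 200_000, "cpu_ms": 300, "children": []},
    ],
}

EXPECTED_MS_PER_ROW = 0.001   # expected correlation: ~1 ms per 1000 rows
TOLERANCE = 3.0               # allow a 3x deviation before flagging

def deviating_paths(node, path=()):
    """Yield root-to-node paths whose measured CPU deviates from the estimate."""
    path = path + (node["name"],)
    expected = node["rows"] * EXPECTED_MS_PER_ROW
    if expected and node["cpu_ms"] / expected > TOLERANCE:
        yield path, node["cpu_ms"], expected
    for child in node["children"]:
        yield from deviating_paths(child, path)

for path, actual, expected in deviating_paths(workload):
    print(" -> ".join(path), f"cpu={actual}ms expected~{expected:.0f}ms")
```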
Abstract:
Methods, apparatus, systems and articles of manufacture are disclosed to provide candidate services for an application. An example method includes determining a plurality of candidate services for a cloud application; determining an indication that a first candidate service from the plurality of candidate services is more relevant to the cloud application than a second candidate service based on a first prediction score corresponding to the first candidate service and a second prediction score corresponding to the second candidate service; presenting the first candidate service and the second candidate service to a user based on the first prediction score and the second prediction score; and adjusting a first weight corresponding to the first candidate service and a second weight corresponding to the second candidate service based on whether the first candidate service or the second candidate service is selected for inclusion in the cloud application.
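The sketch below illustrates the score-then-adjust loop in the simplest possible form, assuming a toy prediction score (a learned per-service weight times a static relevance feature) and a fixed learning rate; the service names, scoring formula, and update rule are illustrative assumptions, not the claimed method.

```python
# Sketch: rank candidate services for a cloud application by a prediction
# score, present them, and nudge per-service weights toward the user's pick.
# The scoring formula and learning rate are illustrative assumptions.

candidates = {
    "managed-database": {"weight": 0.6, "relevance": 0.9},
    "message-queue":    {"weight": 0.5, "relevance": 0.7},
}

def prediction_score(service):
    # Toy score: learned weight times a static relevance feature.
    return service["weight"] * service["relevance"]

def present(candidates):
    ranked = sorted(candidates, key=lambda n: prediction_score(candidates[n]),
                    reverse=True)
    for name in ranked:
        print(f"{name}: score={prediction_score(candidates[name]):.2f}")
    return ranked

def adjust_weights(candidates, selected, lr=0.1):
    # Increase the weight of the selected service, decrease the others.
    for name, svc in candidates.items():
        svc["weight"] += lr if name == selected else -lr
        svc["weight"] = min(max(svc["weight"], 0.0), 1.0)

present(candidates)
adjust_weights(candidates, selected="message-queue")   # user picked the runner-up
present(candidates)
```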