WORKLOAD-AWARE DATA PLACEMENT ADVISOR FOR OLAP DATABASE SYSTEMS

    Publication Number: US20230297573A1

    Publication Date: 2023-09-21

    Application Number: US17699607

    Application Date: 2022-03-21

    Abstract: Embodiments implement a prediction-driven, rather than a trial-driven, approach to automatic data placement recommendations for partitioning data across multiple nodes in a database system. The system is configured to extract workload-specific features of a database workload running at a database system and dataset-specific features of a database running on the database system. The workload-specific features characterize utilization of the database workload. The dataset-specific features characterize how data is organized within the database. The system identifies a plurality of candidate keys for determining how to partition data stored in the database across nodes. Based at least in part on the workload-specific features, the dataset-specific features, and the plurality of candidate keys, a set of candidate key combinations for partitioning data is generated. Using a machine learning model, the system determines a particular candidate key combination that optimizes query execution performance benefit based on the workload-specific features and the dataset-specific features, and generates data placement commands to allocate the database tables across the nodes.
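
    Below is a minimal, hypothetical Python sketch of the prediction-driven selection step described above: every candidate key combination is scored with a trained model rather than trial-run. The feature encoding, parameter names, and toy scoring model are illustrative assumptions, not the patented implementation.

    from itertools import combinations
    from typing import Callable, Sequence

    def recommend_key_combination(
        workload_features: Sequence[float],        # e.g., scan/join frequencies per column
        dataset_features: Sequence[float],         # e.g., table sizes, column cardinalities
        candidate_keys: Sequence[str],             # columns eligible as partitioning keys
        predict_benefit: Callable[[list], float],  # trained ML model: features -> predicted benefit
        max_keys: int = 2,
    ) -> tuple:
        """Return the key combination with the highest predicted execution benefit."""
        best_combo, best_score = (), float("-inf")
        for r in range(1, max_keys + 1):
            for combo in combinations(candidate_keys, r):
                # One-hot encode which candidate keys are used in this combination.
                key_vec = [1.0 if k in combo else 0.0 for k in candidate_keys]
                score = predict_benefit(list(workload_features) + list(dataset_features) + key_vec)
                if score > best_score:
                    best_combo, best_score = combo, score
        return best_combo, best_score

    # Toy usage: a linear "model" that favors the frequently joined column.
    combo, score = recommend_key_combination(
        workload_features=[0.9, 0.1], dataset_features=[1e6, 5e4],
        candidate_keys=["order_id", "region"],
        predict_benefit=lambda f: f[0] * f[-2] + f[1] * f[-1],
    )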

    Datacenter level utilization prediction without operating system involvement

    Publication Number: US11657256B2

    Publication Date: 2023-05-23

    Application Number: US17867552

    Application Date: 2022-07-18

    Abstract: Embodiments use a hierarchy of machine learning models to predict datacenter behavior at multiple hardware levels of a datacenter without accessing operating system generated hardware utilization information. The accuracy of higher-level models in the hierarchy of models is increased by including, as input to the higher-level models, hardware utilization predictions from lower-level models. The hierarchy of models includes: server utilization models and workload/OS prediction models that produce predictions at a server device-level of a datacenter; and also top-of-rack switch models and backbone switch models that produce predictions at higher levels of the datacenter. These models receive, as input, hardware utilization information from non-OS sources. Based on datacenter-level network utilization predictions from the hierarchy of models, the datacenter automatically configures its hardware to avoid any predicted over-utilization of hardware in the datacenter. Also, the predictions from the hierarchy of models can be used to detect anomalies of datacenter hardware behavior.
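
    The following hypothetical Python sketch illustrates the hierarchical idea: server-level predictions computed from non-OS telemetry feed rack-level (top-of-rack switch) models, whose outputs in turn feed a backbone-level model. The callables stand in for the trained models and are not the patent's actual interfaces.

    from typing import Callable, Dict, List, Sequence

    def predict_datacenter_utilization(
        server_telemetry: Dict[str, Sequence[float]],          # non-OS hardware counters per server
        server_model: Callable[[Sequence[float]], float],      # server-level utilization model
        rack_membership: Dict[str, List[str]],                 # rack -> servers it hosts
        rack_model: Callable[[List[float]], float],            # top-of-rack switch model
        backbone_model: Callable[[List[float]], float],        # backbone switch model
    ) -> float:
        # Level 1: per-server predictions from out-of-band telemetry only.
        server_pred = {s: server_model(t) for s, t in server_telemetry.items()}
        # Level 2: each rack model consumes the predictions of its member servers.
        rack_pred = [rack_model([server_pred[s] for s in members])
                     for members in rack_membership.values()]
        # Level 3: the backbone model consumes the rack-level predictions.
        return backbone_model(rack_pred)

    # Toy usage with simple averaging "models".
    util = predict_datacenter_utilization(
        server_telemetry={"s1": [0.4, 0.6], "s2": [0.8, 0.7]},
        server_model=lambda t: sum(t) / len(t),
        rack_membership={"rack1": ["s1", "s2"]},
        rack_model=lambda preds: sum(preds) / len(preds),
        backbone_model=lambda preds: max(preds),
    )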

    Disk drive failure prediction with neural networks

    Publication Number: US11579951B2

    Publication Date: 2023-02-14

    Application Number: US16144912

    Application Date: 2018-09-27

    Abstract: Techniques are described herein for predicting disk drive failure using a machine learning model. The framework involves receiving disk drive sensor attributes as training data, preprocessing the training data to select a set of enhanced feature sequences, and using the enhanced feature sequences to train a machine learning model to predict disk drive failures from disk drive sensor monitoring data. Prior to the training phase, the recurrent neural network (RNN) long short-term memory (LSTM) model is tuned using a set of predefined hyper-parameters. The preprocessing, which is performed during the training and evaluation phase as well as later during the prediction phase, involves using predefined values for a set of parameters to generate the set of enhanced sequences from raw sensor readings. The enhanced feature sequences are generated to maintain a desired healthy/failed disk ratio, and only use samples leading up to a last-valid-time sample in order to honor a pre-specified heads-up-period alert requirement.
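
    Below is an illustrative Python sketch of the preprocessing step, under assumed parameter names: sequences for failed drives are truncated so they end a heads-up period before the last valid sample, and healthy drives are downsampled to maintain a target healthy/failed ratio. The windowing scheme is a simplification, not the patented procedure.

    import random

    def build_sequences(drive_samples, labels, seq_len=14, heads_up=7,
                        healthy_per_failed=3, seed=0):
        """drive_samples: {drive_id: [sensor_vector, ...] in time order}; labels: {drive_id: 0|1}."""
        failed, healthy = [], []
        for drive_id, samples in drive_samples.items():
            if labels[drive_id] == 1 and heads_up:
                usable = samples[:-heads_up]     # stop the sequence a heads-up period early
            else:
                usable = samples
            if len(usable) >= seq_len:
                target = failed if labels[drive_id] == 1 else healthy
                target.append((usable[-seq_len:], labels[drive_id]))   # last seq_len samples
        # Downsample healthy drives to keep the desired healthy/failed ratio.
        random.seed(seed)
        random.shuffle(healthy)
        healthy = healthy[: healthy_per_failed * max(len(failed), 1)]
        return failed + healthy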

    Chaining bloom filters to estimate the number of keys with low frequencies in a dataset

    Publication Number: US11520834B1

    Publication Date: 2022-12-06

    Application Number: US17387841

    Application Date: 2021-07-28

    Abstract: Techniques are described for generating an approximate frequency histogram using a series of Bloom filters (BF). For example, to estimate the f1 and f2 cardinalities in a dataset, an ordered chain of three BFs is established (“BF1”, “BF2”, and “BF3”). An insertion operation is performed for each datum in the dataset, whereby the BFs are tested in order (starting at BF1) for the datum. If the datum is represented in a currently-tested BF, the subsequent BF in the chain is tested for the datum. If the datum is not represented in the currently-tested BF, the datum is added to the BF, a counter for the BF is incremented, and the insertion operation for the current datum ends. To estimate the cardinality of f1-values in the dataset, the BF2-counter is subtracted from the BF1-counter. Similarly, to estimate the cardinality of f2-values in the dataset, the BF3-counter is subtracted from the BF2-counter.
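
    The insertion operation described above translates almost directly into code. Below is a minimal Python sketch using a simple hash-based Bloom filter; the filter sizing and hash construction are illustrative choices, not the patent's.

    import hashlib

    class BloomFilter:
        def __init__(self, num_bits=1 << 16, num_hashes=4):
            self.bits = bytearray(num_bits // 8)
            self.num_bits, self.num_hashes = num_bits, num_hashes
            self.counter = 0  # number of items inserted into this filter

        def _positions(self, item):
            for i in range(self.num_hashes):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.num_bits

        def contains(self, item):
            return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

        def add(self, item):
            for p in self._positions(item):
                self.bits[p // 8] |= 1 << (p % 8)
            self.counter += 1

    def estimate_f1_f2(dataset):
        bf1, bf2, bf3 = BloomFilter(), BloomFilter(), BloomFilter()
        for datum in dataset:
            for bf in (bf1, bf2, bf3):
                if not bf.contains(datum):   # first filter in the chain not yet holding the datum
                    bf.add(datum)
                    break                    # insertion for this datum ends here
        f1 = bf1.counter - bf2.counter       # seen at least once minus seen at least twice
        f2 = bf2.counter - bf3.counter       # seen at least twice minus seen at least three times
        return f1, f2

    # Example: "a" occurs once, "b" twice, "c" three times -> estimate (1, 1).
    print(estimate_f1_f2(["a", "b", "b", "c", "c", "c"]))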

    Efficient adjustment of spin-locking parameter values

    Publication Number: US11379456B2

    Publication Date: 2022-07-05

    Application Number: US17060999

    Application Date: 2020-10-01

    Abstract: Systems and methods for adjusting parameters for a spin-lock implementation of concurrency control are described herein. In an embodiment, a system continuously retrieves, from a resource management system, one or more state values defining a state of the resource management system. Based on the one or more state values, the system determines that the resource management system has reached a steady state and, in response, adjusts a plurality of parameters for spin-locking performed by said resource management system to identify optimal values for the plurality of parameters. After adjusting the plurality of parameters, the system detects, based on one or more current state values, a workload change in the resource management system and, in response, readjusts the plurality of parameters for spin-locking performed by said resource management system to identify new optimal values for the parameters.
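
    Below is a hypothetical Python control-loop sketch of the described flow: wait for a steady state, tune the spin-lock parameters, then re-tune when a workload change is detected. The steady-state and change tests (coefficient-of-variation thresholds) and the state-value names are illustrative placeholders.

    import statistics
    import time

    def tuning_loop(get_state_values, apply_params, search_optimal_params,
                    window=30, steady_cv=0.05, change_cv=0.20, poll_interval=1.0):
        """get_state_values() is assumed to return a dict containing 'lock_wait_us'."""
        history, tuned = [], False
        while True:
            history.append(get_state_values())           # e.g., lock wait times, throughput
            history = history[-window:]
            if len(history) < window:
                time.sleep(poll_interval)
                continue
            waits = [s["lock_wait_us"] for s in history]
            cv = statistics.pstdev(waits) / (statistics.mean(waits) or 1.0)
            if not tuned and cv < steady_cv:              # steady state reached: tune
                apply_params(search_optimal_params())     # e.g., spin count, back-off limits
                tuned = True
            elif tuned and cv > change_cv:                # workload change detected: re-tune
                apply_params(search_optimal_params())
                history.clear()
            time.sleep(poll_interval)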

    APPLICATION- AND INFRASTRUCTURE-AWARE ORCHESTRATION FOR CLOUD MONITORING APPLICATIONS

    Publication Number: US20200259722A1

    Publication Date: 2020-08-13

    Application Number: US16271535

    Application Date: 2019-02-08

    Abstract: Herein are computerized techniques for autonomous and artificially intelligent administration of a computer cloud health monitoring system. In an embodiment, an orchestration computer automatically detects a current state of network elements of a computer network by processing: a) a network plan that defines a topology of the computer network, and b) performance statistics of the network elements. The network elements include computers that each hosts virtual execution environment(s). Each virtual execution environment hosts analysis logic that transforms raw performance data of a network element into a portion of the performance statistics. For each computer, a configuration specification for each virtual execution environment of the computer is automatically generated based on the network plan and the current state of the computer network. At least one virtual execution environment is automatically tuned and/or re-provisioned based on a generated configuration specification.
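
    The following Python sketch illustrates, under assumed field names and thresholds, how a configuration specification might be generated per virtual execution environment from a network plan and current per-host statistics; it is not the patented orchestration logic.

    from dataclasses import dataclass

    @dataclass
    class ConfigSpec:
        host: str
        environment: str
        cpu_limit: float      # cores allotted to the hosted analysis logic
        sample_period_s: int  # how often raw performance data is collected

    def generate_config_specs(network_plan, current_stats):
        """network_plan: {host: [environment names]}; current_stats: {host: {"cpu_util": float}}."""
        specs = []
        for host, environments in network_plan.items():
            busy = current_stats.get(host, {}).get("cpu_util", 0.0) > 0.8
            for env in environments:
                specs.append(ConfigSpec(
                    host=host,
                    environment=env,
                    cpu_limit=0.25 if busy else 0.5,      # shrink the monitoring footprint on busy hosts
                    sample_period_s=60 if busy else 15,   # sample less often when the host is loaded
                ))
        return specs

    specs = generate_config_specs(
        network_plan={"host-a": ["net-probe", "disk-probe"]},
        current_stats={"host-a": {"cpu_util": 0.9}},
    )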

    Prediction of buffer pool size for transaction processing workloads

    Publication Number: US11868261B2

    Publication Date: 2024-01-09

    Application Number: US17381072

    Application Date: 2021-07-20

    CPC classification number: G06F12/0842 G06F16/24552 G06F2212/6022

    Abstract: Techniques are described herein for prediction of a buffer pool size (BPS). Before performing BPS prediction, gathered data are used to determine whether a target workload is in a steady state. Historical utilization data gathered while the workload is in a steady state are used to predict object-specific BPS components for database objects, accessed by the target workload, that are identified for BPS analysis based on shares of the total disk I/O requests, for the workload, that are attributed to the respective objects. Preference in analysis is given to objects that are associated with larger shares of disk I/O activity. An object-specific BPS component is determined based on a coverage function that returns a percentage of the database object size (on disk) that should be available in the buffer pool for that database object. The percentage is determined using either a heuristic-based or a machine learning-based approach.
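
    Below is an illustrative Python sketch of the aggregation described above: objects are considered in order of their disk I/O share, a coverage function (a simple heuristic here; the abstract also allows a machine learning-based approach) maps each object's on-disk size to an in-memory fraction, and the per-object components are summed into a BPS estimate. The cutoff and the heuristic are assumptions for illustration.

    def predict_buffer_pool_size(objects, io_share_cutoff=0.01):
        """objects: list of dicts with keys 'name', 'size_bytes', 'io_share' (fraction of total I/O)."""
        def coverage(obj):
            # Illustrative heuristic: objects with more I/O should be cached more completely.
            return min(1.0, 0.2 + obj["io_share"])   # fraction of the object to keep in memory
        total = 0
        for obj in sorted(objects, key=lambda o: o["io_share"], reverse=True):
            if obj["io_share"] < io_share_cutoff:     # skip objects with negligible I/O share
                continue
            total += int(coverage(obj) * obj["size_bytes"])
        return total

    bps = predict_buffer_pool_size([
        {"name": "orders", "size_bytes": 8 << 30, "io_share": 0.6},
        {"name": "audit_log", "size_bytes": 2 << 30, "io_share": 0.005},
    ])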
