Abstract:
Techniques are described for automatically detecting and accommodating state changes in a computer-generated forecast. In one or more embodiments, a representation of a time-series signal is generated within volatile and/or non-volatile storage of a computing device. The representation may be generated in such a way as to approximate the behavior of the time-series signal across one or more seasonal periods. Once generated, a set of one or more state changes within the representation of the time-series signal is identified. Based at least in part on at least one state change in the set of one or more state changes, a subset of values from the time-series signal is selected to train a model. An analytical output is then generated, within volatile and/or non-volatile storage of the computing device, using the trained model.
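A minimal sketch of this idea in Python, assuming evenly spaced observations and a known seasonal period; the shift-detection rule, the threshold, and the function names (detect_state_changes, train_after_last_change) are illustrative assumptions rather than the disclosed method:

    import numpy as np

    def detect_state_changes(values, period, shift_threshold=2.0):
        # Approximate the signal by one mean value per seasonal period.
        n = len(values) // period
        if n < 2:
            return []
        period_means = np.asarray(values[:n * period], float).reshape(n, period).mean(axis=1)
        # Flag a state change where consecutive period means jump by more than
        # shift_threshold standard deviations of the period-to-period differences.
        diffs = np.diff(period_means)
        scale = diffs.std() or 1.0
        return [i + 1 for i, d in enumerate(diffs) if abs(d) > shift_threshold * scale]

    def train_after_last_change(values, period):
        changes = detect_state_changes(values, period)
        start = (changes[-1] * period) if changes else 0
        subset = np.asarray(values[start:], float)   # values observed since the last state change
        n = len(subset) // period
        # Toy "model": a seasonal profile learned only from the post-change subset.
        return subset[:n * period].reshape(n, period).mean(axis=0)

Training only on the subset observed since the most recent detected shift keeps stale pre-change behavior out of the forecast model.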
Abstract:
Techniques are described for generating period profiles. According to an embodiment, a set of time-series data is received, where the set of time-series data includes data spanning a plurality of time windows having a seasonal period. Based at least in part on the set of time-series data, a first set of sub-periods of the seasonal period is associated with a particular class of seasonal pattern. A profile for a seasonal period that identifies which sub-periods of the seasonal period are associated with the particular class of seasonal pattern is generated and stored in volatile or non-volatile storage. Based on the profile, a visualization is generated for at least one sub-period of the first set of sub-periods of the seasonal period that indicates that the at least one sub-period is part of the particular class of seasonal pattern.
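As a rough illustration, and assuming hourly data with a weekly seasonal period, a profile of this kind might be built as below; the "recurrent high" class, the quantile threshold, and the text rendering are assumptions made for the example, not the disclosed technique:

    import numpy as np

    def build_period_profile(values, period, high_quantile=0.75):
        # Rows are seasonal periods (e.g. weeks); columns are sub-periods (e.g. hour-of-week).
        n = len(values) // period
        windows = np.asarray(values[:n * period], float).reshape(n, period)
        threshold = np.quantile(windows, high_quantile)
        # A sub-period joins the "recurrent high" class when it exceeds the
        # threshold in a majority of the observed seasonal periods.
        recurrent = (windows > threshold).mean(axis=0) > 0.5
        return {"class": "recurrent high",
                "sub_periods": np.flatnonzero(recurrent).tolist()}

    def render_profile(profile, period):
        # Text visualization: '#' marks sub-periods belonging to the classified pattern.
        flagged = set(profile["sub_periods"])
        return "".join("#" if i in flagged else "." for i in range(period))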
Abstract:
Techniques are described for orchestrating execution of multi-step recipes. In an embodiment, a method comprises receiving a request to execute a recipe specification that defines a sequence of steps to execute for a particular recipe; responsive to receiving the request to execute the recipe specification, instantiating a set of one or more recipe-level processes; wherein each recipe-level process in the set of one or more recipe-level processes manages execution of a respective instance of the particular recipe; triggering, by each recipe-level process for the respective instance of the particular recipe managed by the recipe-level process, execution of the sequence of steps; wherein triggering execution of at least one step in the sequence of steps by a recipe-level process comprises instantiating, by the recipe-level process, a plurality of step-level processes to execute the step on a plurality of target resources in parallel.
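A compact sketch of the orchestration shape in Python, using threads to stand in for recipe-level and step-level processes; the recipe specification layout and the run_step helper are assumptions for illustration only:

    from concurrent.futures import ThreadPoolExecutor

    def run_step(step, target):
        # Step-level worker: applies one step of the recipe to a single target resource.
        return f"{step!r} completed on {target}"

    def run_recipe_instance(recipe_spec, targets):
        # Recipe-level worker: walks the ordered steps of one recipe instance and
        # fans each step out across all target resources in parallel.
        results = []
        for step in recipe_spec["steps"]:
            with ThreadPoolExecutor(max_workers=len(targets)) as pool:
                results.extend(pool.map(lambda t: run_step(step, t), targets))
        return results

    def execute_recipe(recipe_spec, instances):
        # One recipe-level worker per requested instance of the recipe.
        with ThreadPoolExecutor(max_workers=len(instances)) as pool:
            return list(pool.map(lambda targets: run_recipe_instance(recipe_spec, targets),
                                 instances))

    # Example: two instances of a three-step recipe over different target sets.
    spec = {"steps": ["stop service", "apply patch", "start service"]}
    print(execute_recipe(spec, instances=[["host1", "host2"], ["host3"]]))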
Abstract:
Techniques are described for modeling variations in correlation to facilitate analytic operations. In one or more embodiments, at least one computing device receives first metric data that tracks a first metric for a first target resource and second metric data that tracks a second metric for a second target resource. In response to receiving the first metric data and the second metric data, the at least one computing device generates a time-series of correlation values that tracks correlation between the first metric and the second metric over time. Based at least in part on the time-series of correlation values, an expected correlation is determined and compared to an observed correlation. If the observed correlation falls outside of a threshold range or otherwise does not satisfy the expected correlation, then an alert and/or other output may be generated.
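A minimal sketch, assuming two aligned, evenly sampled metric series; the window size, the tolerance of three standard deviations, and the function names are illustrative assumptions rather than the disclosed method:

    import numpy as np

    def rolling_correlation(x, y, window):
        # Time series of correlation values: Pearson correlation over a sliding window.
        x, y = np.asarray(x, float), np.asarray(y, float)
        return np.array([np.corrcoef(x[i - window:i], y[i - window:i])[0, 1]
                         for i in range(window, len(x) + 1)])

    def correlation_alert(x, y, window=24, tolerance=3.0):
        corr = rolling_correlation(x, y, window)
        history, observed = corr[:-1], corr[-1]
        expected = history.mean()
        spread = history.std() or 1.0
        # Alert when the observed correlation falls outside the expected range.
        alert = abs(observed - expected) > tolerance * spread
        return alert, expected, observed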
Abstract:
The disclosed embodiments relate to a system that gathers telemetry data while testing a computer system. During operation, the system obtains a test script that generates a load profile to exercise the computer system, wherein a running time of the test script is designed to be relatively prime with respect to a sampling interval for telemetry data in the computer system. Next, the system gathers telemetry data during multiple successive executions of the test script on the computer system. The system then merges the telemetry data gathered during the multiple successive executions of the test script; because the running time of the test script and the sampling interval for the telemetry data are relatively prime, the sampling point for the telemetry data precesses through different points in the test script during the multiple successive executions, thereby densifying the sampled telemetry data points gathered for the test script. Finally, the system outputs the densified telemetry data.
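The arithmetic behind the precession can be illustrated with a short sketch; the 7-minute script and 3-minute sampling interval below are made-up values chosen only because they are relatively prime:

    from math import gcd

    def merged_sample_offsets(script_runtime, sampling_interval, n_runs):
        # With a coprime runtime and interval, the sampling point lands at a
        # different offset inside the script on each run, densifying coverage.
        assert gcd(script_runtime, sampling_interval) == 1
        offsets = set()
        t = 0
        while t < script_runtime * n_runs:
            offsets.add(t % script_runtime)    # phase of this sample within the script
            t += sampling_interval
        return sorted(offsets)

    # A 7-minute script sampled every 3 minutes covers all seven offsets after a few runs.
    print(merged_sample_offsets(script_runtime=7, sampling_interval=3, n_runs=4))
    # -> [0, 1, 2, 3, 4, 5, 6]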
Abstract:
Techniques are described for classifying seasonal patterns in a time series. In an embodiment, a set of time series data is decomposed to generate a noise signal and a dense signal, where the noise signal includes a plurality of sparse features from the set of time series data and the dense signal includes a plurality of dense features from the set of time series data. A set of one or more sparse features from the noise signal is selected for retention. After selecting the sparse features, a modified set of time series data is generated by combining the set of one or more sparse features with a set of one or more dense features from the plurality of dense features. At least one seasonal pattern is identified from the modified set of time series data. A summary for the seasonal pattern may then be generated and stored.
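A rough sketch of the decompose-retain-recombine flow, assuming a moving average as the dense signal and a simple spike rule for deciding which sparse features to retain; those specific choices are assumptions, not the disclosed technique:

    import numpy as np

    def decompose(values, smooth_window=5):
        # Dense signal: a moving average capturing the bulk behavior;
        # the residual carries the sparse features (isolated spikes).
        values = np.asarray(values, dtype=float)
        kernel = np.ones(smooth_window) / smooth_window
        dense = np.convolve(values, kernel, mode="same")
        return dense, values - dense

    def classify_seasonal_pattern(values, period, spike_threshold=3.0):
        dense, noise = decompose(values)
        # Retain only sparse features whose magnitude stands well above the noise scale.
        scale = noise.std() or 1.0
        retained = np.where(np.abs(noise) > spike_threshold * scale, noise, 0.0)
        modified = dense + retained            # recombined series used for classification
        # Identify sub-periods that are recurrently high across seasonal periods.
        n = len(modified) // period
        windows = modified[:n * period].reshape(n, period)
        high = (windows > np.median(modified)).mean(axis=0) > 0.5
        return {"recurrent_high_subperiods": np.flatnonzero(high).tolist()}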
Abstract:
The disclosed embodiments provide a system that detects anomalous events. During operation, the system obtains machine-generated time-series performance data collected during execution of a software program in a computer system. Next, the system removes a subset of the machine-generated time-series performance data within an interval around one or more known anomalous events of the software program to generate filtered time-series performance data. The system uses the filtered time-series performance data to build a statistical model of normal behavior in the software program and obtains a number of unique patterns learned by the statistical model. When the number of unique patterns satisfies a complexity threshold, the system applies the statistical model to subsequent machine-generated time-series performance data from the software program to identify an anomaly in an activity of the software program and stores an indication of the anomaly for the software program upon identifying the anomaly.
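A toy sketch of the pipeline, standing in a histogram of occupied value bins for the statistical model; the bin count, the event margin, and the complexity threshold are assumptions made only to keep the example short:

    import numpy as np

    def filter_known_events(timestamps, values, known_events, margin):
        # Drop samples that fall within `margin` of any known anomalous event.
        keep = [all(abs(t - e) > margin for e in known_events) for t in timestamps]
        return np.asarray(values, float)[np.asarray(keep)]

    def learn_normal_patterns(values, n_bins=10):
        # Toy model of normal behavior: the set of value bins seen during training.
        edges = np.linspace(min(values), max(values), n_bins + 1)
        patterns = {int(b) for b in np.digitize(values, edges)}
        return patterns, edges

    def detect_anomalies(train_values, new_values, complexity_threshold=3):
        patterns, edges = learn_normal_patterns(train_values)
        if len(patterns) < complexity_threshold:
            return None                        # too few unique patterns; do not apply the model
        return [v for v in new_values
                if int(np.digitize([v], edges)[0]) not in patterns]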
Abstract:
The disclosed embodiments provide a system that detects anomalous events in a virtual machine. During operation, the system obtains time-series virtual machine (VM) data including garbage-collection (GC) data collected during execution of a virtual machine in a computer system. Next, the system computes, by a service processor, a time window for analyzing the time-series VM data based at least in part on a working time scale of high-activity patterns in the time-series GC data. The system then uses a trend-estimation technique to analyze the time-series VM data within the time window to determine an out-of-memory (OOM) risk in the virtual machine. Finally, the system stores an indication of the OOM risk for the virtual machine based at least in part on determining the OOM risk in the virtual machine.
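A simplified sketch of the window-then-trend idea, assuming heap-usage samples with timestamps in seconds; taking the median spacing of high-activity GC events as the window and a least-squares slope as the trend estimator are assumptions chosen for brevity:

    import numpy as np

    def working_time_scale(gc_times, gc_activity, high_quantile=0.9):
        # Time window: median spacing between high-activity garbage-collection events.
        gc_times, gc_activity = np.asarray(gc_times, float), np.asarray(gc_activity, float)
        high = gc_times[gc_activity >= np.quantile(gc_activity, high_quantile)]
        return float(np.median(np.diff(high))) if len(high) > 1 else float(gc_times[-1] - gc_times[0])

    def oom_risk(times, heap_used, heap_max, gc_times, gc_activity):
        window = working_time_scale(gc_times, gc_activity)
        times, heap_used = np.asarray(times, float), np.asarray(heap_used, float)
        recent = times >= times[-1] - window
        # Trend estimation: least-squares slope of heap usage inside the window.
        slope, _ = np.polyfit(times[recent], heap_used[recent], 1)
        if slope <= 0:
            return {"at_risk": False}
        seconds_to_oom = (heap_max - heap_used[-1]) / slope
        return {"at_risk": seconds_to_oom < window, "seconds_to_oom": seconds_to_oom}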