Abstract:
Methods and systems for performance testing in a software deployment pipeline are disclosed. One or more performance tests are automatically performed on a build of a software product in a test environment in response to deploying the build to the test environment. One or more performance metrics are collected based on the performance tests. Based on the performance metrics, the build of the software product is accepted or rejected.
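By way of illustration, the following is a minimal Python sketch of the accept/reject gate described above; the test runner, metric names, and thresholds are hypothetical stand-ins, not part of the disclosure.

    from dataclasses import dataclass

    @dataclass
    class PerfResult:
        p99_latency_ms: float
        error_rate: float

    def run_performance_tests(build_id: str) -> PerfResult:
        """Stand-in for running the suite against the test environment."""
        return PerfResult(p99_latency_ms=180.0, error_rate=0.002)

    # Hypothetical acceptance thresholds for the collected metrics.
    THRESHOLDS = {"p99_latency_ms": 250.0, "error_rate": 0.01}

    def gate_build(build_id: str) -> bool:
        """Accept the build only if every metric is within its threshold."""
        result = run_performance_tests(build_id)
        return (result.p99_latency_ms <= THRESHOLDS["p99_latency_ms"]
                and result.error_rate <= THRESHOLDS["error_rate"])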
Abstract:
Systems and methods for validation of log formats are described herein. Log data is stored via a logging service in a data store or other storage system. An example log or proposed log format is received by the logging service. The proposed log format is validated against validation rules provided by log consumers.
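A minimal sketch of such validation, assuming JSON-formatted logs and consumer rules expressed as required fields and value patterns; the field names and rules are illustrative only.

    import json
    import re

    SAMPLE_LOG = '{"ts": "2024-01-01T00:00:00Z", "level": "INFO", "msg": "started"}'

    # Hypothetical validation rules provided by two log consumers.
    CONSUMER_RULES = [
        {"required": ["ts", "level", "msg"]},
        {"patterns": {"level": r"^(DEBUG|INFO|WARN|ERROR)$"}},
    ]

    def validate(log_line: str) -> list:
        """Return a list of violations; an empty list means the format is valid."""
        record = json.loads(log_line)
        errors = []
        for rule in CONSUMER_RULES:
            for field in rule.get("required", []):
                if field not in record:
                    errors.append(f"missing field: {field}")
            for field, pattern in rule.get("patterns", {}).items():
                if field in record and not re.fullmatch(pattern, str(record[field])):
                    errors.append(f"bad value for {field}: {record[field]}")
        return errors

    assert validate(SAMPLE_LOG) == []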
Abstract:
Optimization preferences are defined for optimizing execution of a distributed application. Candidate sets of application parameter values may be tested in test execution environments. Measures of performance for metrics of interest are determined based upon the execution of the distributed application using the candidate sets of application parameter values. Utility curves may be utilized to compute measures of effectiveness for metrics of interest. A multi-attribute rollup operation may utilize the computed measures of effectiveness and weights to compute a grand measure of merit (MOM) for the candidate sets of application parameter values. An optimized set of application parameter values may then be selected based upon the computed grand MOMs. The optimized set of application parameter values may be deployed to a production execution environment executing the distributed application. Production-safe application parameters may also be identified and utilized to optimize execution of the distributed application in a production execution environment.
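As an illustration of the rollup, the sketch below maps raw metric values through invented utility curves to measures of effectiveness (MOEs) and combines them with weights into a grand MOM; the curves, weights, and candidate values are all hypothetical.

    def latency_utility(ms: float) -> float:
        """Utility curve: 0 ms maps to 1.0, 500 ms or worse maps to 0.0."""
        return max(0.0, 1.0 - ms / 500.0)

    def throughput_utility(tps: float) -> float:
        """Utility curve: higher is better, saturating at 1000 tps."""
        return min(1.0, tps / 1000.0)

    WEIGHTS = {"latency": 0.6, "throughput": 0.4}  # relative importance

    def grand_mom(measured: dict) -> float:
        """Weighted rollup of the measures of effectiveness into a grand MOM."""
        moes = {
            "latency": latency_utility(measured["latency_ms"]),
            "throughput": throughput_utility(measured["tps"]),
        }
        return sum(WEIGHTS[k] * moes[k] for k in WEIGHTS)

    # Two candidate sets of application parameter values (results only).
    candidates = {
        "candidate_a": {"latency_ms": 120.0, "tps": 800.0},  # grand MOM 0.776
        "candidate_b": {"latency_ms": 200.0, "tps": 950.0},  # grand MOM 0.740
    }
    best = max(candidates, key=lambda name: grand_mom(candidates[name]))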
Abstract:
Methods, systems, and computer-readable media for load testing with automated service dependency discovery are disclosed. A request is received to approve load testing for a service. One or more downstream services are identified for the service, based at least in part on automated discovery. The availability of the one or more downstream services for load testing is determined. The request is approved or denied based at least in part on the availability of the one or more downstream services for load testing.
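A minimal sketch of the approval decision, assuming a hypothetical discovery source and availability table:

    def discover_downstream(service: str) -> list:
        """Stand-in for automated discovery, e.g. from call-graph traces."""
        return {"checkout": ["payments", "inventory"]}.get(service, [])

    # Hypothetical per-service availability for load testing.
    AVAILABLE_FOR_LOAD_TEST = {"payments": True, "inventory": False}

    def approve_load_test(service: str) -> bool:
        """Approve only if every downstream service can absorb the load."""
        deps = discover_downstream(service)
        return all(AVAILABLE_FOR_LOAD_TEST.get(dep, False) for dep in deps)

    # approve_load_test("checkout") is False: inventory is unavailable.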
Abstract:
A generic transaction generator framework for testing a network-based production service may work in conjunction with a product-specific transaction creator module that executes transactions on the service. The transaction creator module may include runtime-discoverable information, such as source code annotations, to communicate product-specific details to the framework. Runtime-discoverable information may identify transaction types, transaction methods, data provider methods and data sources. The framework may generate and execute various test transactions and may call a data provider method to prepare data for the transaction and pass the prepared data to a transaction method. The framework may also load and parse test data from a data source and provide the test data to the data provider method for use when preparing data for the transaction.
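Since Python lacks source code annotations in the Java sense, the sketch below uses decorators as a rough analog for the runtime-discoverable information; the registry, class, and transaction type names are illustrative, not from the disclosure.

    # Framework-side registries populated at import time by the decorators.
    REGISTRY = {"transactions": {}, "providers": {}}

    def transaction(tx_type: str):
        """Marks a method as the transaction method for a transaction type."""
        def wrap(fn):
            REGISTRY["transactions"][tx_type] = fn
            return fn
        return wrap

    def data_provider(tx_type: str):
        """Marks a method as the data provider for a transaction type."""
        def wrap(fn):
            REGISTRY["providers"][tx_type] = fn
            return fn
        return wrap

    # Product-specific transaction creator module.
    class OrderCreator:
        @data_provider("place_order")
        def order_data(self, row):
            """Prepare transaction data from a row of parsed test data."""
            return {"sku": row["sku"], "qty": int(row["qty"])}

        @transaction("place_order")
        def place_order(self, data):
            """Stand-in for executing the transaction on the service."""

    def run(tx_type: str, creator, raw_row) -> None:
        """Framework side: prepare data via the provider, then execute."""
        data = REGISTRY["providers"][tx_type](creator, raw_row)
        REGISTRY["transactions"][tx_type](creator, data)

Here run() stands in for the framework loop that loads and parses rows from a data source before handing each row to the discovered data provider method.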
Abstract:
A generic transaction generator framework for testing a network-based production service may work in conjunction with a product-specific transaction creator module that executes transactions on the service. The transaction creator module may include runtime-discoverable information to communicate product-specific details to the framework. Runtime-discoverable information may identify initialization methods, terminate methods, transaction types, transaction methods, and transaction dependencies, as well as testing parameters such as transaction rate, testing period, and a desired distribution of transaction types. The framework may generate and execute various test transactions and collect performance metrics regarding how well the service performed the test transactions.
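A sketch of the test driver loop this implies, selecting transaction types according to a desired distribution and pacing them at a target rate; all parameter values and the pacing scheme are illustrative assumptions.

    import random
    import time

    # Hypothetical runtime-discovered testing parameters.
    DISTRIBUTION = {"read": 0.7, "write": 0.3}  # desired mix of transaction types
    RATE_TPS = 50                               # target transaction rate
    PERIOD_S = 10                               # testing period in seconds

    def execute(tx_type: str) -> float:
        """Stand-in for one test transaction; returns its latency in seconds."""
        return 0.01

    def drive() -> dict:
        """Initialization and terminate methods would bracket this loop."""
        latencies = {t: [] for t in DISTRIBUTION}  # collected performance metrics
        deadline = time.time() + PERIOD_S
        while time.time() < deadline:
            tx = random.choices(list(DISTRIBUTION),
                                weights=list(DISTRIBUTION.values()))[0]
            latencies[tx].append(execute(tx))
            time.sleep(1.0 / RATE_TPS)  # crude pacing; ignores execution time
        return latencies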
Abstract:
Methods and systems for automated tuning of a service configuration are disclosed. An optimal configuration for a test computer is selected by performing one or more load tests using the test computer for each of a plurality of test configurations. The performance of a plurality of additional test computers configured with the optimal configuration is automatically determined by performing additional load tests using the additional test computers. A plurality of production computers are automatically configured with the optimal configuration if the performance of the additional test computers is improved with the optimal configuration.
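A minimal sketch of the three-step tuning flow; the candidate configurations, the load-test hook, and the fleet deployment call are hypothetical placeholders.

    # Hypothetical candidate test configurations.
    CANDIDATE_CONFIGS = [
        {"threads": 32, "heap_gb": 4},
        {"threads": 64, "heap_gb": 8},
    ]

    def load_test(host: str, config: dict) -> float:
        """Stand-in: apply config on host, run a load test, return a score."""
        return 0.0

    def apply_config(host: str, config: dict) -> None:
        """Stand-in for pushing a configuration to a production computer."""

    def tune(test_host, extra_test_hosts, prod_hosts, baseline_score):
        # 1. Select the optimal candidate on a single test computer.
        best = max(CANDIDATE_CONFIGS, key=lambda c: load_test(test_host, c))
        # 2. Confirm the gain holds on additional test computers.
        improved = all(load_test(h, best) > baseline_score
                       for h in extra_test_hosts)
        # 3. Configure the production fleet only if performance improved.
        if improved:
            for h in prod_hosts:
                apply_config(h, best)
        return best if improved else None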
Abstract:
A contributed test management system receives a first request from a consumer system, where the first request comprises a request for a contributed test to be added to a deployment pipeline of a producer system, and where the contributed test is associated with an application component in the deployment pipeline. The contributed test management system causes the contributed test to test a code update provided by the producer system for the application component in the deployment pipeline, detects whether the contributed test fails during execution, and, if so, indicates to the consumer system that the contributed test has failed.
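A sketch of the register-run-notify flow, assuming an in-memory registry keyed by application component; the names and the failure signal (an AssertionError) are illustrative.

    # Hypothetical registry: component -> list of (consumer, test_fn) pairs.
    CONTRIBUTED_TESTS = {}

    def add_contributed_test(consumer: str, component: str, test_fn) -> None:
        """Handle the consumer's request to add a test to the pipeline."""
        CONTRIBUTED_TESTS.setdefault(component, []).append((consumer, test_fn))

    def notify(consumer: str, message: str) -> None:
        """Stand-in for the callback to the consumer system."""
        print(consumer, message)

    def on_code_update(component: str, build) -> None:
        """Run every contributed test against the producer's code update."""
        for consumer, test_fn in CONTRIBUTED_TESTS.get(component, []):
            try:
                test_fn(build)
            except AssertionError as exc:
                notify(consumer, f"contributed test for {component} failed: {exc}")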