Abstract:
The current document is directed to a safe-operation-constrained reinforcement-learning-based application manager that can be deployed in various computational environments, without extensive manual modification and interface development, to manage those environments with respect to one or more reward-specified goals. Control actions undertaken by the safe-operation-constrained reinforcement-learning-based application manager are restricted, by stored action filters, so that the manager's exploration of the state/action space is confined to safe actions, preventing deleterious impacts on the managed computational environment.
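The action-filter mechanism can be illustrated with a minimal sketch. All names below (ActionFilter, SafeRLManager, the example action names) are illustrative assumptions, not terms from the disclosure; the point is only that every candidate action is checked against stored filters before exploration is allowed to issue it.

```python
# Minimal sketch: stored action filters restrict the manager's exploration
# of the state/action space to actions judged safe in the current state.
import random
from typing import Callable, List


class ActionFilter:
    """Wraps a predicate that returns True when an action is safe in a state."""

    def __init__(self, predicate: Callable[[dict, str], bool]):
        self.predicate = predicate

    def allows(self, state: dict, action: str) -> bool:
        return self.predicate(state, action)


class SafeRLManager:
    def __init__(self, actions: List[str], filters: List[ActionFilter]):
        self.actions = actions
        self.filters = filters

    def safe_actions(self, state: dict) -> List[str]:
        # An action survives only if every stored filter allows it.
        return [a for a in self.actions
                if all(f.allows(state, a) for f in self.filters)]

    def select_action(self, state: dict) -> str:
        # Exploration is restricted to the filtered, safe subset.
        candidates = self.safe_actions(state)
        if not candidates:
            return "no-op"          # fall back to a harmless default
        return random.choice(candidates)


# Example filter: never power off a host that still runs critical VMs.
no_poweroff = ActionFilter(
    lambda state, action: not (action == "power_off_host"
                               and state.get("critical_vms", 0) > 0))

manager = SafeRLManager(["scale_up", "scale_down", "power_off_host"],
                        [no_poweroff])
print(manager.select_action({"critical_vms": 2}))   # never "power_off_host"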
Abstract:
The current document is directed to automated reinforcement-learning-based application managers that are trained using adversarial training. During adversarial training, potentially disadvantageous next actions are selected for issuance by an automated reinforcement-learning-based application manager at a lower frequency than next actions selected according to a policy that is learned to provide optimal or near-optimal control over a computing environment that includes one or more applications controlled by the automated reinforcement-learning-based application manager. By occasionally selecting disadvantageous actions, the automated reinforcement-learning-based application manager is forced to explore a much larger subset of the system-state space during training, so that, upon completion of training, it has learned a more robust and complete optimal or near-optimal control policy than it would have learned had it been trained by simulators or by using management actions and computing-environment responses recorded during previous controlled operation of a computing environment.
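A hedged sketch of this selection scheme follows. The Q-table representation, the constant name ADVERSARIAL_RATE, and the action names are assumptions made for illustration; the disclosure only requires that deliberately disadvantageous actions be issued at a lower frequency than policy-selected actions.

```python
# Sketch: with small probability the trainer issues a deliberately
# disadvantageous action (here, the one with the lowest estimated value)
# instead of the policy's greedy choice, widening state-space coverage.
import random
from collections import defaultdict

ADVERSARIAL_RATE = 0.05   # disadvantageous actions chosen at lower frequency

q_values = defaultdict(
    lambda: {"scale_up": 0.0, "scale_down": 0.0, "migrate": 0.0})


def select_training_action(state: str) -> str:
    estimates = q_values[state]
    if random.random() < ADVERSARIAL_RATE:
        # Deliberately pick the worst-looking action to force exploration.
        return min(estimates, key=estimates.get)
    # Otherwise follow the current (greedy) policy.
    return max(estimates, key=estimates.get)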
Abstract:
The current document is directed to automated reinforcement-learning-based application managers that obtain increased computational efficiency by reusing learned models and by using human-management experience to truncate state and observation vectors. Learned models of managed environments that receive component-associated inputs can be partially or completely reused for similar environments. Human managers and administrators generally use only a subset of the available metrics in managing an application, and that subset can be used as an initial subset of metrics for learning an optimal or near-optimal control policy by an automated reinforcement-learning-based application manager.
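The metric-subsetting idea can be sketched briefly. The metric names and the particular human-used subset below are invented for illustration; the disclosure says only that the metrics a human administrator actually consults can seed the initial observation vector.

```python
# Sketch: project full metric samples onto the subset of metrics a human
# manager reportedly uses, yielding a truncated initial observation vector.
FULL_METRICS = ["cpu", "mem", "disk_io", "net_rx", "net_tx",
                "cache_hits", "gc_pauses", "thread_count"]

# Subset a human administrator uses when managing the application.
HUMAN_USED_METRICS = ["cpu", "mem", "net_rx"]


def truncate_observation(sample: dict) -> list:
    """Project a full metric sample onto the human-used subset."""
    return [sample[m] for m in HUMAN_USED_METRICS]


obs = truncate_observation({m: 0.0 for m in FULL_METRICS})   # length 3, not 8
```

Learning over this shorter observation vector shrinks the state space the manager must explore; metrics can be added back later if the truncated set proves insufficient.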
Abstract:
This disclosure is directed to data-agnostic computational methods and systems for adjusting hard thresholds based on user feedback. Hard thresholds are used to monitor time-series data generated by a data-generating entity. The time-series data may be metric data that represents usage of the data-generating entity over time. The data is compared with a hard threshold associated with usage of the data-generating entity and, when the data violates the threshold, an alert is typically generated and presented to a user. Methods and systems collect user feedback after a number of alerts to determine the quality and significance of the alerts. Based on the user feedback, methods and systems automatically adjust the hard thresholds to better represent how the user perceives the alerts.
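One possible shape of such an adjustment is sketched below. The update rule, the 0.5 cutoff, and the step size are illustrative assumptions, not the patented method; they only show how batched feedback could nudge a hard threshold.

```python
# Sketch: after a batch of alerts, the fraction the user judged useful
# nudges the hard threshold up (fewer alerts) or down (more alerts).
def adjust_threshold(threshold: float, feedback: list,
                     step: float = 0.05) -> float:
    """feedback: list of booleans, True if the user found an alert useful."""
    if not feedback:
        return threshold
    useful_fraction = sum(feedback) / len(feedback)
    if useful_fraction < 0.5:
        # Mostly noise: raise the threshold so alerts fire less often.
        return threshold * (1.0 + step)
    # Mostly useful: lower it slightly to catch more events.
    return threshold * (1.0 - step)


t = 80.0                                             # e.g. CPU-usage percent
t = adjust_threshold(t, [False, False, True, False])  # noisy alerts -> 84.0
```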
Abstract:
Some embodiments provide, for a policy framework that manages application of a plurality of policies to a plurality of resources in a computing environment, a method for providing a user interface. The method displays a first display area for viewing and editing policies imported by the policy framework from a first set of heterogeneous sources. The method displays a second display area for viewing and editing information regarding computing resources imported by the policy framework from a second set of heterogeneous sources. The method displays a third display area for viewing and editing binding rules that bind the policies to the computing resources.
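The binding rules behind the third display area can be sketched as a simple data model. The classes, the tag-matching scheme, and the example names are assumptions made for illustration; the disclosure does not specify a data model.

```python
# Sketch: a binding rule maps a policy to the resources it governs,
# here by matching a required tag against each resource's tag set.
from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class Policy:
    name: str
    source: str              # which heterogeneous source it was imported from


@dataclass
class Resource:
    name: str
    tags: Set[str] = field(default_factory=set)


@dataclass
class BindingRule:
    policy: Policy
    required_tag: str

    def bound_resources(self, resources: List[Resource]) -> List[Resource]:
        return [r for r in resources if self.required_tag in r.tags]


rule = BindingRule(Policy("encrypt-at-rest", "vault"), required_tag="pci")
print(rule.bound_resources([Resource("db-1", {"pci"}),
                            Resource("web-1", set())]))   # binds db-1 only
```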
Abstract:
This disclosure is directed to computational, closed-loop user-feedback systems and methods for ranking or updating beliefs for a user based on user feedback. The systems and methods are based on a data-agnostic user-feedback formulation that uses user feedback to automatically rank or update the beliefs for a user. The methods and systems are based on a general statistical inference model, which, in turn, is based on an assumption of convergence in user opinion. The closed-loop user-feedback methods and systems may be used to rank or update beliefs before the beliefs are input to a recommender engine. As a result, the recommender engine is expected to be more responsive to customer environments, more efficient to deploy, and to generate fewer unnecessary recommendations for users.
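The closed loop can be sketched with a common statistical choice. A Beta posterior per belief is an assumption made here for concreteness; the disclosure describes only a general statistical inference model.

```python
# Sketch: each belief keeps a Beta posterior over the probability that the
# user agrees with it, updated from binary feedback; beliefs are ranked by
# posterior mean before being handed to a recommender engine.
class Belief:
    def __init__(self, statement: str):
        self.statement = statement
        self.alpha = 1.0     # prior pseudo-count of agreements
        self.beta = 1.0      # prior pseudo-count of disagreements

    def update(self, agreed: bool) -> None:
        if agreed:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def score(self) -> float:
        return self.alpha / (self.alpha + self.beta)   # posterior mean


beliefs = [Belief("high CPU correlates with slow checkout"),
           Belief("weekend traffic needs extra capacity")]
beliefs[0].update(True)    # user agreed
beliefs[1].update(False)   # user disagreed
ranked = sorted(beliefs, key=lambda b: b.score(), reverse=True)
```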
Abstract:
The current document is directed to transferring the training received by a first automated reinforcement-learning-based application manager, while controlling a first application, to a second automated reinforcement-learning-based application manager that controls a second application different from the first. Transferable training provides a basis for automated generation of applications from application components. Transferable training is obtained by composing applications from application components and by composing an application's reinforcement-learning-based control-and-learning constructs from the corresponding constructs of its application components.
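One way such composition could look is sketched below. The dict-of-Q-tables representation and the component names are illustrative assumptions; the point is that per-component training learned on one application seeds the manager of another application built from overlapping components.

```python
# Sketch: per-component value estimates learned while managing application A
# are reused to initialize the manager of application B, component by
# component.
def compose_manager_state(component_qtables: dict, components: list) -> dict:
    """Build an initial Q-table for an application from its components,
    reusing whatever per-component training is already available."""
    initial = {}
    for comp in components:
        # Reuse learned estimates where they exist; start fresh otherwise.
        initial[comp] = dict(component_qtables.get(comp, {}))
    return initial


# Training obtained on application A (web + db) transfers to application B
# (web + cache): the shared "web" component's estimates carry over as-is.
trained = {"web": {"scale_up": 0.7, "scale_down": -0.2},
           "db": {"scale_up": 0.1}}
app_b = compose_manager_state(trained, ["web", "cache"])
```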
Abstract:
The current document is directed to an administrator-monitored reinforcement-learning-based application manager that can be deployed in various computational environments to manage those environments with respect to one or more reward-specified goals. Certain control actions undertaken by the administrator-monitored reinforcement-learning-based application manager are first proposed to one or more administrators or other users, who can accept or reject the proposed control actions prior to their execution. The reinforcement-learning-based application manager can therefore continue to explore the state/action space, but the exploration can be constrained both parametrically and by human-administrator oversight and intervention.
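The propose-then-approve loop can be sketched as follows. The function names, the callback signatures, and the console approver are assumptions for illustration; the disclosure requires only that proposed actions reach a human for acceptance or rejection before execution.

```python
# Sketch: actions the manager wants to issue are routed to an administrator,
# who accepts or rejects them before anything is executed.
from typing import Callable


def run_step(proposed_action: str,
             approve: Callable[[str], bool],
             execute: Callable[[str], None]) -> bool:
    """Execute the proposed action only if an administrator approves it."""
    if approve(proposed_action):
        execute(proposed_action)
        return True
    return False            # rejected: the manager must propose again


# A trivial console-based approver, purely for illustration.
def console_approve(action: str) -> bool:
    return input(f"Execute '{action}'? [y/N] ").strip().lower() == "y"
```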