-
Publication No.: US11037058B2
Publication Date: 2021-06-15
Application No.: US16518831
Filing Date: 2019-07-22
Applicant: VMware, Inc.
Inventor: Dev Nag, Yanislav Yankov, Dongni Wang, Gregory T. Burk, Nicholas Mark Grant Stephen
Abstract: The current document is directed to the transfer of training, received by a first automated reinforcement-learning-based application manager while controlling a first application, to a second automated reinforcement-learning-based application manager that controls a second application different from the first application. Transferable training provides a basis for automated generation of applications from application components. Transferable training is obtained by composing applications from application components and by composing reinforcement-learning-based control-and-learning constructs from the reinforcement-learning-based control-and-learning constructs of the application components.
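A minimal Python sketch of the composition idea described in the abstract above, assuming tabular Q-values keyed by (state, action); the compose_q_table helper, the averaging rule, and the example metrics are illustrative assumptions, not details taken from the patent:

```python
# Illustrative sketch: seed the Q-table of a composite application's manager
# from Q-tables learned while managing its components, so training transfers.
from collections import defaultdict

def compose_q_table(component_q_tables, weights=None):
    """Combine component Q-values (here, a weighted average) into one table."""
    weights = weights or [1.0 / len(component_q_tables)] * len(component_q_tables)
    composite = defaultdict(float)
    for w, q_table in zip(weights, component_q_tables):
        for state_action, value in q_table.items():
            composite[state_action] += w * value
    return dict(composite)

# Q-tables learned while managing two component applications.
q_web = {(("cpu_high",), "scale_out"): 0.8, (("cpu_low",), "scale_in"): 0.4}
q_db  = {(("cpu_high",), "scale_out"): 0.6, (("latency_high",), "add_replica"): 0.7}

# Transferred training for the manager of the composed application.
q_composite = compose_q_table([q_web, q_db])
```

Averaging is only one simple composition rule, chosen here for illustration; the abstract leaves the composition mechanism general.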
-
Publication No.: US10922092B2
Publication Date: 2021-02-16
Application No.: US16518617
Filing Date: 2019-07-22
Applicant: VMware, Inc.
Inventor: Dev Nag, Yanislav Yankov, Dongni Wang, Gregory T. Burk, Nicholas Mark Grant Stephen
Abstract: The current document is directed to an administrator-monitored reinforcement-learning-based application manager that can be deployed in various different computational environments to manage the computational environments with respect to one or more reward-specified goals. Certain control actions undertaken by the administrator-monitored reinforcement-learning-based application manager are first proposed to one or more administrators or other users, who can accept or reject the proposed control actions prior to their execution. The reinforcement-learning-based application manager can therefore continue to explore the state/action space, but the exploration is constrained both parametrically and by human-administrator oversight and intervention.
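A hedged sketch of the accept/reject gate described in the abstract; the administrator_approves prompt and the monitored_step helper are hypothetical stand-ins for the patent's proposal mechanism:

```python
# Illustrative approval gate: the RL manager proposes an action, and an
# administrator accepts or rejects it before it is executed.
def administrator_approves(action, state):
    """Stand-in for a real approval workflow (UI, ticket, chat prompt)."""
    answer = input(f"Execute '{action}' in state {state}? [y/N] ")
    return answer.strip().lower() == "y"

def monitored_step(policy, execute, state):
    action = policy(state)                      # action proposed by the learned policy
    if administrator_approves(action, state):   # human gate before execution
        return execute(action)                  # apply the control action
    return None                                 # rejected: environment left unchanged
```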
-
Publication No.: US20200348978A1
Publication Date: 2020-11-05
Application No.: US16400393
Filing Date: 2019-05-01
Applicant: VMware, Inc.
Inventor: Nicholas Mark Grant Stephen
Abstract: The current document is directed to a resource-identifier-correlation service and/or application that maintains correlation information about the different resource identifiers used by different management applications and/or services within a cloud-computing facility or distributed cloud-computing facility. In one implementation, the resource-identifier-correlation service and/or application continuously monitors streams of inventory/configuration data for different management applications and/or services in order to construct and maintain a database of computational resources and the resource identifiers associated with the computational resources. The resource-identifier-correlation service and/or application includes one or both of a service interface and an application programming interface (“API”) that provide many different functionalities related to identifying and monitoring correlations between resource identifiers used by the different management applications and/or services.
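A minimal sketch of the kind of correlation store the abstract describes, using a dictionary-backed index rather than any particular database; all class, method, and identifier names are illustrative assumptions:

```python
# Illustrative correlation store: map each computational resource to the
# identifiers assigned to it by different management applications/services.
from collections import defaultdict

class ResourceIdCorrelator:
    def __init__(self):
        # canonical resource key -> {management system name: resource identifier}
        self._ids = defaultdict(dict)
        # (management system, identifier) -> canonical resource key (reverse index)
        self._reverse = {}

    def record(self, resource_key, manager, identifier):
        """Called while monitoring a manager's inventory/configuration stream."""
        self._ids[resource_key][manager] = identifier
        self._reverse[(manager, identifier)] = resource_key

    def correlate(self, manager, identifier):
        """Return all known identifiers for the resource named by one manager."""
        resource_key = self._reverse.get((manager, identifier))
        return self._ids.get(resource_key, {})

# Usage: the same VM is known under different identifiers in two managers.
c = ResourceIdCorrelator()
c.record("vm-finance-01", "vSphere", "vm-2031")
c.record("vm-finance-01", "monitoring", "i-9f3a77")
print(c.correlate("vSphere", "vm-2031"))  # {'vSphere': 'vm-2031', 'monitoring': 'i-9f3a77'}
```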
-
Publication No.: US20240036910A1
Publication Date: 2024-02-01
Application No.: US18319351
Filing Date: 2023-05-17
Applicant: VMware, Inc.
Inventor: Nicholas Mark Grant Stephen, Santoshkumar Kavadimatti, Saurabh Kedia
IPC: G06F9/455
CPC classification number: G06F9/45558, G06F2009/4557
Abstract: The current document is directed to a meta-level management system (“MMS”) that aggregates information and functionalities provided by multiple underlying management systems in addition to providing additional information and management functionalities. In one implementation, the MMS creates and maintains a single inventory-and-configuration-management database (“ICMDB”), implemented using a graph database, to store a comprehensive inventory of managed entities known to, and managed by, the multiple underlying management systems. Each managed entity is associated with an entity identifier and is represented in the ICMDB by a node. Managed entities that are managed by two or more of the multiple underlying management systems are represented by nodes that include references to one or more namespaces. Each of the underlying management systems is associated with at least one data collector that collects inventory and configuration information from the underlying management system for storing within ICMDB nodes and ICMDB-node namespaces.
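A small sketch of what an ICMDB node with per-manager namespaces might look like, using a plain Python dataclass instead of an actual graph database; the field names and example managers are assumptions made for illustration:

```python
# Illustrative ICMDB node: one managed entity, with a namespace per
# underlying management system that knows about it.
from dataclasses import dataclass, field

@dataclass
class ICMDBNode:
    entity_id: str                                   # MMS-wide entity identifier
    namespaces: dict = field(default_factory=dict)   # manager name -> collected properties
    edges: list = field(default_factory=list)        # references to related nodes

    def update_namespace(self, manager, properties):
        """A data collector for one underlying manager writes into its namespace."""
        self.namespaces.setdefault(manager, {}).update(properties)

# A host known to two underlying management systems.
node = ICMDBNode(entity_id="host-42")
node.update_namespace("vSphere", {"moref": "host-1138", "cpu_cores": 32})
node.update_namespace("NSX", {"transport_node_id": "tn-77", "tunnel_status": "up"})
```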
-
Publication No.: US11042640B2
Publication Date: 2021-06-22
Application No.: US16502587
Filing Date: 2019-07-03
Applicant: VMware, Inc.
Inventor: Dev Nag, Gregory T. Burk, Yanislav Yankov, Nicholas Mark Grant Stephen, Dongni Wang
Abstract: The current document is directed to a safe-operation-constrained reinforcement-learning-based application manager that can be deployed in various different computational environments, without extensive manual modification and interface development, to manage the computational environments with respect to one or more reward-specified goals. Control actions undertaken by the safe-operation-constrained reinforcement-learning-based application manager are constrained by stored action filters, which restrict state/action-space exploration to safe actions and thus prevent deleterious impact to the managed computational environment.
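A hedged sketch of filtering candidate actions before action selection, in the spirit of the abstract; the filter signature, the epsilon-greedy selection, and the example filter are illustrative assumptions rather than details from the patent:

```python
# Illustrative safe-action selection: stored filters remove unsafe candidate
# actions before the manager chooses (greedily or while exploring).
import random

def safe_select(state, candidate_actions, action_filters, q_value, epsilon=0.1):
    # Keep only actions every stored filter considers safe in this state.
    safe_actions = [a for a in candidate_actions
                    if all(f(state, a) for f in action_filters)]
    if not safe_actions:
        return None                              # no safe action: take no action
    if random.random() < epsilon:
        return random.choice(safe_actions)       # exploration stays within the safe set
    return max(safe_actions, key=lambda a: q_value(state, a))

# Example filter: never power off a node while another node is already down.
filters = [lambda state, action: not (action == "power_off_node"
                                      and state.get("nodes_down", 0) >= 1)]
```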
-
Publication No.: US10977579B2
Publication Date: 2021-04-13
Application No.: US16518807
Filing Date: 2019-07-22
Applicant: VMware, Inc.
Inventor: Dev Nag, Yanislav Yankov, Dongni Wang, Gregory T. Burk, Nicholas Mark Grant Stephen
Abstract: The current document is directed to automated reinforcement-learning-based application managers that are trained using adversarial training. During adversarial training, potentially disadvantageous next actions are selected for issuance by an automated reinforcement-learning-based application manager at a lower frequency than next actions selected according to a policy that is learned to provide optimal or near-optimal control over a computing environment that includes one or more applications controlled by the automated reinforcement-learning-based application manager. By selecting disadvantageous actions, the automated reinforcement-learning-based application manager is forced to explore a much larger subset of the system-state space during training, so that, upon completion of training, the automated reinforcement-learning-based application manager has learned a more robust and complete optimal or near-optimal control policy than had the automated reinforcement-learning-based application manager been trained by simulators or using management actions and computing-environment responses recorded during previous controlled operation of a computing environment.
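A minimal sketch of occasionally issuing a low-valued action during training, in the spirit of the adversarial training described above; the adversarial_rate parameter and the greedy baseline are assumptions, not details from the patent:

```python
# Illustrative adversarial selection: with small probability, issue the
# *lowest*-valued action so training visits a larger part of the state space.
import random

def adversarial_select(state, actions, q_value, adversarial_rate=0.05):
    ranked = sorted(actions, key=lambda a: q_value(state, a))
    if random.random() < adversarial_rate:
        return ranked[0]      # potentially disadvantageous action, chosen rarely
    return ranked[-1]         # otherwise follow the learned (greedy) policy
```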
-
Publication No.: US10949263B2
Publication Date: 2021-03-16
Application No.: US16518717
Filing Date: 2019-07-22
Applicant: VMware, Inc.
Inventor: Dev Nag, Yanislav Yankov, Dongni Wang, Gregory T. Burk, Nicholas Mark Grant Stephen
Abstract: The current document is directed to automated reinforcement-learning-based application managers that obtain increased computational efficiency by reusing learned models and by using human-management experience to truncate state and observation vectors. Learned models of managed environments that receive component-associated inputs can be partially or completely reused for similar environments. Human managers and administrators generally use only a subset of the available metrics in managing an application, and that subset can be used as an initial subset of metrics for learning an optimal or near-optimal control policy by an automated reinforcement-learning-based application manager.
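A small sketch of truncating observation vectors to the metric subset human administrators actually use, as the abstract suggests; the metric names are invented for illustration:

```python
# Illustrative truncation: keep only the metrics human administrators use,
# shrinking the observation vector the manager must learn from.
HUMAN_USED_METRICS = ["cpu_utilization", "memory_utilization", "p99_latency_ms"]

def truncate_observation(full_observation):
    """Project the full metric dict onto the human-selected subset."""
    return tuple(full_observation[m] for m in HUMAN_USED_METRICS)

full = {"cpu_utilization": 0.92, "memory_utilization": 0.55,
        "p99_latency_ms": 310, "disk_iops": 1200, "net_errors": 0}
state = truncate_observation(full)   # (0.92, 0.55, 310), used as the RL state
```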
-
Publication No.: US20200065704A1
Publication Date: 2020-02-27
Application No.: US16518845
Filing Date: 2019-07-22
Applicant: VMware, Inc.
Inventor: Dev Nag, Yanislav Yankov, Dongni Wang, Gregory T. Burk, Nicholas Mark Grant Stephen
Abstract: The current document is directed to methods and systems for simulation-based training of automated reinforcement-learning-based application managers. Simulators are generated from data collected from controlled computing environments and may employ any of a variety of different machine-learning models to learn state-transition and reward models. The currently disclosed methods and systems provide facilities for visualizing aspects of the models learned by a simulator and for initializing simulator models using domain information. In addition, the currently disclosed simulators employ weighted differences, computed from simulator-generated and training-data state transitions, as feedback to the machine-learning models to address various biases and deficiencies of commonly employed difference metrics in the context of training automated reinforcement-learning-based application managers.
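A hedged sketch of a weighted difference between a simulator-predicted transition and a recorded training-data transition; the per-metric weighting scheme shown here is an illustrative assumption about how such a difference might be computed:

```python
# Illustrative weighted difference between a simulator-predicted next state
# and an observed next state, usable as feedback to the simulator's models.
def weighted_transition_difference(predicted_next, observed_next, weights):
    """Per-metric weights let important metrics dominate the error signal."""
    return sum(w * abs(predicted_next[m] - observed_next[m])
               for m, w in weights.items())

weights   = {"cpu_utilization": 5.0, "p99_latency_ms": 0.01}
predicted = {"cpu_utilization": 0.80, "p99_latency_ms": 250}
observed  = {"cpu_utilization": 0.92, "p99_latency_ms": 310}
loss = weighted_transition_difference(predicted, observed, weights)  # 0.6 + 0.6 = 1.2
```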
-
Publication No.: US20200065703A1
Publication Date: 2020-02-27
Application No.: US16518807
Filing Date: 2019-07-22
Applicant: VMware, Inc.
Inventor: Dev Nag, Yanislav Yankov, Dongni Wang, Gregory T. Burk, Nicholas Mark Grant Stephen
Abstract: The current document is directed to automated reinforcement-learning-based application managers that are trained using adversarial training. During adversarial training, potentially disadvantageous next actions are selected for issuance by an automated reinforcement-learning-based application manager at a lower frequency than next actions selected according to a policy that is learned to provide optimal or near-optimal control over a computing environment that includes one or more applications controlled by the automated reinforcement-learning-based application manager. By selecting disadvantageous actions, the automated reinforcement-learning-based application manager is forced to explore a much larger subset of the system-state space during training, so that, upon completion of training, the automated reinforcement-learning-based application manager has learned a more robust and complete optimal or near-optimal control policy than had the automated reinforcement-learning-based application manager been trained by simulators or using management actions and computing-environment responses recorded during previous controlled operation of a computing environment.
-
Publication No.: US20200065156A1
Publication Date: 2020-02-27
Application No.: US16518717
Filing Date: 2019-07-22
Applicant: VMware, Inc.
Inventor: Dev Nag, Yanislav Yankov, Dongni Wang, Gregory T. Burk, Nicholas Mark Grant Stephen
Abstract: The current document is directed to automated reinforcement-learning-based application managers that obtain increased computational efficiency by reusing learned models and by using human-management experience to truncate state and observation vectors. Learned models of managed environments that receive component-associated inputs can be partially or completely reused for similar environments. Human managers and administrators generally use only a subset of the available metrics in managing an application, and that subset can be used as an initial subset of metrics for learning an optimal or near-optimal control policy by an automated reinforcement-learning-based application manager.
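A minimal sketch of the partial model reuse mentioned in the abstract, assuming learned parameters are kept per component; the component names and parameter layout are illustrative assumptions:

```python
# Illustrative partial model reuse: copy per-component learned parameters into
# the model for a similar managed environment; components without a learned
# counterpart start untrained.
def reuse_component_models(source_model, target_components):
    """source_model maps component name -> learned parameters."""
    reused  = {c: source_model[c] for c in target_components if c in source_model}
    missing = [c for c in target_components if c not in source_model]
    return reused, missing

learned = {"web_tier": {"lr": 0.01, "weights": [0.3, 0.7]},
           "db_tier":  {"lr": 0.02, "weights": [0.5, 0.1]}}
new_app = ["web_tier", "cache_tier"]
reused, untrained = reuse_component_models(learned, new_app)
# reused == {'web_tier': {...}}, untrained == ['cache_tier']
```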