Abstract:
A computer-implemented method for monitoring a system includes training a recurrent Gaussian mixture model to model a probability distribution for each sensor of the system based on a set of training data. The recurrent Gaussian mixture model applies a Gaussian process to each sensor dimension to estimate current sensor values based on previous sensor values. Measured sensor data is received from the sensors of the system, and an expectation-maximization technique is performed to determine an expected sensor value for a particular sensor based on the recurrent Gaussian mixture model and the measured sensor data. A measured sensor value is identified for the particular sensor in the measured sensor data. If the measured sensor value and the expected sensor value deviate by more than a predetermined amount, a fault detection alarm is generated to indicate that the system is not operating within a normal operating range.
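The abstract does not spell out the computation, but the core estimate-and-compare step can be sketched. The Python sketch below fits an ordinary Gaussian mixture (scikit-learn) over (previous, current) value pairs for a single sensor and takes the mixture's conditional mean as the expected current value; it stands in for the recurrent Gaussian mixture model and EM machinery of the method, and the one-step lag, component count, and alarm threshold are all illustrative assumptions.

import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def fit_lagged_gmm(series, n_components=3, seed=0):
    # Joint model of (x[t-1], x[t]) pairs for one sensor.
    pairs = np.column_stack([series[:-1], series[1:]])
    return GaussianMixture(n_components=n_components, random_state=seed).fit(pairs)

def expected_current(gmm, prev):
    # Conditional mean E[x_t | x_{t-1} = prev] of the fitted mixture.
    mu, cov, w = gmm.means_, gmm.covariances_, gmm.weights_
    lik = np.array([norm.pdf(prev, mu[k, 0], np.sqrt(cov[k, 0, 0]))
                    for k in range(len(w))])
    resp = w * lik
    resp /= resp.sum()  # posterior weight of each component given the previous value
    cond = np.array([mu[k, 1] + cov[k, 1, 0] / cov[k, 0, 0] * (prev - mu[k, 0])
                     for k in range(len(w))])  # per-component conditional means
    return resp @ cond

rng = np.random.default_rng(0)
normal = np.sin(np.linspace(0, 30, 500)) + 0.05 * rng.standard_normal(500)
gmm = fit_lagged_gmm(normal)
expected = expected_current(gmm, prev=normal[-1])
measured = 2.5  # a reading far outside the learned behavior
if abs(measured - expected) > 0.5:  # assumed predetermined threshold
    print(f"ALARM: measured {measured:.3f}, expected {expected:.3f}")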
Abstract:
The Support Vector Machine (SVM) has been used in a wide variety of classification problems. The original SVM uses the hinge loss function, which is nondifferentiable and makes the problem difficult to solve, particularly for regularized SVMs such as the l1-norm-regularized SVM. The Huberized SVM (HSVM) is considered, which uses a differentiable approximation of the hinge loss function. The Proximal Gradient (PG) method is used to solve the binary-class HSVM (BHSVM) and is then generalized to the multi-class HSVM (MHSVM). Under strong convexity assumptions, the algorithm converges linearly. A finite convergence result about the support of the solution is given, based on which the algorithm is further accelerated by a two-stage method.
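As a concrete illustration, a minimal proximal gradient loop for the l1-regularized BHSVM might look as follows in Python. The huberization parameter delta, the fixed step size, and the iteration count are assumptions of the sketch, and the acceleration and two-stage strategy described above are not shown.

import numpy as np

def huber_hinge_grad(t, delta):
    # Derivative of the huberized hinge loss phi(t):
    #   phi(t) = 0                  if t > 1
    #          = (1 - t)^2 / (2d)   if 1 - d < t <= 1
    #          = 1 - t - d/2        if t <= 1 - d
    g = np.zeros_like(t)
    mid = (t > 1 - delta) & (t <= 1)
    g[mid] = -(1 - t[mid]) / delta
    g[t <= 1 - delta] = -1.0
    return g

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def pg_bhsvm(X, y, lam, delta=0.5, step=0.1, iters=1000):
    # Proximal gradient on (1/n) * sum_i phi(y_i x_i^T w) + lam * ||w||_1.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        t = y * (X @ w)
        grad = X.T @ (huber_hinge_grad(t, delta) * y) / n
        w = soft_threshold(w - step * grad, step * lam)
    return w

In practice the step size would be derived from the Lipschitz constant of the smooth part (on the order of ||X||_2^2 / (n * delta)) rather than fixed.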
Abstract:
A method for training a deep learning network includes defining a loss function corresponding to the network. Training samples are received and current parameter values are set to initial parameter values. Then, a computing platform is used to perform an optimization method that iteratively minimizes the loss function. Each iteration comprises the following steps. An eigCG solver is applied to determine a descent direction by minimizing a local approximated quadratic model of the loss function with respect to the current parameter values and the training dataset. An approximate leftmost eigenvector and eigenvalue are determined while solving the Newton system. The approximate leftmost eigenvector is used as a negative curvature direction to prevent the optimization method from converging to saddle points. Curvilinear and adaptive line-searches are used to guide the optimization method to a local minimum. At the end of the iteration, the current parameter values are updated based on the descent direction.
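A rough sketch of one such iteration is below, with SciPy's eigsh standing in for the eigCG solver (eigCG would recover the approximate leftmost eigenpair as a by-product of the Newton solve rather than in a separate call). The regularization shift, the Armijo constant, and the simple curvilinear backtracking rule x(a) = x + a^2 s + a d are assumptions of the sketch.

import numpy as np
from scipy.sparse.linalg import cg, eigsh
from scipy.optimize import rosen, rosen_der, rosen_hess

def curvilinear_newton(f, grad, hess, x0, iters=100, tol=1e-6):
    x = x0.astype(float)
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        # Approximate leftmost eigenpair (eigCG would yield this as a by-product).
        lam, v = eigsh(H, k=1, which='SA')
        d = v[:, 0] if lam[0] < 0 else np.zeros_like(x)
        if d @ g > 0:
            d = -d  # orient the negative curvature direction downhill
        # Regularized Newton system; the shift keeps CG applicable when H is indefinite.
        shift = max(0.0, -lam[0]) + 1e-8
        s, _ = cg(H + shift * np.eye(len(x)), -g)
        # Curvilinear backtracking on x(a) = x + a^2 s + a d.
        a, fx = 1.0, f(x)
        while f(x + a * a * s + a * d) > fx + 1e-4 * a * a * (g @ s) and a > 1e-10:
            a *= 0.5
        x = x + a * a * s + a * d
    return x

x_star = curvilinear_newton(rosen, rosen_der, rosen_hess, np.array([-1.2, 1.0]))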
Abstract:
A system and method of predicting future power plant operations is based upon an artificial neural network model including one or more hidden layers. The artificial neural network is developed (and trained) to build a model that is able to predict future time series values of a specific power plant operation parameter based on prior values. By accurately predicting future values of the time series, power plant personnel are able to schedule future events in a cost-efficient, timely manner. The scheduled events may include providing an inventory of replacement parts, determining the number of turbines required to meet a predicted demand, determining the best time to perform maintenance on a turbine, etc. The inclusion of one or more hidden layers in the neural network model yields predictions that follow trends in the time series data without overfitting.
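The abstract leaves the network details open; a minimal sketch of the lagged-window approach with a small two-hidden-layer network (scikit-learn, with a synthetic series standing in for a recorded plant parameter) might look like this. The window length, layer sizes, and train/test split are illustrative assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(series, lag):
    # Each row of X holds `lag` consecutive past values; y is the next value.
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    return X, series[lag:]

# Synthetic stand-in for a recorded plant parameter (e.g. output over time).
rng = np.random.default_rng(0)
t = np.linspace(0, 40 * np.pi, 2000)
series = np.sin(t) + 0.3 * np.sin(0.1 * t) + 0.05 * rng.standard_normal(t.size)

X, y = make_windows(series, lag=24)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X[:-200], y[:-200])       # train on all but the last 200 steps
forecast = model.predict(X[-200:])  # one-step-ahead predictions

Forecasts beyond one step ahead would be produced by feeding each prediction back into the input window.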