Recurrent Gaussian Mixture Model For Sensor State Estimation In Condition Monitoring

    Publication number: US20200184373A1

    Publication date: 2020-06-11

    Application number: US16638778

    Application date: 2017-08-30

    Abstract: A computer-implemented method for monitoring a system includes training a recurrent Gaussian mixture model to model a probability distribution for each sensor of the system based on a set of training data. The recurrent Gaussian mixture model applies a Gaussian process to each sensor dimension to estimate current sensor values based on previous sensor values. Measured sensor data is received from the sensors of the system, and an expectation maximization technique is performed to determine an expected sensor value for a particular sensor based on the recurrent Gaussian mixture model and the measured sensor data. A measured sensor value is identified for the particular sensor in the measured sensor data. If the measured sensor value and the expected sensor value deviate by more than a predetermined amount, a fault detection alarm is generated to indicate that the system is not operating within a normal operating range.
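
    The alarm logic described in the abstract can be illustrated with a minimal sketch. The recurrent Gaussian mixture model itself is not reproduced here; `expected_from_history` is a purely hypothetical stand-in for the model's estimate of the current sensor value from previous readings, and the threshold and sample values are assumptions for illustration.

```python
import numpy as np

def expected_from_history(history, weights=None):
    """Hypothetical stand-in for the recurrent model's estimate of the
    current value of one sensor from its previous values."""
    history = np.asarray(history, dtype=float)
    if weights is None:
        weights = np.full(len(history), 1.0 / len(history))
    return float(weights @ history)

def fault_alarm(expected_value, measured_value, threshold):
    """Return True when the deviation between the measured and expected
    sensor values exceeds the predetermined amount (assumed scalar)."""
    return abs(measured_value - expected_value) > threshold

previous_readings = [100.2, 100.5, 100.4]   # hypothetical sensor history
expected = expected_from_history(previous_readings)
measured = 103.9                            # hypothetical new measurement
if fault_alarm(expected, measured, threshold=2.0):
    print("Fault detection alarm: sensor outside normal operating range")
```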

    Proximal gradient method for huberized support vector machine

    Publication number: US10332025B2

    Publication date: 2019-06-25

    Application number: US14642904

    Application date: 2015-03-10

    Abstract: The Support Vector Machine (SVM) has been used in a wide variety of classification problems. The original SVM uses the hinge loss function, which is nondifferentiable and makes the problem difficult to solve, particularly for regularized SVMs such as those with an l1-norm penalty. The Huberized SVM (HSVM) is considered, which uses a differentiable approximation of the hinge loss function. The Proximal Gradient (PG) method is used to solve the binary-class HSVM (BHSVM) and is then generalized to the multi-class HSVM (MHSVM). Under strong convexity assumptions, the algorithm converges linearly. A finite convergence result about the support of the solution is given, based on which the algorithm is further accelerated by a two-stage method.
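
    A minimal sketch of the ingredients named in the abstract, assuming an l1-regularized binary huberized SVM: the gradient of a smoothed (huberized) hinge loss and a proximal-gradient iteration whose proximal step is soft-thresholding. The function names, the fixed step size, and the smoothing parameter `delta` are assumptions for illustration, not the patent's actual algorithm or constants.

```python
import numpy as np

def huber_hinge_grad(t, delta=0.5):
    """Derivative of the huberized (smoothed) hinge loss with respect to the
    margin t = y * x.w; zero for t >= 1, quadratic zone for 1-delta < t < 1."""
    g = np.zeros_like(t)
    mid = (t > 1.0 - delta) & (t < 1.0)
    low = t <= 1.0 - delta
    g[mid] = -(1.0 - t[mid]) / delta
    g[low] = -1.0
    return g

def soft_threshold(w, tau):
    """Proximal operator of tau * ||w||_1 (soft-thresholding)."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def pg_bhsvm(X, y, lam=0.1, delta=0.5, step=0.1, iters=500):
    """Proximal-gradient iterations for an l1-regularized binary huberized SVM.

    X: (n, d) features, y: labels in {-1, +1}. A fixed step size is used for
    brevity; a Lipschitz-based or line-search step would normally be preferred.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        margins = y * (X @ w)
        grad = X.T @ (y * huber_hinge_grad(margins, delta)) / n   # smooth-part gradient
        w = soft_threshold(w - step * grad, step * lam)           # prox of l1 penalty
    return w
```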

    EFFICIENT CALCULATIONS OF NEGATIVE CURVATURE IN A HESSIAN FREE DEEP LEARNING FRAMEWORK

    Publication number: US20180101766A1

    Publication date: 2018-04-12

    Application number: US15290154

    Application date: 2016-10-11

    CPC classification number: G06N3/084

    Abstract: A method for training a deep learning network includes defining a loss function corresponding to the network. Training samples are received and current parameter values are set to initial parameter values. Then, a computing platform is used to perform an optimization method which iteratively minimizes the loss function. Each iteration comprises the following steps. An eigCG solver is applied to determine a descent direction by minimizing a locally approximated quadratic model of the loss function with respect to the current parameter values and the training dataset. An approximate leftmost eigenvector and eigenvalue are determined while solving the Newton system. The approximate leftmost eigenvector is used as a negative curvature direction to prevent the optimization method from converging to saddle points. Curvilinear and adaptive line searches are used to guide the optimization method to a local minimum. At the end of the iteration, the current parameter values are updated based on the descent direction.
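
    As a rough illustration of how a negative curvature direction can be combined with a descent direction in a curvilinear step, the sketch below uses a dense eigendecomposition in place of the eigCG solver described in the abstract; the step size, damping constant, and function name are assumptions, and no adaptive line search is shown.

```python
import numpy as np

def curvilinear_step(x, grad, hess, alpha=0.5):
    """One negative-curvature-aware step on a local quadratic model.

    Illustrative only: a dense eigendecomposition stands in for the eigCG
    solver, which would recover the leftmost eigenpair while solving the
    Newton system on large problems.
    """
    eigvals, eigvecs = np.linalg.eigh(hess)            # ascending eigenvalues
    leftmost_val, leftmost_vec = eigvals[0], eigvecs[:, 0]

    # Descent direction from a damped Newton system (damping keeps the system
    # positive definite when the Hessian is indefinite).
    shift = max(0.0, -leftmost_val + 1e-3)
    d = -np.linalg.solve(hess + shift * np.eye(len(x)), grad)

    if leftmost_val < 0.0:
        # Orient the leftmost eigenvector so it is a non-ascent direction,
        # then follow the curvilinear path x + alpha^2 * d + alpha * p.
        s = np.sign(leftmost_vec @ grad)
        p = -(s if s != 0 else 1.0) * leftmost_vec
        return x + alpha**2 * d + alpha * p
    return x + alpha * d                               # ordinary damped Newton step
```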

    SYSTEM AND METHOD FOR PREDICTING POWER PLANT OPERATIONAL PARAMETERS UTILIZING ARTIFICIAL NEURAL NETWORK DEEP LEARNING METHODOLOGIES

    Publication number: US20170091615A1

    Publication date: 2017-03-30

    Application number: US14867380

    Application date: 2015-09-28

    CPC classification number: G06N3/084 G06N3/0445

    Abstract: A system and method of predicting future power plant operations is based upon an artificial neural network model including one or more hidden layers. The artificial neural network is developed (and trained) to build a model that is able to predict future time series values of a specific power plant operation parameter based on prior values. By accurately predicting the future values of the time series, power plant personnel are able to schedule future events in a cost-efficient, timely manner. The scheduled events may include providing an inventory of replacement parts, determining a proper number of turbines required to meet a predicted demand, determining the best time to perform maintenance on a turbine, etc. The inclusion of one or more hidden layers in the neural network model creates a prediction that is able to follow trends in the time series data, without overfitting.
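
    A minimal sketch of next-value forecasting with a hidden-layer network, assuming a univariate parameter history, a fixed lag window, and Keras layer sizes and training settings chosen only for illustration; the patent's actual architecture and training procedure are not reproduced.

```python
import numpy as np
from tensorflow import keras

def make_windows(series, lag):
    """Turn a univariate series into (previous `lag` values, next value) pairs."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = np.array(series[lag:])
    return X, y

# Hypothetical history of one operational parameter (e.g., hourly turbine output).
history = np.sin(np.linspace(0.0, 20.0, 400)) + 0.05 * np.random.randn(400)
X, y = make_windows(history, lag=10)

# Small feed-forward network with a single hidden layer, standing in for the
# patent's model; sizes and training settings are illustrative assumptions.
model = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(32, activation="relu"),   # hidden layer follows trends
    keras.layers.Dense(1),                       # predicted next value
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, verbose=0)

next_value = model.predict(X[-1:], verbose=0)    # forecast for the next time step
```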

    Proximal Gradient Method for Huberized Support Vector Machine
    Invention application; status: pending (published)

    Publication number: US20150262083A1

    Publication date: 2015-09-17

    Application number: US14642904

    Application date: 2015-03-10

    CPC classification number: G06N20/00

    Abstract: The Support Vector Machine (SVM) has been used in a wide variety of classification problems. The original SVM uses the hinge loss function, which is nondifferentiable and makes the problem difficult to solve, particularly for regularized SVMs such as those with an l1-norm penalty. The Huberized SVM (HSVM) is considered, which uses a differentiable approximation of the hinge loss function. The Proximal Gradient (PG) method is used to solve the binary-class HSVM (BHSVM) and is then generalized to the multi-class HSVM (MHSVM). Under strong convexity assumptions, the algorithm converges linearly. A finite convergence result about the support of the solution is given, based on which the algorithm is further accelerated by a two-stage method.
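
    The finite convergence (support identification) result mentioned at the end of the abstract motivates a two-stage acceleration. The sketch below is an assumed illustration of that idea, reusing the hypothetical `pg_bhsvm` solver sketched earlier: run the solver once to identify the nonzero coordinates, then re-solve the smaller problem restricted to them.

```python
import numpy as np

def two_stage(X, y, pg_solver, tol=1e-8):
    """Illustrative two-stage acceleration based on support identification.

    Stage 1 runs a proximal-gradient solver (e.g., the hypothetical pg_bhsvm
    sketched above) on the full problem; stage 2 re-solves restricted to the
    coordinates that stage 1 left nonzero.
    """
    w = pg_solver(X, y)                          # stage 1: identify the support
    support = np.abs(w) > tol
    w_restricted = pg_solver(X[:, support], y)   # stage 2: smaller problem
    w_final = np.zeros_like(w)
    w_final[support] = w_restricted
    return w_final
```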

