Abstract:
There are provided a system, a method and a computer program product for generating an optimal preventive maintenance/replacement schedule for a set of assets. The method includes receiving data regarding an asset, said data including a failure rate function of said asset, a cost of preventive maintenance (PM) of said asset, a cost of an asset failure, and a cost of replacing an asset. An optimal number K of PM time intervals t_k and an indication of a possible replacement is computed and stored for each asset by minimizing a mean cost-rate value function with respect to an electrical age of the asset. A first PM schedule is formed without consideration of labor and budget resource constraints. The method further generates a second maintenance schedule for a system of assets by minimizing a deviation from the optimal PM time intervals subject to the labor and budget resource constraints.
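The single-asset step above can be illustrated with a minimal sketch. It assumes a Weibull cumulative hazard as the failure rate function, as-good-as-new restoration after each PM, and hypothetical cost values; the abstract does not specify any of these, so they are illustrative only.

```python
import math

# Hypothetical cost parameters (not taken from the abstract's data):
C_PM = 100.0        # cost of one preventive maintenance action
C_FAIL = 1000.0     # expected cost of an in-service failure
C_REPLACE = 5000.0  # cost of replacing the asset

def expected_failures(t, beta=2.5, eta=10.0):
    """Cumulative Weibull hazard H(t): expected number of failures
    by age t under the assumed failure rate function."""
    return (t / eta) ** beta

def mean_cost_rate(tk, K):
    """Mean cost per unit time if PM is performed every tk time
    units, K times, after which the asset is replaced."""
    horizon = K * tk
    total = (K * C_PM
             + C_FAIL * K * expected_failures(tk)
             + C_REPLACE)
    return total / horizon

# Grid search for the (K, t_k) pair minimizing the mean cost rate,
# without labor or budget constraints (the "first PM schedule").
best = min(((mean_cost_rate(tk / 10, K), K, tk / 10)
            for K in range(1, 30)
            for tk in range(1, 200)),
           key=lambda x: x[0])
rate, K_opt, tk_opt = best
```

The second, system-level schedule would then perturb each asset's (K_opt, tk_opt) as little as possible while satisfying shared labor and budget limits, e.g. via a constrained deviation-minimization program.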
Abstract:
The present disclosure relates generally to the field of distributed charging of electrical assets. In various examples, distributed charging of electrical assets may be implemented in the form of systems, methods and/or algorithms.
Abstract:
An exemplary method includes solving on a computing system an optimal power flow formulation for a plurality of generators in a power system. The solving includes computing using multi-threaded parallelism a plurality of constraints for the formulation, computing using multi-threaded parallelism a plurality of Jacobian functions of the constraints, and computing using multi-threaded parallelism a Hessian of Lagrangian functions. The method further includes outputting results of the solving, wherein the results comprise values of generation levels for the plurality of generators. Apparatus and program products are also disclosed.
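The multi-threaded constraint evaluation can be sketched as follows. The constraints, demands, and generator-to-bus mapping here are toy stand-ins (a real optimal power flow solver would evaluate power-balance and line-limit constraints, and would parallelize the Jacobian and Hessian rows in the same way).

```python
from concurrent.futures import ThreadPoolExecutor

# Toy power-balance-style constraints: generation feeding each bus
# minus a hypothetical demand (illustrative values only).
demands = [4.0, 6.0, 5.0]
buses = [(0, 1), (1, 2), (0, 2)]  # generator indices feeding each bus

def constraint(i, x):
    g1, g2 = buses[i]
    return x[g1] + x[g2] - demands[i]

def eval_constraints(x):
    # Multi-threaded evaluation of all constraints; the Jacobian and
    # Hessian of the Lagrangian can be computed with the same pattern.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda i: constraint(i, x), range(len(demands))))

values = eval_constraints([2.0, 3.0, 2.5])
```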
Abstract:
A computer-implemented method is provided of using a machine learning model for disentanglement of prosody in spoken natural language. The method includes encoding, by a computing device, the spoken natural language to produce content code. The method further includes resampling, by the computing device without text transcriptions, the content code to obscure the prosody by applying an unsupervised technique to the machine learning model to generate prosody-obscured content code. The method additionally includes decoding, by the computing device, the prosody-obscured content code to synthesize speech indirectly based upon the content code.
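The resampling step can be sketched in miniature. This is not the patented technique itself but a common unsupervised approach to obscuring duration information: split the content code into random segments and re-sample each to a random new length, with no text transcriptions involved. Segment lengths and the nearest-neighbour interpolation are assumptions.

```python
import random

def random_resample(content_code, min_len=2, max_len=4, seed=0):
    """Divide the code sequence into random segments and re-sample
    each to a random new length, obscuring frame durations (one
    component of prosody). Unsupervised: no transcripts are used."""
    rng = random.Random(seed)
    out, i = [], 0
    while i < len(content_code):
        seg = content_code[i:i + rng.randint(min_len, max_len)]
        new_len = rng.randint(min_len, max_len)
        # Nearest-neighbour resampling of the segment to new_len frames.
        out.extend(seg[int(j * len(seg) / new_len)] for j in range(new_len))
        i += len(seg)
    return out

obscured = random_resample(list(range(12)))
```

A decoder trained on such prosody-obscured codes must then synthesize speech only indirectly from the content, as the abstract describes.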
Abstract:
A processor training a reinforcement learning model can include receiving a first dataset representing an observable state in reinforcement learning to train a machine to perform an action. The processor receives a second dataset. Using the second dataset, the processor trains a machine learning classifier to make a prediction about an entity related to the action. The processor extracts an embedding from the trained machine learning classifier, and augments the observable state with the embedding to create an augmented state. Based on the augmented state, the processor trains a reinforcement learning model to learn a policy for performing the action, the policy including a mapping from state space to action space.
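The classifier-then-augment pipeline can be sketched with a toy second dataset. The logistic-regression classifier, the synthetic labels, and the choice of "embedding" (the classifier's learned per-feature contributions, standing in for a penultimate-layer activation) are all assumptions for illustration.

```python
import math, random

random.seed(0)
# Toy second dataset: hypothetical entity features with binary labels
# (label = 1 when the feature sum exceeds 2).
entities = [[random.random() for _ in range(4)] for _ in range(100)]
labels = [1 if sum(e) > 2 else 0 for e in entities]

# Train a simple logistic-regression classifier on the second dataset.
w, b, lr = [0.0] * 4, 0.0, 0.1
for _ in range(200):
    for x, y in zip(entities, labels):
        p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        g = p - y
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

def embed(entity):
    # Extracted embedding: the classifier's learned per-feature
    # contributions (a stand-in for a penultimate-layer activation).
    return [wi * xi for wi, xi in zip(w, entity)]

def augment(observable_state, entity):
    # Augmented state = observable state concatenated with the
    # embedding; the RL policy is then trained on this state space.
    return list(observable_state) + embed(entity)

aug = augment([0.5, -1.2], entities[0])
```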
Abstract:
An input transformation function that transforms input data for a second machine learning system is learned using a first machine learning system, the learning being based on minimizing a summation of a task loss and a post-activation density loss. The input data is transformed using the learned input transformation function to alter the post-activation density to reduce an amount of energy consumed for an inferencing task and the inferencing task is carried out on the transformed input data using the second machine learning system.
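The post-activation density term can be made concrete with a small sketch. Assuming ReLU activations and a weighting factor lam (neither is specified in the abstract), density is the fraction of nonzero activations; on sparsity-aware hardware, fewer nonzeros mean fewer multiply-accumulates and thus less inference energy.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def post_activation_density(activations):
    """Fraction of activations that are nonzero after ReLU."""
    nz = sum(1 for x in activations if x > 0)
    return nz / len(activations)

def combined_loss(task_loss, activations, lam=0.1):
    # The summation minimized when learning the input transformation:
    # task loss plus a weighted post-activation density loss.
    return task_loss + lam * post_activation_density(activations)

acts = relu([-1.0, 0.5, -0.2, 2.0])   # density 0.5
loss = combined_loss(0.3, acts, lam=0.1)
```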
Abstract:
A computer-implemented method for generating an abstract meaning representation ("AMR") of a sentence, comprising receiving, by a computing device, an input sentence and parsing the input sentence into one or more syntactic and/or semantic graphs. An input graph including a node set and an edge set is formed from the one or more syntactic and/or semantic graphs. Node representations are generated by natural language processing. The input graph is provided to a first neural network to provide an output graph having learned node representations aligned with the node representations in the input graph. The method further includes predicting, via a second neural network, node labels and predicting, via a third neural network, edge labels in the output graph. The AMR is generated based on the predicted node labels and predicted edge labels. A system and a non-transitory computer readable storage medium are also disclosed.
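The final assembly step can be sketched with the two label predictors replaced by lookup tables (real systems would use the trained second and third neural networks; the example sentence, labels, and default `:mod` edge label are all hypothetical).

```python
# Dummy stand-ins for the label predictors: lookups instead of the
# trained second and third neural networks of the method.
NODE_LABELS = {"boy": "boy", "wants": "want-01", "go": "go-02"}
EDGE_LABELS = {("wants", "boy"): ":ARG0", ("wants", "go"): ":ARG1"}

def generate_amr(nodes, edges):
    node_labels = {n: NODE_LABELS.get(n, n) for n in nodes}       # 2nd NN
    edge_labels = {e: EDGE_LABELS.get(e, ":mod") for e in edges}  # 3rd NN
    # Assemble AMR triples from the predicted node and edge labels.
    return [(node_labels[h], edge_labels[(h, t)], node_labels[t])
            for h, t in edges]

amr = generate_amr(["boy", "wants", "go"],
                   [("wants", "boy"), ("wants", "go")])
```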
Abstract:
A computer-implemented method is provided for transferring a target text style using Reinforcement Learning (RL). The method includes pre-determining, by a Long Short-Term Memory (LSTM) Neural Network (NN), the target text style of a target-style natural language sentence. The method further includes transforming, by a hardware processor using the LSTM NN, a source-style natural language sentence into the target-style natural language sentence that maintains the target text style of the target-style natural language sentence. The method also includes calculating an accuracy rating of a transformation of the source-style natural language sentence into the target-style natural language sentence based upon rewards relating to at least the target text style of the source-style natural language sentence.
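The reward side of the RL loop can be sketched with simple surrogates. The marker-word style scorer and Jaccard content overlap below stand in for the learned components (the abstract's LSTM-based style model); the weighting and example sentences are assumptions.

```python
def style_score(sentence, target_style_markers):
    """Fraction of target-style marker words present: a toy stand-in
    for the learned target-text-style model."""
    words = set(sentence.lower().split())
    hits = sum(1 for m in target_style_markers if m in words)
    return hits / len(target_style_markers)

def content_overlap(source, transformed):
    # Jaccard overlap of word sets, rewarding meaning preservation.
    a, b = set(source.lower().split()), set(transformed.lower().split())
    return len(a & b) / len(a | b)

def accuracy_rating(source, transformed, markers, w_style=0.5):
    # RL reward: weighted sum of style match and content preservation.
    return (w_style * style_score(transformed, markers)
            + (1 - w_style) * content_overlap(source, transformed))

r = accuracy_rating("this is bad", "this is dreadful sir",
                    ["sir", "dreadful"])
```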
Abstract:
Embodiments for providing direct access to non-volatile memory by a processor. One or more accelerators may be provided, via an application programming interface (“API”), direct access to non-volatile storage independent of a host central processing unit (“CPU”) on a control path or data path to perform a read operation and write operation of data.
Abstract:
Embodiments of the invention describe a computer-implemented method that includes receiving a query that includes a query sequence having query characters grouped into query words. A segment of program code is retrieved from a database for evaluation. The program code includes a program code sequence including program code characters grouped into program code words. The query sequence, the query words, the program code sequence, and the program code words are each converted to sequence and word representations. Query sequence-level features, query word-level features, program code sequence-level features, and program code word-level features are extracted from the sequence and word representations. Similarity between the query and the segment of program code is determined by applying a similarity metric technique to the query sequence-level features, the query word-level features, the program code sequence-level features, and the program code word-level features.
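The two-granularity similarity metric can be sketched as follows. Character trigrams for the sequence level, token counts for the word level, cosine as the similarity metric, and the equal weighting are all assumptions; the claimed method leaves the representations and metric open.

```python
import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def features(text):
    # Sequence-level features: character trigram counts;
    # word-level features: token counts.
    seq = Counter(text[i:i + 3] for i in range(len(text) - 2))
    word = Counter(text.split())
    return seq, word

def similarity(query, code, w_seq=0.5):
    qs, qw = features(query)
    cs, cw = features(code)
    # Similarity metric applied at both feature granularities.
    return w_seq * cosine(qs, cs) + (1 - w_seq) * cosine(qw, cw)

s = similarity("sort list ascending",
               "def sort(list): return sorted(list)")
```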