Abstract:
A process control tool for processing wide data from automated manufacturing operations. The tool includes a feature selector, an analysis server, and a visualization engine. The feature selector receives process input data from at least one manufacturing process application, where the process input data includes a plurality of observations and associated variables; converts the received process input data to a stacked format having one row for each variable in each observation; converts identified categorical variables into numerical variables and identified time-series data into fixed numbers of intervals; computes statistics that measure the strengths of relationships between predictor values and an outcome variable; and orders, filters, and pivots the predictor values. The analysis server performs at least one operation on the filtered predictor values to identify interactions between predictor values, e.g., using maximum likelihood computations or predefined searches. The visualization engine displays the interactions for use in managing the manufacturing operations.
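A minimal sketch of the stack / score / filter / pivot flow described above, in Python with pandas. The use of absolute correlation as the relationship statistic, the top-k cutoff, and the column names are assumptions for illustration; time-series interval conversion is omitted. This is not the patented algorithm.

```python
# Illustrative sketch: assumes a pandas wide frame with a numeric outcome
# column, absolute correlation as the relationship statistic, and a top-k
# filter. Time-series interval handling is omitted.
import pandas as pd

def select_features(wide: pd.DataFrame, outcome: str, top_k: int = 10) -> pd.DataFrame:
    predictors = wide.drop(columns=[outcome]).copy()

    # Convert identified categorical variables into numerical variables.
    for col in predictors.select_dtypes(include=["object", "category"]).columns:
        predictors[col] = predictors[col].astype("category").cat.codes

    # Stacked format: one row for each variable in each observation.
    stacked = predictors.stack().rename("value").reset_index()
    stacked.columns = ["observation", "variable", "value"]

    # Statistics measuring the strength of each predictor/outcome relationship.
    scores = predictors.corrwith(wide[outcome]).abs().sort_values(ascending=False)
    keep = scores.head(top_k).index                      # order and filter

    # Pivot the filtered predictors back to wide form for downstream analysis.
    filtered = stacked[stacked["variable"].isin(keep)]
    return filtered.pivot(index="observation", columns="variable", values="value")
```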
Abstract:
Processing a message is disclosed. For each field group applicable to a message, from among one or more unique field groups of one or more fields identified using one or more content matchers, a compiled message corresponding to that field group is generated. It is then determined whether one or more of the compiled messages matches one or more of the content matchers.
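One plausible reading of the field-group / content-matcher flow, sketched in Python. Treating a message as a dict, a content matcher as a field-to-required-value mapping, and "compiling" as projecting the message onto a field group are assumptions made for illustration, not the claimed method.

```python
# Simplified reading (assumptions mine): a message is a dict, a content matcher
# maps field names to required values, and compiling projects the message onto
# one applicable field group.
from typing import Any, Dict, FrozenSet, List

Message = Dict[str, Any]
Matcher = Dict[str, Any]

def unique_field_groups(matchers: List[Matcher]) -> List[FrozenSet[str]]:
    # The unique field groups are the distinct sets of fields used by matchers.
    return list({frozenset(m.keys()) for m in matchers})

def compile_message(message: Message, group: FrozenSet[str]) -> Message:
    # A compiled message keeps only the fields of one applicable field group.
    return {f: message[f] for f in group if f in message}

def matching_matchers(message: Message, matchers: List[Matcher]) -> List[Matcher]:
    compiled = [compile_message(message, g) for g in unique_field_groups(matchers)
                if g <= message.keys()]          # field group applicable to the message
    return [m for m in matchers
            if any(all(c.get(f) == v for f, v in m.items()) for c in compiled)]
```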
Abstract:
Predictive systems for deploying enterprise applications include memory structures that output predictions to a user. The predictive system may include an HTM structure that comprises a tree-shaped hierarchy of memory nodes, wherein each memory node has a learning and memory function and is hierarchical in space and time, which allows the nodes to efficiently model the structure of the world. The memory nodes learn causes, predict with probability values, and form beliefs based on the input data, where the learning algorithm stores likely sequences of patterns in the nodes. By combining memory of likely sequences with current input data, the nodes may predict the next event. The predictive system may employ an HHMM structure comprising states, wherein each state is itself an HHMM. The states of the HHMM generate sequences of observation symbols for making predictions.
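A toy sketch of the core idea of combining stored likely sequences with the current input to predict the next event with a probability. The transition-count representation and the single-node scope are assumptions; this is not an HTM or HHMM implementation.

```python
# Toy sketch of sequence memory: a node counts observed transitions and
# predicts the next event with a probability value. Representation is an
# assumption for illustration only.
from collections import defaultdict
from typing import Hashable, Optional, Tuple

class SequenceMemoryNode:
    def __init__(self) -> None:
        self.counts = defaultdict(lambda: defaultdict(int))  # prev -> next -> count

    def learn(self, sequence) -> None:
        # Store likely sequences of patterns as transition counts.
        for prev, nxt in zip(sequence, sequence[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, current: Hashable) -> Optional[Tuple[Hashable, float]]:
        # Combine memory of likely sequences with the current input to
        # predict the next event and a belief (probability) for it.
        nexts = self.counts.get(current)
        if not nexts:
            return None
        total = sum(nexts.values())
        best = max(nexts, key=nexts.get)
        return best, nexts[best] / total
```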
Abstract:
Disclosed are systems and methods for efficient matching for content-based addressing wherein the systems and methods may: accept, at a receiver machine, a query; generate, at the receiver machine, a tree structure ordered by one or more fields of the query; analyze, at the receiver machine, a message from a sender machine; search, by the receiver machine, the tree structure using content from one or more fields of the message; determine, by the receiver machine, if the content values of the message match a content value of the query stored in the tree structure; and accept, by the receiver machine, the message if the content value of the message matches one or more content values of the query.
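A simplified sketch of a receiver-side matching tree ordered by query fields. Equality-only constraints, a fixed field order, and queries that specify every ordered field are assumptions made for illustration, not the disclosed structure.

```python
# Simplified matching tree (assumptions mine): one tree level per field in a
# fixed order; queries are equality constraints on those fields.
from typing import Any, Dict, List

class QueryTree:
    def __init__(self, field_order: List[str]) -> None:
        self.field_order = field_order
        self.root: Dict[Any, Any] = {}

    def add_query(self, query: Dict[str, Any], query_id: str) -> None:
        # Insert the query's values level by level, ordered by field.
        node = self.root
        for field in self.field_order:
            node = node.setdefault(query.get(field), {})
        node.setdefault("_queries", []).append(query_id)

    def match(self, message: Dict[str, Any]) -> List[str]:
        # Walk the tree with the message's field content; reaching a leaf
        # means the message's content values match a stored query.
        node = self.root
        for field in self.field_order:
            nxt = node.get(message.get(field))
            if nxt is None:
                return []            # no match: the receiver rejects the message
            node = nxt
        return node.get("_queries", [])
```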
Abstract:
An extended state machine that makes use of an inference engine as the infrastructure for adding inferential capabilities to the state machine's execution. The result is a state machine that may operate on partial or disordered information, inferring intermediate states that have yet to be formally traversed. In addition, controls such as state timeouts and transition priorities allow for finer control of the state machine's execution, particularly in unexpected circumstances.
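A minimal sketch of one way to infer intermediate states that have yet to be formally traversed when an event arrives out of order. A breadth-first search over the transition graph stands in for the inference engine; state timeouts and transition priorities are omitted, and all names are illustrative.

```python
# Minimal sketch (assumptions mine): transitions keyed by (state, event);
# skipped intermediate states inferred by breadth-first search. Not the
# disclosed inference engine.
from collections import deque
from typing import Dict, List, Tuple

class InferringStateMachine:
    def __init__(self, transitions: Dict[Tuple[str, str], str], start: str) -> None:
        self.transitions = transitions      # (state, event) -> next state
        self.state = start

    def _infer_path(self, event: str) -> List[str]:
        # Find a shortest chain of states from which the event becomes valid.
        seen, queue = {self.state}, deque([(self.state, [])])
        while queue:
            state, path = queue.popleft()
            if (state, event) in self.transitions:
                return path
            for (s, _e), nxt in self.transitions.items():
                if s == state and nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [nxt]))
        raise ValueError(f"no reachable state accepts event {event!r}")

    def handle(self, event: str) -> str:
        # Infer and traverse intermediate states if the event is out of order.
        for inferred in self._infer_path(event):
            self.state = inferred
        self.state = self.transitions[(self.state, event)]
        return self.state
```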
Abstract:
Methods for efficiently determining and managing version information associated with sets of data objects, persistently storing the version information, and utilizing the stored version information to determine compatibility between the sets of data objects and applications performing operations utilizing the sets of data objects.
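A minimal sketch of persisting version information for a set of data objects and checking it against the versions an application supports. The JSON file format and per-object integer versions are assumptions made for illustration, not the claimed methods.

```python
# Illustrative sketch (assumptions: versions persisted as JSON, an application
# declares the versions it supports per object name).
import json
from pathlib import Path
from typing import Dict, Iterable

def persist_versions(objects: Dict[str, int], path: Path) -> None:
    # Persistently store the version of each data object in the set.
    path.write_text(json.dumps(objects))

def is_compatible(path: Path, supported: Dict[str, Iterable[int]]) -> bool:
    # Compatible if every stored object's version is one the application supports.
    stored = json.loads(path.read_text())
    return all(version in supported.get(name, ()) for name, version in stored.items())
```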
Abstract:
Techniques to store graph information in a database are disclosed. In various embodiments, each node in a graph may be modeled as a micro b-tree. Node identity, attribute, edge, and edge attribute data may be stored in one or more pages modeled on page formats typically used to store index data for a relational database index. Data associated with a plurality of nodes and edges, each of said edges representing a relationship between two or more of said nodes, may be received. For each node, one or more pages of data may be created, each corresponding to a prescribed page size associated with a storage device in which said one or more pages are to be stored, and each page having a data structure that includes a variable-sized set of fixed length data slots and a variable-sized variable length data region.
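A rough sketch of a page whose header is a variable-sized set of fixed-length slots and whose payloads live in a variable-length region growing from the other end of the page. The 4 KB page size, the slot encoding, and the integer keys are assumptions for illustration, not the disclosed format.

```python
# Slotted-page sketch (assumptions mine): fixed-length slot entries grow from
# the front of the page, variable-length payloads grow from the back.
import struct
from typing import Optional

PAGE_SIZE = 4096
SLOT = struct.Struct("<IHH")      # key (uint32), offset (uint16), length (uint16)

class NodePage:
    def __init__(self) -> None:
        self.buf = bytearray(PAGE_SIZE)
        self.slot_count = 0
        self.free_end = PAGE_SIZE    # variable-length region grows downward

    def insert(self, key: int, payload: bytes) -> bool:
        # Fail if the fixed-length slot array and the data region would overlap.
        slot_end = (self.slot_count + 1) * SLOT.size
        if slot_end > self.free_end - len(payload):
            return False
        self.free_end -= len(payload)
        self.buf[self.free_end:self.free_end + len(payload)] = payload
        SLOT.pack_into(self.buf, self.slot_count * SLOT.size,
                       key, self.free_end, len(payload))
        self.slot_count += 1
        return True

    def lookup(self, key: int) -> Optional[bytes]:
        for i in range(self.slot_count):
            k, off, length = SLOT.unpack_from(self.buf, i * SLOT.size)
            if k == key:
                return bytes(self.buf[off:off + length])
        return None
```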
Abstract:
Steady state data distribution is provided between a client application, a leader machine, and a plurality of replica machines. The distribution comprises the leader machine receiving an operation request from the client application, the leader machine sending a prepare message to each of the plurality of replica machines, the replica machines recording in their logs information on the operation, the replica machines sending acknowledgement messages to the leader machine, and the leader machine sending commit command messages to the replica machines. A new quorum of the replica machines is created by using log information. Replica machines that become part of the new quorum are updated in an efficient manner.
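A minimal in-process sketch of the prepare / acknowledge / commit exchange between the leader and the replicas. The majority-quorum rule, the method names, and the absence of real networking are simplifying assumptions; quorum reconfiguration and replica catch-up are omitted.

```python
# In-process sketch of the steady-state exchange (assumptions mine): the leader
# sends prepare to each replica, counts acknowledgements, and commits on quorum.
from typing import Any, List, Tuple

class Replica:
    def __init__(self, name: str) -> None:
        self.name = name
        self.log: List[Tuple[int, Any]] = []

    def prepare(self, seq: int, op: Any) -> str:
        self.log.append((seq, op))       # record information on the operation
        return "ack"

    def commit(self, seq: int) -> None:
        pass                             # apply the committed operation here

class Leader:
    def __init__(self, replicas: List[Replica]) -> None:
        self.replicas = replicas
        self.seq = 0

    def handle_request(self, op: Any) -> bool:
        # Send a prepare message to every replica and count acknowledgements.
        self.seq += 1
        acks = sum(1 for r in self.replicas if r.prepare(self.seq, op) == "ack")
        if acks < len(self.replicas) // 2 + 1:     # need a majority quorum
            return False
        for r in self.replicas:
            r.commit(self.seq)                     # send commit command messages
        return True
```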