Abstract:
Methods and apparatus to facilitate generation of database queries are disclosed. An example apparatus includes a generator to generate a global importance tensor, the global importance tensor being based on a knowledge graph representative of information stored in a database. The knowledge graph includes objects and connections between the objects, and the global importance tensor includes importance values for different types of the connections between the objects. The example apparatus further includes an importance adaptation analyzer to generate a session importance tensor based on the global importance tensor and a user query, and a user interface to provide a suggested query to a user based on the session importance tensor.
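The abstract stops at the component level, so the following minimal Python sketch is only one way to picture the data flow: relation-type frequencies stand in for the global importance tensor, and a simple query-term boost stands in for the session adaptation. All names, weights, and the suggestion rule are invented for illustration.

from collections import Counter

def global_importance(edges):
    # edges: (subject, relation, object) triples from the knowledge graph.
    # Here a relation type's importance is just its relative frequency.
    counts = Counter(rel for _, rel, _ in edges)
    total = sum(counts.values())
    return {rel: c / total for rel, c in counts.items()}

def session_importance(global_imp, query_terms):
    # Boost relation types mentioned by the current user query, then renormalize.
    boosted = {rel: w * (2.0 if rel in query_terms else 1.0)
               for rel, w in global_imp.items()}
    norm = sum(boosted.values())
    return {rel: w / norm for rel, w in boosted.items()}

def suggest_query(session_imp, k=1):
    # Suggest expanding the query along the k currently most important relations.
    top = sorted(session_imp, key=session_imp.get, reverse=True)[:k]
    return ["expand along '%s'" % rel for rel in top]

edges = [("order", "placed_by", "customer"),
         ("order", "contains", "product"),
         ("product", "supplied_by", "vendor"),
         ("order", "contains", "product")]
print(suggest_query(session_importance(global_importance(edges), {"contains"})))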
Abstract:
Systems and methods may be used to prevent attacks on a malware detection system. A method may include modeling a time series of directed graphs from incoming binary files during training of a machine learning system and detecting, during a time window of the time series, an anomaly based on one of the directed graphs in the series. The method may include providing an alert that the anomaly has corrupted the machine learning system, and preventing or remedying corruption of the machine learning system.
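As a rough illustration only (the abstract names no concrete graph statistic), the sketch below models each training window as a set of directed edges, say binary-to-imported-API links, and flags a window whose graph differs sharply from its predecessor; the Jaccard distance and threshold are invented stand-ins.

def jaccard_distance(a, b):
    # Structural distance between two directed graphs given as edge sets.
    union = a | b
    return (1.0 - len(a & b) / len(union)) if union else 0.0

def detect_anomalies(windows, threshold=0.7):
    # windows: one edge set (directed graph) per time window of training data.
    alerts = []
    for t in range(1, len(windows)):
        if jaccard_distance(windows[t - 1], windows[t]) > threshold:
            alerts.append(t)  # candidate poisoning of the training stream
    return alerts

windows = [
    {("bin1", "CreateFile"), ("bin2", "ReadFile")},
    {("bin1", "CreateFile"), ("bin3", "ReadFile")},
    {("binX", "VirtualAlloc"), ("binX", "WriteProcessMemory")},  # abrupt shift
]
print(detect_anomalies(windows))  # -> [2]; window 2 would trigger the alert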
Abstract:
Systems and methods may be used to classify incoming testing data, such as binaries, function calls, an application package, or the like, to determine whether the testing data is contaminated by an adversarial attack or is benign, while training a machine learning system to detect malware. A method may include using a sparse coding technique or a semi-supervised learning technique to classify the testing data. Training data may be used to represent the testing data under the sparse coding technique or to train the supervised portion of the semi-supervised learning technique.
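To make the sparse-coding branch concrete, here is a small sketch using scikit-learn's SparseCoder; the feature vectors, the dictionary built from benign training data, and the reconstruction-error threshold are all assumptions, not details from the abstract. The intuition: data well represented by a dictionary of clean training samples is likely benign, while data that reconstructs poorly is suspect.

import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(0)
benign_train = rng.normal(size=(50, 16))  # stand-in feature vectors
dictionary = benign_train / np.linalg.norm(benign_train, axis=1, keepdims=True)

coder = SparseCoder(dictionary=dictionary, transform_algorithm="omp",
                    transform_n_nonzero_coefs=5)

def looks_adversarial(x, threshold=0.5):
    # Sparse-code x over the benign dictionary, then measure how much of it
    # the reconstruction fails to explain.
    codes = coder.transform(x.reshape(1, -1))
    reconstruction = (codes @ dictionary)[0]
    error = np.linalg.norm(x - reconstruction) / np.linalg.norm(x)
    return error > threshold

print(looks_adversarial(benign_train[0]))               # False: in-dictionary
print(looks_adversarial(rng.normal(loc=5.0, size=16)))  # likely True: off-manifold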
Abstract:
A head-mounted information system is provided, comprising a frame configured to be mounted on a head of a user, a display unit coupled to the frame, a sensor unit coupled to the frame comprising one or more motion sensors, and a processor unit coupled to the frame and connected to receive signals from the motion sensors. The processor unit comprises a processor and a memory accessible by the processor. The processor unit is configured to monitor the received signals and enter a gesture control mode upon detection of a gesture control enable signal. In the gesture control mode, the processor is configured to convert signals received from the motion sensors into menu navigation commands.
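A toy Python state machine can make the mode switch concrete; the sensor channels, thresholds, and gesture-to-command mapping below are invented, since the abstract does not specify them.

class GestureNavigator:
    def __init__(self, enable_threshold=2.0, nav_threshold=0.8):
        self.enabled = False          # gesture control mode starts off
        self.enable_threshold = enable_threshold
        self.nav_threshold = nav_threshold

    def process(self, gyro_x, gyro_y, accel_z):
        if not self.enabled:
            # A sharp acceleration spike stands in for the enable signal.
            if accel_z > self.enable_threshold:
                self.enabled = True
                return "ENTER_GESTURE_MODE"
            return None
        if gyro_y > self.nav_threshold:
            return "MENU_NEXT"        # head turn one way
        if gyro_y < -self.nav_threshold:
            return "MENU_PREVIOUS"    # head turn the other way
        if gyro_x > self.nav_threshold:
            return "MENU_SELECT"      # nod
        return None

nav = GestureNavigator()
for sample in [(0.0, 0.0, 2.5), (0.0, 1.0, 0.0), (1.0, 0.0, 0.0)]:
    command = nav.process(*sample)
    if command:
        print(command)  # ENTER_GESTURE_MODE, MENU_NEXT, MENU_SELECT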
Abstract:
Methods, apparatus, systems and articles of manufacture are disclosed for anomalous memory access pattern detection for translation lookaside buffers. An example apparatus includes a communication interface to retrieve a first eviction data set from a translation lookaside buffer (TLB) associated with a central processing unit; a machine learning engine to: generate an anomaly detection model based upon at least one of a second eviction data set not including an anomaly and a third eviction data set including the anomaly; and determine whether the anomaly is present in the first eviction data set based on the anomaly detection model; and an alert generator to at least one of modify a bit value or terminate memory access operations when the anomaly is determined to be present.
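One plausible reading, sketched below with scikit-learn's IsolationForest: summarize each eviction trace as a feature vector, fit a model on clean traces (the "second eviction data set"), and alert on outliers. The three features and all numbers are invented for illustration.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Each row summarizes one TLB eviction trace, e.g. eviction rate,
# distinct-set count, stride entropy (hypothetical features).
clean_evictions = rng.normal(loc=[10.0, 4.0, 0.5], scale=0.5, size=(200, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(clean_evictions)

def check_eviction_set(features):
    if model.predict(features.reshape(1, -1))[0] == -1:
        # A real responder might flip a control bit or halt memory accesses here.
        return "ALERT: anomalous TLB eviction pattern"
    return "ok"

print(check_eviction_set(np.array([10.2, 3.9, 0.6])))   # ok
print(check_eviction_set(np.array([80.0, 60.0, 3.0])))  # ALERT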
Abstract:
An apparatus comprising at least one interface to receive a signal identifying a second vehicle in proximity to a first vehicle; and processing circuitry to obtain a behavioral model associated with the second vehicle, wherein the behavioral model defines driving behavior of the second vehicle; use the behavioral model to predict actions of the second vehicle; and determine a path plan for the first vehicle based on the predicted actions of the second vehicle.
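As an illustration of the predict-then-plan loop (with a constant-velocity model standing in for the learned behavioral model, and invented geometry), consider:

def predict_positions(state, model, horizon=3, dt=1.0):
    # state = (x, y, vx, vy); the "behavioral model" here is just a speed
    # factor, e.g. > 1.0 for an aggressive driver -- a crude stand-in.
    x, y, vx, vy = state
    k = model["speed_factor"]
    return [(x + vx * k * dt * t, y + vy * k * dt * t)
            for t in range(1, horizon + 1)]

def plan_path(candidates, predicted, safety=2.0):
    # Keep the first candidate path whose waypoints stay clear of every
    # predicted position of the second vehicle.
    for path in candidates:
        if all(abs(px - qx) + abs(py - qy) >= safety
               for (px, py) in path for (qx, qy) in predicted):
            return path
    return None  # no safe candidate: brake and replan

other = predict_positions((0.0, 0.0, 1.0, 0.0), {"speed_factor": 1.5})
paths = [[(1.5, 0.0), (3.0, 0.0)], [(1.5, 4.0), (3.0, 4.0)]]
print(plan_path(paths, other))  # selects the laterally offset path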
Abstract:
A technique includes processing a plurality of sets of program code to extract call graphs; determining similarities between the call graphs; applying unsupervised machine learning to an input formed from the determined similarities to determine latent features of the input; clustering the determined latent features; and determining a characteristic of a given program code set of the plurality of program code sets based on a result of the clustering.
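The pipeline maps naturally onto off-the-shelf tools; in the sketch below, Jaccard similarity over call edges stands in for the unspecified similarity measure, truncated SVD for the unsupervised latent-feature step (an autoencoder would also fit), and k-means for the clustering. Everything concrete here is an assumption.

import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

def graph_similarity(g1, g2):
    # Jaccard similarity over call-graph edges (caller, callee).
    union = g1 | g2
    return len(g1 & g2) / len(union) if union else 1.0

call_graphs = [
    {("main", "parse"), ("parse", "read")},
    {("main", "parse"), ("parse", "read"), ("main", "log")},
    {("start", "decrypt"), ("decrypt", "exec")},
]
S = np.array([[graph_similarity(a, b) for b in call_graphs] for a in call_graphs])

latent = TruncatedSVD(n_components=2, random_state=0).fit_transform(S)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(latent)
print(labels)  # programs 0 and 1 share a cluster; program 2 stands apart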
Abstract:
In one example an apparatus comprises a memory and a processor to create, from a first deep neural network (DNN) model, a first plurality of DNN models, generate a first set of adversarial examples that are misclassified by the first plurality of DNN models, determine a first set of activation path differentials between the adversarial examples in the first set, generate, from the first set of activation path differentials, at least one composite adversarial example which incorporates at least one intersecting critical path that is shared between at least two adversarial examples in the first set of adversarial examples, and use the at least one composite adversarial example to generate a set of inputs for a subsequent training iteration of the DNN model. Other examples may be described.
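With "activation path" read as the set of most-active hidden units (one possible reading; the patent's definition may differ), a tiny numpy model shows how path differentials and their intersection could be computed:

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))  # one hidden layer of a toy network

def activation_path(x, top_k=3):
    # Indices of the most-active ReLU units: a crude activation-path proxy.
    h = np.maximum(W @ x, 0.0)
    return frozenset(int(i) for i in np.argsort(h)[-top_k:])

def path_differential(x_clean, x_adv):
    # Units the adversarial input activates that the clean input does not.
    return activation_path(x_adv) - activation_path(x_clean)

x = rng.normal(size=4)
adv_a = x + 0.5 * rng.normal(size=4)  # stand-ins for crafted adversarial inputs
adv_b = x + 0.5 * rng.normal(size=4)

critical = path_differential(x, adv_a) & path_differential(x, adv_b)
print("shared critical units:", sorted(critical))
# A composite example would be crafted to exercise exactly these shared
# units, then added to the next training batch to harden the model.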
Abstract:
Methods, systems, articles of manufacture and apparatus to detect process hijacking are disclosed herein. An example apparatus to detect control flow anomalies includes a parsing engine to compare a target instruction pointer (TIP) address to a dynamic link library (DLL) module list, and in response to detecting a match of the TIP address to a DLL in the DLL module list, set a first portion of a normalized TIP address to a value equal to an identifier of the DLL. The example apparatus disclosed herein also includes a DLL entry point analyzer to set a second portion of the normalized TIP address based on a comparison between the TIP address and an entry point of the DLL, and a model compliance engine to generate a flow validity decision based on a comparison between (a) the first and second portions of the normalized TIP address and (b) a control flow integrity model.
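The normalization scheme can be pictured with invented field widths (the abstract does not give the actual encoding): the high bits of the normalized TIP carry the DLL identifier, a low bit records whether the TIP hit the DLL's entry point, and validity becomes a set-membership test against the model.

def normalize_tip(tip, modules):
    # modules: loaded DLLs as dicts with base address, size, entry offset.
    for dll_id, m in enumerate(modules):
        if m["base"] <= tip < m["base"] + m["size"]:
            at_entry = 1 if tip == m["base"] + m["entry"] else 0
            return (dll_id << 1) | at_entry  # hypothetical 1-bit low portion
    return None  # TIP outside every known module: suspicious on its own

def flow_is_valid(normalized, allowed):
    # allowed: the control flow integrity model, here simply the set of
    # normalized TIP values observed during clean training runs.
    return normalized in allowed

modules = [{"base": 0x7FF000000, "size": 0x10000, "entry": 0x1000}]
allowed = {normalize_tip(0x7FF001000, modules)}  # legitimate entry-point call
print(flow_is_valid(normalize_tip(0x7FF001000, modules), allowed))  # True
print(flow_is_valid(normalize_tip(0x7FF005555, modules), allowed))  # False: possible hijack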