SYSTEM AND METHOD FOR SECURITY IN INTERNET-OF-THINGS AND CYBER-PHYSICAL SYSTEMS BASED ON MACHINE LEARNING

    Publication Number: US20220201014A1

    Publication Date: 2022-06-23

    Application Number: US17603453

    Application Date: 2020-02-25

    Abstract: According to various embodiments, a method for detecting security vulnerabilities in at least one of cyber-physical systems (CPSs) and Internet of Things (IoT) devices is disclosed. The method includes constructing an attack directed acyclic graph (DAG) from a plurality of regular expressions, where each regular expression corresponds to control-data flow for a known CPS/IoT attack. The method further includes performing a linear search on the attack DAG to determine unexploited CPS/IoT attack vectors, where a path in the attack DAG that does not represent a known CPS/IoT attack vector represents an unexploited CPS/IoT attack vector. The method also includes applying a trained machine learning module to the attack DAG to predict new CPS/IoT vulnerability exploits. The method further includes constructing a defense DAG configured to protect against the known CPS/IoT attacks, the unexploited CPS/IoT attacks, and the new CPS/IoT vulnerability exploits.
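The linear search described above can be sketched as follows. This is a minimal illustration, not the patented method: the DAG edges, node labels, and known-attack regular expressions are all hypothetical, and the sketch assumes each known attack is a regex over a path's node-label sequence.

```python
import re

# Hypothetical attack DAG: nodes are attack steps, edges are
# control/data-flow transitions (illustrative, not from the patent).
EDGES = {
    "entry": ["scan", "phish"],
    "scan": ["exploit"],
    "phish": ["exploit"],
    "exploit": ["exfiltrate", "disrupt"],
    "exfiltrate": [],
    "disrupt": [],
}

# Each known attack is a regular expression over a path's label sequence.
KNOWN_ATTACKS = [
    r"entry->scan->exploit->exfiltrate",
    r"entry->phish->exploit->.*",  # any outcome after phishing is known
]

def all_paths(dag, node="entry", prefix=()):
    """Depth-first enumeration of root-to-leaf paths in the attack DAG."""
    prefix = prefix + (node,)
    if not dag[node]:
        yield "->".join(prefix)
    for nxt in dag[node]:
        yield from all_paths(dag, nxt, prefix)

def unexploited_vectors(dag, known):
    """Linear scan over DAG paths: a path matching no known-attack
    regex is flagged as an unexploited attack vector."""
    patterns = [re.compile(p) for p in known]
    return [p for p in all_paths(dag)
            if not any(rx.fullmatch(p) for rx in patterns)]
```

Here `unexploited_vectors(EDGES, KNOWN_ATTACKS)` flags `entry->scan->exploit->disrupt`, the one path no known-attack pattern covers.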

    SYSTEM AND METHOD FOR WEARABLE MEDICAL SENSOR AND NEURAL NETWORK BASED DIABETES ANALYSIS

    Publication Number: US20220240864A1

    Publication Date: 2022-08-04

    Application Number: US17619449

    Application Date: 2020-06-16

    Abstract: According to various embodiments, a machine-learning based system for diabetes analysis is disclosed. The system includes one or more processors configured to interact with a plurality of wearable medical sensors (WMSs). The processors are configured to receive physiological data from the WMSs and demographic data from a user interface. The processors are further configured to train at least one neural network based on a grow-and-prune paradigm to generate at least one diabetes inference model. The neural network grows at least one of connections and neurons based on gradient information and prunes away at least one of connections and neurons based on magnitude information. The processors are also configured to output a diabetes-based decision by inputting the received physiological data and demographic data into the generated diabetes inference model.
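The grow-and-prune paradigm can be illustrated with a toy connectivity mask over a single layer. This is a simplified sketch, not the patented training procedure: the layer size, sparsity level, and stand-in gradient values are all assumptions, and a real implementation would interleave these phases with actual training steps.

```python
import random

random.seed(0)
N = 64  # toy layer: 64 candidate connections (illustrative)

# Binary connectivity mask: 1 = connection present, 0 = absent.
weights = [random.gauss(0, 0.1) for _ in range(N)]
mask0 = [1 if random.random() < 0.3 else 0 for _ in range(N)]
grads = [random.gauss(0, 1) for _ in range(N)]  # stand-in for backprop gradients

def grow(mask, grads, k):
    """Activate the k absent connections with the largest gradient magnitude."""
    absent = sorted((i for i in range(len(mask)) if mask[i] == 0),
                    key=lambda i: abs(grads[i]), reverse=True)
    new = mask[:]
    for i in absent[:k]:
        new[i] = 1
    return new

def prune(mask, weights, k):
    """Deactivate the k present connections with the smallest weight magnitude."""
    present = sorted((i for i in range(len(mask)) if mask[i] == 1),
                     key=lambda i: abs(weights[i]))
    new = mask[:]
    for i in present[:k]:
        new[i] = 0
    return new

mask1 = grow(mask0, grads, k=5)     # gradient-driven growth phase
mask2 = prune(mask1, weights, k=5)  # magnitude-driven pruning phase
```

The net effect is an architecture search: growth adds capacity where the loss gradient says it would help, and pruning removes connections whose small weights contribute little.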

    HARDWARE-SOFTWARE CO-DESIGN FOR EFFICIENT TRANSFORMER TRAINING AND INFERENCE

    Publication Number: US20250037028A1

    Publication Date: 2025-01-30

    Application Number: US18782768

    Application Date: 2024-07-24

    Abstract: Methods for co-designing transformer-accelerator pairs are provided. The methods may include using a transformer embedding to generate a computational graph and a transformer model. The methods may include running the computational graph through a surrogate model and outputting accuracy data of the surrogate model. The methods may include using an accelerator embedding and the transformer model to simulate training and inference tasks and outputting hardware performance data of the transformer model. The methods may include sending the hardware performance data (such as latency, leakage energy, dynamic energy, and chip area, which may be optimizable performance parameters) and model accuracy data to a co-design optimizer. The methods may include generating an output transformer-accelerator or transformer-edge-device pair from the co-design optimizer. The transformer model and accelerator embedding may together form the output transformer-accelerator or transformer-edge-device pair.
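The co-design loop can be sketched as a joint search over transformer and accelerator embeddings. Everything here is a placeholder assumption: the search spaces, the surrogate accuracy model, the scalarized hardware cost, and the random-search optimizer stand in for the patent's actual embeddings, simulator, and optimizer.

```python
import random

random.seed(1)

# Hypothetical search spaces (assumptions, not the patent's embeddings).
TRANSFORMERS = [{"layers": l, "heads": h} for l in (2, 4, 6) for h in (4, 8)]
ACCELERATORS = [{"pes": p, "buffer_kb": b} for p in (16, 64) for b in (128, 512)]

def surrogate_accuracy(t):
    """Stand-in surrogate model: larger transformers score higher (toy)."""
    return 0.7 + 0.02 * t["layers"] + 0.01 * t["heads"]

def hardware_cost(t, a):
    """Stand-in simulator: latency/energy/area folded into one scalar (toy)."""
    work = t["layers"] * t["heads"]
    return work / a["pes"] + 0.001 * a["buffer_kb"]

def co_design(n_trials=50):
    """Random-search co-design optimizer: maximize accuracy minus
    a weighted hardware cost over sampled (transformer, accelerator) pairs."""
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        t = random.choice(TRANSFORMERS)
        a = random.choice(ACCELERATORS)
        score = surrogate_accuracy(t) - 0.1 * hardware_cost(t, a)
        if score > best_score:
            best, best_score = (t, a), score
    return best, best_score

pair, score = co_design()
```

The key design point is that accuracy and hardware metrics feed a single optimizer, so model architecture and accelerator configuration are chosen jointly rather than in sequence.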

    SYSTEM AND METHOD FOR LABEL ERROR DETECTION VIA CLUSTERING TRAINING LOSSES

    Publication Number: US20240419966A1

    Publication Date: 2024-12-19

    Application Number: US18743768

    Application Date: 2024-06-14

    Abstract: Systems and methods are provided for tackling a significant problem in data analytics: inaccurate dataset labeling. Such inaccuracies can compromise machine learning model performance. To counter this, a label error detection algorithm is provided that efficiently identifies and removes samples with corrupted labels. The provided framework (CTRL) detects label errors in two steps, based on the observation that models learn clean and noisy labels in different ways. First, one trains a neural network on the noisy training dataset and obtains the loss curve for each sample. Then, one applies clustering algorithms to the training losses to group samples into two categories: cleanly labeled and noisily labeled. After label error detection, one removes the samples with noisy labels and retrains the model.
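The two-step procedure can be sketched on synthetic loss curves. This is a toy illustration under stated assumptions, not the CTRL implementation: the loss-curve generator encodes the paper's observation that cleanly labeled samples' losses decay quickly while noisy-label losses stay high, and a minimal 2-means over mean training loss stands in for the framework's clustering step.

```python
import random

random.seed(0)

# Synthetic per-sample loss curves (assumption: clean losses decay fast,
# noisy-label losses stay high, matching the observation in the abstract).
def loss_curve(noisy, epochs=10):
    rate = 0.02 if noisy else 0.3
    return [2.0 * (1 - rate) ** e + random.uniform(0, 0.05)
            for e in range(epochs)]

labels_true = [i >= 80 for i in range(100)]   # last 20 samples mislabeled
curves = [loss_curve(n) for n in labels_true]

def kmeans_2(points, iters=20):
    """Minimal 2-means over each sample's mean training loss; returns one
    flag per sample, True for the higher-loss (noisy) cluster."""
    feats = [sum(c) / len(c) for c in points]
    lo, hi = min(feats), max(feats)          # init centroids at extremes
    for _ in range(iters):
        assign = [abs(f - hi) < abs(f - lo) for f in feats]
        lo = sum(f for f, a in zip(feats, assign) if not a) / \
            max(1, sum(not a for a in assign))
        hi = sum(f for f, a in zip(feats, assign) if a) / \
            max(1, sum(a for a in assign))
    return assign

flagged = kmeans_2(curves)
clean_indices = [i for i, noisy in enumerate(flagged) if not noisy]
```

With the flagged samples removed, `clean_indices` selects the retraining set, completing the detect-then-retrain pipeline the abstract describes.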
