NODE DISAMBIGUATION
    Invention Application

    Publication Number: US20220215260A1

    Publication Date: 2022-07-07

    Application Number: US17702064

    Application Date: 2022-03-23

    Abstract: A data processing system for implementing a machine learning process in dependence on a graph neural network, the system being configured to receive a plurality of input graphs each having a plurality of nodes, at least some of the nodes having an attribute, the system being configured to: for at least one graph of the input graphs: determine one or more sets of nodes of the plurality of nodes, the nodes of each set having identical attributes; for each set, assign a label to each of the nodes of that set so that each node of a set has a different label from the other nodes of that set; process the sets to form an aggregate value; and implement the machine learning process taking as input: (i) the input graphs with the exception of the said sets and (ii) the aggregate value.
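The grouping-and-labelling procedure the abstract describes can be sketched as follows. This is an illustrative reading, not the patent's implementation; the function and variable names are assumptions.

```python
from collections import defaultdict

def disambiguate(nodes):
    """Hypothetical sketch: nodes is a dict mapping node id -> attribute.

    Nodes with identical attributes are grouped into sets, each node in a
    set gets a label distinct from the others in that set, and the sets
    are processed into a single aggregate value (here: total set size).
    """
    # 1. Determine sets of nodes having identical attributes.
    groups = defaultdict(list)
    for node_id, attr in nodes.items():
        groups[attr].append(node_id)
    sets = [ids for ids in groups.values() if len(ids) > 1]

    # 2. Within each set, assign every node a different label.
    labels = {node_id: i for s in sets for i, node_id in enumerate(s)}

    # 3. Process the sets to form an aggregate value.
    aggregate = sum(len(s) for s in sets)
    return labels, aggregate

# "a" and "b" share attribute "x", so they form one set with labels 0 and 1.
labels, agg = disambiguate({"a": "x", "b": "x", "c": "y"})
```

The machine learning process would then consume the graphs minus these sets, plus the aggregate value.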

    APPARATUSES AND METHODS FOR DETECTING MALWARE

    Publication Number: US20220092176A1

    Publication Date: 2022-03-24

    Application Number: US17538703

    Application Date: 2021-11-30

    Abstract: Apparatuses and methods for determining whether a computer program is malware and to which malware class it belongs. In the method, the behaviour of a computer program is traced by observing the activity of the program. Behaviour sequences comprising API calls or similar activity of the computer program are then provided to a classifier for classifying the program. From the outcome of the classifier, a classification result and the portions relevant to the decision can be provided to a person for further confirmation.
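The pipeline in the abstract (trace API calls, classify, surface the decision-relevant portions for human review) can be sketched with a toy rule-based scorer standing in for the trained classifier. All names here are illustrative assumptions, not from the patent.

```python
# Hypothetical set of API calls treated as suspicious by the toy classifier.
SUSPICIOUS = {"WriteProcessMemory", "CreateRemoteThread", "SetWindowsHookEx"}

def classify(trace):
    """trace: list of API-call names observed while running a program.

    Returns a verdict plus the calls that drove the decision, so a person
    can confirm the classification (as the abstract describes).
    """
    hits = [call for call in trace if call in SUSPICIOUS]
    verdict = "malware" if len(hits) >= 2 else "benign"
    return verdict, hits

verdict, evidence = classify(
    ["OpenProcess", "WriteProcessMemory", "CreateRemoteThread"]
)
```

A real system would replace the rule with a learned sequence classifier; the key point is that the evidence list travels with the verdict.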

    APPARATUS AND METHOD FOR DISTRIBUTED NEURAL NETWORKS

    Publication Number: US20240062074A1

    Publication Date: 2024-02-22

    Application Number: US18495528

    Application Date: 2023-10-26

    CPC classification number: G06N3/098

    Abstract: The present invention relates to a first device and at least two second devices for performing distributive machine learning and inference in a communication system. Each device comprises a neural network, NN. The NNs are trained distributively in a training phase and may also be activated during the inference phase, so that an amount of data exchange may be reduced in the communication system. During the training phase and the inference phase, the at least two second devices provide activation vectors of output layers of their NNs to the first device. The first device combines those activation vectors to generate an input for its NN. During backpropagation, the first device may split or broadcast an error vector of the input layer of its NN to the at least two second devices. In this way, an arbitrary number of data sources may be handled by the communication system.
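The data flow described above (second devices send output-layer activations, the first device combines them as input to its own network, and on backpropagation the input-layer error vector is split back to the senders) can be sketched numerically. The layer sizes and names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "second devices", each with a small NN (here one linear layer, 4 -> 3).
w_a = rng.normal(size=(3, 4))
w_b = rng.normal(size=(3, 4))

# The "first device" runs its own NN over the combined activations (6 -> 2).
w_head = rng.normal(size=(2, 6))

# Each second device computes its output-layer activation vector locally
# and sends only that vector to the first device.
x_a, x_b = rng.normal(size=4), rng.normal(size=4)
act_a, act_b = np.tanh(w_a @ x_a), np.tanh(w_b @ x_b)

# First device combines the activation vectors to form its NN's input.
fused = w_head @ np.concatenate([act_a, act_b])

# Backpropagation: the error vector at the first device's input layer is
# split and returned, so each second device can update its weights locally.
err_out = fused - np.ones(2)          # toy loss gradient at the head output
err_in = w_head.T @ err_out           # error at the first device's input layer
err_a, err_b = err_in[:3], err_in[3:]  # split back to the two second devices
```

Only activation and error vectors cross the network, which is the data-exchange reduction the abstract claims; adding a third data source would just extend the concatenation and the split.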
