TRANSFER LEARNING SYSTEM AND METHOD FOR DEEP NEURAL NETWORK

    Publication No.: US20230259761A1

    Publication Date: 2023-08-17

    Application No.: US17938650

    Filing Date: 2022-10-06

    IPC Classification: G06N3/08 G06N3/04

    CPC Classification: G06N3/08 G06N3/0454

    Abstract: Disclosed is a transfer learning system for a deep neural network. The transfer learning system includes a pre-trained model storage unit configured to store a plurality of pre-trained models, which are deep neural network models trained on one or more pre-training datasets; a transfer learning data input unit configured to receive transfer learning data; a pre-trained model selecting unit configured to select, from among the plurality of stored pre-trained models, a pre-trained model corresponding to the transfer learning data; and a transfer learning unit configured to generate one or more transfer learning models by performing transfer learning with the selected pre-trained model and the transfer learning data.
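    For illustration only, the following Python (PyTorch) sketch mirrors the four units named in the abstract: a storage unit holding pre-trained models, an input unit supplying transfer-learning data, a selection unit, and a transfer-learning unit. The class names, the dimension-matching selection heuristic, and the last-layer fine-tuning strategy are assumptions made for the sketch, not the patented implementation.

    # Illustrative sketch only; class names and heuristics are assumptions.
    import torch
    import torch.nn as nn

    class PretrainedModelStore:
        """Stores pre-trained models keyed by a tag for their pre-training data."""
        def __init__(self):
            self._models = {}  # tag -> (model, expected input dimension)

        def register(self, tag, model, input_dim):
            self._models[tag] = (model, input_dim)

        def items(self):
            return self._models.items()

    def select_pretrained_model(store, transfer_x):
        """Pick the stored model whose expected input dimension matches the
        transfer-learning data (a stand-in for the selection unit)."""
        for tag, (model, input_dim) in store.items():
            if input_dim == transfer_x.shape[1]:
                return tag, model
        raise ValueError("no compatible pre-trained model found")

    def transfer_learn(model, transfer_x, transfer_y, epochs=5, lr=1e-3):
        """Fine-tune only a replaced final layer of the selected model (one common
        transfer-learning strategy; the patent does not prescribe this choice)."""
        for p in model.parameters():
            p.requires_grad = False
        model[-1] = nn.Linear(model[-1].in_features, int(transfer_y.max()) + 1)
        opt = torch.optim.Adam(model[-1].parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss_fn(model(transfer_x), transfer_y).backward()
            opt.step()
        return model

    if __name__ == "__main__":
        store = PretrainedModelStore()
        store.register("tabular-16d",
                       nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10)), 16)
        x = torch.randn(64, 16)          # transfer-learning data (input unit)
        y = torch.randint(0, 3, (64,))   # transfer-learning labels
        tag, base = select_pretrained_model(store, x)
        tuned = transfer_learn(base, x, y)
        print(f"selected '{tag}'; fine-tuned output dimension:", tuned(x).shape[1])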

    METHOD FOR STRUCTURE LEARNING AND MODEL COMPRESSION FOR DEEP NEURAL NETWORK

    Publication No.: US20220343162A1

    Publication Date: 2022-10-27

    Application No.: US17760650

    Filing Date: 2020-09-29

    Inventor: Yong Jin LEE

    IPC Classification: G06N3/08 G06N3/04

    Abstract: The present invention relates to a method for structure learning and model compression for a deep neural network. According to an embodiment of the present invention, the method includes (a) generating a parameter for a neural network model, (b) generating an objective function corresponding to the neural network model on the basis of the parameter, and (c) training the parameter and performing model learning on the basis of the objective function and training data.
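    The abstract does not disclose the concrete objective function, so the sketch below illustrates steps (a) through (c) with one common stand-in: a per-unit gate parameter for step (a), a task loss plus an L1 sparsity penalty on that parameter for step (b), and joint training followed by pruning of low-gate units for step (c). All names and hyperparameters are illustrative assumptions, not the patented method.

    # Minimal sketch under stated assumptions; not the patented objective.
    import torch
    import torch.nn as nn

    class GatedMLP(nn.Module):
        def __init__(self, in_dim=16, hidden=64, out_dim=3):
            super().__init__()
            self.fc1 = nn.Linear(in_dim, hidden)
            self.gate = nn.Parameter(torch.ones(hidden))  # (a) structure parameter
            self.fc2 = nn.Linear(hidden, out_dim)

        def forward(self, x):
            return self.fc2(torch.relu(self.fc1(x)) * self.gate)

    def objective(model, x, y, lam=1e-2):
        # (b) task loss plus a sparsity term on the structure parameter
        return nn.functional.cross_entropy(model(x), y) + lam * model.gate.abs().sum()

    def train_and_compress(model, x, y, epochs=200, lr=1e-2, threshold=1e-2):
        # (c) jointly learn weights and structure, then prune low-gate units
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            objective(model, x, y).backward()
            opt.step()
        return model.gate.abs() > threshold  # mask of hidden units to keep

    if __name__ == "__main__":
        torch.manual_seed(0)
        x, y = torch.randn(256, 16), torch.randint(0, 3, (256,))
        keep = train_and_compress(GatedMLP(), x, y)
        print(f"kept {int(keep.sum())} of {keep.numel()} hidden units")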

    DEEP NEURAL NETWORK TRAINING METHOD AND SYSTEM, AND CAUSALITY DISCOVERY METHOD

    Publication No.: US20220101134A1

    Publication Date: 2022-03-31

    Application No.: US17488812

    Filing Date: 2021-09-29

    Inventor: Yong Jin LEE

    IPC Classification: G06N3/08 G06N3/04 G06K9/62

    Abstract: Provided is a deep neural network training method for detecting causality between input values. The method includes inputting an input value of training data acquired from n input variables to an input layer of a first neural network, which is based on a graph neural network, and calculating a predicted value through an output layer; training the first neural network on the basis of first training information, which is a result of comparing the predicted value to a target value of the training data; inputting an intermediate value from an l-th hidden layer (l is a natural number greater than or equal to 1) of the first neural network to a second neural network, which is based on a deep neural network, and calculating an intermediate point value between a point at which the input value is observed and a point at which the target value is observed; and training the first and second neural networks on the basis of second training information based on the similarity between the intermediate point value and the input value of the training data.
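    A hedged sketch of the described two-network training loop follows. A plain feed-forward network stands in for the graph-neural-network-based first network, and mean-squared error stands in for the unspecified similarity measure behind the second training information; these substitutions, along with every name and dimension below, are assumptions made for illustration.

    # Hedged sketch: an MLP replaces the GNN and MSE replaces the similarity measure.
    import torch
    import torch.nn as nn

    class FirstNet(nn.Module):
        """Predicts a target from n input variables and exposes an l-th hidden layer."""
        def __init__(self, n_vars=8, hidden=32):
            super().__init__()
            self.h1 = nn.Linear(n_vars, hidden)
            self.h2 = nn.Linear(hidden, hidden)
            self.out = nn.Linear(hidden, 1)

        def forward(self, x):
            z1 = torch.relu(self.h1(x))   # intermediate value at the l = 1 hidden layer
            z2 = torch.relu(self.h2(z1))
            return self.out(z2), z1

    class SecondNet(nn.Module):
        """Maps the intermediate value toward the point where the input is observed."""
        def __init__(self, hidden=32, n_vars=8):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_vars))

        def forward(self, z):
            return self.net(z)

    if __name__ == "__main__":
        torch.manual_seed(0)
        x = torch.randn(128, 8)                               # training inputs (n = 8 variables)
        target = x[:, :1] * 2.0 + 0.1 * torch.randn(128, 1)   # synthetic target values

        first, second = FirstNet(), SecondNet()
        opt = torch.optim.Adam(list(first.parameters()) + list(second.parameters()), lr=1e-3)
        for _ in range(300):
            opt.zero_grad()
            pred, z_l = first(x)
            first_loss = nn.functional.mse_loss(pred, target)            # first training information
            second_loss = nn.functional.mse_loss(second(z_l), x)         # similarity-based second training information
            (first_loss + second_loss).backward()                        # trains both networks jointly
            opt.step()
        print("prediction loss:", float(first_loss), "similarity loss:", float(second_loss))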