-
Publication No.: US12204551B2
Publication Date: 2025-01-21
Application No.: US17249939
Application Date: 2021-03-19
Inventor: Ji Liu , Haoyi Xiong , Dejing Dou , Siyu Huang , Jizhou Huang , Zhi Feng , Haozhe An
IPC: G06F16/24 , G06F16/2458 , G06F21/53
Abstract: Embodiments of the present disclosure provide a data mining system, a data mining method, and a storage medium. The data mining system includes a transfer device, a first trusted execution space, and a second trusted execution space. The transfer device is configured to receive a data calling request from the second trusted execution space, obtain the data to be called from the first trusted execution space according to the request, and provide the data to be called to the second trusted execution space, so that data mining is performed based on the data to be called and the mining-related data to obtain a data mining result, which is then provided to a device of the data user.
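The abstract describes an architecture in which a transfer device brokers data between two trusted execution spaces. Below is a minimal, self-contained Python sketch of that flow; the class and method names (TransferDevice, FirstTrustedSpace, SecondTrustedSpace, DataCallingRequest, fetch, mine, handle) are illustrative assumptions rather than the patent's actual interfaces, the "mining" step is a placeholder, and real trusted execution environments (e.g. hardware enclaves) are elided.

```python
# Hypothetical sketch of the transfer flow in the abstract above.
# All names are illustrative assumptions, not the patented interfaces.

from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass
class DataCallingRequest:
    """Request issued by the second trusted execution space."""
    data_keys: List[str]   # which records the mining task needs
    requester_id: str      # identity of the data user's mining task


class FirstTrustedSpace:
    """Holds the data provider's data; releases it only via the transfer device."""

    def __init__(self, store: Dict[str, Any]):
        self._store = store

    def fetch(self, keys: List[str]) -> Dict[str, Any]:
        return {k: self._store[k] for k in keys if k in self._store}


class SecondTrustedSpace:
    """Runs the data user's mining step together with its mining-related data."""

    def __init__(self, mining_related_data: Dict[str, Any]):
        self._mining_related_data = mining_related_data

    def mine(self, called_data: Dict[str, Any]) -> Dict[str, Any]:
        # Placeholder mining step: combine the called data with the
        # mining-related data and return a trivial summary as the result.
        combined = {**self._mining_related_data, **called_data}
        return {"num_records": len(combined)}


class TransferDevice:
    """Bridges the two trusted execution spaces, as described in the abstract."""

    def __init__(self, first: FirstTrustedSpace, second: SecondTrustedSpace):
        self._first = first
        self._second = second

    def handle(self, request: DataCallingRequest) -> Dict[str, Any]:
        # 1. Obtain the data to be called from the first trusted execution space.
        called_data = self._first.fetch(request.data_keys)
        # 2. Provide it to the second trusted execution space, which performs
        #    the mining and returns the result for the data user's device.
        return self._second.mine(called_data)


if __name__ == "__main__":
    first = FirstTrustedSpace({"u1": {"age": 30}, "u2": {"age": 41}})
    second = SecondTrustedSpace({"mining_config": {"min_support": 0.1}})
    device = TransferDevice(first, second)
    result = device.handle(DataCallingRequest(data_keys=["u1", "u2"],
                                              requester_id="miner-7"))
    print(result)  # e.g. {'num_records': 3}
```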
-
Publication No.: US11783227B2
Publication Date: 2023-10-10
Application No.: US16998616
Application Date: 2020-08-20
Inventor: Xingjian Li , Haoyi Xiong , Jun Huan
IPC: G06N20/00 , G06N3/08 , G06F11/34 , G05B13/04 , G06F18/214
CPC classification number: G06N20/00 , G05B13/042 , G06F11/3452 , G06F18/214 , G06N3/08
Abstract: A method, apparatus, device and readable medium for transfer learning in machine learning are provided. The method includes: constructing a target model according to the number of classes required by a target task and a trained source model; obtaining a value of a regularized loss function and a value of a cross-entropy loss function of the target model based on the sets of training data in the training dataset of the target task; and, according to the value of the regularized loss function and the value of the cross-entropy loss function corresponding to each set of training data, updating the parameters of the target model by a gradient descent method to train the target model. This technical solution avoids the excessive parameter constraints of the prior art, and thereby avoids damaging the training effect of the source model on the target task.
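To make the training procedure concrete, the following is a minimal PyTorch sketch under stated assumptions: the target model is built from a pre-trained torchvision ResNet-18 by replacing its classification head with one sized to the target task's classes, and the regularized loss is illustrated as a penalty on the drift of the target backbone's features from the frozen source backbone's features rather than a direct constraint on parameters. The specific regularizer, the helper names (build_target_model, train), and the hyperparameters are illustrative assumptions, not necessarily the patented formulation.

```python
# Illustrative sketch of the training loop in the abstract above:
# cross-entropy loss plus a regularized loss, minimized by gradient descent.
# The feature-based regularizer is an assumption for illustration only.

import torch
import torch.nn as nn
import torchvision.models as models


def build_target_model(num_classes: int) -> nn.Module:
    # Construct the target model from a trained source model by replacing
    # the classification head to match the number of target-task classes.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model


def train(target_task_loader, num_classes: int, reg_weight: float = 0.01,
          lr: float = 1e-3, epochs: int = 1) -> nn.Module:
    target = build_target_model(num_classes)

    # Frozen copy of the source model, used only to compute the regularizer.
    source = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    source.fc = nn.Identity()
    source.eval()
    for p in source.parameters():
        p.requires_grad_(False)

    # Feature extractor that shares parameters with the target model.
    backbone = nn.Sequential(*list(target.children())[:-1], nn.Flatten())

    ce_loss = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(target.parameters(), lr=lr)

    for _ in range(epochs):
        for x, y in target_task_loader:
            optimizer.zero_grad()
            # Cross-entropy loss of the target model on the target task.
            loss_ce = ce_loss(target(x), y)
            # Regularized loss: keep target features close to the frozen
            # source features instead of constraining parameters directly.
            loss_reg = ((backbone(x) - source(x)) ** 2).mean()
            (loss_ce + reg_weight * loss_reg).backward()
            optimizer.step()  # gradient-descent update of the target parameters
    return target


if __name__ == "__main__":
    # Tiny fake "training dataset" for the target task: one batch of
    # random images and labels over 5 classes, to show the call pattern.
    fake_loader = [(torch.randn(2, 3, 224, 224), torch.randint(0, 5, (2,)))]
    trained_target = train(fake_loader, num_classes=5)
```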
-