-
Publication No.: US20220255724A1
Publication Date: 2022-08-11
Application No.: US17730988
Filing Date: 2022-04-27
Inventors: Ji LIU, Qilong LI, Dejing DOU, Chongsheng ZHANG
IPC: H04L9/06 , G06V10/774 , G06V10/10
Abstract: The present disclosure provides a method and apparatus for determining an encryption mask, a method and apparatus for recognizing an image, a method and apparatus for training a model, a device, a storage medium and a computer program product. A specific implementation comprises: acquiring a test image set and an encryption mask set; superimposing a mask from the encryption mask set on an image from the test image set to obtain an encrypted image set; recognizing an image in the encrypted image set using a pre-trained encrypted image recognition model and using a pre-trained original image recognition model to obtain a first recognition result; and determining a target encryption mask from the encryption mask set based on the first recognition result.
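The mask-selection loop in this abstract can be sketched as follows. This is a minimal illustration, not the patented method: the toy images, the two stand-in "recognition models", and the utility-minus-leakage score are all invented for the sketch.

```python
# Hypothetical sketch of selecting a target encryption mask: try each mask,
# score how well the encrypted-image model still recognizes the encrypted
# images versus how poorly the original-image model does, and keep the best.

def superimpose(image, mask):
    """Superimpose an encryption mask on an image (pixel-wise, mod 256)."""
    return [(p + m) % 256 for p, m in zip(image, mask)]

def recognition_accuracy(model, images, labels):
    """Fraction of images the given model classifies correctly."""
    correct = sum(1 for img, y in zip(images, labels) if model(img) == y)
    return correct / len(images)

def select_target_mask(test_images, labels, masks,
                       encrypted_model, original_model):
    """Pick the mask whose encrypted images remain recognizable by the
    encrypted-image model but not by the original-image model."""
    best_mask, best_score = None, float("-inf")
    for mask in masks:
        encrypted = [superimpose(img, mask) for img in test_images]
        acc_enc = recognition_accuracy(encrypted_model, encrypted, labels)
        acc_orig = recognition_accuracy(original_model, encrypted, labels)
        score = acc_enc - acc_orig  # high utility, low leakage
        if score > best_score:
            best_mask, best_score = mask, score
    return best_mask
```

With a toy label rule (parity of the first pixel), a shift mask defeats the original-parity model while a compensating encrypted-image model still succeeds, so the shift mask is selected.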
-
Publication No.: US20230162087A1
Publication Date: 2023-05-25
Application No.: US17989243
Filing Date: 2022-11-17
Inventors: Ji LIU, Chendi ZHOU, Beichen MA, Jiwen ZHOU, Dejing DOU
CPC classification number: G06N20/00 , G06F9/4881
Abstract: A federated learning method, an electronic device, and a storage medium, which relate to the field of artificial intelligence, in particular to the fields of distributed data processing and deep learning. The method includes: determining, for each task in a current learning period, a set of target devices corresponding to the task according to respective scheduling information of a plurality of candidate devices corresponding to the task based on a scheduling policy, where the scheduling policy enables time cost information and device fairness evaluation information of completing the task in the current learning period to meet a predetermined condition; transmitting a global model corresponding to each task to the set of target devices corresponding to the task; and updating the corresponding global model based on trained models in response to receiving the trained models from the corresponding set of target devices.
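A scheduling policy balancing time cost against device fairness, as described above, could look like the following sketch. The additive score and the `fairness_weight` parameter are illustrative assumptions, not the patent's actual policy.

```python
# Hypothetical device-scheduling sketch: prefer fast devices, but penalize
# devices that have already been selected often, so participation stays fair.

def schedule_devices(candidates, num_targets, fairness_weight=0.5):
    """candidates: dict device -> {"time_cost": ..., "times_selected": ...}.
    Returns the num_targets devices with the lowest combined score."""
    def score(dev):
        info = candidates[dev]
        return info["time_cost"] + fairness_weight * info["times_selected"]
    return sorted(candidates, key=score)[:num_targets]
```

A fast device that has dominated previous rounds can thus lose its slot to a slower but rarely-selected one.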
-
Publication No.: US20230080230A1
Publication Date: 2023-03-16
Application No.: US17991977
Filing Date: 2022-11-22
Inventors: Ji LIU, Sunjie YU, Dejing DOU, Jiwen ZHOU
Abstract: A method for generating a federated learning model is provided. The method includes obtaining images; obtaining sorting results of the images; and generating a trained federated learning model by training a federated learning model to be trained according to the images and the sorting results. The federated learning model to be trained is obtained after pruning a federated learning model to be pruned, and a pruning rate of a convolution layer in the federated learning model to be pruned is automatically adjusted according to a model accuracy during the pruning.
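The accuracy-driven adjustment of the pruning rate can be sketched as a simple bounded search. The whole-percent step, the accuracy target, and the toy accuracy function in the test are assumptions for illustration; the real method adjusts per convolution layer during training.

```python
# Hypothetical sketch: raise the pruning rate step by step, and stop as soon
# as the evaluated model accuracy would fall below the target.

def auto_adjust_pruning_rate(evaluate_accuracy, target_acc=0.9,
                             rate_pct=0, step_pct=10, max_pct=95):
    """evaluate_accuracy: callable taking a pruning rate in [0, 1] and
    returning model accuracy. Rates are tracked in whole percent to avoid
    floating-point drift. Returns the highest rate that keeps accuracy
    at or above target_acc."""
    while rate_pct + step_pct <= max_pct:
        if evaluate_accuracy((rate_pct + step_pct) / 100) < target_acc:
            break  # pruning further would hurt accuracy too much
        rate_pct += step_pct
    return rate_pct / 100
```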
-
Publication No.: US20230074417A1
Publication Date: 2023-03-09
Application No.: US18055149
Filing Date: 2022-11-14
Inventors: Ji LIU, Sunjie YU, Jiwen ZHOU, Ruipu ZHOU, Dejing DOU
Abstract: A method for training a longitudinal federated learning model is provided, and is applied to a first participant device. The first participant device includes label data. The longitudinal federated learning model includes a first bottom layer sub-model, an interaction layer sub-model, a top layer sub-model based on a Lipschitz neural network and a second bottom layer sub-model in a second participant device. First bottom layer output data of the first participant device and second bottom layer output data sent by the second participant device are obtained. The first bottom layer output data and the second bottom layer output data are input into an interaction layer sub-model to obtain interaction layer output data. Top layer output data is obtained based on the interaction layer output data and the top layer sub-model. The longitudinal federated learning model is trained according to the top layer output data and the label data.
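The forward pass through the bottom, interaction, and top sub-models can be sketched with stand-in linear functions. All weights and the clamp used as a 1-Lipschitz top-layer activation are invented for the sketch; the patent's top layer is a Lipschitz neural network, not a single clamp.

```python
# Hypothetical sketch of the longitudinal (vertical) federated forward pass.

def bottom_model(features, weights):
    """Bottom-layer sub-model: a weighted sum of one party's local features."""
    return sum(f * w for f, w in zip(features, weights))

def interaction_layer(out_a, out_b, w_a=1.0, w_b=1.0):
    """Interaction layer combines both parties' bottom-layer outputs."""
    return w_a * out_a + w_b * out_b

def top_model(x):
    """Stand-in top-layer sub-model; a clamp is 1-Lipschitz, echoing the
    Lipschitz constraint mentioned in the abstract."""
    return max(0.0, min(1.0, x))

def forward(features_a, features_b, weights_a, weights_b):
    out_a = bottom_model(features_a, weights_a)  # party A (holds the labels)
    out_b = bottom_model(features_b, weights_b)  # party B (features only)
    return top_model(interaction_layer(out_a, out_b))
```

Party A would then compute the loss against its label data and drive training from the top down.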
-
Publication No.: US20240086717A1
Publication Date: 2024-03-14
Application No.: US18098514
Filing Date: 2023-01-18
Inventors: Ji LIU, Hao TIAN, Ruipu ZHOU, Dejing DOU
IPC: G06N3/098
CPC classification number: G06N3/098
Abstract: Disclosed are a model training control method based on asynchronous federated learning, an electronic device and a storage medium, relating to the technical field of data processing, and especially to technical fields such as edge computing and machine learning. The method includes: sending a first parameter of a first global model to a plurality of edge devices; receiving a second parameter of a second global model returned by a first edge device of the plurality of edge devices, the second global model being a global model obtained after the first edge device trains the first global model according to a local data set; and sending a third parameter of a third global model to a second edge device of the plurality of edge devices in a case where the third global model is obtained based on aggregation of at least one second global model.
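The asynchronous server loop described above can be sketched with scalar "models". The moving-average aggregation and the `mix` weight are illustrative assumptions; the point is that the server aggregates each device's update on arrival instead of waiting for a synchronization barrier.

```python
# Hypothetical sketch of an asynchronous federated-learning server.

class AsyncFLServer:
    def __init__(self, global_model, mix=0.5):
        self.global_model = global_model  # the "first parameter"
        self.mix = mix                    # aggregation weight for new updates

    def pull(self):
        """An edge device downloads the current global model parameter."""
        return self.global_model

    def push(self, trained_model):
        """A device returns its locally trained model (the "second
        parameter"); the server immediately aggregates it into a new
        global model (the "third parameter") without waiting for the
        other devices, and the result is what later pulls will serve."""
        self.global_model = ((1 - self.mix) * self.global_model
                             + self.mix * trained_model)
        return self.global_model
```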
-
Publication No.: US20230244932A1
Publication Date: 2023-08-03
Application No.: US18076501
Filing Date: 2022-12-07
Inventors: Ji LIU, Qilong LI, Yu LI, Xingjian LI, Yifan SUN, Dejing DOU
Abstract: Provided are an image occlusion method, a model training method, a device, and a storage medium, which relate to the technical field of artificial intelligence, in particular, to the field of computer vision technologies and deep learning, and may be applied to image recognition, model training and other scenarios. The specific implementation solution is as follows: generating a candidate occlusion region according to an occlusion parameter; according to the candidate occlusion region, occluding an image to be processed to obtain a candidate occlusion image; determining a target occlusion region from the candidate occlusion region according to visual security and data availability of the candidate occlusion image; and according to the target occlusion region, occluding the image to be processed to obtain a target occlusion image. In this manner, the image to be processed is desensitized while the accuracy of target recognition is ensured.
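The candidate-generation and region-selection steps can be sketched as follows. The sliding-window candidate generator and the two scoring callbacks are stand-ins; the patent scores regions with visual security and data availability metrics rather than arbitrary functions.

```python
# Hypothetical sketch of choosing a target occlusion region.

def generate_candidates(image_size, box_size, stride):
    """Candidate occlusion regions as (x, y, w, h) boxes swept over the
    image; box_size and stride play the role of the occlusion parameter."""
    w, h = image_size
    return [(x, y, box_size, box_size)
            for x in range(0, w - box_size + 1, stride)
            for y in range(0, h - box_size + 1, stride)]

def select_region(candidates, visual_security, data_availability):
    """Pick the region that best hides sensitive content (high visual
    security) while keeping the image usable for recognition (high data
    availability)."""
    return max(candidates,
               key=lambda r: visual_security(r) + data_availability(r))
```

The selected region is then applied once more to the image to produce the final, desensitized target occlusion image.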
-
Publication No.: US20230206123A1
Publication Date: 2023-06-29
Application No.: US18080803
Filing Date: 2022-12-14
Inventors: Ji LIU, Hong ZHANG, Juncheng JIA, Ruipu ZHOU, Dejing DOU
CPC classification number: G06N20/00 , G06F9/4881
Abstract: A technical solution relates to distributed machine learning, and relates to the field of artificial intelligence technologies, such as machine learning technologies, or the like. An implementation includes: acquiring, based on delay information, an optimal scheduling queue of a plurality of edge devices participating in training; and scheduling each edge device of the plurality of edge devices to train a machine learning model based on the optimal scheduling queue of the plurality of edge devices.
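A delay-based scheduling queue can be sketched with one line. Scheduling the slowest devices first so their training overlaps with the faster devices' is a common heuristic, assumed here for illustration; the patent's actual optimality criterion is not given in the abstract.

```python
# Hypothetical sketch: build a scheduling queue from per-device delay info.

def optimal_schedule(delays):
    """delays: dict edge_device -> estimated round delay. Dispatching
    slower devices first lets their training overlap with faster ones,
    shortening the overall round under this heuristic."""
    return sorted(delays, key=delays.get, reverse=True)
```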
-
Publication No.: US20230206075A1
Publication Date: 2023-06-29
Application No.: US17991077
Filing Date: 2022-11-21
Inventors: Ji LIU, Zhihua WU, Danlei FENG, Minxu ZHANG, Xinxuan WU, Xuefeng YAO, Beichen MA, Dejing DOU, Dianhai YU, Yanjun MA
Abstract: A method for distributing network layers in a neural network model includes: acquiring a to-be-processed neural network model and a computing device set; generating a target number of distribution schemes according to network layers in the to-be-processed neural network model and computing devices in the computing device set, the distribution schemes including corresponding relationships between the network layers and the computing devices; according to device types of the computing devices, combining the network layers corresponding to the same device type in each distribution scheme into one stage, to obtain a combination result of each distribution scheme; obtaining an adaptive value of each distribution scheme according to the combination result of each distribution scheme; and determining a target distribution scheme from the distribution schemes according to the respective adaptive values, and taking the target distribution scheme as a distribution result of the network layers in the to-be-processed neural network model.
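The generate / combine-into-stages / score pipeline can be sketched as below. Random scheme generation and the adaptive-value callback are assumptions for illustration; the abstract does not specify how schemes are generated or how the adaptive value is computed.

```python
# Hypothetical sketch of layer-to-device distribution with stage merging.
import random

def combine_into_stages(scheme, device_types):
    """Merge consecutive layers mapped to the same device type into one
    stage. scheme: list of (layer, device) pairs."""
    stages = []
    for layer, device in scheme:
        dtype = device_types[device]
        if stages and stages[-1][0] == dtype:
            stages[-1][1].append(layer)
        else:
            stages.append((dtype, [layer]))
    return stages

def pick_scheme(layers, devices, device_types, adaptive_value, n=10, seed=0):
    """Generate n candidate distribution schemes, combine each into stages,
    and keep the scheme with the best adaptive value."""
    rng = random.Random(seed)
    best, best_val = None, float("-inf")
    for _ in range(n):
        scheme = [(layer, rng.choice(devices)) for layer in layers]
        val = adaptive_value(combine_into_stages(scheme, device_types))
        if val > best_val:
            best, best_val = scheme, val
    return best
```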
-
Publication No.: US20220385583A1
Publication Date: 2022-12-01
Application No.: US17817594
Filing Date: 2022-08-04
Inventors: Ji LIU, Jiayuan ZHANG, Ruipu ZHOU, Dejing DOU
IPC: H04L47/22 , H04L47/2475
Abstract: A traffic classification method and apparatus, a training method and apparatus, a device and a medium are provided. An implementation is: performing a preprocessing operation on each characteristic of one or more characteristics of an object to be classified; and inputting the one or more characteristics of the object to be classified into a traffic classifier to determine a traffic type of the object to be classified. The preprocessing operation includes at least one of: setting, in response to determining that a characteristic value of the characteristic is invalid data, the characteristic value to a null value; converting, in response to determining that the characteristic is a non-numeric characteristic, the characteristic value of the characteristic to an integer value; and normalizing, in response to determining that the characteristic is a non-port characteristic, the characteristic value of the characteristic.
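The three preprocessing rules map directly onto code. The key names, the invalid-value set, the category-table encoding, and the normalization scale are all illustrative assumptions; only the three rules themselves come from the abstract.

```python
# Hypothetical sketch of the traffic-characteristic preprocessing rules.

def preprocess(features, category_table, invalid=(-1,),
               port_keys=("src_port", "dst_port"), scale=1500.0):
    """features: dict characteristic name -> value.
    category_table: mutable dict giving each string a stable integer code."""
    out = {}
    for key, value in features.items():
        if value in invalid:  # rule 1: invalid data -> null value
            out[key] = None
            continue
        if isinstance(value, str):  # rule 2: non-numeric -> integer code
            value = category_table.setdefault(value, len(category_table))
        if key not in port_keys:  # rule 3: normalize non-port values
            value = value / scale  # e.g. scale by a max packet size
        out[key] = value
    return out
```

Port numbers are deliberately left unnormalized, matching the non-port condition in the third rule.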
-
Publication No.: US20240037410A1
Publication Date: 2024-02-01
Application No.: US18108977
Filing Date: 2023-02-13
Inventors: Ji LIU, Beichen MA, Dejing DOU
IPC: G06N3/098
CPC classification number: G06N3/098
Abstract: A method for model aggregation in federated learning (FL), a server, a device, and a storage medium are provided, which relate to the field of artificial intelligence (AI) technologies such as machine learning. A specific implementation solution involves: acquiring a degree value of non-identically and independently distributed (Non-IID) data of each of a plurality of edge devices participating in FL; acquiring local models uploaded by the edge devices; and performing aggregation based on the data Non-IID degree values of the edge devices and the local models uploaded by the edge devices to obtain a global model.
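Non-IID-aware aggregation can be sketched as a weighted average over parameter vectors. The inverse-degree weighting (devices whose data deviates more from the global distribution get less weight) is an illustrative choice, not necessarily the patent's weighting scheme.

```python
# Hypothetical sketch of Non-IID-degree-weighted model aggregation.

def aggregate(local_models, non_iid_degrees):
    """local_models: list of parameter vectors (lists of floats), one per
    edge device. non_iid_degrees: matching list of Non-IID degree values.
    Higher-degree (more skewed) devices receive less aggregation weight."""
    weights = [1.0 / (1.0 + d) for d in non_iid_degrees]
    total = sum(weights)
    n = len(local_models[0])
    return [sum(w * m[i] for w, m in zip(weights, local_models)) / total
            for i in range(n)]
```

With degrees of 0 and 1, the first device's model counts twice as much as the second's.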
-