-
1.
Publication No.: US20230206075A1
Publication Date: 2023-06-29
Application No.: US17991077
Application Date: 2022-11-21
Inventors: Ji LIU, Zhihua WU, Danlei FENG, Minxu ZHANG, Xinxuan WU, Xuefeng YAO, Beichen MA, Dejing DOU, Dianhai YU, Yanjun MA
Abstract: A method for distributing network layers in a neural network model includes: acquiring a to-be-processed neural network model and a set of computing devices; generating a target number of distribution schemes from the network layers in the model and the computing devices in the set, where each distribution scheme maps network layers to computing devices; combining, in each distribution scheme, the network layers assigned to the same device type into one stage according to the device types of the computing devices, to obtain a combination result for that scheme; computing an adaptive value for each distribution scheme from its combination result; and selecting a target distribution scheme according to the schemes' respective adaptive values, the target scheme serving as the distribution result for the network layers of the model.
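A minimal sketch of the search the abstract describes: random distribution schemes are generated, consecutive layers on the same device type are combined into stages, and the scheme with the best adaptive value wins. The device names and the fitness rule in `adaptive_value` (favoring balanced stages) are illustrative assumptions, not taken from the patent.

```python
import random

LAYERS = 8                                   # layers in the to-be-processed model
DEVICES = ["gpu0", "gpu1", "cpu0"]           # computing device set (hypothetical)
DEVICE_TYPE = {"gpu0": "gpu", "gpu1": "gpu", "cpu0": "cpu"}

def generate_scheme():
    """One distribution scheme: layer index -> computing device."""
    return [random.choice(DEVICES) for _ in range(LAYERS)]

def combine_into_stages(scheme):
    """Merge consecutive layers whose devices share a device type into one stage."""
    stages = []
    for layer, device in enumerate(scheme):
        dtype = DEVICE_TYPE[device]
        if stages and stages[-1][0] == dtype:
            stages[-1][1].append(layer)
        else:
            stages.append([dtype, [layer]])
    return stages

def adaptive_value(stages):
    # Assumed fitness: penalize the largest stage, favoring a balanced pipeline.
    return -max(len(layers) for _, layers in stages)

schemes = [generate_scheme() for _ in range(20)]   # the "target number" of schemes
target = max(schemes, key=lambda s: adaptive_value(combine_into_stages(s)))
print(target)
```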
-
2.
Publication No.: US20230162087A1
Publication Date: 2023-05-25
Application No.: US17989243
Application Date: 2022-11-17
Inventors: Ji LIU, Chendi ZHOU, Beichen MA, Jiwen ZHOU, Dejing DOU
CPC classification number: G06N20/00, G06F9/4881
Abstract: A federated learning method, an electronic device, and a storage medium, which relate to the field of artificial intelligence, in particular to distributed data processing and deep learning. The method includes: determining, for each task in a current learning period, a set of target devices for the task from the scheduling information of the task's candidate devices, based on a scheduling policy under which the time cost information and the device fairness evaluation information for completing the task in the current learning period meet a predetermined condition; transmitting the global model for each task to the task's set of target devices; and updating each global model based on the trained models received from the corresponding set of target devices.
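A hedged sketch of the per-task selection step: each candidate device is scored by a weighted mix of normalized time cost and how often it has already been chosen (a simple fairness proxy), and the k best are selected. The scoring formula, the weight `alpha`, and the top-k rule are assumptions; the patent only requires that the time cost and fairness information meet a predetermined condition.

```python
def select_targets(candidates, k, alpha=0.5):
    """Pick k target devices; candidates maps a device name to its scheduling info."""
    max_time = max(c["time_cost"] for c in candidates.values())
    max_sel = max(c["times_selected"] for c in candidates.values()) or 1

    def score(dev):
        info = candidates[dev]
        time_term = info["time_cost"] / max_time      # lower estimated time is better
        fair_term = info["times_selected"] / max_sel  # rarely-chosen devices score lower
        return alpha * time_term + (1 - alpha) * fair_term

    return sorted(candidates, key=score)[:k]

candidates = {
    "dev_a": {"time_cost": 3.0, "times_selected": 5},
    "dev_b": {"time_cost": 4.0, "times_selected": 1},
    "dev_c": {"time_cost": 2.5, "times_selected": 7},
}
print(select_targets(candidates, k=2))   # -> ['dev_b', 'dev_a']
```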
-
3.
Publication No.: US20240037410A1
Publication Date: 2024-02-01
Application No.: US18108977
Application Date: 2023-02-13
Inventors: Ji LIU, Beichen MA, Dejing DOU
IPC: G06N3/098
CPC classification number: G06N3/098
Abstract: A method for model aggregation in federated learning (FL), a server, a device, and a storage medium are provided, which relate to the field of artificial intelligence (AI) technologies such as machine learning. A specific implementation solution involves: acquiring a data non-independent-and-identically-distributed (Non-IID) degree value for each of a plurality of edge devices participating in FL; acquiring the local models uploaded by the edge devices; and performing aggregation based on the edge devices' data Non-IID degree values and uploaded local models to obtain a global model.
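A small sketch of what aggregation "based on the data Non-IID degree values" could look like. Weighting each local model by the inverse of its device's Non-IID degree is an assumption for illustration only; the abstract does not specify the weighting rule.

```python
import numpy as np

def aggregate(local_models, non_iid_degrees):
    """Aggregate local models into a global model, weighted by Non-IID degree."""
    # Assumption: the less Non-IID (more representative) a device's data,
    # the more weight its local model receives.
    weights = np.array([1.0 / (1.0 + d) for d in non_iid_degrees])
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, local_models))

local_models = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
degrees = [0.1, 0.8, 0.3]      # hypothetical data Non-IID degree per edge device
print(aggregate(local_models, degrees))
```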
-
4.
Publication No.: US20220391780A1
Publication Date: 2022-12-08
Application No.: US17820758
Application Date: 2022-08-18
Inventors: Ji LIU, Beichen MA, Chendi ZHOU, Juncheng JIA, Dejing DOU, Shilei JI, Yuan LIAO
Abstract: The present disclosure provides a method of federated learning. A specific implementation solution includes: determining, for a current learning period, a target device for each of at least one learning task to be performed, from a plurality of candidate devices according to the candidate devices' resource information; transmitting the global model for each task to that task's target device, so that the target device trains the global model; and updating, in response to receiving trained models from all of a task's target devices, the global model for that task according to the trained models, so as to complete the current learning period. The present disclosure further provides an electronic device and a storage medium.
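An illustrative sketch of one learning period: target devices are picked per task by a resource score, and the returned models are averaged into the new global model. The load-based selection and the FedAvg-style mean are assumptions; the abstract fixes neither rule.

```python
import numpy as np

def local_train(model, device):
    """Placeholder for on-device training: returns a slightly perturbed copy."""
    return model + np.random.normal(scale=0.01, size=model.shape)

def run_learning_period(tasks, candidates):
    """One learning period: select target devices per task, then update each global model."""
    for task in tasks:
        # Assumed resource rule: prefer the least-loaded candidate devices.
        targets = sorted(candidates, key=lambda d: d["load"])[:task["n_devices"]]
        trained = [local_train(task["global_model"], dev) for dev in targets]
        # Assumed update: plain averaging of the returned trained models.
        task["global_model"] = np.mean(trained, axis=0)

tasks = [{"global_model": np.zeros(4), "n_devices": 2}]
candidates = [{"load": 0.2}, {"load": 0.7}, {"load": 0.4}]
run_learning_period(tasks, candidates)
print(tasks[0]["global_model"])
```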
-
5.
Publication No.: US20220374776A1
Publication Date: 2022-11-24
Application No.: US17868113
Application Date: 2022-07-19
Inventors: Ji LIU, Beichen MA, Chendi ZHOU, Jingbo ZHOU, Ruipu ZHOU, Dejing DOU
IPC: G06N20/00
Abstract: The present disclosure provides a method and apparatus for federated learning, which relate to technical fields such as big data and deep learning. A specific implementation is: generating a global model for each of a plurality of different tasks trained simultaneously; receiving resource information from each available terminal in the current available terminal set; selecting a target terminal for each task from the current available terminal set, based on the resource information and the global model; and training the global model on the target terminal until the trained global model for each task meets a preset condition.
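A toy sketch of the simultaneous multi-task loop: each round polls the available terminals for resource information, picks a target terminal per unfinished task, and stops a task once a preset condition is met. The bandwidth-based pick, the accuracy metric, and the stopping threshold are stand-ins invented for the example.

```python
import random

def poll_available_terminals(n=5):
    """Stand-in for receiving resource information from available terminals."""
    return [{"bandwidth": random.random()} for _ in range(n)]

def federated_multitask(tasks, max_rounds=50, threshold=0.9):
    for _ in range(max_rounds):
        if all(t["accuracy"] >= threshold for t in tasks):
            break                          # every task meets the preset condition
        terminals = poll_available_terminals()
        for task in tasks:
            if task["accuracy"] >= threshold:
                continue
            # Assumed resource rule: train on the highest-bandwidth terminal.
            target = max(terminals, key=lambda t: t["bandwidth"])
            # Toy stand-in for one round of training on the target terminal.
            task["accuracy"] = min(1.0, task["accuracy"] + 0.1 * target["bandwidth"])
    return tasks

print(federated_multitask([{"accuracy": 0.5}, {"accuracy": 0.3}]))
```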
-
6.
Publication No.: US20220374775A1
Publication Date: 2022-11-24
Application No.: US17867516
Application Date: 2022-07-18
Inventors: Ji LIU, Beichen MA, Jingbo ZHOU, Ruipu ZHOU, Dejing DOU
Abstract: A method for multi-task scheduling, a device, and a storage medium are provided. The method may include: initializing a list of candidate scheduling schemes, each of which allocates a terminal device for training to each of a plurality of machine learning tasks; perturbing each candidate scheduling scheme in the list to generate a new scheduling scheme; deciding whether to replace each candidate scheme with its new scheme based on the fitness values of the two, so as to generate a new scheduling scheme list; and determining a target scheduling scheme based on the fitness value of each scheme in the new list.
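A compact sketch of the perturb-and-compare loop over the list of candidate scheduling schemes. The fitness function (counting distinct terminals) and the greedy keep-the-fitter acceptance rule are assumptions standing in for whatever fitness and replacement test the patent actually uses.

```python
import random

N_TASKS, N_TERMINALS = 4, 6

def fitness(scheme):
    # Assumed fitness: reward spreading tasks over distinct terminal devices.
    return len(set(scheme))

def perturb(scheme):
    """Generate a new scheme by reassigning one randomly chosen task."""
    new = list(scheme)
    new[random.randrange(N_TASKS)] = random.randrange(N_TERMINALS)
    return new

# Initialize the list of candidate scheduling schemes (task index -> terminal).
schemes = [[random.randrange(N_TERMINALS) for _ in range(N_TASKS)] for _ in range(8)]

for _ in range(100):
    # Keep whichever of the old scheme and its perturbation is fitter.
    schemes = [max(s, perturb(s), key=fitness) for s in schemes]

target = max(schemes, key=fitness)
print(target, fitness(target))
```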