-
Publication No.: US20230083116A1
Publication Date: 2023-03-16
Application No.: US17988264
Filing Date: 2022-11-16
Inventor: Ji LIU , Hong ZHANG , Juncheng JIA , Jiwen ZHOU , Shengbo PENG , Ruipu ZHOU , Dejing DOU
Abstract: A federated learning method and system, an electronic device, and a storage medium, which relate to the field of artificial intelligence, in particular to the fields of computer vision and deep learning technologies. The method includes: performing a plurality of rounds of training until a training end condition is met, to obtain a trained global model; and publishing the trained global model to a plurality of devices. Each of the plurality of rounds of training includes: transmitting a current global model to at least some devices in the plurality of devices; receiving trained parameters for the current global model from the at least some devices; performing an aggregation on the received parameters to obtain a current aggregation model; and adjusting the current aggregation model based on a globally shared dataset, and updating the adjusted aggregation model as a new current global model for a next round of training.
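The round described in the abstract (aggregate client parameters, then adjust the aggregate on a globally shared dataset) can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names (`aggregate`, `adjust_on_shared_data`), the FedAvg-style weighted averaging, and the single least-squares gradient step are all assumptions chosen to make the idea concrete.

```python
def aggregate(client_params, client_sizes):
    """Weighted average of client parameter vectors (a FedAvg-style aggregation)."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
        for i in range(dim)
    ]

def adjust_on_shared_data(model, shared_data, lr=0.1):
    """Adjust the aggregated model on a globally shared dataset
    (here: one gradient step of least-squares on (x, y) pairs)."""
    grad = [0.0] * len(model)
    for x, y in shared_data:
        pred = sum(w * xi for w, xi in zip(model, x))
        err = pred - y
        for i, xi in enumerate(x):
            grad[i] += 2 * err * xi / len(shared_data)
    return [w - lr * g for w, g in zip(model, grad)]

# One round: clients return trained parameters for the current global model;
# the server aggregates them, then adjusts the aggregate on the shared
# dataset to produce the new current global model for the next round.
client_params = [[1.0, 2.0], [3.0, 4.0]]   # parameters from two devices
client_sizes = [10, 30]                    # local dataset sizes (weights)
current = aggregate(client_params, client_sizes)
shared = [([1.0, 0.0], 2.0), ([0.0, 1.0], 3.0)]  # globally shared dataset
new_global = adjust_on_shared_data(current, shared)
```

The adjustment step is what distinguishes this scheme from plain averaging: the server itself performs a small amount of centralized training on data every participant can share, which can correct drift introduced by heterogeneous client data.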
-
Publication No.: US20230222356A1
Publication Date: 2023-07-13
Application No.: US18180594
Filing Date: 2023-03-08
Inventor: Shengbo PENG , Jiwen ZHOU
IPC: G06N3/098
CPC classification number: G06N3/098
Abstract: A federated learning method and apparatus, a device and a medium are provided, relating to the field of artificial intelligence, in particular to the fields of federated learning and machine learning. The federated learning method includes: receiving data related to a federated learning task of a target participant, wherein the target participant at least includes a first computing device for executing the federated learning task; determining computing resources of the first computing device that can be used to execute the federated learning task; and generating a first deployment scheme for executing the federated learning task in response to determining that the data and the computing resources meet a predetermined condition, wherein the first deployment scheme instructs to generate at least a first work node and a second work node on the first computing device.
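The deployment logic in the abstract (check task data and free resources against a predetermined condition, then place at least two work nodes on the same computing device) can be sketched as below. All names (`ComputingDevice`, `WorkNode`, `plan_deployment`) and the specific thresholds are illustrative assumptions; the patent does not specify the condition or the resource split.

```python
from dataclasses import dataclass

@dataclass
class ComputingDevice:
    device_id: str
    free_cpu_cores: int
    free_memory_gb: float

@dataclass
class WorkNode:
    node_id: str
    device_id: str
    cpu_cores: int

def plan_deployment(device, data_size_gb, min_cores=2, min_memory_gb=4.0):
    """Generate a deployment scheme: if the task data and the device's free
    resources meet the predetermined condition, instruct generation of at
    least two work nodes on that same device; otherwise return no scheme."""
    meets_condition = (
        device.free_cpu_cores >= min_cores
        and device.free_memory_gb >= min_memory_gb
        and data_size_gb <= device.free_memory_gb
    )
    if not meets_condition:
        return []
    # Split the available cores evenly between the two work nodes.
    cores_per_node = device.free_cpu_cores // 2
    return [
        WorkNode("worker-1", device.device_id, cores_per_node),
        WorkNode("worker-2", device.device_id, cores_per_node),
    ]

device = ComputingDevice("dev-0", free_cpu_cores=8, free_memory_gb=16.0)
scheme = plan_deployment(device, data_size_gb=2.0)
```

Co-locating two work nodes on one device in this way lets a single participant parallelize its share of the federated task without provisioning extra machines.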
-