METHOD FOR ASYNCHRONOUS FEDERATED LEARNING, METHOD FOR PREDICTING BUSINESS SERVICE, APPARATUS, AND SYSTEM

    Publication Number: US20220383198A1

    Publication Date: 2022-12-01

    Application Number: US17879888

    Filing Date: 2022-08-03

    Abstract: The present disclosure provides a method for asynchronous federated learning, including: in response to a request for participating in asynchronous federated learning sent by a target electronic device, determining, according to performance information of a server, a first number of electronic devices that the server can support for participation in the asynchronous federated learning, and acquiring a second number of other electronic devices that have already participated in the asynchronous federated learning; if the first number is greater than the second number, sending a global model to be optimized to the target electronic device, and receiving target feedback information obtained by the target electronic device from training the global model to be optimized; and optimizing, according to the target feedback information, the global model to be optimized to obtain an optimized global model.
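
    The abstract describes a capacity check (first number vs. second number) followed by an asynchronous model update from a single device's feedback. The sketch below illustrates that flow under stated assumptions: the class and function names, the capacity heuristic, and the interpolation-style update rule are illustrative choices, not the patent's specified implementation.

```python
# Hypothetical sketch of the server-side admission check and asynchronous update.
# Names, the capacity heuristic, and the mixing-rate update rule are assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ServerPerformance:
    cpu_cores: int
    free_memory_gb: float

@dataclass
class AsyncFLServer:
    performance: ServerPerformance
    global_model: List[float]                     # toy model: a flat weight vector
    participants: set = field(default_factory=set)
    mixing_rate: float = 0.5                      # assumed asynchronous mixing factor

    def supported_device_count(self) -> int:
        """First number: how many devices the server can support,
        derived here from a simple capacity heuristic."""
        return min(self.performance.cpu_cores * 4,
                   int(self.performance.free_memory_gb * 10))

    def handle_join_request(self, device_id: str) -> Optional[List[float]]:
        """Compare the first number (capacity) with the second number (devices
        already participating). If capacity is larger, send the global model."""
        first_number = self.supported_device_count()
        second_number = len(self.participants)
        if first_number > second_number:
            self.participants.add(device_id)
            return list(self.global_model)        # model sent to the target device
        return None                               # no capacity, request rejected

    def handle_feedback(self, device_id: str, trained_weights: List[float]) -> None:
        """Optimize the global model with the target feedback information.
        Here: a simple asynchronous interpolation toward the device's result."""
        if device_id not in self.participants:
            return
        self.global_model = [
            (1.0 - self.mixing_rate) * g + self.mixing_rate * w
            for g, w in zip(self.global_model, trained_weights)
        ]

# Usage sketch
server = AsyncFLServer(ServerPerformance(cpu_cores=2, free_memory_gb=1.0),
                       global_model=[0.0, 0.0, 0.0])
model = server.handle_join_request("device-A")
if model is not None:
    # the target device would train locally and report its result as feedback
    server.handle_feedback("device-A", [0.2, -0.1, 0.4])
```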

    METHOD FOR GENERATING FEDERATED LEARNING MODEL

    Publication Number: US20230084055A1

    Publication Date: 2023-03-16

    Application Number: US17991958

    Filing Date: 2022-11-22

    Abstract: A method for generating a federated learning model is provided. The method includes obtaining images; obtaining sorting results of the images; and generating a trained federated learning model by training a federated learning model to be trained according to the images and the sorting results. The federated learning model to be trained is obtained after pruning a federated learning model to be pruned, and the pruning rate of each convolution layer in the federated learning model to be pruned is automatically adjusted according to model accuracy during pruning.
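
    The key mechanism here is adjusting per-layer pruning rates based on how accuracy responds. The sketch below shows one plausible reading of that loop; the evaluate() callback, thresholds, and step size are assumptions for illustration, since the abstract does not specify how the adjustment is computed.

```python
# Illustrative sketch of accuracy-driven pruning-rate adjustment for convolution
# layers prior to federated training. The evaluation stub, target accuracy, and
# step size are assumptions, not the patent's specified procedure.
from typing import Callable, List

def adjust_pruning_rates(layer_rates: List[float],
                         evaluate: Callable[[List[float]], float],
                         target_accuracy: float,
                         step: float = 0.05,
                         max_rate: float = 0.9) -> List[float]:
    """Raise each convolution layer's pruning rate while accuracy stays above
    the target, and back off when it drops below."""
    rates = list(layer_rates)
    for i in range(len(rates)):
        trial = list(rates)
        trial[i] = min(rates[i] + step, max_rate)
        if evaluate(trial) >= target_accuracy:
            rates[i] = trial[i]                      # accuracy held: prune more
        else:
            rates[i] = max(rates[i] - step, 0.0)     # accuracy dropped: prune less
    return rates

# Usage sketch with a toy accuracy model where accuracy falls as pruning grows.
toy_eval = lambda rates: 0.95 - 0.1 * sum(rates)
print(adjust_pruning_rates([0.3, 0.3, 0.3], toy_eval, target_accuracy=0.85))
```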

    FEDERATED LEARNING METHOD AND SYSTEM, ELECTRONIC DEVICE, AND STORAGE MEDIUM

    Publication Number: US20230083116A1

    Publication Date: 2023-03-16

    Application Number: US17988264

    Filing Date: 2022-11-16

    Abstract: A federated learning method and system, an electronic device, and a storage medium are provided, relating to the field of artificial intelligence, and in particular to the fields of computer vision and deep learning. The method includes: performing a plurality of rounds of training until a training end condition is met, to obtain a trained global model; and publishing the trained global model to a plurality of devices. Each of the plurality of rounds of training includes: transmitting a current global model to at least some devices in the plurality of devices; receiving trained parameters for the current global model from the at least some devices; performing an aggregation on the received parameters to obtain a current aggregation model; and adjusting the current aggregation model based on a globally shared dataset, and updating the adjusted aggregation model as a new current global model for a next round of training.
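
    A single round therefore has three steps: aggregate the returned parameters, adjust the aggregate on a globally shared dataset, and promote the result to the next round's global model. The sketch below illustrates one round under stated assumptions: plain element-wise averaging and a toy gradient-descent adjustment on a linear model stand in for whichever aggregation and fine-tuning the method actually uses.

```python
# Sketch of one training round: aggregate the parameters returned by the selected
# devices, then adjust the aggregate on a globally shared dataset. The mean
# aggregation and least-squares adjustment are illustrative assumptions.
from typing import List, Tuple

Vector = List[float]

def aggregate(client_params: List[Vector]) -> Vector:
    """Element-wise mean of the parameters returned by the selected devices."""
    n = len(client_params)
    return [sum(col) / n for col in zip(*client_params)]

def adjust_on_shared_data(model: Vector,
                          shared_data: List[Tuple[Vector, float]],
                          lr: float = 0.01,
                          steps: int = 10) -> Vector:
    """Fine-tune a linear model on the globally shared dataset with plain
    gradient descent on squared error (toy stand-in for the adjustment step)."""
    w = list(model)
    for _ in range(steps):
        grads = [0.0] * len(w)
        for x, y in shared_data:
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            for j, xj in enumerate(x):
                grads[j] += 2 * err * xj / len(shared_data)
        w = [wi - lr * g for wi, g in zip(w, grads)]
    return w

def training_round(device_updates: List[Vector],
                   shared_data: List[Tuple[Vector, float]]) -> Vector:
    aggregated = aggregate(device_updates)                  # current aggregation model
    return adjust_on_shared_data(aggregated, shared_data)   # new current global model

# Usage sketch
new_global = training_round(device_updates=[[0.9, 0.2], [1.1, 0.0]],
                            shared_data=[([1.0, 0.0], 1.0), ([0.0, 1.0], 0.5)])
print(new_global)
```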

    METHOD OF FEDERATED LEARNING, ELECTRONIC DEVICE, AND STORAGE MEDIUM

    Publication Number: US20220391780A1

    Publication Date: 2022-12-08

    Application Number: US17820758

    Filing Date: 2022-08-18

    Abstract: The present disclosure provides a method of federated learning. A specific implementation solution includes: determining, for a current learning period and for each task of at least one learning task to be performed, a target device from a plurality of candidate devices according to resource information of the plurality of candidate devices; transmitting a global model for each task to the target device for that task, so that the target device trains the global model for the task; and updating, in response to receiving trained models from all target devices for a task, the global model for that task according to the trained models, so as to complete the current learning period. The present disclosure further provides an electronic device and a storage medium.
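
    The per-task flow is: pick target devices from the candidates using their resource information, dispatch that task's global model, and update the model only once every target device for the task has reported back. The sketch below follows that flow under stated assumptions: the resource score, the number of devices per task, and averaging of the returned models are illustrative choices not fixed by the abstract.

```python
# Sketch of per-task device selection and model update for one learning period.
# The resource score and the averaging update are illustrative assumptions.
from typing import Dict, List

Vector = List[float]

def select_target_devices(resources: Dict[str, Dict[str, float]],
                          devices_per_task: int) -> List[str]:
    """Rank candidate devices by a toy resource score and keep the top ones."""
    score = lambda r: r["cpu"] + r["bandwidth"] - r["load"]
    ranked = sorted(resources, key=lambda d: score(resources[d]), reverse=True)
    return ranked[:devices_per_task]

def update_task_model(trained_models: Dict[str, Vector],
                      targets: List[str]) -> Vector:
    """Update the task's global model only when all target devices reported."""
    if set(trained_models) != set(targets):
        raise ValueError("waiting for remaining target devices")
    models = [trained_models[d] for d in targets]
    return [sum(col) / len(models) for col in zip(*models)]

# Usage sketch for one task in the current learning period
resources = {"d1": {"cpu": 4, "bandwidth": 10, "load": 1},
             "d2": {"cpu": 2, "bandwidth": 20, "load": 5},
             "d3": {"cpu": 8, "bandwidth": 5, "load": 0}}
targets = select_target_devices(resources, devices_per_task=2)
trained = {d: [0.1, 0.2] for d in targets}   # stand-in for locally trained models
print(targets, update_task_model(trained, targets))
```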

    METHOD FOR MULTI-TASK SCHEDULING, DEVICE AND STORAGE MEDIUM

    Publication Number: US20220374775A1

    Publication Date: 2022-11-24

    Application Number: US17867516

    Filing Date: 2022-07-18

    Abstract: A method for multi-task scheduling, a device and a storage medium are provided. The method may include: initializing a list of candidate scheduling schemes, the candidate scheduling scheme being used to allocate a terminal device for training to each machine learning task in a plurality of machine learning tasks; perturbing, for each candidate scheduling scheme in the list of candidate scheduling schemes, the candidate scheduling scheme to generate a new scheduling scheme; determining whether to replace the candidate scheduling scheme with the new scheduling scheme based on a fitness value of the candidate scheduling scheme and a fitness value of the new scheduling scheme, to generate a new scheduling scheme list; and determining a target scheduling scheme, based on the fitness value of each new scheduling scheme in the new scheduling scheme list.
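
    The abstract outlines a search loop: initialize candidate scheduling schemes, perturb each one, accept or reject the perturbation by comparing fitness values, and finally pick a target scheme from the resulting list. The sketch below implements that loop under stated assumptions: the random single-task perturbation, the greedy acceptance rule, and the load-balancing fitness function are illustrative choices the abstract leaves open.

```python
# Sketch of the scheduling search loop: maintain a list of candidate schemes
# (each maps a machine-learning task to a terminal device), perturb each scheme,
# keep the perturbed scheme if its fitness improves, then pick the best overall.
# The perturbation, acceptance rule, and fitness function are assumptions.
import random
from typing import Callable, Dict, List

Scheme = Dict[str, str]   # task id -> terminal device id

def perturb(scheme: Scheme, devices: List[str]) -> Scheme:
    """Move one randomly chosen task to a different terminal device."""
    new_scheme = dict(scheme)
    task = random.choice(list(new_scheme))
    new_scheme[task] = random.choice([d for d in devices if d != scheme[task]])
    return new_scheme

def schedule(tasks: List[str], devices: List[str],
             fitness: Callable[[Scheme], float],
             population: int = 8, iterations: int = 100) -> Scheme:
    # Initialize the list of candidate scheduling schemes at random.
    candidates = [{t: random.choice(devices) for t in tasks} for _ in range(population)]
    for _ in range(iterations):
        new_list = []
        for scheme in candidates:
            new_scheme = perturb(scheme, devices)
            # Replace the candidate if the perturbed scheme has better fitness.
            new_list.append(new_scheme if fitness(new_scheme) > fitness(scheme) else scheme)
        candidates = new_list
    # Target scheduling scheme: the best-fitness scheme in the final list.
    return max(candidates, key=fitness)

# Usage sketch: a fitness that rewards spreading tasks across devices.
balance = lambda s: -max(list(s.values()).count(d) for d in set(s.values()))
best = schedule(["t1", "t2", "t3", "t4"], ["devA", "devB"], balance)
print(best)
```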
