-
Publication No.: US20230274139A1
Publication Date: 2023-08-31
Application No.: US18143448
Filing Date: 2023-05-04
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Stylianos I. VENIERIS , Mario ALMEIDA , Royson LEE
IPC: G06N3/08
CPC classification number: G06N3/08
Abstract: Broadly speaking, the present techniques generally relate to a computer-implemented method for training a machine learning, ML, model to perform super-resolution on resource-constrained devices.
-
Publication No.: US20220245459A1
Publication Date: 2022-08-04
Application No.: US17586178
Filing Date: 2022-01-27
Applicant: Samsung Electronics Co., Ltd.
Inventor: Stefanos LASKARIDIS , Samuel HORVATH , Mario ALMEIDA , Ilias LEONTIADIS , Stylianos I. VENIERIS
Abstract: Broadly speaking, the present techniques generally relate to methods, systems and apparatuses for training a machine learning (ML) model using federated learning. In particular, there is provided a method for training an ML model using federated learning performed by a plurality of client devices, the method comprising: determining a computation capability of each client device; associating each client device with a value defining how much of each neural network layer of the ML model is to be included in a submodel to be trained by that client device, based on the determined computation capability; and generating a submodel of the ML model by using the value associated with each client device to perform ordered pruning of at least one neural network layer of the ML model.
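The claimed steps above (capability-based width assignment, then ordered pruning into nested submodels) can be sketched as follows. This is a minimal illustration, not the patented implementation: the layer widths, the capability tiers, and the normalised compute scores are all assumptions introduced for the example.

```python
import math

# Hypothetical per-layer unit counts of the full model (illustrative only).
FULL_WIDTHS = [64, 128, 256]

def ordered_prune(layer_widths, p):
    """Ordered pruning: keep only the first ceil(p * width) units of each
    layer, so every smaller submodel is a nested prefix of the full model."""
    return [max(1, math.ceil(p * w)) for w in layer_widths]

def assign_submodels(client_capabilities, tiers=(0.25, 0.5, 1.0)):
    """Associate each client with a width fraction based on its (assumed)
    normalised compute score in [0, 1], then derive its submodel shape."""
    assignments = {}
    for client, score in client_capabilities.items():
        # Largest tier the client's capability supports; weakest tier otherwise.
        p = max((t for t in tiers if t <= score), default=tiers[0])
        assignments[client] = (p, ordered_prune(FULL_WIDTHS, p))
    return assignments

print(assign_submodels({"phone": 0.3, "laptop": 0.6, "server": 1.0}))
```

Because the pruning is ordered (a prefix of units is kept rather than an arbitrary subset), every client's submodel updates map directly onto the corresponding slice of the full model during aggregation.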
-
Publication No.: US20220083386A1
Publication Date: 2022-03-17
Application No.: US17420259
Filing Date: 2020-04-10
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Mario ALMEIDA , Stefanos LASKARIDIS , Stylianos VENIERIS , Ilias LEONTIADIS
Abstract: Broadly speaking, the present techniques relate to methods and systems for dynamically distributing the execution of a neural network across multiple computing resources in order to satisfy various criteria associated with implementing the neural network. For example, the distribution may be performed to spread the processing load across multiple devices, which may enable the neural network computation to be performed more quickly than if performed by a single device and more cost-effectively than if the computation were performed entirely by a cloud server.
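One common way to realise such device/server distribution is to choose a split layer that minimises estimated end-to-end latency: device compute up to the split, plus transfer of the split-point activations, plus server compute for the remainder. The sketch below illustrates that general idea under assumed per-layer latency profiles; the profiling numbers and the exhaustive search are illustrative assumptions, not the patented method.

```python
def best_split(device_ms, server_ms, transfer_ms):
    """Pick the layer index at which to hand execution from the device to the
    server. device_ms[i] / server_ms[i] are assumed profiled latencies (ms) of
    layer i on each resource; transfer_ms[s] is the assumed cost of shipping
    the activations at boundary s (s = 0 sends the raw input)."""
    n = len(device_ms)
    best = (float("inf"), 0)
    for s in range(n + 1):  # s = number of layers run on-device
        total = sum(device_ms[:s]) + transfer_ms[s] + sum(server_ms[s:])
        best = min(best, (total, s))
    return best  # (estimated latency in ms, split index)

# Illustrative profiles: the device is slower per layer, but activations
# shrink with depth, so a shallow split can beat both extremes.
latency, split = best_split(
    device_ms=[5, 10, 20, 40],
    server_ms=[1, 2, 4, 8],
    transfer_ms=[30, 12, 6, 3, 1],
)
print(latency, split)
```

With these numbers, running the first layer on-device and offloading the rest is cheaper than either fully local (76 ms) or fully remote (45 ms) execution.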