Publication No.: EP4451179A1
Publication Date: 2024-10-23
Application No.: EP24164209.9
Filing Date: 2024-03-18
Applicant: Ricoh Company, Ltd.
Abstract: A node (3) includes a display control unit (33), a reception unit (32), and a transmission unit (31). The display control unit (33) displays, on a display unit (308), local model identification information (1364) identifying a local model generated by another node and a classification item (1362) that classifies the learning data used to generate the local model. The reception unit (32) receives a selection of the local model. The transmission unit (31) transmits selection information indicating the selection of the local model to another device.
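The abstract describes a three-step flow: display other nodes' local models together with the classification items of their training data, receive the user's selection, and forward selection information. Below is a minimal sketch of that flow; the class and function names (LocalModelEntry, SelectionInfo, choose_models) are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class LocalModelEntry:
    model_id: str             # local model identification information
    classification_item: str  # classifies the learning data used to build the model

@dataclass
class SelectionInfo:
    selected_model_ids: list[str]

def display_entries(entries: list[LocalModelEntry]) -> None:
    # Display-control step: show each model ID alongside its classification item.
    for i, e in enumerate(entries):
        print(f"[{i}] model={e.model_id}  data-class={e.classification_item}")

def choose_models(entries: list[LocalModelEntry], picked: list[int]) -> SelectionInfo:
    # Reception step: turn the user's picks into selection information.
    return SelectionInfo([entries[i].model_id for i in picked])

if __name__ == "__main__":
    catalog = [LocalModelEntry("node-A/model-3", "handwriting"),
               LocalModelEntry("node-B/model-1", "invoices")]
    display_entries(catalog)
    selection = choose_models(catalog, picked=[1])
    # Transmission step would send `selection` to another device (omitted here).
    print("would transmit:", selection)
```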
-
Publication No.: EP4026058B1
Publication Date: 2024-10-09
Application No.: EP20751522.2
Filing Date: 2020-08-04
CPC Classification: G06N3/045, G06N3/098, G06N3/0495, G06N3/0464
-
Publication No.: EP4379610A1
Publication Date: 2024-06-05
Application No.: EP23212470.1
Filing Date: 2023-11-27
Applicant: Ricoh Company, Ltd.
Inventor(s): Aizaki, Tomoyasu
IPC Classification: G06N3/098
CPC Classification: G06N3/098
Abstract: An information processing apparatus, a node, an information processing method, a carrier means, and an information processing system. The information processing apparatus receives, from each of a plurality of nodes, information indicating a local model or output data, the information indicating the local model being obtained by learning local data processed by the node based on a global model, and the output data being obtained by inputting shared data to the local model. The apparatus updates the global model based on the plurality of pieces of information indicating the local models or the plurality of output data received from the plurality of nodes, and calculates a contribution degree of each of the plurality of local models, or each of the plurality of output data, to the updated global model.
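The abstract outlines two steps: aggregate the nodes' local models (or their outputs on shared data) into an updated global model, and score each contribution to that update. The sketch below illustrates one plausible reading, using plain parameter averaging and a leave-one-out contribution score; the function names and the scoring rule are assumptions, not the patent's method.

```python
import numpy as np

def aggregate(local_models: list[np.ndarray]) -> np.ndarray:
    # Update the global model as the mean of the local model parameters.
    return np.mean(local_models, axis=0)

def contribution_degrees(local_models: list[np.ndarray]) -> list[float]:
    # Leave-one-out score: how far the global model moves when a node is removed.
    global_model = aggregate(local_models)
    scores = []
    for i in range(len(local_models)):
        without_i = local_models[:i] + local_models[i + 1:]
        scores.append(float(np.linalg.norm(global_model - aggregate(without_i))))
    return scores

if __name__ == "__main__":
    locals_ = [np.array([1.0, 2.0]), np.array([1.1, 2.1]), np.array([5.0, 0.0])]
    print("updated global model:", aggregate(locals_))
    print("contribution degrees:", contribution_degrees(locals_))
```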
-
Publication No.: EP4345697A1
Publication Date: 2024-04-03
Application No.: EP23198714.0
Filing Date: 2023-09-21
Inventor(s): KIM, Minyoung; HOSPEDALES, Timothy
Abstract: Broadly speaking, embodiments of the present techniques provide a method for training a machine learning (ML) model that updates global and local versions of a model. We propose a novel hierarchical Bayesian approach to Federated Learning (FL), in which our models reasonably describe the generative process of clients' local data via hierarchical Bayesian modelling: the random variables constituting the clients' local models are governed by a higher-level global variate. Interestingly, variational inference in our Bayesian model leads to an optimisation problem whose block-coordinate descent solution becomes a distributed algorithm that is separable over clients and allows them to avoid revealing their own private data at all, and is thus fully compatible with FL.
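The abstract's key claim is that inference in the hierarchical model splits into a block-coordinate descent that alternates between per-client (local) updates and a global update, so raw data never leave the clients. The sketch below shows that alternating structure on a toy Gaussian model; the quadratic objective, closed-form updates, and variable names are illustrative assumptions, not the paper's actual derivation.

```python
import numpy as np

# Toy hierarchical model: each client i has a local parameter theta_i drawn around
# a global variable phi, and client data are drawn around theta_i.  Minimising
#   sum_i [ ||theta_i - mean(data_i)||^2 + lam * ||theta_i - phi||^2 ]
# by block-coordinate descent alternates local and global closed-form updates.

def local_update(data_i: np.ndarray, phi: np.ndarray, lam: float) -> np.ndarray:
    # Client-side block: uses only this client's own data plus the shared phi.
    return (data_i.mean(axis=0) + lam * phi) / (1.0 + lam)

def global_update(thetas: list[np.ndarray]) -> np.ndarray:
    # Server-side block: aggregates local parameters, never raw data.
    return np.mean(thetas, axis=0)

def run(client_data: list[np.ndarray], lam: float = 0.5, rounds: int = 20) -> np.ndarray:
    phi = np.zeros_like(client_data[0][0])
    for _ in range(rounds):
        thetas = [local_update(d, phi, lam) for d in client_data]  # separable over clients
        phi = global_update(thetas)
    return phi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = [rng.normal(loc=m, size=(50, 2)) for m in (0.0, 1.0, 2.0)]
    print("global variable after training:", run(data))
```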
-
Publication No.: EP4325398A1
Publication Date: 2024-02-21
Application No.: EP23190910.2
Filing Date: 2023-08-10
Applicant: Devron Corporation
Abstract: Provided herein are systems and methods for vertical federated machine learning. Vertical federated machine learning can be performed by a central system communicatively coupled to a plurality of satellite systems. The central system can receive encrypted data from the satellite systems and apply a transformation that transforms the encrypted data into transformed data. The central system can identify matching values in the transformed data and generate a set of location indices that indicate one or more matching values in the transformed data. The central system can transmit instructions to the satellite systems to access data stored at the locations indicated by the location indices and to train a machine learning model using the data associated with those locations.
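The abstract describes the central system's role: receive encrypted values from each satellite, transform them into a comparable form, find the matching values, and send back the row locations each satellite should use for training. A minimal sketch of that matching step follows; using a keyed hash as the "transformation" and the function names are assumptions made only for illustration.

```python
import hashlib
import hmac

def transform(encrypted_ids: list[str], key: bytes) -> list[str]:
    # Stand-in "transformation": a keyed hash that makes values comparable
    # across satellites without revealing the originals to the central system.
    return [hmac.new(key, v.encode(), hashlib.sha256).hexdigest() for v in encrypted_ids]

def location_indices(transformed_a: list[str], transformed_b: list[str]) -> dict[str, list[int]]:
    # For each satellite, the row positions whose values match the other satellite's data.
    common = set(transformed_a) & set(transformed_b)
    return {
        "satellite_a": [i for i, v in enumerate(transformed_a) if v in common],
        "satellite_b": [i for i, v in enumerate(transformed_b) if v in common],
    }

if __name__ == "__main__":
    key = b"shared-transform-key"
    a = transform(["id-1", "id-2", "id-3"], key)  # values received from satellite A
    b = transform(["id-3", "id-1", "id-9"], key)  # values received from satellite B
    # The central system would transmit these indices back to the satellites,
    # instructing each one which locally stored rows to use for model training.
    print(location_indices(a, b))
```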
-
Publication No.: EP4283531A1
Publication Date: 2023-11-29
Application No.: EP23171268.8
Filing Date: 2023-05-03
Applicant: Hitachi, Ltd.
Inventor(s): SUZUKI, Mayumi; TARUMI, Shinji
IPC Classification: G06N3/098
Abstract: An object of the present invention is to generate a prediction model appropriate for each site without requiring data located at a plurality of sites to be transferred outside those sites.
An analysis device capable of communicating with a plurality of learning devices includes a reception unit (301, 401, 1501) that receives transformed features obtained by transforming, in accordance with a predetermined rule, features contained in pieces of learning data individually retained in the plurality of learning devices; a distribution analysis unit (302) that analyzes the distributions of the features across the plurality of learning devices on the basis of the transformed features received by the reception unit (301, 401, 1501) for each of the learning devices; and an output unit (304, 1504) that outputs the distribution analysis result produced by the distribution analysis unit (302).
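The abstract's pipeline is: each learning device transforms its features under a shared, predetermined rule, the analysis device compares the resulting distributions across devices, and the result is output. The sketch below illustrates one such comparison using per-feature histograms and a simple distance; the choice of transformation rule and distance metric are illustrative assumptions.

```python
import numpy as np

def transform_features(features: np.ndarray) -> np.ndarray:
    # Predetermined rule applied on each learning device before sharing
    # (here: standardisation, so raw values never leave the site).
    return (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-9)

def distribution_summary(transformed: np.ndarray, bins: int = 10) -> np.ndarray:
    # Per-feature histogram over a fixed range, normalised to a density.
    hists = [np.histogram(col, bins=bins, range=(-3, 3), density=True)[0]
             for col in transformed.T]
    return np.stack(hists)

def analyze_distributions(summaries: dict[str, np.ndarray]) -> dict[tuple[str, str], float]:
    # Pairwise distance between sites' feature distributions (the output step).
    names = list(summaries)
    return {(a, b): float(np.abs(summaries[a] - summaries[b]).mean())
            for i, a in enumerate(names) for b in names[i + 1:]}

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    site_data = {"site_1": rng.normal(0.0, 1.0, (200, 3)),
                 "site_2": rng.exponential(1.0, (200, 3))}
    summaries = {s: distribution_summary(transform_features(x)) for s, x in site_data.items()}
    print(analyze_distributions(summaries))
```
-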
Publication No.: EP4202680A1
Publication Date: 2023-06-28
Application No.: EP22214447.9
Filing Date: 2022-12-19
Inventor(s): LIU, Ji; ZHANG, Hong; JIA, Juncheng; ZHOU, Ruipu; DOU, Dejing
Abstract: The present disclosure provides a distributed machine learning method and system, a server, a device, and a storage medium, and relates to the field of artificial intelligence technologies, such as machine learning technologies. An implementation includes: acquiring, based on delay information, an optimal scheduling queue of a plurality of edge devices participating in training; and scheduling each edge device of the plurality of edge devices to train a machine learning model based on the optimal scheduling queue. The present disclosure can effectively improve the efficiency of distributed machine learning.
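The described implementation has two stages: derive a scheduling queue of the participating edge devices from their delay information, and then dispatch training in that order. A minimal sketch follows; ordering the queue by ascending reported delay is an assumption used only to make the example concrete, not the disclosed optimisation.

```python
from dataclasses import dataclass

@dataclass
class EdgeDevice:
    device_id: str
    delay_ms: float  # measured communication/computation delay

def build_schedule(devices: list[EdgeDevice]) -> list[EdgeDevice]:
    # Scheduling queue derived from delay information
    # (assumed here: lowest-delay devices are scheduled first).
    return sorted(devices, key=lambda d: d.delay_ms)

def run_round(devices: list[EdgeDevice]) -> None:
    # Dispatch local training to each edge device in queue order.
    for device in build_schedule(devices):
        print(f"scheduling local training on {device.device_id} (delay {device.delay_ms} ms)")

if __name__ == "__main__":
    run_round([EdgeDevice("edge-1", 120.0),
               EdgeDevice("edge-2", 35.0),
               EdgeDevice("edge-3", 80.0)])
```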
-
Publication No.: EP4462316A1
Publication Date: 2024-11-13
Application No.: EP24170676.1
Filing Date: 2024-04-17
Abstract: Disclosed is a method comprising: receiving a plurality of local growing neural gas models from a plurality of distributed trainers, wherein a local growing neural gas model of the plurality of local growing neural gas models represents a local state model of at least one radio access network node; and training a global growing neural gas model based on the plurality of local growing neural gas models, wherein the global growing neural gas model represents a global state model of a plurality of radio access network nodes.
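The abstract describes aggregating several locally trained growing neural gas (GNG) models, each summarising one RAN node's state, into one global GNG. The sketch below is a simplified stand-in: it pools the local models' unit (prototype) vectors and re-clusters them into a global set of prototypes, which is not necessarily how the disclosed global training works.

```python
import numpy as np

def global_prototypes(local_models: list[np.ndarray], n_units: int = 4, iters: int = 25) -> np.ndarray:
    # Pool the unit vectors of all local GNG models reported by the distributed trainers.
    pooled = np.vstack(local_models)
    # Re-cluster the pooled units into a global set of prototypes
    # (plain k-means stands in for the global GNG training step).
    rng = np.random.default_rng(0)
    centers = pooled[rng.choice(len(pooled), n_units, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(pooled[:, None] - centers[None], axis=2), axis=1)
        for k in range(n_units):
            if np.any(labels == k):
                centers[k] = pooled[labels == k].mean(axis=0)
    return centers

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Each local model: the unit vectors learned at one radio access network node.
    locals_ = [rng.normal(loc=m, size=(10, 2)) for m in (0.0, 3.0, 6.0)]
    print("global state-model prototypes:\n", global_prototypes(locals_))
```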
-