Method and apparatus for decentralized supervised learning in NLP applications
Abstract:
A method of training a neural network as a natural language processing (NLP) model comprises: inputting annotated training data to first architecture portions of the neural network, the first architecture portions being executed respectively in a plurality of distributed client computing devices in communication with a server computing device, the training data being derived from text data private to the client computing device in which the respective first architecture portion is executed, the server computing device having no access to any of the private text data; deriving from the training data, using the first architecture portions, weight matrices of numeric weights which are decoupled from the private text data; concatenating the weight matrices, in a second architecture portion of the neural network executed in the server computing device, to obtain a single concatenated weight matrix; and training the NLP model, on the second architecture portion, using the concatenated weight matrix.
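The claimed flow can be sketched in a few lines. This is only an illustrative toy, not the patented implementation: `client_derive_weights` is a hypothetical stand-in for a locally trained first architecture portion (here a random projection of character counts replaces an actual embedding layer), and the server-side step shows only the concatenation into a single weight matrix before model training.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_derive_weights(private_texts, dim=8):
    """Hypothetical first architecture portion: derive a numeric
    weight matrix from private text, decoupled from the raw text.
    A random projection of per-character counts stands in for a
    locally trained layer."""
    vocab = 128  # ASCII codepoints as a toy vocabulary
    counts = np.zeros((len(private_texts), vocab))
    for i, text in enumerate(private_texts):
        for ch in text:
            counts[i, ord(ch) % vocab] += 1
    projection = rng.normal(size=(vocab, dim))
    return counts @ projection  # shape: (num_texts, dim)

# Each client holds text data the server never sees.
client_corpora = [
    ["hello world", "good morning"],
    ["bonjour", "guten tag", "hola"],
]
client_matrices = [client_derive_weights(c) for c in client_corpora]

# Second architecture portion (server side): concatenate the
# per-client weight matrices into a single weight matrix; only
# numeric weights, never raw text, reach the server.
concatenated = np.concatenate(client_matrices, axis=0)
print(concatenated.shape)  # (5, 8): rows from all clients combined
```

The server would then train the NLP model head on `concatenated`; that training step is omitted here since the abstract does not specify the second architecture portion's internals.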