-
Publication No.: US20220129753A1
Publication Date: 2022-04-28
Application No.: US17572921
Application Date: 2022-01-11
Inventors: Yuxiang LU, Jiaxiang LIU, Xuyi CHEN, Shikun FENG, Shuohuan WANG, Yu SUN, Shiwei HUANG, Jingzhou HE
Abstract: A pre-training method of a neural network model, an electronic device, and a medium. Pre-training data is input into an initial neural network model, which is pre-trained in a first training mode in which the plurality of hidden layers share one set of hidden layer parameters, and a loss value of the initial neural network model is obtained. If the loss value is less than a preset threshold, the initial neural network model continues to be pre-trained in a second training mode, in which each of the plurality of hidden layers has its own hidden layer parameters.
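The abstract describes two pre-training modes: one in which all hidden layers share a single parameter set, and one in which each layer owns its parameters, with the switch triggered by the loss falling below a preset threshold. A minimal PyTorch sketch of that switch follows; the names (`HiddenLayer`, `TwoStageModel`, `switch_to_second_mode`) and the copy-based initialization of the per-layer parameters are illustrative assumptions, not details from the patent.

```python
import copy
import torch
import torch.nn as nn

class HiddenLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.ff = nn.Linear(dim, dim)

    def forward(self, x):
        return torch.relu(self.ff(x))

class TwoStageModel(nn.Module):
    def __init__(self, dim, num_layers):
        super().__init__()
        self.shared = HiddenLayer(dim)   # single parameter set reused by every layer
        self.layers = None               # populated when switching to the second mode
        self.num_layers = num_layers

    def forward(self, x):
        if self.layers is None:          # first training mode: layers share parameters
            for _ in range(self.num_layers):
                x = self.shared(x)
        else:                            # second training mode: per-layer parameters
            for layer in self.layers:
                x = layer(x)
        return x

    def switch_to_second_mode(self):
        # Seed each layer with its own copy of the shared weights (an assumption),
        # so first-mode training carries over into independent per-layer training.
        self.layers = nn.ModuleList(
            copy.deepcopy(self.shared) for _ in range(self.num_layers)
        )

model = TwoStageModel(dim=32, num_layers=6)
x = torch.randn(4, 32)
y = model(x)                     # first mode: shared parameters
model.switch_to_second_mode()    # e.g. once loss < preset threshold
y = model(x)                     # second mode: per-layer parameters
```

Note that after the switch the optimizer would need to be rebuilt over the new per-layer parameters, since they did not exist when training began.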
-
Publication No.: US20250094792A1
Publication Date: 2025-03-20
Application No.: US18968790
Application Date: 2024-12-04
Inventors: Bo KE, Xuyi CHEN, Zhengjie HUANG, Shikun FENG, Weibin LI, Shiwei HUANG
IPC: G06N3/0495, G06N3/0475, G06N3/0499, G06N3/09
Abstract: A task execution method for a large model, an electronic device, and a storage medium are provided, which relate to a field of artificial intelligence technology, particularly to fields of deep learning technology and large model technology. The method includes: executing a modality routing task by using a target computing unit based on a target feature to be processed to obtain a modality recognition result; executing a field routing task by using the target computing unit based on the target feature to be processed and a target field gating model parameter to obtain a field recognition result; and executing a feedforward task by using the target computing unit based on the target feature to be processed and a target feedforward task model parameter to obtain a task execution result.
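Read literally, the abstract runs three sub-tasks on one target computing unit: modality routing, field routing against a field gating parameter, and a feedforward task. A speculative PyTorch sketch follows; the module layout, the `argmax` routing decisions, and all names (`TargetComputingUnit`, `modality_gate`, `field_gate`) are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class TargetComputingUnit(nn.Module):
    def __init__(self, dim, num_modalities, num_fields):
        super().__init__()
        self.modality_gate = nn.Linear(dim, num_modalities)
        self.field_gate = nn.Linear(dim, num_fields)   # stands in for the field gating parameter
        self.ffn = nn.Sequential(                      # stands in for the feedforward parameter
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, feature):
        modality_result = self.modality_gate(feature).argmax(dim=-1)  # modality routing task
        field_result = self.field_gate(feature).argmax(dim=-1)        # field routing task
        task_result = self.ffn(feature)                               # feedforward task
        return modality_result, field_result, task_result

unit = TargetComputingUnit(dim=32, num_modalities=3, num_fields=8)
modality, field, out = unit(torch.randn(4, 32))
```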
-
Publication No.: US20230047980A1
Publication Date: 2023-02-16
Application No.: US17976049
Application Date: 2022-10-28
Inventors: Xuyi CHEN, Weixin LIU, Yuxiang LU, Jiaxiang LU, Shiwei HUANG
IPC: G06F40/40
Abstract: A method of training a deep learning model, a method of processing a natural language, an electronic device, and a storage medium are provided, which relate to the field of artificial intelligence, in particular to deep learning technology and natural language processing technology. The method includes: inputting first sample data into a first deep learning model to obtain a first output result; training the first deep learning model according to the first output result and a first target output result, where the first target output result is obtained by processing the first sample data using a reference deep learning model; inputting second sample data into a second deep learning model to obtain a second output result; and training the second deep learning model according to the second output result and a second target output result, to obtain a trained second deep learning model.
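The two training stages chain naturally as a distillation pipeline: a reference model supplies the first target output, and, under one plausible reading, the trained first model supplies the second. A minimal sketch under those assumptions follows, using toy linear models and an MSE loss, neither of which the abstract specifies.

```python
import torch
import torch.nn as nn

dim = 16
reference_model = nn.Linear(dim, dim)   # plays the reference deep learning model
first_model = nn.Linear(dim, dim)
second_model = nn.Linear(dim, dim)
opt1 = torch.optim.Adam(first_model.parameters())
opt2 = torch.optim.Adam(second_model.parameters())
loss_fn = nn.MSELoss()                  # loss choice is an assumption

def train_step(model, optimizer, sample, target):
    # Train the model to match the given target output for this sample.
    loss = loss_fn(model(sample), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

first_sample = torch.randn(8, dim)
second_sample = torch.randn(8, dim)

# Stage 1: the first target output comes from the reference model.
with torch.no_grad():
    first_target = reference_model(first_sample)
train_step(first_model, opt1, first_sample, first_target)

# Stage 2: here the trained first model supplies the second target output
# (an assumption; the abstract does not name its source).
with torch.no_grad():
    second_target = first_model(second_sample)
train_step(second_model, opt2, second_sample, second_target)
```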
-
Publication No.: US20250028958A1
Publication Date: 2025-01-23
Application No.: US18908380
Application Date: 2024-10-07
Inventors: Xuyi CHEN, Bo KE, Chenhui LI, Zhengjie HUANG, Shiwei HUANG, Weibin LI, Shikun FENG
IPC: G06N3/08
Abstract: A data processing method, and a data processing model and a training method therefor are provided, and relate to the field of artificial intelligence, and specifically, to natural language processing, deep learning technologies, and large model technologies. An implementation solution includes: determining input data, where the input data includes a plurality of tokens; determining a correlation between each of the plurality of tokens and each of a plurality of expert networks based on a gating matrix, where the plurality of expert networks are used to reinforce the plurality of tokens; allocating the plurality of tokens to the plurality of expert networks in a uniform manner based on the correlation and a preset capacity of each expert network, to reinforce the plurality of tokens; and determining a data processing result based on the plurality of reinforced tokens.
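The allocation step pairs a gating-matrix correlation score with a preset per-expert capacity. One plausible reading, sketched below, is a greedy assignment that visits token-expert pairs from the highest correlation downward and skips experts that are already full; the function name `allocate_tokens` and the greedy strategy itself are assumptions, since the abstract does not fix the exact algorithm.

```python
import torch

def allocate_tokens(tokens, gating_matrix, capacity):
    """Greedy, capacity-capped token-to-expert assignment (one possible reading)."""
    num_experts = gating_matrix.size(1)
    scores = tokens @ gating_matrix                # correlation of each token with each expert
    assignment = torch.full((tokens.size(0),), -1, dtype=torch.long)
    load = [0] * num_experts
    # Visit (token, expert) pairs from the highest correlation downward,
    # skipping experts that have already reached their preset capacity.
    for idx in scores.flatten().argsort(descending=True):
        t, e = divmod(idx.item(), num_experts)
        if assignment[t] == -1 and load[e] < capacity:
            assignment[t] = e
            load[e] += 1
    return assignment

tokens = torch.randn(8, 16)           # 8 tokens, dim 16
gating_matrix = torch.randn(16, 4)    # 4 expert networks
print(allocate_tokens(tokens, gating_matrix, capacity=2))
```

Each token would then be passed through (reinforced by) its assigned expert network before the final data processing result is computed.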