-
Publication No.: US20230032324A1
Publication Date: 2023-02-02
Application No.: US17966127
Filing Date: 2022-10-14
Inventor: Fan WANG , Hao TIAN , Haoyi XIONG , Hua WU , Jingzhou HE , Haifeng WANG
IPC: G06N20/00
Abstract: A method for training a decision-making model parameter, a decision determination method, an electronic device, and a non-transitory computer-readable storage medium are provided. In the method, a perturbation parameter is generated according to a meta-parameter, and first observation information of a primary training environment is acquired based on the perturbation parameter. According to the first observation information, an evaluation parameter of the perturbation parameter is determined. According to the perturbation parameter and the evaluation parameter thereof, an updated meta-parameter is generated. The updated meta-parameter is determined as a target meta-parameter when it is determined, according to the meta-parameter and the updated meta-parameter, that a condition for stopping primary training is met. According to the target meta-parameter, a target memory parameter corresponding to a secondary training task is determined, where the target memory parameter and the target meta-parameter are used to make a decision corresponding to a prediction task.
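The perturb-evaluate-update loop described in this abstract resembles an evolution-strategies-style search over the meta-parameter. A minimal sketch, assuming a scalar `evaluate` function standing in for the training-environment observation and evaluation steps (function names and hyperparameters here are hypothetical, not the patent's):

```python
import numpy as np

def train_meta_parameter(evaluate, meta, sigma=0.1, lr=0.05, pop=8,
                         tol=1e-4, max_iter=100):
    """Sketch: perturb the meta-parameter, score each perturbation in the
    training environment, and update until successive meta-parameters
    barely change (the stopping condition)."""
    rng = np.random.default_rng(0)
    for _ in range(max_iter):
        noise = rng.standard_normal((pop, meta.size))
        perturbed = meta + sigma * noise                     # perturbation parameters
        scores = np.array([evaluate(p) for p in perturbed])  # evaluation parameters
        scores_n = (scores - scores.mean()) / (scores.std() + 1e-8)
        updated = meta + lr / (pop * sigma) * noise.T @ scores_n
        if np.linalg.norm(updated - meta) < tol:             # stop primary training
            return updated                                   # target meta-parameter
        meta = updated
    return meta
```

Each iteration samples perturbation parameters around the current meta-parameter, scores them, and moves the meta-parameter toward higher-scoring perturbations, mirroring the loop and stopping condition in the abstract.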
-
Publication No.: US20220198153A1
Publication Date: 2022-06-23
Application No.: US17694034
Filing Date: 2022-03-14
Inventor: Jian GONG , Yu SUN , Hao TIAN , Hua WU , Haifeng WANG , Qiaoqiao SHE
IPC: G06F40/40 , G06F40/284 , G06F40/205
Abstract: A model training method, a model training platform, an electronic device and a storage medium are provided, which can be used in the field of artificial intelligence, particularly the fields of natural language processing and deep learning. The model training method includes: receiving an input; determining, based on the input, a user-oriented prefabricated function; determining, based on the input, a model training function; determining, based on the input, a pre-trained model; determining, based on the input, a network structure associated with the pre-trained model so as to support use of the pre-trained model; training, based on the input, the model by using the prefabricated function, the model training function, and the pre-trained model; and providing an output associated with a trained model.
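The platform's flow, determining a prefabricated function, a model training function, and a pre-trained model from the input and then training, can be sketched as a registry-lookup pipeline. `TrainingRequest` and the registry names below are assumptions for illustration, not the patent's API:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class TrainingRequest:
    task: str   # drives every "determining, based on the input" step
    data: Any

def run_platform(request, prefab_registry, trainer_registry, model_registry):
    """Resolve each component from the input, then train and return the output."""
    prefab = prefab_registry[request.task]     # user-oriented prefabricated function
    trainer = trainer_registry[request.task]   # model training function
    model = model_registry[request.task]()     # pre-trained model + network structure
    return trainer(model, prefab(request.data))
```

The registries make the abstract's repeated "determining, based on the input" steps explicit: every component is selected by the same request before training begins.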
-
Publication No.: US20220174369A1
Publication Date: 2022-06-02
Application No.: US17651714
Filing Date: 2022-02-18
IPC: H04N21/488 , G06F40/284 , G06F16/783 , G06V40/16
Abstract: The present disclosure provides examples of a method and apparatus for processing a video, a device and a storage medium. The method may include: acquiring a target video and a target comment of the target video; recognizing a picture in the target video to obtain text information of the picture; determining a target comment matching a content of the text information; and inserting, in response to displaying the picture in the target video, the target comment matching the content in a form of a bullet screen.
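The matching step, pairing recognized picture text with a comment, could be as simple as token overlap. A hedged sketch with a hypothetical `match_comment` helper:

```python
def match_comment(frame_text, comments):
    """Pick the comment whose token overlap with the recognized frame
    text is highest; return None when nothing matches."""
    frame_tokens = set(frame_text.lower().split())
    def overlap(comment):
        return len(frame_tokens & set(comment.lower().split()))
    best = max(comments, key=overlap)
    return best if overlap(best) > 0 else None
```

A production system would use OCR for the recognition step and semantic similarity rather than raw token overlap; this sketch only illustrates the match-then-insert flow before the matched comment is shown as a bullet screen.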
-
Publication No.: US20210397980A1
Publication Date: 2021-12-23
Application No.: US17036160
Filing Date: 2020-09-29
IPC: G06N5/02 , G06N5/04 , G06F40/279 , G06K9/62
Abstract: The present disclosure provides an information recommendation method, which relates to a field of knowledge graph. The method includes: acquiring request information; extracting a request entity word representing an entity from the request information; determining recommendation information based on the request entity word and a pre-constructed knowledge graph; and pushing the recommendation information, wherein the knowledge graph is constructed based on a text, and the knowledge graph indicates a first word representing a source of the text. The present disclosure further provides an information recommendation apparatus, an electronic device and a computer-readable storage medium.
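The extract-then-look-up flow can be sketched with the pre-constructed knowledge graph as an adjacency mapping. The extraction below is a naive vocabulary match, a placeholder for the patent's entity-word extraction:

```python
def recommend(request, knowledge_graph, entity_vocab):
    """Extract a request entity word, then return its neighbors in the
    pre-constructed knowledge graph as recommendation information."""
    entity = next((w for w in request.lower().split() if w in entity_vocab), None)
    if entity is None:
        return []
    return knowledge_graph.get(entity, [])
```

The graph here maps each entity to related items; the abstract additionally notes that the graph records a first word representing the source of the text it was built from, which this sketch omits.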
-
Publication No.: US20240412002A1
Publication Date: 2024-12-12
Application No.: US18747641
Filing Date: 2024-06-19
Inventor: Yanbin ZHAO , Siyu DING , Shuohuan WANG , Yu SUN , Hao TIAN , Hua WU , Haifeng WANG
IPC: G06F40/35
Abstract: A method is provided. The method includes: obtaining a first sample dataset; inputting at least one first question text corresponding to at least one piece of first sample data into a dialog model separately to obtain at least one first answer prediction result; inputting each second question text into the dialog model to obtain a second answer prediction result output by the dialog model; inputting the second answer prediction result into a reward model to obtain a score of the second answer prediction result output by the reward model; determining a comprehensive loss based on the at least one first answer prediction result, a first answer text of each of the at least one piece of first sample data, and a score corresponding to each of at least one piece of second sample data; and adjusting at least one parameter of the dialog model based on the comprehensive loss.
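The "comprehensive loss" combines a supervised term from the first sample set with a reward-model score from the second. One hedged reading, with `alpha` a hypothetical mixing weight:

```python
import numpy as np

def comprehensive_loss(sup_losses, reward_scores, alpha=0.5):
    """Mix the supervised loss on first-sample answer predictions with the
    reward-model scores on second-sample predictions; a higher reward
    lowers the combined loss used to adjust the dialog model."""
    supervised = float(np.mean(sup_losses))
    reward = float(np.mean(reward_scores))
    return alpha * supervised - (1 - alpha) * reward
```

This is a sketch of the combination step only; the abstract does not specify the mixing rule, and the supervised losses and reward scores would come from the dialog model and reward model forward passes respectively.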
-
Publication No.: US20220293092A1
Publication Date: 2022-09-15
Application No.: US17828773
Filing Date: 2022-05-31
Inventor: Siyu DING , Chao PANG , Shuohuan WANG , Yanbin ZHAO , Junyuan SHANG , Yu SUN , Shikun FENG , Hao TIAN , Hua WU , Haifeng WANG
Abstract: The present application provides a method of training a natural language processing model, which relates to the field of artificial intelligence, and in particular to natural language processing. A specific implementation scheme includes: performing a semantic learning for multi-tasks on an input text, so as to obtain a semantic feature for the multi-tasks, wherein the multi-tasks include a plurality of branch tasks; performing a feature learning for each branch task based on the semantic feature, so as to obtain a first output result for each branch task; calculating a loss for each branch task according to the first output result for the branch task; and adjusting a parameter of the natural language processing model according to the loss for each branch task. The present application further provides a method of processing a natural language, an electronic device, and a storage medium.
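The described scheme, a shared semantic feature feeding one head per branch task with per-branch losses, can be sketched as follows (shapes, the `tanh` encoder, and squared-error losses are illustrative assumptions):

```python
import numpy as np

def multitask_step(x, shared_w, heads, targets):
    """One multi-task step: a shared semantic feature feeds one linear
    head per branch task, and per-branch losses are summed into the
    total used to adjust the model's parameters."""
    feat = np.tanh(x @ shared_w)      # semantic feature shared by all branch tasks
    losses = {}
    for name, w in heads.items():
        out = feat @ w                # first output result for this branch task
        losses[name] = float(np.mean((out - targets[name]) ** 2))
    return losses, sum(losses.values())
```

In a real implementation each branch would have its own loss type (e.g. cross-entropy for classification branches), and the parameter adjustment would backpropagate through both the heads and the shared encoder.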
-