-
Publication No.: US12260186B2
Publication date: 2025-03-25
Application No.: US17992436
Filing date: 2022-11-22
Inventor: Zhe Hu , Jiachen Liu , Xinyan Xiao
Abstract: A method of generating a text, a method of training a text generation model, an electronic device, and a storage medium, which relate to the field of computer technology, in particular to the fields of deep learning and natural language processing technologies. A specific implementation solution includes: determining a reference feature representation of target semantic information; determining, based on the reference feature representation and at least one predetermined logical character, at least one sentence latent representation respectively corresponding to the at least one predetermined logical character; and generating a target text content based on the at least one sentence latent representation.
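The three steps of this claim might be sketched as follows, using toy dimensions and random stand-in parameters; all names, sizes, and the number of logical characters are hypothetical, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, VOCAB = 8, 100
NUM_LOGIC_CHARS = 3  # hypothetical count of predetermined logical characters

# Stand-ins for learned parameters (random here, purely for illustration).
W_ref = rng.normal(size=(DIM, DIM))
logic_char_embeddings = rng.normal(size=(NUM_LOGIC_CHARS, DIM))
vocab_proj = rng.normal(size=(DIM, VOCAB))

def reference_representation(semantic_info):
    """Step 1: map the target semantic information to a reference feature vector."""
    return np.tanh(W_ref @ semantic_info)

def sentence_latents(ref):
    """Step 2: one sentence-level latent per predetermined logical character,
    each conditioned on the reference representation."""
    return np.tanh(logic_char_embeddings + ref)  # (NUM_LOGIC_CHARS, DIM)

def generate(latents):
    """Step 3: decode each sentence latent to a token id (greedy, 1 token each)."""
    return (latents @ vocab_proj).argmax(axis=-1).tolist()

semantic = rng.normal(size=DIM)
lat = sentence_latents(reference_representation(semantic))
tokens = generate(lat)
```

The point of the sketch is the data flow: one latent per logical character, and the generated content indexed by those latents.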
-
Publication No.: US20230114673A1
Publication date: 2023-04-13
Application No.: US18059645
Filing date: 2022-11-29
Inventor: Wei Li , Xinyan Xiao , Jiachen Liu
Abstract: A method for recognizing a token is performed by an electronic device. The method includes: obtaining first modal data and second modal data; determining a first token of the first modal data and a second token of the second modal data; determining an associated token between the first token and the second token; and recognizing a target shared token between the first modal data and the second modal data based on the first token, the second token and the associated token.
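A minimal set-based sketch of the claimed flow, assuming tokens are associated either by identity or via a hypothetical cross-modal alignment map (the map and the example data are invented for illustration):

```python
def associated_tokens(first_tokens, second_tokens, cross_modal_map):
    """Pairs (a, b) linking a modality-1 token to a modality-2 token,
    either by identity or via an alignment map learned elsewhere."""
    return {(a, b) for a in first_tokens for b in second_tokens
            if a == b or cross_modal_map.get(a) == b}

def shared_tokens(first_tokens, second_tokens, cross_modal_map):
    """A target shared token is any modality-1 token that participates
    in at least one associated pair."""
    pairs = associated_tokens(first_tokens, second_tokens, cross_modal_map)
    return {a for a, _ in pairs}

text_tokens = {"dog", "park", "puppy"}      # first modal data (text)
image_labels = {"dog", "tree"}              # second modal data (image tags)
alignment = {"puppy": "dog"}                # hypothetical alignment
result = shared_tokens(text_tokens, image_labels, alignment)
# "dog" matches by identity; "puppy" via the alignment; "park" does not.
```

In the patent the association is presumably learned rather than a lookup table; the sketch only shows how shared tokens follow from the first, second, and associated tokens.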
-
Publication No.: US12106062B2
Publication date: 2024-10-01
Application No.: US17572930
Filing date: 2022-01-11
Inventor: Zhe Hu , Zhiwei Cao , Jiachen Liu , Xinyan Xiao
CPC classification number: G06F40/40
Abstract: The disclosure provides a method for generating a text. The method includes: obtaining a coding sequence of a first text by coding the first text; obtaining a controllable attribute of a second text to be generated; predicting a hidden state of the second text based on the coding sequence of the first text and the controllable attribute of the second text; and obtaining a second text corresponding to the first text by decoding the coding sequence of the first text based on the hidden state of the second text.
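The encode / predict-hidden-state / decode pipeline might look roughly like this with toy mean-pooled embeddings; the attribute names, dimensions, and greedy decoder are all assumptions, not the patent's model:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, VOCAB = 8, 50

embed = rng.normal(size=(VOCAB, DIM))           # toy token embeddings
attr_embed = {"formal": rng.normal(size=DIM),   # hypothetical controllable attributes
              "casual": rng.normal(size=DIM)}
W_hidden = rng.normal(size=(DIM, 2 * DIM))

def encode(token_ids):
    """Coding sequence of the first text: per-token embeddings."""
    return embed[token_ids]                      # (len, DIM)

def predict_hidden(coding_seq, attribute):
    """Predict the second text's hidden state from the mean-pooled coding
    sequence concatenated with the controllable attribute vector."""
    ctx = coding_seq.mean(axis=0)
    return np.tanh(W_hidden @ np.concatenate([ctx, attr_embed[attribute]]))

def decode(coding_seq, hidden, steps=4):
    """Greedy decoding conditioned on both the coding sequence and the hidden state."""
    state, out = hidden, []
    for _ in range(steps):
        tok = int((embed @ (state + coding_seq.mean(axis=0))).argmax())
        out.append(tok)
        state = np.tanh(state + embed[tok])      # feed the generated token back
    return out

first_text = [3, 7, 11]
h = predict_hidden(encode(first_text), "formal")
second_text = decode(encode(first_text), h)
```

The design point mirrored here is that the hidden state, not the decoder input, carries the controllable attribute.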
-
Publication No.: US12093297B2
Publication date: 2024-09-17
Application No.: US17577561
Filing date: 2022-01-18
Inventor: Wenhao Wu , Wei Li , Xinyan Xiao , Jiachen Liu
CPC classification number: G06F16/345 , G06F40/51 , G06F40/56
Abstract: The present disclosure provides a summary generation model training method and apparatus, a device and a storage medium, and relates to the field of computer technologies, in particular to the field of artificial intelligence such as natural language processing and deep learning. The summary generation model training method includes: acquiring a document representation corresponding to a document sample; constructing, based on the document representation, a summary representation corresponding to the document representation, the summary representation including a positive summary representation and a negative summary representation; and constructing a total contrastive loss function based on the document representation, the positive summary representation and the negative summary representation, and training a summary generation model based on the total contrastive loss function. The present disclosure may improve accuracy of the summary generation model.
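A total contrastive loss over document, positive-summary, and negative-summary representations can be sketched as an InfoNCE-style objective; the cosine similarity, temperature, and averaging are assumptions, since the abstract does not specify the exact form:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def total_contrastive_loss(doc, pos_summaries, neg_summaries, tau=0.1):
    """For each positive summary, score it against all negative summaries
    relative to the document representation, and average the losses."""
    total = 0.0
    for pos in pos_summaries:
        logits = np.array([cosine(doc, pos)] +
                          [cosine(doc, n) for n in neg_summaries]) / tau
        logits -= logits.max()                   # numerical stability
        total += -np.log(np.exp(logits[0]) / np.exp(logits).sum())
    return total / len(pos_summaries)

doc = np.array([1.0, 0.0, 0.0])
good = [np.array([0.9, 0.1, 0.0])]              # close to the document
bad = [np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
loss_aligned = total_contrastive_loss(doc, good, bad)
loss_swapped = total_contrastive_loss(doc, bad[:1], good + bad[1:])
```

Treating a well-aligned summary as the positive yields a much smaller loss than treating an unrelated one as positive, which is the gradient signal the training method relies on.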
-
Publication No.: US20230084438A1
Publication date: 2023-03-16
Application No.: US17992436
Filing date: 2022-11-22
Inventor: Zhe Hu , Jiachen Liu , Xinyan Xiao
Abstract: A method of generating a text, a method of training a text generation model, an electronic device, and a storage medium, which relate to the field of computer technology, in particular to the fields of deep learning and natural language processing technologies. A specific implementation solution includes: determining a reference feature representation of target semantic information; determining, based on the reference feature representation and at least one predetermined logical character, at least one sentence latent representation respectively corresponding to the at least one predetermined logical character; and generating a target text content based on the at least one sentence latent representation.
-
Publication No.: US20220327809A1
Publication date: 2022-10-13
Application No.: US17809133
Filing date: 2022-06-27
Inventor: Wei Li , Can Gao , Guocheng Niu , Xinyan Xiao , Hao Liu , Jiachen Liu , Hua Wu , Haifeng Wang
IPC: G06V10/778 , G06V10/774 , G06V10/26 , G06F40/284
Abstract: A method for training a model based on multi-modal data joint learning includes: obtaining multi-modal data, in which the multi-modal data include at least one type of single-modal data and at least one type of paired multi-modal data; inputting the single-modal data and the paired multi-modal data into a decoupling attention Transformer network model to respectively generate token semantic representation features and cross-modal semantic representation features; and training the decoupling attention Transformer network model based on the token semantic representation features and the cross-modal semantic representation features.
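One plausible reading of the decoupled routing, sketched with plain scaled dot-product attention: single-modal inputs pass through self-attention only, while paired inputs additionally cross-attend to each other. Dimensions, the one-layer depth, and the exact attention wiring are all assumptions:

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def encode_single(tokens):
    """Single-modal branch: intra-modal self-attention only,
    yielding token semantic representation features."""
    return attention(tokens, tokens, tokens)

def encode_pair(text, image):
    """Paired branch: self-attention per modality (the 'decoupled' part),
    then cross-attention between modalities, yielding cross-modal
    semantic representation features for each side."""
    t, i = encode_single(text), encode_single(image)
    return attention(t, i, i), attention(i, t, t)

rng = np.random.default_rng(2)
text = rng.normal(size=(5, 16))      # 5 text tokens, dim 16
image = rng.normal(size=(7, 16))     # 7 image patches, dim 16
token_feats = encode_single(text)
text2img, img2text = encode_pair(text, image)
```

Both branches share the same attention primitive, which is what lets one Transformer jointly learn from single-modal and paired data.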
-