-
1.
Publication No.: US20210397980A1
Publication Date: 2021-12-23
Application No.: US17036160
Filing Date: 2020-09-29
IPC: G06N5/02, G06N5/04, G06F40/279, G06K9/62
Abstract: The present disclosure provides an information recommendation method, which relates to the field of knowledge graphs. The method includes: acquiring request information; extracting, from the request information, a request entity word representing an entity; determining recommendation information based on the request entity word and a pre-constructed knowledge graph; and pushing the recommendation information, wherein the knowledge graph is constructed based on a text and indicates a first word representing a source of the text. The present disclosure further provides an information recommendation apparatus, an electronic device and a computer-readable storage medium.
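The flow described in this abstract (extract entity words from a request, look them up in a pre-constructed knowledge graph, push related items) can be sketched roughly as below. This is a minimal illustration, not the patented implementation; the toy graph and the names KNOWLEDGE_GRAPH, extract_entity_words and recommend are all hypothetical.

```python
# Illustrative sketch only: a toy dictionary stands in for the pre-constructed
# knowledge graph; each node also records the source text it was built from.
from typing import Dict, List

# Hypothetical toy graph: entity word -> {"source": ..., "related": [...]}
KNOWLEDGE_GRAPH: Dict[str, dict] = {
    "knowledge graph": {"source": "survey_2020.txt",
                        "related": ["graph embedding", "entity linking"]},
    "entity linking": {"source": "kg_notes.txt",
                       "related": ["candidate generation", "entity disambiguation"]},
}

def extract_entity_words(request: str) -> List[str]:
    """Naive entity extraction: return graph keys mentioned in the request."""
    text = request.lower()
    return [entity for entity in KNOWLEDGE_GRAPH if entity in text]

def recommend(request: str) -> List[str]:
    """Look up each request entity in the graph and collect related items to push."""
    recommendations: List[str] = []
    for entity in extract_entity_words(request):
        recommendations.extend(KNOWLEDGE_GRAPH[entity]["related"])
    return recommendations

if __name__ == "__main__":
    print(recommend("How do I build a knowledge graph for search?"))
    # ['graph embedding', 'entity linking']
```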
-
2.
Publication No.: US20230085599A1
Publication Date: 2023-03-16
Application No.: US18057560
Filing Date: 2022-11-21
Inventors: Jinchang LUO, Haiwei WANG, Junzhao BU, Kunbin CHEN, Wei HE
IPC: G06N3/04
Abstract: The disclosure provides a method for training a tag recommendation model. The method includes: collecting training materials that comprise interest tags in response to receiving an instruction for collecting training materials; obtaining training semantic vectors that comprise the interest tags by representing features of the training materials using a semantic enhanced representation framework; obtaining training encoding vectors by aggregating social networks into the training semantic vectors; and obtaining a tag recommendation model by training a double-layer neural network structure using the training encoding vectors as inputs and the interest tags as outputs. As a result, the interest tags obtained in the disclosure are more accurate.
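A rough sketch of the training setup this abstract outlines, not the patented method: randomly generated vectors stand in for semantic vectors from a semantic enhanced representation framework, a neighbour mean over a toy adjacency matrix stands in for aggregating social networks, and a two-linear-layer PyTorch model plays the role of the double-layer neural network trained from encoding vectors to interest tags.

```python
# Illustrative sketch only; all data here is random and hypothetical.
import torch
import torch.nn as nn

EMB_DIM, NUM_TAGS, NUM_USERS = 64, 10, 32

# Stand-ins: per-user semantic vectors and a sparse social adjacency matrix.
semantic = torch.randn(NUM_USERS, EMB_DIM)
adjacency = (torch.rand(NUM_USERS, NUM_USERS) > 0.9).float()
tags = torch.randint(0, 2, (NUM_USERS, NUM_TAGS)).float()   # multi-hot interest tags

# "Encoding vectors": mix each user's semantics with the mean of their neighbours'.
degree = adjacency.sum(dim=1, keepdim=True).clamp(min=1.0)
encoding = semantic + adjacency @ semantic / degree

# Double-layer network: encoding vectors in, interest-tag scores out.
model = nn.Sequential(nn.Linear(EMB_DIM, 128), nn.ReLU(), nn.Linear(128, NUM_TAGS))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(100):                       # toy training loop
    optimizer.zero_grad()
    loss = loss_fn(model(encoding), tags)
    loss.backward()
    optimizer.step()
```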
-
3.
Publication No.: US20240338962A1
Publication Date: 2024-10-10
Application No.: US18747599
Filing Date: 2024-06-19
Inventors: Haiwei WANG, Zhongwen ZHANG, Gang LI
IPC: G06V30/414, G06V30/418
CPC classification number: G06V30/414, G06V30/418
Abstract: The present disclosure provides an image-based human-computer interaction method, which includes: acquiring a to-be-analyzed image, and determining image layout information and image content information of the to-be-analyzed image, where the to-be-analyzed image includes a variety of modal data, the image layout information represents the distribution of image elements with preset granularity in the to-be-analyzed image, and the image content information represents the content expressed by the modal data in the to-be-analyzed image; and determining, in response to acquiring question information, response information corresponding to the question information according to the image layout information and the image content information, where the question information represents a question posed by a user about the to-be-analyzed image, and the response information represents an answer corresponding to the question information. By extracting layout information and content information from an image, the accuracy of question answering and the user experience of human-computer interaction are improved.
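The two stages this abstract describes (extract layout and content information from an image, then answer a question from them) might be caricatured as follows. It is an illustrative sketch only: the ImageElement records and the keyword-matching answer function are hypothetical stand-ins for real layout analysis, OCR and vision-language reasoning.

```python
# Illustrative sketch only; a real system would derive these elements from the image.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ImageElement:
    kind: str        # e.g. "title", "table", "figure"
    bbox: tuple      # layout information: (x0, y0, x1, y1)
    content: str     # content information: text carried by the element

def answer(question: str, elements: List[ImageElement]) -> Optional[str]:
    """Return the content of the element whose kind or text the question mentions."""
    q = question.lower()
    for el in elements:
        if el.kind in q or any(word in q for word in el.content.lower().split()):
            return el.content
    return None

if __name__ == "__main__":
    layout = [
        ImageElement("title", (0, 0, 600, 40), "Quarterly revenue report"),
        ImageElement("table", (0, 60, 600, 400), "Q1 revenue: 1.2M USD"),
    ]
    print(answer("What does the table say?", layout))   # Q1 revenue: 1.2M USD
```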
-
4.
Publication No.: US20220406034A1
Publication Date: 2022-12-22
Application No.: US17822898
Filing Date: 2022-08-29
Inventors: Jingru GAN, Haiwei WANG, Jinchang LUO, Kunbin CHEN, Wei HE, Shuhui WANG
IPC: G06V10/74, G06F40/295, G06V10/80
Abstract: A method for extracting information includes: obtaining an information stream comprising text and an image; generating, according to the text, embedded representations of textual entity mentions and a textual similarity matrix between the textual entity mentions and candidate textual entities; generating, according to the image, embedded representations of image entity mentions and an image similarity matrix between the image entity mentions and candidate image entities; and determining, based on optimal transport, target textual entities for the textual entity mentions and target image entities for the image entity mentions according to the embedded representations of the textual entity mentions, the embedded representations of the image entity mentions, the textual similarity matrix and the image similarity matrix.
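The optimal-transport step at the end of this abstract can be sketched with a standard entropic (Sinkhorn) routine over a single mention-to-candidate similarity matrix, assuming uniform marginals. The sinkhorn helper below is hypothetical and does not reproduce the patent's joint text/image formulation.

```python
# Minimal Sinkhorn sketch: turn a similarity matrix between entity mentions (rows)
# and candidate entities (columns) into a transport plan, then pick the target
# entity for each mention as the column receiving the most mass.
import numpy as np

def sinkhorn(similarity: np.ndarray, reg: float = 0.1, n_iters: int = 100) -> np.ndarray:
    """Entropic optimal transport with uniform marginals (illustrative only)."""
    K = np.exp(similarity / reg)                  # Gibbs kernel built from similarities
    r = np.full(K.shape[0], 1.0 / K.shape[0])     # uniform mass over mentions
    c = np.full(K.shape[1], 1.0 / K.shape[1])     # uniform mass over candidates
    u, v = np.ones_like(r), np.ones_like(c)
    for _ in range(n_iters):                      # alternate row/column scaling
        u = r / (K @ v)
        v = c / (K.T @ u)
    return np.diag(u) @ K @ np.diag(v)            # transport plan

if __name__ == "__main__":
    sim = np.random.rand(3, 4)        # hypothetical: 3 mentions, 4 candidate entities
    plan = sinkhorn(sim)
    print(plan.argmax(axis=1))        # chosen target entity index per mention
```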
-