-
1.
Publication No.: US20250103963A1
Publication Date: 2025-03-27
Application No.: US18969634
Filing Date: 2024-12-05
IPC: G06N20/00
Abstract: A method for processing query-response information is provided, which relates to the field of artificial intelligence technology, and in particular to the fields of deep learning, large models, intelligent query and response, etc. The method for processing query-response information includes: generating at least one initial response information according to query information provided by an object; acquiring at least one feedback information corresponding to the at least one initial response information, wherein the feedback information indicates a preference degree of the object for the initial response information; and generating a training sample according to the query information, the at least one initial response information and the at least one feedback information. The present disclosure further provides a method for training a conversational model, an electronic device, and a storage medium.
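The abstract describes a three-step flow: generate candidate responses to a query, collect the object's preference feedback for each candidate, and package all three pieces into a training sample. The sketch below illustrates that flow only; the function names (generate_responses, collect_feedback), the TrainingSample structure, and the placeholder scoring are hypothetical assumptions, not the patented implementation.

```python
# Minimal sketch of the training-sample flow described in the abstract.
# generate_responses, collect_feedback and TrainingSample are hypothetical
# illustrations, not the method claimed in the patent.
from dataclasses import dataclass
from typing import List


@dataclass
class TrainingSample:
    query: str             # query information provided by the object (user)
    responses: List[str]   # at least one initial response information
    feedback: List[float]  # preference degree for each initial response


def generate_responses(query: str, n: int = 2) -> List[str]:
    """Placeholder for a conversational model producing initial responses."""
    return [f"candidate response {i} to: {query}" for i in range(n)]


def collect_feedback(responses: List[str]) -> List[float]:
    """Placeholder for acquiring the object's preference degree per response."""
    return [1.0 if i == 0 else 0.0 for i, _ in enumerate(responses)]


def build_training_sample(query: str) -> TrainingSample:
    responses = generate_responses(query)   # step 1: initial response information
    feedback = collect_feedback(responses)  # step 2: feedback information
    return TrainingSample(query, responses, feedback)  # step 3: training sample


if __name__ == "__main__":
    print(build_training_sample("How do large models handle long contexts?"))
```

Such preference-labeled samples are the usual input for preference-based fine-tuning of a conversational model, which matches the disclosure's stated purpose of training a conversational model.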
-
2.
Publication No.: US20250094789A1
Publication Date: 2025-03-20
Application No.: US18968810
Filing Date: 2024-12-04
Inventor: Hua LU , Shilong FAN , Zeyang LEI , Bingjin CHEN , Siqi BAO , Hua WU
IPC: G06N3/0475
Abstract: A method for evaluating a large model, an electronic device and a computer readable storage medium are provided, which relate to the field of artificial intelligence technology, and in particular to the fields of large model technology and deep learning technology. The method includes: evaluating response information of each of M large language models for an input instruction based on a preset evaluation rule, so as to obtain first evaluation information for each response information, where M is a positive integer greater than 1; evaluating, in response to the first evaluation information for the M large language models being consistent with each other, each response information in a plurality of evaluation dimensions, so as to obtain second evaluation information for each response information; and determining an evaluation result representing a responsiveness of each large language model according to the second evaluation information for each response information.
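The abstract outlines a two-stage evaluation: a rule-based first pass over each model's response, and a multi-dimensional second pass that runs only when the first-pass results agree across all M models. The sketch below follows that control flow under stated assumptions; evaluate_by_rule, evaluate_dimensions, the dimension names, and the aggregation by averaging are hypothetical illustrations, not the patented method.

```python
# Minimal sketch of the two-stage evaluation flow described in the abstract.
# The rule, dimensions and scoring functions are hypothetical placeholders.
from typing import Dict

DIMENSIONS = ["relevance", "fluency", "factuality"]  # example evaluation dimensions


def evaluate_by_rule(response: str) -> int:
    """First evaluation: a preset rule, e.g. a pass/fail check on the response."""
    return 1 if len(response) > 0 else 0


def evaluate_dimensions(response: str) -> Dict[str, float]:
    """Second evaluation: score the response in a plurality of dimensions."""
    return {dim: min(1.0, len(response) / 100.0) for dim in DIMENSIONS}


def evaluate_models(responses: Dict[str, str]) -> Dict[str, float]:
    """responses maps model name -> its response to the same input instruction."""
    # Stage 1: first evaluation information for each response, from the preset rule.
    first = {model: evaluate_by_rule(resp) for model, resp in responses.items()}

    # Stage 2 runs only when the first evaluations are consistent with each other.
    if len(set(first.values())) > 1:
        return {model: float(score) for model, score in first.items()}

    second = {model: evaluate_dimensions(resp) for model, resp in responses.items()}
    # Evaluation result: aggregate the per-dimension scores for each model.
    return {model: sum(scores.values()) / len(scores) for model, scores in second.items()}


if __name__ == "__main__":
    result = evaluate_models({
        "model_a": "Paris is the capital of France.",
        "model_b": "The capital of France is Paris, located on the Seine.",
    })
    print(result)  # per-model evaluation result representing responsiveness
```

The early return when the first evaluations disagree reflects the abstract's condition that the finer-grained, multi-dimensional comparison is triggered only once the coarse rule-based check cannot separate the models.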
-