-
Publication No.: US20250077848A1
Publication Date: 2025-03-06
Application No.: US18817793
Filing Date: 2024-08-28
Applicant: NEC Laboratories America, Inc.
Inventor: Xujiang Zhao , Haoyu Wang , Zhengzhang Chen , Wei Cheng , Haifeng Chen , Yanchi Liu , Chen Ling
IPC: G06N3/0475 , G16H80/00
Abstract: Systems and methods for a demonstration uncertainty-based artificial intelligence model for open information extraction. A large language model (LLM) can generate initial structured sentences using an initial prompt for a domain-specific instruction extracted from an unstructured text input. Structural similarities between the initial structured sentences and sentences from a training dataset can be determined to obtain structurally similar sentences. The LLM can identify relational triplets from combinations of tokens from the generated sentences and the structurally similar sentences. The relational triplets can be filtered based on a calculated demonstration uncertainty to obtain a filtered triplet list. A domain-specific task can be performed using the filtered triplet list to assist the decision-making process of a decision-making entity.
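The filtering step described in the abstract can be illustrated with a minimal sketch. The patent does not disclose a concrete uncertainty formula, so the one below is an assumption for illustration only: a triplet's demonstration uncertainty is taken as the fraction of demonstration-conditioned extraction runs that did not produce it, and triplets above a threshold are discarded. Function and variable names (`demonstration_uncertainty`, `filter_triplets`) are hypothetical.

```python
def demonstration_uncertainty(triplet, runs):
    """Hypothetical measure: fraction of demonstration-conditioned
    extraction runs that did NOT yield this triplet. A triplet that
    appears consistently across runs gets low uncertainty."""
    hits = sum(1 for run in runs if triplet in run)
    return 1.0 - hits / len(runs)

def filter_triplets(candidates, runs, threshold=0.5):
    """Keep only triplets whose demonstration uncertainty is below
    the threshold, yielding the filtered triplet list."""
    return [t for t in candidates
            if demonstration_uncertainty(t, runs) < threshold]

# Toy example: three extraction runs, two candidate triplets.
runs = [
    {("aspirin", "treats", "headache"), ("aspirin", "causes", "nausea")},
    {("aspirin", "treats", "headache")},
    {("aspirin", "treats", "headache")},
]
candidates = [("aspirin", "treats", "headache"),
              ("aspirin", "causes", "nausea")]
print(filter_triplets(candidates, runs, threshold=0.5))
```

In this toy run the consistently extracted triplet survives (uncertainty 0.0) while the one seen in only one run (uncertainty ≈ 0.67) is filtered out.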
-
Publication No.: US20250061334A1
Publication Date: 2025-02-20
Application No.: US18805978
Filing Date: 2024-08-15
Applicant: NEC Laboratories America, Inc.
Inventor: Yanchi Liu , Wei Cheng , Xujiang Zhao , Runxue Bao , Haifeng Chen , Nan Zhang
IPC: G06N3/082 , G06N3/0455
Abstract: Systems and methods for optimizing large language models (LLM) with domain-oriented model compression. Importance weights for general knowledge in a trained LLM, pretrained with deep learning, can be determined by computing the error when removing a weight from the trained LLM. The trained LLM can be iteratively optimized to obtain a domain-compressed LLM with domain knowledge while maintaining general knowledge by: fine-tuning the trained LLM iteratively with domain knowledge using the importance weights for general knowledge to obtain a fine-tuned LLM; determining importance weights for domain knowledge in the LLM with a regularization term by using gradient descent to optimize parameters when the fine-tuned LLM is trained with domain knowledge; and pruning learned knowledge based on importance weights for domain knowledge. A corrective action can be performed on a monitored entity using the domain-compressed LLM.
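The importance-then-prune loop in the abstract can be sketched as follows. The patent's exact scoring is not given here, so this sketch assumes a standard second-order saliency (Optimal Brain Damage style): the error of removing weight i is approximated as 0.5 * h_ii * w_i^2, where h_ii is a diagonal Hessian estimate. The names `removal_error_importance` and `prune_by_importance` are illustrative, not from the patent.

```python
import numpy as np

def removal_error_importance(weights, hessian_diag):
    """Approximate loss increase when each weight is removed
    (second-order saliency): importance_i ~= 0.5 * h_ii * w_i**2."""
    return 0.5 * hessian_diag * weights ** 2

def prune_by_importance(weights, importance, sparsity):
    """Zero out the `sparsity` fraction of weights with the
    lowest importance scores, keeping the rest intact."""
    k = int(len(weights) * sparsity)
    idx = np.argsort(importance)[:k]
    pruned = weights.copy()
    pruned[idx] = 0.0
    return pruned

w = np.array([0.1, -2.0, 0.5, 3.0])
h = np.array([1.0, 1.0, 4.0, 0.5])    # assumed diagonal Hessian estimates
imp = removal_error_importance(w, h)  # [0.005, 2.0, 0.5, 2.25]
print(prune_by_importance(w, imp, sparsity=0.5))
```

In the described method this scoring would be computed once for general knowledge and again, with a regularization term, after domain fine-tuning; the sketch shows only the shared prune-by-saliency mechanic.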
-
Publication No.: US20240304329A1
Publication Date: 2024-09-12
Application No.: US18591838
Filing Date: 2024-02-29
Applicant: NEC Laboratories America, Inc.
Inventor: Wei Cheng , Haifeng Chen , Xujiang Zhao , Xianjun Yang
Abstract: Methods and systems for prompt tuning include training a tuning function to set prompt position, prompt length, or prompt pool based on a language processing task. The tuning function is applied to an input query to generate a combined input, with prompt text having the prompt length, being selected according to the prompt pool, and being added to the input query at the prompt position. The combined input is applied to a language model.
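The "combined input" assembly step can be sketched directly from the abstract: given a prompt position, prompt length, and a selection from a prompt pool (here passed in as plain arguments standing in for the outputs of the trained tuning function), the prompt is spliced into the query. All names below are illustrative assumptions, not the patent's API.

```python
def build_combined_input(query_tokens, prompt_pool,
                         position, length, pool_index):
    """Select `length` prompt tokens from the pool entry chosen by
    `pool_index`, then insert them into the query at `position` to
    form the combined input for the language model."""
    prompt = prompt_pool[pool_index][:length]
    return query_tokens[:position] + prompt + query_tokens[position:]

pool = [["[P1]", "[P2]", "[P3]"], ["[Q1]", "[Q2]"]]
query = ["translate", "this", "sentence"]
print(build_combined_input(query, pool, position=0, length=2, pool_index=0))
```

With `position=0` the prompt is prepended; a nonzero position would interleave it mid-query, which is the degree of freedom the tuning function is trained to set.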