DEMONSTRATION UNCERTAINTY-BASED ARTIFICIAL INTELLIGENCE MODEL FOR OPEN INFORMATION EXTRACTION

    Publication Number: US20250077848A1

    Publication Date: 2025-03-06

    Application Number: US18817793

    Filing Date: 2024-08-28

    Abstract: Systems and methods for a demonstration uncertainty-based artificial intelligence model for open information extraction. A large language model (LLM) can generate initial structured sentences using an initial prompt for a domain-specific instruction extracted from an unstructured text input. Structural similarities between the initial structured sentences and sentences from a training dataset can be determined to obtain structurally similar sentences. The LLM can identify relational triplets from combinations of tokens from the generated sentences and the structurally similar sentences. The relational triplets can be filtered based on a calculated demonstration uncertainty to obtain a filtered triplet list. A domain-specific task can be performed using the filtered triplet list to assist the decision-making process of a decision-making entity.
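
    The abstract describes a pipeline of prompting, structural-similarity retrieval, and uncertainty-based filtering. The sketch below illustrates only the retrieval and filtering steps, under assumptions the abstract does not specify: structural similarity is approximated with token-level sequence matching, and demonstration uncertainty with the variance of triplet scores across demonstration sets. The helper names (structural_similarity, retrieve_similar, filter_by_demonstration_uncertainty) and the demo_scores dictionary are hypothetical, not the patented method.

        # Minimal sketch, assuming token-level similarity and variance-based uncertainty.
        from difflib import SequenceMatcher
        from statistics import pvariance

        def structural_similarity(a: str, b: str) -> float:
            # Illustrative proxy: ratio of matching tokens between two sentences.
            return SequenceMatcher(None, a.split(), b.split()).ratio()

        def retrieve_similar(generated: list[str], training: list[str], k: int = 3) -> list[str]:
            # Keep the k training sentences most structurally similar to any generated sentence.
            scored = [(max(structural_similarity(g, t) for g in generated), t) for t in training]
            return [t for _, t in sorted(scored, reverse=True)[:k]]

        def filter_by_demonstration_uncertainty(triplets, demo_scores, threshold=0.05):
            # Keep triplets whose scores are stable across demonstration sets;
            # "demonstration uncertainty" is taken here to be the score variance (an assumption).
            return [t for t in triplets if pvariance(demo_scores[t]) <= threshold]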

    OPTIMIZING LARGE LANGUAGE MODELS WITH DOMAIN-ORIENTED MODEL COMPRESSION

    Publication Number: US20250061334A1

    Publication Date: 2025-02-20

    Application Number: US18805978

    Filing Date: 2024-08-15

    Abstract: Systems and methods for optimizing large language models (LLMs) with domain-oriented model compression. Importance weights for general knowledge in a trained LLM, pretrained with deep learning, can be determined by computing the error incurred when removing a weight from the trained LLM. The trained LLM can be iteratively optimized to obtain a domain-compressed LLM with domain knowledge while maintaining general knowledge by: fine-tuning the trained LLM iteratively with domain knowledge, using the importance weights for general knowledge, to obtain a fine-tuned LLM; determining importance weights for domain knowledge in the LLM with a regularization term by using gradient descent to optimize parameters when the fine-tuned LLM is trained with domain knowledge; and pruning learned knowledge based on the importance weights for domain knowledge. A corrective action can be performed on a monitored entity using the domain-compressed LLM.
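
    As a rough illustration of the three steps in the abstract (importance estimation, regularized domain fine-tuning, and pruning), the sketch below uses PyTorch. The squared-gradient importance score, the quadratic anchor penalty, and the threshold-based pruning rule are all assumptions standing in for the patented procedures, and every function name here is hypothetical.

        # Minimal sketch, assuming a Fisher-style importance score and an EWC-like penalty.
        import torch

        def importance_weights(model, loss_fn, batch):
            # Approximate the error from removing each weight with grad^2 * weight^2;
            # the patent's exact error computation is not specified in the abstract.
            model.zero_grad()
            loss_fn(model, batch).backward()
            return {name: (p.grad.detach() ** 2) * (p.detach() ** 2)
                    for name, p in model.named_parameters() if p.grad is not None}

        def regularized_domain_loss(model, domain_loss, general_importance, anchor, lam=1e-3):
            # Fine-tune on domain data while penalizing drift on weights that matter
            # for general knowledge (one way to realize the regularization term).
            reg = sum((general_importance[n] * (p - anchor[n]) ** 2).sum()
                      for n, p in model.named_parameters() if n in general_importance)
            return domain_loss + lam * reg

        def prune_by_domain_importance(model, domain_importance, sparsity=0.5):
            # Zero the fraction of weights with the lowest domain importance.
            with torch.no_grad():
                for name, p in model.named_parameters():
                    if name in domain_importance:
                        imp = domain_importance[name]
                        k = int(sparsity * imp.numel())
                        if k > 0:
                            threshold = imp.flatten().kthvalue(k).values
                            p[imp <= threshold] = 0.0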
