INTERPRETABLE SPARSE HIGH-ORDER BOLTZMANN MACHINES
    Invention Application, status: Pending (Published)

    Publication No.: US20140310221A1

    Publication Date: 2014-10-16

    Application No.: US14243918

    Filing Date: 2014-04-03

    Inventor: Renqiang Min

    CPC classification number: G06N20/00

    Abstract: A method for performing structured learning for high-dimensional discrete graphical models includes estimating the high-order interaction neighborhood structure, or Markov blanket, of each visible unit; once the high-order interaction neighborhood structure of each visible unit is identified, adding the corresponding energy functions for the high-order interactions of that unit into the energy function of a High-order Boltzmann Machine (HBM); and applying Maximum-Likelihood Estimation updates to learn the weights associated with the identified high-order energy functions. The system can effectively identify meaningful high-order interactions between input features for system output prediction, especially for early cancer diagnosis, biomarker discovery, sentiment analysis, automatic essay grading, Natural Language Processing, text summarization, document visualization, and many other data exploration problems in Big Data.
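    The energy function and the Maximum-Likelihood updates described above can be sketched in miniature. The following is an illustrative toy, not the patented method: it uses a brute-force partition sum, which is tractable only for a handful of units, whereas the patent targets high-dimensional models where the neighborhood structure is first estimated to keep the interaction set sparse.

    ```python
    import math
    from itertools import product

    def energy(v, biases, cliques):
        # Energy of a high-order Boltzmann machine over binary units v:
        # E(v) = -sum_i b_i * v_i - sum_c w_c * prod_{i in c} v_i,
        # where each clique c is an identified high-order interaction.
        e = -sum(b * vi for b, vi in zip(biases, v))
        for clique, w in cliques.items():
            term = 1
            for i in clique:
                term *= v[i]
            e -= w * term
        return e

    def clique_gradients(data, biases, cliques):
        # Exact maximum-likelihood gradient for each clique weight:
        # d(log L)/d(w_c) = <prod v_c>_data - <prod v_c>_model.
        # Brute-force enumeration of all 2^n states: toy-scale only.
        n = len(biases)
        states = list(product((0, 1), repeat=n))
        weights = [math.exp(-energy(s, biases, cliques)) for s in states]
        Z = sum(weights)
        grads = {}
        for c in cliques:
            data_term = sum(all(v[i] for i in c) for v in data) / len(data)
            model_term = sum(w for s, w in zip(states, weights)
                             if all(s[i] for i in c)) / Z
            grads[c] = data_term - model_term
        return grads
    ```

    An MLE update would then move each identified clique weight along its gradient, `w_c += lr * grads[c]`, until convergence.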


    Learning ordinal representations for deep reinforcement learning based object localization

    Publication No.: US12205357B2

    Publication Date: 2025-01-21

    Application No.: US17715901

    Filing Date: 2022-04-07

    Abstract: A reinforcement learning based approach to query object localization is presented, in which an agent is trained to localize objects of interest specified by a small exemplary set. We learn a transferable reward signal, formulated using the exemplary set, by ordinal metric learning. This enables test-time policy adaptation to new environments where reward signals are not readily available, and thus outperforms fine-tuning approaches that are limited to annotated images. In addition, the transferable reward allows repurposing of the trained agent for new tasks, such as annotation refinement or selective localization of multiple common objects across a set of images. Experiments on the corrupted MNIST and CU-Birds datasets demonstrate the effectiveness of our approach.
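    The transferable reward idea can be illustrated on a 1-D toy: the reward is positive whenever an action moves the current crop closer, in a learned metric, to the exemplar embedding. Everything here is a hypothetical stand-in: `embed` replaces the learned ordinal embedding network, and a greedy window search replaces the trained RL policy.

    ```python
    def embed(window):
        # Stand-in for a learned embedding network: here, the mean pixel.
        return sum(window) / len(window)

    def ordinal_reward(dist_before, dist_after):
        # Transferable reward: +1 if the move brought the crop closer to
        # the exemplar prototype in the learned metric, -1 otherwise.
        return 1.0 if dist_after < dist_before else -1.0

    def localize(image, exemplar_embedding, window_size=3, max_steps=20):
        # Greedy stand-in for the trained agent: slide a window left or
        # right, keeping any move that earns positive reward.
        pos = 0
        dist = abs(embed(image[pos:pos + window_size]) - exemplar_embedding)
        for _ in range(max_steps):
            moved = False
            for cand in (pos - 1, pos + 1):
                if 0 <= cand <= len(image) - window_size:
                    d = abs(embed(image[cand:cand + window_size])
                            - exemplar_embedding)
                    if ordinal_reward(dist, d) > 0:
                        pos, dist, moved = cand, d, True
                        break
            if not moved:
                break
        return pos
    ```

    Because the reward only compares distances to the exemplar set, it needs no box annotations at test time, which is what permits policy adaptation in new environments.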

    T-CELL RECEPTOR REPERTOIRE SELECTION PREDICTION WITH PHYSICAL MODEL AUGMENTED PSEUDO-LABELING

    Publication No.: US20230129568A1

    Publication Date: 2023-04-27

    Application No.: US17969883

    Filing Date: 2022-10-20

    Abstract: Systems and methods for predicting T-Cell receptor (TCR)-peptide interaction, including training a deep learning model for the prediction of TCR-peptide interaction by: determining a multiple sequence alignment (MSA) for TCR-peptide pair sequences from a dataset of such sequences using a sequence analyzer; building TCR and peptide structures from the MSA and corresponding structures in the Protein Data Bank (PDB) using MODELLER; and generating an extended TCR-peptide training dataset based on docking energy scores obtained by docking peptides to TCRs with physical modeling over the built structures. TCR-peptide pairs are classified and labeled as positive or negative pairs using pseudo-labels based on the docking energy scores, and the deep learning model is iteratively retrained on the extended TCR-peptide training dataset and the pseudo-labels until convergence.
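    The pseudo-labeling and retraining loop described above can be sketched as follows. The energy threshold and the convergence test are illustrative assumptions, not values from the patent; `train_fn` stands in for one round of deep-model training.

    ```python
    def pseudo_label(pairs, docking_energies, threshold=-5.0):
        # Label TCR-peptide pairs from physical docking energies: lower
        # (more negative) energy suggests stronger binding, so such pairs
        # get a positive pseudo-label. The threshold is illustrative.
        return [(pair, 1 if e < threshold else 0)
                for pair, e in zip(pairs, docking_energies)]

    def retrain_until_convergence(model, base_data, extended_pairs,
                                  energies, train_fn,
                                  max_rounds=10, tol=1e-3):
        # Skeleton of the iterative loop: augment the training set with
        # pseudo-labeled pairs, retrain, and stop when the loss stabilizes.
        prev_loss = float("inf")
        for _ in range(max_rounds):
            labeled = pseudo_label(extended_pairs, energies)
            model, loss = train_fn(model, base_data + labeled)
            if abs(prev_loss - loss) < tol:
                break
            prev_loss = loss
        return model
    ```

    In the full system the docking energies come from physical modeling over MODELLER-built structures; here they are simply passed in as numbers.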

    BINDING PEPTIDE GENERATION FOR MHC CLASS I PROTEINS WITH DEEP REINFORCEMENT LEARNING

    Publication No.: US20230085160A1

    Publication Date: 2023-03-16

    Application No.: US17899004

    Filing Date: 2022-08-30

    Abstract: A method for generating binding peptides presented by any given Major Histocompatibility Complex (MHC) protein is presented. Given a peptide and an MHC protein pair, the method enables a Reinforcement Learning (RL) agent to interact with and exploit a peptide mutation environment by repeatedly mutating the peptide and observing the resulting presentation score; learns a mutation policy, via a mutation policy network, that iteratively mutates amino acids of the peptide to reach desired presentation scores; and generates, based on those scores, qualified peptides and binding motifs for MHC Class I proteins.
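    The peptide mutation environment can be sketched in a few lines. This is a hypothetical toy: hill climbing stands in for the mutation policy network, and `presentation_score` stands in for an MHC presentation predictor.

    ```python
    import random

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

    def mutate(peptide, position, new_aa):
        # One RL action: substitute the residue at `position`.
        return peptide[:position] + new_aa + peptide[position + 1:]

    def rollout(peptide, presentation_score, steps=50, seed=0):
        # Hill-climbing stand-in for the learned mutation policy: propose
        # a random point mutation and keep it only if the presentation
        # score improves, so the score is non-decreasing over the rollout.
        rng = random.Random(seed)
        best, best_score = peptide, presentation_score(peptide)
        for _ in range(steps):
            pos = rng.randrange(len(best))
            cand = mutate(best, pos, rng.choice(AMINO_ACIDS))
            s = presentation_score(cand)
            if s > best_score:
                best, best_score = cand, s
        return best, best_score
    ```

    The trained policy network would replace the random proposal with learned, state-dependent choices of position and residue, which is what makes the search over the 20^L mutation space efficient.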
