BILEVEL DECENTRALIZED MULTI-AGENT LEARNING

    Publication (Announcement) Number: US20250005324A1

    Publication (Announcement) Date: 2025-01-02

    Application Number: US18217081

    Application Date: 2023-06-30

    Abstract: A computer-implemented method of decentralized multi-agent learning is provided for use in a system having a plurality of intelligent agents, each having a personal portion and a shared portion. The method includes, iteratively until both a personal goal and a network goal are optimized: determining feedback associated with an action relative to the personal goal and a degree of similarity relative to a shared goal; adjusting a policy based on the feedback to gain superior feedback from a next action; broadcasting the shared policy; receiving a shared policy from at least one of the other intelligent agents; generating a combined policy by combining the personal policy and the received shared policy; estimating, using the combined policy, a network value function; and conducting the next action in accordance with the combined policy.
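
    The iterative loop in this abstract maps naturally onto code. Below is a minimal, hypothetical sketch (not the patented implementation), assuming each policy is a NumPy vector split into a personal and a shared portion, "feedback" is negative distance to the goals, and "combining" means averaging the shared portions received from peers; the Agent class and goal vectors are illustrative names.

        import numpy as np

        rng = np.random.default_rng(0)

        class Agent:
            def __init__(self, dim, lr=0.1):
                self.personal = rng.normal(size=dim)  # personal portion of the policy
                self.shared = rng.normal(size=dim)    # shared portion of the policy
                self.lr = lr

            def feedback(self, personal_goal, shared_goal):
                # Feedback: closeness to the personal goal plus similarity
                # of the shared portion to the shared goal.
                personal_term = -np.linalg.norm(self.personal - personal_goal)
                similarity = -np.linalg.norm(self.shared - shared_goal)
                return personal_term + similarity

            def adjust(self, personal_goal, shared_goal):
                # Move both portions toward their goals to gain superior
                # feedback from the next action.
                self.personal += self.lr * (personal_goal - self.personal)
                self.shared += self.lr * (shared_goal - self.shared)

            def broadcast(self):
                return self.shared.copy()

            def combine(self, received):
                # Combined policy: average own shared portion with the
                # shared policies received from the other agents.
                self.shared = np.mean([self.shared] + received, axis=0)

        agents = [Agent(dim=4) for _ in range(3)]
        personal_goals = [rng.normal(size=4) for _ in agents]
        network_goal = np.zeros(4)  # shared goal for the whole network

        for step in range(50):
            for agent, goal in zip(agents, personal_goals):
                fb = agent.feedback(goal, network_goal)  # determine feedback
                agent.adjust(goal, network_goal)         # adjust policy using it
            shared = [a.broadcast() for a in agents]     # broadcast shared policies
            for i, a in enumerate(agents):
                a.combine([s for j, s in enumerate(shared) if j != i])
            # Network value estimate under the combined policies: mean feedback.
            value = np.mean([a.feedback(g, network_goal)
                             for a, g in zip(agents, personal_goals)])

        print(f"estimated network value after training: {value:.3f}")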

    Asynchronous multiple scheme meta learning

    Publication (Announcement) Number: US11669780B2

    Publication (Announcement) Date: 2023-06-06

    Application Number: US16675555

    Application Date: 2019-11-06

    CPC classification number: G06N20/20

    Abstract: Building machine learning models by: receiving a plurality of training process scores associated with model parameter lists; determining a best model parameter list according to the training process scores; determining a descendant model parameter list according to the best model parameter list, wherein the descendant model parameter list comprises a portion of the best model parameter list; distributing the descendant model parameter list; conducting a model training process according to the descendant model parameter list; determining a training process score according to the descendant model parameter list; and sending the training process score for the descendant model parameter list.
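
    A hedged sketch of the select-and-descend loop the abstract describes, assuming a pool of parameter lists and a toy scoring function standing in for a real training process. Treating "comprises a portion of the best model parameter list" as keeping the first half and resampling the rest is one plausible reading, not the patent's definition.

        import random

        random.seed(0)

        def training_process_score(params):
            # Toy stand-in for running a training process and scoring it:
            # higher when parameters are near an assumed optimum of 0.5 each.
            return -sum((p - 0.5) ** 2 for p in params)

        def descendant_of(best):
            # Descendant comprises a portion of the best list; the rest is resampled.
            keep = len(best) // 2
            return best[:keep] + [random.random() for _ in best[keep:]]

        pool = [[random.random() for _ in range(6)] for _ in range(4)]
        scores = [training_process_score(p) for p in pool]

        for generation in range(20):
            best = pool[max(range(len(pool)), key=scores.__getitem__)]
            child = descendant_of(best)                   # determine descendant list
            child_score = training_process_score(child)   # conduct training, score it
            # "Send" the score back: replace the worst member of the pool.
            worst = min(range(len(pool)), key=scores.__getitem__)
            pool[worst], scores[worst] = child, child_score

        print(f"best score after search: {max(scores):.4f}")

    Because each descendant is trained and scored independently of the others, this loop is also the natural unit to run asynchronously across workers, which matches the "asynchronous" framing in the title.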

    Faithful and Efficient Sample-Based Model Explanations

    Publication (Announcement) Number: US20220383185A1

    Publication (Announcement) Date: 2022-12-01

    Application Number: US17334889

    Application Date: 2021-05-31

    Abstract: Hessian matrix-free, sample-based techniques for model explanations that are faithful to the model are provided. In one aspect, a method for explaining a machine learning model θ̂ (e.g., for natural language processing) is provided. The method includes: training the machine learning model θ̂ with training data D; obtaining a decision of the machine learning model θ̂; and explaining the decision of the machine learning model θ̂ using training examples from the training data D.
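
    One common way to stay Hessian matrix-free when explaining a decision with training examples is a gradient dot-product score, as in TracIn-style influence methods: training examples whose loss gradients align with the test example's gradient are offered as explanations. The sketch below is an illustrative stand-in for such a technique, not the patent's method, using a small logistic-regression model as θ̂ and synthetic data as D.

        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.normal(size=(100, 3))                     # training data D
        y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        # Train theta_hat with plain gradient descent on the logistic loss.
        theta_hat = np.zeros(3)
        for _ in range(500):
            grad = X.T @ (sigmoid(X @ theta_hat) - y) / len(y)
            theta_hat -= 0.5 * grad

        def per_example_grad(x, label):
            # Gradient of the logistic loss at theta_hat for one example.
            return (sigmoid(x @ theta_hat) - label) * x

        # Decision to explain (here a training point reused as the test input).
        x_test, y_test = X[0], y[0]
        g_test = per_example_grad(x_test, y_test)

        # Score every training example; no Hessian (or its inverse) is formed.
        scores = np.array([per_example_grad(x, t) @ g_test for x, t in zip(X, y)])
        top = np.argsort(-scores)[:5]
        print("most influential training examples:", top)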

    UPDATING OF A STATISTICAL SET FOR DECENTRALIZED DISTRIBUTED TRAINING OF A MACHINE LEARNING MODEL

    Publication (Announcement) Number: US20220374747A1

    Publication (Announcement) Date: 2022-11-24

    Application Number: US17314450

    Application Date: 2021-05-07

    Abstract: Systems, computer-implemented methods, and/or computer program products to facilitate updating, such as averaging and/or training, of one or more statistical sets are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can include a computing component that updates a first statistical set with an additional statistical set from an additional system. The additional statistical set can have been generated from a parent statistical set that is based on underlying data. To update the first statistical set, the additional statistical set can be obtained by the system without obtaining the parent statistical set and without obtaining the underlying data. According to an embodiment, the first statistical set can be a model parameter set generated from a first parent statistical set that is an analytical model.
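
    A minimal sketch of the update described, assuming the statistical sets are dictionaries of model parameters and "updating" is a weighted average. The function name and weight are hypothetical; the point illustrated is that only the sets themselves cross the wire, so the peer's parent model and underlying data are never obtained.

        import numpy as np

        def update_statistical_set(local_params, peer_params, weight=0.5):
            # Weighted average of the two parameter sets. Neither the peer's
            # parent statistical set nor its underlying data is needed here.
            return {name: (1 - weight) * local_params[name] + weight * peer_params[name]
                    for name in local_params}

        local = {"w": np.array([0.2, -0.1]), "b": np.array([0.05])}
        received = {"w": np.array([0.4, 0.1]), "b": np.array([-0.05])}  # from a peer

        local = update_statistical_set(local, received)
        print(local)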

    HIERARCHICAL DECENTRALIZED DISTRIBUTED DEEP LEARNING TRAINING

    Publication (Announcement) Number: US20220027796A1

    Publication (Announcement) Date: 2022-01-27

    Application Number: US16935246

    Application Date: 2020-07-22

    Abstract: Embodiments of a method are disclosed. The method includes performing a batch of decentralized deep learning training for a machine learning model in coordination with multiple local homogeneous learners on a deep learning training compute node, and in coordination with multiple super learners on corresponding deep learning training compute nodes. The method also includes exchanging communications with the super learners in accordance with an asynchronous decentralized parallel stochastic gradient descent (ADPSGD) protocol. The communications are associated with the batch of deep learning training.
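
    An illustrative, sequential sketch of the hierarchy, assuming local homogeneous learners on each node average into a "super learner" and super learners then perform ADPSGD-style pairwise averaging. The gradients are toy random values and the asynchrony of the real protocol is elided; in practice each learner would compute gradients from its own mini-batch and exchanges would overlap in time.

        import numpy as np

        rng = np.random.default_rng(2)
        nodes, learners_per_node, dim, lr = 3, 4, 5, 0.1
        params = rng.normal(size=(nodes, learners_per_node, dim))

        for step in range(10):
            # Each local learner takes an SGD step on its own (toy) gradient.
            params -= lr * rng.normal(size=params.shape)
            # Intra-node: local homogeneous learners average into the super learner.
            super_params = params.mean(axis=1)
            # Inter-node, ADPSGD-style: a random pair of super learners averages
            # parameters (asynchrony is elided in this sequential sketch).
            i, j = rng.choice(nodes, size=2, replace=False)
            pair_avg = 0.5 * (super_params[i] + super_params[j])
            super_params[i] = super_params[j] = pair_avg
            # Broadcast each super learner's parameters back to its local learners.
            params = np.repeat(super_params[:, None, :], learners_per_node, axis=1)

        print("parameter spread across super learners:",
              np.std(super_params, axis=0).round(4))

    The two-level structure keeps the cheap, frequent averaging among homogeneous learners local to a node, while only the super learners pay the cost of cross-node communication.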
