Training latent variable machine learning models using multi-sample objectives

    Publication Number: US11062229B1

    Publication Date: 2021-07-13

    Application Number: US15438436

    Filing Date: 2017-02-21

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a machine learning model. One of the methods includes, for each training observation: determining a plurality of latent variable value configurations, each latent variable value configuration being a combination of latent variable values that includes a respective value for each of the latent variables; determining, for each of the plurality of latent variable value configurations, a respective local learning signal that is minimally dependent on each of the other latent variable value configurations in the plurality of latent variable value configurations; determining an unbiased estimate of a gradient of the objective function using the local learning signals; and updating current values of the parameters of the machine learning model using the unbiased estimate of the gradient.
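The abstract describes computing, for each of several sampled latent variable configurations, a local learning signal that is minimally dependent on the other configurations, then combining the signals into an unbiased gradient estimate. A minimal sketch of one way such leave-one-out local signals can be computed from per-sample log weights is below; the `logsumexp` helper and the choice of the mean of the other log weights as the baseline are illustrative assumptions, not the patent's exact formulation:

```python
import numpy as np

def logsumexp(a):
    # Numerically stable log(sum(exp(a))).
    m = a.max()
    return m + np.log(np.exp(a - m).sum())

def local_learning_signals(log_weights):
    """Given log weights log f(x, h_k) for K sampled latent variable
    configurations, return the multi-sample objective L_hat and a local
    signal per configuration. Each signal compares L_hat against a version
    of the objective where that configuration's log weight is replaced by
    the mean of the others, so it depends only weakly on the other samples."""
    K = log_weights.shape[0]
    # Multi-sample objective: log( (1/K) * sum_k exp(log_weights[k]) ).
    L_hat = logsumexp(log_weights) - np.log(K)
    signals = np.empty(K)
    for k in range(K):
        lw_k = log_weights.copy()
        # Leave-one-out baseline: swap in the mean of the other log weights.
        lw_k[k] = np.delete(log_weights, k).mean()
        signals[k] = L_hat - (logsumexp(lw_k) - np.log(K))
    return L_hat, signals
```

With equal log weights every signal is zero, and a configuration whose weight exceeds the others receives a positive signal, which is the behaviour a per-sample credit-assignment baseline should have.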

    CONTROLLING AGENTS USING AMORTIZED Q LEARNING

    Publication Number: US20210357731A1

    Publication Date: 2021-11-18

    Application Number: US17287306

    Filing Date: 2019-11-18

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network system used to control an agent interacting with an environment. One of the methods includes receiving a current observation; processing the current observation using a proposal neural network to generate a proposal output that defines a proposal probability distribution over a set of possible actions that can be performed by the agent to interact with the environment; sampling (i) one or more actions from the set of possible actions in accordance with the proposal probability distribution and (ii) one or more actions randomly from the set of possible actions; processing the current observation and each sampled action using a Q neural network to generate a Q value; and selecting an action using the Q values generated by the Q neural network.
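The selection procedure in this abstract (sample candidate actions from a learned proposal distribution, add a few uniform-random actions, score every candidate with a Q network, pick the best) can be sketched as follows. `proposal_net` and `q_net` are placeholder callables standing in for the trained networks; the candidate counts are arbitrary illustrative defaults:

```python
import numpy as np

rng = np.random.default_rng(0)

def select_action(observation, proposal_net, q_net, num_actions,
                  n_proposal=4, n_uniform=2):
    """Sample actions from the proposal distribution plus uniform-random
    actions, evaluate each with the Q network, and return the candidate
    with the highest Q value."""
    probs = proposal_net(observation)                 # shape (num_actions,)
    proposed = rng.choice(num_actions, size=n_proposal, p=probs)
    uniform = rng.integers(0, num_actions, size=n_uniform)
    candidates = np.concatenate([proposed, uniform])
    q_values = np.array([q_net(observation, a) for a in candidates])
    return int(candidates[np.argmax(q_values)])
```

Mixing in uniform samples keeps exploration alive even when the proposal distribution has collapsed onto a few actions, while the proposal keeps the number of Q evaluations small relative to the full action set.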

    CONTROLLING AGENTS USING AMORTIZED Q LEARNING

    Publication Number: US20240160901A1

    Publication Date: 2024-05-16

    Application Number: US18406995

    Filing Date: 2024-01-08

    CPC classification number: G06N3/047 G06N3/006 G06N3/084

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network system used to control an agent interacting with an environment. One of the methods includes receiving a current observation; processing the current observation using a proposal neural network to generate a proposal output that defines a proposal probability distribution over a set of possible actions that can be performed by the agent to interact with the environment; sampling (i) one or more actions from the set of possible actions in accordance with the proposal probability distribution and (ii) one or more actions randomly from the set of possible actions; processing the current observation and each sampled action using a Q neural network to generate a Q value; and selecting an action using the Q values generated by the Q neural network.

    Generating output data items using template data items

    Publication Number: US10860928B2

    Publication Date: 2020-12-08

    Application Number: US16689065

    Filing Date: 2019-11-19

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating data items. One of the systems is a neural network system comprising a memory storing a plurality of template data items; one or more processors configured to select a memory address based upon a received input data item, and retrieve a template data item from the memory based upon the selected memory address; an encoder neural network configured to process the received input data item and the retrieved template data item to generate a latent variable representation; and a decoder neural network configured to process the retrieved template data item and the latent variable representation to generate an output data item.
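The data flow this abstract describes (select a memory address from the input, retrieve a template, encode input plus template into a latent representation, decode template plus latent into an output) can be illustrated with a toy sketch. The nearest-neighbour address selector and the residual-style encoder/decoder below are stand-ins chosen to keep the example self-contained; the actual system uses trained neural networks for each component:

```python
import numpy as np

class TemplateGenerator:
    """Toy sketch: a memory of template vectors, an address selector,
    and stand-in encoder/decoder maps."""

    def __init__(self, templates):
        self.templates = np.asarray(templates)  # (num_templates, dim)

    def select_address(self, x):
        # Pick the memory address whose template is closest to the input.
        dists = np.linalg.norm(self.templates - x, axis=1)
        return int(np.argmin(dists))

    def encode(self, x, template):
        # Stand-in encoder: latent = residual between input and template.
        return x - template

    def decode(self, template, latent):
        # Stand-in decoder: rebuild the output from template and latent.
        return template + latent

    def generate(self, x):
        addr = self.select_address(x)
        template = self.templates[addr]
        latent = self.encode(x, template)
        return self.decode(template, latent)
```

The design point is that the latent representation only has to capture the input's deviation from a retrieved template, rather than the entire data item.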

    Controlling agents using amortized Q learning

    Publication Number: US11868866B2

    Publication Date: 2024-01-09

    Application Number: US17287306

    Filing Date: 2019-11-18

    CPC classification number: G06N3/047 G06N3/006 G06N3/084

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network system used to control an agent interacting with an environment. One of the methods includes receiving a current observation; processing the current observation using a proposal neural network to generate a proposal output that defines a proposal probability distribution over a set of possible actions that can be performed by the agent to interact with the environment; sampling (i) one or more actions from the set of possible actions in accordance with the proposal probability distribution and (ii) one or more actions randomly from the set of possible actions; processing the current observation and each sampled action using a Q neural network to generate a Q value; and selecting an action using the Q values generated by the Q neural network.

    GENERATING OUTPUT DATA ITEMS USING TEMPLATE DATA ITEMS

    Publication Number: US20200090043A1

    Publication Date: 2020-03-19

    Application Number: US16689065

    Filing Date: 2019-11-19

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating data items. One of the systems is a neural network system comprising a memory storing a plurality of template data items; one or more processors configured to select a memory address based upon a received input data item, and retrieve a template data item from the memory based upon the selected memory address; an encoder neural network configured to process the received input data item and the retrieved template data item to generate a latent variable representation; and a decoder neural network configured to process the retrieved template data item and the latent variable representation to generate an output data item.
