    1. Memory-efficient backpropagation through time

    Publication (Announcement) Number: US11256990B2

    Publication (Announcement) Date: 2022-02-22

    Application Number: US16303101

    Application Date: 2017-05-19

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a recurrent neural network on training sequences using backpropagation through time. In one aspect, a method includes receiving a training sequence including a respective input at each of a number of time steps; obtaining data defining an amount of memory allocated to storing forward propagation information for use during backpropagation; determining, from the number of time steps in the training sequence and from the amount of memory allocated to storing the forward propagation information, a training policy for processing the training sequence, wherein the training policy defines when to store forward propagation information during forward propagation of the training sequence; and training the recurrent neural network on the training sequence in accordance with the training policy.
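
    The sketch below is a rough, non-authoritative illustration of the idea in this abstract: backpropagation through time under a fixed memory budget, storing only some hidden states during the forward pass and recomputing the rest during the backward pass. The `cell` object, the `checkpointed_bptt` helper, and the evenly spaced checkpoint placement are assumptions made for illustration; the abstract describes a training policy derived from the sequence length and the memory budget, which this sketch does not reproduce.

        # Minimal sketch of checkpointed backpropagation through time.
        # Assumptions (not from the abstract): `cell` is a hypothetical RNN
        # cell exposing forward(h, x) -> h_next and
        # backward(h, x, grad_h_next) -> grad_h, with parameter gradients
        # accumulated inside backward(); checkpoints are evenly spaced
        # rather than chosen by a dynamically determined training policy.

        def checkpointed_bptt(cell, h0, inputs, grad_hT, memory_budget):
            T = len(inputs)
            # "Training policy": store at most `memory_budget` hidden states,
            # evenly spaced over the T time steps.
            stride = max(1, T // memory_budget)
            checkpoints = {}  # time step -> stored hidden state

            # Forward pass: store only the states the policy selects.
            h = h0
            for t, x in enumerate(inputs):
                if t % stride == 0:
                    checkpoints[t] = h
                h = cell.forward(h, x)

            # Backward pass: recompute each segment from its checkpoint,
            # then backpropagate through it in reverse.
            grad_h = grad_hT
            for start in sorted(checkpoints, reverse=True):
                end = min(start + stride, T)
                segment = [checkpoints[start]]
                for t in range(start, end - 1):
                    segment.append(cell.forward(segment[-1], inputs[t]))
                for t in range(end - 1, start - 1, -1):
                    grad_h = cell.backward(segment[t - start], inputs[t], grad_h)
            return grad_h  # gradient with respect to the initial state h0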

    2. Dueling deep neural networks
    Invention Grant

    Publication (Announcement) Number: US10572798B2

    Publication (Announcement) Date: 2020-02-25

    Application Number: US15349900

    Application Date: 2016-11-11

    Abstract: Systems, methods, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action from a set of actions to be performed by an agent interacting with an environment. In one aspect, the system includes a dueling deep neural network. The dueling deep neural network includes a value subnetwork, an advantage subnetwork, and a combining layer. The value subnetwork processes a representation of an observation to generate a value estimate. The advantage subnetwork processes the representation of the observation to generate an advantage estimate for each action in the set of actions. The combining layer combines the value estimate and the respective advantage estimate for each action to generate a respective Q value for the action. The system selects an action to be performed by the agent in response to the observation using the respective Q values for the actions in the set of actions.
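
    A minimal PyTorch sketch of the architecture this abstract describes is given below. The class name, layer sizes, and shared encoder are illustrative assumptions, and the combining layer uses the common mean-centred form Q(s, a) = V(s) + A(s, a) - mean over actions of A(s, a); the abstract itself only states that the value and advantage estimates are combined into Q values.

        import torch
        import torch.nn as nn

        class DuelingQNetwork(nn.Module):
            """Sketch of a dueling architecture: a shared encoder feeds a
            value subnetwork and an advantage subnetwork whose outputs are
            combined into per-action Q values."""

            def __init__(self, obs_dim, num_actions, hidden=128):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
                # Value subnetwork: scalar state-value estimate V(s).
                self.value = nn.Linear(hidden, 1)
                # Advantage subnetwork: one advantage estimate A(s, a) per action.
                self.advantage = nn.Linear(hidden, num_actions)

            def forward(self, obs):
                z = self.encoder(obs)
                v = self.value(z)      # shape [batch, 1]
                a = self.advantage(z)  # shape [batch, num_actions]
                # Combining layer (assumed mean-centred form): Q = V + A - mean(A).
                return v + a - a.mean(dim=-1, keepdim=True)

        # Greedy action selection from the resulting Q values:
        #   q = net(observation); action = q.argmax(dim=-1)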

    3. NEURAL POPULATION LEARNING
    Invention Application

    Publication (Announcement) Number: US20240412072A1

    Publication (Announcement) Date: 2024-12-12

    Application Number: US18422620

    Application Date: 2024-01-25

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for controlling an agent interacting with an environment using a population of action selection policies that are jointly represented by a population action selection neural network. In one aspect, a method comprises, at each of a plurality of time steps: obtaining an observation characterizing a current state of the environment at the time step; selecting a target action selection policy from the population of action selection policies; processing a network input comprising: (i) the observation, and (ii) a strategy embedding representing the target action selection policy, using the population action selection neural network to generate an action selection output; and selecting an action to be performed by the agent at the time step using the action selection output.
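
    Below is a minimal sketch, under assumptions not stated in the abstract, of a population action selection network: a single network conditioned on a strategy embedding that identifies the target policy in the population. The embedding table, layer sizes, and concatenation of observation and embedding are illustrative choices only.

        import torch
        import torch.nn as nn

        class PopulationPolicyNetwork(nn.Module):
            """Sketch of a population action-selection network: one network
            jointly represents a population of policies, each identified by
            a learned strategy embedding."""

            def __init__(self, obs_dim, num_actions, population_size,
                         embed_dim=32, hidden=128):
                super().__init__()
                # One strategy embedding per policy in the population (an
                # embedding table is assumed; the abstract does not say how
                # the strategy embeddings are produced).
                self.strategy_embeddings = nn.Embedding(population_size, embed_dim)
                self.torso = nn.Sequential(
                    nn.Linear(obs_dim + embed_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, num_actions))

            def forward(self, obs, policy_id):
                # Network input: the observation together with the strategy
                # embedding of the selected target policy.
                e = self.strategy_embeddings(policy_id)
                return self.torso(torch.cat([obs, e], dim=-1))  # action logits

        # Per time step: select a target policy, then sample an action, e.g.
        #   logits = net(obs, policy_id)
        #   action = torch.distributions.Categorical(logits=logits).sample()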

    4. JOINTLY UPDATING AGENT CONTROL POLICIES USING ESTIMATED BEST RESPONSES TO CURRENT CONTROL POLICIES

    Publication (Announcement) Number: US20240046112A1

    Publication (Announcement) Date: 2024-02-08

    Application Number: US18275881

    Application Date: 2022-02-07

    CPC classification number: G06N3/092

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating control policies for controlling agents in an environment. One of the methods includes, at each of a plurality of iterations: obtaining a current joint control policy for a plurality of agents, the current joint control policy specifying a respective current control policy for each agent; and updating the current joint control policy, comprising, for each agent: generating a respective reward estimate for each of a plurality of alternate control policies that is an estimate of a reward received by the agent if the agent is controlled using the alternate control policy while the other agents are controlled using the respective current control policies; computing a best response for the agent from the respective reward estimates; and updating the respective current control policy for the agent using the best response for the agent.
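
    The loop below sketches the iteration this abstract describes: for each agent, estimate the reward of each alternate control policy against the other agents' current policies, take the best response, and update the joint policy. The helper functions `estimate_reward` and `update_policy` are hypothetical placeholders for steps the abstract does not specify.

        # Sketch of the iterative joint-update loop. The helpers
        # `estimate_reward(agent, candidate, joint_policy)` and
        # `update_policy(current, best_response)` are hypothetical: the first
        # would estimate (e.g. by simulation) the reward `agent` receives
        # when it uses `candidate` while all other agents keep their current
        # policies; the second folds the best response into the agent's
        # current control policy.

        def joint_update(joint_policy, alternate_policies, num_iterations,
                         estimate_reward, update_policy):
            for _ in range(num_iterations):
                best_responses = {}
                for agent in joint_policy:
                    # Reward estimate for each alternate control policy, with
                    # the other agents fixed to their current policies.
                    rewards = {
                        candidate: estimate_reward(agent, candidate, joint_policy)
                        for candidate in alternate_policies[agent]
                    }
                    # Best response: the alternate policy with the highest
                    # estimated reward.
                    best_responses[agent] = max(rewards, key=rewards.get)
                # Update every agent's current policy using its best response.
                joint_policy = {
                    agent: update_policy(joint_policy[agent], best_responses[agent])
                    for agent in joint_policy
                }
            return joint_policy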

    5. Dueling deep neural networks
    Invention Grant

    Publication (Announcement) Number: US10296825B2

    Publication (Announcement) Date: 2019-05-21

    Application Number: US15977913

    Application Date: 2018-05-11

    Abstract: Systems, methods, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action from a set of actions to be performed by an agent interacting with an environment. In one aspect, the system includes a dueling deep neural network. The dueling deep neural network includes a value subnetwork, an advantage subnetwork, and a combining layer. The value subnetwork processes a representation of an observation to generate a value estimate. The advantage subnetwork processes the representation of the observation to generate an advantage estimate for each action in the set of actions. The combining layer combines the value estimate and the respective advantage estimate for each action to generate a respective Q value for the action. The system selects an action to be performed by the agent in response to the observation using the respective Q values for the actions in the set of actions.

    6. DUELING DEEP NEURAL NETWORKS
    Invention Application

    Publication (Announcement) Number: US20180260689A1

    Publication (Announcement) Date: 2018-09-13

    Application Number: US15977913

    Application Date: 2018-05-11

    Abstract: Systems, methods, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action from a set of actions to be performed by an agent interacting with an environment. In one aspect, the system includes a dueling deep neural network. The dueling deep neural network includes a value subnetwork, an advantage subnetwork, and a combining layer. The value subnetwork processes a representation of an observation to generate a value estimate. The advantage subnetwork processes the representation of the observation to generate an advantage estimate for each action in the set of actions. The combining layer combines the value estimate and the respective advantage estimate for each action to generate a respective Q value for the action. The system selects an action to be performed by the agent in response to the observation using the respective Q values for the actions in the set of actions.
