REINFORCEMENT LEARNING BY DIRECTLY LEARNING AN ADVANTAGE FUNCTION

    Publication Number: US20240256882A1

    Publication Date: 2024-08-01

    Application Number: US18424520

    Application Date: 2024-01-26

    CPC classification number: G06N3/092

    Abstract: A system and method, implemented by one or more computers, of controlling an agent to take actions in an environment to perform a task is provided. The method comprises maintaining a value function neural network and an advantage function neural network that is an estimate of a state-action advantage function representing the advantage of performing one possible action relative to the other possible actions. The method further comprises using the advantage function neural network to control the agent to take actions in the environment to perform the task. The method also comprises training the value function neural network and the advantage function neural network in a way that takes into account a behavior policy defined by a distribution of actions taken by the agent in training data.
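
    As a rough illustration of the claim, here is a minimal sketch in PyTorch (all sizes, names, and the specific correction are assumptions, not the patented training rule; centering the advantage head under an estimate of the behavior policy mu(a|s) is one plausible way to "take into account" the behavior policy):

        import torch
        import torch.nn as nn

        n_obs, n_actions, gamma = 4, 3, 0.99
        value_net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, 1))
        adv_net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, n_actions))

        def td_loss(obs, action, reward, next_obs, behavior_probs):
            # Center the advantage head under the behavior policy mu(a|s)
            # estimated from the training data, so it measures the advantage
            # of each action relative to the other possible actions.
            adv = adv_net(obs)
            adv = adv - (behavior_probs * adv).sum(-1, keepdim=True)
            q = value_net(obs).squeeze(-1) + adv.gather(-1, action.unsqueeze(-1)).squeeze(-1)
            with torch.no_grad():  # one-step bootstrapped target
                target = reward + gamma * value_net(next_obs).squeeze(-1)
            return nn.functional.mse_loss(q, target)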

    Training action selection neural networks using leave-one-out-updates

    Publication Number: US11604997B2

    Publication Date: 2023-03-14

    Application Number: US16603307

    Application Date: 2018-06-11

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network. The policy neural network is used to select actions to be performed by an agent that interacts with an environment by receiving an observation characterizing a state of the environment and performing an action from a set of actions in response to the received observation. A trajectory is obtained from a replay memory, and a final update to current values of the policy network parameters is determined for each training observation in the trajectory. The final updates to the current values of the policy network parameters are determined from selected action updates and leave-one-out updates.
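
    As an illustration, a minimal sketch of a leave-one-out style update in PyTorch (hypothetical shapes and names; combining one term for the selected action's sampled return with a critic-weighted term over all actions is an assumed reading of the claim, not the patented update rule):

        import torch
        import torch.nn as nn

        n_obs, n_actions = 4, 3
        policy_net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, n_actions))

        def loo_policy_loss(obs, taken_action, sampled_return, q_estimates, beta=1.0):
            probs = torch.softmax(policy_net(obs), dim=-1)
            pi_taken = probs.gather(-1, taken_action.unsqueeze(-1)).squeeze(-1)
            q_taken = q_estimates.gather(-1, taken_action.unsqueeze(-1)).squeeze(-1)
            # Selected-action term: reinforce the taken action in proportion to
            # how much its sampled return beat the critic's estimate.
            selected = beta * (sampled_return - q_taken).detach() * pi_taken
            # Leave-one-out term: credit all actions through the critic alone.
            leave_one_out = (q_estimates.detach() * probs).sum(-1)
            return -(selected + leave_one_out).mean()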

    TRAINING ACTION SELECTION NEURAL NETWORKS

    Publication Number: US20210110271A1

    Publication Date: 2021-04-15

    Application Number: US16603307

    Application Date: 2018-06-11

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network. The policy neural network is used to select actions to be performed by an agent that interacts with an environment by receiving an observation characterizing a state of the environment and performing an action from a set of actions in response to the received observation. A trajectory is obtained from a replay memory, and a final update to current values of the policy network parameters is determined for each training observation in the trajectory. The final updates to the current values of the policy network parameters are determined from selected action updates and leave-one-out updates.

    Noisy neural network layers with noise parameters

    Publication Number: US11977983B2

    Publication Date: 2024-05-07

    Application Number: US17020248

    Application Date: 2020-09-14

    CPC classification number: G06N3/084 G06N3/044

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action to be performed by a reinforcement learning agent. The method includes obtaining an observation characterizing a current state of an environment. For each layer parameter of each noisy layer of a neural network, a respective noise value is determined. For each layer parameter of each noisy layer, a noisy current value for the layer parameter is determined from a current value of the layer parameter, a current value of a corresponding noise parameter, and the noise value. A network input including the observation is processed using the neural network in accordance with the noisy current values to generate a network output for the network input. An action is selected from a set of possible actions to be performed by the agent in response to the observation using the network output.
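
    A minimal sketch of such a noisy layer in PyTorch (hypothetical class and sizes; independent Gaussian noise per layer parameter is one variant, chosen here for brevity):

        import math
        import torch
        import torch.nn as nn

        class NoisyLinear(nn.Module):
            def __init__(self, in_features, out_features, sigma0=0.5):
                super().__init__()
                bound = 1.0 / math.sqrt(in_features)
                self.w_mu = nn.Parameter(torch.empty(out_features, in_features).uniform_(-bound, bound))
                self.w_sigma = nn.Parameter(torch.full((out_features, in_features), sigma0 * bound))
                self.b_mu = nn.Parameter(torch.empty(out_features).uniform_(-bound, bound))
                self.b_sigma = nn.Parameter(torch.full((out_features,), sigma0 * bound))

            def forward(self, x):
                # Noisy current value: current parameter value plus the learned
                # noise parameter times a freshly sampled noise value.
                w = self.w_mu + self.w_sigma * torch.randn_like(self.w_sigma)
                b = self.b_mu + self.b_sigma * torch.randn_like(self.b_sigma)
                return nn.functional.linear(x, w, b)

        q_net = nn.Sequential(NoisyLinear(4, 64), nn.ReLU(), NoisyLinear(64, 3))
        action = q_net(torch.randn(1, 4)).argmax(-1)  # greedy action under sampled noise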

    Reinforcement learning using pseudo-counts

    Publication Number: US11727264B2

    Publication Date: 2023-08-15

    Application Number: US16303501

    Application Date: 2017-05-18

    CPC classification number: G06N3/08 G06F17/18 G06N3/047

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining data identifying (i) a first observation characterizing a first state of the environment, (ii) an action performed by the agent in response to the first observation, and (iii) an actual reward received as a result of the agent performing the action in response to the first observation; determining a pseudo-count for the first observation; determining, from the pseudo-count for the first observation, an exploration reward bonus that incentivizes the agent to explore the environment; generating a combined reward from the actual reward and the exploration reward bonus; and adjusting current values of the parameters of the neural network using the combined reward.
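
    A minimal sketch of the reward combination (constants are assumptions; the pseudo-count below is recovered from a density model's probability of the observation before and after observing it, which is one published derivation of the idea):

        import math

        def pseudo_count(rho_before, rho_after):
            # Treat the density model's update as if one more real count of the
            # observation had been seen: N = rho * (1 - rho') / (rho' - rho).
            return rho_before * (1.0 - rho_after) / (rho_after - rho_before)

        def combined_reward(actual_reward, rho_before, rho_after, beta=0.05):
            n = max(pseudo_count(rho_before, rho_after), 0.0)
            bonus = beta / math.sqrt(n + 0.01)  # exploration reward bonus
            return actual_reward + bonus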

    LEARNING ENVIRONMENT REPRESENTATIONS FOR AGENT CONTROL USING PREDICTIONS OF BOOTSTRAPPED LATENTS

    Publication Number: US20230083486A1

    Publication Date: 2023-03-16

    Application Number: US17797886

    Application Date: 2021-02-08

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an environment representation neural network of a reinforcement learning system that controls an agent to perform a given task. In one aspect, the method includes: receiving a current observation input and a future observation input; generating, from the future observation input, a future latent representation of the future state of the environment; processing the current observation input, using the environment representation neural network, to generate a current internal representation of the current state of the environment; generating, from the current internal representation, a predicted future latent representation; evaluating an objective function measuring a difference between the future latent representation and the predicted future latent representation; and determining, based on a determined gradient of the objective function, an update to the current values of the environment representation parameters.
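
    A minimal sketch in PyTorch (hypothetical sizes; a stop-gradient target encoder for the future latent and a mean-squared-error objective are assumptions, chosen as one plausible instantiation):

        import torch
        import torch.nn as nn

        n_obs, n_latent = 4, 16
        repr_net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, n_latent))
        predictor = nn.Linear(n_latent, n_latent)
        target_encoder = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, n_latent))
        opt = torch.optim.Adam(list(repr_net.parameters()) + list(predictor.parameters()), lr=1e-3)

        def update(current_obs, future_obs):
            with torch.no_grad():  # bootstrapped target: no gradient flows into it
                future_latent = target_encoder(future_obs)
            predicted = predictor(repr_net(current_obs))  # predicted future latent
            loss = nn.functional.mse_loss(predicted, future_latent)
            opt.zero_grad()
            loss.backward()
            opt.step()
            return loss.item()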

    Memory-efficient backpropagation through time

    Publication Number: US11256990B2

    Publication Date: 2022-02-22

    Application Number: US16303101

    Application Date: 2017-05-19

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a recurrent neural network on training sequences using backpropagation through time. In one aspect, a method includes receiving a training sequence including a respective input at each of a number of time steps; obtaining data defining an amount of memory allocated to storing forward propagation information for use during backpropagation; determining, from the number of time steps in the training sequence and from the amount of memory allocated to storing the forward propagation information, a training policy for processing the training sequence, wherein the training policy defines when to store forward propagation information during forward propagation of the training sequence; and training the recurrent neural network on the training sequence in accordance with the training policy.
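
    A minimal sketch of one such policy in PyTorch (the segment count is an assumption; the patent derives the storage policy from the memory budget, while this sketch simply fixes it and lets torch.utils.checkpoint recompute each segment's activations during the backward pass):

        import torch
        import torch.nn as nn
        from torch.utils.checkpoint import checkpoint

        T, num_segments = 32, 4  # budget: store ~num_segments boundary states
        cell = nn.GRUCell(8, 16)
        inputs = torch.randn(T, 1, 8)

        def run_segment(h, *xs):
            for x in xs:
                h = cell(x, h)
            return h

        h = torch.zeros(1, 16)
        step = T // num_segments
        for s in range(0, T, step):
            # Keep only the segment-boundary state; activations inside the
            # segment are recomputed when backpropagation reaches it.
            h = checkpoint(run_segment, h, *inputs[s:s + step], use_reentrant=False)

        h.sum().backward()  # toy loss; triggers per-segment recomputation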

    TRAINING MACHINE LEARNING MODELS USING TASK SELECTION POLICIES TO INCREASE LEARNING PROGRESS

    Publication Number: US20210150355A1

    Publication Date: 2021-05-20

    Application Number: US17159961

    Application Date: 2021-01-27

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a machine learning model. In one aspect, a method includes receiving training data for training the machine learning model on a plurality of tasks, where each task includes multiple batches of training data. A task is selected in accordance with a current task selection policy. A batch of training data is selected from the selected task. The machine learning model is trained on the selected batch of training data to determine updated values of the model parameters. A learning progress measure that represents a progress of the training of the machine learning model as a result of training the machine learning model on the selected batch of training data is determined. The current task selection policy is updated using the learning progress measure.
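
    A minimal sketch of such a task selection policy (the Exp3-style bandit and the loss-drop progress signal are assumptions illustrating the loop, not the patented policy):

        import math
        import random

        class TaskSelector:
            def __init__(self, n_tasks, lr=0.1, eps=0.05):
                self.w = [0.0] * n_tasks
                self.lr, self.eps, self.n = lr, eps, n_tasks

            def probs(self):
                m = max(self.w)
                exp_w = [math.exp(v - m) for v in self.w]
                z = sum(exp_w)
                return [(1 - self.eps) * e / z + self.eps / self.n for e in exp_w]

            def select(self):
                return random.choices(range(self.n), weights=self.probs())[0]

            def update(self, task, progress):
                # Importance-weighted bandit update: tasks that yielded more
                # learning progress get sampled more often in the future.
                self.w[task] += self.lr * progress / self.probs()[task]

        selector = TaskSelector(n_tasks=3)
        task = selector.select()
        loss_before, loss_after = 1.0, 0.8  # measured around training on the batch
        selector.update(task, progress=loss_before - loss_after)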
