-
Publication No.: US11977983B2
Publication Date: 2024-05-07
Application No.: US17020248
Application Date: 2020-09-14
Applicant: DeepMind Technologies Limited
Inventor: Mohammad Gheshlaghi Azar, Meire Fortunato, Bilal Piot, Olivier Claude Pietquin, Jacob Lee Menick, Volodymyr Mnih, Charles Blundell, Remi Munos
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action to be performed by a reinforcement learning agent. The method includes obtaining an observation characterizing a current state of an environment. For each layer parameter of each noisy layer of a neural network, a respective noise value is determined. For each layer parameter of each noisy layer, a noisy current value for the layer parameter is determined from a current value of the layer parameter, a current value of a corresponding noise parameter, and the noise value. A network input including the observation is processed using the neural network in accordance with the noisy current values to generate a network output for the network input. An action is selected from a set of possible actions to be performed by the agent in response to the observation using the network output.
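The noisy-layer computation the abstract describes can be sketched in a few lines: each layer parameter has a learned current value (mu) and a corresponding noise parameter (sigma), and the noisy current value used in the forward pass is mu + sigma * eps, with the noise value eps freshly sampled per action selection. A minimal numpy sketch, assuming independent Gaussian noise per parameter; all class and variable names are illustrative, not from the patent text.

```python
import numpy as np

class NoisyLinear:
    """Linear layer whose effective weights are mu + sigma * eps,
    with eps resampled before each forward pass."""

    def __init__(self, in_dim, out_dim, sigma_init=0.017, rng=None):
        self.rng = rng or np.random.default_rng()
        # Current values of the layer parameters (mu) and of the
        # corresponding noise parameters (sigma); both are learned.
        self.w_mu = self.rng.uniform(-1, 1, (out_dim, in_dim)) / np.sqrt(in_dim)
        self.b_mu = np.zeros(out_dim)
        self.w_sigma = np.full((out_dim, in_dim), sigma_init)
        self.b_sigma = np.full(out_dim, sigma_init)

    def forward(self, x):
        # A respective noise value is determined for each layer parameter.
        w_eps = self.rng.standard_normal(self.w_mu.shape)
        b_eps = self.rng.standard_normal(self.b_mu.shape)
        # Noisy current value = current value + noise parameter * noise value.
        w = self.w_mu + self.w_sigma * w_eps
        b = self.b_mu + self.b_sigma * b_eps
        return x @ w.T + b

# Selecting an action from the noisy network output (e.g. Q-values):
layer = NoisyLinear(in_dim=4, out_dim=3)
observation = np.array([0.1, -0.2, 0.3, 0.05])
q_values = layer.forward(observation)
action = int(np.argmax(q_values))
```

Because the noise scales are learned per parameter, the network itself adapts how much perturbation (and hence exploration) each layer injects, rather than relying on a fixed epsilon-greedy schedule.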
-
Publication No.: US11868882B2
Publication Date: 2024-01-09
Application No.: US16624245
Application Date: 2018-06-28
Applicant: DeepMind Technologies Limited
Inventor: Olivier Claude Pietquin, Martin Riedmiller, Wang Fumin, Bilal Piot, Mel Vecerik, Todd Andrew Hester, Thomas Rothoerl, Thomas Lampe, Nicolas Manfred Otto Heess, Jonathan Karl Scholz
Abstract: An off-policy reinforcement learning actor-critic neural network system configured to select actions from a continuous action space to be performed by an agent interacting with an environment to perform a task. An observation defines environment state data and reward data. The system has an actor neural network which learns a policy function mapping the state data to action data. A critic neural network learns an action-value (Q) function. A replay buffer stores tuples of the state data, the action data, the reward data and new state data. The replay buffer also includes demonstration transition data comprising a set of the tuples from a demonstration of the task within the environment. The neural network system is configured to train the actor neural network and the critic neural network off-policy using stored tuples from the replay buffer comprising tuples both from operation of the system and from the demonstration transition data.
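The distinguishing detail here is the replay buffer: it holds (state, action, reward, new state) tuples both from the system's own operation and from demonstrations of the task, and the off-policy actor-critic updates draw batches from the combined store. A minimal sketch of such a buffer, assuming uniform sampling over the mixed contents (prioritised sampling over demonstrations is another plausible variant); names are illustrative.

```python
import random
from collections import deque

class MixedReplayBuffer:
    """Replay buffer holding both agent transitions and demonstration
    transitions; training batches are drawn from the combined store."""

    def __init__(self, capacity=100_000):
        self.agent = deque(maxlen=capacity)  # overwritten as the agent acts
        self.demo = []                       # demonstration tuples, kept fixed

    def add_agent(self, state, action, reward, new_state):
        self.agent.append((state, action, reward, new_state))

    def add_demo(self, state, action, reward, new_state):
        self.demo.append((state, action, reward, new_state))

    def sample(self, batch_size):
        pool = list(self.agent) + self.demo
        return random.sample(pool, min(batch_size, len(pool)))

buffer = MixedReplayBuffer()
buffer.add_demo(state=[0.0], action=[1.0], reward=1.0, new_state=[0.1])
buffer.add_agent(state=[0.1], action=[0.5], reward=0.0, new_state=[0.2])
batch = buffer.sample(batch_size=2)  # feeds off-policy actor-critic updates
```

Keeping the demonstration tuples permanently in the buffer lets the critic bootstrap from successful task executions long before the agent's own exploration produces any reward.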
-
Publication No.: US11714990B2
Publication Date: 2023-08-01
Application No.: US16881180
Application Date: 2020-05-22
Applicant: DeepMind Technologies Limited
Inventor: Adrià Puigdomènech Badia, Pablo Sprechmann, Alex Vitvitskyi, Zhaohan Guo, Bilal Piot, Steven James Kapturowski, Olivier Tieleman, Charles Blundell
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network that is used to select actions to be performed by an agent interacting with an environment. In one aspect, the method comprises: receiving an observation characterizing a current state of the environment; processing the observation and an exploration importance factor using the action selection neural network to generate an action selection output; selecting an action to be performed by the agent using the action selection output; determining an exploration reward; determining an overall reward based on: (i) the exploration importance factor, and (ii) the exploration reward; and training the action selection neural network using a reinforcement learning technique based on the overall reward.
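The training signal here combines the task reward with an exploration reward, weighted by the same exploration importance factor that is fed into the action selection network, so a single network can represent a family of policies that trade exploration against exploitation differently. A minimal sketch, assuming the common additive form (the abstract only states that the overall reward is based on the factor and the exploration reward); the factor values below are illustrative.

```python
def overall_reward(extrinsic_reward: float,
                   exploration_reward: float,
                   exploration_importance: float) -> float:
    """Overall reward based on (i) the exploration importance factor and
    (ii) the exploration reward. Assumed additive form: the factor scales
    how much the exploration reward contributes to the training signal."""
    return extrinsic_reward + exploration_importance * exploration_reward

# A family of behaviours, each conditioned on a different importance factor:
betas = [0.0, 0.1, 0.3]  # 0.0 -> a purely exploitative policy
rewards = [overall_reward(1.0, 0.5, b) for b in betas]
print(rewards)  # [1.0, 1.05, 1.15]
```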
-
Publication No.: US20230083486A1
Publication Date: 2023-03-16
Application No.: US17797886
Application Date: 2021-02-08
Applicant: DeepMind Technologies Limited
Inventor: Zhaohan Guo, Mohammad Gheshlaghi Azar, Bernardo Avila Pires, Florent Altché, Jean-Bastien François Laurent Grill, Bilal Piot, Remi Munos
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an environment representation neural network of a reinforcement learning system that controls an agent to perform a given task. In one aspect, the method includes: receiving a current observation input and a future observation input; generating, from the future observation input, a future latent representation of the future state of the environment; processing the current observation input, using the environment representation neural network, to generate a current internal representation of the current state of the environment; generating, from the current internal representation, a predicted future latent representation; evaluating an objective function measuring a difference between the future latent representation and the predicted future latent representation; and determining, based on a gradient of the objective function, an update to the current values of the environment representation parameters.
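The objective is a prediction loss in latent space: the current internal representation is mapped to a predicted future latent, compared against the latent computed directly from the future observation, and the representation parameters are updated along the gradient of that difference. A minimal numpy sketch, assuming a squared-error objective between normalised latents; the linear maps stand in for the patent's encoder and predictor networks and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W_repr = rng.standard_normal((8, 16))    # environment representation "network"
W_target = rng.standard_normal((8, 16))  # future-latent encoder
W_pred = rng.standard_normal((8, 8))     # predictor head

def normalize(v):
    return v / (np.linalg.norm(v) + 1e-8)

current_obs = rng.standard_normal(16)
future_obs = rng.standard_normal(16)

# Future latent representation, from the future observation input.
future_latent = normalize(W_target @ future_obs)
# Current internal representation, from the current observation input.
internal = W_repr @ current_obs
# Predicted future latent representation, from the current internal repr.
predicted = normalize(W_pred @ internal)

# Objective: squared difference between the two latents
# (for unit vectors this equals 2 - 2 * cosine similarity).
loss = np.sum((predicted - future_latent) ** 2)

# A real implementation would backpropagate this loss to update W_repr
# (the environment representation parameters); omitted in this sketch.
```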
-
Publication No.: US20210065012A1
Publication Date: 2021-03-04
Application No.: US17020248
Application Date: 2020-09-14
Applicant: DeepMind Technologies Limited
Inventor: Mohammad Gheshlaghi Azar, Meire Fortunato, Bilal Piot, Olivier Claude Pietquin, Jacob Lee Menick, Volodymyr Mnih, Charles Blundell, Remi Munos
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action to be performed by a reinforcement learning agent. The method includes obtaining an observation characterizing a current state of an environment. For each layer parameter of each noisy layer of a neural network, a respective noise value is determined. For each layer parameter of each noisy layer, a noisy current value for the layer parameter is determined from a current value of the layer parameter, a current value of a corresponding noise parameter, and the noise value. A network input including the observation is processed using the neural network in accordance with the noisy current values to generate a network output for the network input. An action is selected from a set of possible actions to be performed by the agent in response to the observation using the network output.