-
Publication No.: US11714990B2
Publication Date: 2023-08-01
Application No.: US16881180
Application Date: 2020-05-22
Applicant: DeepMind Technologies Limited
Inventor: Adrià Puigdomènech Badia, Pablo Sprechmann, Alex Vitvitskyi, Zhaohan Guo, Bilal Piot, Steven James Kapturowski, Olivier Tieleman, Charles Blundell
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network that is used to select actions to be performed by an agent interacting with an environment. In one aspect, the method comprises: receiving an observation characterizing a current state of the environment; processing the observation and an exploration importance factor using the action selection neural network to generate an action selection output; selecting an action to be performed by the agent using the action selection output; determining an exploration reward; determining an overall reward based on: (i) the exploration importance factor, and (ii) the exploration reward; and training the action selection neural network using a reinforcement learning technique based on the overall reward.
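The reward combination described in the abstract can be sketched as follows. This is an illustrative assumption, not the patented method: the patent trains an action selection neural network, while this sketch substitutes a count-based exploration bonus and a tabular Q-learning update to show how the exploration importance factor `beta` weights the exploration reward inside the overall reward. All function names here are hypothetical.

```python
import numpy as np

def exploration_reward(visit_counts, state):
    # Count-based novelty bonus (an illustrative stand-in): rarely
    # visited states yield larger exploration rewards.
    visit_counts[state] = visit_counts.get(state, 0) + 1
    return 1.0 / np.sqrt(visit_counts[state])

def overall_reward(extrinsic, exploration, beta):
    # beta is the exploration importance factor: it scales how much
    # the exploration reward contributes to the training signal.
    return extrinsic + beta * exploration

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    # Tabular stand-in for the reinforcement learning step: move the
    # state-action value toward the bootstrapped target computed from
    # the overall reward.
    target = reward + gamma * q[next_state].max()
    q[state, action] += alpha * (target - q[state, action])
    return q
```

With `beta = 0`, the overall reward reduces to the extrinsic reward alone (pure exploitation); larger `beta` values train more exploratory behavior, which is why the abstract conditions the network on the factor itself.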
-
Publication No.: US20240028866A1
Publication Date: 2024-01-25
Application No.: US18334112
Application Date: 2023-06-13
Applicant: DeepMind Technologies Limited
Inventor: Adrià Puigdomènech Badia, Pablo Sprechmann, Alex Vitvitskyi, Zhaohan Guo, Bilal Piot, Steven James Kapturowski, Olivier Tieleman, Charles Blundell
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network that is used to select actions to be performed by an agent interacting with an environment. In one aspect, the method comprises: receiving an observation characterizing a current state of the environment; processing the observation and an exploration importance factor using the action selection neural network to generate an action selection output; selecting an action to be performed by the agent using the action selection output; determining an exploration reward; determining an overall reward based on: (i) the exploration importance factor, and (ii) the exploration reward; and training the action selection neural network using a reinforcement learning technique based on the overall reward.
-
Publication No.: US20230059004A1
Publication Date: 2023-02-23
Application No.: US17797878
Application Date: 2021-02-08
Applicant: DeepMind Technologies Limited
Inventor: Adrià Puigdomènech Badia, Bilal Piot, Pablo Sprechmann, Steven James Kapturowski, Alex Vitvitskyi, Zhaohan Guo, Charles Blundell
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reinforcement learning with adaptive return computation schemes. In one aspect, a method includes: maintaining data specifying a policy for selecting between multiple different return computation schemes, each return computation scheme assigning a different importance to exploring the environment while performing an episode of a task; selecting, using the policy, a return computation scheme from the multiple different return computation schemes; controlling an agent to perform the episode of the task to maximize a return computed according to the selected return computation scheme; identifying rewards that were generated as a result of the agent performing the episode of the task; and updating, using the identified rewards, the policy for selecting between multiple different return computation schemes.
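The "policy for selecting between multiple different return computation schemes" in this abstract can be sketched as a bandit over schemes, where each arm is a hypothetical (exploration weight, discount factor) pair and the arm value is updated from the rewards of the completed episode. The epsilon-greedy rule and constant step size are assumptions for illustration; the patent does not specify this particular policy form.

```python
import numpy as np

rng = np.random.default_rng(0)

class SchemeBandit:
    """Bandit policy over return computation schemes (illustrative sketch).

    Each arm is a (beta, gamma) pair: beta weights the exploration reward
    and gamma is the discount factor, so each scheme assigns a different
    importance to exploring the environment during an episode.
    """

    def __init__(self, schemes, epsilon=0.1, step_size=0.1):
        self.schemes = schemes
        self.epsilon = epsilon
        self.step_size = step_size
        self.values = np.zeros(len(schemes))

    def select(self):
        # Epsilon-greedy selection of a return computation scheme.
        if rng.random() < self.epsilon:
            return int(rng.integers(len(self.schemes)))
        return int(np.argmax(self.values))

    def update(self, arm, episode_return):
        # Update the policy with the return of the finished episode.
        # A constant step size tracks the non-stationary objective as
        # the underlying agent keeps learning.
        self.values[arm] += self.step_size * (episode_return - self.values[arm])
```

After each episode, the agent reports its return to `update`, and the next call to `select` picks the scheme whose estimated value is currently highest, closing the loop described in the abstract.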
-
Publication No.: US20200372366A1
Publication Date: 2020-11-26
Application No.: US16881180
Application Date: 2020-05-22
Applicant: DeepMind Technologies Limited
Inventor: Adrià Puigdomènech Badia, Pablo Sprechmann, Alex Vitvitskyi, Zhaohan Guo, Bilal Piot, Steven James Kapturowski, Olivier Tieleman, Charles Blundell
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network that is used to select actions to be performed by an agent interacting with an environment. In one aspect, the method comprises: receiving an observation characterizing a current state of the environment; processing the observation and an exploration importance factor using the action selection neural network to generate an action selection output; selecting an action to be performed by the agent using the action selection output; determining an exploration reward; determining an overall reward based on: (i) the exploration importance factor, and (ii) the exploration reward; and training the action selection neural network using a reinforcement learning technique based on the overall reward.