-
Publication number: US11886997B2
Publication date: 2024-01-30
Application number: US17962008
Filing date: 2022-10-07
Applicant: DeepMind Technologies Limited
Inventor: Olivier Pietquin, Martin Riedmiller, Wang Fumin, Bilal Piot, Mel Vecerik, Todd Andrew Hester, Thomas Rothoerl, Thomas Lampe, Nicolas Manfred Otto Heess, Jonathan Karl Scholz
Abstract: An off-policy reinforcement learning actor-critic neural network system configured to select actions from a continuous action space to be performed by an agent interacting with an environment to perform a task. An observation defines environment state data and reward data. The system has an actor neural network which learns a policy function mapping the state data to action data. A critic neural network learns an action-value (Q) function. A replay buffer stores tuples of the state data, the action data, the reward data and new state data. The replay buffer also includes demonstration transition data comprising a set of the tuples from a demonstration of the task within the environment. The neural network system is configured to train the actor neural network and the critic neural network off-policy using stored tuples from the replay buffer comprising tuples both from operation of the system and from the demonstration transition data.
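The abstract above outlines the training loop at a high level. The following is a minimal sketch, assuming small fully connected networks, a stubbed environment step, and a DDPG-style update rule, of how demonstration tuples and agent tuples can share one replay buffer for off-policy actor-critic training; the network sizes, noise scale, and the placeholder `agent_step` are illustrative assumptions, not details taken from the application.

```python
# Sketch: off-policy actor-critic training from a replay buffer that mixes
# agent transitions with demonstration transitions. Toy dimensions and a
# placeholder environment; not the patented implementation.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 2

class Actor(nn.Module):
    """Maps state data to action data (deterministic policy)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh())
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Maps (state, action) to an action-value (Q) estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
target_actor, target_critic = Actor(), Critic()
target_actor.load_state_dict(actor.state_dict())
target_critic.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

# Replay buffer of (state, action, reward, next_state) tuples.
# Demonstration transitions are inserted up front.
replay = deque(maxlen=10_000)
demo_transitions = [(torch.randn(STATE_DIM), torch.rand(ACTION_DIM) * 2 - 1,
                     torch.randn(1), torch.randn(STATE_DIM)) for _ in range(100)]
replay.extend(demo_transitions)  # tuples from a demonstration of the task

def agent_step():
    """Collects one transition from a placeholder environment interaction."""
    s = torch.randn(STATE_DIM)
    with torch.no_grad():
        a = actor(s) + 0.1 * torch.randn(ACTION_DIM)  # exploration noise
    r, s_next = torch.randn(1), torch.randn(STATE_DIM)
    replay.append((s, a, r, s_next))

def update(batch_size=32, gamma=0.99, tau=0.005):
    """One off-policy update on a batch mixing agent and demonstration tuples."""
    batch = random.sample(replay, batch_size)
    s, a, r, s_next = (torch.stack(x) for x in zip(*batch))
    with torch.no_grad():
        q_target = r + gamma * target_critic(s_next, target_actor(s_next))
    critic_loss = nn.functional.mse_loss(critic(s, a), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    actor_loss = -critic(s, actor(s)).mean()  # follow the critic's gradient
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Soft (Polyak) updates of the target networks.
    for net, tgt in ((actor, target_actor), (critic, target_critic)):
        for p, tp in zip(net.parameters(), tgt.parameters()):
            tp.data.mul_(1 - tau).add_(tau * p.data)

for _ in range(200):
    agent_step()
    update()
```

With uniform sampling from the mixed buffer, early updates are dominated by demonstration tuples and agent tuples gradually take over as they accumulate; the sketch keeps sampling uniform for simplicity.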
-
Publication number: US20210383225A1
Publication date: 2021-12-09
Application number: US17338777
Filing date: 2021-06-04
Applicant: DeepMind Technologies Limited
Inventor: Jean-Bastien François Laurent Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Remi Munos, Michal Valko
Abstract: A computer-implemented method of training a neural network. The method comprises processing a first transformed view of a training data item, e.g. an image, with a target neural network to generate a target output, processing a second transformed view of the training data item, e.g. image, with an online neural network to generate a prediction of the target output, updating parameters of the online neural network to minimize an error between the prediction of the target output and the target output, and updating parameters of the target neural network based on the parameters of the online neural network. The method can effectively train an encoder neural network without using labelled training data items, and without using a contrastive loss, i.e. without needing “negative examples” which comprise transformed views of different data items.
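As a rough illustration of the online/target bootstrapping described above, here is a minimal sketch assuming small MLP encoders, toy vector "views" produced by additive noise, and a cosine-similarity loss; the actual method operates on transformed images with architectures not reproduced here.

```python
# Sketch: train an online network to predict a target network's output on a
# different view of the same data item, with no negative examples; the target
# network is updated as a moving average of the online network.
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM = 32

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))

online_encoder, online_predictor = mlp(DIM, DIM), mlp(DIM, DIM)
target_encoder = mlp(DIM, DIM)
target_encoder.load_state_dict(online_encoder.state_dict())
for p in target_encoder.parameters():
    p.requires_grad_(False)  # the target network receives no gradient updates

opt = torch.optim.Adam(
    list(online_encoder.parameters()) + list(online_predictor.parameters()), lr=1e-3)

def two_views(x):
    """Two transformed views of the same training data item (toy noise here)."""
    return x + 0.1 * torch.randn_like(x), x + 0.1 * torch.randn_like(x)

def train_step(batch, tau=0.99):
    view_a, view_b = two_views(batch)
    with torch.no_grad():
        target_out = target_encoder(view_a)                    # target output
    prediction = online_predictor(online_encoder(view_b))      # prediction of it
    # Minimise the error between prediction and target output; no contrastive
    # loss and no negative examples are involved.
    loss = -F.cosine_similarity(prediction, target_out, dim=-1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    # Update target parameters based on the online parameters (moving average).
    with torch.no_grad():
        for p_t, p_o in zip(target_encoder.parameters(), online_encoder.parameters()):
            p_t.mul_(tau).add_((1 - tau) * p_o)
    return loss.item()

for _ in range(100):
    train_step(torch.randn(64, DIM))
```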
-
Publication number: US12175737B2
Publication date: 2024-12-24
Application number: US17773789
Filing date: 2020-11-13
Applicant: DEEPMIND TECHNOLOGIES LIMITED
Inventor: Viorica Patraucean, Bilal Piot, Joao Carreira, Volodymyr Mnih, Simon Osindero
Abstract: A system that is configured to receive a sequence of task inputs and to perform a machine learning task is described. The system includes a reinforcement learning (RL) neural network and a task neural network. The RL neural network is configured to: generate, for each task input of the sequence of task inputs, a respective decision that determines whether to encode the task input or to skip the task input, and provide the respective decision of each task input to the task neural network. The task neural network is configured to: receive the sequence of task inputs, receive, from the RL neural network, for each task input of the sequence of task inputs, a respective decision that determines whether to encode the task input or to skip the task input, process each of the un-skipped task inputs in the sequence of task inputs to generate a respective accumulated feature for the un-skipped task input, wherein the respective accumulated feature characterizes features of the un-skipped task input and of previous un-skipped task inputs in the sequence, and generate a machine learning task output for the machine learning task based on the last accumulated feature generated for the last un-skipped task input in the sequence.
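A forward-pass-only sketch of the skip-or-encode idea follows, assuming a small gating network standing in for the RL neural network and a GRU cell as the feature accumulator; the gate is sampled but not trained here, and all dimensions are illustrative rather than taken from the application.

```python
# Sketch: per-input skip/encode decisions; only un-skipped task inputs update
# the accumulated feature, and the task output comes from the last
# accumulated feature. Toy dimensions; not the trained system.
import torch
import torch.nn as nn

INPUT_DIM, FEATURE_DIM, NUM_CLASSES = 16, 32, 4

class GatingNetwork(nn.Module):
    """Decides, per task input, whether to encode it or skip it."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(INPUT_DIM + FEATURE_DIM, 2)  # logits: [skip, encode]
    def forward(self, x, accumulated):
        logits = self.net(torch.cat([x, accumulated], dim=-1))
        return torch.distributions.Categorical(logits=logits).sample()  # 0=skip, 1=encode

class TaskNetwork(nn.Module):
    """Accumulates features over un-skipped inputs and produces the task output."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(INPUT_DIM, FEATURE_DIM)
        self.accumulator = nn.GRUCell(FEATURE_DIM, FEATURE_DIM)
        self.head = nn.Linear(FEATURE_DIM, NUM_CLASSES)
    def forward(self, sequence, gate):
        accumulated = torch.zeros(FEATURE_DIM)
        for x in sequence:
            decision = gate(x, accumulated)
            if decision.item() == 1:  # encode this task input
                feature = torch.relu(self.encoder(x))
                accumulated = self.accumulator(feature.unsqueeze(0),
                                               accumulated.unsqueeze(0)).squeeze(0)
            # a skipped input leaves the accumulated feature unchanged
        # The task output is computed from the last accumulated feature.
        return self.head(accumulated)

gate, task_net = GatingNetwork(), TaskNetwork()
sequence = [torch.randn(INPUT_DIM) for _ in range(10)]  # sequence of task inputs
output = task_net(sequence, gate)
print(output.shape)  # torch.Size([4])
```

Skipping inputs saves the encoder and accumulator computation for those positions, which is the point of letting a separate decision network gate the sequence.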
-
Publication number: US20230059004A1
Publication date: 2023-02-23
Application number: US17797878
Filing date: 2021-02-08
Applicant: DeepMind Technologies Limited
Inventor: Adrià Puigdomènech Badia, Bilal Piot, Pablo Sprechmann, Steven James Kapturowski, Alex Vitvitskyi, Zhaohan Guo, Charles Blundell
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reinforcement learning with adaptive return computation schemes. In one aspect, a method includes: maintaining data specifying a policy for selecting between multiple different return computation schemes, each return computation scheme assigning a different importance to exploring the environment while performing an episode of a task; selecting, using the policy, a return computation scheme from the multiple different return computation schemes; controlling an agent to perform the episode of the task to maximize a return computed according to the selected return computation scheme; identifying rewards that were generated as a result of the agent performing the episode of the task; and updating, using the identified rewards, the policy for selecting between multiple different return computation schemes.
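The scheme-selection loop can be pictured as a bandit over return computation schemes. Below is a minimal sketch assuming a UCB-style selection policy, a placeholder episode, and illustrative (discount, exploration-weight) pairs; none of these specific values or the bandit choice come from the application.

```python
# Sketch: choose among return computation schemes that weight exploration
# differently, run an episode under the chosen scheme, and update the
# selection policy with the rewards the episode produced.
import math
import random

# Each scheme assigns a different importance to exploring the environment.
SCHEMES = [
    {"gamma": 0.99,  "exploration_weight": 0.0},   # purely exploitative
    {"gamma": 0.99,  "exploration_weight": 0.1},
    {"gamma": 0.997, "exploration_weight": 0.3},   # strongly exploratory
]

counts = [0] * len(SCHEMES)
mean_rewards = [0.0] * len(SCHEMES)

def select_scheme(t):
    """UCB-style policy for selecting between return computation schemes."""
    for i, c in enumerate(counts):
        if c == 0:
            return i  # try every scheme at least once
    return max(range(len(SCHEMES)),
               key=lambda i: mean_rewards[i] + math.sqrt(2 * math.log(t) / counts[i]))

def run_episode(scheme):
    """Placeholder episode: the agent would maximise a return computed as
    extrinsic_reward + exploration_weight * exploration_bonus, discounted by gamma."""
    return random.gauss(1.0 - scheme["exploration_weight"], 0.5)  # fake outcome

for t in range(1, 201):
    i = select_scheme(t)
    episode_reward = run_episode(SCHEMES[i])
    # Update the selection policy using the rewards identified for the episode.
    counts[i] += 1
    mean_rewards[i] += (episode_reward - mean_rewards[i]) / counts[i]

print("scheme pull counts:", counts)
```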
-
Publication number: US20230023189A1
Publication date: 2023-01-26
Application number: US17962008
Filing date: 2022-10-07
Applicant: DeepMind Technologies Limited
Inventor: Olivier Pietquin, Martin Riedmiller, Wang Fumin, Bilal Piot, Mel Vecerik, Todd Andrew Hester, Thomas Rothoerl, Thomas Lampe, Nicolas Manfred Otto Heess, Jonathan Karl Scholz
Abstract: An off-policy reinforcement learning actor-critic neural network system configured to select actions from a continuous action space to be performed by an agent interacting with an environment to perform a task. An observation defines environment state data and reward data. The system has an actor neural network which learns a policy function mapping the state data to action data. A critic neural network learns an action-value (Q) function. A replay buffer stores tuples of the state data, the action data, the reward data and new state data. The replay buffer also includes demonstration transition data comprising a set of the tuples from a demonstration of the task within the environment. The neural network system is configured to train the actor neural network and the critic neural network off-policy using stored tuples from the replay buffer comprising tuples both from operation of the system and from the demonstration transition data.
-
Publication number: US20200372366A1
Publication date: 2020-11-26
Application number: US16881180
Filing date: 2020-05-22
Applicant: DeepMind Technologies Limited
Inventor: Adrià Puigdomènech Badia, Pablo Sprechmann, Alex Vitvitskyi, Zhaohan Guo, Bilal Piot, Steven James Kapturowski, Olivier Tieleman, Charles Blundell
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network that is used to select actions to be performed by an agent interacting with an environment. In one aspect, the method comprises: receiving an observation characterizing a current state of the environment; processing the observation and an exploration importance factor using the action selection neural network to generate an action selection output; selecting an action to be performed by the agent using the action selection output; determining an exploration reward; determining an overall reward based on: (i) the exploration importance factor, and (ii) the exploration reward; and training the action selection neural network using a reinforcement learning technique based on the overall reward.
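To make the exploration-importance conditioning concrete, here is a minimal sketch assuming a toy observation space, a nearest-neighbour novelty bonus standing in for the exploration reward, and a REINFORCE-style update; the environment stub, the bonus, and the fixed factor value are illustrative assumptions rather than details from the application.

```python
# Sketch: the action selection network is conditioned on an exploration
# importance factor, and the overall reward mixes the exploration reward in
# proportion to that factor. Toy setup; not the patented training system.
import torch
import torch.nn as nn

OBS_DIM, NUM_ACTIONS = 8, 4

# The action selection network receives the observation together with the
# exploration importance factor.
policy = nn.Sequential(nn.Linear(OBS_DIM + 1, 64), nn.ReLU(),
                       nn.Linear(64, NUM_ACTIONS))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def exploration_reward(obs, memory):
    """Novelty bonus: larger when the observation is far from ones seen before."""
    if not memory:
        return 1.0
    dists = torch.stack([torch.linalg.norm(obs - m) for m in memory])
    return float(dists.min().clamp(max=1.0))

def run_episode(beta, steps=20):
    """beta is the exploration importance factor for this episode."""
    memory, log_probs, overall_rewards = [], [], []
    obs = torch.randn(OBS_DIM)
    for _ in range(steps):
        logits = policy(torch.cat([obs, torch.tensor([beta])]))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        extrinsic = torch.randn(()).item()            # placeholder task reward
        bonus = exploration_reward(obs, memory)       # exploration reward
        overall_rewards.append(extrinsic + beta * bonus)  # overall reward
        memory.append(obs)
        obs = torch.randn(OBS_DIM)                    # placeholder next observation
    return log_probs, overall_rewards

def train_step(beta):
    log_probs, rewards = run_episode(beta)
    returns = torch.tensor(rewards).flip(0).cumsum(0).flip(0)  # reverse cumulative sums
    loss = -(torch.stack(log_probs) * returns).mean()
    opt.zero_grad(); loss.backward(); opt.step()

for episode in range(50):
    train_step(beta=0.3)  # a fixed, illustrative exploration importance factor
```

Varying beta across episodes lets the same conditioned network act more or less exploratively on demand, which is the role the exploration importance factor plays in the abstract.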
-
Publication number: US20250148774A1
Publication date: 2025-05-08
Application number: US18953004
Filing date: 2024-11-19
Applicant: DeepMind Technologies Limited
Inventor: Viorica Patraucean, Bilal Piot, Joao Carreira, Volodymyr Mnih, Simon Osindero
Abstract: A system that is configured to receive a sequence of task inputs and to perform a machine learning task is described. An RL neural network is configured to: generate, for each task input of the sequence, a respective decision that determines whether to encode the task input or to skip the task input, and provide the respective decision of each task input to the task neural network. The task neural network is configured to: receive the sequence of task inputs, receive, from the RL neural network, for each task input of the sequence, a respective decision, process each of the un-skipped task inputs in the sequence of task inputs to generate a respective accumulated feature for the un-skipped task input, and generate a machine learning task output for the machine learning task based on the last accumulated feature generated for the last un-skipped task input in the sequence.
-
Publication number: US20240028866A1
Publication date: 2024-01-25
Application number: US18334112
Filing date: 2023-06-13
Applicant: DeepMind Technologies Limited
Inventor: Adrià Puigdomènech Badia, Pablo Sprechmann, Alex Vitvitskyi, Zhaohan Guo, Bilal Piot, Steven James Kapturowski, Olivier Tieleman, Charles Blundell
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network that is used to select actions to be performed by an agent interacting with an environment. In one aspect, the method comprises: receiving an observation characterizing a current state of the environment; processing the observation and an exploration importance factor using the action selection neural network to generate an action selection output; selecting an action to be performed by the agent using the action selection output; determining an exploration reward; determining an overall reward based on: (i) the exploration importance factor, and (ii) the exploration reward; and training the action selection neural network using a reinforcement learning technique based on the overall reward.
-
Publication number: US11468321B2
Publication date: 2022-10-11
Application number: US16624245
Filing date: 2018-06-28
Applicant: DEEPMIND TECHNOLOGIES LIMITED
Inventor: Olivier Claude Pietquin, Martin Riedmiller, Wang Fumin, Bilal Piot, Mel Vecerik, Todd Andrew Hester, Thomas Rothoerl, Thomas Lampe, Nicolas Manfred Otto Heess, Jonathan Karl Scholz
Abstract: An off-policy reinforcement learning actor-critic neural network system configured to select actions from a continuous action space to be performed by an agent interacting with an environment to perform a task. An observation defines environment state data and reward data. The system has an actor neural network which learns a policy function mapping the state data to action data. A critic neural network learns an action-value (Q) function. A replay buffer stores tuples of the state data, the action data, the reward data and new state data. The replay buffer also includes demonstration transition data comprising a set of the tuples from a demonstration of the task within the environment. The neural network system is configured to train the actor neural network and the critic neural network off-policy using stored tuples from the replay buffer comprising tuples both from operation of the system and from the demonstration transition data.
-
Publication number: US20200151562A1
Publication date: 2020-05-14
Application number: US16624245
Filing date: 2018-06-28
Applicant: DEEPMIND TECHNOLOGIES LIMITED
Inventor: Olivier Pietquin, Martin Riedmiller, Wang Fumin, Bilal Piot, Mel Vecerik, Todd Andrew Hester, Thomas Rothörl, Thomas Lampe, Nicolas Manfred Otto Heess, Jonathan Karl Scholz
Abstract: An off-policy reinforcement learning actor-critic neural network system configured to select actions from a continuous action space to be performed by an agent interacting with an environment to perform a task. An observation defines environment state data and reward data. The system has an actor neural network which learns a policy function mapping the state data to action data. A critic neural network learns an action-value (Q) function. A replay buffer stores tuples of the state data, the action data, the reward data and new state data. The replay buffer also includes demonstration transition data comprising a set of the tuples from a demonstration of the task within the environment. The neural network system is configured to train the actor neural network and the critic neural network off-policy using stored tuples from the replay buffer comprising tuples both from operation of the system and from the demonstration transition data.