-
Publication Number: US10936946B2
Publication Date: 2021-03-02
Application Number: US15349950
Application Date: 2016-11-11
Applicant: DeepMind Technologies Limited
Inventor: Volodymyr Mnih , Adrià Puigdomènech Badia , Alexander Benjamin Graves , Timothy James Alexander Harley , David Silver , Koray Kavukcuoglu
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for asynchronous deep reinforcement learning. One of the systems includes a plurality of workers, wherein each worker is configured to operate independently of each other worker, and wherein each worker is associated with a respective actor that interacts with a respective replica of the environment during the training of the deep neural network.
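A minimal sketch of the asynchronous-worker pattern this abstract describes: several workers run in parallel, each with its own environment replica, and all of them apply updates to one shared parameter vector. The toy environment, reward, and linear "network" below are illustrative assumptions, not the patented system.

```python
# Sketch: asynchronous workers, each with its own environment replica,
# updating shared parameters. Toy environment and gradient are assumptions.
import threading
import numpy as np

shared_params = np.zeros(4)          # shared policy parameters
lock = threading.Lock()              # guards the shared update

class ToyEnvReplica:
    """Stand-in environment: reward scores the current shared parameters."""
    def __init__(self, seed):
        self.rng = np.random.default_rng(seed)
    def step(self):
        state = self.rng.standard_normal(4)
        reward = float(state @ shared_params)
        return state, reward

def worker(worker_id, steps=100, lr=0.01):
    env = ToyEnvReplica(seed=worker_id)          # per-worker replica
    for _ in range(steps):
        state, reward = env.step()
        grad = state * (1.0 - reward)            # toy gradient signal
        with lock:                               # asynchronous update
            shared_params[:] = shared_params + lr * grad

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("learned params:", shared_params)
```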
-
Publication Number: US20200327399A1
Publication Date: 2020-10-15
Application Number: US16911992
Application Date: 2020-06-25
Applicant: DeepMind Technologies Limited
Inventor: David Silver , Tom Schaul , Matteo Hessel , Hado Philip van Hasselt
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for prediction of an outcome related to an environment. In one aspect, a system comprises a state representation neural network that is configured to: receive an observation characterizing a state of an environment being interacted with by an agent and process the observation to generate an internal state representation of the environment state; a prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a predicted subsequent state representation of a subsequent state of the environment and a predicted reward for the subsequent state; and a value prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a value prediction.
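A minimal sketch of the three-network structure in this abstract: a state representation network maps an observation to an internal state, a prediction network maps an internal state to a predicted next internal state plus a predicted reward, and a value prediction network maps an internal state to a value. The linear layers and dimensions here are illustrative assumptions.

```python
# Sketch: representation, prediction, and value networks rolled forward
# from one observation. Layer shapes and weights are assumptions.
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, STATE_DIM = 8, 4

W_repr = rng.standard_normal((STATE_DIM, OBS_DIM)) * 0.1    # representation
W_next = rng.standard_normal((STATE_DIM, STATE_DIM)) * 0.1  # prediction: state
w_reward = rng.standard_normal(STATE_DIM) * 0.1             # prediction: reward
w_value = rng.standard_normal(STATE_DIM) * 0.1              # value head

def state_representation(observation):
    return np.tanh(W_repr @ observation)        # internal state s

def prediction(state):
    next_state = np.tanh(W_next @ state)        # predicted subsequent state
    reward = float(w_reward @ state)            # predicted reward
    return next_state, reward

def value_prediction(state):
    return float(w_value @ state)               # predicted value

# Roll the model forward a few imagined steps from one observation.
obs = rng.standard_normal(OBS_DIM)
s = state_representation(obs)
for k in range(3):
    v = value_prediction(s)
    s, r = prediction(s)
    print(f"step {k}: value={v:+.3f}, predicted reward={r:+.3f}")
```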
-
Publication Number: US10733501B2
Publication Date: 2020-08-04
Application Number: US16403314
Application Date: 2019-05-03
Applicant: DeepMind Technologies Limited
Inventor: David Silver , Tom Schaul , Matteo Hessel , Hado Philip van Hasselt
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for prediction of an outcome related to an environment. In one aspect, a system comprises a state representation neural network that is configured to: receive an observation characterizing a state of an environment being interacted with by an agent and process the observation to generate an internal state representation of the environment state; a prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a predicted subsequent state representation of a subsequent state of the environment and a predicted reward for the subsequent state; and a value prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a value prediction.
-
Publication Number: US20190258929A1
Publication Date: 2019-08-22
Application Number: US16403388
Application Date: 2019-05-03
Applicant: DeepMind Technologies Limited
Inventor: Volodymyr Mnih , Adrià Puigdomènech Badia , Alexander Benjamin Graves , Timothy James Alexander Harley , David Silver , Koray Kavukcuoglu
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for asynchronous deep reinforcement learning. One of the systems includes a plurality of workers, wherein each worker is configured to operate independently of each other worker, and wherein each worker is associated with a respective actor that interacts with a respective replica of the environment during the training of the deep neural network.
-
Publication Number: US10282662B2
Publication Date: 2019-05-07
Application Number: US15977891
Application Date: 2018-05-11
Applicant: DeepMind Technologies Limited
Inventor: Tom Schaul , John Quan , David Silver
IPC: G06N3/08
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
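A minimal sketch of the priority-based replay selection this abstract describes: each stored piece of experience carries an expected-learning-progress measure (here the absolute TD error, a common proxy, which is an assumption), sampling is biased toward higher measures, and the measure is refreshed after training on the selected piece.

```python
# Sketch: replay memory that samples experience in proportion to an
# expected-learning-progress measure. Data and measures are assumptions.
import numpy as np

rng = np.random.default_rng(0)

class PrioritizedReplay:
    def __init__(self):
        self.data, self.priority = [], []
    def add(self, transition, learning_progress):
        self.data.append(transition)
        self.priority.append(learning_progress)
    def sample(self):
        p = np.asarray(self.priority)
        p = p / p.sum()                       # higher measure => more likely
        i = rng.choice(len(self.data), p=p)
        return i, self.data[i]
    def update(self, i, learning_progress):
        self.priority[i] = learning_progress  # refresh after training on it

memory = PrioritizedReplay()
for t in range(10):
    td_error = abs(rng.standard_normal())     # stand-in progress measure
    memory.add(("transition", t), td_error)

i, transition = memory.sample()
print("selected for training:", transition)
memory.update(i, learning_progress=0.01)      # progress shrinks once learned
```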
-
Publication Number: US12141677B2
Publication Date: 2024-11-12
Application Number: US16911992
Application Date: 2020-06-25
Applicant: DeepMind Technologies Limited
Inventor: David Silver , Tom Schaul , Matteo Hessel , Hado Philip van Hasselt
IPC: G06N3/045 , G05B13/02 , G06N3/006 , G06N3/044 , G06N3/047 , G06N3/08 , G06N3/10 , G06T1/20 , G06N3/084
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for prediction of an outcome related to an environment. In one aspect, a system comprises a state representation neural network that is configured to: receive an observation characterizing a state of an environment being interacted with by an agent and process the observation to generate an internal state representation of the environment state; a prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a predicted subsequent state representation of a subsequent state of the environment and a predicted reward for the subsequent state; and a value prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a value prediction.
-
Publication Number: US12020155B2
Publication Date: 2024-06-25
Application Number: US17733594
Application Date: 2022-04-29
Applicant: DeepMind Technologies Limited
Inventor: Volodymyr Mnih , Adrià Puigdomènech Badia , Alexander Benjamin Graves , Timothy James Alexander Harley , David Silver , Koray Kavukcuoglu
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for asynchronous deep reinforcement learning. One of the systems includes a plurality of workers, wherein each worker is configured to operate independently of each other worker, and wherein each worker is associated with a respective actor that interacts with a respective replica of the environment during the training of the deep neural network.
-
Publication Number: US20240177002A1
Publication Date: 2024-05-30
Application Number: US18497931
Application Date: 2023-10-30
Applicant: DeepMind Technologies Limited
Inventor: Timothy Paul Lillicrap , Jonathan James Hunt , Alexander Pritzel , Nicolas Manfred Otto Heess , Tom Erez , Yuval Tassa , David Silver , Daniel Pieter Wierstra
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an actor neural network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining a minibatch of experience tuples; and updating current values of the parameters of the actor neural network, comprising: for each experience tuple in the minibatch: processing the training observation and the training action in the experience tuple using a critic neural network to determine a neural network output for the experience tuple, and determining a target neural network output for the experience tuple; updating current values of the parameters of the critic neural network using errors between the target neural network outputs and the neural network outputs; and updating the current values of the parameters of the actor neural network using the critic neural network.
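A minimal sketch of the update loop in this abstract: for each experience tuple in a minibatch, a critic scores the (observation, action) pair, the critic is regressed toward a target output, and the actor is then updated in the direction the critic indicates. The linear networks, toy minibatch, and learning rate are assumptions.

```python
# Sketch: critic update from target errors, then actor update through the
# critic. Network forms and data below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, ACT_DIM = 4, 2

actor_W = rng.standard_normal((ACT_DIM, OBS_DIM)) * 0.1
critic_w = rng.standard_normal(OBS_DIM + ACT_DIM) * 0.1

def actor(obs):
    return np.tanh(actor_W @ obs)

def critic(obs, act):
    return float(critic_w @ np.concatenate([obs, act]))

minibatch = [(rng.standard_normal(OBS_DIM),    # training observation
              rng.standard_normal(ACT_DIM),    # training action
              float(rng.random()))             # target neural network output
             for _ in range(8)]

lr = 0.05
for obs, act, target in minibatch:
    # Critic update: reduce the error between target and critic output.
    error = target - critic(obs, act)
    critic_w += lr * error * np.concatenate([obs, act])
    # Actor update: follow the critic's action-gradient at the actor's action.
    a = actor(obs)
    dq_da = critic_w[OBS_DIM:]                 # dQ/da for the linear critic
    actor_W += lr * np.outer(dq_da * (1 - a**2), obs)  # chain through tanh
print("updated actor parameter norm:", np.linalg.norm(actor_W))
```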
-
Publication Number: US20240104353A1
Publication Date: 2024-03-28
Application Number: US18274748
Application Date: 2022-02-08
Applicant: DeepMind Technologies Limited
Inventor: Rémi Bertrand Francis Leblond , Jean-Baptiste Alayrac , Laurent Sifre , Miruna Pîslar , Jean-Baptiste Lespiau , Ioannis Antonoglou , Karen Simonyan , David Silver , Oriol Vinyals
IPC: G06N3/0455
CPC classification number: G06N3/0455
Abstract: A computer-implemented method for generating an output token sequence from an input token sequence. The method combines a look ahead tree search, such as a Monte Carlo tree search, with a sequence-to-sequence neural network system. The sequence-to-sequence neural network system has a policy output defining a next token probability distribution, and may include a value neural network providing a value output to evaluate a sequence. An initial partial output sequence is extended using the look ahead tree search guided by the policy output and, in implementations, the value output, of the sequence-to-sequence neural network system until a complete output sequence is obtained.
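A minimal sketch of the search this abstract combines with a sequence-to-sequence model: a partial output sequence is extended token by token, and each extension is chosen by a small look-ahead search guided by a policy output (next-token distribution) and a value output that evaluates sequences. The toy vocabulary, policy, and value functions below are assumptions standing in for the neural network system.

```python
# Sketch: look-ahead tree search (one PUCT-style step per emitted token)
# guided by stand-in policy and value functions.
import math
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["<eos>", "a", "b", "c"]

def policy(sequence):
    """Stand-in next-token probability distribution over VOCAB."""
    logits = rng.standard_normal(len(VOCAB))
    e = np.exp(logits - logits.max())
    return e / e.sum()

def value(sequence):
    """Stand-in value output used to evaluate a candidate sequence."""
    return sum(VOCAB.index(t) for t in sequence) / (len(sequence) + 1)

def search_step(sequence, num_simulations=16, c_puct=1.0):
    """Pick the next token by simulations balancing value and prior."""
    priors = policy(sequence)
    visits = np.zeros(len(VOCAB))
    totals = np.zeros(len(VOCAB))
    for _ in range(num_simulations):
        explore = c_puct * priors * math.sqrt(visits.sum() + 1) / (1 + visits)
        a = int(np.argmax(totals / (1 + visits) + explore))
        totals[a] += value(sequence + [VOCAB[a]])  # evaluate the extension
        visits[a] += 1
    return VOCAB[int(np.argmax(visits))]           # most-visited token wins

sequence, token = [], None
while token != "<eos>" and len(sequence) < 8:
    token = search_step(sequence)
    sequence.append(token)
print("output token sequence:", sequence)
```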
-
Publication Number: US20230244933A1
Publication Date: 2023-08-03
Application Number: US18103416
Application Date: 2023-01-30
Applicant: DeepMind Technologies Limited
Inventor: Tom Schaul , John Quan , David Silver
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.