-
Publication No.: US11615310B2
Publication Date: 2023-03-28
Application No.: US16302592
Application Date: 2017-05-19
Applicant: DEEPMIND TECHNOLOGIES LIMITED
Inventor: Misha Man Ray Denil, Tom Schaul, Marcin Andrychowicz, Joao Ferdinando Gomes de Freitas, Sergio Gomez Colmenarejo, Matthew William Hoffman, David Benjamin Pfau
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for training machine learning models. One method includes obtaining a machine learning model, wherein the machine learning model comprises one or more model parameters, and the machine learning model is trained using gradient descent techniques to optimize an objective function; determining an update rule for the model parameters using a recurrent neural network (RNN); and applying a determined update rule for a final time step in a sequence of multiple time steps to the model parameters.
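The idea in this abstract can be illustrated with a toy sketch: rather than a fixed rule such as vanilla SGD, a recurrent network maps each gradient (plus its own hidden state) to a parameter update at every time step. The linear recurrence below is a hypothetical stand-in for a trained RNN, not the patented model.

```python
def rnn_update_rule(grad, hidden, lr=0.1, decay=0.8):
    """One time step of the stand-in 'RNN': the hidden state summarizes
    gradient history, and the emitted update is read off that state."""
    hidden = decay * hidden + grad
    return -lr * hidden, hidden

def train(grad_fn, theta, steps=100):
    """Apply the determined update rule over a sequence of time steps
    to the model parameter, as in the claimed method."""
    hidden = 0.0
    for _ in range(steps):
        update, hidden = rnn_update_rule(grad_fn(theta), hidden)
        theta += update
    return theta

# Optimize the objective f(x) = (x - 3)^2 via its gradient 2(x - 3).
theta = train(lambda x: 2.0 * (x - 3.0), theta=0.0)
```

With these hand-set coefficients the "learned" rule behaves like momentum; in the patented setting the RNN's parameters would themselves be trained so that the emitted updates optimize the objective quickly.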
-
Publication No.: US11568250B2
Publication Date: 2023-01-31
Application No.: US16866365
Application Date: 2020-05-04
Applicant: DeepMind Technologies Limited
Inventor: Tom Schaul, John Quan, David Silver
IPC: G06N3/08
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
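A minimal sketch of the replay memory described in this abstract: each transition carries an expected-learning-progress score (here its absolute TD error, one common choice), and sampling is skewed toward higher-scoring items. The class and exponent below are illustrative, not the claimed implementation.

```python
import random

class PrioritizedReplay:
    def __init__(self, alpha=0.6):
        self.data, self.priorities = [], []
        self.alpha = alpha  # how strongly scores skew sampling

    def add(self, transition, learning_progress):
        self.data.append(transition)
        self.priorities.append(abs(learning_progress) ** self.alpha)

    def sample(self):
        # Each item's probability is proportional to its priority.
        return random.choices(self.data, weights=self.priorities, k=1)[0]

memory = PrioritizedReplay()
memory.add(("s0", "a0", 0.0, "s1"), learning_progress=0.01)
memory.add(("s1", "a1", 1.0, "s2"), learning_progress=5.0)
# The high-progress transition dominates the sample.
counts = sum(memory.sample() == ("s1", "a1", 1.0, "s2") for _ in range(1000))
```

Training then proceeds on each sampled transition, and its priority is refreshed from the new TD error, so the memory keeps steering updates toward transitions the network still learns from.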
-
Publication No.: US20180260707A1
Publication Date: 2018-09-13
Application No.: US15977891
Application Date: 2018-05-11
Applicant: DeepMind Technologies Limited
Inventor: Tom Schaul, John Quan, David Silver
IPC: G06N3/08
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
-
Publication No.: US12271823B2
Publication Date: 2025-04-08
Application No.: US18180754
Application Date: 2023-03-08
Applicant: DeepMind Technologies Limited
Inventor: Misha Man Ray Denil, Tom Schaul, Marcin Andrychowicz, Joao Ferdinando Gomes de Freitas, Sergio Gomez Colmenarejo, Matthew William Hoffman, David Benjamin Pfau
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for training machine learning models. One method includes obtaining a machine learning model, wherein the machine learning model comprises one or more model parameters, and the machine learning model is trained using gradient descent techniques to optimize an objective function; determining an update rule for the model parameters using a recurrent neural network (RNN); and applying a determined update rule for a final time step in a sequence of multiple time steps to the model parameters.
-
Publication No.: US12154029B2
Publication Date: 2024-11-26
Application No.: US16268414
Application Date: 2019-02-05
Applicant: DeepMind Technologies Limited
Inventor: Tom Schaul, Matteo Hessel, Hado Philip van Hasselt, Daniel J. Mankowitz
Abstract: A method of training an action selection neural network for controlling an agent interacting with an environment to perform different tasks is described. The method includes obtaining a first trajectory of transitions generated while the agent was performing an episode of the first task from multiple tasks; and training the action selection neural network on the first trajectory to adjust the control policies for the multiple tasks. The training includes, for each transition in the first trajectory: generating respective policy outputs for the initial observation in the transition for each task in a subset of tasks that includes the first task and one other task; generating respective target policy outputs for each task using the reward in the transition, and determining an update to the current parameter values based on, for each task, a gradient of a loss between the policy output and the target policy output for the task.
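A tabular sketch of the training step described above: one trajectory, collected while performing the first task, updates the control policies of several tasks at once. Q-tables stand in for the action selection network, and the per-task reward functions are hypothetical.

```python
def train_on_trajectory(q_tables, trajectory, task_rewards, actions,
                        gamma=0.9, lr=0.5):
    for state, action, next_state in trajectory:
        for task, q in q_tables.items():
            # Target for this task, built from the task's own reward
            # applied to the shared transition.
            reward = task_rewards[task](state, action)
            best_next = max(q.get((next_state, a), 0.0) for a in actions)
            target = reward + gamma * best_next
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + lr * (target - old)

# One transition gathered on the "reach" task also trains "avoid".
q_tables = {"reach": {}, "avoid": {}}
task_rewards = {"reach": lambda s, a: 1.0 if a == "go" else 0.0,
                "avoid": lambda s, a: -1.0 if a == "go" else 0.0}
train_on_trajectory(q_tables, [("s0", "go", "s1")], task_rewards,
                    actions=("go", "stay"))
```

The point mirrored from the abstract is that a single episode's experience yields a separate target per task, so every policy in the subset is adjusted from the same data.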
-
Publication No.: US12141677B2
Publication Date: 2024-11-12
Application No.: US16911992
Application Date: 2020-06-25
Applicant: DeepMind Technologies Limited
Inventor: David Silver, Tom Schaul, Matteo Hessel, Hado Philip van Hasselt
IPC: G06N3/045, G05B13/02, G06N3/006, G06N3/044, G06N3/047, G06N3/08, G06N3/10, G06T1/20, G06N3/084
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for prediction of an outcome related to an environment. In one aspect, a system comprises a state representation neural network that is configured to: receive an observation characterizing a state of an environment being interacted with by an agent and process the observation to generate an internal state representation of the environment state; a prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a predicted subsequent state representation of a subsequent state of the environment and a predicted reward for the subsequent state; and a value prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a value prediction.
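The three-network structure in this abstract can be sketched with tiny hand-set linear maps standing in for the learned networks (all weights below are hypothetical): an encoder produces an internal state, a prediction network rolls it forward with a reward, and a value network caps the rollout.

```python
def state_representation(observation):
    """Maps a raw observation to an internal state representation."""
    return [2.0 * x for x in observation]

def prediction(internal_state):
    """Predicts the next internal state and the reward for that state."""
    next_state = [0.5 * s + 1.0 for s in internal_state]
    predicted_reward = sum(next_state) * 0.1
    return next_state, predicted_reward

def value_prediction(internal_state):
    """Predicts the value (expected return) from an internal state."""
    return sum(internal_state)

# Roll the model forward to accumulate a multi-step outcome prediction.
state = state_representation([1.0, -1.0])
total, discount = 0.0, 1.0
for _ in range(3):
    state, reward = prediction(state)
    total += discount * reward
    discount *= 0.9
total += discount * value_prediction(state)
```

The rollout happens entirely in the internal state space; the environment is never queried after the initial observation, which is what makes the combined system a learned model for outcome prediction.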
-
Publication No.: US20230244933A1
Publication Date: 2023-08-03
Application No.: US18103416
Application Date: 2023-01-30
Applicant: DeepMind Technologies Limited
Inventor: Tom Schaul, John Quan, David Silver
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
-
Publication No.: US20210089908A1
Publication Date: 2021-03-25
Application No.: US17032562
Application Date: 2020-09-25
Applicant: DeepMind Technologies Limited
Inventor: Tom Schaul, Diana Luiza Borsa, Fengning Ding, David Szepesvari, Georg Ostrovski, Simon Osindero, William Clinton Dabney
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for controlling an agent. One of the methods includes sampling a behavior modulation in accordance with a current probability distribution; for each of one or more time steps: processing an input comprising an observation characterizing a current state of the environment at the time step using an action selection neural network to generate a respective action score for each action in a set of possible actions that can be performed by the agent; modifying the action scores using the sampled behavior modulation; and selecting the action to be performed by the agent at the time step based on the modified action scores; determining a fitness measure corresponding to the sampled behavior modulation; and updating the current probability distribution over the set of possible behavior modulations using the fitness measure corresponding to the behavior modulation.
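A sketch of the control loop in this abstract. The "behaviour modulation" here is a hypothetical softmax temperature: it is sampled from a distribution, used to modify the action scores, scored with a fitness measure, and the sampling distribution is then updated with that fitness. The update rule below is an illustrative stand-in.

```python
import math
import random

MODULATIONS = (0.05, 1.0)                    # candidate temperatures
probs = {m: 0.5 for m in MODULATIONS}

def select_action(action_scores, temperature):
    """Modify the raw action scores with the sampled modulation, then
    sample an action from the resulting softmax."""
    logits = {a: s / temperature for a, s in action_scores.items()}
    top = max(logits.values())
    weights = [math.exp(v - top) for v in logits.values()]
    return random.choices(list(logits), weights=weights, k=1)[0]

def update_distribution(modulation, fitness, step=0.1):
    """Shift probability mass toward modulations with higher fitness."""
    probs[modulation] += step * fitness
    total = sum(probs.values())
    for m in probs:
        probs[m] /= total

modulation = random.choices(MODULATIONS,
                            weights=[probs[m] for m in MODULATIONS])[0]
action = select_action({"left": 1.0, "right": 0.0}, temperature=modulation)
update_distribution(modulation, fitness=1.0)
```

Over many episodes this loop concentrates probability on whichever modulation yields the best fitness, so exploration behaviour adapts without changing the action selection network itself.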
-
Publication No.: US20190259051A1
Publication Date: 2019-08-22
Application No.: US16403314
Application Date: 2019-05-03
Applicant: DeepMind Technologies Limited
Inventor: David Silver, Tom Schaul, Matteo Hessel, Hado Philip van Hasselt
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for prediction of an outcome related to an environment. In one aspect, a system comprises a state representation neural network that is configured to: receive an observation characterizing a state of an environment being interacted with by an agent and process the observation to generate an internal state representation of the environment state; a prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a predicted subsequent state representation of a subsequent state of the environment and a predicted reward for the subsequent state; and a value prediction neural network that is configured to receive a current internal state representation of a current environment state and process the current internal state representation to generate a value prediction.
-
Publication No.: US20190258938A1
Publication Date: 2019-08-22
Application No.: US16403385
Application Date: 2019-05-03
Applicant: DeepMind Technologies Limited
Inventor: Volodymyr Mnih, Wojciech Czarnecki, Maxwell Elliot Jaderberg, Tom Schaul, David Silver, Koray Kavukcuoglu
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a reinforcement learning system. The method includes: training an action selection policy neural network, and during the training of the action selection neural network, training one or more auxiliary control neural networks and a reward prediction neural network. Each of the auxiliary control neural networks is configured to receive a respective intermediate output generated by the action selection policy neural network and generate a policy output for a corresponding auxiliary control task. The reward prediction neural network is configured to receive one or more intermediate outputs generated by the action selection policy neural network and generate a corresponding predicted reward. Training each of the auxiliary control neural networks and the reward prediction neural network comprises adjusting values of the respective auxiliary control parameters, reward prediction parameters, and the action selection policy network parameters.
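A sketch of the joint objective described above: auxiliary control and reward prediction heads read intermediate outputs ("features") of the action selection policy network, and all parameter groups are adjusted against the combined loss. All weights, head definitions, and targets below are hypothetical stand-ins for the trained networks, and numeric differentiation stands in for backpropagation.

```python
def combined_loss(obs, target_scores, target_aux, target_reward, params):
    w_enc, w_pi, w_aux, w_rp = params
    features = [w_enc * x for x in obs]            # shared intermediate output
    policy = [w_pi * f for f in features]          # action selection head
    aux_out = w_aux * sum(features)                # auxiliary control head
    reward_out = w_rp * sum(features)              # reward prediction head
    loss_policy = sum((p - t) ** 2 for p, t in zip(policy, target_scores))
    loss_aux = (aux_out - target_aux) ** 2
    loss_rp = (reward_out - target_reward) ** 2
    return loss_policy + loss_aux + loss_rp        # all heads share the encoder

def numeric_gradient(f, params, eps=1e-5):
    grads = []
    for i in range(len(params)):
        hi = list(params); hi[i] += eps
        lo = list(params); lo[i] -= eps
        grads.append((f(hi) - f(lo)) / (2 * eps))
    return grads

loss_fn = lambda p: combined_loss([1.0, 2.0], [0.5, 1.0], 0.2, 0.1, p)
params = [0.5, 0.5, 0.5, 0.5]
step = [p - 0.01 * g for p, g in zip(params, numeric_gradient(loss_fn, params))]
```

Because every head's loss flows through the shared encoder weight, the auxiliary and reward-prediction tasks shape the same intermediate representation the policy uses, which is the mechanism the abstract describes.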