1.
Publication Number: US11977983B2
Publication Date: 2024-05-07
Application Number: US17020248
Filing Date: 2020-09-14
Applicant: DeepMind Technologies Limited
Inventor: Mohammad Gheshlaghi Azar , Meire Fortunato , Bilal Piot , Olivier Claude Pietquin , Jacob Lee Menick , Volodymyr Mnih , Charles Blundell , Remi Munos
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action to be performed by a reinforcement learning agent. The method includes obtaining an observation characterizing a current state of an environment. For each layer parameter of each noisy layer of a neural network, a respective noise value is determined. For each layer parameter of each noisy layer, a noisy current value for the layer parameter is determined from a current value of the layer parameter, a current value of a corresponding noise parameter, and the noise value. A network input including the observation is processed using the neural network in accordance with the noisy current values to generate a network output for the network input. An action is selected from a set of possible actions to be performed by the agent in response to the observation using the network output.
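The noise-injection scheme this abstract describes appears to correspond to the published NoisyNets approach (Fortunato et al., several of whom are listed as inventors): each effective weight is a learned mean plus a learned scale multiplied by freshly sampled noise. Below is a minimal NumPy sketch of that computation; the layer shapes, initialisation constants, independent Gaussian noise, and greedy action selection are illustrative assumptions, not the patent's exact formulation.

```python
# Minimal sketch of a noisy layer, assuming independent Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)

class NoisyLinear:
    """Linear layer whose effective weights are mu + sigma * noise."""

    def __init__(self, n_in, n_out):
        # Current values of the layer parameters (mu) and of the
        # corresponding noise parameters (sigma); constants are assumptions.
        self.w_mu = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_out))
        self.b_mu = np.zeros(n_out)
        self.w_sigma = np.full((n_in, n_out), 0.017)
        self.b_sigma = np.full(n_out, 0.017)

    def __call__(self, x):
        # A respective noise value is sampled for each layer parameter...
        w_eps = rng.normal(size=self.w_mu.shape)
        b_eps = rng.normal(size=self.b_mu.shape)
        # ...and the noisy current value combines the current parameter
        # value, the noise parameter, and the sampled noise value.
        w = self.w_mu + self.w_sigma * w_eps
        b = self.b_mu + self.b_sigma * b_eps
        return x @ w + b

# Selecting an action from the noisy network output (greedy over Q-values).
layer = NoisyLinear(n_in=4, n_out=3)       # 3 possible actions (assumed)
observation = rng.normal(size=(1, 4))      # observation of current state
q_values = layer(observation)              # network output
action = int(np.argmax(q_values))          # action selected in response
```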
2.
Publication Number: US20230083486A1
Publication Date: 2023-03-16
Application Number: US17797886
Filing Date: 2021-02-08
Applicant: DeepMind Technologies Limited
Inventor: Zhaohan Guo , Mohammad Gheshlaghi Azar , Bernardo Avila Pires , Florent Altché , Jean-Bastien François Laurent Grill , Bilal Piot , Remi Munos
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an environment representation neural network of a reinforcement learning system that controls an agent to perform a given task. In one aspect, the method includes: receiving a current observation input and a future observation input; generating, from the future observation input, a future latent representation of the future state of the environment; processing the current observation input using the environment representation neural network to generate a current internal representation of the current state of the environment; generating, from the current internal representation, a predicted future latent representation; evaluating an objective function measuring a difference between the future latent representation and the predicted future latent representation; and determining, based on a determined gradient of the objective function, an update to the current values of the environment representation parameters.
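The training loop this abstract outlines, predicting a future latent from the current internal representation and descending the gradient of their difference, can be illustrated with a toy NumPy example. The single-layer encoders, the squared-error objective, and the hand-derived gradient below are assumptions for illustration only; the patent does not specify these details here.

```python
# Toy sketch of latent-prediction training, under assumed one-layer encoders.
import numpy as np

rng = np.random.default_rng(0)

def encode(params, x):
    """Stand-in encoder: a single tanh layer (an assumption)."""
    return np.tanh(x @ params)

repr_params = rng.normal(size=(8, 4)) * 0.1    # environment representation net
pred_params = rng.normal(size=(4, 4)) * 0.1    # predictor head (hypothetical)
target_params = rng.normal(size=(8, 4)) * 0.1  # future-latent encoder

current_obs = rng.normal(size=(1, 8))          # current observation input
future_obs = rng.normal(size=(1, 8))           # future observation input

# Future latent representation of the future environment state.
future_latent = encode(target_params, future_obs)
# Current internal representation, and the predicted future latent.
internal = encode(repr_params, current_obs)
predicted = internal @ pred_params
# Objective: a squared difference between predicted and actual future latent.
loss = np.mean((predicted - future_latent) ** 2)

# Gradient of the objective w.r.t. the predictor head (chain rule by hand),
# used to update the current parameter values.
grad_pred = internal.T @ (2 * (predicted - future_latent)) / predicted.size
pred_params -= 0.1 * grad_pred
```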
3.
Publication Number: US20210065012A1
Publication Date: 2021-03-04
Application Number: US17020248
Filing Date: 2020-09-14
Applicant: DeepMind Technologies Limited
Inventor: Mohammad Gheshlaghi Azar , Meire Fortunato , Bilal Piot , Olivier Claude Pietquin , Jacob Lee Menick , Volodymyr Mnih , Charles Blundell , Remi Munos
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting an action to be performed by a reinforcement learning agent. The method includes obtaining an observation characterizing a current state of an environment. For each layer parameter of each noisy layer of a neural network, a respective noise value is determined. For each layer parameter of each noisy layer, a noisy current value for the layer parameter is determined from a current value of the layer parameter, a current value of a corresponding noise parameter, and the noise value. A network input including the observation is processed using the neural network in accordance with the noisy current values to generate a network output for the network input. An action is selected from a set of possible actions to be performed by the agent in response to the observation using the network output.
4.
Publication Number: US11604997B2
Publication Date: 2023-03-14
Application Number: US16603307
Filing Date: 2018-06-11
Applicant: DeepMind Technologies Limited
Inventor: Marc Gendron-Bellemare , Mohammad Gheshlaghi Azar , Audrunas Gruslys , Remi Munos
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network. The policy neural network is used to select actions to be performed by an agent that interacts with an environment by receiving an observation characterizing a state of the environment and performing an action from a set of actions in response to the received observation. A trajectory is obtained from a replay memory, and a final update to current values of the policy network parameters is determined for each training observation in the trajectory. The final updates to the current values of the policy network parameters are determined from selected action updates and leave-one-out updates.
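Combining a selected-action update with leave-one-out updates resembles the β-LOO policy-gradient estimator from the same authors' Reactor paper; whether the patent's "final update" is exactly that estimator is an assumption here. A toy NumPy sketch for a softmax policy, with the critic values, return, and β weighting chosen purely for illustration:

```python
# Sketch of a selected-action + leave-one-out update for a softmax policy.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

n_actions = 3
logits = rng.normal(size=n_actions)   # stand-in policy network parameters
pi = softmax(logits)

a = 1                                 # action performed in the trajectory
ret = 1.7                             # observed return for that action
q = np.array([0.5, 1.2, 0.3])         # critic estimates for all actions
beta = 1.0 / pi[a]                    # e.g. an importance weight (capped in practice)

# Jacobian of a softmax policy: J[i, j] = pi[i] * (delta_ij - pi[j]).
jac = np.diag(pi) - np.outer(pi, pi)

# Selected-action term: uses the sampled return for the performed action.
selected = beta * (ret - q[a]) * jac[a]
# Leave-one-out term: uses critic estimates for every action.
loo = q @ jac
grad = selected + loo                 # final update direction

logits += 0.1 * grad                  # gradient-ascent step
```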
5.
Publication Number: US20210383225A1
Publication Date: 2021-12-09
Application Number: US17338777
Filing Date: 2021-06-04
Applicant: DeepMind Technologies Limited
Inventor: Jean-Bastien François Laurent Grill , Florian Strub , Florent Altché , Corentin Tallec , Pierre Richemond , Bernardo Avila Pires , Zhaohan Guo , Mohammad Gheshlaghi Azar , Bilal Piot , Remi Munos , Michal Valko
Abstract: A computer-implemented method of training a neural network. The method comprises processing a first transformed view of a training data item, e.g. an image, with a target neural network to generate a target output, processing a second transformed view of the training data item, e.g. image, with an online neural network to generate a prediction of the target output, updating parameters of the online neural network to minimize an error between the prediction of the target output and the target output, and updating parameters of the target neural network based on the parameters of the online neural network. The method can effectively train an encoder neural network without using labelled training data items, and without using a contrastive loss, i.e. without needing “negative examples” which comprise transformed views of different data items.
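This abstract describes what appears to be the BYOL training scheme (Grill et al., 2020, matching the listed inventors): an online network predicts a target network's output for a different transformed view, and the target network is updated from the online network rather than by gradient descent. The sketch below is a deliberately tiny NumPy analogue; the one-layer encoders, additive-noise "augmentation", hand-derived gradient, and exponential-moving-average target update are illustrative assumptions.

```python
# Tiny sketch of online/target bootstrap training, under assumed encoders.
import numpy as np

rng = np.random.default_rng(0)

def forward(params, x):
    """Stand-in encoder: a single tanh layer (an assumption)."""
    return np.tanh(x @ params)

online = rng.normal(size=(8, 4)) * 0.1     # online network parameters
target = online.copy()                     # target network parameters
predictor = rng.normal(size=(4, 4)) * 0.1  # predictor head on the online net

def augment(x):
    # Hypothetical "transformed view": additive noise stands in for
    # image augmentations such as cropping or colour distortion.
    return x + 0.1 * rng.normal(size=x.shape)

item = rng.normal(size=(1, 8))             # one (unlabelled) training data item
view1, view2 = augment(item), augment(item)

target_out = forward(target, view1)        # target output (no gradient taken)
prediction = forward(online, view2) @ predictor

# Error between the prediction of the target output and the target output.
loss = np.mean((prediction - target_out) ** 2)

# Update the online parameters (here only the predictor head, for brevity)...
grad = forward(online, view2).T @ (2 * (prediction - target_out)) / prediction.size
predictor -= 0.1 * grad
# ...then update the target network from the online network, here as an
# exponential moving average; no contrastive loss or negatives are needed.
tau = 0.99
target = tau * target + (1 - tau) * online
```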
6.
Publication Number: US20210110271A1
Publication Date: 2021-04-15
Application Number: US16603307
Filing Date: 2018-06-11
Applicant: DeepMind Technologies Limited
Inventor: Marc Gendron-Bellemare , Mohammad Gheshlaghi Azar , Audrunas Gruslys , Remi Munos
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network. The policy neural network is used to select actions to be performed by an agent that interacts with an environment by receiving an observation characterizing a state of the environment and performing an action from a set of actions in response to the received observation. A trajectory is obtained from a replay memory, and a final update to current values of the policy network parameters is determined for each training observation in the trajectory. The final updates to the current values of the policy network parameters are determined from selected action updates and leave-one-out updates.