-
Publication No.: US20200327405A1
Publication Date: 2020-10-15
Application No.: US16303501
Filing Date: 2017-05-18
Applicant: DEEPMIND TECHNOLOGIES LIMITED
Inventor: Marc Gendron-Bellemare, Remi Munos, Srinivasan Sriram
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining data identifying (i) a first observation characterizing a first state of the environment, (ii) an action performed by the agent in response to the first observation, and (iii) an actual reward received resulting from the agent performing the action in response to the first observation; determining a pseudo-count for the first observation; determining an exploration reward bonus that incentivizes the agent to explore the environment from the pseudo-count for the first observation; generating a combined reward from the actual reward and the exploration reward bonus; and adjusting current values of the parameters of the neural network using the combined reward.
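The steps in the abstract (determine a pseudo-count, derive an exploration bonus from it, combine it with the actual reward) can be sketched as follows. This is a minimal illustration, not the patented implementation: the pseudo-count formula shown is the density-model derivation from the count-based exploration literature, and the `1/sqrt(N)` bonus form, the `beta` coefficient, and the `0.01` smoothing constant are illustrative assumptions that the abstract itself does not specify.

```python
import math


def pseudo_count(rho: float, rho_prime: float) -> float:
    """Pseudo-count for an observation x from a density model.

    rho is the model's probability of x before training on x, and
    rho_prime is its probability of x after one update on x. This
    derivation is an assumption drawn from the count-based exploration
    literature; the abstract only says a pseudo-count is determined.
    """
    return rho * (1.0 - rho_prime) / (rho_prime - rho)


def exploration_bonus(count: float, beta: float = 0.05) -> float:
    """Bonus that decays as the pseudo-count grows (illustrative form)."""
    return beta / math.sqrt(count + 0.01)


def combined_reward(actual_reward: float, count: float, beta: float = 0.05) -> float:
    """Combined reward = actual environment reward + exploration bonus."""
    return actual_reward + exploration_bonus(count, beta)
```

As a sanity check on the pseudo-count formula: if the density model is a plain empirical frequency model and x has been seen n times out of t observations, then rho = n/t and rho_prime = (n+1)/(t+1), and the pseudo-count recovers exactly n.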
-
Publication No.: US11727264B2
Publication Date: 2023-08-15
Application No.: US16303501
Filing Date: 2017-05-18
Applicant: DEEPMIND TECHNOLOGIES LIMITED
Inventor: Marc Gendron-Bellemare, Remi Munos, Srinivasan Sriram
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining data identifying (i) a first observation characterizing a first state of the environment, (ii) an action performed by the agent in response to the first observation, and (iii) an actual reward received resulting from the agent performing the action in response to the first observation; determining a pseudo-count for the first observation; determining an exploration reward bonus that incentivizes the agent to explore the environment from the pseudo-count for the first observation; generating a combined reward from the actual reward and the exploration reward bonus; and adjusting current values of the parameters of the neural network using the combined reward.
-