TRAINING ACTION SELECTION NEURAL NETWORKS USING APPRENTICESHIP

    Publication No.: US20230023189A1

    Publication Date: 2023-01-26

    Application No.: US17962008

    Application Date: 2022-10-07

    IPC Classification: G06N3/08 G06N3/04

    Abstract: An off-policy reinforcement learning actor-critic neural network system configured to select actions from a continuous action space to be performed by an agent interacting with an environment to perform a task. An observation defines environment state data and reward data. The system has an actor neural network which learns a policy function mapping the state data to action data. A critic neural network learns an action-value (Q) function. A replay buffer stores tuples of the state data, the action data, the reward data, and new state data. The replay buffer also includes demonstration transition data comprising a set of the tuples from a demonstration of the task within the environment. The neural network system is configured to train the actor neural network and the critic neural network off-policy using stored tuples from the replay buffer, comprising tuples both from operation of the system and from the demonstration transition data.
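    The abstract above describes an off-policy actor-critic setup whose replay buffer mixes the agent's own transitions with demonstration transitions. Below is a minimal Python sketch of such a mixed replay buffer; the class name MixedReplayBuffer, the demo_fraction parameter, and the sampling scheme are illustrative assumptions, not details taken from the patent.

    import random
    from collections import deque

    class MixedReplayBuffer:
        """Stores (state, action, reward, next_state) tuples from both the
        agent's own interaction and a pre-recorded demonstration of the task."""

        def __init__(self, capacity, demo_transitions, demo_fraction=0.25):
            self.agent_buffer = deque(maxlen=capacity)
            self.demo_buffer = list(demo_transitions)  # kept for the whole run
            self.demo_fraction = demo_fraction

        def add(self, state, action, reward, next_state):
            # Transitions gathered while the system operates in the environment.
            self.agent_buffer.append((state, action, reward, next_state))

        def sample(self, batch_size):
            # Each minibatch mixes demonstration and agent transitions, so the
            # actor and critic can be trained off-policy on both sources.
            n_demo = min(int(batch_size * self.demo_fraction), len(self.demo_buffer))
            n_agent = min(batch_size - n_demo, len(self.agent_buffer))
            batch = random.sample(self.demo_buffer, n_demo)
            batch += random.sample(list(self.agent_buffer), n_agent)
            random.shuffle(batch)
            return batch

    # Example: pre-load demonstration transitions, then add agent experience.
    demos = [((0.0,), 1.0, 0.5, (0.1,)), ((0.1,), -1.0, 0.2, (0.0,))]
    buffer = MixedReplayBuffer(capacity=10000, demo_transitions=demos)
    buffer.add((0.2,), 0.3, 0.1, (0.25,))
    print(buffer.sample(batch_size=4))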

    MULTI-OBJECTIVE REINFORCEMENT LEARNING USING WEIGHTED POLICY PROJECTION

    Publication No.: US20240185084A1

    Publication Date: 2024-06-06

    Application No.: US18286504

    Application Date: 2022-05-27

    IPC Classification: G06N3/092

    CPC Classification: G06N3/092

    Abstract: Computer-implemented systems and methods for training an action selection policy neural network to select actions to be performed by an agent, so as to control the agent to perform a task. The techniques are able to optimize multiple objectives, one of which may be to stay close to a behavioral policy of a teacher. The behavioral policy of the teacher may be defined by a predetermined dataset of behaviors, and the systems and methods may then learn offline. The described techniques provide a mechanism for explicitly defining a trade-off between the multiple objectives.
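    The abstract above describes optimizing several objectives with an explicit trade-off, one objective possibly being closeness to a teacher's behavioral policy. Below is a minimal Python sketch of a weighted policy-projection loss over discrete action distributions; the KL-based form, the weights w_task and w_teacher, and the function names are illustrative assumptions rather than the patent's own formulation.

    import numpy as np

    def kl(p, q, eps=1e-8):
        # KL divergence between two discrete action distributions.
        return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

    def weighted_projection_loss(new_policy, improved_policy, teacher_policy,
                                 w_task=1.0, w_teacher=0.5):
        # Pull the new policy towards a weighted combination of two targets:
        # the task-improved policy and the teacher's behavioral policy.
        return (w_task * kl(improved_policy, new_policy)
                + w_teacher * kl(teacher_policy, new_policy))

    # Example: raising w_teacher tightens the pull towards the teacher's policy.
    new_p      = np.array([0.25, 0.25, 0.50])
    improved_p = np.array([0.10, 0.10, 0.80])
    teacher_p  = np.array([0.60, 0.20, 0.20])
    print(weighted_projection_loss(new_p, improved_p, teacher_p))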

    TRAINING ACTION SELECTION NEURAL NETWORKS USING AUXILIARY TASKS OF CONTROLLING OBSERVATION EMBEDDINGS

    Publication No.: US20230290133A1

    Publication Date: 2023-09-14

    Application No.: US18016746

    Application Date: 2021-07-27

    IPC Classification: G06V10/82 G06V10/70

    CPC Classification: G06V10/82 G06V10/87

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting actions to be performed by an agent interacting with an environment to accomplish a goal. In one aspect, a method comprises: obtaining an observation characterizing a state of the environment, processing the observation using an embedding model to generate a lower-dimensional embedding of the observation, determining an auxiliary task reward based on a value of a particular dimension of the embedding, determining an overall reward based at least in part on the auxiliary task reward, and determining an update to values of multiple parameters of an action selection neural network based on the overall reward using a reinforcement learning technique.
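    The abstract above describes deriving an auxiliary task reward from one dimension of a lower-dimensional observation embedding and folding it into the overall reward. Below is a minimal Python sketch of that reward computation; the linear embedding stand-in, the aux_weight parameter, and the additive combination are illustrative assumptions, not details taken from the patent.

    import numpy as np

    rng = np.random.default_rng(0)
    OBS_DIM, EMBED_DIM = 64, 8
    W = rng.normal(size=(EMBED_DIM, OBS_DIM))  # stand-in for a learned embedding model

    def embed(observation):
        # Map a high-dimensional observation to a lower-dimensional embedding.
        return W @ observation

    def auxiliary_reward(observation, dim):
        # Auxiliary task reward: the value of one particular embedding dimension,
        # so the agent is rewarded for learning to control that dimension.
        return float(embed(observation)[dim])

    def overall_reward(task_reward, observation, dim, aux_weight=0.1):
        # Combine the task (extrinsic) reward with the auxiliary task reward.
        return task_reward + aux_weight * auxiliary_reward(observation, dim)

    obs = rng.normal(size=OBS_DIM)
    print(overall_reward(task_reward=1.0, observation=obs, dim=3))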

    DATA-EFFICIENT REINFORCEMENT LEARNING FOR CONTINUOUS CONTROL TASKS

    Publication No.: US20190354813A1

    Publication Date: 2019-11-21

    Application No.: US16528260

    Application Date: 2019-07-31

    IPC Classification: G06K9/62 G06N3/08 G06N3/04

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-efficient reinforcement learning. One of the systems is a system for training an actor neural network used to select actions to be performed by an agent that interacts with an environment by receiving observations characterizing states of the environment and, in response to each observation, performing an action selected from a continuous space of possible actions, wherein the actor neural network maps observations to next actions in accordance with values of parameters of the actor neural network, and wherein the system comprises: a plurality of workers, wherein each worker is configured to operate independently of each other worker, and wherein each worker is associated with a respective agent replica that interacts with a respective replica of the environment during the training of the actor neural network.
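    The abstract above describes a plurality of independent workers, each pairing an agent replica with its own environment replica and contributing experience for training. Below is a minimal Python sketch of that worker structure; the toy environment dynamics, the shared list standing in for a replay buffer, and the sequential (rather than parallel) collection loop are illustrative assumptions, not the patent's implementation.

    import random

    class EnvReplica:
        # Toy stand-in for an environment replica with a continuous action space.
        def reset(self):
            self.state = random.random()
            return self.state

        def step(self, action):
            self.state = 0.9 * self.state + 0.1 * action
            return self.state, -abs(self.state)  # reward for driving the state to zero

    class Worker:
        # Each worker operates independently, with its own environment replica.
        def __init__(self, policy, shared_buffer):
            self.env = EnvReplica()
            self.policy = policy
            self.buffer = shared_buffer

        def collect(self, steps):
            state = self.env.reset()
            for _ in range(steps):
                action = self.policy(state)
                next_state, reward = self.env.step(action)
                self.buffer.append((state, action, reward, next_state))
                state = next_state

    def policy(state):
        # Stand-in actor: a proportional controller plus exploration noise.
        return -state + random.gauss(0.0, 0.1)

    shared_buffer = []
    workers = [Worker(policy, shared_buffer) for _ in range(4)]
    for w in workers:          # run sequentially here; in practice workers run in parallel
        w.collect(steps=100)
    print(len(shared_buffer))  # transitions gathered from all environment replicas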