Reinforcement learning for active sequence processing

    Publication Number: US12175737B2

    Publication Date: 2024-12-24

    Application Number: US17773789

    Filing Date: 2020-11-13

    Abstract: A system that is configured to receive a sequence of task inputs and to perform a machine learning task is described. The system includes a reinforcement learning (RL) neural network and a task neural network. The RL neural network is configured to: generate, for each task input of the sequence of task inputs, a respective decision that determines whether to encode the task input or to skip the task input, and provide the respective decision of each task input to the task neural network. The task neural network is configured to: receive the sequence of task inputs, receive, from the RL neural network, for each task input of the sequence of task inputs, a respective decision that determines whether to encode the task input or to skip the task input, process each of the un-skipped task inputs in the sequence of task inputs to generate a respective accumulated feature for the un-skipped task input, wherein the respective accumulated feature characterizes features of the un-skipped task input and of previous un-skipped task inputs in the sequence, and generate a machine learning task output for the machine learning task based on the last accumulated feature generated for the last un-skipped task input in the sequence.
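The encode-or-skip mechanism in the abstract can be illustrated with a minimal sketch. This is not the filing's actual neural-network method: the salience-threshold policy and the squared-value "feature" below are hypothetical stand-ins for the RL network's decisions and the task network's encoder, chosen only to show the control flow (skipped inputs contribute nothing; the task output comes from the last accumulated feature).

```python
def skip_or_encode_policy(task_input, threshold=0.5):
    # Hypothetical stand-in for the RL neural network's per-input decision:
    # encode the input when its (toy) salience exceeds a threshold.
    return abs(task_input) >= threshold  # True = encode, False = skip


def process_sequence(task_inputs, policy=skip_or_encode_policy):
    """Accumulate features over un-skipped inputs only, as in the abstract."""
    accumulated = None
    for x in task_inputs:
        if policy(x):  # decision supplied by the (here: hand-coded) policy
            feature = x * x  # toy per-input feature extractor
            accumulated = feature if accumulated is None else accumulated + feature
        # skipped inputs incur no encoding cost and leave the state unchanged
    # The task output is based on the LAST accumulated feature, which
    # characterizes all un-skipped inputs seen so far.
    return accumulated


print(process_sequence([0.1, 1.0, 0.2, 2.0]))  # encodes 1.0 and 2.0 only
```

In a trained system the skip decisions would themselves be learned with RL, trading task accuracy against the cost of encoding each input.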

    REINFORCEMENT LEARNING WITH ADAPTIVE RETURN COMPUTATION SCHEMES

    Publication Number: US20230059004A1

    Publication Date: 2023-02-23

    Application Number: US17797878

    Filing Date: 2021-02-08

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reinforcement learning with adaptive return computation schemes. In one aspect, a method includes: maintaining data specifying a policy for selecting between multiple different return computation schemes, each return computation scheme assigning a different importance to exploring the environment while performing an episode of a task; selecting, using the policy, a return computation scheme from the multiple different return computation schemes; controlling an agent to perform the episode of the task to maximize a return computed according to the selected return computation scheme; identifying rewards that were generated as a result of the agent performing the episode of the task; and updating, using the identified rewards, the policy for selecting between multiple different return computation schemes.
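The adaptive scheme-selection loop in the abstract can be sketched as a simple bandit over candidate return computation schemes. Everything below is illustrative, not the filing's method: the `(discount, exploration_bonus)` scheme parameterization, the epsilon-greedy selector, and the running-mean update are assumed stand-ins for "maintaining a policy over schemes" and "updating it with identified rewards".

```python
import random

# Each scheme here is a (discount, exploration_bonus) pair; the values are
# illustrative, with higher bonuses assigning more importance to exploration.
SCHEMES = [(0.99, 0.0), (0.997, 0.1), (0.999, 0.5)]


class SchemeSelector:
    """Epsilon-greedy bandit over return computation schemes: a minimal
    stand-in for the adaptive selection policy described in the abstract."""

    def __init__(self, n_schemes, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_schemes
        self.values = [0.0] * n_schemes  # running mean of episode returns

    def select(self):
        # Occasionally try a random scheme; otherwise exploit the best so far.
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))
        return max(range(len(self.counts)), key=lambda i: self.values[i])

    def update(self, scheme_idx, episode_return):
        # Incremental running-mean update from the identified rewards.
        self.counts[scheme_idx] += 1
        n = self.counts[scheme_idx]
        self.values[scheme_idx] += (episode_return - self.values[scheme_idx]) / n


def compute_return(rewards, discount):
    """Discounted return of one episode under the selected scheme."""
    g = 0.0
    for r in reversed(rewards):
        g = r + discount * g
    return g
```

One pass of the loop would then be: `i = selector.select()`, run an episode maximizing the return under `SCHEMES[i]`, and call `selector.update(i, episode_return)`.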

    REINFORCEMENT LEARNING FOR ACTIVE SEQUENCE PROCESSING

    Publication Number: US20250148774A1

    Publication Date: 2025-05-08

    Application Number: US18953004

    Filing Date: 2024-11-19

    Abstract: A system that is configured to receive a sequence of task inputs and to perform a machine learning task is described. An RL neural network is configured to: generate, for each task input of the sequence, a respective decision that determines whether to encode the task input or to skip the task input, and provide the respective decision of each task input to the task neural network. The task neural network is configured to: receive the sequence of task inputs, receive, from the RL neural network, for each task input of the sequence, a respective decision, process each of the un-skipped task inputs in the sequence of task inputs to generate a respective accumulated feature for the un-skipped task input, and generate a machine learning task output for the machine learning task based on the last accumulated feature generated for the last un-skipped task input in the sequence.
