SCALABLE AND COMPRESSIVE NEURAL NETWORK DATA STORAGE SYSTEM

    Publication Number: US20210150314A1

    Publication Date: 2021-05-20

    Application Number: US17102318

    Filing Date: 2020-11-23

    Abstract: A system for compressed data storage using a neural network. The system comprises a memory comprising a plurality of memory locations configured to store data; a query neural network configured to process a representation of an input data item to generate a query; an immutable key data store comprising key data for indexing the plurality of memory locations; an addressing system configured to process the key data and the query to generate a weighting associated with the plurality of memory locations; a memory read system configured to generate output memory data from the memory based upon the generated weighting associated with the plurality of memory locations and the data stored at the plurality of memory locations; and a memory write system configured to write received write data to the memory based upon the generated weighting associated with the plurality of memory locations.
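
    A minimal sketch of the read/write path outlined above, in plain NumPy, is given below. The dot-product similarity, the softmax normalisation and the additive write rule are illustrative assumptions rather than the claimed addressing or write method, and the random query stands in for the output of the query neural network.

import numpy as np

# Minimal sketch of the memory, key store, addressing, read and write
# components named in the abstract. The similarity measure, softmax
# weighting and additive write rule are assumptions for illustration.

rng = np.random.default_rng(0)
num_slots, key_dim, word_dim = 16, 8, 32

keys = rng.normal(size=(num_slots, key_dim))   # immutable key data store
memory = np.zeros((num_slots, word_dim))       # memory locations storing data


def address(query):
    """Addressing system: key data + query -> weighting over memory locations."""
    scores = keys @ query
    scores -= scores.max()                     # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()


def read(query):
    """Memory read system: weighted combination of the stored data."""
    return address(query) @ memory


def write(query, write_data):
    """Memory write system: distribute the write data according to the weighting."""
    global memory
    memory = memory + np.outer(address(query), write_data)


q = rng.normal(size=key_dim)                   # stand-in for the query network output
write(q, rng.normal(size=word_dim))
print(read(q).shape)                           # (32,)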

    Continuous control with deep reinforcement learning

    Publication Number: US10776692B2

    Publication Date: 2020-09-15

    Application Number: US15217758

    Filing Date: 2016-07-22

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an actor neural network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining a minibatch of experience tuples; and updating current values of the parameters of the actor neural network, comprising: for each experience tuple in the minibatch: processing the training observation and the training action in the experience tuple using a critic neural network to determine a neural network output for the experience tuple, and determining a target neural network output for the experience tuple; updating current values of the parameters of the critic neural network using errors between the target neural network outputs and the neural network outputs; and updating the current values of the parameters of the actor neural network using the critic neural network.
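
    The sketch below works through the critic target and error computation described in the abstract, using tiny linear stand-ins for the actor and critic; the discount factor, the separate target networks used for the bootstrap and the linear parameterisation are assumptions made for illustration, not details taken from the claims.

import numpy as np

# One critic update step for a single experience tuple (s, a, r, s').
# The actor, critic and their target copies are toy linear functions;
# shapes, learning rate and discount factor are illustrative assumptions.

rng = np.random.default_rng(0)
obs_dim, act_dim, gamma, lr = 4, 2, 0.99, 1e-3

W_actor_target = rng.normal(size=(act_dim, obs_dim)) * 0.1   # target actor
w_critic = rng.normal(size=obs_dim + act_dim) * 0.1          # critic weights
w_critic_target = w_critic.copy()                            # target critic weights


def critic(obs, act, w):
    """Q(s, a) as a linear function of the concatenated observation and action."""
    return np.concatenate([obs, act]) @ w


def target_actor(obs):
    """Target policy mu'(s') used when forming the bootstrapped target."""
    return np.tanh(W_actor_target @ obs)


# A single experience tuple from the minibatch.
s, a = rng.normal(size=obs_dim), rng.normal(size=act_dim)
r, s_next = 1.0, rng.normal(size=obs_dim)

q = critic(s, a, w_critic)                                             # critic output
y = r + gamma * critic(s_next, target_actor(s_next), w_critic_target)  # target output
td_error = y - q                                                       # error driving the update

# Gradient-descent step on 0.5 * td_error**2 with y treated as a fixed target;
# the actor would then be updated using the critic's evaluation of its actions.
w_critic = w_critic + lr * td_error * np.concatenate([s, a])
print(float(td_error))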

    CONTROLLING AGENTS OVER LONG TIME SCALES USING TEMPORAL VALUE TRANSPORT

    Publication Number: US20200117956A1

    Publication Date: 2020-04-16

    Application Number: US16601324

    Filing Date: 2019-10-14

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network system used to control an agent interacting with an environment to perform a specified task. One of the methods includes causing the agent to perform a task episode, over a sequence of time steps, in which the agent attempts to perform the specified task; for each of one or more particular time steps in the sequence: generating a modified reward for the particular time step from (i) the actual reward at the particular time step and (ii) value predictions at one or more time steps that are more than a threshold number of time steps after the particular time step in the sequence; and training, through reinforcement learning, the neural network system using at least the modified rewards for the particular time steps.
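
    One way the modified rewards could be formed is sketched below: the actual reward at a chosen step is combined with a value prediction taken from more than a threshold number of steps later in the episode. The fixed offset into the future, the scaling coefficient and the choice of which steps are modified are hypothetical details, not specified by the abstract.

import numpy as np

# Transport value predictions from the distant future back to earlier
# steps to form modified rewards. The selected steps, the offset and the
# scaling coefficient alpha are illustrative assumptions.

rng = np.random.default_rng(0)
num_steps, threshold, alpha = 20, 5, 0.9

rewards = rng.normal(size=num_steps)      # actual rewards from the task episode
values = rng.normal(size=num_steps)       # value predictions at each time step

modified = rewards.copy()
particular_steps = [2, 7]                 # hypothetical "particular time steps"
for t in particular_steps:
    future = t + threshold + 3            # strictly more than `threshold` steps later
    if future < num_steps:
        modified[t] = rewards[t] + alpha * values[future]

# The reinforcement learning update would then use `modified` in place of `rewards`.
print(modified[:10])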

    DATA-EFFICIENT REINFORCEMENT LEARNING FOR CONTINUOUS CONTROL TASKS

    Publication Number: US20190354813A1

    Publication Date: 2019-11-21

    Application Number: US16528260

    Filing Date: 2019-07-31

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-efficient reinforcement learning. One of the systems is a system for training an actor neural network used to select actions to be performed by an agent that interacts with an environment by receiving observations characterizing states of the environment and, in response to each observation, performing an action selected from a continuous space of possible actions, wherein the actor neural network maps observations to next actions in accordance with values of parameters of the actor neural network, and wherein the system comprises: a plurality of workers, wherein each worker is configured to operate independently of each other worker, wherein each worker is associated with a respective agent replica that interacts with a respective replica of the environment during the training of the actor neural network.
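
    A toy sketch of the worker structure described above follows: each worker owns its own environment replica and its own copy of the actor parameters, and collects experience independently. The toy environment dynamics, the shared replay buffer and the sequential driver loop are illustrative assumptions; in practice the workers would run in parallel.

import numpy as np

# Each worker pairs an agent (actor) replica with an environment replica
# and gathers transitions on its own; the transitions are pooled here in
# a simple shared replay buffer. All concrete details are assumptions.

rng = np.random.default_rng(0)
obs_dim, act_dim = 3, 2


class EnvReplica:
    """Toy continuous-control environment replica."""

    def reset(self):
        self.state = rng.normal(size=obs_dim)
        return self.state

    def step(self, action):
        self.state = self.state + 0.1 * rng.normal(size=obs_dim)
        reward = -float(np.sum(action ** 2))
        return self.state, reward


class Worker:
    """Independent worker holding an actor replica and an environment replica."""

    def __init__(self, actor_params):
        self.env = EnvReplica()
        self.actor_params = actor_params.copy()

    def collect(self, steps):
        obs = self.env.reset()
        transitions = []
        for _ in range(steps):
            action = np.tanh(self.actor_params @ obs)   # continuous action
            next_obs, reward = self.env.step(action)
            transitions.append((obs, action, reward, next_obs))
            obs = next_obs
        return transitions


shared_actor_params = rng.normal(size=(act_dim, obs_dim)) * 0.1
replay_buffer = []
workers = [Worker(shared_actor_params) for _ in range(4)]
for worker in workers:                    # run sequentially here; independent in practice
    replay_buffer.extend(worker.collect(steps=10))
print(len(replay_buffer))                 # 40 transitions available for training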
