GENERATING AUDIO USING NEURAL NETWORKS
    Invention Application

    Publication Number: US20190251987A1

    Publication Date: 2019-08-15

    Application Number: US16390549

    Filing Date: 2019-04-22

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating an output sequence of audio data that comprises a respective audio sample at each of a plurality of time steps. One of the methods includes, for each of the time steps: providing a current sequence of audio data as input to a convolutional subnetwork, wherein the current sequence comprises the respective audio sample at each time step that precedes the time step in the output sequence, and wherein the convolutional subnetwork is configured to process the current sequence of audio data to generate an alternative representation for the time step; and providing the alternative representation for the time step as input to an output layer, wherein the output layer is configured to: process the alternative representation to generate an output that defines a score distribution over a plurality of possible audio samples for the time step.
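    As a rough illustration of the pipeline this abstract describes (a convolutional subnetwork that turns the already-generated samples into an alternative representation, and an output layer that scores the possible next samples), here is a minimal PyTorch sketch. It assumes 8-bit (256-way) quantized audio and a small dilated causal convolution stack; the class names, layer sizes, and sampling loop are illustrative assumptions, not details taken from the patent.

```python
# Minimal autoregressive audio-generation sketch (illustrative only).
# Assumes 256-way quantized samples and a small dilated causal convolutional
# stack standing in for the "convolutional subnetwork" of the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConvStack(nn.Module):
    """Dilated causal convolutions producing an 'alternative representation'."""
    def __init__(self, channels=64, num_classes=256, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.embed = nn.Embedding(num_classes, channels)
        self.convs = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size=2, dilation=d)
            for d in dilations
        )
        self.dilations = dilations

    def forward(self, samples):                       # samples: (batch, time) int64
        x = self.embed(samples).transpose(1, 2)       # (batch, channels, time)
        for conv, d in zip(self.convs, self.dilations):
            x = F.relu(conv(F.pad(x, (d, 0))))        # left-pad => causal
        return x                                      # (batch, channels, time)

class OutputLayer(nn.Module):
    """Maps the representation to a score distribution over possible samples."""
    def __init__(self, channels=64, num_classes=256):
        super().__init__()
        self.proj = nn.Conv1d(channels, num_classes, kernel_size=1)

    def forward(self, h):
        return self.proj(h)                           # logits: (batch, 256, time)

@torch.no_grad()
def generate(subnet, out_layer, num_steps=100):
    samples = torch.zeros(1, 1, dtype=torch.long)     # seed with one silent sample
    for _ in range(num_steps):
        h = subnet(samples)                           # process the current sequence
        logits = out_layer(h)[:, :, -1]               # scores for this time step
        nxt = torch.multinomial(F.softmax(logits, dim=-1), 1)
        samples = torch.cat([samples, nxt], dim=1)    # append the sampled value
    return samples

subnet, out_layer = CausalConvStack(), OutputLayer()
audio = generate(subnet, out_layer, num_steps=50)
print(audio.shape)   # torch.Size([1, 51])
```

    At generation time the value sampled for each step is appended to the current sequence and fed back in, which is what makes the procedure autoregressive.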

    Using Hierarchical Representations for Neural Network Architecture Searching

    Publication Number: US20240249146A1

    Publication Date: 2024-07-25

    Application Number: US18415376

    Filing Date: 2024-01-17

    CPC classification number: G06N3/086 G06F16/9024 G06N3/045 G06F17/15

    Abstract: A computer-implemented method for automatically determining a neural network architecture represents a neural network architecture as a data structure defining a hierarchical set of directed acyclic graphs in multiple levels. Each graph has an input, an output, and a plurality of nodes between the input and the output. At each level, a corresponding set of the nodes are connected pairwise by directed edges which indicate operations performed on outputs of one node to generate an input to another node. Each level is associated with a corresponding set of operations. At a lowest level, the operations associated with each edge are selected from a set of primitive operations. The method includes repeatedly generating new sample neural network architectures by modifying previously generated architectures, and evaluating their fitness. The modification is performed by selecting a level, selecting two nodes at that level, and modifying, removing or adding an edge between those nodes according to operations associated with lower levels of the hierarchy.
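    The plain-Python sketch below illustrates one way the hierarchical encoding and its mutation step could be represented: each motif is a small DAG whose edges are labelled with operations, level-0 edges draw from a primitive set, and higher-level edges select among the motifs of the level below. The node counts, motif counts, primitive list, and names are assumptions made for illustration only.

```python
# Illustrative sketch of a hierarchical architecture encoding and its mutation
# step (level sizes, operation names, and "none" edges are assumptions).
import random

PRIMITIVES = ["identity", "conv3x3", "conv1x1", "maxpool3x3", "none"]

def new_motif(num_nodes, op_choices):
    """A motif is a DAG: edges[(i, j)] holds the operation applied to node i's
    output to produce an input for node j (i < j)."""
    return {
        (i, j): random.choice(op_choices)
        for j in range(1, num_nodes)
        for i in range(j)
    }

def new_hierarchy(nodes_per_level=(4, 5), motifs_per_level=(6, 1)):
    """Level 0 edges use primitive ops; level k edges select level k-1 motifs."""
    hierarchy, op_choices = [], PRIMITIVES
    for num_nodes, num_motifs in zip(nodes_per_level, motifs_per_level):
        level = [new_motif(num_nodes, op_choices) for _ in range(num_motifs)]
        hierarchy.append(level)
        op_choices = list(range(num_motifs))   # next level picks among these motifs
    return hierarchy

def mutate(hierarchy):
    """Pick a level, pick two nodes in one of its motifs, and re-sample the
    operation on the edge between them from that level's allowed operations."""
    level_idx = random.randrange(len(hierarchy))
    motif = random.choice(hierarchy[level_idx])
    edge = random.choice(list(motif.keys()))
    choices = PRIMITIVES if level_idx == 0 else list(range(len(hierarchy[level_idx - 1])))
    motif[edge] = random.choice(choices)       # modify (or remove via "none")
    return hierarchy

arch = mutate(new_hierarchy())
print(arch[1][0])   # top-level motif: edges labelled by level-0 motif indices
```

    In this encoding, evaluating a candidate would mean expanding the top-level motif recursively into primitive operations and training the resulting network to obtain its fitness.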

    MULTI-AGENT REINFORCEMENT LEARNING WITH MATCHMAKING POLICIES

    Publication Number: US20230244936A1

    Publication Date: 2023-08-03

    Application Number: US18131567

    Filing Date: 2023-04-06

    CPC classification number: G06N3/08 H04L63/205 G06F18/214

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network having a plurality of policy parameters and used to select actions to be performed by an agent to control the agent to perform a particular task while interacting with one or more other agents in an environment. In one aspect, the method includes: maintaining data specifying a pool of candidate action selection policies; maintaining data specifying a respective matchmaking policy; and training the policy neural network using a reinforcement learning technique to update the policy parameters. The policy parameters define policies to be used in controlling the agent to perform the particular task.
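    As a hedged sketch of the "pool of candidate action selection policies" together with a matchmaking policy, the snippet below keeps a list of frozen opponent snapshots and an estimated win rate against each, and samples the next opponent with probability weighted toward opponents the agent still loses to. The data structure, weighting rule, and names are illustrative assumptions; the abstract does not commit to any particular matchmaking rule.

```python
# Illustrative pool of candidate opponent policies plus a simple matchmaking
# distribution for choosing the next training opponent (names are assumptions).
import random
from dataclasses import dataclass, field

@dataclass
class MatchmakingPool:
    candidates: list = field(default_factory=list)   # frozen policy snapshots
    win_rates: list = field(default_factory=list)    # agent's win rate vs. each

    def add(self, policy_snapshot):
        self.candidates.append(policy_snapshot)
        self.win_rates.append(0.5)                   # uninformative prior

    def sample_opponent(self):
        # One possible matchmaking policy: prioritise opponents the agent
        # still loses to, so training focuses on its weaknesses.
        weights = [max(1e-3, 1.0 - w) for w in self.win_rates]
        return random.choices(range(len(self.candidates)), weights=weights, k=1)[0]

    def record_result(self, idx, agent_won, lr=0.1):
        self.win_rates[idx] += lr * (float(agent_won) - self.win_rates[idx])

pool = MatchmakingPool()
for snapshot in ["policy_v1", "policy_v2", "policy_v3"]:
    pool.add(snapshot)

for _ in range(5):
    opp = pool.sample_opponent()
    agent_won = random.random() < 0.6                # stand-in for playing a match
    pool.record_result(opp, agent_won)
print(pool.win_rates)
```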

    Multi-agent reinforcement learning with matchmaking policies

    Publication Number: US11627165B2

    Publication Date: 2023-04-11

    Application Number: US16752496

    Filing Date: 2020-01-24

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network having a plurality of policy parameters and used to select actions to be performed by an agent to control the agent to perform a particular task while interacting with one or more other agents in an environment. In one aspect, the method includes: maintaining data specifying a pool of candidate action selection policies; maintaining data specifying a respective matchmaking policy; and training the policy neural network using a reinforcement learning technique to update the policy parameters. The policy parameters define policies to be used in controlling the agent to perform the particular task.
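    This granted patent shares its abstract with the application above. To complement the matchmaking sketch, the snippet below shows one generic way the "reinforcement learning technique" step could look, using a bare REINFORCE update on a placeholder policy network; the environment, reward, and network shapes are placeholders, and the abstract does not specify which technique is actually used.

```python
# Minimal REINFORCE-style update of the policy network against a sampled
# opponent; observations and match outcomes here are placeholder assumptions.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def play_episode(policy, steps=20):
    """Collect log-probabilities of chosen actions and a scalar return."""
    log_probs = []
    for _ in range(steps):
        obs = torch.randn(8)                         # placeholder observation
        dist = torch.distributions.Categorical(logits=policy(obs))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
    episode_return = torch.randn(()).abs()           # placeholder match outcome
    return torch.stack(log_probs), episode_return

log_probs, ret = play_episode(policy)
loss = -(log_probs * ret).mean()                     # REINFORCE objective
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```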

    Learning observation representations by predicting the future in latent space

    Publication Number: US11568207B2

    Publication Date: 2023-01-31

    Application Number: US16586323

    Filing Date: 2019-09-27

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an encoder neural network that is configured to process an input observation to generate a latent representation of the input observation. In one aspect, a method includes: obtaining a sequence of observations; for each observation in the sequence of observations, processing the observation using the encoder neural network to generate a latent representation of the observation; for each of one or more given observations in the sequence of observations: generating a context latent representation of the given observation; and generating, from the context latent representation of the given observation, a respective estimate of the latent representations of one or more particular observations that are after the given observation in the sequence of observations.
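    A small PyTorch sketch of the idea in this abstract follows: an encoder maps each observation to a latent representation, a recurrent network summarises past latents into a context latent, and per-offset heads estimate the latents of observations several steps ahead. The squared-error objective is only a stand-in, since the abstract does not state which training loss is used, and all dimensions and module choices are assumptions.

```python
# Sketch of predicting future latent representations from a context summary
# (encoder, GRU, and predictor shapes are illustrative assumptions).
import torch
import torch.nn as nn

obs_dim, latent_dim, context_dim, k_future = 16, 32, 64, 3

encoder = nn.Sequential(nn.Linear(obs_dim, latent_dim), nn.ReLU())
context_net = nn.GRU(latent_dim, context_dim, batch_first=True)
# One linear head per future offset, estimating the latent k steps ahead.
predictors = nn.ModuleList(nn.Linear(context_dim, latent_dim) for _ in range(k_future))

observations = torch.randn(1, 10, obs_dim)           # (batch, time, obs_dim)
latents = encoder(observations)                       # latent per observation
contexts, _ = context_net(latents)                    # context latent per step

t = 5                                                 # a "given observation"
losses = []
for k, head in enumerate(predictors, start=1):
    predicted = head(contexts[:, t])                  # estimate of latent at t + k
    target = latents[:, t + k].detach()
    losses.append(((predicted - target) ** 2).mean()) # stand-in regression loss
loss = torch.stack(losses).mean()
loss.backward()
print(float(loss))
```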
