Generating audio using neural networks

    Publication No.: US10304477B2

    Publication Date: 2019-05-28

    Application No.: US16030742

    Filing Date: 2018-07-09

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating an output sequence of audio data that comprises a respective audio sample at each of a plurality of time steps. One of the methods includes, for each of the time steps: providing a current sequence of audio data as input to a convolutional subnetwork, wherein the current sequence comprises the respective audio sample at each time step that precedes the time step in the output sequence, and wherein the convolutional subnetwork is configured to process the current sequence of audio data to generate an alternative representation for the time step; and providing the alternative representation for the time step as input to an output layer, wherein the output layer is configured to: process the alternative representation to generate an output that defines a score distribution over a plurality of possible audio samples for the time step.
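The loop the abstract claims (feed the samples generated so far to a convolutional subnetwork, map the resulting representation through an output layer to a score distribution, sample the next value) can be sketched as follows. This is a minimal, hypothetical stand-in, not the patented network: the "convolutional subnetwork" is reduced to a causal weighted sum over the last few samples, and the output layer to a softmax over 8 quantized sample values.

```python
import numpy as np

def causal_scores(current_seq, weights, n_values=8):
    """Toy stand-in for the convolutional subnetwork + output layer.
    The subnetwork sees only samples preceding the current time step
    (causal), producing an 'alternative representation'; the output
    layer turns it into a score distribution over possible samples."""
    k = len(weights)
    # Left-pad with zeros so early time steps still have a k-sample context.
    ctx = current_seq[-k:] if len(current_seq) >= k else np.pad(
        np.asarray(current_seq, dtype=float), (k - len(current_seq), 0))
    rep = float(np.dot(ctx, weights))          # "alternative representation" (scalar here)
    logits = -0.5 * (np.arange(n_values) - rep) ** 2   # toy output layer
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                     # softmax: scores over possible samples

def generate(num_steps, weights, rng, n_values=8):
    """Autoregressive generation: each new sample is drawn from the
    distribution conditioned on all previously generated samples."""
    seq = []
    for _ in range(num_steps):
        probs = causal_scores(np.array(seq, dtype=float), weights, n_values)
        seq.append(int(rng.choice(n_values, p=probs)))
    return seq
```

The point of the sketch is the data flow, not the model: at every time step the input is the full current sequence of past samples, and the output defines a distribution over the next sample.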

    SPEECH RECOGNITION USING CONVOLUTIONAL NEURAL NETWORKS

    Publication No.: US20190108833A1

    Publication Date: 2019-04-11

    Application No.: US16209661

    Filing Date: 2018-12-04

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing speech recognition by generating a neural network output from an audio data input sequence, where the neural network output characterizes words spoken in the audio data input sequence. One of the methods includes, for each of the audio data inputs, providing a current audio data input sequence that comprises the audio data input and the audio data inputs preceding the audio data input in the audio data input sequence to a convolutional subnetwork comprising a plurality of dilated convolutional neural network layers, wherein the convolutional subnetwork is configured to, for each of the plurality of audio data inputs: receive the current audio data input sequence for the audio data input, and process the current audio data input sequence to generate an alternative representation for the audio data input.
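The key building block named in this abstract is the dilated convolutional layer, which lets each audio input attend to a wide span of preceding inputs without deep stacking. A minimal single-channel sketch of one causal dilated convolution (kernel values and dilation rate are illustrative, not taken from the patent):

```python
import numpy as np

def causal_dilated_conv1d(x, kernel, dilation):
    """Causal dilated 1-D convolution: output[t] depends only on
    x[t], x[t-d], x[t-2d], ..., so each position sees only the
    inputs that precede it, matching the causal structure the
    abstract describes."""
    k = len(kernel)
    pad = dilation * (k - 1)
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    out = np.zeros(len(x))
    for t in range(len(x)):
        for i in range(k):
            # kernel[0] multiplies the current input, kernel[1] the
            # input `dilation` steps back, and so on.
            out[t] += kernel[i] * xp[t + pad - i * dilation]
    return out
```

Stacking such layers with dilations 1, 2, 4, 8, ... grows the receptive field exponentially with depth, which is why the subnetwork can summarize a long current audio sequence into an alternative representation per input.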

    MULTI-AGENT REINFORCEMENT LEARNING WITH MATCHMAKING POLICIES

    Publication No.: US20240370725A1

    Publication Date: 2024-11-07

    Application No.: US18771770

    Filing Date: 2024-07-12

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network having a plurality of policy parameters and used to select actions to be performed by an agent to control the agent to perform a particular task while interacting with one or more other agents in an environment. In one aspect, the method includes: maintaining data specifying a pool of candidate action selection policies; maintaining data specifying a respective matchmaking policy; and training the policy neural network using a reinforcement learning technique to update the policy parameters. The policy parameters define policies to be used in controlling the agent to perform the particular task.
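The structure of the claimed training loop (maintain a pool of candidate opponent policies, sample an opponent per episode via a matchmaking policy, run a reinforcement learning update) can be mirrored in a toy sketch. Everything below is a hypothetical simplification: the learner is a single scalar parameter, the "opponents" are scalar skill values, and the RL update is a plain gradient step on a toy objective, standing in for whatever RL technique the patent contemplates.

```python
import numpy as np

def train_with_matchmaking(pool, matchmaking_probs, steps, rng, lr=0.05):
    """Sketch of the claimed loop: the matchmaking policy is a
    probability distribution over the pool of candidate opponents;
    each episode samples an opponent and updates the learner's
    parameters. (Toy objective: track the sampled opponent's skill;
    gradient of -(theta - skill)^2 is 2*(skill - theta).)"""
    theta = 0.0  # the policy network's parameters, reduced to one scalar
    for _ in range(steps):
        skill = pool[rng.choice(len(pool), p=matchmaking_probs)]
        theta += lr * 2.0 * (skill - theta)  # stand-in RL parameter update
    return theta
```

The matchmaking distribution controls which opponents the learner trains against, so changing `matchmaking_probs` changes what the converged parameters look like, which is the lever the method exposes.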

    Classifying input examples using a comparison set

    Publication No.: US12073304B2

    Publication Date: 2024-08-27

    Application No.: US18211085

    Filing Date: 2023-06-16

    CPC classification number: G06N3/044 G06F18/217 G06F18/22 G06F18/2413 G06N3/08

    Abstract: Methods, systems, and apparatus for classifying a new example using a comparison set of comparison examples. One method includes maintaining a comparison set, the comparison set including comparison examples and a respective label vector for each of the comparison examples, each label vector including a respective score for each label in a predetermined set of labels; receiving a new example; determining a respective attention weight for each comparison example by applying a neural network attention mechanism to the new example and to the comparison examples; and generating a respective label score for each label in the predetermined set of labels from, for each of the comparison examples, the respective attention weight for the comparison example and the respective label vector for the comparison example, in which the respective label score for each of the labels represents a likelihood that the label is a correct label for the new example.
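The two computations the abstract claims (an attention weight per comparison example, then label scores as the attention-weighted combination of the comparison examples' label vectors) can be sketched directly. The softmax over negative squared distances below is a hypothetical stand-in for the patent's neural network attention mechanism; the shapes and the distance choice are assumptions, not from the patent.

```python
import numpy as np

def classify_with_comparison_set(new_example, comparison_examples, label_vectors):
    """comparison_examples: (n, d) array; label_vectors: (n, L) array,
    one score per label per comparison example.
    Returns the attention weights and the label scores for new_example."""
    # Attention: softmax over similarity of the new example to each
    # comparison example (toy mechanism: negative squared distance).
    dists = np.sum((comparison_examples - new_example) ** 2, axis=1)
    logits = -dists
    a = np.exp(logits - logits.max())
    attention = a / a.sum()                    # one weight per comparison example
    # Label scores: attention-weighted sum of the label vectors.
    label_scores = attention @ label_vectors
    return attention, label_scores
```

Because the label scores are a convex combination of the comparison examples' label vectors, a new example close to a comparison example inherits that example's labels with high weight, which is what lets the method classify without retraining when the comparison set changes.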

    Multi-agent reinforcement learning with matchmaking policies

    Publication No.: US12067491B2

    Publication Date: 2024-08-20

    Application No.: US18131567

    Filing Date: 2023-04-06

    CPC classification number: G06N3/08 G06F18/214 H04L63/205

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network having a plurality of policy parameters and used to select actions to be performed by an agent to control the agent to perform a particular task while interacting with one or more other agents in an environment. In one aspect, the method includes: maintaining data specifying a pool of candidate action selection policies; maintaining data specifying a respective matchmaking policy; and training the policy neural network using a reinforcement learning technique to update the policy parameters. The policy parameters define policies to be used in controlling the agent to perform the particular task.

    CLASSIFYING INPUT EXAMPLES USING A COMPARISON SET

    Publication No.: US20230334288A1

    Publication Date: 2023-10-19

    Application No.: US18211085

    Filing Date: 2023-06-16

    CPC classification number: G06N3/044 G06N3/08 G06F18/2413 G06F18/22 G06F18/217

    Abstract: Methods, systems, and apparatus for classifying a new example using a comparison set of comparison examples. One method includes maintaining a comparison set, the comparison set including comparison examples and a respective label vector for each of the comparison examples, each label vector including a respective score for each label in a predetermined set of labels; receiving a new example; determining a respective attention weight for each comparison example by applying a neural network attention mechanism to the new example and to the comparison examples; and generating a respective label score for each label in the predetermined set of labels from, for each of the comparison examples, the respective attention weight for the comparison example and the respective label vector for the comparison example, in which the respective label score for each of the labels represents a likelihood that the label is a correct label for the new example.
