LEARNING TO EXTRACT ENTITIES FROM CONVERSATIONS WITH NEURAL NETWORKS

    Publication Number: US20220075944A1

    Publication Date: 2022-03-10

    Application Number: US17432259

    Application Date: 2020-02-19

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for extracting entities from conversation transcript data. One of the methods includes obtaining a conversation transcript sequence, processing the conversation transcript sequence using a span detection neural network configured to generate a set of text token spans; and for each text token span: processing a span representation using an entity name neural network to generate an entity name probability distribution over a set of entity names, each probability in the entity name probability distribution representing a likelihood that a corresponding entity name is a name of the entity referenced by the text token span; and processing the span representation using an entity status neural network to generate an entity status probability distribution over a set of entity statuses.
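The abstract describes two classification heads applied to each detected span representation: one producing a distribution over entity names, the other over entity statuses. A minimal sketch of that two-head step, using random weights as stand-ins for the trained entity-name and entity-status networks (the real networks and label sets are not specified here):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def classify_span(span_repr, name_weights, status_weights):
    """Map one span representation to an entity-name distribution and an
    entity-status distribution, mirroring the abstract's two heads."""
    name_probs = softmax(span_repr @ name_weights)      # over entity names
    status_probs = softmax(span_repr @ status_weights)  # over entity statuses
    return name_probs, status_probs

# Toy example: a 4-dim span representation, 3 hypothetical candidate entity
# names and 2 hypothetical statuses (illustrative label sets only).
rng = np.random.default_rng(0)
span_repr = rng.standard_normal(4)
name_probs, status_probs = classify_span(
    span_repr,
    rng.standard_normal((4, 3)),
    rng.standard_normal((4, 2)),
)
```

Each probability in `name_probs` plays the role of the likelihood that the corresponding entity name is the name of the entity the span refers to.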

    Joint Speech and Language Model Using Large Language Models

    Publication Number: US20240386881A1

    Publication Date: 2024-11-21

    Application Number: US18667763

    Application Date: 2024-05-17

    Applicant: Google LLC

    Abstract: Methods and systems for recognizing speech are disclosed herein. A method can include performing blank filtering on a received speech input to generate a plurality of filtered encodings and processing the plurality of filtered encodings to generate a plurality of audio embeddings. The method can also include mapping each audio embedding of the plurality of audio embeddings to a textual embedding using a speech adapter to generate a plurality of combined embeddings and receiving one or more specific textual embeddings from a domain-specific entity retriever based on the plurality of filtered encodings. The method can further include providing the plurality of combined embeddings and the one or more specific textual embeddings to a machine-trained model and receiving, from the machine-trained model, a textual output representing speech from the speech input.
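The pipeline above can be sketched in three steps: drop frames a decoder marks as blank, project the survivors into the language model's textual-embedding space with an adapter, then concatenate retrieved entity embeddings before handing everything to the model. The linear adapter and the 0.5 blank threshold below are illustrative assumptions, not the patented components:

```python
import numpy as np

def blank_filter(encodings, blank_probs, threshold=0.5):
    # Keep only frames whose blank probability is below the threshold,
    # discarding frames the decoder labels as blank.
    keep = blank_probs < threshold
    return encodings[keep]

def speech_adapter(filtered, adapter_w):
    # Linear adapter mapping audio encodings into the textual-embedding
    # space of the language model (an illustrative stand-in).
    return filtered @ adapter_w

rng = np.random.default_rng(1)
encodings = rng.standard_normal((10, 8))    # 10 frames, 8-dim encodings
blank_probs = rng.uniform(size=10)
filtered = blank_filter(encodings, blank_probs)
audio_embeds = speech_adapter(filtered, rng.standard_normal((8, 16)))
retrieved = rng.standard_normal((2, 16))    # entity-retriever embeddings
# Combined input handed to the machine-trained model.
model_input = np.concatenate([audio_embeds, retrieved], axis=0)
```

Blank filtering shortens the sequence the language model must attend over, which is the practical motivation for doing it before the adapter.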

    COMPLEX EVOLUTION RECURRENT NEURAL NETWORKS
    Invention Application

    Publication Number: US20190156819A1

    Publication Date: 2019-05-23

    Application Number: US16251430

    Application Date: 2019-01-18

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech recognition using complex evolution recurrent neural networks. In some implementations, audio data indicating acoustic characteristics of an utterance is received. A first vector sequence comprising audio features determined from the audio data is generated. A second vector sequence is generated, as output of a first recurrent neural network in response to receiving the first vector sequence as input, where the first recurrent neural network has a transition matrix that implements a cascade of linear operators comprising (i) first linear operators that are complex-valued and unitary, and (ii) one or more second linear operators that are non-unitary. An output vector sequence of a second recurrent neural network is generated. A transcription for the utterance is generated based on the output vector sequence generated by the second recurrent neural network. The transcription for the utterance is provided.
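The key property in this abstract is a transition matrix built as a cascade of complex-valued unitary operators plus a non-unitary operator. A small sketch, under the assumption that the unitaries are drawn via QR decomposition and the non-unitary piece is a diagonal scaling (the patent's specific operator parameterization is not reproduced here):

```python
import numpy as np

def random_unitary(n, rng):
    # QR decomposition of a complex Gaussian matrix yields a unitary Q;
    # rescaling columns by the phases of R's diagonal fixes the sign gauge.
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(2)
n = 6
u1, u2 = random_unitary(n, rng), random_unitary(n, rng)
# Non-unitary diagonal operator: gains differ from 1, so norms can change.
d = np.diag(rng.uniform(0.5, 1.5, size=n)).astype(complex)

def transition(h):
    # Cascade of linear operators: unitary, non-unitary, unitary.
    return u2 @ (d @ (u1 @ h))

h = rng.standard_normal(n) + 1j * rng.standard_normal(n)
# A unitary operator preserves the Euclidean norm of the hidden state,
# which is what keeps gradients from vanishing or exploding.
assert np.isclose(np.linalg.norm(u1 @ h), np.linalg.norm(h))
```

The norm-preservation check is the reason unitary transitions are attractive for recurrent networks; the interleaved non-unitary operator restores the expressiveness a purely unitary cascade would lack.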

    Complex linear projection for acoustic modeling

    Publication Number: US10140980B2

    Publication Date: 2018-11-27

    Application Number: US15386979

    Application Date: 2016-12-21

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech recognition using complex linear projection are disclosed. In one aspect, a method includes the actions of receiving audio data corresponding to an utterance. The method further includes generating frequency domain data using the audio data. The method further includes processing the frequency domain data using complex linear projection. The method further includes providing the processed frequency domain data to a neural network trained as an acoustic model. The method further includes generating a transcription for the utterance that is determined based at least on output that the neural network provides in response to receiving the processed frequency domain data.
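The sequence of actions in the abstract (audio to frequency domain, complex linear projection, then features for an acoustic model) can be sketched as follows. The log-magnitude readout and projection width are assumptions for illustration, not the claimed parameterization:

```python
import numpy as np

def complex_linear_projection(frames, weights):
    """One CLP-style feature step: FFT each frame into the frequency
    domain, apply a learned complex-valued projection, then take the
    log-magnitude as real-valued features for the acoustic model."""
    spectra = np.fft.rfft(frames, axis=-1)      # frequency-domain data
    projected = spectra @ weights               # complex linear projection
    return np.log(np.abs(projected) + 1e-8)     # real-valued features

rng = np.random.default_rng(3)
frames = rng.standard_normal((5, 64))           # 5 frames of 64 samples
n_bins = 64 // 2 + 1                            # rfft bins for a 64-pt frame
weights = rng.standard_normal((n_bins, 12)) + 1j * rng.standard_normal((n_bins, 12))
features = complex_linear_projection(frames, weights)
```

Because the projection is learned jointly with the acoustic model, it can replace fixed filterbank features while operating directly on complex spectra.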

    Learning to extract entities from conversations with neural networks

    Publication Number: US12216999B2

    Publication Date: 2025-02-04

    Application Number: US17432259

    Application Date: 2020-02-19

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for extracting entities from conversation transcript data. One of the methods includes obtaining a conversation transcript sequence, processing the conversation transcript sequence using a span detection neural network configured to generate a set of text token spans; and for each text token span: processing a span representation using an entity name neural network to generate an entity name probability distribution over a set of entity names, each probability in the entity name probability distribution representing a likelihood that a corresponding entity name is a name of the entity referenced by the text token span; and processing the span representation using an entity status neural network to generate an entity status probability distribution over a set of entity statuses.

    COMPLEX LINEAR PROJECTION FOR ACOUSTIC MODELING

    Publication Number: US20200286468A1

    Publication Date: 2020-09-10

    Application Number: US16879322

    Application Date: 2020-05-20

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech recognition using complex linear projection are disclosed. In one aspect, a method includes the actions of receiving audio data corresponding to an utterance. The method further includes generating frequency domain data using the audio data. The method further includes processing the frequency domain data using complex linear projection. The method further includes providing the processed frequency domain data to a neural network trained as an acoustic model. The method further includes generating a transcription for the utterance that is determined based at least on output that the neural network provides in response to receiving the processed frequency domain data.

    Complex evolution recurrent neural networks

    Publication Number: US10529320B2

    Publication Date: 2020-01-07

    Application Number: US16251430

    Application Date: 2019-01-18

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech recognition using complex evolution recurrent neural networks. In some implementations, audio data indicating acoustic characteristics of an utterance is received. A first vector sequence comprising audio features determined from the audio data is generated. A second vector sequence is generated, as output of a first recurrent neural network in response to receiving the first vector sequence as input, where the first recurrent neural network has a transition matrix that implements a cascade of linear operators comprising (i) first linear operators that are complex-valued and unitary, and (ii) one or more second linear operators that are non-unitary. An output vector sequence of a second recurrent neural network is generated. A transcription for the utterance is generated based on the output vector sequence generated by the second recurrent neural network. The transcription for the utterance is provided.
