SEQUENCE MODELS FOR AUDIO SCENE RECOGNITION

    Publication Number: US20210065735A1

    Publication Date: 2021-03-04

    Application Number: US16997314

    Filing Date: 2020-08-19

    Abstract: A method is provided. Intermediate audio features are generated from an input acoustic sequence. Using a nearest neighbor search, segments of the input acoustic sequence are classified based on the intermediate audio features to generate a final intermediate feature as a classification for the input acoustic sequence. Each segment corresponds to a respective different acoustic window. The generating step includes learning the intermediate audio features from Multi-Frequency Cepstral Component (MFCC) features extracted from the input acoustic sequence. The generating step includes dividing the same scene into the different acoustic windows having varying MFCC features. The generating step includes feeding the MFCC features of each of the different acoustic windows into respective LSTM units such that a hidden state of each respective LSTM unit is passed through an attention layer to identify feature correlations between hidden states at different time steps corresponding to different ones of the different acoustic windows.
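The per-window pipeline described in the abstract can be sketched as follows. This is a minimal illustration only: real MFCC extraction (e.g. via a library such as librosa) and a trained LSTM are stubbed out with random projections, and all dimensions (5 windows, 13 MFCC coefficients, 8 hidden units) are hypothetical choices, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one acoustic scene divided into 5 acoustic windows,
# each yielding a 13-dim MFCC-style feature vector (stubbed with noise).
n_windows, mfcc_dim, hidden_dim = 5, 13, 8
mfcc_windows = rng.normal(size=(n_windows, mfcc_dim))

# Stand-in for the per-window LSTM units: a fixed linear map producing
# one hidden state per acoustic window.
W = rng.normal(size=(mfcc_dim, hidden_dim))
hidden_states = np.tanh(mfcc_windows @ W)          # (n_windows, hidden_dim)

# Dot-product attention across time steps: each window's hidden state is
# scored against a query vector, and the softmax weights reflect
# correlations between hidden states at different time steps.
query = rng.normal(size=hidden_dim)
scores = hidden_states @ query                     # (n_windows,)
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# Attention-pooled summary: a stand-in for the "intermediate audio
# feature" representing the whole input acoustic sequence.
intermediate_feature = weights @ hidden_states     # (hidden_dim,)
```

The attention weights form a probability distribution over the windows, so windows whose hidden states align with the query dominate the pooled feature.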

    Sequence models for audio scene recognition

    Publication Number: US10930301B1

    Publication Date: 2021-02-23

    Application Number: US16997314

    Filing Date: 2020-08-19

    Abstract: A method is provided. Intermediate audio features are generated from an input acoustic sequence. Using a nearest neighbor search, segments of the input acoustic sequence are classified based on the intermediate audio features to generate a final intermediate feature as a classification for the input acoustic sequence. Each segment corresponds to a respective different acoustic window. The generating step includes learning the intermediate audio features from Multi-Frequency Cepstral Component (MFCC) features extracted from the input acoustic sequence. The generating step includes dividing the same scene into the different acoustic windows having varying MFCC features. The generating step includes feeding the MFCC features of each of the different acoustic windows into respective LSTM units such that a hidden state of each respective LSTM unit is passed through an attention layer to identify feature correlations between hidden states at different time steps corresponding to different ones of the different acoustic windows.

    AUDIO SCENE RECOGNITION USING TIME SERIES ANALYSIS

    Publication Number: US20210065734A1

    Publication Date: 2021-03-04

    Application Number: US16997249

    Filing Date: 2020-08-19

    Abstract: A method is provided. Intermediate audio features are generated from respective segments of an input acoustic time series for a same scene. Using a nearest neighbor search, respective segments of the input acoustic time series are classified based on the intermediate audio features to generate a final intermediate feature as a classification for the input acoustic time series. Each respective segment corresponds to a respective different acoustic window. The generating step includes learning the intermediate audio features from Multi-Frequency Cepstral Component (MFCC) features extracted from the input acoustic time series, dividing the same scene into the different windows having varying MFCC features, and feeding the MFCC features of each window into respective LSTM units such that a hidden state of each respective LSTM unit is passed through an attention layer to identify feature correlations between hidden states at different time steps corresponding to different ones of the different windows.

    Audio scene recognition using time series analysis

    Publication Number: US11355138B2

    Publication Date: 2022-06-07

    Application Number: US16997249

    Filing Date: 2020-08-19

    Abstract: A method is provided. Intermediate audio features are generated from respective segments of an input acoustic time series for a same scene. Using a nearest neighbor search, respective segments of the input acoustic time series are classified based on the intermediate audio features to generate a final intermediate feature as a classification for the input acoustic time series. Each respective segment corresponds to a respective different acoustic window. The generating step includes learning the intermediate audio features from Multi-Frequency Cepstral Component (MFCC) features extracted from the input acoustic time series, dividing the same scene into the different windows having varying MFCC features, and feeding the MFCC features of each window into respective LSTM units such that a hidden state of each respective LSTM unit is passed through an attention layer to identify feature correlations between hidden states at different time steps corresponding to different ones of the different windows.
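The nearest-neighbor classification step described in these abstracts can be sketched as below. The labeled prototype features, the scene names, the 1-NN distance metric (Euclidean), and the majority vote over segments are all illustrative assumptions, not details taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reference set: one stored intermediate feature per known
# scene class.
labels = ["street", "office", "park"]
prototypes = rng.normal(size=(3, 8))

def classify_segment(feature, prototypes, labels):
    """1-nearest-neighbour search over stored intermediate features."""
    dists = np.linalg.norm(prototypes - feature, axis=1)
    return labels[int(np.argmin(dists))]

# Each segment of the input acoustic time series (one per acoustic
# window) is classified independently; here the segment features are
# synthesised as small perturbations of the "street" prototype.
segment_features = prototypes[0] + 0.1 * rng.normal(size=(4, 8))
votes = [classify_segment(f, prototypes, labels) for f in segment_features]

# A majority vote over the per-segment labels gives the classification
# for the whole input time series.
scene = max(set(votes), key=votes.count)
```

A distance-weighted vote or a single 1-NN lookup on the attention-pooled sequence feature would be straightforward variants of the same idea.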
