CLOUD-BASED PROCESSING USING LOCAL DEVICE PROVIDED SENSOR DATA AND LABELS

    Publication Number: US20170270406A1

    Publication Date: 2017-09-21

    Application Number: US15273496

    Application Date: 2016-09-22

    CPC classification number: G06N3/08 G06N3/04 G06N3/0454

    Abstract: A method of training a device-specific cloud-based audio processor includes receiving sensor data captured from multiple sensors at a local device. The method also includes receiving spatial information labels computed on the local device using local configuration information. The spatial information labels are associated with the captured sensor data. Lower layers of a first neural network are trained based on the spatial information labels and sensor data. The trained lower layers are incorporated into a second, larger neural network for audio classification. The second, larger neural network may be retrained using the trained lower layers of the first neural network.
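
    A minimal PyTorch-style sketch of the transfer described above, assuming illustrative layer sizes, sensor counts, and label counts (the names SpatialLabelNet and AudioClassifier and all dimensions are assumptions, not taken from the patent): lower layers are first trained against device-computed spatial labels and then incorporated into a second, larger audio-classification network for retraining.

        import torch.nn as nn

        class SpatialLabelNet(nn.Module):
            """First (smaller) network: predicts spatial-information labels from multi-sensor frames."""
            def __init__(self, n_sensors=4, n_spatial_labels=8):
                super().__init__()
                self.lower = nn.Sequential(                   # lower layers to be transferred
                    nn.Conv1d(n_sensors, 32, kernel_size=5, padding=2), nn.ReLU(),
                    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                )
                self.head = nn.Linear(64, n_spatial_labels)   # spatial-label output

            def forward(self, x):                             # x: (batch, n_sensors, samples)
                return self.head(self.lower(x))

        class AudioClassifier(nn.Module):
            """Second, larger network: incorporates the trained lower layers for audio classification."""
            def __init__(self, trained_lower, n_audio_classes=50):
                super().__init__()
                self.lower = trained_lower                    # trained lower layers from SpatialLabelNet
                self.upper = nn.Sequential(
                    nn.Linear(64, 256), nn.ReLU(),
                    nn.Linear(256, n_audio_classes),
                )

            def forward(self, x):
                return self.upper(self.lower(x))

        spatial_net = SpatialLabelNet()
        # ... train spatial_net on (sensor_data, spatial_label) pairs received from local devices ...
        audio_net = AudioClassifier(spatial_net.lower)
        # ... retrain audio_net on audio-classification targets; the lower layers may be frozen or fine-tuned ...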

    TRANSFORM AMBISONIC COEFFICIENTS USING AN ADAPTIVE NETWORK FOR PRESERVING SPATIAL DIRECTION

    Publication Number: US20230260525A1

    Publication Date: 2023-08-17

    Application Number: US18138684

    Application Date: 2023-04-24

    Abstract: A device includes a memory configured to store untransformed ambisonic coefficients at different time segments. The device includes one or more processors configured to obtain the untransformed ambisonic coefficients at the different time segments, where the untransformed ambisonic coefficients at the different time segments represent a soundfield at the different time segments. The one or more processors are configured to apply one adaptive network, based on a constraint that includes preservation of a spatial direction of one or more audio sources in the soundfield at the different time segments, to the untransformed ambisonic coefficients at the different time segments to generate transformed ambisonic coefficients at the different time segments, wherein the transformed ambisonic coefficients at the different time segments represent a modified soundfield at the different time segments that was modified based on the constraint. The one or more processors are also configured to apply an additional adaptive network.
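
    A hedged sketch of how a direction-preservation constraint could enter training, assuming first-order (four-coefficient) ambisonics, a toy feed-forward mapping, and a cosine-similarity penalty on a crude direction estimate; the network structure, constraint formulation, and all dimensions are assumptions rather than the claimed design.

        import torch.nn as nn
        import torch.nn.functional as F

        class AmbisonicTransformNet(nn.Module):
            """Adaptive network applied per time segment to ambisonic coefficients."""
            def __init__(self, n_coeffs=4, hidden=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(n_coeffs, hidden), nn.ReLU(),
                    nn.Linear(hidden, n_coeffs),
                )

            def forward(self, coeffs):                # coeffs: (segments, n_coeffs)
                return self.net(coeffs)               # transformed coefficients per segment

        def source_direction(coeffs):
            # Crude per-segment direction estimate from the three first-order components.
            return F.normalize(coeffs[:, 1:4], dim=-1)

        def constrained_loss(transformed, untransformed, target):
            modification = F.mse_loss(transformed, target)           # desired soundfield change
            preservation = 1.0 - F.cosine_similarity(                # keep the apparent direction
                source_direction(transformed),
                source_direction(untransformed), dim=-1).mean()
            return modification + preservation

        # The additional adaptive network mentioned in the abstract could be cascaded,
        # e.g. second_net(first_net(untransformed_coeffs)).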

    CONTEXT-BASED SPEECH ENHANCEMENT

    Publication Number: US20230326477A1

    Publication Date: 2023-10-12

    Application Number: US18334641

    Application Date: 2023-06-14

    CPC classification number: G10L21/0232 G10L21/038 G10L21/02

    Abstract: A device to perform speech enhancement includes one or more processors configured to process image data to detect at least one of an emotion, a speaker characteristic, or a noise type. The one or more processors are also configured to generate context data based at least in part on the at least one of the emotion, the speaker characteristic, or the noise type. The one or more processors are further configured to obtain input spectral data based on an input signal. The input signal represents sound that includes speech. The one or more processors are also configured to process, using a multi-encoder transformer, the input spectral data and the context data to generate output spectral data that represents a speech enhanced version of the input signal.
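
    An illustrative sketch of a multi-encoder arrangement, assuming one transformer encoder for the input spectral frames and one for an image-derived context vector, fused by cross-attention; the dimensions, the fusion step, and the context encoding are assumptions, not the claimed architecture.

        import torch
        import torch.nn as nn

        class MultiEncoderEnhancer(nn.Module):
            """One encoder for spectral frames, one for context features; fused by cross-attention."""
            def __init__(self, n_bins=257, ctx_dim=16, d_model=128):
                super().__init__()
                self.spec_proj = nn.Linear(n_bins, d_model)
                self.ctx_proj = nn.Linear(ctx_dim, d_model)
                self.spec_encoder = nn.TransformerEncoder(
                    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
                self.ctx_encoder = nn.TransformerEncoder(
                    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=1)
                self.fuse = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
                self.out = nn.Linear(d_model, n_bins)

            def forward(self, spec, ctx):
                # spec: (batch, frames, n_bins) input spectral data
                # ctx:  (batch, 1, ctx_dim) context features (emotion / speaker / noise type)
                s = self.spec_encoder(self.spec_proj(spec))
                c = self.ctx_encoder(self.ctx_proj(ctx))
                fused, _ = self.fuse(query=s, key=c, value=c)   # condition each frame on context
                return self.out(s + fused)                      # output spectral data (enhanced)

        enhancer = MultiEncoderEnhancer()
        spec = torch.randn(2, 100, 257)     # batch of input magnitude spectra
        ctx = torch.randn(2, 1, 16)         # context vector from the image-analysis branch
        enhanced = enhancer(spec, ctx)      # (2, 100, 257) speech-enhanced spectra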

    MIXED ADAPTIVE AND FIXED COEFFICIENT NEURAL NETWORKS FOR SPEECH ENHANCEMENT

    Publication Number: US20210343306A1

    Publication Date: 2021-11-04

    Application Number: US17243434

    Application Date: 2021-04-28

    Abstract: Systems, methods and computer-readable media are provided for speech enhancement using a hybrid neural network. An example process can include receiving, by a first neural network portion of the hybrid neural network, audio data and reference data, the audio data including speech data, noise data, and echo data; filtering, by the first neural network portion, a portion of the audio data based on adapted coefficients of the first neural network portion, the portion of the audio data including the noise data and/or echo data; based on the filtering, generating, by the first neural network portion, filtered audio data including the speech data and an unfiltered portion of the noise data and/or echo data; and based on the filtered audio data and the reference data, extracting, by a second neural network portion of the hybrid neural network, the speech data from the filtered audio data.
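
    A rough sketch of the adaptive/fixed split, assuming an NLMS-style run-time-adapted echo filter as the first portion and a small fixed-weight extractor as the second portion; the filter length, step size, frame size, and layer sizes are placeholder assumptions rather than the patented configuration.

        import torch
        import torch.nn as nn

        class AdaptiveEchoFilter(nn.Module):
            """First portion: filter coefficients adapt at run time against the reference signal."""
            def __init__(self, taps=128, step=0.1):
                super().__init__()
                self.register_buffer("w", torch.zeros(taps))
                self.step = step

            def forward(self, audio, reference):
                # audio, reference: (samples,) time-aligned mono signals
                ref = nn.functional.pad(reference, (self.w.numel() - 1, 0))
                frames = ref.unfold(0, self.w.numel(), 1)    # (samples, taps)
                echo_estimate = frames @ self.w
                filtered = audio - echo_estimate             # speech plus residual noise/echo
                with torch.no_grad():                        # simple NLMS coefficient update
                    x = frames[-1]
                    self.w += self.step * filtered[-1] * x / (x @ x + 1e-8)
                return filtered

        class FixedSpeechExtractor(nn.Module):
            """Second portion: fixed (pre-trained) coefficients extract speech from filtered audio."""
            def __init__(self, frame=256):
                super().__init__()
                self.net = nn.Sequential(nn.Linear(2 * frame, 512), nn.ReLU(), nn.Linear(512, frame))
                for p in self.parameters():
                    p.requires_grad_(False)                  # coefficients stay fixed

            def forward(self, filtered_frame, reference_frame):
                return self.net(torch.cat([filtered_frame, reference_frame], dim=-1))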

    TRANSFORM AMBISONIC COEFFICIENTS USING AN ADAPTIVE NETWORK

    Publication Number: US20210304777A1

    Publication Date: 2021-09-30

    Application Number: US17210357

    Application Date: 2021-03-23

    Abstract: A device includes a memory configured to store untransformed ambisonic coefficients at different time segments. The device also includes one or more processors configured to obtain the untransformed ambisonic coefficients at the different time segments, where the untransformed ambisonic coefficients at the different time segments represent a soundfield at the different time segments. The one or more processors are also configured to apply one adaptive network, based on a constraint, to the untransformed ambisonic coefficients at the different time segments to generate transformed ambisonic coefficients at the different time segments, wherein the transformed ambisonic coefficients at the different time segments represent a modified soundfield at the different time segments that was modified based on the constraint.
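
    A brief sketch of how a generic constraint could be folded into a per-segment training step; the energy-preservation penalty below is purely a stand-in, since the abstract leaves the constraint itself open-ended, and the function and parameter names are assumptions.

        import torch.nn.functional as F

        def train_step(model, optimizer, untransformed, target, constraint_weight=1.0):
            # untransformed, target: (segments, n_coeffs) ambisonic coefficients per time segment
            transformed = model(untransformed)
            # Example constraint term only: preserve per-segment signal energy.
            constraint = F.mse_loss(transformed.pow(2).sum(dim=-1),
                                    untransformed.pow(2).sum(dim=-1))
            loss = F.mse_loss(transformed, target) + constraint_weight * constraint
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return loss.item()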
