Code-switching speech recognition with end-to-end connectionist temporal classification model

    Publication No.: US10964309B2

    Publication Date: 2021-03-30

    Application No.: US16410556

    Filing Date: 2019-05-13

    Abstract: A code-switching (CS) CTC model may be initialized from a major-language CTC model by keeping the network's hidden weights and replacing the output tokens with the union of the major- and secondary-language output tokens. The initialized model may be trained by updating its parameters with training data from both languages, and a language identification (LID) model may also be trained on that data. During decoding, for each of a series of audio frames, if silence dominates the current frame, a silence output token may be emitted. Otherwise, the major-language output-token posterior vector from the CS CTC model may be multiplied by the LID major-language probability to create a probability vector for the major language. The same step is performed for the secondary language, and the system may emit the output token with the highest probability across all tokens from both languages.
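The per-frame decoding rule above can be sketched as follows; the function name, the 0.5 silence-dominance threshold, and the token-index layout are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def decode_frame(ctc_posteriors, lid_probs, major_ids, secondary_ids,
                 silence_id, silence_threshold=0.5):
    """One decoding step of the code-switching CTC scheme described above.

    ctc_posteriors : per-token posterior vector from the CS CTC model
    lid_probs      : (p_major, p_secondary) from the LID model
    The 0.5 threshold and all names are illustrative assumptions.
    """
    # If silence dominates the current frame, emit the silence token.
    if ctc_posteriors[silence_id] > silence_threshold:
        return silence_id

    # Weight each language's token posteriors by its LID probability.
    scores = np.zeros_like(ctc_posteriors)
    scores[major_ids] = ctc_posteriors[major_ids] * lid_probs[0]
    scores[secondary_ids] = ctc_posteriors[secondary_ids] * lid_probs[1]

    # Emit the token with the highest probability across both languages.
    return int(np.argmax(scores))
```

A frame whose silence posterior is below the threshold is scored across the union of both languages' tokens, so a single argmax suffices.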

    Learning Student DNN Via Output Distribution

    Type: Invention Application; Status: Pending (Published)

    Publication No.: US20160078339A1

    Publication Date: 2016-03-17

    Application No.: US14853485

    Filing Date: 2015-09-14

    CPC classification number: G06N3/084 G06N3/0454 G06N7/005 G06N20/00 G09B5/00

    Abstract: Systems and methods are provided for generating a DNN classifier by “learning” a “student” DNN model from a larger, more accurate “teacher” DNN model. The student DNN may be trained from unlabeled training data because its supervised signal is obtained by passing the unlabeled training data through the teacher DNN. In one embodiment, an iterative process trains the student DNN by minimizing the divergence between the output distributions of the teacher and student DNN models. For each iteration until convergence, the difference in the output distributions is used to update the student DNN model, and the output distributions are determined again using the unlabeled training data. The resulting trained student model may be suitable for providing accurate signal-processing applications on devices with limited computational or storage resources, such as mobile or wearable devices. In an embodiment, the teacher DNN model comprises an ensemble of DNN models.

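One iteration of the teacher-student loop might be sketched as below, with the student reduced to a single softmax layer for brevity; the layer shape, learning rate, and all names are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_step(student_W, x, teacher_probs, lr=0.1):
    """One iteration of the teacher-student training described above.

    The supervised signal is the teacher's output distribution on the
    unlabeled inputs x; no ground-truth labels are used. The student is
    a single softmax layer here purely for illustration.
    """
    student_probs = softmax(x @ student_W)
    # Gradient of the cross-entropy between teacher and student
    # distributions w.r.t. the logits, averaged over the batch.
    grad_logits = (student_probs - teacher_probs) / x.shape[0]
    student_W -= lr * (x.T @ grad_logits)
    # Report KL(teacher || student) so the caller can check convergence.
    kl = np.mean(np.sum(teacher_probs *
                        (np.log(teacher_probs + 1e-12) -
                         np.log(student_probs + 1e-12)), axis=1))
    return student_W, kl
```

Repeating the step until the reported divergence stops shrinking mirrors the "for each iteration until convergence" loop in the abstract.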

    Scalable mining of trending insights from text

    Publication No.: US10733221B2

    Publication Date: 2020-08-04

    Application No.: US15085714

    Filing Date: 2016-03-30

    Abstract: A system and method for identifying trending topics in a document corpus are provided. First, multiple topics are identified, some of which may be filtered out or removed based on co-occurrence. Then, for each remaining topic, a frequency of the topic in the document corpus is determined, one or more frequencies of the topic in one or more other document corpora are determined, and a trending score for the topic is generated based on the determined frequencies. Lastly, the remaining topics are ranked by their trending scores.
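The frequency-and-ranking procedure could look roughly like this; the smoothed-ratio scoring formula is an illustrative assumption, since the abstract only states that the score is derived from the determined frequencies.

```python
from collections import Counter

def trending_scores(current_docs, past_docs):
    """Sketch of the frequency-based trending ranking described above.

    Each corpus is a list of documents, each document a list of topic
    strings. The ratio-based score below is an assumption; the patent
    only says the score is based on the determined frequencies.
    """
    # Document frequency of each topic in each corpus.
    cur = Counter(t for doc in current_docs for t in set(doc))
    past = Counter(t for doc in past_docs for t in set(doc))
    n_cur, n_past = max(len(current_docs), 1), max(len(past_docs), 1)

    scores = {}
    for topic, count in cur.items():
        f_now = count / n_cur
        f_then = past.get(topic, 0) / n_past
        scores[topic] = f_now / (f_then + 1e-9)  # smoothed ratio

    # Rank the remaining topics by descending trending score.
    return sorted(scores, key=scores.get, reverse=True)
```

A topic that is frequent now but rare in the comparison corpora gets a large score and rises to the top of the ranking.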

    Augmented training data for end-to-end models

    Publication No.: US11862144B2

    Publication Date: 2024-01-02

    Application No.: US17124341

    Filing Date: 2020-12-16

    CPC classification number: G10L15/063 G10L13/07 G10L15/19 G10L15/26

    Abstract: A computer system is provided that includes a processor configured to store a set of audio training data comprising a plurality of audio segments and metadata indicating the word or phrase associated with each audio segment. For a target training statement in a set of structured text data, the processor is configured to generate a concatenated audio signal that matches the word content of the target training statement by comparing the words or phrases of a plurality of text segments of the target training statement to the respective words or phrases of audio segments in the stored set of audio training data, selecting a plurality of audio segments from the set of audio training data based on matches in the words or phrases between the text segments and the selected audio segments, and concatenating the selected audio segments.
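The matching-and-concatenation step might be sketched as follows; the greedy longest-phrase-first matching strategy and the dictionary representation of the audio bank are assumptions for illustration, not details from the patent.

```python
def synthesize_training_audio(target_text, audio_bank):
    """Sketch of the segment selection and concatenation described above.

    audio_bank maps a word or phrase (from the metadata) to an audio
    segment, represented here as a list of samples. Greedy
    longest-phrase-first matching is an illustrative assumption.
    """
    words = target_text.split()
    concatenated, i = [], 0
    while i < len(words):
        # Try the longest phrase starting at position i first, so a
        # multi-word audio segment is preferred over word-by-word splices.
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if phrase in audio_bank:
                concatenated.extend(audio_bank[phrase])
                i = j
                break
        else:
            raise KeyError(f"no audio segment matches {words[i]!r}")
    return concatenated
```

Preferring longer matched phrases tends to preserve natural co-articulation across word boundaries in the resulting concatenated signal.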
