12. CONVOLUTIONAL, LONG SHORT-TERM MEMORY, FULLY CONNECTED DEEP NEURAL NETWORKS
    Invention application, pending (published)

    Publication number: US20160099010A1

    Publication date: 2016-04-07

    Application number: US14847133

    Filing date: 2015-09-08

    Applicant: Google Inc.

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for identifying the language of a spoken utterance. One of the methods includes receiving input features of an utterance; and processing the input features using an acoustic model that comprises one or more convolutional neural network (CNN) layers, one or more long short-term memory network (LSTM) layers, and one or more fully connected neural network layers to generate a transcription for the utterance.

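    The abstract above describes a CLDNN-style acoustic model: convolutional layers feeding LSTM layers feeding fully connected output layers. The sketch below only illustrates that layer ordering and is not the patented implementation; the framework (PyTorch), the layer sizes, the 40-dimensional filterbank features, and the output class count are assumptions chosen to make the example runnable.

```python
import torch
import torch.nn as nn

class CLDNN(nn.Module):
    """Illustrative CNN -> LSTM -> fully connected acoustic model (assumed sizes)."""

    def __init__(self, num_mel_bins=40, lstm_units=256, num_classes=42):
        super().__init__()
        # Convolutional front end over (time, frequency) feature maps.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2)),  # pool along frequency only
        )
        conv_out_dim = 32 * (num_mel_bins // 2)
        # LSTM layers model temporal context across frames.
        self.lstm = nn.LSTM(conv_out_dim, lstm_units, num_layers=2, batch_first=True)
        # Fully connected layers map LSTM outputs to per-frame class scores.
        self.fc = nn.Sequential(
            nn.Linear(lstm_units, lstm_units),
            nn.ReLU(),
            nn.Linear(lstm_units, num_classes),
        )

    def forward(self, features):
        # features: (batch, time, num_mel_bins)
        x = features.unsqueeze(1)             # (batch, 1, time, freq)
        x = self.conv(x)                      # (batch, 32, time, freq // 2)
        x = x.permute(0, 2, 1, 3).flatten(2)  # (batch, time, 32 * freq // 2)
        x, _ = self.lstm(x)                   # (batch, time, lstm_units)
        return self.fc(x)                     # (batch, time, num_classes)

scores = CLDNN()(torch.randn(8, 100, 40))  # per-frame scores: (8, 100, 42)
```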

13. TRAINING DISTILLED MACHINE LEARNING MODELS
    Invention application, pending (published)

    Publication number: US20150356461A1

    Publication date: 2015-12-10

    Application number: US14731349

    Filing date: 2015-06-04

    Applicant: Google Inc.

    CPC classification number: G06N99/005 G06N3/0454 G06N7/00 G06N7/005

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a distilled machine learning model. One of the methods includes training a cumbersome machine learning model, wherein the cumbersome machine learning model is configured to receive an input and generate a respective score for each of a plurality of classes; and training a distilled machine learning model on a plurality of training inputs, wherein the distilled machine learning model is also configured to receive inputs and generate scores for the plurality of classes, comprising: processing each training input using the cumbersome machine learning model to generate a cumbersome target soft output for the training input; and training the distilled machine learning model to, for each of the training inputs, generate a soft output that matches the cumbersome target soft output for the training input.

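    As an illustration of the training scheme the abstract describes, the sketch below trains a small "distilled" model to match soft outputs produced by a larger, already-trained "cumbersome" model. This is not the patented method; the framework (PyTorch), the temperature, the two model architectures, and the KL-divergence matching loss are assumptions made here so the example runs end to end.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

temperature = 2.0  # assumed softening temperature for the soft targets

# Assumed stand-ins: a large cumbersome (teacher) model and a small distilled (student) model.
cumbersome = nn.Sequential(nn.Linear(784, 1200), nn.ReLU(), nn.Linear(1200, 10))
distilled = nn.Sequential(nn.Linear(784, 100), nn.ReLU(), nn.Linear(100, 10))
optimizer = torch.optim.SGD(distilled.parameters(), lr=0.1)

def distillation_step(inputs):
    # Process the training inputs with the cumbersome model to get soft targets.
    with torch.no_grad():
        soft_targets = F.softmax(cumbersome(inputs) / temperature, dim=-1)
    # Train the distilled model so its soft output matches the cumbersome target.
    log_probs = F.log_softmax(distilled(inputs) / temperature, dim=-1)
    loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

print(distillation_step(torch.randn(32, 784)))  # loss for one illustrative batch
```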

14. GENERATING REPRESENTATIONS OF INPUT SEQUENCES USING NEURAL NETWORKS
    Invention application, pending (published)

    Publication number: US20150356401A1

    Publication date: 2015-12-10

    Application number: US14731326

    Filing date: 2015-06-04

    Applicant: Google Inc.

    CPC classification number: G06N3/02 G06F17/28 G06N3/0445 G06N3/0454

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating representations of input sequences. One of the methods includes obtaining an input sequence, the input sequence comprising a plurality of inputs arranged according to an input order; processing the input sequence using a first long short term memory (LSTM) neural network to convert the input sequence into an alternative representation for the input sequence; and processing the alternative representation for the input sequence using a second LSTM neural network to generate a target sequence for the input sequence, the target sequence comprising a plurality of outputs arranged according to an output order.

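    To make the two-LSTM structure in the abstract concrete, the sketch below encodes the input sequence with a first LSTM, uses the encoder's final state as the alternative representation, and generates the target sequence with a second LSTM conditioned on that state. It is illustrative only, not the patented implementation; the framework (PyTorch), the vocabulary size, the embedding and hidden dimensions, and the teacher-forced decoding are assumptions.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Illustrative encoder-decoder over token sequences (assumed sizes)."""

    def __init__(self, vocab_size=1000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, input_seq, target_seq):
        # First LSTM: convert the input sequence into an alternative representation
        # (its final hidden and cell states).
        _, state = self.encoder(self.embed(input_seq))
        # Second LSTM: generate the target sequence from that representation
        # (teacher forcing with the target tokens during training).
        decoded, _ = self.decoder(self.embed(target_seq), state)
        return self.out(decoded)  # scores over the output vocabulary per position

logits = Seq2Seq()(torch.randint(0, 1000, (4, 12)), torch.randint(0, 1000, (4, 9)))
# logits: (4, 9, 1000)
```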
