TRAINING A NEURAL NETWORK MODEL
    1.
    Invention Application

    Publication No.: WO2019106136A1

    Publication Date: 2019-06-06

    Application No.: PCT/EP2018/083102

    Application Date: 2018-11-30

    CPC classification number: G06N3/082

    Abstract: A concept for training a neural network model. The concept comprises receiving training data and test data, each comprising a set of annotated images. A neural network model is trained using the training data with an initial regularization parameter. Loss functions of the neural network for both the training data and the test data are used to modify the regularization parameter, and the neural network model is retrained using the modified regularization parameter. This process is iteratively repeated until the loss functions both converge. A system, method and a computer program product embodying this concept are disclosed.
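    The iterative scheme described above can be sketched as follows. This is a toy stand-in, not the patented implementation: a ridge-regression model plays the role of the neural network, and the loss-gap heuristic for updating the regularization parameter is an assumption.

```python
import numpy as np

# Toy stand-in: ridge regression as the "neural network model", with a
# heuristic rule (an assumption) that modifies the regularization
# parameter from the training and test losses, repeated until both
# loss values stop changing.

rng = np.random.default_rng(0)
X_train, X_test = rng.normal(size=(80, 5)), rng.normal(size=(40, 5))
w_true = rng.normal(size=5)
y_train = X_train @ w_true + 0.1 * rng.normal(size=80)
y_test = X_test @ w_true + 0.1 * rng.normal(size=40)

def fit(lam):
    # "Train" with regularization parameter lam (closed-form ridge).
    d = X_train.shape[1]
    return np.linalg.solve(X_train.T @ X_train + lam * np.eye(d),
                           X_train.T @ y_train)

def loss(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

lam = 1.0                      # initial regularization parameter
history = []
for _ in range(50):
    w = fit(lam)               # (re)train with the current parameter
    tr, te = loss(w, X_train, y_train), loss(w, X_test, y_test)
    history.append((tr, te))
    # Hypothetical update rule: a large test/train gap suggests
    # overfitting, so strengthen regularization; otherwise relax it.
    lam *= 1.5 if te > 1.2 * tr else 0.9
    if len(history) > 1 and \
            abs(history[-1][0] - history[-2][0]) < 1e-6 and \
            abs(history[-1][1] - history[-2][1]) < 1e-6:
        break                  # both loss curves have converged
```

    A real implementation would retrain the network by gradient descent at each step; only the loop structure (train, compare losses, modify parameter, repeat) mirrors the abstract.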

    COMPUTER-BASED SYSTEM AND COMPUTER-BASED METHOD

    Publication No.: WO2019097749A1

    Publication Date: 2019-05-23

    Application No.: PCT/JP2018/020251

    Application Date: 2018-05-21

    Abstract: A computer-based system trains a neural network by solving a double layer optimization problem. The system includes an input interface to receive an input to the neural network and labels of the input to the neural network; a processor to solve a double layer optimization to produce parameters of the neural network, and an output interface to output the parameters of the neural network. The double layer optimization includes an optimization of a first layer subject to an optimization of a second layer. The optimization of the first layer minimizes a difference between an output of the neural network processing the input and the labels of the input to the neural network, the optimization of the second layer minimizes a distance between a non-negative output vector of each layer and a corresponding input vector to each layer. The input vector of a current layer is a linear transformation of the non-negative output vector of the previous layer.
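    The second-layer optimization above has a simple closed form worth noting: minimizing the distance between a non-negative output vector and the linear transform of the previous layer's output is a projection onto the non-negative orthant, which coincides with a ReLU activation. A minimal sketch (variable names are illustrative):

```python
import numpy as np

# Lower-level problem of the double layer optimization: for a layer with
# input vector u = W @ x, solve  min_{a >= 0} ||a - u||^2.  The solution
# is the projection of u onto the non-negative orthant, i.e. ReLU(u).

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))        # linear transformation of the layer
x = rng.normal(size=3)             # previous layer's non-negative output
u = W @ x                          # input vector to the current layer

a_star = np.maximum(u, 0.0)        # closed-form minimizer over a >= 0

relu = np.where(u > 0, u, 0.0)     # what a ReLU activation would output
```

    The first-layer (outer) optimization over the network parameters, subject to this inner problem, is what the system's processor solves; the sketch only demonstrates why the inner problem recovers standard feed-forward behavior.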

    CONVOLUTIONAL NEURAL NETWORK COMPRESSION METHOD BASED ON PRUNING AND DISTILLATION

    Publication No.: WO2018223822A1

    Publication Date: 2018-12-13

    Application No.: PCT/CN2018/087063

    Application Date: 2018-05-16

    Inventor: 江帆 单羿

    CPC classification number: G06N3/04 G06N3/082

    Abstract: A convolutional neural network compression method (400) based on pruning and distillation, comprising: performing a pruning operation on an original convolutional neural network model to obtain a pruned model (S401); fine-tuning the parameters of the pruned model (S403); using the original convolutional neural network model as the teacher network of a distillation algorithm and the parameter-fine-tuned pruned model as the student network, and training the student network under the guidance of the teacher network according to the distillation algorithm (S405); and taking the student network trained by the distillation algorithm as the compressed convolutional neural network model (S407). By jointly applying two conventional network compression methods, the method compresses convolutional neural network models more effectively.
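    Steps S401 through S407 can be sketched with a toy linear "network". Everything below (magnitude-based pruning, the temperature-softened KL distillation loss, all names) is a common choice assumed for illustration, not the patent's specific implementation:

```python
import numpy as np

# Toy sketch of prune-then-distill. A single weight matrix stands in for
# the convolutional network; magnitude pruning and a KL distillation loss
# are assumed stand-ins for the patent's unspecified choices.

rng = np.random.default_rng(0)
W_teacher = rng.normal(size=(10, 4))    # original model (teacher network)
x = rng.normal(size=4)

# S401: prune — zero out the smallest-magnitude half of the weights.
threshold = np.quantile(np.abs(W_teacher), 0.5)
mask = np.abs(W_teacher) >= threshold
W_student = W_teacher * mask            # pruned model (student network)

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max())
    return e / e.sum()

# S405: distillation objective — divergence between the teacher's and the
# student's temperature-softened output distributions.
def distill_loss(Wt, Ws, x, T=2.0):
    p, q = softmax(Wt @ x, T), softmax(Ws @ x, T)
    return float(np.sum(p * np.log(p / q)))    # KL(teacher || student)

loss = distill_loss(W_teacher, W_student, x)
# S403/S405 would fine-tune W_student to reduce this loss; S407 takes the
# trained student network as the compressed model.
```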

    LEARNING AND DEPLOYMENT OF ADAPTIVE WIRELESS COMMUNICATIONS

    Publication No.: WO2018204632A1

    Publication Date: 2018-11-08

    Application No.: PCT/US2018/030876

    Application Date: 2018-05-03

    CPC classification number: G06N3/08 G06N3/04 G06N3/0445 G06N3/0454 G06N3/082

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training and deploying machine-learned communication over radio frequency (RF) channels. One of the methods includes: determining first information; using an encoder machine-learning network to process the first information and generate a first RF signal for transmission through a communication channel; determining a second RF signal that represents the first RF signal having been altered by transmission through the communication channel; using a decoder machine-learning network to process the second RF signal and generate second information as a reconstruction of the first information; calculating a measure of distance between the second information and the first information; and updating at least one of the encoder machine-learning network or the decoder machine-learning network based on the measure of distance between the second information and the first information.
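    The end-to-end pipeline in this method (encode, pass through channel, decode, measure distance) can be sketched as follows. The fixed QPSK constellation and nearest-neighbor decoder below are stand-ins assumed for illustration; the patent describes learned encoder and decoder networks updated from the distance measure:

```python
import numpy as np

# Sketch of the encode -> channel -> decode -> distance loop. A fixed
# QPSK mapping replaces the encoder machine-learning network and a
# nearest-neighbor rule replaces the decoder network (assumptions).

rng = np.random.default_rng(0)
constellation = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)

def encode(msgs):                  # "encoder": message index -> RF symbol
    return constellation[msgs]

def channel(sym, sigma=0.05):      # channel alters the transmitted signal
    noise = sigma * (rng.normal(size=sym.shape)
                     + 1j * rng.normal(size=sym.shape))
    return sym + noise

def decode(sym):                   # "decoder": nearest constellation point
    return np.argmin(np.abs(sym[:, None] - constellation[None, :]), axis=1)

first_info = rng.integers(0, 4, size=200)   # first information
tx = encode(first_info)                     # first RF signal
rx = channel(tx)                            # second RF signal (post-channel)
second_info = decode(rx)                    # reconstruction of first_info

# Measure of distance between reconstruction and original; the learned
# system would update encoder and/or decoder networks from this measure.
error_rate = float(np.mean(second_info != first_info))
```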

    IMPROVED SPARSE CONVOLUTION NEURAL NETWORK
    5.
    Invention Application (Pending, Published)

    Publication No.: WO2018073975A1

    Publication Date: 2018-04-26

    Application No.: PCT/JP2016/081973

    Application Date: 2016-10-21

    Inventor: DAULTANI, Vijay

    CPC classification number: G06N3/082

    Abstract: A computer-implemented information processing method for an inference phase of a convolution neural network, the method including steps of: generating a list of non-zero elements from a learned sparse kernel to be used for a convolution layer of the convolution neural network; when performing convolution on an input feature map, loading only elements of the input feature map which correspond to the non-zero elements of the generated list; and performing convolution arithmetic operations using the loaded elements of the input data map and the non-zero elements of the list, thereby reducing the number of operations necessary to generate an output feature map of the convolution layer.
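    The claimed inference steps map directly to code: enumerate the non-zero kernel entries once, then accumulate only those positions during convolution. A minimal sketch with illustrative names, checked against a dense reference:

```python
import numpy as np

# Sparse convolution via a list of non-zero kernel elements. With a 3x3
# kernel containing 3 non-zeros, only 3 of the 9 taps are loaded and
# multiplied per output position.

kernel = np.array([[0., 1., 0.],
                   [2., 0., 0.],
                   [0., 0., 3.]])      # learned sparse kernel
fmap = np.arange(25, dtype=float).reshape(5, 5)   # input feature map

# Step 1: generate the list of (row offset, col offset, value) for the
# non-zero elements of the kernel.
nonzero = [(i, j, kernel[i, j])
           for i in range(3) for j in range(3) if kernel[i, j] != 0.0]

# Steps 2-3: valid cross-correlation loading only the listed elements.
H = fmap.shape[0] - 2                  # output size for a 3x3 kernel
out = np.zeros((H, H))
for i, j, v in nonzero:
    out += v * fmap[i:i + H, j:j + H]

# Dense reference (all 9 taps) for comparison.
ref = np.zeros((H, H))
for i in range(3):
    for j in range(3):
        ref += kernel[i, j] * fmap[i:i + H, j:j + H]
```

    The output is identical to the dense computation while the number of multiply-accumulate operations scales with the number of non-zero kernel elements rather than the kernel size.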


    GENERATING LARGER NEURAL NETWORKS
    6.
    Invention Application (Pending, Published)

    Publication No.: WO2017083777A1

    Publication Date: 2017-05-18

    Application No.: PCT/US2016/061704

    Application Date: 2016-11-11

    Applicant: GOOGLE INC.

    CPC classification number: G06N3/082 G06N3/04 G06N3/0454

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a larger neural network from a smaller neural network. In one aspect, a method includes obtaining data specifying an original neural network; generating a larger neural network from the original neural network, wherein the larger neural network has a larger neural network structure including the plurality of original neural network units and a plurality of additional neural network units not in the original neural network structure; initializing values of the parameters of the original neural network units and the additional neural network units so that the larger neural network generates the same outputs from the same inputs as the original neural network; and training the larger neural network to determine trained values of the parameters of the original neural network units and the additional neural network units from the initialized values.
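    One way to satisfy the initialization property in the abstract (the larger network generating the same outputs as the original) is the Net2Net-style widening sketched below: duplicate a hidden unit and halve its outgoing weights. This construction is an assumption for illustration; it is not necessarily the patent's specific initialization:

```python
import numpy as np

# Function-preserving widening: copy one hidden unit of a tiny ReLU
# network and split its outgoing weight in half, so the larger network
# computes exactly the same function before training begins.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # input -> hidden (3 original units)
W2 = rng.normal(size=(2, 3))   # hidden -> output

def forward(Wa, Wb, x):
    h = np.maximum(Wa @ x, 0.0)    # ReLU hidden layer
    return Wb @ h

# Add an additional unit: duplicate hidden unit 0.
W1_big = np.vstack([W1, W1[0:1]])          # now 4 hidden units
W2_big = np.hstack([W2, W2[:, 0:1]])
W2_big[:, 0] *= 0.5                        # halve original outgoing weight
W2_big[:, -1] *= 0.5                       # ...and the duplicate's

x = rng.normal(size=4)
# forward(W1, W2, x) and forward(W1_big, W2_big, x) are identical, so
# training can proceed from initialized values without losing accuracy.
```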


    REDUCING IMAGE RESOLUTION IN DEEP CONVOLUTIONAL NETWORKS
    7.
    Invention Application (Pending, Published)

    Publication No.: WO2016176095A1

    Publication Date: 2016-11-03

    Application No.: PCT/US2016/028493

    Application Date: 2016-04-20

    CPC classification number: G06T3/4046 G06K9/4623 G06K9/627 G06K9/66 G06N3/082

    Abstract: A method of reducing image resolution in a deep convolutional network (DCN) includes dynamically selecting a reduction factor to be applied to an input image. The reduction factor can be selected at each layer of the DCN. The method also includes adjusting the DCN based on the reduction factor selected for each layer.
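    A thin sketch of the idea follows. The selection rule below is a made-up heuristic (the abstract does not disclose one); the per-layer reduction factor is applied here by strided slicing, and later layers are "adjusted" simply by accepting the smaller spatial size:

```python
import numpy as np

# Per-layer dynamic reduction factor for feature-map resolution.
# The smoothness-based selection rule is purely illustrative.

rng = np.random.default_rng(0)
fmap = rng.normal(size=(32, 32))       # input image / feature map

def select_reduction_factor(fm):
    # Hypothetical rule: reduce more aggressively when the map is smooth.
    smoothness = float(np.abs(np.diff(fm, axis=0)).mean())
    return 2 if smoothness < 1.5 else 1

sizes = [fmap.shape[0]]
for _ in range(3):                     # three layers of the network
    r = select_reduction_factor(fmap)  # factor chosen at each layer
    fmap = fmap[::r, ::r]              # apply the reduction factor
    sizes.append(fmap.shape[0])
```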


    LOW-FOOTPRINT ADAPTATION AND PERSONALIZATION FOR A DEEP NEURAL NETWORK
    9.
    Invention Application (Pending, Published)

    Publication No.: WO2015134294A1

    Publication Date: 2015-09-11

    Application No.: PCT/US2015/017872

    Application Date: 2015-02-27

    CPC classification number: G10L15/16 G06N3/082 G10L15/075

    Abstract: The adaptation and personalization of a deep neural network (DNN) model for automatic speech recognition is provided. An utterance which includes speech features for one or more speakers may be received in ASR tasks such as voice search or short message dictation. A decomposition approach may then be applied to an original matrix in the DNN model. In response to applying the decomposition approach, the original matrix may be converted into multiple new matrices which are smaller than the original matrix. A square matrix may then be added to the new matrices. Speaker-specific parameters may then be stored in the square matrix. The DNN model may then be adapted by updating the square matrix. This process may be applied to all of a number of original matrices in the DNN model. The adapted DNN model may include a reduced number of parameters than those received in the original DNN model.
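    The decomposition step can be sketched with a truncated SVD, a common choice for this kind of factorization (assumed here; the abstract does not name the decomposition). The original matrix becomes two smaller matrices with a small square matrix between them; only the square matrix holds speaker-specific parameters:

```python
import numpy as np

# Low-footprint adaptation sketch: factor an original DNN weight matrix
# into two smaller matrices via truncated SVD (an assumed decomposition),
# then insert a k x k square matrix that is the only per-speaker storage.

rng = np.random.default_rng(0)
W = rng.normal(size=(20, 30))      # original matrix in the DNN model
k = 5                              # decomposition rank

U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :k] * s[:k]               # 20 x k  (smaller than W)
B = Vt[:k]                         # k x 30  (smaller than W)

S = np.eye(k)                      # square matrix for speaker parameters;
                                   # identity leaves the model unchanged
W_adapted = A @ S @ B              # adaptation updates only k*k entries
```

    Storing A, B, and S takes 275 values versus 600 for W here, and per-speaker adaptation touches only the 25 entries of S, matching the reduced-parameter claim.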


    COMPUTED SYNAPSES FOR NEUROMORPHIC SYSTEMS
    10.
    Invention Application (Pending, Published)

    Publication No.: WO2015020802A3

    Publication Date: 2015-05-14

    Application No.: PCT/US2014/047858

    Application Date: 2014-07-23

    Applicant: QUALCOMM INC

    Inventor: RANGAN VENKAT

    CPC classification number: G06N3/08 G06N3/049 G06N3/063 G06N3/082

    Abstract: Methods and apparatus are provided for determining synapses in an artificial nervous system based on connectivity patterns. One example method generally includes determining, for an artificial neuron, an event has occurred; based on the event, determining one or more synapses with other artificial neurons based on a connectivity pattern associated with the artificial neuron; and applying a spike from the artificial neuron to the other artificial neurons based on the determined synapses. In this manner, the connectivity patterns (or parameters for determining such patterns) for particular neuron types, rather than the connectivity itself, may be stored. Using the stored information, synapses may be computed on the fly, thereby reducing memory consumption and increasing memory bandwidth. This also saves time during artificial nervous system updates.
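    The store-the-pattern-not-the-table idea can be sketched as follows. The seeded pseudo-random fan-out used as the connectivity pattern is an illustrative assumption; the point is that synapses are recomputed deterministically on each spike event instead of being read from stored synapse memory:

```python
import numpy as np

# Computed synapses: a connectivity pattern (here, a seeded pseudo-random
# fan-out per neuron) is stored instead of the synapse table itself, and
# the synapses are recomputed on the fly when a spike event occurs.

N_NEURONS = 100

def synapses_for(neuron_id, pattern_seed=42, fan_out=8):
    # Deterministic: the same neuron and pattern always yield the same
    # target set, so only (pattern_seed, fan_out) needs to be stored.
    rng = np.random.default_rng((pattern_seed, neuron_id))
    return rng.choice(N_NEURONS, size=fan_out, replace=False)

def on_spike(neuron_id, potentials, weight=0.1):
    # Event occurred: compute the synapses, then apply the spike from
    # this neuron to the determined target neurons.
    for target in synapses_for(neuron_id):
        potentials[target] += weight
    return potentials

potentials = on_spike(7, np.zeros(N_NEURONS))
```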

