Methods and apparatus for modulating the training of a neural device
    Invention Grant (In Force)

    Publication No.: US09542644B2

    Publication Date: 2017-01-10

    Application No.: US14079181

    Application Date: 2013-11-13

    CPC classification number: G06N3/08 G06N3/049

    Abstract: Methods and apparatus are provided for training a neural device having an artificial nervous system by modulating at least one training parameter during the training. One example method for training a neural device having an artificial nervous system generally includes observing the neural device in a training environment and modulating at least one training parameter based at least in part on the observing. For example, the training apparatus described herein may modify the neural device's internal learning mechanisms (e.g., spike rate, learning rate, neuromodulators, sensor sensitivity, etc.) and/or the training environment's stimuli (e.g., move a flame closer to the device, make the scene darker, etc.). In this manner, the speed with which the neural device is trained (i.e., the training rate) may be significantly increased compared to conventional neural device training systems.
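    The abstract describes a closed loop in which observations of the device drive changes both to its internal learning mechanisms and to the environment's stimuli. The following Python sketch illustrates one way such a loop could look; the class names, the error-based learning-rate rule, and the stimulus-distance adjustment are illustrative assumptions, not details from the patent.

```python
from dataclasses import dataclass
import random

# Hypothetical sketch of closed-loop modulation of training parameters.
# All names and dynamics here are illustrative placeholders, not the patented design.

@dataclass
class Observation:
    error: float        # how far the device's behavior is from the desired behavior
    responded: bool     # whether the device reacted to the stimulus at all

class Environment:
    """Training environment whose stimuli can be modulated (e.g., stimulus distance)."""
    def __init__(self):
        self.stimulus_distance = 1.0

    def observe(self, device) -> Observation:
        # Stand-in for real sensing of the device's behavior in the environment.
        error = max(0.0, 1.0 - device.skill + random.uniform(-0.05, 0.05))
        return Observation(error=error, responded=error < 0.9)

class Device:
    """Neural device with internal learning mechanisms the trainer can modulate."""
    def __init__(self):
        self.learning_rate = 0.01
        self.skill = 0.0

    def train_episode(self, env: Environment):
        # Stand-in for one episode of learning; stronger stimuli and higher
        # learning rates speed up improvement.
        self.skill = min(1.0, self.skill + self.learning_rate / env.stimulus_distance)

def modulated_training(device: Device, env: Environment, episodes=50, target_error=0.05):
    for _ in range(episodes):
        obs = env.observe(device)           # observe the device in the training environment
        if obs.error > 2 * target_error:
            device.learning_rate *= 1.1     # push the internal learning mechanism harder
        else:
            device.learning_rate *= 0.9     # ease off as behavior converges
        if not obs.responded:
            env.stimulus_distance *= 0.8    # modulate the stimulus (move it "closer")
        device.train_episode(env)
    return device

if __name__ == "__main__":
    trained = modulated_training(Device(), Environment())
    print(f"final skill: {trained.skill:.2f}")
```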

    Depth-first convolution in deep neural networks

    Publication No.: US11487998B2

    Publication Date: 2022-11-01

    Application No.: US16443695

    Application Date: 2019-06-17

    Abstract: In one embodiment, a depth-first deep convolutional network (DCN) includes a first convolutional layer having a first first-layer kernel and adapted to convolve a first input, and a second convolutional layer having a first second-layer kernel and adapted to convolve a second-layer input. A method for the DCN includes initiating convolution, in the first convolutional layer, of the first input with the first first-layer kernel to generate a value strip of the second-layer input and, prior to completion of the convolution in the first convolutional layer, initiating convolution, in the second convolutional layer, of the second-layer input with the first second-layer kernel to generate a value strip for a third layer.
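    The abstract describes interleaving the two layers: as soon as the first layer has produced a strip of values, the second layer begins convolving it rather than waiting for the first layer to finish. The following Python sketch demonstrates that scheduling on a 1-D signal; the kernel sizes, strip width, and function names are illustrative assumptions, not the patented implementation.

```python
import numpy as np

# Hypothetical sketch of depth-first convolution scheduling on a 1-D signal.
# Kernel values, shapes, and the strip size are illustrative, not from the patent.

def conv1d_valid(x, k):
    """Plain 'valid' 1-D convolution (cross-correlation, as in most DCNs)."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

def depth_first_two_layers(x, k1, k2, strip=4):
    """Interleave layer-1 and layer-2 convolution instead of finishing layer 1 first.

    As soon as enough layer-1 output values exist to cover one layer-2 kernel
    window, a strip of layer-2 output is produced, so layer 1's full output
    never has to be materialized before layer 2 starts.
    """
    layer1_out = []   # value strips of the second-layer input, grown incrementally
    layer2_out = []   # value strips for the third layer
    produced2 = 0     # how many layer-2 outputs have been emitted so far

    n1 = len(x) - len(k1) + 1                  # total layer-1 output length
    for start in range(0, n1, strip):
        # Produce the next strip of layer-1 output (the second-layer input).
        for i in range(start, min(start + strip, n1)):
            layer1_out.append(np.dot(x[i:i + len(k1)], k1))

        # Before layer 1 completes, convolve whatever layer-2 windows are ready.
        ready = len(layer1_out) - len(k2) + 1
        for j in range(produced2, max(produced2, ready)):
            layer2_out.append(np.dot(np.array(layer1_out[j:j + len(k2)]), k2))
        produced2 = max(produced2, ready)

    return np.array(layer2_out)

if __name__ == "__main__":
    x = np.arange(16.0)
    k1 = np.array([1.0, -1.0, 0.5])
    k2 = np.array([0.25, 0.25])
    # The depth-first schedule matches the conventional layer-by-layer result.
    expected = conv1d_valid(conv1d_valid(x, k1), k2)
    assert np.allclose(depth_first_two_layers(x, k1, k2), expected)
    print("depth-first result matches layer-by-layer convolution")
```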

    Compressed caching in a virtual memory system
    Invention Grant (In Force)

    Publication No.: US09344114B1

    Publication Date: 2016-05-17

    Application No.: US14832739

    Application Date: 2015-08-21

    Abstract: Data compression systems, methods, and computer program products are disclosed. For each successive input word of an input stream, it is determined whether the input word matches an entry in a lookback table. The lookback table is updated in response to the input word. Input words may be of a number of data types, including zero runs and full or partial matches with an entry in the lookback table. A codeword is generated by entropy encoding a data type corresponding to the input word. The lookback table may be indexed by the position of the input word in the input stream.
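    The abstract describes classifying each input word against a lookback table (zero, full match, partial match, or miss) and then entropy-encoding the resulting data types. The Python sketch below illustrates only the classification step; the table size, the high/low-bit split for partial matches, and the use of plain tags in place of entropy coding are illustrative assumptions.

```python
# Hypothetical sketch of the per-word classification described in the abstract.
# Table size, bit split, and tag values are illustrative assumptions.

ZERO, MATCH, PARTIAL, MISS = range(4)   # data types assigned to each input word
TABLE_SIZE = 16

def classify_words(words):
    """Classify each input word against a lookback table indexed by stream position.

    Returns (data_type, payload) pairs; a real encoder would entropy-encode the
    data-type stream and pack the payloads (low bits or literal words) separately.
    """
    table = [0] * TABLE_SIZE
    out = []
    for pos, word in enumerate(words):
        idx = pos % TABLE_SIZE                   # lookback table indexed by position
        entry = table[idx]
        if word == 0:
            out.append((ZERO, None))             # zero word (candidate for a zero run)
        elif word == entry:
            out.append((MATCH, None))            # full match with the table entry
        elif (word >> 10) == (entry >> 10):
            out.append((PARTIAL, word & 0x3FF))  # partial match: only low bits needed
        else:
            out.append((MISS, word))             # no match: emit the literal word
        table[idx] = word                        # update the lookback table
    return out

if __name__ == "__main__":
    sample = [0, 0, 0x12345678, 0xDEADBEEF] * 8   # periodic data produces matches
    types = [t for t, _ in classify_words(sample)]
    print("zero:", types.count(ZERO), "match:", types.count(MATCH),
          "partial:", types.count(PARTIAL), "miss:", types.count(MISS))
```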
