Methods and apparatus for spiking neural computation

    Publication number: US09367797B2

    Publication date: 2016-06-14

    Application number: US13369080

    Filing date: 2012-02-08

    IPC classification: G06N3/04 G06N3/08

    CPC classification: G06N3/049 G06N3/08

    Abstract: Certain aspects of the present disclosure provide methods and apparatus for spiking neural computation of general linear systems. One example aspect is a neuron model that codes information in the relative timing between spikes. However, synaptic weights are unnecessary. In other words, a connection may either exist (significant synapse) or not (insignificant or non-existent synapse). Certain aspects of the present disclosure use binary-valued inputs and outputs and do not require post-synaptic filtering. However, certain aspects may involve modeling of connection delays (e.g., dendritic delays). A single neuron model may be used to compute any general linear transformation x=AX+BU to any arbitrary precision. This neuron model may also be capable of learning, such as learning input delays (e.g., corresponding to scaling values) to achieve a target output delay (or output value). Learning may also be used to determine a logical relation of causal inputs.
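    The abstract's central idea, that a connection delay can stand in for a synaptic weight, can be illustrated with a toy sketch. This is not the patent's actual neuron model: the exponential code x = e^(-q·d), the parameter q, and all function names below are assumptions made purely for illustration.

    ```python
    import math

    def encode(x, q=1.0):
        """Encode a positive value as a spike delay (assumed code x = exp(-q*d)):
        larger values spike sooner."""
        assert x > 0
        return -math.log(x) / q

    def decode(d, q=1.0):
        """Recover the value encoded by a spike delay."""
        return math.exp(-q * d)

    def connection_delay(w, q=1.0):
        """A multiplicative coefficient w in (0, 1] realized purely as an added
        delay: delaying a spike by -log(w)/q scales its decoded value by w."""
        assert 0 < w <= 1
        return -math.log(w) / q

    def neuron_output_delay(input_delays, conn_delays, q=1.0):
        """Toy weightless neuron: each input arrives after its own delay plus its
        connection delay; the output spike delay encodes the weighted sum."""
        total = sum(decode(d + dc, q) for d, dc in zip(input_delays, conn_delays))
        return encode(total, q)

    # Example: compute y = 0.5*0.4 + 0.25*0.8 = 0.4 using only spike timing.
    d_in = [encode(0.4), encode(0.8)]
    d_conn = [connection_delay(0.5), connection_delay(0.25)]
    y = decode(neuron_output_delay(d_in, d_conn))  # y ≈ 0.4
    ```

    Under this assumed code, "learning input delays corresponding to scaling values" amounts to adjusting each connection's delay until the decoded output matches a target, without any explicit weight variable.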

    2. Methods and apparatus for spiking neural computation
    Granted patent (in force)

    Publication number: US09111225B2

    Publication date: 2015-08-18

    Application number: US13368994

    Filing date: 2012-02-08

    IPC classification: G06N3/08 G06N3/04

    CPC classification: G06N3/049 G06N3/08

    Abstract: Certain aspects of the present disclosure provide methods and apparatus for spiking neural computation of general linear systems. One example aspect is a neuron model that codes information in the relative timing between spikes. However, synaptic weights are unnecessary. In other words, a connection may either exist (significant synapse) or not (insignificant or non-existent synapse). Certain aspects of the present disclosure use binary-valued inputs and outputs and do not require post-synaptic filtering. However, certain aspects may involve modeling of connection delays (e.g., dendritic delays). A single neuron model may be used to compute any general linear transformation x=AX+BU to any arbitrary precision. This neuron model may also be capable of learning, such as learning input delays (e.g., corresponding to scaling values) to achieve a target output delay (or output value). Learning may also be used to determine a logical relation of causal inputs.


    3. METHODS AND APPARATUS FOR SPIKING NEURAL COMPUTATION
    Patent application (in force)

    Publication number: US20130204820A1

    Publication date: 2013-08-08

    Application number: US13369080

    Filing date: 2012-02-08

    IPC classification: G06N3/08

    CPC classification: G06N3/049 G06N3/08

    Abstract: Certain aspects of the present disclosure provide methods and apparatus for spiking neural computation of general linear systems. One example aspect is a neuron model that codes information in the relative timing between spikes. However, synaptic weights are unnecessary. In other words, a connection may either exist (significant synapse) or not (insignificant or non-existent synapse). Certain aspects of the present disclosure use binary-valued inputs and outputs and do not require post-synaptic filtering. However, certain aspects may involve modeling of connection delays (e.g., dendritic delays). A single neuron model may be used to compute any general linear transformation x=AX+BU to any arbitrary precision. This neuron model may also be capable of learning, such as learning input delays (e.g., corresponding to scaling values) to achieve a target output delay (or output value). Learning may also be used to determine a logical relation of causal inputs.


    METHODS AND APPARATUS FOR SPIKING NEURAL COMPUTATION

    Publication number: US20130204819A1

    Publication date: 2013-08-08

    Application number: US13368994

    Filing date: 2012-02-08

    IPC classification: G06F15/18

    CPC classification: G06N3/049 G06N3/08

    Abstract: Certain aspects of the present disclosure provide methods and apparatus for spiking neural computation of general linear systems. One example aspect is a neuron model that codes information in the relative timing between spikes. However, synaptic weights are unnecessary. In other words, a connection may either exist (significant synapse) or not (insignificant or non-existent synapse). Certain aspects of the present disclosure use binary-valued inputs and outputs and do not require post-synaptic filtering. However, certain aspects may involve modeling of connection delays (e.g., dendritic delays). A single neuron model may be used to compute any general linear transformation x=AX+BU to any arbitrary precision. This neuron model may also be capable of learning, such as learning input delays (e.g., corresponding to scaling values) to achieve a target output delay (or output value). Learning may also be used to determine a logical relation of causal inputs.

    7. METHOD AND APPARATUS FOR UNSUPERVISED TRAINING OF INPUT SYNAPSES OF PRIMARY VISUAL CORTEX SIMPLE CELLS AND OTHER NEURAL CIRCUITS
    Patent application (in force)

    Publication number: US20120303566A1

    Publication date: 2012-11-29

    Application number: US13115154

    Filing date: 2011-05-25

    Applicant: Vladimir Aparin

    Inventor: Vladimir Aparin

    IPC classification: G06N3/08 G06N3/063

    CPC classification: G06N3/063

    Abstract: Certain aspects of the present disclosure present a technique for unsupervised training of input synapses of primary visual cortex (V1) simple cells and other neural circuits. The proposed unsupervised training method utilizes simple neuron models for both Retinal Ganglion Cell (RGC) and V1 layers. The model simply adds the weighted inputs of each cell, wherein the inputs can have positive or negative values. The resulting weighted sums of inputs represent activations that can also be positive or negative. In an aspect of the present disclosure, the weights of each V1 cell can be adjusted depending on a sign of corresponding RGC output and a sign of activation of that V1 cell in the direction of increasing the absolute value of the activation. The RGC-to-V1 weights can be positive and negative for modeling ON and OFF RGCs, respectively.
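    The sign-based update rule described in the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation; the learning rate, function names, and example values are all assumptions.

    ```python
    def v1_update(weights, rgc_outputs, lr=0.01):
        """Unsupervised sign-based update: move each weight in the direction
        sign(input) * sign(activation), which increases |activation|."""
        activation = sum(w * x for w, x in zip(weights, rgc_outputs))
        s_act = 1.0 if activation >= 0 else -1.0
        return [w + lr * s_act * (1.0 if x >= 0 else -1.0)
                for w, x in zip(weights, rgc_outputs)]

    # Mixed-sign weights (modeling ON and OFF RGCs) and mixed-sign inputs.
    w0 = [0.1, -0.2, 0.05]
    x = [1.0, -0.5, 0.3]
    a_before = sum(w * xi for w, xi in zip(w0, x))
    w1 = v1_update(w0, x)
    a_after = sum(w * xi for w, xi in zip(w1, x))
    # |a_after| > |a_before|: the update strengthens the cell's response.
    ```

    Each weight change contributes lr * sign(x_i) * x_i * sign(a) = lr * |x_i| * sign(a) to the activation, so every step moves the activation away from zero, which is exactly the "increasing the absolute value of the activation" behavior the abstract describes.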


    9. HIGH-SPEED HIGH-POWER SEMICONDUCTOR DEVICES
    Patent application (in force)

    Publication number: US20120211812A1

    Publication date: 2012-08-23

    Application number: US13103918

    Filing date: 2011-05-09

    IPC classification: H01L29/94 H01L21/8238

    Abstract: High-speed high-power semiconductor devices are disclosed. In an exemplary design, a high-speed high-power semiconductor device includes a source, a drain to provide an output signal, and an active gate to receive an input signal. The semiconductor device further includes at least one field gate located between the active gate and the drain, at least one shallow trench isolation (STI) strip formed transverse to the at least one field gate, and at least one drain active strip formed parallel to, and alternating with, the at least one STI strip. The semiconductor device may be modeled by a combination of an active FET and a MOS varactor. The active gate controls the active FET, and the at least one field gate controls the MOS varactor. The semiconductor device has a low on resistance and can handle a high voltage.
