1. Neural semiconductor chip and neural networks incorporated therein
    Invention Grant (Expired)

    Publication No.: US5717832A

    Publication Date: 1998-02-10

    Application No.: US488443

    Filing Date: 1995-06-07

    CPC classification number: G06K9/00986 G06K9/6271 G06N3/063

    Abstract: A base neural semiconductor chip (10) including a neural network or unit (11(#)). The neural network (11(#)) has a plurality of neuron circuits fed by different buses transporting data such as the input vector data, set-up parameters, and control signals. Each neuron circuit (11) includes logic for generating local result signals of the "fire" type (F) and a local output signal (NOUT) of the distance or category type on respective buses (NR-BUS, NOUT-BUS). An OR circuit (12) performs an OR function over all corresponding local result and output signals to generate respective first global result (R*) and output (OUT*) signals on respective buses (R*-BUS, OUT*-BUS), which are merged in an on-chip common communication bus (COM*-BUS) shared by all neuron circuits of the chip. In a multi-chip network, an additional OR function is performed between all corresponding first global result and output signals (which are intermediate signals) to generate second global result (R**) and output (OUT**) signals, preferably by dotting onto an off-chip common communication bus (COM**-BUS) in the chip's driver block (19). This latter bus is shared by all the base neural network chips connected to it, so that a neural network of the desired size can be incorporated. In the chip, a multiplexer (21) may select either the intermediate output or the global output signal to be fed back, via a feedback bus (OR-BUS), to all neuron circuits of the neural network, depending on whether the chip is used in a single-chip or multi-chip environment. The feedback signal is the result of a collective processing of all the local output signals.

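The two-level OR reduction described in the abstract can be sketched in a few lines of Python; the function and variable names below are illustrative assumptions, not identifiers from the patent.

```python
# Hedged sketch of the OR reduction above: local "fire" results are ORed
# on-chip into a first global result R*, and the R* signals of several chips
# are ORed again (modeling the dotting onto COM**-BUS) into R**.

def chip_global_result(local_fires):
    # On-chip OR circuit (12): R* is asserted if any neuron on the chip fired.
    return any(local_fires)

def network_global_result(chips):
    # Multi-chip OR: R** is asserted if any chip's intermediate R* is asserted.
    return any(chip_global_result(fires) for fires in chips)

def feedback_signal(chips, multi_chip):
    # The multiplexer (21) selects which signal drives the OR-BUS feedback:
    # the chip-level result in single-chip mode, the network-level one otherwise.
    return network_global_result(chips) if multi_chip else chip_global_result(chips[0])
```

In single-chip mode only the first chip's own R* is fed back; in multi-chip mode the collectively ORed R** is.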

2. Daisy chain circuit for serial connection of neuron circuits
    Invention Grant (Expired)

    Publication No.: US5710869A

    Publication Date: 1998-01-20

    Application No.: US485337

    Filing Date: 1995-06-07

    CPC classification number: G06N3/063

    Abstract: Each daisy chain circuit is serially connected to the two adjacent neuron circuits, so that all the neuron circuits form a chain. The daisy chain circuit distinguishes between the two possible states of the neuron circuit (engaged or free) and identifies the first free (or "ready to learn") neuron circuit in the chain, based on the respective values of the input (DCI) and output (DCO) signals of the daisy chain circuit. The ready-to-learn neuron circuit is the only neuron circuit of the neural network whose daisy chain input and output signals are complementary to each other. The daisy chain circuit includes a 1-bit register (601) controlled by a store enable signal (ST), which is active at initialization or, during the learning phase, when a new neuron circuit is engaged. At initialization, all the daisy registers of the chain are forced to a first logic value. The DCI input of the first daisy chain circuit in the chain is connected to a second logic value, so that after initialization the first neuron circuit is the ready-to-learn one. In the learning phase, the contents of the ready-to-learn neuron's 1-bit daisy register are set to the second logic value by the store enable signal; the neuron is then said to be "engaged". As neurons are engaged, each subsequent neuron circuit in the chain becomes the next ready-to-learn neuron circuit.

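The DCI/DCO mechanism above reduces to a short simulation; the class, function names, and the concrete encoding of the two logic values are illustrative assumptions, not the patent's.

```python
# Minimal sketch of the daisy-chain "ready to learn" mechanism described above.

FREE, ENGAGED = 0, 1  # the two logic values held in each 1-bit daisy register

class Neuron:
    def __init__(self):
        self.reg = FREE  # daisy register, forced to the first value at init

def dco(neuron, dci):
    # DCO propagates DCI only once the neuron is engaged.
    return dci if neuron.reg == ENGAGED else FREE

def ready_to_learn(chain):
    """The ready-to-learn neuron is the only one whose DCI and DCO differ."""
    dci = ENGAGED  # the first circuit's DCI is tied to the second logic value
    for n in chain:
        out = dco(n, dci)
        if dci != out:
            return n
        dci = out
    return None  # every neuron in the chain is engaged

def engage(chain):
    # Learning phase: the store enable signal sets the ready neuron's register.
    n = ready_to_learn(chain)
    if n is not None:
        n.reg = ENGAGED
    return n
```

After each engagement the complementary DCI/DCO pair moves one position down the chain, which is exactly the "next ready-to-learn neuron" behavior the abstract describes.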

3. Neuron circuit
    Invention Grant (Expired)

    Publication No.: US5621863A

    Publication Date: 1997-04-15

    Application No.: US481591

    Filing Date: 1995-06-07

    CPC classification number: G06K9/6272 G06K9/00986 G06N3/063

    Abstract: In a neural network comprised of a plurality of neuron circuits, an improved neuron circuit that generates local result signals, e.g. of the fire type, and a local output signal of the distance or category type. The neuron circuit is connected to buses that transport input data (e.g. the input category) and control signals. A multi-norm distance evaluation circuit calculates the distance D between the input vector and a prototype vector stored in an R/W memory circuit. A distance compare circuit compares this distance D with either the stored prototype vector's actual influence field or the lower limit thereof to generate first and second comparison signals. An identification circuit processes the comparison signals, the input category signal, the local category signal, and a feedback signal to generate local result signals that represent the neuron circuit's response to the input vector. A minimum distance determination circuit determines the minimum distance Dmin among all the distances calculated by all the neuron circuits of the neural network and generates a local output signal of the distance type. The circuit may be used to search and sort categories. The feedback signal is collectively generated by all the neuron circuits by ORing all the local distances/categories. A daisy chain circuit is serially connected to the corresponding daisy chain circuits of the two adjacent neuron circuits to chain the neurons together. The daisy chain circuit also determines whether the neuron circuit state is free or engaged. Finally, context circuitry enables or inhibits the neuron's participation with other neuron circuits in the generation of the feedback signal.

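A hedged sketch of three operations the abstract enumerates: multi-norm distance evaluation, the influence-field comparison (the "fire" test), and the network-wide minimum-distance search. The function names and the particular choice of L1/Lsup norms are assumptions for illustration.

```python
# Illustrative model (not the patent's circuit) of distance evaluation,
# influence-field comparison, and minimum-distance determination.

def distance(v, prototype, norm="L1"):
    # Multi-norm evaluation: sum of absolute differences (L1) or the
    # largest component difference (Lsup).
    diffs = [abs(a - b) for a, b in zip(v, prototype)]
    return sum(diffs) if norm == "L1" else max(diffs)

def fires(v, prototype, aif, norm="L1"):
    # A neuron "fires" when the input vector falls inside the prototype's
    # actual influence field (AIF).
    return distance(v, prototype, norm) < aif

def min_distance(v, prototypes, norm="L1"):
    # Network-wide search for Dmin over all neurons' calculated distances
    # (done collectively, bit-serially, by ORing in the hardware).
    return min(distance(v, p, norm) for p in prototypes)
```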

4. Neuron architecture having a dual structure and neural networks incorporating the same
    Invention Grant (Expired)

    Publication No.: US06502083B1

    Publication Date: 2002-12-31

    Application No.: US09470458

    Filing Date: 1999-12-22

    CPC classification number: G06K9/6276 G06N3/063

    Abstract: The improved neuron is connected to input buses which transport input data and control signals. It basically consists of a computation block, a register block, an evaluation block, and a daisy chain block. All of these blocks except the computation block have a substantially symmetric construction. Registers are used to store data: the local norm and context, the distance, the AIF value, and the category. The improved neuron further needs some R/W memory capacity, which may be placed either inside the neuron or outside it. The evaluation circuit is connected to an output bus to generate global signals thereon. The daisy chain block allows the improved neuron to be chained with others to form an artificial neural network (ANN). The improved neuron may work either as a single neuron (single mode) or as two independent neurons (dual mode). In the latter case, the computation block, which is common to the two dual neurons, must operate sequentially to service one neuron after the other. The selection between the two modes (single/dual) is made by the user, who stores a specific logic value in a dedicated register of the control logic circuitry in each improved neuron.

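The single/dual mode selection can be sketched roughly as follows, assuming (purely for illustration) that dual mode splits the register space in two and time-multiplexes the one shared computation block; none of these names come from the patent.

```python
# Hedged sketch of the single/dual-mode neuron: one common computation block
# services either one register set or, sequentially, two.

class DualNeuron:
    def __init__(self, dual_mode=False):
        # The mode is user-selected via a dedicated register in the real design.
        self.dual_mode = dual_mode
        # Dual mode: the register space is split between two half-neurons.
        self.register_sets = [{}, {}] if dual_mode else [{}]

    def evaluate(self, compute, input_vec):
        # The shared computation block operates sequentially, servicing one
        # (half-)neuron after the other; one result per active neuron.
        return [compute(regs, input_vec) for regs in self.register_sets]
```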

5. Implementing automatic learning according to the K nearest neighbor mode in artificial neural networks
    Invention Grant (Active)

    Publication No.: US06377941B1

    Publication Date: 2002-04-23

    Application No.: US09338450

    Filing Date: 1999-06-22

    CPC classification number: G06K9/6271 G06N3/063 G06N3/08

    Abstract: A method of achieving automatic learning of an input vector presented to an artificial neural network (ANN) formed by a plurality of neurons, using the K nearest neighbor (KNN) mode. Upon providing an input vector to be learned to the ANN, a Write component operation is performed to store the input vector components in the first available free neuron of the ANN. Then, a Write category operation is performed by assigning a category defined by the user to the input vector. Next, a test is performed to determine whether this category matches the categories of the nearest prototypes, i.e. those located at the minimum distance. If it matches, this first free neuron is not engaged. Otherwise, it is engaged by assigning the user-defined category to it. As a result, the input vector becomes the new prototype with that category associated with it. Further described is a circuit which automatically retains the first free neuron of the ANN for learning.

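The learning decision above reduces to a short routine. Representing the ANN as a list of (prototype, category) pairs, and the function names, are illustrative assumptions:

```python
# Hedged sketch of the KNN-mode automatic learning flow described above.

def learn(ann, input_vec, category, distance):
    """Engage the first free neuron only if no nearest prototype already
    carries the user-assigned category; return True when engaged."""
    if ann:
        dists = [distance(input_vec, p) for p, _ in ann]
        dmin = min(dists)
        nearest_cats = {c for (p, c), d in zip(ann, dists) if d == dmin}
        if category in nearest_cats:
            return False  # category matches: the free neuron is not engaged
    # Otherwise the free neuron is engaged: the input vector becomes a new
    # prototype with the user-defined category.
    ann.append((input_vec, category))
    return True
```

A vector whose nearest prototypes already answer with the right category is thus never stored twice, which is what keeps the network from wasting neurons on redundant prototypes.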

6. Method and circuits to virtually increase the number of prototypes in artificial neural networks
    Invention Grant (Expired)

    Publication No.: US07254565B2

    Publication Date: 2007-08-07

    Application No.: US10137969

    Filing Date: 2002-05-03

    CPC classification number: G06K9/6276 G06N3/063

    Abstract: An improved Artificial Neural Network (ANN) is disclosed that comprises a conventional ANN, a database block, and a compare and update circuit. The conventional ANN is formed by a plurality of neurons, each neuron having a prototype memory dedicated to storing a prototype and a distance evaluator to evaluate the distance between the input pattern presented to the ANN and the prototype stored therein. The database block holds: all the prototypes, arranged in slices, each slice capable of storing up to a maximum number of prototypes; the input patterns or queries to be presented to the ANN; and the distances resulting from the evaluation performed during the recognition/classification phase. The compare and update circuit compares each distance with the distance previously found for the same input pattern and updates, or not, the previously stored distance accordingly.

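The slice-based scheme above can be sketched as a loop: each slice of prototypes is loaded into the physical ANN in turn, and a compare-and-update step keeps the best distance found so far. The function signature and data layout are illustrative assumptions.

```python
# Hedged sketch of virtually increasing the prototype count: prototypes beyond
# the physical capacity live in a database of slices, evaluated slice by slice.

def classify(slices, query, distance, capacity):
    best = (float("inf"), None)  # (best distance, its category) so far
    for slice_ in slices:
        assert len(slice_) <= capacity  # a slice must fit in the physical ANN
        for prototype, category in slice_:  # evaluated in parallel in hardware
            d = distance(query, prototype)
            if d < best[0]:  # the compare and update circuit
                best = (d, category)
    return best
```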

7. Parallel Pattern Detection Engine

    Publication No.: US20070150621A1

    Publication Date: 2007-06-28

    Application No.: US11682547

    Filing Date: 2007-03-06

    CPC classification number: G06K9/6202 G06K9/00986

    Abstract: A parallel pattern detection engine (PPDE) comprises multiple processing units (PUs) customized to perform various modes of pattern recognition. The PUs are loaded with different patterns, and the input data to be matched is provided to the PUs in parallel. Each pattern has an Opcode that defines what action to take when a particular datum in the input data stream either matches or does not match the corresponding data being compared during a clock cycle. The PUs communicate selected information so that they may be cascaded to enable longer patterns to be matched or to allow more patterns to be processed in parallel for a particular input data stream.
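The per-cycle, per-PU comparison can be sketched as follows; the two opcodes used here ("must match" and "wildcard") and all names are simplifying assumptions, not the patent's opcode set, and the naive restart on mismatch ignores overlapping matches that a real PU's opcode logic could handle.

```python
# Illustrative sketch of processing units comparing one input byte per clock
# cycle against their stored patterns, all PUs fed in parallel.

MATCH, ANY = "match", "any"  # assumed, simplified opcodes

class PU:
    def __init__(self, pattern):
        self.pattern = pattern  # list of (byte, opcode) pairs
        self.pos = 0

    def step(self, byte):
        """Consume one input byte; return True when the full pattern matched."""
        expected, op = self.pattern[self.pos]
        if op == ANY or byte == expected:
            self.pos += 1
            if self.pos == len(self.pattern):
                self.pos = 0
                return True
        else:
            self.pos = 0  # mismatch: restart per this sketch's simplification
        return False

def scan(pus, data):
    # The input stream is broadcast to all PUs in parallel; each hit reports
    # the detecting PU's identification and the stream position.
    hits = []
    for i, byte in enumerate(data):
        hits += [(pu_id, i) for pu_id, pu in enumerate(pus) if pu.step(byte)]
    return hits
```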

8. Intrusion detection using a network processor and a parallel pattern detection engine
    Invention Application (Active)

    Publication No.: US20050154916A1

    Publication Date: 2005-07-14

    Application No.: US10756904

    Filing Date: 2004-01-14

    CPC classification number: H04L63/1416 H04L63/1441

    Abstract: An intrusion detection system (IDS) comprises a network processor (NP) coupled to a memory unit for storing programs and data. The NP is also coupled to one or more parallel pattern detection engines (PPDEs) which provide high-speed parallel detection of patterns in an input data stream. Each PPDE comprises many processing units (PUs), each designed to store intrusion signatures as a sequence of data with selected operation codes. The PUs have configuration registers for selecting modes of pattern recognition. Each PU compares one byte at each clock cycle. If a sequence of bytes from the input stream matches a stored pattern, the identification of the PU detecting the pattern is output along with any applicable comparison data. By storing intrusion signatures in many parallel PUs, the IDS can process network data at the NP processing speed. PUs may be cascaded to increase intrusion coverage or to detect long intrusion signatures.


9. Circuits and method for shaping the influence field of neurons and neural networks resulting therefrom
    Invention Grant (Expired)

    Publication No.: US06347309B1

    Publication Date: 2002-02-12

    Application No.: US09223478

    Filing Date: 1998-12-30

    CPC classification number: G06K9/6271 G06N3/063

    Abstract: The improved neural network of the present invention combines a dedicated logic block with a conventional neural network that, based upon a mapping of the input space, classifies input data by computing the distance between said input data and the prototypes memorized therein. The improved neural network is able to classify input data, for instance represented by a vector A, even when some of its components are noisy or unknown during either the learning or the recognition phase. To that end, influence fields of various and different shapes are created for each neuron of the conventional neural network. The logic block transforms at least some of the n components (A1, . . . , An) of the input vector A into the m components (V1, . . . , Vm) of a network input vector V according to a linear or non-linear transform function F. In turn, vector V is applied as the input data to said conventional neural network. The transform function F is such that certain components of vector V are not modified, e.g. Vk=Aj, while other components are transformed as mentioned above, e.g. Vi=Fi(A1, . . . , An). In addition, one (or more) component of vector V can be used to compensate for an offset that is present in the distance evaluation of vector V. Because the logic block is placed in front of the conventional neural network, no modification of the network itself is required.

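The front-end transform F above can be sketched directly: some components of V pass through unchanged (Vk = Aj) while others are linear or non-linear combinations of the A components (Vi = Fi(A1..An)). The particular component functions below are arbitrary examples, not ones from the patent.

```python
# Hedged sketch of the dedicated logic block: it maps input vector A to the
# network input vector V that the conventional distance-based ANN actually sees.

def transform(a, fs):
    """Apply the transform function F, given one function per component of V."""
    return [f(a) for f in fs]

# Example F with one pass-through component and one mixed component,
# which is what lets the influence field take a reshaped form in A-space.
fs = [
    lambda a: a[0],               # unmodified component: V1 = A1
    lambda a: (a[0] + a[1]) / 2,  # transformed component: V2 = F2(A1, A2)
]
```

Because the block sits in front of the network, the conventional ANN keeps computing ordinary distances on V while the effective influence fields in the original A-space take the reshaped forms.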

10. METHOD FOR FORMING A THREE-DIMENSIONAL STRUCTURE OF METAL-INSULATOR-METAL TYPE
    Invention Application (Active)

    Publication No.: US20110227194A1

    Publication Date: 2011-09-22

    Application No.: US13052262

    Filing Date: 2011-03-21

    CPC classification number: H01L23/5223 H01L28/60 H01L2924/0002 H01L2924/00

    Abstract: A method for forming a capacitive structure in a metal level of an interconnection stack including a succession of metal levels and via levels, including the steps of: forming, in the metal level, at least one conductive track in which a trench is defined; conformally forming an insulating layer on the structure; forming, in the trench, a conductive material; and planarizing the structure.

