Method and circuits for scaling images using neural networks
    11. Invention Grant (In Force)

    Publication Number: US07352918B2

    Publication Date: 2008-04-01

    Application Number: US10321166

    Application Date: 2002-12-17

    CPC classification number: G06T3/4046

    Abstract: An artificial neural network (ANN) based system that is adapted to process an input pattern to generate an output pattern related thereto having a different number of components than the input pattern. The system (26) is comprised of an ANN (27) and a memory (28), such as a DRAM memory, that are serially connected. The input pattern (23) is applied to a processor (22), where it can be processed or not (the most general case), before it is applied to the ANN and stored therein as a prototype (if learned). A category is associated with each stored prototype. The processor computes the coefficients that allow the determination of the estimated values of the output pattern; these coefficients are the components of a so-called intermediate pattern (24). Assuming the ANN has already learned a number of input patterns, when a new input pattern is presented to the ANN in the recognition phase, the category of the closest prototype is output therefrom and is used as a pointer to the memory. In turn, the memory outputs the corresponding intermediate pattern. The input pattern and the intermediate pattern are applied to the processor to construct the output pattern (25) using the coefficients. Typically, in the field of image scaling, the input pattern is a block of pixels.

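The recognition flow described above (closest prototype → category used as a memory pointer → coefficients applied to the input block) can be sketched in software. This is only an illustrative model, not the patented circuit; names such as `scale_block` and `coeff_memory`, the L1 distance, and the weighted-sum reconstruction are assumptions.

```python
def l1_distance(a, b):
    # Simple component-wise distance between an input block and a prototype.
    return sum(abs(x - y) for x, y in zip(a, b))

def scale_block(input_block, prototypes, categories, coeff_memory):
    """ANN stage: find the closest learned prototype; its category points
    into the memory holding the intermediate pattern (the coefficients)."""
    distances = [l1_distance(input_block, p) for p in prototypes]
    category = categories[distances.index(min(distances))]
    coeffs = coeff_memory[category]          # intermediate pattern
    # Processor stage: build the output pattern (here, with more components
    # than the input) as weighted sums of the input components.
    return [sum(c * x for c, x in zip(row, input_block)) for row in coeffs]
```

Each row of the intermediate pattern produces one output component, so the output can have a different (larger) number of components than the input block, as the abstract describes.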

    Neural semiconductor chip and neural networks incorporated therein
    12. Invention Grant (Expired)

    Publication Number: US5717832A

    Publication Date: 1998-02-10

    Application Number: US488443

    Application Date: 1995-06-07

    CPC classification number: G06K9/00986 G06K9/6271 G06N3/063

    Abstract: A base neural semiconductor chip (10) including a neural network or unit (11(#)). The neural network (11(#)) has a plurality of neuron circuits fed by different buses transporting data such as the input vector data, set-up parameters, and control signals. Each neuron circuit (11) includes logic for generating local result signals of the "fire" type (F) and a local output signal (NOUT) of the distance or category type on respective buses (NR-BUS, NOUT-BUS). An OR circuit (12) performs an OR function for all corresponding local result and output signals to generate respective first global result (R*) and output (OUT*) signals on respective buses (R*-BUS, OUT*-BUS) that are merged in an on-chip common communication bus (COM*-BUS) shared by all neuron circuits of the chip. In a multi-chip network, an additional OR function is performed between all corresponding first global result and output signals (which are intermediate signals) to generate second global result (R**) and output (OUT**) signals, preferably by dotting onto an off-chip common communication bus (COM**-BUS) in the chip's driver block (19). This latter bus is shared by all the base neural network chips that are connected to it in order to incorporate a neural network of the desired size. In the chip, a multiplexer (21) may select either the intermediate output or the global output signal to be fed back, via a feedback bus (OR-BUS), to all neuron circuits of the neural network, depending on whether the chip is used in a single-chip or multi-chip environment. The feedback signal is the result of a collective processing of all the local output signals.

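The two-level OR reduction described above (local F signals → on-chip R* → off-chip R**) can be modeled behaviorally. This is a sketch under the assumption that each local "fire" result is a boolean; the function names are hypothetical, not from the patent.

```python
def chip_global_result(local_fires):
    """On-chip OR circuit (12): first global result R* is the OR of all
    local fire signals F of the neuron circuits on one chip."""
    return any(local_fires)

def network_global_result(per_chip_fires):
    """Multi-chip case: the second global result R** is the OR of the
    intermediate R* signals dotted onto the off-chip common bus."""
    return any(chip_global_result(fires) for fires in per_chip_fires)
```

The same reduction applies to the OUT*/OUT** output signals; the multiplexer choice in the abstract corresponds to feeding back either `chip_global_result` (single-chip use) or `network_global_result` (multi-chip use).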

    Daisy chain circuit for serial connection of neuron circuits
    13. Invention Grant (Expired)

    Publication Number: US5710869A

    Publication Date: 1998-01-20

    Application Number: US485337

    Application Date: 1995-06-07

    CPC classification number: G06N3/063

    Abstract: Each daisy chain circuit is serially connected to the two adjacent neuron circuits, so that all the neuron circuits form a chain. The daisy chain circuit distinguishes between the two possible states of the neuron circuit (engaged or free) and identifies the first free, or "ready to learn," neuron circuit in the chain, based on the respective values of the input (DCI) and output (DCO) signals of the daisy chain circuit. The ready-to-learn neuron circuit is the only neuron circuit of the neural network whose daisy chain input and output signals are complementary to each other. The daisy chain circuit includes a 1-bit register (601) controlled by a store enable signal (ST) which is active at initialization or, during the learning phase, when a new neuron circuit is engaged. At initialization, all the daisy registers of the chain are forced to a first logic value. The DCI input of the first daisy chain circuit in the chain is connected to a second logic value, such that after initialization it is the ready-to-learn neuron circuit. In the learning phase, the ready-to-learn neuron's 1-bit daisy register contents are set to the second logic value by the store enable signal; the neuron is then said to be "engaged". As neurons are engaged, each subsequent neuron circuit in the chain becomes, in turn, the next ready-to-learn neuron circuit.

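The daisy-chain mechanism above can be modeled with a few lines of code. This is a hypothetical behavioral model, not the circuit itself: the first logic value is taken as 0, the second as 1, and DCO is modeled as DCI AND the register bit (so an engaged neuron passes the token along, while a free one blocks it).

```python
def daisy_outputs(registers, dci_first=1):
    """Propagate DCI/DCO down the chain: DCO_i = DCI_i AND reg_i, where
    reg_i = 1 means the neuron is engaged (second logic value stored)."""
    outs, dci = [], dci_first
    for reg in registers:
        dco = dci & reg
        outs.append((dci, dco))
        dci = dco                 # each DCO feeds the next neuron's DCI
    return outs

def ready_to_learn(registers):
    """The ready-to-learn neuron is the only one in the chain whose DCI
    and DCO signals are complementary; None if the network is full."""
    for i, (dci, dco) in enumerate(daisy_outputs(registers)):
        if dci != dco:
            return i
    return None
```

With all registers at the first logic value after initialization, the first neuron sees DCI = 1 and outputs DCO = 0, making it the ready-to-learn neuron; as neurons are engaged, the complementary DCI/DCO pair moves down the chain.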

    Method and circuits for associating a complex operator to each component of an input pattern presented to an artificial neural network
    14. Invention Grant (Expired)

    Publication Number: US08027942B2

    Publication Date: 2011-09-27

    Application Number: US09951786

    Application Date: 2001-09-12

    CPC classification number: G06K9/6215 G06K9/6276 G06N3/063

    Abstract: The method and circuits of the present invention aim to associate a complex component operator (CC_op) to each component of an input pattern presented to an input space mapping algorithm based artificial neural network (ANN) during the distance evaluation process. A complex operator consists in the description of a function and a set of parameters attached thereto. The function is a mathematical entity (either a logic operator, e.g. match(Ai,Bi), abs(Ai−Bi), . . . , or an arithmetic operator, e.g. >, <, . . . ) or a set of software instructions executed conditionally. In a first embodiment, the ANN is provided with a global memory, common to all the neurons of the ANN, that stores all the CC_ops. In another embodiment, the set of CC_ops is stored in the prototype memory of the neuron, so that the global memory is no longer physically necessary. According to the present invention, the components of a stored prototype can now designate objects of different natures. In addition, both implementations significantly reduce the number of components required in a neuron, thereby saving room when the ANN is integrated in a silicon chip.
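The per-component operators used in the distance evaluation can be sketched as follows. This is an assumed encoding (each CC_op as a function name plus one parameter); the operator table and the mismatch-penalty parameter for `match` are illustrative, not specified by the patent.

```python
# Hypothetical CC_op table: each entry maps a function name to a callable
# taking the input component a, the prototype component b, and a parameter p.
OPS = {
    "abs":   lambda a, b, p: abs(a - b),           # abs(Ai - Bi)
    "match": lambda a, b, p: 0 if a == b else p,   # match(Ai, Bi), penalty p
}

def distance(input_pattern, prototype, cc_ops):
    """Distance = sum of per-component contributions, each computed by the
    complex operator (function, parameter) associated with that component."""
    total = 0
    for a, b, (name, param) in zip(input_pattern, prototype, cc_ops):
        total += OPS[name](a, b, param)
    return total
```

Storing `cc_ops` in one shared table corresponds to the global-memory embodiment; attaching the (name, param) pairs to each prototype corresponds to the prototype-memory embodiment.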

    Neuron architecture having a dual structure and neural networks incorporating the same
    15. Invention Grant (Expired)

    Publication Number: US06502083B1

    Publication Date: 2002-12-31

    Application Number: US09470458

    Application Date: 1999-12-22

    CPC classification number: G06K9/6276 G06N3/063

    Abstract: The improved neuron is connected to input buses which transport input data and control signals. It basically consists of a computation block, a register block, an evaluation block and a daisy chain block. All these blocks, except the computation block, have a substantially symmetric construction. Registers are used to store data: the local norm and context, the distance, the AIF value and the category. The improved neuron further needs some R/W memory capacity, which may be placed either inside the neuron or outside it. The evaluation circuit is connected to an output bus to generate global signals thereon. The daisy chain block allows the improved neuron to be chained with others to form an artificial neural network (ANN). The improved neuron may work either as a single neuron (single mode) or as two independent neurons (dual mode). In the latter case, the computation block, which is common to the two dual neurons, must operate sequentially to service one neuron after the other. The selection between the two modes (single/dual) is made by the user, who stores a specific logic value in a dedicated register of the control logic circuitry in each improved neuron.

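The single/dual mode behavior can be sketched behaviorally. This is an assumed model: a shared computation block (here an L1 distance) serves either one full prototype or, sequentially, two half-width prototypes; the class name and the half-split are illustrative choices, not details from the patent.

```python
def l1(a, b):
    # Shared computation block: one distance evaluation at a time.
    return sum(abs(x - y) for x, y in zip(a, b))

class DualNeuron:
    def __init__(self, dual_mode, proto_a, proto_b=None):
        self.dual_mode = dual_mode       # the user-set mode bit
        self.proto_a, self.proto_b = proto_a, proto_b

    def distances(self, vector):
        """Single mode: one distance. Dual mode: the common computation
        block runs sequentially, servicing one half-neuron after the other."""
        if not self.dual_mode:
            return [l1(vector, self.proto_a)]
        half = len(vector) // 2
        return [l1(vector[:half], self.proto_a),
                l1(vector[half:], self.proto_b)]
```

The sequential loop in `distances` mirrors the abstract's point that the computation block, being common to the two dual neurons, must service one neuron after the other.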

    GaAs MESFET logic circuits including push pull output buffers
    16. Invention Grant (Expired)

    Publication Number: US4922135A

    Publication Date: 1990-05-01

    Application Number: US271124

    Application Date: 1988-11-14

    CPC classification number: H03K19/09436 H03K19/01721

    Abstract: The present invention relates to a family of new GaAs MESFET logic circuits including push-pull output buffers, which exhibits very strong output driving capability and very low power consumption at fast switching speeds. A 3-way OR/NOR circuit of this invention includes a standard differential amplifier, the first branch of which is controlled by logic input signals. The second branch includes a current switch controlled by a reference voltage. The differential amplifier provides first and second output signals that are simultaneous and complementary to each other. The circuit further includes two push-pull output buffers. The first output buffer comprises an active pull-up device connected in series with an active pull-down device, and the first circuit output signal is available at their common node, which is the output terminal. The active pull-up device is controlled by the first output signal of the differential amplifier, and the active pull-down device is preferably controlled by the second output signal through an intermediate source follower buffer. The second output buffer is of similar structure. The depicted circuit is of the dual-phase type. However, if only one phase of the circuit output signal is needed, the corresponding output buffer and intermediate buffer can be eliminated. The number of devices can be even further reduced by eliminating the other remaining intermediate buffer.

    Parallel Pattern Detection Engine
    17. Invention Application

    Publication Number: US20070150623A1

    Publication Date: 2007-06-28

    Application Number: US11682623

    Application Date: 2007-03-06

    CPC classification number: G06K9/6202 G06K9/00986

    Abstract: A parallel pattern detection engine (PPDE) comprises multiple processing units (PUs) customized to perform various modes of pattern recognition. The PUs are loaded with different patterns, and the input data to be matched is provided to the PUs in parallel. Each pattern has an Opcode that defines what action to take when a particular data item in the input data stream either matches or does not match the corresponding data being compared during a clock cycle. Each of the PUs communicates selected information so that PUs may be cascaded to enable longer patterns to be matched or to allow more patterns to be processed in parallel for a particular input data stream.
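The PU behavior can be modeled in simplified software form. This is a sketch under stated assumptions: the per-position Opcode is reduced to just 'exact' versus 'wildcard', and the functions `pu_match` and `ppde` are hypothetical names, not the engine's actual interface.

```python
def pu_match(pattern, opcodes, stream):
    """One processing unit: compare its loaded pattern against the input
    stream one element per 'clock cycle', honoring each element's opcode."""
    if len(stream) < len(pattern):
        return False
    return all(op == "wildcard" or p == s
               for p, op, s in zip(pattern, opcodes, stream))

def ppde(stream, loaded_pus):
    """All PUs receive the same input data in parallel; return the indices
    of the PUs whose loaded pattern matches."""
    return [i for i, (pattern, opcodes) in enumerate(loaded_pus)
            if pu_match(pattern, opcodes, stream)]
```

Cascading PUs for longer patterns would correspond to chaining `pu_match` calls over consecutive stream slices, which this minimal model omits.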

    Method and circuits for encoding an input pattern using a normalizer and a classifier
    18. Invention Grant

    Publication Number: US07133854B2

    Publication Date: 2006-11-07

    Application Number: US10014166

    Application Date: 2001-12-11

    CPC classification number: G06T9/00

    Abstract: Let us consider a plurality of input patterns having an essential characteristic in common but which differ in at least one parameter (this parameter modifies the input pattern to some extent, but not this essential characteristic for a specific application). During the learning phase, each input pattern is normalized in a normalizer before it is presented to a classifier. If not recognized, it is learned, i.e. the normalized pattern is stored in the classifier as a prototype with its category associated thereto. From a predetermined reference value of that parameter, the normalizer computes an element related to said parameter which allows the normalized pattern to be set from the input pattern and, vice versa, the input pattern to be retrieved from the normalized pattern. As a result, all these input patterns are represented by the same normalized pattern. The above method and circuits allow the number of required prototypes in the classifier to be reduced, thereby improving its response quality.
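The learn/recognize flow with a normalizer in front of the classifier can be sketched as below. This is an illustrative model with one assumed choice of parameter: overall amplitude, normalized away via the peak value; the class and function names are hypothetical.

```python
def normalize(pattern):
    """Return (normalized pattern, element relating it to the input).
    Here the varying parameter is peak amplitude."""
    peak = max(abs(x) for x in pattern) or 1   # guard the all-zero pattern
    return [x / peak for x in pattern], peak

class Classifier:
    def __init__(self):
        self.prototypes = []                   # (normalized pattern, category)

    def learn(self, pattern, category):
        # Normalize first; store as a new prototype only if not recognized.
        norm, _ = normalize(pattern)
        if self.recognize(pattern) is None:
            self.prototypes.append((norm, category))

    def recognize(self, pattern, tol=1e-6):
        norm, _ = normalize(pattern)
        for proto, cat in self.prototypes:
            if all(abs(a - b) <= tol for a, b in zip(proto, norm)):
                return cat
        return None
```

Because scaled copies of a pattern all normalize to the same prototype, only one prototype is stored for the whole family, which is the prototype-count reduction the abstract claims.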

    Method to improve the data transfer rate between a computer and a neural network
    19. Invention Grant (Expired)

    Publication Number: US06983265B2

    Publication Date: 2006-01-03

    Application Number: US10316250

    Application Date: 2002-12-10

    CPC classification number: G06K9/6276 G06N3/04

    Abstract: A method is described to improve the data transfer rate between a personal computer or a host computer and a neural network implemented in hardware by merging a plurality of input patterns into a single input pattern configured to globally represent the set of input patterns. A base consolidated vector (U′*n) representing the input pattern is defined to describe all the vectors (Un, . . . , Un+6) representing the input patterns derived thereof (U′n, . . . , U′n+6) by combining components having fixed and 'don't care' values. The base consolidated vector is provided only once with all the components of the vectors. An artificial neural network (ANN) is then configured as a combination of sub-networks operating in parallel. In order to compute the distances with an adequate number of components, the prototypes must also include components having a definite value and 'don't care' conditions. During the learning phase, the consolidated vectors are stored as prototypes. During the recognition phase, when a new base consolidated vector is provided to the ANN, each sub-network analyses a portion thereof. After computing all the distances, they are sorted one sub-network at a time to obtain the distances associated with each vector.

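The merging of input patterns via 'don't care' components can be sketched as follows. This is a hypothetical model: `None` stands in for a 'don't care' position, and both function names are illustrative, not from the patent.

```python
DONT_CARE = None

def consolidate(vectors):
    """Merge vectors that agree on fixed components: positions where they
    differ become 'don't care' in the base consolidated vector."""
    base = list(vectors[0])
    for v in vectors[1:]:
        base = [a if a == b else DONT_CARE for a, b in zip(base, v)]
    return base

def masked_distance(consolidated, prototype):
    """Distance contribution only from positions where both the consolidated
    vector and the prototype carry a definite value."""
    return sum(abs(a - b) for a, b in zip(consolidated, prototype)
               if a is not DONT_CARE and b is not DONT_CARE)
```

Transferring one consolidated vector instead of each derived vector is what reduces the host-to-network traffic; the 'don't care' positions are then resolved per sub-network on the hardware side.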

    Circuit for searching/sorting data in neural networks
    20. Invention Grant (Expired)

    Publication Number: US5740326A

    Publication Date: 1998-04-14

    Application Number: US486658

    Application Date: 1995-06-07

    CPC classification number: G06K9/6271 G06F7/544 G06N3/063 G06K2209/01

    Abstract: In a neural network of N neuron circuits, each engaged neuron has calculated a p-bit wide distance between an input vector and a prototype vector and stored it in its weight memory; an aggregate search/sort circuit (517) is formed from the N engaged neurons' search/sort circuits. The aggregate search/sort circuit determines the minimum among the calculated distances. Each search/sort circuit (502-1) has p elementary search/sort units connected in series to form a column, such that the aggregate circuit is a matrix of elementary search/sort units. The distance bit signals of the same bit rank are applied to the search/sort units in each row. A feedback signal is generated by ORing, in an OR gate (12.1), all local search/sort output signals from the elementary search/sort units of the same row. The search process is based on identifying zeroes in the distance bit signals, from the MSBs to the LSBs. When a zero is found in a row, all the columns with a one in that row are excluded from the subsequent row search. The search process continues until only one distance, the minimum distance, remains and is available at the output of the OR circuit. The above described search/sort circuit may further include a latch allowing the aggregate circuit to sort the remaining distances in increasing order.

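The MSB-to-LSB column-exclusion search described above can be modeled in a few lines. This is a software sketch of the principle, not the circuit; `min_search` is a hypothetical name, and ties (several columns holding the minimum) are all returned, matching the row-by-row elimination.

```python
def min_search(distances, p=8):
    """Return the indices of the minimum among p-bit distances, mimicking
    the aggregate circuit: scan bit rows from MSB to LSB, and whenever a
    zero appears in a row, exclude every column holding a one there."""
    active = set(range(len(distances)))
    for bit in reversed(range(p)):            # MSB first
        mask = 1 << bit
        zeros = {i for i in active if not distances[i] & mask}
        if zeros:                             # a zero was found in this row:
            active = zeros                    # drop the columns with a one
    return sorted(active)
```

Latching the winning column out and re-running the search on the remaining columns yields the distances in increasing order, which is the sorting extension the abstract mentions.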
