System for scaling images using neural networks
    11.
    Granted patent
    System for scaling images using neural networks (Active)

    Publication number: US07734117B2

    Publication date: 2010-06-08

    Application number: US12021511

    Application date: 2008-01-29

    CPC classification number: G06T3/4046

    Abstract: An artificial neural network (ANN) based system that is adapted to process an input pattern to generate a related output pattern having a different number of components than the input pattern. The system (26) is composed of an ANN (27) and a memory (28), such as a DRAM memory, that are serially connected. The input pattern (23) is applied to a processor (22), where it may or may not be processed (the most general case), before it is applied to the ANN and stored therein as a prototype (if learned). A category is associated with each stored prototype. The processor computes the coefficients that allow the determination of the estimated values of the output pattern; these coefficients are the components of a so-called intermediate pattern (24). Assuming the ANN has already learned a number of input patterns, when a new input pattern is presented to the ANN in the recognition phase, the category of the closest prototype is output therefrom and is used as a pointer to the memory. In turn, the memory outputs the corresponding intermediate pattern. The input pattern and the intermediate pattern are applied to the processor to construct the output pattern (25) using the coefficients. Typically, the input pattern is a block of pixels in the field of scaling images.

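    The learn/recognize flow described in the abstract can be sketched in software. This is a hypothetical illustration, not the patented circuit: the function names, the L1 distance metric, and the per-coefficient scaling in `build_output` are all assumptions made for the sketch.

```python
# Sketch: the ANN stores prototypes with categories; a serially connected
# memory maps each category to an intermediate pattern (the coefficients).

def learn(prototypes, memory, input_pattern, coefficients):
    """Learning phase: store the input pattern as a prototype and its
    coefficients (the intermediate pattern) under a new category."""
    category = len(prototypes)
    prototypes.append((input_pattern, category))
    memory[category] = coefficients          # DRAM memory in the abstract
    return category

def recognize(prototypes, memory, input_pattern):
    """Recognition phase: the category of the closest prototype is used
    as a pointer to the memory, which outputs the intermediate pattern."""
    def dist(proto):
        return sum(abs(a - b) for a, b in zip(proto[0], input_pattern))
    _, category = min(prototypes, key=dist)
    return memory[category]

def build_output(input_pattern, intermediate):
    """The processor constructs the output pattern from the input block
    and the coefficients; a simple per-coefficient scaling is assumed."""
    return [c * x for c in intermediate for x in input_pattern]
```

    Note that the output pattern deliberately has a different number of components than the input block, which is the point of the scheme when scaling pixel blocks.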

    Method and circuits for scaling images using neural networks
    12.
    Granted patent
    Method and circuits for scaling images using neural networks (Active)

    Publication number: US07352918B2

    Publication date: 2008-04-01

    Application number: US10321166

    Application date: 2002-12-17

    CPC classification number: G06T3/4046

    Abstract: An artificial neural network (ANN) based system that is adapted to process an input pattern to generate a related output pattern having a different number of components than the input pattern. The system (26) is composed of an ANN (27) and a memory (28), such as a DRAM memory, that are serially connected. The input pattern (23) is applied to a processor (22), where it may or may not be processed (the most general case), before it is applied to the ANN and stored therein as a prototype (if learned). A category is associated with each stored prototype. The processor computes the coefficients that allow the determination of the estimated values of the output pattern; these coefficients are the components of a so-called intermediate pattern (24). Assuming the ANN has already learned a number of input patterns, when a new input pattern is presented to the ANN in the recognition phase, the category of the closest prototype is output therefrom and is used as a pointer to the memory. In turn, the memory outputs the corresponding intermediate pattern. The input pattern and the intermediate pattern are applied to the processor to construct the output pattern (25) using the coefficients. Typically, the input pattern is a block of pixels in the field of scaling images.


    Method and circuits for associating a complex operator to each component of an input pattern presented to an artificial neural network
    13.
    Granted patent
    Method and circuits for associating a complex operator to each component of an input pattern presented to an artificial neural network (Expired)

    Publication number: US08027942B2

    Publication date: 2011-09-27

    Application number: US09951786

    Application date: 2001-09-12

    CPC classification number: G06K9/6215 G06K9/6276 G06N3/063

    Abstract: The method and circuits of the present invention aim to associate a complex component operator (CC_op) to each component of an input pattern presented to an input space mapping algorithm based artificial neural network (ANN) during the distance evaluation process. A complex operator consists in the description of a function and a set of parameters attached thereto. The function is a mathematical entity (either a logic operator, e.g. match(Ai,Bi), abs(Ai−Bi), . . . , or an arithmetic operator, e.g. >, <, . . . ) or a set of software instructions used conditionally. In a first embodiment, the ANN is provided with a global memory, common to all the neurons of the ANN, that stores all the CC_ops. In another embodiment, the set of CC_ops is stored in the prototype memory of the neurons, so that the global memory is no longer physically necessary. According to the present invention, the components of the stored prototypes can now designate objects of different natures. In addition, both implementations significantly reduce the number of components required in the neurons, thereby saving room when the ANN is integrated in a silicon chip.

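    A minimal software sketch of the idea: each component of the input pattern carries its own operator for the distance evaluation, looked up in a table shared by all neurons (the "global memory" of the first embodiment). The operator names and the table layout below are illustrative assumptions, not the patented implementation.

```python
# Two example complex component operators (CC_ops) from the abstract.

def match(a, b):
    """Logic operator: contributes 0 when the components match, 1 otherwise."""
    return 0 if a == b else 1

def absdiff(a, b):
    """Arithmetic operator: absolute difference of the components."""
    return abs(a - b)

# Global memory of CC_ops, common to all neurons (first embodiment).
GLOBAL_CC_OPS = {"match": match, "absdiff": absdiff}

def component_distance(input_pattern, prototype, cc_ops):
    """Evaluate the distance component by component, applying the CC_op
    associated with each component, then summing the contributions."""
    return sum(GLOBAL_CC_OPS[op](a, b)
               for op, a, b in zip(cc_ops, input_pattern, prototype))
```

    With per-component operators, one component of a prototype can encode an exact-match condition while another encodes a numeric proximity, so stored prototypes may mix objects of different natures.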

    Parallel Pattern Detection Engine
    14.
    Patent application
    Parallel Pattern Detection Engine (Active)

    Publication number: US20070150622A1

    Publication date: 2007-06-28

    Application number: US11682576

    Application date: 2007-03-06

    CPC classification number: G06K9/6202 G06K9/00986

    Abstract: A parallel pattern detection engine (PPDE) comprises multiple processing units (PUs) customized to perform various modes of pattern recognition. The PUs are loaded with different patterns, and the input data to be matched is provided to the PUs in parallel. Each pattern has an Opcode that defines what action to take when a particular data item in the input data stream either matches or does not match the corresponding data being compared during a clock cycle. Each of the PUs communicates selected information so that PUs may be cascaded to enable longer patterns to be matched or to allow more patterns to be processed in parallel for a particular input data stream.

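    The PU-per-pattern scheme can be sketched as follows. This is a hedged software model, not the hardware design: only two illustrative opcodes are assumed ("advance on match, fail on mismatch" and "wildcard"), and patterns are matched from the start of the stream; the real engine defines richer per-byte actions.

```python
# Each PU holds one pattern as (byte, opcode) pairs and consumes the
# input stream one byte per clock cycle, all PUs in parallel.

MATCH_EXACT = 0   # advance on match, fail the PU on mismatch
MATCH_ANY = 1     # wildcard: advance regardless of the input byte

class PU:
    def __init__(self, pattern):
        self.pattern = pattern   # list of (expected_byte, opcode)
        self.pos = 0
        self.failed = False

    def clock(self, byte):
        """One clock cycle: compare one input byte against the pattern."""
        if self.failed or self.pos >= len(self.pattern):
            return
        expected, opcode = self.pattern[self.pos]
        if opcode == MATCH_ANY or byte == expected:
            self.pos += 1
        else:
            self.failed = True

    def matched(self):
        return not self.failed and self.pos == len(self.pattern)

def run(pus, stream):
    """Broadcast the input data to all PUs in parallel."""
    for byte in stream:
        for pu in pus:
            pu.clock(byte)
    return [pu.matched() for pu in pus]
```

    Cascading, as described in the abstract, would correspond to a finished PU handing its position on to a neighbor so that two PUs jointly match a pattern longer than either could hold alone.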

    Configurable bi-directional bus for communicating between autonomous units
    15.
    Patent application
    Configurable bi-directional bus for communicating between autonomous units (Expired)

    Publication number: US20050154858A1

    Publication date: 2005-07-14

    Application number: US10757673

    Application date: 2004-01-14

    CPC classification number: G06F13/4027

    Abstract: Processing units (PUs) are coupled with a gated bi-directional bus structure that allows the PUs to be cascaded. Each PUn has communication logic and function logic. Each PUn is physically coupled to two other PUs, a PUp and a PUf. The communication logic receives Link Out data from a PUp and sends Link In data to a PUf. The communication logic has register bits for enabling and disabling the data transmission. The communication logic couples the Link Out data from a PUp to the function logic and couples Link In data to the PUp from the function logic in response to the register bits. The function logic receives output data from the PUn and Link In data from the communication logic and forms Link Out data which is coupled to the PUf. The function logic couples Link In data from the PUf to the PUn and to the communication logic.


    GaAs MESFET logic circuits including push pull output buffers
    16.
    Granted patent
    GaAs MESFET logic circuits including push pull output buffers (Expired)

    Publication number: US4922135A

    Publication date: 1990-05-01

    Application number: US271124

    Application date: 1988-11-14

    CPC classification number: H03K19/09436 H03K19/01721

    Abstract: The present invention relates to a family of new GaAs MESFET logic circuits including push pull output buffers, which exhibits very strong output driving capability and very low power consumption at fast switching speeds. A 3 way OR/NOR circuit of this invention includes a standard differential amplifier, the first branch of which is controlled by logic input signals. The second branch includes a current switch controlled by a reference voltage. The differential amplifier provides first and second output signals that are simultaneous and complementary to each other. The circuit further includes two push pull output buffers. The first output buffer comprises an active pull up device connected in series with an active pull down device, and the first circuit output signal is available at their common node or at the output terminal. The active pull up device is controlled by the first output signal of the differential amplifier, and the active pull down device is preferably controlled by the second output signal through an intermediate source follower buffer. The second output buffer is of similar structure. The depicted circuit is of the dual phase type. However, if only one phase of the circuit output signal is needed, the output buffer and the intermediate buffer can be eliminated. The number of devices can be further reduced by eliminating the other remaining intermediate buffer.

    Parallel Pattern Detection Engine
    17.
    Patent application

    Publication number: US20070150623A1

    Publication date: 2007-06-28

    Application number: US11682623

    Application date: 2007-03-06

    CPC classification number: G06K9/6202 G06K9/00986

    Abstract: A parallel pattern detection engine (PPDE) comprises multiple processing units (PUs) customized to perform various modes of pattern recognition. The PUs are loaded with different patterns, and the input data to be matched is provided to the PUs in parallel. Each pattern has an Opcode that defines what action to take when a particular data item in the input data stream either matches or does not match the corresponding data being compared during a clock cycle. Each of the PUs communicates selected information so that PUs may be cascaded to enable longer patterns to be matched or to allow more patterns to be processed in parallel for a particular input data stream.

    Method and circuits for encoding an input pattern using a normalizer and a classifier
    18.

    Publication number: US07133854B2

    Publication date: 2006-11-07

    Application number: US10014166

    Application date: 2001-12-11

    CPC classification number: G06T9/00

    Abstract: Consider a plurality of input patterns having an essential characteristic in common but differing in at least one parameter (this parameter modifies the input pattern to some extent, but not the essential characteristic for a specific application). During the learning phase, each input pattern is normalized in a normalizer before it is presented to a classifier. If not recognized, it is learned, i.e. the normalized pattern is stored in the classifier as a prototype with its category associated thereto. From a predetermined reference value of that parameter, the normalizer computes an element related to said parameter which allows the normalized pattern to be derived from the input pattern and, vice versa, the input pattern to be retrieved from the normalized pattern. As a result, all these input patterns are represented by the same normalized pattern. The above method and circuits reduce the number of prototypes required in the classifier, thereby improving its response quality.
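    A sketch of the normalization idea, assuming (hypothetically) that the varying parameter is a constant brightness offset: every pattern of the family maps to the same normalized pattern, so the classifier needs a single prototype for all of them. The reference value and function names are assumptions made for the sketch.

```python
REFERENCE_VALUE = 0.0   # predetermined reference value of the parameter

def normalize(pattern):
    """Return the normalized pattern plus the element (here, the mean
    offset) that allows moving between the two representations."""
    offset = sum(pattern) / len(pattern) - REFERENCE_VALUE
    return [x - offset for x in pattern], offset

def denormalize(normalized, offset):
    """Retrieve the input pattern from the normalized pattern."""
    return [x + offset for x in normalized]
```

    Two patterns differing only in offset normalize to the same prototype, which is exactly the prototype-count reduction the abstract claims.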

    Method to improve the data transfer rate between a computer and a neural network
    19.
    Granted patent
    Method to improve the data transfer rate between a computer and a neural network (Expired)

    Publication number: US06983265B2

    Publication date: 2006-01-03

    Application number: US10316250

    Application date: 2002-12-10

    CPC classification number: G06K9/6276 G06N3/04

    Abstract: A method is described to improve the data transfer rate between a personal computer or a host computer and a neural network implemented in hardware by merging a plurality of input patterns into a single input pattern configured to globally represent the set of input patterns. A base consolidated vector (U′*n) representing the input pattern is defined to describe all the vectors (Un, . . . , Un+6) representing the input patterns derived therefrom (U′n, . . . , U′n+6) by combining components having fixed and ‘don't care’ values. The base consolidated vector is provided only once, with all the components of the vectors. An artificial neural network (ANN) is then configured as a combination of sub-networks operating in parallel. In order to compute the distances with an adequate number of components, the prototypes also include components having a definite value and ‘don't care’ conditions. During the learning phase, the consolidated vectors are stored as prototypes. During the recognition phase, when a new base consolidated vector is provided to the ANN, each sub-network analyses a portion thereof. After computing all the distances, they are sorted one sub-network at a time to obtain the distances associated with each vector.

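    The merging of fixed and 'don't care' components can be illustrated in a few lines. This is a hedged sketch of the consolidation idea only, using `None` for 'don't care'; the actual encoding and the sub-network partitioning of the patent are not modeled.

```python
DONT_CARE = None

def consolidate(vectors):
    """Build the base consolidated vector: keep a component where all
    vectors agree, mark it 'don't care' otherwise."""
    return [vs[0] if all(v == vs[0] for v in vs) else DONT_CARE
            for vs in zip(*vectors)]

def distance(prototype, vector):
    """Manhattan distance that skips 'don't care' components, so a
    consolidated prototype matches every vector of its family equally."""
    return sum(abs(p - v) for p, v in zip(prototype, vector)
               if p is not DONT_CARE)
```

    Transferring the consolidated vector once, instead of each derived vector separately, is what improves the transfer rate between the host and the hardware network.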

    Circuit for searching/sorting data in neural networks
    20.
    Granted patent
    Circuit for searching/sorting data in neural networks (Expired)

    Publication number: US5740326A

    Publication date: 1998-04-14

    Application number: US486658

    Application date: 1995-06-07

    CPC classification number: G06K9/6271 G06F7/544 G06N3/063 G06K2209/01

    Abstract: In a neural network of N neuron circuits, where each engaged neuron has calculated a p-bit-wide distance between an input vector and a prototype vector stored in its weight memory, an aggregate search/sort circuit (517) is formed from the N engaged neurons' search/sort circuits. The aggregate search/sort circuit determines the minimum among the calculated distances. Each search/sort circuit (502-1) has p elementary search/sort units connected in series to form a column, such that the aggregate circuit is a matrix of elementary search/sort units. The distance bit signals of the same bit rank are applied to the search/sort units in each row. A feedback signal is generated by ORing, in an OR gate (12.1), all local search/sort output signals from the elementary search/sort units of the same row. The search process is based on identifying zeroes in the distance bit signals, from the MSBs to the LSBs. When a zero is found in a row, all the columns with a one in that row are excluded from the subsequent row search. The search process continues until only one distance, the minimum distance, remains and is available at the output of the OR circuit. The above described search/sort circuit may further include a latch allowing the aggregate circuit to sort the remaining distances in increasing order.

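    The MSB-to-LSB elimination the abstract describes can be sketched directly: at each bit rank, columns showing a 1 are dropped whenever some remaining column shows a 0 (the row's OR feedback), so only the minimum distance survives after p steps. The function below is an illustrative software model of that process, not the circuit itself.

```python
def min_distance_search(distances, p):
    """Find the minimum of p-bit distances the way the circuit does:
    one bit rank per step, from the MSB down to the LSB."""
    candidates = list(range(len(distances)))
    for bit in range(p - 1, -1, -1):
        bits = {i: (distances[i] >> bit) & 1 for i in candidates}
        if any(b == 0 for b in bits.values()):   # a zero exists in this row
            candidates = [i for i in candidates if bits[i] == 0]
    # every remaining candidate holds the minimum distance
    return distances[candidates[0]], candidates
```

    Re-running the search after latching out the winners, as the abstract's latch variant suggests, would enumerate the remaining distances in increasing order.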
