1. METHOD FOR FORMING A THREE-DIMENSIONAL STRUCTURE OF METAL-INSULATOR-METAL TYPE
    Invention application (in force)

    Publication No.: US20110227194A1

    Publication date: 2011-09-22

    Application No.: US13052262

    Filing date: 2011-03-21

    CPC classification number: H01L23/5223 H01L28/60 H01L2924/0002 H01L2924/00

    Abstract: A method for forming a capacitive structure in a metal level of an interconnection stack including a succession of metal levels and of via levels, including the steps of: forming, in the metal level, at least one conductive track in which a trench is defined; conformally forming an insulating layer on the structure; forming, in the trench, a conductive material; and planarizing the structure.

2. System for scaling images using neural networks
    Granted invention patent (in force)

    Publication No.: US07734117B2

    Publication date: 2010-06-08

    Application No.: US12021511

    Filing date: 2008-01-29

    CPC classification number: G06T3/4046

    Abstract: An artificial neural network (ANN) based system that is adapted to process an input pattern to generate an output pattern related thereto having a different number of components than the input pattern. The system (26) is comprised of an ANN (27) and a memory (28), such as a DRAM memory, that are serially connected. The input pattern (23) is applied to a processor (22), where it can be processed or not (the most general case), before it is applied to the ANN and stored therein as a prototype (if learned). A category is associated with each stored prototype. The processor computes the coefficients that allow the determination of the estimated values of the output pattern; these coefficients are the components of a so-called intermediate pattern (24). Assuming the ANN has already learned a number of input patterns, when a new input pattern is presented to the ANN in the recognition phase, the category of the closest prototype is output therefrom and is used as a pointer to the memory. In turn, the memory outputs the corresponding intermediate pattern. The input pattern and the intermediate pattern are applied to the processor to construct the output pattern (25) using the coefficients. Typically, the input pattern is a block of pixels in the field of scaling images.

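    Illustrative sketch (it applies equally to the related filings in entries 3 and 4 below): the recognition path described in this abstract reduces to a nearest-prototype lookup whose category indexes a coefficient memory, followed by a coefficient-weighted reconstruction of the output block. The Python model below is only an assumed rendering of that flow; the block sizes, the L2 distance and the linear reconstruction are choices made for the example, not details taken from the patent.

        import numpy as np

        # Assumed sizes: 4x4 input blocks scaled to 8x8 output blocks.
        rng = np.random.default_rng(0)
        prototypes = rng.random((64, 16))        # learned input patterns (flattened 4x4 blocks)
        categories = np.arange(64)               # one category per stored prototype
        coeff_memory = rng.random((64, 16, 64))  # per-category coefficients (intermediate patterns)

        def scale_block(input_block):
            """Map a 4x4 input block to an 8x8 output block using stored coefficients."""
            x = input_block.ravel()
            # Recognition phase: the ANN returns the category of the closest prototype.
            distances = np.linalg.norm(prototypes - x, axis=1)
            category = categories[np.argmin(distances)]
            # The category acts as a pointer into the memory holding the intermediate pattern.
            coeffs = coeff_memory[category]      # shape (16, 64)
            # The processor combines the input components and the coefficients
            # into the estimated values of the output pattern.
            return (x @ coeffs).reshape(8, 8)

        print(scale_block(rng.random((4, 4))).shape)   # (8, 8)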

3. SYSTEM FOR SCALING IMAGES USING NEURAL NETWORKS
    Invention application (in force)

    Publication No.: US20080140594A1

    Publication date: 2008-06-12

    Application No.: US12021511

    Filing date: 2008-01-29

    CPC classification number: G06T3/4046

    Abstract: An artificial neural network (ANN) based system that is adapted to process an input pattern to generate an output pattern related thereto having a different number of components than the input pattern. The system (26) is comprised of an ANN (27) and a memory (28), such as a DRAM memory, that are serially connected. The input pattern (23) is applied to a processor (22), where it can be processed or not (the most general case), before it is applied to the ANN and stored therein as a prototype (if learned). A category is associated with each stored prototype. The processor computes the coefficients that allow the determination of the estimated values of the output pattern; these coefficients are the components of a so-called intermediate pattern (24). Assuming the ANN has already learned a number of input patterns, when a new input pattern is presented to the ANN in the recognition phase, the category of the closest prototype is output therefrom and is used as a pointer to the memory. In turn, the memory outputs the corresponding intermediate pattern. The input pattern and the intermediate pattern are applied to the processor to construct the output pattern (25) using the coefficients. Typically, the input pattern is a block of pixels in the field of scaling images.

4. Method and circuits for scaling images using neural networks
    Granted invention patent (in force)

    Publication No.: US07352918B2

    Publication date: 2008-04-01

    Application No.: US10321166

    Filing date: 2002-12-17

    CPC classification number: G06T3/4046

    Abstract: An artificial neural network (ANN) based system that is adapted to process an input pattern to generate an output pattern related thereto having a different number of components than the input pattern. The system (26) is comprised of an ANN (27) and a memory (28), such as a DRAM memory, that are serially connected. The input pattern (23) is applied to a processor (22), where it can be processed or not (the most general case), before it is applied to the ANN and stored therein as a prototype (if learned). A category is associated with each stored prototype. The processor computes the coefficients that allow the determination of the estimated values of the output pattern; these coefficients are the components of a so-called intermediate pattern (24). Assuming the ANN has already learned a number of input patterns, when a new input pattern is presented to the ANN in the recognition phase, the category of the closest prototype is output therefrom and is used as a pointer to the memory. In turn, the memory outputs the corresponding intermediate pattern. The input pattern and the intermediate pattern are applied to the processor to construct the output pattern (25) using the coefficients. Typically, the input pattern is a block of pixels in the field of scaling images.

5. Neural semiconductor chip and neural networks incorporated therein
    Granted invention patent (expired)

    Publication No.: US5717832A

    Publication date: 1998-02-10

    Application No.: US488443

    Filing date: 1995-06-07

    CPC classification number: G06K9/00986 G06K9/6271 G06N3/063

    Abstract: A base neural semiconductor chip (10) including a neural network or unit (11(#)). The neural network (11(#)) has a plurality of neuron circuits fed by different buses transporting data such as the input vector data, set-up parameters, and control signals. Each neuron circuit (11) includes logic for generating local result signals of the "fire" type (F) and a local output signal (NOUT) of the distance or category type on respective buses (NR-BUS, NOUT-BUS). An OR circuit (12) performs an OR function for all corresponding local result and output signals to generate respective first global result (R*) and output (OUT*) signals on respective buses (R*-BUS, OUT*-BUS) that are merged in an on-chip common communication bus (COM*-BUS) shared by all neuron circuits of the chip. In a multi-chip network, an additional OR function is performed between all corresponding first global result and output signals (which are intermediate signals) to generate second global result (R**) and output (OUT**) signals, preferably by dotting onto an off-chip common communication bus (COM**-BUS) in the chip's driver block (19). This latter bus is shared by all the base neural network chips that are connected to it in order to incorporate a neural network of the desired size. In the chip, a multiplexer (21) may select either the intermediate output or the global output signal to be fed back, via a feed-back bus (OR-BUS), to all neuron circuits of the neural network, depending on whether the chip is used in a single-chip or multi-chip environment. The feedback signal is the result of a collective processing of all the local output signals.

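    Illustrative sketch of the two-level OR-ing described above: local neuron result and output signals are OR-ed on-chip into first global signals, and in a multi-chip network the per-chip signals are OR-ed again on the common communication bus. The Python model below is a behavioral assumption for illustration only; the signal names follow the abstract, but the bit widths and the active-high convention are assumed.

        def chip_global_signals(local_results, local_outputs):
            """On-chip OR circuit: merge local F-type results and NOUT words."""
            r_star = any(local_results)            # first global result R*
            out_star = 0
            for nout in local_outputs:             # distance/category type outputs
                out_star |= nout                   # wired-OR onto OUT*-BUS
            return r_star, out_star

        def network_global_signals(per_chip_signals):
            """Second-level OR across chips (dotting onto the COM**-BUS)."""
            r_2 = any(r for r, _ in per_chip_signals)      # R**
            out_2 = 0
            for _, out in per_chip_signals:
                out_2 |= out                               # OUT**
            return r_2, out_2

        chip_a = chip_global_signals([True, False], [0b0101, 0b0011])
        chip_b = chip_global_signals([False], [0b1000])
        print(network_global_signals([chip_a, chip_b]))    # (True, 15), i.e. R**=1, OUT**=0b1111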

6. Daisy chain circuit for serial connection of neuron circuits
    Granted invention patent (expired)

    Publication No.: US5710869A

    Publication date: 1998-01-20

    Application No.: US485337

    Filing date: 1995-06-07

    CPC classification number: G06N3/063

    Abstract: Each daisy chain circuit is serially connected to the two adjacent neuron circuits, so that all the neuron circuits form a chain. The daisy chain circuit distinguishes between the two possible states of the neuron circuit (engaged or free) and identifies the first free (or "ready to learn") neuron circuit in the chain, based on the respective values of the input (DCI) and output (DCO) signals of the daisy chain circuit. The ready to learn neuron circuit is the only neuron circuit of the neural network having daisy chain input and output signals complementary to each other. The daisy chain circuit includes a 1-bit register (601) controlled by a store enable signal (ST), which is active at initialization or during the learning phase when a new neuron circuit is engaged. At initialization, all the Daisy registers of the chain are forced to a first logic value. The DCI input of the first daisy chain circuit in the chain is connected to a second logic value, such that, after initialization, it is the ready to learn neuron circuit. In the learning phase, the ready to learn neuron's 1-bit daisy register contents are set to the second logic value by the store enable signal; the neuron is then said to be "engaged". As neurons are engaged, each subsequent neuron circuit in the chain becomes the next ready to learn neuron circuit.

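    Behavioral sketch of the daisy chain described above: every Daisy register starts at a first logic value, the DCI input of the first circuit is tied to the second logic value, and the single neuron whose DCI and DCO signals are complementary is the one ready to learn. The Python model below is an assumed software rendering; the logic values 0/1, the class name and the list-based chain are illustration choices, not the hardware of the patent.

        class DaisyChain:
            def __init__(self, n):
                # 1-bit Daisy registers, all forced to the first logic value (0) at initialization.
                self.registers = [0] * n

            def signals(self):
                """Return (DCI, DCO) for every circuit; DCO is the stored bit, DCI the previous DCO."""
                dci = 1                  # DCI of the first circuit is tied to the second logic value
                pairs = []
                for dco in self.registers:
                    pairs.append((dci, dco))
                    dci = dco
                return pairs

            def ready_to_learn(self):
                """The ready-to-learn neuron is the only one whose DCI and DCO are complementary."""
                for i, (dci, dco) in enumerate(self.signals()):
                    if dci != dco:
                        return i
                return None              # every neuron is engaged

            def learn(self):
                """Engage the ready-to-learn neuron: store the second logic value in its register."""
                i = self.ready_to_learn()
                if i is not None:
                    self.registers[i] = 1

        chain = DaisyChain(4)
        print(chain.ready_to_learn())    # 0 right after initialization
        chain.learn()
        print(chain.ready_to_learn())    # 1: the next free neuron becomes ready to learn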

7. Method and circuits for associating a complex operator to each component of an input pattern presented to an artificial neural network
    Granted invention patent (expired)

    Publication No.: US08027942B2

    Publication date: 2011-09-27

    Application No.: US09951786

    Filing date: 2001-09-12

    CPC classification number: G06K9/6215 G06K9/6276 G06N3/063

    Abstract: The method and circuits of the present invention aim to associate a complex component operator (CC_op) to each component of an input pattern presented to an input space mapping algorithm based artificial neural network (ANN) during the distance evaluation process. A complex operator consists of the description of a function and a set of parameters attached thereto. The function is a mathematical entity (either a logic operator, e.g. match(Ai,Bi), abs(Ai−Bi), . . . , or an arithmetic operator, e.g. >, <, . . . ) or a set of software instructions, possibly conditional. In a first embodiment, the ANN is provided with a global memory, common to all the neurons of the ANN, that stores all the CC_ops. In another embodiment, the set of CC_ops is stored in the prototype memory of the neuron, so that the global memory is no longer physically necessary. According to the present invention, the components of the stored prototypes can now designate objects of different natures. In addition, both implementations significantly reduce the number of components required in the neuron, thereby saving space when the ANN is integrated in a silicon chip.

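    Illustrative sketch of the per-component operators described above: during distance evaluation, each input component is compared to the corresponding prototype component through its own complex operator, i.e. a function plus attached parameters. The Python model below assumes a small operator set and a simple summation; the encoding of the CC_ops and where they are stored (global memory vs. prototype memory) are abstracted away.

        def op_match(a, b, params):
            return 0 if a == b else params.get("penalty", 1)

        def op_absdiff(a, b, params):
            return abs(a - b)

        def op_greater(a, b, params):
            return 0 if a > b else params.get("penalty", 1)

        # One complex operator (function + parameters) associated with each component.
        cc_ops = [
            (op_absdiff, {}),
            (op_match, {"penalty": 4}),
            (op_greater, {"penalty": 2}),
        ]

        def distance(input_pattern, prototype):
            """Evaluate the distance component by component using the associated CC_op."""
            return sum(fn(a, b, params)
                       for a, b, (fn, params) in zip(input_pattern, prototype, cc_ops))

        print(distance([3, 7, 5], [1, 7, 2]))   # 2 + 0 + 0 = 2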

8. Parallel Pattern Detection Engine
    Invention application (in force)

    Publication No.: US20070150622A1

    Publication date: 2007-06-28

    Application No.: US11682576

    Filing date: 2007-03-06

    CPC classification number: G06K9/6202 G06K9/00986

    Abstract: A parallel pattern detection engine (PPDE) comprises multiple processing units (PUs) customized to perform various modes of pattern recognition. The PUs are loaded with different patterns and the input data to be matched is provided to the PUs in parallel. Each pattern has an Opcode that defines what action to take when a particular data in the input data stream either matches or does not match the corresponding data being compared during a clock cycle. Each of the PUs communicates selected information so that PUs may be cascaded to enable longer patterns to be matched or to allow more patterns to be processed in parallel for a particular input data stream.

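    Simplified software model of the parallel matching described above: every processing unit holds its own pattern, the input stream is broadcast to all units in parallel, and a per-element opcode decides how a match or mismatch is handled. The opcode set ("exact", "wildcard") and the boolean result below are assumptions for illustration; the patent defines richer opcodes and hardware cascading of PUs.

        class ProcessingUnit:
            def __init__(self, pattern):
                self.pattern = pattern            # list of (value, opcode) pairs loaded into the PU

            def match(self, data):
                """Compare one input element per 'clock cycle' against the loaded pattern."""
                if len(data) < len(self.pattern):
                    return False
                for (value, opcode), element in zip(self.pattern, data):
                    if opcode == "exact" and element != value:
                        return False
                    # the assumed "wildcard" opcode accepts any input element
                return True

        pus = [
            ProcessingUnit([(0x47, "exact"), (0x45, "exact"), (0x54, "exact")]),   # "GET"
            ProcessingUnit([(0x50, "exact"), (None, "wildcard"), (0x54, "exact")]),
        ]

        stream = [0x47, 0x45, 0x54, 0x20]
        # The same input data is provided to every PU in parallel.
        print([pu.match(stream) for pu in pus])   # [True, False]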

9. Configurable bi-directional bus for communicating between autonomous units
    Invention application (expired)

    Publication No.: US20050154858A1

    Publication date: 2005-07-14

    Application No.: US10757673

    Filing date: 2004-01-14

    CPC classification number: G06F13/4027

    Abstract: Processing units (PUs) are coupled with a gated bi-directional bus structure that allows the PUs to be cascaded. Each PUn has communication logic and function logic. Each PUn is physically coupled to two other PUs, a PUp and a PUf. The communication logic receives Link Out data from a PUp and sends Link In data to a PUf. The communication logic has register bits for enabling and disabling the data transmission. The communication logic couples the Link Out data from a PUp to the function logic and couples Link In data to the PUp from the function logic in response to the register bits. The function logic receives output data from the PUn and Link In data from the communication logic and forms Link Out data which is coupled to the PUf. The function logic couples Link In data from the PUf to the PUn and to the communication logic.

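    Behavioral sketch of the gated cascade link described above: each processing unit owns register bits that enable or disable the transmission of Link Out data to the following unit (PUf) and the acceptance of Link In data from it. The Python model below is an assumed rendering; the attribute names and the string payload are illustration choices, not the hardware interface of the patent.

        class PU:
            def __init__(self, name):
                self.name = name
                self.link_out_enable = False   # register bit gating Link Out towards the PUf
                self.link_in_enable = False    # register bit gating Link In from the PUf
                self.nxt = None                # the following unit in the cascade (PUf)

            def send_link_out(self, data):
                """Function logic forms Link Out data; communication logic gates its transmission."""
                if self.link_out_enable and self.nxt is not None:
                    self.nxt.receive_link_out(data)

            def receive_link_out(self, data):
                print(f"{self.name} received Link Out data: {data!r}")

        pu0, pu1 = PU("PU0"), PU("PU1")
        pu0.nxt = pu1
        pu0.send_link_out("partial state")     # dropped: transmission disabled by the register bit
        pu0.link_out_enable = True
        pu0.send_link_out("partial state")     # now forwarded to PU1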

10. Neuron architecture having a dual structure and neural networks incorporating the same
    Granted invention patent (expired)

    Publication No.: US06502083B1

    Publication date: 2002-12-31

    Application No.: US09470458

    Filing date: 1999-12-22

    CPC classification number: G06K9/6276 G06N3/063

    Abstract: The improved neuron is connected to input buses which transport input data and control signals. It basically consists of a computation block, a register block, an evaluation block and a daisy chain block. All these blocks, except the computation block, have a substantially symmetric construction. Registers are used to store data: the local norm and context, the distance, the AIF value and the category. The improved neuron further needs some R/W memory capacity, which may be placed either in the neuron or outside it. The evaluation circuit is connected to an output bus to generate global signals thereon. The daisy chain block allows the improved neuron to be chained with others to form an artificial neural network (ANN). The improved neuron may work either as a single neuron (single mode) or as two independent neurons (dual mode). In the latter case, the computation block, which is common to the two dual neurons, must operate sequentially to service one neuron after the other. The selection between the two modes (single/dual) is made by the user, who stores a specific logic value in a dedicated register of the control logic circuitry in each improved neuron.

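    Conceptual sketch of the single/dual mode selection described above: a dedicated register value decides whether the circuit behaves as one neuron or as two independent half-neurons, and in dual mode the shared computation block services the two halves one after the other. The Python model below is an assumed rendering; the Manhattan distance and the way the component storage is split are illustration choices.

        class DualNeuron:
            def __init__(self, dual_mode=False):
                self.dual_mode = dual_mode       # logic value stored in the dedicated mode register
                self.prototype = [2, 4, 6, 8]    # component storage, split between halves in dual mode

            def _distance(self, prototype, inputs):
                # Shared computation block (Manhattan distance used as an example).
                return sum(abs(p - x) for p, x in zip(prototype, inputs))

            def evaluate(self, inputs):
                if not self.dual_mode:
                    return [self._distance(self.prototype, inputs)]
                # Dual mode: the common computation block runs sequentially,
                # servicing one half-neuron after the other on the half-length input.
                half = len(self.prototype) // 2
                return [self._distance(self.prototype[:half], inputs[:half]),
                        self._distance(self.prototype[half:], inputs[:half])]

        print(DualNeuron(dual_mode=False).evaluate([2, 4, 6, 7]))   # [1]
        print(DualNeuron(dual_mode=True).evaluate([2, 4]))          # [0, 8]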
