Abstract:
In a neural network comprising a plurality of neuron circuits, an improved neuron circuit generates local result signals, e.g. of the fire type, and a local output signal of the distance or category type. The neuron circuit is connected to buses that transport input data (e.g. the input category) and control signals. A multi-norm distance evaluation circuit calculates the distance D between the input vector and a prototype vector stored in a R/W memory circuit. A distance compare circuit compares this distance D with either the stored prototype vector's actual influence field or the lower limit thereof to generate first and second comparison signals. An identification circuit processes the comparison signals, the input category signal, the local category signal and a feedback signal to generate local result signals that represent the neuron circuit's response to the input vector. A minimum distance determination circuit determines the minimum distance Dmin among all the distances calculated by the neuron circuits of the neural network and generates a local output signal of the distance type. The circuit may be used to search and sort categories. The feedback signal is collectively generated by all the neuron circuits by ORing all the local distances/categories. A daisy chain circuit is serially connected to the corresponding daisy chain circuits of the two adjacent neuron circuits to chain the neurons together. The daisy chain circuit also determines whether the neuron circuit's state is free or engaged. Finally, a context circuit enables or inhibits the neuron's participation with the other neuron circuits in the generation of the feedback signal.
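As a rough illustration of the behavior described above, the following Python sketch models one neuron's distance evaluation and its comparison against the influence field; all identifiers (Neuron, manhattan, dmin, etc.) are hypothetical, and the multi-norm evaluation is reduced here to a single L1 norm.

```python
# Minimal sketch of one neuron's response logic, assuming an ROI-style
# classifier. Names are illustrative, not taken from the patent.

def manhattan(input_vector, prototype):
    """L1 distance (the multi-norm evaluation reduced to one norm)."""
    return sum(abs(a - b) for a, b in zip(input_vector, prototype))

class Neuron:
    def __init__(self, prototype, category, influence_field, lower_limit):
        self.prototype = prototype   # stored in the R/W memory circuit
        self.category = category     # local category
        self.aif = influence_field   # actual influence field
        self.ltf = lower_limit       # lower limit thereof

    def evaluate(self, input_vector):
        d = manhattan(input_vector, self.prototype)
        fires = d < self.aif         # first comparison signal ("fire" type)
        close = d < self.ltf         # second comparison signal
        return d, fires, close

def dmin(neurons, input_vector):
    """Minimum distance among all the neurons' calculated distances."""
    return min(n.evaluate(input_vector)[0] for n in neurons)
```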
Abstract:
A vertical isolated-collector PNP transistor structure (58) comprises a P+ region (45), an N region (44) and a P- well region (46), which form the emitter, the base and the collector, respectively. The P- well region is enclosed in an N-type pocket comprised of an N+ buried layer (48) and an N reach-through region (47) in contact therewith. The contact regions (46-1, 47-1) to the P- well region (46) and to the N reach-through region (47) are shorted to define a common collector contact (59). In addition, the thickness W of the P- well region (46) is minimized so as to allow transistor action of the parasitic NPN transistor formed by the N PNP-base region (44), the P- well region (46) and the N+ buried layer (48), respectively the collector, the base and the emitter of said NPN transistor. The PNP transistor structure (67) may be combined with a conventional NPN transistor structure (61).
Abstract:
The base circuit comprises a self-referenced preamplifier (31) of the differential type connected between first and second supply voltages, and a push-pull output buffer stage connected between second and third supply voltages. The push-pull output buffer stage comprises a pull-up transistor and a pull-down transistor connected in series, with the circuit output node coupled therebetween. These transistors are driven by complementary and substantially simultaneous signals S and S̄ supplied by the preamplifier. Both branches of the preamplifier are tied at a first output node (M). The first branch comprises a logic block, performing the desired logic function of the base circuit, that is connected through a load resistor to the second supply voltage. The logic block consists of three parallel-connected input NPN transistors whose emitters are coupled together at the first output node for NOR operation. The second branch comprises a biasing/coupling block connected to the second supply voltage and coupled to the first output node and to the base node (B) of the pull-down transistor. This block ensures both the appropriate polarization of the nodes in DC, without the need of external reference voltage generators, and a low-impedance path in AC for fast transmission of the output signal from node M to node B when the input transistors of the logic block are ON. An anti-saturation block (AB), typically consisting of a Schottky Barrier Diode (SBD), prevents saturation of the pull-down transistor (TDN) to further speed up the circuit.
Abstract:
According to the present invention, a CMOS interface circuit (C2), similar to a latch made of two cross-coupled CMOS inverters (INV1, INV2), is placed directly on the output node (14) of a conventional BICMOS logic circuit (11) that, operating alone, works in a partial-swing mode. This latch is made of four FETs (P5, P6, N8, N9) cross-coupled in a conventional way, with the feedback loop connected to said output node (14). The partial voltage swing (VBE to VH-VBE) naturally given by the output bipolar transistors (T1, T2), mounted in a push-pull configuration, is reinforced to full swing (GND to VH) by the latch at the end of each transition. The state of the output node is forced by the latch because of the high driving capability due to the presence of said output bipolar transistors (T1, T2). As a result, the improved BICMOS logic circuit (D2) has an output signal (S) that spans the desired full-swing voltage at the output terminal (15). It is a characteristic of this embodiment that the structure of the CMOS interface (C2) is independent of the logic function implemented in the conventional BICMOS logic circuit (11). More generally, the CMOS interface circuit may have various physical implementations; however, it is always comprised of CMOS FETs and becomes active in at least one of the GND to VBE or (VH-VBE) to VH ranges.
Abstract:
A method for forming a capacitive structure in a metal level of an interconnection stack including a succession of metal levels and via levels, the method including the steps of: forming, in the metal level, at least one conductive track in which a trench is defined; conformally forming an insulating layer on the structure; filling the trench with a conductive material; and planarizing the structure.
Abstract:
The method and circuits of the present invention associate a norm with each component of an input pattern presented to an input-space-mapping-algorithm-based artificial neural network (ANN) during the distance evaluation process. The set of norms, referred to as the "component" norms, is memorized in specific memorization means in the ANN. In a first embodiment, the ANN is provided with a global memory, common to all the neurons of the ANN, that memorizes all the component norms. For each component of the input pattern, all the neurons perform the elementary (or partial) distance calculation with the corresponding prototype components stored therein, using the associated component norm. The elementary distance calculations are then combined using a "distance" norm to determine the final distance between the input pattern and the prototypes stored in the neurons. In another embodiment, the set of component norms is memorized in the neurons themselves, in the prototype memorization means, so that the global memory is no longer physically necessary. This implementation significantly reduces the silicon area consumed when the ANN is integrated in a silicon chip.
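The two-level norm scheme can be sketched as follows in Python; the norm names ("abs", "match", "L1", "Lsup") and function names are illustrative assumptions, not the patent's terminology.

```python
# Hedged sketch: a per-component norm yields each elementary distance,
# and a "distance" norm combines them into the final distance.

def elementary(x, p, norm):
    if norm == "abs":      # absolute-difference component norm
        return abs(x - p)
    if norm == "match":    # 0/1 match component norm
        return 0 if x == p else 1
    raise ValueError(norm)

def distance(input_pattern, prototype, component_norms, distance_norm):
    partials = [elementary(x, p, n)
                for x, p, n in zip(input_pattern, prototype, component_norms)]
    if distance_norm == "L1":    # sum of the elementary distances
        return sum(partials)
    if distance_norm == "Lsup":  # maximum elementary distance
        return max(partials)
    raise ValueError(distance_norm)

# The component norms may live in a global table (first embodiment) or
# be stored with each prototype component (second embodiment).
norms = ["abs", "abs", "match"]
print(distance([3, 7, 1], [5, 7, 0], norms, "L1"))  # -> 3
```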
Abstract:
In the search for the minimum value among a set of p Numbers coded on q bits, each Number is split into K sub-values coded on n bits (q ≥ K×n). Parameter K thus assigns a rank to each sub-value, so that K slices of bits are formed, each slice being composed of the sub-values of the same rank. Each sub-value is then encoded on m bits (m > n) using a "thermometric" coding technique. A parallel search is performed on the first slice of encoded sub-values (MSBs) to determine the minimum sub-value of that slice. All the Numbers associated with sub-values greater than this minimum are deselected. The evaluation process continues in the same way until the last slice (LSBs) has been processed. At the end of the evaluation process, the Number which remains selected has the minimum value. The response time (i.e. the number of processing steps) thus depends only upon the number K of sub-values into which the Numbers have been split. The method applies to searching for the maximum as well.
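A software analogue of this slice-wise search might look like the following Python sketch; the helper names and the choice of m = 2^n - 1 bits for the thermometric code width are assumptions made for illustration.

```python
# Sketch of the slice-wise minimum search. A unary ("thermometric")
# code lets the slice minimum be found by ANDing all candidates' codes.

def slices(number, K, n):
    """Split a q-bit Number into K sub-values of n bits, MSBs first."""
    return [(number >> (n * (K - 1 - i))) & ((1 << n) - 1) for i in range(K)]

def find_min(numbers, K, n):
    m = (1 << n) - 1                   # thermometric code width (m > n)
    selected = set(range(len(numbers)))
    for rank in range(K):              # first slice (MSBs) ... last (LSBs)
        subs = {i: slices(numbers[i], K, n)[rank] for i in selected}
        code = (1 << m) - 1            # all ones
        for v in subs.values():
            code &= (1 << v) - 1       # thermometric (unary) code of v
        min_sub = bin(code).count("1") # AND of unary codes keeps the minimum
        # deselect Numbers whose sub-value exceeds the slice minimum
        selected = {i for i, v in subs.items() if v == min_sub}
    return numbers[min(selected)]

print(find_min([11, 6, 7], K=2, n=2))  # -> 6
```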
Abstract:
To avoid the problem of category assignment in artificial neural networks (ANNs) based upon a mapping of the input space (such as ROI and KNN algorithms), the present method uses "probabilities". Patterns memorized as prototypes no longer represent categories but rather the "probabilities" of belonging to categories. Thus, after the most representative patterns have been memorized in a first step of the learning phase, the second step consists of an evaluation of these probabilities. To that end, several counters are associated with each prototype and are used to evaluate the response frequency and accuracy of each neuron of the ANN. These counters are dynamically incremented during this second step using distance evaluations (between the input vectors and the prototypes) and error criteria (for example, the differences between the desired responses and the responses given by the ANN). At the end of the learning phase, a function of the contents of these counters yields, for each neuron, an evaluation of the probabilities of belonging to predetermined categories. During the recognition phase, the probabilities associated with the neurons selected by the algorithm permit the characterization of new input vectors, and more generally of any kind of input (images, signals, sets of data), to detect and classify anomalies. The method allows a significant reduction in the number of neurons required in the ANN while improving its overall response accuracy.
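The counter mechanism can be pictured with the following Python sketch; the class and field names, the single firing radius, and the L1 distance are all illustrative assumptions rather than the patent's design.

```python
from collections import defaultdict

class ProbNeuron:
    """One prototype with its response counters (hypothetical layout)."""
    def __init__(self, prototype):
        self.prototype = prototype
        self.fired = 0                    # response-frequency counter
        self.hits = defaultdict(int)      # per-category accuracy counters

    def update(self, input_vector, true_category, radius):
        """Second learning step: bump counters from distance and error criteria."""
        d = sum(abs(a - b) for a, b in zip(input_vector, self.prototype))
        if d < radius:                    # the neuron responds to this input
            self.fired += 1
            self.hits[true_category] += 1 # response agreed with the desired one

    def probabilities(self):
        """Estimated probability, for this neuron, of each seen category."""
        if self.fired == 0:
            return {}
        return {c: n / self.fired for c, n in self.hits.items()}
```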
Abstract:
The neural semiconductor chip first includes: a global register and control logic circuit block, a R/W memory block and a plurality of neurons fed by buses transporting data, such as the input vector data and set-up parameters, and signals, such as the feedback and control signals. The R/W memory block, typically a RAM, is common to all neurons to avoid circuit duplication, thereby increasing the number of neurons integrated in the chip. The R/W memory stores the prototype components. Each neuron comprises a computation block, a register block, an evaluation block and a daisy chain block to chain the neurons. All these blocks (except the computation block) have a symmetric structure and are designed so that each neuron may operate in a dual manner, i.e. either as a single neuron (single mode) or as two independent neurons (dual mode). Each neuron generates local signals. The neural chip further includes an OR circuit which performs an OR function over all corresponding local signals to generate global signals that are merged in an on-chip common communication bus shared by all neurons of the chip. The R/W memory block, the neurons and the OR circuit form an artificial neural network of high flexibility, as the dual-mode feature makes it possible to mix single and dual neurons in the ANN.
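The merging of local signals into global signals reduces to a bitwise OR across neurons, as in this toy Python sketch (the bit-field representation of the local result lines is an assumption):

```python
def global_signals(local_signals):
    """OR the corresponding bits of every neuron's local result signals
    to form the global signals placed on the common communication bus."""
    merged = 0
    for s in local_signals:
        merged |= s
    return merged

# e.g. three neurons driving two local result lines each (as bit fields)
assert global_signals([0b01, 0b10, 0b00]) == 0b11
```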
Abstract:
A method of achieving automatic learning of an input vector presented to an artificial neural network (ANN) formed by a plurality of neurons, using the K-nearest-neighbor (KNN) mode. Upon providing an input vector to be learned to the ANN, a Write component operation is performed to store the input vector components in the first available free neuron of the ANN. Then, a Write category operation is performed by assigning a user-defined category to the input vector. Next, a test determines whether this category matches the categories of the nearest prototypes, i.e. those located at the minimum distance. If it matches, the first free neuron is not engaged. Otherwise, it is engaged by assigning the user-defined category to it; as a result, the input vector becomes a new prototype with that category associated thereto. Further described is a circuit which automatically retains the first free neuron of the ANN for learning.
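A minimal sketch of this learning rule in Python, assuming an L1 distance and list-based bookkeeping (KNNNeuron, learn and manhattan are invented names):

```python
class KNNNeuron:
    """Holds one prototype and its category once engaged (hypothetical)."""
    def __init__(self):
        self.prototype = None
        self.category = None

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def learn(engaged, free_neurons, input_vector, user_category):
    """Engage the first free neuron only if the user category is absent
    among the prototypes located at the minimum distance."""
    if not free_neurons:
        return                               # no free neuron left
    first_free = free_neurons[0]
    first_free.prototype = input_vector      # Write component operation
    first_free.category = user_category      # Write category operation
    if engaged:
        dists = [manhattan(input_vector, n.prototype) for n in engaged]
        dmin = min(dists)
        if any(n.category == user_category
               for n, d in zip(engaged, dists) if d == dmin):
            return                           # category matches: stays free
    engaged.append(free_neurons.pop(0))      # engage: new prototype learned
```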