Abstract:
In a neural network comprising a plurality of neuron circuits, an improved neuron circuit generates local result signals, e.g. of the fire type, and a local output signal of the distance or category type. The neuron circuit is connected to buses that transport input data (e.g. the input category) and control signals. A multi-norm distance evaluation circuit calculates the distance D between the input vector and a prototype vector stored in a R/W memory circuit. A distance compare circuit compares this distance D with either the stored prototype vector's actual influence field or the lower limit thereof to generate first and second comparison signals. An identification circuit processes the comparison signals, the input category signal, the local category signal and a feedback signal to generate local result signals that represent the neuron circuit's response to the input vector. A minimum distance determination circuit determines the minimum distance Dmin among all the distances calculated by the neuron circuits of the neural network and generates a local output signal of the distance type. The circuit may be used to search and sort categories. The feedback signal is collectively generated by all the neuron circuits by ORing all the local distances/categories. A daisy chain circuit is serially connected to the corresponding daisy chain circuits of the two adjacent neuron circuits to chain the neurons together; it also determines whether the neuron circuit is free or engaged. Finally, a context circuit enables or inhibits the neuron circuit's participation with the other neuron circuits in the generation of the feedback signal.
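A minimal behavioral sketch of this evaluation flow is given below in Python; it assumes an L1 (Manhattan) norm and uses hypothetical function names, and is not the patented circuit itself.

    # Behavioral sketch only: one neuron's distance evaluation, comparison
    # against its influence field, and the network-wide minimum-distance search.

    def l1_distance(input_vector, prototype):
        # Multi-norm evaluation restricted here to the L1 (Manhattan) norm.
        return sum(abs(a - b) for a, b in zip(input_vector, prototype))

    def evaluate_neuron(input_vector, prototype, influence_field, lower_limit):
        # Returns the distance D and the two comparison signals.
        d = l1_distance(input_vector, prototype)
        inside_field = d < influence_field    # first comparison signal
        below_lower_limit = d < lower_limit   # second comparison signal
        return d, inside_field, below_lower_limit

    def minimum_distance(distances):
        # Dmin among the distances calculated by all neuron circuits.
        return min(distances)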
Abstract:
A vertical isolated-collector PNP transistor structure (58) comprises a P+ region (45), an N region (44) and a P- well region (46) which form the emitter, the base and the collector, respectively. The P- well region is enclosed in an N type pocket comprised of an N+ buried layer (48) and an N reach-through region (47) in contact therewith. The contact regions (46-1, 47-1) to the P- well region (46) and to the N reach-through region (47) are shorted to define a common collector contact (59). In addition, the thickness W of the P- well region (46) is minimized so as to allow transistor action of the parasitic NPN transistor formed by the N base region (44) of the PNP, the P- well region (46) and the N+ buried layer (48), which act respectively as the collector, the base and the emitter of said parasitic transistor. The PNP transistor structure (67) may be combined with a conventional NPN transistor structure (61).
Abstract:
The base circuit comprises a self-referenced preamplifier (31) of the differential type connected between first and second supply voltages and a push-pull output buffer stage connected between the second and third supply voltages. The push-pull output buffer stage comprises a pull-up transistor and a pull-down transistor connected in series, with the circuit output node coupled therebetween. These transistors are driven by complementary and substantially simultaneous signals S and S̄ supplied by the preamplifier. Both branches of the preamplifier are tied to a first output node (M). The first branch comprises a logic block performing the desired logic function of the base circuit that is connected through a load resistor to the second supply voltage. The logic block consists of three parallel-connected input NPN transistors, whose emitters are coupled together at the first output node for NOR operation. The second branch comprises a biasing/coupling block connected to the second supply voltage and coupled to the first output node and to the base node (B) of the pull-down transistor. This block ensures both the appropriate polarization of these nodes in DC, without the need of external reference voltage generators, and a low impedance path for fast transmission of the output signal from node M to node B in AC when the input transistors of the logic block are ON. An anti-saturation block (AB), typically consisting of a Schottky Barrier Diode (SBD), prevents saturation of the pull-down transistor (TDN) to further speed up the circuit.
Abstract:
According to the present invention, a CMOS interface circuit (C2), similar to a latch made of two cross-coupled CMOS inverters (INV1, INV2), is placed directly on the output node (14) of a conventional BICMOS logic circuit (11) that, operating alone, works in a partial swing mode. This latch is made of four FETs P5, P6, N8, N9 cross-coupled in a conventional way, with the feedback loop connected to said output node (14). The partial voltage swing (VBE to VH-VBE) naturally given by the output bipolar transistors (T1, T2), mounted in a push-pull configuration, is reinforced to a full swing (GND to VH) by the latch at the end of each transition. The state of the output node is forced by the latch because of the high driving capability due to the presence of said output bipolar transistors (T1, T2). As a result, the improved BICMOS logic circuit (D2) has an output signal (S) that ranges within the desired full swing voltage at the output terminal (15). It is a characteristic of this embodiment that the structure of the CMOS interface (C2) is always independent of the logic function implemented in the conventional BICMOS logic circuit (11). More generally, the CMOS interface circuit may have various physical implementations; however, it is always comprised of CMOS FETs and it becomes active at least in one of the GND to VBE or (VH-VBE) to VH ranges.
Abstract:
A method for forming a capacitive structure in a metal level of an interconnection stack including a succession of metal levels and of via levels, including the steps of: forming, in the metal level, at least one conductive track in which a trench is defined; conformally forming an insulating layer on the structure; forming, in the trench, a conductive material; and planarizing the structure.
Abstract:
A parallel pattern detection engine (PPDE) comprises multiple processing units (PUs) customized to perform various modes of pattern recognition. The PUs are loaded with different patterns, and the input data to be matched is provided to the PUs in parallel. Each pattern has an opcode that defines what action to take when a particular data item in the input data stream either matches or does not match the corresponding data being compared during a clock cycle. The PUs communicate selected information to each other so that they may be cascaded to enable longer patterns to be matched or to allow more patterns to be processed in parallel for a particular input data stream.
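The following Python fragment is an illustrative sketch of this per-position opcode idea; the opcode names and match semantics are assumptions, not the patented PU logic.

    # Sketch with assumed opcode semantics: each pattern position tells the
    # processing unit what to do on a match or a mismatch in a given cycle.

    MATCH_REQUIRED = "match_required"  # a mismatch here ends the comparison
    WILDCARD = "wildcard"              # always advance, match or not

    def pu_match(pattern, input_stream):
        # pattern is a list of (value, opcode) pairs; one input byte is
        # compared per clock cycle against the current pattern position.
        pos = 0
        for byte in input_stream:
            if pos == len(pattern):
                break
            value, opcode = pattern[pos]
            if opcode == WILDCARD or byte == value:
                pos += 1
            elif opcode == MATCH_REQUIRED:
                return False
        return pos == len(pattern)

In the engine, many such units would hold different patterns and observe the same input stream in parallel; cascading would correspond to letting one unit continue a pattern begun by its neighbour.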
Abstract:
A processing unit having a dual channel bus architecture associated with a specific instruction set is configured to receive an input message and transmit an output message that is either identical thereto or derived therefrom. A message consists of one opcode, with or without associated data, used to control each processing unit depending on logic conditions stored in dedicated registers in each unit. The processing units are serially connected but can work simultaneously for fully pipelined operation. The dual architecture is organized around two channels labeled Channel 1 and Channel 2. Channel 1 mainly transmits an input message to all units, while Channel 2 mainly transmits the results produced by a unit as an output message. Depending on the logic conditions, an input message not processed in a processing unit may be transmitted to the next one without any change.
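A rough Python sketch of this message flow is given below; the message fields and the way the "logic conditions" are represented are hypothetical, chosen only to illustrate the process-and-forward versus pass-through behaviour.

    # Hypothetical illustration of the opcode/data message flow between
    # serially connected processing units (Channel 1 in, Channel 2 out).

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Message:
        opcode: str
        data: Optional[bytes] = None

    class ProcessingUnit:
        def __init__(self, handled_opcodes):
            # Stands in for the logic conditions held in dedicated registers.
            self.handled_opcodes = set(handled_opcodes)

        def step(self, msg_in):
            if msg_in.opcode in self.handled_opcodes:
                # Process the message and emit a derived output message.
                return Message(msg_in.opcode, b"result")
            # Not addressed to this unit: forward the input message unchanged.
            return msg_in

    # Serial connection of units, stepped one after the other here; in the
    # hardware the units work simultaneously in a pipelined fashion.
    pipeline = [ProcessingUnit({"READ"}), ProcessingUnit({"WRITE"})]
    message = Message("WRITE", b"payload")
    for unit in pipeline:
        message = unit.step(message)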
Abstract:
To avoid the problem of category assignment in artificial neural networks (ANNs) based upon a mapping of the input space (such as the ROI and KNN algorithms), the present method uses “probabilities”. Patterns memorized as prototypes no longer represent categories but rather the “probabilities” of belonging to categories. Thus, after the most representative patterns have been memorized in a first step of the learning phase, the second step consists of an evaluation of these probabilities. To that end, several counters are associated with each prototype and are used to evaluate the response frequency and accuracy of each neuron of the ANN. These counters are dynamically incremented during this second step using distance evaluations (between the input vectors and the prototypes) and error criteria (for example, the differences between the desired responses and the responses given by the ANN). At the end of the learning phase, a function of the contents of these counters allows an evaluation, for each neuron, of these probabilities of belonging to predetermined categories. During the recognition phase, the probabilities associated with the neurons selected by the algorithm permit the characterization of new input vectors, and more generally any kind of input (images, signals, sets of data), to detect and classify anomalies. The method allows a significant reduction in the number of neurons required in the ANN while improving its overall response accuracy.
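A schematic Python sketch of the second learning step follows; the counter semantics (one response counter plus per-category vote counters for each prototype) are an assumption used only to make the idea concrete.

    # Schematic sketch, assumed counter semantics: counters accumulate how
    # often each prototype responds and for which desired categories, then a
    # function of their contents yields per-category probabilities.

    def estimate_probabilities(prototypes, training_set, distance, radius):
        # training_set is a list of (input_vector, desired_category) pairs.
        categories = {cat for _, cat in training_set}
        fired = [0] * len(prototypes)                    # response frequency
        votes = [{c: 0 for c in categories} for _ in prototypes]

        for vector, desired in training_set:
            for i, proto in enumerate(prototypes):
                if distance(vector, proto) <= radius:    # neuron i responds
                    fired[i] += 1
                    votes[i][desired] += 1               # accuracy bookkeeping

        # Probability, for each neuron, of belonging to each category.
        return [{c: (votes[i][c] / fired[i] if fired[i] else 0.0)
                 for c in categories} for i in range(len(prototypes))]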
Abstract:
The neural semiconductor chip first includes: a global register and control logic circuit block, a R/W memory block and a plurality of neurons fed by buses transporting data, such as the input vector data and set-up parameters, and signals, such as the feedback and control signals. The R/W memory block, typically a RAM, is common to all neurons to avoid circuit duplication, thereby increasing the number of neurons integrated in the chip. The R/W memory stores the prototype components. Each neuron comprises a computation block, a register block, an evaluation block and a daisy chain block to chain the neurons. All these blocks (except the computation block) have a symmetric structure and are designed so that each neuron may operate in a dual manner, i.e. either as a single neuron (single mode) or as two independent neurons (dual mode). Each neuron generates local signals. The neural chip further includes an OR circuit which performs an OR function on all corresponding local signals to generate global signals that are merged in an on-chip common communication bus shared by all neurons of the chip. The R/W memory block, the neurons and the OR circuit form an artificial neural network of high flexibility, since the dual mode feature allows single and dual neurons to be mixed in the ANN.
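One plausible reading of the single/dual mode feature is sketched below in Python; the assumption that the prototype memory is simply split in half, with each half compared against a correspondingly shorter input vector, is illustrative only.

    # Conceptual sketch (assumed split): one physical neuron's prototype
    # memory used either as one prototype (single mode) or as two independent
    # half-length prototypes (dual mode).

    def local_distances(prototype, input_vector, dual_mode=False):
        l1 = lambda w, x: sum(abs(a - b) for a, b in zip(w, x))
        if not dual_mode:
            return [l1(prototype, input_vector)]          # one local distance
        half = len(prototype) // 2
        # Dual mode: each half of the memory behaves as an independent neuron
        # evaluated against the shorter input vector.
        return [l1(prototype[:half], input_vector),
                l1(prototype[half:], input_vector)]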
Abstract:
A method of achieving automatic learning of an input vector presented to an artificial neural network (ANN) formed by a plurality of neurons, using the K nearest neighbor (KNN) mode. Upon providing an input vector to be learned to the ANN, a Write component operation is performed to store the input vector components in the first available free neuron of the ANN. Then, a Write category operation is performed by assigning a category defined by the user to the input vector. Next, a test is performed to determine whether this category matches the categories of the nearest prototypes, i.e. those located at the minimum distance. If it matches, this first free neuron is not engaged. Otherwise, it is engaged by assigning the user-defined category to it. As a result, the input vector becomes the new prototype with that category associated thereto. Further described is a circuit which automatically retains the first free neuron of the ANN for learning.
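The steps above can be summarized in the following Python sketch; the ANN object and its neuron attributes (components, category, engaged) are hypothetical names used only for illustration.

    # Simplified sketch of the automatic KNN learning flow, assuming a
    # hypothetical `ann` object whose neurons expose components, category
    # and engaged attributes.

    def learn(ann, input_vector, user_category, distance):
        free = ann.first_free_neuron()        # retained automatically by the ANN
        free.components = input_vector        # Write component operation
        free.category = user_category         # Write category operation

        engaged = [n for n in ann.neurons if n.engaged]
        if engaged:
            dmin = min(distance(input_vector, n.components) for n in engaged)
            nearest_categories = {n.category for n in engaged
                                  if distance(input_vector, n.components) == dmin}
            if user_category in nearest_categories:
                return                        # category matches: neuron stays free

        free.engaged = True                   # engage: the vector becomes a prototype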