Parallel Pattern Detection Engine

    Publication number: US20070150621A1

    Publication date: 2007-06-28

    Application number: US11682547

    Filing date: 2007-03-06

    CPC classification number: G06K9/6202 G06K9/00986

    Abstract: A parallel pattern detection engine (PPDE) comprises multiple processing units (PUs) customized to perform various modes of pattern recognition. The PUs are loaded with different patterns, and the input data to be matched is provided to the PUs in parallel. Each pattern has an opcode that defines what action to take when particular data in the input data stream either matches or does not match the corresponding data being compared during a clock cycle. Each of the PUs communicates selected information so that PUs may be cascaded to enable longer patterns to be matched or to allow more patterns to be processed in parallel for a particular input data stream.
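
    The abstract above describes per-byte comparison under pattern opcodes; the minimal Python sketch below illustrates that idea only. The opcode names (MATCH, WILDCARD), the ProcessingUnit class, and its restart-on-mismatch behavior are invented for illustration and are not taken from the patent.

        # One software "PU": each stored pattern byte carries an opcode, and one
        # input byte is compared per simulated clock cycle.
        MATCH, WILDCARD = 0, 1

        class ProcessingUnit:
            def __init__(self, pu_id, pattern):
                self.pu_id = pu_id          # identification reported on a match
                self.pattern = pattern      # list of (opcode, byte) pairs
                self.pos = 0                # next pattern position to compare

            def clock(self, input_byte):
                """Compare one input byte; return pu_id when the pattern completes."""
                opcode, expected = self.pattern[self.pos]
                if opcode == WILDCARD or input_byte == expected:
                    self.pos += 1
                    if self.pos == len(self.pattern):
                        self.pos = 0
                        return self.pu_id   # full pattern matched on this cycle
                else:
                    self.pos = 0            # simplistic restart on mismatch
                return None

        pu = ProcessingUnit(pu_id=1, pattern=[(MATCH, 0x47), (WILDCARD, 0x00), (MATCH, 0x54)])
        print([pu.clock(b) for b in bytes([0x47, 0x45, 0x54, 0x20])])  # [None, None, 1, None]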

    Intrusion detection using a network processor and a parallel pattern detection engine
    2.
    Invention application (In force)

    Publication number: US20050154916A1

    Publication date: 2005-07-14

    Application number: US10756904

    Filing date: 2004-01-14

    CPC classification number: H04L63/1416 H04L63/1441

    Abstract: An intrusion detection system (IDS) comprises a network processor (NP) coupled to a memory unit for storing programs and data. The NP is also coupled to one or more parallel pattern detection engines (PPDEs), which provide high-speed parallel detection of patterns in an input data stream. Each PPDE comprises many processing units (PUs), each designed to store intrusion signatures as a sequence of data with selected operation codes. The PUs have configuration registers for selecting modes of pattern recognition. Each PU compares a byte at each clock cycle. If a sequence of bytes from the input pattern matches a stored pattern, the identification of the PU detecting the pattern is output along with any applicable comparison data. By storing intrusion signatures in many parallel PUs, the IDS can process network data at the NP processing speed. PUs may be cascaded to increase intrusion coverage or to detect long intrusion signatures.
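
    As a rough illustration of the parallel-signature idea (not the patented microarchitecture), the sketch below loads each intrusion signature into its own software "PU", broadcasts every packet byte to all PUs on the same simulated cycle, and reports the ID of any PU whose signature completes, together with the byte offset. The class and function names are invented.

        class SignaturePU:
            def __init__(self, pu_id, signature):
                self.pu_id = pu_id
                self.signature = signature
                self.pos = 0

            def clock(self, byte):
                """Compare one byte per cycle; return True when the signature completes."""
                if byte == self.signature[self.pos]:
                    self.pos += 1
                    if self.pos == len(self.signature):
                        self.pos = 0
                        return True
                else:
                    # Simplified restart; real hardware handles overlaps differently.
                    self.pos = 1 if byte == self.signature[0] else 0
                return False

        def scan(packet, signatures):
            pus = [SignaturePU(i, sig) for i, sig in enumerate(signatures)]
            hits = []
            for offset, byte in enumerate(packet):   # one byte per simulated cycle
                for pu in pus:                       # every PU sees it in parallel
                    if pu.clock(byte):
                        hits.append((offset, pu.pu_id))
            return hits

        print(scan(b"GET /etc/passwd HTTP/1.0", [b"/etc/passwd", b"cmd.exe"]))  # [(14, 0)]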

    Parallel Pattern Detection Engine
    3.
    Invention application (In force)

    Publication number: US20070150622A1

    Publication date: 2007-06-28

    Application number: US11682576

    Filing date: 2007-03-06

    CPC classification number: G06K9/6202 G06K9/00986

    Abstract: A parallel pattern detection engine (PPDE) comprises multiple processing units (PUs) customized to perform various modes of pattern recognition. The PUs are loaded with different patterns, and the input data to be matched is provided to the PUs in parallel. Each pattern has an opcode that defines what action to take when particular data in the input data stream either matches or does not match the corresponding data being compared during a clock cycle. Each of the PUs communicates selected information so that PUs may be cascaded to enable longer patterns to be matched or to allow more patterns to be processed in parallel for a particular input data stream.
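
    To show only the cascading aspect mentioned above, here is a hedged Python sketch in which a pattern longer than one unit's storage is split across two chained units; a unit that finishes its segment arms the next unit for the following cycle. The "link_in" hand-off and all names are assumptions made for illustration.

        class CascadedPU:
            def __init__(self, segment, first=False):
                self.segment = segment
                self.pos = 0
                self.first = first      # the first PU in the chain is always armed
                self.link_in = False    # set by the previous PU when it completes

            def clock(self, byte):
                """Return True when this PU's segment completes on this cycle."""
                armed = self.first or self.link_in or self.pos > 0
                self.link_in = False
                if not armed:
                    return False
                if byte == self.segment[self.pos]:
                    self.pos += 1
                    if self.pos == len(self.segment):
                        self.pos = 0
                        return True
                else:
                    self.pos = 0
                return False

        def match_long_pattern(data, segments):
            """Return the offset where the concatenated segments finish matching."""
            chain = [CascadedPU(seg, first=(i == 0)) for i, seg in enumerate(segments)]
            for offset, byte in enumerate(data):
                done = [pu.clock(byte) for pu in chain]   # all PUs see the byte
                if done[-1]:
                    return offset                         # whole long pattern matched
                for i, finished in enumerate(done[:-1]):
                    if finished:
                        chain[i + 1].link_in = True       # arm next PU for next cycle
            return None

        print(match_long_pattern(b"xxabcdefgh", [b"abcd", b"efgh"]))  # 9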

    Configurable bi-directional bus for communicating between autonomous units
    4.
    Invention application (Expired)

    Publication number: US20050154858A1

    Publication date: 2005-07-14

    Application number: US10757673

    Filing date: 2004-01-14

    CPC classification number: G06F13/4027

    Abstract: Processing units (PUs) are coupled with a gated bi-directional bus structure that allows the PUs to be cascaded. Each PUn has communication logic and function logic. Each PUn is physically coupled to two other PUs, a PUp and a PUf. The communication logic receives Link Out data from a PUp and sends Link In data to a PUf. The communication logic has register bits for enabling and disabling the data transmission. The communication logic couples the Link Out data from a PUp to the function logic and couples Link In data to the PUp from the function logic in response to the register bits. The function logic receives output data from the PUn and Link In data from the communication logic and forms Link Out data which is coupled to the PUf. The function logic couples Link In data from the PUf to the PUn and to the communication logic.
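
    A very loose software analogue of the gated link described above is sketched below; the register-bit names and the way data is combined are invented here and should not be read as the patented circuit. It only shows that two register bits can independently gate the downstream (Link Out) and upstream (Link In) paths through a unit's communication logic.

        from dataclasses import dataclass

        @dataclass
        class CommLogic:
            fwd_enable: bool = True   # register bit gating Link Out data from PUp
            rev_enable: bool = True   # register bit gating Link In data back to PUp

            def to_function_logic(self, link_out_from_pup):
                # Couple Link Out data from PUp to the function logic, if enabled.
                return link_out_from_pup if self.fwd_enable else None

            def to_pup(self, link_in_from_function):
                # Couple Link In data from the function logic back to PUp, if enabled.
                return link_in_from_function if self.rev_enable else None

        @dataclass
        class FunctionLogic:
            def make_link_out(self, pun_output, forwarded):
                # Form Link Out data for PUf from this unit's output plus forwarded data.
                return (pun_output, forwarded)

        comm = CommLogic(fwd_enable=True, rev_enable=False)
        func = FunctionLogic()
        print(func.make_link_out("PUn result", comm.to_function_logic("data from PUp")))
        print(comm.to_pup("Link In toward PUp"))   # None: reverse path gated off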

    Parallel pattern detection engine
    5.
    Invention application (In force)

    Publication number: US20050154802A1

    Publication date: 2005-07-14

    Application number: US10757187

    Filing date: 2004-01-14

    CPC classification number: G06K9/6202 G06K9/00986

    Abstract: A parallel pattern detection engine (PPDE) comprises multiple processing units (PUs) customized to perform various modes of pattern recognition. The PUs are loaded with different patterns, and the input data to be matched is provided to the PUs in parallel. Each pattern has an opcode that defines what action to take when particular data in the input data stream either matches or does not match the corresponding data being compared during a clock cycle. Each of the PUs communicates selected information so that PUs may be cascaded to enable longer patterns to be matched or to allow more patterns to be processed in parallel for a particular input data stream.

    PROCESSING UNIT HAVING A DUAL CHANNEL BUS ARCHITECTURE
    6.
    Invention application (Pending, published)

    Publication number: US20050138324A1

    Publication date: 2005-06-23

    Application number: US10905100

    Filing date: 2004-12-15

    CPC classification number: G06F15/17368

    Abstract: A processing unit having a dual-channel bus architecture associated with a specific instruction set, configured to receive an input message and transmit an output message that is either identical to it or derived from it. A message consists of one opcode, with or without associated data, used to control each processing unit depending on logic conditions stored in dedicated registers in each unit. Processing units are serially connected but can work simultaneously for fully pipelined operation. The dual architecture is organized around two channels, labeled Channel 1 and Channel 2. Channel 1 mainly transmits an input message to all units, while Channel 2 mainly transmits the result of processing in a unit as an output message. Depending on the logic conditions, an input message not processed in a processing unit may be transmitted to the next one without any change.
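
    The sketch below is a simplified, assumed model of the two-channel message flow only: a message is an (opcode, data) pair, Channel 1 carries input messages down the chain of serially connected units, Channel 2 collects results, and a unit whose logic conditions do not select it forwards the Channel 1 message unchanged. The opcode and field names are illustrative.

        class Unit:
            def __init__(self, unit_id):
                self.unit_id = unit_id   # stands in for the unit's dedicated registers

            def step(self, message):
                """Return (channel1_out, channel2_out) for one incoming message."""
                opcode, data = message
                if opcode == "ADD" and data.get("target") == self.unit_id:
                    result = ("RESULT", {"unit": self.unit_id, "value": data["a"] + data["b"]})
                    return message, result    # processed: result leaves on Channel 2
                return message, None          # not selected: pass the message unchanged

        chain = [Unit(i) for i in range(4)]
        msg = ("ADD", {"target": 2, "a": 5, "b": 7})
        channel2 = []
        for unit in chain:                    # serially connected units
            msg, result = unit.step(msg)
            if result is not None:
                channel2.append(result)
        print(channel2)                       # [('RESULT', {'unit': 2, 'value': 12})]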

    Method and circuits to virtually increase the number of prototypes in artificial neural networks
    7.
    Granted patent (Expired)

    Publication number: US07254565B2

    Publication date: 2007-08-07

    Application number: US10137969

    Filing date: 2002-05-03

    CPC classification number: G06K9/6276 G06N3/063

    Abstract: An improved Artificial Neural Network (ANN) is disclosed that comprises a conventional ANN, a database block, and a compare-and-update circuit. The conventional ANN is formed by a plurality of neurons, each neuron having a prototype memory dedicated to storing a prototype and a distance evaluator to evaluate the distance between the input pattern presented to the ANN and the prototype stored therein. The database block holds: all the prototypes, arranged in slices, each slice capable of storing up to a maximum number of prototypes; the input patterns or queries to be presented to the ANN; and the distances resulting from the evaluation performed during the recognition/classification phase. The compare-and-update circuit compares each distance with the distance previously found for the same input pattern and updates, or not, the previously stored distance accordingly.
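
    A small sketch of the slice-by-slice idea follows, under assumed details: the physical ANN holds only one slice of prototypes at a time, the database block supplies successive slices and the queries, and a compare-and-update step keeps the smallest distance seen so far for each query. Manhattan distance is assumed here; the patent does not prescribe the code below.

        def manhattan(a, b):
            return sum(abs(x - y) for x, y in zip(a, b))

        def classify_with_slices(queries, prototype_slices):
            """Return, for each query, the best distance found across all slices."""
            best = [float("inf")] * len(queries)
            for slice_ in prototype_slices:          # load one slice into the "ANN"
                for qi, query in enumerate(queries):
                    d = min(manhattan(query, proto) for proto in slice_)
                    if d < best[qi]:                 # compare-and-update step
                        best[qi] = d
            return best

        slices = [[(0, 0), (10, 10)], [(4, 5), (9, 1)]]         # two slices of two prototypes
        print(classify_with_slices([(4, 4), (8, 0)], slices))   # [1, 2]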

    Circuits and method for shaping the influence field of neurons and neural networks resulting therefrom
    8.
    Granted patent (Expired)

    Publication number: US06347309B1

    Publication date: 2002-02-12

    Application number: US09223478

    Filing date: 1998-12-30

    CPC classification number: G06K9/6271 G06N3/063

    Abstract: The improved neural network of the present invention results from combining a dedicated logic block with a conventional neural network based upon a mapping of the input space, usually employed to classify input data by computing the distance between that input data and the prototypes memorized therein. The improved neural network is able to classify input data, represented for instance by a vector A, even when some of its components are noisy or unknown during either the learning or the recognition phase. To that end, influence fields of various and different shapes are created for each neuron of the conventional neural network. The logic block transforms at least some of the n components (A1, ..., An) of the input vector A into the m components (V1, ..., Vm) of a network input vector V according to a linear or non-linear transform function F. In turn, vector V is applied as the input data to the conventional neural network. The transform function F is such that certain components of vector V are not modified, e.g. Vk = Aj, while other components are transformed as mentioned above, e.g. Vi = Fi(A1, ..., An). In addition, one or more components of vector V can be used to compensate for an offset that is present in the distance evaluation of vector V. Because the logic block is placed in front of the conventional neural network, no modification of that network is required.
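
    The following toy transform shows the structure of F described above: some components of the network input V are copied straight from the input vector A (Vk = Aj), while others combine several components (Vi = Fi(A1, ..., An)). The particular functions chosen here are invented purely for illustration; any real shaping of the influence fields depends on the application.

        def transform(A):
            """Map the n-component input A to the m-component network input V."""
            a1, a2, a3 = A
            v1 = a1                  # Vk = Aj: copied unchanged
            v2 = a3                  # another pass-through component
            v3 = abs(a1 - a2)        # Vi = Fi(A1, ..., An): combines several inputs
            return (v1, v2, v3)

        # V, not A, is what the unmodified conventional ANN compares to its prototypes.
        print(transform((3, 7, 2)))  # (3, 2, 4)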

    METHOD FOR FORMING A THREE-DIMENSIONAL STRUCTURE OF METAL-INSULATOR-METAL TYPE
    9.
    Invention application (In force)

    Publication number: US20110227194A1

    Publication date: 2011-09-22

    Application number: US13052262

    Filing date: 2011-03-21

    CPC classification number: H01L23/5223 H01L28/60 H01L2924/0002 H01L2924/00

    Abstract: A method for forming a capacitive structure in a metal level of an interconnection stack including a succession of metal levels and via levels, the method comprising the steps of: forming, in the metal level, at least one conductive track in which a trench is defined; conformally forming an insulating layer on the structure; forming a conductive material in the trench; and planarizing the structure.

    System for scaling images using neural networks
    10.
    Granted patent (In force)

    Publication number: US07734117B2

    Publication date: 2010-06-08

    Application number: US12021511

    Filing date: 2008-01-29

    CPC classification number: G06T3/4046

    Abstract: An artificial neural network (ANN) based system adapted to process an input pattern to generate a related output pattern having a different number of components than the input pattern. The system (26) comprises an ANN (27) and a memory (28), such as a DRAM memory, that are serially connected. The input pattern (23) is applied to a processor (22), where it can be processed or not (the most general case), before it is applied to the ANN and stored therein as a prototype (if learned). A category is associated with each stored prototype. The processor computes the coefficients that allow the estimated values of the output pattern to be determined; these coefficients are the components of a so-called intermediate pattern (24). Assuming the ANN has already learned a number of input patterns, when a new input pattern is presented to the ANN in the recognition phase, the category of the closest prototype is output and used as a pointer into the memory. In turn, the memory outputs the corresponding intermediate pattern. The input pattern and the intermediate pattern are applied to the processor to construct the output pattern (25) using the coefficients. Typically, the input pattern is a block of pixels in the field of image scaling.
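
    A deliberately small sketch of the lookup-plus-combine flow is given below; the nearest-prototype search, the coefficient layout, and the weighted-sum reconstruction are assumptions for illustration rather than the patented coefficient scheme. It shows a 4-pixel input block being mapped, via the category of its closest prototype, to a stored set of coefficients that produce a 3-component output pattern.

        def closest_category(block, prototypes):
            """Nearest-prototype search; the category acts as a pointer into memory."""
            dist = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
            return min(prototypes, key=lambda cat: dist(block, prototypes[cat]))

        def scale_block(block, prototypes, coefficient_memory):
            category = closest_category(block, prototypes)      # ANN output
            coeffs = coefficient_memory[category]                # intermediate pattern
            # Each output component is a weighted sum of the input components.
            return [sum(w * p for w, p in zip(row, block)) for row in coeffs]

        prototypes = {0: (0, 0, 0, 0), 1: (200, 200, 200, 200)}  # learned 2x2 blocks
        coefficient_memory = {
            0: [[1, 0, 0, 0], [0.5, 0.5, 0, 0], [0, 1, 0, 0]],
            1: [[1, 0, 0, 0], [0.25, 0.25, 0.25, 0.25], [0, 0, 0, 1]],
        }
        print(scale_block((180, 190, 200, 210), prototypes, coefficient_memory))
        # [180, 195.0, 210]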
