Abstract:
An intrusion detection system (IDS) comprises a network processor (NP) coupled to a memory unit for storing programs and data. The NP is also coupled to one or more parallel pattern detection engines (PPDEs), which provide high-speed parallel detection of patterns in an input data stream. Each PPDE comprises many processing units (PUs), each designed to store intrusion signatures as a sequence of data with selected operation codes. The PUs have configuration registers for selecting modes of pattern recognition. Each PU compares one byte per clock cycle. If a sequence of bytes from the input stream matches a stored pattern, the identification of the PU detecting the pattern is output along with any applicable comparison data. By storing intrusion signatures in many parallel PUs, the IDS can process network data at the NP's processing speed. PUs may be cascaded to increase intrusion coverage or to detect long intrusion signatures.
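To make the parallel-matching idea concrete, here is a minimal software sketch of one PPDE; the class names (ProcessingUnit, PPDE) and the shift-register-style comparison are illustrative assumptions for the example, not the patented hardware design. Each simulated PU examines one input byte per cycle against its own stored signature and reports its ID, together with the match position, when the signature completes.

from collections import deque

# Illustrative model only: every PU holds one intrusion signature and, like a
# shift register feeding a bank of comparators, checks each cycle whether the
# bytes seen so far end with that signature.
class ProcessingUnit:
    def __init__(self, pu_id: int, signature: bytes):
        self.pu_id = pu_id
        self.signature = signature
        self.window = deque(maxlen=len(signature))   # last N input bytes

    def clock(self, byte: int):
        """Consume one input byte; return this PU's ID on a complete match."""
        self.window.append(byte)
        return self.pu_id if bytes(self.window) == self.signature else None

class PPDE:
    """All PUs see the same input byte in the same cycle (concurrently in hardware)."""
    def __init__(self, signatures):
        self.pus = [ProcessingUnit(i, sig) for i, sig in enumerate(signatures)]

    def scan(self, data: bytes):
        for offset, byte in enumerate(data):
            for pu in self.pus:                      # a hardware PPDE does this in parallel
                hit = pu.clock(byte)
                if hit is not None:
                    yield hit, offset                # PU ID plus end position of the match

engine = PPDE([b"/etc/passwd", b"\x90\x90\x90\x90"])
print(list(engine.scan(b"GET /etc/passwd HTTP/1.0")))   # -> [(0, 14)]

In this model, cascading PUs for longer signatures would correspond to chaining one PU's match state into the next, so a signature longer than a single PU's storage can span several units.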
Abstract:
A flow control method and system including an algorithm for deciding to transmit an arriving packet into a processing queue or to discard it, or, in the case of instructions or packets that must not be discarded, a similar method and system for deciding at a service event to transmit an instruction or packet into a processing queue or to skip the service event. The transmit probability is increased or decreased in consideration of minimum and maximum limits for each flow, aggregate limits for sets of flows, relative priority among flows, queue occupancy, and rate of change of queue occupancy. The effects include protection of flows below their minimum rates, correction of flows above their maximum rates, and, for flows between minimum and maximum rates, reduction of constituent flows of an aggregate that is above its aggregate maximum. Practice of the invention results in low queue occupancy during steady congestion.
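As a rough illustration of this kind of decision loop, the sketch below pairs a per-arrival transmit/discard test with a periodic update of the transmit probability; the specific update rule, constants, and names (Flow, update_transmit_prob, on_packet_arrival) are assumptions made for clarity, not the claimed algorithm.

import random

# Illustrative sketch of a transmit-probability flow-control loop.  The
# increase/decrease rule and the constants below are chosen for readability;
# they are not the algorithm described in the abstract.
class Flow:
    def __init__(self, min_rate, max_rate):
        self.min_rate = min_rate          # guaranteed rate (packets per interval)
        self.max_rate = max_rate          # ceiling rate (packets per interval)
        self.offered = 0                  # packets seen this measurement interval
        self.transmit_prob = 1.0

def update_transmit_prob(flow, queue_len, prev_queue_len, queue_limit):
    """Raise or lower the flow's transmit probability once per measurement interval."""
    if flow.offered <= flow.min_rate:
        flow.transmit_prob = 1.0                            # protect flows below minimum
    elif flow.offered > flow.max_rate:
        flow.transmit_prob *= flow.max_rate / flow.offered  # correct flows above maximum
    else:
        # Between min and max: react to queue occupancy and its rate of change.
        congested = queue_len > 0.5 * queue_limit or queue_len > prev_queue_len
        if congested:
            flow.transmit_prob = max(0.05, flow.transmit_prob * 0.9)   # back off
        else:
            flow.transmit_prob = min(1.0, flow.transmit_prob + 0.02)   # recover
    flow.offered = 0                                        # start a new interval

def on_packet_arrival(flow, queue, queue_limit):
    """Per-packet transmit/discard decision using the current probability."""
    flow.offered += 1
    if len(queue) < queue_limit and random.random() < flow.transmit_prob:
        queue.append("pkt")               # transmit into the processing queue
        return True
    return False                          # discard (or, for undiscardable work, skip the service event)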
Abstract:
A process control method and system in which transmit decisions and certain measurements are partitioned into one logical entity (the Data Plane) and the algorithm computation that updates transmit probabilities is partitioned into a second logical entity (the Control Plane), the two entities periodically communicating fresh measurements from the Data Plane to the Control Plane and adjusted transmit probabilities from the Control Plane to the Data Plane. The transmit probability may be used in transmit/discard decisions for packets or instructions, exercised at every arrival of a packet or instruction. In an alternative embodiment, the transmit probability may be used in transmit/delay decisions for waiting instructions or packets, exercised at every service event.
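The two-entity split can be pictured with the small sketch below; the class names, the measurements chosen, and the simple proportional update rule are illustrative assumptions for the example, not the specific method of the abstract.

import random

class DataPlane:
    """Makes per-packet transmit/discard decisions and collects raw measurements."""
    def __init__(self):
        self.transmit_prob = 1.0        # refreshed periodically by the ControlPlane
        self.arrivals = 0
        self.transmitted = 0

    def on_arrival(self):
        self.arrivals += 1
        if random.random() < self.transmit_prob:
            self.transmitted += 1       # transmit into the processing queue
        # else: discard (or, for undiscardable work, delay to a later service event)

    def read_and_reset_measurements(self):
        fresh = (self.arrivals, self.transmitted)
        self.arrivals = self.transmitted = 0
        return fresh

class ControlPlane:
    """Periodically recomputes the transmit probability from fresh measurements."""
    def __init__(self, data_plane, target_rate, period_s=0.1):
        self.dp, self.target, self.period = data_plane, target_rate, period_s

    def run_once(self):                 # called once per refresh period
        arrivals, _ = self.dp.read_and_reset_measurements()
        offered_rate = arrivals / self.period
        if offered_rate > self.target:  # simple proportional rule for the sketch
            self.dp.transmit_prob = max(0.05, self.target / offered_rate)
        else:
            self.dp.transmit_prob = min(1.0, self.dp.transmit_prob + 0.05)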
Abstract:
Mobile network services are performed in a mobile data network in a way that is transparent to most of the existing equipment in the mobile data network. The mobile data network includes a radio access network and a core network. A first service mechanism in the radio access network breaks out data coming from a basestation and performs one or more mobile network services at the edge of the mobile data network based on the broken-out data. These services may include caching of data, data or video compression, push-based services, charging, application serving, analytics, security, data filtering, and new revenue-producing services, among others. This architecture allows new mobile network services to be performed at the edge of a mobile data network within the infrastructure of an existing mobile data network.
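One way to picture the break-out step is the toy routing function below; the service table, matching rules, and callback names are invented for the illustration and are not taken from the abstract. Uplink traffic that an edge service can handle is diverted locally, and everything else is forwarded untouched toward the core, which is what keeps the arrangement transparent to existing equipment.

# Illustrative sketch only: decide per packet whether to break it out to an
# edge service or forward it unchanged to the core network.
EDGE_SERVICES = {
    "http-cache": lambda pkt: pkt.get("dst_port") == 80,
    "video-compress": lambda pkt: pkt.get("content_type") == "video",
}

def handle_uplink(pkt, serve_at_edge, forward_to_core):
    """Break out packets an edge service can handle; pass everything else through."""
    for name, matches in EDGE_SERVICES.items():
        if matches(pkt):
            return serve_at_edge(name, pkt)     # served transparently at the edge
    return forward_to_core(pkt)                 # existing core network is unaffected

print(handle_uplink({"dst_port": 80},
                    serve_at_edge=lambda name, pkt: f"served at edge by {name}",
                    forward_to_core=lambda pkt: "forwarded to core"))
# -> served at edge by http-cache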
Abstract:
A method for developing portable network processor applications and/or managing heterogeneous network processors in a network is disclosed. The network includes host processor(s) utilizing system configuration application(s) that are network processor independent. In one aspect, the method and system include using standardized interface(s) for each network processor, using a standardized transport layer compatible with the interface(s), and providing a generic message application layer. The generic message application layer defines generic payload(s) and message type(s) for configuration communications between the network and host processors. In another aspect, the method and system include providing packet processing shell(s) and generic protocol software that is coupled with the packet processing shell(s) through standard interface(s), is network processor independent, and performs operations for packet processing. The method also includes providing a library that includes network processor specific information for performing the operations, and providing block(s) for performing other network processor specific operations.
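The layering can be sketched as below: the host builds generic, network-processor-independent configuration messages, and a per-processor driver behind a standardized interface translates them into native commands. All class names, message types, and payload fields here are invented for the example; they are not the interfaces defined by the patent.

from abc import ABC, abstractmethod

class GenericMessage:
    """Generic message type plus payload, independent of any network processor."""
    def __init__(self, msg_type: str, payload: dict):
        self.msg_type, self.payload = msg_type, payload

class NetworkProcessorInterface(ABC):
    """Standardized interface every network-processor driver must implement."""
    @abstractmethod
    def send(self, msg: GenericMessage) -> None: ...

class VendorANp(NetworkProcessorInterface):
    def send(self, msg: GenericMessage) -> None:
        # A real driver would translate the generic message into vendor A's
        # native configuration commands; the sketch just prints it.
        print(f"[vendor A] {msg.msg_type}: {msg.payload}")

def configure(nps, rules):
    """Host-side code never needs to know which network processor it talks to."""
    msg = GenericMessage("ADD_CLASSIFIER_RULES", {"rules": rules})
    for np in nps:
        np.send(msg)

configure([VendorANp()], [{"match": "tcp/80", "action": "forward"}])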
Abstract:
The present invention relates to a method and system for compressing a tree structure. The method of the present invention includes providing a compressed format block for representing a plurality of levels of the tree structure, where the plurality of levels comprises a set of nodes. The method also includes compressing each node in the set of nodes into the compressed format block, such that the plurality of levels is traversed in a single memory access.
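For a binary search tree, the idea can be sketched as follows: a parent and its two children (two levels, three keys) are packed into one compressed format block, so each block fetch during a lookup descends two levels instead of one. The block layout and helper names are assumptions made for the illustration, not the compressed format of the patent.

# Illustrative block layout: one parent key, its two child keys, and four
# pointers to the grandchild blocks.  Missing keys and children are None.
def build_block(keys3, child_blocks4):
    return {"keys": keys3, "children": child_blocks4}

def search(block, key):
    """Each loop iteration costs one block access but descends two tree levels."""
    while block is not None:
        parent, left, right = block["keys"]
        if key in (parent, left, right):
            return True
        # Ordinary BST comparisons select one of the four grandchild subtrees.
        if key < parent:
            idx = 0 if (left is not None and key < left) else 1
        else:
            idx = 2 if (right is not None and key < right) else 3
        block = block["children"][idx]
    return False

# Levels {20} and {10, 30} share one block; the four leaves hang below it.
leaves = [build_block((k, None, None), [None] * 4) for k in (5, 15, 25, 35)]
root = build_block((20, 10, 30), leaves)
print(search(root, 15), search(root, 7))   # -> True False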