Abstract:
A distributed packet processing apparatus capable of distributing packet load across a plurality of packet processing engines is provided. The distributed packet processing apparatus includes a plurality of processing engines each configured to process allocated packets; a first tag generating unit configured to allocate an input packet to the processing engine, among the plurality of processing engines, whose processing engine index corresponds to a tag index for the input packet; a second tag generating unit configured to calculate a tag index for an output packet; and an index conversion unit configured to convert the tag index for the output packet to one processing engine index among the plurality of processing engine indexes and to allocate the output packet to the processing engine having that index, such that loads are distributed among the plurality of processing engines.
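As a rough illustration of the tag-to-engine mapping described above, the sketch below hashes packet header fields into a tag index and converts it to an engine index with a simple modulo. The hash function, the engine count, and all names are assumptions for illustration, not details taken from the abstract.

```python
# Hypothetical sketch of tag-index based distribution across processing
# engines; NUM_ENGINES, tag_index and engine_for_packet are illustrative.
import zlib

NUM_ENGINES = 4  # number of packet processing engines (assumed)

def tag_index(packet_header: bytes) -> int:
    """First tag generating unit: derive a tag index from header fields."""
    return zlib.crc32(packet_header)

def engine_for_packet(packet_header: bytes) -> int:
    """Index conversion: map the tag index onto one of the processing
    engine indexes so that load is spread across all engines."""
    return tag_index(packet_header) % NUM_ENGINES

# Packets of the same flow always land on the same engine, while
# distinct flows are spread over the engines.
queues = {i: [] for i in range(NUM_ENGINES)}
for hdr in (b"10.0.0.1>10.0.0.2:80", b"10.0.0.3>10.0.0.4:443"):
    queues[engine_for_packet(hdr)].append(hdr)
```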
Abstract:
A high-speed content inspection apparatus for minimizing system overhead is provided. The high-speed content inspection apparatus extracts content in units of sub-patterns by inspecting a payload of a packet in units of sub-patterns, and extracts target content by inspecting the correlation between the extracted sub-patterns. If a sub-pattern present at the end of a payload is smaller than the predetermined sub-pattern unit, the position information of the sub-pattern at the end of the payload is rolled back and the correlation is inspected. Accordingly, without having to add further hardware or higher-performance hardware, target content can be efficiently detected in real time.
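The following sketch is one possible reading of the sub-pattern inspection and tail rollback: the target content is cut into fixed-size sub-patterns, their match positions in the payload are correlated, and a final piece shorter than the unit is compared in a window rolled back by the missing bytes. The 4-byte unit and the function name are assumptions.

```python
UNIT = 4  # assumed sub-pattern unit size in bytes

def content_offset(payload: bytes, content: bytes) -> int:
    """Return the offset at which `content` starts in `payload`, or -1."""
    if len(content) < UNIT:
        return payload.find(content)

    # Cut the target content into UNIT-sized sub-patterns; a trailing
    # piece shorter than UNIT is handled separately.
    subs = [content[i:i + UNIT] for i in range(0, len(content), UNIT)]
    tail = subs.pop() if len(subs[-1]) < UNIT else None

    start = payload.find(subs[0])
    while start != -1:
        # Correlation check: each full sub-pattern must directly follow
        # the previous one in the payload.
        ok = all(payload[start + i * UNIT: start + (i + 1) * UNIT] == sub
                 for i, sub in enumerate(subs))
        if ok and tail is not None:
            # The final sub-pattern is shorter than UNIT, so its window is
            # rolled back by the missing bytes and a full-width comparison
            # is made against the end of the match.
            end = start + len(content)
            window = payload[end - UNIT:end]
            ok = len(window) == UNIT and window.endswith(tail)
        if ok:
            return start
        start = payload.find(subs[0], start + 1)
    return -1

# e.g. content_offset(b"xxGET /admin HTTP/1.1yy", b"/admin HTTP") -> 6
```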
Abstract:
A packet scheduling apparatus and method to fairly share network bandwidth between multiple subscribers and to fairly share the bandwidth allocated to each subscriber between multiple flows are provided. The packet scheduling method includes calculating a first bandwidth for each subscriber so that the total bandwidth set for the transmission of packets is fairly shared between the multiple subscribers; calculating a second bandwidth for each flow so that the first bandwidth is fairly shared between the one or more flows that belong to each of the multiple subscribers; and scheduling a packet of each of the one or more flows based on the second bandwidth.
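A minimal sketch of the two-level fair share, assuming an equal split at both levels (the abstract does not state how the shares are actually computed, and the function name is illustrative):

```python
def per_flow_bandwidth(total_bw: float, flows_per_subscriber: dict) -> dict:
    """Return {(subscriber, flow): bandwidth} for a two-level fair share."""
    n_subs = len(flows_per_subscriber)
    first_bw = total_bw / n_subs                 # fair share per subscriber
    shares = {}
    for sub, flows in flows_per_subscriber.items():
        second_bw = first_bw / len(flows)        # fair share per flow
        for flow in flows:
            shares[(sub, flow)] = second_bw
    return shares

# Example: 1 Gbit/s shared by two subscribers; subscriber A has two flows.
print(per_flow_bandwidth(1_000_000_000, {"A": ["f1", "f2"], "B": ["f3"]}))
# {('A', 'f1'): 250000000.0, ('A', 'f2'): 250000000.0, ('B', 'f3'): 500000000.0}
```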
Abstract:
A packet scheduling method and apparatus which allow multiple flows that require data transmission to the same output port of a network device, such as a router, to fairly share bandwidth are provided. The packet scheduling method includes calculating an expected time of arrival of the (k+1)-th packet subsequent to the currently input k-th packet of each individual flow, by use of the bandwidth allocated fairly to each of the flows and the length of the k-th packet; in response to the arrival of the (k+1)-th packet, comparing the expected time of arrival of the (k+1)-th packet to its actual time of arrival; and scheduling the (k+1)-th packet of each flow according to the comparison result.
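A hedged sketch of the expected-arrival-time comparison, assuming the expected time is the k-th packet's arrival plus its transmission time at the flow's fair bandwidth; what the scheduler then does with a conforming or non-conforming packet is not specified in the abstract, so the sketch only reports the comparison result.

```python
class FlowState:
    """Per-flow state for the expected-arrival-time comparison (illustrative)."""

    def __init__(self, fair_bw_bps: float):
        self.fair_bw = fair_bw_bps   # bandwidth fairly allocated to this flow
        self.expected_next = 0.0     # expected arrival time of the next packet

    def on_packet(self, arrival_time: float, length_bits: int) -> bool:
        """Handle the k-th packet: report whether it arrived no earlier than
        expected, and compute the expected arrival of the (k+1)-th packet."""
        conforms = arrival_time >= self.expected_next
        # expected arrival of the (k+1)-th packet: the k-th packet's arrival
        # plus its transmission time at the flow's fair bandwidth
        self.expected_next = arrival_time + length_bits / self.fair_bw
        return conforms

# Example: a 10 Mbit/s flow sending 12,000-bit packets should space them
# at least 1.2 ms apart to stay within its fair share.
f = FlowState(10_000_000)
print(f.on_packet(0.0000, 12_000))  # True  (first packet always conforms)
print(f.on_packet(0.0005, 12_000))  # False (arrived before the expected 1.2 ms)
```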
Abstract:
Provided are a terminal-based method of preventing cyber-attacks and a terminal apparatus therefor. The terminal apparatus includes: a packet processor configured to determine whether excessive traffic is generated by a transmission packet; an anomalous traffic detecting unit configured to determine whether anomalous traffic is generated, using a first condition that the excessive traffic is maintained for a first time period and a second condition that the generation count of the same kind of transmission packets exceeds a predetermined threshold value within a second time period; and a traffic block request unit configured to generate a traffic block request signal for requesting blockage of the transmission packet according to the result of determining whether anomalous traffic is generated.
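An illustrative reading of the two-condition check follows; the rate threshold, the two time periods, and the notion of the "same kind" of packet (here just a caller-supplied key) are assumptions, not values from the abstract.

```python
import time
from collections import deque

EXCESS_RATE = 1_000      # packets/s regarded as excessive (assumed)
FIRST_PERIOD = 5.0       # seconds the excess must persist (assumed)
SECOND_PERIOD = 10.0     # window for counting same-kind packets (assumed)
KIND_THRESHOLD = 5_000   # same-kind packet count threshold (assumed)

class AnomalousTrafficDetector:
    def __init__(self):
        self.excess_since = None   # when excessive traffic started, or None
        self.kind_times = {}       # packet kind -> deque of send timestamps

    def on_packet(self, kind, rate_pps, now=None) -> bool:
        """Return True when both conditions hold, i.e. a block request
        should be generated for this kind of transmission packet."""
        now = time.monotonic() if now is None else now

        # Condition 1: excessive traffic maintained for FIRST_PERIOD
        if rate_pps > EXCESS_RATE:
            if self.excess_since is None:
                self.excess_since = now
        else:
            self.excess_since = None
        cond1 = (self.excess_since is not None
                 and now - self.excess_since >= FIRST_PERIOD)

        # Condition 2: count of same-kind packets within SECOND_PERIOD
        # exceeds the threshold
        q = self.kind_times.setdefault(kind, deque())
        q.append(now)
        while q and now - q[0] > SECOND_PERIOD:
            q.popleft()
        cond2 = len(q) > KIND_THRESHOLD

        return cond1 and cond2
```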
Abstract:
A routing apparatus and method for a mobile ad-hoc network are provided. The routing apparatus selects a transmission path differently based on the priority of a message, thereby distributing paths such that the overall energy balance between mobile nodes can be maintained. Accordingly, congestion of traffic on a particular path can be prevented, and the overall performance and the lifetime of the network can be enhanced.
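A toy sketch of priority-dependent path selection, assuming two priority levels and a per-path residual-energy metric; the actual selection rule and metrics are not given in the abstract.

```python
def select_path(paths, priority):
    """paths: list of (hop_count, min_residual_energy) candidate routes.
    priority: 'high' or 'normal' (assumed two-level scheme)."""
    if priority == "high":
        # latency first: fewest hops
        return min(paths, key=lambda p: p[0])
    # otherwise spread the load: prefer the path whose weakest node has the
    # most energy left, so energy drain is balanced across the network
    return max(paths, key=lambda p: p[1])

# Example: two candidate routes to the destination
routes = [(3, 0.40), (5, 0.85)]       # (hops, lowest battery fraction on path)
print(select_path(routes, "high"))    # (3, 0.4)
print(select_path(routes, "normal"))  # (5, 0.85)
```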