Abstract:
Methods, apparatuses and systems facilitating enhanced classification of network traffic based on observed flow-based and/or host-based behaviors.
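The following sketch illustrates one possible reading of behavior-based classification; the feature names ("avg_packet_size", "concurrent_peers") and thresholds are assumptions introduced for illustration and are not part of the described embodiment.

    # Hypothetical sketch: classify a data flow from observed flow-based and
    # host-based behaviors. Feature names and thresholds are illustrative.

    def classify_flow(flow_stats, host_stats):
        """Return a traffic class suggested by observed behaviors."""
        # Flow-based behavior: many small packets in both directions often
        # indicates interactive rather than bulk-transfer traffic.
        if flow_stats["avg_packet_size"] < 200 and flow_stats["bidirectional"]:
            flow_hint = "interactive"
        else:
            flow_hint = "bulk"

        # Host-based behavior: a host with many concurrent peers suggests
        # peer-to-peer activity regardless of port numbers.
        if host_stats["concurrent_peers"] > 50:
            return "p2p-suspected"
        return flow_hint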
Abstract:
Control and management of bandwidth at networks remote from the physical bandwidth management infrastructure. Particular implementations allow network equipment at a plurality of data centers, for example, to manage network traffic at remote branch office networks without deployment of network devices at the remote branch office networks.
Abstract:
An exemplary embodiment provides for a method for use in a network device operative to facilitate classification of data flows in a multipath network topology by intelligently mirroring one or more packets of the data flows to a set of cooperating network devices. The method, in one implementation, can involve tracking asymmetric data flows and synchronizing at least portions of the asymmetric data flows between a plurality of network devices to facilitate classification and other operations in multipath network topologies. In one implementation, the present invention allows a plurality of network devices, each disposed on the boundaries of an autonomous system (such as an ISP network), to communicate enough information about the data flows encountered at each of the network devices to enable more accurate data flow classification. Since mirrored traffic may reduce the bandwidth available for regular network traffic, certain implementations of the invention include optimizations directed to reducing the amount of traffic mirrored between network devices.
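A minimal sketch of one such optimization follows: only the leading packets of each flow are mirrored to cooperating devices, which is typically enough for a partner to classify an asymmetric flow. The flow-table structure, packet fields, and the mirror limit are assumptions introduced for illustration.

    from dataclasses import dataclass, field

    MIRROR_PACKET_LIMIT = 4   # assumed: mirror only a flow's first few packets

    @dataclass
    class PartnerDevice:
        name: str
        mirrored: list = field(default_factory=list)
        def send_copy(self, pkt):          # stand-in for a synchronization channel
            self.mirrored.append(pkt)

    flow_table = {}                        # 5-tuple -> packets seen locally

    def handle_packet(pkt, partners):
        """pkt is a dict carrying the usual 5-tuple keys."""
        key = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
        seen = flow_table.get(key, 0)
        flow_table[key] = seen + 1
        # Optimization: stop mirroring once partners have enough of the flow
        # to classify it, reducing synchronization bandwidth.
        if seen < MIRROR_PACKET_LIMIT:
            for partner in partners:
                partner.send_copy(pkt)
        return pkt                         # normal forwarding path (omitted)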
Abstract:
Methods, apparatuses and systems directed to a network traffic synchronization mechanism facilitating the deployment of network devices in redundant network topologies. In certain embodiments, when a first network device directly receives network traffic, it copies the network traffic and transmits it to at least one partner network device. The partner network device processes the copied network traffic, just as if it had received it directly, but, in one embodiment, discards the traffic before forwarding it on to its destination. In one embodiment, the partner network devices are operative to exchange directly received network traffic. As a result, the present invention provides enhanced reliability and seamless failover. Each unit, for example, is ready at any time to take over for the other unit should a failure occur. As discussed below, the network traffic synchronization mechanism can be applied to a variety of network devices, such as firewalls, gateways, network routers, and bandwidth management devices.
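A minimal sketch of the synchronization behavior described above, assuming a partner object with a send_copy method: copies received from a partner are processed exactly like directly received traffic but are discarded rather than forwarded.

    def process(packet, received_from_partner, partners):
        update_flow_state(packet)      # classify, count, and apply policy

        if received_from_partner:
            return None                # synchronized copy: discard, do not forward

        for partner in partners:       # share directly received traffic
            partner.send_copy(packet)
        return packet                  # forward toward its destination

    def update_flow_state(packet):
        """Placeholder for classification and accounting logic."""
        pass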
Abstract:
Methods and apparatuses allowing for dynamic partitioning of a network resource among a plurality of users. In one embodiment, the invention involves recognizing new users of a network resource; creating user partitions on demand for new users, wherein the user partition is operable to allocate a portion of a network resource; and, reclaiming inactive user partitions for subsequent new users.
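The following sketch illustrates the on-demand creation and reclamation described above; the partition pool size, inactivity timeout, and per-user rate share are assumptions introduced for illustration.

    import time

    PARTITION_POOL_SIZE = 100
    INACTIVITY_TIMEOUT = 300           # seconds (assumed)

    partitions = {}                    # user_id -> {"rate": ..., "last_seen": ...}

    def get_partition(user_id, link_rate):
        now = time.time()
        if user_id not in partitions:
            if len(partitions) >= PARTITION_POOL_SIZE:
                reclaim_inactive(now)
            # Each new user receives a partition allocating a share of the link.
            partitions[user_id] = {"rate": link_rate // PARTITION_POOL_SIZE,
                                   "last_seen": now}
        partitions[user_id]["last_seen"] = now
        return partitions[user_id]

    def reclaim_inactive(now):
        for user_id, part in list(partitions.items()):
            if now - part["last_seen"] > INACTIVITY_TIMEOUT:
                del partitions[user_id]   # free the partition for a new user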
Abstract:
An example embodiment of the invention provides a process for lockless processing of hierarchical bandwidth partition configurations in multiprocessor architectures. In one embodiment, the process runs in the data plane of a network processing unit (NPU) and receives a packet for a partition from a child partition through a work queue. The process determines a suggested target bandwidth rate for the receiving partition's child partitions, based in part on a count of active child partitions, if a predefined time interval has passed. The process adopts a target bandwidth rate for the receiving partition suggested by the receiving partition's parent partition, if the receiving partition is not a root partition and the predefined time interval has passed. The process then transmits the packet to the receiving partition's parent partition through the work queue, if the receiving partition is not a root partition. Otherwise, the process transmits the packet to a port.
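A simplified, single-threaded sketch of the per-partition step described above follows. The partition fields, the equal-share rate computation, the interval value, and the list-based work queue are assumptions meant only to mirror the described structure.

    import time

    INTERVAL = 0.125    # seconds between rate recomputations (assumed)

    def process_packet(partition, packet, work_queue, port):
        now = time.time()
        if now - partition["last_update"] >= INTERVAL:
            # Suggest a rate for each active child: an equal share of this
            # partition's own target rate (simplified fair split).
            active = max(1, partition["active_children"])
            partition["suggested_child_rate"] = partition["target_rate"] / active
            # A non-root partition adopts the rate its parent last suggested.
            if partition["parent"] is not None:
                partition["target_rate"] = partition["parent"]["suggested_child_rate"]
            partition["last_update"] = now

        if partition["parent"] is not None:
            # Hand the packet up the hierarchy through the work queue.
            work_queue.append((partition["parent"], packet))
        else:
            port.transmit(packet)   # root partition: send to the output port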
Abstract:
Methods, apparatuses and systems directed to enhanced packet load shedding mechanisms implemented in various network devices. In one implementation, the present invention enables a selective load shedding mechanism that intelligently discards packets to allow or facilitate management access during DoS attacks or other high traffic events. In one implementation, the present invention is directed to a selective load shedding mechanism that, while shedding load necessary to allow a network device to operate appropriately, does not attempt to control traffic flows, which allows for other processes to process, classify, diagnose and/or monitor network traffic during high traffic volume periods. In another implementation, the present invention provides a packet load shedding mechanism that reduces the consumption of system resources during periods of high network traffic volume.
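A minimal sketch of selective shedding under load: when the input queue is under pressure, packets destined for the device's management interface are admitted while other traffic is dropped, without any attempt to control the flows themselves. The management address, ports, and queue threshold are assumptions introduced for illustration.

    MGMT_ADDR = "10.0.0.1"          # assumed management interface address
    MGMT_PORTS = {22, 443}
    QUEUE_LIMIT = 10000

    def admit(packet, queue_depth):
        if queue_depth < QUEUE_LIMIT:
            return True                     # normal load: admit everything
        # Under load, keep management traffic so administrators retain access
        # during DoS attacks or other high-traffic events.
        if packet["dst"] == MGMT_ADDR and packet["dport"] in MGMT_PORTS:
            return True
        return False                        # shed the remaining load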
Abstract:
Methods, apparatuses and systems allowing for bandwidth management schemes responsive to utilization characteristics associated with individual users. In one embodiment, the present invention allows network administrators to penalize users who carry out specific questionable or suspicious activities, such as the use of proxy tunnels to disguise the true nature of data flows in order to evade classification and control by bandwidth management devices. In one embodiment, each individual user may be accorded an initial suspicion score. Each time the user is associated with a questionable or suspicious activity (for example, detection of a connection setup to an outside HTTP tunnel, or of a peer-to-peer application flow), his or her suspicion score is downgraded. Data flows corresponding to users with sufficiently low suspicion scores, in one embodiment, can be treated differently from data flows associated with other users. For example, different or more rigorous classification rules and policies can be applied to the data flows associated with suspicious users.
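The following sketch illustrates the scoring scheme described above; the initial score, penalty values, and threshold are assumptions introduced for illustration.

    INITIAL_SCORE = 100
    PENALTY = {"http_tunnel": 25, "p2p_flow": 10}   # assumed penalty values
    STRICT_THRESHOLD = 50

    scores = {}    # user_id -> suspicion score

    def record_activity(user_id, activity):
        """Downgrade a user's score when a questionable activity is observed."""
        score = scores.get(user_id, INITIAL_SCORE)
        scores[user_id] = max(0, score - PENALTY.get(activity, 0))

    def policy_for(user_id):
        """Select classification rules based on the user's current score."""
        if scores.get(user_id, INITIAL_SCORE) < STRICT_THRESHOLD:
            return "rigorous-classification"   # e.g. deeper inspection of flows
        return "default"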
Abstract:
In a computer system having a memory, a processor, and a network interface, a method is provided for listening on multiple conferencing interfaces, having the steps of: loading a set of transport components into the memory; initializing each transport component of the set of transport components to listen on a particular conferencing interface using the network interface, each transport component of the set of transport components listening to a different conferencing interface; receiving an incoming call signal on the network interface having an incoming conferencing interface; processing the incoming call signal to detect the incoming conferencing interface; and launching an application based on the incoming conferencing interface. An apparatus is also provided for listening on multiple conferencing interfaces, having a set of transport components coupled to the network interface, each transport component of the set of transport components having the capability of receiving a signal on a different conferencing interface; a conference component coupled to each component in the set of transport components; a call processing module coupled to the conference component; and a process manager coupled to the call processing module; the conference component containing a circuit for causing the call processing module to cause the process manager to activate a conferencing application upon detecting a call from one transport component of the set of transport components.
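A minimal sketch of the listening arrangement described above: one transport component listens per conferencing interface, and the interface on which a call arrives selects which conferencing application is activated. The interface names, port numbers, and application names are invented for illustration, and the process manager is represented by a simple list.

    TRANSPORTS = {"h323": 1720, "sip": 5060}     # interface -> listening port (assumed)
    APPLICATIONS = {"h323": "legacy_conference_app",
                    "sip": "sip_conference_app"}

    launched = []                                # stand-in for the process manager

    def detect_interface(listen_port):
        """Map the port a call arrived on back to its conferencing interface."""
        for name, port in TRANSPORTS.items():
            if port == listen_port:
                return name
        return None

    def on_incoming_call(listen_port):
        interface = detect_interface(listen_port)
        if interface is not None:
            # Activate the application bound to the conferencing interface on
            # which the incoming call signal was received.
            launched.append(APPLICATIONS[interface])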
Abstract:
In a packet communication environment, a method is provided for automatically classifying packet flows for use in allocating bandwidth resources by a rule of assignment of a service level. The method comprises applying individual instances of traffic classification paradigms to packet network flows based on selectable information obtained from a plurality of layers of a multi-layered communication protocol in order to define a characteristic class, then mapping the flow to the defined traffic class. It is useful to note that the automatic classification is sufficiently robust to classify a complete enumeration of the possible traffic.
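The following sketch illustrates one possible reading of multi-layer classification: attributes drawn from several protocol layers are matched against an ordered set of class rules, with a catch-all class so that every possible flow maps to some traffic class. The attribute names and rules are assumptions introduced for illustration.

    CLASS_RULES = [
        ("voip",  lambda f: f["proto"] == "udp" and f["dport"] in range(16384, 32768)),
        ("web",   lambda f: f["proto"] == "tcp" and f["dport"] in (80, 443)),
        ("email", lambda f: f["proto"] == "tcp" and f["dport"] in (25, 110, 143)),
    ]

    def classify(flow):
        """flow carries attributes taken from multiple protocol layers."""
        for traffic_class, matches in CLASS_RULES:
            if matches(flow):
                return traffic_class
        return "default"    # catch-all: every flow is assigned some class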