Abstract:
A system and method are shown for offloading a computational service on a point-to-point connection. When a tunnel initiator network device, such as a Layer 2 Tunneling Protocol Access Concentrator (LAC), detects a tunnel client network device, the LAC sets up a tunnel connection with a tunnel endpoint network device, such as a Layer 2 Tunneling Protocol Network Server (LNS). During the process of establishing a call session on the tunnel connection, the LAC sends its compression capabilities to the LNS. When the LNS detects that the LAC is capable of compressing tunnel packets, the LNS negotiates compression parameters with the tunnel client network device. Subsequently, the LNS transmits the negotiated compression parameters to the LAC, which configures a compression engine based on the received parameters. Thereafter, the tunnel client network device sends compressed tunnel packets to the LAC, which decompresses the received tunnel packets prior to transmitting them to the LNS on the tunnel connection. Similarly, the LNS sends uncompressed tunnel packets on the tunnel connection to the LAC, which compresses the received tunnel packets prior to transmitting them to the tunnel client network device.
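The data-path behavior once the parameters have been pushed to the LAC can be illustrated with a short Python sketch, using zlib as a stand-in compression engine; the class and method names (CompressionOffloadLAC, configure_compression, to_lns, to_client) are illustrative assumptions, not part of the described system.

```python
# A minimal sketch of the compression-offload data path, assuming zlib as a
# stand-in compression engine; all names here are hypothetical.
import zlib

class CompressionOffloadLAC:
    """Tunnel initiator (LAC) that compresses/decompresses on behalf of the LNS."""

    def __init__(self):
        self.level = None  # compression parameter pushed down by the LNS

    def configure_compression(self, params):
        # Configure the local compression engine from the parameters the
        # LNS negotiated with the tunnel client.
        self.level = params["level"]

    def to_lns(self, packet_from_client):
        # Client -> LAC packets arrive compressed; decompress before
        # forwarding them on the tunnel connection to the LNS.
        return zlib.decompress(packet_from_client)

    def to_client(self, packet_from_lns):
        # LNS -> LAC packets arrive uncompressed; compress before
        # forwarding them to the tunnel client.
        return zlib.compress(packet_from_lns, self.level)

lac = CompressionOffloadLAC()
lac.configure_compression({"level": 6})          # parameters received from the LNS
payload = b"tunnel payload" * 10
assert lac.to_lns(lac.to_client(payload)) == payload
```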
Abstract:
Systems and methods for providing fast handoff support by transferring information are provided. Additionally, a generic protocol message format is presented which allows the transfer of information used in the handoff. The generic protocol allows a gateway to request contexts or session information and send information that allows tunnel setup and mapping to other connections. The session, tunnel, and mapping information allow the gateways to switch packet processing operations without causing disruption to the packet flow. Further, in inter-gateway handoffs or inter-access network handoffs, fast and seamless handoffs are provided so the mobile station keeps the same IP address and the session continues.
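As a rough illustration of the generic context-transfer exchange, the sketch below models a request/response pair carrying session, tunnel, and mapping information; the message fields and names (ContextTransferMessage, handle_request) are assumptions and do not reflect any actual wire format.

```python
# A rough illustration of a generic context-transfer message of the kind the
# abstract describes; field names and structure are assumptions.
from dataclasses import dataclass, field

@dataclass
class ContextTransferMessage:
    msg_type: str                     # e.g. "CONTEXT_REQUEST" or "CONTEXT_RESPONSE"
    session_id: str                   # identifies the mobile station's session
    tunnel: dict = field(default_factory=dict)    # tunnel setup parameters
    mapping: dict = field(default_factory=dict)   # mapping to other connections

def handle_request(req, sessions):
    # The target gateway asks the serving gateway for context; the serving
    # gateway answers with what is needed to rebuild the tunnel and mappings,
    # so the mobile station keeps the same IP address across the handoff.
    ctx = sessions[req.session_id]
    return ContextTransferMessage("CONTEXT_RESPONSE", req.session_id,
                                  tunnel=ctx["tunnel"], mapping=ctx["mapping"])

sessions = {"ms-42": {"tunnel": {"peer": "10.0.0.1", "key": 7},
                      "mapping": {"ip": "192.0.2.15"}}}
resp = handle_request(ContextTransferMessage("CONTEXT_REQUEST", "ms-42"), sessions)
print(resp.mapping["ip"])   # the address the mobile station keeps: 192.0.2.15
```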
Abstract:
A system and method for processing packets of information includes an ingress module. The ingress module receives a plurality of packets of information from a first network. The ingress module determines the type of each of the plurality of packets. A route server module is coupled to the ingress module. The route server module sends a distributed processing request to the ingress module. The ingress module receives the distributed processing request and, responsively, performs a first set of processing operations on selected ones of the plurality of packets. The selected ones of the plurality of packets are of a first type. The ingress module forwards others of the plurality of packets of information to the route server module. Each of the others of the plurality of packets is of a type distinct from the first type. The route server module receives the others of the plurality of packets of information and performs a second set of processing operations on the others of the plurality of packets of information.
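A simplified sketch of this distributed-processing split follows, with IP data packets standing in for the "first type" handled at the ingress module and control packets forwarded to the route server; the class names (IngressModule, RouteServer) and the packet representation are illustrative only.

```python
# A simplified sketch of the distributed-processing split; names are hypothetical.
class RouteServer:
    def __init__(self):
        self.received = []

    def request_distribution(self, ingress, packet_type):
        # Distributed processing request: ask the ingress module to handle
        # this packet type itself.
        ingress.local_types.add(packet_type)

    def process(self, packet):
        # Second set of processing operations (e.g. routing control handling).
        self.received.append(packet)

class IngressModule:
    def __init__(self, route_server):
        self.route_server = route_server
        self.local_types = set()

    def handle(self, packet):
        if packet["type"] in self.local_types:
            return ("processed-locally", packet)      # first set of operations
        self.route_server.process(packet)             # forward everything else
        return ("forwarded", packet)

rs = RouteServer()
ingress = IngressModule(rs)
rs.request_distribution(ingress, "ip-data")
print(ingress.handle({"type": "ip-data", "payload": b"..."}))   # handled at ingress
print(ingress.handle({"type": "routing-control"}))              # sent to route server
```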
Abstract:
A method for allocating network addresses comprises providing an address pool comprising a plurality of network addresses and then dividing the address pool into a plurality of address sub-pools, each of which comprises a unique subset of the network addresses of the address pool. Each of the sub-pools is available for use by any one of a plurality of routing devices of a network access server. The method then comprises receiving a request to assign a first network address to a first user, allocating a first address sub-pool of the plurality of address sub-pools to a first routing device of the plurality of routing devices, and transmitting a first message to the other routing devices to indicate that the first address sub-pool has been allocated. The method additionally comprises assigning the first network address to the first user from the first address sub-pool and advertising an aggregate route for the first address sub-pool over a network.
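The allocation steps can be illustrated with a small example, assuming a /24 address pool divided into /26 sub-pools; the helper names (allocate_sub_pool, assign_address) and the notification mechanism shown are simplifications for illustration.

```python
# A toy walk-through of the sub-pool scheme, assuming a /24 pool split into
# /26 sub-pools; names and the notification model are illustrative.
import ipaddress

pool = ipaddress.ip_network("203.0.113.0/24")
sub_pools = list(pool.subnets(new_prefix=26))    # divide into unique sub-pools
free, allocated = list(sub_pools), {}

def allocate_sub_pool(device, other_devices):
    sub = free.pop(0)
    allocated[device] = {"net": sub, "next": 1}
    for peer in other_devices:                   # "first message": mark the sub-pool taken
        peer.setdefault("taken", []).append(str(sub))
    return sub

def assign_address(device):
    state = allocated[device]
    addr = state["net"][state["next"]]           # hand the user the next free host address
    state["next"] += 1
    return addr

others = [{}, {}]
sub = allocate_sub_pool("router-1", others)
user_addr = assign_address("router-1")
print(user_addr, "from", sub)   # the method would then advertise an aggregate route for sub
```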
Abstract:
A system for load balancing includes a LAC, a contact LNS, and a plurality of load balancing LNSs. The LAC is configured with a contact LNS address that specifies the address of the contact LNS. The contact LNS is communicatively coupled to the LAC, and the plurality of load balancing LNSs are communicatively coupled to the contact LNS and to the LAC. The LAC sends a message to the contact LNS informing it of the LAC's availability, and the contact LNS sends a response message containing the IP address of a selected one of the plurality of load balancing LNSs with which the LAC should establish a session.
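A bare-bones sketch of the selection exchange follows, assuming a least-loaded selection policy at the contact LNS; both the policy and the names (ContactLNS, select_lns, establish_session) are assumptions for illustration.

```python
# A bare-bones sketch of the contact-LNS selection step; the least-sessions
# policy and the class/method names are assumptions.
class ContactLNS:
    def __init__(self, lns_pool):
        # Map each load-balancing LNS address to its current session count.
        self.lns_pool = dict.fromkeys(lns_pool, 0)

    def select_lns(self):
        # Respond to the LAC with the address of the LNS it should use.
        addr = min(self.lns_pool, key=self.lns_pool.get)
        self.lns_pool[addr] += 1
        return addr

class LAC:
    def __init__(self, contact_lns):
        self.contact_lns = contact_lns           # configured contact LNS address

    def establish_session(self):
        lns_addr = self.contact_lns.select_lns() # message + response exchange
        return f"tunnel session established with {lns_addr}"

contact = ContactLNS(["10.1.0.1", "10.1.0.2", "10.1.0.3"])
print(LAC(contact).establish_session())
print(LAC(contact).establish_session())          # next LAC is sent to a different LNS
```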
Abstract:
A high density network access server implements a tunneling protocol between a modem module and a route server module. PPP and routing control packets received from the PPP link are tunneled to the route server for processing. The IP data packet forwarding function for the network access server is distributed directly to the modem modules. The combination of distributed PPP processing and distributed IP data packet forwarding enables the capacity of the network access server to be scaled by orders of magnitude beyond previously known designs, handling thousands or even tens of thousands of simultaneous data sessions.
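The split between locally forwarded data traffic and tunneled control traffic can be sketched as follows, with a list standing in for the internal tunnel and a dictionary for the distributed forwarding table; the names (ModemModule, CONTROL_PROTOCOLS) are hypothetical.

```python
# An illustrative sketch: IP data packets are forwarded directly at the modem
# module, while PPP and routing control packets are tunneled to the route
# server. All names and data structures are hypothetical.
CONTROL_PROTOCOLS = {"LCP", "IPCP", "routing-control"}

class ModemModule:
    def __init__(self, tunnel_to_route_server, forwarding_table):
        self.tunnel = tunnel_to_route_server      # internal tunneling protocol
        self.fib = forwarding_table               # distributed IP forwarding state

    def on_packet(self, packet):
        if packet["proto"] in CONTROL_PROTOCOLS:
            self.tunnel.append(packet)            # control goes to the route server
            return "tunneled"
        return f"forwarded via {self.fib[packet['dst']]}"  # data forwarded locally

tunnel = []
mm = ModemModule(tunnel, {"198.51.100.9": "uplink-0"})
print(mm.on_packet({"proto": "IP", "dst": "198.51.100.9"}))
print(mm.on_packet({"proto": "LCP"}))
print(len(tunnel), "control packet(s) tunneled to the route server")
```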
Abstract:
Systems and methods are provided that allow voice and data traffic to be shifted from one chassis to another without interrupting service. Geographic Redundancy (GR) is an inter-chassis redundancy scheme in which a chassis may be a home agent, a packet data serving node, or any combination of wireless networking devices. Additionally, each chassis can have one or more partitions that handle subscriber session traffic and a corresponding redundant partition on a different chassis. The redundant chassis partition can take over all or a portion of the functionality of the active chassis partition if the active chassis, or any critical peer servers or gateways communicating with it, should fail. This provides users with uninterrupted service in the case of some failures.
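A very small sketch of partition-level failover is shown below, assuming a simple active/standby pairing per partition with session state checkpointed to the redundant chassis; the names and the checkpointing model are illustrative simplifications.

```python
# A small sketch of partition-level failover between chassis, assuming a
# simple active/standby pairing per partition; all names are illustrative.
class ChassisPartition:
    def __init__(self, chassis, partition_id, role):
        self.chassis, self.partition_id, self.role = chassis, partition_id, role
        self.sessions = {}

    def sync_from(self, active):
        # The redundant partition keeps a copy of the subscriber sessions.
        self.sessions = dict(active.sessions)

def failover(active, standby):
    # On chassis or critical-peer failure, the redundant partition takes over
    # the active partition's subscriber sessions.
    standby.role, active.role = "active", "failed"
    return standby

a = ChassisPartition("chassis-A", 1, "active")
a.sessions = {"subscriber-7": {"ip": "192.0.2.7"}}
b = ChassisPartition("chassis-B", 1, "standby")
b.sync_from(a)                 # checkpointing while chassis-A is still healthy
new_active = failover(a, b)    # chassis-A (or a critical peer) fails
print(new_active.chassis, new_active.sessions["subscriber-7"]["ip"])
```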
Abstract:
Systems and methods for intercepting and redirecting requests are provided. More particularly, certain information is identified in a packet and the packet is redirected to a specified server. The information that is redirected may be bound for a server in a network that a mobile node is currently visiting, and it is advantageous to fulfill the request in another network instead. The request is redirected to the other network; however, the response to the request may be modified by changing the source address and other information so that the response appears to have originated from the server in the visited network to which the mobile node sent the request. Load balancing may also be incorporated when redirecting requests from one or more mobile nodes.
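The intercept-and-rewrite behavior can be sketched as follows; the redirect rule, addresses, and helper names (redirect_request, rewrite_response) are invented for illustration and do not reflect an actual configuration.

```python
# A schematic sketch of intercepting a request, redirecting it to a server in
# another network, and rewriting the response source address so it appears to
# come from the originally addressed server; all values are made up.
REDIRECT_RULES = {("10.20.0.5", 80): "172.16.0.9"}   # visited-net server -> home-net server

def redirect_request(packet):
    key = (packet["dst"], packet["dst_port"])
    if key in REDIRECT_RULES:
        packet = dict(packet, dst=REDIRECT_RULES[key], original_dst=packet["dst"])
    return packet

def rewrite_response(response, original_dst):
    # Make the response appear to originate from the server in the visited
    # network that the mobile node actually addressed.
    return dict(response, src=original_dst)

req = redirect_request({"src": "192.0.2.50", "dst": "10.20.0.5", "dst_port": 80})
resp = rewrite_response({"src": req["dst"], "payload": b"ok"}, req["original_dst"])
print(req["dst"], "answered as", resp["src"])   # 172.16.0.9 answered as 10.20.0.5
```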
Abstract:
Systems and methods for caching information related to access rights are provided. The access rights may be rules stored in an access control list. The cache may include packet parameters against which packets in a data flow are matched to determine if a match is possible from the cache. If a match is possible, a corresponding rule is applied to the packet. If a match is not found in the cache, the access control list may be searched for a corresponding rule. The rule from the access control list may be populated into the cache when a match is found in the access control list.
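The cache-then-ACL lookup order can be illustrated with a short sketch, assuming a simple flow key built from packet parameters and a match/action rule format; both are assumptions for illustration.

```python
# A minimal sketch of the cache-then-ACL lookup order; the flow key and rule
# format below are assumptions.
ACL = [
    {"match": {"dst_port": 22}, "action": "deny"},
    {"match": {"dst_port": 80}, "action": "permit"},
]
cache = {}                                        # packet parameters -> matching rule

def lookup(packet):
    key = (packet["src"], packet["dst"], packet["dst_port"])
    if key in cache:                              # fast path: match found in the cache
        return cache[key]["action"]
    for rule in ACL:                              # slow path: search the access control list
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            cache[key] = rule                     # populate the cache on a match
            return rule["action"]
    return "deny"                                 # default when nothing matches

pkt = {"src": "192.0.2.1", "dst": "198.51.100.2", "dst_port": 80}
print(lookup(pkt))   # first packet of the flow: ACL search, cache populated
print(lookup(pkt))   # subsequent packets in the flow: served from the cache
```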