Abstract:
The invention includes a method and apparatus for generating a link transmission schedule for handling traffic variation in wireless networks without dynamic scheduling or routing. The method includes determining fixed traffic capacities associated with respective wireless links of a wireless network according to a routing algorithm, and generating, using the routing algorithm and the fixed traffic capacities, a link transmission schedule including at least one condition under which traffic is transmitted over each of the wireless links. The link transmission schedule is adapted to remain substantially fixed during dynamic traffic changes. The routing algorithm may be a two-phase routing algorithm in which traffic is distributed by each node in the wireless network to every node in the wireless network using traffic split ratios. For two-phase routing, the fixed traffic capacities may be determined using the ingress and egress traffic capacities and the traffic split ratios associated with respective nodes in the wireless network.
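As a rough illustration of that last point (a sketch, not the claimed method itself), the following computes the fixed traffic capacity required between each ordered node pair under two-phase routing, assuming per-node ingress capacities R, egress capacities C, and traffic split ratios alpha; all names are illustrative.

```python
def two_phase_pair_capacities(R, C, alpha):
    """Return the fixed traffic capacity needed between every ordered
    node pair (i, j) under two-phase routing.

    Phase 1: node i forwards a fraction alpha[j] of its ingress traffic
             R[i] to intermediate node j.
    Phase 2: node i, acting as intermediate, forwards a fraction alpha[i]
             of the traffic destined for node j, bounded by C[j].
    The pairwise demand alpha[j]*R[i] + alpha[i]*C[j] is therefore
    independent of the actual traffic matrix.
    """
    n = len(R)
    assert len(C) == n and len(alpha) == n
    assert abs(sum(alpha) - 1.0) < 1e-9, "split ratios should sum to 1"
    return {
        (i, j): alpha[j] * R[i] + alpha[i] * C[j]
        for i in range(n) for j in range(n) if i != j
    }

# Example: three nodes with equal split ratios.
caps = two_phase_pair_capacities(R=[10, 10, 10], C=[10, 10, 10],
                                 alpha=[1/3, 1/3, 1/3])
print(caps[(0, 1)])  # ~6.67 units, fixed regardless of traffic variation
```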
Abstract:
A load-balanced network architecture is disclosed in which a traffic flow deliverable from a source node to a destination node via intermediate nodes is split into parts, and the parts are distributed to respective ones of the intermediate nodes. Path delay differences for the parts are substantially equalized by delay adjustment at one or more of the intermediate nodes, and packets of one or more of the parts are scheduled for routing from respective ones of the intermediate nodes to the destination node based on arrival times of the packets at the source node.
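A minimal sketch (not the disclosed apparatus) of the two ideas in this abstract: pad each intermediate path so all path delays match the longest one, and release packets toward the destination in order of their arrival time at the source. The delay values and the packet tuple layout are assumptions made for the example.

```python
def equalize_delays(path_delays):
    """Return per-path added delay so every path has the same total delay."""
    longest = max(path_delays.values())
    return {path: longest - d for path, d in path_delays.items()}

def schedule_by_source_arrival(packets):
    """packets: iterable of (source_arrival_time, packet_id) tuples,
    possibly interleaved because they traversed different parts/paths.
    Returns packet ids in source-arrival order for forwarding."""
    return [pid for _, pid in sorted(packets)]

delays = {"via_A": 3.0, "via_B": 5.0, "via_C": 4.5}   # ms, hypothetical
print(equalize_delays(delays))   # via_A padded by 2.0 ms, via_C by 0.5 ms
print(schedule_by_source_arrival([(0.2, "p2"), (0.1, "p1"), (0.3, "p3")]))
# ['p1', 'p2', 'p3']
```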
Abstract:
A network of nodes interconnected by links has content filtering specified at certain nodes, and routing of packet connections through the network is generated based on the specified content-filtering nodes. The network is specified via a content-filtering node placement method and a network-capacity maximization method so as to apply content filtering to packets for substantially all traffic (packet streams) carried by the network.
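As a rough sketch of the routing side of this idea (not the patented placement or capacity-maximization methods), the example below selects, for each connection, the least-cost path that passes through at least one content-filtering node; the use of networkx and the node names are assumptions made for illustration.

```python
import networkx as nx

def route_through_filter(G, src, dst, filter_nodes):
    """Return (cost, path) for the cheapest path that traverses a filter
    node, or None if no such path exists."""
    best = None
    for f in filter_nodes:
        try:
            p1 = nx.shortest_path(G, src, f, weight="weight")
            p2 = nx.shortest_path(G, f, dst, weight="weight")
        except nx.NetworkXNoPath:
            continue
        path = p1 + p2[1:]                      # join the two legs at the filter node
        cost = sum(G[u][v].get("weight", 1) for u, v in zip(path, path[1:]))
        if best is None or cost < best[0]:
            best = (cost, path)
    return best

G = nx.Graph()
G.add_weighted_edges_from([("s", "f1", 1), ("f1", "d", 1),
                           ("s", "x", 1), ("x", "d", 1)])
print(route_through_filter(G, "s", "d", filter_nodes={"f1"}))
# (2, ['s', 'f1', 'd'])  -- the unfiltered s-x-d path is never chosen
```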
Abstract:
A method of networking a plurality of servers together within a data center is disclosed. The method includes the step of addressing a data packet for delivery to a destination server by providing the destination server address as a flat address. The method further includes the step of obtaining the routing information required to route the packet to the destination server; this routing information may be obtained from a directory service servicing the plurality of servers. Once the routing information is obtained, the data packet may be routed to the destination server according to the flat address of the destination server and the routing information obtained from the directory service.
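A minimal, hypothetical sketch of that flow: the sender addresses a packet to a flat server address, asks a directory service for the routing information (here, a locator for the switch covering that server), and forwards the packet using both. The class and field names are illustrative assumptions, not the patented directory-service interface.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    dst_flat_addr: str     # location-independent ("flat") server address
    payload: bytes
    locator: str = ""      # filled in from the directory lookup

class DirectoryService:
    def __init__(self, mapping):
        self._mapping = mapping          # flat address -> locator
    def lookup(self, flat_addr):
        return self._mapping[flat_addr]

def send(packet, directory):
    # Resolve the flat address, then forward using the returned locator.
    # A real fabric would encapsulate/tunnel the packet toward the locator;
    # here we simply return the routed packet.
    packet.locator = directory.lookup(packet.dst_flat_addr)
    return packet

directory = DirectoryService({"srv-00af": "tor-12.dc1"})
print(send(Packet("srv-00af", b"hello"), directory).locator)   # tor-12.dc1
```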
Abstract:
A system for commoditizing data center networking is disclosed. The system includes an interconnection topology for a data center having a plurality of servers and a plurality of nodes of a network in the data center through which data packets may be routed. The system uses a routing scheme in which the routing is oblivious to the traffic pattern between nodes in the network, and in which the interconnection topology contains a plurality of paths between one or more servers. The multipath routing may be Valiant load balancing. The system disaggregates the function of load balancing into a group of regular servers, with the result that load-balancing server hardware can be distributed amongst racks in the data center, leading to greater agility and less fragmentation. The architecture creates a huge, flexible switching domain, supporting any server/any service, full mesh agility, and unregimented server capacity at low cost.
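An illustrative sketch of traffic-oblivious multipath routing in the Valiant load-balancing style mentioned above: each flow is first sent to a randomly chosen intermediate switch and then on to its destination, so no knowledge of the traffic pattern is needed. The node names and flow loop are assumptions made for the example.

```python
import random

def vlb_route(src, dst, intermediate_nodes, rng=random):
    """Return a two-hop logical route src -> intermediate -> dst,
    with the intermediate chosen uniformly at random (oblivious to load)."""
    via = rng.choice(intermediate_nodes)
    return [src, via, dst]

cores = ["core-1", "core-2", "core-3", "core-4"]
for flow in range(3):
    print(vlb_route("rack-a", "rack-b", cores))
```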
Abstract:
A scheme is disclosed for routing packets of traffic to their destination after ensuring that they pass through one or more pre-determined intermediate nodes, thereby permitting all permissible traffic patterns to be handled without knowledge of the traffic matrix, subject to edge-link capacity constraints. In one embodiment, a request for a path with a service demand for routing data between the ingress point and the egress point is received. A set of two or more intermediate nodes between the ingress point and the egress point is selected. Based on a bandwidth of the network, respective fractions of the data to send from the ingress point to each node of the set of intermediate nodes are determined. The data is routed in the determined respective fractions from the ingress point to each node of the set of intermediate nodes, and routed from each node of the set of intermediate nodes to the egress point.
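A small sketch, under assumed variable names, of the splitting step described above: given per-intermediate-node bandwidths, compute the fraction of the demand each intermediate should carry and the resulting per-node amounts. Proportional-to-bandwidth splitting is one plausible rule for the example; the patent may define the fractions differently.

```python
def split_demand(demand, intermediate_bandwidth):
    """intermediate_bandwidth: dict node -> usable bandwidth toward it.
    Returns dict node -> amount of the demand routed via that node."""
    total = sum(intermediate_bandwidth.values())
    return {node: demand * bw / total
            for node, bw in intermediate_bandwidth.items()}

print(split_demand(90.0, {"n1": 10.0, "n2": 20.0, "n3": 30.0}))
# {'n1': 15.0, 'n2': 30.0, 'n3': 45.0}
```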
Abstract:
Improved p-cycle restoration techniques using a signaling protocol are disclosed. For example, a technique for use in at least one node of a data communication network for recovering from a failure, wherein the data communication network includes multiple nodes and multiple links connecting the multiple nodes, comprises the following steps/operations. Notification of the failure is obtained at the at least one node. A determination is made as to whether the failure is a single link failure or, instead, a node failure or a multiple link failure. A pre-configured protection cycle (p-cycle) plan is implemented when the failure is a single link failure, but not when it is a node failure or a multiple link failure, such that two independent paths in the network are not connected when implementing the pre-configured protection cycle plan. Implementation of the pre-configured protection cycle plan may further comprise the node sending at least one message to another node in the data communication network and/or receiving at least one message from another node in the data communication network.
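A toy sketch of the decision logic described above (the message format and function names are assumptions): on a failure notification, the node applies its pre-configured p-cycle plan only for a single link failure; node failures and multiple-link failures fall through to some other recovery mechanism, so two independent paths are never spliced together by the p-cycle.

```python
def handle_failure_notification(failed_links, failed_nodes, pcycle_plan):
    """failed_links: set of (u, v) link identifiers; failed_nodes: set of
    node identifiers; pcycle_plan: dict link -> protection path."""
    single_link = len(failed_links) == 1 and not failed_nodes
    if single_link:
        link = next(iter(failed_links))
        # Reroute the affected traffic around the failed link along the
        # pre-configured protection cycle (signaling messages omitted).
        return ("p-cycle", pcycle_plan.get(link))
    # Node failure or multiple link failure: do not apply the p-cycle.
    return ("fallback", None)

plan = {("A", "B"): ["A", "D", "C", "B"]}    # protection path for link A-B
print(handle_failure_notification({("A", "B")}, set(), plan))
print(handle_failure_notification({("A", "B"), ("C", "D")}, set(), plan))
```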
Abstract:
A number of routing techniques are described that improve resistance to faults affecting groups of links subject to common risks. One of these techniques accounts for failure potentials in physical networks by considering shared risk link groups separately from performance and cost metrics in determining a primary routing path and a backup path. A shared risk link group (SRLG) is an attribute attached to a link to identify edges that have physical links in common and can therefore be simultaneously disrupted by a single fault. Another technique considers node disjointness and provides a solution of two paths that are as node-disjoint as possible while minimizing administrative costs. The techniques may further be combined in a priority order, thereby providing a solution of at least two paths that are strictly SRLG-disjoint, as node-disjoint as possible, and of minimum administrative cost. Because of this priority order of evaluation, and because in typical physical network configurations links are associated with common-fault SRLGs, the priority-ordering technique is very efficient at determining at least two paths for routing between a source and a destination node.
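A brute-force sketch (illustrative only, not the patented algorithm) of that priority ordering: among candidate path pairs, prefer pairs that share no SRLG, then pairs sharing as few intermediate nodes as possible, then pairs with the lowest total administrative cost. The graph construction, the "cost" and "srlg" edge attributes, and the use of networkx are assumptions made for the example.

```python
import itertools
import networkx as nx

def best_path_pair(G, src, dst):
    """Pick a path pair by lexicographic priority:
    (shared SRLGs, shared intermediate nodes, total administrative cost)."""
    def cost(path):
        return sum(G[u][v]["cost"] for u, v in zip(path, path[1:]))
    def srlgs(path):
        return set().union(*(G[u][v]["srlg"] for u, v in zip(path, path[1:])))
    paths = list(nx.all_simple_paths(G, src, dst))
    def key(pair):
        p, q = pair
        shared_srlg = len(srlgs(p) & srlgs(q))
        shared_nodes = len(set(p[1:-1]) & set(q[1:-1]))
        return (shared_srlg, shared_nodes, cost(p) + cost(q))
    return min(itertools.combinations(paths, 2), key=key)

G = nx.Graph()
G.add_edge("S", "A", cost=1, srlg={1});  G.add_edge("A", "D", cost=1, srlg={2})
G.add_edge("S", "B", cost=2, srlg={1});  G.add_edge("B", "D", cost=2, srlg={3})
G.add_edge("S", "C", cost=3, srlg={4});  G.add_edge("C", "D", cost=3, srlg={5})
print(best_path_pair(G, "S", "D"))
# (['S', 'A', 'D'], ['S', 'C', 'D'])  -- the cheaper S-B-D pairing shares SRLG 1
```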