Abstract:
Aspects and implementations of the present disclosure are directed to an indirect generalized hypercube network in a computer network facility. Servers in the computer network facility participate in both an over-subscribed fat tree network hierarchy culminating in a gateway connection to external networks and in an indirect hypercube network interconnecting a plurality of servers in the fat tree. The participant servers have multiple network interface ports, including at least one port for a link to an edge layer network device of the fat tree and at least one port for a link to a peer server in the indirect hypercube network. Servers are grouped by edge layer network device to form virtual switches in the indirect hypercube network, and data packets are routed between servers using routes through the virtual switches. Routes leverage properties of the hypercube topology. Participant servers function as destination points and as virtual interfaces for the virtual switches.
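By way of illustration, routing through the virtual switches can be viewed as dimension-order routing on a binary hypercube of virtual-switch addresses: the route corrects, one dimension at a time, each address bit on which the source and destination differ. The sketch below is a minimal illustration of that hypercube property, not code from the disclosure; the function name and the n-bit addressing scheme are assumptions.

```python
# Minimal sketch of dimension-order routing between virtual switches in a
# binary hypercube, assuming each virtual switch (one per edge device) has
# an n-bit address. Names and addressing are illustrative assumptions.

def hypercube_route(src: int, dst: int, n_dims: int) -> list[int]:
    """Return the sequence of virtual-switch addresses visited while
    correcting differing address bits one dimension at a time."""
    path = [src]
    cur = src
    diff = src ^ dst                  # bits on which src and dst disagree
    for d in range(n_dims):
        if diff & (1 << d):           # dimension d still needs correcting
            cur ^= (1 << d)           # traverse the link in dimension d
            path.append(cur)
    return path

# Example: route between virtual switches 0b0101 and 0b0011 in a 4-cube.
print(hypercube_route(0b0101, 0b0011, 4))   # [5, 7, 3]
```

Because each hop fixes exactly one differing bit, the route length equals the Hamming distance between the two addresses, which is the property the abstract refers to as leveraging the hypercube topology.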
Abstract:
Energy proportional solutions are provided for computer networks such as datacenters. Congestion sensing heuristics are used to adaptively route traffic across links. Traffic intensity is sensed and links are dynamically activated as they are needed. As the offered load decreases, the lower channel utilization is sensed and the link speed is reduced to save power. Flattened butterfly topologies can be used in a further power saving approach. Switch mechanisms exploit the topology's capabilities by reconfiguring link speeds on the fly to match bandwidth and power with the traffic demand. For instance, the system may estimate the future bandwidth needs of each link and reconfigure its data rate to meet those requirements while consuming less power. In one configuration, a mechanism is provided in which the switch tracks the utilization of each of its links over an epoch and then makes an adjustment at the end of the epoch.
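As a rough illustration of the epoch-based mechanism, a switch could accumulate each link's byte count over an epoch and, at the epoch boundary, select the slowest available rate whose capacity still covers the observed demand plus some headroom. The rate ladder, epoch length, and headroom factor below are illustrative assumptions, not values from the disclosure.

```python
# A minimal sketch of epoch-based link rate adjustment: measure the offered
# load over an epoch, then pick the lowest rate that still covers it with
# headroom. RATES_GBPS and HEADROOM are assumed, illustrative parameters.

RATES_GBPS = [2.5, 5.0, 10.0, 20.0, 40.0]   # assumed selectable link rates
HEADROOM = 1.25                             # provision above observed demand

def next_rate(bytes_sent: int, epoch_s: float) -> float:
    """Choose the slowest rate covering the load measured this epoch."""
    demand_gbps = bytes_sent * 8 / epoch_s / 1e9
    for rate in RATES_GBPS:
        if rate >= demand_gbps * HEADROOM:
            return rate
    return RATES_GBPS[-1]                   # saturated: run at full speed

# End-of-epoch adjustment for a link that moved 40 MB in a 100 ms epoch
# (3.2 Gb/s offered load): the link steps down to the 5 Gb/s rate.
print(next_rate(40_000_000, 0.1))           # 5.0
```

The headroom term is one simple way to absorb bursts between adjustments; a real mechanism would also bound how often a link may change speed, since reconfiguration itself has a latency cost.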
Abstract:
The location of the memory controllers within the on-chip fabric of multiprocessor architectures plays a central role in the latency and bandwidth characteristics of the processor-to-memory traffic. Intelligent placement substantially reduces the maximum channel load, depending on the specific memory controller configuration selected. A variety of simulation techniques are used alone and in combination to determine optimal memory controller arrangements. Diamond-type and diagonal X-type memory controller configurations that spread network traffic across all rows and columns in a multiprocessor array substantially improve over other arrangements. Such placements reduce interconnect latency by an average of 10% for real workloads, and the small number of memory controllers relative to the number of on-chip cores opens up a rich design space for optimizing the latency and bandwidth characteristics of the on-chip network.
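For intuition, a diagonal X-type placement on a k x k mesh puts a controller tile on both diagonals, so every row and every column of the array hosts a controller and processor-to-memory traffic is spread across all rows and columns rather than concentrating on a few channels. The sketch below is a hypothetical rendering of that layout, not the simulation code used to evaluate it.

```python
# An illustrative sketch (not the disclosure's code) of a diagonal X-type
# memory controller placement on a k x k on-chip mesh: controllers sit on
# both diagonals, covering every row and every column of the array.

def diagonal_x_placement(k: int) -> set[tuple[int, int]]:
    """Tile coordinates (row, col) that receive a memory controller."""
    return {(r, r) for r in range(k)} | {(r, k - 1 - r) for r in range(k)}

def render(k: int) -> str:
    """ASCII view of the mesh: 'M' marks a memory-controller tile."""
    mcs = diagonal_x_placement(k)
    return "\n".join(
        " ".join("M" if (r, c) in mcs else "." for c in range(k))
        for r in range(k)
    )

print(render(8))   # 8x8 mesh with the X pattern of controller tiles
```

Because each row and each column contains a controller, dimension-ordered mesh routes to memory are distributed across all rows and columns, which is the load-spreading effect the abstract credits for the reduced maximum channel load.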
Abstract:
The exemplary embodiments provide an indirect hypercube topology for a datacenter network. The indirect hypercube is formed by providing each host with a multi-port network interface controller (NIC). One port of the NIC is connected to a fat-tree network, while another port is connected to a peer host, forming a single dimension of an indirect binary n-cube. Hence, the composite topology becomes a hierarchical tree of cubes. The hierarchical tree of cubes topology uses (a) the fat-tree topology to scale to a large host count and (b) the indirect binary n-cube topology at the leaves of the fat-tree topology for a tightly coupled, high-bandwidth interconnect among a subset of hosts.
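One way the wiring could be derived (the grouping scheme and names here are assumptions for illustration, not the embodiment's specification): hosts are grouped by edge switch, a host's position within its group selects a cube dimension, and the host's second NIC port links to the like-positioned host under the edge switch whose address differs in exactly that dimension. A group's hosts then collectively supply their virtual switch with one link per cube dimension.

```python
# A minimal sketch, under the assumptions stated above, of pairing hosts'
# second NIC ports so the edge-switch groups form a binary n-cube. Each
# host is identified by (edge-switch address, position within its group);
# position `pos` provides the group's link in cube dimension `pos`.

def peer_of(switch: int, pos: int, n_dims: int) -> tuple[int, int]:
    """Peer host for the host at `pos` under `switch`: flip bit `pos`
    of the switch address, keep the same position in the peer group."""
    assert 0 <= pos < n_dims
    return (switch ^ (1 << pos), pos)

# Hosts under edge switch 0b010 in a 3-cube of edge-switch groups:
for pos in range(3):
    print((0b010, pos), "<->", peer_of(0b010, pos, 3))
# (2, 0) <-> (3, 0)
# (2, 1) <-> (0, 1)
# (2, 2) <-> (6, 2)
```

The pairing is symmetric (flipping the same bit twice returns the original switch), so each host-to-host cable is described consistently from both ends, and every group reaches all n neighboring groups through exactly one of its hosts.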