Systems and methods for routing data through data centers using an indirect generalized hypercube network

    Publication No.: US09929960B1

    Publication Date: 2018-03-27

    Application No.: US15609847

    Filing Date: 2017-05-31

    Applicant: Google Inc.

    CPC classification number: H04L47/122 H04L43/0888 H04L45/22

    Abstract: Aspects and implementations of the present disclosure are directed to an indirect generalized hypercube network in a computer network facility. Servers in the computer network facility participate in both an over-subscribed fat tree network hierarchy culminating in a gateway connection to external networks and in an indirect hypercube network interconnecting a plurality of servers in the fat tree. The participant servers have multiple network interface ports, including at least one port for a link to an edge layer network device of the fat tree and at least one port for a link to a peer server in the indirect hypercube network. Servers are grouped by edge layer network device to form virtual switches in the indirect hypercube network and data packets are routed between servers using routes through the virtual switches. Routes leverage properties of the hypercube topology. Participant servers function as destination points and as virtual interfaces for the virtual switches.
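The routing the abstract describes, in which routes between virtual switches "leverage properties of the hypercube topology", can be illustrated with classic dimension-ordered routing on a binary hypercube. The sketch below is illustrative only; the function name and the lowest-dimension-first hop order are assumptions, not details taken from the patent.

```python
# Minimal sketch of dimension-ordered routing between virtual switches in a
# binary hypercube: each hop corrects one differing bit (dimension) of the
# virtual-switch address until the destination is reached.

def hypercube_route(src: int, dst: int, dims: int) -> list[int]:
    """Return the sequence of virtual-switch addresses from src to dst,
    flipping one differing address bit per hop (lowest dimension first)."""
    route = [src]
    cur = src
    for d in range(dims):
        bit = 1 << d
        if (cur ^ dst) & bit:      # this dimension still differs
            cur ^= bit             # traverse the link in dimension d
            route.append(cur)
    return route

# Example: in a 4-dimensional cube, route from address 0b0000 to 0b1011.
print(hypercube_route(0b0000, 0b1011, 4))  # [0, 1, 3, 11]
```

Because each hop fixes exactly one bit, the route length equals the Hamming distance between the two addresses, which is the property such topologies exploit for short, easily computed paths.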

    2. Systems and methods for energy proportional multiprocessor networks (Invention Grant; In Force)

    Publication No.: US08806244B1

    Publication Date: 2014-08-12

    Application No.: US14084054

    Filing Date: 2013-11-19

    Applicant: Google Inc.

    Abstract: Energy proportional solutions are provided for computer networks such as datacenters. Congestion sensing heuristics are used to adaptively route traffic across links. Traffic intensity is sensed and links are dynamically activated as needed. As the offered load decreases, the lower channel utilization is sensed and the link speed is reduced to save power. Flattened butterfly topologies can be used as a further power-saving approach. Switch mechanisms exploit the topology's capabilities by reconfiguring link speeds on-the-fly to match bandwidth and power with the traffic demand. For instance, the system may estimate the future bandwidth needs of each link and reconfigure its data rate to meet those requirements while consuming less power. In one configuration, a mechanism is provided where the switch tracks the utilization of each of its links over an epoch, and then makes an adjustment at the end of the epoch.
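The epoch-based adjustment in the last sentence can be sketched as picking, at each epoch boundary, the lowest supported link rate that still covers the demand observed over the epoch. This is a hedged illustration: the rate ladder and headroom factor below are invented for the example and do not come from the patent.

```python
# Sketch of end-of-epoch link-speed selection: track bytes sent over an
# epoch, then reconfigure the link to the lowest rate covering that demand.

AVAILABLE_RATES_GBPS = [1, 2.5, 5, 10, 40]  # assumed supported link speeds
HEADROOM = 1.25                             # assumed safety margin

def next_epoch_rate(bytes_sent: int, epoch_s: float) -> float:
    """Pick the lowest rate covering the epoch's average load with headroom."""
    demand_gbps = (bytes_sent * 8) / epoch_s / 1e9
    for rate in AVAILABLE_RATES_GBPS:
        if rate >= demand_gbps * HEADROOM:
            return rate
    return AVAILABLE_RATES_GBPS[-1]  # saturated: run at full speed

# Example: 1 GB sent in a 1-second epoch is an 8 Gb/s average load, so the
# link steps down to the 10 Gb/s rate rather than running at 40 Gb/s.
print(next_epoch_rate(10**9, 1.0))  # 10
```

The headroom factor hedges against the load rising before the next epoch boundary; an idle link drops to the lowest rate rather than powering off, preserving connectivity.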


    Systems and methods for routing data through data centers using an indirect generalized hypercube network

    Publication No.: US09705798B1

    Publication Date: 2017-07-11

    Application No.: US14149469

    Filing Date: 2014-01-07

    Applicant: Google Inc.

    CPC classification number: H04L47/122 H04L43/0888 H04L45/22

    Abstract: Aspects and implementations of the present disclosure are directed to an indirect generalized hypercube network in a data center. Servers in the data center participate in both an over-subscribed fat tree network hierarchy culminating in a gateway connection to external networks and in an indirect hypercube network interconnecting a plurality of servers in the fat tree. The participant servers have multiple network interface ports, including at least one port for a link to an edge layer network device of the fat tree and at least one port for a link to a peer server in the indirect hypercube network. Servers are grouped by edge layer network device to form virtual switches in the indirect hypercube network and data packets are routed between servers using routes through the virtual switches. Routes leverage properties of the hypercube topology. Participant servers function as destination points and as virtual interfaces for the virtual switches.

    4. Method for optimizing memory controller placement in multi-core processors using a fitness metric for a bit vector of each memory controller (Invention Grant; In Force)

    Publication No.: US08682815B1

    Publication Date: 2014-03-25

    Application No.: US13847748

    Filing Date: 2013-03-20

    Applicant: Google Inc.

    CPC classification number: G06F12/0813 G06F15/17312 G06F17/5072 G06F17/5077

    Abstract: The location of the memory controllers within the on-chip fabric of multiprocessor architectures plays a central role in the latency and bandwidth characteristics of processor-to-memory traffic. Intelligent placement substantially reduces the maximum channel load, depending on the specific memory controller configuration selected. A variety of simulation techniques are used, alone and in combination, to determine optimal memory controller arrangements. Diamond-type and diagonal X-type memory controller configurations that spread network traffic across all rows and columns in a multiprocessor array substantially improve on other arrangements. Such placements reduce interconnect latency by an average of 10% for real workloads, and the small number of memory controllers relative to the number of on-chip cores opens up a rich design space to optimize latency and bandwidth characteristics of the on-chip network.
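One plausible reading of the "maximum channel load" criterion can be sketched as follows: model every core tile sending a request to every memory controller (and receiving a reply) over dimension-ordered X-then-Y mesh routing, and score a placement by its most-loaded link. This toy model, the traffic pattern, and the 4x4 example placements are all assumptions for illustration, not the patent's actual simulation methodology.

```python
# Illustrative fitness evaluation for a memory controller placement on an
# n x n mesh: lower maximum channel load is better.
from collections import Counter
from itertools import product

def max_channel_load(n: int, controllers: set) -> int:
    """Max flows crossing any one directed mesh link, counting a request
    from every core tile to every controller plus the matching reply."""
    load = Counter()

    def add_route(src, dst):
        # Dimension-ordered (X-then-Y) routing, one link increment per hop.
        (x, y), (dx, dy) = src, dst
        while x != dx:
            nx = x + (1 if dx > x else -1)
            load[((x, y), (nx, y))] += 1
            x = nx
        while y != dy:
            ny = y + (1 if dy > y else -1)
            load[((x, y), (x, ny))] += 1
            y = ny

    for core in product(range(n), repeat=2):
        for mc in controllers:
            add_route(core, mc)  # request
            add_route(mc, core)  # reply
    return max(load.values())

# A row placement concentrates traffic on row-0 links; a diagonal placement
# spreads it across all rows and columns, lowering the maximum load.
row = max_channel_load(4, {(x, 0) for x in range(4)})
diag = max_channel_load(4, {(i, i) for i in range(4)})
print(row, diag)
```

Even this simplified model shows the diagonal placement beating the edge-row placement on maximum channel load, consistent with the abstract's claim about diamond-type and diagonal X-type configurations.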


    Bi-Connected hierarchical data center network based on multi-ported network interface controllers (NICs)

    Publication No.: US10084718B1

    Publication Date: 2018-09-25

    Application No.: US14145114

    Filing Date: 2013-12-31

    Applicant: Google Inc.

    CPC classification number: H04L47/60 H04L45/06 H04L49/15

    Abstract: The exemplary embodiments provide an indirect hypercube topology for a datacenter network. The indirect hypercube is formed by providing each host with a multi-port network interface controller (NIC). One port of the NIC is connected to a fat-tree network while another port is connected to a peer host forming a single dimension of an indirect binary n-cube. Hence, the composite topology becomes a hierarchical tree of cubes. The hierarchical tree of cubes topology uses (a) the fat-tree topology to scale to large host count and (b) the indirect binary n-cube topology at the leaves of the fat-tree topology for a tightly coupled high-bandwidth interconnect among a subset of hosts.
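The wiring that turns leaf groups of the fat-tree into an indirect binary n-cube can be sketched by letting server k under edge switch g use its second NIC port to reach server k under edge switch g XOR 2^k, so each server realizes exactly one cube dimension. This specific layout is an assumed construction consistent with the abstract, not the patent's exact wiring.

```python
# Sketch of second-port NIC wiring for a hierarchical tree of cubes: each
# group of n hosts under one edge switch acts as a virtual switch of an
# n-dimensional indirect binary cube, one dimension per host.

def peer_of(group: int, server: int) -> tuple:
    """Peer (group, server) reached via server `server`'s second NIC port:
    flip bit `server` of the group address, keep the server index."""
    return (group ^ (1 << server), server)

def build_links(n_dims: int) -> set:
    """Enumerate the undirected peer links of the n-dimensional cube of groups."""
    links = set()
    for g in range(2 ** n_dims):
        for s in range(n_dims):
            links.add(frozenset({(g, s), peer_of(g, s)}))
    return links

# A 3-cube has 8 groups (virtual switches) of 3 servers each; every link is
# shared by two endpoints, giving 8 * 3 / 2 undirected peer links.
print(len(build_links(3)))  # 12
```

Because `peer_of` is an involution (applying it twice returns the original endpoint), each cable appears exactly once in the undirected link set, matching the one-peer-port-per-host constraint of a multi-ported NIC design.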
