PCIe lane aggregation over a high speed link

    Publication No.: US10235318B2

    Publication Date: 2019-03-19

    Application No.: US15812493

    Filing Date: 2017-11-14

    Abstract: A method of operating a computer network system configured with disaggregated inputs/outputs. This system can be configured in a leaf-spine architecture and include a router coupled to a network source, a plurality of core switches coupled to the router, a plurality of aggregator switches coupled to each of the plurality of core switches, and a plurality of rack modules coupled to each of the plurality of aggregator switches. Each of the rack modules can include an I/O appliance with a downstream aggregator module, a plurality of server devices each with PCIe interfaces, and an upstream aggregator module that aggregates each of the PCIe interfaces. A high-speed link can be configured between the downstream and upstream aggregator modules via aggregation of many serial lanes to provide reliable high-speed bit stream transport over long distances, which allows for better utilization of resources and scalability of memory capacity independent of the server count.
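
    The rack-level aggregation described in the abstract can be sketched as a small data model. This is a minimal illustrative sketch, not the patented implementation: the class names, server count, and per-server lane width (x4) are all assumptions made for the example.

    ```python
    from dataclasses import dataclass, field
    from typing import List

    # Illustrative model of one rack module from the abstract: several servers,
    # each exposing a PCIe interface, feeding an upstream aggregator module.
    # All names and numbers here are assumptions for illustration only.

    @dataclass
    class Server:
        name: str
        pcie_lanes: int = 4  # assume a x4 PCIe interface per server

    @dataclass
    class RackModule:
        servers: List[Server] = field(default_factory=list)

        def aggregated_lanes(self) -> int:
            # The upstream aggregator bundles every server's serial lanes
            # into one high-speed link toward the I/O appliance.
            return sum(s.pcie_lanes for s in self.servers)

    # Eight x4 servers aggregate to 8 * 4 = 32 serial lanes on the link.
    rack = RackModule([Server(f"srv{i}") for i in range(8)])
    print(rack.aggregated_lanes())  # 32
    ```

    The point of the aggregation is visible in the arithmetic: the high-speed link capacity scales with the number of lanes bundled, independent of how many servers contribute them.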

    Isolated shared memory architecture (iSMA)
    Granted invention patent, in force

    Publication No.: US09250831B1

    Publication Date: 2016-02-02

    Application No.: US14194574

    Filing Date: 2014-02-28

    Abstract: Techniques for a massively parallel and memory centric computing system. The system has a plurality of processing units operably coupled to each other through one or more communication channels. Each of the plurality of processing units has an ISMn interface device. Each of the plurality of ISMn interface devices is coupled to an ISMe endpoint connected to each of the processing units. The system has a plurality of DRAM or Flash memories configured in a disaggregated architecture and one or more switch nodes operably coupling the plurality of DRAM or Flash memories in the disaggregated architecture. The system has a plurality of high speed optical cables configured to communicate at a transmission rate of 100 G or greater to facilitate communication from any one of the plurality of processing units to any one of the plurality of DRAM or Flash memories.
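
    The 100 G optical interconnect in the abstract invites a back-of-envelope check on access latency to disaggregated memory. The sketch below is an illustrative calculation only; the payload size and the protocol-efficiency factor are assumptions, not figures from the patent.

    ```python
    # Rough transfer-time estimate between a processing unit and disaggregated
    # DRAM/Flash over a 100 Gb/s optical link, as described in the abstract.
    # Payload size and efficiency factor are illustrative assumptions.

    LINK_RATE_BPS = 100e9   # 100 G transmission rate from the abstract
    EFFICIENCY = 0.9        # assumed encoding/protocol overhead factor

    def transfer_time_us(payload_bytes: int) -> float:
        """Microseconds to move payload_bytes at the effective link rate."""
        bits = payload_bytes * 8
        return bits / (LINK_RATE_BPS * EFFICIENCY) * 1e6

    # One 4 KiB page: 32768 bits / 90 Gb/s ~= 0.36 microseconds on the wire.
    print(round(transfer_time_us(4096), 3))
    ```

    Serialization time at this rate is well under a microsecond per page, which is why the abstract can treat memory capacity as a pooled resource reachable from any processing unit.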


    PCIE lane aggregation over a high speed link

    Publication No.: US10929325B2

    Publication Date: 2021-02-23

    Application No.: US16738984

    Filing Date: 2020-01-09

    Abstract: A method of operating a computer network system configured with disaggregated inputs/outputs. This system can be configured in a leaf-spine architecture and include a router coupled to a network source, a plurality of core switches coupled to the router, a plurality of aggregator switches coupled to each of the plurality of core switches, and a plurality of rack modules coupled to each of the plurality of aggregator switches. Each of the rack modules can include an I/O appliance with a downstream aggregator module, a plurality of server devices each with PCIe interfaces, and an upstream aggregator module that aggregates each of the PCIe interfaces. A high-speed link can be configured between the downstream and upstream aggregator modules via aggregation of many serial lanes to provide reliable high-speed bit stream transport over long distances, which allows for better utilization of resources and scalability of memory capacity independent of the server count.

    PCIE lane aggregation over a high speed link
    Granted invention patent, in force

    Publication No.: US09430437B1

    Publication Date: 2016-08-30

    Application No.: US13963329

    Filing Date: 2013-08-09

    Abstract: A computer network system configured with disaggregated inputs/outputs. This system can be configured in a leaf-spine architecture and can include a router coupled to a network source, a plurality of core switches coupled to the router, a plurality of aggregator switches coupled to each of the plurality of core switches, and a plurality of rack modules coupled to each of the plurality of aggregator switches. The plurality of rack modules can each include an I/O appliance with a downstream aggregator module, a plurality of server devices each with PCIe interfaces, and an upstream aggregator module that aggregates each of the PCIe interfaces. A high-speed link can be configured between the downstream and upstream aggregator modules via aggregation of many serial lanes to provide reliable high speed bit stream transport over long distances, which allows for better utilization of resources and scalability of memory capacity independent of the server count.


    PCIe lane aggregation over a high speed link

    Publication No.: US10572425B2

    Publication Date: 2020-02-25

    Application No.: US16267748

    Filing Date: 2019-02-05

    Abstract: A method of operating a computer network system configured with disaggregated inputs/outputs. This system can be configured in a leaf-spine architecture and include a router coupled to a network source, a plurality of core switches coupled to the router, a plurality of aggregator switches coupled to each of the plurality of core switches, and a plurality of rack modules coupled to each of the plurality of aggregator switches. Each of the rack modules can include an I/O appliance with a downstream aggregator module, a plurality of server devices each with PCIe interfaces, and an upstream aggregator module that aggregates each of the PCIe interfaces. A high-speed link can be configured between the downstream and upstream aggregator modules via aggregation of many serial lanes to provide reliable high-speed bit stream transport over long distances, which allows for better utilization of resources and scalability of memory capacity independent of the server count.
