Massively parallel supercomputer
    3.
    Granted Patent
    Massively parallel supercomputer (In force)

    Publication No.: US08250133B2

    Publication Date: 2012-08-21

    Application No.: US12492799

    Filing Date: 2009-06-26

    IPC Classification: G06F15/16

    Abstract: A novel massively parallel supercomputer of hundreds-of-teraOPS scale includes node architectures based upon System-On-a-Chip technology, i.e., each processing node comprises a single Application Specific Integrated Circuit (ASIC). Within each ASIC node is a plurality of processing elements, each of which consists of a central processing unit (CPU) and a plurality of floating point processors, to enable an optimal balance of computational performance, packaging density, low cost, and power and cooling requirements. The plurality of processors within a single node work individually or simultaneously on any combination of computation or communication as required by the particular algorithm being solved. The system-on-a-chip ASIC nodes are interconnected by multiple independent networks that maximize packet communication throughput and minimize latency. The multiple networks include three high-speed networks for parallel algorithm message passing: a Torus, a Global Tree, and a Global Asynchronous network that provides global barrier and notification functions.

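    The Torus network named in the abstract links each node to its nearest neighbors in a three-dimensional wrap-around grid. The sketch below illustrates only that addressing idea; the lattice dimensions are arbitrary assumptions, not values from the patent.

```cpp
#include <array>
#include <cstdio>

// Assumed lattice dimensions; the abstract does not fix these values.
constexpr int DX = 8, DY = 8, DZ = 8;

struct Node { int x, y, z; };

// Wrap-around arithmetic: stepping past the edge of a dimension lands on the
// opposite edge, which is what makes the grid a torus rather than a mesh.
int wrap(int v, int dim) { return (v % dim + dim) % dim; }

// The six nearest neighbors of a node on a 3-D torus (+/-1 in each dimension).
std::array<Node, 6> neighbors(Node n) {
    return {{
        {wrap(n.x + 1, DX), n.y, n.z}, {wrap(n.x - 1, DX), n.y, n.z},
        {n.x, wrap(n.y + 1, DY), n.z}, {n.x, wrap(n.y - 1, DY), n.z},
        {n.x, n.y, wrap(n.z + 1, DZ)}, {n.x, n.y, wrap(n.z - 1, DZ)}
    }};
}

int main() {
    for (const Node& nb : neighbors({0, 0, 0}))          // a corner node
        std::printf("(%d, %d, %d)\n", nb.x, nb.y, nb.z);  // e.g. (7, 0, 0) via wrap-around
}
```

    The wrap-around links are what distinguish a torus from a plain mesh: a node at one edge of a dimension connects directly to the node at the opposite edge, so every node has exactly six neighbors.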

    Massively parallel supercomputer
    4.
    Granted Patent
    Massively parallel supercomputer (In force)

    Publication No.: US07555566B2

    Publication Date: 2009-06-30

    Application No.: US10468993

    Filing Date: 2002-02-25

    IPC Classification: G06F15/16

    Abstract: A novel massively parallel supercomputer of hundreds-of-teraOPS scale includes node architectures based upon System-On-a-Chip technology, i.e., each processing node comprises a single Application Specific Integrated Circuit (ASIC). Within each ASIC node is a plurality of processing elements, each of which consists of a central processing unit (CPU) and a plurality of floating point processors, to enable an optimal balance of computational performance, packaging density, low cost, and power and cooling requirements. The plurality of processors within a single node may be used individually or simultaneously to work on any combination of computation or communication as required by the particular algorithm being solved or executed at any point in time. The system-on-a-chip ASIC nodes are interconnected by multiple independent networks that maximize packet communication throughput and minimize latency. In the preferred embodiment, the multiple networks include three high-speed networks for parallel algorithm message passing: a Torus, a Global Tree, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be utilized collaboratively or independently according to the needs or phases of an algorithm to optimize algorithm processing performance. For particular classes of parallel algorithms, or parts of parallel calculations, this architecture exhibits exceptional computational performance, and it may be enabled to perform calculations for new classes of parallel algorithms. Additional networks are provided for external connectivity and are used for Input/Output, System Management and Configuration, and Debug and Monitoring functions. Special node packaging techniques implementing midplanes and other hardware devices facilitate partitioning of the supercomputer into multiple networks for optimizing supercomputing resources.

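    The Global Tree network mentioned here supports collective operations such as global reductions, with partial results combined on the way toward the root. The following is a software-only illustration of that idea under assumed conditions (a binary tree and integer summation); it is not the hardware mechanism claimed in the patent.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Combine per-node partial results up a binary tree: each parent merges its
// children's values with its own, so the root ends up with the global result.
long long tree_reduce(const std::vector<long long>& values, std::size_t root = 0) {
    std::size_t left = 2 * root + 1, right = 2 * root + 2;
    long long sum = values[root];
    if (left < values.size())  sum += tree_reduce(values, left);
    if (right < values.size()) sum += tree_reduce(values, right);
    return sum;
}

int main() {
    std::vector<long long> per_node{3, 1, 4, 1, 5, 9, 2, 6};   // one partial sum per node
    std::printf("global sum = %lld\n", tree_reduce(per_node)); // prints 31
}
```

    On a dedicated tree network the combining happens as packets move toward the root, so the cost of such a reduction grows with the depth of the tree rather than with the number of nodes.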

    NOVEL MASSIVELY PARALLEL SUPERCOMPUTER
    5.
    Patent Application
    NOVEL MASSIVELY PARALLEL SUPERCOMPUTER (In force)

    Publication No.: US20090259713A1

    Publication Date: 2009-10-15

    Application No.: US12492799

    Filing Date: 2009-06-26

    Abstract: A novel massively parallel supercomputer of hundreds-of-teraOPS scale includes node architectures based upon System-On-a-Chip technology, i.e., each processing node comprises a single Application Specific Integrated Circuit (ASIC). Within each ASIC node is a plurality of processing elements, each of which consists of a central processing unit (CPU) and a plurality of floating point processors, to enable an optimal balance of computational performance, packaging density, low cost, and power and cooling requirements. The plurality of processors within a single node may be used individually or simultaneously to work on any combination of computation or communication as required by the particular algorithm being solved or executed at any point in time. The system-on-a-chip ASIC nodes are interconnected by multiple independent networks that maximize packet communication throughput and minimize latency. In the preferred embodiment, the multiple networks include three high-speed networks for parallel algorithm message passing: a Torus, a Global Tree, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be utilized collaboratively or independently according to the needs or phases of an algorithm to optimize algorithm processing performance. For particular classes of parallel algorithms, or parts of parallel calculations, this architecture exhibits exceptional computational performance, and it may be enabled to perform calculations for new classes of parallel algorithms. Additional networks are provided for external connectivity and are used for Input/Output, System Management and Configuration, and Debug and Monitoring functions. Special node packaging techniques implementing midplanes and other hardware devices facilitate partitioning of the supercomputer into multiple networks for optimizing supercomputing resources.

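    The closing sentence of the abstract refers to partitioning the machine through midplane packaging. A rough way to picture this (the midplane size and the fixed-size partition rule below are illustrative assumptions, not details from the patent) is to treat a partition as a contiguous block of midplanes handed to one job while the rest of the machine runs others.

```cpp
#include <cstdio>
#include <vector>

// Assumed midplane size; the abstract does not specify hardware quantities.
constexpr int NODES_PER_MIDPLANE = 512;

struct Partition { int first_midplane; int midplane_count; };

// Split a machine of `total_midplanes` into fixed-size partitions, each of which
// could then be wired up with its own independent networks.
std::vector<Partition> partition_machine(int total_midplanes, int midplanes_per_job) {
    std::vector<Partition> parts;
    for (int m = 0; m + midplanes_per_job <= total_midplanes; m += midplanes_per_job)
        parts.push_back({m, midplanes_per_job});
    return parts;
}

int main() {
    for (const Partition& p : partition_machine(8, 2))
        std::printf("partition: midplanes %d-%d, %d nodes\n",
                    p.first_midplane, p.first_midplane + p.midplane_count - 1,
                    p.midplane_count * NODES_PER_MIDPLANE);
}
```

    Each such block can then operate as an independent sub-machine, which is the kind of partitioning the abstract attributes to the midplane packaging.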

    NOVEL MASSIVELY PARALLEL SUPERCOMPUTER
    6.
    Patent Application
    NOVEL MASSIVELY PARALLEL SUPERCOMPUTER (In force)

    Publication No.: US20120311299A1

    Publication Date: 2012-12-06

    Application No.: US13566024

    Filing Date: 2012-08-03

    IPC Classification: G06F15/80

    Abstract: A novel massively parallel supercomputer of hundreds-of-teraOPS scale includes node architectures based upon System-On-a-Chip technology, i.e., each processing node comprises a single Application Specific Integrated Circuit (ASIC). Within each ASIC node is a plurality of processing elements, each of which consists of a central processing unit (CPU) and a plurality of floating point processors, to enable an optimal balance of computational performance, packaging density, low cost, and power and cooling requirements. The plurality of processors within a single node work individually or simultaneously on any combination of computation or communication as required by the particular algorithm being solved. The system-on-a-chip ASIC nodes are interconnected by multiple independent networks that maximize packet communication throughput and minimize latency. The multiple networks include three high-speed networks for parallel algorithm message passing: a Torus, a Global Tree, and a Global Asynchronous network that provides global barrier and notification functions.


    Global interrupt and barrier networks
    7.
    Granted Patent
    Global interrupt and barrier networks (Expired)

    Publication No.: US07444385B2

    Publication Date: 2008-10-28

    Application No.: US10468997

    Filing Date: 2002-02-25

    IPC Classification: G06F15/16

    Abstract: A system and method for generating global asynchronous signals in a computing structure. In particular, a global interrupt and barrier network implements logic for generating global interrupt and barrier signals that control global asynchronous operations performed by processing elements at selected processing nodes of a computing structure in accordance with a processing algorithm, and includes physical interconnections of the processing nodes for communicating the global interrupt and barrier signals to the elements via low-latency paths. The global asynchronous signals respectively initiate interrupt and barrier operations at the processing nodes at times selected for optimizing performance of the processing algorithms. In one embodiment, the global interrupt and barrier network is implemented in a scalable, massively parallel supercomputing device structure comprising a plurality of processing nodes interconnected by multiple independent networks, with each node including one or more processing elements for performing computation or communication activity as required when performing parallel algorithm operations. One of the multiple independent networks is a global tree network for enabling high-speed global tree communications among global tree network nodes or sub-trees thereof. The global interrupt and barrier network may operate in parallel with the global tree network to provide global asynchronous sideband signals.

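    The barrier operation this network provides has simple semantics: every participating node blocks until all nodes have signaled arrival, after which all proceed together. The sketch below reproduces only those semantics in software, using C++20's std::barrier with threads standing in for processing nodes; the patented design realizes the same behavior with dedicated low-latency sideband signals rather than shared-memory synchronization.

```cpp
#include <barrier>   // C++20
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    constexpr int kNodes = 4;          // threads stand in for processing nodes
    std::barrier sync_point(kNodes);   // all "nodes" must arrive before any proceeds

    auto node_work = [&](int id) {
        std::printf("node %d: local phase done\n", id);
        sync_point.arrive_and_wait();  // the global barrier
        std::printf("node %d: past the barrier\n", id);
    };

    std::vector<std::thread> nodes;
    for (int i = 0; i < kNodes; ++i) nodes.emplace_back(node_work, i);
    for (std::thread& t : nodes) t.join();
}
```

    The point of the dedicated network described in the abstract is to complete this synchronization over low-latency paths instead of routing barrier traffic through the general-purpose message-passing networks.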

    SYSTEM AND METHOD FOR PROGRAMMABLE BANK SELECTION FOR BANKED MEMORY SUBSYSTEMS
    8.
    Patent Application
    SYSTEM AND METHOD FOR PROGRAMMABLE BANK SELECTION FOR BANKED MEMORY SUBSYSTEMS (In force)

    Publication No.: US20090006718A1

    Publication Date: 2009-01-01

    Application No.: US11768805

    Filing Date: 2007-06-26

    IPC Classification: G06F12/02

    Abstract: A programmable memory system and method for enabling one or more processor devices to access shared memory in a computing environment, the shared memory including one or more memory storage structures having addressable locations for storing data. The system comprises: one or more first logic devices associated with a respective one or more processor devices, each first logic device receiving physical memory address signals and programmable to generate a respective memory storage structure select signal upon receipt of pre-determined address bit values at selected physical memory address bit locations; and a second logic device responsive to each respective select signal for generating an address signal used for selecting a memory storage structure for processor access. The system thus provides each processor device of the computing environment with memory storage access distributed across the one or more memory storage structures.

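    The core of the scheme is that programmable, pre-determined bit positions of the physical address decide which memory storage structure (bank) serves an access. The sketch below illustrates only that idea; the bit mask, shift, and bank count are invented for the example and do not reflect the claimed logic devices.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical programmable selector: which physical-address bits choose the
// bank is held in a software-settable mask rather than being hard-wired.
struct BankSelector {
    uint64_t bit_mask;  // address bits that participate in bank selection
    unsigned shift;     // position of the lowest selected bit

    unsigned bank_for(uint64_t phys_addr) const {
        return static_cast<unsigned>((phys_addr & bit_mask) >> shift);
    }
};

int main() {
    BankSelector sel{0x180, 7};  // select among 4 banks using address bits 7..8
    const uint64_t addrs[] = {0x000, 0x080, 0x100, 0x180, 0x200};
    for (uint64_t addr : addrs)
        std::printf("addr 0x%03llx -> bank %u\n",
                    static_cast<unsigned long long>(addr), sel.bank_for(addr));
}
```

    Reprogramming bit_mask and shift changes how consecutive addresses spread across banks, which is the knob the abstract describes for distributing each processor's accesses over the memory storage structures.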

    System and method for programmable bank selection for banked memory subsystems
    9.
    Granted Patent
    System and method for programmable bank selection for banked memory subsystems (In force)

    Publication No.: US07793038B2

    Publication Date: 2010-09-07

    Application No.: US11768805

    Filing Date: 2007-06-26

    IPC Classification: G06F12/00

    Abstract: A programmable memory system and method for enabling one or more processor devices to access shared memory in a computing environment, the shared memory including one or more memory storage structures having addressable locations for storing data. The system comprises: one or more first logic devices associated with a respective one or more processor devices, each first logic device receiving physical memory address signals and programmable to generate a respective memory storage structure select signal upon receipt of pre-determined address bit values at selected physical memory address bit locations; and a second logic device responsive to each of the respective select signals for generating an address signal used for selecting a memory storage structure for processor access. The system thus provides each processor device of the computing environment with memory storage access distributed across the one or more memory storage structures.


    Snoop filtering system in a multiprocessor system
    10.
    Granted Patent
    Snoop filtering system in a multiprocessor system (In force)

    Publication No.: US07380071B2

    Publication Date: 2008-05-27

    Application No.: US11093127

    Filing Date: 2005-03-29

    IPC Classification: G06F13/28; G06F12/00

    Abstract: A system and method for supporting cache coherency in a computing environment having multiple processing units, each unit having an associated cache memory system operatively coupled therewith. The system includes a plurality of interconnected snoop filter units, each snoop filter unit corresponding to and in communication with a respective processing unit. Each snoop filter unit comprises a plurality of devices for receiving asynchronous snoop requests from respective memory writing sources in the computing environment; a point-to-point interconnect comprising communication links for directly connecting memory writing sources to corresponding receiving devices; and a plurality of parallel operating filter devices coupled in one-to-one correspondence with each receiving device for processing the snoop requests received there and either forwarding them or preventing their forwarding to the associated processing unit. Each of the parallel operating filter devices comprises parallel operating sub-filter elements, each of which simultaneously receives an identical snoop request and implements one or more different snoop filter algorithms to identify snoop requests for data that is not cached locally at the associated processing unit, preventing forwarding of those requests to the processor unit. In this manner, the number of snoop requests forwarded to a processing unit is reduced, thereby increasing performance of the computing environment.

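    To make the filtering idea concrete, the following toy model runs two sub-filters over each incoming snoop request and forwards the request to the processor only when neither sub-filter can prove that the requested line is absent from the local cache. Both heuristics (an exact set of cached lines and a coarse address-range check) are invented for illustration and are not the snoop filter algorithms claimed in the patent.

```cpp
#include <cstdint>
#include <cstdio>
#include <unordered_set>

// Toy sub-filter 1: tracks the exact set of lines held in the local cache.
struct LineSetFilter {
    std::unordered_set<uint64_t> cached_lines;
    bool proves_not_cached(uint64_t line) const { return cached_lines.count(line) == 0; }
};

// Toy sub-filter 2: assumes the local cache only ever holds lines in [lo, hi).
struct RangeFilter {
    uint64_t lo, hi;
    bool proves_not_cached(uint64_t line) const { return line < lo || line >= hi; }
};

// Forward a snoop request only if no sub-filter can rule it out; requests that
// provably cannot hit the local cache are filtered, unloading the processor.
bool forward_snoop(uint64_t line, const LineSetFilter& a, const RangeFilter& b) {
    return !(a.proves_not_cached(line) || b.proves_not_cached(line));
}

int main() {
    LineSetFilter lines{{0x8100, 0x8200}};
    RangeFilter   range{0x8000, 0x9000};
    const uint64_t requests[] = {0x8100, 0x8300, 0x1000};
    for (uint64_t line : requests)
        std::printf("snoop 0x%llx -> %s\n", static_cast<unsigned long long>(line),
                    forward_snoop(line, lines, range) ? "forward" : "filtered");
}
```

    As in the abstract, the sub-filters see the same request in parallel and the request is dropped as soon as any of them can show the data is not cached locally, which is what reduces the snoop traffic reaching the processing unit.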