Method and apparatus for filtering snoop requests in a point-to-point interconnect architecture
    63.
    Granted patent (In force)

    Publication No.: US07386683B2

    Publication Date: 2008-06-10

    Application No.: US11093131

    Filing Date: 2005-03-29

    IPC Classification: G06F13/28 G06F12/00

    Abstract: A method and apparatus for supporting cache coherency in a multiprocessor computing environment having multiple processing units, each processing unit having one or more local cache memories associated and operatively connected therewith. The method comprises providing a snoop filter device associated with each processing unit, each snoop filter device having a plurality of dedicated input ports for receiving snoop requests from dedicated memory writing sources in the multiprocessor computing environment. Each of the memory writing sources is directly connected to the dedicated input ports of all other snoop filter devices associated with all other processing units in a point-to-point interconnect fashion. Each snoop filter device includes a plurality of parallel operating port snoop filters in correspondence with the plurality of dedicated input ports that are adapted to concurrently filter snoop requests received from respective dedicated memory writing sources and forward a subset of those requests to its associated processing unit.

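    To make the filtering scheme described in the abstract concrete, the following minimal C sketch models a snoop filter device with one dedicated input port, and one independently operating port filter, per remote memory-writing source; a request is forwarded to the local processor only if its port filter cannot rule the line out. All names, sizes, and the simple membership test (port_filter_t, NUM_REMOTE_SOURCES, cached_lines) are illustrative assumptions, not the patented design.

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdint.h>

        #define NUM_REMOTE_SOURCES 3   /* assumed number of remote memory-writing sources */
        #define FILTER_ENTRIES     64  /* assumed per-port record of locally cached lines */

        /* One port filter per dedicated point-to-point input port. */
        typedef struct {
            uint64_t cached_lines[FILTER_ENTRIES];  /* addresses that may be cached locally */
            size_t   count;
        } port_filter_t;

        /* The snoop filter device attached to one processing unit. */
        typedef struct {
            port_filter_t port[NUM_REMOTE_SOURCES]; /* one independent filter per source */
        } snoop_filter_t;

        /* Returns true if the snoop must be forwarded to the local processor,
         * i.e. the line may be present in the local cache hierarchy. */
        static bool port_filter_check(const port_filter_t *pf, uint64_t line_addr)
        {
            for (size_t i = 0; i < pf->count; i++)
                if (pf->cached_lines[i] == line_addr)
                    return true;   /* possible local copy: forward the snoop */
            return false;          /* provably not cached: drop the snoop    */
        }

        /* Each dedicated port receives snoops from exactly one remote writer, so in
         * hardware the per-port checks can run concurrently rather than serially. */
        bool filter_snoop(const snoop_filter_t *sf, int source_port, uint64_t line_addr)
        {
            return port_filter_check(&sf->port[source_port], line_addr);
        }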

    Snoop filter for filtering snoop requests
    64.
    Granted patent (In force)

    Publication No.: US07373462B2

    Publication Date: 2008-05-13

    Application No.: US11093152

    Filing Date: 2005-03-29

    IPC Classification: G06F13/28 G06F12/00

    Abstract: A method and apparatus for supporting cache coherency in a multiprocessor computing environment having multiple processing units, each processing unit having one or more local cache memories associated and operatively connected therewith. The method comprises providing a snoop filter device associated with each processing unit, each snoop filter device having a plurality of dedicated input ports for receiving snoop requests from dedicated memory writing sources in the multiprocessor computing environment. Each snoop filter device includes a plurality of parallel operating port snoop filters in correspondence with the plurality of dedicated input ports, each port snoop filter implementing one or more parallel operating sub-filter elements that are adapted to concurrently filter snoop requests received from respective dedicated memory writing sources and forward a subset of those requests to its associated processing unit.

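    This abstract adds parallel sub-filter elements inside each port snoop filter. The C sketch below composes two hypothetical sub-filters, a cacheable-address-range check and a small cache of recently snooped (already invalidated) lines, and forwards a request only when neither element can discard it; both sub-filters and the composition rule are assumptions made for illustration, not the claimed implementation.

        #include <stdbool.h>
        #include <stdint.h>

        /* Hypothetical sub-filter element: discards snoops whose address falls
         * outside the range this processor can cache at all. */
        typedef struct {
            uint64_t base;
            uint64_t limit;
        } range_subfilter_t;

        /* Hypothetical sub-filter element: remembers lines that were already snooped
         * (and therefore invalidated locally), so repeated snoops can be dropped. */
        #define SNOOP_CACHE_ENTRIES 32
        typedef struct {
            uint64_t recently_snooped[SNOOP_CACHE_ENTRIES];
            int      valid;   /* number of live entries */
        } snoop_cache_subfilter_t;

        static bool range_can_discard(const range_subfilter_t *f, uint64_t addr)
        {
            return addr < f->base || addr >= f->limit;   /* not cacheable here */
        }

        static bool snoop_cache_can_discard(const snoop_cache_subfilter_t *f, uint64_t addr)
        {
            for (int i = 0; i < f->valid; i++)
                if (f->recently_snooped[i] == addr)
                    return true;   /* already invalidated by an earlier snoop */
            return false;
        }

        /* One port snoop filter: its sub-filter elements examine the same request in
         * parallel; the request is forwarded only if no element can discard it. */
        bool port_filter_forward(const range_subfilter_t *rf,
                                 const snoop_cache_subfilter_t *sc,
                                 uint64_t addr)
        {
            return !(range_can_discard(rf, addr) || snoop_cache_can_discard(sc, addr));
        }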

    Snoop filter for filtering snoop requests
    65.
    Granted patent (In force)

    Publication No.: US08677073B2

    Publication Date: 2014-03-18

    Application No.: US13587420

    Filing Date: 2012-08-16

    IPC Classification: G06F13/28 G06F12/00

    Abstract: A method and apparatus for supporting cache coherency in a multiprocessor computing environment having multiple processing units, each processing unit having one or more local cache memories associated and operatively connected therewith. The method comprises providing a snoop filter device associated with each processing unit, each snoop filter device having a plurality of dedicated input ports for receiving snoop requests from dedicated memory writing sources in the multiprocessor computing environment. Each snoop filter device includes a plurality of parallel operating port snoop filters in correspondence with the plurality of dedicated input ports, each port snoop filter implementing one or more parallel operating sub-filter elements that are adapted to concurrently filter snoop requests received from respective dedicated memory writing sources and forward a subset of those requests to its associated processing unit.


    TLB EXCLUSION RANGE
    67.
    Patent application (In force)

    Publication No.: US20130024648A1

    Publication Date: 2013-01-24

    Application No.: US13618730

    Filing Date: 2012-09-14

    IPC Classification: G06F12/10

    Abstract: A system and method for accessing memory are provided. The system comprises a lookup buffer for storing one or more page table entries, wherein each of the one or more page table entries comprises at least a virtual page number and a physical page number; a logic circuit for receiving a virtual address from said processor, said logic circuit for matching the virtual address to the virtual page number in one of the page table entries to select the physical page number in the same page table entry, said page table entry having one or more bits set to exclude a memory range from a page.

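    As a rough illustration of the lookup described above, the C sketch below matches a virtual page number in a small lookup buffer and returns the translation unless the page offset lands in the entry's excluded range; the entry layout, page size, and the three-way result code are assumptions made for the example, not the claimed hardware.

        #include <stdbool.h>
        #include <stdint.h>

        #define PAGE_SHIFT 12u                 /* assumed 4 KiB pages */
        #define PAGE_MASK  ((1u << PAGE_SHIFT) - 1u)

        /* One lookup-buffer entry; the exclusion fields are illustrative bits that
         * carve a sub-range out of the page, as the abstract describes. */
        typedef struct {
            uint64_t vpn;            /* virtual page number  */
            uint64_t ppn;            /* physical page number */
            bool     valid;
            bool     has_exclusion;  /* "one or more bits set to exclude a memory range" */
            uint32_t excl_start;     /* excluded range, as offsets within the page */
            uint32_t excl_end;       /* exclusive upper bound */
        } tlb_entry_t;

        typedef enum { TLB_MISS, TLB_HIT, TLB_EXCLUDED } tlb_result_t;

        /* Match the virtual page number and translate, unless the offset falls in
         * the entry's excluded range. */
        tlb_result_t tlb_lookup(const tlb_entry_t *tlb, int entries,
                                uint64_t vaddr, uint64_t *paddr)
        {
            uint64_t vpn    = vaddr >> PAGE_SHIFT;
            uint32_t offset = (uint32_t)(vaddr & PAGE_MASK);

            for (int i = 0; i < entries; i++) {
                if (!tlb[i].valid || tlb[i].vpn != vpn)
                    continue;
                if (tlb[i].has_exclusion &&
                    offset >= tlb[i].excl_start && offset < tlb[i].excl_end)
                    return TLB_EXCLUDED;        /* range carved out of this page */
                *paddr = (tlb[i].ppn << PAGE_SHIFT) | offset;
                return TLB_HIT;
            }
            return TLB_MISS;
        }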

    Managing coherence via put/get windows
    68.
    Granted patent (Expired)

    Publication No.: US07870343B2

    Publication Date: 2011-01-11

    Application No.: US10468995

    Filing Date: 2002-02-25

    Abstract: A method and apparatus for managing coherence between two processors of a two processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode, that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.

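    A simplified reading of the put/get window idea is sketched below in C: cache-maintenance work is concentrated at the moments a window is opened and closed instead of on every remote access. The window type, the function names, and the stubbed cache_flush_range / cache_invalidate_range hooks are assumptions standing in for whatever the real node hardware provides; this is not the patented algorithm itself.

        #include <stddef.h>

        /* Stand-ins for real cache-maintenance primitives (e.g. flush/invalidate
         * instructions issued over the buffer's address range). */
        static void cache_flush_range(void *buf, size_t len)      { (void)buf; (void)len; }
        static void cache_invalidate_range(void *buf, size_t len) { (void)buf; (void)len; }

        /* A put/get window over a communication buffer: coherence actions happen only
         * at the open/close boundaries, not on every remote put or get. */
        typedef struct {
            void  *buf;
            size_t len;
        } putget_window_t;

        /* Opening the window: write back any locally dirty data and drop cached
         * copies so that remote puts delivered by the network are not shadowed. */
        void window_open(putget_window_t *w, void *buf, size_t len)
        {
            w->buf = buf;
            w->len = len;
            cache_flush_range(buf, len);
            cache_invalidate_range(buf, len);
        }

        /* Closing the window: invalidate again so the local processor re-reads the
         * data that remote puts deposited while the window was open. */
        void window_close(putget_window_t *w)
        {
            cache_invalidate_range(w->buf, w->len);
        }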

    NOVEL MASSIVELY PARALLEL SUPERCOMPUTER
    69.
    Patent application (In force)

    Publication No.: US20090259713A1

    Publication Date: 2009-10-15

    Application No.: US12492799

    Filing Date: 2009-06-26

    Abstract: A novel massively parallel supercomputer of hundreds of teraOPS-scale includes node architectures based upon System-On-a-Chip technology, i.e., each processing node comprises a single Application Specific Integrated Circuit (ASIC). Within each ASIC node is a plurality of processing elements each of which consists of a central processing unit (CPU) and plurality of floating point processors to enable optimal balance of computational performance, packaging density, low cost, and power and cooling requirements. The plurality of processors within a single node may be used individually or simultaneously to work on any combination of computation or communication as required by the particular algorithm being solved or executed at any point in time. The system-on-a-chip ASIC nodes are interconnected by multiple independent networks that optimally maximizes packet communications throughput and minimizes latency. In the preferred embodiment, the multiple networks include three high-speed networks for parallel algorithm message passing including a Torus, Global Tree, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. For particular classes of parallel algorithms, or parts of parallel calculations, this architecture exhibits exceptional computational performance, and may be enabled to perform calculations for new classes of parallel algorithms. Additional networks are provided for external connectivity and used for Input/Output, System Management and Configuration, and Debug and Monitoring functions. Special node packaging techniques implementing midplane and other hardware devices facilitates partitioning of the supercomputer in multiple networks for optimizing supercomputing resources.

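    To give a feel for the node and network organization the abstract outlines, here is a small C sketch of a per-node descriptor and a trivial network-selection rule (collectives over the tree, barriers over the asynchronous network, point-to-point traffic over the torus); the enum, struct fields, and selection logic are illustrative assumptions rather than the machine's actual interfaces.

        #include <stdint.h>

        /* The independent networks named in the abstract, plus the external
         * service networks it mentions for I/O, control, and debug. */
        typedef enum {
            NET_TORUS,           /* 3-D torus for point-to-point message passing */
            NET_GLOBAL_TREE,     /* collective / combining tree                  */
            NET_GLOBAL_ASYNC,    /* global barrier and notification signals      */
            NET_IO,              /* external input/output                        */
            NET_CONTROL,         /* system management, configuration, debug      */
            NET_COUNT
        } network_id_t;

        /* A sketch of one System-On-a-Chip compute node: a small number of CPU
         * cores, attached floating-point units, and one link per network. */
        typedef struct {
            int     cpu_cores;             /* processing elements on the ASIC  */
            int     fpus_per_core;         /* floating-point units per element */
            int     torus_coord[3];        /* position in the 3-D torus        */
            uint8_t link_up[NET_COUNT];    /* per-network link status          */
        } compute_node_t;

        /* Routing a message: collective operations use the tree, barriers use the
         * asynchronous network, everything else goes over the torus. */
        network_id_t choose_network(int is_collective, int is_barrier)
        {
            if (is_barrier)    return NET_GLOBAL_ASYNC;
            if (is_collective) return NET_GLOBAL_TREE;
            return NET_TORUS;
        }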

    NOVEL SNOOP FILTER FOR FILTERING SNOOP REQUESTS
    70.
    Patent application (Expired)

    Publication No.: US20090006770A1

    Publication Date: 2009-01-01

    Application No.: US12113262

    Filing Date: 2008-05-01

    IPC Classification: G06F12/08

    Abstract: A method and apparatus for supporting cache coherency in a multiprocessor computing environment having multiple processing units, each processing unit having one or more local cache memories associated and operatively connected therewith. The method comprises providing a snoop filter device associated with each processing unit, each snoop filter device having a plurality of dedicated input ports for receiving snoop requests from dedicated memory writing sources in the multiprocessor computing environment. Each snoop filter device includes a plurality of parallel operating port snoop filters in correspondence with the plurality of dedicated input ports, each port snoop filter implementing one or more parallel operating sub-filter elements that are adapted to concurrently filter snoop requests received from respective dedicated memory writing sources and forward a subset of those requests to its associated processing unit.
