Centralized adaptive network memory engine
    1.
    Granted patent
    Centralized adaptive network memory engine (in force)

    Publication number: US07925711B1

    Publication date: 2011-04-12

    Application number: US11957411

    Filing date: 2007-12-15

    IPC class: G06F15/16

    Abstract: There is a constant battle to break even between continuing improvements in DRAM capacities and the growing memory demands of large-memory high-performance applications. Performance of such applications degrades quickly once the system hits the physical memory limit and starts swapping to the local disk. We present the design, implementation and evaluation of Anemone—an Adaptive Network Memory Engine—that virtualizes the collective unused memory of multiple machines across a gigabit Ethernet LAN, without requiring any modifications to either the large-memory applications or the Linux kernel. We have implemented a working prototype of Anemone and evaluated it using real-world unmodified applications such as ray-tracing and large in-memory sorting. Our results with the Anemone prototype show that unmodified single-process applications execute 2 to 3 times faster and multiple concurrent processes execute 6 to 7.7 times faster, when compared to disk-based paging. The Anemone prototype reduces page-fault latencies by a factor of 19.6—from an average of 9.8 ms with disk-based paging to 500 μs with Anemone. Most importantly, Anemone provides virtualized, low-latency access to potentially “unlimited” network memory resources.
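    The core idea in the abstract, evicting pages to the unused memory of remote machines rather than to the local disk, can be sketched with a toy page store (hypothetical names and API; the actual engine operates transparently at the kernel level over a gigabit Ethernet LAN, not in Python):

    ```python
    # Toy sketch of remote-memory paging (hypothetical API; the patented
    # engine is a kernel-level system, transparent to applications).

    PAGE_SIZE = 4096  # bytes per page, as on x86 Linux

    class RemotePageStore:
        """Stands in for the unused DRAM of a remote machine on the LAN."""
        def __init__(self, capacity_pages):
            self.capacity_pages = capacity_pages
            self.pages = {}

        def put(self, page_no, data):
            """Evict a page to remote memory instead of swapping to disk."""
            if len(data) != PAGE_SIZE:
                raise ValueError("must write whole pages")
            if page_no not in self.pages and len(self.pages) >= self.capacity_pages:
                raise MemoryError("remote store full")
            self.pages[page_no] = data

        def get(self, page_no):
            """Serve a page fault from remote memory."""
            return self.pages[page_no]

    # A page fault that would have gone to disk is served from remote memory:
    store = RemotePageStore(capacity_pages=1024)
    store.put(7, b"\x00" * PAGE_SIZE)
    assert store.get(7) == b"\x00" * PAGE_SIZE
    ```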

    Distributed adaptive network memory engine
    2.
    Granted patent
    Distributed adaptive network memory engine (in force)

    Publication number: US08280976B1

    Publication date: 2012-10-02

    Application number: US13276380

    Filing date: 2011-10-19

    IPC class: G06F15/16

    CPC class: H04L67/2842

    Abstract: Memory demands of large-memory applications continue to remain one step ahead of the improvements in DRAM capacities of commodity systems. Performance of such applications degrades rapidly once the system hits the physical memory limit and starts paging to the local disk. A distributed network-based virtual memory scheme is provided which treats remote memory as another level in the memory hierarchy, between very fast local memory and very slow local disks. Paging over gigabit Ethernet shows significant performance gains over the local disk. Large-memory applications may access potentially unlimited network memory resources without requiring any application or operating system code modifications, relinking or recompilation. A preferred embodiment employs kernel-level driver software.
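    The hierarchy described above places remote memory between local DRAM and the local disk. Using the page-fault latencies reported for the companion Anemone prototype (an average of 9.8 ms for disk-based paging versus 500 μs for network memory) as illustrative figures, the gap between the tiers can be made concrete:

    ```python
    # Sketch of remote memory as an extra tier in the paging hierarchy.
    # Illustrative latencies only; 9.8 ms and 500 us are the averages
    # reported for the companion Anemone prototype, and the DRAM figure
    # is a typical order-of-magnitude value.

    LATENCY_US = {
        "local_dram": 0.1,       # ~100 ns: ordinary memory access
        "remote_memory": 500.0,  # network memory over gigabit Ethernet
        "local_disk": 9800.0,    # disk-based paging
    }

    def fault_service_time(tier):
        """Time (in microseconds) to service a page fault from a tier."""
        return LATENCY_US[tier]

    speedup = fault_service_time("local_disk") / fault_service_time("remote_memory")
    print(f"remote memory serves a fault {speedup:.1f}x faster than disk")  # 19.6x
    ```

    The 19.6x ratio matches the page-fault latency reduction reported in the centralized-engine abstracts above.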

    Distributed adaptive network memory engine
    3.
    Granted patent
    Distributed adaptive network memory engine (in force)

    Publication number: US07917599B1

    Publication date: 2011-03-29

    Application number: US11957410

    Filing date: 2007-12-15

    IPC class: G06F15/16

    CPC class: H04L67/2842

    Abstract: Memory demands of large-memory applications continue to remain one step ahead of the improvements in DRAM capacities of commodity systems. Performance of such applications degrades rapidly once the system hits the physical memory limit and starts paging to the local disk. A distributed network-based virtual memory scheme is provided which treats remote memory as another level in the memory hierarchy, between very fast local memory and very slow local disks. Paging over gigabit Ethernet shows significant performance gains over the local disk. Large-memory applications may access potentially unlimited network memory resources without requiring any application or operating system code modifications, relinking or recompilation. A preferred embodiment employs kernel-level driver software.

    Distributed adaptive network memory engine
    4.
    Granted patent
    Distributed adaptive network memory engine (in force)

    Publication number: US08417789B1

    Publication date: 2013-04-09

    Application number: US13278319

    Filing date: 2011-10-21

    IPC class: G06F15/16

    CPC class: H04L67/2842

    Abstract: Memory demands of large-memory applications continue to remain one step ahead of the improvements in DRAM capacities of commodity systems. Performance of such applications degrades rapidly once the system hits the physical memory limit and starts paging to the local disk. A distributed network-based virtual memory scheme is provided which treats remote memory as another level in the memory hierarchy, between very fast local memory and very slow local disks. Paging over gigabit Ethernet shows significant performance gains over the local disk. Large-memory applications may access potentially unlimited network memory resources without requiring any application or operating system code modifications, relinking or recompilation. A preferred embodiment employs kernel-level driver software.

    Centralized adaptive network memory engine
    5.
    Granted patent
    Centralized adaptive network memory engine (in force)

    Publication number: US08291034B1

    Publication date: 2012-10-16

    Application number: US13073459

    Filing date: 2011-03-28

    IPC class: G06F15/16

    Abstract: There is a constant battle to break even between continuing improvements in DRAM capacities and the growing memory demands of large-memory high-performance applications. Performance of such applications degrades quickly once the system hits the physical memory limit and starts swapping to the local disk. We present the design, implementation and evaluation of Anemone—an Adaptive Network Memory Engine—that virtualizes the collective unused memory of multiple machines across a gigabit Ethernet LAN, without requiring any modifications to either the large-memory applications or the Linux kernel. We have implemented a working prototype of Anemone and evaluated it using real-world unmodified applications such as ray-tracing and large in-memory sorting. Our results with the Anemone prototype show that unmodified single-process applications execute 2 to 3 times faster and multiple concurrent processes execute 6 to 7.7 times faster, when compared to disk-based paging. The Anemone prototype reduces page-fault latencies by a factor of 19.6—from an average of 9.8 ms with disk-based paging to 500 μs with Anemone. Most importantly, Anemone provides virtualized, low-latency access to potentially “unlimited” network memory resources.

    Distributed adaptive network memory engine
    6.

    Publication number: US08046425B1

    Publication date: 2011-10-25

    Application number: US13073407

    Filing date: 2011-03-28

    IPC class: G06F15/16

    CPC class: H04L67/2842

    Abstract: Memory demands of large-memory applications continue to remain one step ahead of the improvements in DRAM capacities of commodity systems. Performance of such applications degrades rapidly once the system hits the physical memory limit and starts paging to the local disk. A distributed network-based virtual memory scheme is provided which treats remote memory as another level in the memory hierarchy, between very fast local memory and very slow local disks. Paging over gigabit Ethernet shows significant performance gains over the local disk. Large-memory applications may access potentially unlimited network memory resources without requiring any application or operating system code modifications, relinking or recompilation. A preferred embodiment employs kernel-level driver software.