AFFINITY-AWARE PARALLEL ZEROING OF MEMORY FOR INITIALIZATION OF LARGE PAGES IN NON-UNIFORM MEMORY ACCESS (NUMA) SERVERS
    Invention Application (Granted)

    Publication No.: US20160378388A1

    Publication Date: 2016-12-29

    Application No.: US14883304

    Filing Date: 2015-10-14

    IPC Classification: G06F3/06

    Abstract: Embodiments disclosed herein generally relate to techniques for zeroing memory in computing systems where access to memory is non-uniform. Embodiments include a system having a processor and a memory storing a program, and other embodiments include a computer readable medium containing a program. When executed on a processor, the program causes the processor to perform an operation that includes receiving, via a system call, a request for a pool of memory. The operation also includes determining a size of the requested pool of memory, and creating a dummy memory segment. The size of the dummy memory segment is larger than the size of the requested pool of memory. The operation further includes filling the dummy memory segment with one or more pages, based on the determined size of the requested pool of memory, and deleting the dummy memory segment.
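
    A minimal user-space sketch of this flow, assuming a POSIX-like system with mmap()/munmap(): an oversized anonymous "dummy" mapping stands in for the dummy memory segment, touching every page makes the operating system back it with zero-filled frames, and the mapping is then deleted. The 16 MB large-page constant and the prezero_pool() helper are illustrative assumptions, not the patented kernel mechanism.

        #include <stddef.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <unistd.h>

        #define LARGE_PAGE_SIZE (16UL * 1024 * 1024)   /* assumed large-page size */

        /* Pre-zero enough pages to back a requested pool of pool_size bytes. */
        static int prezero_pool(size_t pool_size)
        {
            /* The dummy segment is sized larger than the requested pool: round
             * up to the next large-page boundary and add one extra large page. */
            size_t npages = (pool_size + LARGE_PAGE_SIZE - 1) / LARGE_PAGE_SIZE + 1;
            size_t dummy_size = npages * LARGE_PAGE_SIZE;

            char *dummy = mmap(NULL, dummy_size, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (dummy == MAP_FAILED)
                return -1;

            /* "Fill" the dummy segment: writing to each page forces the OS to
             * back it with a zero-filled frame. */
            long psz = sysconf(_SC_PAGESIZE);
            for (size_t off = 0; off < dummy_size; off += (size_t)psz)
                dummy[off] = 0;

            /* Delete the dummy segment, mirroring the final step in the abstract. */
            return munmap(dummy, dummy_size);
        }

        int main(void)
        {
            if (prezero_pool(64UL * 1024 * 1024) != 0)
                perror("prezero_pool");
            return 0;
        }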

    USE OF CAPI-ATTACHED STORAGE AS EXTENDED MEMORY

    Publication No.: US20190377492A1

    Publication Date: 2019-12-12

    Application No.: US16005998

    Filing Date: 2018-06-12

    IPC Classification: G06F3/06 G06F12/02

    Abstract: Improved techniques for memory expansion are provided. A storage volume is opened on a storage device attached to a computing system, and the storage volume is configured as extended memory. The number of hardware threads available in the computing system is determined, and a number of contexts equal to that number of hardware threads is generated. Each context is assigned to one of the hardware threads. It is further determined that a first hardware thread has requested a first page that has been paged to the storage volume, where the first hardware thread is assigned a first context. A synchronous input/output (I/O) interface is accessed to request that the first page be moved to memory, based on the first context. While the first page is being moved to memory, a priority of the first hardware thread is reduced.
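
    A rough user-space sketch of the context-per-hardware-thread setup, assuming a POSIX system: the thread count comes from sysconf(), and struct io_context and sync_page_in() are hypothetical stand-ins for the synchronous I/O interface to the CAPI-attached volume. The priority reduction while the page-in is outstanding is illustrated with setpriority(); restoring the original value may require privilege on some systems.

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/resource.h>

        struct io_context { int id; };   /* hypothetical per-hardware-thread context */

        /* Hypothetical synchronous I/O call: moves one page from the CAPI-attached
         * volume into memory using the caller's context (stub for illustration). */
        static void sync_page_in(struct io_context *ctx, unsigned long page_no)
        {
            (void)ctx;
            (void)page_no;
        }

        /* Fault path: the requesting thread is deprioritised while the synchronous
         * page-in is in flight, then its original priority is restored. */
        static void fault_on_extended_memory(struct io_context *ctx, unsigned long page_no)
        {
            int old = getpriority(PRIO_PROCESS, 0);
            setpriority(PRIO_PROCESS, 0, old + 5);   /* reduce priority while waiting */
            sync_page_in(ctx, page_no);
            setpriority(PRIO_PROCESS, 0, old);       /* restore (may need privilege) */
        }

        int main(void)
        {
            /* One context per available hardware thread. */
            long nthreads = sysconf(_SC_NPROCESSORS_ONLN);
            if (nthreads < 1)
                nthreads = 1;
            struct io_context *ctxs = calloc((size_t)nthreads, sizeof(*ctxs));
            for (long i = 0; i < nthreads; i++)
                ctxs[i].id = (int)i;

            fault_on_extended_memory(&ctxs[0], 42UL);
            printf("created %ld contexts, one per hardware thread\n", nthreads);
            free(ctxs);
            return 0;
        }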

    EMULATING MEMORY MAPPED I/O FOR COHERENT ACCELERATORS IN ERROR STATE

    Publication No.: US20170123684A1

    Publication Date: 2017-05-04

    Application No.: US14931612

    Filing Date: 2015-11-03

    IPC Classification: G06F3/06

    Abstract: Embodiments disclose techniques for emulating memory mapped I/O (MMIO) for coherent accelerators in an error state. In one embodiment, once an operating system determines that a processor is unable to access a coherent accelerator via an MMIO operation, the operating system deletes one or more page table entries associated with MMIO of one or more hardware contexts of the coherent accelerator. After deleting the page table entries, the operating system can detect a page fault associated with execution of a process by the processor. Upon determining that the page fault was caused by the process attempting to access one of the deleted page table entries while executing an MMIO operation, the operating system emulates the execution of the MMIO operation for the faulting process, giving the process the illusion that its requested MMIO operation was successful.
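
    A user-space analogy of the mechanism, assuming POSIX signals and mprotect(): revoking access to an "MMIO window" stands in for deleting the page table entries, and a SIGSEGV handler plays the role of the operating system's fault handler that emulates the access. The all-ones fill value and every name below are illustrative, not the actual operating-system or CAPI implementation.

        #define _GNU_SOURCE
        #include <signal.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        static void *mmio_window;
        static size_t mmio_len;
        static long page_size;

        static void fault_handler(int sig, siginfo_t *si, void *uctx)
        {
            (void)sig; (void)uctx;
            char *addr = si->si_addr;
            if (addr < (char *)mmio_window || addr >= (char *)mmio_window + mmio_len)
                _exit(1);                          /* fault outside the emulated window */

            /* Emulate the MMIO: make the page accessible again and fill it with
             * 0xFF, a common "all ones" pattern for reads from a dead device. */
            char *page = (char *)((uintptr_t)addr & ~((uintptr_t)page_size - 1));
            mprotect(page, (size_t)page_size, PROT_READ | PROT_WRITE);
            memset(page, 0xFF, (size_t)page_size);
            /* Returning restarts the faulting instruction, which now succeeds,
             * giving the process the illusion that the MMIO worked. */
        }

        int main(void)
        {
            page_size = sysconf(_SC_PAGESIZE);
            mmio_len = (size_t)page_size;
            mmio_window = mmap(NULL, mmio_len, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            /* Accelerator enters an error state: revoke access to its MMIO space,
             * analogous to deleting the page table entries for the context. */
            mprotect(mmio_window, mmio_len, PROT_NONE);

            struct sigaction sa;
            memset(&sa, 0, sizeof(sa));
            sa.sa_sigaction = fault_handler;
            sa.sa_flags = SA_SIGINFO;
            sigaction(SIGSEGV, &sa, NULL);

            /* The process still issues an "MMIO" read; the handler emulates it. */
            unsigned char v = *(volatile unsigned char *)mmio_window;
            printf("emulated MMIO read returned 0x%02X\n", v);
            return 0;
        }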

    SHARING AN ACCELERATOR CONTEXT ACROSS MULTIPLE PROCESSES

    Publication No.: US20170115921A1

    Publication Date: 2017-04-27

    Application No.: US14923885

    Filing Date: 2015-10-27

    IPC Classification: G06F3/06

    Abstract: The present disclosure relates to sharing a context on a coherent hardware accelerator among multiple processes. According to one embodiment, in response to a first process requesting to create a shared memory space, a system creates a shared hardware context on the coherent hardware accelerator and binds the first process and the shared memory space to the hardware context. In response to the first process spawning one or more second processes, the system binds the one or more second processes to the shared memory space and the hardware context. Subsequently, the system performs one or more operations initiated by the first process or one of the one or more second processes on the coherent hardware accelerator according to the bound hardware context.
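
    The sharing pattern can be illustrated with fork() and a MAP_SHARED mapping, which spawned processes inherit; struct accel_context, accel_create_shared_context() and accel_submit() are hypothetical placeholders for the coherent-accelerator interface, not a real library API.

        #include <stdio.h>
        #include <sys/mman.h>
        #include <sys/wait.h>
        #include <unistd.h>

        struct accel_context { int id; };   /* hypothetical shared hardware context */

        /* Hypothetical call: create a shared hardware context and bind it to the
         * shared memory space (here it simply records an id in the region). */
        static struct accel_context *accel_create_shared_context(void *shm, size_t len)
        {
            (void)len;
            struct accel_context *ctx = shm;
            ctx->id = 1;
            return ctx;
        }

        /* Hypothetical call: run one operation on the accelerator via a context. */
        static void accel_submit(struct accel_context *ctx, int work_item)
        {
            printf("pid %d: submitted item %d on shared context %d\n",
                   (int)getpid(), work_item, ctx->id);
        }

        int main(void)
        {
            size_t len = 4096;

            /* The first process creates the shared memory space ... */
            void *shm = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_SHARED | MAP_ANONYMOUS, -1, 0);

            /* ... and a shared hardware context bound to it. */
            struct accel_context *ctx = accel_create_shared_context(shm, len);

            /* Spawned processes inherit the mapping and the bound context. */
            for (int i = 0; i < 2; i++) {
                if (fork() == 0) {
                    accel_submit(ctx, i);    /* child operates on the same context */
                    _exit(0);
                }
            }

            accel_submit(ctx, 99);           /* the first process uses it as well */
            while (wait(NULL) > 0)
                ;
            return 0;
        }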

    AFFINITY-AWARE PARALLEL ZEROING OF MEMORY IN NON-UNIFORM MEMORY ACCESS (NUMA) SERVERS
    Invention Application (Granted)

    Publication No.: US20160378399A1

    Publication Date: 2016-12-29

    Application No.: US14987151

    Filing Date: 2016-01-04

    IPC Classification: G06F3/06

    Abstract: Embodiments disclosed herein generally relate to techniques for zeroing memory in computing systems where access to memory is non-uniform. One embodiment provides a method which includes receiving, via a system call, a request to delete a memory region. The method also includes sorting, after receiving the request, the one or more pages of the memory region according to each page's associated affinity domain. The method further includes sending requests to zero the sorted one or more pages to one or more zeroing threads attached to the respective affinity domains. The method further includes waiting, after sending the requests, before returning to the system caller until a message is received from the worker threads in each affinity domain indicating that all of the page zeroing requests have been processed.
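
    A minimal pthread sketch of the per-domain flow, assuming two affinity domains and a handful of illustrative pages: the region's pages are bucketed by domain, each bucket is handed to that domain's zeroing thread, and the call does not return until every thread has finished. Pinning each zeroing thread to the CPUs of its affinity domain is omitted.

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define NDOMAINS 2
        #define PAGE_SZ  4096

        struct page { int domain; char *data; };

        struct domain_work {                 /* zeroing requests for one affinity domain */
            struct page *pages[8];
            int          npages;
        };

        static void *zeroing_thread(void *arg)
        {
            struct domain_work *w = arg;
            for (int i = 0; i < w->npages; i++)
                memset(w->pages[i]->data, 0, PAGE_SZ);   /* zero this domain's pages */
            return NULL;
        }

        /* Handle "delete memory region": sort pages by affinity domain, hand each
         * bucket to that domain's zeroing thread, and wait for all of them. */
        static void delete_region(struct page *pages, int npages)
        {
            struct domain_work work[NDOMAINS];
            pthread_t tid[NDOMAINS];
            memset(work, 0, sizeof(work));

            /* Sort (bucket) the region's pages by their associated affinity domain. */
            for (int i = 0; i < npages; i++) {
                struct domain_work *w = &work[pages[i].domain];
                w->pages[w->npages++] = &pages[i];
            }

            /* Send the zeroing requests to one thread per domain ... */
            for (int d = 0; d < NDOMAINS; d++)
                pthread_create(&tid[d], NULL, zeroing_thread, &work[d]);

            /* ... and do not return to the caller until every domain reports done. */
            for (int d = 0; d < NDOMAINS; d++)
                pthread_join(tid[d], NULL);
        }

        int main(void)
        {
            struct page pages[4];
            for (int i = 0; i < 4; i++) {
                pages[i].domain = i % NDOMAINS;
                pages[i].data = malloc(PAGE_SZ);
            }
            delete_region(pages, 4);
            puts("all pages zeroed");
            for (int i = 0; i < 4; i++)
                free(pages[i].data);
            return 0;
        }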
