-
Publication number: US20190205155A1
Publication date: 2019-07-04
Application number: US16292502
Filing date: 2019-03-05
Applicant: VMware, Inc.
Inventor: Seongbeom Kim , Haoqiang Zheng , Rajesh Venkatasubramanian , Puneet Zaroo
Abstract: Systems and methods for selecting non-uniform memory access (NUMA) nodes to map virtual central processing unit (vCPU) operations to physical processors are provided. A CPU scheduler evaluates the latency between candidate processors and the memory associated with the vCPU, as well as the size of that memory's working set, and selects an optimal processor for executing the vCPU based on the expected memory access latency and the characteristics of the vCPU and the processors. The systems and methods further monitor system characteristics and reschedule vCPUs when other placements would provide improved performance and efficiency.
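The selection policy in this abstract can be sketched as a cost minimization over candidate CPUs. The following is a minimal Python illustration, not the patented implementation; all names (`Candidate`, `pick_cpu`, `latency_ns_by_node`) and data shapes are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A physical CPU that could run the vCPU (illustrative)."""
    cpu_id: int
    node_id: int

def expected_cost(latency_ns: float, working_set_pages: int) -> float:
    # Expected access cost: per-access latency weighted by how much of
    # the vCPU's memory lives behind that latency.
    return latency_ns * working_set_pages

def pick_cpu(candidates, latency_ns_by_node, working_set_by_node):
    """Pick the candidate CPU minimizing total expected memory access
    cost, summed over every NUMA node holding part of the vCPU's memory.

    latency_ns_by_node: {(cpu_node, mem_node): latency in ns}
    working_set_by_node: {mem_node: working-set size in pages}
    """
    def total_cost(c):
        return sum(
            expected_cost(latency_ns_by_node[(c.node_id, mem_node)], pages)
            for mem_node, pages in working_set_by_node.items()
        )
    return min(candidates, key=total_cost)
```

With the working set resident on node 0 and a typical local/remote latency split (say 80 ns vs. 140 ns), this prefers a CPU on node 0; the rescheduling the abstract mentions amounts to re-running the selection when measured latencies or working sets change.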
-
Publication number: US10255091B2
Publication date: 2019-04-09
Application number: US14492051
Filing date: 2014-09-21
Applicant: VMware, Inc.
Inventor: Seongbeom Kim , Haoqiang Zheng , Rajesh Venkatasubramanian , Puneet Zaroo
Abstract: Systems and methods for selecting non-uniform memory access (NUMA) nodes to map virtual central processing unit (vCPU) operations to physical processors are provided. A CPU scheduler evaluates the latency between candidate processors and the memory associated with the vCPU, as well as the size of that memory's working set, and selects an optimal processor for executing the vCPU based on the expected memory access latency and the characteristics of the vCPU and the processors. The systems and methods further monitor system characteristics and reschedule vCPUs when other placements would provide improved performance and efficiency.
-
Publication number: US20170364279A1
Publication date: 2017-12-21
Application number: US15183386
Filing date: 2016-06-15
Applicant: VMware, Inc.
Inventor: Amitabha Banerjee , Rishi Mehta , Xiaochuan Shen , Seongbeom Kim
CPC classification number: G06F3/0611 , G06F3/0659 , G06F3/0664 , G06F3/067 , G06F9/45558 , G06F9/4881 , G06F9/5077 , G06F2009/45579 , G06F2009/45583
Abstract: Systems and methods described herein align various types of hypervisor threads with the non-uniform memory access (NUMA) client of a virtual machine (VM) that is driving I/O transactions from an application, so that no remote memory access is required and the I/O transactions can be completed with local accesses to the CPUs, caches, and I/O devices of the same NUMA node of a hardware NUMA system. First, the hypervisor of the VM detects whether the VM runs on a single NUMA node or multiple NUMA nodes. If the VM runs on multiple NUMA nodes, the NUMA client on which the application is executing the I/O transactions is identified, and knowledge of resource sharing between the NUMA client and its related hypervisor threads is established. That knowledge is then used to schedule the NUMA client and its related hypervisor threads onto the same NUMA node of the NUMA system.
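The alignment step this abstract describes can be outlined in a few lines. A hypothetical Python sketch follows, where `numa_clients` and `set_affinity` stand in for hypervisor-internal structures the abstract does not spell out:

```python
def align_io_threads(numa_clients, set_affinity):
    """Pin the hypervisor threads related to the I/O-driving NUMA client
    onto that client's node, so I/O completes with local accesses only.

    numa_clients: list of dicts with keys "node", "driving_io",
                  "hypervisor_threads" (illustrative shapes).
    set_affinity(thread, node): pins one thread to one NUMA node.
    """
    if len(numa_clients) == 1:
        # Single-node VM: vCPUs, memory, and helper threads are
        # already co-located; nothing to do.
        return
    for client in numa_clients:
        if client["driving_io"]:
            for thread in client["hypervisor_threads"]:
                set_affinity(thread, client["node"])
```

The two branches mirror the abstract's first step: detection of a single-node VM short-circuits, while a multi-node VM has its I/O-driving client's related threads pinned to that client's node.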
-
Publication number: US20150012722A1
Publication date: 2015-01-08
Application number: US13935382
Filing date: 2013-07-03
Applicant: VMware, Inc.
Inventor: Yury Baskakov , Alexander Thomas Garthwaite , Rajesh Venkatasubramanian , Irene Zhang , Seongbeom Kim , Nikhil Bhatia , Kiran Tati
IPC: G06F12/10
CPC classification number: G06F12/121 , G06F9/45558 , G06F12/023 , G06F12/1009 , G06F12/1027 , G06F2009/45583 , G06F2212/1016 , G06F2212/1044
Abstract: Memory performance in a computer system that implements large page mapping is improved even when memory is scarce by identifying page sharing opportunities within the large pages at the granularity of small pages and breaking up the large pages so that small pages within the large page can be freed up through page sharing. In addition, the number of small page sharing opportunities within the large pages can be used to estimate the total amount of memory that could be reclaimed through page sharing.
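Both claims in this abstract, breaking up a large page when enough of its small pages are sharable and estimating total reclaimable memory, can be illustrated with content hashes. This is a minimal sketch assuming 2 MiB large pages of 4 KiB small pages (the x86-64 split) and a system-wide table of already-shared page hashes; the threshold and all names are invented for illustration:

```python
SMALL_PAGE_BYTES = 4096      # 4 KiB small page (x86-64)
SMALL_PAGES_PER_LARGE = 512  # 2 MiB large page / 4 KiB small pages

def sharable_small_pages(large_page_hashes, shared_hashes):
    """Count small pages inside one large page whose content hash
    already appears in the system-wide shared-page table."""
    return sum(1 for h in large_page_hashes if h in shared_hashes)

def should_break(large_page_hashes, shared_hashes, threshold):
    # Break a large page into small-page mappings only when enough of
    # its small pages could actually be freed through sharing.
    return sharable_small_pages(large_page_hashes, shared_hashes) >= threshold

def estimate_reclaimable_bytes(all_large_pages, shared_hashes):
    """Per the abstract's second claim: sum small-page sharing
    opportunities across all large pages to estimate reclaimable memory."""
    return SMALL_PAGE_BYTES * sum(
        sharable_small_pages(lp, shared_hashes) for lp in all_large_pages
    )
```

The estimate is useful precisely because it can be computed without breaking any pages: the hypervisor learns how much memory page sharing *would* recover before paying the cost of losing large mappings.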
-
Publication number: US11340945B2
Publication date: 2022-05-24
Application number: US15191415
Filing date: 2016-06-23
Applicant: VMware, Inc.
Inventor: Seongbeom Kim , Jagadish Kotra , Fei Guo
IPC: G06F9/50
Abstract: In a computer system having multiple memory proximity domains including a first memory proximity domain with a first processor and a first memory and a second memory proximity domain with a second processor and a second memory, latencies of memory access from each memory proximity domain to its local memory as well as to memory at other memory proximity domains are probed. When there is no contention, the local latency will be lower than remote latency. If the contention at the local memory proximity domain increases and the local latency becomes large enough, memory pages associated with a process running on the first processor are placed in the second memory proximity domain, so that after the placement, the process is accessing the memory pages from the memory of the second memory proximity domain during execution.
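The probe-and-migrate policy in this abstract reduces to a comparison of measured latencies. A minimal sketch follows, with a hysteresis margin added to avoid flapping; the margin, the function names, and the `probe` callback are assumptions, not details from the patent:

```python
def choose_placement(local_latency_ns, remote_latency_ns, hysteresis=1.1):
    """Decide where a process's pages should live. Without contention,
    local latency is lower than remote; pages move to the remote domain
    only when contention inflates local latency past remote by a margin."""
    if local_latency_ns > remote_latency_ns * hysteresis:
        return "remote"
    return "local"

def rebalance(page_groups, probe):
    """Re-place each group of pages using freshly probed latencies.
    probe(domain) returns a measured access latency in ns (illustrative)."""
    local, remote = probe("local"), probe("remote")
    return {group: choose_placement(local, remote) for group in page_groups}
```

Periodically re-probing and calling `rebalance` mirrors the abstract's behavior: once contention subsides and local latency drops back below remote, pages are placed locally again.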
-
Publication number: US10338822B2
Publication date: 2019-07-02
Application number: US15183386
Filing date: 2016-06-15
Applicant: VMware, Inc.
Inventor: Amitabha Banerjee , Rishi Mehta , Xiaochuan Shen , Seongbeom Kim
Abstract: Systems and methods described herein align various types of hypervisor threads with the non-uniform memory access (NUMA) client of a virtual machine (VM) that is driving I/O transactions from an application, so that no remote memory access is required and the I/O transactions can be completed with local accesses to the CPUs, caches, and I/O devices of the same NUMA node of a hardware NUMA system. First, the hypervisor of the VM detects whether the VM runs on a single NUMA node or multiple NUMA nodes. If the VM runs on multiple NUMA nodes, the NUMA client on which the application is executing the I/O transactions is identified, and knowledge of resource sharing between the NUMA client and its related hypervisor threads is established. That knowledge is then used to schedule the NUMA client and its related hypervisor threads onto the same NUMA node of the NUMA system.
-
Publication number: US09292452B2
Publication date: 2016-03-22
Application number: US13935382
Filing date: 2013-07-03
Applicant: VMware, Inc.
Inventor: Yury Baskakov , Alexander Thomas Garthwaite , Rajesh Venkatasubramanian , Irene Zhang , Seongbeom Kim , Nikhil Bhatia , Kiran Tati
CPC classification number: G06F12/121 , G06F9/45558 , G06F12/023 , G06F12/1009 , G06F12/1027 , G06F2009/45583 , G06F2212/1016 , G06F2212/1044
Abstract: Memory performance in a computer system that implements large page mapping is improved even when memory is scarce by identifying page sharing opportunities within the large pages at the granularity of small pages and breaking up the large pages so that small pages within the large page can be freed up through page sharing. In addition, the number of small page sharing opportunities within the large pages can be used to estimate the total amount of memory that could be reclaimed through page sharing.
-
Publication number: US10776151B2
Publication date: 2020-09-15
Application number: US16292502
Filing date: 2019-03-05
Applicant: VMware, Inc.
Inventor: Seongbeom Kim , Haoqiang Zheng , Rajesh Venkatasubramanian , Puneet Zaroo
Abstract: Systems and methods for selecting non-uniform memory access (NUMA) nodes to map virtual central processing unit (vCPU) operations to physical processors are provided. A CPU scheduler evaluates the latency between candidate processors and the memory associated with the vCPU, as well as the size of that memory's working set, and selects an optimal processor for executing the vCPU based on the expected memory access latency and the characteristics of the vCPU and the processors. The systems and methods further monitor system characteristics and reschedule vCPUs when other placements would provide improved performance and efficiency.
-
Publication number: US09977747B2
Publication date: 2018-05-22
Application number: US15051940
Filing date: 2016-02-24
Applicant: VMware, Inc.
Inventor: Yury Baskakov , Alexander Thomas Garthwaite , Rajesh Venkatasubramanian , Irene Zhang , Seongbeom Kim , Nikhil Bhatia , Kiran Tati
IPC: G06F12/00 , G06F12/121 , G06F12/1009 , G06F12/02 , G06F9/455 , G06F12/1027
CPC classification number: G06F12/121 , G06F9/45558 , G06F12/023 , G06F12/1009 , G06F12/1027 , G06F2009/45583 , G06F2212/1016 , G06F2212/1044
Abstract: Memory performance in a computer system that implements large page mapping is improved even when memory is scarce by identifying page sharing opportunities within the large pages at the granularity of small pages and breaking up the large pages so that small pages within the large page can be freed up through page sharing. In addition, the number of small page sharing opportunities within the large pages can be used to estimate the total amount of memory that could be reclaimed through page sharing.
-
Publication number: US20160085571A1
Publication date: 2016-03-24
Application number: US14492051
Filing date: 2014-09-21
Applicant: VMware, Inc.
Inventor: Seongbeom Kim , Haoqiang Zheng , Rajesh Venkatasubramanian , Puneet Zaroo
IPC: G06F9/455
CPC classification number: G06F9/45558 , G06F9/45554 , G06F9/48 , G06F2009/4557 , G06F2009/45583
Abstract: Examples perform selection of non-uniform memory access (NUMA) nodes to map virtual central processing unit (vCPU) operations to physical processors. A CPU scheduler evaluates the latency between candidate processors and the memory associated with the vCPU, as well as the size of that memory's working set, and selects an optimal processor for executing the vCPU based on the expected memory access latency and the characteristics of the vCPU and the processors. Some examples contemplate monitoring system characteristics and rescheduling the vCPUs when other placements may provide improved performance and/or efficiency.
-