-
Publication number: US20150268978A1
Publication date: 2015-09-24
Application number: US14222260
Filing date: 2014-03-21
Applicant: VMware, Inc.
Inventor: Lan Vu , Hari Sivaraman , Rishi Bidarkar
IPC: G06F9/455
CPC classification number: G06F9/45525 , G06F9/4484 , G06F9/50
Abstract: Systems and techniques are described for modifying an executable file of an application and executing the application using the modified executable file. A described technique includes receiving, by a virtual machine, a request to perform an initial function of an application and an executable file for the application. The virtual machine modifies the executable file by redirecting the executable file to a custom runtime library that includes a custom function configured to initialize the application and to place the application in a paused state. A call to the custom function is added to the executable file. The virtual machine initializes the application by executing the modified executable file, the executing causing the custom function to initialize the application and place the application in a paused state.
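The redirection described in the abstract can be sketched as follows. This is a loose analogy only, assuming the executable is modeled as an ordered list of callables rather than a real binary format; the names `modify_executable`, `custom_init`, and the `state` dictionary are illustrative, not from the patent.

```python
def modify_executable(executable, custom_init):
    """Return a modified copy of the executable whose entry point is
    redirected to the custom runtime function. The executable is
    modeled here as an ordered list of callables."""
    return [custom_init] + executable

state = {"initialized": False, "paused": False}

def custom_init():
    # Initialize the application, then leave it in a paused state
    # instead of running its initial function immediately.
    state["initialized"] = True
    state["paused"] = True

def initial_function():
    state["paused"] = False
    return "result"

modified = modify_executable([initial_function], custom_init)

# Executing only the redirected entry initializes and pauses the app;
# the initial function runs later, once the app is resumed.
modified[0]()
```

The point of the pause is that the expensive initialization work happens ahead of the actual request, so the initial function can start from a warm state.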
-
Publication number: US11886898B2
Publication date: 2024-01-30
Application number: US16833833
Filing date: 2020-03-30
Applicant: VMware, Inc.
Inventor: Lan Vu , Uday Pundalik Kurkure , Hari Sivaraman
CPC classification number: G06F9/45558 , G06T1/20 , G06F2009/4557 , G06F2009/45595
Abstract: Various aspects are disclosed for graphics processing unit (GPU)-remoting latency-aware migration. In some aspects, a host executes a GPU-remoting client that includes a GPU workload. GPU-remoting latencies are identified for hosts of a cluster. A destination host is identified based on having a lower GPU-remoting latency than the host currently executing the GPU-remoting client. The GPU-remoting client is migrated from its current host to the destination host.
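The destination-selection step can be sketched as a simple comparison over measured latencies. This is a minimal illustration, assuming per-host latency measurements are already available as a mapping; the host names, the millisecond unit, and the choice of the minimum-latency candidate are assumptions, not details from the patent.

```python
def pick_destination(latencies, current_host):
    """Pick a migration destination for a GPU-remoting client: a host
    whose measured GPU-remoting latency is lower than that of the
    client's current host. `latencies` maps host name -> latency (ms)."""
    current = latencies[current_host]
    candidates = {h: lat for h, lat in latencies.items()
                  if h != current_host and lat < current}
    if not candidates:
        return None  # no better host; the client stays put
    return min(candidates, key=candidates.get)

latencies = {"host-a": 4.1, "host-b": 1.7, "host-c": 2.9}
dest = pick_destination(latencies, "host-a")  # host-b: lowest latency
```

Picking the lowest-latency candidate (rather than any lower-latency host) is one reasonable policy; the abstract only requires the destination's latency to be lower than the current host's.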
-
Publication number: US20210373924A1
Publication date: 2021-12-02
Application number: US16882942
Filing date: 2020-05-26
Applicant: VMware, Inc.
Inventor: Hari Sivaraman , Uday Pundalik Kurkure , Lan Vu
Abstract: Various examples are disclosed for generating heatmaps and plotting utilization of hosts in a datacenter environment. A collector virtual machine can rove the datacenter and collect utilization data. The utilization data can be plotted on a heatmap to illustrate utilization hotspots in the datacenter environment.
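The aggregation behind such a heatmap can be sketched as binning utilization samples into a grid. This is an illustrative sketch only: the grid layout, the `(row, col, utilization)` sample shape, and averaging per cell are assumptions, not details from the patent.

```python
def build_heatmap(samples, rows, cols):
    """Aggregate utilization samples collected by a roving collector VM
    into a rows x cols grid for heatmap plotting. Each sample is a
    (row, col, utilization) tuple; each cell averages its samples, so
    high-valued cells show up as hotspots when plotted."""
    grid = [[[] for _ in range(cols)] for _ in range(rows)]
    for r, c, util in samples:
        grid[r][c].append(util)
    return [[sum(cell) / len(cell) if cell else 0.0 for cell in row]
            for row in grid]

samples = [(0, 0, 90.0), (0, 0, 70.0), (1, 1, 30.0)]
heatmap = build_heatmap(samples, 2, 2)  # cell (0, 0) is the hotspot
```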
-
Publication number: US11055568B2
Publication date: 2021-07-06
Application number: US16522710
Filing date: 2019-07-26
Applicant: VMware, Inc.
Inventor: Lan Vu , Uday Kurkure , Hari Sivaraman , Aravind Kumar Rao Bappanadu , Mohit Mangal
Abstract: The current document is directed to methods and systems that employ image-recognition and machine learning to directly measure application-program response time from changes in a user interface displayed by the application program in much the same way that application-program users perceive response times when manually issuing commands through the user interface. The currently disclosed methods and systems involve building recognition models, training the recognition models to recognize application-program states from changes in the user interface displayed by the application program, and using the recognition models to monitor the user interface displayed by an application program to detect and assign timestamps to application-program state changes, from which the elapsed time for various different operations is computed. This approach mirrors the methods by which users perceive application-program response time when users initiate operations through the application-program-provided user interface and visually monitor progress of the operations as reflected in changes to the displayed application-program user interface.
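Once a recognition model has labeled each UI frame with an application-program state, the elapsed-time computation reduces to subtracting timestamps between state changes. The sketch below assumes the recognizer's output is a sequence of (timestamp, state) pairs; the state names and timestamps are illustrative, and the recognition models themselves are out of scope here.

```python
def response_time(state_timeline, start_state, end_state):
    """Compute an application-program response time from a sequence of
    (timestamp, recognized_state) pairs, as a trained recognition
    model might emit while monitoring the displayed user interface."""
    start = next(t for t, s in state_timeline if s == start_state)
    end = next(t for t, s in state_timeline
               if s == end_state and t >= start)
    return end - start

timeline = [
    (0.00, "idle"),
    (0.12, "command-issued"),   # recognizer sees the command being issued
    (0.80, "busy-indicator"),
    (1.57, "result-rendered"),  # recognizer detects the finished UI
]
elapsed = response_time(timeline, "command-issued", "result-rendered")
```

This measures the same interval a user perceives: from the visible moment the command is issued to the visible moment the result appears.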
-
Publication number: US11048561B2
Publication date: 2021-06-29
Application number: US16520080
Filing date: 2019-07-23
Applicant: VMware, Inc.
Inventor: Uday Pundalik Kurkure , Hari Sivaraman , Lan Vu
IPC: G06F9/50 , G06F9/455 , G06F9/4401 , G06F9/38
Abstract: Various examples are disclosed for avoiding power-on failures during virtualization of graphics processing units. A computing environment can be directed to, in response to a virtual machine being powered on, identify a profile for a virtual graphics processing unit (vGPU) designated for the virtual machine, the profile specifying an amount of memory required by the vGPU, identify that the virtual machine is unable to be assigned to any of a plurality of physical graphics processing units (GPUs) based on the amount of memory required by the vGPU, free at least the amount of memory required by the vGPU by performing a migration of at least one existing virtual machine from a first one of the physical GPUs to a second one of the physical GPUs, and assign the virtual machine to an available one of the physical GPUs and a corresponding host.
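The try-direct-placement-then-migrate logic can be sketched as below. This is a simplified, single-migration sketch: the GPU ids, the MB figures, and the choice of which VM to migrate are all illustrative assumptions, not the patent's placement policy.

```python
def place_with_migration(gpus, need):
    """Try to place a vGPU needing `need` MB of GPU memory on one of
    several physical GPUs; if none has room, migrate one existing VM
    between GPUs to free space, then place. `gpus` maps GPU id ->
    (capacity_mb, [vm_memory_sizes]). Returns (placement_gpu,
    migration) where migration is (vm_size, src, dst) or None."""
    def free(g):
        cap, vms = gpus[g]
        return cap - sum(vms)

    for g in gpus:                       # direct placement first
        if free(g) >= need:
            return g, None
    for src in gpus:                     # otherwise try one migration
        for vm in gpus[src][1]:
            for dst in gpus:
                if (dst != src and free(dst) >= vm
                        and free(src) + vm >= need):
                    gpus[src][1].remove(vm)
                    gpus[dst][1].append(vm)
                    return src, (vm, src, dst)
    return None, None                    # power-on would fail

# Both 16 GB GPUs have only 4096 MB free; migrating the 4096 MB VM
# off gpu0 frees enough memory for an 8192 MB vGPU profile.
gpus = {"gpu0": (16384, [8192, 4096]), "gpu1": (16384, [8192, 4096])}
placed_on, migration = place_with_migration(gpus, 8192)
```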
-
Publication number: US20250039093A1
Publication date: 2025-01-30
Application number: US18380218
Filing date: 2023-10-16
Applicant: VMware, Inc.
Inventor: Avinash Kumar Chaurasia , Anshuj Garg , Uday Pundalik Kurkure , Hari Sivaraman , Lan Vu , Sairam Veeraswamy
Abstract: An example computer system includes a hardware platform including a processing unit and software executing on the hardware platform. The software includes a workload and a scheduler, the workload including a network function chain having network functions, the scheduler configured to schedule the network functions for execution on the processing unit. A downstream network function includes a congestion monitor configured to monitor a first receive queue supplying packets to the downstream network function, the congestion monitor configured to compare occupancy of the first receive queue against a queue threshold. An upstream network function includes a rate controller configured to receive a notification from the congestion monitor generated in response to the occupancy of the first receive queue exceeding the queue threshold, the rate controller configured to modify a rate of packet flow between a second receive queue and the upstream network function in response to the notification.
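The monitor/controller interaction can be sketched as a simple backpressure mechanism. The class names, queue sizes, and threshold value below are illustrative assumptions; the sketch only shows the notify-and-pause flow the abstract describes, not the scheduler or a real packet path.

```python
from collections import deque

class RateController:
    """Upstream side: on notification, stop pulling packets from its
    own receive queue so the downstream queue can drain."""
    def __init__(self):
        self.paused = False

    def notify(self):
        self.paused = True

    def pull(self, queue):
        if self.paused or not queue:
            return None
        return queue.popleft()

class CongestionMonitor:
    """Downstream side: watches the receive queue feeding the
    downstream network function and notifies the upstream rate
    controller when occupancy exceeds the queue threshold."""
    def __init__(self, queue, threshold, rate_controller):
        self.queue = queue
        self.threshold = threshold
        self.rate_controller = rate_controller

    def check(self):
        if len(self.queue) > self.threshold:
            self.rate_controller.notify()

downstream_q = deque(range(10))   # 10 packets queued downstream
upstream_q = deque(range(5))
rc = RateController()
monitor = CongestionMonitor(downstream_q, threshold=8, rate_controller=rc)

monitor.check()                   # occupancy 10 > 8: upstream pauses
pkt = rc.pull(upstream_q)         # None while paused; downstream drains
```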
-
Publication number: US20220253341A1
Publication date: 2022-08-11
Application number: US17733284
Filing date: 2022-04-29
Applicant: VMware, Inc.
Inventor: Anshuj Garg , Uday Pundalik Kurkure , Hari Sivaraman , Lan Vu
Abstract: Disclosed are aspects of memory-aware placement in systems that include graphics processing units (GPUs) that are virtual GPU (vGPU) enabled. In some examples, graphics processing units (GPUs) are identified in a computing environment. Graphics processing requests are received. A graphics processing request includes a GPU memory requirement. The graphics processing requests are processed using a graphics processing request placement model that minimizes the number of GPUs utilized to accommodate the requests. Virtual GPUs (vGPUs) are created to accommodate the graphics processing requests according to the graphics processing request placement model. The utilized GPUs divide their GPU memories to provide the vGPUs.
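Minimizing the number of utilized GPUs is a bin-packing problem. The sketch below uses a greedy first-fit-decreasing heuristic purely as an illustrative stand-in for the placement model named in the abstract (the related granted patent formulates it as integer linear programming); the capacity and request sizes are assumptions.

```python
def place_requests(requests_mb, gpu_capacity_mb, num_gpus):
    """Greedy first-fit-decreasing placement of vGPU memory requests
    onto physical GPUs: sort requests largest-first, then put each on
    the first GPU with enough remaining memory. This tends to pack
    requests onto few GPUs, approximating a utilization-minimizing
    placement model. Returns only the utilized GPUs."""
    gpus = [[] for _ in range(num_gpus)]
    for req in sorted(requests_mb, reverse=True):
        for gpu in gpus:
            if sum(gpu) + req <= gpu_capacity_mb:
                gpu.append(req)
                break
        else:
            raise ValueError(f"request of {req} MB cannot be placed")
    return [g for g in gpus if g]

# Four requests totaling 16 GB fit on a single 16 GB GPU, so the
# other three GPUs stay free for other workloads (or power savings).
requests = [4096, 2048, 8192, 2048]
utilized = place_requests(requests, 16384, 4)
```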
-
Publication number: US11263054B2
Publication date: 2022-03-01
Application number: US16550327
Filing date: 2019-08-26
Applicant: VMware, Inc.
Inventor: Anshuj Garg , Uday Pundalik Kurkure , Hari Sivaraman , Lan Vu
Abstract: Disclosed are aspects of memory-aware placement in systems that include graphics processing units (GPUs) that are virtual GPU (vGPU) enabled. In some embodiments, a computing environment is monitored to identify graphics processing unit (GPU) data for a plurality of virtual GPU (vGPU)-enabled GPUs of the computing environment, and a plurality of vGPU requests are received. A respective vGPU request includes a GPU memory requirement. GPU configurations are determined in order to accommodate the vGPU requests. The GPU configurations are determined based on an integer linear programming (ILP) vGPU request placement model. Configured vGPU profiles are applied for the vGPU-enabled GPUs, and vGPUs are created based on the configured vGPU profiles. The vGPU requests are assigned to the vGPUs.
-
Publication number: US09529628B2
Publication date: 2016-12-27
Application number: US14222260
Filing date: 2014-03-21
Applicant: VMware, Inc.
Inventor: Lan Vu , Hari Sivaraman , Rishi Bidarkar
CPC classification number: G06F9/45525 , G06F9/4484 , G06F9/50
Abstract: Systems and techniques are described for modifying an executable file of an application and executing the application using the modified executable file. A described technique includes receiving, by a virtual machine, a request to perform an initial function of an application and an executable file for the application. The virtual machine modifies the executable file by redirecting the executable file to a custom runtime library that includes a custom function configured to initialize the application and to place the application in a paused state. A call to the custom function is added to the executable file. The virtual machine initializes the application by executing the modified executable file, the executing causing the custom function to initialize the application and place the application in a paused state.
-
Publication number: US12250159B2
Publication date: 2025-03-11
Application number: US17974575
Filing date: 2022-10-27
Applicant: VMware, Inc.
Inventor: Avinash Kumar Chaurasia , Lan Vu , Uday Pundalik Kurkure , Hari Sivaraman , Sairam Veeraswamy
IPC: G06F15/173 , H04L47/11 , H04L47/30 , H04L47/50
Abstract: Disclosed are various embodiments for rate proportional scheduling to reduce packet loss in virtualized network function chains. A congestion monitor executed by a first virtual machine on a host computing device can detect congestion in a receive queue associated with a first virtualized network function implemented by the first virtual machine. The congestion monitor can send a pause signal to a rate controller executed by a second virtual machine on the host computing device. The rate controller can receive the pause signal. In response, the rate controller can pause the processing of packets by a second virtualized network function implemented by the second virtual machine to reduce congestion in the receive queue of the first virtualized network function.