-
Publication No.: US11729062B1
Publication Date: 2023-08-15
Application No.: US17944221
Application Date: 2022-09-14
Applicant: VMware, Inc.
Inventor: Nilanjan Daw , Sairam Veeraswamy , Raunak Ravindra Singwi , Erol Aygar
IPC: G06F15/173 , H04L41/0897 , H04L41/40
CPC classification number: H04L41/0897 , H04L41/40
Abstract: Computer-implemented methods, media, and systems for context-sensitive defragmentation and aggregation of containerized workloads running on edge devices are disclosed. One example method includes monitoring telemetry data from multiple software defined wide area network (SD-WAN) edge devices that run multiple workloads, where the telemetry data includes at least one of resource utilization at the multiple SD-WAN edge devices, inter-workload trigger dependency, or inter-workload data dependency among the multiple workloads. It is determined, based on the telemetry data, that at least two of the multiple workloads running on at least two SD-WAN edge devices have the inter-workload trigger dependency or the inter-workload data dependency. In response to the determination that the at least two of the multiple workloads have the inter-workload trigger dependency or the inter-workload data dependency, a first process of migrating the at least two of the multiple workloads to a first SD-WAN edge device of the multiple SD-WAN edge devices is initiated.
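The aggregation step described above lends itself to a short illustration. The sketch below assumes a simplified telemetry model in which each workload already reports the names of workloads it triggers or exchanges data with; the Workload and plan_aggregation names are hypothetical, and real SD-WAN telemetry collection and the migration itself are out of scope.

```python
from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    edge_device: str
    trigger_deps: set = field(default_factory=set)  # workloads this one triggers
    data_deps: set = field(default_factory=set)     # workloads this one shares data with

def plan_aggregation(workloads):
    """Find pairs of workloads that run on different SD-WAN edge devices but have a
    trigger or data dependency, and propose co-locating each pair on one device."""
    by_name = {w.name: w for w in workloads}
    plans, seen = [], set()
    for w in workloads:
        for dep_name in w.trigger_deps | w.data_deps:
            dep = by_name.get(dep_name)
            if dep is None or dep.edge_device == w.edge_device:
                continue
            pair = frozenset((w.name, dep.name))
            if pair in seen:
                continue
            seen.add(pair)
            # Aggregate the dependent pair on the first workload's edge device.
            plans.append((w.name, dep.name, w.edge_device))
    return plans

if __name__ == "__main__":
    workloads = [
        Workload("ingest", "edge-1", trigger_deps={"transform"}),
        Workload("transform", "edge-2", data_deps={"ingest"}),
        Workload("report", "edge-3"),
    ]
    for a, b, target in plan_aggregation(workloads):
        print(f"migrate {b} to run beside {a} on {target}")
```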
-
Publication No.: US11436112B1
Publication Date: 2022-09-06
Application No.: US17321673
Application Date: 2021-05-17
Applicant: VMware, Inc.
Inventor: Keerthi Kumar , Halesh Sadashiv , Sairam Veeraswamy , Rajesh Venkatasubramanian , Kiran Dikshit , Kiran Tati
IPC: G06F11/00 , G06F11/20 , G06F15/173
Abstract: Techniques for implementing RDMA-based recovery of dirty data in remote memory are provided. In one set of embodiments, upon occurrence of a failure at a first (i.e., source) host system, a second (i.e., failover) host system can allocate a new memory region corresponding to a memory region of the source host system and retrieve a baseline copy of the memory region from a storage backend shared by the source and failover host systems. The failover host system can further populate the new memory region with the baseline copy and retrieve one or more dirty page lists for the memory region from the source host system via RDMA, where the one or more dirty page lists identify memory pages in the memory region that include data updates not present in the baseline copy. For each memory page identified in the one or more dirty page lists, the failover host system can then copy the content of that memory page from the memory region of the source host system to the new memory region via RDMA.
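The recovery sequence can be sketched in a few lines. In the sketch below, ordinary dictionary lookups stand in for reads from the shared storage backend and for one-sided RDMA reads from the source host; recover_region and the page-map layout are illustrative, not the patent's actual interfaces.

```python
PAGE_SIZE = 4096

def recover_region(baseline_pages, source_pages, dirty_page_lists):
    """Rebuild a failed host's memory region on the failover host.

    baseline_pages:   page_index -> bytes, loaded from the shared storage backend
    source_pages:     page_index -> bytes, read from the source host (stands in
                      for one-sided RDMA reads in this sketch)
    dirty_page_lists: iterables of page indexes whose contents differ from the baseline
    """
    # Step 1: allocate the new region and populate it with the baseline copy.
    new_region = dict(baseline_pages)
    # Step 2: for every page named in a dirty page list, overwrite the baseline
    # content with the up-to-date page pulled from the source host.
    for dirty_list in dirty_page_lists:
        for page_index in dirty_list:
            new_region[page_index] = source_pages[page_index]
    return new_region

if __name__ == "__main__":
    baseline = {0: b"a" * PAGE_SIZE, 1: b"b" * PAGE_SIZE}
    source = {0: b"a" * PAGE_SIZE, 1: b"B" * PAGE_SIZE}  # page 1 was dirtied after the baseline
    recovered = recover_region(baseline, source, dirty_page_lists=[[1]])
    print(recovered[1][:1])  # b'B'
```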
-
Publication No.: US20220237014A1
Publication Date: 2022-07-28
Application No.: US17224293
Application Date: 2021-04-07
Applicant: VMware, Inc.
Inventor: Uday Pundalik Kurkure , Sairam Veeraswamy , Hari Sivaraman , Lan Vu , Avinash Kumar Chaurasia
Abstract: Disclosed are aspects of network function placement in virtual graphics processing unit (vGPU)-enabled environments. In one example, a network function request is associated with a network function. A scheduler selects a vGPU-enabled GPU to handle the network function request. The vGPU-enabled GPU is selected in consideration of a network function memory requirement or a network function IO requirement. The network function request is processed using an instance of the network function within a virtual machine that is executed using the selected vGPU-enabled GPU.
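The placement decision can be sketched briefly. The example below assumes the scheduler can see each vGPU-enabled GPU's free memory and free IO bandwidth, and it breaks ties by largest free memory; that tie-break, along with the VGpuEnabledGpu and select_gpu names, is an illustrative assumption rather than the patent's policy.

```python
from dataclasses import dataclass

@dataclass
class VGpuEnabledGpu:
    name: str
    free_memory_mb: int
    free_io_mbps: int

def select_gpu(gpus, required_memory_mb, required_io_mbps):
    """Pick a vGPU-enabled GPU that satisfies both the memory and the IO
    requirement of the network function, preferring the most free memory."""
    candidates = [g for g in gpus
                  if g.free_memory_mb >= required_memory_mb
                  and g.free_io_mbps >= required_io_mbps]
    if not candidates:
        return None
    return max(candidates, key=lambda g: g.free_memory_mb)

if __name__ == "__main__":
    gpus = [VGpuEnabledGpu("gpu-0", 2048, 500), VGpuEnabledGpu("gpu-1", 8192, 200)]
    chosen = select_gpu(gpus, required_memory_mb=1024, required_io_mbps=400)
    print(chosen.name if chosen else "no GPU can host this network function")
```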
-
Publication No.: US20220006748A1
Publication Date: 2022-01-06
Application No.: US17019083
Application Date: 2020-09-11
Applicant: VMware, Inc.
Inventor: Santosh Pallagatti Kotrabasappa , Sairam Veeraswamy , Abhishek Goliya , Abbas Mohamed
IPC: H04L12/851 , G06F16/245 , G06N20/00
Abstract: In some embodiments, a method receives a set of packets for a flow and determines a set of features for the flow from the set of packets. A classification of an elephant flow or a mice flow is selected based on the set of features. The classification is selected before assigning the flow to a network resource in a plurality of network resources. The method assigns the flow to a network resource in the plurality of network resources based on the classification for the flow and a set of classifications for flows currently assigned to the plurality of network resources. Then, the method sends the set of packets for the flow using the assigned network resource.
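A brief sketch of the classify-then-assign flow follows. It substitutes a simple bytes-per-second threshold for the learned classifier implied by the G06N20/00 classification and uses a least-loaded-per-class placement rule; the threshold value, feature set, and function names are illustrative assumptions.

```python
from dataclasses import dataclass

ELEPHANT_BYTES_PER_SEC = 1_000_000  # illustrative threshold, not from the patent

@dataclass
class FlowFeatures:
    avg_packet_size: int   # bytes, measured over the flow's first packets
    packet_rate: float     # packets per second

def classify(features):
    """Label a flow 'elephant' or 'mice' from its early features, before the
    flow is pinned to a network resource."""
    throughput = features.avg_packet_size * features.packet_rate
    return "elephant" if throughput >= ELEPHANT_BYTES_PER_SEC else "mice"

def assign(flow_class, resources, assignments):
    """Place the flow on the resource currently carrying the fewest flows of the
    same class, so elephant flows are spread out rather than stacked."""
    def same_class_load(resource):
        return sum(1 for c in assignments.get(resource, []) if c == flow_class)
    target = min(resources, key=same_class_load)
    assignments.setdefault(target, []).append(flow_class)
    return target

if __name__ == "__main__":
    assignments = {}
    features = FlowFeatures(avg_packet_size=1400, packet_rate=900.0)
    flow_class = classify(features)
    print(flow_class, "->", assign(flow_class, ["uplink-0", "uplink-1"], assignments))
```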
-
Publication No.: US11113782B2
Publication Date: 2021-09-07
Application No.: US16601831
Application Date: 2019-10-15
Applicant: VMware, Inc.
Inventor: Chandra Prakash , Anshuj Garg , Uday Pundalik Kurkure , Hari Sivaraman , Lan Vu , Sairam Veeraswamy
Abstract: Various examples are disclosed for dynamic kernel slicing for virtual graphics processing unit (vGPU) sharing in serverless computing systems. A computing device is configured to provide a serverless computing service, receive a request for execution of program code in the serverless computing service in which a plurality of virtual graphics processing units (vGPUs) are used in the execution of the program code, determine a slice size to partition a compute kernel of the program code into a plurality of sub-kernels for concurrent execution by the vGPUs, the slice size being determined for individual ones of the sub-kernels based on an optimization function that considers a load on a GPU, determine an execution schedule for executing the individual ones of the sub-kernels on the vGPUs in accordance with a scheduling policy, and execute the sub-kernels on the vGPUs as partitioned in accordance with the execution schedule.
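The slicing step can be illustrated with a small sketch. It assumes the optimization function can be approximated by giving each vGPU a share of the kernel's work items inversely proportional to its current load; slice_kernel and the load model are illustrative, and scheduling and execution of the resulting sub-kernels are omitted.

```python
import math

def slice_kernel(total_work_items, gpu_loads, min_slice=1024):
    """Split a compute kernel's work items into sub-kernel slices, giving less
    loaded vGPUs larger slices. gpu_loads maps a vGPU name to its current load
    in [0, 1]; the inverse-load weighting stands in for the optimization function."""
    weights = {gpu: 1.0 - load for gpu, load in gpu_loads.items()}
    total_weight = sum(weights.values()) or 1.0
    slices, remaining = {}, total_work_items
    for gpu, weight in weights.items():
        size = max(min_slice, math.floor(total_work_items * weight / total_weight))
        size = min(size, remaining)
        slices[gpu] = size
        remaining -= size
    # Hand any rounding leftovers to the least loaded vGPU.
    if remaining > 0:
        least_loaded = min(gpu_loads, key=gpu_loads.get)
        slices[least_loaded] += remaining
    return slices

if __name__ == "__main__":
    print(slice_kernel(1_000_000, {"vgpu-0": 0.2, "vgpu-1": 0.7}))
```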
-
Publication No.: US20210110506A1
Publication Date: 2021-04-15
Application No.: US16601831
Application Date: 2019-10-15
Applicant: VMware, Inc.
Inventor: Chandra Prakash , Anshuj Garg , Uday Pundalik Kurkure , Hari Sivaraman , Lan Vu , Sairam Veeraswamy
Abstract: Various examples are disclosed for dynamic kernel slicing for virtual graphics processing unit (vGPU) sharing in serverless computing systems. A computing device is configured to provide a serverless computing service, receive a request for execution of program code in the serverless computing service in which a plurality of virtual graphics processing units (vGPUs) are used in the execution of the program code, determine a slice size to partition a compute kernel of the program code into a plurality of sub-kernels for concurrent execution by the vGPUs, the slice size being determined for individual ones of the sub-kernels based on an optimization function that considers a load on a GPU, determine an execution schedule for executing the individual ones of the sub-kernels on the vGPUs in accordance with a scheduling policy, and execute the sub-kernels on the vGPUs as partitioned in accordance with the execution schedule.
-
Publication No.: US20240039806A1
Publication Date: 2024-02-01
Application No.: US17944245
Application Date: 2022-09-14
Applicant: VMware, Inc.
Inventor: Raunak Ravindra Singwi , Daniel Beveridge , Erol Aygar , Nilanjan Daw , Sairam Veeraswamy
IPC: G06F11/20 , H04L41/0654
CPC classification number: G06F11/203 , H04L41/0654 , G06F2201/85
Abstract: Computer-implemented methods, media, and systems for inter-cluster automated failover and migration of containerized workloads across edge devices are disclosed. One example method includes monitoring telemetry data received from a first software defined wide area network (SD-WAN) edge device that has a workload scheduled, where the telemetry data includes at least one of a health status of the workload or multiple runtime context elements at the first SD-WAN edge device. It is determined that a failure associated with either the first SD-WAN edge device or the workload occurs. A mode of the failure is determined. A remediation process based on the determined mode of the failure and a current state of the workload is performed.
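A brief sketch of the failure-mode classification and mode-specific remediation follows, assuming telemetry reduces to a device heartbeat flag and a workload health flag; the FailureMode values, telemetry schema, and remediation strings are illustrative assumptions.

```python
from enum import Enum, auto

class FailureMode(Enum):
    EDGE_DEVICE_DOWN = auto()    # the SD-WAN edge device itself stopped reporting
    WORKLOAD_UNHEALTHY = auto()  # the device is up but the workload's health check fails

def determine_failure_mode(telemetry):
    """Classify the failure from telemetry; telemetry is a dict with
    'device_heartbeat_ok' and 'workload_healthy' flags (illustrative schema)."""
    if not telemetry["device_heartbeat_ok"]:
        return FailureMode.EDGE_DEVICE_DOWN
    if not telemetry["workload_healthy"]:
        return FailureMode.WORKLOAD_UNHEALTHY
    return None

def remediate(mode, workload_state):
    """Pick a remediation based on the failure mode and the workload's current state."""
    if mode is FailureMode.EDGE_DEVICE_DOWN:
        return f"reschedule workload (state={workload_state}) on another cluster's edge device"
    if mode is FailureMode.WORKLOAD_UNHEALTHY:
        return f"restart workload in place, resuming from state={workload_state}"
    return "no action"

if __name__ == "__main__":
    mode = determine_failure_mode({"device_heartbeat_ok": False, "workload_healthy": True})
    print(remediate(mode, workload_state="running"))
```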
-
Publication No.: US11792086B1
Publication Date: 2023-10-17
Application No.: US17945199
Application Date: 2022-09-15
Applicant: VMware, Inc.
Inventor: Raunak Ravindra Singwi , Daniel Beveridge , Erol Aygar , Sairam Veeraswamy
IPC: G06F15/173 , H04L41/40 , H04L41/122
CPC classification number: H04L41/40 , H04L41/122
Abstract: Computer-implemented methods, media, and systems for remediation of containerized workloads based on context breach at edge devices are disclosed. One example computer-implemented method includes monitoring telemetry data from a first software defined wide area network (SD-WAN) edge device, where the telemetry data includes multiple context elements at the first SD-WAN edge device. It is determined that a context change occurs for at least one of the context elements at the first SD-WAN edge device. It is determined that, due to the context change, the first SD-WAN edge device does not satisfy one or more requirements for running one or more workloads scheduled to run on it. In response to the determination that the first SD-WAN edge device does not satisfy the one or more requirements, at least one of the one or more workloads is offloaded from the first SD-WAN edge device to a second SD-WAN edge device.
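The context-breach check and offload decision can be sketched as follows, assuming context elements and workload requirements are expressed as a flat name-to-value mapping in which numeric requirements are minimums; the schema and the check_requirements / offload_if_breached names are illustrative.

```python
def _meets_numeric(value, required):
    # Numeric requirements are treated as minimums; anything else must match exactly.
    return isinstance(value, (int, float)) and isinstance(required, (int, float)) and value >= required

def check_requirements(context, requirements):
    """Return the names of requirements the edge device no longer satisfies.
    Both arguments map a context element name (e.g. 'free_cpu_cores', 'gps_zone')
    to its current value / required value."""
    return [name for name, required in requirements.items()
            if context.get(name) != required and not _meets_numeric(context.get(name), required)]

def offload_if_breached(workload, context, requirements, fallback_edge):
    breached = check_requirements(context, requirements)
    if breached:
        return f"offload {workload} to {fallback_edge} (breached: {', '.join(breached)})"
    return f"keep {workload} in place"

if __name__ == "__main__":
    requirements = {"free_cpu_cores": 2, "gps_zone": "factory-floor"}
    context = {"free_cpu_cores": 1, "gps_zone": "factory-floor"}  # CPU context element changed
    print(offload_if_breached("vision-inference", context, requirements, "edge-2"))
```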
-
Publication No.: US20230315593A1
Publication Date: 2023-10-05
Application No.: US18331019
Application Date: 2023-06-07
Applicant: VMware, Inc.
Inventor: Keerthi Kumar , Halesh Sadashiv , Sairam Veeraswamy , Rajesh Venkatasubramanian , Kiran Dikshit , Kiran Tati
IPC: G06F11/20 , G06F15/173
CPC classification number: G06F11/2046 , G06F11/2094 , G06F15/17331 , G06F11/2023 , G06F11/2038 , G06F2201/85
Abstract: Techniques for implementing RDMA-based recovery of dirty data in remote memory are provided. In one set of embodiments, upon occurrence of a failure at a first (i.e., source) host system, a second (i.e., failover) host system can allocate a new memory region corresponding to a memory region of the source host system and retrieve a baseline copy of the memory region from a storage backend shared by the source and failover host systems. The failover host system can further populate the new memory region with the baseline copy and retrieve one or more dirty page lists for the memory region from the source host system via RDMA, where the one or more dirty page lists identify memory pages in the memory region that include data updates not present in the baseline copy. For each memory page identified in the one or more dirty page lists, the failover host system can then copy the content of that memory page from the memory region of the source host system to the new memory region via RDMA.
-
Publication No.: US11704030B2
Publication Date: 2023-07-18
Application No.: US17481352
Application Date: 2021-09-22
Applicant: VMware, Inc.
Inventor: Marcos K. Aguilera , Keerthi Kumar , Pramod Kumar , Pratap Subrahmanyam , Sairam Veeraswamy , Rajesh Venkatasubramanian
IPC: G06F3/06
CPC classification number: G06F3/0631 , G06F3/0604 , G06F3/067 , G06F3/0659
Abstract: Disclosed are various embodiments for improving resiliency and performance of clustered memory. A computing device can acquire a chunk of byte-addressable memory from a cluster memory host. The computing device can then identify an active set of allocated memory pages and an inactive set of allocated memory pages for a process executing on the computing device. Next, the computing device can store the active set of allocated memory pages for the process in the memory of the computing device. Finally, the computing device can store the inactive set of allocated memory pages for the process in the chunk of byte-addressable memory of the cluster memory host.
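The active/inactive split can be sketched briefly, assuming page activity is judged by recency of last access; the recency window is an illustrative heuristic, and the actual transfer of inactive pages into the byte-addressable chunk acquired from the cluster memory host is not shown.

```python
from dataclasses import dataclass

@dataclass
class Page:
    number: int
    last_access_tick: int

def place_pages(pages, current_tick, active_window=1000):
    """Split a process's allocated pages into an active set (kept in local memory)
    and an inactive set (destined for the byte-addressable chunk acquired from the
    cluster memory host)."""
    local_memory, cluster_chunk = [], []
    for page in pages:
        if current_tick - page.last_access_tick <= active_window:
            local_memory.append(page.number)   # recently touched page stays on this host
        else:
            cluster_chunk.append(page.number)  # cold page goes to the remote chunk
    return local_memory, cluster_chunk

if __name__ == "__main__":
    pages = [Page(0, 9950), Page(1, 100), Page(2, 9999)]
    local, remote = place_pages(pages, current_tick=10_000)
    print("local:", local, "cluster memory host:", remote)
```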
-