Context-sensitive defragmentation and aggregation of containerized workloads running on edge devices

    Publication Number: US11729062B1

    Publication Date: 2023-08-15

    Application Number: US17944221

    Filing Date: 2022-09-14

    Applicant: VMWARE, INC.

    CPC classification number: H04L41/0897 H04L41/40

    Abstract: Computer-implemented methods, media, and systems for context-sensitive defragmentation and aggregation of containerized workloads running on edge devices are disclosed. One example method includes monitoring telemetry data from multiple software defined wide area network (SD-WAN) edge devices that run multiple workloads, where the telemetry data includes at least one of resource utilization at the multiple SD-WAN edge devices, inter-workload trigger dependency, or inter-workload data dependency among the multiple workloads. It is determined, based on the telemetry data, that at least two of the multiple workloads running on at least two SD-WAN edge devices have the inter-workload trigger dependency or the inter-workload data dependency. In response to the determination that the at least two of the multiple workloads have the inter-workload trigger dependency or the inter-workload data dependency, a first process of migrating the at least two of the multiple workloads to a first SD-WAN edge device is initiated.
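The aggregation idea in this abstract can be sketched as follows. This is an illustrative toy, not VMware's implementation; the `Workload` class, the dependency fields, and the device-selection rule (migrate to the less-utilized device) are all assumptions for the sake of the example.

```python
# Hypothetical sketch: find workloads on different SD-WAN edge devices that
# share a trigger/data dependency, then pick an aggregation target device.
from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    device: str                                   # edge device currently hosting it
    depends_on: set = field(default_factory=set)  # trigger/data dependencies

def find_dependent_pairs(workloads):
    """Return pairs of dependent workloads currently on different devices."""
    by_name = {w.name: w for w in workloads}
    pairs = []
    for w in workloads:
        for dep in w.depends_on:
            other = by_name.get(dep)
            if other is not None and other.device != w.device:
                pairs.append((w.name, other.name))
    return pairs

def choose_migration_target(pair, workloads, utilization):
    """Pick the less-utilized of the two devices as the aggregation target."""
    by_name = {w.name: w for w in workloads}
    d1, d2 = by_name[pair[0]].device, by_name[pair[1]].device
    return d1 if utilization.get(d1, 0.0) <= utilization.get(d2, 0.0) else d2
```

In practice the telemetry monitor would feed real utilization and dependency data into checks like these; here they are plain dictionaries.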

    Remote direct memory access (RDMA)-based recovery of dirty data in remote memory

    Publication Number: US11436112B1

    Publication Date: 2022-09-06

    Application Number: US17321673

    Filing Date: 2021-05-17

    Applicant: VMware, Inc.

    Abstract: Techniques for implementing RDMA-based recovery of dirty data in remote memory are provided. In one set of embodiments, upon occurrence of a failure at a first (i.e., source) host system, a second (i.e., failover) host system can allocate a new memory region corresponding to a memory region of the source host system and retrieve a baseline copy of the memory region from a storage backend shared by the source and failover host systems. The failover host system can further populate the new memory region with the baseline copy and retrieve one or more dirty page lists for the memory region from the source host system via RDMA, where the one or more dirty page lists identify memory pages in the memory region that include data updates not present in the baseline copy. For each memory page identified in the one or more dirty page lists, the failover host system can then copy the content of that memory page from the memory region of the source host system to the new memory region via RDMA.
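The recovery flow described above (seed a new region with the baseline, then overwrite only the pages named in the dirty-page lists) can be sketched with the RDMA reads simulated as a callback; the 4-byte page size and function names are assumptions for illustration only.

```python
# Illustrative sketch of the dirty-page recovery loop; rdma_read stands in
# for a one-sided RDMA read of a single page from the source host's memory.
PAGE_SIZE = 4  # toy page size for the example

def recover_region(baseline: bytes, dirty_page_list, rdma_read) -> bytes:
    """Populate a new region from the shared-storage baseline, then copy the
    current content of each dirty page from the source host over it."""
    region = bytearray(baseline)                  # new region seeded with baseline
    for page_no in dirty_page_list:
        off = page_no * PAGE_SIZE
        region[off:off + PAGE_SIZE] = rdma_read(page_no)  # fetch up-to-date page
    return bytes(region)
```

Only dirty pages cross the network; clean pages come from the baseline copy, which is the point of the technique.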

    NETWORK RESOURCE SELECTION FOR FLOWS USING FLOW CLASSIFICATION

    Publication Number: US20220006748A1

    Publication Date: 2022-01-06

    Application Number: US17019083

    Filing Date: 2020-09-11

    Applicant: VMWARE, INC.

    Abstract: In some embodiments, a method receives a set of packets for a flow and determines a set of features for the flow from the set of packets. A classification of an elephant flow or a mice flow is selected based on the set of features. The classification is selected before assigning the flow to a network resource in a plurality of network resources. The method assigns the flow to a network resource in the plurality of network resources based on the classification for the flow and a set of classifications for flows currently assigned to the plurality of network resources. Then, the method sends the set of packets for the flow using the assigned network resource.
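A minimal sketch of the classify-then-assign order described above. The feature names, thresholds, and the balancing rule (place a flow on the resource with the fewest flows of the same class) are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: classify a flow as "elephant" or "mice" from
# early-packet features, BEFORE assigning it to a network resource.
def classify_flow(features: dict) -> str:
    """Classify from features derived from the first packets of the flow."""
    if features["avg_pkt_size"] > 1000 or features["pkt_rate"] > 500:
        return "elephant"
    return "mice"

def assign_flow(classification: str, assignments: dict) -> str:
    """assignments maps resource -> list of classifications of its current
    flows; pick the resource with the fewest flows of the same class, so
    elephants are spread apart rather than stacked on one link."""
    return min(assignments, key=lambda r: assignments[r].count(classification))
```

Classifying before assignment is what lets the balancer account for which resources already carry elephant flows.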

    Dynamic kernel slicing for VGPU sharing in serverless computing systems

    Publication Number: US11113782B2

    Publication Date: 2021-09-07

    Application Number: US16601831

    Filing Date: 2019-10-15

    Applicant: VMware, Inc.

    Abstract: Various examples are disclosed for dynamic kernel slicing for virtual graphics processing unit (vGPU) sharing in serverless computing systems. A computing device is configured to provide a serverless computing service, receive a request for execution of program code in the serverless computing service in which a plurality of virtual graphics processing units (vGPUs) are used in the execution of the program code, determine a slice size to partition a compute kernel of the program code into a plurality of sub-kernels for concurrent execution by the vGPUs, the slice size being determined for individual ones of the sub-kernels based on an optimization function that considers a load on a GPU, determine an execution schedule for executing the individual ones of the sub-kernels on the vGPUs in accordance with a scheduling policy, and execute the sub-kernels on the vGPUs as partitioned in accordance with the execution schedule.
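The slicing pipeline in this abstract (pick a load-aware slice size, partition the kernel into sub-kernels, schedule them across vGPUs) can be sketched as below. The inverse-load sizing formula and the round-robin policy are stand-ins for the patent's optimization function and scheduling policy, chosen only to make the example concrete.

```python
# Illustrative sketch: load-aware slice sizing, kernel partitioning, and a
# simple round-robin schedule across vGPUs. All formulas are assumptions.
def slice_size(total_items: int, gpu_load: float, max_slice: int = 256) -> int:
    """Use smaller slices when the GPU is busier, so sub-kernels from
    different functions can interleave more fairly."""
    size = max(1, int(max_slice * (1.0 - gpu_load)))
    return min(size, total_items)

def partition_kernel(total_items: int, size: int):
    """Split the work range [0, total_items) into contiguous sub-kernels."""
    return [(s, min(s + size, total_items)) for s in range(0, total_items, size)]

def round_robin_schedule(sub_kernels, vgpus):
    """Assign sub-kernels to vGPUs in round-robin order."""
    return [(sub, vgpus[i % len(vgpus)]) for i, sub in enumerate(sub_kernels)]
```

A real system would derive `gpu_load` from vGPU telemetry and launch each sub-kernel range on its assigned vGPU.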

    DYNAMIC KERNEL SLICING FOR VGPU SHARING IN SERVERLESS COMPUTING SYSTEMS

    Publication Number: US20210110506A1

    Publication Date: 2021-04-15

    Application Number: US16601831

    Filing Date: 2019-10-15

    Applicant: VMware, Inc.

    Abstract: Various examples are disclosed for dynamic kernel slicing for virtual graphics processing unit (vGPU) sharing in serverless computing systems. A computing device is configured to provide a serverless computing service, receive a request for execution of program code in the serverless computing service in which a plurality of virtual graphics processing units (vGPUs) are used in the execution of the program code, determine a slice size to partition a compute kernel of the program code into a plurality of sub-kernels for concurrent execution by the vGPUs, the slice size being determined for individual ones of the sub-kernels based on an optimization function that considers a load on a GPU, determine an execution schedule for executing the individual ones of the sub-kernels on the vGPUs in accordance with a scheduling policy, and execute the sub-kernels on the vGPUs as partitioned in accordance with the execution schedule.

    Remediation of containerized workloads based on context breach at edge devices

    Publication Number: US11792086B1

    Publication Date: 2023-10-17

    Application Number: US17945199

    Filing Date: 2022-09-15

    Applicant: VMWARE, INC.

    CPC classification number: H04L41/40 H04L41/122

    Abstract: Computer-implemented methods, media, and systems for remediation of containerized workloads based on context breach at edge devices are disclosed. One example computer-implemented method includes monitoring telemetry data from a first software defined wide area network (SD-WAN) edge device, where the telemetry data includes multiple context elements at the first SD-WAN edge device. It is determined that a context change occurs for at least one of the context elements at the first SD-WAN edge device. It is determined that due to the context change, the first SD-WAN edge device does not satisfy one or more requirements for running one or more workloads scheduled to run. In response to the determination that the first SD-WAN edge device does not satisfy the one or more requirements, the at least one of the one or more workloads is offloaded from the first SD-WAN edge device to a second SD-WAN edge device.
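The context-breach check described above can be sketched as a comparison of a device's current context elements against each workload's requirements; workloads whose requirements are no longer met are the candidates to offload to a second edge device. The field names and comparison rules here are illustrative assumptions.

```python
# Hypothetical sketch: after a context change at an SD-WAN edge device,
# list the workloads whose requirements the device no longer satisfies.
def breached_workloads(device_context: dict, workloads: dict) -> list:
    """device_context: current context elements, e.g. {"gpu": True, "mem_gb": 4}.
    workloads: workload name -> requirements dict using the same keys.
    Returns names of workloads that should be offloaded elsewhere."""
    offload = []
    for name, reqs in workloads.items():
        for key, needed in reqs.items():
            have = device_context.get(key)
            if have is None:
                offload.append(name)
                break
            # numeric requirements are minimums; others must match exactly
            ok = have >= needed if isinstance(needed, (int, float)) else have == needed
            if not ok:
                offload.append(name)
                break
    return offload
```

The orchestrator would then migrate each listed workload to a second SD-WAN edge device that does satisfy its requirements.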

    REMOTE DIRECT MEMORY ACCESS (RDMA)-BASED RECOVERY OF DIRTY DATA IN REMOTE MEMORY

    Publication Number: US20230315593A1

    Publication Date: 2023-10-05

    Application Number: US18331019

    Filing Date: 2023-06-07

    Applicant: VMware, Inc.

    Abstract: Techniques for implementing RDMA-based recovery of dirty data in remote memory are provided. In one set of embodiments, upon occurrence of a failure at a first (i.e., source) host system, a second (i.e., failover) host system can allocate a new memory region corresponding to a memory region of the source host system and retrieve a baseline copy of the memory region from a storage backend shared by the source and failover host systems. The failover host system can further populate the new memory region with the baseline copy and retrieve one or more dirty page lists for the memory region from the source host system via RDMA, where the one or more dirty page lists identify memory pages in the memory region that include data updates not present in the baseline copy. For each memory page identified in the one or more dirty page lists, the failover host system can then copy the content of that memory page from the memory region of the source host system to the new memory region via RDMA.
