-
Publication No.: US20230385094A1
Publication Date: 2023-11-30
Application No.: US17826911
Filing Date: 2022-05-27
Applicant: VMware, Inc.
Inventor: Alex Markuze , Shay Vargaftik , Igor Golikov , Yaniv Ben-Itzhak , Avishay Yanai
IPC: G06F9/455
CPC classification number: G06F9/45558 , G06F2009/45595 , G06F2009/4557 , G06F2009/45583
Abstract: Some embodiments provide a method for sending data messages at a network interface controller (NIC) of a computer. From a network stack executing on the computer, the method receives (i) a header for a data message to send and (ii) a logical memory address of a payload for the data message. The method translates the logical memory address into a memory address for accessing a particular one of multiple devices connected to the computer. The method reads payload data from the memory address of the particular device. The method sends the data message with the header received from the network stack and the payload data read from the particular device.
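A minimal Python sketch of the send path described above, assuming a page-granular translation table on the NIC; the `SmartNIC`, `Device`, and `map_page` names are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch only: names and the page-table layout are assumptions.
from dataclasses import dataclass

@dataclass
class Device:
    """Stand-in for a device (e.g. an NVMe drive) exposing byte-addressable reads."""
    name: str
    memory: bytes

    def read(self, offset: int, length: int) -> bytes:
        return self.memory[offset:offset + length]

class SmartNIC:
    def __init__(self, devices):
        self.devices = devices          # device_id -> Device
        self.translation_table = {}     # logical page -> (device_id, device_offset)

    def map_page(self, logical_page, device_id, device_offset):
        self.translation_table[logical_page] = (device_id, device_offset)

    def send(self, header: bytes, logical_addr: int, length: int, page_size=4096):
        # Translate the logical address from the network stack into a
        # device-local address, read the payload directly from that device,
        # and assemble the outgoing data message.
        page, page_off = divmod(logical_addr, page_size)
        device_id, base = self.translation_table[page]
        payload = self.devices[device_id].read(base + page_off, length)
        return header + payload

nic = SmartNIC({0: Device("nvme0", b"\x00" * 100 + b"hello-payload")})
nic.map_page(logical_page=5, device_id=0, device_offset=0)
print(nic.send(header=b"HDR|", logical_addr=5 * 4096 + 100, length=13))
```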
-
Publication No.: US20230281516A1
Publication Date: 2023-09-07
Application No.: US18316147
Filing Date: 2023-05-11
Applicant: VMware, Inc.
Inventor: Yaniv Ben-Itzhak , Shay Vargaftik
CPC classification number: G06N20/00 , G06N5/045 , G06F16/285
Abstract: Techniques for implementing intelligent data partitioning for a distributed machine learning (ML) system are provided. In one set of embodiments, a computer system implementing a data partition module can receive a training data instance for a ML task and identify, using a clustering algorithm, a cluster to which the training data instance belongs, the cluster being one of a plurality of clusters determined via the clustering algorithm that partition a data space of the ML task. The computer system can then transmit the training data instance to a ML worker of the distributed ML system that is assigned to the cluster, where the ML worker is configured to build or update a ML model using the training data instance.
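A minimal Python sketch of the data-partition module described above, assuming cluster centroids were precomputed (e.g. by k-means); the `DataPartitionModule` and `Worker` names are illustrative:

```python
# Illustrative sketch: nearest-centroid assignment stands in for the clustering
# algorithm; the worker interface is an assumption.
import numpy as np

class Worker:
    """Stand-in ML worker: accumulates its shard and would train a model on it."""
    def __init__(self):
        self.shard = []

    def receive(self, instance, label):
        self.shard.append((instance, label))

class DataPartitionModule:
    def __init__(self, centroids, workers):
        self.centroids = centroids      # (num_clusters, dim), assumed precomputed
        self.workers = workers          # cluster index -> Worker

    def route(self, instance, label):
        # Identify the cluster whose centroid is nearest to the instance,
        # then transmit the instance to the worker assigned to that cluster.
        cluster = int(np.argmin(np.linalg.norm(self.centroids - instance, axis=1)))
        self.workers[cluster].receive(instance, label)
        return cluster

rng = np.random.default_rng(0)
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
workers = {0: Worker(), 1: Worker()}
module = DataPartitionModule(centroids, workers)
for x, y in [(rng.normal(0, 1, 2), 0), (rng.normal(5, 1, 2), 1)]:
    module.route(x, y)
print(len(workers[0].shard), len(workers[1].shard))
```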
-
Publication No.: US20230177381A1
Publication Date: 2023-06-08
Application No.: US17535483
Filing Date: 2021-11-24
Applicant: VMware, Inc.
Inventor: Yaniv Ben-Itzhak , Shay Vargaftik , Boris Shustin
IPC: G06N20/00
CPC classification number: G06N20/00
Abstract: Techniques for accelerating the training of machine learning (ML) models in the presence of network bandwidth constraints via data instance compression. For example, consider a scenario in which (1) a first computer system is configured to train a ML model on a training dataset that is stored on a second computer system remote from the first computer system, and (2) one or more network bandwidth constraints place a cap on the amount of data that may be transmitted between the two computer systems per training iteration. In this and other similar scenarios, the techniques of the present disclosure enable the second computer system to send, according to one of several schemes, a batch of compressed data instances to the first computer system at each training iteration, such that the data size of the batch is less than or equal to the data cap.
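A sketch of one possible compression scheme consistent with the abstract: uniformly quantize float32 training instances to int8 so that each per-iteration batch fits under a byte cap. The specific scheme and all names here are assumptions for illustration; the disclosure describes several such schemes.

```python
# Illustrative sketch: int8 quantization stands in for the compression scheme.
import numpy as np

def compress_batch(batch: np.ndarray):
    """Quantize a float32 batch to int8 plus a per-batch scale (lossy, ~4x smaller)."""
    scale = float(np.abs(batch).max()) / 127.0 or 1.0
    q = np.clip(np.round(batch / scale), -127, 127).astype(np.int8)
    return q, scale

def decompress_batch(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

def select_batch_under_cap(dataset: np.ndarray, cap_bytes: int):
    """Pick as many instances as fit under the cap once compressed to int8."""
    bytes_per_instance = dataset.shape[1]           # one int8 per feature
    n = min(len(dataset), cap_bytes // bytes_per_instance)
    return compress_batch(dataset[:n])

data = np.random.default_rng(0).normal(size=(1000, 64)).astype(np.float32)
q, scale = select_batch_under_cap(data, cap_bytes=8192)   # 8 KB cap -> 128 instances
print(q.shape, np.abs(decompress_batch(q, scale) - data[:len(q)]).max())
```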
-
Publication No.: US11620578B2
Publication Date: 2023-04-04
Application No.: US16924020
Filing Date: 2020-07-08
Applicant: VMware, Inc.
Inventor: Yaniv Ben-Itzhak , Shay Vargaftik
Abstract: Techniques for implementing unsupervised anomaly detection via supervised methods are provided. In one set of embodiments, a computer system can train an unsupervised anomaly detection classifier using an unlabeled training data set and classify the unlabeled training data set via the trained version of the unsupervised classifier, where the classifying generates anomaly scores for the data instances in the unlabeled training data set. The computer system can further construct a labeled training data set that includes a first subset of data instances from the unlabeled training data set whose anomaly scores are below a first threshold and a second subset of data instances from the unlabeled training data set whose anomaly scores are above a second threshold. The computer system can then train a supervised anomaly detection classifier using the labeled training data set.
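A minimal Python sketch of the two-stage pipeline in the abstract: score unlabeled data with an unsupervised detector, keep only the confidently-normal (score below a low threshold) and confidently-anomalous (score above a high threshold) instances as pseudo-labels, and train a supervised classifier on them. The toy distance-from-mean detector and the threshold quantiles are assumptions standing in for any unsupervised anomaly classifier.

```python
# Illustrative sketch: a distance-from-mean scorer stands in for the
# unsupervised anomaly detection classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (500, 2)),      # mostly normal points
               rng.normal(6, 1, (25, 2))])      # a few anomalies

# Stage 1: "train" the unsupervised detector and score every instance.
center = X.mean(axis=0)
scores = np.linalg.norm(X - center, axis=1)     # higher score = more anomalous

# Stage 2: build a labeled set from the confident tails only.
low, high = np.quantile(scores, [0.70, 0.97])
normal = X[scores < low]                        # pseudo-label 0
anomalous = X[scores > high]                    # pseudo-label 1
X_train = np.vstack([normal, anomalous])
y_train = np.concatenate([np.zeros(len(normal)), np.ones(len(anomalous))])

# Stage 3: train the supervised anomaly classifier on the pseudo-labeled set.
clf = LogisticRegression().fit(X_train, y_train)
print(clf.predict([[0.1, -0.2], [6.2, 5.8]]))   # expected: [0., 1.]
```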
-
Publication No.: US20220292342A1
Publication Date: 2022-09-15
Application No.: US17199157
Filing Date: 2021-03-11
Applicant: VMware, Inc.
Inventor: Yaniv Ben-Itzhak , Shay Vargaftik
Abstract: In one set of embodiments, a client can receive from a server a copy of a neural network that includes N layers. The client can further provide one or more data instances as input to the copy, the one or more data instances being part of a local training data set residing on the client, compute a client gradient comprising gradient values for the N layers, determine a partial client gradient comprising gradient values for the first K of the N layers, and determine an output of the K-th layer of the copy, the output being the result of the processing performed by the first K layers on the one or more data instances. The client can then transmit the partial client gradient and the output of the K-th layer to the server.
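A minimal numpy sketch of this protocol with N = 2 layers and K = 1: the client runs its copy of the network on private data, computes the full gradient, but transmits only the first-K-layer gradients plus the K-th layer's output rather than the raw data. The network shape, loss, and variable names are assumptions for illustration.

```python
# Illustrative sketch: a 2-layer MLP with MSE loss, manual backprop.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(1, 4))   # copy received from server
x, y = rng.normal(size=(3, 1)), np.array([[1.0]])           # private local instance

# Forward pass through the client's copy of the network.
a1 = np.maximum(W1 @ x, 0.0)         # output of the K-th (first) layer
pred = W2 @ a1                       # output of layer N
loss = float(((pred - y) ** 2).mean())

# Backward pass: full client gradient for both layers.
d_pred = 2.0 * (pred - y)
g_W2 = d_pred @ a1.T                 # gradient for layers K+1..N (stays local)
d_a1 = (W2.T @ d_pred) * (a1 > 0)
g_W1 = d_a1 @ x.T                    # gradient for the first K layers

# What actually leaves the client: partial gradient + K-th layer output.
message_to_server = {"partial_gradient": g_W1, "layer_K_output": a1}
print(loss, message_to_server["layer_K_output"].shape)
```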
-
Publication No.: US20220083917A1
Publication Date: 2022-03-17
Application No.: US17021454
Filing Date: 2020-09-15
Applicant: VMware, Inc.
Inventor: Yaniv Ben-Itzhak , Shay Vargaftik
IPC: G06N20/20 , G06F16/2455
Abstract: In one set of embodiments, a computing node in a plurality of computing nodes can train a first ML model on a local training dataset comprising a plurality of labeled training data instances, where the training is performed using a distributed/federated training approach across the plurality of computing nodes and where the training results in a trained version of the first ML model. The computing node can further compute, using the trained version of the first ML model, a training value measure for each labeled training data instance in the local training dataset and identify a subset of the plurality of labeled training data instances based on the computed training value measures. The computing node can then train a second ML model on the subset, where the training of the second ML model is performed using the distributed/federated training approach.
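A sketch of the per-node step described above, assuming the training value measure is the first model's per-instance loss (low confidence = high value) and that the top quarter of instances is kept; both choices, and the single-node simplification, are assumptions.

```python
# Illustrative sketch: per-instance cross-entropy under the first model stands
# in for the training value measure; federated aggregation is omitted.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=400) > 0).astype(int)   # noisy labels

# First ML model (stand-in for the distributed/federated trained model).
m1 = LogisticRegression().fit(X, y)

# Training value measure: per-instance loss under the trained first model.
p = np.clip(m1.predict_proba(X)[np.arange(len(y)), y], 1e-9, 1.0)
value = -np.log(p)                    # hard / informative instances score high

# Keep the top 25% most valuable instances and train the second model on them.
keep = np.argsort(value)[-len(X) // 4:]
m2 = LogisticRegression().fit(X[keep], y[keep])
print(f"subset size: {len(keep)}, m2 train acc: {m2.score(X[keep], y[keep]):.2f}")
```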
-
Publication No.: US20220012535A1
Publication Date: 2022-01-13
Application No.: US16924009
Filing Date: 2020-07-08
Applicant: VMware, Inc.
Inventor: Yaniv Ben-Itzhak , Shay Vargaftik
Abstract: Techniques for augmenting training data sets for machine learning (ML) classifiers using classification metadata are provided. In one set of embodiments, a computer system can train a first ML classifier using a training data set, where the training data set comprises a plurality of data instances, where each data instance includes a set of features, and where the training results in a trained version of the first ML classifier. The computer system can further classify each data instance in the plurality of data instances using the trained version of the first ML classifier, the classifications generating classification metadata for each data instance, and augment the training data set with the classification metadata to create an augmented version of the training data set. The computer system can then train a second ML classifier using the augmented version of the training data set.
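A minimal sketch of the augmentation loop in the abstract: train a first classifier, classify the training set with it, and append the resulting classification metadata (here assumed to be the predicted class probabilities) as extra features before training a second classifier. The model choices are illustrative.

```python
# Illustrative sketch: predicted probabilities stand in for the classification
# metadata; the two model types are arbitrary choices.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

# Step 1: train the first ML classifier on the original features.
clf1 = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Step 2: classify each instance; the per-class probabilities are the metadata.
metadata = clf1.predict_proba(X)                  # shape (n_instances, n_classes)

# Step 3: augment the training set with the metadata, train the second classifier.
X_aug = np.hstack([X, metadata])
clf2 = LogisticRegression().fit(X_aug, y)
print(X.shape, "->", X_aug.shape, f"clf2 acc: {clf2.score(X_aug, y):.2f}")
```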
-
Publication No.: US20250111244A1
Publication Date: 2025-04-03
Application No.: US18479613
Filing Date: 2023-10-02
Applicant: VMware, Inc.
Inventor: Yaniv Ben-Itzhak , Shay Vargaftik
Abstract: A framework for implementing reinforcement learning (RL)-based dynamic aggregation for distributed learning (DL) and federated learning (FL) is provided. In one set of embodiments, the framework includes an RL agent that interacts with the parameter server and clients of a DL/FL system and periodically receives two inputs from the system while the system is executing a training run: a “state” comprising information regarding the current runtime properties of the system and a “reward” comprising information pertaining to one or more training metrics to be optimized. In response to these inputs, the RL agent generates an “action” comprising information for modifying the parameter server's aggregation function in a manner that maximizes future cumulative rewards expected from the DL/FL system based on the state.
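A toy sketch of the control loop in the abstract: an agent periodically receives a state (runtime properties of the DL/FL system) and a reward (a training metric), and emits an action that changes the parameter server's aggregation function. An epsilon-greedy bandit stands in for the RL agent, and the two candidate weighting schemes are assumptions.

```python
# Illustrative sketch: the bandit, reward, and aggregation schemes are assumptions.
import random

AGGREGATIONS = {                     # candidate actions: client-weighting schemes
    "uniform": lambda updates: [1.0 / len(updates)] * len(updates),
    "by_size": lambda updates: [u["n"] / sum(x["n"] for x in updates) for u in updates],
}

class RLAgent:
    def __init__(self, actions, eps=0.2):
        self.actions, self.eps = list(actions), eps
        self.value = {a: 0.0 for a in self.actions}  # running reward estimate
        self.count = {a: 0 for a in self.actions}

    def act(self, state):
        if random.random() < self.eps:               # explore occasionally
            return random.choice(self.actions)
        return max(self.actions, key=self.value.get) # else pick best-so-far

    def feedback(self, action, reward):
        self.count[action] += 1
        self.value[action] += (reward - self.value[action]) / self.count[action]

random.seed(0)
agent = RLAgent(AGGREGATIONS)
updates = [{"n": 100}, {"n": 900}]                   # two clients' dataset sizes
for step in range(50):
    state = {"round": step, "clients": len(updates)}
    action = agent.act(state)                        # agent's "action"
    weights = AGGREGATIONS[action](updates)          # server applies it this round
    reward = 1.0 if action == "by_size" else 0.5     # stand-in training metric
    agent.feedback(action, reward)
print(max(agent.value, key=agent.value.get))         # expected: 'by_size'
```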
-
Publication No.: US20230409488A1
Publication Date: 2023-12-21
Application No.: US17845658
Filing Date: 2022-06-21
Applicant: VMware, Inc.
Inventor: Shay Vargaftik , Alex Markuze , Yaniv Ben-Itzhak , Igor Golikov , Avishay Yanai
IPC: G06F12/121 , G06F12/0815 , G06F13/16
CPC classification number: G06F12/121 , G06F12/0815 , G06F13/1668 , G06F2213/3808
Abstract: Some embodiments provide a method for performing data message processing at a smart NIC of a computer that executes a software forwarding element (SFE). The method stores (i) a set of cache entries that the smart NIC uses to process a set of received data messages without providing the data messages to the SFE and (ii) rule updates used by the smart NIC to validate the cache entries. After a period of time, the method determines that the rule updates are incorporated into a data message processing structure of the SFE. Upon incorporating the rule updates, the method deletes from the smart NIC (i) the rule updates and (ii) at least a subset of the cache entries.
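A minimal sketch of the bookkeeping in the abstract: the smart NIC keeps flow cache entries plus the rule updates needed to validate them; once the software forwarding element (SFE) confirms it has incorporated those updates, the NIC deletes the updates and the affected cache entries. The data structures and names are illustrative assumptions.

```python
# Illustrative sketch: a dict-based flow cache stands in for the NIC's cache.
class SmartNICCache:
    def __init__(self):
        self.cache = {}                # flow key -> forwarding action
        self.pending_rule_updates = []

    def process(self, flow_key, sfe_lookup):
        if flow_key in self.cache:              # fast path: no SFE involvement
            return self.cache[flow_key]
        action = sfe_lookup(flow_key)           # slow path: ask the SFE
        self.cache[flow_key] = action
        return action

    def record_rule_update(self, update):
        self.pending_rule_updates.append(update)

    def on_sfe_incorporated(self, incorporated):
        # The SFE now reflects these updates, so delete the NIC-side copies
        # along with the cache entries they would have invalidated.
        for update in incorporated:
            for key in update["affected_flows"]:
                self.cache.pop(key, None)
            self.pending_rule_updates.remove(update)

nic = SmartNICCache()
nic.process("flowA", sfe_lookup=lambda k: "fwd:port1")
update = {"rule": "drop 10.0.0.0/8", "affected_flows": ["flowA"]}
nic.record_rule_update(update)
nic.on_sfe_incorporated([update])
print(nic.cache, nic.pending_rule_updates)      # {} []
```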
-
Publication No.: US20230162022A1
Publication Date: 2023-05-25
Application No.: US17535479
Filing Date: 2021-11-24
Applicant: VMware, Inc.
Inventor: Yaniv Ben-Itzhak , Shay Vargaftik , Boris Shustin
Abstract: At an iteration k of a training procedure for training a deep neural network (DNN), a first computer system can sample a batch b_k of data instances from a training dataset local to that computer system in a manner that mostly conforms to importance sampling probabilities of the data instances, but also applies a "stiffness" factor with respect to data instances appearing in batch b_{k-1} of the prior iteration k-1. This stiffness factor makes it more likely, or guarantees, that some portion of the data instances in prior batch b_{k-1} (which is present on a second computer system holding the DNN) will be reused in current batch b_k. The first computer system can then transmit the new data instances in batch b_k to the second computer system, and the second computer system can reconstruct batch b_k using the received new data instances and its local copy of prior batch b_{k-1}.
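A sketch of the batch construction on the first computer system: reserve a "stiff" fraction of slots for instances reused from the prior batch b_{k-1}, fill the rest by importance sampling, and transmit only the new instances. The stiffness value of 0.5 and the function names are assumptions.

```python
# Illustrative sketch: instances are represented by dataset indices.
import numpy as np

def sample_batch(probs, prev_batch, batch_size, stiffness=0.5, rng=None):
    rng = rng or np.random.default_rng()
    # Stiffness: reuse a fixed fraction of the prior batch b_{k-1}.
    n_reuse = min(int(stiffness * batch_size), len(prev_batch))
    reused = list(rng.choice(prev_batch, size=n_reuse, replace=False))
    # Fill the remaining slots by importance sampling over the rest of the data.
    remaining = [i for i in range(len(probs)) if i not in set(reused)]
    p = probs[remaining] / probs[remaining].sum()
    fresh = list(rng.choice(remaining, size=batch_size - n_reuse, replace=False, p=p))
    return reused + fresh, fresh       # full batch b_k, and the part to transmit

rng = np.random.default_rng(0)
probs = rng.random(1000)               # importance-sampling weights per instance
prev = list(range(32))                 # indices in prior batch b_{k-1}
batch, to_send = sample_batch(probs, prev, batch_size=32, rng=rng)
print(len(batch), len(to_send))        # 32, 16 -> only 16 new instances sent
```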
-