-
Publication Number: US20230409225A1
Publication Date: 2023-12-21
Application Number: US17845766
Application Date: 2022-06-21
Applicant: VMware, Inc.
Inventor: Alex Markuze, Shay Vargaftik, Igor Golikov, Yaniv Ben-Itzhak, Avishay Yanai
IPC: G06F3/06
CPC classification number: G06F3/0647, G06F3/067, G06F3/0604
Abstract: Some embodiments provide a method for transmitting data at a network interface controller (NIC) of a computer that operates as a server. The computer includes multiple storage devices. The method receives a request from a client device for a particular file. The method translates the particular file into a memory location corresponding to a particular one of the storage devices at the computer. The method transmits the requested file from the particular storage location to the client device.
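The translation step described in the abstract can be pictured as a lookup from file name to a concrete (device, offset) location. Below is a minimal Python sketch of that idea; the class name, the flat lookup table, and the bytearray-backed "devices" are illustrative assumptions, not the patented design.

```python
class FileServerNIC:
    """Maps requested file names to (device, offset) locations and serves bytes."""

    def __init__(self, devices):
        # devices: dict mapping device id -> bytearray simulating a storage device
        self.devices = devices
        self.file_table = {}  # file name -> (device id, offset, length)

    def store(self, name, device_id, offset, data):
        self.devices[device_id][offset:offset + len(data)] = data
        self.file_table[name] = (device_id, offset, len(data))

    def handle_request(self, name):
        # Translate the file name into a concrete storage location, then
        # read directly from that device rather than going through a host
        # file-system path.
        device_id, offset, length = self.file_table[name]
        return bytes(self.devices[device_id][offset:offset + length])
```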
-
Publication Number: US20230342599A1
Publication Date: 2023-10-26
Application Number: US17727172
Application Date: 2022-04-22
Applicant: VMware, Inc.
Inventor: Shay Vargaftik, Yaniv Ben-Itzhak, Alex Markuze, Igor Golikov, Avishay Yanai
CPC classification number: G06N3/08, G06N3/0454
Abstract: Some embodiments provide a method for performing distributed machine learning (ML) across multiple computers. At a smart network interface controller (NIC) of a first computer, the method receives a set of ML parameters from the first computer related to training an ML model. The method compresses the set of ML parameters based on a current state of a connection to a central computer that receives sets of ML parameters from a plurality of the computers. The method sends the compressed set of ML parameters to the central computer for the central computer to process the compressed set of ML parameters along with corresponding sets of ML parameters received from the other computers of the plurality of computers.
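Compression keyed to the current connection state can be illustrated with top-k gradient sparsification whose aggressiveness scales with link quality. This is a minimal sketch; the top-k scheme and the bandwidth-to-ratio mapping are assumptions standing in for whatever compression the claimed method actually uses.

```python
import numpy as np

def compress_gradients(params, bandwidth_mbps, full_threshold=100.0):
    """Keep fewer ML parameter values when the link to the central server is slow."""
    if bandwidth_mbps >= full_threshold:
        keep = params.size                      # fast link: send everything
    else:
        frac = max(bandwidth_mbps / full_threshold, 0.01)
        keep = max(1, int(params.size * frac))  # slow link: keep fewer values
    idx = np.argsort(np.abs(params))[-keep:]    # indices of largest magnitudes
    return idx, params[idx]                     # sparse (index, value) payload
```

The (index, value) pairs are what would go over the wire to the central aggregator, which can scatter them back into a dense vector.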
-
Publication Number: US11748668B2
Publication Date: 2023-09-05
Application Number: US16923988
Application Date: 2020-07-08
Applicant: VMware, Inc.
Inventor: Yaniv Ben-Itzhak, Shay Vargaftik
IPC: G06N20/20, G06F9/50, G06F18/243, G06F18/214, G06F18/21
CPC classification number: G06N20/20, G06F9/505, G06F9/5083, G06F18/214, G06F18/2185, G06F18/24323
Abstract: Techniques for implementing a tree-based ensemble classifier comprising an internal load balancer are provided. In one set of embodiments, the tree-based ensemble classifier can receive a query data instance and select, via the internal load balancer, a subset of its decision trees for processing the query data instance. The tree-based ensemble classifier can then query each decision tree in the selected subset with the query data instance, combine the per-tree classifications generated by the subset trees to generate a subset classification, and determine whether a confidence level associated with the subset classification is sufficiently high. If the answer is yes, the tree-based ensemble classifier can output the subset classification as a final classification result for the query data instance. If the answer is no, the tree-based ensemble classifier can repeat the foregoing steps until a sufficient confidence level is reached or until all of its decision trees have been selected and queried.
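The query-a-subset-then-check-confidence loop above can be sketched as follows. A random shuffle stands in for the internal load balancer, and the batch size and confidence rule are illustrative assumptions.

```python
import random
from collections import Counter

def classify_incrementally(trees, x, batch_size=3, confidence=0.7):
    """Query trees in batches until the majority vote is confident enough."""
    order = list(range(len(trees)))
    random.shuffle(order)              # stand-in for load-balanced tree selection
    votes = Counter()
    queried = 0
    while queried < len(trees):
        for i in order[queried:queried + batch_size]:
            votes[trees[i](x)] += 1    # each tree is a callable per-tree classifier
        queried = min(queried + batch_size, len(trees))
        label, count = votes.most_common(1)[0]
        if count / queried >= confidence:   # confident enough: stop early
            return label
    return votes.most_common(1)[0][0]       # all trees queried: majority wins
```

The early exit is the point of the design: easy queries touch only a few trees, so the ensemble's per-query cost adapts to input difficulty.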
-
Publication Number: US11526785B2
Publication Date: 2022-12-13
Application Number: US16908498
Application Date: 2020-06-22
Applicant: VMware, Inc.
Inventor: Yaniv Ben-Itzhak, Shay Vargaftik
Abstract: Techniques for performing predictability-driven compression of training data sets used for machine learning (ML) are provided. In one set of embodiments, a computer system can receive a training data set comprising a plurality of data instances and can train an ML model using the plurality of data instances, the training resulting in a trained version of the ML model. The computer system can further generate prediction metadata for each data instance in the plurality of data instances using the trained version of the ML model and can compute a predictability measure for each data instance based on the prediction metadata, the predictability measure indicating a training value of the data instance. The computer system can then filter one or more data instances from the plurality of data instances based on the computed predictability measures, the filtering resulting in a compressed version of the training data set.
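The filtering step can be sketched with the predicted-class probability serving as the predictability measure: instances the trained model already gets right with high confidence carry little training value and are dropped. The threshold rule is an illustrative assumption.

```python
import numpy as np

def compress_training_set(X, y, predict_proba, keep_threshold=0.95):
    """Drop instances the trained model already predicts with high confidence."""
    proba = predict_proba(X)                 # (n_instances, n_classes) probabilities
    predicted = proba.argmax(axis=1)
    confidence = proba.max(axis=1)
    # Keep instances that are misclassified or predicted with low confidence;
    # everything else is "predictable" and filtered out.
    keep = (predicted != y) | (confidence < keep_threshold)
    return X[keep], y[keep]
```

A second model trained on the compressed set sees fewer, more informative instances, which is the intended payoff of the compression.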
-
Publication Number: US20220012639A1
Publication Date: 2022-01-13
Application Number: US16924035
Application Date: 2020-07-08
Applicant: VMware, Inc.
Inventor: Yaniv Ben-Itzhak, Shay Vargaftik
Abstract: Techniques for quantizing training data sets using machine learning (ML) model metadata are provided. In one set of embodiments, a computer system can receive a training data set comprising a plurality of features and a plurality of data instances, where each data instance includes a feature value for each of the plurality of features. The computer system can further train a machine learning (ML) model using the training data set, where the training results in a trained version of the ML model, and can extract metadata from the trained version of the ML model pertaining to the plurality of features. The computer system can then quantize the plurality of data instances based on the extracted metadata, the quantizing resulting in a quantized version of the training data set.
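One way to picture metadata-driven quantization is to let per-feature importances extracted from the trained model set each feature's bit budget. This sketch is illustrative: the importance-to-bits allocation rule and uniform quantizer are assumptions, not the claimed scheme.

```python
import numpy as np

def quantize_by_importance(X, importances, max_bits=8, min_bits=2):
    """Uniformly quantize each feature column with a bit budget set by importance."""
    scaled = importances / importances.max()
    bits = np.clip((scaled * max_bits).round().astype(int), min_bits, max_bits)
    Xq = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        lo, hi = X[:, j].min(), X[:, j].max()
        levels = 2 ** bits[j]                 # important features get more levels
        step = (hi - lo) / (levels - 1) if hi > lo else 1.0
        Xq[:, j] = lo + np.round((X[:, j] - lo) / step) * step
    return Xq, bits
```

Features the model barely uses get coarse grids, shrinking the data set's footprint while preserving the values that drive predictions.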
-
Publication Number: US20210397990A1
Publication Date: 2021-12-23
Application Number: US16908498
Application Date: 2020-06-22
Applicant: VMware, Inc.
Inventor: Yaniv Ben-Itzhak, Shay Vargaftik
Abstract: Techniques for performing predictability-driven compression of training data sets used for machine learning (ML) are provided. In one set of embodiments, a computer system can receive a training data set comprising a plurality of data instances and can train an ML model using the plurality of data instances, the training resulting in a trained version of the ML model. The computer system can further generate prediction metadata for each data instance in the plurality of data instances using the trained version of the ML model and can compute a predictability measure for each data instance based on the prediction metadata, the predictability measure indicating a training value of the data instance. The computer system can then filter one or more data instances from the plurality of data instances based on the computed predictability measures, the filtering resulting in a compressed version of the training data set.
-
Publication Number: US20250111244A1
Publication Date: 2025-04-03
Application Number: US18479613
Application Date: 2023-10-02
Applicant: VMware, Inc.
Inventor: Yaniv Ben-Itzhak, Shay Vargaftik
Abstract: A framework for implementing reinforcement learning (RL)-based dynamic aggregation for distributed learning (DL) and federated learning (FL) is provided. In one set of embodiments, the framework includes an RL agent that interacts with the parameter server and clients of a DL/FL system and periodically receives two inputs from the system while the system is executing a training run: a “state” comprising information regarding the current runtime properties of the system and a “reward” comprising information pertaining to one or more training metrics to be optimized. In response to these inputs, the RL agent generates an “action” comprising information for modifying the parameter server's aggregation function in a manner that maximizes future cumulative rewards expected from the DL/FL system based on the state.
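The state/reward/action loop can be sketched with a simple epsilon-greedy agent picking among candidate aggregation functions. The `env` object, its `observe`/`apply` methods, the candidate action set, and the epsilon-greedy policy are all illustrative assumptions standing in for the framework's actual RL agent.

```python
import random

def run_rl_aggregation(env, num_rounds=5, epsilon=0.2,
                       actions=("mean", "median", "trimmed_mean")):
    """Epsilon-greedy loop choosing the parameter server's aggregation function."""
    value = {a: 0.0 for a in actions}   # running reward estimate per action
    counts = {a: 0 for a in actions}
    for _ in range(num_rounds):
        state, reward = env.observe()   # runtime properties + training metric
        if random.random() < epsilon:
            action = random.choice(actions)               # explore
        else:
            action = max(actions, key=value.__getitem__)  # exploit best so far
        counts[action] += 1
        value[action] += (reward - value[action]) / counts[action]
        env.apply(action)               # modify the server's aggregation function
    return value
```

A full implementation would condition the policy on the observed state rather than a single running average, but the shape of the interaction (observe, act, update) is the same.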
-
Publication Number: US20230409488A1
Publication Date: 2023-12-21
Application Number: US17845658
Application Date: 2022-06-21
Applicant: VMware, Inc.
Inventor: Shay Vargaftik, Alex Markuze, Yaniv Ben-Itzhak, Igor Golikov, Avishay Yanai
IPC: G06F12/121, G06F12/0815, G06F13/16
CPC classification number: G06F12/121, G06F12/0815, G06F13/1668, G06F2213/3808
Abstract: Some embodiments provide a method for performing data message processing at a smart NIC of a computer that executes a software forwarding element (SFE). The method stores (i) a set of cache entries that the smart NIC uses to process a set of received data messages without providing the data messages to the SFE and (ii) rule updates used by the smart NIC to validate the cache entries. After a period of time, the method determines that the rule updates are incorporated into a data message processing structure of the SFE. Upon incorporating the rule updates, the method deletes from the smart NIC (i) the rule updates and (ii) at least a subset of the cache entries.
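The deferred-deletion idea can be sketched as a cache that holds both entries and the pending rule updates that validate them, dropping both once the SFE has incorporated the updates. The entry/update shapes and the `affects` predicate are illustrative assumptions.

```python
class SmartNicCache:
    """Holds flow-cache entries plus the rule updates needed to validate them."""

    def __init__(self):
        self.cache = {}            # flow key -> cached forwarding action
        self.pending_updates = []  # rule updates not yet absorbed by the SFE

    def record_update(self, update):
        self.pending_updates.append(update)

    def on_sfe_incorporated(self, affects):
        # Once the SFE has absorbed the pending rule updates, the smart NIC
        # can safely drop both the updates and the cache entries they touch.
        stale = {k for k in self.cache
                 if any(affects(u, k) for u in self.pending_updates)}
        for k in stale:
            del self.cache[k]
        self.pending_updates.clear()
        return stale
```

Entries untouched by any update survive, so the fast path keeps serving unaffected flows without consulting the SFE.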
-
Publication Number: US20230162022A1
Publication Date: 2023-05-25
Application Number: US17535479
Application Date: 2021-11-24
Applicant: VMware, Inc.
Inventor: Yaniv Ben-Itzhak, Shay Vargaftik, Boris Shustin
Abstract: At an iteration k of a training procedure for training a deep neural network (DNN), a first computer system can sample a batch bk of data instances from a training dataset local to that computer system in a manner that mostly conforms to importance sampling probabilities of the data instances, but also applies a “stiffness” factor with respect to data instances appearing in batch bk−1 of a prior iteration k−1. This stiffness factor makes it more likely, or guarantees, that some portion of the data instances in prior batch bk−1—which is present on a second computer system holding the DNN—will be reused in current batch bk. The first computer system can then transmit the new data instances in batch bk to the second computer system and the second computer system can reconstruct batch bk using the received new data instances and its local copy of prior batch bk−1.
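The stiffness factor can be sketched as a fixed reuse fraction: part of the new batch is drawn from the previous batch (already present on the remote machine), and only the remainder is importance-sampled fresh and transmitted. The exact reuse rule is an illustrative assumption.

```python
import random

def sample_batch(probs, prev_batch, batch_size, stiffness=0.5):
    """Importance-sample a batch while guaranteeing reuse from the prior batch."""
    n_reuse = min(int(batch_size * stiffness), len(prev_batch))
    reused = random.sample(prev_batch, n_reuse)   # instances the peer already has
    reused_set = set(reused)
    pool = [i for i in range(len(probs)) if i not in reused_set]
    weights = [probs[i] for i in pool]
    fresh = random.choices(pool, weights=weights, k=batch_size - n_reuse)
    return reused + fresh, fresh   # full batch, plus instances to transmit
```

Only the `fresh` instances cross the network; the receiver reconstructs the full batch from them and its local copy of the prior batch, which is the bandwidth saving the abstract describes.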
-
Publication Number: US11645587B2
Publication Date: 2023-05-09
Application Number: US16924035
Application Date: 2020-07-08
Applicant: VMware, Inc.
Inventor: Yaniv Ben-Itzhak, Shay Vargaftik
Abstract: Techniques for quantizing training data sets using machine learning (ML) model metadata are provided. In one set of embodiments, a computer system can receive a training data set comprising a plurality of features and a plurality of data instances, where each data instance includes a feature value for each of the plurality of features. The computer system can further train a machine learning (ML) model using the training data set, where the training results in a trained version of the ML model, and can extract metadata from the trained version of the ML model pertaining to the plurality of features. The computer system can then quantize the plurality of data instances based on the extracted metadata, the quantizing resulting in a quantized version of the training data set.