-
Publication No.: US20230171210A1
Publication Date: 2023-06-01
Application No.: US17590836
Application Date: 2022-02-02
Applicant: VMWARE, INC.
Inventor: ROHIT PRADEEP SHETTY, Ravish CHAWLA, Adam CHOW
IPC: H04L51/066, G06F40/284, G06F40/103, G06N20/00
CPC classification number: H04L51/066, G06F40/284, G06F40/103, G06N20/00
Abstract: Disclosed herein are examples of systems and methods for formatting electronic messages using machine learning. An electronic message can be obtained, and a processed message can be generated based at least in part on the electronic message. At least one attribute for the processed message can be determined. A formatting specification can be generated based at least in part on the at least one attribute. A reformatted message can be generated based at least in part on the formatting specification.
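The abstract describes a staged pipeline: obtain a message, generate a processed message, determine at least one attribute, derive a formatting specification, and produce a reformatted message. The sketch below illustrates that flow only in outline; the function names, the heuristic attribute checks, and the rule set are assumptions for illustration, not the patent's machine-learning implementation.

```python
# Hypothetical sketch of the message-reformatting pipeline from the abstract;
# names and heuristics are illustrative stand-ins, not the patented method.
import textwrap
from dataclasses import dataclass, field


@dataclass
class FormattingSpecification:
    """Formatting rules derived from message attributes."""
    rules: dict = field(default_factory=dict)


def preprocess(message: str) -> str:
    """Generate a processed message (here, simple whitespace normalization)."""
    return " ".join(message.split())


def determine_attributes(processed: str) -> dict:
    """Determine at least one attribute of the processed message.

    The patent uses machine learning here; this stand-in uses simple
    heuristics so the sketch stays self-contained.
    """
    return {
        "length": "long" if len(processed) > 500 else "short",
        "has_list": "-" in processed or "*" in processed,
    }


def build_specification(attributes: dict) -> FormattingSpecification:
    """Generate a formatting specification based on the attributes."""
    return FormattingSpecification(rules={
        "wrap_width": 72 if attributes["length"] == "long" else 0,
        "bulletize": attributes["has_list"],
    })


def reformat(message: str) -> str:
    """Generate a reformatted message based on the specification."""
    processed = preprocess(message)
    spec = build_specification(determine_attributes(processed))
    if spec.rules.get("wrap_width"):
        return textwrap.fill(processed, spec.rules["wrap_width"])
    return processed


print(reformat("Hello team,   please review the   attached report."))
```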
-
Publication No.: US20240005430A1
Publication Date: 2024-01-04
Application No.: US17895142
Application Date: 2022-08-25
Applicant: VMWARE, INC.
Inventor: Rohit Pradeep SHETTY, Ramani PANCHAPAKESAN, Ravish CHAWLA
CPC classification number: G06Q50/2057, G06N20/00
Abstract: Disclosed are various approaches for surfacing contextual training programs for users. In some examples, user context data is identified for a user account. The user context data is inputted into a training recommendation model. A training recommendation is generated. The training recommendation recommends a training program that is mapped to the user context data by the training recommendation model. The training recommendation or the training program is surfaced to a client device that is identified by the contextual training service.
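The flow in this abstract is: identify user context data, feed it to a training recommendation model, generate a recommendation that maps a training program to that context, and surface the result to a client device. The sketch below follows that sequence under stated assumptions; the catalog, the keyword-overlap "model", and the surfacing function are hypothetical placeholders rather than the recommendation model described in the application.

```python
# Illustrative sketch of the contextual-training flow from the abstract;
# the catalog and nearest-match "model" are assumptions, not the patent's model.
from typing import Dict, List

# Hypothetical catalog mapping training programs to context keywords.
TRAINING_CATALOG: Dict[str, List[str]] = {
    "Secure Email Handling": ["email", "phishing"],
    "Spreadsheet Basics": ["excel", "reporting"],
    "VPN Setup Guide": ["remote", "vpn"],
}


def recommend_training(user_context: List[str]) -> str:
    """Stand-in for the training recommendation model: returns the program
    whose keywords overlap most with the user's context data."""
    def overlap(keywords: List[str]) -> int:
        return len(set(keywords) & set(user_context))

    return max(TRAINING_CATALOG, key=lambda program: overlap(TRAINING_CATALOG[program]))


def surface_to_device(device_id: str, recommendation: str) -> None:
    """Placeholder for surfacing the recommendation to the identified client device."""
    print(f"Surfacing '{recommendation}' to device {device_id}")


# Example: context data identified for a user account.
context = ["remote", "vpn", "email"]
surface_to_device("device-42", recommend_training(context))
```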
-
Publication No.: US20240346021A1
Publication Date: 2024-10-17
Application No.: US18301739
Application Date: 2023-04-17
Applicant: VMware, Inc.
Inventor: Chaoting XUAN, Ravish CHAWLA, Erich STUNTEBECK
IPC: G06F16/2452, G06F40/20
CPC classification number: G06F16/24522, G06F40/20
Abstract: The present disclosure provides an approach for training a machine learning model. Embodiments include receiving text comprising a natural language request. Embodiments include providing one or more inputs to a source machine learning model based on the text, wherein the source machine learning model has been trained using source training data corresponding to a plurality of databases. Embodiments include receiving, from the source machine learning model in response to the one or more inputs, a database query in a syntax corresponding to a target database. Embodiments include generating training data for a target machine learning model based on the text and the database query received from the source machine learning model, wherein the target machine learning model has been trained using a smaller amount of training data than the source training data that was used to train the source machine learning model.
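The core loop here is a transfer step: a large source model, trained over many databases, translates a natural language request into a query in the target database's syntax, and the resulting (text, query) pair becomes training data for a smaller target model. The sketch below shows that loop in outline; the model interfaces and the canned query output are hypothetical placeholders, not the patent's API or models.

```python
# Rough sketch of the training-data generation loop described in the abstract;
# SourceModel and its output are illustrative placeholders.
from typing import List, Tuple


class SourceModel:
    """Stands in for a model trained on source data spanning many databases."""

    def generate_query(self, text: str, target_syntax: str) -> str:
        # A real model would translate the natural language request into a
        # database query in the target database's syntax.
        return f"SELECT * FROM orders  -- ({target_syntax} query for: {text})"


def build_target_training_data(
    requests: List[str], source: SourceModel, target_syntax: str = "postgresql"
) -> List[Tuple[str, str]]:
    """Pair each natural language request with the query produced by the
    source model, yielding training examples for the smaller target model."""
    return [(text, source.generate_query(text, target_syntax)) for text in requests]


examples = build_target_training_data(
    ["show all orders placed last week"], SourceModel()
)
for text, query in examples:
    print(text, "->", query)
```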
-