-
Publication Number: US20240242114A1
Publication Date: 2024-07-18
Application Number: US18155734
Filing Date: 2023-01-17
Applicant: Microsoft Technology Licensing, LLC
Inventor: Qiang Xiao , Haichao Wei , Hideyuki Winston Inada , Chengming Jiang
IPC: G06N20/00
CPC classification number: G06N20/00
Abstract: Methods, systems, and apparatuses include receiving digital data. Embedded patches are generated using the digital data. A transformed patch is generated by applying a transformer block to an embedded patch. Filtered patches are created for the transformed patch by applying wavelet filters to the transformed patch. A combined patch is created by combining the filtered patches. A set of training data is generated using the combined patch, and a trained prediction model is generated by applying a prediction model to that set of training data.
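The abstract does not specify which wavelet family is used or how the filtered patches are combined. A minimal sketch of the filter-and-combine step, assuming a Haar low-pass/high-pass filter pair applied along the embedding axis and concatenation as the combination:

```python
import numpy as np

def haar_filters(patch):
    # low-pass and high-pass Haar wavelet filters along the feature axis
    low = (patch[..., 0::2] + patch[..., 1::2]) / np.sqrt(2)
    high = (patch[..., 0::2] - patch[..., 1::2]) / np.sqrt(2)
    return low, high

def combine(filtered):
    # one plausible combination: concatenate the filtered patches
    return np.concatenate(filtered, axis=-1)

# toy "transformed patch": a batch of 4 patches with 8-dimensional embeddings
rng = np.random.default_rng(0)
transformed = rng.normal(size=(4, 8))
low, high = haar_filters(transformed)
combined = combine([low, high])
print(combined.shape)  # (4, 8)
```

With this filter pair the even-indexed features are exactly recoverable as `(low + high) / sqrt(2)`, so the combined patch preserves the transformed patch's information while separating its smooth and detail components.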
-
Publication Number: US11481627B2
Publication Date: 2022-10-25
Application Number: US16669283
Filing Date: 2019-10-30
Applicant: Microsoft Technology Licensing, LLC
Inventor: Yuwei Qiu , Chengming Jiang , Huiji Gao , Bee-Chung Chen , Bo Long
Abstract: Computer-implemented techniques for learning composite machine learned models are disclosed. Benefits to implementors of the disclosed techniques include allowing non-machine learning experts to use the techniques for learning a composite machine learned model based on a learning dataset, reducing or eliminating the explorative trial and error process of manually tuning architectural parameters and hyperparameters, and reducing the computing resource requirements and model learning time for learning composite machine learned models. The techniques improve the operation of distributed learning computing systems by reducing or eliminating straggler effects and by reducing or minimizing synchronization latency when executing a composite model search algorithm for learning a composite machine learned model.
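The patent does not disclose the search algorithm itself in this abstract. As an illustration of what replacing manual trial and error with an automated search over architectural parameters looks like, here is a hedged sketch using plain random search; the search space, the parameter names, and the synthetic scoring function are all hypothetical stand-ins:

```python
import random

# hypothetical search space of architectural parameters for a composite model
SPACE = {
    "num_layers": [1, 2, 3, 4],
    "hidden_units": [16, 32, 64, 128],
    "combiner": ["sum", "concat"],
}

def sample_architecture(rng):
    # draw one candidate architecture from the search space
    return {name: rng.choice(options) for name, options in SPACE.items()}

def evaluate(arch):
    # stand-in for training the candidate composite model on the learning
    # dataset and measuring validation performance (synthetic score here)
    return -abs(arch["num_layers"] - 3) - abs(arch["hidden_units"] - 64) / 64

rng = random.Random(0)
candidates = [sample_architecture(rng) for _ in range(20)]
best = max(candidates, key=evaluate)
print(best["num_layers"] in SPACE["num_layers"])  # True
```

In a distributed setting, the candidate evaluations are independent and can be farmed out to workers, which is where the abstract's concerns about stragglers and synchronization latency arise.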
-
Publication Number: US20210133555A1
Publication Date: 2021-05-06
Application Number: US16669283
Filing Date: 2019-10-30
Applicant: Microsoft Technology Licensing, LLC
Inventor: Yuwei Qiu , Chengming Jiang , Huiji Gao , Bee-Chung Chen , Bo Long
Abstract: Computer-implemented techniques for learning composite machine learned models are disclosed. Benefits to implementors of the disclosed techniques include allowing non-machine learning experts to use the techniques for learning a composite machine learned model based on a learning dataset, reducing or eliminating the explorative trial and error process of manually tuning architectural parameters and hyperparameters, and reducing the computing resource requirements and model learning time for learning composite machine learned models. The techniques improve the operation of distributed learning computing systems by reducing or eliminating straggler effects and by reducing or minimizing synchronization latency when executing a composite model search algorithm for learning a composite machine learned model.
-
Publication Number: US20200380407A1
Publication Date: 2020-12-03
Application Number: US16430243
Filing Date: 2019-06-03
Applicant: Microsoft Technology Licensing, LLC
Inventor: Chengming Jiang , Kinjal Basu , Wei Lu , Souvik Ghosh , Mansi Gupta
Abstract: In an example embodiment, training data is obtained, the training data comprising values for a plurality of different features. Then a global machine learned model is trained using a first machine learning algorithm by feeding the training data into the first machine learning algorithm during a fixed effect training process. A non-linear first random effects machine learned model is trained by feeding a subset of the training data into a second machine learning algorithm, the subset of the training data being limited to training data corresponding to a particular value of one of the plurality of different features.
-
Publication Number: US20200226496A1
Publication Date: 2020-07-16
Application Number: US16246403
Filing Date: 2019-01-11
Applicant: Microsoft Technology Licensing, LLC
Inventor: Kinjal Basu , Chengming Jiang , Yunbo Ouyang , Josh Fleming
Abstract: Systems and methods determine optimized hyperparameter values for one or more machine-learning models. A sample training data set is obtained from a larger corpus of training data. Initial hyperparameter values are then randomly selected. Using the sample training data set and the randomly chosen hyperparameter values, an initial set of performance metric values is obtained. Maximized hyperparameter values are then determined from the initial set of hyperparameter values based on the corresponding performance metric values. The larger corpus of training data is then evaluated using the maximized hyperparameter values and the corresponding machine-learning model, which yields another corresponding set of performance metric values. The maximized hyperparameter values and their corresponding set of performance metric values are then merged with the prior set of hyperparameter values. The foregoing operations are performed iteratively until it is determined that the hyperparameter values are converging to a particular value.
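A much-simplified sketch of this evaluate/maximize/merge loop, with a synthetic performance function standing in for model training and a deterministic initial grid standing in for the random selection; the sample-versus-full-corpus distinction is omitted, and the optimum at 0.3 is an invented example value:

```python
# stand-in for training the model and measuring a performance metric;
# the (hypothetical) optimum is at lam = 0.3
def performance(lam):
    return -(lam - 0.3) ** 2

# initial hyperparameter values and their performance metric values
history = {lam: performance(lam) for lam in [0.0, 0.25, 0.5, 0.75, 1.0]}

step = 0.25
while step > 1e-4:
    # determine the best ("maximized") hyperparameter value seen so far
    best = max(history, key=history.get)
    # evaluate new candidates near it and merge them with the prior set
    step /= 2
    for lam in (best - step, best + step):
        history[lam] = performance(lam)

# the loop halts once the candidate spacing has shrunk, i.e. the
# hyperparameter values are converging to a particular value
best = max(history, key=history.get)
print(round(best, 3))  # 0.3
```

The key structural features from the abstract survive even in this toy form: every evaluated value is merged into a growing history, the next candidates are chosen around the best value so far, and iteration stops on a convergence criterion.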