-
Publication number: US20190197422A1
Publication date: 2019-06-27
Application number: US15879302
Application date: 2018-01-24
Applicant: Microsoft Technology Licensing, LLC
Inventor: Bee-Chung Chen , Deepak Agarwal , Alex Shelkovnykov , Josh Fleming , Yiming Ma
CPC classification number: G06N20/00 , G06F16/903 , G06K9/6256 , G06K9/6286 , G06K9/6287 , G06N7/005 , G06Q10/063112 , G06Q10/1053 , G06Q50/01
Abstract: In an example, predictions/recommendations made by machine learned models are made more accurate by using three models instead of a single Generalized Linear Mixed (GLMix) model. Specifically, rather than having a single GLMix model with different coefficients for users and items, three separate models are used and then combined, each with a different granularity and dimensionality. A global model captures the similarity between user attributes (e.g., from the member profile or activity history) and item attributes. A per-user model captures user attributes and activity history. A per-item model captures item attributes and activity history. Such a combined model may be termed a Generalized Additive Mixed Effect (GAME) model.
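The additive combination the abstract describes can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the class name, the dict-keyed per-user/per-item coefficients, and the zero-vector cold-start fallback are all assumptions introduced here for clarity.

```python
import numpy as np

class GameModel:
    """Sketch of a Generalized Additive Mixed Effect (GAME) scorer.

    The final score is the sum of three sub-model scores:
      - a global (fixed-effect) model over joint user/item features,
      - a per-user (random-effect) model keyed by user id,
      - a per-item (random-effect) model keyed by item id.
    """

    def __init__(self, w_global, user_coeffs, item_coeffs):
        self.w_global = w_global        # shared coefficient vector
        self.user_coeffs = user_coeffs  # {user_id: coefficient vector}
        self.item_coeffs = item_coeffs  # {item_id: coefficient vector}

    def score(self, user_id, item_id, x_global, x_user, x_item):
        # Fall back to zero coefficients for unseen (cold-start)
        # users or items -- an assumption of this sketch.
        w_u = self.user_coeffs.get(user_id, np.zeros_like(x_user))
        w_i = self.item_coeffs.get(item_id, np.zeros_like(x_item))
        return float(x_global @ self.w_global) \
            + float(x_user @ w_u) + float(x_item @ w_i)
```

Because the three sub-model scores simply add, each sub-model can be trained and stored at its own granularity while scoring stays a cheap dot-product sum.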
-
Publication number: US20190197013A1
Publication date: 2019-06-27
Application number: US15879316
Application date: 2018-01-24
Applicant: Microsoft Technology Licensing, LLC
Inventor: Bee-Chung Chen , Deepak Agarwal , Alex Shelkovnykov , Josh Fleming , Yiming Ma
CPC classification number: G06N20/00 , G06F16/903 , G06K9/6256 , G06K9/6286 , G06K9/6287 , G06N7/005 , G06Q10/063112 , G06Q10/1053 , G06Q50/01
Abstract: Iterations of a machine learned model training process are performed until convergence occurs. A fixed effects machine learned model is trained using a first machine learning algorithm. Residuals of that training are determined by comparing the results of the trained fixed effects machine learned model to a first set of target results. A first random effects machine learned model is then trained using a second machine learning algorithm and those residuals. Residuals of the training of the first random effects machine learned model are determined by comparing its results to a second set of target results. In each subsequent iteration, the training of the fixed effects machine learned model uses the residuals of the training of the last machine learned model trained in the previous iteration.
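The residual-passing loop described above resembles classical backfitting. The sketch below assumes plain least-squares for both the fixed-effect and random-effect learners (the abstract leaves the two learning algorithms abstract), and a single random-effect model rather than many.

```python
import numpy as np

def backfit(X_fixed, X_random, y, n_iters=50, tol=1e-8):
    """Alternate training of a fixed-effect and a random-effect model,
    each fitting the residuals left by the other (a sketch, assuming
    least-squares learners)."""
    w_fixed = np.zeros(X_fixed.shape[1])
    w_random = np.zeros(X_random.shape[1])
    for _ in range(n_iters):
        # Fixed-effect model trains on what the random-effect model
        # has not yet explained.
        target_fixed = y - X_random @ w_random
        w_fixed_new = np.linalg.lstsq(X_fixed, target_fixed, rcond=None)[0]
        # Residuals of the fixed-effect fit become the random-effect
        # model's training targets.
        residual = y - X_fixed @ w_fixed_new
        w_random_new = np.linalg.lstsq(X_random, residual, rcond=None)[0]
        converged = (np.allclose(w_fixed, w_fixed_new, atol=tol)
                     and np.allclose(w_random, w_random_new, atol=tol))
        w_fixed, w_random = w_fixed_new, w_random_new
        if converged:
            break
    return w_fixed, w_random
```

Each pass shrinks the unexplained signal, so the combined prediction converges even when the two feature sets are correlated.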
-
Publication number: US20200226496A1
Publication date: 2020-07-16
Application number: US16246403
Application date: 2019-01-11
Applicant: Microsoft Technology Licensing, LLC
Inventor: Kinjal Basu , Chengming Jiang , Yunbo Ouyang , Josh Fleming
Abstract: Systems and methods determine optimized hyperparameter values for one or more machine-learning models. A sample training data set is drawn from a larger corpus of training data. Initial hyperparameter values are then randomly selected. Using the sample training data set and the randomly chosen hyperparameter values, an initial set of performance metric values is obtained. Maximized hyperparameter values are then determined from the initial set of hyperparameter values based on their corresponding performance metric values. The larger corpus of training data is then evaluated using the maximized hyperparameter values and the corresponding machine-learning model, which yields another set of performance metric values. The maximized hyperparameter values and their corresponding performance metric values are then merged with the prior set of hyperparameter values. These operations are performed iteratively until the hyperparameter values converge.
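The loop structure (cheap sample-set evaluations, a full-corpus evaluation of the current best, merging results back into the history, and iterating to convergence) can be sketched as below. The `evaluate` and `sample_space` hooks are hypothetical caller-supplied functions, the single scalar hyperparameter and the fresh-random-candidate exploration step are simplifications of this sketch, not details from the patent.

```python
import random

def tune(evaluate, sample_space, n_initial=8, n_rounds=5, tol=1e-3, seed=0):
    """Iterative hyperparameter tuning sketch.

    evaluate(hp, full=...) returns a performance metric, computed on a
    small sample (full=False) or the larger corpus (full=True).
    sample_space(rng) draws one random hyperparameter value.
    """
    rng = random.Random(seed)
    # Randomly chosen initial values, scored on the sample data set.
    history = [(hp, evaluate(hp, full=False))
               for hp in (sample_space(rng) for _ in range(n_initial))]
    best_hp = max(history, key=lambda p: p[1])[0]
    for _ in range(n_rounds):
        # Re-score the current best on the larger corpus and merge the
        # result back into the evaluation history.
        history.append((best_hp, evaluate(best_hp, full=True)))
        # Propose one fresh candidate per round (a simple stand-in for
        # the patent's maximization step).
        cand = sample_space(rng)
        history.append((cand, evaluate(cand, full=False)))
        new_best = max(history, key=lambda p: p[1])[0]
        if abs(new_best - best_hp) < tol:   # convergence check
            return new_best
        best_hp = new_best
    return best_hp
```

Most metric evaluations happen on the cheap sample set; the expensive full corpus is only touched once per round, for the current best candidate.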
-
Publication number: US11106982B2
Publication date: 2021-08-31
Application number: US16109411
Application date: 2018-08-22
Applicant: Microsoft Technology Licensing, LLC
Inventor: Yiming Ma , Alex Shelkovnykov , Josh Fleming , Bee-Chung Chen , Bo Long
Abstract: In an example embodiment, a warm-start training solution dramatically reduces the computational resources needed to retrain a generalized additive mixed-effect (GAME) model. Retraining time is particularly problematic for GAME models, since these models take much longer to train as the data grows. In the past, the strategy for reducing computational resources during retraining was to use less training data, but this degrades model quality, especially for GAME models, which rely on fine-grained sub-models at, for example, the member or item level. The present solution addresses the computational-resource issue without sacrificing GAME model accuracy.
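The core idea, initializing retraining from the previously trained coefficients instead of from scratch, can be illustrated with a toy gradient-descent learner. This is a simplified stand-in for warm-starting a GAME model's sub-models; the function name and the plain linear-regression setting are assumptions of this sketch.

```python
import numpy as np

def train_linreg(X, y, w0=None, lr=0.1, n_steps=100):
    """Gradient-descent linear regression. Passing `w0` warm-starts
    training from a previously learned coefficient vector; omitting it
    cold-starts from zeros."""
    w = np.zeros(X.shape[1]) if w0 is None else np.array(w0, dtype=float)
    for _ in range(n_steps):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w
```

When the new training data is only a small shift from the old (the common retraining case), the warm start begins near the new optimum, so far fewer steps reach a given loss than a cold start needs.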
-
Publication number: US20200065678A1
Publication date: 2020-02-27
Application number: US16109411
Application date: 2018-08-22
Applicant: Microsoft Technology Licensing, LLC
Inventor: Yiming Ma , Alex Shelkovnykov , Josh Fleming , Bee-Chung Chen , Bo Long
Abstract: In an example embodiment, a warm-start training solution dramatically reduces the computational resources needed to retrain a generalized additive mixed-effect (GAME) model. Retraining time is particularly problematic for GAME models, since these models take much longer to train as the data grows. In the past, the strategy for reducing computational resources during retraining was to use less training data, but this degrades model quality, especially for GAME models, which rely on fine-grained sub-models at, for example, the member or item level. The present solution addresses the computational-resource issue without sacrificing GAME model accuracy.
-