-
Publication No.: US20190197422A1
Publication Date: 2019-06-27
Application No.: US15879302
Filing Date: 2018-01-24
Inventors: Bee-Chung Chen, Deepak Agarwal, Alex Shelkovnykov, Josh Fleming, Yiming Ma
CPC Classifications: G06N20/00, G06F16/903, G06K9/6256, G06K9/6286, G06K9/6287, G06N7/005, G06Q10/063112, G06Q10/1053, G06Q50/01
Abstract: In an example, predictions/recommendations using machine learned models are made even more accurate by using three models instead of a single Generalized Linear Mixed (GLMix) model. Specifically, rather than having a single GLMix model with different coefficients for users and items, three separate models are used and then combined. Each of these models has different granularities and dimensions. A global model models the similarity between user attributes (e.g., from the member profile or activity history) and item attributes. A per-user model models user attributes and activity history. A per-item model models item attributes and activity history. Such a model may be termed a Generalized Additive Mixed Effect (GAME) model.
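The additive combination described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the sigmoid link, feature layouts, and coefficient values are all assumptions made for the example.

```python
# Sketch of a Generalized Additive Mixed Effect (GAME) score: the final
# prediction is the sum of a global (fixed-effect) score, a per-user
# (random-effect) score, and a per-item (random-effect) score.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def game_score(x_global, x_user, x_item, w_global, w_user, w_item):
    """Additively combine the three sub-model scores for one (user, item) pair."""
    s = x_global @ w_global + x_user @ w_user + x_item @ w_item
    return sigmoid(s)  # e.g. probability that the user acts on the item

# Toy example with random features and coefficients.
rng = np.random.default_rng(0)
x_g, x_u, x_i = rng.normal(size=5), rng.normal(size=3), rng.normal(size=3)
w_g, w_u, w_i = rng.normal(size=5), rng.normal(size=3), rng.normal(size=3)
p = game_score(x_g, x_u, x_i, w_g, w_u, w_i)
```

Because the three sub-models contribute through a plain sum, each can be trained and stored at its own granularity (globally, per user, per item) before being combined.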
-
Publication No.: US20190197013A1
Publication Date: 2019-06-27
Application No.: US15879316
Filing Date: 2018-01-24
Inventors: Bee-Chung Chen, Deepak Agarwal, Alex Shelkovnykov, Josh Fleming, Yiming Ma
CPC Classifications: G06N20/00, G06F16/903, G06K9/6256, G06K9/6286, G06K9/6287, G06N7/005, G06Q10/063112, G06Q10/1053, G06Q50/01
Abstract: Iterations of a machine learned model training process are performed until convergence occurs. A fixed effects machine learned model is trained using a first machine learning algorithm. Residuals of the training of the fixed effects machine learned model are determined by comparing results of the trained fixed effects machine learned model to a first set of target results. A first random effects machine learned model is trained using a second machine learning algorithm and the residuals of the training of the fixed effects machine learned model. Residuals of the training of the first random effects machine learned model are determined by comparing results of the trained first random effects machine learned model to a second set of target results. In each subsequent iteration, the training of the fixed effects machine learned model uses the residuals of the training of the last machine learned model trained in the previous iteration.
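The iterative residual-passing loop above can be sketched as follows. This is a simplified stand-in, assuming least-squares fits play the role of the two machine learning algorithms and a loss-change threshold plays the role of the convergence test; none of these choices come from the patent itself.

```python
# Sketch of residual-driven alternating training: fit the fixed-effects
# model, pass its residuals to the random-effects model, then feed the
# last model's residuals back into the next iteration until convergence.
import numpy as np

def fit_linear(X, y):
    """Least-squares fit, standing in for either training algorithm."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def train_game(X_fixed, X_random, y, max_iters=20, tol=1e-8):
    target = y.copy()
    w_f = w_r = None
    prev_loss = np.inf
    for _ in range(max_iters):
        # Fixed-effects model trains on the current target.
        w_f = fit_linear(X_fixed, target)
        resid_f = y - X_fixed @ w_f            # residuals vs. first targets
        # Random-effects model trains on the fixed-effects residuals.
        w_r = fit_linear(X_random, resid_f)
        resid_r = resid_f - X_random @ w_r     # residuals vs. second targets
        loss = float(resid_r @ resid_r)
        if abs(prev_loss - loss) < tol:        # convergence check
            break
        prev_loss = loss
        # Next iteration: fixed-effects target is y minus the last model's fit.
        target = y - X_random @ w_r
    return w_f, w_r

# Toy data whose signal splits across the two feature blocks.
rng = np.random.default_rng(1)
X_f = rng.normal(size=(60, 4))
X_r = rng.normal(size=(60, 3))
y = X_f @ np.array([1.0, 2.0, 3.0, 4.0]) + X_r @ np.array([0.5, -1.0, 2.0])
w_f, w_r = train_game(X_f, X_r, y)
mse = float(np.mean((y - (X_f @ w_f + X_r @ w_r)) ** 2))
```

This is the classic backfitting pattern for additive models: each sub-model repeatedly absorbs whatever signal the others have not yet explained.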
-
Publication No.: US11106982B2
Publication Date: 2021-08-31
Application No.: US16109411
Filing Date: 2018-08-22
Inventors: Yiming Ma, Alex Shelkovnykov, Josh Fleming, Bee-Chung Chen, Bo Long
Abstract: In an example embodiment, a warm-start training solution is used to dramatically reduce the computational resources needed when retraining a generalized additive mixed-effect (GAME) model. The problem of retraining time is particularly acute for GAME models, since these models take much longer to train as the data grows. In the past, the strategy for reducing computational resources during retraining was to use less training data, but this degrades model quality, especially for GAME models, which rely on fine-grained sub-models at, for example, member or item levels. The present solution addresses these computational-resource issues without sacrificing GAME model accuracy.
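The warm-start idea can be illustrated with a small sketch, assuming a plain gradient-descent logistic model rather than the patent's GAME training pipeline: the previous model's coefficients seed the optimizer, so retraining resumes near the optimum instead of starting from zeros.

```python
# Sketch of warm-start retraining: seed the optimizer with yesterday's
# coefficients instead of re-initializing from scratch.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(X, y, w):
    p = np.clip(sigmoid(X @ w), 1e-12, 1 - 1e-12)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def train_logistic(X, y, w_init=None, lr=0.1, max_iters=500, tol=1e-5):
    """Gradient descent on logistic loss; returns (weights, iterations used)."""
    w = np.zeros(X.shape[1]) if w_init is None else w_init.copy()
    iters = 0
    for iters in range(1, max_iters + 1):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        if np.linalg.norm(grad) < tol:   # already converged
            break
        w -= lr * grad
    return w, iters

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = (rng.uniform(size=200) < sigmoid(X @ w_true)).astype(float)

# Cold start: "yesterday's" model trained from scratch.
w_old, iters_cold = train_logistic(X, y)
# Warm start: retraining resumes from the previous coefficients.
w_new, iters_warm = train_logistic(X, y, w_init=w_old)
```

Since the warm-started run continues the same descent sequence from where the cold run stopped, it can never do worse, and in the common case where the data has only drifted slightly it converges in far fewer iterations.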
-
Publication No.: US20200065678A1
Publication Date: 2020-02-27
Application No.: US16109411
Filing Date: 2018-08-22
Inventors: Yiming Ma, Alex Shelkovnykov, Josh Fleming, Bee-Chung Chen, Bo Long
Abstract: In an example embodiment, a warm-start training solution is used to dramatically reduce the computational resources needed when retraining a generalized additive mixed-effect (GAME) model. The problem of retraining time is particularly acute for GAME models, since these models take much longer to train as the data grows. In the past, the strategy for reducing computational resources during retraining was to use less training data, but this degrades model quality, especially for GAME models, which rely on fine-grained sub-models at, for example, member or item levels. The present solution addresses these computational-resource issues without sacrificing GAME model accuracy.