Human-in-the-Loop Interactive Model Training

    Publication No.: US20210358579A1

    Publication Date: 2021-11-18

    Application No.: US16618656

    Filing Date: 2017-09-29

    Applicant: Google LLC

    IPC Classification: G16H10/60 G06N20/00

    Abstract: A method is described for training a predictive model which increases the interpretability and trustworthiness of the model for end users. The model is trained from data having a multitude of features. Each feature is associated with a real value and a time component. Many predicates (atomic elements for training the model) are defined as binary functions operating on the features, typically on time sequences of the features or logical combinations thereof. The predicates can be limited to those functions which are human-understandable or which encode expert knowledge relative to a prediction task of the model. A boosting model is trained iteratively with input from an operator, or human-in-the-loop. The human-in-the-loop is provided with tools to inspect the model as it is iteratively built and to remove one or more of the predicates from the model, e.g., if a predicate does not have indicia of trustworthiness, is not causally related to a prediction of the model, or is not understandable. The iterative process is repeated several times to ultimately generate a final boosting model. The final model is then evaluated, e.g., for accuracy, complexity, trustworthiness, and post-hoc explainability.
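
    A minimal sketch of the kind of loop the abstract describes, under assumptions: the names (Predicate, train_with_reviewer, review_fn), the correlation-based predicate selection, and the logistic residual are illustrative choices, not the patented method itself. It only shows how a reviewer callback can remove predicates between boosting rounds.

        # Sketch: human-in-the-loop boosting over named binary predicates.
        # All names and the selection rule are illustrative assumptions.
        import numpy as np

        class Predicate:
            """A human-readable binary function over one example's features."""
            def __init__(self, name, fn):
                self.name, self.fn = name, fn
            def __call__(self, example):
                return 1.0 if self.fn(example) else 0.0

        def train_with_reviewer(examples, labels, predicates, review_fn, rounds=10, lr=0.1):
            labels = np.asarray(labels, dtype=float)
            active = {p.name: p for p in predicates}
            outputs = {p.name: np.array([p(x) for x in examples]) for p in predicates}
            weights = {}                                   # predicate name -> boosting weight
            for _ in range(rounds):
                score = np.zeros(len(examples))
                for name, w in weights.items():
                    score += w * outputs[name]
                residual = labels - 1.0 / (1.0 + np.exp(-score))
                # Greedily pick the active predicate most correlated with the residual.
                best = max(active, key=lambda n: abs(residual @ outputs[n]), default=None)
                if best is None:
                    break
                weights[best] = weights.get(best, 0.0) + lr * np.sign(residual @ outputs[best])
                # Human-in-the-loop step: the reviewer inspects the weighted predicates
                # and names any that should be dropped before the next round.
                for name in review_fn(dict(weights)):
                    weights.pop(name, None)
                    active.pop(name, None)
            return weights

    In an interactive setting, review_fn would present the weighted predicates to the operator for inspection; in a test it can simply return a fixed list of predicate names to remove.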

    Content Keyword Identification

    Invention Application

    Publication No.: US20210125222A1

    Publication Date: 2021-04-29

    Application No.: US17140721

    Filing Date: 2021-01-04

    Applicant: Google LLC

    Abstract: In general, in one aspect, a method includes compiling user interaction statistics for a set of content items displayed in association with a first target media document having a non-textual portion, at least some of the content items being associated with one or more keywords; based on the interaction statistics, associating the first target media document with at least some of the keywords associated with the content items; and, based on a common attribute of the first target media document and a second target media document having a non-textual portion, associating the second target media document with at least some of the keywords assigned to the first target media document. Other aspects include corresponding systems, apparatus, and computer programs stored on computer storage devices.
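
    A small sketch of the two steps in the abstract, under assumed data shapes: keywords are first assigned to a media document from click statistics on the content items displayed with it, then propagated to other documents that share a common attribute. Function names, the click threshold, and the attribute encoding are illustrative, not from the patent.

        # Sketch: derive keywords from interaction statistics, then propagate them
        # across documents that share a common attribute. Names are assumptions.
        from collections import Counter, defaultdict

        def keywords_from_interactions(interactions, min_clicks=5):
            """interactions: (keyword, clicks) pairs for content items displayed with
            one target media document; keep keywords with enough total clicks."""
            totals = Counter()
            for keyword, clicks in interactions:
                totals[keyword] += clicks
            return {k for k, c in totals.items() if c >= min_clicks}

        def propagate_by_attribute(doc_keywords, doc_attrs):
            """Extend each document's keyword set with the keywords of documents that
            share at least one attribute value (e.g., the same hosting page)."""
            by_attr = defaultdict(set)
            for doc, attrs in doc_attrs.items():
                for attr in attrs:
                    by_attr[attr] |= doc_keywords.get(doc, set())
            expanded = {}
            for doc, attrs in doc_attrs.items():
                keywords = set(doc_keywords.get(doc, set()))
                for attr in attrs:
                    keywords |= by_attr[attr]
                expanded[doc] = keywords
            return expanded

        # Example: video_b shares the "channel:cooking" attribute with video_a and
        # therefore inherits video_a's keywords.
        kws = {"video_a": keywords_from_interactions([("pasta", 7), ("recipe", 9), ("cars", 1)])}
        attrs = {"video_a": ["channel:cooking"], "video_b": ["channel:cooking"]}
        print(propagate_by_attribute(kws, attrs))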

    Using embedding functions with a deep network

    Publication No.: US10679124B1

    Publication Date: 2020-06-09

    Application No.: US15368460

    Filing Date: 2016-12-02

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using embedding functions with a deep network. One of the methods includes receiving an input comprising a plurality of features, wherein each of the features is of a different feature type; processing each of the features using a respective embedding function to generate one or more numeric values, wherein each of the embedding functions operates independently of each other embedding function, and wherein each of the embedding functions is used for features of a respective feature type; processing the numeric values using a deep network to generate a first alternative representation of the input, wherein the deep network is a machine learning model composed of a plurality of levels of non-linear operations; and processing the first alternative representation of the input using a logistic regression classifier to predict a label for the input.
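
    The sketch below illustrates the described architecture under assumed shapes: one independent embedding function per feature type, a stack of non-linear layers producing the alternative representation, and a logistic-regression output on top. Vocabulary sizes, dimensions, and names are made up, and only the forward pass is shown.

        # Sketch: per-feature-type embedding functions feeding a deep network and a
        # logistic-regression classifier. All sizes and names are assumptions.
        import numpy as np

        rng = np.random.default_rng(0)

        def make_embedding(vocab_size, dim):
            table = rng.normal(scale=0.1, size=(vocab_size, dim))
            return lambda token_id: table[token_id]       # one embedding function per feature type

        def make_relu_layer(n_in, n_out):
            w = rng.normal(scale=0.1, size=(n_in, n_out))
            b = np.zeros(n_out)
            return lambda x: np.maximum(0.0, x @ w + b)   # one level of non-linear operations

        # Each embedding function operates independently and handles one feature type.
        embed_word = make_embedding(vocab_size=1000, dim=16)
        embed_country = make_embedding(vocab_size=50, dim=4)

        # Deep network producing the first alternative representation of the input.
        deep_network = [make_relu_layer(20, 32), make_relu_layer(32, 32)]

        # Logistic regression classifier on the alternative representation.
        w_out = rng.normal(scale=0.1, size=32)

        def predict(word_id, country_id):
            x = np.concatenate([embed_word(word_id), embed_country(country_id)])
            for layer in deep_network:
                x = layer(x)
            return 1.0 / (1.0 + np.exp(-(x @ w_out)))     # predicted probability of the label

        print(predict(42, 7))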

    Computing numeric representations of words in a high-dimensional space

    Publication No.: US10922488B1

    Publication Date: 2021-02-16

    Application No.: US16363460

    Filing Date: 2019-03-25

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for computing numeric representations of words. One of the methods includes obtaining a set of training data, wherein the set of training data comprises sequences of words; training a classifier and an embedding function on the set of training data, wherein training the embedding function comprises obtaining trained values of the embedding function parameters; processing each word in the vocabulary using the embedding function in accordance with the trained values of the embedding function parameters to generate a respective numeric representation of each word in the vocabulary in the high-dimensional space; and associating each word in the vocabulary with the respective numeric representation of the word in the high-dimensional space.
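
    As a toy illustration of training an embedding function jointly with a classifier on word sequences and then reading out each word's numeric representation, here is a next-word-prediction sketch; the corpus, dimensionality, and hyper-parameters are invented, and the patent does not prescribe this particular training objective.

        # Sketch: jointly train an embedding function and a softmax classifier on
        # word sequences, then read out per-word numeric representations.
        import numpy as np

        corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
        vocab = sorted({w for s in corpus for w in s})
        idx = {w: i for i, w in enumerate(vocab)}
        dim, lr, rng = 8, 0.05, np.random.default_rng(0)

        emb = rng.normal(scale=0.1, size=(len(vocab), dim))   # embedding function parameters
        cls = rng.normal(scale=0.1, size=(len(vocab), dim))   # classifier parameters

        def softmax(z):
            e = np.exp(z - z.max())
            return e / e.sum()

        # Train: predict the next word in each sequence from the current word's embedding.
        for _ in range(200):
            for sentence in corpus:
                for current, nxt in zip(sentence, sentence[1:]):
                    i, j = idx[current], idx[nxt]
                    v = emb[i].copy()
                    probs = softmax(cls @ v)
                    grad = probs.copy()
                    grad[j] -= 1.0                             # cross-entropy gradient w.r.t. logits
                    emb[i] -= lr * (cls.T @ grad)
                    cls -= lr * np.outer(grad, v)

        # Each word's numeric representation is its trained row of `emb`.
        representations = {w: emb[idx[w]] for w in vocab}
        print(representations["cat"][:4])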

    Training a model using parameter server shards

    Publication No.: US10733535B1

    Publication Date: 2020-08-04

    Application No.: US15665236

    Filing Date: 2017-07-31

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a model using parameter server shards. One of the methods includes receiving, at a parameter server shard configured to maintain values of a disjoint partition of the parameters of the model, a succession of respective requests for parameter values from each of a plurality of replicas of the model; in response to each request, downloading a current value of each requested parameter to the replica from which the request was received; receiving a succession of uploads, each upload including respective delta values for each of the parameters in the partition maintained by the shard; and repeatedly updating values of the parameters in the partition maintained by the parameter server shard based on the uploads of delta values to generate current parameter values.
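
    The following single-process sketch mirrors the shard's role described above: it maintains a disjoint partition of parameters, serves download requests from model replicas, and applies uploaded delta values. Class and method names are illustrative assumptions; a real deployment would use asynchronous RPCs across many shards and replicas.

        # Sketch: a parameter server shard holding one disjoint parameter partition.
        import numpy as np

        class ParameterServerShard:
            """Maintains values for a disjoint partition of the model's parameters."""
            def __init__(self, partition):
                self.params = {name: np.asarray(value, dtype=float)
                               for name, value in partition.items()}

            def download(self, names):
                """Serve the current values of the requested parameters to a replica."""
                return {name: self.params[name].copy() for name in names}

            def upload(self, deltas, learning_rate=1.0):
                """Apply delta values uploaded by a replica to the maintained partition."""
                for name, delta in deltas.items():
                    self.params[name] += learning_rate * np.asarray(delta)

        # Two steps of the request/upload cycle from one model replica.
        shard = ParameterServerShard({"layer1/w": [0.0, 0.0], "layer1/b": [0.0]})
        current = shard.download(["layer1/w", "layer1/b"])   # replica fetches current values
        shard.upload({"layer1/w": [0.01, -0.02]})            # replica pushes its delta values
        print(shard.download(["layer1/w"]))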