TRAINING ENCODER MODEL AND/OR USING TRAINED ENCODER MODEL TO DETERMINE RESPONSIVE ACTION(S) FOR NATURAL LANGUAGE INPUT

    Publication No.: US20200104746A1

    Publication Date: 2020-04-02

    Application No.: US16611725

    Filing Date: 2018-12-14

    Applicant: Google LLC

    Abstract: Systems, methods, and computer readable media related to: training an encoder model that can be utilized to determine semantic similarity of a natural language textual string to each of one or more additional natural language textual strings (directly and/or indirectly); and/or using a trained encoder model to determine one or more responsive actions to perform in response to a natural language query. The encoder model is a machine learning model, such as a neural network model. In some implementations of training the encoder model, the encoder model is trained as part of a larger network architecture trained based on one or more tasks that are distinct from a “semantic textual similarity” task for which the encoder model can be used.
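The semantic-textual-similarity use described in this abstract can be sketched as follows. The toy character-count encoder and all names below are hypothetical stand-ins for the trained neural encoder model; the essential pattern is that each string is mapped to a fixed-size embedding, and similarity is then a simple vector comparison (here, cosine similarity).

```python
import numpy as np

# Hypothetical toy encoder: a character-count feature vector pushed through a
# fixed random projection. In the patent this would be a trained neural
# network encoder; only the embed-then-compare pattern is illustrated here.
rng = np.random.default_rng(0)
PROJECTION = rng.normal(size=(256, 32))  # 256 char codes -> 32-dim embedding

def encode(text: str) -> np.ndarray:
    counts = np.zeros(256)
    for ch in text.lower():
        counts[ord(ch) % 256] += 1
    embedding = counts @ PROJECTION
    return embedding / (np.linalg.norm(embedding) + 1e-9)  # unit-normalize

def semantic_similarity(a: str, b: str) -> float:
    # Cosine similarity of the two embeddings; higher means more similar.
    return float(encode(a) @ encode(b))

# Score a query against candidate strings without re-running a joint model
# per pair: each string is encoded once, then compared via dot products.
query = "turn on the lights"
candidates = ["switch the lights on", "play some music"]
scores = {c: semantic_similarity(query, c) for c in candidates}
```

Because each string is encoded independently, candidate embeddings can be precomputed and cached, which is what makes this family of models practical for matching a query against many stored strings.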

    Training encoder model and/or using trained encoder model to determine responsive action(s) for natural language input

    Publication No.: US10783456B2

    Publication Date: 2020-09-22

    Application No.: US16611725

    Filing Date: 2018-12-14

    Applicant: Google LLC

    Abstract: Systems, methods, and computer readable media related to: training an encoder model that can be utilized to determine semantic similarity of a natural language textual string to each of one or more additional natural language textual strings (directly and/or indirectly); and/or using a trained encoder model to determine one or more responsive actions to perform in response to a natural language query. The encoder model is a machine learning model, such as a neural network model. In some implementations of training the encoder model, the encoder model is trained as part of a larger network architecture trained based on one or more tasks that are distinct from a “semantic textual similarity” task for which the encoder model can be used.

    COOPERATIVELY TRAINING AND/OR USING SEPARATE INPUT AND SUBSEQUENT CONTENT NEURAL NETWORKS FOR INFORMATION RETRIEVAL

    Publication No.: US20220036197A1

    Publication Date: 2022-02-03

    Application No.: US17502343

    Filing Date: 2021-10-15

    Applicant: Google LLC

    Abstract: Systems, methods, and computer readable media related to information retrieval. Some implementations are related to training and/or using a relevance model for information retrieval. The relevance model includes an input neural network model and a subsequent content neural network model. The input neural network model and the subsequent content neural network model can be separate, but trained and/or used cooperatively. The input neural network model and the subsequent content neural network model can be “separate” in that separate inputs are applied to the neural network models, and each of the neural network models is used to generate its own feature vector based on its applied input. A comparison of the feature vectors generated based on the separate network models can then be performed, where the comparison indicates relevance of the input applied to the input neural network model to the separate input applied to the subsequent content neural network model.
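The separate-but-cooperative two-network arrangement in this abstract can be sketched as a minimal "two tower" scoring function. Everything below is a hypothetical reduction: each network is collapsed to a single linear layer with fixed random weights, standing in for the trained input network and subsequent-content network; only the structure (separate inputs, separate feature vectors, then a comparison) follows the abstract.

```python
import numpy as np

# Two separate "towers", here reduced to one linear layer each. In the
# described relevance model these would be cooperatively trained neural
# networks; the weights below are hypothetical placeholders.
rng = np.random.default_rng(1)
DIM_IN, DIM_FEAT = 64, 16
W_input = rng.normal(size=(DIM_IN, DIM_FEAT))       # input network
W_subsequent = rng.normal(size=(DIM_IN, DIM_FEAT))  # subsequent-content network

def featurize(x: np.ndarray, weights: np.ndarray) -> np.ndarray:
    # Each network produces its own feature vector from its own input.
    v = x @ weights
    return v / (np.linalg.norm(v) + 1e-9)  # unit-normalize

def relevance(input_vec: np.ndarray, content_vec: np.ndarray) -> float:
    # The two feature vectors are compared (cosine similarity here); the
    # comparison indicates relevance of the input to the subsequent content.
    return float(featurize(input_vec, W_input) @ featurize(content_vec, W_subsequent))

query_features = rng.normal(size=DIM_IN)
doc_features = rng.normal(size=DIM_IN)
score = relevance(query_features, doc_features)
```

The design point the abstract emphasizes is that the towers never see each other's input: document-side feature vectors can be computed offline and indexed, and only the cheap vector comparison happens at query time.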

    TRAINING ENCODER MODEL AND/OR USING TRAINED ENCODER MODEL TO DETERMINE RESPONSIVE ACTION(S) FOR NATURAL LANGUAGE INPUT

    Publication No.: US20200380418A1

    Publication Date: 2020-12-03

    Application No.: US16995149

    Filing Date: 2020-08-17

    Applicant: Google LLC

    Abstract: Systems, methods, and computer readable media related to: training an encoder model that can be utilized to determine semantic similarity of a natural language textual string to each of one or more additional natural language textual strings (directly and/or indirectly); and/or using a trained encoder model to determine one or more responsive actions to perform in response to a natural language query. The encoder model is a machine learning model, such as a neural network model. In some implementations of training the encoder model, the encoder model is trained as part of a larger network architecture trained based on one or more tasks that are distinct from a “semantic textual similarity” task for which the encoder model can be used.