Systems and methods for unifying question answering and text classification via span extraction

    Publication Number: US11657233B2

    Publication Date: 2023-05-23

    Application Number: US17673709

    Application Date: 2022-02-16

    CPC classification number: G06F40/30 G06F40/284 G06F16/3329 G06N3/08

    Abstract: Systems and methods for unifying question answering and text classification via span extraction include a preprocessor for preparing a source text and an auxiliary text based on a task type of a natural language processing (NLP) task, an encoder for receiving the source text and the auxiliary text from the preprocessor and generating an encoded representation of a combination of the source text and the auxiliary text, and a span-extractive decoder for receiving the encoded representation and identifying a span of text within the source text that is a result of the NLP task. The task type is one of entailment, classification, or regression. In some embodiments, the source text includes one or more of text received as input when the task type is entailment, a list of classifications when the task type is entailment or classification, or a list of similarity options when the task type is regression.
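
    Every task in this family reduces to pointing at a start token and an end token inside the source text. Below is a minimal PyTorch sketch of a span-extractive decoder, assuming an encoder output of shape (batch, seq_len, hidden); the single linear head and the sizes are illustrative assumptions, not details taken from the patent.

```python
import torch
import torch.nn as nn

class SpanExtractiveDecoder(nn.Module):
    """Predicts start/end token positions of an answer span in the source text."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # One linear head produces two logits (start, end) per token.
        self.span_head = nn.Linear(hidden_size, 2)

    def forward(self, encoded: torch.Tensor):
        # encoded: (batch, seq_len, hidden_size) from the encoder.
        logits = self.span_head(encoded)                 # (batch, seq_len, 2)
        start_logits, end_logits = logits.unbind(dim=-1)
        # A real decoder would also constrain end >= start.
        return start_logits.argmax(dim=-1), end_logits.argmax(dim=-1)

# Toy usage: pick a span over an 8-token encoded source text.
decoder = SpanExtractiveDecoder(hidden_size=16)
start, end = decoder(torch.randn(1, 8, 16))
```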

    Multitask learning as question answering

    Publication Number: US11600194B2

    Publication Date: 2023-03-07

    Application Number: US16006691

    Application Date: 2018-06-12

    Abstract: Approaches for natural language processing include a multi-layer encoder for encoding words from a context and words from a question in parallel, a multi-layer decoder for decoding the encoded context and the encoded question, a pointer generator for generating distributions over the words from the context, the words from the question, and words in a vocabulary based on an output from the decoder, and a switch. The switch generates a weighting of the distributions over the words from the context, the words from the question, and the words in the vocabulary, generates a composite distribution based on the weighting of the distribution over the words from the context, the distribution over the words from the question, and the distribution over the words in the vocabulary, and selects words for inclusion in an answer using the composite distribution.
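
    The switch described here is a small network that mixes three word distributions into one. A minimal sketch of that mixing step, assuming all three distributions are aligned over one shared extended vocabulary (a simplification; the patent's exact alignment is not shown) and that the switch is a linear projection of the decoder state:

```python
import torch
import torch.nn.functional as F

def composite_distribution(state, context_dist, question_dist, vocab_dist, switch_proj):
    """Blend three word distributions using weights from a switch network."""
    weights = F.softmax(switch_proj(state), dim=-1)        # (batch, 3) mixing weights
    return (weights[:, 0:1] * context_dist
            + weights[:, 1:2] * question_dist
            + weights[:, 2:3] * vocab_dist)                # composite distribution

# Toy usage: the answer word is selected from the composite distribution.
switch_proj = torch.nn.Linear(64, 3)
dists = [F.softmax(torch.randn(1, 100), dim=-1) for _ in range(3)]
answer_word = composite_distribution(torch.randn(1, 64), *dists, switch_proj).argmax(dim=-1)
```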

    Multitask Learning As Question Answering
    Invention Application

    Publication Number: US20200380213A1

    Publication Date: 2020-12-03

    Application Number: US16996726

    Application Date: 2020-08-18

    Abstract: Approaches for multitask learning as question answering include an input layer for encoding a context and a question, a self-attention based transformer including an encoder and a decoder, a first bi-directional long short-term memory (biLSTM) for further encoding an output of the encoder, a long short-term memory (LSTM) for generating a context-adjusted hidden state from the output of the decoder and a hidden state, an attention network for generating first attention weights based on an output of the first biLSTM and an output of the LSTM, a vocabulary layer for generating a distribution over a vocabulary, a context layer for generating a distribution over the context, and a switch for generating a weighting between the distributions over the vocabulary and the context, generating a composite distribution based on the weighting, and selecting a word of an answer using the composite distribution.
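
    A minimal PyTorch sketch of the recurrent tail of this architecture: a biLSTM re-encodes the transformer encoder output, an LSTM cell produces the context-adjusted hidden state from the decoder output, and their dot products give attention weights over the context. The dimensions and the single-decoding-step simplification are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

hidden = 32
bilstm = torch.nn.LSTM(hidden, hidden // 2, bidirectional=True, batch_first=True)
lstm_cell = torch.nn.LSTMCell(hidden, hidden)

encoder_out = torch.randn(1, 10, hidden)   # transformer encoder output (10 context tokens)
decoder_out = torch.randn(1, hidden)       # transformer decoder output for one step

context_enc, _ = bilstm(encoder_out)       # further-encoded context, (1, 10, hidden)
h, c = lstm_cell(decoder_out)              # context-adjusted hidden state, (1, hidden)
attn_weights = F.softmax(torch.einsum("bth,bh->bt", context_enc, h), dim=-1)
```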

    SYSTEMS AND METHODS FOR UNIFYING QUESTION ANSWERING AND TEXT CLASSIFICATION VIA SPAN EXTRACTION

    Publication Number: US20220171943A1

    Publication Date: 2022-06-02

    Application Number: US17673709

    Application Date: 2022-02-16

    Abstract: Systems and methods for unifying question answering and text classification via span extraction include a preprocessor for preparing a source text and an auxiliary text based on a task type of a natural language processing (NLP) task, an encoder for receiving the source text and the auxiliary text from the preprocessor and generating an encoded representation of a combination of the source text and the auxiliary text, and a span-extractive decoder for receiving the encoded representation and identifying a span of text within the source text that is a result of the NLP task. The task type is one of entailment, classification, or regression. In some embodiments, the source text includes one or more of text received as input when the task type is entailment, a list of classifications when the task type is entailment or classification, or a list of similarity options when the task type is regression.
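
    The preprocessor's role, complementing the decoder sketched earlier, is to rewrite every task so its answer literally appears in the source text. A hypothetical sketch of such templates; the exact wording and field layout are assumptions, not quoted from the patent:

```python
def prepare_inputs(task_type: str, text: str, question_or_options):
    """Format a task so its answer is a literal span of the source text.

    For classification-style tasks the candidate labels are appended to the
    source text, so a span-extractive decoder can point at one of them.
    """
    if task_type == "question_answering":
        return text, question_or_options                  # passage, question
    if task_type in ("entailment", "classification"):
        source = f"{text} choices: {' '.join(question_or_options)}"
        return source, "Which choice is correct?"
    if task_type == "regression":
        source = f"{text} similarity: {' '.join(question_or_options)}"
        return source, "How similar are the two sentences?"
    raise ValueError(f"unknown task type: {task_type}")

source, auxiliary = prepare_inputs(
    "classification", "the movie was great.", ["positive", "negative"])
```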

    Cross-lingual regularization for multilingual generalization

    Publication Number: US11003867B2

    Publication Date: 2021-05-11

    Application Number: US16399429

    Application Date: 2019-04-30

    Abstract: Approaches for cross-lingual regularization for multilingual generalization include a method for training a natural language processing (NLP) deep learning module. The method includes accessing a first dataset having a first training data entry, the first training data entry including one or more natural language input text strings in a first language; translating at least one of the one or more natural language input text strings of the first training data entry from the first language to a second language; creating a second training data entry by starting with the first training data entry and substituting the at least one of the natural language input text strings in the first language with the translation of the at least one of the natural language input text strings in the second language; adding the second training data entry to a second dataset; and training the deep learning module using the second dataset.
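
    The augmentation step described here is straightforward to sketch: copy a training entry, translate one of its text fields, and add the copy to a second dataset. In the sketch below, `translate` stands in for any machine-translation system (the patent does not tie the method to a particular one), and the dictionary-based entry format is an assumption:

```python
from copy import deepcopy

def augment_with_translation(dataset, translate, field, target_lang):
    """Create second training data entries by translating one text field."""
    second_dataset = []
    for entry in dataset:
        new_entry = deepcopy(entry)                         # start with the first entry
        new_entry[field] = translate(entry[field], target_lang)  # substitute translation
        second_dataset.append(new_entry)
    return second_dataset

# Usage with a stand-in translator:
data = [{"premise": "It is raining.", "hypothesis": "The ground is wet."}]
fake_mt = lambda text, lang: f"<{lang}> {text}"
second_dataset = augment_with_translation(data, fake_mt, "premise", "de")
```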

    Multitask Learning As Question Answering
    Invention Application

    Publication Number: US20190251168A1

    Publication Date: 2019-08-15

    Application Number: US15974118

    Application Date: 2018-05-08

    Abstract: Approaches for multitask learning as question answering include an input layer for encoding a context and a question, a self-attention based transformer including an encoder and a decoder, a first bi-directional long short-term memory (biLSTM) for further encoding an output of the encoder, a long short-term memory (LSTM) for generating a context-adjusted hidden state from the output of the decoder and a hidden state, an attention network for generating first attention weights based on an output of the first biLSTM and an output of the LSTM, a vocabulary layer for generating a distribution over a vocabulary, a context layer for generating a distribution over the context, and a switch for generating a weighting between the distributions over the vocabulary and the context, generating a composite distribution based on the weighting, and selecting a word of an answer using the composite distribution.
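
    Unlike the three-way switch sketched earlier, the switch here weighs only two distributions: generate from the vocabulary or copy from the context. A minimal sketch with a scalar sigmoid gate; the gate parameterization and the shared index space for both distributions are assumptions:

```python
import torch
import torch.nn.functional as F

def select_answer_word(state, vocab_dist, context_dist, gate_proj):
    """Blend vocabulary and context distributions with a learned scalar gate."""
    gamma = torch.sigmoid(gate_proj(state))              # (batch, 1) weighting
    composite = gamma * vocab_dist + (1 - gamma) * context_dist
    return composite.argmax(dim=-1)                      # chosen answer word

# Toy usage; both distributions are assumed aligned over one index space.
gate_proj = torch.nn.Linear(32, 1)
vocab_dist = F.softmax(torch.randn(1, 50), dim=-1)
context_dist = F.softmax(torch.randn(1, 50), dim=-1)
word = select_answer_word(torch.randn(1, 32), vocab_dist, context_dist, gate_proj)
```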

    Machine-learned hormone status prediction from image analysis

    Publication Number: US11508481B2

    Publication Date: 2022-11-22

    Application Number: US16895983

    Application Date: 2020-06-08

    Abstract: An analytics system uses one or more machine-learned models to predict a hormone receptor status from an H&E stain image. The system partitions each H&E stain image into a plurality of image tiles. Bags of tiles are created through sampling of the image tiles. The analytics system trains one or more machine-learned models with training H&E stain images having a positive or negative receptor status. The analytics system generates, via a tile featurization model, a tile feature vector for each image tile of a test bag for a test H&E stain image. The analytics system generates, via an attention model, an aggregate feature vector for the test bag by aggregating the tile feature vectors of the test bag, wherein an attention weight is determined for each tile feature vector. The analytics system predicts a hormone receptor status by applying a prediction model to the aggregate feature vector for the test bag.
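
    This is attention-based multiple-instance learning: each tile gets a weight, the weighted tile vectors are summed into one bag vector, and a classifier scores the bag. A minimal sketch, in which `attn_proj` and `head` are linear stand-ins for the attention and prediction models and the feature dimension is an assumption:

```python
import torch
import torch.nn.functional as F

def predict_receptor_status(tile_features, attn_proj, head):
    """Attention-pool tile feature vectors into one bag vector, then classify.

    tile_features: (num_tiles, feat_dim) for one bag of H&E image tiles.
    """
    weights = F.softmax(attn_proj(tile_features), dim=0)   # one attention weight per tile
    bag_vector = (weights * tile_features).sum(dim=0)      # aggregate feature vector
    return torch.sigmoid(head(bag_vector))                 # P(receptor-positive)

attn_proj = torch.nn.Linear(64, 1)
head = torch.nn.Linear(64, 1)
prob = predict_receptor_status(torch.randn(100, 64), attn_proj, head)
```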

    Hybrid training of deep networks
    Invention Grant

    Publication Number: US11276002B2

    Publication Date: 2022-03-15

    Application Number: US15926768

    Application Date: 2018-03-20

    Abstract: Hybrid training of deep networks involves training a multi-layer neural network. The training includes setting a current learning algorithm for the multi-layer neural network to a first learning algorithm. The training further includes iteratively applying training data to the neural network, determining a gradient for parameters of the neural network based on the applying of the training data, updating the parameters based on the current learning algorithm, and determining whether the current learning algorithm should be switched to a second learning algorithm based on the updating. The training further includes, in response to determining that the current learning algorithm should be switched, changing the current learning algorithm to the second learning algorithm and initializing a learning rate of the second learning algorithm based on the gradient and a step used by the first learning algorithm to update the parameters of the neural network.
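
    A minimal sketch of the training loop's control flow, with Adam and SGD standing in for the first and second learning algorithms. Here `should_switch` and `sgd_lr_from` are placeholders for the patent's switching criterion and learning-rate initialization, which it derives from the gradient and the step the first algorithm took; the specific rules are not reproduced here:

```python
import torch

def hybrid_train(model, loss_fn, data, should_switch, sgd_lr_from):
    """Train with one algorithm (Adam), then hand off to a second (SGD)."""
    opt = torch.optim.Adam(model.parameters())
    switched = False
    for x, y in data:
        opt.zero_grad()
        loss_fn(model(x), y).backward()                   # gradient for the parameters
        before = [p.detach().clone() for p in model.parameters()]
        opt.step()                                        # update under current algorithm
        if not switched and should_switch(model):
            # Initialize the SGD learning rate from the last step and gradient.
            steps = [p.detach() - b for p, b in zip(model.parameters(), before)]
            grads = [p.grad for p in model.parameters()]
            opt = torch.optim.SGD(model.parameters(), lr=sgd_lr_from(steps, grads))
            switched = True
```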
