Three-dimensional (3D) convolution with 3D batch normalization

    Publication No.: US11416747B2

    Publication Date: 2022-08-16

    Application No.: US16355290

    Application Date: 2019-03-15

    Abstract: A method of classifying three-dimensional (3D) data includes receiving 3D data and processing it using a neural network that includes a plurality of subnetworks arranged in a sequence, with the data processed through each of the subnetworks. Each subnetwork is configured to receive the output generated by the preceding subnetwork in the sequence, process that output through a plurality of parallel 3D convolution layer paths of varying convolution volume, process it through a parallel pooling path, and concatenate the outputs of the 3D convolution layer paths and the pooling path to generate an output representation. After the data has been processed through the subnetworks, the method processes the output of the last subnetwork in the sequence through a vertical pooling layer to generate an output and classifies the received 3D data based upon that output.
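
    The parallel-paths-plus-pooling structure described above can be sketched in plain numpy. This is a minimal illustration, not the patented implementation: kernel sizes (1, 3, 5), channel counts, and the random weights are all illustrative assumptions, and the 3D batch normalization named in the title is omitted for brevity.

```python
import numpy as np

def conv3d_same(x, w):
    # x: (C_in, D, H, W); w: (C_out, C_in, k, k, k); stride 1, 'same' zero padding
    c_out, c_in, k, _, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p), (p, p)))
    _, D, H, W = x.shape
    out = np.zeros((c_out, D, H, W))
    for o in range(c_out):
        for d in range(D):
            for h in range(H):
                for c in range(W):
                    out[o, d, h, c] = np.sum(xp[:, d:d+k, h:h+k, c:c+k] * w[o])
    return out

def maxpool3d_same(x, k=3):
    # 3x3x3 max pooling, stride 1, 'same' padding (pad with -inf so max ignores it)
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p), (p, p)), constant_values=-np.inf)
    C, D, H, W = x.shape
    out = np.empty_like(x)
    for d in range(D):
        for h in range(H):
            for c in range(W):
                out[:, d, h, c] = xp[:, d:d+k, h:h+k, c:c+k].max(axis=(1, 2, 3))
    return out

def subnetwork(x, kernel_sizes=(1, 3, 5), path_channels=2):
    # one subnetwork: parallel 3D conv paths of varying volume, a parallel
    # pooling path, and channel-wise concatenation of all path outputs
    rng = np.random.default_rng(0)
    outputs = []
    for k in kernel_sizes:
        w = rng.standard_normal((path_channels, x.shape[0], k, k, k)) * 0.1
        outputs.append(conv3d_same(x, w))
    outputs.append(maxpool3d_same(x))
    return np.concatenate(outputs, axis=0)

x = np.random.default_rng(1).standard_normal((2, 4, 4, 4))
y = subnetwork(x)   # 3 conv paths x 2 channels + 2 pooled channels = 8 channels
```

    Because every path uses 'same' padding and stride 1, all paths preserve the spatial volume, which is what makes the channel-wise concatenation well-defined.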

    Systems and methods for unifying question answering and text classification via span extraction

    Publication No.: US11281863B2

    Publication Date: 2022-03-22

    Application No.: US16518905

    Application Date: 2019-07-22

    Abstract: Systems and methods for unifying question answering and text classification via span extraction include a preprocessor for preparing a source text and an auxiliary text based on a task type of a natural language processing (NLP) task, an encoder for receiving the source text and the auxiliary text from the preprocessor and generating an encoded representation of their combination, and a span-extractive decoder for receiving the encoded representation and identifying a span of text within the source text that is the result of the NLP task. The task type is one of entailment, classification, or regression. In some embodiments, the source text includes one or more of text received as input when the task type is entailment, a list of classifications when the task type is entailment or classification, or a list of similarity options when the task type is regression.
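
    The span-extractive decoding step can be sketched as scoring every token as a possible span start and end, then taking the best-scoring (start, end) pair. This is a toy illustration under assumed shapes, not the patented decoder: the learned start/end vectors and the `max_len` cap are stand-ins. Classification is unified with QA by appending the class labels to the source text, so the selected span lands on a label.

```python
import numpy as np

def extract_span(H, w_start, w_end, max_len=10):
    # H: (T, d) encoded token representations from the encoder.
    # w_start, w_end: (d,) learned scoring vectors (random in a real model
    # before training; fixed here for illustration).
    s = H @ w_start            # start score per token
    e = H @ w_end              # end score per token
    best, span = -np.inf, (0, 0)
    for i in range(len(s)):
        for j in range(i, min(i + max_len, len(e))):   # enforce start <= end
            if s[i] + e[j] > best:
                best, span = s[i] + e[j], (i, j)
    return span

# Toy setup: one-hot token encodings so the winning span is easy to verify.
H = np.eye(6)
span = extract_span(H, w_start=np.eye(6)[2], w_end=np.eye(6)[4])
```

    With one-hot encodings, the scoring vectors simply select token positions 2 and 4, so the extracted span is (2, 4).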

    Adaptive attention model for image captioning

    Publication No.: US11244111B2

    Publication Date: 2022-02-08

    Application No.: US16668333

    Application Date: 2019-10-30

    Abstract: The technology disclosed presents a novel spatial attention model that uses current hidden state information of a decoder long short-term memory (LSTM) to guide attention and to extract spatial image features for use in image captioning. The technology disclosed also presents a novel adaptive attention model for image captioning that mixes visual information from a convolutional neural network (CNN) and linguistic information from an LSTM. At each timestep, the adaptive attention model automatically decides how heavily to rely on the image, as opposed to the linguistic model, to emit the next caption word. The technology disclosed further adds a new auxiliary sentinel gate to an LSTM architecture and produces a sentinel LSTM (Sn-LSTM). The sentinel gate produces a visual sentinel at each timestep, which is an additional representation, derived from the LSTM's memory, of long- and short-term visual and linguistic information.
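
    The "how heavily to rely on the image" decision can be sketched as an attention softmax computed over the spatial image features plus the visual sentinel: the probability mass assigned to the sentinel acts as the mixing weight toward the linguistic model. A minimal numpy sketch, with all weight matrices as illustrative stand-ins for learned parameters:

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # numerical stability
    e = np.exp(z)
    return e / e.sum()

def adaptive_attention(V, s_t, h_t, W_v, W_s, w_h):
    # V: (k, d) spatial image features; s_t: visual sentinel from the Sn-LSTM;
    # h_t: decoder hidden state guiding the attention.
    z = np.array([w_h @ np.tanh(W_v @ v + h_t) for v in V])   # region scores
    z_s = w_h @ np.tanh(W_s @ s_t + h_t)                      # sentinel score
    alpha = softmax(np.append(z, z_s))   # attention over k regions + sentinel
    beta = alpha[-1]                     # weight on linguistic info (sentinel)
    c_t = (alpha[:-1, None] * V).sum(axis=0)                  # visual context
    return beta * s_t + c_t, beta        # adaptive context vector

rng = np.random.default_rng(0)
d, k = 4, 3
out, beta = adaptive_attention(
    V=rng.standard_normal((k, d)),
    s_t=rng.standard_normal(d),
    h_t=rng.standard_normal(d),
    W_v=rng.standard_normal((d, d)),
    W_s=rng.standard_normal((d, d)),
    w_h=rng.standard_normal(d),
)
```

    Because beta comes from a softmax, it always lies strictly between 0 and 1: beta near 1 means the next word is driven by linguistic memory, beta near 0 means it is driven by the attended image regions.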

    SYSTEMS AND METHODS FOR LEARNING FOR DOMAIN ADAPTATION

    Publication No.: US20210389736A1

    Publication Date: 2021-12-16

    Application No.: US17460691

    Application Date: 2021-08-30

    Abstract: A method for training parameters of a first domain adaptation model. The method includes evaluating a cycle consistency objective using a first task specific model associated with a first domain and a second task specific model associated with a second domain, and evaluating one or more first discriminator models to generate a first discriminator objective using the second task specific model. The one or more first discriminator models include a plurality of discriminators corresponding to a plurality of bands that correspond to domain variable ranges of the first and second domains, respectively. The method further includes updating, based on the cycle consistency objective and the first discriminator objective, one or more parameters of the first domain adaptation model for adapting representations from the first domain to the second domain.
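
    The two objectives can be sketched directly: cycle consistency penalizes the reconstruction error after a round trip through both adaptation models, and the banded discriminator objective routes each example to the discriminator for its domain-variable band. A toy numpy sketch under assumed interfaces (the generators and discriminators here are plain callables standing in for trained networks):

```python
import numpy as np

def cycle_consistency_loss(x, g_src_to_tgt, g_tgt_to_src):
    # reconstruct x after mapping source -> target -> source; a perfect
    # round trip gives zero loss
    return float(np.mean((g_tgt_to_src(g_src_to_tgt(x)) - x) ** 2))

def banded_discriminator_loss(x, domain_var, bands, discriminators):
    # each example is scored by the discriminator whose band [lo, hi)
    # contains that example's domain variable value
    losses = []
    for xi, v in zip(x, domain_var):
        b = next(i for i, (lo, hi) in enumerate(bands) if lo <= v < hi)
        p = discriminators[b](xi)          # prob. xi looks like target domain
        losses.append(-np.log(p + 1e-9))
    return float(np.mean(losses))

# Identity generators reconstruct perfectly, so the cycle loss is zero.
cyc = cycle_consistency_loss(np.ones((2, 3)), lambda z: z, lambda z: z)

# Two bands of the domain variable, each with its own (constant) discriminator.
disc = banded_discriminator_loss(
    x=np.zeros((2, 3)),
    domain_var=[0.2, 0.7],
    bands=[(0.0, 0.5), (0.5, 1.0)],
    discriminators=[lambda xi: 1.0, lambda xi: 1.0],
)
```

    In training, the sum of the two objectives would drive gradient updates of the first domain adaptation model's parameters.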

    TRAINING A JOINT MANY-TASK NEURAL NETWORK MODEL USING SUCCESSIVE REGULARIZATION

    Publication No.: US20210279551A1

    Publication Date: 2021-09-09

    Application No.: US17331337

    Application Date: 2021-05-26

    Abstract: The technology disclosed provides a so-called “joint many-task neural network model” to solve a variety of increasingly complex natural language processing (NLP) tasks using growing depth of layers in a single end-to-end model. The model is successively trained by considering linguistic hierarchies, directly connecting word representations to all model layers, explicitly using predictions from lower tasks, and applying a so-called “successive regularization” technique to prevent catastrophic forgetting. Three examples of lower-level model layers are a part-of-speech (POS) tagging layer, a chunking layer, and a dependency parsing layer. Two examples of higher-level model layers are a semantic relatedness layer and a textual entailment layer. The model achieves state-of-the-art results on chunking, dependency parsing, semantic relatedness, and textual entailment.
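
    The successive regularization idea can be sketched as an L2 penalty that anchors the shared parameters to the values they had before training the current task, so that learning a higher task does not erase what the lower tasks learned. A minimal sketch, with the penalty weight `delta` as an illustrative assumption:

```python
import numpy as np

def successively_regularized_loss(task_loss, theta, theta_prev, delta=1e-2):
    # task_loss: loss of the task currently being trained.
    # theta: current shared parameters; theta_prev: their values before this
    # task's training round. The L2 anchor discourages catastrophic forgetting.
    return task_loss + delta * float(np.sum((theta - theta_prev) ** 2))

theta = np.array([1.0, 2.0, 3.0])
unchanged = successively_regularized_loss(1.0, theta, theta)        # no drift
drifted = successively_regularized_loss(1.0, theta + 1.0, theta)    # penalized
```

    When the parameters have not moved, the regularizer contributes nothing; any drift from the previous task's solution adds a quadratic penalty.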

    Agent persona grounded chit-chat generation framework

    Publication No.: US11087092B2

    Publication Date: 2021-08-10

    Application No.: US16399871

    Application Date: 2019-04-30

    Abstract: Approaches for determining a response for an agent in an undirected dialogue are provided. The approaches include a dialogue generating framework comprising an encoder neural network, a decoder neural network, and a language model neural network. The dialogue generating framework generates a sketch sentence response with at least one slot. The sketch sentence response is generated word by word and takes into account the undirected dialogue and the agent traits of the agent making the response. The dialogue generating framework generates sentence responses by filling the slot with words from the agent traits. It then ranks the sentence responses by passing them through the language model and selects as the final response the sentence response with the lowest perplexity.
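
    The fill-and-rank step can be sketched as: substitute each trait word into the sketch's slot, score each candidate's perplexity under a language model, and keep the most fluent one. A toy numpy sketch; the `<slot>` token and the hand-written `toy_lm` stand in for the framework's decoder output and trained language model:

```python
import numpy as np

def perplexity(tokens, logprob):
    # exp of the average negative log-likelihood of each token given its
    # predecessor; lower perplexity = more fluent under the language model
    lp = [logprob(prev, tok) for prev, tok in zip(tokens, tokens[1:])]
    return float(np.exp(-np.mean(lp)))

def fill_and_rank(sketch, trait_words, logprob):
    # one candidate per trait word, then pick the lowest-perplexity candidate
    candidates = [[trait if tok == "<slot>" else tok for tok in sketch]
                  for trait in trait_words]
    return min(candidates, key=lambda c: perplexity(c, logprob))

def toy_lm(prev, tok):
    # stand-in for a trained LM's conditional log-probability: it strongly
    # prefers "dogs" after "love"
    return 0.0 if (prev, tok) == ("love", "dogs") else -1.0

best = fill_and_rank(["i", "love", "<slot>"], ["cats", "dogs"], toy_lm)
```

    Here the candidate filled with "dogs" gets the lower perplexity, so it is selected as the final response.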

    NEURAL NETWORK BASED TRANSLATION OF NATURAL LANGUAGE QUERIES TO DATABASE QUERIES

    Publication No.: US20200301925A1

    Publication Date: 2020-09-24

    Application No.: US16894495

    Application Date: 2020-06-05

    Abstract: A computing system uses neural networks to translate natural language queries to database queries. The computing system uses a plurality of machine learning based models, each generating a portion of the database query. The machine learning models use an input representation generated based on terms of the input natural language query, a set of columns of the database schema, and the vocabulary of a database query language, for example, Structured Query Language (SQL). The plurality of machine learning based models may include an aggregation classifier model for determining an aggregation operator in the database query, a result column predictor model for determining the result columns of the database query, and a condition clause predictor model for determining the condition clause of the database query. The condition clause predictor is based on reinforcement learning.
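
    The division of labor among the three models can be sketched as each head filling one slot of the final SQL string. In this toy numpy sketch the score vectors stand in for the outputs of the trained aggregation classifier and column predictor, and the condition clause is passed in as a string rather than produced by the RL-trained predictor:

```python
import numpy as np

def argmax_choice(scores, options):
    # pick the option with the highest model score
    return options[int(np.argmax(scores))]

def assemble_sql(table, agg_scores, col_scores, where_clause):
    # aggregation head chooses an operator (empty string = no aggregation);
    # column head chooses a result column from the schema
    agg = argmax_choice(agg_scores, ["", "MAX", "MIN", "COUNT", "SUM", "AVG"])
    col = argmax_choice(col_scores, table["columns"])
    select = f"{agg}({col})" if agg else col
    return f"SELECT {select} FROM {table['name']} WHERE {where_clause}"

query = assemble_sql(
    {"name": "employees", "columns": ["name", "salary"]},
    agg_scores=[0.1, 0.8, 0.0, 0.0, 0.05, 0.05],   # "MAX" scores highest
    col_scores=[0.2, 0.9],                          # "salary" scores highest
    where_clause="dept = 'sales'",
)
```

    Generating each slot with a specialized model keeps every output syntactically valid by construction, since the surrounding SQL skeleton is fixed.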

    Multitask learning as question answering

    Publication No.: US10776581B2

    Publication Date: 2020-09-15

    Application No.: US15974118

    Application Date: 2018-05-08

    Abstract: Approaches for multitask learning as question answering include an input layer for encoding a context and a question, a self-attention based transformer including an encoder and a decoder, a first bi-directional long short-term memory (biLSTM) for further encoding an output of the encoder, a long short-term memory (LSTM) for generating a context-adjusted hidden state from the output of the decoder and a hidden state, an attention network for generating first attention weights based on an output of the first biLSTM and an output of the LSTM, a vocabulary layer for generating a distribution over a vocabulary, a context layer for generating a distribution over the context, and a switch for generating a weighting between the distributions over the vocabulary and the context, generating a composite distribution based on the weighting, and selecting a word of an answer using the composite distribution.
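
    The switch's composite distribution can be sketched as a weighted mixture: the vocabulary distribution is scaled by the switch weight, and the context (pointer) distribution's mass is added onto the vocabulary ids of the corresponding context tokens. A minimal numpy sketch with toy distributions; the name `gamma` for the switch weight is an illustrative assumption:

```python
import numpy as np

def composite_distribution(p_vocab, p_context, context_to_vocab, gamma):
    # gamma: switch weight on the vocabulary distribution; the remaining
    # (1 - gamma) mass is copied over from the context distribution
    p = gamma * np.asarray(p_vocab, dtype=float)
    for pos, vocab_id in enumerate(context_to_vocab):
        p[vocab_id] += (1.0 - gamma) * p_context[pos]
    return p

p = composite_distribution(
    p_vocab=[0.5, 0.5, 0.0, 0.0],   # decoder distribution over a 4-word vocab
    p_context=[0.9, 0.1],           # attention distribution over 2 context tokens
    context_to_vocab=[2, 3],        # context position -> vocabulary id
    gamma=0.5,
)
answer_word_id = int(np.argmax(p))  # next answer word chosen from the mixture
```

    Because both inputs are probability distributions and the weights sum to one, the composite is itself a valid distribution, so the answer word can be selected from it directly.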
