-
41.
Publication No.: US11544470B2
Publication Date: 2023-01-03
Application No.: US17005316
Filing Date: 2020-08-28
Applicant: salesforce.com, inc.
Inventor: Jianguo Zhang , Kazuma Hashimoto , Chien-Sheng Wu , Wenhao Liu , Richard Socher , Caiming Xiong
Abstract: An online system allows user interactions using natural language expressions. The online system uses a machine learning based model to infer an intent represented by a user expression. The machine learning based model takes as input a user expression and an example expression to compute a score indicating whether the user expression matches the example expression. Based on the scores, an intent inference module determines a most applicable intent for the expression. The online system determines a confidence threshold such that user expressions indicating a high confidence are assigned the most applicable intent and user expressions indicating a low confidence are assigned an out-of-scope intent. The online system encodes the example expressions using the machine learning based model. The online system may compare an encoded user expression with encoded example expressions to identify a subset of example expressions used to determine the most applicable intent.
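The matching-and-threshold scheme above can be sketched as follows. This is a minimal illustration, not the patented model: the cosine similarity, the `infer_intent` helper, and the `out_of_scope` label are all assumptions for the sketch, and the real encoder would be a learned neural model rather than fixed vectors.

```python
import math

def cosine(u, v):
    # Cosine similarity between two encoded expressions.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def infer_intent(user_vec, examples, threshold):
    """examples: list of (intent_label, encoded_example) pairs.
    Returns the best-matching intent, or "out_of_scope" when the
    top score falls below the confidence threshold."""
    best_intent, best_score = None, float("-inf")
    for intent, vec in examples:
        score = cosine(user_vec, vec)
        if score > best_score:
            best_intent, best_score = intent, score
    if best_score < threshold:
        return "out_of_scope", best_score
    return best_intent, best_score
```

With a high threshold, even the best-scoring intent is rejected in favor of out-of-scope, which is the behavior the abstract describes for low-confidence expressions.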
-
42.
Publication No.: US11526507B2
Publication Date: 2022-12-13
Application No.: US16894495
Filing Date: 2020-06-05
Applicant: salesforce.com, inc.
Inventor: Victor Zhong , Caiming Xiong , Richard Socher
IPC: G06F16/2452 , G06N3/04 , G06N3/08 , G06N7/00
Abstract: A computing system uses neural networks to translate natural language queries to database queries. The computing system uses a plurality of machine learning based models, each machine learning model for generating a portion of the database query. The machine learning models use an input representation generated based on terms of the input natural language query, a set of columns of the database schema, and the vocabulary of a database query language, for example, structured query language SQL. The plurality of machine learning based models may include an aggregation classifier model for determining an aggregation operator in the database query, a result column predictor model for determining the result columns of the database query, and a condition clause predictor model for determining the condition clause of the database query. The condition clause predictor is based on reinforcement learning.
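In the described system the aggregation classifier, result column predictor, and condition clause predictor are each neural models; how their three outputs could be assembled into one database query can be illustrated with a hypothetical `assemble_sql` helper (the function name, argument shapes, and exact SQL layout are assumptions for this sketch, not the patent's implementation):

```python
def assemble_sql(agg, result_col, conditions, table):
    """Combine the three sub-model outputs into one SQL string:
    agg        - aggregation operator (e.g. "MAX") or None,
    result_col - column chosen by the result column predictor,
    conditions - (column, operator, value) triples from the
                 condition clause predictor."""
    select = f"{agg}({result_col})" if agg else result_col
    sql = f"SELECT {select} FROM {table}"
    if conditions:
        where = " AND ".join(f"{c} {op} '{v}'" for c, op, v in conditions)
        sql += f" WHERE {where}"
    return sql
```

Splitting the query this way means each sub-model solves a small classification or prediction problem instead of generating free-form SQL, which is the motivation for using a plurality of models.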
-
43.
Publication No.: US20220171943A1
Publication Date: 2022-06-02
Application No.: US17673709
Filing Date: 2022-02-16
Applicant: salesforce.com, inc.
Inventor: Nitish Shirish Keskar , Bryan McCann , Richard Socher , Caiming Xiong
IPC: G06F40/30 , G06F40/284
Abstract: Systems and methods for unifying question answering and text classification via span extraction include a preprocessor for preparing a source text and an auxiliary text based on a task type of a natural language processing task, an encoder for receiving the source text and the auxiliary text from the preprocessor and generating an encoded representation of a combination of the source text and the auxiliary text, and a span-extractive decoder for receiving the encoded representation and identifying a span of text within the source text that is a result of the NLP task. The task type is one of entailment, classification, or regression. In some embodiments, the source text includes one or more of text received as input when the task type is entailment, a list of classifications when the task type is entailment or classification, or a list of similarity options when the task type is regression.
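The span-extractive decoding step can be sketched as picking the highest-scoring (start, end) token pair, a common formulation for span extraction; the scoring interface and the `max_len` constraint here are illustrative assumptions, not the patent's decoder:

```python
def best_span(start_scores, end_scores, max_len=10):
    """Pick the (start, end) token pair maximizing
    start_scores[i] + end_scores[j], subject to i <= j < i + max_len,
    as a span-extractive decoder over the encoded source text might."""
    best, best_pair = float("-inf"), (0, 0)
    for i, s in enumerate(start_scores):
        for j in range(i, min(i + max_len, len(end_scores))):
            if s + end_scores[j] > best:
                best, best_pair = s + end_scores[j], (i, j)
    return best_pair
```

Because the source text can contain the list of classification labels, extracting a span from it covers classification as well as question answering, which is the unification the abstract describes.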
-
44.
Publication No.: US11250311B2
Publication Date: 2022-02-15
Application No.: US15853570
Filing Date: 2017-12-22
Applicant: salesforce.com, inc.
Inventor: Alexander Rosenberg Johansen , Bryan McCann , James Bradbury , Richard Socher
IPC: G06N3/04 , G06K9/62 , G06N20/00 , G06F15/76 , G06F40/30 , G06F40/16 , G06N3/08 , G06N5/04 , G06F40/169
Abstract: The technology disclosed proposes using a combination of a computationally cheap, less-accurate bag of words (BoW) model and a computationally expensive, more-accurate long short-term memory (LSTM) model to perform natural language processing tasks such as sentiment analysis. The use of the cheap, less-accurate BoW model is referred to herein as “skimming”. The use of the expensive, more-accurate LSTM model is referred to herein as “reading”. The technology disclosed presents a probability-based guider (PBG). PBG combines the use of the BoW model and the LSTM model. PBG uses a probability thresholding strategy to determine, based on the results of the BoW model, whether to invoke the LSTM model for reliably classifying a sentence as positive or negative. The technology disclosed also presents a deep neural network-based decision network (DDN) that is trained to learn the relationship between the BoW model and the LSTM model and to invoke only one of the two models.
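The probability-thresholding strategy of the PBG can be sketched with model stubs. Everything here is an assumption for illustration (the function names, the single threshold `tau`, and the two-way confidence test); the actual guider operates on learned model probabilities:

```python
def guided_classify(sentence, bow_model, lstm_model, tau=0.9):
    """Probability-based guider (PBG) sketch: 'skim' with the cheap
    BoW model, and only 'read' with the expensive LSTM when the BoW
    probability is not confident in either direction."""
    p_pos = bow_model(sentence)           # probability the sentence is positive
    if p_pos >= tau or p_pos <= 1 - tau:  # BoW is confident either way
        return "positive" if p_pos >= 0.5 else "negative"
    return lstm_model(sentence)           # fall back to the accurate model
```

The expensive model is invoked only on the uncertain middle band of BoW probabilities, which is where the cost savings of "skimming" come from.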
-
45.
Publication No.: US20220036884A1
Publication Date: 2022-02-03
Application No.: US17500855
Filing Date: 2021-10-13
Applicant: salesforce.com, inc.
Abstract: Embodiments described herein provide safe policy improvement (SPI) in a batch reinforcement learning framework for a task-oriented dialogue. Specifically, a batch reinforcement learning framework for dialogue policy learning is provided, which improves the performance of the dialogue and learns to shape a reward that reasons about the intention behind a human response rather than just imitating the human demonstration.
-
46.
Publication No.: US11232308B2
Publication Date: 2022-01-25
Application No.: US16394964
Filing Date: 2019-04-25
Applicant: salesforce.com, inc.
Inventor: Mingfei Gao , Richard Socher , Caiming Xiong
Abstract: Embodiments described herein provide a two-stage online detection of action start system including a classification module and a localization module. The classification module generates a set of action scores corresponding to a first video frame from the video, based on the first video frame and the video frames before the first video frame in the video. Each action score indicates a respective probability that the first video frame contains a respective action class. The localization module is coupled to the classification module for receiving the set of action scores from the classification module and generating an action-agnostic start probability that the first video frame contains an action start. A fusion component is coupled to the classification module and the localization module for generating, based on the set of action scores and the action-agnostic start probability, a set of action-specific start probabilities, each action-specific start probability corresponding to a start of an action belonging to the respective action class.
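The fusion step can be sketched as combining the per-class action scores with the single action-agnostic start probability. The abstract does not specify the combination rule, so the elementwise product below is one illustrative choice, and the function name is an assumption:

```python
def fuse_start_probabilities(action_scores, start_prob):
    """Fusion sketch: combine per-class action scores with the
    action-agnostic start probability to obtain one action-specific
    start probability per class (here, simply their product)."""
    return {cls: score * start_prob for cls, score in action_scores.items()}
```

The factorization keeps the localization module class-agnostic: it only decides *whether* something starts at this frame, while the classification scores decide *what* it is.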
-
47.
Publication No.: US20210375269A1
Publication Date: 2021-12-02
Application No.: US16999426
Filing Date: 2020-08-21
Applicant: salesforce.com, inc.
Inventor: Semih Yavuz , Kazuma Hashimoto , Wenhao Liu , Nitish Shirish Keskar , Richard Socher , Caiming Xiong
IPC: G10L15/183 , G06N20/00 , G10L15/06 , G06F17/18
Abstract: Embodiments described herein utilize pre-trained masked language models as the backbone for dialogue act tagging and provide cross-domain generalization of the resulting dialogue act taggers. For example, the pre-trained MASK token of a BERT model may be used as a controllable mechanism for augmenting text input, e.g., generating tags for an input of unlabeled dialogue history. The pre-trained MASK model can be trained with semi-supervised learning, e.g., using multiple objectives from supervised tagging loss, masked tagging loss, masked language model loss, and/or a disagreement loss.
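The MASK-token augmentation can be sketched as randomly replacing tokens in the dialogue history, in the style of BERT's masked language modeling; the function, the default mask rate, and the seeding are assumptions for this sketch:

```python
import random

def mask_tokens(tokens, mask_rate=0.15, seed=0):
    """Augmentation sketch: randomly replace tokens with the [MASK]
    token, as in BERT-style masked language modeling, so a tagger can
    be trained on both the original and the masked dialogue history."""
    rng = random.Random(seed)
    return [t if rng.random() >= mask_rate else "[MASK]" for t in tokens]
```

Pairing a supervised tagging loss on the original sequence with a masked tagging loss on the augmented one is what makes the semi-supervised multi-objective training described above possible.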
-
48.
Publication No.: US11042796B2
Publication Date: 2021-06-22
Application No.: US15421431
Filing Date: 2017-01-31
Applicant: salesforce.com, inc.
Inventor: Kazuma Hashimoto , Caiming Xiong , Richard Socher
IPC: G06N3/04 , G06N3/08 , G06F40/30 , G06F40/205 , G06F40/216 , G06F40/253 , G06F40/284 , G06N3/063 , G10L15/18 , G10L25/30 , G10L15/16 , G06F40/00
Abstract: The technology disclosed provides a so-called “joint many-task neural network model” to solve a variety of increasingly complex natural language processing (NLP) tasks using growing depth of layers in a single end-to-end model. The model is successively trained by considering linguistic hierarchies, directly connecting word representations to all model layers, explicitly using predictions in lower tasks, and applying a so-called “successive regularization” technique to prevent catastrophic forgetting. Three examples of lower level model layers are part-of-speech (POS) tagging layer, chunking layer, and dependency parsing layer. Two examples of higher level model layers are semantic relatedness layer and textual entailment layer. The model achieves the state-of-the-art results on chunking, dependency parsing, semantic relatedness and textual entailment.
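The layered, shortcut-connected pipeline can be sketched abstractly: every layer sees the word representations plus all lower-level predictions, mirroring the POS → chunking → dependency parsing → relatedness → entailment hierarchy. The interface below is an assumption for illustration; the real layers are trained neural networks with successive regularization:

```python
def joint_many_task(word_reps, layers):
    """Run the layer stack bottom-up. Each layer receives the word
    representations (direct connection to all layers) and the list of
    predictions from every lower layer (explicit use of lower tasks)."""
    predictions = []
    for layer in layers:
        predictions.append(layer(word_reps, list(predictions)))
    return predictions
```

Passing `list(predictions)` gives each layer a snapshot of all lower-level outputs, which is the "directly connecting word representations to all model layers" idea in miniature.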
-
49.
Publication No.: US20210073459A1
Publication Date: 2021-03-11
Application No.: US17027130
Filing Date: 2020-09-21
Applicant: salesforce.com, inc.
Inventor: Bryan McCann , Caiming Xiong , Richard Socher
IPC: G06F40/126 , G06N3/08 , G06N3/04 , G06F40/30 , G06F40/47 , G06F40/205 , G06F40/289
Abstract: A system is provided for natural language processing. In some embodiments, the system includes an encoder for generating context-specific word vectors for at least one input sequence of words. The encoder is pre-trained using training data for performing a first natural language processing task. A neural network performs a second natural language processing task on the at least one input sequence of words using the context-specific word vectors. The first natural language processing task is different from the second natural language processing task, and the neural network is separately trained from the encoder. In some embodiments, the first natural language processing task can be machine translation, and the second natural language processing task can be one of sentiment analysis, question classification, entailment classification, and question answering.
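The division of labor between the pre-trained encoder and the separately trained task network can be sketched with stubs; the function and both callables are illustrative assumptions (in the described system the encoder is pre-trained on a task such as machine translation and then reused unchanged):

```python
def run_second_task(tokens, pretrained_encoder, task_head):
    """Sketch: a pre-trained encoder produces a context-specific
    vector per word; a separately trained task head consumes those
    vectors to perform the second NLP task."""
    context_vectors = [pretrained_encoder(t) for t in tokens]
    return task_head(context_vectors)
```

Because the two components are trained separately, the same frozen encoder can feed different task heads for sentiment analysis, question classification, entailment, or question answering.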
-
50.
Publication No.: US10565306B2
Publication Date: 2020-02-18
Application No.: US15817165
Filing Date: 2017-11-18
Applicant: salesforce.com, inc.
Inventor: Jiasen Lu , Caiming Xiong , Richard Socher
Abstract: The technology disclosed presents a novel spatial attention model that uses current hidden state information of a decoder long short-term memory (LSTM) to guide attention and to extract spatial image features for use in image captioning. The technology disclosed also presents a novel adaptive attention model for image captioning that mixes visual information from a convolutional neural network (CNN) and linguistic information from an LSTM. At each timestep, the adaptive attention model automatically decides how heavily to rely on the image, as opposed to the linguistic model, to emit the next caption word. The technology disclosed further adds a new auxiliary sentinel gate to an LSTM architecture and produces a sentinel LSTM (Sn-LSTM). The sentinel gate produces a visual sentinel at each timestep, which is an additional representation, derived from the LSTM's memory, of long and short term visual and linguistic information.
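The per-timestep mixing decision of the adaptive attention model can be sketched as a gated interpolation between the attended visual context and the visual sentinel; the function name and the simple convex combination are assumptions for this sketch, standing in for the learned gate:

```python
def adaptive_context(visual_ctx, sentinel, beta):
    """Adaptive attention sketch: mix the attended visual context with
    the visual sentinel (the linguistic fallback from the Sn-LSTM's
    memory) using gate beta in [0, 1]; beta near 1 leans on the
    language model, beta near 0 leans on the image."""
    return [beta * s + (1.0 - beta) * v for v, s in zip(visual_ctx, sentinel)]
```

A word like "of" would push the gate toward the sentinel, while a visually grounded word like "dog" would push it toward the image features, which is the "when to look" decision the model makes at each timestep.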