Abstract:
A method is presented comprising assigning each of a plurality of segments comprising a received corpus to a node in a data structure denoting dependencies between nodes, and calculating a transitional probability between each of the nodes in the data structure.
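For illustration, a minimal sketch of the counting step, assuming the data structure reduces to per-node bigram counts over an already-segmented corpus; the function and variable names are hypothetical and not taken from the patent.

```python
from collections import defaultdict

def build_transition_probabilities(segmented_corpus):
    """Count how often one segment follows another, then normalize the
    counts into transition probabilities P(next | current)."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for segments in segmented_corpus:
        for current, nxt in zip(segments, segments[1:]):
            counts[current][nxt] += 1
            totals[current] += 1
    return {
        current: {nxt: n / totals[current] for nxt, n in followers.items()}
        for current, followers in counts.items()
    }

# Toy corpus that has already been split into segments.
corpus = [["the", "quick", "fox"], ["the", "lazy", "fox"]]
print(build_transition_probabilities(corpus)["the"])  # {'quick': 0.5, 'lazy': 0.5}
```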
Abstract:
A method for the joint optimization of language model performance and size is presented, comprising developing a language model from a tuning set of information, segmenting at least a subset of a received textual corpus, calculating a perplexity value for each segment, and refining the language model with one or more segments of the received corpus based, at least in part, on the calculated perplexity value for the one or more segments.
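For illustration, a sketch of the perplexity-based filtering step, under the assumption that the tuning-set model is a smoothed unigram model and that segments are kept when their perplexity falls below a fixed cutoff; the model order, the cutoff value, and all names are illustrative assumptions, not details from the patent.

```python
import math
from collections import Counter

def unigram_model(tuning_tokens, alpha=1.0):
    """Add-alpha smoothed unigram probabilities estimated from the tuning set."""
    counts = Counter(tuning_tokens)
    vocab = len(counts) + 1          # reserve one slot of mass for unseen tokens
    total = sum(counts.values())
    return lambda w: (counts[w] + alpha) / (total + alpha * vocab)

def perplexity(segment, prob):
    """Exponential of the average negative log-probability of the segment."""
    nll = -sum(math.log(prob(w)) for w in segment) / len(segment)
    return math.exp(nll)

def refine(corpus_segments, tuning_tokens, cutoff=500.0):
    """Keep segments whose perplexity under the tuning-set model is low;
    these are the candidates used to grow the language model."""
    prob = unigram_model(tuning_tokens)
    return [s for s in corpus_segments if perplexity(s, prob) < cutoff]

tuning = "the cat sat on the mat".split()
segments = [["the", "cat"], ["zyx", "qwv"]]
prob = unigram_model(tuning)
print([round(perplexity(s, prob), 1) for s in segments])  # ~[4.9, 12.0]
```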
Abstract:
A search engine architecture is designed to handle a full range of user queries, from complex sentence-based queries to simple keyword searches. The search engine architecture includes a natural language parser that parses a user query and extracts syntactic and semantic information. The parser is robust in the sense that it not only returns fully-parsed results (e.g., a parse tree), but is also capable of returning partially-parsed fragments in those cases where more accurate or descriptive information in the user query is unavailable. A question matcher is employed to match the fully-parsed output and the partially-parsed fragments to a set of frequently asked questions (FAQs) stored in a database. The question matcher then correlates the questions with a group of possible answers arranged in standard templates that represent possible solutions to the user query. The search engine architecture also has a keyword searcher to locate other possible answers by searching on any keywords returned from the parser. The answers returned from the question matcher and the keyword searcher are presented to the user for confirmation as to which answer best represents the user's intentions when entering the initial search query. The search engine architecture logs the queries, the answers returned to the user, and the user's confirmation feedback in a log database. The search engine has a log analyzer to evaluate the log database to glean information that improves performance of the search engine over time by training the parser and the question matcher.
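A rough sketch of how the components could be wired together, with the parser, question matcher, and keyword searcher left as pluggable callables; the class, field, and function names are hypothetical, and the sketch omits the FAQ database, answer templates, and the offline log analyzer.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ParseResult:
    tree: Optional[str]   # fully parsed result, when the parser succeeds
    fragments: list       # partially parsed fragments otherwise
    keywords: list        # keywords extracted for fallback search

class SearchEngine:
    def __init__(self, parser, question_matcher, keyword_searcher):
        self.parser = parser
        self.question_matcher = question_matcher
        self.keyword_searcher = keyword_searcher
        self.log = []

    def answer(self, query):
        parsed = self.parser(query)
        # Try FAQ matching on the parsed output first, then add keyword hits.
        candidates = self.question_matcher(parsed.tree, parsed.fragments)
        candidates += self.keyword_searcher(parsed.keywords)
        return candidates

    def confirm(self, query, candidates, chosen):
        # Logged queries and confirmations feed an offline analysis pass
        # that retrains the parser and question matcher.
        self.log.append({"query": query, "candidates": candidates, "chosen": chosen})

# Toy wiring with stand-in components.
def toy_parser(query):
    return ParseResult(tree=None, fragments=[query], keywords=query.split())

engine = SearchEngine(
    parser=toy_parser,
    question_matcher=lambda tree, fragments: ["FAQ: How do I reset my password?"],
    keyword_searcher=lambda keywords: ["Article: password reset"],
)
answers = engine.answer("reset my password")
engine.confirm("reset my password", answers, chosen=answers[0])
```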
Abstract:
A handwriting signal processing front-end method and apparatus for a handwriting training and recognition system which includes non-uniform segmentation and feature extraction in combination with multiple vector quantization. In a training phase, digitized handwriting samples are partitioned into segments of unequal length. Features are extracted from the segments and are grouped to form feature vectors for each segment. Groups of adjacent feature vectors are then combined to form input frames. Feature-specific vectors are formed by grouping features of the same type from each of the feature vectors within a frame. Multiple vector quantization is then performed on each feature-specific vector to statistically model the distributions of the vectors for each feature by identifying clusters of the vectors and determining the mean locations of the vectors in the clusters. Each mean location is represented by a codebook symbol and this information is stored in a codebook for each feature. These codebooks are then used to train a recognition system. In the testing phase, where the recognition system is to identify handwriting, digitized test handwriting is first processed as in the training phase to generate feature-specific vectors from input frames. Multiple vector quantization is then performed on each feature-specific vector to represent the feature-specific vector using the codebook symbols that were generated for that feature during training. The resulting series of codebook symbols effects a reduced representation of the sampled handwriting data and is used for subsequent handwriting recognition.
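A small sketch of the multiple-vector-quantization step at recognition time, assuming one codebook per feature type and Euclidean nearest-codeword lookup; codebook training (clustering and mean estimation) is omitted, and all names and dimensions are illustrative.

```python
import numpy as np

def feature_specific_vectors(frame):
    """Regroup a frame of per-segment feature vectors into one vector per
    feature type (all x-features together, all y-features together, ...)."""
    frame = np.asarray(frame)            # shape: (segments, feature types)
    return [frame[:, i] for i in range(frame.shape[1])]

def quantize(frame, codebooks):
    """Encode a frame as one codebook symbol per feature type by choosing
    the nearest codeword in that feature's own codebook."""
    symbols = []
    for vec, codebook in zip(feature_specific_vectors(frame), codebooks):
        distances = np.linalg.norm(codebook - vec, axis=1)
        symbols.append(int(np.argmin(distances)))
    return symbols

# Toy example: 3 segments per frame, 2 feature types, 4 codewords per codebook.
rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(4, 3)) for _ in range(2)]
frame = rng.normal(size=(3, 2))
print(quantize(frame, codebooks))
```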
Abstract:
The branching decision for each node in a vector quantization (VQ) binary tree is made by a simple comparison of a pre-selected element of the candidate vector with a stored threshold resulting in a binary decision for reaching the next lower level. Each node has a preassigned element and threshold value. Conventional centroid distance training techniques (such as LBG and k-means) are used to establish code-book indices corresponding to a set of VQ centroids. The set of training vectors is used a second time to select a vector element and threshold value at each node that approximately splits the data evenly. After processing the training vectors through the binary tree using threshold decisions, a histogram is generated for each code-book index that represents the number of times a training vector belonging to a given index set appeared at each index. The final quantization is accomplished by processing a candidate vector through the binary tree and then selecting the nearest centroid among those represented in the resulting histogram. Accuracy comparable to that achieved by conventional binary tree VQ is realized but with almost an order-of-magnitude increase in processing speed.
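A compact sketch of the quantization pass, assuming each interior node stores one element index and one threshold, and each leaf stores the codebook indices observed in its histogram; the class names and the toy tree are illustrative only.

```python
import numpy as np

class ThresholdNode:
    """Interior node: compare one pre-selected vector element against a
    stored threshold to choose the left or right child."""
    def __init__(self, element, threshold, left, right):
        self.element, self.threshold = element, threshold
        self.left, self.right = left, right

class Leaf:
    """Leaf node: the codebook indices that training vectors reaching this
    leaf most often belonged to (the histogram's support)."""
    def __init__(self, centroid_indices):
        self.centroid_indices = centroid_indices

def quantize(vector, node, centroids):
    # Walk the tree using scalar comparisons only ...
    while isinstance(node, ThresholdNode):
        node = node.left if vector[node.element] <= node.threshold else node.right
    # ... then do a full distance search restricted to the leaf's candidates.
    candidates = centroids[node.centroid_indices]
    distances = np.linalg.norm(candidates - vector, axis=1)
    return node.centroid_indices[int(np.argmin(distances))]

centroids = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
tree = ThresholdNode(0, 0.5, Leaf([0, 2]), Leaf([1, 3]))
print(quantize(np.array([0.9, 0.1]), tree, centroids))  # -> 3
```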
Abstract:
A method for optimizing a language model is presented comprising developing an initial language model from a lexicon and segmentation derived from a received corpus using a maximum match technique, and iteratively refining the initial language model by dynamically updating the lexicon and re-segmenting the corpus according to statistical principles until a threshold of predictive capability is achieved.
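A sketch of the greedy forward maximum-match segmentation used to produce the initial segmentation, with a toy lexicon and an assumed maximum word length; the iterative statistical refinement of the lexicon and segmentation is not shown.

```python
def max_match(text, lexicon, max_word_len=4):
    """Greedy forward maximum matching: at each position take the longest
    lexicon entry that matches, falling back to a single character."""
    segments, i = [], 0
    while i < len(text):
        for length in range(min(max_word_len, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if length == 1 or candidate in lexicon:
                segments.append(candidate)
                i += length
                break
    return segments

lexicon = {"语言", "模型", "语言模型"}
print(max_match("语言模型优化", lexicon))  # ['语言模型', '优', '化']
```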
Abstract:
A system for automatic subcharacter unit and lexicon generation for handwriting recognition comprises a processing unit, a handwriting input device, and a memory wherein a segmentation unit, a subcharacter generation unit, a lexicon unit, and a modeling unit reside. The segmentation unit generates feature vectors corresponding to sample characters. The subcharacter generation unit clusters feature vectors and assigns each feature vector associated with a given cluster an identical label. The lexicon unit constructs a lexical graph for each character in a character set. The modeling unit generates a Hidden Markov Model for each set of identically-labeled feature vectors. After a first set of lexical graphs and Hidden Markov Models have been created, the subcharacter generation unit determines for each feature vector which Hidden Markov Model produces a highest likelihood value. The subcharacter generation unit relabels each feature vector according to the highest likelihood value, after which the lexicon unit and the modeling unit generate a new set of lexical graphs and a new set of Hidden Markov Models, respectively. The feature vector relabeling, lexicon generation, and Hidden Markov Model generation are performed iteratively until a convergence criterion is met. The final set of Hidden Markov Model parameters provides a set of subcharacter units for handwriting recognition, where the subcharacter units are derived from information inherent in the sample characters themselves.
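A schematic of the outer relabeling loop only, with `train_models` and `score` standing in for the Hidden Markov Model training and likelihood evaluation steps; the convergence test shown (the labeling no longer changes) is one possible criterion, not necessarily the one used in the patent.

```python
def relabel_until_converged(feature_vectors, initial_labels,
                            train_models, score, max_iters=20):
    """Iterative relabeling loop: train one model per label, move each
    feature vector to the label whose model scores it highest, and repeat
    until the labeling stops changing.  `train_models(vectors, labels)` is
    assumed to return a dict mapping label -> model, and `score(model, v)`
    a likelihood-like value."""
    labels = list(initial_labels)
    for _ in range(max_iters):
        models = train_models(feature_vectors, labels)
        new_labels = [max(models, key=lambda m: score(models[m], v))
                      for v in feature_vectors]
        if new_labels == labels:          # convergence criterion
            break
        labels = new_labels
    return labels, models
```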
Abstract:
A speech recognition system for continuous Mandarin Chinese speech comprises a microphone, an A/D converter, a syllable recognition system, an integrated tone classifier, and a confidence score augmentor. The syllable recognition system generates N-best theories with initial confidence scores. The integrated tone classifier has a pitch estimator to estimate the pitch of the input once and a long-term tone analyzer to segment the estimated pitch according to the syllables of each of the N-best theories. The long-term tone analyzer performs long-term tonal analysis on the segmented, estimated pitch and generates a long-term tonal confidence signal. The confidence score augmentor receives the initial confidence scores and the long-term tonal confidence signals, modifies each initial confidence score according to the corresponding long-term tonal confidence signal, re-ranks the N-best theories according to the augmented confidence scores, and outputs the re-ranked N-best theories.
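A minimal sketch of the score augmentation and re-ranking step, assuming the initial and long-term tonal confidences are combined by a weighted sum; the combination rule, the weight, and the example values are assumptions rather than details from the patent.

```python
def rerank(theories, tonal_confidences, weight=0.3):
    """Combine each theory's initial confidence with its long-term tonal
    confidence and sort the N-best list by the augmented score."""
    augmented = [
        (theory, (1 - weight) * score + weight * tonal)
        for (theory, score), tonal in zip(theories, tonal_confidences)
    ]
    return sorted(augmented, key=lambda pair: pair[1], reverse=True)

# Toy N-best list of syllable/tone theories with initial confidences.
nbest = [("ma1 ma1", 0.62), ("ma3 ma1", 0.58), ("ma1 ma3", 0.55)]
tonal = [0.40, 0.75, 0.30]
print(rerank(nbest, tonal))  # "ma3 ma1" moves to the top
```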
Abstract:
A speech recognition memory compression method and apparatus subpartitions probability density function (pdf) space along the hidden Markov model (HMM) index into packets of typically 4 to 8 log-pdf values. Vector quantization techniques are applied using a logarithmic distance metric and a probability weighted logarithmic probability space for the splitting of clusters. Experimental results indicate a significant reduction in memory can be obtained with little increase in overall speech recognition error.
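A sketch of the packet formation and table compression, assuming a 2-D table of log-pdf values indexed by HMM index and output symbol, a packet size of 4, and a pre-trained packet codebook; the cluster-splitting training procedure with probability weighting is omitted, and all names and sizes are illustrative.

```python
import numpy as np

def compress_log_pdfs(log_pdf, codebook, packet_size=4):
    """Group log-pdf values into packets of `packet_size` along the HMM
    index and replace each packet with the index of its nearest codeword,
    so the table is stored as small integers instead of floats."""
    n_hmm, n_obs = log_pdf.shape
    assert n_hmm % packet_size == 0
    # Reshape so each (block, output symbol) pair yields one packet of
    # `packet_size` consecutive log-pdf values along the HMM index.
    packets = (log_pdf.reshape(n_hmm // packet_size, packet_size, n_obs)
                      .transpose(0, 2, 1)
                      .reshape(-1, packet_size))
    # Distance computed directly on log-pdf values (a logarithmic metric).
    distances = np.linalg.norm(packets[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(distances, axis=1).reshape(n_hmm // packet_size, n_obs)

rng = np.random.default_rng(1)
table = np.log(rng.random((8, 5)))      # 8 HMM indices, 5 output symbols
codebook = np.log(rng.random((16, 4)))  # 16 codewords of 4 log-pdf values each
print(compress_log_pdfs(table, codebook).shape)  # (2, 5) packet indices
```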