Abstract:
Structure of conversations between users and agents and/or systems is discovered and interactively displayed to analysts, thereby better supporting development of automated conversation handling systems for different domains. A corpus of prior dialogs of users with agents (without preexisting semantic labels indicating purposes for different parts of the dialogs) is taken as input, and embeddings are generated for textual units (e.g., rounds) of the dialogs. The embeddings are used to cluster the textual units, and the clusters and their relationships are visualized within a user interface that analysts may use to explore and fine-tune the structure of the conversations.
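The clustering step described above can be sketched in miniature. This is an illustrative toy, not the disclosed implementation: it uses bag-of-words term counts in place of learned embeddings, and a greedy one-pass grouping in place of whatever clustering algorithm an embodiment would actually use. All names (`embed`, `cluster_rounds`, the similarity threshold) are hypothetical.

```python
from collections import Counter
import math

def embed(text):
    """Toy 'embedding': a term-frequency vector keyed by word.
    A real system would use a learned sentence encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_rounds(rounds, threshold=0.5):
    """Greedy clustering: each round joins the first cluster whose
    representative (the first member's embedding) is similar enough;
    otherwise it starts a new cluster."""
    clusters = []  # list of (representative_embedding, [member rounds])
    for r in rounds:
        e = embed(r)
        for rep, members in clusters:
            if cosine(rep, e) >= threshold:
                members.append(r)
                break
        else:
            clusters.append((e, [r]))
    return [members for _, members in clusters]
```

The resulting clusters and the transitions between them are what a visualization layer would then render for analysts to explore.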
Abstract:
Within an environment in which users converse at least partly with human agents to accomplish a desired task, a server assists the agents by identifying workflows that are most applicable to the current conversation. Workflow selection functionality identifies one or more candidate workflows based on techniques such as user intent inference, conversation state tracking, or search, according to various embodiments. The identified candidate workflows are either automatically selected on behalf of the agent, or are presented to the agent for manual selection.
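The candidate-selection logic above can be sketched as follows. This is a simplified stand-in, assuming keyword-overlap scoring as a proxy for the intent-inference, state-tracking, or search techniques the abstract names; the function names and the auto-selection threshold are hypothetical.

```python
def score_workflow(workflow_keywords, conversation):
    """Fraction of a workflow's trigger keywords present in the conversation."""
    words = set(conversation.lower().split())
    hits = sum(1 for k in workflow_keywords if k in words)
    return hits / len(workflow_keywords)

def candidate_workflows(workflows, conversation, top_k=2, auto_threshold=0.8):
    """Rank workflows by score. If the best score clears the threshold,
    auto-select it on the agent's behalf; otherwise return the top
    candidates for manual selection by the agent."""
    ranked = sorted(workflows.items(),
                    key=lambda kv: score_workflow(kv[1], conversation),
                    reverse=True)
    best_name, best_keywords = ranked[0]
    if score_workflow(best_keywords, conversation) >= auto_threshold:
        return ("auto", [best_name])
    return ("manual", [name for name, _ in ranked[:top_k]])
```

The auto/manual split mirrors the abstract's two modes: automatic selection on behalf of the agent versus presentation for manual choice.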
Abstract:
A computer-implemented method for providing agent-assisted transcriptions of user utterances. A user utterance is received in response to a prompt provided to the user at a remote client device. An automatic transcription is generated from the utterance using a language model based upon an application or context, and presented to a human agent. The agent reviews the transcription and may replace at least a portion of it with a corrected transcription. As the agent inputs the corrected transcription, accelerants comprising suggested text to be input are presented to the agent. The accelerants may be determined based upon an agent input, an application or context of the transcription, the portion of the transcription being replaced, or any combination thereof. In some cases, the user provides textual input, and the agent transcribes an intent associated with the input with the aid of one or more accelerants.
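One simple form an accelerant could take is prefix completion against context-appropriate phrases. The sketch below assumes that interpretation; the function name, the phrase list, and the case-insensitive prefix rule are all illustrative, not taken from the abstract.

```python
def suggest_accelerants(typed_prefix, context_phrases, limit=3):
    """Return up to `limit` suggested completions whose text starts
    with what the agent has typed so far (case-insensitive)."""
    p = typed_prefix.lower()
    return [s for s in context_phrases if s.lower().startswith(p)][:limit]
```

In practice the candidate phrase list would itself be conditioned on the application, the context, and the portion of the transcription being replaced, per the abstract.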
Abstract:
An interactive response system combines human intelligence (HI) subsystems with artificial intelligence (AI) subsystems to facilitate overall capability of multi-channel user interfaces. The system permits imperfect AI subsystems to nonetheless lessen the burden on HI subsystems. A combined AI and HI proxy is used to implement an interactive omnichannel system, and the proxy dynamically determines how many AI and HI subsystems are to perform recognition for any particular utterance, based on factors such as confidence thresholds of the AI recognition and availability of HI resources. Furthermore, the system uses information from prior recognitions to automatically build, test, predict confidence for, and maintain AI and HI models for system recognition improvements.
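The proxy's dispatch decision can be illustrated with a minimal routing rule. This is one plausible policy consistent with the abstract's factors (AI confidence threshold, HI availability), not the disclosed logic; the threshold value and the dual-agent check are assumptions.

```python
def route_utterance(ai_confidence, hi_agents_available,
                    confidence_threshold=0.85):
    """Decide which subsystems handle an utterance, returning the
    list of subsystem labels to invoke."""
    if ai_confidence >= confidence_threshold:
        return ["AI"]            # AI result is trusted outright
    if hi_agents_available >= 2:
        return ["HI", "HI"]      # two humans interpret for error-checking
    if hi_agents_available == 1:
        return ["AI", "HI"]      # one human verifies the AI guess
    return ["AI"]                # no humans free; accept AI best-effort
```

The point of such a policy is the one the abstract makes: an imperfect AI still absorbs the high-confidence traffic, so scarce HI capacity is spent only where it adds accuracy.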
Abstract:
Disclosed herein are methods for presenting speech from a selected text that is on a computing device. The method includes presenting text on a touch-sensitive display at a size within a threshold level, so that the computing device can accurately determine the user's intent when the user touches the screen. Once the touch has been received, the computing device identifies and interprets the portion of text to be selected, and subsequently presents that text audibly to the user.
Abstract:
Disclosed herein are systems, computer-implemented methods, and computer-readable media for recognizing speech. The method includes receiving speech from a user, perceiving at least one speech dialect in the received speech, selecting at least one grammar from a plurality of optimized dialect grammars based on at least one score associated with the perceived at least one speech dialect, and recognizing the received speech with the selected at least one grammar. Selecting at least one grammar can be further based on a user profile. Multiple grammars can be blended. Predefined parameters can include pronunciation differences, vocabulary, and sentence structure. Optimized dialect grammars can be domain specific. The method can further include recognizing initial received speech with a generic grammar until an optimized dialect grammar is selected. Selecting at least one grammar from a plurality of optimized dialect grammars can be based on a certainty threshold.
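The score-plus-certainty-threshold selection reads naturally as an argmax with a fallback, which can be sketched as follows. The dialect names, scores, and threshold value here are hypothetical; the abstract does not specify how scores are produced.

```python
def select_grammar(dialect_scores, certainty_threshold=0.6):
    """Pick the optimized dialect grammar with the highest score;
    fall back to the generic grammar when no score clears the
    certainty threshold (or no dialect was perceived at all)."""
    if not dialect_scores:
        return "generic"
    dialect, score = max(dialect_scores.items(), key=lambda kv: kv[1])
    return dialect if score >= certainty_threshold else "generic"
```

Returning "generic" below the threshold matches the abstract's note that initial speech can be recognized with a generic grammar until an optimized dialect grammar is confidently selected.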
Abstract:
Disclosed herein are systems, methods, and computer-readable storage media for selecting a speech recognition model in a standardized speech recognition infrastructure. The system receives speech from a user, and if a user-specific supervised speech model associated with the user is available, retrieves the supervised speech model. If the user-specific supervised speech model is unavailable and if an unsupervised speech model is available, the system retrieves the unsupervised speech model. If the user-specific supervised speech model and the unsupervised speech model are unavailable, the system retrieves a generic speech model associated with the user. Next, the system recognizes the received speech from the user with the retrieved model. In one embodiment, the system trains a speech recognition model in a standardized speech recognition infrastructure. In another embodiment, the system handshakes with a remote application in a standardized speech recognition infrastructure.
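The supervised → unsupervised → generic fallback described above is a straightforward cascade; a minimal sketch, with hypothetical model registries keyed by user ID:

```python
def select_speech_model(user_id, supervised, unsupervised, generic):
    """Model-selection cascade: prefer the user's supervised model,
    then the user's unsupervised model, then the generic model."""
    if user_id in supervised:
        return supervised[user_id]
    if user_id in unsupervised:
        return unsupervised[user_id]
    return generic
```

The recognized speech would then be decoded against whichever model the cascade returned.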
Abstract:
An interactive voice and data response system that directs input to a voice-, text-, and web-capable software-based router, which is able to intelligently respond to the input by drawing on a combination of human agents, advanced speech recognition, and expert systems, connected to the router via a TCP/IP network. The digitized input is broken down into components so that the customer interaction is managed as a series of small tasks performed by a pool of human agents, rather than one ongoing conversation between the customer and a single agent. The router manages the interactions and keeps pace with a real-time conversation. The system utilizes both speech recognition and human intelligence for purposes of interpreting customer utterances or customer text, where the role of the human agent(s) is to input the intent of caller utterances, and where the computer system—not the human agent—determines which response to provide given the customer's stated intent (as interpreted/captured by the human agents). The system may use more than one human agent, or both human agents and speech recognition software, to simultaneously interpret the same component for error-checking and interpretation accuracy.
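The core routing idea, that an interaction is split into components and farmed out across an agent pool rather than bound to one agent, can be sketched with a simple round-robin assignment. The component names and agent IDs are illustrative; a real router would also weigh agent load and skill.

```python
from itertools import cycle

def assign_components(components, agent_pool):
    """Distribute utterance components round-robin across the agent
    pool, so no single agent handles the whole conversation."""
    agents = cycle(agent_pool)
    return [(component, next(agents)) for component in components]
```

Because each agent sees only a small task, the router can keep several components in flight at once and still keep pace with a real-time conversation.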
Abstract:
A virtual assistant system for communicating with customers uses human intelligence to correct any errors in the system AI, while collecting data for machine learning and future improvements for more automation. The system may use a modular design, with separate components for carrying out different system functions and sub-functions, and with frameworks for selecting the component best able to respond to a given customer conversation.
Abstract:
A speech interpretation module interprets the audio of user utterances as sequences of words. To do so, the speech interpretation module parameterizes a literal corpus of expressions by identifying portions of the expressions that correspond to known concepts, and generates a parameterized statistical model from the resulting parameterized corpus. When speech is received, the speech interpretation module uses a hierarchical speech recognition decoder that uses both the parameterized statistical model and language sub-models that specify how to recognize a sequence of words. The separation of the language sub-models from the statistical model beneficially reduces the size of the literal corpus needed for training, reduces the size of the resulting model, provides more fine-grained interpretation of concepts, and improves computational efficiency by allowing run-time incorporation of the language sub-models.
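The parameterization step, replacing concept-bearing spans in the literal corpus with concept tokens before training the statistical model, can be sketched as follows. The placeholder token format and single-word concept matching are simplifying assumptions; real concepts would cover multi-word spans and be recognized by the language sub-models at decode time.

```python
def parameterize(expression, concepts):
    """Replace any word belonging to a known concept with that
    concept's placeholder token, e.g. 'boston' -> '<CITY>'."""
    out = []
    for word in expression.lower().split():
        for token, members in concepts.items():
            if word in members:
                out.append(token)
                break
        else:
            out.append(word)
    return " ".join(out)
```

Training on parameterized expressions like "fly to &lt;CITY&gt; tomorrow" is what lets one corpus sentence stand in for every city, which is the source of the training-data and model-size reductions the abstract claims.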