Abstract:
A virtual assistant system for communicating with customers uses human intelligence to correct any errors in the system AI, while collecting data for machine learning and future improvements toward greater automation. The system may use a modular design, with separate components for carrying out different system functions and sub-functions, and with frameworks for selecting the component best able to respond to a given customer conversation. The system may have agent assistance functionality that uses natural language processing to identify concepts in a user conversation and to illustrate those concepts within a graphical user interface of a human agent so that the human agent can more accurately and more rapidly assist the user in accomplishing the user's conversational objectives.
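The routing-plus-escalation loop described above can be pictured with a minimal sketch. Everything here (the Component and Router classes, the confidence threshold, the training log) is a hypothetical illustration of the idea, not the patented implementation:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Component:
    name: str
    handle: Callable  # message -> (confidence, response)

@dataclass
class Router:
    components: list
    threshold: float = 0.8              # below this, escalate to a human
    training_log: list = field(default_factory=list)

    def respond(self, message: str, human_agent: Callable) -> str:
        # Ask every component to score the message and pick the best one.
        scored = [(c.handle(message), c.name) for c in self.components]
        (confidence, response), name = max(scored, key=lambda s: s[0][0])
        if confidence >= self.threshold:
            return response
        # The AI is unsure: a human corrects it, and the correction is
        # logged as labeled data for future machine learning.
        corrected = human_agent(message)
        self.training_log.append({"message": message, "component": name,
                                  "ai_response": response,
                                  "human_response": corrected})
        return corrected

router = Router([Component("faq", lambda m: (0.3, "See our FAQ."))])
print(router.respond("my bill is wrong", lambda m: "I've corrected the bill."))
print(router.training_log)
```

Each escalation thus yields a labeled (message, human response) pair, which is the data-collection loop the abstract describes.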
Abstract:
Systems and methods are disclosed for determining a move driven by an interaction. In some embodiments, a processor determines an operational state of an interaction with a user based on parameter values of a data structure. The processor identifies a plurality of candidate moves for changing the operational state by determining a domain in which the interaction is occurring, retrieving a set of candidate moves that correspond to the domain from a knowledge graph, and adding the set to the plurality of candidate moves. The processor encodes input of the user received during the interaction into encoded terms, and determines a move for changing the operational state based on a match of the encoded terms to the set of candidate moves. The processor updates the parameter values of the data structure based on the move to reflect the current operational state resulting from the move.
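A toy sketch of that move-selection loop might look like the following; the knowledge-graph lookup, the bag-of-words encoder, and the overlap-based matcher are all stand-ins chosen for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Move:
    name: str
    trigger_terms: frozenset

# Toy "knowledge graph": domain -> candidate moves for that domain.
KNOWLEDGE_GRAPH = {
    "billing": [
        Move("explain_charge", frozenset({"charge", "fee", "why"})),
        Move("issue_refund", frozenset({"refund", "money", "back"})),
    ],
}

def encode(utterance: str) -> set:
    # Stand-in encoder: a lowercase bag of words. A real system would use
    # normalization or a learned embedding here.
    return set(utterance.lower().split())

def select_move(state: dict, utterance: str) -> Optional[Move]:
    # 1. Use the operational state to determine the interaction's domain,
    #    and retrieve that domain's candidate moves.
    candidates = KNOWLEDGE_GRAPH.get(state["domain"], [])
    # 2. Encode the user input and match it against the candidates.
    terms = encode(utterance)
    best = max(candidates, key=lambda m: len(m.trigger_terms & terms),
               default=None)
    if best is None or not (best.trigger_terms & terms):
        return None
    # 3. Update the data structure's parameter values to reflect the
    #    new operational state produced by the move.
    state["last_move"] = best.name
    return best

state = {"domain": "billing", "last_move": None}
move = select_move(state, "i want my money back")
print(move.name if move else "no move", state)
```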
Abstract:
A speech interpretation module interprets the audio of user utterances as sequences of words. To do so, the speech interpretation module parameterizes a literal corpus of expressions by identifying portions of the expressions that correspond to known concepts, and generates a parameterized statistical model from the resulting parameterized corpus. When speech is received, the speech interpretation module uses a hierarchical speech recognition decoder that draws on both the parameterized statistical model and language sub-models that specify how to recognize a sequence of words. The separation of the language sub-models from the statistical model beneficially reduces the size of the literal corpus needed for training, reduces the size of the resulting model, provides more fine-grained interpretation of concepts, and improves computational efficiency by allowing run-time incorporation of the language sub-models.
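One way to picture the parameterization is as templates with concept slots that small sub-recognizers fill at decode time. The sketch below uses regular expressions as stand-ins for the statistical model and its sub-models; the templates and slot names are invented for illustration:

```python
import re

# Parameterized corpus: literal phrases with concept spans replaced by slots.
TEMPLATES = [
    "set an alarm for <TIME>",
    "call <CONTACT>",
]

# "Language sub-models": one pattern per concept, standing in for the
# per-concept recognizers the abstract describes.
SUB_MODELS = {
    "TIME": r"\d{1,2}(?::\d{2})?\s?(?:am|pm)",
    "CONTACT": r"[a-z]+",
}

def compile_template(template: str) -> re.Pattern:
    # Expand each <SLOT> into its sub-model pattern as a named group, so
    # decoding yields both the matched words and the concept values.
    pattern = re.escape(template)
    for slot, sub in SUB_MODELS.items():
        pattern = pattern.replace(re.escape(f"<{slot}>"), f"(?P<{slot}>{sub})")
    return re.compile(f"^{pattern}$")

def decode(words: str):
    # Try each template; the sub-models are incorporated at decode time,
    # so they can change without retraining the top-level model.
    for template in TEMPLATES:
        match = compile_template(template).match(words)
        if match:
            return {"template": template, **match.groupdict()}
    return None

print(decode("set an alarm for 7:30 pm"))
```

Because the slot recognizers live outside the templates, the training corpus needs only one parameterized expression per phrasing rather than one literal expression per concept value, which is the size reduction the abstract claims.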
Abstract:
Disclosed herein are systems, methods, and non-transitory computer-readable storage media for approximating responses to a user speech query in voice-enabled search based on metadata that include demographic features of the speaker. A system practicing the method recognizes received speech from a speaker to generate recognized speech, identifies metadata about the speaker from the received speech, and feeds the recognized speech and the metadata to a question-answering engine. Identifying the metadata about the speaker is based on voice characteristics of the received speech. The demographic features can include age, gender, socio-economic group, nationality, and/or region. The metadata identified about the speaker from the received speech can be combined with or override self-reported speaker demographic information.
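A rough sketch of that pipeline follows, with placeholder functions standing in for the acoustic classifier, the speech recognizer, and the question-answering engine; none of these names or values come from the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Demographics:
    age_group: Optional[str] = None
    gender: Optional[str] = None
    region: Optional[str] = None

def infer_from_voice(audio: bytes) -> Demographics:
    # Stand-in for an acoustic classifier that estimates demographic
    # features from voice characteristics (pitch, formants, accent, ...).
    return Demographics(age_group="adult", region="south")

def merge(self_reported: Demographics, inferred: Demographics) -> Demographics:
    # Voice-derived metadata fills gaps in, or overrides, the
    # self-reported profile, as the abstract describes.
    return Demographics(
        age_group=inferred.age_group or self_reported.age_group,
        gender=inferred.gender or self_reported.gender,
        region=inferred.region or self_reported.region)

def answer(audio: bytes, self_reported: Demographics) -> str:
    recognized = "where can i watch the game"      # stand-in ASR result
    metadata = merge(self_reported, infer_from_voice(audio))
    # A QA engine could use the metadata to disambiguate or rank answers.
    return f"query={recognized!r} region={metadata.region} age={metadata.age_group}"

print(answer(b"...", Demographics(gender="f", region="north")))
```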
Abstract:
Structure of conversations between users and agents and/or systems is discovered and interactively displayed to analysts, thereby better supporting development of automated conversation handling systems for different domains. A corpus of prior dialogs of users with agents (without preexisting semantic labels indicating purposes for different parts of the dialogs) is taken as input, and embeddings are generated for textual units (e.g., rounds) of the dialogs. The embeddings are used to cluster the textual units, and the clusters and their relationships are visualized within a user interface that analysts may use to explore and fine-tune the structure of the conversations.
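A condensed sketch of the pipeline (embed textual units, cluster them, summarize clusters for display) might look like this; it assumes scikit-learn is available and uses a toy hashing embedding in place of a learned encoder:

```python
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Toy hashing embedding; a real system would use a learned encoder.
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

rounds = [
    "i want to reset my password",
    "how do i reset my password",
    "where is my order",
    "my order has not arrived",
]
X = np.stack([embed(r) for r in rounds])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Label each cluster by its most frequent tokens, as an analyst UI might.
for cluster in sorted(set(labels)):
    tokens = Counter(t for r, l in zip(rounds, labels) if l == cluster
                     for t in r.split())
    print(cluster, tokens.most_common(3))
```

The analyst-facing interface would sit on top of output like this, letting clusters be renamed, split, or merged to fine-tune the discovered conversation structure.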
Abstract:
Within an environment in which users converse at least partly with human agents to accomplish a desired task, a server assists the agents by identifying workflows that are most applicable to the current conversation. Workflow selection functionality identifies one or more candidate workflows based on techniques such as user intent inference, conversation state tracking, or search, according to various embodiments. The identified candidate workflows are either automatically selected on behalf of the agent, or are presented to the agent for manual selection.
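A minimal sketch of candidate-workflow selection follows; the keyword-overlap scoring is a stand-in for the intent-inference, state-tracking, and search techniques the abstract mentions, and the automatic-selection threshold is invented:

```python
from dataclasses import dataclass

@dataclass
class Workflow:
    name: str
    keywords: frozenset

WORKFLOWS = [
    Workflow("cancel_subscription", frozenset({"cancel", "subscription"})),
    Workflow("update_payment", frozenset({"card", "payment", "update"})),
]

def rank_workflows(conversation: str, top_k: int = 3):
    # Score each workflow by keyword overlap with the conversation so far.
    tokens = set(conversation.lower().split())
    scored = [(len(w.keywords & tokens) / len(w.keywords), w)
              for w in WORKFLOWS]
    return sorted(scored, key=lambda s: s[0], reverse=True)[:top_k]

def assist(conversation: str, auto_threshold: float = 0.75):
    ranked = rank_workflows(conversation)
    best_score, best = ranked[0]
    if best_score >= auto_threshold:
        # Confident enough: select the workflow on the agent's behalf.
        return "auto-selected", best.name
    # Otherwise present the candidates for manual selection by the agent.
    return "suggested", [w.name for _, w in ranked]

print(assist("i need to cancel my subscription today"))
```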
Abstract:
A computer-implemented method provides agent-assisted transcriptions of user utterances. A user utterance is received in response to a prompt provided to the user at a remote client device. An automatic transcription is generated from the utterance using a language model based upon an application or context, and presented to a human agent. The agent reviews the transcription and may replace at least a portion of it with a corrected transcription. As the agent inputs the corrected transcription, accelerants comprising suggested text to be input are presented to the agent. The accelerants may be determined based upon an agent input, an application or context of the transcription, the portion of the transcription being replaced, or any combination thereof. In some cases, the user provides textual input, for which the agent transcribes an associated intent with the aid of one or more accelerants.
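The accelerant idea can be sketched as context-aware autocompletion over the agent's partial input; the phrase lists and function name below are illustrative assumptions, not the patented logic:

```python
CONTEXT_PHRASES = {
    "banking": ["check my account balance", "transfer funds to savings",
                "report a lost card"],
    "travel": ["book a flight to", "change my reservation"],
}

def accelerants(context: str, typed_prefix: str, limit: int = 3) -> list:
    # Offer context-appropriate phrases that extend what the agent
    # has typed so far.
    prefix = typed_prefix.lower()
    return [p for p in CONTEXT_PHRASES.get(context, [])
            if p.startswith(prefix) and p != prefix][:limit]

# As the agent types "che" while correcting a banking transcription,
# the interface can propose the completed phrase.
print(accelerants("banking", "che"))  # ['check my account balance']
```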