Abstract:
There is provided a method for automatically determining an output utterance for a virtual agent based on the output of two or more conversational interfaces. A candidate output utterance can be received from each of the two or more conversational interfaces, and one of the received candidate utterances can be selected based on a predetermined priority factor. The selected utterance can then be output by the virtual agent.
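A minimal sketch of the selection step, assuming the predetermined priority factor is a fixed per-interface priority rank (the interface names, ranking, and example utterances below are hypothetical, not taken from the abstract):

from dataclasses import dataclass

@dataclass
class Candidate:
    interface: str   # conversational interface that produced the utterance
    utterance: str   # candidate output utterance

# Hypothetical predetermined priority factor: lower rank = higher priority.
INTERFACE_PRIORITY = {"task_dialog": 0, "faq_retrieval": 1, "chitchat": 2}

def select_utterance(candidates):
    # Pick the candidate whose source interface has the highest priority;
    # candidates from unknown interfaces fall to the back of the queue.
    best = min(candidates, key=lambda c: INTERFACE_PRIORITY.get(c.interface, 99))
    return best.utterance

# The task-oriented interface outranks chitchat, so its utterance is output.
print(select_utterance([
    Candidate("chitchat", "Nice weather today!"),
    Candidate("task_dialog", "Your flight is confirmed for 9 a.m."),
]))

The priority factor could equally be a per-utterance confidence score or a context-dependent weighting; the rank table above is only one reading of the claim.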
Abstract:
Systems and methods for automatically generating at least one of facial expressions, body gestures, vocal expressions, or verbal expressions for a virtual agent based on the emotion, mood, and/or personality of a user and/or the virtual agent are provided. Systems and methods for determining a user's emotion, mood, and/or personality are also provided.
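A minimal sketch of the expression-generation step, assuming a simple lookup keyed on the user's detected emotion and the agent's configured mood (all emotion labels, moods, and expression parameters below are illustrative assumptions):

# Hypothetical rules mapping (user emotion, agent mood) to facial, body,
# vocal, and verbal expression parameters; every value here is illustrative.
EXPRESSION_RULES = {
    ("sad", "empathetic"): {
        "facial": "concerned_brow",
        "gesture": "lean_forward",
        "vocal": "soft_slow",
        "verbal": "I'm sorry to hear that.",
    },
    ("happy", "empathetic"): {
        "facial": "smile",
        "gesture": "open_palms",
        "vocal": "bright",
        "verbal": "That's wonderful!",
    },
}

def generate_expressions(user_emotion, agent_mood):
    # Return one set of expression parameters for the virtual agent;
    # unmatched combinations fall back to a neutral presentation.
    default = {"facial": "neutral", "gesture": "idle", "vocal": "even", "verbal": ""}
    return EXPRESSION_RULES.get((user_emotion, agent_mood), default)

print(generate_expressions("sad", "empathetic"))

In practice the user's emotion, mood, and personality would come from a separate detection component, as the second sentence of the abstract suggests; a static rule table merely stands in for that pipeline here.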
Abstract:
A method for a virtual agent to process natural language utterances from a user is provided. The method can include receiving a natural language utterance from the user, determining a type of the utterance, and, based on the utterance type, determining an action for the virtual agent to take. The virtual agent can then execute the action.
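A minimal sketch of the type-based dispatch, assuming three toy utterance types and a keyword heuristic standing in for a trained classifier (all type names, keywords, and actions below are hypothetical):

from enum import Enum, auto

class UtteranceType(Enum):
    QUESTION = auto()
    COMMAND = auto()
    STATEMENT = auto()

def classify(utterance):
    # Toy surface-form classifier; a real agent would use a trained model.
    words = utterance.split()
    if utterance.rstrip().endswith("?"):
        return UtteranceType.QUESTION
    if words and words[0].lower() in {"open", "play", "set", "call"}:
        return UtteranceType.COMMAND
    return UtteranceType.STATEMENT

def act(utterance):
    # Map the determined utterance type to an action and execute it;
    # the action strings stand in for real agent behaviors.
    actions = {
        UtteranceType.QUESTION: lambda u: f"answer({u!r})",
        UtteranceType.COMMAND: lambda u: f"execute({u!r})",
        UtteranceType.STATEMENT: lambda u: f"acknowledge({u!r})",
    }
    return actions[classify(utterance)](utterance)

print(act("What time is my meeting?"))   # answer('What time is my meeting?')
print(act("Set an alarm for 7 a.m."))    # execute('Set an alarm for 7 a.m.')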