Abstract:
A dialog with a conversational virtual assistant includes a sequence of user queries and system responses. Queries are received and interpreted by a natural language understanding system. Dialog context information gathered from user queries and system responses is stored in a layered context data structure. Incomplete queries, which do not have sufficient information to result in an actionable interpretation, become actionable with the use of context data. The system recognizes the need to access context data and retrieves from the context layers the information required to transform the query into an executable one. The system may then act on the query and provide an appropriate response to the user. Context data buffers forget information, perhaps selectively, with the passage of time and after a sufficient number and type of intervening queries.
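As a rough illustration only (not taken from the abstract), the sketch below shows one way a layered context store with time-based and turn-based forgetting might look. The class and parameter names (LayeredContext, ttl_seconds, max_turns, resolve) are hypothetical; the abstract does not specify any particular data structure or expiry policy.

```python
import time
from dataclasses import dataclass, field

@dataclass
class ContextEntry:
    """One piece of dialog context (e.g., a resolved entity or topic)."""
    key: str
    value: str
    created_at: float = field(default_factory=time.time)
    turns_seen: int = 0  # intervening queries since this entry was stored

class LayeredContext:
    """Hypothetical layered dialog-context store.

    Layers are searched from most recent to oldest; entries are forgotten
    after a time-to-live expires or after too many intervening queries.
    """
    def __init__(self, ttl_seconds: float = 300.0, max_turns: int = 5):
        self.ttl_seconds = ttl_seconds      # assumed expiry window
        self.max_turns = max_turns          # assumed turn limit
        self.layers: list[dict[str, ContextEntry]] = []

    def push_layer(self, entries: dict[str, str]) -> None:
        """Store context gathered from one query/response exchange as a new layer."""
        now = time.time()
        self.layers.append({k: ContextEntry(k, v, now) for k, v in entries.items()})

    def note_turn(self) -> None:
        """Advance the dialog by one turn and forget stale entries."""
        now = time.time()
        for layer in self.layers:
            for key in list(layer):
                entry = layer[key]
                entry.turns_seen += 1
                if (now - entry.created_at > self.ttl_seconds
                        or entry.turns_seen > self.max_turns):
                    del layer[key]
        self.layers = [layer for layer in self.layers if layer]

    def resolve(self, key: str) -> str | None:
        """Fill a missing slot in an incomplete query from the most recent layer."""
        for layer in reversed(self.layers):
            if key in layer:
                return layer[key].value
        return None

# Example: "What's the weather in Boston?" followed by "How about tomorrow?"
ctx = LayeredContext()
ctx.push_layer({"location": "Boston", "intent": "weather"})
ctx.note_turn()
print(ctx.resolve("location"))  # -> "Boston", completing the follow-up query
```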
Abstract:
A client, such as a mobile phone, captures an audio signal from a microphone; the sound originates from a broadcast source such as a radio or television program. The client sends a segment of audio data from the broadcast program to a detection system, such as a server. A broadcast monitoring system receives many broadcast audio signals and encodes their fingerprints in a database for matching. The detection system compares the client's audio data fingerprints to the content fingerprints to identify which broadcast station broadcast the signal containing the sampled content. This information enables the client to resume the experience of the broadcast from one of a number of possible media sources.
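For illustration only, the sketch below shows the matching flow under stated assumptions: a toy hash-based fingerprint stands in for whatever robust audio fingerprinting the system actually uses, and the names (BroadcastMonitor, ingest, identify, FRAME_SIZE) are hypothetical rather than taken from the abstract.

```python
import hashlib
from collections import Counter, defaultdict

FRAME_SIZE = 1024  # samples per fingerprint frame (illustrative value)

def fingerprint(samples: list[int]) -> list[str]:
    """Toy fingerprint: hash coarsely quantized frames of 16-bit audio samples.

    A real system would hash robust spectral features; this stand-in only
    illustrates how client and broadcast fingerprints are compared.
    """
    prints = []
    for start in range(0, len(samples) - FRAME_SIZE + 1, FRAME_SIZE):
        frame = samples[start:start + FRAME_SIZE]
        coarse = bytes((s >> 8) & 0xFF for s in frame)  # keep only coarse amplitude
        prints.append(hashlib.sha1(coarse).hexdigest()[:16])
    return prints

class BroadcastMonitor:
    """Hypothetical index of monitored broadcast streams: hash -> (station, frame index)."""
    def __init__(self):
        self.index: dict[str, list[tuple[str, int]]] = defaultdict(list)

    def ingest(self, station: str, samples: list[int]) -> None:
        """Fingerprint a monitored broadcast signal and store it for matching."""
        for i, h in enumerate(fingerprint(samples)):
            self.index[h].append((station, i))

    def identify(self, client_samples: list[int]) -> str | None:
        """Match the client's sampled audio against monitored stations.

        Votes for (station, time-offset) pairs; many matching hashes at a
        consistent offset indicate which station broadcast the content.
        """
        votes: Counter = Counter()
        for qi, h in enumerate(fingerprint(client_samples)):
            for station, i in self.index.get(h, []):
                votes[(station, i - qi)] += 1
        if not votes:
            return None
        (station, _offset), _count = votes.most_common(1)[0]
        return station
```

In this sketch, the identified station (and the matching offset) is what would let the client resume the broadcast experience from another available media source.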