Abstract:
A user is identified, and an in-place personalized interactive display is provided, by detecting, via a first imaging system, one or more unique characteristics of a user's palm; identifying the user via the one or more unique characteristics and a database containing mappings between detectable unique characteristics and user identities; retrieving user-specific interactive content as a function of the identity of the user; projecting, via a second imaging system, the user-specific interactive content onto the user's palm; and detecting, via a third imaging system, the user's interaction with the projected user-specific interactive content. The user may be identified by transmitting the one or more unique characteristics to a remote authentication server and receiving, in response, an identity of the user. User-specific content may be retrieved, as a function of the identity of the user, from a remote interactive content server.
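As a rough Python sketch of the claimed flow, the fragment below wires the three imaging systems to the two remote servers. Every name here (the scanner, projector, gesture_sensor, auth_server, and content_server objects and their methods) is a hypothetical interface assumed for illustration, not anything specified in the abstract.

    # Hedged sketch only; all component interfaces below are assumptions.
    def run_palm_display(scanner, projector, gesture_sensor,
                         auth_server, content_server):
        # First imaging system: capture unique characteristics of the palm.
        characteristics = scanner.detect_palm_characteristics()

        # Identify the user from the characteristics; here the lookup in
        # the characteristics-to-identity mapping is delegated to the
        # remote authentication server.
        user_id = auth_server.identify(characteristics)

        # Retrieve user-specific interactive content for that identity
        # from the remote interactive content server.
        content = content_server.get_content(user_id)

        # Second imaging system: project the content onto the palm.
        projector.project(content, target=scanner.palm_region())

        # Third imaging system: detect interaction with the projection.
        return gesture_sensor.detect_interaction(content)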
Abstract:
An apparatus, method, and computer program for initiating a word spotting algorithm (220) on one or more wireless communication devices operating in a first power mode to detect a keyword data sequence (224) embedded within a sampled audio signal (222). In response to detecting the keyword data sequence (226), the word spotting algorithm is terminated and a plurality of identification algorithms (230) are initiated on the one or more wireless communication devices operating in a second power mode to detect the presence of identification data (240). If identification data is detected on a particular wireless communication device, that device is activated to accept speech and/or voice commands (242). If identification data is not detected, the plurality of identification algorithms are terminated and the word spotting algorithm is reinitiated on the one or more wireless communication devices, which then operate in the first power mode (244).
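The two-mode control loop can be summarized in the Python sketch below. The Device interface and all of its methods (set_power_mode, sample_audio, word_spotting, run_identification_algorithms, accept_voice_commands) are assumptions made for this illustration only.

    # Hedged sketch of the two-power-mode detection loop; every call on
    # a device is a hypothetical interface, not from the source.
    def detection_loop(devices):
        while True:
            # First power mode: each device runs the word spotting
            # algorithm on its sampled audio signal until the keyword
            # data sequence is detected.
            keyword_found = False
            while not keyword_found:
                for device in devices:
                    device.set_power_mode("first")
                    if device.word_spotting(device.sample_audio()):
                        keyword_found = True
                        break

            # Keyword detected: word spotting terminates and the
            # identification algorithms run in the second power mode.
            for device in devices:
                device.set_power_mode("second")
                if device.run_identification_algorithms():
                    # Identification data found on this device, so it
                    # is activated to accept speech/voice commands.
                    device.accept_voice_commands()
                    return device

            # No identification data on any device: the identification
            # algorithms terminate and the loop falls back to word
            # spotting in the first power mode.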
Abstract:
A method and user terminal are provided that graphically formulate a search query. The method and user terminal display, via a display screen, a multi-dimensional graphical representation of a search query space; receive a plurality of parameters from a user, wherein the parameters define the search query space; position a multi-dimensional icon in the multi-dimensional representation of the search query space; associate one or more of a keyword and multimedia content with the icon; and generate a search query based on the keyword and the position of the icon in the multi-dimensional representation of the search query space. The method and user terminal may further graphically display the results of the corresponding database search, wherein the retrieved content is displayed as one or more icons positioned in a multi-dimensional graph having a plurality of axes associated with the plurality of parameters defining a context of the search query.
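A minimal Python sketch of how an icon's position in the parameter-defined query space could be turned into a query follows. The data model (QueryIcon, build_search_query, and the axis names) is an assumption for illustration; the abstract does not prescribe any particular representation.

    from dataclasses import dataclass, field

    # Hypothetical data model for the graphical query space.
    @dataclass
    class QueryIcon:
        keyword: str
        position: dict            # axis name -> coordinate, e.g. {"recency": 0.8}
        media: list = field(default_factory=list)

    def build_search_query(axes, icon):
        # Each axis of the multi-dimensional space is a user-supplied
        # parameter; the icon's coordinate on each axis becomes a
        # contextual constraint alongside the associated keyword.
        constraints = {axis: icon.position.get(axis, 0.0) for axis in axes}
        return {"keyword": icon.keyword, "context": constraints}

    # Example: a query space defined by "recency" and "popularity".
    icon = QueryIcon(keyword="jazz", position={"recency": 0.8, "popularity": 0.3})
    print(build_search_query(["recency", "popularity"], icon))

The same axes could then position the retrieved results as icons in a multi-dimensional graph, mirroring how the query itself was composed.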
Abstract:
A method for determining a relatedness between a query video and a database video is provided. A processor extracts an audio stream from the query video to produce a query audio stream, extracts an audio stream from the database video to produce a database audio stream, produces a first-sized snippet from the query audio stream, and produces a first-sized snippet from the database audio stream. A first most probable sequence of latent evidence probability vectors generating the first-sized snippet of the query audio stream is estimated. A second most probable sequence of latent evidence probability vectors generating the first-sized snippet of the database audio stream is estimated. A similarity between the first sequence and the second sequence is measured, producing a score of relatedness between the two snippets. Finally, a relatedness between the query video and the database video is determined.
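For illustration only, the Python sketch below models the "latent evidence probability vectors" as per-frame probability vectors over a latent space and scores two equal-sized snippets with an average frame-wise cosine similarity. Both modeling choices are assumptions; the abstract does not specify how the sequences are estimated or how similarity is measured.

    import numpy as np

    # Hedged sketch: inputs are (frames, latent_dim) arrays of
    # probability vectors for two equal-sized ("first-sized") snippets.
    def snippet_similarity(query_posteriors, db_posteriors):
        # Normalize each frame's vector, then average the frame-wise
        # cosine similarities into a single relatedness score.
        q = query_posteriors / np.linalg.norm(query_posteriors, axis=1, keepdims=True)
        d = db_posteriors / np.linalg.norm(db_posteriors, axis=1, keepdims=True)
        return float(np.mean(np.sum(q * d, axis=1)))

    # Example with random 100-frame snippets over a 32-dim latent space.
    rng = np.random.default_rng(0)
    q = rng.dirichlet(np.ones(32), size=100)
    d = rng.dirichlet(np.ones(32), size=100)
    print(snippet_similarity(q, d))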