Abstract:
Systems and methods for recognizing sounds are provided herein. User input relating to one or more sounds is received from a computing device. Instructions, which are stored in memory, are executed by a processor to discriminate the one or more sounds, extract music features from the one or more sounds, analyze the music features using one or more databases, and obtain information regarding the music features based on the analysis. Further, information regarding the music features of the one or more sounds may be transmitted to the computing device for display.
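A minimal Python sketch of the pipeline this abstract describes (discriminate, extract features, analyze against a database, return information). The particular features (spectral centroid and RMS energy), the silence threshold, and the in-memory database layout are assumptions chosen for brevity, not the features or storage the claimed system uses; samples is assumed to be a 1-D NumPy array of floats.

import numpy as np

def extract_features(samples, rate=16000):
    # Toy music features: spectral centroid and RMS energy of the whole clip.
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-9))
    rms = float(np.sqrt(np.mean(samples ** 2)))
    return np.array([centroid, rms])

def recognize(samples, database, rate=16000, silence_rms=1e-3):
    # Discriminate: ignore clips that are effectively silent.
    if np.sqrt(np.mean(samples ** 2)) < silence_rms:
        return None
    query = extract_features(samples, rate)
    # Analyze: nearest-neighbour lookup over stored feature vectors.
    best = min(database, key=lambda entry: np.linalg.norm(entry["features"] - query))
    return best["info"]  # information transmitted back for display

Here the database is simply a list of dicts, each holding a precomputed "features" vector and the "info" to display for that reference sound.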
Abstract:
Technologies described relate to coordination between audio playback and tracking of the corresponding text in an audio recognition mode and an audio playback mode. Optionally, audio recognition includes receiving a signal corresponding to an audio track; determining an identity of the audio track and a current audio position within the audio track; displaying on a display a portion of a text that is linked and synchronized to the identified audio track and automatically scrolling the displayed portion of the text in pace with the audio track playing; and displaying a current text indicator that emphasizes current text, wherein the current text indicator is visually synchronized on the display to current audio playing from the audio track. Optionally, the method includes redetermining the current audio position in the audio track from the received signal and updating synchronization of the current text indicator with the current audio playing.
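A small sketch of the text-tracking side, assuming the linked text arrives as (start_time, line) pairs already synchronized to the identified track; the data shape and function names are hypothetical and not taken from the disclosure.

from bisect import bisect_right

def current_line_index(position_s, timed_lines):
    # timed_lines: list of (start_time_seconds, text), sorted by start time.
    starts = [t for t, _ in timed_lines]
    return max(bisect_right(starts, position_s) - 1, 0)

def display_window(position_s, timed_lines, context=1):
    # Scroll so the current line stays on screen; flag it for the current-text indicator.
    i = current_line_index(position_s, timed_lines)
    lo, hi = max(i - context, 0), min(i + context + 1, len(timed_lines))
    return [(idx == i, text) for idx, (_, text) in enumerate(timed_lines)][lo:hi]

Calling display_window again with a redetermined audio position is what resynchronizes the indicator after the position is re-estimated from the incoming signal.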
Abstract:
The technology disclosed relates to a system and method for fast, accurate and parallelizable speech search, called Crystal Decoder. It is particularly useful for search applications, as opposed to dictation. It can achieve both speed and accuracy, without sacrificing one for the other. It can search different variations of records in the reference database without a significant increase in elapsed processing time. Even the main decoding part can be parallelized as the number of words increases, to maintain a fast response time.
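A sketch of the parallelization idea under stated assumptions: each reference record variation is scored against the decoded query independently, so the work can be spread across worker processes. The similarity measure (SequenceMatcher over word lists) is a stand-in for the decoder's actual scoring.

from concurrent.futures import ProcessPoolExecutor
from difflib import SequenceMatcher

def score(query_words, reference_words):
    # Stand-in similarity between the decoded query and one reference variation (0..1).
    return SequenceMatcher(None, query_words, reference_words).ratio()

def best_match(query_words, references, workers=4):
    # Score every reference variation in parallel; adding variations costs little
    # extra elapsed time while idle workers remain.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(score, [query_words] * len(references), references))
    return max(zip(scores, references), key=lambda pair: pair[0])

On some platforms the ProcessPoolExecutor call must run under an if __name__ == "__main__": guard.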
Abstract:
A method for searching a database to produce search results from queries likely to contain errors. The process begins by identifying database features likely to be useful in searching, and those features are employed to index the database. After receiving a query from a user, the system develops a rough score for the query by extracting features from the query, assigning match scores to query features matching database features, and assigning approximation scores to query features amenable to approximation analysis with database features. The rough score is used to identify a set of database records for further analysis. Those records are then subjected to a more detailed rescoring process, based on correspondence between individual query elements and individual record elements, and between the query and the database record content taken as a whole. Based on the rescoring process, output is provided to the user.
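A compact sketch of the two-pass idea, assuming the features are simply lowercased words and the detailed rescoring is a whole-string similarity; both are placeholders, and for brevity the rough pass folds match and approximation scores into a single count.

from collections import defaultdict
from difflib import SequenceMatcher

def index_records(records):
    # records: {record_id: text}. Map each feature (here, a word) to the records containing it.
    index = defaultdict(set)
    for rec_id, text in records.items():
        for word in text.lower().split():
            index[word].add(rec_id)
    return index

def rough_scores(query, index):
    # Cheap first pass: count query features that match indexed database features.
    scores = defaultdict(int)
    for word in query.lower().split():
        for rec_id in index.get(word, ()):
            scores[rec_id] += 1
    return scores

def search(query, records, top_n=5):
    index = index_records(records)
    rough = rough_scores(query, index)
    shortlist = sorted(rough, key=rough.get, reverse=True)[:top_n]
    # Detailed second pass: rescore each shortlisted record against the whole query.
    rescored = [(SequenceMatcher(None, query.lower(), records[r].lower()).ratio(), r)
                for r in shortlist]
    return sorted(rescored, reverse=True)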
Abstract:
A method for non-text-based identification of a selected item of stored music. The first broad portion of the method focuses on building a music identification database. That process requires capturing a tag of the selected musical item and processing the tag to develop a reference key to the same. Then the tag is stored, together with the reference key and an association to the stored music. The database is built by collecting a multiplicity of tags. The second broad portion of the method is retrieving a desired item of stored music from the database. That process calls for capturing a query tag from a user and processing the query tag to develop a query key to the same. The query key is compared to the reference keys stored in the database to identify the desired item of stored music.
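One way to make this concrete, assuming a tag is a hummed or sung melody captured as a rough pitch sequence: the key is the sequence of pitch intervals, which makes it invariant to the key the user sings in. The interval encoding and the SequenceMatcher comparison are illustrative choices, not the encoding claimed.

from difflib import SequenceMatcher

def make_key(pitch_sequence):
    # Key = successive pitch intervals, so the starting pitch of the tag does not matter.
    return tuple(b - a for a, b in zip(pitch_sequence, pitch_sequence[1:]))

def build_database(tags):
    # tags: list of (pitch_sequence, song_info); store each reference key with its song.
    return [(make_key(seq), info) for seq, info in tags]

def lookup(query_sequence, database):
    # Compare the query key against every stored reference key; return the closest song.
    query_key = make_key(query_sequence)
    return max(database,
               key=lambda entry: SequenceMatcher(None, query_key, entry[0]).ratio())[1]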
Abstract:
A method for employing pitch in a speech recognition engine. The process begins by building training models from selected speech samples, which starts by analyzing each sample as a sequential series of frames, each frame having a selected duration and overlap with adjacent frames. A pitch estimate of each frame is detected and recorded, the pitch data is normalized, and the speech recognition parameters of the model are determined, after which the model is stored. Models are stored and updated for each of the set of training samples. The system is then employed to recognize the speech content of a subject, which begins by analyzing the subject as a sequential series of frames, each frame having a selected duration and overlap with adjacent frames. A pitch estimate for each frame is detected and recorded, and the pitch data is normalized. Speech recognition techniques are then employed to recognize the content of the subject, employing the stored models.
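A sketch of the framing and pitch-normalization steps using NumPy; the frame length (25 ms), hop (10 ms), autocorrelation pitch detector, and z-score normalization are assumptions for illustration rather than the patent's specific parameters, and the actual recognition against stored models is omitted.

import numpy as np

def frames(samples, frame_len=400, hop=160):
    # Overlapping frames, e.g. 25 ms frames every 10 ms at 16 kHz.
    return [samples[i:i + frame_len] for i in range(0, len(samples) - frame_len + 1, hop)]

def pitch_estimate(frame, rate=16000, fmin=50, fmax=400):
    # Naive autocorrelation pitch detector; returns an F0 estimate in Hz.
    frame = frame - np.mean(frame)
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = rate // fmax, rate // fmin
    lag = lo + int(np.argmax(corr[lo:hi]))
    return rate / lag

def normalized_pitch_track(samples, rate=16000):
    # Per-frame pitch, z-score normalized over the utterance to remove speaker range.
    track = np.array([pitch_estimate(f, rate) for f in frames(samples)])
    return (track - track.mean()) / (track.std() + 1e-9)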
Abstract:
The present invention relates to the continuous monitoring of an audio signal and identification of audio items within that signal. The technology disclosed utilizes predictive caching of fingerprints to improve efficiency. Fingerprints are cached for tracking an audio signal with known alignment and for watching an audio signal without known alignment, based on already-identified fingerprints extracted from the audio signal. Software running on a smart phone or other battery-powered device cooperates with software running on an audio identification server.
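A rough sketch of the client-side cache this implies, with an invented structure: tracking entries hold fingerprints for the already-identified item at known offsets, watching entries hold fingerprints for likely items with no known alignment, and only a cache miss triggers a call to the identification server.

class FingerprintCache:
    def __init__(self):
        self.tracking = {}   # track_id -> {offset_seconds: fingerprint}, known alignment
        self.watching = {}   # item_id  -> set of fingerprints, alignment unknown

    def match_locally(self, fingerprint, expected_track=None, expected_offset=None):
        # First try the aligned cache near the expected position in the current track.
        if expected_track in self.tracking and expected_offset is not None:
            for off, fp in self.tracking[expected_track].items():
                if abs(off - expected_offset) < 2.0 and fp == fingerprint:
                    return ("tracking", expected_track, off)
        # Fall back to the unaligned watch list of other likely items.
        for item_id, fps in self.watching.items():
            if fingerprint in fps:
                return ("watching", item_id, None)
        return None  # cache miss: query the audio identification server

Keeping these lookups on the device is what saves network round trips and battery during continuous monitoring.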
Abstract:
In one implementation, a method is described for retrying matching of an audio query against audio references. The method includes receiving a follow-up query that requests a retry at matching a previously submitted audio query. In some implementations, this follow-up query is received without any recognition hint that suggests how to retry matching. The follow-up query includes the audio query or a reference to the audio query to be used in the retry. The method further includes retrying matching of the audio query using retry matching resources that include an expanded group of audio references, identifying at least one match, and transmitting a report of the match. Optionally, the method includes storing data that correlates the follow-up query, the audio query or the reference to the audio query, and the match found by retrying.
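A server-side sketch of that retry flow under assumed data shapes: the follow-up carries a query_id and optionally the audio itself, matching is delegated to a hypothetical match_fn, and the retry result is stored alongside the original query.

def retry_match(followup, expanded_references, query_store, match_fn):
    # followup: {"query_id": ..., "audio": optional raw audio}; no recognition hint needed.
    audio = followup.get("audio") or query_store[followup["query_id"]]["audio"]
    # Retry against the expanded group of references rather than the original quick set.
    match = match_fn(audio, expanded_references)
    # Store data correlating the follow-up, the original query, and the match found.
    query_store[followup["query_id"]]["retry_match"] = match
    return {"query_id": followup["query_id"], "match": match}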