Abstract:
To increase the accuracy and the flexibility of a method for recognizing speech which employs a keyword spotting process on the basis of a combination of a keyword model (KM) and a garbage model (GM), it is suggested to associate at least one variable penalty value (Ptrans, P1, ..., P6) with a global penalty (Pglob) so as to increase the recognition of keywords (Kj).
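Below is a minimal Python sketch of how such a variable penalty might be folded into the global penalty when a keyword-model score is compared against a garbage-model score. The score values, the penalty figures and the decide_keyword helper are illustrative assumptions, not the patented method itself.

```python
# Hypothetical illustration: compare a keyword-model score against a
# garbage-model score, where a global penalty Pglob is adjusted by variable
# penalties (e.g. Ptrans, P1..P6) before the keyword decision is taken.

def decide_keyword(km_score: float, gm_score: float,
                   p_glob: float, variable_penalties: list[float]) -> bool:
    """Return True if the keyword hypothesis beats the garbage hypothesis.

    km_score, gm_score:  log-likelihood scores of the keyword and garbage models
    p_glob:              base (global) penalty applied to the garbage path
    variable_penalties:  context-dependent penalty values added to the global one
    """
    effective_penalty = p_glob + sum(variable_penalties)
    # Penalising the garbage path makes the keyword path comparatively more
    # attractive, which increases keyword recognition (at the cost of more
    # false alarms if the penalty is chosen too large).
    return km_score > gm_score - effective_penalty


if __name__ == "__main__":
    # Toy numbers only; a real system would use frame-level Viterbi scores.
    print(decide_keyword(km_score=-120.0, gm_score=-115.0,
                         p_glob=3.0, variable_penalties=[1.5, 2.0]))
```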
Abstract:
The present invention discloses an apparatus for automatic extraction of important events in audio signals comprising: signal input means for supplying audio signals; audio signal fragmenting means for partitioning audio signals supplied by the signal input means into audio fragments of a predetermined length and for allocating a sequence of one or more audio fragments to a respective audio window; feature extracting means for analyzing acoustic characteristics of the audio signals comprised in the audio fragments and for analyzing acoustic characteristics of the audio signals comprised in the audio windows; and important event extraction means for extracting important events in audio signals supplied by the audio signal fragmenting means based on predetermined important event classifying rules depending on acoustic characteristics of the audio signals comprised in the audio fragments and on acoustic characteristics of the audio signals comprised in the audio windows, wherein each important event extracted by the important event extraction means comprises a discrete sequence of cohesive audio fragments corresponding to an important event included in the audio signals.
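The following Python sketch illustrates one plausible reading of the fragment/window pipeline: the signal is cut into fixed-length fragments, a window-level statistic is computed over runs of fragments, and cohesive runs of flagged fragments are merged into events. The fragment length, the energy feature and the threshold rule are assumptions standing in for the unspecified important event classifying rules.

```python
# Hypothetical sketch: fragment an audio signal, group fragments into windows,
# compute simple acoustic features at both levels, and merge flagged fragments
# into "important events".
import numpy as np

def extract_events(signal: np.ndarray, sr: int,
                   frag_sec: float = 1.0, window_frags: int = 5,
                   threshold: float = 1.5) -> list[tuple[int, int]]:
    frag_len = int(frag_sec * sr)
    n_frags = len(signal) // frag_len
    frags = signal[:n_frags * frag_len].reshape(n_frags, frag_len)

    frag_energy = (frags ** 2).mean(axis=1)                  # fragment-level feature
    window_energy = np.convolve(frag_energy,                 # window-level feature:
                                np.ones(window_frags) / window_frags,
                                mode="same")                  # running mean over a window

    # Rule (assumed): a fragment belongs to an important event if its energy
    # clearly exceeds the level of its surrounding window.
    important = frag_energy > threshold * window_energy

    events, start = [], None
    for i, flag in enumerate(important):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            events.append((start * frag_len, i * frag_len))   # cohesive fragment run
            start = None
    if start is not None:
        events.append((start * frag_len, n_frags * frag_len))
    return events
```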
Abstract:
An audio data segmentation apparatus for segmenting audio data, including means for supplying audio data, for dividing the supplied audio data into audio clips of a predetermined length, for discriminating the audio clips into predetermined audio classes, the audio classes identifying the kind of audio data included in the respective audio clip, and for segmenting the audio data into audio meta patterns based on the sequence of audio classes of consecutive audio clips, each meta pattern being allocated to a predetermined type of contents of the audio data. It is difficult to achieve good results with known methods for segmentation of audio data into meta patterns, since the rules for the allocation of the meta patterns are unsatisfactory. This problem is solved by the inventive audio data segmentation apparatus further including a program database comprising program data units which identify a certain kind of program, a plurality of respective audio meta patterns being allocated to each program data unit, wherein the segmenting means segments the audio data into corresponding audio meta patterns on the basis of the program data units of the program database.
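A hypothetical Python sketch of the pattern-matching step: per-clip audio classes are matched against meta patterns allocated to program data units in a small program database. The class labels, the database contents and the greedy longest-match strategy are assumptions for illustration only.

```python
# Hypothetical sketch: map a sequence of per-clip audio classes onto meta
# patterns drawn from a small "program database".

PROGRAM_DATABASE = {
    "news":   [("speech", "speech", "speech")],
    "sports": [("speech", "cheering", "speech"), ("cheering", "speech")],
    "music":  [("music", "music"), ("music",)],
}

def segment_into_meta_patterns(clip_classes: list[str]) -> list[tuple[str, tuple[str, ...]]]:
    """Greedily match the longest database pattern starting at each clip position."""
    patterns = sorted(
        ((prog, pat) for prog, pats in PROGRAM_DATABASE.items() for pat in pats),
        key=lambda item: len(item[1]), reverse=True)
    result, i = [], 0
    while i < len(clip_classes):
        for prog, pat in patterns:
            if tuple(clip_classes[i:i + len(pat)]) == pat:
                result.append((prog, pat))
                i += len(pat)
                break
        else:
            i += 1            # no pattern starts here; skip this clip
    return result

print(segment_into_meta_patterns(
    ["speech", "speech", "speech", "cheering", "speech", "music"]))
```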
Abstract:
There is proposed a method that may be universally used for controlling a man-machine interface unit. A learning sample is used in order at least to derive and/or initialize a target action (t) to be carried out and to lead the user from an optional current status (ec) to an optional desired target status (et) as the final status (ef). This learning sample (l) is formed by a data triple made up of an initial status (ei) before an optional action (a) carried out by the user, the action (a) itself, and a final status (ef) after the action (a) has taken place.
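A minimal Python sketch of the learning-sample triple as read from the abstract, i.e. (initial status ei, action a, final status ef), together with one way a target action could be looked up from such samples; the field names and the lookup helper are illustrative assumptions.

```python
# Hypothetical sketch of the learning-sample triple: (ei, a, ef).
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class LearningSample:
    initial_status: str   # ei: status before the user's action
    action: str           # a:  the action carried out by the user
    final_status: str     # ef: status after the action has taken place

def derive_target_action(samples: list[LearningSample],
                         current: str, target: str) -> Optional[str]:
    """Pick an action that was observed to lead from `current` to `target`."""
    for sample in samples:
        if sample.initial_status == current and sample.final_status == target:
            return sample.action
    return None

samples = [LearningSample("standby", "press_power", "on"),
           LearningSample("on", "press_mute", "muted")]
print(derive_target_action(samples, current="standby", target="on"))   # -> press_power
```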
Abstract:
An apparatus for automatic dissection of segmented audio signals, wherein at least one information signal is provided for identifying programs included in said audio signals and for identifying contents included in said programs. A content detection device detects programs and contents belonging to the respective programs in the information signal. A program weighting device weights each program included in the information signal based on the contents of the respective program detected by the content detection device. A program ranking device identifies programs of the same category and ranks said programs based on a weighting result for each program provided by the program weighting device.
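A hypothetical Python sketch of the weighting and ranking steps: each program is weighted from its detected contents, and programs of the same category are ranked by that weight. The content weights and the data layout are assumptions made for illustration.

```python
# Hypothetical sketch: weight programs by detected contents, then rank them.

CONTENT_WEIGHTS = {"goal": 3.0, "interview": 1.0, "replay": 0.5}   # assumed weights

def weight_program(contents: list[str]) -> float:
    """Sum the weights of all contents detected in one program."""
    return sum(CONTENT_WEIGHTS.get(c, 0.0) for c in contents)

def rank_programs(programs: dict[str, list[str]]) -> list[tuple[str, float]]:
    """programs maps a program id (same category) to its detected contents."""
    weighted = [(name, weight_program(contents)) for name, contents in programs.items()]
    return sorted(weighted, key=lambda item: item[1], reverse=True)

print(rank_programs({
    "match_1": ["goal", "goal", "replay"],
    "match_2": ["interview", "replay"],
}))
```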
Abstract:
A user profile and/or the suggestions computed based thereon are obtained taking a special set of user features into account. The user features are defined to represent a typical general behaviour of an individual user with respect to the application where the user profile is used. In other words, for each application where a user profile is used, a special set of user features is defined which is able to represent a typical general behaviour of an individual user. Based on these user features, the weights in the list of word-weight pairs or weighted keywords which represents the user profile are computed or influenced during the creation of the user profile, and/or a multi-user profile is split during the creation of an individual user profile from a multi-user profile, and/or, during specification of a suggestion, a user history which is used to create the user profile, and/or the user profile, and/or the suggestion results are filtered.
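A minimal Python sketch, under assumptions, of how user features could influence the weights in a list of word-weight pairs: raw term counts from the user history are scaled by feature values before forming the profile. The feature names and the multiplicative scaling are illustrative, not the claimed mechanism.

```python
# Hypothetical sketch: build a user profile as weighted keywords, with raw
# term counts scaled by application-specific user features.
from collections import Counter

def build_profile(history_terms: list[str],
                  user_features: dict[str, float]) -> dict[str, float]:
    counts = Counter(history_terms)
    # A feature map like {"football": 1.5, "election": 0.5} boosts or damps
    # the weight of terms associated with the corresponding interest.
    return {term: count * user_features.get(term, 1.0)
            for term, count in counts.items()}

profile = build_profile(
    ["football", "football", "election", "football", "election"],
    user_features={"football": 1.5, "election": 0.5})
print(sorted(profile.items(), key=lambda kv: kv[1], reverse=True))
```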
Abstract:
To reduce the error rate when classifying emotions from an acoustical speech input (SI) only, it is suggested to include a process of speaker identification to obtain certain speaker identification data (SID) on the basis of which the process of recognizing an emotional state is adapted and/or configured. In particular, speaker-specific feature extractors (FE) and/or emotion classifiers (EC) are selected based on said speaker identification data (SID).
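A hypothetical Python sketch of the selection step: the speaker identification result (SID) picks a speaker-specific feature extractor (FE) and emotion classifier (EC), with a fall-back to speaker-independent models. The registries and the stand-in models are assumptions for illustration.

```python
# Hypothetical sketch: select speaker-specific feature extractors and emotion
# classifiers based on a speaker-identification result.
from typing import Callable

FEATURE_EXTRACTORS: dict[str, Callable[[bytes], list[float]]] = {}
EMOTION_CLASSIFIERS: dict[str, Callable[[list[float]], str]] = {}

def classify_emotion(speech_input: bytes, speaker_id: str) -> str:
    # Fall back to speaker-independent models if no speaker-specific one exists.
    extract = FEATURE_EXTRACTORS.get(speaker_id, FEATURE_EXTRACTORS["default"])
    classify = EMOTION_CLASSIFIERS.get(speaker_id, EMOTION_CLASSIFIERS["default"])
    return classify(extract(speech_input))

# Minimal stand-in models so the sketch runs end to end.
FEATURE_EXTRACTORS["default"] = lambda audio: [float(len(audio))]
EMOTION_CLASSIFIERS["default"] = lambda feats: "neutral" if feats[0] < 1000 else "excited"

print(classify_emotion(b"\x00" * 2000, speaker_id="alice"))
```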
Abstract:
An apparatus for classifying audio signals comprises audio signal clipping means for partitioning audio signals into audio clips, and class discrimination means for discriminating the audio clips provided by the audio signal clipping means into predetermined audio classes based on predetermined audio class classifying rules, by analysing acoustic characteristics of the audio signals comprised in the audio clips, wherein a predetermined audio class classifying rule is provided for each audio class, and each audio class represents a respective kind of audio signals comprised in the corresponding audio clip. According to the prior art, the determination process to find acceptable audio class classifying rules for each audio class depends both on the raw audio signals used and on the personal experience of the person conducting the determination process. Thus, the determination process is usually very difficult, time-consuming and subjective. Furthermore, there is a high risk that not all possible peculiarities of the different programmes and the different categories the audio signal can belong to are sufficiently accounted for. This problem is solved in the inventive apparatus for classifying audio signals by the class discrimination means calculating an audio class confidence value for each audio class assigned to an audio clip, wherein the audio class confidence value indicates the likelihood that the respective audio class characterises the respective kind of audio signals comprised in the respective audio clip correctly. Furthermore, the class discrimination means use acoustic characteristics of audio clips of audio classes having a high audio class confidence value to train the respective audio class classifying rule.
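A minimal Python sketch of the confidence-driven training loop described above, using a Gaussian naive-Bayes classifier as a stand-in for the audio class classifying rules: each clip receives a confidence value per class, and clips with high confidence are fed back as additional training data. The model choice, the 0.9 threshold and the toy data are assumptions for illustration.

```python
# Hypothetical sketch: self-training driven by audio class confidence values.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def self_train(clf: GaussianNB, labeled_X: np.ndarray, labeled_y: np.ndarray,
               unlabeled_X: np.ndarray, threshold: float = 0.9,
               rounds: int = 3) -> GaussianNB:
    X, y = labeled_X, labeled_y
    for _ in range(rounds):
        clf.fit(X, y)
        if len(unlabeled_X) == 0:                         # nothing left to self-label
            break
        proba = clf.predict_proba(unlabeled_X)            # audio class confidence values
        conf, pred = proba.max(axis=1), proba.argmax(axis=1)
        confident = conf >= threshold                     # keep only high-confidence clips
        if not confident.any():
            break
        # Acoustic characteristics of high-confidence clips are fed back to
        # train the respective classifying rule, as in the abstract.
        X = np.vstack([X, unlabeled_X[confident]])
        y = np.concatenate([y, clf.classes_[pred[confident]]])
        unlabeled_X = unlabeled_X[~confident]
    return clf

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labeled_X = rng.normal(size=(20, 4)); labeled_X[10:] += 3.0
    labeled_y = np.repeat([0, 1], 10)
    unlabeled_X = rng.normal(size=(50, 4)); unlabeled_X[25:] += 3.0
    model = self_train(GaussianNB(), labeled_X, labeled_y, unlabeled_X)
    print(model.class_count_)                             # labeled + self-labeled clips per class
```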