Abstract:
An apparatus, method, and medium for distinguishing a vocal sound. The apparatus includes: a framing unit dividing an input signal into frames, each frame having a predetermined length; a pitch extracting unit determining whether each frame is a voiced frame or an unvoiced frame and extracting a pitch contour from the voiced and unvoiced frames; a zero-cross rate calculator calculating a zero-cross rate for each frame; a parameter calculator calculating parameters including a time-length ratio of the voiced frames to the unvoiced frames determined by the pitch extracting unit, statistical information of the pitch contour, and spectral characteristics; and a classifier receiving the zero-cross rates and the parameters output from the parameter calculator and determining whether the input signal is a vocal sound.
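A minimal Python sketch of the pipeline described above, assuming an autocorrelation-peak voicing decision and a simple threshold rule standing in for the unspecified classifier; all function names, frame sizes, and thresholds are illustrative, not taken from the abstract.

# Minimal sketch, assuming autocorrelation-based voicing and a threshold classifier
# (illustrative stand-ins; the abstract does not specify either).
import numpy as np

def frame_signal(x, frame_len, hop):
    """Framing unit: split the input signal into fixed-length frames."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop:i * hop + frame_len] for i in range(n)])

def pitch_and_voicing(frame, sr, fmin=60.0, fmax=400.0, threshold=0.3):
    """Pitch extracting unit: normalized autocorrelation peak -> pitch, voiced flag."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac = ac / (ac[0] + 1e-12)
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    voiced = ac[lag] > threshold                 # assumed voicing criterion
    return (sr / lag if voiced else 0.0), voiced

def zero_cross_rate(frame):
    """Zero-cross rate calculator: fraction of samples with a sign change."""
    return np.mean(np.abs(np.diff(np.sign(frame))) > 0)

def classify_vocal(x, sr, frame_len=1024, hop=512):
    """Parameter calculator + classifier: voiced/unvoiced ratio, pitch statistics, ZCR."""
    frames = frame_signal(x, frame_len, hop)
    pitches, voiced_flags = zip(*(pitch_and_voicing(f, sr) for f in frames))
    zcr = np.array([zero_cross_rate(f) for f in frames])
    voiced = np.array(voiced_flags)
    voiced_ratio = voiced.mean()                 # voiced/unvoiced time-length ratio
    pitch_contour = np.array(pitches)[voiced]
    pitch_std = pitch_contour.std() if pitch_contour.size else 0.0
    # The abstract leaves the classifier open; a threshold rule stands in here.
    return voiced_ratio > 0.4 and pitch_std < 60.0 and zcr.mean() < 0.3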
Abstract:
A formant tracking apparatus and a formant tracking method are provided. The formant tracking apparatus includes: a framing unit dividing an input voice signal into a plurality of frames; a linear prediction analyzing unit obtaining linear prediction coefficients for each frame; a segmentation unit segmenting the linear prediction coefficients into a plurality of segments; a formant candidate determining unit obtaining formant candidates using the linear prediction coefficients and summing the formant candidates over each segment to determine the formant candidates of each segment; a formant number determining unit determining, for each segment, the number of tracking formants from among the formant candidates satisfying a predetermined condition; and a tracking unit searching for as many tracking formants as the number determined by the formant number determining unit from among the formant candidates belonging to each segment.
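A rough Python sketch of the formant stages, assuming autocorrelation-method LPC, a bandwidth cut as the "predetermined condition", and a median rule for the per-segment formant count; the segment length, LPC order, and thresholds are illustrative assumptions, not taken from the abstract.

# Rough sketch, assuming autocorrelation-method LPC and a bandwidth threshold
# as the candidate-selection condition (illustrative choices).
import numpy as np

def lpc_coefficients(frame, order=12):
    """Linear prediction analyzing unit: LPC via the autocorrelation method."""
    frame = frame * np.hamming(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R + 1e-9 * np.eye(order), r[1:order + 1])
    return np.concatenate(([1.0], -a))           # A(z) = 1 - sum(a_k z^-k)

def formant_candidates(a, sr, max_bw=400.0):
    """Formant candidate determining unit: LPC roots -> (frequency, bandwidth) pairs."""
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]            # keep one root of each conjugate pair
    freqs = np.angle(roots) * sr / (2 * np.pi)
    bws = -np.log(np.abs(roots)) * sr / np.pi
    keep = (freqs > 90) & (bws < max_bw)         # assumed "predetermined condition"
    return sorted(zip(freqs[keep], bws[keep]))

def track_formants(frames, sr, seg_len=10):
    """Pool candidates per segment, fix the formant count there, then keep that
    many lowest-frequency candidates in every frame of the segment."""
    tracks = []
    for s in range(0, len(frames), seg_len):
        seg = frames[s:s + seg_len]
        per_frame = [formant_candidates(lpc_coefficients(f), sr) for f in seg]
        n_formants = int(round(np.median([len(c) for c in per_frame])))  # assumed rule
        tracks.extend([[f for f, _ in c[:n_formants]] for c in per_frame])
    return tracks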
Abstract:
A pitch estimating method and apparatus in which mixture Gaussian distributions are generated from candidate pitches having high period estimating values, a mixture Gaussian distribution having a high likelihood is selected, and dynamic programming is executed so that the pitch of a speech signal can be accurately estimated. The pitch estimating method comprises: computing a normalized autocorrelation function of a windowed signal obtained by multiplying a frame of the speech signal by a window signal, and determining candidate pitches from peak values of the normalized autocorrelation function; interpolating the periods of the determined candidate pitches and the period estimating values representing the lengths of the periods; generating Gaussian distributions for the candidate pitches of each frame for which the interpolated period estimating value is greater than a first threshold value; mixing Gaussian distributions located within a distance less than a second threshold value to generate mixture Gaussian distributions, and selecting at least one mixture Gaussian distribution whose likelihood exceeds a third threshold value; and executing dynamic programming over the frames to estimate the pitch of each frame, based on the candidate pitches of each frame and the selected mixture Gaussian distributions.
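A compressed Python sketch of this flow, assuming parabolic interpolation of autocorrelation peaks and a Viterbi-style dynamic program whose transition cost penalizes large pitch jumps; the mixture-Gaussian step is approximated here by a weighted merge of nearby candidates, and the first/second/third threshold values are illustrative stand-ins.

# Compressed sketch: autocorrelation candidates, interpolation, candidate merging,
# and dynamic programming over frames (all parameters are illustrative).
import numpy as np

def candidate_pitches(frame, sr, fmin=60.0, fmax=400.0, first_thresh=0.4):
    """Normalized autocorrelation -> interpolated (pitch, strength) candidates."""
    frame = (frame - frame.mean()) * np.hanning(len(frame))
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac = ac / (ac[0] + 1e-12)
    lo, hi = int(sr / fmax), int(sr / fmin)
    cands = []
    for lag in range(lo + 1, hi - 1):
        if ac[lag] > ac[lag - 1] and ac[lag] >= ac[lag + 1] and ac[lag] > first_thresh:
            # Parabolic interpolation of the peak position and height.
            d = 0.5 * (ac[lag - 1] - ac[lag + 1]) / (ac[lag - 1] - 2 * ac[lag] + ac[lag + 1])
            cands.append((sr / (lag + d), ac[lag] - 0.25 * (ac[lag - 1] - ac[lag + 1]) * d))
    return cands

def merge_candidates(cands, second_thresh=10.0):
    """Stand-in for Gaussian mixing: merge candidates within second_thresh Hz."""
    merged = []
    for f, w in sorted(cands):
        if merged and abs(f - merged[-1][0]) < second_thresh:
            f0, w0 = merged[-1]
            merged[-1] = ((f0 * w0 + f * w) / (w0 + w), w0 + w)   # weighted mean
        else:
            merged.append((f, w))
    return merged

def estimate_pitch(frames, sr, jump_cost=0.01):
    """Dynamic programming over frames: maximize strength minus pitch-jump cost."""
    per_frame = [merge_candidates(candidate_pitches(f, sr)) or [(0.0, 0.0)] for f in frames]
    score, back = [np.array([w for _, w in per_frame[0]])], []
    for t in range(1, len(per_frame)):
        prev_f = np.array([f for f, _ in per_frame[t - 1]])
        s_t, b_t = [], []
        for f, w in per_frame[t]:
            trans = score[-1] - jump_cost * np.abs(prev_f - f)
            b_t.append(int(np.argmax(trans)))
            s_t.append(w + trans[b_t[-1]])
        score.append(np.array(s_t))
        back.append(b_t)
    # Backtrack the best path of candidate pitches.
    idx = int(np.argmax(score[-1]))
    path = [idx]
    for b_t in reversed(back):
        idx = b_t[idx]
        path.append(idx)
    path.reverse()
    return [per_frame[t][i][0] for t, i in enumerate(path)]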
Abstract:
A method and apparatus for classifying the mood of music at high speed. The method includes: extracting a Modified Discrete Cosine Transform (MDCT)-based timbre feature from a compressed domain of a music file; extracting an MDCT-based tempo feature from the compressed domain of the music file; and classifying the mood of the music file based on the extracted timbre feature and the extracted tempo feature.
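A schematic Python sketch of the feature stage, assuming the MDCT coefficient matrix (granules x bins) has already been read from the compressed bitstream; the timbre statistics, the onset-autocorrelation tempo estimate, and the nearest-centroid classifier are illustrative choices, not the patent's exact feature set or model.

# Schematic sketch; mdct is assumed to be a (granules x bins) array of MDCT
# coefficients decoded from the compressed domain (e.g. an MP3 bitstream).
import numpy as np

def timbre_features(mdct):
    """MDCT-based timbre: spectral centroid, rolloff, and flux statistics."""
    mag = np.abs(mdct) + 1e-12
    bins = np.arange(mag.shape[1])
    centroid = (mag * bins).sum(1) / mag.sum(1)
    cumsum = np.cumsum(mag, axis=1)
    rolloff = (cumsum < 0.85 * cumsum[:, -1:]).sum(1)
    flux = np.sqrt((np.diff(mag, axis=0) ** 2).sum(1))
    return np.array([centroid.mean(), centroid.std(),
                     rolloff.mean(), rolloff.std(), flux.mean(), flux.std()])

def tempo_feature(mdct, granule_rate):
    """MDCT-based tempo: dominant periodicity of the onset-strength envelope."""
    energy = (np.abs(mdct) ** 2).sum(1)
    onset = np.maximum(np.diff(energy), 0.0)
    ac = np.correlate(onset, onset, mode="full")[len(onset) - 1:]
    lo, hi = int(granule_rate * 60 / 200), int(granule_rate * 60 / 40)  # 40-200 BPM
    lag = lo + int(np.argmax(ac[lo:hi]))
    return 60.0 * granule_rate / lag             # estimated tempo in BPM

def classify_mood(mdct, granule_rate, centroids):
    """Classifier: nearest mood centroid over the concatenated feature vector."""
    feat = np.concatenate([timbre_features(mdct), [tempo_feature(mdct, granule_rate)]])
    moods = list(centroids)
    dists = [np.linalg.norm(feat - centroids[m]) for m in moods]
    return moods[int(np.argmin(dists))]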