Abstract:
An audio signal processing method for a mixing apparatus including a plurality of channels includes selecting at least a first channel among the plurality of channels, inputting an audio signal of the selected first channel, specifying setting data to be set in the mixing apparatus based on time-series sound volume data for the first channel or on data of a second channel, different from the first channel, among the plurality of channels, and outputting the specified setting data.
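A minimal Python sketch of this idea, not the claimed implementation: here the time-series sound volume data is assumed to be a per-frame RMS envelope, and the specified setting data is assumed to be a fader gain; the function names and the -6 dB target are illustrative inventions.

```python
import numpy as np

def frame_rms(x, frame=1024):
    """Time-series sound volume data: RMS level per frame."""
    n = len(x) // frame
    frames = x[:n * frame].reshape(n, frame)
    return np.sqrt((frames ** 2).mean(axis=1))

def suggest_fader_gain_db(x, target_peak_db=-6.0):
    """Specify setting data (a fader gain, in dB) from the envelope."""
    env = frame_rms(x)
    peak_db = 20 * np.log10(env.max() + 1e-12)
    return target_peak_db - peak_db

rng = np.random.default_rng(0)
sig = 0.1 * rng.standard_normal(48000)      # a quiet first channel
gain_db = suggest_fader_gain_db(sig)        # positive: boost suggested
print(round(gain_db, 1))
```

A quiet input yields a positive suggested gain; a hot input would yield a negative one.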
Abstract:
A machine learning apparatus includes a memory storing instructions and a processor that implements the stored instructions to execute a plurality of tasks. The tasks include an obtaining task that obtains a mixture signal containing a first component and a second component, a first generating task that generates a first signal that emphasizes the first component by inputting the mixture signal to a neural network, a second generating task that generates a second signal by modifying the first signal, a calculating task that calculates an evaluation index from the second signal, and a training task that trains the neural network with the evaluation index.
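The task pipeline can be sketched end to end; this is a toy stand-in, not the patented apparatus. The "network" is a single learned gain, the modification is assumed to be smoothing, and the evaluation index is assumed to be negative MSE against a clean reference; training uses finite differences in place of backpropagation.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(2000) / 8000.0
first = np.sin(2 * np.pi * 440 * t)            # first component
second = 0.5 * rng.standard_normal(t.size)     # second component
mixture = first + second                       # obtaining task

def network(x, w):
    """Stand-in 'neural network': a single learned gain."""
    return w * x

def modify(x):
    """Second generating task: light smoothing (an assumption)."""
    return np.convolve(x, np.ones(5) / 5, mode="same")

def evaluation_index(x, ref):
    """Calculating task: negative MSE to a clean reference (assumed)."""
    return -np.mean((x - ref) ** 2)

def score(w):
    first_sig = network(mixture, w)            # first generating task
    second_sig = modify(first_sig)             # second generating task
    return evaluation_index(second_sig, first)

w, lr, eps = 0.0, 0.5, 1e-4
for _ in range(100):                           # training task
    grad = (score(w + eps) - score(w - eps)) / (2 * eps)
    w += lr * grad
print(round(w, 2))
```

The learned gain settles near 1, where the modified output best matches the reference.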
Abstract:
A signal processing method includes receiving, from a sensor, a signal that rises in response to a physical change and falls in response to an opposite physical change that is opposite to the physical change, and correcting a signal lag in which either the rising of the received signal lags with respect to its falling, or the falling of the received signal lags with respect to its rising.
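One way such an asymmetric lag could be corrected, sketched here as an assumption rather than the claimed method: if only the falling edge lags by a known number of samples, a running minimum over that many future samples advances the fall while leaving the rise in place (for pulses longer than the lag).

```python
import numpy as np

def correct_falling_lag(x, lag):
    """Advance a lagged falling edge by `lag` samples via a running
    minimum over the next `lag` samples; rising edges are unchanged
    as long as each pulse is longer than `lag`."""
    pad = np.concatenate([x, np.full(lag, x[-1])])
    return np.array([pad[i:i + lag + 1].min() for i in range(len(x))])

# True pulse: high on samples 10..19; the sensor reports the fall 3 late.
true = np.zeros(40); true[10:20] = 1.0
measured = np.zeros(40); measured[10:23] = 1.0   # falling edge lags by 3
corrected = correct_falling_lag(measured, 3)
print(np.array_equal(corrected, true))   # → True
```

A rising lag would be handled symmetrically with a running maximum.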
Abstract:
In a sound processing apparatus, a likelihood calculation unit calculates an in-region coefficient and an out-of-region coefficient indicating likelihood of generation of each frequency component of a sound signal inside and outside a target localization range, respectively, according to localization of each frequency component. A reverberation analysis unit calculates a reverberation index value according to the ratio of a reverberation component for each frequency component. A coefficient setting unit generates a process coefficient for suppressing or emphasizing a reverberation component generated inside or outside the target localization range, for each frequency component of the sound signal, on the basis of the in-region coefficient, the out-of-region coefficient and the reverberation index value. A signal processing unit applies the process coefficient of each frequency component to each frequency component of the sound signal.
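A simplified numeric sketch of the coefficient pipeline, with toy per-bin values and a hard in/out decision standing in for the likelihood calculation (both assumptions):

```python
import numpy as np

# Per-frequency-bin quantities (illustrative values):
pan = np.array([-0.8, -0.1, 0.0, 0.2, 0.9])         # localization per bin
reverb_index = np.array([0.1, 0.6, 0.7, 0.2, 0.5])  # reverberation ratio

# In-region / out-of-region coefficients for a target localization range.
lo, hi = -0.3, 0.3
in_coef = ((pan >= lo) & (pan <= hi)).astype(float)
out_coef = 1.0 - in_coef

# Process coefficient: suppress reverberation generated inside the range.
proc = 1.0 - in_coef * reverb_index

spectrum = np.ones(5)            # magnitude of each frequency component
processed = spectrum * proc      # apply per-bin process coefficient
print(processed)
```

Bins localized inside the range are attenuated in proportion to their reverberation index; bins outside pass unchanged. Emphasis instead of suppression would use `1.0 + in_coef * reverb_index`.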
Abstract:
An audio processing method that is realized by a computer system includes acquiring a first audio signal including percussive components and non-percussive components, and serially executing a plurality of stages of adaptive notch filter processing on the first audio signal, thereby generating a second audio signal in which the non-percussive components in the first audio signal are suppressed.
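As a hedged sketch of serial adaptive notch filtering: each stage here places a fixed second-order IIR notch on the strongest spectral peak it finds (one simple notion of "adaptive"; the actual adaptation scheme is not specified by the abstract), so cascaded stages strip tonal, non-percussive components while clicks pass through.

```python
import numpy as np

def biquad_notch(x, w0, r=0.95):
    """Second-order IIR notch at normalized angular frequency w0."""
    b1, a1, a2 = -2 * np.cos(w0), -2 * r * np.cos(w0), r * r
    y = np.zeros_like(x)
    for n in range(len(x)):
        xn1 = x[n - 1] if n >= 1 else 0.0
        xn2 = x[n - 2] if n >= 2 else 0.0
        yn1 = y[n - 1] if n >= 1 else 0.0
        yn2 = y[n - 2] if n >= 2 else 0.0
        y[n] = x[n] + b1 * xn1 + xn2 - a1 * yn1 - a2 * yn2
    return y

def adaptive_notch_stage(x):
    """One stage: notch the strongest non-DC spectral peak."""
    spec = np.abs(np.fft.rfft(x))
    k = 1 + int(np.argmax(spec[1:]))
    return biquad_notch(x, 2 * np.pi * k / len(x))

fs = 8000
t = np.arange(4000) / fs
tonal = np.sin(2 * np.pi * 300 * t) + 0.7 * np.sin(2 * np.pi * 800 * t)
percussive = np.zeros_like(t)
percussive[::500] = 2.0                       # click train
x = tonal + percussive                        # first audio signal

y = x
for _ in range(2):                            # serially executed stages
    y = adaptive_notch_stage(y)               # second audio signal
print(np.sum(y ** 2) < 0.5 * np.sum(x ** 2))
```

Two stages remove the two sinusoids in turn, so most of the signal energy (which is tonal here) disappears.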
Abstract:
A processing apparatus includes one or more processors and one or more memories operatively coupled to the one or more processors. The one or more processors are configured to acquire a spectrogram of a sound signal. The one or more processors are also configured to perform a first convolution on the spectrogram at every predetermined width on one of a frequency axis or a time axis. The one or more processors are also configured to combine results of the first convolution to obtain one-dimensional first feature data. The one or more processors are also configured to perform at least one second convolution on the one-dimensional first feature data to obtain one-dimensional second feature data indicating a feature of the spectrogram.
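The two-stage reduction from a 2-D spectrogram to 1-D feature data can be sketched as follows; the uniform kernels, the stride equal to the kernel width, and the mean as the combining step are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
spec = rng.random((64, 100))              # spectrogram: (frequency, time)

# First convolution: a uniform width-8 kernel applied along the
# frequency axis at every predetermined width (stride 8), per frame.
width = 8
first = np.array([
    [spec[f:f + width, col].mean() for f in range(0, spec.shape[0], width)]
    for col in range(spec.shape[1])
])                                        # shape: (time, bands)

# Combine the results to obtain one-dimensional first feature data.
feat1 = first.mean(axis=1)                # shape: (time,)

# Second convolution on the one-dimensional first feature data.
feat2 = np.convolve(feat1, np.array([0.25, 0.5, 0.25]), mode="same")
print(feat2.shape)
```

The result is a one-dimensional second feature sequence, one value per time frame, summarizing the spectrogram.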
Abstract:
A musical-performance analysis device includes: an acquisition section that acquires performance information of a player; a determination section that compares the acquired performance information with reference information indicating a reference of a performance and thereby determines, among performance segments different from one another, performance segments in which a difference degree between the acquired performance information and the reference information is large and performance segments in which the difference degree is small; and a specification section that specifies a tendency of the performance on the basis of the difference degree of the performance segments determined by the determination section to have a small difference degree.
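A toy sketch of the determination and specification steps, assuming the performance information is note-onset times and the tendency is the mean signed timing offset (both assumptions; the abstract does not fix either representation):

```python
import numpy as np

# Onset times (seconds) per segment: reference score vs. player.
reference = [np.array([0.0, 0.5, 1.0, 1.5]),
             np.array([2.0, 2.5, 3.0, 3.5]),
             np.array([4.0, 4.5, 5.0, 5.5])]
offsets = [np.array([0.02, 0.03, 0.02, 0.03]),    # small, steady
           np.array([0.30, -0.25, 0.20, -0.30]),  # large, erratic
           np.array([0.03, 0.02, 0.03, 0.02])]    # small, steady
performance = [r + d for r, d in zip(reference, offsets)]

# Determination section: per-segment difference degree.
diffs = [np.mean(np.abs(p - r)) for p, r in zip(performance, reference)]
threshold = 0.1
small = [i for i, d in enumerate(diffs) if d < threshold]

# Specification section: tendency from the small-difference segments
# (here, a consistent slight lag behind the reference timing).
tendency = np.mean([np.mean(performance[i] - reference[i]) for i in small])
print(small, round(tendency, 3))
```

Segments where the player deviates wildly are excluded, so the tendency reflects the player's systematic habit rather than one-off errors.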
Abstract:
A sound signal processing method includes accepting sound signals of a plurality of channels, adjusting a level of each of the sound signals of the plurality of channels, mixing the sound signals of the plurality of channels after the adjusting, outputting the mixed sound signal, acquiring a first acoustic feature of the mixed sound signal, acquiring a second acoustic feature that is a target acoustic feature, and determining a gain of each of the plurality of channels for the adjusting of the level, based on the first acoustic feature and the second acoustic feature.
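A deliberately simple sketch of the accept-adjust-mix-measure loop; here the acoustic feature is reduced to an RMS level and all channel gains are scaled together, which is an assumption (a richer feature would let the gains be determined per channel):

```python
import numpy as np

rng = np.random.default_rng(2)
channels = [0.2 * rng.standard_normal(8000),   # accepted sound signals
            1.5 * rng.standard_normal(8000)]

def acoustic_feature(x):
    """Acoustic feature: RMS level (a deliberately simple stand-in)."""
    return np.sqrt(np.mean(x ** 2))

target_feature = 0.5                  # second (target) acoustic feature
gains = np.ones(len(channels))
for _ in range(10):                   # adjust levels, mix, re-measure
    mixed = sum(g * ch for g, ch in zip(gains, channels))
    first_feature = acoustic_feature(mixed)   # first acoustic feature
    gains *= target_feature / (first_feature + 1e-12)
print(round(acoustic_feature(mixed), 3))
```

The loop drives the mixed signal's measured feature onto the target feature.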
Abstract:
A method of outputting a parameter of a sound processing device receives an audio signal, obtains information of the parameter of the sound processing device, which corresponds to the received audio signal, by using a trained model obtained by training a relationship among a training output sound of the sound processing device, a training input sound of the sound processing device, and a parameter of sound processing performed by the sound processing device, the parameter of the sound processing device being receivable by a user of the sound processing device, and outputs the obtained information of the parameter of the sound processing device corresponding to the received audio signal.
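As an illustrative stand-in for the trained model (the abstract does not specify the model class), a nearest-neighbour lookup in a spectral feature space maps a received signal to the parameter associated with the most similar training sound; the feature, the sine training set, and the cutoff-like parameter values are all invented for the sketch.

```python
import numpy as np

def feature(x):
    """Crude smoothed spectral feature of an audio signal (assumed)."""
    mag = np.abs(np.fft.rfft(x))
    mag = np.convolve(mag, np.ones(9) / 9, mode="same")
    return mag / (mag.sum() + 1e-12)

t = np.arange(1024) / 8000.0

# "Training" pairs: input sounds and the processing parameter that a
# user could set to produce the corresponding output sound.
train_x = [np.sin(2 * np.pi * f * t) for f in (200.0, 1000.0, 3000.0)]
train_p = [300.0, 1200.0, 3500.0]
train_f = [feature(x) for x in train_x]

def trained_model(x):
    """Stand-in 'trained model': nearest neighbour in feature space."""
    d = [np.linalg.norm(feature(x) - f) for f in train_f]
    return train_p[int(np.argmin(d))]

received = np.sin(2 * np.pi * 980.0 * t)   # received audio signal
print(trained_model(received))
```

A 980 Hz input lands nearest the 1000 Hz training sound, so its parameter is output.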
Abstract:
An audio processing method obtains observed envelopes of picked-up sound signals including a first observed envelope representing a contour of a first sound signal including a first target sound from a first sound source and a second spill sound from a second sound source and a second observed envelope representing a contour of a second sound signal including a second target sound from the second sound source and a first spill sound from the first sound source; and generates, based on the observed envelopes, output envelopes including a first output envelope representing a contour of the first target sound in the first observed envelope and a second output envelope representing a contour of the second target sound in the second observed envelope, using a mix matrix including a mix proportion of the second spill sound in the first sound signal and a mix proportion of the first spill sound in the second sound signal.
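With two sources and two microphones, the mix-matrix relation between target and observed envelopes can be sketched directly; inverting the matrix (here via `np.linalg.solve`) is one straightforward way to recover the output envelopes, offered as an assumption about the unmixing step.

```python
import numpy as np

# Mix matrix: diagonal = direct sound, off-diagonal = spill proportions.
M = np.array([[1.0, 0.3],   # mic 1: target 1 + 0.3 x spill from source 2
              [0.2, 1.0]])  # mic 2: target 2 + 0.2 x spill from source 1

# True (unknown) target envelopes and the observed envelopes they yield.
target = np.array([[1.0, 0.8, 0.2, 0.0],
                   [0.0, 0.3, 0.9, 1.0]])
observed = M @ target       # first and second observed envelopes

# Generate the output envelopes from the observed ones via the matrix.
output = np.linalg.solve(M, observed)
print(np.allclose(output, target))   # → True
```

In practice the mix proportions and envelopes are estimated rather than known, but the algebraic relationship is the same.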