Abstract:
An information processing method implemented by a computer, the information processing method including generating, from playing data representing played content, pedal data representing an operation period of a pedal that extends sound production by key depression.
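A minimal sketch of how such pedal-data generation could be driven, assuming playing data arrives as (onset, offset, pitch) note events and standing in for a trained model with a naive note-overlap heuristic; the frame length, the `frame_score` interface, and the note-event format are assumptions, not taken from the abstract.

```python
from typing import List, Tuple

Note = Tuple[float, float, int]  # (onset_sec, offset_sec, pitch) -- assumed playing-data format

def generate_pedal_data(notes: List[Note], frame_sec: float = 0.05,
                        frame_score=None) -> List[Tuple[float, float]]:
    """Return pedal operation periods (start_sec, end_sec) derived from playing data."""
    if not notes:
        return []
    end = max(off for _, off, _ in notes)
    n_frames = int(end / frame_sec) + 1
    if frame_score is None:
        # Placeholder for a trained model: mark frames where two or more notes overlap.
        def frame_score(t):
            return 1.0 if sum(1 for on, off, _ in notes if on <= t < off) >= 2 else 0.0
    pedal_on = [frame_score(i * frame_sec) >= 0.5 for i in range(n_frames)]
    # Merge consecutive "on" frames into operation periods of the pedal.
    periods, start = [], None
    for i, on in enumerate(pedal_on + [False]):
        if on and start is None:
            start = i * frame_sec
        elif not on and start is not None:
            periods.append((start, i * frame_sec))
            start = None
    return periods
```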
Abstract:
An apparatus is provided that accurately estimates the point at which a musical performance is started by a player. The apparatus includes a musical performance analysis unit 32 that obtains action data including a time series of feature data representing actions made by the player during a musical performance over a reference period, and that estimates a sound-production point based on the action data at an estimated point using a learned model L.
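One way the estimation step could be exercised is sketched below, assuming the feature data arrive as fixed-length per-frame vectors and the learned model L exposes a simple `predict(action_data)` call returning a probability; the window length, threshold, and model interface are assumptions made for illustration.

```python
import numpy as np

def estimate_sound_production_points(feature_stream, model, window=30, threshold=0.9):
    """Yield (index, probability) wherever the learned model judges a sound-production point.

    `feature_stream` is an iterable of per-frame feature vectors describing the player's
    actions; `model` is any object with a `predict(action_data)` method returning the
    probability that sound production starts at the current estimation point.
    """
    history = []
    for i, feat in enumerate(feature_stream):
        history.append(np.asarray(feat, dtype=np.float32))
        if len(history) < window:
            continue                                  # not yet a full reference period
        action_data = np.stack(history[-window:])     # time series for the reference period
        prob = float(model.predict(action_data))      # learned model L (assumed interface)
        if prob >= threshold:
            yield i, prob
```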
Abstract:
An information processing method according to the present invention includes providing first musical piece information, which represents contents of a musical piece, and performance information relating to a past performance prior to one unit period within the musical piece, to a learner that has undergone learning relating to a specific performance tendency, and generating with the learner, for the one unit period, performance information that is based on the specific tendency.
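A sketch of the generation loop this describes, assuming the learner exposes a hypothetical `generate(unit_info, past)` call and the musical piece is supplied as a list of per-unit-period information; none of these names come from the abstract.

```python
def generate_performance(piece_units, learner, history_len=8):
    """Generate performance information one unit period at a time.

    For each unit period the learner receives the musical-piece information for that
    unit plus the performance information already generated for preceding units, and
    returns performance information reflecting the tendency it has learned.
    """
    performed = []                               # performance information for past unit periods
    for unit_info in piece_units:                # first musical piece information, per unit period
        past = performed[-history_len:]          # performance prior to this unit period
        performed.append(learner.generate(unit_info, past))  # assumed learner interface
    return performed
```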
Abstract:
The information providing method includes: sequentially identifying a performance speed at which a user performs a piece of music; identifying, in the piece of music, a performance position at which the user is performing; setting an adjustment amount in accordance with a temporal variation in the identified performance speed; and providing the user with music information corresponding to a time point that is later, by the adjustment amount, than the time point that corresponds to the performance position identified in the piece of music.
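A minimal sketch of one possible mapping from speed variation to adjustment amount, assuming the music information is a list indexed by score position and that a larger variation calls for a longer look-ahead; the formula, units, and names are illustrative only.

```python
import statistics

def provide_music_info(score, position_idx, speed_history, base_lookahead=0.5):
    """Return the music information to present, looking ahead of the performance position.

    `score` is a list of music information indexed by score position, `speed_history`
    the recently identified performance speeds. The adjustment amount grows with the
    temporal variation of the speed, so an unsteady performance gets a longer look-ahead.
    """
    variation = statistics.pstdev(speed_history) if len(speed_history) > 1 else 0.0
    adjustment = base_lookahead * (1.0 + variation)           # adjustment amount in seconds
    current_speed = speed_history[-1] if speed_history else 1.0
    offset = int(round(adjustment * current_speed))           # convert time offset to score positions
    return score[min(position_idx + offset, len(score) - 1)]
```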
Abstract:
A musical-performance-information transmission method uses a first instrument and a second instrument, wherein the first instrument produces sounds in accordance with a user's musical performance and generates musical-performance data in accordance with the produced sounds, and the second instrument produces sounds by receiving the musical-performance data via a communication means. In the musical-performance-information transmission method, a mixed-sound signal is generated in accordance with a mixture of the sounds produced by the first instrument and sounds different from those produced by the first instrument, a reference signal is generated in accordance with the sounds produced by the first instrument, the reference signal is removed from the mixed-sound signal on the basis of the mixed-sound signal and the reference signal to generate a separated signal, and sound is emitted on the basis of the separated signal.
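The abstract does not name a separation algorithm; a normalized LMS adaptive filter is one common way to remove a known reference from a mixture, sketched below under the assumption of time-aligned signals.

```python
import numpy as np

def remove_reference_nlms(mixed, reference, taps=256, mu=0.5, eps=1e-8):
    """Cancel the reference-derived component from the mixed-sound signal (NLMS filter)."""
    mixed = np.asarray(mixed, dtype=np.float64)
    reference = np.asarray(reference, dtype=np.float64)
    length = min(len(mixed), len(reference))
    mixed, reference = mixed[:length], reference[:length]
    w = np.zeros(taps)                                   # adaptive filter weights
    separated = np.zeros(length)
    ref_pad = np.concatenate([np.zeros(taps - 1), reference])
    for n in range(length):
        x = ref_pad[n:n + taps][::-1]                    # most recent reference samples first
        estimate = w @ x                                 # estimated leakage of the reference
        error = mixed[n] - estimate                      # separated (error) sample
        w += (mu / (x @ x + eps)) * error * x            # normalized LMS weight update
        separated[n] = error
    return separated
```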
Abstract:
A distribution system includes a device control circuit that receives a first sound signal and a second sound signal that are related to a performance sound to be distributed. The device control circuit also receives metadata indicating a type of the first sound signal and a type of the second sound signal, and sound environment data indicating a sound characteristic of a sound appliance. Based on a combination of the type of the first sound signal and the sound characteristic, or a combination of the type of the second sound signal and the sound characteristic, the device control circuit controls the first sound signal or the second sound signal to be output to the sound appliance.
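A toy sketch of how the control decision could be expressed as a lookup over the combination of signal type and sound characteristic; the concrete types, characteristics, and actions below are invented for illustration.

```python
# Hypothetical routing rules keyed by (signal type, sound characteristic of the appliance).
ROUTING_RULES = {
    ("vocal", "narrow_band_speaker"): "boost_and_output",
    ("vocal", "full_range_speaker"):  "output",
    ("bass",  "narrow_band_speaker"): "suppress",
    ("bass",  "full_range_speaker"):  "output",
}

def control_output(signal, signal_type, sound_characteristic):
    """Return the control decision for one sound signal, plus the signal itself."""
    action = ROUTING_RULES.get((signal_type, sound_characteristic), "output")
    return action, signal
```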
Abstract:
A signal processing device includes an electronic controller including at least one processor. The electronic controller is configured to execute a reception unit, a generation unit, and a processing unit. The reception unit is configured to receive first time-series data that include sound data, and second time-series data that are generated based on the first time-series data and that include at least data indicating a timing of a human action. The generation unit is configured to generate, based on the second time-series data, third time-series data that give notification of the timing of the human action. The processing unit is configured to synchronize and output an output signal based on the first time-series data and an output signal based on the third time-series data, such that the timing of the human action in the first time-series data and the timing of the human action in the third time-series data match.
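A minimal sketch of the synchronization idea, modeling the latency needed to generate the notification data as a fixed number of frames and delaying the first time-series data by the same amount; the fixed-delay assumption and the frame-based interface are simplifications.

```python
from collections import deque

class SyncOutput:
    """Delay the first time-series data so its human-action timing matches the notification."""

    def __init__(self, notify_latency_frames: int):
        self.buffer = deque()
        self.latency = notify_latency_frames    # frames needed to generate the third time-series data

    def push(self, first_frame, third_frame):
        self.buffer.append(first_frame)
        if len(self.buffer) <= self.latency:
            return None                         # still filling the delay line
        delayed_first = self.buffer.popleft()   # first data delayed by the generation latency
        return delayed_first, third_frame       # emitted together, timings matched
```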
Abstract:
A performance agent training method realized by at least one computer includes observing a first performance of a musical piece by a performer, generating, by a performance agent, performance data of a second performance to be performed in parallel with the first performance, outputting the performance data such that the second performance is performed in parallel with the first performance of the performer, acquiring a degree of satisfaction of the performer with respect to the second performance performed based on the output performance data, and training the performance agent by reinforcement learning, using the degree of satisfaction as a reward.
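A skeleton of the training loop this describes, with hypothetical `agent` and `performer` objects; the abstract does not prescribe a particular reinforcement-learning algorithm, so the update step is left abstract.

```python
def train_performance_agent(agent, performer, episodes=100):
    """Reinforcement-learning loop in which the performer's satisfaction is the reward."""
    for _ in range(episodes):
        first_performance = performer.play()              # observe the first performance
        second_data = agent.generate(first_performance)   # performance data for the second performance
        performer.hear(second_data)                       # output in parallel with the first performance
        reward = performer.rate_satisfaction()            # degree of satisfaction with the second performance
        agent.update(reward)                              # reinforcement-learning update (algorithm unspecified)
```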
Abstract:
A trained model establishment method realized by a computer includes acquiring a plurality of datasets, each of which is formed by a combination of first performance data of a first performance by a performer, second performance data of a second performance performed together with the first performance, and a satisfaction label indicating a degree of satisfaction of the performer, and executing machine learning of a satisfaction estimation model by using the plurality of datasets. In the machine learning, the satisfaction estimation model is trained such that, for each of the datasets, a result of estimating a degree of satisfaction of the performer from the first performance data and the second performance data matches the degree of satisfaction indicated by the satisfaction label.
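A hedged sketch of what the machine-learning step could look like in PyTorch, assuming the two performances are represented as fixed-length feature vectors and the satisfaction label is a scalar; the architecture and loss are illustrative, not taken from the abstract.

```python
import torch
from torch import nn

class SatisfactionEstimator(nn.Module):
    """Maps concatenated first- and second-performance features to a satisfaction score."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, first_perf, second_perf):
        return self.net(torch.cat([first_perf, second_perf], dim=-1)).squeeze(-1)

def train_satisfaction_model(model, datasets, epochs=10, lr=1e-3):
    """`datasets` is a list of (first_performance, second_performance, satisfaction_label) tensors."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for first_perf, second_perf, label in datasets:
            optimizer.zero_grad()
            # Push the estimated degree of satisfaction toward the labelled one.
            loss = loss_fn(model(first_perf, second_perf), label)
            loss.backward()
            optimizer.step()
    return model
```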
Abstract:
An estimation model construction method realized by a computer includes preparing a plurality of training data that include first training data and second training data, the first training data including first feature amount data that represent a first feature amount of a performance sound of a musical instrument and first onset data that represent a pitch at which an onset exists, and the second training data including second feature amount data that represent a second feature amount of a sound generated by a sound source of a type different from the musical instrument and second onset data that represent that an onset does not exist, and constructing, by machine learning using the plurality of training data, an estimation model that estimates, from feature amount data that represent a feature amount of a performance sound of the musical instrument, estimated onset data that represent a pitch at which an onset exists.
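A hedged PyTorch sketch of the construction step, assuming the feature amount data are fixed-length vectors and the onset data are multi-hot vectors over 128 pitches (all-zero for the second training data); sizes, architecture, and loss are assumptions.

```python
import torch
from torch import nn

N_PITCHES = 128  # assumed pitch resolution of the onset data

class OnsetEstimator(nn.Module):
    """Maps feature amount data (e.g. one spectral frame) to per-pitch onset logits."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, N_PITCHES))

    def forward(self, features):
        return self.net(features)

def construct_estimation_model(model, instrument_data, other_source_data, epochs=10, lr=1e-3):
    """`instrument_data`: (first feature amount data, multi-hot first onset data) pairs.
    `other_source_data`: (second feature amount data, all-zero second onset data) pairs."""
    samples = list(instrument_data) + list(other_source_data)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for features, onset_target in samples:
            optimizer.zero_grad()
            loss = loss_fn(model(features), onset_target)
            loss.backward()
            optimizer.step()
    return model
```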