Abstract:
The present invention receives sound wave data from a sound inputted into a microphone and uses a CPU to sample the received sound wave data, obtaining sampled data as digitized tone data, which is then stored in a sampling memory. The CPU performs auto-play of a sound using the digitized tone data obtained by the sampling and stored in the sampling memory. Thus, a result of the sampling is automatically presented to the user after the sampling takes place, and the user can intuitively understand what can be done through sampling.
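As a minimal sketch of the sample-store-auto-play cycle described above (all names here, such as `SamplingMemory` and `auto_play`, are illustrative and not taken from the disclosure), the flow might look like:

```python
def sample(sound_wave, step=2):
    """Digitize incoming sound wave data by keeping every `step`-th value."""
    return [round(v, 3) for v in sound_wave[::step]]

class SamplingMemory:
    """Holds the digitized tone data produced by sampling."""
    def __init__(self):
        self.tone_data = None

    def store(self, data):
        self.tone_data = data

def auto_play(memory):
    """Auto-play: return the stored tone data as the playback stream."""
    return memory.tone_data

wave = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]  # incoming sound wave
memory = SamplingMemory()
memory.store(sample(wave))
print(auto_play(memory))  # [0.0, 1.0, 0.0, -1.0]
```

The point of the sketch is only the ordering: sampling happens first, and playback then draws automatically on the same stored data, so the user hears the sampling result without a separate playback request.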
Abstract:
An electronic musical instrument in one aspect of the disclosure includes: a plurality of operation elements to be performed by a user for respectively specifying different pitches; a memory that stores musical piece data that includes data of a vocal part, the vocal part including at least a first note with a first pitch and an associated first lyric part that are to be played at a first timing; and at least one processor, wherein if the user does not operate any of the plurality of operation elements in accordance with the first timing, the at least one processor digitally synthesizes a default first singing voice that includes the first lyric part and that has the first pitch in accordance with data of the first note stored in the memory, and causes the digitally synthesized default first singing voice to be audibly output at the first timing.
Abstract:
An electronic musical instrument includes an operation unit that receives a user performance; and at least one processor, wherein the at least one processor performs the following: in accordance with a user operation specifying a chord on the operation unit, obtaining lyric data of a lyric and obtaining a plurality of pieces of waveform data respectively corresponding to a plurality of pitches indicated by the specified chord; inputting the obtained lyric data to a trained model that has been trained on singing voices of a singer so as to cause the trained model to output acoustic feature data in response thereto; synthesizing each of the plurality of pieces of waveform data with the acoustic feature data so as to generate a plurality of pieces of synthesized waveform data; and outputting a polyphonic synthesized singing voice based on the generated plurality of pieces of synthesized waveform data.
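A toy sketch of the polyphonic path just described, in which sine voices stand in for the per-pitch waveform data and a fabricated amplitude envelope stands in for the trained model's acoustic feature output (every name and value here is hypothetical):

```python
import math

def pitch_to_freq(midi_note):
    """Convert a MIDI note number to frequency in Hz (A4 = 440 Hz)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

def fake_acoustic_features(lyric, n):
    """Stand-in for the trained model: one amplitude value per sample."""
    return [0.5 + 0.5 * math.sin(2 * math.pi * i / n) for i in range(n)]

def synthesize_chord(lyric, chord_midi, n=64, sr=8000):
    """Shape one sine voice per chord pitch with the envelope, then mix."""
    env = fake_acoustic_features(lyric, n)
    voices = []
    for note in chord_midi:
        f = pitch_to_freq(note)
        voices.append([env[i] * math.sin(2 * math.pi * f * i / sr)
                       for i in range(n)])
    # average the shaped voices into one polyphonic output buffer
    return [sum(v[i] for v in voices) / len(voices) for i in range(n)]

mixed = synthesize_chord("la", [60, 64, 67])  # C major triad
print(len(mixed))  # 64
```

The key structural idea is that one lyric produces one set of acoustic features, which is then applied separately to each pitch of the chord before mixing, yielding a polyphonic singing voice from a single model query.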
Abstract:
An electronic musical instrument includes at least one processor that, in accordance with a user operation on an operation unit, obtains lyric data and waveform data corresponding to a first tone color; inputs the obtained lyric data to a trained model so as to cause the trained model to output acoustic feature data in response thereto; generates waveform data corresponding to a singing voice of a singer and corresponding to a second tone color that is different from the first tone color, based on the acoustic feature data outputted from the trained model and the obtained waveform data corresponding to the first tone color; and outputs a singing voice based on the generated waveform data corresponding to the second tone color.
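A deliberately simplified sketch of the tone-color conversion above, using sample-wise amplitude shaping as a stand-in for whatever feature-based transformation the disclosure actually performs (function and variable names are made up for illustration):

```python
def convert_tone_color(first_tone_wave, acoustic_features):
    """Shape the first tone color's waveform sample-by-sample with the
    model-output feature values to yield the second tone color."""
    return [round(w * f, 3)
            for w, f in zip(first_tone_wave, acoustic_features)]

first_tone = [1.0, 1.0, 1.0, 1.0]   # waveform of the first tone color
features = [0.2, 0.4, 0.6, 0.8]     # acoustic feature data from the model
print(convert_tone_color(first_tone, features))  # [0.2, 0.4, 0.6, 0.8]
```

The sketch only illustrates the data flow: the first tone color's waveform and the model's acoustic features are combined to produce a new waveform, and the output singing voice is rendered from that second-tone-color result.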
Abstract:
An electronic musical instrument includes: an operation unit; a memory that stores lyric data including lyrics for a plurality of timings, pitch data including pitches for said plurality of timings, and a trained model that has learned singing voice features of a singer; and at least one processor, wherein at each of said plurality of timings, the at least one processor: if the operation unit is not operated, obtains, from the trained model, a singing voice feature associated with a lyric indicated by the lyric data and a pitch indicated by the pitch data; if the operation unit is operated, obtains, from the trained model, a singing voice feature associated with the lyric indicated by the lyric data and a pitch indicated by the operation of the operation unit; and synthesizes and outputs singing voice data based on the obtained singing voice feature of the singer.
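The per-timing branch above reduces to a simple pitch-selection rule. A hedged sketch, with the model lookup and synthesis abstracted away and all lyric/pitch values invented for illustration:

```python
def step_output(lyric, stored_pitch, user_pitch=None):
    """Pick the pitch for one timing: the key the user pressed if any,
    otherwise the pitch stored in the memory's pitch data."""
    return (lyric, user_pitch if user_pitch is not None else stored_pitch)

lyrics  = ["twin", "kle", "twin", "kle"]
pitches = [60, 60, 67, 67]          # stored pitch data (MIDI numbers)
pressed = [None, None, 69, None]    # the user pressed a key only at timing 3
line = [step_output(l, p, u) for l, p, u in zip(lyrics, pitches, pressed)]
print(line)  # [('twin', 60), ('kle', 60), ('twin', 69), ('kle', 67)]
```

Either way the lyric advances on schedule; the user's operation only overrides the pitch at which that lyric is sung.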
Abstract:
An electronic musical instrument in one aspect of the disclosure includes a keyboard, at least one processor, and a memory that stores musical piece data that includes data of a vocal part, the vocal part including at least first and second notes together with associated first and second lyric parts that are to be successively played at first and second timings, respectively, wherein if, while a digitally synthesized first singing voice corresponding to the first note is being output, a user specifies, via the keyboard, a third pitch that is different from the pitches of the first and second notes prior to the arrival of the second timing, the at least one processor synthesizes a modified first singing voice having the third pitch in accordance with the data of the first lyric part, and causes the digitally synthesized modified first singing voice to be audibly output at the third timing at which the user specifies the third pitch.
Abstract:
An electronic musical instrument includes: a memory that stores a trained acoustic model obtained by performing machine learning on training musical score data and training singing voice data of a singer; and at least one processor, wherein the at least one processor: in accordance with a user operation on an operation element in a plurality of operation elements, inputs prescribed lyric data and pitch data corresponding to the user operation of the operation element to the trained acoustic model so as to cause the trained acoustic model to output acoustic feature data in response to the inputted prescribed lyric data and the inputted pitch data, and digitally synthesizes and outputs inferred singing voice data that infers a singing voice of the singer on the basis of the acoustic feature data output by the trained acoustic model.
Abstract:
An electronic musical instrument includes: a memory that stores a trained acoustic model obtained by performing machine learning on training musical score data and training singing voice data of a singer; and at least one processor, wherein the at least one processor: in accordance with a user operation on an operation element in a plurality of operation elements, inputs prescribed lyric data and pitch data corresponding to the user operation of the operation element to the trained acoustic model, and digitally synthesizes and outputs inferred singing voice data that infers a singing voice of the singer on the basis of at least a portion of acoustic feature data output by the trained acoustic model, and on the basis of instrument sound waveform data that are synthesized in accordance with the pitch data corresponding to the user operation of the operation element.
Abstract:
In a recording and playback device of the present invention, when input data exceeding a threshold value is supplied, the CPU records input data for an amount of time corresponding to a single beat in an area specified by syllable number SPLIT in an input buffer IB of the RAM, and after incrementing the syllable number SPLIT, waits until the recorded data becomes silent. The CPU repeats this series of processing until the value of the incremented syllable number SPLIT reaches “4”, and thereby stores input data recorded for an amount of time corresponding to a single beat in each input buffer IB(1) to IB(4) corresponding to syllable numbers SPLIT1 to SPLIT4. Then, the CPU copies the input data to the recording area of the RAM such that these input data are sequentially connected and formed into loop data for an amount of time corresponding to a single bar.
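The threshold-trigger, record-one-beat, wait-for-silence loop described above can be sketched as follows. The threshold, beat length, and input values are all illustrative stand-ins for real audio levels, and the buffer names merely echo the abstract's IB(1) to IB(4):

```python
THRESHOLD = 0.05  # input level that triggers recording (illustrative)
BEAT_LEN = 4      # samples per beat in this toy example

def record_loop(stream):
    """Record four one-beat segments, each triggered when the input exceeds
    the threshold, then connect them into one bar of loop data."""
    input_buffers = []              # IB(1)..IB(4), one per syllable number
    i, split = 0, 0
    while split < 4 and i < len(stream):
        if abs(stream[i]) > THRESHOLD:
            input_buffers.append(stream[i:i + BEAT_LEN])  # one beat
            split += 1                                    # increment SPLIT
            i += BEAT_LEN
            while i < len(stream) and abs(stream[i]) > THRESHOLD:
                i += 1              # wait until the input becomes silent
        else:
            i += 1
    # copy the beats end-to-end to form one bar of loop data
    return [s for beat in input_buffers for s in beat]

stream = []
for level in (0.5, 0.6, 0.7, 0.8):
    stream += [0.0, 0.0] + [level] * 4  # silence, then a one-beat burst
loop = record_loop(stream)
print(loop)  # the four beats joined: [0.5]*4 + [0.6]*4 + [0.7]*4 + [0.8]*4
```

Each burst in the input stream yields one beat-length buffer, and the final list concatenates them in recording order, mirroring how the abstract's CPU forms one-bar loop data from the four syllable buffers.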
Abstract:
According to the present invention, there is provided an electronic musical instrument that allows a player to learn a certain range covering a key of correct pitch and to feel as if he or she were playing the music. The instrument includes a controller to perform: a pitch determination process of determining, based upon a first timing and a first pitch included in music data, pitches within a fixed range from the first pitch that are allowed to be designated in accordance with the first timing; a display process of displaying an identifier to identify the determined pitches; and an automatic playing process of advancing automatic playing of the music data by producing sound corresponding to the first pitch from a sound producing unit when one of the pitches identified by the displayed identifier is designated.
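A small sketch of the fixed-range acceptance rule described above, with an invented range width and MIDI note numbers standing in for keys (the abstract does not specify the range size):

```python
SEMITONE_RANGE = 2  # half-width of the allowed range (illustrative value)

def allowed_pitches(first_pitch, rng=SEMITONE_RANGE):
    """Pitches within the fixed range of the correct pitch; the display
    process would highlight these keys with an identifier."""
    return set(range(first_pitch - rng, first_pitch + rng + 1))

def advance_auto_play(first_pitch, pressed_pitch):
    """If the pressed key lies within the range, the instrument sounds the
    correct first pitch and playback advances; otherwise nothing happens."""
    if pressed_pitch in allowed_pitches(first_pitch):
        return first_pitch
    return None

print(advance_auto_play(60, 61))  # near miss still sounds 60
print(advance_auto_play(60, 65))  # outside the range: None
```

Note that a near-miss key press still produces the correct pitch, which is what lets a learner feel as if he or she were playing the piece accurately.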