Abstract:
A system for controlling at least one string of a musical instrument by selectively exciting or damping vibration of the string is provided. The system includes at least one transducer configured to sense a lateral vibration of the string and/or to apply an actuating force to the string. A controller is configured to determine an actuating signal for driving the transducer to apply a longitudinal actuating force to the string at a termination point of the string. The longitudinal actuating force is operable to modulate a tension of the string so as to increase and/or damp the lateral vibration and/or selected harmonics thereof.
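A minimal sketch of the underlying idea, assuming a single lateral string mode simulated in Python; the feedback law, gain, and all constants below are illustrative assumptions rather than the patent's actual control scheme:

    import numpy as np

    def tension_feedback_sim(f0=110.0, gain=0.02, seconds=1.0, sr=48000):
        # Toy model of one lateral string mode whose effective tension is
        # modulated by a longitudinal force derived from the sensed motion.
        # A positive gain removes energy (damping); a negative gain adds
        # energy (sustain/excitation).  Illustrative only.
        dt = 1.0 / sr
        w0 = 2.0 * np.pi * f0
        x, v = 1e-3, 0.0                      # lateral displacement, velocity
        out = np.empty(int(seconds * sr))
        for n in range(out.size):
            tension_scale = 1.0 + gain * np.sign(x * v)   # actuating signal
            a = -(w0 ** 2) * tension_scale * x            # restoring force
            v += a * dt
            x += v * dt
            out[n] = x
        return out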
Abstract:
Methods and a system for providing electronic musical instruments are disclosed. Through novel combinations of sensor inputs and processing, they allow simulation of acoustic instruments including, but not limited to, a trombone, trumpet, and saxophone. The sensor inputs are configured to trigger playback and transitioning of sound and to control its various attributes, alone or in combination.
Abstract:
Based on the understanding that the time-varying characteristics of a tone element, such as amplitude or pitch, in waveform data acquired from a live performance of a musical instrument include a variation component intended or controllable by a human player and a variation component not intended or not controllable by the human player, the present invention allows the two components to be adjusted and controlled separately and independently of each other, so as to achieve effective, high-quality control. A discrete variation value train is acquired for at least one particular tone element in the original waveform data, and the acquired variation value train is separated, in accordance with a time-constant factor, into a “swell” value train of a relatively great time constant and a “fluctuation” value train of a relatively small time constant. The “swell” value train and the “fluctuation” value train are variably controlled independently of each other. In this way, high-quality control can be performed on tone elements, such as amplitude and pitch, included in the sampled waveform data.
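A minimal sketch of the separation step, assuming the variation value train is an array of per-frame pitch or amplitude deviations; the one-pole filter, frame rate, and cutoff are illustrative choices, not the invention's actual time-constant factor:

    import numpy as np

    def split_swell_fluctuation(values, frame_rate=200.0, cutoff_hz=2.0):
        # Smooth the variation value train with a large time constant to get
        # the "swell" train; the residual is the "fluctuation" train.
        alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / frame_rate)
        swell = np.empty(len(values))
        state = float(values[0])
        for i, v in enumerate(values):
            state += alpha * (v - state)
            swell[i] = state
        fluctuation = np.asarray(values, dtype=float) - swell
        return swell, fluctuation

    def recombine(swell, fluctuation, swell_depth=1.0, fluct_depth=1.0):
        # The two trains can be scaled independently before recombination.
        return swell_depth * swell + fluct_depth * fluctuation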
Abstract:
A system and method are provided for correcting finger positions for an electronic musical instrument. By adding a correction step in the direction of the nearest grid value, the system can perform gradual position correction while maintaining a vibrato or glissando shape that is similar to the vibrato or glissando shape of the actual finger positions over time. The system and method may be used for pitch correction on a continuous-pitch electronic musical instrument.
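A minimal sketch of such a correction loop in Python, assuming positions are expressed in fret (semitone) units on a unit grid; the grid step and correction rate are illustrative assumptions:

    import math

    def correct_positions(raw_positions, grid_step=1.0, rate=0.01):
        # Accumulate a slowly changing offset that pulls the output toward
        # the nearest grid value.  Because the offset changes much more
        # slowly than the finger motion, vibrato and glissando shapes in
        # the raw positions are preserved.
        offset = 0.0
        corrected = []
        for p in raw_positions:
            value = p + offset
            nearest = round(value / grid_step) * grid_step
            offset += rate * (nearest - value)   # small step toward the grid
            corrected.append(value)
        return corrected

    # Example: a slightly sharp finger position with vibrato around 5.12
    raw = [5.12 + 0.05 * math.sin(0.4 * n) for n in range(200)]
    fixed = correct_positions(raw)               # drifts toward 5.00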
Abstract:
An electronic musical instrument is provided which can realize a choking effect through a simple operation. The electronic musical instrument is constructed such that a neck provided with a fingerboard is fixed to a body. A plurality of (twelve) fret operating elements are provided for each of six sounding channels. The body is provided with a string input section and an arm, and six string operating elements are provided for the respective sounding channels. For each sounding channel, a tone generator generates a musical tone at a pitch determined by the corresponding fret operating element and at a sounding timing determined by the corresponding string operating element. When the arm is operated, a CPU applies a choking effect to the musical tone of any sounding channel in which a tone is being sounded, by raising the pitch of that tone by a predetermined amount.
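A minimal sketch of the arm-triggered pitch raise, assuming pitches are handled as frequencies in Hz and that the predetermined amount is one semitone; both are illustrative assumptions:

    def apply_choking(channel_freqs, sounding, arm_on, bend_semitones=1.0):
        # Raise the pitch of every channel in which a tone is currently
        # being sounded while the arm is operated; silent channels are
        # left unchanged.
        factor = 2.0 ** (bend_semitones / 12.0) if arm_on else 1.0
        return [f * factor if on else f
                for f, on in zip(channel_freqs, sounding)]

    # Example: six channels, tones sounding on channels 0 and 2
    freqs = [82.4, 110.0, 146.8, 196.0, 246.9, 329.6]
    bent = apply_choking(freqs, [True, False, True, False, False, False],
                         arm_on=True)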
Abstract:
A singing voice synthesizing method and apparatus are provided which are capable of synthesizing natural singing voices, close to human singing voices, based on performance data input in real time. Performance data is input for each phonetic unit constituting a lyric, supplying its phonetic unit information, singing-starting time point information, singing length information, and so on. The singing-starting time point information represents the actual singing-starting time point. Each piece of performance data is input at a timing earlier than the actual singing-starting time point, and its phonetic unit information is converted into a phonetic unit transition time length. For a phonetic unit formed by a first phoneme and a second phoneme, the phonetic unit transition time length is formed by a first phoneme generation time length and a second phoneme generation time length. Using the phonetic unit transition time length, the singing-starting time point information, and the singing length information, the singing-starting time points and singing duration times of the first and second phonemes are determined. The singing-starting time point of a consonant (first phoneme) is set earlier than the actual singing-starting time point. The singing-starting time point of a vowel (second phoneme) is made coincident with, earlier than, or later than the actual singing-starting time point. In the singing voice synthesis, for each phoneme, a singing voice is generated at the determined singing-starting time point and continues to be generated for the determined singing duration time. State transition characteristics and effects characteristics may be controlled according to input control information.
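A minimal sketch of the timing step, assuming a two-phoneme (consonant plus vowel) phonetic unit and time points in seconds; the function and field names and the vowel_offset convention are illustrative assumptions:

    def phoneme_timings(actual_start, consonant_len, vowel_offset, singing_len):
        # Place the consonant so that it begins earlier than the actual
        # singing-starting time point, and place the vowel at that time
        # point shifted by vowel_offset (zero, negative = earlier,
        # positive = later).
        consonant_start = actual_start - consonant_len
        vowel_start = actual_start + vowel_offset
        vowel_len = singing_len - vowel_offset
        return {
            "consonant": (consonant_start, consonant_len),
            "vowel": (vowel_start, vowel_len),
        }

    # Example: the note nominally starts at t = 1.000 s and lasts 0.500 s,
    # with a 60 ms consonant and the vowel starting exactly on time.
    timing = phoneme_timings(1.000, 0.060, 0.0, 0.500)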
Abstract:
When any of the push-button switches on a handheld controller is pressed in a sound input mode, a video game machine generates and temporarily stores frequency data of a tone corresponding to the pressed switch. When a joystick on the controller is tilted in a predetermined direction, the video game machine changes the generated frequency data according to the amount of tilt of the joystick. It is therefore possible to input sounds of various tones using a limited number of switches. The frequency data stored in the video game machine is later read out, converted into audio signals, and output from a speaker incorporated in a CRT display. When a melody based on the input sounds coincides with a predetermined melody, the video game machine makes various changes in the progress of the game. For example, a hero character can be warped to a position different from the present position, or be provided with various items.
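A minimal sketch of the two mechanisms, assuming the joystick tilt is normalized to -1.0..1.0, notes are compared as semitone numbers, and the shift range is two semitones; these values are illustrative assumptions:

    def tilted_frequency(base_freq, tilt, max_shift_semitones=2.0):
        # Change the frequency assigned to the pressed switch according to
        # the amount (and direction) of joystick tilt.
        return base_freq * 2.0 ** (max_shift_semitones * tilt / 12.0)

    def melody_matches(entered, target, tolerance=0.5):
        # Compare the inputted note sequence with a predetermined melody,
        # allowing a small per-note deviation in semitones.
        return len(entered) == len(target) and all(
            abs(e - t) <= tolerance for e, t in zip(entered, target))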
Abstract:
A succession of performance sounds is sampled, and the sampled performance sounds are divided into a plurality of time sections of variable lengths in accordance with the respective characteristics of performance expression therein, so as to extract the waveform data of each of the time sections as an articulation element. The waveform data of each of the extracted articulation elements are analyzed in terms of a plurality of predetermined tonal factors to create template data for the individual tonal factors, and the thus-created template data are stored in a database. A tone performance to be executed is designated by a time-serial sequence of a plurality of articulation elements, in response to which the respective waveform data of the individual articulation elements are read out from the database to synthesize a tone on the basis of the waveform data. Thus, it is possible to freely edit, such as by replacement, modification, or deletion, the element corresponding to any desired time section. This arrangement facilitates realistic reproduction of articulation (style of rendition) and control of such articulation reproduction, and achieves an interactive, high-quality tone-making technique that allows free sound making and editing operations by the user.
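A minimal sketch of the data organization, assuming per-element waveform data plus per-factor template curves keyed by name; the class and field names are illustrative, not the patent's terminology:

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class ArticulationElement:
        # One variable-length time section of the sampled performance,
        # with template data for each analyzed tonal factor.
        name: str
        waveform: List[float]
        templates: Dict[str, List[float]] = field(default_factory=dict)

    class ArticulationDatabase:
        def __init__(self) -> None:
            self._elements: Dict[str, ArticulationElement] = {}

        def add(self, element: ArticulationElement) -> None:
            self._elements[element.name] = element

        def render(self, sequence: List[str]) -> List[float]:
            # Read out the waveform data of a time-serial sequence of
            # articulation elements; a real synthesizer would also apply
            # the stored templates and cross-fade between sections.
            out: List[float] = []
            for name in sequence:
                out.extend(self._elements[name].waveform)
            return out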
Abstract:
Analysis data are provided which are indicative of plural components making up an original sound waveform. The analysis data are analyzed to obtain a characteristic concerning a predetermined element, and data indicative of the obtained characteristic are extracted as a sound or musical parameter. The characteristic corresponding to the extracted musical parameter is removed from the analysis data, and the original sound waveform is represented by a combination of the thus-modified analysis data and the musical parameter. These data are stored in a memory. The user can variably control the musical parameter, and a characteristic corresponding to the controlled musical parameter is added to the analysis data. A sound waveform is then synthesized on the basis of the analysis data to which the controlled characteristic has been added. In such an analysis-type sound synthesis technique, free control can be applied to various sound elements such as formant and vibrato.
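A minimal sketch for one such element (vibrato on a per-frame pitch trajectory), assuming the analysis data include a pitch track sampled at a fixed frame rate; the smoothing filter and the parameter definition are illustrative assumptions:

    import numpy as np

    def extract_vibrato_parameter(pitch_track, frame_rate=100.0, cutoff_hz=2.0):
        # Remove the vibrato characteristic from the analyzed pitch track
        # and keep its depth as a separately controllable musical parameter.
        alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / frame_rate)
        smooth = np.array(pitch_track, dtype=float)
        for i in range(1, len(smooth)):
            smooth[i] = smooth[i - 1] + alpha * (smooth[i] - smooth[i - 1])
        vibrato = np.asarray(pitch_track, dtype=float) - smooth
        depth = float(np.max(np.abs(vibrato))) or 1.0
        return smooth, vibrato / depth, depth   # modified data, shape, parameter

    def resynthesize_pitch(smooth, vibrato_shape, depth_control):
        # Add the characteristic corresponding to the (possibly user-
        # modified) parameter back onto the analysis data before synthesis.
        return smooth + depth_control * vibrato_shape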