Abstract:
A characteristic-data-extraction unit 13 extracts characteristic data containing changing information from song data. An impression-data-conversion unit 14 then uses a pre-trained hierarchical neural network to convert the extracted characteristic data into impression data, and stores it together with the song data in a song database 15. A song-search unit 18 searches the song database 15 based on impression data input from a PC-control unit 19 and outputs the search results to a search-results-output unit 21.
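The pipeline above could be sketched as follows. All weights, feature definitions and names are illustrative assumptions, not taken from the patent; the "changing information" is modeled here as simple frame-to-frame differences, and the hierarchical network as a tiny two-layer feed-forward net.

```python
import math

def extract_features(samples):
    """Characteristic data: mean absolute change and change variance."""
    diffs = [abs(b - a) for a, b in zip(samples, samples[1:])]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    return [mean, var]

def to_impression(features, w_hidden, w_out):
    """Hierarchical (two-layer) network: characteristic -> impression data."""
    hidden = [math.tanh(sum(w * f for w, f in zip(row, features)))
              for row in w_hidden]
    return [sum(w * h for w, h in zip(row, hidden)) for row in w_out]

def search(db, query, top=1):
    """Return ids of songs whose stored impression is closest to the query."""
    def dist(imp):
        return sum((a - b) ** 2 for a, b in zip(imp, query))
    ranked = sorted(db.items(), key=lambda kv: dist(kv[1]))
    return [song_id for song_id, _ in ranked[:top]]
```

In this reading, the database stores one impression vector per song, and searching reduces to a nearest-neighbor lookup in impression space.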
Abstract:
A system and methods for the dynamic generation of playlists for a user are provided. In connection with a system that convergently merges perceptual analysis and digital-signal-processing analysis of media entities for the purpose of classifying them, various means are provided for automatically generating playlists of closely related and/or similarly situated media entities for distribution to participating users. Techniques for providing a dynamic recommendation engine and for rating media entities are also included. In an illustrative implementation, the playlists may be generated and stored, allowing user persistence from experience to experience.
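A minimal sketch of generating a playlist of "closely related" media entities, assuming each entity has already been classified into a feature vector (the catalog structure and similarity measure are hypothetical):

```python
def generate_playlist(seed_id, catalog, size=5):
    """Rank other media entities by similarity of their classification
    vectors to the seed entity's vector; return the seed plus the
    closest matches, up to the requested playlist size."""
    seed = catalog[seed_id]
    def similarity(vec):
        # Negative squared distance: larger means more similar.
        return -sum((a - b) ** 2 for a, b in zip(vec, seed))
    ranked = sorted((mid for mid in catalog if mid != seed_id),
                    key=lambda mid: similarity(catalog[mid]),
                    reverse=True)
    return [seed_id] + ranked[:size - 1]
```

The stored playlist could then simply be persisted per user to carry over from one listening session to the next.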
Abstract:
A game device is provided which can automatically compose background music without forcing the player to perform complicated input operations, allowing the player to enjoy the game. A game processor (22) receives operational signals from a control pad (10) and performs the game processing accordingly. An accompaniment parameter generator (25) receives, from the game processor (22), parameters relating to the status of the game and generates an accompaniment parameter corresponding to that status. A melody parameter generator (24) receives the operational signals from the control pad (10) and, treating them as sound-producing factors for a melody, decides on scales, sound-production starting times, note lengths and other necessary conditions. The starting time for producing the melody is determined by referring to the sound-producing timing included in the accompaniment parameter. A sound processor (26) reproduces the background music specified by the accompaniment parameter and the melody parameter.
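The two generators could be sketched as below. The mapping from game status to tempo and the beat-quantization rule are illustrative assumptions; the patent only states that melody start times refer to the timing carried in the accompaniment parameter.

```python
def accompaniment_params(game_state, beats=8):
    """Map game status (here a hypothetical danger level) to an
    accompaniment parameter: a tempo and its beat timing grid."""
    tempo = 90 + 10 * game_state["danger_level"]
    beat = 60.0 / tempo
    return {"tempo": tempo, "beat_times": [i * beat for i in range(beats)]}

def melody_start(press_time, accomp):
    """Quantize the player's pad press to the next accompaniment beat,
    so the melody starts in time with the accompaniment."""
    return min((t for t in accomp["beat_times"] if t >= press_time),
               default=accomp["beat_times"][-1])
```

A usage example: at danger level 3 the tempo becomes 120 bpm, so a press at 0.7 s is deferred to the beat at 1.0 s.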
Abstract:
A video game system comprises memory for storing data defining graphical objects for use in a video game. The system further comprises logic configured to enable a user to select at least one musical song to be played during a run of the video game. The logic is further configured to control at least one of the graphical objects during the run of the video game based on an attribute correlated with the selected song.
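A minimal sketch of the control logic, assuming the correlated attribute is the song's tempo and that it scales an object's animation speed (the patent does not fix a particular attribute or effect):

```python
def update_object(obj, song, dt):
    """Advance a graphical object during the run of the game, with its
    speed correlated to the selected song's tempo (illustrative rule:
    120 bpm leaves the base speed unchanged)."""
    speed = obj["base_speed"] * song["tempo_bpm"] / 120.0
    obj["x"] += speed * dt
    return obj
```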
Abstract:
A karaoke reproducing apparatus is equipped with means for easily checking whether a music piece selected by the user has been accepted correctly. The apparatus has a memory in which music-title information for each of the music pieces recorded on a recording medium has been stored in advance. When one of the plurality of music pieces is selected by an operation, the title of the selected music piece is immediately displayed in characters. The apparatus also has memory means in which music-piece classification information for each of the recorded music pieces has been stored in advance. An item-content selection command, designating the contents of at least one of a plurality of different items included in the music-piece classification information, is generated in accordance with an operation. The music piece corresponding to the contents of the designated item or items is then searched for using the information from the memory means.
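The two lookups could be sketched as follows; the item names (genre, era) and data layout are hypothetical stand-ins for the classification items the abstract leaves unspecified.

```python
def title_of(title_memory, piece_id):
    """Immediate title display: confirm the selection by returning the
    stored title characters for the selected piece."""
    return title_memory[piece_id]

def search_pieces(classification_db, item_selection):
    """Return ids of pieces whose classification matches every item
    content designated by the item-content selection command."""
    return [pid for pid, items in classification_db.items()
            if all(items.get(k) == v for k, v in item_selection.items())]
```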
Abstract:
A method is provided for feeding karaoke data representing a karaoke performance to a karaoke apparatus having an audio section and a video section. The method is conducted by the following steps. The initial step is formatting the karaoke data, which contains various kinds of data items including music control data and word control data, into a plurality of packets such that each packet is formed of a body containing a segment of the karaoke data and a header containing identification information indicating the kind of karaoke data contained in the body. The next step is delivering the plurality of packets in a stream to the karaoke apparatus in a predetermined order, by which the karaoke apparatus time-sequentially processes the stream of packets. The further step is selectively distributing the music control data contained in the processed packets to the audio section in accordance with the identification information, thereby enabling the audio section to generate the music tones of the karaoke performance. The last step is selectively distributing the word control data contained in the processed packets to the video section in accordance with the identification information, thereby enabling the video section to display the lyric words of the karaoke performance in synchronization with the music tones.
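The packet format and the header-based routing could be sketched as below. The identification codes and payload strings are invented for illustration; the abstract only requires that the header identify the kind of data in the body.

```python
MUSIC, WORDS = 0x01, 0x02  # hypothetical identification codes

def make_packet(kind, payload):
    """A packet: header with identification info, body with a segment
    of the karaoke data."""
    return {"header": kind, "body": payload}

def demultiplex(stream):
    """Process the packets in delivery order, routing music control
    data to the audio section and word control data to the video
    section according to each header."""
    audio, video = [], []
    for packet in stream:
        if packet["header"] == MUSIC:
            audio.append(packet["body"])
        elif packet["header"] == WORDS:
            video.append(packet["body"])
    return audio, video
```

Because the packets are processed time-sequentially in their delivered order, lyric display stays synchronized with tone generation without any explicit timestamps in this sketch.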
Abstract:
A music data processing system having: a storage unit for storing performance data together with text data or auxiliary data in either a first storage format or a second storage format; a first data search unit for searching text data or auxiliary data stored in the first storage format; a second data search unit for searching text data or auxiliary data stored in the second storage format; and a processing unit that processes the text data or auxiliary data if data in the first storage format can be found by the first data search unit and, if it cannot, processes the text data or auxiliary data if data in the second storage format can be found by the second data search unit.
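The fallback logic amounts to trying the first-format search and only consulting the second-format search when it fails. A minimal sketch, with the two stores modeled as hypothetical dictionaries keyed by song id:

```python
def find_text(song_id, first_format_store, second_format_store):
    """Search the first-format store; fall back to the second-format
    store only when the first search fails. Returns None when the
    data cannot be found in either format."""
    record = first_format_store.get(song_id)
    if record is not None:
        return record
    return second_format_store.get(song_id)
```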
Abstract:
An electronic musical instrument apparatus includes a pad and fret switches. When the pad is struck while any one of the fret switches is depressed, at least one musical tone is generated. The apparatus has different modes of performance operation, including a first melody mode, a second melody mode, a bass mode, a first chord mode, a second chord mode, an ad-lib mode and the like. In the first melody mode, each time the pad is operated, a note event is read out from a memory, the previous tone that was being generated is muted and, at the same time, the next tone for the note event read out is immediately generated, so that a legato-like performance can be achieved. If desired, a mute switch can be operated to mute the tone currently being generated, so that a staccato-like performance is achieved. In the second melody mode, the bass mode, the first chord mode and the second chord mode, performance data for note events is read out successively and continuously from a memory. Each time the pad is operated, the apparatus generates at least the tone for the note event being read out at the moment the pad is operated. In the ad-lib mode, different tones for ad-lib performance are assigned to the fret switches. When the pad is operated while any of the fret switches is depressed, at least a tone having the pitch assigned to that fret switch is generated.
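The first melody mode's mute-then-play behavior could be sketched as a small state machine; the event log stands in for the tone generator, and the note names are illustrative.

```python
class FirstMelodyMode:
    """Each pad strike mutes the sounding tone and immediately starts
    the next stored note event, giving a legato-like articulation."""
    def __init__(self, note_events):
        self.notes = iter(note_events)  # performance data in memory
        self.sounding = None
        self.log = []                   # stands in for the tone generator

    def pad_strike(self):
        if self.sounding is not None:
            self.log.append(("mute", self.sounding))
        self.sounding = next(self.notes, None)
        if self.sounding is not None:
            self.log.append(("play", self.sounding))
        return self.sounding

    def mute_switch(self):
        """Staccato-like: cut the tone without advancing to the next note."""
        if self.sounding is not None:
            self.log.append(("mute", self.sounding))
            self.sounding = None
```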
Abstract:
A hand-held electronic music reference machine includes a platform having a keyboard and a display for displaying text. A database, removably or permanently mounted to the platform, has a first memory portion storing, for each of a multiplicity of songs, selected lyrics and identification information including a title. The database has a second memory portion storing a segment of each of the songs. A user-actuated selection component is operatively connected to the first memory portion of the database and to the display, permitting operator selection of a song from a list of song titles shown on the display and inducing display of the lyrics stored in the first memory portion for the selected song. In addition, a user-actuated audio production element provided on the platform is operatively coupled to the selection component and the database, enabling an audible reproduction of the segment stored in the second memory portion for the selected song. Search filters are provided to enable a user to determine a song title from bits of ancillary information, including a series of relative note or pitch values, i.e., a melody line that is rising, falling or remaining the same in pitch value.
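The relative-pitch search filter could be sketched as below: each melody is reduced to a string of Up/Down/Repeat symbols and matched against the user's query. The pitch sequences and title are illustrative examples, not data from the patent.

```python
def contour(pitches):
    """Series of relative note values: U(p), D(own), R(epeat)."""
    return "".join("U" if b > a else "D" if b < a else "R"
                   for a, b in zip(pitches, pitches[1:]))

def find_titles(db, query_contour):
    """Return titles whose melody line starts with the queried
    rising/falling/same pattern."""
    return [title for title, pitches in db.items()
            if contour(pitches).startswith(query_contour)]
```

A user who remembers only "the tune repeats a note, then rises twice" can thus recover the title without knowing any absolute pitches.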
Abstract:
A timepiece movement comprises a rotor driven in intermittent rotation by a stepping motor, a speed-reducing wheel train and a second stop device. The speed-reducing wheel train has a movement-conversion mechanism for smoothly converting the intermittent rotary movement of the rotor into continuous rotary movement of a second hand wheel. The second stop device is movable between a first position, for immediately stopping the second hand wheel, and a second position, for immediately moving the second hand wheel at a predetermined speed.