Abstract:
An electric instrument music control device is provided having at least two multi-axis position sensors. One sensor is a reference multi-axis position sensor retained in a fixed position, the reference multi-axis position sensor having at least one axis held in a fixed position. Another sensor is a moveable multi-axis position sensor rotatable about at least one axis corresponding to the at least one axis of the reference multi-axis position sensor, wherein the moveable multi-axis position sensor is in communication with the reference multi-axis position sensor. The device may include a processor that processes the differentiation between the angular position of the at least one axis of the reference multi-axis position sensor and that of the at least one axis of the moveable multi-axis position sensor, wherein the angular differentiation correlates to a music effect of an electric instrument.
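A minimal sketch of the computation this abstract describes: the signed difference between corresponding axis angles of the reference and moveable sensors drives an effect parameter. The function names and the linear depth mapping are illustrative assumptions, not taken from the patent.

```python
def angular_difference(reference_angle, moveable_angle):
    """Signed difference between two axis angles, wrapped to [-180, 180) degrees."""
    return (moveable_angle - reference_angle + 180.0) % 360.0 - 180.0

def angle_to_effect_depth(diff_degrees, max_degrees=90.0):
    """Map the magnitude of the angular differentiation to a 0.0-1.0 effect depth
    (assumed linear mapping; the patent does not specify one)."""
    return min(abs(diff_degrees) / max_degrees, 1.0)

# Example: the moveable sensor is rotated 30 degrees past the fixed reference.
diff = angular_difference(reference_angle=10.0, moveable_angle=40.0)
print(angle_to_effect_depth(diff))  # 0.333... -> e.g., a wah or vibrato depth
```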
Abstract:
An audio/visual system (e.g., an entertainment console or other computing device) plays a base audio track, such as a portion of a pre-recorded song or notes from one or more instruments. Using a depth camera or other sensor, the system automatically detects that a user (or a portion of the user) enters a first collision volume of a plurality of collision volumes. Each collision volume of the plurality of collision volumes is associated with a different audio stem. In one example, an audio stem is a sound from a subset of instruments playing a song, a portion of a vocal track for a song, or notes from one or more instruments. In response to automatically detecting that the user (or a portion of the user) entered the first collision volume, the appropriate audio stem associated with the first collision volume is added to the base audio track or removed from the base audio track.
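A sketch of the collision-volume mechanism under stated assumptions: volumes are modeled as axis-aligned boxes, and a stem is toggled only on an entry event. All class and field names are hypothetical; the patent does not prescribe a volume shape.

```python
from dataclasses import dataclass

@dataclass
class CollisionVolume:
    """Axis-aligned box in camera space paired with an audio stem (names assumed)."""
    stem_name: str
    min_corner: tuple  # (x, y, z)
    max_corner: tuple

    def contains(self, point):
        return all(lo <= p <= hi for p, lo, hi in
                   zip(point, self.min_corner, self.max_corner))

def update_mix(active_stems, inside, volumes, user_point):
    """Toggle a stem in or out of the base mix when the user *enters* its
    volume (edge-triggered, so standing inside does not re-toggle)."""
    for vol in volumes:
        now_inside = vol.contains(user_point)
        if now_inside and vol.stem_name not in inside:
            active_stems ^= {vol.stem_name}  # add if absent, remove if present
        (inside.add if now_inside else inside.discard)(vol.stem_name)

volumes = [CollisionVolume("drums", (0, 0, 1), (1, 2, 2)),
           CollisionVolume("vocals", (1, 0, 1), (2, 2, 2))]
stems, inside = set(), set()
update_mix(stems, inside, volumes, user_point=(0.5, 1.0, 1.5))
print(stems)  # {'drums'}
```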
Abstract:
In an embodiment, a device comprises means for generating an audio signal based on sound data, the audio signal configured to produce sound from an audio producing device; means for generating a haptic command based on the sound data, the haptic command configured to cause a haptic feedback device to output a haptic sensation, the haptic sensation being associated with at least one characteristic of the sound data; and means for receiving a navigation command from a user experiencing the haptic sensation via the haptic feedback device, the navigation command associated with the sound data and based, at least in part, on the haptic sensation.
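A sketch casting the three claimed means as plain methods. The amplitude-to-intensity mapping and the navigation rule are illustrative assumptions; the abstract only requires that the haptic sensation track some characteristic of the sound data.

```python
class HapticAudioNavigator:
    """Hypothetical stand-in for the device's three 'means' (all names assumed)."""

    def generate_audio(self, sound_data):
        # "means for generating an audio signal based on sound data"
        return [sample / 32768.0 for sample in sound_data]  # normalize 16-bit PCM

    def generate_haptic_command(self, sound_data):
        # "means for generating a haptic command": here the vibration intensity
        # follows the peak amplitude of the sound data (an assumed mapping).
        peak = max(abs(s) for s in sound_data)
        return {"intensity": peak / 32768.0, "duration_ms": 50}

    def receive_navigation(self, user_input, haptic_command):
        # "means for receiving a navigation command" based on the felt sensation,
        # e.g., the user skips ahead when the vibration marks a loud section.
        return ("skip" if user_input == "swipe"
                and haptic_command["intensity"] > 0.8 else "stay")

nav = HapticAudioNavigator()
cmd = nav.generate_haptic_command([0, 12000, -30000, 8000])
print(nav.receive_navigation("swipe", cmd))  # 'skip' (intensity ~0.92)
```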
Abstract:
A system and method of generating a representation or alteration of a subject. One or more devices may be attached to a subject, and a first signal transmitted towards the subject, the first signal interacting with the one or more devices to produce a second signal. The second signal may be received from the subject and data therein processed. A representation or alteration of the subject may then be generated as a function of the processed data.
Abstract:
The present invention aims at producing musical sounds by calculating motion data from inputted image data using a simple technique, without preliminarily preparing playing information or the like, and by producing musical sounds based on the calculated data. A musical sound producing apparatus includes a motion part specifying means which, using image data for respective frames as an input, extracts motion data indicative of motions from differentials of respective pixels between the image data of a plurality of frames; a musical sound producing means which produces musical sound data containing a sound source, a sound scale and a sound level in accordance with the motion data specified by the motion part specifying means; and an output means which outputs the musical sound data produced by the musical sound producing means. An image database in which patterns are registered and an image matching means are also provided, and a musical sound synthesizing means is provided in the musical sound producing means so as to synthesize the musical sound data with other sound data, thereby producing the musical sound data.
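A minimal sketch of the frame-differential step and the motion-to-sound mapping the abstract outlines. The threshold, the motion-to-level formula, and the scale-degree mapping are assumptions made for illustration, not the patent's method.

```python
def motion_from_frames(prev_frame, curr_frame):
    """Per-pixel differentials between two equal-sized grayscale frames
    (each frame a flat list of 0-255 intensities)."""
    return [abs(a - b) for a, b in zip(prev_frame, curr_frame)]

def musical_sound_from_motion(diffs, threshold=16):
    """Map motion data to (sound source, scale step, level), the three
    components the abstract names; mappings are assumed."""
    moving = [d for d in diffs if d > threshold]
    if not moving:
        return None
    level = min(sum(moving) / (len(diffs) * 255.0), 1.0)  # motion amount -> volume
    scale_step = int(level * 7) % 8                        # motion -> scale degree
    return {"source": "piano", "scale_step": scale_step, "level": level}

prev = [0] * 64
curr = [0] * 32 + [200] * 32   # half the frame changed between frames
print(musical_sound_from_motion(motion_from_frames(prev, curr)))
```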
Abstract:
An electronic device is described for the production, playing, accompaniment and evaluation of sounds, comprising means to be associated with an audio system, the device comprising: a) a processing unit which (i) produces musical instrument sounds from a user's touches; (ii) plays music sounds, adds musical effects, and alters reproduction parameters of the music being played; (iii) mixes sounds produced from the user's touches with the music sounds played; and (iv) comprises music parameters able to evaluate an instrumental accompaniment performance resulting from the instrumental music sounds produced by the user's touches; and b) the processing unit comprising a touch-sensitive surface which comprises: (i) touch sensors arranged under said surface providing regions sensitive to touches; and (ii) LEDs distributed under said surface and controlled by a microprocessor, providing a luminous indication sequence according to the music sounds played, said luminous indication sequence being followed by touches of the user on this surface.
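A sketch of the evaluation idea, assuming a simple scoring rule: a touch counts as a hit if it lands on the indicated region within a timing tolerance. Region names, timestamps in milliseconds, and the tolerance are all hypothetical.

```python
def evaluate_accompaniment(led_sequence, user_touches, tolerance_ms=150):
    """Score how closely the user's touches follow the LED indication
    sequence; both arguments are lists of (region, time_ms) pairs."""
    hits = 0
    for region, t in led_sequence:
        if any(r == region and abs(t - ut) <= tolerance_ms
               for r, ut in user_touches):
            hits += 1
    return hits / len(led_sequence) if led_sequence else 0.0

leds = [("pad1", 0), ("pad2", 500), ("pad1", 1000)]
touches = [("pad1", 40), ("pad2", 620), ("pad3", 1010)]
print(evaluate_accompaniment(leds, touches))  # 0.666... (2 of 3 hit)
```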
Abstract:
In order to reproduce music suitable for a situation where a user listens to the music while performing repetitive exercise, if a walking tempo value sensed by a walking tempo sensing portion 3 falls outside a certain range defined on the basis of a music tempo value of a music data file currently being reproduced by a music data reproduction portion 6, a music tempo specifying portion 4 specifies a music tempo value agreeing with the walking tempo value. A reproduction control portion 5 selects a music data file having a music tempo value corresponding to the music tempo specified by the music tempo specifying portion 4 from among a plurality of music data files stored along with data on music tempo of the respective music data files in a data storage portion 2, and causes the music data reproduction portion 6 to start the reproduction of the selected music data file.
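A compact sketch of the selection logic: keep the current file while the walking tempo stays within range of its tempo, otherwise pick the stored file whose tempo best agrees with the walking tempo. The tolerance value and the nearest-tempo selection rule are assumptions for illustration.

```python
def pick_track(walking_tempo, current_track, library, tolerance=5.0):
    """current_track and library entries are (filename, bpm) pairs."""
    if abs(walking_tempo - current_track[1]) <= tolerance:
        return current_track  # walking tempo still within range: keep playing
    # specify a music tempo agreeing with the walking tempo, then select
    # the stored file whose tempo is closest to it
    return min(library, key=lambda t: abs(t[1] - walking_tempo))

library = [("walk.mp3", 100.0), ("jog.mp3", 140.0), ("run.mp3", 170.0)]
print(pick_track(walking_tempo=138.0,
                 current_track=("walk.mp3", 100.0),
                 library=library))  # ('jog.mp3', 140.0)
```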
Abstract:
An audio signal generating apparatus is provided which makes it possible to supply a new form of music entertainment that enables users to actively participate in the performance of musical compositions based on audio data coded in a predetermined format from audio signals, along with a program for implementing the method and a storage medium storing the program. Motion information corresponding to a motion of a user is acquired. An audio signal is acquired from audio data coded from the audio signal according to the predetermined format, and processing is performed on the acquired audio signal according to the acquired motion information.
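One plausible reading of "processing according to acquired motion information" is a motion-driven gain applied to the decoded samples; the gain curve below is an assumption, not the patent's processing.

```python
def process_samples(samples, motion_intensity):
    """Scale decoded audio samples by a motion intensity in [0.0, 1.0]
    (assumed mapping: idle user -> quiet, vigorous motion -> full level)."""
    gain = 0.25 + 0.75 * max(0.0, min(motion_intensity, 1.0))
    return [s * gain for s in samples]

decoded = [0.1, -0.2, 0.3]   # samples decoded from the coded audio data
print(process_samples(decoded, motion_intensity=0.5))  # scaled by 0.625
```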
Abstract:
A motion-based sound setting apparatus and method, and a motion-based sound generating apparatus and method, are provided. The motion-based sound setting apparatus includes a mode selection recognizing unit, a motion sensing unit, a motion pattern recognizing unit, and a sound signal setting controlling unit. The mode selection recognizing unit recognizes a user's action with respect to a sound setting mode or a sound generating mode. The motion sensing unit senses a motion of a predetermined device and outputs a result of the sensing as a sensing signal. The motion pattern recognizing unit recognizes a motion pattern of the predetermined device, which corresponds to the sensing signal, in response to a result of the recognition made by the mode selection recognizing unit. The sound signal setting controlling unit sets a sound signal corresponding to the motion pattern recognized by the motion pattern recognizing unit.
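A minimal sketch of the two-mode flow: in setting mode a recognized pattern is bound to a sound signal, and in generating mode the same pattern recalls it. Mode names, the pattern string, and the binding table are all hypothetical.

```python
# Hypothetical pattern-to-sound table; populated in sound setting mode.
PATTERN_SOUNDS = {}

def handle_motion(mode, pattern):
    """Route a recognized motion pattern according to the recognized mode."""
    if mode == "setting":
        # sound setting mode: bind the pattern to a sound signal
        PATTERN_SOUNDS[pattern] = f"sound_for_{pattern}"
        return None
    if mode == "generating":
        # sound generating mode: emit the sound previously set for this pattern
        return PATTERN_SOUNDS.get(pattern)
    raise ValueError(f"unknown mode: {mode}")

handle_motion("setting", "shake")
print(handle_motion("generating", "shake"))  # sound_for_shake
```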
Abstract:
An end-user system (10) for transforming real-time streams of content into an output presentation includes a user interface (30) that allows a user to interact with the streams. The user interface (30) includes sensors (32a-f) that monitor an interaction area (36) to detect movements and/or sounds made by a user. The sensors (32a-f) are distributed around the interaction area (36) such that the user interface (30) can determine a three-dimensional location within the interaction area (36) where the detected movement or sound occurred. Different streams of content can be activated in a presentation based on the type of movement or sound detected, as well as the determined location. The present invention allows a user to interact with and adapt the output presentation according to his/her own preferences, instead of merely being a spectator.
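A rough sketch of the localization-plus-activation idea. The weighted-centroid localizer is a crude illustrative stand-in for real multilateration, and the zone table and event names are assumptions.

```python
def locate_event(sensor_positions, distances):
    """Estimate a 3-D event location as a centroid of sensor positions,
    weighting nearer sensors more strongly (illustrative, not rigorous)."""
    weights = [1.0 / (d + 1e-6) for d in distances]
    total = sum(weights)
    return tuple(sum(w * p[i] for w, p in zip(weights, sensor_positions)) / total
                 for i in range(3))

def activate_stream(event_type, location, zones):
    """Pick a content stream from the event type plus where it occurred;
    zones maps a name to a (center, radius) sphere."""
    for name, (center, radius) in zones.items():
        if sum((a - b) ** 2 for a, b in zip(location, center)) <= radius ** 2:
            return f"{name}:{event_type}"
    return None

sensors = [(0, 0, 0), (4, 0, 0), (0, 4, 0), (0, 0, 4)]
loc = locate_event(sensors, distances=[1.0, 3.5, 3.5, 3.5])
print(activate_stream("clap", loc, {"stage": ((0, 0, 0), 2.0)}))  # stage:clap
```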