Abstract:
An adaptive music playback system is disclosed. The system includes a composition system that receives information corresponding to user activity levels. The composition system modifies the composition of a song in response to changes in user activity. The modifications are made according to a set of composition rules to facilitate smooth musical transitions.
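The rule-driven behavior described above can be sketched as a small state update: map an activity level to an arrangement "intensity" layer, and apply changes only at bar boundaries so transitions stay musical. The thresholds, layer names, and one-layer-at-a-time rule are illustrative assumptions, not the disclosed composition rules.

```python
# Toy sketch of activity-driven arrangement changes (assumed rules).

INTENSITY_LAYERS = ["ambient", "light", "driving", "peak"]

def target_layer(activity):
    """Map a normalized activity level in [0, 1] to a layer index."""
    thresholds = [0.25, 0.5, 0.75]
    return sum(activity >= t for t in thresholds)

def next_layer(current, activity, beat_in_bar, beats_per_bar=4):
    """Composition rule: switch layers only on a bar's downbeat,
    and move at most one layer at a time for a smooth transition."""
    if beat_in_bar != 0:
        return current          # mid-bar: hold the current arrangement
    goal = target_layer(activity)
    if goal > current:
        return current + 1      # step up gradually
    if goal < current:
        return current - 1      # step down gradually
    return current
```

Stepping one layer per bar, rather than jumping straight to the target, is one simple way to realize the "smooth musical transitions" the abstract calls for.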
Abstract:
Methods and apparatus for choreographing movement of individuals for a performance event are disclosed. In an embodiment, a method includes providing a performance event configuration having a plurality of location indicators to assist in the placing or movement of individuals conducting a performance event. The method also includes implementing a real-time application, executable on a wireless audio unit, that synchronously transmits body-movement instruction signals to the audio unit of each individual participating in the performance. The wireless audio unit is preferably a wireless, cellular, or mobile telephone configured with appropriate software to receive the signals and to play audio directions corresponding to choreographed and coordinated body movements, directing each individual at, toward, or away from the location indicators to carry out the performance event.
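The synchronous cue dispatch described above can be sketched as a simple scheduled lookup: at each tick, find the instructions due for each performer. The data layout, timing window, and cue text are illustrative assumptions, not the disclosed implementation.

```python
# Assumed cue format: (scheduled_time_seconds, performer_id, instruction).

def cues_due(cue_list, now, window=0.5):
    """Return (performer_id, instruction) pairs whose scheduled time
    falls within `window` seconds starting at `now`."""
    return [(pid, text) for t, pid, text in cue_list if now <= t < now + window]

cues = [(0.0, "dancer1", "move to marker A"),
        (0.0, "dancer2", "move to marker B"),
        (4.0, "dancer1", "turn toward center")]
```

In a real system each returned instruction would be synthesized or played as audio on that performer's phone; here the dispatch logic is isolated for clarity.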
Abstract:
Various embodiments are provided generally relating to a strum pad on a user device. In some embodiments, a musical arrangement may be provided on a user device. A score of the musical arrangement may be tracked as it plays. In some embodiments, gestures may be detected on the user device while the user device plays the musical arrangement. In response to detecting a gesture, a component piece for the musical arrangement may be outputted. In some embodiments, the outputted component piece may correspond to the tracked score of the musical arrangement.
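The score-tracking behavior described above can be sketched as a lookup keyed on playback position: when a strum gesture is detected, the component piece for the current point in the score is output. The chord timeline and output format are illustrative assumptions.

```python
# Assumed score: (start_time_seconds, chord) regions of the arrangement.
SCORE = [(0.0, "C"), (2.0, "G"), (4.0, "Am"), (6.0, "F")]

def current_chord(elapsed):
    """Return the chord whose region of the score contains `elapsed` seconds."""
    chord = SCORE[0][1]
    for start, name in SCORE:
        if elapsed >= start:
            chord = name        # last region whose start we have passed
    return chord

def on_strum_gesture(elapsed):
    """Component piece output in response to a detected strum gesture."""
    return f"strum:{current_chord(elapsed)}"
```

Keying the output on tracked playback position, rather than on the gesture alone, is what keeps a user's strum harmonically consistent with the arrangement.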
Abstract:
A wearable wireless device is disclosed. The wireless device has musical functionality, and it captures, processes, and transmits movement information to an app-based device.
Abstract:
A sound generating system in an embodiment of the invention includes an information processing terminal and an electronic musical instrument. The terminal displays a screen relating to a setting for controlling the electronic musical instrument, determined based on a positional relation with the electronic musical instrument on the display screen, and transmits control information based on an operation performed on a touch sensor. The electronic musical instrument performs the setting relating to sound generation according to the received control information and generates an audio signal based on the performed setting.
Abstract:
A computer-implemented method including generating a user interface implemented on a touch-sensitive display configured to generate a virtual dual flywheel system for modulating a lifecycle of a musical note or chord. The dual flywheel system (DFS) includes a first virtual flywheel system (VFS) and a second VFS, where the first VFS is connected in series to the second VFS such that an output of the first VFS is coupled to an input of the second VFS. Upon receiving a user input on the user interface, the DFS determines a virtual momentum for the first VFS based on the user input and a predetermined mass coefficient of the first VFS, and determines a virtual momentum for the second VFS based on the virtual momentum of the first VFS and a predetermined mass coefficient of the second VFS.
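The chained momentum computation described above can be sketched in a few lines; a simple p = m·v style update is assumed here, and all names and coefficients are illustrative, not the patent's actual model.

```python
def dual_flywheel_momentum(user_input_velocity, mass1, mass2):
    """Compute virtual momenta for two series-connected virtual flywheels.

    The first VFS's momentum comes from the user input and its mass
    coefficient; the second VFS's momentum is driven by the first
    VFS's output momentum and its own mass coefficient.
    """
    momentum1 = mass1 * user_input_velocity   # first VFS: driven by user input
    momentum2 = mass2 * momentum1             # second VFS: driven by first VFS output
    return momentum1, momentum2

m1, m2 = dual_flywheel_momentum(2.0, mass1=0.5, mass2=0.25)
```

The series coupling is what makes the second flywheel depend on the first: its input is the first flywheel's momentum, not the raw user input.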
Abstract:
An electric instrument music control device is provided having a foot pedal comprising a base portion and a treadle, wherein the treadle moves with respect to the base portion. The device further has a magnetic displacement sensor coupled to the base portion and a magnet coupled to the treadle. The magnet is located adjacent the magnetic displacement sensor to place the sensor in a field-saturated mode, wherein the magnet moves with respect to the magnetic displacement sensor in response to movement of the treadle with respect to the base portion. A sound characteristic of the electric instrument is modified in response to moving the magnet with respect to the magnetic displacement sensor.
Abstract:
An audio/visual system (e.g., an entertainment console or other computing device) plays a base audio track, such as a portion of a pre-recorded song or notes from one or more instruments. Using a depth camera or other sensor, the system automatically detects that a user (or a portion of the user) enters a first collision volume of a plurality of collision volumes. Each collision volume of the plurality of collision volumes is associated with a different audio stem. In one example, an audio stem is a sound from a subset of instruments playing a song, a portion of a vocal track for a song, or notes from one or more instruments. In response to automatically detecting that the user (or a portion of the user) entered the first collision volume, the appropriate audio stem associated with the first collision volume is added to the base audio track or removed from the base audio track.
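The stem add/remove behavior described above can be sketched as entry-edge toggling: each collision volume toggles its stem in the active mix when the tracked user point enters it. The classes, the axis-aligned box test, and the toggle rule are illustrative assumptions.

```python
class CollisionVolume:
    def __init__(self, stem_name, lo, hi):
        self.stem_name = stem_name
        self.lo, self.hi = lo, hi    # axis-aligned box corners (x, y, z)

    def contains(self, point):
        return all(l <= p <= h for l, p, h in zip(self.lo, point, self.hi))

class StemMixer:
    def __init__(self, volumes):
        self.volumes = volumes
        self.active = set()          # stems currently layered on the base track
        self._inside = {v.stem_name: False for v in volumes}

    def update(self, point):
        """Toggle a volume's stem in or out of the mix on entry only."""
        for vol in self.volumes:
            inside = vol.contains(point)
            if inside and not self._inside[vol.stem_name]:
                # Entry edge: add the stem if absent, remove it if present.
                if vol.stem_name in self.active:
                    self.active.discard(vol.stem_name)
                else:
                    self.active.add(vol.stem_name)
            self._inside[vol.stem_name] = inside
```

Tracking the previous inside/outside state avoids re-toggling on every frame while the user remains within a volume.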
Abstract:
A sensor obtains an angular rate of a stick member. A sound source map includes plural areas disposed in a virtual space. While a user operates the stick member, a CPU estimates the direction of the stick member's turning axis based on the angular rate obtained by the sensor; calculates an angular rate of the tip of the stick member, with the angular-rate component along the stick member's longitudinal direction removed; calculates a position of the stick member in the virtual space after a predetermined time, based on the recent direction of the turning axis and the recent angular rate of the tip; and sends a sound generating unit a note-on event of a musical tone assigned to the area, among the plural areas, corresponding to the calculated position.
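The prediction step described above can be sketched as follows: remove the angular-rate component along the stick's longitudinal axis, extrapolate the tip's sweep angle a short time ahead, and pick the sound-map area it lands in. The vector names, the equal-angle area layout, and the extrapolation are assumptions for this example.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def predict_note_area(omega, stick_axis, tip_angle, dt, num_areas):
    """Return the index of the sound-map area the stick tip will occupy
    after dt seconds, given the gyro reading omega (rad/s) and the unit
    vector stick_axis along the stick's longitudinal direction."""
    # Remove the spin component about the stick's own long axis.
    spin = dot(omega, stick_axis)
    swing = [w - spin * a for w, a in zip(omega, stick_axis)]
    swing_rate = math.sqrt(dot(swing, swing))   # rad/s of the tip's sweep
    # Extrapolate the sweep angle and map it onto equal angular areas.
    future_angle = (tip_angle + swing_rate * dt) % (2 * math.pi)
    return int(future_angle / (2 * math.pi) * num_areas)
```

Predicting the position a short time ahead compensates for processing and sound-generation latency, so the note-on event lands when the stroke does.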
Abstract:
Methods are provided for adjusting a music output of a vehicle audio system based on driver movements performed while listening to the music. In one embodiment, a music output of a vehicle infotainment system is adjusted responsive to monitored user behavior inside a vehicle, the monitored user behavior manifesting a user mood. In another embodiment, the music output is adjusted based on both a user mood and a control gesture, wherein the mood and the gesture are each identified based on user information gathered from vehicle biometric sensors and cameras while music is playing inside the vehicle.
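The two-signal decision described above can be sketched as a small rule table combining a mood estimate and a control gesture. The mood labels, gesture names, and adjustment rules are assumptions for illustration, not the disclosed method.

```python
def adjust_music(mood, gesture=None):
    """Return a playback adjustment from the detected mood and gesture.
    An explicit gesture takes priority over the inferred mood."""
    if gesture == "swipe_right":
        return "skip_track"
    if gesture == "palm_up":
        return "volume_up"
    if mood == "drowsy":
        return "increase_tempo"       # energize a drowsy driver
    if mood == "agitated":
        return "play_calming_track"   # soothe an agitated driver
    return "no_change"
```

Giving the deliberate gesture priority over the passively inferred mood is one plausible way to reconcile the two signals when they conflict.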