Abstract:
Systems and methods may be used to provide effects corresponding to movement of instrument objects or other objects. A method may include receiving sensor data from an object based on movement of the object, recognizing a gesture from the sensor data, and determining an effect, such as a visualization or audio effect corresponding to the gesture. The method may include causing the effect to be output in response to the determination.
Abstract:
Processing techniques and device configurations for performing and controlling output effects at a plurality of wearable devices are generally described herein. In an example, a processing technique may include receiving, at a computing device, an indication of a triggering gesture that occurs at a first wearable device, determining an output effect corresponding to the indication of the triggering gesture, and in response to determining the output effect, transmitting commands to computing devices that are respectively associated with a plurality of wearable devices, the commands causing the plurality of wearable devices to generate the output effect at the plurality of wearable devices. In further examples, output effects such as haptic feedback, light output, or sound output, may be performed by the plurality of wearable devices, associated computing devices, or other controllable equipment.
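The fan-out step described above (one triggering gesture causing commands to be sent to every associated device) might look like the sketch below. The gesture-to-effect table, the device registry, and the function names are assumptions for illustration; the transport layer is omitted.

```python
# Illustrative only: map a triggering gesture to an output effect, then
# build one command per associated wearable/computing device.
GESTURE_EFFECTS = {"raise_arm": "light_pulse", "tap": "haptic_buzz"}

def handle_trigger(gesture: str, device_ids: list[str]) -> list[tuple[str, str]]:
    """Return the (device_id, command) pairs that would be transmitted
    so every device in the group generates the same output effect."""
    effect = GESTURE_EFFECTS.get(gesture)
    if effect is None:
        return []  # unrecognized gesture: nothing to broadcast
    return [(device_id, effect) for device_id in device_ids]
```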
Abstract:
An electronic device and a method for reproducing a sound in the electronic device are provided. The electronic device includes a touchscreen displaying a keyboard having a plurality of keys and a plurality of sound source buttons corresponding respectively to a plurality of different sound sources, a processor connected electrically to the touchscreen, and a memory connected electrically to the processor, wherein the memory stores instructions that are executed to cause the processor to perform control such that when an input to at least one key among the plurality of keys is received, the sound source corresponding to at least one sound source button selected among the plurality of sound source buttons is reproduced as a sound corresponding to the received input.
Abstract:
An electronic device is provided. The electronic device includes a display; a memory for storing at least one audio signal; a communication circuit configured to establish wireless communication with an external device; and a processor electrically connected with the display, the memory, and the communication circuit, wherein the memory stores instructions for, when executed, causing the processor to: produce the at least one audio signal, receive data associated with a gesture through the communication circuit from the external device, apply a sound effect, selected based at least in part on the data associated with the gesture, to the produced at least one audio signal, and output or store a resulting audio signal, wherein the resulting audio signal represents application of the sound effect to the produced at least one audio signal.
Abstract:
An adaptive music playback system is disclosed. The system includes a composition system that receives information corresponding to user activity levels. The composition system modifies the composition of a song in response to changes in user activity. The modifications are made according to a set of composition rules to facilitate smooth musical transitions.
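One way to picture "composition rules to facilitate smooth musical transitions" is a layered arrangement where the activity level sets how many layers play, and at most one layer is added or dropped per update. The layer names and the one-change-per-step rule are assumptions for this sketch, not taken from the patent.

```python
# Illustrative adaptive-composition sketch: activity in [0, 1] selects how
# many instrument layers are audible; transitions change one layer at a time.
LAYERS_BY_INTENSITY = ["drums", "bass", "melody", "strings"]

def target_layers(activity_level: float) -> list[str]:
    """Map a normalized activity level to the set of layers that should play."""
    count = max(1, round(activity_level * len(LAYERS_BY_INTENSITY)))
    return LAYERS_BY_INTENSITY[:count]

def step_toward(current: list[str], activity_level: float) -> list[str]:
    """Composition rule: add or drop one layer per call for a smooth transition."""
    target = target_layers(activity_level)
    if len(current) < len(target):
        return current + [target[len(current)]]
    if len(current) > len(target):
        return current[:-1]
    return current
```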
Abstract:
A method for controlling an audio output comprises playing a first audio file having a first tempo, measuring a first heart rate of a user, determining whether the first heart rate of the user is greater than a target heart rate, and playing a second audio file having a second tempo, the second tempo being slower than the first tempo, responsive to determining that the first heart rate of the user is greater than the target heart rate.
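The control logic above can be sketched as a simple track-selection function: when the measured heart rate exceeds the target, choose a slower-tempo file. The `AudioFile` type, the library structure, and the closest-tempo heuristic are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class AudioFile:
    title: str
    tempo_bpm: int  # tempo in beats per minute

def select_next_track(current: AudioFile, library: list[AudioFile],
                      heart_rate: int, target_heart_rate: int) -> AudioFile:
    """Return a slower track if the user's heart rate exceeds the target,
    otherwise keep playing the current track."""
    if heart_rate > target_heart_rate:
        slower = [t for t in library if t.tempo_bpm < current.tempo_bpm]
        if slower:
            # pick the slower track closest in tempo for a smooth transition
            return max(slower, key=lambda t: t.tempo_bpm)
    return current
```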
Abstract:
A wireless sensor network for musical instruments is provided that will allow a musician to communicate natural performance gestures (orientation, pressure, tilt, etc.) to a computer. User interfaces and computing modules are also provided that enable a user to utilize the data communicated by the wireless sensor network to supplement and/or augment the artistic expression.
Abstract:
An audio/visual system (e.g., such as an entertainment console or other computing device) plays a base audio track, such as a portion of a pre-recorded song or notes from one or more instruments. Using a depth camera or other sensor, the system automatically detects that a user (or a portion of the user) enters a first collision volume of a plurality of collision volumes. Each collision volume of the plurality of collision volumes is associated with a different audio stem. In one example, an audio stem is a sound from a subset of instruments playing a song, a portion of a vocal track for a song, or notes from one or more instruments. In response to automatically detecting that the user (or a portion of the user) entered the first collision volume, the appropriate audio stem associated with the first collision volume is added to the base audio track or removed from the base audio track.
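The collision-volume mechanism above could be sketched as spherical regions, each tied to a stem, with stems added to or removed from the active mix as the tracked user position enters or leaves each region. The sphere geometry and all class names are assumptions for illustration; a real system would take positions from a depth camera.

```python
from dataclasses import dataclass, field

@dataclass
class CollisionVolume:
    name: str
    center: tuple[float, float, float]
    radius: float
    stem: str  # identifier of the audio stem tied to this volume

    def contains(self, point: tuple[float, float, float]) -> bool:
        """True when the tracked point lies inside this spherical volume."""
        return sum((p - c) ** 2 for p, c in zip(point, self.center)) <= self.radius ** 2

@dataclass
class StemMixer:
    base_track: str
    active_stems: set[str] = field(default_factory=set)

    def update(self, user_position: tuple[float, float, float],
               volumes: list[CollisionVolume]) -> None:
        """Add each volume's stem to the mix when the user is inside it,
        and remove it when the user is not."""
        for vol in volumes:
            if vol.contains(user_position):
                self.active_stems.add(vol.stem)
            else:
                self.active_stems.discard(vol.stem)
```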
Abstract:
Example apparatus and methods provide a gamified adaptive digital disc jockey (DDJ) that optimizes a media presentation based on an audience response according to a gamification process. The DDJ receives data about audience members and determines a state and dynamic of the audience in response to a portion of the media presentation or the dynamics of the media presentation. The DDJ identifies audience leaders or laggards from gamification data or patterns about audience members. The gamification scores may be computed from the reactions or behaviors of audience members. The DDJ automatically adapts the media presentation based on the state and dynamic of the audience in general and/or based on the reactions of people with certain gamification scores. Data relating states, dynamics, gamification scores, and tracks or sequences of tracks from previous presentations may help plan and optimize the presentation and may be stored for planning future presentations.
Abstract:
A system for using body motion capture for musical performances. A motion detection camera captures a series of body movements, which are assigned to begin one or more songs, to activate musical filters, and to activate sound effects. Once the movements are captured and assigned, the user begins the performance.