Abstract:
The present invention uses the power of music to provide both feedback and motivation to physical, recreational, and occupational therapy patients, enabling musical expression, therapeutic engagement, and compliance.
Abstract:
A musical brain fitness system according to an exemplary embodiment of the present invention includes: a display device configured to display a visual signal according to selected music; a control box configured to control the display device by receiving input of a user; and a controller including an acceleration sensor and configured to drive the acceleration sensor according to a movement of the user to transmit a signal to the control box. The control box compares the signal received from the controller with the signal controlling the display device, processes the comparison, and outputs the result.
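The comparison step described above can be sketched in Python. The timing-based scoring, the tolerance value, and the function names below are illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch: compare the time a motion was sensed by the handheld
# controller's accelerometer against the time the visual cue was shown on
# the display, and report whether the user matched the cue.

def score_motion(cue_time, motion_time, tolerance=0.25):
    """Return a simple hit/miss result plus the timing error in seconds.

    A motion within `tolerance` seconds of the cue counts as a hit.
    """
    error = abs(motion_time - cue_time)
    if error <= tolerance:
        return "hit", error
    return "miss", error

result, err = score_motion(cue_time=10.0, motion_time=10.1)
```

A real system would run this per cue and aggregate the results into the feedback the abstract describes.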
Abstract:
The present invention relates to an interactive sound-and-light art device with wireless transmission and sensing functions, which is primarily composed of a plurality of acoustic sensor nodes in artistic shapes, wherein each of the plural acoustic sensor nodes is designed to interact with people through multi-track music playing or the voice-based exhibition of Twitter conversations. Substantially, each acoustic sensor node is an artistically-shaped frame having a plurality of sensors embedded therein, including sensors for detecting environmental information and sensors for detecting human motion. Moreover, each artistically-shaped frame can further be embedded with interactive components, through which each acoustic sensor node is able to interact with people via multi-track music playing or the exhibition of LED light variations, according to the detections of its environment sensors and human motion sensors.
Abstract:
A training apparatus includes adjustable resistance means; means for continuously determining an actual configuration of the training apparatus during exercise; means for generating a control signal for an audio device, the control signal being at least partially based on the actual configuration of the training apparatus; and means for outputting the control signal.
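The means for generating a control signal from the apparatus configuration can be sketched as a simple mapping. The choice of a lever angle as the measured configuration, and the angle-to-tempo mapping, are assumptions for illustration only.

```python
# Hypothetical sketch: derive an audio control signal from the continuously
# measured configuration of the training apparatus (here, a lever angle),
# in the spirit of the abstract's "at least partially based on the actual
# configuration" language.

def audio_control_signal(angle_deg, min_tempo=60, max_tempo=180):
    """Map an apparatus angle in [0, 90] degrees to a playback tempo (BPM).

    The angle is clamped to the valid range before mapping linearly.
    """
    angle = max(0.0, min(90.0, angle_deg))
    return min_tempo + (max_tempo - min_tempo) * angle / 90.0
```

The output would then be sent to the audio device by the outputting means.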
Abstract:
An end-user system (10) for transforming real-time streams of content into an output presentation includes a user interface (30) that allows a user to interact with the streams. The user interface (30) includes sensors (32a-f) that monitor an interaction area (36) to detect movements and/or sounds made by a user. The sensors (32a-f) are distributed around the interaction area (36) such that the user interface (30) can determine the three-dimensional location within the interaction area (36) where a detected movement or sound occurred. Different streams of content can be activated in the presentation based on the type of movement or sound detected, as well as the determined location. The present invention allows a user to interact with and adapt the output presentation according to his or her own preferences, instead of merely being a spectator.
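One simple way to estimate a three-dimensional event location from distributed sensors, as the abstract describes, is an intensity-weighted centroid of the sensor positions. The patent does not specify the localization algorithm; this is a minimal sketch under that assumption.

```python
# Hypothetical sketch: estimate where in the interaction area a movement or
# sound occurred, given the positions of the sensors that detected it and
# the relative strength of each detection.

def locate_event(sensor_positions, intensities):
    """Estimate the 3-D event location as the intensity-weighted centroid
    of the sensor positions (each position is an (x, y, z) tuple)."""
    total = sum(intensities)
    return tuple(
        sum(p[axis] * w for p, w in zip(sensor_positions, intensities)) / total
        for axis in range(3)
    )

positions = [(0, 0, 0), (2, 0, 0), (0, 2, 0)]
weights = [1.0, 1.0, 2.0]
x, y, z = locate_event(positions, weights)
```

The estimated location, together with the detected event type, would then select which content streams to activate.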
Abstract:
Various embodiments related to providing user feedback in an electronic entertainment system are disclosed herein. For example, one disclosed embodiment provides a method of providing user feedback in a karaoke system, comprising inviting a microphone gesture input from a user, receiving the microphone gesture input from the user via one or more motion sensors located on a microphone, comparing the microphone gesture input to an expected gesture input, rating the microphone gesture input based upon comparing the microphone gesture input to the expected gesture input, and providing feedback to the user based upon the rating.
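The compare/rate/feedback steps can be sketched as follows. Reducing a gesture to a short trace of sensor samples and using a mean absolute error as the distance metric are assumptions for illustration; the patent does not specify the comparison method.

```python
# Hypothetical sketch: rate a microphone gesture against an expected gesture
# and produce feedback, following the steps named in the abstract.

def rate_gesture(observed, expected):
    """Rate the similarity of two equal-length sample traces on a 0-100
    scale; mismatched lengths score zero."""
    if len(observed) != len(expected):
        return 0
    error = sum(abs(a - b) for a, b in zip(observed, expected)) / len(expected)
    return max(0, round(100 - 100 * error))

def feedback(rating):
    """Turn a numeric rating into a user-facing message."""
    return "Great swing!" if rating >= 80 else "Try matching the motion."
```

A production system would likely use a more robust comparison (e.g. dynamic time warping) to tolerate timing differences between users.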
Abstract:
The free-space gesture MIDI controller technique described herein marries the technologies embodied in a free-space gesture controller with MIDI controller technology, allowing a user to control an infinite variety of electronic musical instruments through body gesture and pose. One embodiment of the free-space gesture MIDI controller technique described herein uses a human body gesture recognition capability of a free-space gesture control system and translates human gestures into musical actions. Rather than directly connecting a specific musical instrument to the free-space gesture controller, the technique generalizes its capability and instead outputs standard MIDI signals, thereby allowing the free-space gesture control system to control any MIDI-capable instrument.
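The key generalization above, emitting standard MIDI rather than driving a specific instrument, can be sketched as a mapping from recognized gestures to MIDI note-on messages. The gesture names and the note assignments are assumptions for illustration; the note-on message format itself follows the MIDI standard.

```python
# Hypothetical sketch: translate a recognized body gesture into a standard
# 3-byte MIDI note-on message that any MIDI-capable instrument can consume.

GESTURE_TO_NOTE = {"raise_left_arm": 60, "raise_right_arm": 64, "clap": 67}

def gesture_to_midi(gesture, velocity=100, channel=0):
    """Return a MIDI note-on message for a recognized gesture, or None if
    the gesture has no mapping.

    Status byte 0x90 is note-on; the low nibble carries the channel (0-15).
    """
    note = GESTURE_TO_NOTE.get(gesture)
    if note is None:
        return None
    status = 0x90 | (channel & 0x0F)
    return bytes([status, note, velocity])
```

Because the output is plain MIDI bytes, the same gesture recognizer can drive a synthesizer, a sampler, or any other MIDI-capable device unchanged.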
Abstract:
A sound generating system according to an embodiment of the invention includes an information processing terminal and an electronic musical instrument. The terminal displays a screen relating to a setting for controlling the electronic musical instrument, determined based on a positional relation with the instrument on the display screen, and transmits control information based on an operation performed on a touch sensor. The electronic musical instrument performs the setting relating to sound generation according to the received control information and generates an audio signal based on the performed setting.
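The terminal-to-instrument flow can be sketched in two halves: the terminal turns a touch operation into control information, and the instrument applies the received setting before generating sound. The message fields and the cutoff parameter are assumptions for illustration.

```python
# Hypothetical sketch of the control flow in the abstract: terminal builds
# control information from a touch operation; instrument stores the setting
# it will use for sound generation.

def touch_to_control(slider_value):
    """Terminal side: turn a touch-slider position (0.0-1.0) into a
    control message scaled to a 0-127 parameter range."""
    return {"parameter": "cutoff", "value": round(127 * slider_value)}

class Instrument:
    def __init__(self):
        self.settings = {}

    def apply(self, message):
        """Instrument side: record the received setting."""
        self.settings[message["parameter"]] = message["value"]

inst = Instrument()
inst.apply(touch_to_control(1.0))
```

In the patented system the message would travel over a wireless or wired link between the two devices rather than a direct function call.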
Abstract:
There are disclosed systems and methods for measuring the bowing parameters and the bowed string dynamics of a player playing a bowed string instrument. A system for measuring the bowing parameters and the bowed string dynamics may comprise a computer, a bow system and a base component. The bow system may comprise a force sensing mechanism and a bow board. The bow board may comprise an acceleration and angular velocity sensing mechanism, a position and speed sensing mechanism, a data communication module, and a power module. The base component may comprise an acceleration and angular velocity sensing mechanism, a position and speed sensing mechanism, a data communication module, and a power module.
Abstract:
A performance apparatus 11 extends in its longitudinal direction so as to be held by a player in one hand, and is provided with an acceleration sensor 23 for detecting an acceleration sensor value and an angular rate sensor 22 for detecting the angular rate of rotation of the apparatus 11 about its longitudinal axis. CPU 21 detects a sound-generation timing based on the acceleration sensor value. Using the angular rate, CPU 21 calculates the rotation angle of the performance apparatus 11 about its longitudinal axis in the period from a first timing to a second timing, which correspond to the start and finish of the swinging motion of the performance apparatus, respectively. CPU 21 determines whether to increase or decrease the sound volume level in accordance with the direction and amount of the calculated rotation angle, thereby adjusting the volume level of the musical tone.
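The rotation-angle calculation and the resulting volume adjustment can be sketched as a simple integration of angular-rate samples over the swing. The sample interval, the gain per degree, and the 0-127 volume range are assumptions for illustration.

```python
# Hypothetical sketch: integrate angular-rate samples taken between the
# start and finish of a swing to get the rotation angle about the apparatus'
# longitudinal axis, then raise or lower the volume accordingly.

def rotation_angle(angular_rates, dt):
    """Integrate angular-rate samples (deg/s) taken at interval dt seconds;
    the sign of the result gives the direction of rotation."""
    return sum(angular_rates) * dt

def adjust_volume(volume, angle_deg, gain=0.5):
    """Raise or lower a 0-127 volume level by `gain` units per degree of
    rotation; a negative angle decreases the volume."""
    return max(0, min(127, round(volume + gain * angle_deg)))

angle = rotation_angle([30.0, 30.0, 30.0, 30.0], dt=0.1)  # 12 degrees of roll
new_volume = adjust_volume(100, angle)
```

Rolling the apparatus the other way during the swing would yield a negative angle and thus a quieter tone.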