Abstract:
Described are methods, systems, and apparatuses, including computer program products, for dynamically determining a musical part performed by a player of a rhythm-action game. In one aspect of a rhythm-action game, microphones are not tied to a particular part, and therefore any player can play any of a number of parts, e.g., melody or harmony, lead or rhythm, guitar or bass, without switching instruments. This is accomplished by displaying, on a display, a plurality of target music data associated with a musical composition, receiving music performance input data via an input device, determining which of the plurality of target music data best matches the music performance input data, and assigning the music performance input data to the determined target music data.
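As a rough illustration of the matching and assignment steps, the sketch below scores a performance fragment against each candidate target part and attaches it to the best match. The function names, the set-based note representation, and the scoring rule are illustrative assumptions, not details taken from the abstract.

```python
# Hypothetical sketch of dynamic part assignment: compare a performance
# fragment against each candidate target part, pick the best match.

def degree_of_matching(performance, target):
    """Fraction of performance notes that land on a target note
    (scoring rule is assumed for illustration)."""
    hits = sum(1 for note in performance if note in target)
    return hits / max(len(performance), 1)

def assign_part(performance, target_parts):
    """Return the name of the target part the input matches best."""
    return max(target_parts,
               key=lambda name: degree_of_matching(performance, target_parts[name]))

# Example: a sung phrase is scored against melody and harmony tracks.
targets = {"melody": {60, 62, 64, 65}, "harmony": {55, 57, 59, 60}}
print(assign_part({60, 62, 64}, targets))  # -> "melody"
```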
Abstract:
The present application relates to a karaoke system (1). The system comprises a motion tracking system (3) for tracking the movements of an individual (5) performing karaoke. A character animation unit (11) is provided for animating a virtual puppet (7) based on an output of the motion tracking system (3). The karaoke system (1) may also be suitable for animating virtual puppets (7) for a plurality of individuals (5).
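A minimal sketch of the animation step, assuming the motion tracker emits per-joint poses and the puppet exposes bones keyed by the same joint names (both assumptions; the abstract does not specify the interface):

```python
# Illustrative sketch, not the patent's implementation: each frame maps
# tracked joint positions onto the corresponding bones of a virtual puppet.

def animate_puppet(puppet, tracker_frame):
    """Copy each tracked joint pose onto the matching puppet bone."""
    for joint, pose in tracker_frame.items():
        if joint in puppet:
            puppet[joint] = pose
    return puppet

puppet = {"head": (0, 0), "left_hand": (0, 0), "right_hand": (0, 0)}
frame = {"head": (0.1, 1.7), "left_hand": (-0.4, 1.2), "right_hand": (0.5, 1.3)}
print(animate_puppet(puppet, frame))
```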
Abstract:
A system for wireless transmission between a device such as a game controller and a base transceiver linked with a host device. Multiple controllers can be linked with the base transceiver through radio frequency transmission, and audio signals can be selectively transmitted therebetween. Audio commands can be introduced with manual controls at the controller for transmission to the base transceiver. Audio signals can be generated, mixed, or otherwise processed by the combination of the host device and base transceiver for selective transmission to one or more controllers. The combination of manual switches, audio commands, and radio frequency transmission provides unique combinations of interaction between a person and the host device.
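The selective mixing and routing described above might look roughly like the following sketch, where the base transceiver combines host audio with a chat stream according to per-controller switches. The names, flags, and sample-list representation are assumptions for illustration only.

```python
# Hedged sketch of selective audio routing at the base transceiver:
# each controller receives only the channels it has enabled.

def mix_for_controller(host_audio, chat_audio, controller):
    """Return the sample stream destined for one controller."""
    out = []
    for h, c in zip(host_audio, chat_audio):
        sample = h if controller["game_audio"] else 0.0
        if controller["chat_enabled"]:
            sample += c
        out.append(sample)
    return out

controller = {"game_audio": True, "chat_enabled": False}
print(mix_for_controller([0.1, 0.2], [0.05, 0.05], controller))  # -> [0.1, 0.2]
```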
Abstract:
A method for effecting biofeedback regulation of at least one physiological variable characteristic of a subject's emotional state, comprising the steps of monitoring at least one speech parameter characteristic of the subject's emotional state so as to produce an indication signal, and using the indication signal to provide the subject with an indication of the at least one physiological variable. A system (10, 30, 70, 90, 110, 125, 130) permits the method to be carried out in standalone mode or via the telephone line (40, 74, 94), in which case the indication signal may be derived at a location remote from the subject. Likewise, information relating to the subject's emotional state can be conveyed vocally to a remote party or textually through the Internet (128), and then processed as required.
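As a hedged sketch of deriving the indication signal, the example below uses pitch variability as a stand-in for the monitored speech parameter; the abstract does not specify which parameter is monitored or how it is scaled, so both are assumptions.

```python
# Minimal sketch: map a speech parameter (here, assumed pitch variability)
# to a normalized indication signal the subject can observe.
import statistics

def indication_signal(pitch_track_hz):
    """Map a pitch contour to a 0..1 indication value."""
    spread = statistics.pstdev(pitch_track_hz)
    return min(spread / 50.0, 1.0)  # 50 Hz spread -> full scale (assumed)

print(indication_signal([180, 210, 190, 240, 170]))  # ~0.5
```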
Abstract:
The conveying of an audio message to mobile devices is disclosed. A graphics station (109), an internet server (108) and a production device (110) are provided. At the graphics station, a character data file is created for a character having animatable lips; a speech animation loop is generated having lip control for moving the animatable lips in response to a control signal; and the character data file and the speech animation loop are uploaded to the internet server. At the production device, the character data file is obtained along with the speech animation loop from the internet server; local audio is received to produce associated audio data and a control signal to animate the lips; a primary animation data file is constructed with lip movement; and this file is transmitted, along with the associated audio data, to the internet server. At each mobile display device, the character data file is received from the internet server. The primary animation data file and the associated audio data are also accepted from the internet server. The character data file and the primary animation data file are processed to produce primary rendered video data, and the primary rendered video data is played with the associated audio data, such that the movement of the lips shown in the primary rendered video data when played is substantially in synchronism with the audio being played.
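One plausible reading of the lip control signal is an amplitude envelope computed from the local audio, as in this sketch; the frame representation and normalization are assumptions, not details from the abstract.

```python
# Illustrative sketch: derive a lip control signal from local audio by
# letting each frame's amplitude envelope set how far the lips open.

def lip_control_from_audio(frames):
    """Return one mouth-open value (0..1) per audio frame."""
    peak = max(max(abs(s) for s in f) for f in frames) or 1.0
    return [max(abs(s) for s in f) / peak for f in frames]

frames = [[0.0, 0.1], [0.4, 0.6], [0.2, 0.1]]
print(lip_control_from_audio(frames))  # louder frames open the lips wider
```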
Abstract:
A computer-implemented system and method are described for managing audio chat for an online video game or application. For example, a system according to one embodiment comprises: an online video game or application execution engine to execute an online video game or application in response to input from one or more users of the video game or application and to responsively generate audio and video of the video game or application; and a chat subsystem to establish audio chat sessions with the one or more users and one or more spectators to the video game or application, the chat subsystem establishing a plurality of audio chat channels including a spectator channel over which the spectators participate in audio chat and a user channel over which the users participate in audio chat.
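A bare-bones sketch of the channel split described above, with users and spectators placed on separate audio chat channels; the class name and data layout are illustrative, not taken from the disclosure.

```python
# Sketch of a chat subsystem keeping spectators and users on
# separate audio chat channels.

class ChatSubsystem:
    def __init__(self):
        self.channels = {"user": set(), "spectator": set()}

    def join(self, participant, is_spectator):
        """Place a participant on the channel matching their role."""
        channel = "spectator" if is_spectator else "user"
        self.channels[channel].add(participant)
        return channel

chat = ChatSubsystem()
chat.join("alice", is_spectator=False)
chat.join("bob", is_spectator=True)
print(chat.channels)  # users and spectators chat on separate channels
```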
Abstract:
Various embodiments provide techniques for implementing speech recognition for context switching. In at least some embodiments, the techniques can enable a user to switch between different contexts and/or user interfaces of an application via speech commands. In at least some embodiments, a context menu is provided that lists available contexts for an application that may be navigated to via speech commands. In implementations, the contexts presented in the context menu include a subset of a larger set of contexts that are filtered based on a variety of context filtering criteria. A user can speak one of the contexts presented in the context menu to cause a navigation to a user interface associated with one of the contexts.
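The context filtering and spoken navigation might be sketched as follows, with recency used as a stand-in for the unspecified filtering criteria; all names here are hypothetical.

```python
# Hedged sketch of speech-driven context switching: filter the full
# context set down to a menu, then navigate to whichever listed
# context the recognizer hears.

def build_context_menu(all_contexts, current, recently_used):
    """Subset of contexts shown in the menu (recency filter is assumed)."""
    return [c for c in all_contexts if c != current and c in recently_used]

def on_speech(command, menu, navigate):
    """Navigate only if the spoken command names a listed context."""
    if command in menu:
        navigate(command)

menu = build_context_menu(["garage", "race", "store"], "race", {"garage", "store"})
on_speech("garage", menu, navigate=lambda c: print("navigating to", c))
```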
Abstract:
Methods and systems for beam forming an audio signal based on a location of an object relative to a listening device, the location being determined from positional data deduced from an optical image including the object. In an embodiment, an object's position is tracked based on video images of the object, and the audio signal received from a microphone array located at a fixed position is filtered based on the tracked object position. Beam forming techniques may be applied to emphasize portions of an audio signal associated with sources near the object.
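A minimal delay-and-sum sketch of such beam forming, steering a fixed microphone array toward the tracked position so that sound from that direction adds coherently; the array geometry, sample rate, and function names are assumptions for illustration.

```python
# Delay-and-sum sketch: delay each microphone channel so signals
# arriving from the tracked position line up, then average them.
import math

SPEED_OF_SOUND = 343.0  # m/s
SAMPLE_RATE = 16000     # Hz (assumed)

def steering_delays(mic_positions, source_position):
    """Per-microphone delay (in samples) toward the tracked source."""
    dists = [math.dist(m, source_position) for m in mic_positions]
    ref = min(dists)
    return [round((d - ref) / SPEED_OF_SOUND * SAMPLE_RATE) for d in dists]

def delay_and_sum(channels, delays):
    """Advance each channel by its delay and average the overlap."""
    n = min(len(ch) - d for ch, d in zip(channels, delays))
    return [sum(ch[d + i] for ch, d in zip(channels, delays)) / len(channels)
            for i in range(n)]

mics = [(-0.1, 0.0), (0.1, 0.0)]
delays = steering_delays(mics, (1.0, 2.0))  # position from video tracking
print(delays)  # e.g. [4, 0]: left mic is farther from the source
```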