Abstract:
Embodiments described herein relate generally to systems comprising a display device, a computing platform coupled to the display device, a mobile device in communication with the computing platform, and a content server. Methods and techniques for capturing and/or processing audiovisual performances are described and, in particular, techniques suitable for use in connection with display-device-connected computing platforms for rendering vocal performances captured by a handheld computing device.
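The component flow above (mobile capture, content server upload, display-platform rendering) can be sketched minimally as follows. All class and method names here are illustrative assumptions, not taken from any described embodiment:

```python
from dataclasses import dataclass

@dataclass
class CapturedPerformance:
    """Vocal audio captured on a mobile device against a backing track.

    Names are hypothetical; a real system would carry far more metadata
    (pitch-correction settings, effects schedule, user identity, etc.).
    """
    vocal_audio: bytes
    backing_track_id: str

class ContentServer:
    """Receives captured performances uploaded from mobile devices."""
    def __init__(self):
        self.uploads = []

    def upload(self, perf: CapturedPerformance) -> None:
        self.uploads.append(perf)

class DisplayPlatform:
    """Display-device-coupled computing platform that renders the mix."""
    def render(self, perf: CapturedPerformance) -> str:
        # In a real embodiment this would mix and play audio; here we
        # just report what would be rendered.
        return f"rendering vocals mixed with track {perf.backing_track_id}"

# Mobile device captures, uploads to the server; platform renders.
server = ContentServer()
perf = CapturedPerformance(vocal_audio=b"", backing_track_id="track-1")
server.upload(perf)
status = DisplayPlatform().render(perf)
```

This is only a data-flow sketch under stated assumptions; the claimed systems describe the communication topology, not a specific API.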
Abstract:
Vocal musical performances may be captured and, in some cases or embodiments, pitch-corrected and/or processed in accord with a user-selectable vocal effects schedule for mixing and rendering with backing tracks in ways that create compelling user experiences. In some cases, the vocal performances of individual users are captured on mobile devices in the context of a karaoke-style presentation of lyrics in correspondence with audible renderings of a backing track. Such performances can be pitch-corrected in real time at the mobile device in accord with pitch correction settings. Vocal effects schedules may also be selectively applied to such performances. In these ways, even amateur user/performers with imperfect pitch are encouraged to take a shot at “stardom” and/or to participate in game play, social networking, or vocal achievement application architectures that facilitate musical collaboration on a global scale and/or, in some cases or embodiments, to initiate revenue-generating in-application transactions.
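The real-time pitch correction described above can be illustrated with a minimal per-frame sketch that snaps a detected vocal pitch to the nearest note of a user-selected scale. The function names, the scale encoding as pitch classes, and the default C-major scale are assumptions for illustration, not details of the described embodiments:

```python
import math

A4_HZ = 440.0  # standard concert pitch reference

def freq_to_midi(freq_hz: float) -> float:
    """Convert a frequency in Hz to a (fractional) MIDI note number."""
    return 69.0 + 12.0 * math.log2(freq_hz / A4_HZ)

def midi_to_freq(midi: float) -> float:
    """Convert a MIDI note number back to frequency in Hz."""
    return A4_HZ * 2.0 ** ((midi - 69.0) / 12.0)

def correct_pitch(freq_hz, scale_pcs=frozenset({0, 2, 4, 5, 7, 9, 11})):
    """Snap a detected pitch to the nearest note in the given scale.

    scale_pcs is a set of pitch classes (C major by default). A real
    mobile implementation would operate on short analysis frames and
    smooth between corrections to avoid audible artifacts.
    """
    midi = freq_to_midi(freq_hz)
    # Consider nearby semitones that belong to the scale.
    candidates = [n for n in range(int(midi) - 2, int(midi) + 3)
                  if n % 12 in scale_pcs]
    target = min(candidates, key=lambda n: abs(n - midi))
    return midi_to_freq(target)
```

For example, a slightly sharp A4 sung at 452 Hz is pulled back to 440 Hz. Per-frame snapping like this is the simplest possible scheme; the abstracts' "pitch correction settings" suggest the degree and target of correction are configurable.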
Abstract:
Notwithstanding practical limitations imposed by mobile device platforms and applications, truly captivating musical instruments may be synthesized in ways that allow musically expressive performances to be captured and rendered in real time. Synthetic musical instruments that provide a game, grading, or instructional mode are described, in which one or more qualities of a user's performance are assessed relative to a musical score. By providing a range of modes (from score-assisted to fully user-expressive), user interactions with synthetic musical instruments are made more engaging and tend to capture user interest over longer periods of time. Synthetic musical instruments are described in which force dynamics of user gestures (such as finger contact forces applied to a multi-touch sensitive display or surface and/or the temporal extent and applied pressure of sustained contact thereon) are captured and drive the digital synthesis in ways that enhance expressiveness of user performances.
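The mapping from gesture force dynamics to synthesis parameters can be sketched as a simple function from normalized contact pressure and sustain duration to amplitude and timbre controls. The parameter names, ranges, and curves below are illustrative assumptions, not the mapping of any described embodiment:

```python
def gesture_to_synth_params(pressure: float, duration_s: float,
                            max_pressure: float = 1.0) -> dict:
    """Map touch force dynamics to hypothetical synthesis parameters.

    pressure:   contact force reported by the touch surface
    duration_s: temporal extent of the sustained contact
    Harder touches yield louder, brighter notes; longer contact
    extends the sustain portion of the envelope.
    """
    p = max(0.0, min(pressure, max_pressure)) / max_pressure
    amplitude = p ** 0.5               # compressive curve: soft touches still audible
    cutoff_hz = 200.0 + 7800.0 * p     # brighter timbre under greater force
    sustain = min(duration_s / 2.0, 1.0)  # cap sustain at 2 s of contact
    return {"amplitude": amplitude, "cutoff_hz": cutoff_hz, "sustain": sustain}

# A firm, held touch drives the synthesizer at full amplitude and brightness.
params = gesture_to_synth_params(pressure=1.0, duration_s=2.0)
```

Any continuous mapping of this shape lets per-gesture force variation come through in the rendered audio, which is the expressiveness gain the abstract describes.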