Abstract:
A user can interface with a computer system to interact with a computer program using an input device. The device includes one or more tracking devices and an input mode control. The one or more tracking devices are configured to communicate information relating to a position, orientation, or motion of one or more controllers to the computer system. The input mode control is configured to communicate an input mode signal to the computer system during interaction with the computer program. The input mode signal is configured to cause the computer program to interpret the information relating to the position, orientation, or motion of the one or more controllers according to a particular input mode of a plurality of different input modes.
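The mode-dependent interpretation described above can be sketched as a simple dispatch on the input mode signal. This is an illustrative sketch only; the function and mode names are invented, not taken from the patent.

```python
# Hypothetical sketch: interpret controller tracking data according to an
# input-mode signal. Mode names ("pointer", "gesture") are assumptions.

def interpret(tracking, mode):
    """Interpret position/orientation/motion data per the active input mode."""
    if mode == "pointer":
        # Use orientation to drive an on-screen cursor.
        return ("cursor", tracking["orientation"])
    if mode == "gesture":
        # Use motion data for gesture recognition.
        return ("gesture", tracking["motion"])
    # Default mode: raw positional input.
    return ("position", tracking["position"])
```

The same tracking data thus yields different program behavior depending solely on the mode signal, which is the core idea of the abstract.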
Abstract:
Computer-implemented management of distributed televised and/or online entertainment events and contests involves using a contestant module executing on a mobile device with the facility to record an audio-visual performance by a contestant associated with the mobile device; transmitting, via a communications network, the recorded audio-visual performance from the contestant module executing on the mobile device to a producer module executing on a computer system remote from the mobile device; evaluating the recorded audio-visual performance to determine whether it satisfies one or more predetermined criteria; if the recorded audio-visual performance is determined to satisfy the one or more predetermined criteria, making the recorded audio-visual performance available for viewing by an audience; receiving, from viewer modules executing on respective audience member devices, votes relating to the recorded audio-visual performance; and determining a winner of the distributed online contest based at least in part on the received votes.
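The final step, determining a winner from received votes, can be illustrated with a minimal tally. This is a sketch under assumed data shapes (a flat list of performance identifiers), not the patented method.

```python
# Illustrative only: tally votes from viewer modules and pick the winner.
from collections import Counter

def determine_winner(votes):
    """votes: list of performance ids voted for; return the most-voted id."""
    return Counter(votes).most_common(1)[0][0]
```

A real contest system would combine such a tally with the other criteria the abstract mentions ("based at least in part on the received votes").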
Abstract:
In some embodiments, the instant invention provides for a computer system that includes at least one server and software stored on a computer readable medium accessible by the at least one server; at least one database that is accessible by the at least one server and is configured to store the game data of the at least one personalized game; and a plurality of specifically programmed input devices, where the at least one server, the at least one database, and the plurality of specifically programmed input devices are connected through a computer network; where the plurality of specifically programmed input devices comprises at least a thousand specifically programmed input devices; and where the at least one server is configured to manage, in real time, the at least a thousand specifically programmed input devices.
Abstract:
A system and method are disclosed for providing input to a software application such as a gaming application. The inputs may be received from one or both of a hand-held controller and a NUI system accepting gestural inputs. In an example, a gesture and controller API receives skeletal pose data from the NUI system and converts that data to Boolean or floating point data. In this way, data from both the controller and the NUI system may be exposed to the software application as easily consumable data in the same format.
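The conversion the abstract describes, from skeletal pose data to Boolean or floating-point values matching controller input, can be sketched as follows. All names and thresholds here are invented for illustration; the actual API is not specified in the abstract.

```python
# Hedged sketch: normalize skeletal pose measurements into the Boolean /
# floating-point record a game would also receive from a controller.

def pose_to_inputs(hand_y, shoulder_y, arm_span, reach):
    """Map raw joint measurements to a controller-style input record."""
    return {
        "hand_raised": hand_y > shoulder_y,          # Boolean, like a button press
        "reach_amount": min(reach / arm_span, 1.0),  # float in [0, 1], like a trigger
    }
```

Exposing gestures in this form lets the application consume NUI and controller input through one code path, which is the stated goal of the gesture and controller API.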
Abstract:
A three-dimensional (3D) sound gaming application can include an ultrasonic sound system, one or more gamers, a gaming console, and a throat microphone set. The ultrasonic sound system can include a digital signal processor (DSP) that can adjust the phase, delay, reverb, echo, gain, magnitude, or other audio signal component of an audio signal or audio signal components received from the gaming console; an amplifier which can amplify the processed audio signal; and a pair of emitters which can emit ultrasonic signals to each gamer's ears to produce a 3D sound effect.
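Two of the adjustments the DSP performs, delay and gain, can be shown on a block of samples. This is a minimal sketch of those two operations only, not the patented ultrasonic processing chain.

```python
# Illustrative sketch: apply a delay (zero-padding) and a gain to an audio
# block, two of the signal components the abstract says the DSP can adjust.

def process(samples, gain, delay_samples):
    """Delay a block by delay_samples and scale it by gain."""
    delayed = [0.0] * delay_samples + list(samples)
    return [s * gain for s in delayed]
```

Applying slightly different delays and gains per ear is one conventional way such systems create a spatial sound impression.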
Abstract:
A computer-implemented game resident on a device has a game logic module for controlling operation of the game to create sensory output for presentation to a player. One or more inputs responsive to an external environment provide external input data. An effect generator modifies the sensory output determined by the game logic based on the external input data independently of the game logic module. The sensory output could be the video output, the audio output, or both. For example, if the external input relates to ambient light level, the effect generator might dim the display and quieten the audio output.
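The ambient-light example can be sketched as an effect generator that scales the game's output without touching game logic. The function name and the floor value are assumptions made for illustration.

```python
# Hedged sketch: scale video brightness and audio volume from an ambient-light
# reading (all values in 0.0-1.0), independently of the game logic module.

def apply_ambient_effect(brightness, volume, ambient_light):
    """Dim the display and quieten the audio as ambient light falls."""
    factor = max(ambient_light, 0.2)  # assumed floor: never fully dark/silent
    return brightness * factor, volume * factor
```

Because the effect generator only post-processes the sensory output, the game logic module runs unchanged, matching the independence the abstract claims.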
Abstract:
A device and a method facilitating generation of one or more intuitive gesture sets for interpretation for a specific purpose are disclosed. Data is captured in a scalar and a vector form, which is then fused and stored. The intuitive gesture sets generated after the fusion are further used by one or more components/devices/modules for one or more specific purposes. Also incorporated is a system for playing a game. The system receives one or more actions in a scalar and a vector form from one or more users in order to map the action with at least one prestored gesture, identify the user in control amongst a plurality of users, and interpret the action of that user for playing the game. In accordance with the interpretation, an act is generated by the one or more components of the system for playing the game.
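The matching step, mapping a fused scalar-and-vector action to a prestored gesture to identify the user in control, can be sketched as a nearest-match lookup. Data shapes, the tolerance, and all names are invented for illustration.

```python
# Hedged sketch: match a (scalar, (vx, vy)) action against prestored gestures
# to identify which user is in control. Tolerance is an assumed parameter.

def match_gesture(action, prestored, tol=0.1):
    """Return the user whose prestored gesture matches the action, else None."""
    a_s, (a_vx, a_vy) = action
    for user, (s, (vx, vy)) in prestored.items():
        if abs(a_s - s) <= tol and abs(a_vx - vx) <= tol and abs(a_vy - vy) <= tol:
            return user
    return None
```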
Abstract:
Gestures of a computer user are observed with a depth camera. A first gesture of the computer user is identified as one of a plurality of different action selection gestures, each action selection gesture associated with a different action performable within an interactive interface controlled by gestures of the computer user. A second gesture is identified as a triggering gesture that causes performance of the action associated with the action selection gesture.
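The two-step scheme above, an action-selection gesture that arms an action followed by a triggering gesture that performs it, can be sketched as a small state machine. The class, gesture names, and callback shape are assumptions, not the patented implementation.

```python
# Illustrative sketch: action-selection gestures arm an action; a triggering
# gesture performs the armed action. All names are invented.

class GestureInterface:
    def __init__(self, actions):
        self.actions = actions      # selection gesture name -> action callable
        self.selected = None        # currently armed action, if any

    def observe(self, gesture):
        if gesture in self.actions:             # an action-selection gesture
            self.selected = gesture
            return None
        if gesture == "trigger" and self.selected:
            return self.actions[self.selected]()  # perform the armed action
        return None
```

Separating selection from triggering avoids accidental activation: the action runs only when the distinct triggering gesture follows a selection.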
Abstract:
Methods, computer programs, and systems for interfacing a user with a computer program, utilizing gaze detection and voice recognition, are provided. One method includes an operation for determining if a gaze of a user is directed towards a target associated with the computer program. The computer program is set to operate in a first state when the gaze is determined to be on the target, and set to operate in a second state when the gaze is determined to be away from the target. When operating in the first state, the computer program processes voice commands from the user, and, when operating in the second state, the computer program omits processing of voice commands.
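The two-state behavior described can be sketched as a gaze-gated handler: voice commands are processed only while the gaze is on the target. The function name and the trivial "processing" step are assumptions for illustration.

```python
# Minimal sketch of the described two states: process voice commands while the
# gaze is on the target (first state); omit processing otherwise (second state).

def handle_voice(gaze_on_target, command):
    """Return the processed command in the first state; None in the second."""
    if gaze_on_target:          # first state: gaze is on the target
        return command.lower().strip()
    return None                 # second state: voice commands are ignored
```

Gating on gaze this way keeps stray speech from being misread as a command whenever the user is not attending to the target.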