Abstract:
A method of generating a view of a computer-generated environment using a location in a real-world environment, comprising receiving real-time data regarding the location of a device in the real-world environment; mapping the real-time data regarding the device into a virtual camera within a directly-correlating volume of space in the computer-generated environment; updating the virtual camera location using the real-time data, such that the virtual camera is assigned a location in the computer-generated environment which corresponds to the location of the device in the real-world environment; and using the virtual camera to generate a view of the computer-generated environment from the assigned location in the computer-generated environment.
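The mapping described above can be illustrated with a short sketch: a tracked device position inside a real-world volume is scaled into a directly correlating virtual volume and assigned to a virtual camera. All names, units, and the linear scaling are assumptions for illustration, not the patent's actual implementation.

```python
# Illustrative sketch (assumed linear mapping): a device position in a
# real-world volume is mapped into a directly correlating virtual volume.
from dataclasses import dataclass

@dataclass
class Volume:
    origin: tuple  # (x, y, z) of the volume's minimum corner
    size: tuple    # (width, height, depth)

def map_device_to_camera(device_pos, real: Volume, virtual: Volume):
    """Map a device position in the real volume to a virtual camera position."""
    return tuple(
        virtual.origin[i]
        + (device_pos[i] - real.origin[i]) / real.size[i] * virtual.size[i]
        for i in range(3)
    )

real = Volume(origin=(0.0, 0.0, 0.0), size=(4.0, 3.0, 4.0))           # metres
virtual = Volume(origin=(0.0, 0.0, 0.0), size=(400.0, 300.0, 400.0))  # world units
camera_pos = map_device_to_camera((2.0, 1.5, 2.0), real, virtual)
```

In this sketch, re-running the mapping on each new real-time sample updates the virtual camera so its assigned location always corresponds to the device's real-world location.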
Abstract:
A computer implemented method for inputting transient data into a persistent world is provided. The method includes capturing sensor data from a sensor. The method further includes detecting a condition, wherein the detection is based at least in part on matching a detection criterion from a database of a plurality of detection criteria to the captured sensor data. The method includes interpreting the detected condition, wherein the interpretation is based at least in part on matching an interpretation criterion from a database of a plurality of interpretation criteria to the detected condition. Finally, the method includes registering the interpretation of the detected condition with a virtual object in a simulation.
Abstract:
Systems and methods for rendering an entertainment animation. The system can comprise a user input unit for receiving a non-binary user input signal; an auxiliary signal source for generating an auxiliary signal; a classification unit for classifying the non-binary user input signal with reference to the auxiliary signal; and a rendering unit for rendering the entertainment animation based on classification results from the classification unit.
Abstract:
An audio scene is created for an avatar in a virtual environment of multiple avatars. A link structure is created between the avatars. An audio scene is created for each avatar, based on an avatar's associations with other linked avatars.
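The per-avatar audio scene described above can be sketched as a lookup over the link structure: each avatar hears the avatars it is linked to, with a gain derived from the association strength of the link. The data layout and gain semantics here are assumptions for illustration only.

```python
# Illustrative sketch (assumed representation): a link structure maps
# avatar pairs to an association strength in [0, 1]; an avatar's audio
# scene is the set of linked avatars with that strength used as gain.
def build_audio_scene(avatar, links):
    """Return {other_avatar: gain} for every avatar linked to `avatar`."""
    scene = {}
    for (a, b), strength in links.items():
        if a == avatar:
            scene[b] = strength
        elif b == avatar:
            scene[a] = strength
    return scene

links = {("alice", "bob"): 0.9, ("alice", "carol"): 0.4, ("bob", "carol"): 0.7}
alice_scene = build_audio_scene("alice", links)
```

Here `alice` hears `bob` strongly and `carol` faintly, based solely on her links; an unlinked avatar contributes nothing to her scene.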
Abstract:
The present invention relates to games on electronic game devices. More specifically, the present invention relates to a method and a device for generating game control data for an electronic game dependent on context-related data. The present invention is provided to execute a game in relation to present or selected external circumstances that can be perceived by a player. The method of the present invention is based on accessing context data, such as a piece of music, and generating game control data on the basis of said accessed context data. The game control data can be used to control the execution of the game, which can in turn be perceived by the player as providing more realism in gaming.
Abstract:
A portable telephone having a portable telephone function and a game machine function such as for a music game using a vibration device such as a vibration sensor or a vibration motor. When the user shakes the portable telephone like a maraca in rhythm with a predetermined tune, a vibration device generates a vibration pulse, which is compared with a rhythm pulse. Then the shaking of the portable telephone by the user is rated according to the difference in time between the vibration pulse and a rhythm pulse, and the score is displayed on the screen of a display.
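The rating step described above can be sketched as comparing each vibration-pulse time against the nearest rhythm-pulse time and awarding points by how small the difference is. The timing window and scoring curve below are assumptions for illustration, not the patent's actual rating rule.

```python
# Illustrative sketch (assumed scoring rule): each shake (vibration
# pulse) is rated by its time difference from the nearest rhythm pulse.
def rate_shakes(vibration_times, rhythm_times, window=0.15):
    """Score vibration pulses against rhythm pulses (times in seconds).

    A pulse within `window` of a rhythm pulse scores 100; between
    `window` and 2*window the score falls off linearly; beyond that, 0.
    """
    total = 0
    for t in vibration_times:
        diff = min(abs(t - r) for r in rhythm_times)
        if diff <= window:
            total += 100
        elif diff <= 2 * window:
            total += int(100 * (1 - (diff - window) / window))
        # beyond 2*window: no points for this shake
    return total

rhythm = [0.0, 0.5, 1.0, 1.5]   # expected beats of the tune
shakes = [0.05, 0.55, 1.2, 1.5] # measured vibration pulses
score = rate_shakes(shakes, rhythm)
```

The resulting total would then be shown on the display, as the abstract describes.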
Abstract:
An image capture device includes: a housing; a first camera defined along a front surface of the housing; a first camera controller configured to control the first camera to capture images of an interactive environment during user interactivity at a first exposure setting; a second camera defined along the front surface of the housing; a second camera controller configured to control the second camera to capture images of the interactive environment during the user interactivity at a second exposure setting lower than the first exposure setting, the captured images from the second camera being analyzed to identify and track an illuminated object in the interactive environment.
Abstract:
Remote-controller assemblies for touchscreens provide for the capture, translation and/or transmission (both directly, in a conductive channel, and indirectly) of the control input of a user, typically user motions, for corresponding capacitive discharge at a touchscreen. One or more remote motion-sensing input devices register one or more user motion inputs for respective output to an intermediary-transceiver device, which processes and transmits a capacitive load to attached output ends connected to a touchscreen. The attached output ends act as a capacitive input in controlling one or more on-screen actionable objects seeking said capacitive input. Various specialty controllers are introduced, including mats, musical instruments, steering-wheel assemblies, hockey sticks, golf clubs, baseball bats and gloves, bowling balls and DJ stations.