Abstract:
Systems and methods for executing a game presented on a screen of a head mounted display (HMD) include executing the game. The execution of the game renders interactive scenes of the game on the screen of the HMD. Images identifying a shift in the gaze direction of a user wearing the HMD are received. The gaze shift is detected during viewing of the interactive scenes presented on the HMD screen. Real-world images that are in line with the gaze direction of the user are captured by a forward-facing camera of the HMD. A portion of the screen is transitioned from a non-transparent mode to a semi-transparent mode in response to the shift in the gaze direction, such that at least part of the real-world images are presented in the portion of the screen rendering the interactive scenes of the game. The semi-transparent mode is discontinued after a period of time.
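A minimal sketch of this gaze-triggered blending logic, in Python. The shift threshold, blend duration, region size, and the GazeSample/TransparencyController names are all illustrative assumptions, not the disclosed implementation:

    import time
    from dataclasses import dataclass

    @dataclass
    class GazeSample:
        x: float  # normalized screen coordinates, 0..1
        y: float

    SHIFT_THRESHOLD = 0.15  # assumed minimum normalized gaze shift
    BLEND_SECONDS = 3.0     # assumed time before semi-transparent mode ends

    class TransparencyController:
        """Blends camera passthrough into the region the gaze shifted toward."""

        def __init__(self):
            self.last_gaze = None
            self.blend_until = 0.0

        def on_gaze(self, gaze, now=None):
            now = time.monotonic() if now is None else now
            if self.last_gaze is not None:
                dx = gaze.x - self.last_gaze.x
                dy = gaze.y - self.last_gaze.y
                if (dx * dx + dy * dy) ** 0.5 >= SHIFT_THRESHOLD:
                    self.blend_until = now + BLEND_SECONDS
            self.last_gaze = gaze
            active = now < self.blend_until
            # Semi-transparent (alpha 0.5) window centered on the gaze point.
            return {"active": active,
                    "alpha": 0.5 if active else 0.0,
                    "region": (gaze.x, gaze.y, 0.25, 0.25) if active else None}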
Abstract:
Systems and methods for executing content to be rendered on a screen of a head mounted display (HMD) are provided. One method includes executing the content to render interactive scenes on the screen and tracking an orientation direction of the HMD while the HMD is worn on a head of a user and the interactive scenes are being rendered on the screen. The method includes changing view directions into the interactive scenes based on changes in the orientation direction of the HMD, such that movements of the head of the user cause the changes in the view directions into the interactive scenes. The method further includes receiving images of a real world space using a camera of the HMD. The camera of the HMD is configured to capture a location of real world objects in the real world space relative to the user of the HMD. The method includes detecting that at least one real world object is becoming proximate to the user of the HMD and generating a warning or message to be presented on the HMD, the warning or message indicating that the user is likely to bump into or contact the at least one real world object. The method further includes transitioning at least a portion of the screen to a transparent mode. The transparent mode provides at least a partial view into the real world space using the camera of the HMD.
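The proximity-warning step could reduce to a distance check per tracked object. The sketch below is a hedged illustration; the WorldObject type, the 0.75 m threshold, and the event schema are assumptions:

    from dataclasses import dataclass

    @dataclass
    class WorldObject:
        label: str
        distance_m: float  # estimated from the HMD camera images (assumed)

    WARN_DISTANCE_M = 0.75  # assumed proximity threshold

    def proximity_events(objects):
        """Yield a warning, and request passthrough, for each object too close."""
        for obj in objects:
            if obj.distance_m <= WARN_DISTANCE_M:
                yield {"message": f"Caution: you may contact the {obj.label}",
                       "transparent_mode": True}  # partial real-world view

    # Example: a chair 0.5 m away triggers a warning; the wall does not.
    events = list(proximity_events([WorldObject("chair", 0.5),
                                    WorldObject("wall", 2.0)]))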
Abstract:
A system for interfacing with an interactive program is provided, including: a computing device for executing the interactive program; a display device for enabling user control of, and input to, the interactive program, the display device being configured to be attached to the user; wherein the computing device is configured to receive data from an image capture device to determine and track a position of the display device; wherein the computing device is configured to define interactive zones, each interactive zone being defined by a spatial region having an associated specified function for an action of the display device when the display device is positioned within that interactive zone; and wherein the computing device is configured to set the functionality of the action of the display device to the specified function associated with the interactive zone within which the display device is located.
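One plausible reading of the interactive-zone mapping is a lookup from a tracked position to the function of the zone that contains it. The Zone class, the axis-aligned bounds, and the example zone names below are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class Zone:
        name: str
        min_corner: tuple  # (x, y, z) bounds of the spatial region
        max_corner: tuple
        function: str      # what the device's action means inside this zone

        def contains(self, pos):
            return all(lo <= p <= hi for p, lo, hi in
                       zip(pos, self.min_corner, self.max_corner))

    ZONES = [Zone("near_screen", (-1, 0, 0), (1, 2, 1), "menu_select"),
             Zone("play_area", (-2, 0, 1), (2, 2, 4), "swing_racket")]

    def function_for(pos):
        """Return the action mapped to the zone containing the tracked device."""
        for zone in ZONES:
            if zone.contains(pos):
                return zone.function
        return "default"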
Abstract:
A method for determining the position of a controller device comprises: receiving dimensions of a display, input by a user of a computer-based system; capturing successive images of the display at the controller device; determining a position of the controller device relative to the display based on the dimensions of the display and a perspective distortion of the display in the captured successive images; and providing the determined position of the controller device to the computer-based system to interface with an interactive program and cause an action by the interactive program.
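The position determination follows from the pinhole camera model: a display of known physical width that spans fewer pixels in the captured image must be farther away. A worked sketch, with all parameter names assumed for illustration:

    def controller_position(display_width_m, apparent_width_px,
                            display_center_px, image_center_px, focal_px):
        """Pinhole-model estimate: the display's apparent size in the captured
        image shrinks in proportion to its distance from the controller."""
        depth_m = focal_px * display_width_m / apparent_width_px
        # Horizontal offset of the display in the controller's camera frame.
        offset_m = (display_center_px - image_center_px) * depth_m / focal_px
        return depth_m, offset_m

    # Example: a 1.0 m wide display spanning 500 px through a 1000 px focal
    # length lens places the controller about 2.0 m from the screen.
    print(controller_position(1.0, 500, 640, 640, 1000))  # (2.0, 0.0)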
Abstract:
Methods and systems for providing input to an interactive program are presented. An interactive system includes a server for executing the interactive program, and a game client interfaced with the server over a network. The game client is configured to send, over the network, position data defining a position of a controller device. The server is configured to define interactive zones, each interactive zone being defined by a spatial region having an associated specified function for an action of the controller device when the controller device is located within that interactive zone. The server is further configured to set the functionality of the action of the controller device to the specified function associated with the interactive zone within which the controller device is located.
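A hedged sketch of the client-to-server position update this abstract implies; the JSON schema and function names are invented for illustration, and resolve_zone stands in for a zone lookup such as the function_for() sketched earlier:

    import json

    def position_update(client_id, pos, timestamp):
        """Game-client side: serialize a controller position for the server."""
        return json.dumps({"client": client_id, "t": timestamp,
                           "pos": {"x": pos[0], "y": pos[1], "z": pos[2]}})

    def handle_update(raw, resolve_zone):
        """Server side: decode the update and bind the controller's action to
        the function of the interactive zone that now contains it."""
        msg = json.loads(raw)
        pos = (msg["pos"]["x"], msg["pos"]["y"], msg["pos"]["z"])
        return resolve_zone(pos)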
Abstract:
Consumer electronic devices have been developed with enormous information processing capabilities, high quality audio and video outputs, large amounts of memory, and may also include wired and/or wireless networking capabilities. Additionally, relatively unsophisticated and inexpensive sensors, such as microphones, video cameras, GPS or other position sensors, when coupled with devices having these enhanced capabilities, can be used to detect subtle features about users and their environments. A variety of audio, video, simulation and user interface paradigms have been developed to utilize the enhanced capabilities of these devices. These paradigms can be used separately or together in any combination. One paradigm includes automatically creating user identities using speaker identification. Another paradigm includes a control button with 3-axis pressure sensitivity for use with game controllers and other input devices.
Abstract:
A game controller includes an image capture unit; a body; at least one input device assembled with the body, the input device manipulable by a user to register an input from the user; an inertial sensor operable to produce information for quantifying a movement of said body through space; at least one light source assembled with the body; and a processor coupled to the image capture unit and the inertial sensor. The processor is configured to track the body by analyzing a signal from the inertial sensor and analyzing an image of the light source from the image capture unit. The processor is configured to establish a gearing between movement of the body and actions to be applied by a computer program.
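Fusing the inertial signal with the optical fix on the light source, and applying a gearing ratio, might look like the following complementary-filter sketch; the alpha weight, the gearing value, and both function names are assumptions, not the disclosed method:

    def fuse(prev_estimate, inertial_delta, optical_pos, alpha=0.8):
        """Complementary filter: dead-reckon with the inertial sensor between
        camera frames, then pull the estimate toward the optical fix obtained
        from the tracked light source."""
        predicted = [p + d for p, d in zip(prev_estimate, inertial_delta)]
        return [alpha * pr + (1 - alpha) * o
                for pr, o in zip(predicted, optical_pos)]

    def geared_motion(body_delta, gearing=2.5):
        """Scale tracked body movement by a gearing ratio before the program
        applies it, e.g. a small wrist motion drives a large on-screen move."""
        return [gearing * d for d in body_delta]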
Abstract:
A method for sharing content with other head mounted displays (HMDs) includes rendering content of a virtual environment scene on a display screen of an HMD associated with a first user. The display screen rendering the virtual environment scene represents a virtual reality space of the first user. A request to share the virtual reality space of the first user is detected. The request targets a second user. In response to detecting acceptance of the request, the virtual reality space of the first user is shared with the second user. The sharing allows synchronizing the virtual environment scene rendered on the head mounted displays of the first and second users.
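The request/accept/synchronize flow could be modeled as a small state machine. The ShareSession class and its states below are illustrative assumptions, not the disclosed protocol:

    from enum import Enum

    class ShareState(Enum):
        IDLE = 0
        PENDING = 1  # request sent, awaiting the second user's acceptance
        SHARED = 2   # both HMDs render a synchronized scene

    class ShareSession:
        """One first-user to second-user sharing handshake (assumed flow)."""

        def __init__(self, owner, target):
            self.owner, self.target = owner, target
            self.state = ShareState.IDLE

        def request(self):
            self.state = ShareState.PENDING

        def accept(self):
            if self.state is ShareState.PENDING:
                self.state = ShareState.SHARED

        def scene_update(self, scene):
            # Deliver the scene to both displays only while sharing is active.
            if self.state is ShareState.SHARED:
                return {self.owner: scene, self.target: scene}
            return {self.owner: scene}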