Abstract:
A method for transitioning gameplay is provided, the method including: receiving a signal to interrupt gameplay of a video game, the gameplay being presented on a head-mounted display; in response to receiving the signal, transitioning the gameplay from an active state to a paused state; wherein transitioning the gameplay includes identifying an intensity of a gameplay aspect, and progressively reducing the intensity of the gameplay aspect before entering the paused state.
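The staged pause described above (identify an intensity, reduce it progressively, then pause) can be sketched as a simple intensity ramp. The function names (`ramp_down`, `transition_to_pause`) and the linear schedule are illustrative assumptions, not the patented implementation:

```python
def ramp_down(intensity, steps):
    """Return a list of progressively reduced intensity values,
    ending at zero, to apply before entering the paused state."""
    if steps < 1:
        raise ValueError("steps must be >= 1")
    return [intensity * (1 - i / steps) for i in range(1, steps + 1)]

def transition_to_pause(aspect_intensity, steps=5):
    """Reduce the identified aspect intensity stepwise, then report
    the paused state (hypothetical engine hook omitted)."""
    schedule = ramp_down(aspect_intensity, steps)
    for level in schedule:
        pass  # in a real engine: apply `level` to the gameplay aspect each frame
    return {"state": "paused", "final_intensity": schedule[-1]}
```

A non-linear (e.g. ease-out) schedule would fit the same structure; only `ramp_down` would change.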
Abstract:
Systems and methods for executing a game presented on a screen of a head mounted display (HMD) include executing a game. The execution of the game renders interactive scenes of the game on the screen of the HMD. Images identifying a shift in the gaze direction of the user wearing the HMD are received. The gaze shift is detected during viewing of the interactive scenes presented on the HMD screen. Real-world images that are in line with the gaze direction of the user are captured by a forward-facing camera of the HMD. A portion of the screen is transitioned from a non-transparent mode to a semi-transparent mode in response to the shift in the gaze direction, such that at least part of the real-world images are presented in the portion of the screen rendering the interactive scenes of the game. The semi-transparent mode is discontinued after a period of time.
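The non-transparent-to-semi-transparent transition can be illustrated as per-pixel alpha compositing of the camera feed into the screen region, plus a time-based alpha schedule that discontinues the mode after a period. All names and the linear ramp are assumptions for illustration:

```python
def blend_region(game_px, camera_px, alpha):
    """Blend a camera pixel into a game pixel; alpha=0 is fully
    non-transparent (game only), alpha=1 shows only the real world."""
    return tuple(round((1 - alpha) * g + alpha * c)
                 for g, c in zip(game_px, camera_px))

def region_alpha(t, fade_in_s, hold_s):
    """Alpha for the semi-transparent region over time: ramp up over
    `fade_in_s` seconds, hold, then drop to 0 (mode discontinued)."""
    if t < fade_in_s:
        return t / fade_in_s
    if t < fade_in_s + hold_s:
        return 1.0
    return 0.0
```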
Abstract:
Methods, systems, and computer programs are presented for managing the display of images on a head mounted device (HMD). One method includes an operation for tracking the gaze of a user wearing the HMD, where the HMD is displaying a scene of a virtual world. In addition, the method includes an operation for detecting that the gaze of the user is fixed on a predetermined area for a predetermined amount of time. In response to the detecting, the method fades out a region of the display in the HMD, while maintaining the scene of the virtual world in an area of the display outside the region. Additionally, the method includes an operation for fading in a view of the real world in the region as if the HMD were transparent to the user while the user is looking through the region. The fading in of the view of the real world includes maintaining the scene of the virtual world outside the region.
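The trigger condition (gaze fixed on a predetermined area for a predetermined time) can be sketched as a dwell detector over sampled gaze points. The sampling model and all names here are assumptions:

```python
def dwell_detector(samples, area, dwell_s, dt):
    """Return the index of the gaze sample at which the gaze has stayed
    inside `area` (x0, y0, x1, y1) for at least `dwell_s` seconds, or
    None. `samples` are (x, y) gaze points taken every `dt` seconds."""
    x0, y0, x1, y1 = area
    held = 0.0
    for i, (x, y) in enumerate(samples):
        if x0 <= x <= x1 and y0 <= y <= y1:
            held += dt
            if held >= dwell_s:
                return i  # here the region fade-out would begin
        else:
            held = 0.0  # gaze left the area: reset the dwell timer
    return None
```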
Abstract:
A sensor generates signals representing whether a computer game headset is being worn properly so that the wearer may be advised. The sensor may be a pressure sensor or motion sensor or stretch sensor on the headset, or it may be a camera that images the wearer and uses image recognition to determine if the headset is on correctly.
Abstract:
One or more chat servers receive voice signals and pose (location and orientation) signals from devices such as VR headsets associated with respective chat participants. For each participant, the server renders a single stream representing the voices of the other participants, with the voice data in each stream being modified to account for the orientation of the head of the receiving participant. The server sends the streams to the participants for whom the streams are tailored. The voice information representing the chat of the other participants in a stream intended for a particular participant can also be modified to account for the distances between participants and orientations of speakers' heads relative to the particular participant for whom the stream is tailored.
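The per-listener tailoring (distance attenuation plus orientation-dependent panning) might look like the following 2-D sketch; the specific gain and pan models are illustrative assumptions, not the patented method:

```python
import math

def mix_for_listener(listener, speakers):
    """For one listener (x, y, head yaw), produce per-speaker
    (gain, pan) terms of the single tailored stream: gain falls with
    distance, pan follows the speaker's bearing relative to the
    listener's head orientation (-1 = hard left, +1 = hard right)."""
    lx, ly, yaw = listener
    out = []
    for sx, sy in speakers:
        dx, dy = sx - lx, sy - ly
        dist = math.hypot(dx, dy)
        gain = 1.0 / (1.0 + dist)           # simple distance attenuation
        bearing = math.atan2(dy, dx) - yaw  # angle relative to head yaw
        pan = math.sin(bearing)
        out.append((gain, pan))
    return out
```

A server would apply one such (gain, pan) pair per speaker when summing voices into the single stream sent to that participant.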
Abstract:
A system and method of tracking a location of a head mounted display (HMD) and generating additional virtual reality scene data to provide the user with a seamless virtual reality experience as the user interacts with and moves relative to the virtual reality scene. An initial position and pose of the HMD is determined using a camera or similar sensor mounted on or in the HMD. As the HMD is moved into a second position and pose, images of two or more fixed points are captured by the camera or sensor to determine a difference in position and pose of the HMD. The difference in position and pose of the HMD is used to predict corresponding movement in the virtual reality scene and generate corresponding additional virtual reality scene data for rendering on the HMD.
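The position-and-pose difference from two fixed points can be sketched in 2-D as a rigid-transform estimate over the captured point positions. Note that the HMD's own motion is the inverse of the apparent motion of the fixed points; this sketch and all names in it are hypothetical:

```python
import math

def pose_delta(points_before, points_after):
    """Estimate 2-D rotation (radians) and translation of two tracked
    fixed points between captures. A real system would invert this
    transform to recover the HMD's own motion, and work in 3-D."""
    (ax, ay), (bx, by) = points_before
    (cx, cy), (dx, dy) = points_after
    # rotation: change in the angle of the segment joining the points
    ang_before = math.atan2(by - ay, bx - ax)
    ang_after = math.atan2(dy - cy, dx - cx)
    rot = ang_after - ang_before
    # translation: displacement of the segment midpoint
    tx = (cx + dx) / 2 - (ax + bx) / 2
    ty = (cy + dy) / 2 - (ay + by) / 2
    return rot, (tx, ty)
```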
Abstract:
A glove interface object is provided, comprising: a plurality of electromagnets positioned at a wrist area of the glove interface object; a plurality of magnetic sensors respectively positioned at fingertip areas of the glove interface object, wherein each magnetic sensor is configured to generate data indicating distances to each of the electromagnets when each of the electromagnets is activated; a controller configured to control activation of the electromagnets and reading of the magnetic sensors in a time-division multiplexed arrangement, wherein each of the magnetic sensors is read during activation of a single electromagnet; a transmitter configured to transmit data derived from the reading of the magnetic sensors to a computing device for processing to generate data representing a pose of a virtual hand, the virtual hand capable of being rendered in a virtual environment presented on a head-mounted display.
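The time-division multiplexed arrangement (one electromagnet active per time slot, every fingertip sensor read during that slot) can be sketched as a nested read cycle; `read_distance` stands in for a hypothetical driver call and is not part of the described device:

```python
def tdm_read_cycle(electromagnets, sensors, read_distance):
    """One full time-division multiplexed cycle: activate electromagnets
    one at a time, and read every fingertip sensor while only that
    magnet is on, yielding a (sensor, magnet) -> distance table."""
    readings = {}
    for magnet in electromagnets:
        # only `magnet` is energized during this time slot
        for sensor in sensors:
            readings[(sensor, magnet)] = read_distance(sensor, magnet)
        # magnet de-energized before the next slot begins
    return readings
```

One cycle yields len(sensors) x len(electromagnets) distance readings, from which a solver could fit fingertip positions for the virtual hand pose.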
Abstract:
Consumer electronic devices have been developed with enormous information processing capabilities, high quality audio and video outputs, large amounts of memory, and may also include wired and/or wireless networking capabilities. Additionally, relatively unsophisticated and inexpensive sensors, such as microphones, video cameras, GPS or other position sensors, when coupled with devices having these enhanced capabilities, can be used to detect subtle features about users and their environments. A variety of audio, video, simulation and user interface paradigms have been developed to utilize the enhanced capabilities of these devices. These paradigms can be used separately or together in any combination. One paradigm includes automatically creating user identities using speaker identification. Another paradigm includes a control button with 3-axis pressure sensitivity for use with game controllers and other input devices.
Abstract:
Methods, systems, and devices are described for presenting non-video content through a mobile device that uses a video camera to track a video on another screen. In one embodiment, a system includes a video display, such as a TV, that displays video content. A mobile device with an integrated video camera captures video data from the TV and allows a user to select an area in the video in order to hear/feel/smell what is at that location in the video.