Abstract:
A number of brightness samples are taken outside a shape to compensate for blooming of the shape in an image generated by a digital camera. The brightness of each of the samples is determined and averaged, and the size of the shape is adjusted based on the difference between the brightness of the shape and the average of the brightness samples.
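A minimal sketch of this compensation in Python, assuming a grayscale image array and a detected shape given by its center, radius, and measured brightness; the ring offset and gain values are illustrative and not taken from the abstract:

```python
import numpy as np

def ring_samples(image, cx, cy, radius, n_samples=16, offset=5):
    """Sample brightness at points just outside the detected shape."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    r = radius + offset
    xs = np.clip((cx + r * np.cos(angles)).astype(int), 0, image.shape[1] - 1)
    ys = np.clip((cy + r * np.sin(angles)).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs].astype(float)

def compensate_bloom(image, cx, cy, radius, shape_brightness, gain=0.05):
    """Adjust the estimated radius based on how much the shape's brightness
    exceeds the average of the samples taken just outside it."""
    background = ring_samples(image, cx, cy, radius).mean()
    difference = shape_brightness - background
    # Brighter shapes bloom more, so reduce the measured size proportionally.
    return radius - gain * difference
```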
Abstract:
An image capture device includes: a housing; a first camera defined along a front surface of the housing; a first camera controller configured to control the first camera to capture images of an interactive environment during user interactivity at a first exposure setting; a second camera defined along the front surface of the housing; a second camera controller configured to control the second camera to capture images of the interactive environment during the user interactivity at a second exposure setting lower than the first exposure setting, the captured images from the second camera being analyzed to identify and track an illuminated object in the interactive environment.
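A rough sketch of how the low-exposure stream might be analyzed, assuming OpenCV, hypothetical camera indexes, and a simple threshold-plus-centroid detector; none of these specifics come from the abstract:

```python
import cv2

# Hypothetical device indexes for the two front-facing cameras.
HIGH_EXPOSURE_CAM, LOW_EXPOSURE_CAM = 0, 1

def open_camera(index, exposure):
    """Open a camera and request a specific exposure setting."""
    cap = cv2.VideoCapture(index)
    cap.set(cv2.CAP_PROP_EXPOSURE, exposure)  # second camera gets the lower value
    return cap

def track_illuminated_object(frame_low_exposure, threshold=200):
    """At low exposure only bright emitters survive, so a threshold plus
    centroid is enough to localize the illuminated object in the frame."""
    gray = cv2.cvtColor(frame_low_exposure, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    moments = cv2.moments(mask)
    if moments["m00"] == 0:
        return None  # no bright object found in this frame
    return (moments["m10"] / moments["m00"], moments["m01"] / moments["m00"])
```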
Abstract:
A method for providing directional input to a video game is provided. The method initiates with capturing an image of a first object and a second object, the first object and the second object being defined at a fixed distance from each other. A three-dimensional location of the first object is determined based on analysis of the captured image, and a two-dimensional location of the second object is determined based on analysis of the captured image. An input direction for a video game is determined based on the three-dimensional location of the first object and the two-dimensional location of the second object.
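A minimal sketch of combining the two locations into an input direction, assuming a pinhole camera model and that the 2D-tracked object is placed at the same depth as the 3D-tracked one; the intrinsics and the equal-depth assumption are illustrative, not specified by the abstract:

```python
import numpy as np

def back_project(pixel, depth, fx, fy, cx, cy):
    """Lift a 2D image point to 3D at an assumed depth (pinhole model)."""
    u, v = pixel
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

def input_direction(first_obj_3d, second_obj_2d, intrinsics):
    """Direction from the first (3D-tracked) object toward the second
    (2D-tracked) object, placed here at the same depth as the first."""
    fx, fy, cx, cy = intrinsics
    p1 = np.asarray(first_obj_3d, dtype=float)
    p2 = back_project(second_obj_2d, p1[2], fx, fy, cx, cy)
    d = p2 - p1
    return d / np.linalg.norm(d)
```

The known fixed distance between the two objects could instead be used to constrain the second object's depth; the sketch simply reuses the first object's depth.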
Abstract:
Input devices for interfacing with a game console to interact with a computer program are disclosed. In one example, the input device includes a controller with a handle and a spherical object that is connected to a first end of the handle. The controller further includes a circuit that identifies the position of the handle. The circuit further includes communication logic to communicate the identified position to the game console during interaction with the computer program. The controller further includes control inputs connected to a second end of the handle, wherein the spherical object is placed in contact with a surface when held by the handle and the circuit updates the identified position of the handle as the handle is pivoted on the surface. The control inputs provide commands that are exchanged with the game console to further interact with the computer program.
Abstract:
Methods and systems for beam forming an audio signal based on a location of an object relative to a listening device, the location being determined from positional data deduced from an optical image including the object. In an embodiment, an object's position is tracked based on video images of the object, and the audio signal received from a microphone array located at a fixed position is filtered based on the tracked object position. Beam forming techniques may be applied to emphasize portions of an audio signal associated with sources near the object.
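A minimal delay-and-sum sketch of steering toward the tracked position, assuming channels from a microphone array at known 3D positions; the abstract does not specify a particular beamforming method, and the names here are hypothetical:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(mic_signals, mic_positions, source_position, sample_rate):
    """Steer a microphone array toward a tracked 3D position by delaying
    each channel so sound arriving from that position adds coherently.

    mic_signals: array of shape (n_mics, n_samples), one row per channel.
    """
    mic_positions = np.asarray(mic_positions, dtype=float)
    source = np.asarray(source_position, dtype=float)
    distances = np.linalg.norm(mic_positions - source, axis=1)
    # Delay every channel relative to the microphone closest to the source.
    delays = (distances - distances.min()) / SPEED_OF_SOUND
    shifts = np.round(delays * sample_rate).astype(int)
    out = np.zeros(mic_signals.shape[1])
    for channel, shift in zip(mic_signals, shifts):
        # np.roll wraps at the edges; the wrap-around is ignored in this sketch.
        out += np.roll(channel, -shift)
    return out / len(mic_signals)
```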
Abstract:
Systems and methods include receiving an image for presentation on a display screen of a head mounted display (HMD). The image is provided by an application. The received image is pre-distorted to enable optics provided in the HMD to render the image. An alignment offset is identified for an eye of a user wearing the HMD by determining a position of the eye relative to an optical axis of at least one lens of the optics of the HMD. The pre-distorted image provided by the application is adjusted to define a corrected pre-distorted image that accounts for the alignment offset. The corrected pre-distorted image is forwarded to the display screen of the HMD for rendering, such that the image presented through the optics of the HMD removes aberrations caused by the alignment offset.
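A crude sketch of adjusting for the alignment offset, assuming the offset is expressed in whole pixels and that a simple translation of the pre-distorted image stands in for recomputing the warp about the shifted optical center; a real implementation would regenerate the distortion mesh instead:

```python
import numpy as np

def apply_alignment_offset(pre_distorted, offset_px):
    """Re-center the pre-distorted image so its distortion center lines up
    with the eye's measured position relative to the lens optical axis."""
    dy, dx = offset_px  # offset in whole pixels (assumption for this sketch)
    # Integer translation as a stand-in for recomputing the pre-distortion
    # warp around the shifted optical center.
    return np.roll(pre_distorted, shift=(dy, dx), axis=(0, 1))
```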
Abstract:
Methods and systems are provided for head mounted display (HMD) implementations. In one example implementation, an HMD includes a circuit for communicating with a computing system that processes multimedia content for display in the HMD. Further included is a front unit of the HMD that has a screen for displaying multimedia content, and the front unit has a set of LEDs. The HMD includes an accelerometer and a gyroscope disposed in the front unit of the HMD. A rear section of the HMD is provided having a set of LEDs. A headband connecting the front unit to the rear section is included, such that adjustment of the headband changes a separation distance between at least one of the set of LEDs of the front unit and at least one of the set of LEDs of the rear section. Calibration of the separation distance is performed from time to time to produce an estimated separation distance for tracking of the HMD during use.
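A small sketch of the periodic separation calibration, assuming triangulated 3D positions for one front LED and one rear LED are already available from camera tracking; the sample window and median are illustrative choices, not taken from the abstract:

```python
import numpy as np
from collections import deque

class SeparationCalibrator:
    """Maintains a running estimate of the front-to-rear LED separation,
    refreshed from time to time since the headband may have been adjusted."""

    def __init__(self, window=120):
        self.samples = deque(maxlen=window)

    def update(self, front_led_xyz, rear_led_xyz):
        """Record the distance between one front LED and one rear LED."""
        d = np.linalg.norm(np.asarray(front_led_xyz) - np.asarray(rear_led_xyz))
        self.samples.append(d)

    def estimated_separation(self):
        """Median over the recent window as the estimated separation distance."""
        return float(np.median(self.samples)) if self.samples else None
```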
Abstract:
To calibrate a positional sensor, a plurality of image locations and image sizes of a tracked object are received as the tracked object is moved through a rich motion path. Inertial data is received from the tracked object as the tracked object is moved through the rich motion path. Each of the plurality of image locations is converted to a three-dimensional coordinate system of the positional sensor based on the corresponding image sizes and a field of view of the positional sensor. An acceleration of the tracked object is computed in the three-dimensional coordinate system of the positional sensor. The inertial data is reconciled with the computed acceleration, calibrating the positional sensor.
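A minimal sketch of the conversion and reconciliation steps, assuming a tracked object of known physical size, a pinhole model (focal length in pixels derivable from the field of view), and a simple least-squares scale as the reconciliation; the helper names are hypothetical:

```python
import numpy as np

def image_to_camera_3d(u, v, size_px, cx, cy, focal_px, object_size_m):
    """Lift an image location and apparent size to a 3D point in the
    sensor's coordinate system (pinhole model, object of known size).
    focal_px can be derived from the sensor's field of view:
    focal_px = (image_width_px / 2) / tan(fov / 2)."""
    z = focal_px * object_size_m / size_px
    return np.array([(u - cx) * z / focal_px, (v - cy) * z / focal_px, z])

def camera_acceleration(positions, timestamps):
    """Second derivative of the 3D track gives acceleration in sensor coords."""
    velocities = np.gradient(positions, timestamps, axis=0)
    return np.gradient(velocities, timestamps, axis=0)

def reconcile(camera_accel, inertial_accel):
    """Least-squares scale that best maps inertial acceleration magnitudes
    onto the accelerations computed from the sensor's 3D track."""
    a = np.linalg.norm(inertial_accel, axis=1)
    b = np.linalg.norm(camera_accel, axis=1)
    return float(a.dot(b) / a.dot(a))
```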
Abstract:
To correct an angle error, acceleration data is received corresponding to a tracked object in a reference frame of the tracked object. Positional data of the tracked object is received from a positional sensor, and positional sensor acceleration data is computed from the received positional data. The acceleration data is transformed into a positional sensor reference frame using a rotation estimate. An amount of error between the transformed acceleration data and the positional sensor acceleration data is determined. The rotation estimate is updated responsive to the determined amount of error.
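A minimal sketch of the rotation update, assuming SciPy's Rotation type and a proportional correction; the gain and the specific error metric are assumptions, not details from the abstract:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def update_rotation_estimate(rotation_estimate, object_accel, sensor_accel,
                             gain=0.05):
    """Rotate the object's acceleration into the positional sensor's frame,
    measure the angular error against the sensor-derived acceleration, and
    nudge the rotation estimate to shrink that error."""
    transformed = rotation_estimate.apply(object_accel)
    axis = np.cross(transformed, sensor_accel)
    axis_norm = np.linalg.norm(axis)
    denom = np.linalg.norm(transformed) * np.linalg.norm(sensor_accel)
    if axis_norm == 0 or denom == 0:
        return rotation_estimate  # vectors already aligned or degenerate
    angle = np.arcsin(np.clip(axis_norm / denom, 0.0, 1.0))
    # Small proportional correction about the axis separating the two vectors.
    correction = Rotation.from_rotvec(gain * angle * axis / axis_norm)
    return correction * rotation_estimate
```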
Abstract:
To calibrate a tracking system, a computing device receives positional data of a tracked object from an optical sensor as the object is pointed approximately toward the optical sensor. The computing device computes a first angle of the object with respect to an optical axis of the optical sensor using the received positional data. The computing device receives inertial data corresponding to the object, wherein a second angle of the object with respect to a plane normal to gravity can be computed from the inertial data. The computing device determines a pitch of the optical sensor using the first angle and the second angle.
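A minimal sketch of the pitch computation, assuming the optical axis is +z in the sensor frame, the object's pointing axis is the accelerometer's x axis, and a particular sign convention for combining the two angles; all of these conventions are assumptions:

```python
import numpy as np

def angle_to_optical_axis(position_xyz):
    """First angle: elevation of the object off the sensor's optical axis,
    with the optical axis taken as +z in the sensor's coordinate system."""
    x, y, z = position_xyz
    return np.arctan2(y, z)

def angle_to_horizontal(accel_xyz):
    """Second angle: tilt of the object's pointing axis (assumed to be the
    accelerometer x axis) with respect to the plane normal to gravity,
    read while the object is held approximately still."""
    ax, ay, az = accel_xyz
    return np.arctan2(ax, np.hypot(ay, az))

def sensor_pitch(first_angle, second_angle):
    """With the object pointed at the sensor, its pitch relative to the
    horizontal minus its angle off the optical axis approximates the
    sensor's pitch (sign convention is an assumption)."""
    return second_angle - first_angle
```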