Abstract:
A head-mounted display (HMD) device to be worn by a user in a physical environment (PE) is controlled. A 3D virtual environment (VE) is modeled to include a virtual controllable object subject to virtual control input. The position of the user and the motion of the user's head and hands are monitored in the PE, and a physical surface in the PE is detected. A virtual user interface (vUI) is placed in the VE relative to a virtual perspective of the user. The vUI includes an information display and at least one virtual touch control that produces the virtual control input in response to virtual manipulation of the virtual touch control. The vUI's placement is determined so that it coincides with the physical surface in the PE relative to the position of the user in the PE.
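As a rough illustration of the placement step, the following Python sketch projects a preferred point at arm's reach in front of the user onto a detected surface plane and turns the panel to face the user. The plane representation (point plus normal), the place_vui name, and the 0.5 m reach are illustrative assumptions, not details from the abstract.

import numpy as np

def place_vui(user_pos, user_forward, plane_point, plane_normal, reach=0.5):
    # Place the vUI panel so it coincides with a detected physical
    # surface: a preferred point at arm's reach in front of the user is
    # projected onto the surface plane, and the panel is oriented toward
    # the user so its touch controls can be manipulated.
    n = plane_normal / np.linalg.norm(plane_normal)
    preferred = user_pos + reach * user_forward        # point in front of user
    anchor = preferred - np.dot(preferred - plane_point, n) * n
    to_user = user_pos - anchor
    facing = n if np.dot(n, to_user) >= 0 else -n      # face toward the user
    return anchor, facing

anchor, facing = place_vui(
    user_pos=np.array([0.0, 1.6, 0.0]),      # head position (meters)
    user_forward=np.array([0.0, 0.0, 1.0]),  # gaze direction
    plane_point=np.array([0.0, 0.9, 0.6]),   # point on a desk surface
    plane_normal=np.array([0.0, 1.0, 0.0]),  # horizontal desk
)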
Abstract:
A system and method for close range object tracking are described. Close range depth images of a user's hands and fingers are acquired using a depth sensor. Movements of the user's hands and fingers are identified and tracked. This information is used to permit the user to interact with a virtual object, such as an icon or other object displayed on a screen, or the screen itself.
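A minimal sketch of how such tracking might begin, assuming depth values in meters and a fixed near band where hands are expected; close_range_mask and track_hand are illustrative names, and a real system would segment and track individual fingers rather than a single centroid.

import numpy as np

def close_range_mask(depth, near=0.1, far=0.6):
    # Keep only pixels in the near band where hands are expected (meters).
    return (depth > near) & (depth < far)

def track_hand(depth, near=0.1, far=0.6):
    # Estimate a hand position as the centroid of close-range pixels.
    mask = close_range_mask(depth, near, far)
    if not mask.any():
        return None                      # no hand in range this frame
    v, u = np.nonzero(mask)
    return float(u.mean()), float(v.mean()), float(depth[mask].mean())

# Example: a synthetic 240x320 depth frame with a "hand" patch at 0.4 m.
frame = np.full((240, 320), 2.0)
frame[100:140, 150:200] = 0.4
print(track_hand(frame))                 # approx (174.5, 119.5, 0.4)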
Abstract:
A system and method for combining depth images taken from multiple depth cameras into a composite image are described. The volume of space captured in the composite image is configurable in size and shape depending upon the number of depth cameras used and the shape of the cameras' imaging sensors. Tracking of movements of a person or object can be performed on the composite image. The tracked movements can subsequently be used by an interactive application.
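One way the compositing step could work, sketched below under the assumption that each camera provides pinhole intrinsics (fx, fy, cx, cy) and a 4x4 camera-to-world transform: each depth image is back-projected to 3D points and merged into a single world-frame cloud. The function names are illustrative.

import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    # Back-project a depth image (meters) into camera-space 3D points.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]            # drop invalid zero-depth pixels

def composite_cloud(depth_images, intrinsics, extrinsics):
    # Merge per-camera point clouds into one cloud in a shared world
    # frame; the captured volume's size and shape follow from how many
    # cameras there are and where their frustums lie.
    clouds = []
    for depth, K, T in zip(depth_images, intrinsics, extrinsics):
        pts = depth_to_points(depth, *K)
        pts_h = np.c_[pts, np.ones(len(pts))]    # homogeneous coordinates
        clouds.append((pts_h @ T.T)[:, :3])      # camera frame -> world frame
    return np.vstack(clouds)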
Abstract:
Techniques are provided for generation of synthetic 3-dimensional object image variations for training of recognition systems. An example system may include an image synthesizing circuit configured to synthesize a 3D image of the object (including color and depth image pairs) based on a 3D model. The system may also include a background scene generator circuit configured to generate a background for each of the rendered image variations. The system may further include an image pose adjustment circuit configured to adjust the orientation and translation of the object for each of the variations. The system may further include an illumination and visual effect adjustment circuit configured to adjust illumination of the object and the background for each of the variations, and to further adjust visual effects of the object and the background for each of the variations based on application of simulated camera parameters.
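The variation pipeline could be sketched as a parameter-sampling loop like the one below, where render_fn stands in for whatever renderer produces the color and depth image pairs from the 3D model; all parameter names and ranges are illustrative assumptions.

import random

def sample_variation(rng=random):
    # Sample one set of variation parameters for a synthetic render.
    return {
        "yaw_deg": rng.uniform(-180, 180),        # object orientation
        "pitch_deg": rng.uniform(-30, 30),
        "tx_m": rng.uniform(-0.2, 0.2),           # object translation
        "tz_m": rng.uniform(0.4, 1.5),
        "light_intensity": rng.uniform(0.3, 1.2), # illumination adjustment
        "background_id": rng.randrange(100),      # which generated backdrop
        "noise_std": rng.uniform(0.0, 0.02),      # simulated sensor noise
        "motion_blur_px": rng.uniform(0.0, 2.0),  # simulated camera effect
    }

def generate_training_set(model, n, render_fn):
    # Render n (color, depth) image pairs of the model, one per sampled
    # variation; render_fn(model, params) is a stand-in for the renderer.
    return [render_fn(model, sample_variation()) for _ in range(n)]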
Abstract:
A system and method for close range object tracking are described. Close range depth images of a user's hands and fingers or other objects are acquired using a depth sensor. Using depth image data obtained from the depth sensor, movements of the user's hands and fingers or other objects are identified and tracked, permitting the user to interact with an object displayed on a screen using the positions and movements of those hands, fingers, or other objects.
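Continuing the interaction side, here is a hedged sketch of mapping a tracked hand position to display coordinates and hit-testing it against an on-screen icon; the linear mapping and the hit_test helper are assumptions for illustration, not the patented method.

def to_screen(hand_uv, image_size, screen_size):
    # Map an image-space hand position to display-screen coordinates.
    (u, v), (iw, ih), (sw, sh) = hand_uv, image_size, screen_size
    return (u / iw * sw, v / ih * sh)

def hit_test(cursor, icon_rect):
    # Return True when the mapped cursor is over an on-screen icon.
    x, y = cursor
    rx, ry, rw, rh = icon_rect
    return rx <= x <= rx + rw and ry <= y <= ry + rh

# A hand tracked at pixel (160, 120) of a 320x240 depth image lands at
# the center of a 1920x1080 display and hits an icon placed there.
cursor = to_screen((160, 120), (320, 240), (1920, 1080))
print(cursor, hit_test(cursor, (900, 500, 120, 80)))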
Abstract:
Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
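A simplified sketch of the accumulation step, assuming pinhole intrinsics, a 4x4 camera-to-world pose per frame, and a plain voxel set standing in for the 3D reconstruction; a production system would typically fuse signed distances rather than occupancy, and the names here are illustrative.

import numpy as np

def accumulate_frame(volume, depth, K, pose, voxel=0.02):
    # Project one depth frame into a global voxel set. `volume` is a set
    # of integer voxel indices; `pose` is the 4x4 camera-to-world
    # transform associated with this frame.
    fx, fy, cx, cy = K
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x[valid], y[valid], depth[valid],
                    np.ones(valid.sum())], axis=-1)
    world = (pts @ pose.T)[:, :3]                 # camera frame -> world
    # Quantize to voxel indices and accumulate into the reconstruction.
    volume.update(map(tuple, np.floor(world / voxel).astype(int)))
    return volume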
Abstract:
Systems, apparatuses and methods may provide for detecting a snapshot request to conduct a long range depth capture, wherein the snapshot request is associated with a short range depth capture. Additionally, an infrared (IR) projector may be activated at a first power level for a first duration in response to the snapshot request, wherein the first power level is greater than a second power level corresponding to the short range depth capture and the first duration is less than a second duration corresponding to the short range depth capture.
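The power/duration relationship could look like the following sketch, where set_power and capture are hypothetical hardware hooks and the milliwatt and timing figures are placeholders; only the two invariants (higher power, shorter duration for the snapshot) come from the abstract.

from dataclasses import dataclass

@dataclass
class ProjectorProfile:
    power_mw: float      # IR projector drive power
    duration_s: float    # projection / exposure time

SHORT_RANGE = ProjectorProfile(power_mw=50.0, duration_s=0.033)
LONG_RANGE_SNAPSHOT = ProjectorProfile(power_mw=250.0, duration_s=0.010)

def handle_snapshot_request(set_power, capture):
    # Briefly drive the projector harder for one long-range frame, then
    # restore the short-range streaming profile. The asserts encode the
    # stated relationship: greater power, shorter duration than the
    # short range capture.
    assert LONG_RANGE_SNAPSHOT.power_mw > SHORT_RANGE.power_mw
    assert LONG_RANGE_SNAPSHOT.duration_s < SHORT_RANGE.duration_s
    set_power(LONG_RANGE_SNAPSHOT.power_mw)
    frame = capture(LONG_RANGE_SNAPSHOT.duration_s)   # one long-range frame
    set_power(SHORT_RANGE.power_mw)                   # back to short range
    return frame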
Abstract:
Systems, apparatuses and methods may track air gestures within a bounding box in a field of view (FOV) of a camera. Air gestures made within the bounding box may be translated and mapped to a display screen. Should the hand, or other member making the air gesture, exit the bounds of the bounding box, the box will be dragged along by the member to a new location within the FOV.
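A minimal sketch of the drag-and-clamp behavior, assuming axis-aligned (x, y, w, h) rectangles in image coordinates; the helper names are illustrative.

def update_bounding_box(box, hand, fov):
    # Drag the gesture bounding box when the hand exits it: shift the
    # box just enough to re-contain the hand along each escaped axis,
    # then clamp the box to the camera field of view.
    bx, by, bw, bh = box
    hx, hy = hand
    if hx < bx:
        bx = hx
    elif hx > bx + bw:
        bx = hx - bw
    if hy < by:
        by = hy
    elif hy > by + bh:
        by = hy - bh
    fx, fy, fw, fh = fov
    bx = min(max(bx, fx), fx + fw - bw)
    by = min(max(by, fy), fy + fh - bh)
    return (bx, by, bw, bh)

def map_to_screen(hand, box, screen):
    # Translate a hand position inside the box to display coordinates.
    bx, by, bw, bh = box
    sw, sh = screen
    hx, hy = hand
    return ((hx - bx) / bw * sw, (hy - by) / bh * sh)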