PHYSICAL-SURFACE TOUCH CONTROL IN VIRTUAL ENVIRONMENT

    Publication number: US20180284914A1

    Publication date: 2018-10-04

    Application number: US15474216

    Application date: 2017-03-30

    Abstract: A head-mounted display (HMD) device to be worn by a user in a physical environment (PE) is controlled. A 3D virtual environment (VE) is modeled to include a virtual controllable object subject to virtual control input. Motion of the position, head, and hands of the user is monitored in the PE, and a physical surface in the PE is detected. A virtual user interface (vUI) is placed in the VE relative to a virtual perspective of the user. The vUI includes an information display and at least one virtual touch control to produce the virtual control input in response to virtual manipulation of the virtual touch control. The vUI's placement is determined to coincide with the physical surface in the PE relative to the position of the user in the PE.
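The core geometric step described above, snapping the virtual UI's placement onto a detected physical surface relative to the user, can be sketched as a plane projection. This is an illustrative reconstruction, not the patent's actual method; all function names and the 0.6 m `reach` parameter are hypothetical.

```python
import math

def project_onto_plane(point, plane_point, plane_normal):
    """Project a 3D point onto a plane given by a point and a unit normal."""
    d = sum((p - q) * n for p, q, n in zip(point, plane_point, plane_normal))
    return tuple(p - d * n for p, n in zip(point, plane_normal))

def place_vui(user_pos, gaze_dir, plane_point, plane_normal, reach=0.6):
    """Hypothetical sketch: aim `reach` metres along the user's gaze, then
    snap that target onto the detected physical plane so virtual touches
    coincide with real surface contact."""
    target = tuple(u + reach * g for u, g in zip(user_pos, gaze_dir))
    return project_onto_plane(target, plane_point, plane_normal)

# Desk top: horizontal plane at 0.75 m height; user eye height 1.6 m.
anchor = place_vui(
    user_pos=(0.0, 1.6, 0.0),
    gaze_dir=(0.0, -0.5, 0.866),   # looking down and forward
    plane_point=(0.0, 0.75, 0.5),
    plane_normal=(0.0, 1.0, 0.0),
)
print(anchor)  # lands exactly on the y = 0.75 desk plane
```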

    SYSTEM AND METHOD FOR USER INTERACTION AND CONTROL OF ELECTRONIC DEVICES
    Invention application, pending (published)

    Publication number: US20140123077A1

    Publication date: 2014-05-01

    Application number: US13676017

    Application date: 2012-11-13

    CPC classification number: G06F3/017 G06F3/0304 G06F3/04842

    Abstract: A system and method for close range object tracking are described. Close range depth images of a user's hands and fingers are acquired using a depth sensor. Movements of the user's hands and fingers are identified and tracked. This information is used to permit the user to interact with a virtual object, such as an icon or other object displayed on a screen, or the screen itself.
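A minimal stand-in for the first stage of such close-range tracking is segmenting the nearest depth band, on the assumption that at close range the hand is the object closest to the sensor. The function name and the 0.15 m band are illustrative only.

```python
def segment_close_range(depth, near=0.1, band=0.15):
    """Return a binary mask of pixels within `band` metres of the nearest
    valid depth reading -- a crude stand-in for hand/finger segmentation
    on a close-range depth sensor (illustrative only)."""
    valid = [d for row in depth for d in row if d > near]
    if not valid:
        return [[0] * len(row) for row in depth]
    nearest = min(valid)
    return [[1 if near < d <= nearest + band else 0 for d in row] for row in depth]

# Toy 3x3 depth frame (metres): a "hand" at ~0.4 m in front of a wall at 1.2 m.
frame = [
    [1.2, 0.40, 1.2],
    [0.42, 0.41, 0.43],
    [1.2, 0.44, 1.2],
]
mask = segment_close_range(frame)
print(mask)  # only the near-band "hand" pixels are set
```

A real pipeline would follow this with contour extraction and per-finger tracking across frames.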


    SYSTEM AND METHOD FOR COMBINING DATA FROM MULTIPLE DEPTH CAMERAS
    Invention application, pending (published)

    Publication number: US20140104394A1

    Publication date: 2014-04-17

    Application number: US13652181

    Application date: 2012-10-15

    Abstract: A system and method for combining depth images taken from multiple depth cameras into a composite image are described. The volume of space captured in the composite image is configurable in size and shape depending upon the number of depth cameras used and the shape of the cameras' imaging sensors. Tracking of movements of a person or object can be performed on the composite image. The tracked movements can subsequently be used by an interactive application.
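Combining data from multiple depth cameras reduces, at its core, to transforming each camera's points through its extrinsic pose into a shared world frame and merging the results; the captured volume then depends on how many cameras are placed and where. The sketch below assumes known yaw-plus-translation extrinsics for simplicity (real rigs use full 6-DoF calibration).

```python
import math

def transform(points, yaw_deg, translation):
    """Apply a yaw rotation (about the y axis) plus a translation --
    a minimal rigid extrinsic for one camera."""
    c, s = math.cos(math.radians(yaw_deg)), math.sin(math.radians(yaw_deg))
    tx, ty, tz = translation
    return [(c * x + s * z + tx, y + ty, -s * x + c * z + tz) for x, y, z in points]

def combine(clouds, extrinsics):
    """Merge per-camera point clouds into one composite cloud in a shared
    world frame; the covered volume grows with each camera added."""
    merged = []
    for cloud, (yaw, t) in zip(clouds, extrinsics):
        merged.extend(transform(cloud, yaw, t))
    return merged

# Two cameras facing each other, 2 m apart along z; each sees one point 1 m ahead.
cam_a = [(0.0, 0.0, 1.0)]
cam_b = [(0.0, 0.0, 1.0)]
world = combine([cam_a, cam_b], [(0, (0.0, 0.0, 0.0)), (180, (0.0, 0.0, 2.0))])
print(world)  # both observations land at the same world-space point
```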


    Generation of synthetic 3-dimensional object images for recognition systems

    Publication number: US10769862B2

    Publication date: 2020-09-08

    Application number: US16053135

    Application date: 2018-08-02

    Abstract: Techniques are provided for generation of synthetic 3-dimensional object image variations for training of recognition systems. An example system may include an image synthesizing circuit configured to synthesize a 3D image of the object (including color and depth image pairs) based on a 3D model. The system may also include a background scene generator circuit configured to generate a background for each of the rendered image variations. The system may further include an image pose adjustment circuit configured to adjust the orientation and translation of the object for each of the variations. The system may further include an illumination and visual effect adjustment circuit configured to adjust illumination of the object and the background for each of the variations, and to further adjust visual effects of the object and the background for each of the variations based on application of simulated camera parameters.
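The variation-generation stage described above amounts to sweeping a grid of pose, illumination, and simulated camera parameters per 3D model, with each combination standing in for one rendered colour-and-depth pair. The parameter names below are assumptions for illustration, not the patent's actual interface.

```python
import itertools

def synthesize_variations(model_id, yaws, pitches, light_intensities, exposures):
    """Enumerate rendering-parameter combinations for one 3D model.

    Each record is a placeholder for one synthesized colour+depth image
    with adjusted pose, illumination, and simulated camera settings."""
    variations = []
    for yaw, pitch, light, exposure in itertools.product(
        yaws, pitches, light_intensities, exposures
    ):
        variations.append({
            "model": model_id,
            "pose": {"yaw": yaw, "pitch": pitch},
            "illumination": light,
            "camera": {"exposure": exposure},
        })
    return variations

batch = synthesize_variations(
    "mug_01", yaws=[0, 90, 180, 270], pitches=[0, 30],
    light_intensities=[0.5, 1.0], exposures=[1 / 60, 1 / 125],
)
print(len(batch))  # 4 * 2 * 2 * 2 = 32 parameter combinations
```

In practice each record would drive a renderer, with a generated background composited behind the object before the pair is added to the training set.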

    3-dimensional scene analysis for augmented reality operations

    Publication number: US10229542B2

    Publication date: 2019-03-12

    Application number: US15046614

    Application date: 2016-02-18

    Abstract: Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
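The reconstruction step above, projecting depth pixels into a global coordinate system given each frame's camera pose, can be sketched with a standard pinhole back-projection followed by a rigid pose transform. The intrinsics used here are illustrative values, not anything specified by the patent.

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) with depth z -> camera-frame 3D point."""
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)

def to_global(point_cam, rotation, translation):
    """Apply a camera pose (3x3 rotation given as rows, plus translation)
    to move a camera-frame point into the global coordinate system."""
    return tuple(
        sum(r * p for r, p in zip(row, point_cam)) + t
        for row, t in zip(rotation, translation)
    )

# Identity orientation, camera mounted 1.5 m up; principal point at image centre.
R = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
t = (0.0, 1.5, 0.0)
p_cam = backproject(u=320, v=240, depth=2.0, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
p_world = to_global(p_cam, R, t)
print(p_world)  # (0.0, 1.5, 2.0): the optical-axis pixel, lifted by camera height
```

Accumulating such points from every frame, each through its own pose, yields the global 3D reconstruction on which detection and segmentation then operate.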

    GENERATION OF SYNTHETIC 3-DIMENSIONAL OBJECT IMAGES FOR RECOGNITION SYSTEMS

    Publication number: US20180357834A1

    Publication date: 2018-12-13

    Application number: US16053135

    Application date: 2018-08-02

    Abstract: Techniques are provided for generation of synthetic 3-dimensional object image variations for training of recognition systems. An example system may include an image synthesizing circuit configured to synthesize a 3D image of the object (including color and depth image pairs) based on a 3D model. The system may also include a background scene generator circuit configured to generate a background for each of the rendered image variations. The system may further include an image pose adjustment circuit configured to adjust the orientation and translation of the object for each of the variations. The system may further include an illumination and visual effect adjustment circuit configured to adjust illumination of the object and the background for each of the variations, and to further adjust visual effects of the object and the background for each of the variations based on application of simulated camera parameters.
