Interactive and shared surfaces
    Invention Grant

    Publication No.: US12267620B2

    Publication Date: 2025-04-01

    Application No.: US17969907

    Filing Date: 2022-10-20

    Abstract: The interactive and shared surface technique described herein employs hardware that can project on any surface, capture color video of that surface, and get depth information of and above the surface while preventing visual feedback (also known as video feedback, video echo, or visual echo). The technique provides N-way sharing of a surface using video compositing. It also provides for automatic calibration of hardware components, including calibration of any projector, RGB camera, depth camera and any microphones employed by the technique. The technique provides object manipulation with physical, visual, audio, and hover gestures and interaction between digital objects displayed on the surface and physical objects placed on or above the surface. It can capture and scan the surface in a manner that captures or scans exactly what the user sees, which includes both local and remote objects, drawings, annotations, hands, and so forth.
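    A minimal sketch of the N-way sharing idea, assuming each remote site's surface capture arrives as a pre-registered numpy array; the names PROJECTOR_RESOLUTION and composite_remote_frames are illustrative, not from the patent. The local camera frame is deliberately excluded from the composite so the projector never re-projects its own output, which is the visual-echo problem the abstract mentions.

```python
# Hedged sketch of N-way surface sharing by video compositing.
# Assumptions: each site's frame is an (H, W, 3) uint8 numpy array already
# warped into the projector's coordinate space by the calibration step.
import numpy as np

PROJECTOR_RESOLUTION = (720, 1280)  # assumed projector height, width

def composite_remote_frames(remote_frames, alpha=0.5):
    """Blend remote sites' surface captures into one image to project.

    Only remote frames are composited; the local camera frame is excluded
    so the projector never re-projects its own output (visual echo).
    """
    h, w = PROJECTOR_RESOLUTION
    canvas = np.zeros((h, w, 3), dtype=np.float32)
    for frame in remote_frames:
        canvas = (1.0 - alpha) * canvas + alpha * frame.astype(np.float32)
    return canvas.astype(np.uint8)
```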

    Hover-based user-interactions with virtual objects within immersive environments

    Publication No.: US11068111B2

    Publication Date: 2021-07-20

    Application No.: US16695771

    Filing Date: 2019-11-26

    Abstract: Systems and methods for enabling user-interactions with virtual objects (VOs) included in immersive environments (IEs) are provided. A head-mounted display (HMD) device is communicatively coupled with a hover-sensing (HS) device, via a communication session. The HMD device provides an IE to a wearer by displaying a field-of-view (FOV) that includes a VO. The user executes user-interactions, such as 2D and/or 3D hand gestures, fingertip gestures, multi-fingertip gestures, stylus gestures, hover gestures, and the like. The HS device detects the user-interactions and generates interaction data. The interaction data is provided to the HMD device via the communication session. The HMD device updates the FOV and/or the VO based on the interaction data. A physical overlay that includes a 3D protrusion is coupled with the HS device. The overlay is transparent to the hover-sensing capabilities of the HS device. The protrusion provides tactile feedback to the user for the user-interactions.
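    A hedged sketch of the interaction-data flow between the hover-sensing device and the HMD described above, assuming a simple event message per detected gesture; HoverEvent and apply_to_virtual_object are hypothetical names for illustration, not the patent's API.

```python
# Illustrative sketch: one hover event travels from the hover-sensing (HS)
# device to the HMD, which updates a virtual object's pose from it.
from dataclasses import dataclass

@dataclass
class HoverEvent:
    gesture: str   # e.g. "hover", "fingertip", "stylus" (assumed labels)
    x: float       # normalized position over the hover-sensing surface
    y: float
    height: float  # hover height above the surface, in millimetres

def apply_to_virtual_object(vo_pose, event: HoverEvent):
    """Return an updated (x, y, z) pose for the virtual object.

    For a hover gesture, the object tracks the fingertip position, with
    hover height mapped to depth; other gestures are left to dedicated
    handlers in this sketch.
    """
    if event.gesture == "hover":
        return (event.x, event.y, event.height / 1000.0)
    return vo_pose
```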

    Hover-based user-interactions with virtual objects within immersive environments

    Publication No.: US10514801B2

    Publication Date: 2019-12-24

    Application No.: US15624097

    Filing Date: 2017-06-15

    Abstract: Systems and methods for enabling user-interactions with virtual objects (VOs) included in immersive environments (IEs) are provided. A head-mounted display (HMD) device is communicatively coupled with a hover-sensing (HS) device, via a communication session. The HMD device provides an IE to a wearer by displaying a field-of-view (FOV) that includes a VO. The user executes user-interactions, such as 2D and/or 3D hand gestures, fingertip gestures, multi-fingertip gestures, stylus gestures, hover gestures, and the like. The HS device detects the user-interactions and generates interaction data. The interaction data is provided to the HMD device via the communication session. The HMD device updates the FOV and/or the VO based on the interaction data. A physical overlay that includes a 3D protrusion is coupled with the HS device. The overlay is transparent to the hover-sensing capabilities of the HS device. The protrusion provides tactile feedback to the user for the user-interactions.

    Multifactor venue localization using centralized learning

    Publication No.: US20180300623A1

    Publication Date: 2018-10-18

    Application No.: US15489234

    Filing Date: 2017-04-17

    Abstract: A central server receives a venue identification query from a client device in the venue and a test data set including information collected from the venue. The central server then queries a classifier to identify the venue based on the test data. The classifier returns an identity value (venue ID) and a confidence value for the venue ID. When the confidence value is less than a threshold value, the central server obtains additional data from the client device until the venue is identified. The central server associates the venue ID with the test data set, including the additional data, and adds the test data set to training data for the classifier.
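    A sketch, under assumptions, of the confidence-driven loop the abstract describes: classify, request more data from the client while confidence is below a threshold, then fold the labeled sample back into the training data. The classifier and client interfaces (predict, request_more_data, add_training_example) and the threshold value are placeholders, not the patent's actual API.

```python
# Hedged sketch of the centralized venue-identification loop.
# Assumption: test_data is a list of feature records collected at the venue.
CONFIDENCE_THRESHOLD = 0.9  # assumed threshold

def identify_venue(classifier, client, test_data):
    """Return a venue ID once the classifier is sufficiently confident."""
    venue_id, confidence = classifier.predict(test_data)
    while confidence < CONFIDENCE_THRESHOLD:
        # Ask the client device for additional signals (e.g. Wi-Fi, audio,
        # imagery) and re-classify the enlarged test set.
        test_data = test_data + client.request_more_data()
        venue_id, confidence = classifier.predict(test_data)
    # Associate the venue ID with the full test set and keep it as a new
    # training example for the classifier.
    classifier.add_training_example(test_data, venue_id)
    return venue_id
```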

    Accelerated instant replay for co-present and distributed meetings

    Publication No.: US20180232129A1

    Publication Date: 2018-08-16

    Application No.: US15953219

    Filing Date: 2018-04-13

    Abstract: Techniques for recording and replaying a live conference while still attending it are described. A conferencing system includes a user interface generator, a live conference processing module, and a replay processing module. The user interface generator is configured to generate a user interface that includes a replay control panel and one or more output panels. The live conference processing module is configured to extract information included in received conferencing data that is associated with one or more conferencing modalities, and to display the information in the one or more output panels in a live manner (e.g., as a live conference). The replay processing module is configured to present information associated with the one or more conferencing modalities from an earlier point in the conference session at a desired rate, possibly different from the real-time rate.
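    A minimal sketch of replaying an earlier span of the session at a rate other than real time while the live stream keeps being buffered, assuming a timestamped frame list; ReplayBuffer and its methods are illustrative names only.

```python
# Hedged sketch of accelerated replay over a continuously growing buffer.
import time

class ReplayBuffer:
    def __init__(self):
        self.frames = []  # (timestamp_seconds, frame) pairs, append-only

    def record(self, timestamp, frame):
        """Called as live conferencing data arrives."""
        self.frames.append((timestamp, frame))

    def playback(self, start_time, rate=2.0):
        """Yield frames from start_time onward at `rate` times real time.

        Because replay runs faster than real time, it eventually catches up
        with the live edge of the buffer while recording continues.
        """
        prev_ts = None
        for ts, frame in self.frames:
            if ts < start_time:
                continue
            if prev_ts is not None:
                time.sleep((ts - prev_ts) / rate)  # compress inter-frame gaps
            prev_ts = ts
            yield frame
```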
