Systems and methods for artificial intelligence-based virtual and augmented reality

    Publication Number: US11315325B2

    Publication Date: 2022-04-26

    Application Number: US16596610

    Application Date: 2019-10-08

    Abstract: Examples of the disclosure describe systems and methods for generating and displaying a virtual companion. In an example method, a first input from an environment of a user is received at a first time via a first sensor on a head-wearable device. An occurrence of an event in the environment is determined based on the first input. A second input from the user is received via a second sensor on the head-wearable device, and an emotional reaction of the user is identified based on the second input. An association is determined between the emotional reaction and the event. A view of the environment is presented at a second time later than the first time via a see-through display of the head-wearable device. A stimulus is presented at the second time via a virtual companion displayed via the see-through display, wherein the stimulus is determined based on the determined association between the emotional reaction and the event.
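
    Below is a minimal Python sketch of the association loop the abstract describes: an event is detected at a first time, the user's emotional reaction is identified, the association is stored, and a stimulus is chosen via the virtual companion at a later time. All class names (Event, EmotionalReaction, CompanionMemory) and the selection logic are hypothetical illustrations, not the patent's implementation.

```python
"""Minimal sketch of the event/emotion association loop described in the
abstract of US11315325B2. All names here are hypothetical illustrations."""

from dataclasses import dataclass, field


@dataclass
class Event:
    label: str          # e.g. "dog_appeared", detected from the environment sensor
    timestamp: float


@dataclass
class EmotionalReaction:
    label: str          # e.g. "joy", inferred from a user-facing sensor
    intensity: float    # 0.0 .. 1.0


@dataclass
class CompanionMemory:
    """Stores associations between environment events and user reactions."""
    associations: dict = field(default_factory=dict)

    def associate(self, event: Event, reaction: EmotionalReaction) -> None:
        # Keep the strongest reaction observed for each event type.
        prior = self.associations.get(event.label)
        if prior is None or reaction.intensity > prior.intensity:
            self.associations[event.label] = reaction

    def stimulus_for(self, event_label: str) -> str | None:
        """At a later time, choose a stimulus based on the stored association."""
        reaction = self.associations.get(event_label)
        if reaction is None:
            return None
        if reaction.label == "joy":
            return f"companion_points_at:{event_label}"
        return f"companion_avoids:{event_label}"


if __name__ == "__main__":
    memory = CompanionMemory()
    # Time t1: environment sensor detects an event, user sensor detects a reaction.
    memory.associate(Event("dog_appeared", 10.0),
                     EmotionalReaction("joy", 0.9))
    # Time t2 > t1: the same event type recurs; the virtual companion reacts.
    print(memory.stimulus_for("dog_appeared"))  # companion_points_at:dog_appeared
```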

    Image-enhanced depth sensing via depth sensor control

    Publication Number: US11128854B2

    Publication Date: 2021-09-21

    Application Number: US16352522

    Application Date: 2019-03-13

    Abstract: Systems and methods are disclosed for computing depth maps. One method includes capturing, using a camera, a camera image of a runtime scene. The method may also include analyzing the camera image of the runtime scene to determine a plurality of target sampling points at which to capture depth of the runtime scene. The method may further include adjusting a setting associated with a low-density depth sensor based on the plurality of target sampling points. The method may further include capturing, using the low-density depth sensor, a low-density depth map of the runtime scene at the plurality of target sampling points. The method may further include generating a computed depth map of the runtime scene based on the camera image of the runtime scene and the low-density depth map of the runtime scene.
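
    A hedged sketch of the described pipeline, using NumPy and SciPy: target sampling points are chosen from the camera image (here by gradient magnitude, an assumed heuristic), sparse depth is taken at those points, and a dense map is interpolated. The function names and the interpolation method are illustrative stand-ins, not the patented algorithm.

```python
"""Illustrative sketch of the pipeline in the abstract of US11128854B2:
pick target sampling points from a camera image, sample sparse depth
there, and densify. Point selection and interpolation are assumptions."""

import numpy as np
from scipy.interpolate import griddata


def target_sampling_points(image: np.ndarray, n_points: int) -> np.ndarray:
    """Choose pixels with the strongest intensity gradients as depth targets."""
    gy, gx = np.gradient(image.astype(np.float32))
    magnitude = np.hypot(gx, gy)
    flat_idx = np.argsort(magnitude.ravel())[-n_points:]
    rows, cols = np.unravel_index(flat_idx, image.shape)
    return np.stack([rows, cols], axis=1)            # (n_points, 2)


def densify(sparse_points: np.ndarray, sparse_depth: np.ndarray,
            shape: tuple[int, int]) -> np.ndarray:
    """Interpolate a dense depth map from sparse samples (nearest fill at edges)."""
    grid_r, grid_c = np.mgrid[0:shape[0], 0:shape[1]]
    dense = griddata(sparse_points, sparse_depth, (grid_r, grid_c),
                     method="linear")
    nearest = griddata(sparse_points, sparse_depth, (grid_r, grid_c),
                       method="nearest")
    return np.where(np.isnan(dense), nearest, dense)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    camera_image = rng.random((64, 64))              # stand-in runtime-scene image
    points = target_sampling_points(camera_image, n_points=200)
    # Pretend the low-density depth sensor returned one reading per target point.
    sparse_depth = 1.0 + 0.1 * rng.random(len(points))
    depth_map = densify(points, sparse_depth, camera_image.shape)
    print(depth_map.shape)                           # (64, 64)
```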

    METHOD AND SYSTEM FOR PERFORMING EYE TRACKING USING AN OFF-AXIS CAMERA

    Publication Number: US20210182554A1

    Publication Date: 2021-06-17

    Application Number: US17129669

    Application Date: 2020-12-21

    Abstract: Systems and methods for estimating a gaze vector of an eye using a trained neural network. An input image of the eye may be received from a camera. The input image may be provided to the neural network. Network output data may be generated using the neural network. The network output data may include two-dimensional (2D) pupil data, eye segmentation data, and/or cornea center data. The gaze vector may be computed based on the network output data. The neural network may be previously trained by providing a training input image to the neural network, generating training network output data, receiving ground-truth (GT) data, computing error data based on a difference between the training network output data and the GT data, and modifying the neural network based on the error data.
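
    The following PyTorch sketch mirrors the outputs named in the abstract (2D pupil data, eye segmentation, cornea center data) and a training step against ground-truth labels. The architecture, head sizes, and the toy gaze computation are assumptions for illustration only, not the patent's network.

```python
"""Illustrative PyTorch sketch of the training/inference flow described in
US20210182554A1. Architecture and gaze computation are assumed, not claimed."""

import torch
import torch.nn as nn


class EyeNet(nn.Module):
    """Backbone with three heads: 2D pupil center, cornea center, eye segmentation."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.pupil_head = nn.Linear(32, 2)       # 2D pupil center (x, y)
        self.cornea_head = nn.Linear(32, 3)      # 3D cornea center
        # Trivial per-pixel head for illustration; a real model would decode
        # the segmentation from intermediate features.
        self.seg_head = nn.Conv2d(1, 4, 1)

    def forward(self, image):
        features = self.backbone(image)
        return (self.pupil_head(features),
                self.cornea_head(features),
                self.seg_head(image))


def gaze_vector(pupil_2d, cornea_3d):
    """Toy gaze estimate: direction from the cornea center toward the pupil,
    assuming the pupil lies on the z=0 image plane (an illustrative convention)."""
    pupil_3d = torch.cat([pupil_2d, torch.zeros_like(pupil_2d[:, :1])], dim=1)
    direction = pupil_3d - cornea_3d
    return direction / direction.norm(dim=1, keepdim=True)


if __name__ == "__main__":
    model = EyeNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One training step against ground-truth (GT) pupil/cornea labels.
    image = torch.randn(4, 1, 64, 64)            # batch of off-axis eye images
    gt_pupil = torch.randn(4, 2)
    gt_cornea = torch.randn(4, 3)
    pupil, cornea, _seg = model(image)
    loss = nn.functional.mse_loss(pupil, gt_pupil) + \
           nn.functional.mse_loss(cornea, gt_cornea)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Inference: compute a gaze vector from the network outputs.
    with torch.no_grad():
        pupil, cornea, _ = model(image)
        print(gaze_vector(pupil, cornea).shape)   # torch.Size([4, 3])
```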

    ROOM LAYOUT ESTIMATION METHODS AND TECHNIQUES

    Publication Number: US20200234051A1

    Publication Date: 2020-07-23

    Application Number: US16844812

    Application Date: 2020-04-09

    Abstract: Systems and methods for estimating a layout of a room are disclosed. The room layout can comprise the location of a floor, one or more walls, and a ceiling. In one aspect, a neural network can analyze an image of a portion of a room to determine the room layout. The neural network can comprise a convolutional neural network having an encoder sub-network, a decoder sub-network, and a side sub-network. The neural network can determine a three-dimensional room layout using two-dimensional ordered keypoints associated with a room type. The room layout can be used in applications such as augmented or mixed reality, robotics, autonomous indoor navigation, etc.
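
    An illustrative PyTorch sketch of the encoder, decoder, and side sub-network structure the abstract outlines, producing per-keypoint heatmaps and a room-type classification. The layer sizes, number of room types, and keypoint count are assumed values for demonstration, not the patent's specification.

```python
"""Architectural sketch of the encoder/decoder/side sub-network structure
described in US20200234051A1. Sizes and counts are illustrative assumptions."""

import torch
import torch.nn as nn

NUM_ROOM_TYPES = 11        # assumed count of predefined room layout types
MAX_KEYPOINTS = 8          # assumed max ordered keypoints per room type


class RoomLayoutNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder sub-network: downsample the image into a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder sub-network: upsample to per-keypoint heatmaps.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, MAX_KEYPOINTS, 4, stride=2, padding=1),
        )
        # Side sub-network: classify the room type from encoder features.
        self.side = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, NUM_ROOM_TYPES),
        )

    def forward(self, image):
        features = self.encoder(image)
        keypoint_heatmaps = self.decoder(features)   # one heatmap per keypoint
        room_type_logits = self.side(features)
        return keypoint_heatmaps, room_type_logits


def ordered_keypoints(heatmaps: torch.Tensor) -> torch.Tensor:
    """Take the argmax of each heatmap as that keypoint's 2D image location."""
    b, k, h, w = heatmaps.shape
    flat = heatmaps.reshape(b, k, h * w).argmax(dim=-1)
    return torch.stack([flat % w, flat // w], dim=-1)    # (b, k, 2) as (x, y)


if __name__ == "__main__":
    net = RoomLayoutNet()
    image = torch.randn(1, 3, 128, 128)              # a view of part of a room
    heatmaps, room_type = net(image)
    keypoints = ordered_keypoints(heatmaps)
    print(room_type.argmax(dim=1).item(), keypoints.shape)  # e.g. 3, (1, 8, 2)
```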
