Dual camera HMD with remote camera alignment

    Publication No.: US11212503B1

    Publication Date: 2021-12-28

    Application No.: US16928162

    Filing Date: 2020-07-14

    Abstract: Techniques for aligning and stabilizing images generated by an integrated stereo camera pair with images generated by a detached camera are disclosed. A first image is generated using a first stereo camera; a second image is generated using a second stereo camera; and a third image is generated using the detached camera. A first rotation base matrix is computed between the third and first images, and a second rotation base matrix is computed between the third and second images. The third image is aligned to the first image using the first rotation base matrix, and the third image is aligned to the second image using the second rotation base matrix. A first overlaid image is generated by overlaying the third image onto the first image, and a second overlaid image is generated by overlaying the third image onto the second image. The two overlaid images are parallax corrected and displayed.
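
    A minimal sketch of the rotation-only alignment step, assuming calibrated pinhole intrinsics and a relative rotation already estimated between the detached camera and a stereo camera; the function names, matrix names, and blending weight below are illustrative, not taken from the patent:

        import cv2
        import numpy as np

        def align_by_rotation(src_img, K_src, K_dst, R):
            # Rotation-only homography: H = K_dst @ R @ inv(K_src).
            H = K_dst @ R @ np.linalg.inv(K_src)
            h, w = src_img.shape[:2]
            return cv2.warpPerspective(src_img, H, (w, h))

        def overlay(base_img, aligned_img, alpha=0.5):
            # Blend the aligned detached-camera image onto a stereo image
            # (both images assumed to share the same size and type).
            return cv2.addWeighted(base_img, 1.0 - alpha, aligned_img, alpha, 0.0)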

    SYSTEMS AND METHODS FOR PROVIDING MIXED-REALITY EXPERIENCES UNDER LOW LIGHT CONDITIONS

    Publication No.: US20210373336A1

    Publication Date: 2021-12-02

    Application No.: US16887737

    Filing Date: 2020-05-29

    Abstract: Systems and methods for facilitating computer vision tasks (e.g., simultaneous localization and mapping) and pass-through imaging include a head-mounted display (HMD) that includes a first set of one or more cameras configured for performing computer vision tasks and a second set of one or more cameras configured for capturing image data of an environment for projection to a user of the HMD. The first set of one or more cameras is configured to detect at least visible spectrum light and at least a particular band of wavelengths of infrared (IR) light. The second set of one or more cameras includes one or more detachable IR filters configured to attenuate IR light, including at least a portion of the particular band of wavelengths of IR light.
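
    As a rough illustration of the low-light switch this hardware enables, here is a hypothetical rule that routes pass-through to the IR-sensitive camera set when the visible-light image is too dark; the threshold and names are assumptions, not from the application:

        import numpy as np

        LOW_LIGHT_MEAN_LUMA = 10.0  # illustrative 8-bit threshold

        def pick_passthrough_frame(filtered_rgb, ir_sensitive):
            # In low light, fall back to the computer-vision cameras,
            # which also see the IR band (no detachable IR filter).
            if np.mean(filtered_rgb) < LOW_LIGHT_MEAN_LUMA:
                return ir_sensitive
            return filtered_rgb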

    Mapping sensor data using a mixed-reality cloud

    Publication No.: US11176744B2

    Publication Date: 2021-11-16

    Application No.: US16517976

    Filing Date: 2019-07-22

    Abstract: Improved techniques for re-localizing Internet-of-Things (IOT) devices are disclosed herein. Sensor data digitally representing one or more condition(s) monitored by an IOT device is received. In response, a sensor readings map is accessed, where this map is associated with the IOT device. The map also digitally represents the IOT device's environment and includes data representative of a location of the IOT device within the environment. The map also includes data representative of the conditions monitored by the IOT device. Additionally, the map is updated by attaching the sensor data to the map. In some cases, a coverage map can also be computed. Both the sensor readings map and the coverage map can be automatically updated in response to the IOT device being re-localized.
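
    The map-update step can be pictured as a small data structure; every class and field name here is invented for illustration:

        from dataclasses import dataclass, field

        @dataclass
        class SensorReadingsMap:
            # Pose of the IoT device within the environment map.
            device_location: tuple
            # Sensor readings anchored to the pose they were taken at.
            readings: list = field(default_factory=list)

            def attach(self, sensor_data):
                # Attach incoming sensor data at the device's location.
                self.readings.append((self.device_location, sensor_data))

            def relocalize(self, new_location):
                # Update the stored pose when the device is re-localized;
                # later readings attach at the new location.
                self.device_location = new_location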

    Gradual fallback from full parallax correction to planar reprojection

    Publication No.: US11032530B1

    Publication Date: 2021-06-08

    Application No.: US16875269

    Filing Date: 2020-05-15

    Abstract: Improved techniques for generating depth maps are disclosed. A stereo pair of images of an environment is accessed. This stereo pair of images includes first and second texture images. A signal to noise ratio (SNR) is identified within one or both of those images. Based on the SNR, which may reflect the texture image quality or the quality of the stereo match, a smoothness penalty is selectively computed and imposed against a smoothness term of a cost function used by a stereo depth matching algorithm. A depth map is generated by using the stereo depth matching algorithm to perform stereo depth matching on the stereo pair of images. The stereo depth matching algorithm performs the stereo depth matching using the smoothness penalty.
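
    One way such an SNR-dependent penalty could look in practice, sketched with OpenCV's semi-global matcher, whose P1/P2 parameters penalize disparity changes between neighboring pixels; the SNR proxy and scaling below are assumptions, not the patented method:

        import cv2
        import numpy as np

        def estimate_snr(gray):
            # Crude proxy: mean intensity over high-frequency residual noise.
            g = gray.astype(np.float32)
            residual = g - cv2.GaussianBlur(g, (5, 5), 0)
            return float(np.mean(g)) / max(float(np.std(residual)), 1e-6)

        def depth_map(left_gray, right_gray):
            snr = estimate_snr(left_gray)
            # Lower SNR -> heavier smoothness penalty -> flatter output,
            # degrading gracefully toward a planar result.
            scale = float(np.clip(50.0 / max(snr, 1e-6), 1.0, 8.0))
            matcher = cv2.StereoSGBM_create(
                minDisparity=0, numDisparities=64, blockSize=5,
                P1=int(8 * scale), P2=int(32 * scale))
            return matcher.compute(left_gray, right_gray)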

    USING MACHINE LEARNING TO TRANSFORM IMAGE STYLES

    Publication No.: US20210158080A1

    Publication Date: 2021-05-27

    Application No.: US16696616

    Filing Date: 2019-11-26

    Abstract: Common features are mapped between images that represent the same environment using different light spectrum data. A first image having first light spectrum data is accessed, and a second image having second light spectrum data is accessed. These images are fed as input to a deep neural network (DNN), which then identifies feature points that are common between the two images. A generated mapping lists the feature points and lists coordinates of the feature points from both of the images. Differences between the coordinates of the feature points in the two images are determined. Based on these differences, the second image is warped to cause the coordinates of the feature points in the second image to correspond to the coordinates of the feature points in the first image.
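
    Assuming the DNN has already produced the mapping of corresponding feature coordinates, the final warping step can be sketched with a standard homography fit; function and argument names are illustrative:

        import cv2
        import numpy as np

        def warp_second_to_first(first_img, second_img, pts_first, pts_second):
            # pts_first / pts_second: Nx2 corresponding DNN feature points.
            H, _ = cv2.findHomography(np.float32(pts_second),
                                      np.float32(pts_first), cv2.RANSAC, 3.0)
            h, w = first_img.shape[:2]
            # Warp so the second image's features land on the first's.
            return cv2.warpPerspective(second_img, H, (w, h))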

    2D obstacle boundary detection

    Publication No.: US10997728B2

    Publication Date: 2021-05-04

    Application No.: US16389621

    Filing Date: 2019-04-19

    Abstract: Techniques are provided to dynamically generate and render an object bounding fence in a mixed-reality scene. Initially, a sparse spatial mapping is accessed. The sparse spatial mapping beneficially includes perimeter edge data describing an object's edge perimeters. A gravity vector is also generated. Based on the perimeter edge data and the gravity vector, two-dimensional (2D) boundaries of the object are determined and a bounding fence mesh of the environment is generated. A virtual object is then rendered, where the virtual object is representative of at least a portion of the bounding fence mesh and visually illustrates a bounding fence around the object.
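
    A minimal sketch of one such 2D-boundary computation, assuming the perimeter-edge points are projected onto the horizontal plane defined by the gravity vector and fenced with their convex hull; the basis construction and hull choice are assumptions:

        import numpy as np
        from scipy.spatial import ConvexHull

        def bounding_fence_2d(perimeter_pts, gravity):
            # perimeter_pts: Nx3 sparse edge points; gravity: world "down".
            g = gravity / np.linalg.norm(gravity)
            seed = np.array([1.0, 0.0, 0.0])
            if abs(np.dot(seed, g)) > 0.9:   # avoid a seed parallel to g
                seed = np.array([0.0, 1.0, 0.0])
            u = np.cross(g, seed)
            u /= np.linalg.norm(u)
            v = np.cross(g, u)               # u, v span the ground plane
            pts_2d = np.stack([perimeter_pts @ u, perimeter_pts @ v], axis=1)
            return pts_2d[ConvexHull(pts_2d).vertices]  # fence polygon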

    Controlling content included in a spatial mapping

    Publication No.: US10964111B2

    Publication Date: 2021-03-30

    Application No.: US16047269

    Filing Date: 2018-07-27

    Abstract: In some instances, undesired content is selectively omitted from a mixed-reality scene via use of tags. An environment's spatial mapping is initially accessed. Based on an analysis of this spatial mapping, any number of segmented objects are identified from within the spatial mapping. These segmented objects correspond to actual physical objects located within the environment and/or to virtual objects that are selected for potential projection into the mixed-reality scene. For at least some of these segmented objects, a corresponding tag is then accessed. A subset of virtual content is then generated based on certain attributes associated with those tags. The content that is included in the subset is specially chosen for actual projection. Thereafter, the selected content is either projected into the mixed-reality scene or scheduled for projection.
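
    The tag-driven selection reduces to a filter over segmented objects; all names below are invented for illustration:

        from dataclasses import dataclass

        @dataclass
        class SegmentedObject:
            name: str
            tags: frozenset

        def select_for_projection(objects, omit_tags):
            # Keep only objects whose tags do not request omission.
            return [o for o in objects if not (o.tags & omit_tags)]

        # Usage: drop anything tagged as private from the rendered scene.
        scene = [SegmentedObject("desk", frozenset({"furniture"})),
                 SegmentedObject("monitor", frozenset({"private"}))]
        visible = select_for_projection(scene, omit_tags=frozenset({"private"}))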

    Active illumination management through contextual information

    Publication No.: US10916024B2

    Publication Date: 2021-02-09

    Application No.: US16399443

    Filing Date: 2019-04-30

    Abstract: An illumination module and a depth camera on a near-eye-display (NED) device used for depth tracking may be subject to strict power consumption budgets. To reduce power consumption of depth tracking, the illumination power of the illumination module is controllably varied. Such variation entails using a previous frame, or previously recorded data, to inform the illumination power used to generate a current frame. Once the NED determines the next minimum illumination power, the illumination module activates at that power level. The illumination module emits electromagnetic (EM) radiation (e.g., IR light), the EM radiation reflects off surfaces in the scene, and the reflected light is captured by the depth camera. The method repeats for subsequent frames, using contextual information from each of the previous frames to dynamically control the illumination power. Thus, the method reduces the overall power consumption of the depth camera assembly of the NED to a minimum level.
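
    The frame-to-frame control reads as a feedback loop; a hedged sketch, using the previous depth frame's valid-pixel ratio as the contextual signal (thresholds and step sizes are illustrative):

        import numpy as np

        def next_illumination_power(prev_power, prev_depth,
                                    target_valid=0.95, step=0.1,
                                    p_min=0.05, p_max=1.0):
            # Raise IR illumination when too few depth pixels were valid
            # last frame; otherwise back off toward the minimum level.
            valid = np.count_nonzero(prev_depth > 0) / prev_depth.size
            power = prev_power + step if valid < target_valid else prev_power - step
            return float(np.clip(power, p_min, p_max))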
