Image and point cloud based tracking and in augmented reality systems

    Publication No.: US10657708B1

    Publication Date: 2020-05-19

    Application No.: US15971566

    Filing Date: 2018-05-04

    Applicant: Snap Inc.

    Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. 3D point cloud data describing an environment is then accessed. A first image of the environment is captured, and a portion of the image is matched to a portion of key points in the 3D point cloud data. An augmented reality object is then aligned within one or more images of the environment based on the match of the 3D point cloud with the image. In some embodiments, building façade data may additionally be used to determine a device location and place the augmented reality object within an image.
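One plausible reading of the matching step above is a nearest-descriptor search between image key points and point-cloud key points. The sketch below is illustrative only: the function names, the descriptor format, and the distance threshold are assumptions, not from the patent.

```python
# Hypothetical sketch: match 2D image key points to a 3D point cloud
# by nearest feature descriptor. All names are illustrative.

def descriptor_distance(a, b):
    """Squared Euclidean distance between two feature descriptors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def match_keypoints(image_feats, cloud_feats, max_dist=0.5):
    """Match each image feature to its nearest point-cloud feature.

    image_feats: list of (keypoint_xy, descriptor)
    cloud_feats: list of (point_xyz, descriptor)
    Returns a list of (keypoint_xy, point_xyz) correspondences whose
    descriptor distance is within max_dist.
    """
    matches = []
    for kp, d_img in image_feats:
        best = min(cloud_feats, key=lambda cf: descriptor_distance(d_img, cf[1]))
        if descriptor_distance(d_img, best[1]) <= max_dist:
            matches.append((kp, best[0]))
    return matches
```

In a full system the resulting 2D-3D correspondences would feed a pose solver (e.g., PnP) so the augmented reality object can be aligned to the camera view.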

    Scaled perspective zoom on resource constrained devices

    Publication No.: US12238404B2

    Publication Date: 2025-02-25

    Application No.: US17816223

    Filing Date: 2022-07-29

    Applicant: Snap Inc.

    Abstract: A dolly zoom effect can be applied to one or more images captured via a resource-constrained device (e.g., a mobile smartphone) by manipulating the size of a target feature while the background in the one or more images changes due to physical movement of the resource-constrained device. The target feature can be detected using facial recognition or shape detection techniques. The target feature can be resized before the size is manipulated as the background changes (e.g., changes perspective).
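Under a simple pinhole-camera model, the on-screen size of the target is inversely proportional to its distance from the device, so keeping the target at constant size while the device moves reduces to a scale factor. This is a minimal sketch of that relationship, assuming the distances are known from tracking; the function names are illustrative, not from the patent.

```python
# Hypothetical sketch of the size compensation behind a dolly zoom:
# apparent size ~ 1 / distance, so scaling the target by
# current_distance / initial_distance holds its on-screen size steady
# while the background perspective changes.

def dolly_zoom_scale(initial_distance, current_distance):
    """Scale factor that keeps the target feature at a constant
    on-screen size as the device physically moves."""
    return current_distance / initial_distance

def resize_target(width, height, scale):
    """Apply the compensation scale to the target's bounding box."""
    return width * scale, height * scale
```

For example, if the device moves from 2 m to 4 m away, the target must be rendered at twice its detected size to appear unchanged.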

    Dense feature scale detection for image matching

    Publication No.: US12198357B2

    Publication Date: 2025-01-14

    Application No.: US18367034

    Filing Date: 2023-09-12

    Applicant: Snap Inc.

    Abstract: Dense feature scale detection can be implemented using multiple convolutional neural networks trained on scale data to more accurately and efficiently match pixels between images. An input image can be used to generate multiple scaled images. The multiple scaled images are input into a feature net, which outputs feature data for the multiple scaled images. An attention net is used to generate an attention map from the input image. The attention map assigns emphasis as a soft distribution to different scales based on texture analysis. The feature data and the attention data can be combined through a multiplication process and then summed to generate dense features for comparison.
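The combination step at the end of the abstract (multiply per-scale feature data by per-scale attention, then sum over scales) can be sketched directly. This is an illustrative pure-Python version over 2D maps; a real implementation would use tensor operations, and the function name is an assumption.

```python
# Hypothetical sketch: soft scale selection. Each scale's feature map
# is weighted per pixel by that scale's attention map, and the weighted
# maps are summed over scales to produce dense features.

def combine_scales(feature_maps, attention_maps):
    """feature_maps, attention_maps: lists (one entry per scale) of
    H x W grids. Returns the attention-weighted sum over scales."""
    h = len(feature_maps[0])
    w = len(feature_maps[0][0])
    out = [[0.0] * w for _ in range(h)]
    for feat, attn in zip(feature_maps, attention_maps):
        for i in range(h):
            for j in range(w):
                out[i][j] += attn[i][j] * feat[i][j]
    return out
```

If the attention weights at each pixel sum to one across scales, the output is a per-pixel expectation of the feature over the scale distribution.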

    Efficient human pose tracking in videos

    Publication No.: US12165335B2

    Publication Date: 2024-12-10

    Application No.: US18460335

    Filing Date: 2023-09-01

    Applicant: Snap Inc.

    Abstract: Systems, devices, media, and methods are presented for a human pose tracking framework. The framework may identify a message with video frames and generate, using a composite convolutional neural network, joint data representing joint locations of a human depicted in the video frames. The composite network generates the joint data with a deep convolutional neural network operating on one portion of the video frames and a shallow convolutional neural network operating on another portion, and tracks the joint locations using a one-shot learner neural network trained to track the joint locations based on a concatenation of feature maps and a convolutional pose machine. The framework may store the joint locations and cause presentation of a rendition of the joint locations on a user interface of a client device.
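One common way to realize a deep/shallow split like the one described above is to route periodic keyframes through the expensive deep network and the frames in between through the fast shallow network. The patent does not specify the routing policy; the interval-based scheme and all names below are assumptions for illustration.

```python
# Hypothetical sketch: assign each video frame to the "deep" or
# "shallow" branch of a composite network. A deep CNN runs on periodic
# keyframes; a cheaper shallow CNN handles the frames in between.

def route_frames(frames, keyframe_interval=5):
    """Return (frame, branch) pairs, where branch is 'deep' for every
    keyframe_interval-th frame and 'shallow' otherwise."""
    plan = []
    for i, frame in enumerate(frames):
        branch = "deep" if i % keyframe_interval == 0 else "shallow"
        plan.append((frame, branch))
    return plan
```

The shallow branch can then lean on the tracking network (the one-shot learner) to propagate the joint locations estimated on the last deep keyframe.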

    IMAGE AND POINT CLOUD BASED TRACKING AND IN AUGMENTED REALITY SYSTEMS

    Publication No.: US20220406008A1

    Publication Date: 2022-12-22

    Application No.: US17856720

    Filing Date: 2022-07-01

    Applicant: Snap Inc.

    Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. Point cloud data describing an environment is then accessed. A two-dimensional surface of an image of an environment is captured, and a portion of the image is matched to a portion of key points in the point cloud data. An augmented reality object is then aligned within one or more images of the environment based on the match of the point cloud with the image. In some embodiments, building façade data may additionally be used to determine a device location and place the augmented reality object within an image.

    Eye texture inpainting
    Invention Grant

    Publication No.: US11468544B2

    Publication Date: 2022-10-11

    Application No.: US17355687

    Filing Date: 2021-06-23

    Applicant: Snap Inc.

    Abstract: Systems, devices, media, and methods are presented for generating texture models for objects within a video stream. The systems and methods access a set of images as the set of images are being captured at a computing device. The systems and methods determine, within a portion of the set of images, an area of interest containing an eye and extract an iris area from the area of interest. The systems and methods segment a sclera area within the area of interest and generate a texture for the eye based on the iris area and the sclera area.
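The segmentation step above partitions the eye region into iris and sclera areas before a texture is generated from them. The sketch below shows that partition in its simplest form, driven by a precomputed binary mask; the mask source, the pixel representation, and the function name are all assumptions, not from the patent.

```python
# Hypothetical sketch: split an eye region's pixels into iris and
# sclera areas using a binary mask, yielding the two areas from which
# the eye texture would be generated.

def eye_texture_areas(region_pixels, iris_mask):
    """region_pixels: flat list of pixel values for the eye region.
    iris_mask: flat list of 0/1 flags, 1 marking iris pixels.
    Returns (iris_pixels, sclera_pixels)."""
    iris, sclera = [], []
    for px, is_iris in zip(region_pixels, iris_mask):
        (iris if is_iris else sclera).append(px)
    return iris, sclera
```

In the described system the iris area is extracted and the sclera segmented from live captures, so a texture model of the eye can be built while the video stream is being recorded.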
