-
Publication No.: WO2022245653A1
Publication Date: 2022-11-24
Application No.: PCT/US2022/029197
Application Date: 2022-05-13
Applicant: SNAP INC.
Inventor: ZHOU, Kai
Abstract: A method for transferring a gait pattern of a first user to a second user to simulate augmented reality content in a virtual simulation environment is described. In one aspect, the method includes identifying a gait pattern of a first user operating a first visual tracking system in a first physical environment, identifying a trajectory from a second visual tracking system operated by a second user in a second physical environment, the trajectory based on poses of the second visual tracking system over time, modifying the trajectory from the second visual tracking system based on the gait pattern of the first user, applying the modified trajectory in a plurality of virtual environments, and generating simulated ground truth data based on the modified trajectory in the plurality of virtual environments.
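The core operation described above — modifying one user's device trajectory with another user's gait pattern — can be sketched as follows. This is an illustrative simplification, not the patented method: the gait is modeled as a single sinusoidal vertical "head-bob" with an assumed amplitude and step frequency, overlaid on a position-only trajectory.

```python
import numpy as np

def apply_gait_pattern(trajectory, amplitude, step_freq, timestamps):
    """Overlay a user's vertical gait oscillation onto another user's
    device trajectory. `trajectory` is an (N, 3) array of x, y, z
    positions; the gait is modeled as a sinusoidal bounce along z."""
    modified = trajectory.copy()
    # Add head-bob displacement along the vertical (z) axis.
    modified[:, 2] += amplitude * np.sin(2 * np.pi * step_freq * timestamps)
    return modified

# Straight-line walk at eye height, sampled at 10 Hz for 2 seconds.
t = np.linspace(0.0, 2.0, 21)
traj = np.stack([t, np.zeros_like(t), np.full_like(t, 1.6)], axis=1)
bobbed = apply_gait_pattern(traj, amplitude=0.03, step_freq=2.0, timestamps=t)
```

The modified trajectory could then be replayed through any number of virtual environments to render simulated ground-truth sensor data, as the abstract describes.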
-
Publication No.: WO2023091568A1
Publication Date: 2023-05-25
Application No.: PCT/US2022/050244
Application Date: 2022-11-17
Applicant: SNAP INC.
Inventor: ZHOU, Kai
IPC: G06T7/80
Abstract: A method for adjusting camera intrinsic parameters of a single camera visual tracking device is described. In one aspect, a method includes accessing a temperature of a camera of the visual tracking system, detecting that the temperature of the camera exceeds a threshold, in response, identifying one or more feature points that are located in a central region of an initial image, generating a graphical user interface element that instructs a user of the visual tracking system to move the visual tracking system towards a border region of the initial image, and determining intrinsic parameters of the camera based on matching pairs of the one or more detected feature points in the border region and one or more projected feature points in the border region.
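The final step — recovering intrinsics from matched pairs of projected and detected border features — can be sketched as a least-squares fit. This is an illustrative model only (a single isotropic focal-length scale about the principal point, which is not stated in the abstract); the function name and parameters are hypothetical.

```python
import numpy as np

def refine_focal_length(f_initial, cx, cy, projected, detected):
    """Estimate a corrected focal length from matched point pairs.
    `projected` are border feature points reprojected with the factory
    intrinsics; `detected` are where they actually appear after thermal
    drift. Fits the scale s minimizing ||s*p - d||^2 about the
    principal point: s = <p, d> / <p, p>."""
    c = np.array([cx, cy])
    p = projected - c          # offsets under factory intrinsics
    d = detected - c           # observed offsets
    s = np.sum(p * d) / np.sum(p * p)
    return f_initial * s

# Synthetic example: thermal drift stretches offsets by 2 %.
proj = np.array([[100.0, 80.0], [540.0, 400.0], [60.0, 420.0]])
c = np.array([320.0, 240.0])
det = c + 1.02 * (proj - c)
f_new = refine_focal_length(500.0, 320.0, 240.0, proj, det)
```

Border-region points are the informative ones for this fit, which is consistent with the abstract's GUI prompt steering the user toward the image border.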
-
Publication No.: WO2022246388A1
Publication Date: 2022-11-24
Application No.: PCT/US2022/072347
Application Date: 2022-05-16
Applicant: SNAP INC.
Inventor: BIRKLBAUER, Clemens , HALMETSCHLAGER-FUNEK, Georg , KALKGRUBER, Matthias , ZHOU, Kai
IPC: H04N13/246 , H04N13/243 , H04N13/271
Abstract: A method for adjusting camera intrinsic parameters of a multi-camera visual tracking device is described. In one aspect, a method for calibrating the multi-camera visual tracking system includes disabling a first camera of the multi-camera visual tracking system while a second camera of the multi-camera visual tracking system is enabled, detecting a first set of features in a first image generated by the first camera after detecting that the temperature of the first camera is within the threshold of the factory calibration temperature of the first camera, and accessing and correcting intrinsic parameters of the second camera based on the projection of the first set of features in the second image and a second set of features in the second image.
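The cross-camera projection step above — taking features from the trusted (temperature-stable) first camera and projecting them into the second camera's image for comparison — can be sketched with a standard pinhole model. This is illustrative only; it assumes known feature depths and stereo extrinsics (R, t), which the abstract does not specify.

```python
import numpy as np

def project_into_second_camera(pts_cam1, depth, K1, K2, R, t):
    """Back-project pixel features from the first camera at a known
    depth, transform them by the stereo extrinsics (R, t), and project
    into the second camera. Comparing these projections with features
    detected in the second image exposes intrinsic drift in the second
    camera."""
    # Back-project: pixel -> normalized ray -> 3-D point at given depth.
    ones = np.ones((pts_cam1.shape[0], 1))
    homog = np.hstack([pts_cam1, ones])
    rays = (np.linalg.inv(K1) @ homog.T).T
    X = rays * depth                      # points in camera-1 frame
    # Rigid transform into camera-2 frame, then pinhole projection.
    X2 = (R @ X.T).T + t
    uvw = (K2 @ X2.T).T
    return uvw[:, :2] / uvw[:, 2:3]

# Synthetic stereo pair: identical intrinsics, 10 cm baseline.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
pts = np.array([[320.0, 240.0]])
uv = project_into_second_camera(pts, 2.0, K, K, np.eye(3),
                                np.array([-0.1, 0.0, 0.0]))
```

Any systematic offset between these projected points and the second camera's detected features can then drive the intrinsic-parameter correction the abstract describes.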
-
Publication No.: WO2022005717A1
Publication Date: 2022-01-06
Application No.: PCT/US2021/036555
Application Date: 2021-06-09
Applicant: SNAP INC.
Inventor: ZHOU, Kai , QI, Qi , HOL, Jeroen
IPC: G06T19/00 , G06T7/70 , G06T15/20 , G06T19/003 , G06T2207/20081 , G06T2207/20084 , G06T2207/30241 , G06T2207/30244 , G06T2210/41 , G06T7/20 , G06T7/50
Abstract: Systems and methods of generating ground truth datasets for producing virtual reality (VR) experiences, for testing simulated sensor configurations, and for training machine-learning algorithms. In one example, a recording device with one or more cameras and one or more inertial measurement units captures images and motion data along a real path through a physical environment. A SLAM application uses the captured data to calculate the trajectory of the recording device. A polynomial interpolation module uses Chebyshev polynomials to generate a continuous time trajectory (CTT) function. The method includes identifying a virtual environment and assembling a simulated sensor configuration, such as a VR headset. Using the CTT function, the method includes generating a ground truth output dataset that represents the simulated sensor configuration in motion along a virtual path through the virtual environment. The virtual path is closely correlated with the motion along the real path as captured by the recording device. Accordingly, the output dataset produces a realistic and life-like VR experience. In addition, the methods described can be used to generate multiple output datasets, at various sample rates, which are useful for training the machine-learning algorithms which are part of many VR systems.
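The continuous-time trajectory (CTT) step above — fitting Chebyshev polynomials to discrete SLAM poses so the trajectory can be resampled at arbitrary rates — can be sketched with NumPy's Chebyshev class. The degree and the per-axis (position-only) fit are illustrative choices, not taken from the patent text.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def fit_continuous_trajectory(timestamps, positions, degree=8):
    """Fit one Chebyshev series per axis to discrete SLAM positions,
    yielding a continuous-time trajectory function. `positions` is an
    (N, 3) array of x, y, z samples at `timestamps`."""
    return [Chebyshev.fit(timestamps, positions[:, k], degree)
            for k in range(positions.shape[1])]

def sample_trajectory(series, t):
    """Evaluate the fitted series at arbitrary (e.g. denser) times."""
    return np.stack([s(t) for s in series], axis=1)

# Smooth synthetic path recorded at 25 Hz for 2 seconds...
t = np.linspace(0.0, 2.0, 50)
pos = np.stack([t, t**2, np.sin(t)], axis=1)
series = fit_continuous_trajectory(t, pos)
# ...resampled at 250 Hz for a simulated high-rate sensor.
dense_t = np.linspace(0.0, 2.0, 500)
resampled = sample_trajectory(series, dense_t)
```

Because the fitted function is continuous in time, the same recorded motion can be sampled at whatever rates the simulated sensor configuration requires, which is how the abstract's multiple output datasets at various sample rates would be produced.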
-
Publication No.: WO2022005698A1
Publication Date: 2022-01-06
Application No.: PCT/US2021/036096
Application Date: 2021-06-07
Applicant: SNAP INC.
Inventor: KALKGRUBER, Matthias , MENDEZ, Erick Mendez , WAGNER, Daniel , WOLF, Daniel , ZHOU, Kai
IPC: G05D1/02 , G01C11/02 , G02B2027/0138 , G02B2027/014 , G02B2027/0178 , G02B27/0172 , G05D1/0253 , G06F3/012 , H04N5/2252 , H04N5/3532
Abstract: Visual-inertial tracking of an eyewear device using a rolling shutter camera(s). The eyewear device includes a position determining system. Visual-inertial tracking is implemented by sensing motion of the eyewear device. An initial pose is obtained for a rolling shutter camera and an image of an environment is captured. The image includes feature points captured at a particular capture time. A number of poses for the rolling shutter camera is computed based on the initial pose and sensed movement of the device. The number of computed poses is responsive to the sensed movement of the mobile device. A computed pose is selected for each feature point in the image by matching the particular capture time for the feature point to the particular computed time for the computed pose. The position of the mobile device is determined within the environment using the feature points and the selected computed poses for the feature points.
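The pose-selection step above — matching each feature point's capture time to the computed pose with the corresponding time — can be sketched as follows. Nearest-neighbour selection and the per-row readout model are illustrative simplifications of the abstract's description, and the function names are hypothetical.

```python
import numpy as np

def row_capture_time(frame_start, row, row_readout_s):
    """Capture time of an image row under a rolling shutter that reads
    rows out sequentially at `row_readout_s` seconds per row."""
    return frame_start + row * row_readout_s

def pose_for_feature(pose_times, poses, feature_time):
    """Select the precomputed camera pose whose computed time is
    closest to the feature point's capture time. With a rolling
    shutter, each image row is exposed at a slightly different time,
    so each feature gets its own pose."""
    idx = int(np.argmin(np.abs(pose_times - feature_time)))
    return poses[idx]

# Four poses computed across one frame's readout interval.
pose_times = np.array([0.00, 0.01, 0.02, 0.03])
poses = ["pose0", "pose1", "pose2", "pose3"]
# A feature detected on row 300, at 30 microseconds per row.
t_feat = row_capture_time(0.0, 300, 30e-6)
selected = pose_for_feature(pose_times, poses, t_feat)
```

The abstract also notes that the number of computed poses scales with sensed motion: fast motion warrants more poses per frame so that each row's selected pose stays close to the true pose at its exposure time.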