Full body pose estimation through feature extraction from multiple wearable devices
Abstract:
Embodiments are disclosed for full body pose estimation using features extracted from multiple wearable devices. In an embodiment, a method comprises: obtaining point of view (POV) video data and inertial sensor data from multiple wearable devices worn at the same time by a user; obtaining depth data capturing the user's full body; extracting two-dimensional (2D) keypoints from the POV video data; reconstructing a full body 2D skeletal model from the 2D keypoints; generating a three-dimensional (3D) mesh model of the user's full body based on the depth data; merging nodes of the 3D mesh model with the inertial sensor data; aligning respective orientations of the 2D skeletal model and the 3D mesh model in a common reference frame; and predicting, using a machine learning model, classification types based on the aligned 2D skeletal model and 3D mesh model.
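To make the claimed processing flow concrete, the sketch below walks the abstract's steps in order: detect 2D keypoints in POV video, build a mesh from full-body depth data, fuse the mesh with inertial orientation, and classify the aligned result. This is a minimal illustration only; the patent names no APIs, models, or data formats, so every function, class, and shape here (extract_2d_keypoints, mesh_from_depth, fuse_imu, FusedPose, the 17-joint layout) is a hypothetical stand-in.

"""Illustrative sketch of the claimed pipeline (not the patented implementation).

All names below are hypothetical; the patent does not specify APIs or formats.
"""
from dataclasses import dataclass
import numpy as np

@dataclass
class FusedPose:
    skeleton_2d: np.ndarray   # (J, 2) joint positions in image space
    mesh_3d: np.ndarray       # (V, 3) mesh nodes in a common reference frame

def extract_2d_keypoints(pov_frames: np.ndarray) -> np.ndarray:
    """Stand-in for a 2D keypoint detector run on POV video frames."""
    # A real system would run a pose network here; this returns placeholders.
    num_joints = 17
    return np.zeros((len(pov_frames), num_joints, 2))

def mesh_from_depth(depth_map: np.ndarray) -> np.ndarray:
    """Stand-in: back-project depth pixels into a crude 3D point mesh."""
    h, w = depth_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.stack([xs.ravel(), ys.ravel(), depth_map.ravel()], axis=1)

def fuse_imu(mesh: np.ndarray, imu_quaternion: np.ndarray) -> np.ndarray:
    """Rotate mesh nodes by an IMU orientation given as a unit quaternion (w, x, y, z)."""
    w, x, y, z = imu_quaternion
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return mesh @ R.T

def classify(pose: FusedPose) -> str:
    """Stand-in for the machine learning classifier over the aligned models."""
    return "unknown"

if __name__ == "__main__":
    frames = np.zeros((4, 240, 320, 3))          # toy POV video clip
    depth = np.ones((240, 320))                  # toy full-body depth capture
    identity_q = np.array([1.0, 0.0, 0.0, 0.0])  # IMU at rest (no rotation)

    kps = extract_2d_keypoints(frames)
    mesh = fuse_imu(mesh_from_depth(depth), identity_q)
    print(classify(FusedPose(skeleton_2d=kps[0], mesh_3d=mesh)))

Note that the quaternion-to-rotation-matrix step is one conventional way to express the abstract's "merging nodes of the 3D mesh model with the inertial sensor data"; the patent itself does not commit to a particular fusion method.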