Abstract:
The present disclosure describes a target tracker that evaluates frames of data of one or more targets, such as a body part, body, and/or object, acquired by a depth camera. Positions of the joints of the target(s) in the previous frame and the data from a current frame are used to determine the positions of the joints of the target(s) in the current frame. To perform this task, the tracker proposes several hypotheses and then evaluates the data to validate the respective hypotheses. The hypothesis that best fits the data generated by the depth camera is selected, and the joints of the target(s) are mapped accordingly.
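The hypothesize-and-evaluate loop described above can be illustrated with a minimal sketch. The names below (generate_hypotheses, score_against_depth, track_frame) and the perturbation-plus-nearest-point scoring are illustrative assumptions, not the disclosure's actual method.

```python
import numpy as np

def generate_hypotheses(prev_joints, num_hypotheses=32, motion_sigma=0.02):
    """Propose candidate joint configurations by perturbing the previous frame's joints."""
    rng = np.random.default_rng()
    # prev_joints: (num_joints, 3) array of 3D joint positions from the previous frame
    noise = rng.normal(scale=motion_sigma, size=(num_hypotheses,) + prev_joints.shape)
    return prev_joints[None, :, :] + noise  # (num_hypotheses, num_joints, 3)

def score_against_depth(hypothesis, depth_points):
    """Score one hypothesis: mean distance of its joints to the nearest depth point (lower is better)."""
    # depth_points: (N, 3) point cloud back-projected from the current depth frame
    dists = np.linalg.norm(depth_points[None, :, :] - hypothesis[:, None, :], axis=-1)
    return dists.min(axis=1).mean()

def track_frame(prev_joints, depth_points):
    """Select the hypothesis that best fits the current frame's depth data."""
    hypotheses = generate_hypotheses(prev_joints)
    scores = [score_against_depth(h, depth_points) for h in hypotheses]
    best = hypotheses[int(np.argmin(scores))]
    return best  # joint positions mapped for the current frame
```

In this sketch, the previous frame's joints seed the candidate set, the current frame's depth data validates each candidate, and the best-fitting candidate becomes the current frame's joint map, mirroring the sequence stated in the abstract.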
Abstract:
Systems and methods for combining three-dimensional tracking of a user's movements with a three-dimensional user interface display are described. A tracking module processes depth data of a user performing movements, for example, movements of the user's hand and fingers. The tracked movements are used to animate a representation of the hand and fingers, and the animated representation is displayed to the user using a three-dimensional display. Also displayed are one or more virtual objects with which the user can interact. In some embodiments, the interaction of the user with the virtual objects controls an electronic device.
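The track, animate, display, and interact pipeline can be sketched as a simple per-frame loop. All names here (HandPose, VirtualObject, track_hand, ui_loop) and the sphere-touch interaction test are assumptions for illustration, not the described system's API.

```python
import numpy as np
from dataclasses import dataclass
from typing import Callable

@dataclass
class HandPose:
    joints: np.ndarray  # (num_joints, 3) tracked 3D positions, e.g. fingertips

@dataclass
class VirtualObject:
    center: np.ndarray       # 3D position of the virtual object in display space
    radius: float            # interaction volume around the object
    on_activate: Callable    # command sent to the electronic device when touched

def track_hand(depth_frame):
    """Stand-in for the tracking module: estimate a HandPose from depth data."""
    # Here we simply place one "joint" at the centroid of the depth point cloud.
    return HandPose(joints=depth_frame.mean(axis=0, keepdims=True))

def render(hand_pose, objects):
    """Stand-in for the 3D display: animate the hand representation and the objects."""
    print("hand joints:", np.round(hand_pose.joints, 3))

def check_interactions(hand_pose, objects):
    """Trigger a virtual object's action when a tracked joint enters its volume."""
    for obj in objects:
        dists = np.linalg.norm(hand_pose.joints - obj.center, axis=1)
        if dists.min() < obj.radius:
            obj.on_activate()

def ui_loop(depth_frames, objects):
    for frame in depth_frames:
        pose = track_hand(frame)           # 3D tracking of the user's movements
        render(pose, objects)              # animated representation on the 3D display
        check_interactions(pose, objects)  # interaction can control a device

# Example: a single virtual "button" that toggles a device when touched.
button = VirtualObject(center=np.array([0.0, 0.0, 0.5]), radius=0.1,
                       on_activate=lambda: print("device toggled"))
frames = [np.random.rand(100, 3) * 0.1 + np.array([0.0, 0.0, 0.5])]
ui_loop(frames, [button])
```

The separation into a tracking stand-in, a display stand-in, and an interaction check mirrors the modules named in the abstract; in a real system each would be backed by a depth sensor pipeline, a 3D renderer, and device-control logic respectively.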