Determining a pointing vector for gestures performed before a depth camera
Abstract:
A pointing vector is determined for a gesture performed before a depth camera. One example includes receiving a first and a second image of a pointing gesture from a depth camera having a first and a second image sensor, applying erosion and dilation to the first image using a 2D convolution filter to isolate the gesture from other objects, finding the imaged gesture in the filtered first image, finding a pointing tip of the imaged gesture, determining a position of the pointing tip of the imaged gesture using the second image, and determining a pointing vector using the determined position of the pointing tip.
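The steps recited in the abstract could be sketched roughly as follows in Python with OpenCV. This is a minimal illustration under assumed details, not the patented implementation: the Otsu threshold, the 5x5 elliptical kernel, the topmost-contour-point tip heuristic, the StereoBM block matcher, and the calibration parameters (focal_px, baseline_m, cx, cy) and wrist/base reference point are all assumptions introduced here.

```python
import cv2
import numpy as np

def isolate_gesture(gray):
    """Threshold the first image, then erode and dilate with a 2D kernel
    to suppress small objects and merge the hand into a single blob."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.erode(mask, kernel, iterations=2)
    mask = cv2.dilate(mask, kernel, iterations=2)
    return mask

def find_pointing_tip(mask):
    """Take the largest contour as the imaged gesture and use an extreme
    point of that contour (here: the topmost point) as the pointing tip."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    tip = tuple(hand[hand[:, :, 1].argmin()][0])  # (x, y) with smallest y
    return tip

def tip_position_3d(left_gray, right_gray, tip, focal_px, baseline_m, cx, cy):
    """Estimate the 3D position of the tip from the disparity between the
    first (left) and second (right) sensor images."""
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    d = disparity[tip[1], tip[0]]
    if d <= 0:
        return None
    z = focal_px * baseline_m / d          # depth from disparity
    x = (tip[0] - cx) * z / focal_px       # back-project pixel to camera frame
    y = (tip[1] - cy) * z / focal_px
    return np.array([x, y, z])

def pointing_vector(tip_xyz, base_xyz):
    """Unit pointing vector from a reference point on the hand (e.g. the
    wrist or a knuckle, assumed here) through the fingertip."""
    v = tip_xyz - base_xyz
    return v / np.linalg.norm(v)
```

In this sketch the morphological opening (erosion followed by dilation) stands in for the claimed 2D-convolution filtering, and stereo block matching stands in for whatever disparity method the depth camera actually uses; any reference point other than the fingertip could serve as the origin of the pointing vector.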