Abstract:
Object tracking technology, in which an illumination source is controlled to illuminate while a camera is capturing an image, so as to define an intersection region within the image captured by the camera. The image captured by the camera is analyzed to detect an object within the intersection region. User input is determined based on the object detected within the intersection region, and an application is controlled based on the determined user input.
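The abstract names four steps: illumination synchronized with capture, detection within an intersection region, mapping the detection to user input, and application control. The sketch below shows one way those steps could fit together; the Camera and Illuminator interfaces, the brightness-difference threshold, and the left/right input mapping are assumptions for illustration, not details from the abstract.

```python
# Minimal sketch of the described pipeline, assuming hypothetical
# Camera/Illuminator interfaces and NumPy grayscale frames.
import numpy as np

DETECTION_THRESHOLD = 40  # assumed brightness-difference threshold

def detect_object_in_intersection(camera, illuminator, intersection_mask):
    """Return the centroid of an object inside the intersection region, or None."""
    dark = camera.capture()        # frame without illumination
    illuminator.on()
    lit = camera.capture()         # frame captured while illuminated
    illuminator.off()

    # Pixels brightened by the controlled illumination and lying inside the
    # predefined intersection region are candidate object pixels.
    diff = lit.astype(np.int16) - dark.astype(np.int16)
    candidates = (diff > DETECTION_THRESHOLD) & intersection_mask

    ys, xs = np.nonzero(candidates)
    if xs.size == 0:
        return None                # no object detected
    return float(xs.mean()), float(ys.mean())

def determine_user_input(centroid, image_width):
    """Map a detected centroid to a coarse user input (illustrative only)."""
    if centroid is None:
        return None
    x, _ = centroid
    return "left" if x < image_width / 2 else "right"
```

An application loop would call detect_object_in_intersection once per frame pair and feed the result of determine_user_input into its control logic.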
Abstract:
Enhanced detection of a waving engagement gesture, in which a shape is defined within motion data, the motion data is sampled at points that are aligned with the defined shape, and, based on the sampled motion data, positions of a moving object along the defined shape are determined over time. Whether the moving object is performing a gesture is determined based on a pattern exhibited by the determined positions, and an application is controlled if it is determined that the moving object is performing the gesture.
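As a reading aid, the following sketch shows one plausible realization of the three steps named in the abstract: sampling motion data at points aligned with a defined shape, tracking the moving object's position along that shape over time, and testing the position history for a back-and-forth pattern. The per-frame motion-history map, the straight-segment shape, and the reversal-count test are illustrative assumptions, not the patented method itself.

```python
# Illustrative sketch, assuming the motion data is a 2-D motion-history map
# per frame and the defined shape is a straight horizontal segment.
import numpy as np

def sample_shape_points(shape_start, shape_end, n_points=32):
    """Points aligned with the defined shape (here, a straight segment)."""
    return np.linspace(shape_start, shape_end, n_points)

def position_along_shape(motion_map, points):
    """Index of the sample point with the strongest motion, i.e. the moving
    object's current position along the shape (None if no motion)."""
    samples = np.array([motion_map[int(y), int(x)] for x, y in points])
    return int(samples.argmax()) if samples.max() > 0 else None

def is_waving(positions, min_reversals=3):
    """A wave exhibits a back-and-forth pattern: the position along the
    shape reverses direction several times over the observed frames."""
    ps = [p for p in positions if p is not None]
    deltas = np.sign(np.diff(ps))
    deltas = deltas[deltas != 0]          # ignore frames with no movement
    reversals = int(np.sum(deltas[1:] != deltas[:-1]))
    return reversals >= min_reversals
```

Collecting position_along_shape over successive frames and passing the list to is_waving yields the gesture decision that would then drive the application.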
Abstract:
A method includes receiving a first output from a first sensor of an electronic device and receiving a second output from a second sensor of the electronic device. The first sensor has a first sensor type and the second sensor has a second sensor type that is different from the first sensor type. The method also includes detecting a gesture based on the first output and the second output according to a complementary voting scheme that is at least partially based on gesture complexity.
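The abstract leaves the complementary voting scheme unspecified beyond its dependence on gesture complexity. The sketch below is one hypothetical instance: agreement between the two sensor types is always accepted, while a single sensor's vote must clear a confidence bar that rises with the complexity of the gesture it reports. The sensor roles, complexity scale, and thresholds are invented for illustration.

```python
# Hedged sketch of one possible complementary voting scheme; the gesture
# complexity scores and the confidence rule below are assumptions.
from dataclasses import dataclass
from typing import Optional

GESTURE_COMPLEXITY = {"swipe": 1, "circle": 2, "zigzag": 3}  # assumed scale

@dataclass
class SensorVote:
    gesture: str       # gesture reported by this sensor's classifier
    confidence: float  # classifier confidence in [0, 1]

def detect_gesture(camera_vote: SensorVote,
                   ultrasound_vote: SensorVote) -> Optional[str]:
    """Combine outputs from two different sensor types. Agreement wins
    outright; a lone vote must be more confident for complex gestures."""
    if camera_vote.gesture == ultrasound_vote.gesture:
        return camera_vote.gesture

    for vote in (camera_vote, ultrasound_vote):
        complexity = GESTURE_COMPLEXITY.get(vote.gesture, 3)
        # The more complex the candidate gesture, the higher the confidence
        # a single disagreeing sensor must reach before its vote is accepted.
        if vote.confidence >= 0.5 + 0.15 * complexity:
            return vote.gesture
    return None  # no gesture detected
```

Under this rule a simple swipe is accepted from one sensor at 0.65 confidence, whereas a zigzag reported by only one sensor needs 0.95, reflecting the abstract's dependence on gesture complexity.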