Abstract:
Examples disclosed herein relate to identifying a target touch region of a touch-sensitive surface based on an image. Examples include detecting a touch input at a location of a touch-sensitive surface, capturing with a camera an image representing an object disposed between the camera and the touch-sensitive surface, identifying at least one target touch region of the touch-sensitive surface based on the image, and rejecting the detected touch input when the location of the detected touch input is not within any of the at least one identified target touch region of the touch-sensitive surface.
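A minimal sketch in Python of the rejection step described above, assuming target touch regions are reported as axis-aligned rectangles in surface coordinates; the Rect type, its field names, and the example coordinates are illustrative assumptions rather than details taken from the examples.

# Minimal sketch (not the patented implementation): reject a touch input whose
# location does not fall within any target touch region identified from an image.
# Rect, its fields, and the sample regions below are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Rect:
    x: float
    y: float
    width: float
    height: float

    def contains(self, point: Tuple[float, float]) -> bool:
        px, py = point
        return (self.x <= px <= self.x + self.width and
                self.y <= py <= self.y + self.height)

def accept_touch(location: Tuple[float, float], target_regions: List[Rect]) -> bool:
    """Return True if the touch lands inside at least one identified target touch region."""
    return any(region.contains(location) for region in target_regions)

# Example: regions identified from an image of an object (e.g., a hand) over the surface.
regions = [Rect(100, 200, 50, 80), Rect(300, 120, 40, 60)]
print(accept_touch((120, 240), regions))  # True  -> process the touch
print(accept_touch((10, 10), regions))    # False -> reject the touch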
Abstract:
Examples disclosed herein relate to detecting misalignment of a touch-sensitive mat. Examples include detecting corners of the touch-sensitive mat, determining a set of reference corners, comparing the detected corners of the mat with the set of reference corners, and determining a level of misalignment based on the comparison.
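A minimal sketch of the comparison step, assuming the detected and reference corners are (x, y) coordinates in a common frame and that the level of misalignment is summarized as an average corner displacement; the averaging rule and the sample coordinates are illustrative assumptions.

# Minimal sketch: summarize misalignment as the mean Euclidean distance
# between corresponding detected and reference corners (an assumed metric).
import math
from typing import List, Tuple

Point = Tuple[float, float]

def misalignment_level(detected_corners: List[Point], reference_corners: List[Point]) -> float:
    """Average Euclidean distance between corresponding detected and reference corners."""
    distances = [math.dist(d, r) for d, r in zip(detected_corners, reference_corners)]
    return sum(distances) / len(distances)

reference = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]
detected = [(4, 3), (1925, 2), (1918, 1085), (-2, 1079)]
level = misalignment_level(detected, reference)
print(f"misalignment: {level:.1f} px")  # e.g., flag for realignment if level exceeds a tolerance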
Abstract:
Examples disclosed herein relate to determining a segmentation boundary based on images representing an object. Examples include capturing an IR image based on IR light reflected by an object disposed between an IR camera and an IR-absorbing surface, capturing with a color camera a color image representing the object disposed between the color camera and the IR-absorbing surface, and determining a segmentation boundary for the object based on the IR image and the color image.
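A minimal sketch of one way the two images could be combined, assuming aligned grayscale IR and RGB color images held as NumPy arrays and a dark IR-absorbing surface; the thresholds and the union of the two cues are illustrative assumptions, not the method of the examples.

# Minimal sketch: build a foreground mask from IR and color cues, then take the
# boundary as foreground pixels that touch the background. Thresholds are assumed.
import numpy as np

def segmentation_mask(ir_image: np.ndarray, color_image: np.ndarray,
                      ir_threshold: float = 30.0, color_threshold: float = 40.0) -> np.ndarray:
    """Foreground mask for an object over an IR-absorbing surface.

    The surface absorbs IR, so pixels with strong reflected IR likely belong to the
    object; brightness in the color image over a dark surface provides a second cue.
    """
    ir_fg = ir_image.astype(float) > ir_threshold
    color_fg = color_image.astype(float).mean(axis=-1) > color_threshold
    return ir_fg | color_fg  # union of the two cues

def boundary_pixels(mask: np.ndarray) -> np.ndarray:
    """Segmentation boundary: foreground pixels with at least one background neighbor."""
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior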
Abstract:
A method performed by a system. The method includes automatically generating an interactive window on at least a portion of a display, the interactive window having a first size corresponding to a first pattern on the display. The method further includes automatically expanding the interactive window to a second size, in response to a second pattern on the display, so that the interactive window encloses both the first pattern and the second pattern.
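A minimal sketch of the window sizing and expansion, assuming each pattern is reported as an axis-aligned bounding box (left, top, right, bottom) in display coordinates; the box representation, the margin, and the sample values are illustrative assumptions.

# Minimal sketch: size a window to the first pattern, then grow it just enough
# to enclose a second pattern. Box layout and margin are assumed conventions.
from typing import Tuple

Box = Tuple[float, float, float, float]  # (left, top, right, bottom)

def window_for_pattern(pattern: Box, margin: float = 10.0) -> Box:
    """Initial interactive window sized to the first detected pattern plus a margin."""
    l, t, r, b = pattern
    return (l - margin, t - margin, r + margin, b + margin)

def expand_window(window: Box, new_pattern: Box, margin: float = 10.0) -> Box:
    """Expand the window to enclose both the existing window and the new pattern."""
    nl, nt, nr, nb = window_for_pattern(new_pattern, margin)
    wl, wt, wr, wb = window
    return (min(wl, nl), min(wt, nt), max(wr, nr), max(wb, nb))

first = (100, 100, 300, 200)
window = window_for_pattern(first)                     # first size, matching the first pattern
window = expand_window(window, (350, 250, 500, 400))   # second size, enclosing both patterns
print(window)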