Abstract:
A light-generated input interface is provided using a combination of components that include a projector and a sensor. The projector displays an image corresponding to an input device. The sensor detects selection of an input based on contact between a user-controlled object and displayed regions of the projected input device. An intersection of a projection area and an active sensor area on the surface where the input device is to be displayed is used to set a dimension of the image of the input device.
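As a hedged illustration of how intersecting the projection area with the active sensor area could set the dimensions of the projected input-device image, the Python sketch below intersects two axis-aligned rectangles on the display surface. The Rect type, the fit_input_device_image helper, and the coordinate units are illustrative assumptions, not details taken from the abstract.

```python
# Illustrative sketch only: axis-aligned rectangles given as (x, y, width, height)
# on the display surface; names and units are hypothetical.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Rect:
    x: float       # left edge on the surface (e.g., millimeters)
    y: float       # top edge
    width: float
    height: float


def intersect(a: Rect, b: Rect) -> Optional[Rect]:
    """Return the overlap of two axis-aligned rectangles, or None if disjoint."""
    left = max(a.x, b.x)
    top = max(a.y, b.y)
    right = min(a.x + a.width, b.x + b.width)
    bottom = min(a.y + a.height, b.y + b.height)
    if right <= left or bottom <= top:
        return None
    return Rect(left, top, right - left, bottom - top)


def fit_input_device_image(projection_area: Rect, sensor_area: Rect) -> Optional[Rect]:
    """Size the input-device image to the region that is both projectable
    and observable by the sensor."""
    return intersect(projection_area, sensor_area)


# Example: projector footprint vs. active sensor footprint on the surface.
keyboard_region = fit_input_device_image(
    Rect(0, 0, 300, 150),    # projection area
    Rect(20, 10, 260, 160),  # active sensor area
)
print(keyboard_region)  # Rect(x=20, y=10, width=260, height=140)
```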
Abstract:
One or more objects are classified and/or recognized in a scene based on a depth difference between surface regions of the object(s) and a reference. First, a depth image of the scene with no object present is acquired (410). An event is then detected to trigger classification (420). A depth image of the scene with the object present is acquired (430), and the difference between the two images is obtained (440). Features are extracted from the difference image (450), and the object is classified based on the extracted features (460).
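The Python sketch below illustrates steps (440) through (460) under stated assumptions: the two depth images from steps (410) and (430) are already available as NumPy arrays, and the depth-change threshold, the simple area/height features, and the nearest-mean classifier are illustrative choices rather than the specific method described in the abstract.

```python
# Minimal sketch of the depth-difference classification pipeline; feature set
# and classifier are assumptions for illustration only.
import numpy as np


def classify_from_depth(reference_depth: np.ndarray,
                        scene_depth: np.ndarray,
                        class_means: dict,
                        min_depth_change: float = 10.0) -> str:
    """Classify an object from the depth difference between a scene with the
    object present and a reference scene with no object."""
    # (440) Difference between the reference image and the scene image.
    diff = reference_depth.astype(np.float32) - scene_depth.astype(np.float32)
    mask = diff > min_depth_change            # surface regions closer than the reference

    # (450) Extract simple features from the difference image.
    area = float(mask.sum())
    mean_height = float(diff[mask].mean()) if area else 0.0
    max_height = float(diff[mask].max()) if area else 0.0
    features = np.array([area, mean_height, max_height])

    # (460) Classify by distance to per-class mean feature vectors.
    return min(class_means, key=lambda label: np.linalg.norm(features - class_means[label]))


# Example usage with illustrative per-class mean feature vectors.
reference = np.full((240, 320), 1000.0)        # empty scene, 1000 mm away
scene = reference.copy()
scene[100:140, 150:200] -= 80.0                # object raises the surface by 80 mm
means = {"box": np.array([2000.0, 80.0, 80.0]),
         "hand": np.array([500.0, 30.0, 40.0])}
print(classify_from_depth(reference, scene, means))  # "box"
```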