Abstract:
An embodiment of a method of recognizing finger-detection data in a detection data map produced by a touch screen includes converting the data from the x, y, z space into a three-descriptor space including a first coordinate representative of the number of intensity peaks in the map and a second coordinate representative of the number of nodes (i.e., pixels) absorbed under one or more of the intensity peaks. A third coordinate may be selected as the angular coefficient, or slope, of a piecewise-linear approximating function passing through points having the numbers of nodes absorbed under the intensity peaks ordered in decreasing order over said intensity peaks, which permits singling out finger data with respect to non-finger data over the whole of the touch screen. The third coordinate may also be selected as an adjacency value representative of the extent to which the intensity peaks are adjacent to one another, which permits singling out finger data produced over a portion of the touch screen with respect to non-finger data produced over another portion of the touch screen.
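By way of illustration only, the Python sketch below derives such a three-descriptor tuple from a 2-D detection map. The connected-region peak labeling, the threshold value, and the least-squares slope fit are assumptions standing in for the embodiment's actual processing, not details taken from the abstract.

```python
import numpy as np
from scipy import ndimage

def three_descriptors(det_map, threshold=0.1):
    """Hypothetical sketch: map an x, y, z detection map to a
    (peak count, node counts, slope) descriptor tuple."""
    # First coordinate: number of intensity peaks, approximated here
    # by labeling connected regions of above-threshold nodes (pixels).
    mask = det_map > threshold
    labels, n_peaks = ndimage.label(mask)

    # Second coordinate: nodes absorbed under each intensity peak,
    # approximated by the size of each above-threshold region,
    # ordered in decreasing order over the peaks.
    sizes = ndimage.sum(mask, labels, index=range(1, n_peaks + 1))
    sizes = np.sort(sizes)[::-1]

    # Third coordinate (first variant): slope of a linear fit through
    # the decreasingly ordered node counts.
    slope = np.polyfit(np.arange(n_peaks), sizes, 1)[0] if n_peaks > 1 else 0.0
    return n_peaks, sizes, slope
```

A finger tends to produce few compact peaks, so under these assumptions its ordered node counts fall off with a steeper slope than the flatter profile of a palm or cheek.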
Abstract:
An embodiment of a method for processing finger-detection data produced by a touch screen includes: computing the area of the finger-data map and extracting the main axes from the finger-data map, computing the lengths and orientations of the main axes, determining from the main axes a major axis having a major-axis orientation, computing a geometrical center and a center of mass of the finger-data map, computing an eccentricity of the finger-data map as a function of the lengths of the main axes, outputting the major-axis orientation as indicative of the finger-orientation direction in the plane of the screen, outputting the mutual position of the geometrical center and the center of mass of the finger-data map as indicative of the finger-pointing direction along the finger-orientation direction in the plane of the screen, and outputting a combination of the eccentricity and the area of the finger-data map as indicative of finger orientation with respect to the plane of the screen.
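As a rough illustration, the sketch below realizes these steps with image moments; the moment-based axis extraction, the eccentricity formula, and the sign convention for the pointing direction are assumptions, not computations prescribed by the abstract.

```python
import numpy as np

def finger_orientation(finger_map):
    """Sketch: main axes, centers, eccentricity and area of a finger-data map."""
    ys, xs = np.nonzero(finger_map)
    w = finger_map[ys, xs].astype(float)

    area = len(xs)                                   # area of the finger-data map
    geo_center = np.array([xs.mean(), ys.mean()])    # unweighted geometrical center
    mass_center = np.array([np.average(xs, weights=w),
                            np.average(ys, weights=w)])  # intensity-weighted center

    # Central second moments give the main axes of the map.
    cov = np.cov(np.vstack([xs - geo_center[0], ys - geo_center[1]]))
    eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order

    major_len, minor_len = np.sqrt(eigvals[1]), np.sqrt(eigvals[0])
    major_axis = eigvecs[:, 1]
    orientation = np.arctan2(major_axis[1], major_axis[0])  # in-plane direction
    eccentricity = (np.sqrt(1.0 - (minor_len / major_len) ** 2)
                    if major_len > 0 else 0.0)

    # Pointing direction: mutual position of the geometrical center and the
    # center of mass, projected on the major axis (sign convention assumed).
    pointing = np.sign(np.dot(geo_center - mass_center, major_axis))
    return orientation, pointing, eccentricity, area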
Abstract:
Detecting the presence of a finger in proximity of a screen that generates detection signals in the horizontal and vertical directions includes sampling the detection signals and generating raw-data vectors X and Y. The raw data have a maximum at the vector elements that define the position of the finger on the screen in the X and Y directions, respectively. The vectors X and Y are divided into subsets defined as "macro-areas", and a cumulative value is computed for each macro-area by adding together all the elements of the vector X and of the vector Y that belong to the macro-area. The maximum values are selected from among the horizontal cumulative values and the vertical cumulative values. A value identifying the macro-area selected on the basis of the maximum values is supplied, or no value is supplied in the presence of disturbance elements in the proximity of the screen.
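A minimal sketch of this selection logic follows; the equal-sized split into macro-areas, the noise_ratio disturbance test, and all parameter values are hypothetical choices made only to make the idea concrete.

```python
import numpy as np

def select_macro_area(x_raw, y_raw, n_areas=4, noise_ratio=0.5):
    """Sketch: pick the macro-area holding the finger, or None on disturbance."""
    def cumulative(vec):
        # Cumulative value of each macro-area: sum of its vector elements.
        return np.array([chunk.sum() for chunk in np.array_split(vec, n_areas)])

    cx = cumulative(np.asarray(x_raw, dtype=float))
    cy = cumulative(np.asarray(y_raw, dtype=float))

    # Assumed disturbance test: if a second cumulative value comes too
    # close to the maximum, no value is supplied.
    for c in (cx, cy):
        top, runner_up = np.sort(c)[::-1][:2]
        if runner_up > noise_ratio * top:
            return None
    return int(cx.argmax()), int(cy.argmax())
```

Summing whole macro-areas before comparing keeps the decision robust to single-node noise, since a genuine finger raises every node in its macro-area while a disturbance spreads energy across several areas.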
Abstract:
Hand gestures, such as hand or finger hovering, in the proximity space of a sensing panel are detected from X-node and Y-node sensing signals indicative of the presence of a hand feature at corresponding row and column locations of a sensing panel. Hovering is detected by detecting the locations of maxima for a plurality of frames over a time window for sets of X-node and Y-node sensing signals, by recognizing a hovering gesture if the locations of the maxima detected vary over the plurality of frames for one of the sets of sensing signals and not for the other set. Finger shapes are distinguished over "ghosts" generated by palm or fist features by transforming the node-intensity representation for the sensing signals into a node-distance representation based on the distances of the detection intensities for a number of nodes under a peak from a mean point between the valleys adjacent to the peak.
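The hovering criterion can be pictured with the short sketch below; the per-frame argmax and the min_shift movement threshold are illustrative assumptions rather than the abstract's specified processing.

```python
import numpy as np

def detect_hovering(x_frames, y_frames, min_shift=2):
    """Sketch: recognize hovering from per-frame maxima locations.

    x_frames / y_frames: 2-D arrays with one row of node sensing
    signals per frame over the time window (shapes assumed).
    """
    x_peaks = np.argmax(x_frames, axis=1)   # maximum location per frame, X nodes
    y_peaks = np.argmax(y_frames, axis=1)   # maximum location per frame, Y nodes

    x_moves = np.ptp(x_peaks) >= min_shift  # maxima sweep across the X nodes
    y_moves = np.ptp(y_peaks) >= min_shift  # maxima sweep across the Y nodes

    # Hovering: maxima vary for one set of signals and not for the other.
    return bool(x_moves) != bool(y_moves)
```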
Abstract:
In an embodiment, hand gestures, such as hand or finger hovering, in the proximity space of a sensing panel are detected from X-node and Y-node sensing signals indicative of the presence of a hand feature at corresponding row locations and column locations of a sensing panel. Hovering is detected by detecting the locations of maxima for a plurality of frames over a time window for a set of X-node sensing signals and for a set of Y-node sensing signals, by recognizing a hovering gesture if the locations of the maxima detected vary over the plurality of frames for one of the sets of X-node and Y-node sensing signals while remaining stationary for the other of the sets of X-node and Y-node sensing signals. Finger shapes are distinguished over "ghosts" generated by palm or fist features by transforming the node-intensity representation for the sensing signals into a node-distance representation, based on the distances of the detection intensities for a number of nodes under a peak from a mean point between the valleys adjacent to the peak.
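The node-distance transformation might be sketched as follows; the valley search and the distance measure used here are assumptions, chosen only to illustrate that a narrow finger peak and a broad palm "ghost" yield different node-distance profiles.

```python
import numpy as np

def node_distance_profile(signal, peak):
    """Sketch: transform a node-intensity profile into a node-distance one
    for the nodes under `peak`."""
    signal = np.asarray(signal, dtype=float)

    # Walk outward from the peak to the adjacent valleys (local minima).
    left = peak
    while left > 0 and signal[left - 1] < signal[left]:
        left -= 1
    right = peak
    while right < len(signal) - 1 and signal[right + 1] < signal[right]:
        right += 1

    mean_point = 0.5 * (left + right)        # mean point between the valleys
    nodes = np.arange(left, right + 1)
    # Node-distance representation: each node's distance from the mean
    # point, paired with its detection intensity.
    return np.abs(nodes - mean_point), signal[nodes]
```

Under these assumptions a fingertip concentrates its intensity at small distances from the mean point, whereas a palm or fist "ghost" spreads comparable intensities out to large distances.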
Abstract:
In one embodiment, a light sensor includes four cell arrays, one for each color of the Bayer pattern, and four lenses, each focusing the light coming from the scene to be captured onto a respective cell array. The lenses are oriented such that at least a second green image, commonly provided by the fourth cell array, is both horizontally and vertically shifted (spaced) apart by half a pixel pitch from a first (reference) green image. In a second embodiment, the four lenses are oriented such that the red and blue images are respectively shifted (spaced) apart by half a pixel pitch from the first or reference green image, one horizontally and the other vertically, and the second green image is shifted (spaced) apart by half a pixel pitch from the reference green image both horizontally and vertically.
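For intuition, the sketch below places the reference green image and the diagonally half-pitch-shifted green image on a common double-resolution grid; the zero-filled grid and any subsequent interpolation stage are assumptions outside the abstract.

```python
import numpy as np

def merge_green_quincunx(g_ref, g_shift):
    """Sketch: combine two green images offset by half a pixel pitch
    both horizontally and vertically on a double-resolution grid.

    g_ref and g_shift are same-shape arrays from the two green cell
    arrays; grid points not covered by either image stay zero for a
    later (assumed) interpolation stage.
    """
    h, w = g_ref.shape
    grid = np.zeros((2 * h, 2 * w), dtype=g_ref.dtype)
    grid[0::2, 0::2] = g_ref     # reference green on the even grid points
    grid[1::2, 1::2] = g_shift   # diagonally half-pitch-shifted green
    return grid
```

The half-pixel diagonal offset makes the two green arrays sample interleaved (quincunx) positions of the scene, which is what lets the combined image carry more green detail than either array alone.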