Abstract:
An embodiment of a method of recognizing finger detection data in a detection data map produced by a touch screen includes converting the data from the x, y, z space into a three-descriptor space including a first coordinate representative of the number of intensity peaks in the map and a second coordinate representative of the number of nodes (i.e., pixels) absorbed under one or more of the intensity peaks. A third coordinate may be selected as the angular coefficient, or slope, of a piecewise-linear approximating function passing through points given by the numbers of nodes absorbed under the intensity peaks, ordered in decreasing order over said intensity peaks; this permits singling out finger data with respect to non-finger data over the whole of the touch screen. The third coordinate may also be selected as an adjacency value representative of the extent to which the intensity peaks are adjacent to one another, which permits singling out finger data produced over a portion of the touch screen with respect to non-finger data produced over another portion of the touch screen.
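A minimal Python sketch of this descriptor extraction (illustrative only: the peak threshold, the 5x5 absorption window and the use of a single least-squares slope in place of the piecewise-linear approximating function are assumptions, not taken from the abstract):

```python
import numpy as np

def extract_descriptors(z_map, peak_thresh=30):
    # First coordinate: count local intensity maxima above a threshold.
    peaks = []
    for i in range(1, z_map.shape[0] - 1):
        for j in range(1, z_map.shape[1] - 1):
            patch = z_map[i - 1:i + 2, j - 1:j + 2]
            if z_map[i, j] >= peak_thresh and z_map[i, j] == patch.max():
                peaks.append((i, j))
    # Second coordinate: nodes (pixels) "absorbed" under each peak,
    # approximated as pixels above half the peak intensity in a 5x5 window.
    absorbed = []
    for (i, j) in peaks:
        win = z_map[max(i - 2, 0):i + 3, max(j - 2, 0):j + 3]
        absorbed.append(int((win >= z_map[i, j] / 2).sum()))
    absorbed.sort(reverse=True)
    # Third coordinate: slope of a straight-line fit through the
    # absorbed-node counts ordered in decreasing order over the peaks.
    slope = 0.0
    if len(absorbed) > 1:
        slope = float(np.polyfit(range(len(absorbed)), absorbed, 1)[0])
    return len(peaks), sum(absorbed), slope
```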
Abstract:
A method of processing an electrical signal transduced from a voice signal is disclosed. A classification model, trained using an augmented training dataset, is applied to the electrical signal to produce a classification indicator. The electrical signal is classified in either a first class or a second class of a binary classification, the classifying being performed as a function of the classification indicator. A trigger signal is provided to a user circuit as a result of the electrical signal being classified in the first class of the binary classification.
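A minimal sketch of the classify-and-trigger flow, assuming a generic pre-trained model with a scikit-learn-style predict() interface; the spectral front-end, the 0.5 threshold and notify_user_circuit() are placeholders, not elements of the abstract:

```python
import numpy as np

def notify_user_circuit():
    # Stand-in for asserting a trigger line toward a downstream user circuit.
    print("TRIGGER")

def classify_and_trigger(signal, model, threshold=0.5):
    # Placeholder spectral front-end; the actual feature extraction and
    # the augmented-training details are not specified here.
    features = np.abs(np.fft.rfft(signal))
    indicator = float(model.predict(features[None, :])[0])  # classification indicator
    in_first_class = indicator >= threshold                 # binary decision
    if in_first_class:
        notify_user_circuit()
    return in_first_class
```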
Abstract:
A capacitive touch screen of, e.g., a mobile communications device such as a smartphone or tablet is operated by producing a capacitance map of capacitance values for the screen, wherein the capacitance values are indicative of locations of the screen exposed to touch by a user, and those locations are identified by comparing the capacitance values against settings of sensing thresholds. Descriptor processing is applied to the capacitance map to extract a set of descriptors indicative of the screen being in one of a plurality of different operating conditions. A set of rules is applied to these descriptors to identify one of the operating conditions, and the setting of sensing thresholds is selected as a function of the operating condition thus identified.
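A sketch of the rule-based threshold selection, assuming a NumPy capacitance map; the descriptor choices (peak value, active-node spread), the named operating conditions and the threshold values are all illustrative assumptions:

```python
import numpy as np

# Illustrative threshold settings per operating condition.
THRESHOLDS = {"bare_finger": 40, "gloved_finger": 15, "water_on_screen": 60}

def select_threshold(cap_map):
    peak = float(cap_map.max())                     # strongest touch response
    spread = float((cap_map > 0.2 * peak).mean())   # fraction of active nodes
    # Rule set applied to the descriptors to identify the condition.
    if spread > 0.5:
        condition = "water_on_screen"   # widespread, diffuse response
    elif peak < 30:
        condition = "gloved_finger"     # weak, attenuated response
    else:
        condition = "bare_finger"
    return condition, THRESHOLDS[condition]
```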
Abstract:
In accordance with embodiments, methods and systems for triggering keyword spotting (KWS) are provided. A computing device converts an audio signal into a plurality of audio frames and generates a Mel Frequency Cepstral Coefficients (MFCC) matrix. The MFCC matrix includes N columns, each comprising coefficients associated with audio features of a different audio frame of the plurality of audio frames. The computing device determines that a trigger condition is satisfied based on an MFCC_0 buffer, which comprises the first row of the MFCC matrix. Based on determining that the trigger condition is satisfied, the computing device provides the MFCC matrix to a neural network for keyword inference.
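A sketch of the trigger check, assuming a NumPy MFCC matrix with one audio frame per column; the mean-energy condition, the threshold value and run_keyword_inference() are assumptions standing in for the actual trigger condition and neural network:

```python
import numpy as np

def run_keyword_inference(mfcc_matrix):
    # Stand-in for handing the full MFCC matrix to the KWS neural network.
    print("running keyword inference on matrix of shape", mfcc_matrix.shape)

def kws_trigger(mfcc_matrix, energy_thresh=-45.0):
    mfcc_0 = mfcc_matrix[0, :]   # MFCC_0 buffer: first row, one value per frame
    triggered = float(mfcc_0.mean()) > energy_thresh
    if triggered:
        run_keyword_inference(mfcc_matrix)
    return triggered
```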
Abstract:
Hand gestures, such as hand or finger hovering, in the proximity space of a sensing panel are detected from X-node and Y-node sensing signals indicative of the presence of a hand feature at corresponding row and column locations of the sensing panel. Hovering is detected by locating the maxima over a plurality of frames within a time window for the sets of X-node and Y-node sensing signals, and recognizing a hovering gesture if the locations of the maxima vary over the plurality of frames for one of the sets of sensing signals and not for the other. Finger shapes are distinguished from “ghosts” generated by palm or fist features by transforming the node-intensity representation of the sensing signals into a node-distance representation, based on the distances of the detection intensities for a number of nodes under a peak from the mean point between the valleys adjacent to the peak.
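A minimal sketch of this hovering test, assuming per-frame 1-D X-node and Y-node intensity profiles; the stillness tolerance still_tol is an assumed parameter:

```python
import numpy as np

def detect_hovering(x_frames, y_frames, still_tol=1):
    # Location of the maximum for each frame, per axis.
    x_peaks = [int(np.argmax(f)) for f in x_frames]  # X-node maxima over time
    y_peaks = [int(np.argmax(f)) for f in y_frames]  # Y-node maxima over time
    x_moves = max(x_peaks) - min(x_peaks) > still_tol
    y_moves = max(y_peaks) - min(y_peaks) > still_tol
    # Hovering: maxima move along exactly one axis while the other stays put.
    return x_moves != y_moves
```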
Abstract:
A sequence of images is processed to generate optical flow data including a list of motion vectors. The motion vectors are grouped by orientation into a first set of moving-away motion vectors and a second set of moving-towards motion vectors. A vanishing point is determined as a function of the first set of motion vectors, and a center position of the images is determined. Pan and tilt information is computed from the offset between the vanishing point and the center position. Approaching objects are identified from the second set as a function of position, length and orientation, thereby identifying overtaking vehicles. Distances to the approaching objects are determined from object position, camera focal length, and the pan and tilt information. A warning signal is issued as a function of the distances.
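A simplified sketch of the pan/tilt step, assuming flow vectors given as (x, y, dx, dy) tuples; the centre-based grouping rule, the mean back-projection used as the vanishing-point estimate and the focal length value are assumptions, not the patented procedure:

```python
import numpy as np

def pan_tilt_from_flow(flow, width, height, focal_px=800.0):
    cx, cy = width / 2.0, height / 2.0
    # Vectors pointing outward from the centre form the expanding
    # ("moving away") set; the rest are candidate approaching objects.
    away = [(x, y, dx, dy) for (x, y, dx, dy) in flow
            if dx * (x - cx) + dy * (y - cy) > 0]
    if not away:
        return 0.0, 0.0
    # Crude vanishing-point estimate: back-project each away vector and
    # average the base points (a least-squares intersection would be
    # the more faithful choice).
    vx = float(np.mean([x - dx for (x, y, dx, dy) in away]))
    vy = float(np.mean([y - dy for (x, y, dx, dy) in away]))
    # Pan/tilt from the offset between vanishing point and image centre.
    pan = np.degrees(np.arctan2(vx - cx, focal_px))
    tilt = np.degrees(np.arctan2(vy - cy, focal_px))
    return pan, tilt
```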
Abstract:
A method for generating a lane departure warning in a vehicle comprises acquiring a plurality of frames of a digital image of a road on which the vehicle is running, the digital image including the image of a lane within which the vehicle is running and of the marking lines of the lane. For each acquired frame, edge points of the frame are extracted and analyzed to evaluate a lane departure status, the evaluation including a lane departure verification procedure that identifies in the frame points representative of the position of the lane marking lines; a lane departure alert is generated if a lane departure status is detected by the verification procedure. The lane departure verification procedure includes comparing the position of the identified points to reference positions of the lane, the reference positions being obtained by a lane calibration procedure performed on a set of acquired frames; the calibration procedure includes filtering edge points of the image frame belonging to a horizontal stripe of the frame spanning a plurality of rows.
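A minimal sketch of the verification step, assuming calibrated reference column positions for the left and right markings; the stripe rows and the pixel margin are illustrative parameters:

```python
def lane_departure_alert(edge_points, ref_left, ref_right,
                         stripe=(300, 340), margin=25):
    # Keep only edge points inside the horizontal stripe of rows that
    # the calibration procedure also used.
    in_stripe = [(r, c) for (r, c) in edge_points if stripe[0] <= r < stripe[1]]
    if not in_stripe:
        return False
    cols = sorted(c for (r, c) in in_stripe)
    left, right = cols[0], cols[-1]   # leftmost/rightmost marking evidence
    # Departure status: a marking has drifted past the assumed margin
    # relative to its calibrated reference position.
    return left > ref_left + margin or right < ref_right - margin
```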
Abstract:
In an embodiment, hand gestures, such as hand or finger hovering, in the proximity space of a sensing panel are detected from X-node and Y-node sensing signals indicative of the presence of a hand feature at corresponding row locations and column locations of a sensing panel. Hovering is detected by detecting the locations of maxima for a plurality of frames over a time window for a set of X-node sensing signals and for a set of Y-node sensing signals, and by recognizing a hovering gesture if the locations of the maxima detected vary over the plurality of frames for one of the sets of X-node and Y-node sensing signals while remaining stationary for the other of the sets. Finger shapes are distinguished from “ghosts” generated by palm or fist features by transforming the node-intensity representation of the sensing signals into a node-distance representation, based on the distances of the detection intensities for a number of nodes under a peak from the mean point between the valleys adjacent to the peak.
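Complementing the hovering sketch above, a minimal sketch of the node-distance transformation for one peak of a 1-D node-intensity profile; the valley search and the positional distance metric are assumptions:

```python
def node_distance_profile(profile, peak_idx):
    # Walk outward from the peak to the adjacent valleys (local minima).
    left = peak_idx
    while left > 0 and profile[left - 1] < profile[left]:
        left -= 1
    right = peak_idx
    while right < len(profile) - 1 and profile[right + 1] < profile[right]:
        right += 1
    # Mean point between the two valleys adjacent to the peak.
    mean_point = (left + right) / 2.0
    # Node-distance representation: each node under the peak mapped to its
    # distance from the mean point. A compact finger yields short distances;
    # a broad palm/fist "ghost" yields long ones.
    return [abs(n - mean_point) for n in range(left, right + 1)]
```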