Recognition and processing of gestures in a graphical user interface using machine learning
Abstract:
In an embodiment, a computer-implemented method comprises: displaying a particular view of a plurality of views of a continuous content stream of individually actionable content items; wherein the plurality of views each include a different subset of the individually actionable content items and respond to different sets of signaling gestures; automatically recognizing, while the continuous content stream is being displayed, a mode change from a control mode to a signal mode in the particular view of the plurality of views; receiving a touch input in the particular view of the plurality of views and, in response, generating output data indicating a signaling gesture classification, for the touch input, that is accepted by the particular view; updating, according to the output data, the particular view of the plurality of views; wherein the method is performed by one or more computing devices.
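The abstract describes a flow in which a mode change is recognized, a touch input is classified into a signaling gesture, and the active view is updated only if it accepts that gesture class. The sketch below illustrates one possible arrangement of that flow under stated assumptions; the names (View, GestureClassifier, Mode, handle_touch) and the placeholder classification heuristic are hypothetical and do not come from the patent, which does not specify an API or model.

```python
# Minimal sketch of the claimed flow, assuming hypothetical names and a
# placeholder classifier in place of the machine-learning model.
from dataclasses import dataclass
from enum import Enum, auto
from typing import List, Set, Tuple


class Mode(Enum):
    CONTROL = auto()  # touches scroll/select within the content stream
    SIGNAL = auto()   # touches are interpreted as signaling gestures


@dataclass
class View:
    """One of the plurality of views; each accepts its own set of gestures."""
    name: str
    content_items: List[str]
    accepted_gestures: Set[str]

    def update(self, gesture: str) -> None:
        # Apply the classified gesture only if this view accepts it.
        if gesture in self.accepted_gestures:
            print(f"{self.name}: applying '{gesture}' to {self.content_items}")
        else:
            print(f"{self.name}: ignoring unaccepted gesture '{gesture}'")


class GestureClassifier:
    """Stand-in for the ML model that maps a touch trajectory to a label."""

    def classify(self, touch_points: List[Tuple[int, int]]) -> str:
        # Placeholder heuristic; a real implementation would run a trained
        # model over the touch trajectory features.
        if touch_points and touch_points[-1][0] > touch_points[0][0]:
            return "swipe_right"
        return "tap"


def handle_touch(view: View, mode: Mode, classifier: GestureClassifier,
                 touch_points: List[Tuple[int, int]]) -> None:
    """Recognize the mode, classify the touch, and update the particular view."""
    if mode is not Mode.SIGNAL:
        print("Control mode: touch handled as ordinary scrolling/selection.")
        return
    gesture = classifier.classify(touch_points)  # output data: gesture classification
    view.update(gesture)                         # update according to the output data


if __name__ == "__main__":
    feed_view = View("feed", ["item-1", "item-2"], {"swipe_right", "tap"})
    handle_touch(feed_view, Mode.SIGNAL, GestureClassifier(), [(0, 0), (40, 2)])
```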