Abstract:
Techniques for automatically completing a partially completed UI design created by a user are described. A UI query including attributes of UI components in the partially completed UI design is created. Design examples with similar UI components are identified. UI components of one such design example are displayed to automatically complete the partially completed UI design (also called an “auto-complete suggestion”). The user can systematically navigate the design examples and accept auto-complete suggestions to include in the partially completed UI design.
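The flow described above (build a query from component attributes, rank design examples by similarity, surface the missing components of the best match as a suggestion) can be sketched as below. This is a minimal illustration, not the patent's implementation: the component attributes, the count-based similarity metric, and all names are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UIComponent:
    kind: str  # e.g. "button", "text_field" (hypothetical attribute set)
    x: int
    y: int

def query_similarity(query, example):
    """Score a design example against a UI query by counting matched
    component kinds (a deliberately coarse, hypothetical metric)."""
    score = 0
    remaining = list(example)
    for comp_q in query:
        for comp_e in remaining:
            if comp_e.kind == comp_q.kind:
                score += 1
                remaining.remove(comp_e)
                break
    return score

def autocomplete_suggestions(partial_design, examples):
    """Rank design examples against the partial design and return, from
    the best match, the components not already in the partial design."""
    best = max(examples, key=lambda ex: query_similarity(partial_design, ex))
    present = {c.kind for c in partial_design}
    return [c for c in best if c.kind not in present]
```

For example, a partial design containing only a button would be matched against an example that pairs a button with a text field, and the text field would be offered as the auto-complete suggestion.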
Abstract:
A method includes identifying a user interface (UI) action demonstrated by a user for an application (104) executed at a first electronic device (102) and identifying a gesture demonstrated by the user using a second electronic device (108, 109, 110) as a gesture intended by the user to trigger the UI action for the application at the first electronic device. In response to detecting a subsequent instance of the gesture at the second electronic device, the method includes triggering an instance of the UI action for the application at the first electronic device.
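The demonstrate-then-trigger loop above can be sketched as a small binding table: a demonstration records a gesture-to-action binding, and a subsequent instance of the gesture triggers the bound action. The class and the gesture identifiers are hypothetical; the patent does not specify this data structure.

```python
class GestureActionBinder:
    """Sketch: bind a gesture demonstrated on a second device to a UI
    action on a first device, then replay the action on later gestures."""

    def __init__(self):
        self.bindings = {}  # gesture id -> callable that triggers the UI action

    def demonstrate(self, gesture_id, action):
        """Record the user's demonstration pairing a gesture with an action."""
        self.bindings[gesture_id] = action

    def on_gesture(self, gesture_id):
        """Handle a subsequent instance of the gesture at the second device."""
        action = self.bindings.get(gesture_id)
        return action() if action is not None else None
```

A wrist-flick on a watch could, for instance, be demonstrated once and thereafter advance slides in a presentation app on a phone.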
Abstract:
In general, techniques and systems for defining a gesture with a computing device using short-range communication are described. In one example, a method includes obtaining position information from an array of position devices using near-field communication (NFC) during a movement of the computing device with respect to the array, wherein the position information identifies unique positions within the array for each position device from which position information was obtained. The method may also include determining sequence information associated with the position information, wherein the sequence information is representative of an order in which the position information was obtained from each position device, and performing, by the computing device, an action based at least in part on the position information and the sequence information, wherein the position information and the sequence information are representative of a gesture input associated with the movement of the computing device.
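Because each position device reports a unique position and the sequence records the order in which positions were read, a gesture reduces to an ordered list of positions. A toy sketch, assuming a hypothetical 3×3 tag array and made-up gesture names:

```python
def recognize_tag_gesture(observed, gestures):
    """Match the ordered sequence of unique positions read over NFC
    against a dictionary of known gestures; return the name or None."""
    for name, pattern in gestures.items():
        if list(observed) == list(pattern):
            return name
    return None

# Hypothetical 3x3 array: each position device reports its (row, col).
GESTURES = {
    "swipe_right": [(1, 0), (1, 1), (1, 2)],
    "swipe_down":  [(0, 1), (1, 1), (2, 1)],
}
```

Sweeping the device across the middle row left-to-right would yield the position sequence (1, 0), (1, 1), (1, 2) and be recognized as "swipe_right".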
Abstract:
A computer-implemented user interface method and apparatus are disclosed. A user input signal corresponding to a drawn gesture is received and sampled. If the input signal is orientation invariant, the sampled, spaced points are rotated in accordance with an indicative angle to generate an input vector. If the input signal is orientation sensitive, the sampled, spaced points are rotated to align with a base orientation to generate the input vector. The gesture is recognized based on a comparison of the input vector to a plurality of templates.
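The orientation-invariant branch of the pipeline above (resample the drawn gesture into equally spaced points, rotate by the indicative angle, flatten into an input vector, compare against templates) can be sketched roughly as follows. The 16-point resample count, the cosine-similarity comparison, and all function names are assumptions for illustration, not the patent's implementation:

```python
import math

N = 16  # resample count (an assumed value)

def path_length(pts):
    return sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))

def resample(pts, n=N):
    """Resample a stroke into n equally spaced points."""
    interval = path_length(pts) / (n - 1)
    pts, out, d = list(pts), [pts[0]], 0.0
    i = 1
    while i < len(pts):
        seg = math.dist(pts[i - 1], pts[i])
        if seg > 0 and d + seg >= interval:
            t = (interval - d) / seg
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue measuring from the new point
            d = 0.0
        else:
            d += seg
        i += 1
    while len(out) < n:          # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def centroid(pts):
    return (sum(x for x, _ in pts) / len(pts), sum(y for _, y in pts) / len(pts))

def rotate_by(pts, theta):
    """Rotate points by theta about their centroid."""
    cx, cy = centroid(pts)
    c, s = math.cos(theta), math.sin(theta)
    return [((x - cx) * c - (y - cy) * s + cx,
             (x - cx) * s + (y - cy) * c + cy) for x, y in pts]

def to_vector(pts, orientation_invariant=True):
    """Resample, cancel the indicative angle (centroid-to-first-point
    angle) if orientation invariant, and flatten to a unit vector."""
    pts = resample(pts)
    cx, cy = centroid(pts)
    if orientation_invariant:
        pts = rotate_by(pts, -math.atan2(pts[0][1] - cy, pts[0][0] - cx))
    vec = [v for x, y in pts for v in (x - cx, y - cy)]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def recognize(stroke, templates):
    """Return the template name with the highest cosine similarity."""
    v = to_vector(stroke)
    return max(templates,
               key=lambda name: sum(a * b for a, b in zip(v, to_vector(templates[name]))))
```

The orientation-sensitive branch would differ only in `to_vector`: instead of cancelling the indicative angle, the points would be rotated toward the nearest base orientation before flattening.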
Abstract:
In one example, a method includes detecting, by a computing device, at least one user contact with a presence-sensitive screen of the computing device to input one or more characters of an input string. The method also includes detecting, by the computing device, a subsequent user contact with the presence-sensitive screen. The method also includes detecting, by the computing device, a gesture at a region of the presence-sensitive screen that is associated with a terminator symbol while the subsequent user contact is maintained with the presence-sensitive screen. The method also includes adding, by the computing device, the terminator symbol to the input string when the gesture comprises a virtual key press gesture. The method also includes replacing, by the computing device, the input string with a predicted completed string for the input string when the gesture comprises a prediction completion gesture.
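The dispatch at the terminator key can be summarized in a few lines: a virtual key press appends the terminator symbol, while a prediction completion gesture replaces the input string with its predicted completion. The gesture names and the dictionary-based predictor here are hypothetical stand-ins:

```python
def handle_terminator_gesture(input_string, gesture, terminator, predictions):
    """Sketch of the two-way dispatch at a terminator key region while
    the subsequent contact is maintained (gesture names assumed)."""
    if gesture == "key_press":
        # Virtual key press: add the terminator symbol to the input string.
        return input_string + terminator
    if gesture == "prediction_completion":
        # Completion gesture: replace the string with its predicted completion.
        return predictions.get(input_string, input_string)
    return input_string
```

For example, with "goo" entered, a key press at the period key yields "goo." while a completion gesture could yield a predicted "google.com".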
Abstract:
By knowing which upcoming actions a user might perform, a mobile application can optimize a user interface or reduce the amount of user input needed for accomplishing a task. A herein-described prediction module can answer queries from a mobile application regarding which actions in the application the user is likely to perform at a given time. Any application can register and communicate with the prediction module via a straightforward application programming interface (API). The prediction module continuously learns a prediction model for each application based on the application's evolving event history. The prediction module generates predictions by combining multiple predictors with an online learning method, capturing event patterns not only within but also across registered applications. The prediction module is evaluated using events collected from multiple types of mobile devices.
Abstract:
Methods and apparatus for an interaction system specifying cross-device interaction are provided. The interaction system provides application program interfaces (APIs) and a scripting environment that allow development of scripts that combine user input and sensing events and distribute output across devices to create a range of rich cross-device behaviors with minimal development effort. The interaction system includes an integrated environment for developers to author and test cross-device behaviors. When a script is ready, it can be deployed on a network of mobile and wearable computing devices using the interaction system's runtime environment. An evaluation of the interaction system with twelve participants revealed that the interaction system significantly reduced developer effort for creating and iterating on cross-device behaviors, allowing developers to focus on target interaction behaviors and high-level device capabilities rather than low-level specifications.
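The core of such a scripting API, combining input events from one device and routing output to another, can be illustrated with a toy event-routing runtime. Every name below is hypothetical; the actual interaction system's API surface is not given in the abstract:

```python
class CrossDeviceRuntime:
    """Toy sketch of a cross-device scripting runtime: scripts subscribe
    to input/sensing events from any device and direct output to a
    chosen device (all names are assumptions)."""

    def __init__(self):
        self.handlers = []   # (event_type, handler) pairs registered by scripts
        self.outputs = {}    # device_id -> list of messages delivered there

    def on(self, event_type, handler):
        """Script API: run handler(source_device, payload) on matching events."""
        self.handlers.append((event_type, handler))

    def show(self, device_id, message):
        """Script API: distribute output to a specific device."""
        self.outputs.setdefault(device_id, []).append(message)

    def emit(self, source_device, event_type, payload):
        """Runtime side: dispatch an input or sensing event to scripts."""
        for etype, handler in self.handlers:
            if etype == event_type:
                handler(source_device, payload)
```

A two-line "script" in this sketch could, for example, mirror a tap sensed on a watch as a message shown on a phone.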