Abstract:
Techniques for automatically completing a partially completed UI design created by a user are described. A UI query including attributes of UI components in the partially completed UI design is created. Design examples with similar UI components are identified. UI components of one such design example are displayed to automatically complete the partially completed UI design (also called an "auto-complete suggestion"). The user can systematically navigate the design examples and accept auto-complete suggestions to include in the partially completed UI design.
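A minimal sketch of the matching step this abstract describes: build a query from the attributes of the components already placed, rank example designs by attribute overlap, and offer the best example's remaining components as the auto-complete suggestion. All function names, attribute keys, and the similarity measure are illustrative assumptions, not the patent's actual method.

```python
# Hypothetical sketch: match a partial UI design against example designs
# by comparing component attributes. Components are dicts; designs are lists.

def make_query(components):
    """Build a UI query: the set of (type, alignment) attribute pairs."""
    return {(c["type"], c.get("alignment", "none")) for c in components}

def score(query, example):
    """Similarity = fraction of query attributes also found in the example."""
    return len(query & make_query(example)) / len(query) if query else 0.0

def suggest(partial, examples):
    """Rank examples; the components the best one adds form the suggestion."""
    q = make_query(partial)
    best = max(examples, key=lambda e: score(q, e))
    placed_types = {c["type"] for c in partial}
    return [c for c in best if c["type"] not in placed_types]

partial = [{"type": "toolbar", "alignment": "top"}]
examples = [
    [{"type": "toolbar", "alignment": "top"}, {"type": "list", "alignment": "left"}],
    [{"type": "button", "alignment": "center"}],
]
print(suggest(partial, examples))  # components borrowed from the closest example
```

Navigating suggestions would amount to stepping through the ranked examples rather than taking only the top one.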
Abstract:
By knowing which upcoming actions a user might perform, a mobile application can optimize its user interface or reduce the amount of user input needed to accomplish a task. A herein-described prediction module can answer queries from a mobile application regarding which actions in the application the user is likely to perform at a given time. Any application can register and communicate with the prediction module via a straightforward application programming interface (API). The prediction module continuously learns a prediction model for each application based on the application's evolving event history. The prediction module generates predictions by combining multiple predictors with an online learning method, capturing event patterns not only within but also across registered applications. The prediction module is evaluated using events collected from multiple types of mobile devices.
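A toy sketch of the API shape and learning scheme described here: applications register, log events, and query predictions; the module combines two simple predictors (most-frequent and most-recent) with multiplicative-weights online learning, down-weighting a predictor whenever it misses. The class, method names, and predictors are assumptions for illustration, not the patent's actual design.

```python
from collections import Counter, defaultdict

class PredictionModule:
    """Illustrative prediction module: per-app histories, weighted predictors."""

    def __init__(self, eta=0.5):
        self.events = defaultdict(list)               # app -> event history
        self.weights = {"frequent": 1.0, "recent": 1.0}
        self.eta = eta                                # learning rate

    def register(self, app):
        self.events.setdefault(app, [])

    def notify(self, app, event):
        # Online learning: penalize each predictor that would have missed.
        for name in self.weights:
            if self._predict_one(name, app) not in (None, event):
                self.weights[name] *= (1 - self.eta)
        self.events[app].append(event)

    def _predict_one(self, name, app):
        hist = self.events[app]
        if not hist:
            return None
        if name == "frequent":
            return Counter(hist).most_common(1)[0][0]
        return hist[-1]                               # "recent" predictor

    def predict(self, app):
        # Weighted vote across predictors.
        votes = Counter()
        for name, w in self.weights.items():
            p = self._predict_one(name, app)
            if p is not None:
                votes[p] += w
        return votes.most_common(1)[0][0] if votes else None

pm = PredictionModule()
pm.register("mail")
for e in ["inbox", "compose", "inbox", "inbox"]:
    pm.notify("mail", e)
print(pm.predict("mail"))
```

Capturing cross-application patterns, as the abstract mentions, would add predictors conditioned on other apps' recent events, combined by the same weighting.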
Abstract:
A computing device comprises a processor and an authoring tool executing on the processor. The processor receives demonstration data representative of at least one demonstration of a multi-finger gesture and declaration data specifying one or more constraints for the multi-finger gesture. The processor generates, in accordance with the demonstration data and the declaration data, a module to detect the multi-finger gesture within a computer-generated user interface.
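One way to picture the generation step in this abstract: a detector module is produced from a demonstration (here, the demonstrated finger count) combined with declared constraints (here, direction and minimum travel). Gestures are simplified to per-finger lists of (x, y) points, and every name and rule below is an illustrative assumption.

```python
# Hedged sketch: build a multi-finger gesture detector from one demonstration
# plus declared constraints, per the abstract's demonstration + declaration idea.

def build_detector(demo, constraints):
    """demo: list of per-finger point paths; constraints: declared rules."""
    n_fingers = len(demo)                           # learned from demonstration

    def detect(candidate):
        if len(candidate) != n_fingers:
            return False
        for path in candidate:
            dx = path[-1][0] - path[0][0]
            dy = path[-1][1] - path[0][1]
            # Declared constraint: motion must be predominantly horizontal.
            if constraints.get("direction") == "horizontal" and abs(dx) <= abs(dy):
                return False
            # Declared constraint: each finger must travel a minimum distance.
            if abs(dx) + abs(dy) < constraints.get("min_travel", 0):
                return False
        return True

    return detect

# A two-finger horizontal swipe demonstrated once, plus two declared constraints.
demo = [[(0, 0), (50, 2)], [(0, 20), (50, 22)]]
detect = build_detector(demo, {"direction": "horizontal", "min_travel": 30})
print(detect([[(0, 0), (40, 1)], [(0, 10), (45, 9)]]))  # True
```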
Abstract:
A method may include identifying, from a set of applications, a subset of the set of applications, each application from the subset of the set of applications being predicted, by a computing device, to be selected by a user. The method may also include outputting a graphical user interface that includes: a plurality of application icons representing the set of applications and positioned around at least a portion of a perimeter of the graphical user interface; and a plurality of prediction icons positioned within an interior of the graphical user interface and representing the subset of the set of applications. The position of a particular prediction icon representing a particular application may be based on a position of a particular application icon representing the particular application. The method may further include executing an action associated with the particular prediction icon or one of the plurality of application icons.
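A small geometric sketch of the positioning rule this abstract describes: application icons sit on the perimeter of a circle, and each prediction icon is placed in the interior at the same angle as the application icon it represents, just at a smaller radius. Radii, function names, and the circular layout itself are assumptions for illustration.

```python
import math

def perimeter_positions(apps, radius=100.0):
    """Place application icons evenly around a circle's perimeter."""
    n = len(apps)
    return {app: (radius * math.cos(2 * math.pi * i / n),
                  radius * math.sin(2 * math.pi * i / n))
            for i, app in enumerate(apps)}

def prediction_positions(predicted, app_pos, inner_radius=40.0):
    """Place each prediction icon on the ray toward its app icon, but interior."""
    out = {}
    for app in predicted:
        x, y = app_pos[app]
        r = math.hypot(x, y)
        out[app] = (x * inner_radius / r, y * inner_radius / r)
    return out

apps = ["mail", "maps", "music", "camera"]
pos = perimeter_positions(apps)
print(prediction_positions(["maps"], pos))
```

Keeping the angle fixed means a predicted app's shortcut appears "in line with" its perimeter icon, which is one plausible reading of the position dependency in the claim.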
Abstract:
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for combining authentication and application shortcuts. An example method includes detecting, by a device having a touchscreen, a gesture by a user on the touchscreen while the device is in a sleep mode; classifying the gesture, by the device, as an intentional gesture or an accidental gesture; maintaining the device in the sleep mode if the gesture is classified as an accidental gesture; and, if the gesture is classified as an intentional gesture, responsive to determining, by the device, that the gesture matches one or more confirmed gestures stored on the device, based at least in part on a set of predefined criteria: recognizing the user as authenticated; and, without requiring additional user input, selecting an application, from a plurality of different applications, according to the gesture and launching the application on the device.
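The control flow in this abstract can be sketched end to end: drop gestures classified as accidental, match intentional ones against stored confirmed gestures, and on a match authenticate and launch the mapped application in one step. The accidental/intentional classifier and the matcher below are placeholder heuristics, and all names are illustrative.

```python
def handle_gesture(points, confirmed, tolerance=0.2):
    """points: gesture path; confirmed: {name: (template, app)}. Returns an action."""
    if len(points) < 3:                    # placeholder classifier: too short
        return "sleep"                     # accidental -> stay in sleep mode
    for name, (template, app) in confirmed.items():
        if _matches(points, template, tolerance):
            # Authenticated and launched without additional user input.
            return f"authenticated:launch:{app}"
    return "sleep"                         # intentional but unrecognized

def _matches(points, template, tol):
    """Placeholder matcher: pointwise distance within tolerance."""
    if len(points) != len(template):
        return False
    return all(abs(px - tx) <= tol and abs(py - ty) <= tol
               for (px, py), (tx, ty) in zip(points, template))

confirmed = {"C-shape": ([(1, 0), (0, 0.5), (1, 1)], "camera")}
print(handle_gesture([(1.0, 0.1), (0.1, 0.5), (0.9, 1.0)], confirmed))
```

A real implementation would use a trained classifier for the accidental/intentional split and a proper trajectory matcher, but the branch structure mirrors the claimed method.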
Abstract:
Certain implementations of the disclosed technology may include systems, methods, and computer-readable media for transferring images and information from a mobile computing device to a computer monitor for display. In one example implementation, a method is provided that includes receiving, from a remote client, an initiation request, wherein the remote client is associated with a remote display. The method further includes sending a representation of a unique code to the remote client, and receiving, from a mobile device, an indication that the mobile device captured the representation of the unique code. The method further includes receiving, from the mobile device, a display image for presentation on the remote display, and sending the display image to the remote client for presentation on the remote display.
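The pairing protocol described here can be sketched as three server-side steps: issue a unique code to the remote client, accept the mobile device's report that it captured the code, and thereafter route the mobile device's display images to the paired remote client. Class and method names are assumptions for illustration.

```python
import secrets

class PairingServer:
    """Illustrative server mediating mobile-to-remote-display pairing."""

    def __init__(self):
        self.pending = {}     # unique code -> remote client id
        self.paired = {}      # mobile id -> remote client id

    def initiate(self, client_id):
        # Step 1: issue a unique code, to be rendered on the remote display.
        code = secrets.token_hex(4)
        self.pending[code] = client_id
        return code

    def confirm_capture(self, mobile_id, code):
        # Step 2: the mobile device reports capturing the displayed code.
        client_id = self.pending.pop(code, None)
        if client_id is None:
            return False
        self.paired[mobile_id] = client_id
        return True

    def forward_image(self, mobile_id, image):
        # Step 3: route display images to the paired remote client.
        client_id = self.paired.get(mobile_id)
        return (client_id, image) if client_id else None

srv = PairingServer()
code = srv.initiate("monitor-1")
srv.confirm_capture("phone-1", code)
print(srv.forward_image("phone-1", b"jpeg-bytes"))
```

In the described implementation the "representation of a unique code" would typically be something the mobile camera can capture, such as a barcode rendering of the code.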
Abstract:
A system and method for creating and organizing events includes an activity stream application that captures, searches, and collaborates on one or more events. The events include unstructured data comprising text, digital ink, an audio clip, and an image. The activity stream application receives user input, generates a new event, and combines related events into the same activity. The activity stream application receives a search query and searches for events relevant to the search query. In one embodiment, the search query includes contextual information comprising at least one of a similar time, a similar location, a similar situation, and a relatedness of event attributes.
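A minimal sketch of the contextual search this abstract describes: events score higher when their text matches the query and when their time or location is similar to the query context. The weights, field names, and similarity thresholds are all assumptions for illustration.

```python
def relevance(event, query):
    """Score an event against a text query with contextual information."""
    score = 0.0
    if query.get("text", "").lower() in event["text"].lower():
        score += 1.0                      # textual match
    if "time" in query and abs(event["time"] - query["time"]) < 3600:
        score += 0.5                      # within an hour -> "similar time"
    if query.get("location") == event.get("location"):
        score += 0.5                      # same place -> "similar location"
    return score

def search(events, query):
    """Return matching events, most relevant first."""
    hits = [(relevance(e, query), e) for e in events]
    return [e for s, e in sorted(hits, key=lambda p: -p[0]) if s > 0]

events = [
    {"text": "Sketched login screen", "time": 1000, "location": "office"},
    {"text": "Lunch notes", "time": 90000, "location": "cafe"},
]
print(search(events, {"text": "login", "time": 1500, "location": "office"}))
```

"Similar situation" and attribute relatedness would add further scoring terms of the same shape, and grouping high-relatedness events would implement the abstract's combining of related events into one activity.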