Abstract:
Embodiments are described for handling focus when a gesture is input in a multi-screen device. In embodiments, the gesture indicates that two images, one of which is in focus, are to swap positions. In response to receiving the gesture, the image in focus is moved from a first display of a first screen to a second display of a second screen. After the images are swapped, focus is maintained on the image that originally had it.
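A minimal sketch of the described swap behavior, assuming hypothetical Screen and image representations (none of these names come from the disclosure): the key point is that focus follows the image rather than the screen it started on.

```python
# Hedged sketch, not the patent's implementation: Screen, swap_with_focus,
# and the image labels are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Screen:
    name: str
    image: str

def swap_with_focus(first: Screen, second: Screen, focused: str) -> str:
    """Swap the images shown on two screens and return the image that
    should retain focus (the one that had it before the swap)."""
    first.image, second.image = second.image, first.image
    # Focus stays with the image, even though it now sits on the other screen.
    return focused

if __name__ == "__main__":
    a = Screen("primary", image="photo_a")
    b = Screen("secondary", image="photo_b")
    focus = swap_with_focus(a, b, focused="photo_a")
    print(a, b, "focus:", focus)  # photo_a moved to 'secondary' but keeps focus
```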
Abstract:
A dual-screen user device and methods for launching applications from a revealed desktop onto a logically chosen screen are disclosed. Specifically, a user reveals the desktop and then launches a selected application from one of two desktops displayed on the primary and secondary screens of a device. When the application is launched, it is displayed on a specific screen depending on the input received and the logical rules governing display output. As the application is displayed on that screen, the desktop is removed from display and the opposite screen can display other data.
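The placement rule can be illustrated with a short sketch; the screen names, the launch_from_desktop helper, and the state dictionary are assumptions for illustration, not the disclosed implementation.

```python
# Hedged sketch of the launch rule: the application is placed on the screen
# where the launch input occurred, replacing the desktop there.
def launch_from_desktop(screens: dict, tapped_screen: str, app: str) -> dict:
    """Each screen shows either 'desktop' or an application name."""
    screens = dict(screens)
    screens[tapped_screen] = app  # the application replaces the desktop
    # The opposite screen is free to display other data; here it simply
    # keeps whatever it was already showing.
    return screens

state = {"primary": "desktop", "secondary": "desktop"}
print(launch_from_desktop(state, "secondary", "browser"))
# {'primary': 'desktop', 'secondary': 'browser'}
```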
Abstract:
Methods and devices for providing a virtual keyboard in connection with a multiple screen device are provided. More particularly, the information displayed by the multiple screen device that has the current focus of the user is identified and is presented on the top screen. The virtual keyboard is presented on the bottom screen. The virtual keyboard can be dismissed in response to detecting a change in the focus of the user.
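A small state-machine sketch of this behavior, assuming an invented DualScreenKeyboard class and content labels (nothing here is from the disclosure): the focused content occupies the top screen, the keyboard the bottom, and a focus change dismisses the keyboard.

```python
# Illustrative model only; class and attribute names are assumptions.
class DualScreenKeyboard:
    def __init__(self):
        self.top = None
        self.bottom = None
        self.focused = None

    def show_keyboard(self, focused_content: str):
        self.focused = focused_content
        self.top = focused_content       # content with focus goes on top
        self.bottom = "virtual_keyboard" # keyboard takes the bottom screen

    def on_focus_change(self, new_focus: str):
        if new_focus != self.focused:    # focus moved: dismiss the keyboard
            self.bottom = None
            self.focused = new_focus

kb = DualScreenKeyboard()
kb.show_keyboard("email_compose")
kb.on_focus_change("web_page")
print(kb.top, kb.bottom)  # email_compose None
```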
Abstract:
A multi-display device is adapted to be dockable or otherwise associatable with an additional device. In accordance with one exemplary embodiment, the multi-display device is dockable with a smartpad. The exemplary smartpad can include a screen, a touch sensitive display, a configurable area, one or more gesture capture regions and a camera. The smartpad can also include a port adapted to receive the device. The exemplary smartpad is able to cooperate with the device such that information displayable on the device is also displayable on the smartpad. Furthermore, any one or more of the functions on the device are extendable to the smartpad, with the smartpad capable of acting as an input/output interface or extension of the device. Therefore, for example, information from one or more of the displays on the multi-screen device is displayable on the smartpad.
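One way to picture the cooperation is a simple mirroring hookup, sketched below under assumed Device and SmartPad names; the real docking and rendering path is not specified at this level of the disclosure.

```python
# Hedged sketch: once docked, anything the device displays is forwarded to
# the smartpad, which acts as an input/output extension of the device.
class SmartPad:
    def render(self, content: str):
        print(f"smartpad shows: {content}")

class Device:
    def __init__(self):
        self.docked_pad = None
        self.frame = ""

    def display(self, content: str):
        self.frame = content
        if self.docked_pad is not None:
            self.docked_pad.render(content)  # extend the output to the smartpad

phone = Device()
phone.docked_pad = SmartPad()  # device received through the smartpad's port
phone.display("home screen")   # smartpad shows: home screen
```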
Abstract:
Embodiments are described for handling the launching of applications in a multi-screen device. In embodiments, a first touch sensitive display of a first screen receives input to launch an application. In response, the application is launched. A determination is made as to whether the first touch sensitive display already has windows in its stack. If there are no windows in the stack of the first touch sensitive display, a new window of the application is displayed on the first touch sensitive display. If there are windows in the stack, a determination is made as to whether a second display has windows in its stack. If not, the new window is displayed on the second display. If the second display also has windows in its stack, the new window is displayed on the first touch sensitive display.
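The placement logic reads as a three-branch decision, transcribed below; the stack lists and the choose_display function are stand-ins assumed for the example, not the disclosed data structures.

```python
# Sketch of the window-placement rule described above.
def choose_display(first_stack: list, second_stack: list) -> str:
    """Return which display should receive the new application window."""
    if not first_stack:
        return "first"   # first display's stack is empty: use it
    if not second_stack:
        return "second"  # first is occupied but second is empty
    return "first"       # both occupied: fall back to the display that
                         # received the launch input

print(choose_display([], []))             # first
print(choose_display(["mail"], []))       # second
print(choose_display(["mail"], ["web"]))  # first
```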
Abstract:
A dual-screen user device and methods for controlling data displayed thereby are disclosed. Specifically, the data displayed by the multiple screens of the dual-screen user device is conditioned upon the type of user gesture or combination of user gestures detected. The display controls described herein can correlate user inputs received in a gesture capture region to one or more display actions, which may include maximization, minimization, or reformatting instructions.
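The correlation between gestures and display actions can be pictured as a lookup, as in the sketch below; the gesture names and the action labels are invented for illustration and are not the disclosed gesture set.

```python
# Hedged sketch of correlating inputs from a gesture capture region to
# display actions (maximize, minimize, reformat).
DISPLAY_ACTIONS = {
    "drag_up": "maximize",    # e.g., expand a window across both screens
    "drag_down": "minimize",  # e.g., shrink back to a single screen
    "pinch": "reformat",      # e.g., reflow content for the new geometry
}

def handle_gesture(gesture: str) -> str:
    return DISPLAY_ACTIONS.get(gesture, "ignore")

print(handle_gesture("drag_up"))  # maximize
print(handle_gesture("tap"))      # ignore
```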
Abstract:
A dual-screen user device and methods for revealing a combination of desktops on single and multiple screens are disclosed. Specifically, a determined number of desktops is displayed on at least one of the screens of the device, conditioned upon the input received and the state of the device. Where a screen of the device is determined to be inactive, the desktop is not displayed on that screen, but is stored by the device in a virtually displayed state. Upon receiving input indicating that the inactive screen has become active, the device can actually display the desktop on the screen.
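A minimal sketch of this reveal logic, assuming an invented DesktopManager and screen names: an inactive screen's desktop is held in a virtual (off-screen) state and only actually rendered once that screen becomes active.

```python
# Illustrative assumption, not the disclosed implementation.
class DesktopManager:
    def __init__(self):
        self.active = {"primary": True, "secondary": False}
        self.virtual = set()  # desktops held in the virtually displayed state

    def reveal(self, screen: str):
        if self.active[screen]:
            print(f"rendering desktop on {screen}")
        else:
            self.virtual.add(screen)  # store the desktop; do not render yet

    def activate(self, screen: str):
        self.active[screen] = True
        if screen in self.virtual:    # now actually display the stored desktop
            self.virtual.discard(screen)
            print(f"rendering desktop on {screen}")

dm = DesktopManager()
dm.reveal("secondary")    # nothing rendered; desktop held virtually
dm.activate("secondary")  # rendering desktop on secondary
```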
Abstract:
Methods and systems for presenting a user interface that includes a virtual keyboard are provided. More particularly, a virtual keyboard can be presented using one or more touch screens included in a multiple display device. The content of the virtual keyboard can be controlled in response to user input. Configurable portions of the virtual keyboard include selectable rows of virtual keys. In addition, whether selectable rows of virtual keys and/or a suggestion bar are displayed together with the standard character and control keys of the virtual keyboard can be determined in response to context or user input.
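The configurable composition can be sketched as assembling the keyboard from optional and standard rows; the row names and the build_keyboard helper are assumptions made for the example.

```python
# Hedged sketch: optional rows and a suggestion bar are included based on
# context or explicit user input, above the standard keys.
def build_keyboard(show_number_row: bool, show_suggestions: bool) -> list:
    rows = []
    if show_suggestions:
        rows.append("suggestion_bar")
    if show_number_row:
        rows.append("number_row")           # a selectable row of virtual keys
    rows += ["qwerty_rows", "control_row"]  # standard character/control keys
    return rows

print(build_keyboard(show_number_row=True, show_suggestions=False))
# ['number_row', 'qwerty_rows', 'control_row']
```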
Abstract:
Control of a computing device using gesture inputs. The computing device may be a handheld computing device with a plurality of displays. The displays may be capable of displaying a graphical user interface (GUI). The plurality of displays may be modified in response to receipt of a gesture input such that a hierarchical application having related GUI screens is modified. The modification may include changing the hierarchical application from being displayed in a single-screen mode to being displayed in a multi-screen mode, or vice versa.
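A short sketch of the mode change, assuming an invented HierarchicalApp class and gesture names: one gesture expands the application's related GUI screens across multiple displays, another collapses it back to a single display.

```python
# Illustrative only; the gesture vocabulary and class are assumptions.
class HierarchicalApp:
    def __init__(self):
        self.mode = "single"                          # one GUI screen visible
        self.screens = ["list_view", "detail_view"]   # related GUI screens

    def on_gesture(self, gesture: str):
        if gesture == "spread":   # expand: show related screens side by side
            self.mode = "multi"
        elif gesture == "pinch":  # collapse back to a single display
            self.mode = "single"

    def visible(self) -> list:
        return self.screens if self.mode == "multi" else self.screens[:1]

app = HierarchicalApp()
app.on_gesture("spread")
print(app.visible())  # ['list_view', 'detail_view']
```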