Abstract:
A wireless communication device may wirelessly control an object, such as a physical device, directly or through interaction with a virtual representation (or placeholder) of the object situated at a predefined physical location. In particular, the wireless communication device may identify an intent gesture performed by a user that indicates intent to control the object. For example, the intent gesture may involve pointing or orienting the wireless communication device toward the object, with or without additional input. Then, the wireless communication device may determine the object associated with the intent gesture using wireless ranging and/or device orientation. Moreover, the wireless communication device may interpret sensor data from one or more sensors associated with the wireless communication device to determine an action gesture corresponding to a command or a command value. The wireless communication device may then transmit the command value to control the object.
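The flow this abstract describes can be sketched in Swift as follows; every type, name, and matching rule below is a hypothetical illustration, not anything the abstract specifies.

```swift
import Foundation

// Hypothetical model of the abstract's flow: intent gesture -> target
// resolution (wireless ranging + device orientation) -> action gesture
// -> command transmission. Nothing here is a real radio or sensor API.
struct ControllableObject {
    let id: String
    let location: (x: Double, y: Double)   // relative to the device
}

enum ActionGesture { case flickUp, flickDown }

// Score how well an object matches a measured range and heading;
// lower is better. The metric is an illustrative stand-in.
func mismatch(_ obj: ControllableObject, range: Double, heading: Double) -> Double {
    let dist = (obj.location.x * obj.location.x + obj.location.y * obj.location.y).squareRoot()
    let bearing = atan2(obj.location.y, obj.location.x)
    return abs(dist - range) + abs(bearing - heading)
}

// Resolve which object the intent gesture points at.
func resolveTarget(_ objects: [ControllableObject],
                   measuredRange: Double, heading: Double) -> ControllableObject? {
    objects.min { mismatch($0, range: measuredRange, heading: heading) <
                  mismatch($1, range: measuredRange, heading: heading) }
}

// Map an action gesture to a command value and "transmit" it.
func transmit(_ gesture: ActionGesture, to target: ControllableObject) {
    let value = (gesture == .flickUp) ? 1 : 0   // e.g. on/off
    print("sending command value \(value) to \(target.id)")
}

let lamp = ControllableObject(id: "lamp", location: (x: 2.0, y: 0.0))
let tv = ControllableObject(id: "tv", location: (x: 0.0, y: 3.0))
if let target = resolveTarget([lamp, tv], measuredRange: 2.1, heading: 0.05) {
    transmit(.flickUp, to: target)   // sending command value 1 to lamp
}
```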
Abstract:
An example method of indicating to a user that a biometric input has been authenticated is disclosed. The method is performed at a computing system comprising a processor, memory, a first housing that includes a primary display, and a second housing containing a physical keyboard, a touch-sensitive secondary display, and a biometric input device. The method includes displaying, at the primary display, a web page that identifies items for purchase, and detecting a selection of an affordance displayed on the web page. In response, the method includes displaying, on the touch-sensitive secondary display, an alert prompting the user to provide a biometric input to the biometric input device, and detecting a biometric input on the biometric input device that is in the second housing. In response to detecting the biometric input, and in accordance with a determination that the biometric input has been authenticated, the method includes displaying, on the primary display, an indication that purchase of the items has been validated.
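A minimal sketch of the purchase-validation flow follows; the display names and the trivial stand-in matcher are assumptions, since the abstract specifies no API.

```swift
// State flow: prompt on the secondary display, biometric input in the
// second housing, validation indicated on the primary display. The
// string-comparison "matcher" is a stand-in, not a real biometric check.
enum Display { case primary, secondary }

func show(_ message: String, on display: Display) {
    print("[\(display)] \(message)")
}

func authenticate(_ sample: String) -> Bool {
    sample == "enrolled-sample"   // hypothetical stand-in matcher
}

func purchaseFlow(biometricSample: String) {
    show("Web page listing items for purchase", on: .primary)
    // ...user selects the purchase affordance on the web page...
    show("Provide a biometric input to continue", on: .secondary)
    if authenticate(biometricSample) {
        show("Purchase of the items has been validated", on: .primary)
    } else {
        show("Authentication failed", on: .secondary)
    }
}

purchaseFlow(biometricSample: "enrolled-sample")
```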
Abstract:
A device detects a first press input followed by a second press input on the button. The device provides a first non-visual output in response to the first press input and before the second press input. Depending on the amount of time lag between the first press input and the second press input, the device provides either a second non-visual output in conjunction with performing a first operation or a third non-visual output in conjunction with performing a second operation, where the second non-visual output and the third non-visual output have different output profiles.
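The timing dependence reads as a simple dispatch on press lag; in this sketch the 0.3-second threshold is an illustrative assumption, since the abstract does not quantify the lag.

```swift
import Foundation

enum NonVisualOutput { case first, second, third }

let lagThreshold: TimeInterval = 0.3   // hypothetical; the abstract gives no value

// The first press always yields the first non-visual output; the lag to
// the second press selects between the two operation/output pairings.
func handlePressPair(firstAt t1: TimeInterval, secondAt t2: TimeInterval) {
    print("output \(NonVisualOutput.first) after first press")
    if t2 - t1 < lagThreshold {
        print("output \(NonVisualOutput.second) while performing first operation")
    } else {
        print("output \(NonVisualOutput.third) while performing second operation")
    }
}

handlePressPair(firstAt: 0.0, secondAt: 0.2)   // second output, first operation
handlePressPair(firstAt: 0.0, secondAt: 0.6)   // third output, second operation
```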
Abstract:
A method is performed at a computing system with a first housing that includes a primary display and a second housing at least partially containing (i) a physical keyboard and (ii) a touch-sensitive secondary display (“TSSD”) that is distinct from the primary display. The method includes: displaying, on the primary display, a first user interface for an application and displaying, on the TSSD, a first set of affordances corresponding to a first portion of the application. The method further includes: detecting a swipe gesture on the TSSD. If the swipe gesture was performed in a first direction, the method includes: displaying, on the TSSD, a second set of affordances corresponding to the first portion. If the swipe gesture was performed in a second direction substantially perpendicular to the first direction, the method includes: displaying, on the TSSD, a third set of affordances corresponding to a second portion of the application.
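A compact sketch of the direction-dependent update follows; the direction names and the portion numbering are hypothetical stand-ins.

```swift
enum SwipeDirection { case alongKeyboard, acrossKeyboard }   // substantially perpendicular

// Swipes along one axis page within the current portion's affordances;
// perpendicular swipes move to a different portion of the application.
func affordancesAfterSwipe(_ direction: SwipeDirection,
                           currentPortion: Int) -> (portion: Int, set: String) {
    switch direction {
    case .alongKeyboard:
        return (currentPortion, "second set of affordances")
    case .acrossKeyboard:
        return (currentPortion + 1, "third set of affordances")
    }
}

print(affordancesAfterSwipe(.alongKeyboard, currentPortion: 1))  // (portion: 1, ...)
print(affordancesAfterSwipe(.acrossKeyboard, currentPortion: 1)) // (portion: 2, ...)
```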
Abstract:
Disclosed herein are systems, devices, and methods for dynamically updating a touch-sensitive secondary display. An example method includes receiving a request to open an application and, in response, (i) displaying, on a primary display, a plurality of user interface (UI) objects associated with the application, the plurality including a first UI object displayed with associated content and other UI objects displayed without associated content; and (ii) displaying, on the touch-sensitive secondary display, a set of affordances representing the plurality of UI objects. The method also includes: detecting, via the touch-sensitive secondary display, a swipe gesture in a direction from a first affordance and towards a second affordance, the first affordance representing the first UI object and the second affordance representing a distinct second UI object. In response to detecting the swipe gesture, the method includes: updating the primary display to display associated content for the second UI object.
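One way to model the swipe-driven selection, with hypothetical names throughout:

```swift
struct UIObject {
    let name: String
    let content: String
}

let objects = [
    UIObject(name: "Message 1", content: "Hello..."),
    UIObject(name: "Message 2", content: "Meeting at noon..."),
]
var selectedIndex = 0   // the first UI object starts with its content shown

// A swipe from the affordance at `from` toward the affordance at `to`
// changes which UI object's content the primary display shows.
func handleSecondarySwipe(from: Int, to: Int) {
    guard objects.indices.contains(to) else { return }
    selectedIndex = to
    print("primary display now shows: \(objects[selectedIndex].content)")
}

handleSecondarySwipe(from: 0, to: 1)   // primary display now shows: Meeting at noon...
```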
Abstract:
An electronic device detects a first increase in a characteristic intensity of a contact on a touch-sensitive surface, and in response the device produces a first tactile output that has a first tactile output profile. The first tactile output profile includes an output parameter that varies in accordance with a proximity of the characteristic intensity of the contact to meeting first intensity criteria. While producing the tactile output that has the first tactile output profile, the device detects a second increase in the characteristic intensity of the contact. In response to detecting the second increase in the characteristic intensity, in accordance with a determination that the characteristic intensity meets the first intensity criteria, the device produces a second tactile output that has a second tactile output profile that is different from the first tactile output profile.
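A sketch of the proximity-dependent output parameter follows; the threshold value and the choice of amplitude as the varying parameter are assumptions, not details from the abstract.

```swift
import Foundation

let firstIntensityThreshold = 1.0   // hypothetical threshold

// Below the threshold, a parameter of the first profile (here, amplitude)
// scales with how close the intensity is to meeting the criteria; once the
// criteria are met, a distinct second profile is produced instead.
func tactileOutput(forIntensity intensity: Double) -> String {
    if intensity >= firstIntensityThreshold {
        return "second profile (criteria met)"
    }
    let amplitude = intensity / firstIntensityThreshold
    return String(format: "first profile, amplitude %.2f", amplitude)
}

print(tactileOutput(forIntensity: 0.4))   // first profile, amplitude 0.40
print(tactileOutput(forIntensity: 0.9))   // first profile, amplitude 0.90
print(tactileOutput(forIntensity: 1.1))   // second profile (criteria met)
```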
Abstract:
In some embodiments, a multifunction device with a display and a touch-sensitive surface creates a plurality of workspace views. A respective workspace view is configured to contain content assigned by a user to the respective workspace view. The content includes application windows. The device displays a first workspace view in the plurality of workspace views on the display without displaying other workspace views in the plurality of workspace views and detects a first multifinger gesture on the touch-sensitive surface. In response to detecting the first multifinger gesture on the touch-sensitive surface, the device replaces display of the first workspace view with concurrent display of the plurality of workspace views.
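The replacement of a single workspace view with a concurrent display of all workspace views might be modeled as follows (hypothetical names):

```swift
struct WorkspaceView {
    let name: String
    var windows: [String]   // application windows assigned by the user
}

let workspaces = [
    WorkspaceView(name: "Work", windows: ["Mail", "Editor"]),
    WorkspaceView(name: "Home", windows: ["Browser"]),
]
var visible = [0]   // only the first workspace view is displayed

// The multifinger gesture replaces the single visible workspace view
// with a concurrent display of all of them.
func handleMultifingerGesture() {
    visible = Array(workspaces.indices)
    print("displaying: \(visible.map { workspaces[$0].name })")
}

handleMultifingerGesture()   // displaying: ["Work", "Home"]
```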
Abstract:
In any context where a user can view multiple different content items, switching among content items is provided using an array mode. In a full-frame mode, one content item is visible and active, but other content items may also be open. In response to user input the display can be switched to an array mode, in which all of the content items are visible in a scrollable array. Selecting a content item in array mode can result in the display returning to the full-frame mode, with the selected content item becoming visible and active. Smoothly animated transitions between the full-frame and array modes and a gesture-based interface for controlling the transitions can also be provided.
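A minimal state machine for the two modes, with hypothetical names; the animated transitions are omitted:

```swift
enum DisplayMode {
    case fullFrame(active: Int)   // one item visible and active
    case array                    // all open items visible, scrollable
}

let openItems = ["Page A", "Page B", "Page C"]
var mode = DisplayMode.fullFrame(active: 0)

func enterArrayMode() { mode = .array }

// Selecting an item while in array mode returns to full-frame mode
// with that item visible and active.
func select(item index: Int) {
    if case .array = mode { mode = .fullFrame(active: index) }
}

enterArrayMode()
select(item: 2)
if case .fullFrame(let active) = mode {
    print("active item: \(openItems[active])")   // active item: Page C
}
```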
Abstract:
An electronic device includes a touch-sensitive surface and a display. The device displays, on the display, a first user interface. The device detects a gesture on the touch-sensitive surface. The gesture includes movement of a contact in a respective direction on the touch-sensitive surface. In response to detecting the gesture: in accordance with a determination that the movement of the contact is entirely on a first portion of the touch-sensitive surface, the device performs an operation in the first user interface that corresponds to the gesture; and in accordance with a determination that the movement of the contact is entirely on a second portion of the touch-sensitive surface, the device replaces display of the first user interface with display of a second user interface different from the first user interface.
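The region-dependent dispatch can be sketched as follows; the split point along one axis is an illustrative assumption:

```swift
enum GestureResult { case operationInFirstUI, replaceWithSecondUI }

let secondPortionStartsAtY = 0.9   // hypothetical split, e.g. a strip along one edge

// Dispatch on where the contact's movement occurred: entirely in the
// first portion performs an operation in the current user interface;
// entirely in the second portion replaces it with a second interface.
func handleGesture(pathYs: [Double]) -> GestureResult? {
    if pathYs.allSatisfy({ $0 < secondPortionStartsAtY }) {
        return .operationInFirstUI
    }
    if pathYs.allSatisfy({ $0 >= secondPortionStartsAtY }) {
        return .replaceWithSecondUI
    }
    return nil   // spans both portions; the abstract does not address this case
}

print(String(describing: handleGesture(pathYs: [0.3, 0.4, 0.5])))    // Optional(operationInFirstUI)
print(String(describing: handleGesture(pathYs: [0.92, 0.95, 0.98]))) // Optional(replaceWithSecondUI)
```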