Abstract:
In some embodiments, a computer system presents navigation directions from a first location to a second location. In some embodiments, if the current location of the computer system is different from the first location, the computer system presents audio and/or tactile feedback indicating distance and/or direction to the first location.
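Below is a minimal Swift sketch, not taken from the abstract above, of how distance and direction back to the first location might be computed before an audio or tactile cue is produced; the Coordinate type, the haversine and bearing helpers, the 10-meter threshold, and the sample coordinates are illustrative assumptions.

```swift
import Foundation

// Illustrative sketch: compute distance and heading from the device's current
// location back to the route's first location, then build a cue that could be
// spoken or rendered as haptics. Names and thresholds here are assumptions.
struct Coordinate {
    var latitude: Double   // degrees
    var longitude: Double  // degrees
}

// Haversine great-circle distance in meters.
func distance(from a: Coordinate, to b: Coordinate) -> Double {
    let r = 6_371_000.0
    let dLat = (b.latitude - a.latitude) * .pi / 180
    let dLon = (b.longitude - a.longitude) * .pi / 180
    let lat1 = a.latitude * .pi / 180
    let lat2 = b.latitude * .pi / 180
    let h = sin(dLat / 2) * sin(dLat / 2) + cos(lat1) * cos(lat2) * sin(dLon / 2) * sin(dLon / 2)
    return 2 * r * asin(sqrt(h))
}

// Initial bearing from a to b, in degrees clockwise from north.
func bearing(from a: Coordinate, to b: Coordinate) -> Double {
    let lat1 = a.latitude * .pi / 180
    let lat2 = b.latitude * .pi / 180
    let dLon = (b.longitude - a.longitude) * .pi / 180
    let y = sin(dLon) * cos(lat2)
    let x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dLon)
    return (atan2(y, x) * 180 / .pi + 360).truncatingRemainder(dividingBy: 360)
}

// If the current location differs from the first location, describe how to reach it.
func feedbackTowardFirstLocation(current: Coordinate, firstLocation: Coordinate) -> String? {
    let d = distance(from: current, to: firstLocation)
    guard d > 10 else { return nil }   // close enough: no extra feedback (assumed threshold)
    let heading = bearing(from: current, to: firstLocation)
    return String(format: "Route start is %.0f meters away, heading %.0f degrees.", d, heading)
}

let cue = feedbackTowardFirstLocation(
    current: Coordinate(latitude: 37.3349, longitude: -122.0090),
    firstLocation: Coordinate(latitude: 37.3318, longitude: -122.0312)
)
print(cue ?? "Already at the first location of the route.")
```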
Abstract:
A device implementing a system for machine-learning-based gesture recognition includes at least one processor configured to receive, from a first sensor of the device, first sensor output of a first type, and receive, from a second sensor of the device, second sensor output of a second type that differs from the first type. The at least one processor is further configured to provide the first sensor output and the second sensor output as inputs to a machine learning model, the machine learning model having been trained to output a predicted gesture based on sensor output of the first type and sensor output of the second type. The at least one processor is further configured to determine the predicted gesture based on an output from the machine learning model, and to perform, in response to determining the predicted gesture, a predetermined action on the device.
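As a rough illustration of feeding two different sensor output types into a single trained model, here is a minimal Swift sketch; the AccelerometerSample and PPGSample types, the summary features, the LinearStub stand-in for the trained model, and the gesture-to-action mapping are all assumptions, not the claimed implementation.

```swift
import Foundation

// Illustrative sketch: fuse two different sensor modalities into one feature
// vector, run it through a stand-in classifier, and act on the predicted gesture.
enum Gesture: Int {
    case none, pinch, doubleTap, wristFlick
}

struct AccelerometerSample { var x: Double; var y: Double; var z: Double }   // first sensor output type
struct PPGSample { var intensity: Double }                                   // second, different output type

protocol GestureModel {
    // Returns one score per gesture; higher means more likely.
    func predictScores(features: [Double]) -> [Double]
}

// A fixed linear layer stands in for whatever trained model the device would use.
struct LinearStub: GestureModel {
    let weights: [[Double]]   // [gesture][feature]
    func predictScores(features: [Double]) -> [Double] {
        weights.map { row in zip(row, features).reduce(0) { $0 + $1.0 * $1.1 } }
    }
}

// Combine both sensor outputs into a single feature vector.
func fuse(_ accel: [AccelerometerSample], _ ppg: [PPGSample]) -> [Double] {
    let magnitudes = accel.map { sqrt($0.x * $0.x + $0.y * $0.y + $0.z * $0.z) }
    let meanMagnitude = magnitudes.reduce(0, +) / Double(max(magnitudes.count, 1))
    let meanIntensity = ppg.map(\.intensity).reduce(0, +) / Double(max(ppg.count, 1))
    return [meanMagnitude, meanIntensity]
}

// Determine the predicted gesture from the model's output.
func predictedGesture(model: GestureModel, accel: [AccelerometerSample], ppg: [PPGSample]) -> Gesture {
    let scores = model.predictScores(features: fuse(accel, ppg))
    guard let best = scores.enumerated().max(by: { $0.element < $1.element })?.offset else { return .none }
    return Gesture(rawValue: best) ?? .none
}

// Perform a predetermined action for the recognized gesture.
func perform(_ gesture: Gesture) {
    switch gesture {
    case .pinch:      print("Select the focused item")
    case .doubleTap:  print("Activate the focused item")
    case .wristFlick: print("Dismiss the current notification")
    case .none:       break
    }
}

let model = LinearStub(weights: [[0, 0], [1.0, 0.2], [0.3, 1.1], [0.8, 0.8]])
let accel = [AccelerometerSample(x: 0.1, y: 0.9, z: 0.2)]
let ppg = [PPGSample(intensity: 0.7)]
perform(predictedGesture(model: model, accel: accel, ppg: ppg))
```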
Abstract:
An electronic device, while in an interaction configuration mode for a first application, concurrently displays: a first user interface, one or more interaction control user interface objects, and an application restriction controls display user interface object for the first application. The device detects a first gesture and, in response, displays application restriction control user interface objects for the first application. A respective application restriction control user interface object indicates whether a corresponding feature of the first application is configured to be enabled in a restricted interaction mode. The device detects a second gesture and, in response, changes the setting displayed in a first application restriction control user interface object for the first application. The device then detects a second input and, in response, enters the restricted interaction mode for the first application, in which the corresponding feature is restricted in accordance with the setting in the first application restriction control user interface object.
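A minimal Swift sketch of per-feature restriction settings that are toggled during configuration and then applied once the restricted interaction mode is entered; the AppFeature cases and the RestrictedInteractionConfig type are hypothetical names, not the claimed user interface.

```swift
// Illustrative sketch of per-feature restriction settings for one application,
// toggled while configuring and then applied once the restricted mode is entered.
enum AppFeature: CaseIterable {
    case inAppPurchases, externalLinks, motionContent, keyboardInput
}

struct RestrictedInteractionConfig {
    // true = the feature stays enabled while the restricted interaction mode is active.
    private(set) var enabledInRestrictedMode: [AppFeature: Bool] =
        Dictionary(uniqueKeysWithValues: AppFeature.allCases.map { ($0, true) })
    var isRestrictedModeActive = false

    // Corresponds to a gesture on an application restriction control object.
    mutating func toggle(_ feature: AppFeature) {
        guard !isRestrictedModeActive else { return }   // only configurable beforehand
        enabledInRestrictedMode[feature]?.toggle()
    }

    mutating func enterRestrictedMode() { isRestrictedModeActive = true }

    // A feature is available unless restricted mode is active and it was switched off.
    func isAvailable(_ feature: AppFeature) -> Bool {
        !isRestrictedModeActive || (enabledInRestrictedMode[feature] ?? false)
    }
}

var config = RestrictedInteractionConfig()
config.toggle(.inAppPurchases)              // second gesture: restrict this feature
config.enterRestrictedMode()                // second input: enter the restricted mode
print(config.isAvailable(.inAppPurchases))  // false: restricted per the setting
print(config.isAvailable(.externalLinks))   // true: left enabled
```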
Abstract:
Methods and systems are provided for diagnosing inadvertent activation of user interface settings on an electronic device. The electronic device receives a user input indicating that the user is having difficulty operating the electronic device. The device then determines whether a setting was changed on the device within a predetermined time period prior to receiving the user input. When a first setting was changed within the predetermined time period prior to receiving the user input, the device restores the changed setting to a prior setting.
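A minimal Swift sketch of the described diagnosis flow; the SettingsDiagnostics type, the string-valued settings, and the five-minute lookback window are illustrative assumptions.

```swift
import Foundation

// Illustrative sketch: track recent settings changes, and when the user reports
// difficulty, revert anything changed within a lookback window.
struct SettingChange {
    let key: String
    let previousValue: String
    let newValue: String
    let changedAt: Date
}

struct SettingsDiagnostics {
    var settings: [String: String]
    private var history: [SettingChange] = []

    init(settings: [String: String]) { self.settings = settings }

    // Record and apply a settings change (only keys that already exist are tracked).
    mutating func change(_ key: String, to newValue: String, at date: Date = Date()) {
        guard let old = settings[key], old != newValue else { return }
        history.append(SettingChange(key: key, previousValue: old, newValue: newValue, changedAt: date))
        settings[key] = newValue
    }

    // Called when the user indicates difficulty operating the device. Restores every
    // setting changed within `window` seconds before the report and returns their keys.
    mutating func handleDifficultyReport(reportedAt: Date = Date(), window: TimeInterval = 300) -> [String] {
        let recent = history.filter {
            let age = reportedAt.timeIntervalSince($0.changedAt)
            return age >= 0 && age <= window
        }
        for change in recent.reversed() {                // undo the newest change first
            settings[change.key] = change.previousValue
        }
        return recent.map(\.key)
    }
}

var diagnostics = SettingsDiagnostics(settings: ["zoom": "off", "voiceOver": "off"])
diagnostics.change("zoom", to: "on")                     // possibly inadvertent
let restored = diagnostics.handleDifficultyReport()
print("Restored settings:", restored)                    // ["zoom"]
print(diagnostics.settings["zoom"] ?? "missing")         // "off"
```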
Abstract:
An electronic device with a display and a touch-sensitive surface displays, on the display, a first visual indicator that corresponds to a virtual touch. The device receives a first input from an adaptive input device. In response to receiving the first input from the adaptive input device, the device displays a first menu on the display. The first menu includes a virtual touches selection icon. In response to detecting selection of the virtual touches selection icon, the device displays a menu of virtual multitouch contacts.
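A minimal Swift sketch of the menu flow driven by an adaptive input device; the menu item names, the contact counts, and the AssistiveMenuController type are hypothetical.

```swift
// Illustrative sketch of the menu flow driven by an adaptive input device:
// a first input opens a first menu, and selecting the "Virtual Touches" icon
// opens a menu of virtual multitouch contact counts.
enum Screen {
    case visualIndicatorOnly
    case firstMenu(items: [String])
    case virtualMultitouchMenu(contactCounts: [Int])
}

struct AssistiveMenuController {
    private(set) var screen: Screen = .visualIndicatorOnly

    // Input received from the adaptive input device (for example, a single switch).
    mutating func receiveAdaptiveInput() {
        if case .visualIndicatorOnly = screen {
            screen = .firstMenu(items: ["Virtual Touches", "Device", "Home"])
        }
    }

    mutating func select(item: String) {
        guard case .firstMenu = screen, item == "Virtual Touches" else { return }
        screen = .virtualMultitouchMenu(contactCounts: [2, 3, 4, 5])
    }
}

var controller = AssistiveMenuController()
controller.receiveAdaptiveInput()           // first input: display the first menu
controller.select(item: "Virtual Touches")  // display the virtual multitouch contacts menu
print(controller.screen)
```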
Abstract:
The present disclosure generally relates to exploring a geographic region that is displayed in computer user interfaces. In some embodiments, a method includes at an electronic device with a display and one or more input devices, displaying a map of a geographic region on the display and detecting a first user input to select a starting location on the map. After detecting the first user input, the method includes detecting a second user input to select a first direction of navigation from the starting location. In response to detecting the second user input, the method includes determining a path on the map that traverses in the first direction of navigation and connects the starting location to an ending location, and providing audio that includes traversal information about traversing along the path in the geographic region in the first direction of navigation and from the starting location to the ending location.
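A minimal Swift sketch of selecting a direction of navigation from a starting location, determining a path to an ending location, and producing text that could be spoken as traversal information; the MapPoint type, the nearest-point rule, and the sample region are assumptions for illustration.

```swift
// Illustrative sketch: from a selected starting location and direction, find a path
// to an ending location and produce text that could be spoken as traversal information.
enum TravelDirection { case north, south, east, west }

struct MapPoint {
    let name: String
    let x: Double   // meters east of a local origin
    let y: Double   // meters north of a local origin
}

func distance(_ a: MapPoint, _ b: MapPoint) -> Double {
    ((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y)).squareRoot()
}

// Return the path from `start` to the nearest point lying in `direction`,
// or nil if nothing on the map lies that way.
func path(from start: MapPoint, heading direction: TravelDirection, on map: [MapPoint]) -> [MapPoint]? {
    let candidates = map.filter { point in
        switch direction {
        case .north: return point.y > start.y
        case .south: return point.y < start.y
        case .east:  return point.x > start.x
        case .west:  return point.x < start.x
        }
    }
    guard let end = candidates.min(by: { distance(start, $0) < distance(start, $1) }) else { return nil }
    return [start, end]
}

// Text that a speech synthesizer could read aloud as traversal information.
func traversalDescription(for route: [MapPoint]) -> String {
    guard route.count >= 2, let start = route.first, let end = route.last else {
        return "No path in that direction."
    }
    return "From \(start.name), travel \(Int(distance(start, end).rounded())) meters to \(end.name)."
}

let region = [
    MapPoint(name: "Main St & 1st Ave", x: 0, y: 0),
    MapPoint(name: "Main St & 2nd Ave", x: 0, y: 120),
    MapPoint(name: "Oak St & 1st Ave", x: 150, y: 0),
]
if let route = path(from: region[0], heading: .north, on: region) {
    print(traversalDescription(for: route))   // From Main St & 1st Ave, travel 120 meters to Main St & 2nd Ave.
}
```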
Abstract:
The present disclosure generally relates to providing time feedback on an electronic device, and in particular to providing non-visual time feedback on the electronic device. Techniques for providing non-visual time feedback include detecting an input and, in response to detecting the input, initiating output of either a first type or a second type of non-visual indication of the current time, based on which set of non-visual time output criteria the input meets. Techniques for providing non-visual time feedback also include, in response to detecting that the current time has reached a first predetermined time of a set of one or more predetermined times, outputting a first non-visual alert or a second non-visual alert based on the type of watch face that the electronic device is configured to display.
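A minimal Swift sketch of the two behaviors described above; the specific inputs (a two-finger double tap and a cover gesture), the watch face types, and the alert styles are invented for illustration and are not taken from the disclosure.

```swift
// Illustrative sketch: choose between two kinds of non-visual time output based on
// which criteria the input meets, and pick an alert style from the watch face type.
enum NonVisualTimeOutput { case spokenTime, hapticTaps }
enum TimeInput { case twoFingerDoubleTap, coverGesture }
enum WatchFaceType { case analog, digital }

// First set of criteria (here, a two-finger double tap) selects spoken time;
// second set of criteria (covering the display) selects haptic taps.
func timeOutput(for input: TimeInput) -> NonVisualTimeOutput {
    switch input {
    case .twoFingerDoubleTap: return .spokenTime
    case .coverGesture:       return .hapticTaps
    }
}

// When the current time reaches a predetermined time (for example, the top of the
// hour), the alert style follows the watch face the device is configured to display.
func alertAtPredeterminedTime(for face: WatchFaceType) -> String {
    switch face {
    case .analog:  return "Play a chime tone"            // first non-visual alert
    case .digital: return "Play a short haptic pattern"  // second non-visual alert
    }
}

print(timeOutput(for: .coverGesture))         // hapticTaps
print(alertAtPredeterminedTime(for: .analog)) // Play a chime tone
```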
Abstract:
While an electronic device with a display and a touch-sensitive surface is in a screen reader accessibility mode, the device displays a character input area and a keyboard, the keyboard including a plurality of key icons. The device detects a sequence of one or more gestures on the touch-sensitive surface that correspond to one or more characters. A respective gesture of the one or more gestures that corresponds to a respective character is a single finger gesture that moves across the touch-sensitive surface along a respective path that corresponds to the respective character. The respective path traverses one or more locations on the touch-sensitive surface that correspond to one or more key icons of the plurality of key icons without activating the one or more key icons. In response to detecting the respective gesture, the device enters the corresponding respective character in the character input area of the display.
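A minimal Swift sketch of shape-based character entry that ignores the key icons beneath the path; the coarse stroke-direction matching and the tiny template table are illustrative stand-ins for a real handwriting recognizer.

```swift
// Illustrative sketch: in a screen-reader mode, a single-finger path is matched to a
// character by its shape (here, a coarse stroke-direction sequence) rather than by the
// key icons it crosses, which are never activated.
struct TouchPoint { var x: Double; var y: Double }

enum Stroke: Hashable { case up, down, left, right }

// Reduce a path to its sequence of dominant stroke directions (screen coordinates:
// y grows downward).
func strokes(for path: [TouchPoint]) -> [Stroke] {
    var result: [Stroke] = []
    for (a, b) in zip(path, path.dropFirst()) {
        let dx = b.x - a.x
        let dy = b.y - a.y
        let stroke: Stroke = abs(dx) > abs(dy) ? (dx > 0 ? .right : .left)
                                               : (dy > 0 ? .down : .up)
        if result.last != stroke { result.append(stroke) }
    }
    return result
}

// A tiny template table; a real recognizer would use proper handwriting models.
let templates: [[Stroke]: Character] = [
    [.down, .right]: "L",
    [.left, .down, .right]: "C",
    [.down]: "I",
]

// In screen-reader mode, the path is interpreted as a character and the key icons
// underneath it are ignored; outside that mode, the keys would handle the touch.
func character(for path: [TouchPoint], screenReaderActive: Bool) -> Character? {
    guard screenReaderActive else { return nil }
    return templates[strokes(for: path)]
}

let lPath = [TouchPoint(x: 0, y: 0), TouchPoint(x: 0, y: 50), TouchPoint(x: 40, y: 50)]
if let entered = character(for: lPath, screenReaderActive: true) {
    print("Entered character: \(entered)")   // Entered character: L
}
```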