Abstract:
While an electronic device with a display and a touch-sensitive surface is in a screen reader accessibility mode, the device displays a character input area and a keyboard, the keyboard including a plurality of key icons. The device detects a sequence of one or more gestures on the touch-sensitive surface that correspond to one or more characters. A respective gesture of the one or more gestures that corresponds to a respective character is a single finger gesture that moves across the touch-sensitive surface along a respective path that corresponds to the respective character. The respective path traverses one or more locations on the touch-sensitive surface that correspond to one or more key icons of the plurality of key icons without activating the one or more key icons. In response to detecting the respective gesture, the device enters the corresponding respective character in the character input area of the display.
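The behavior described above can be illustrated with a minimal Swift sketch. All names here (HandwritingKeyboardController, PathCharacterClassifier, fingerMoved, fingerLifted) are hypothetical and are not the patented implementation; the sketch only shows the general idea of recording a single-finger path, classifying it as a character, and entering that character without activating the key icons the path crosses.

```swift
import CoreGraphics

/// Hypothetical recognizer that maps a traced path to a character.
protocol PathCharacterClassifier {
    func character(for path: [CGPoint]) -> Character?
}

final class HandwritingKeyboardController {
    var isScreenReaderAccessibilityModeOn = true
    private(set) var characterInputArea = ""      // stands in for the text field on the display
    private var currentPath: [CGPoint] = []
    private let classifier: PathCharacterClassifier

    init(classifier: PathCharacterClassifier) {
        self.classifier = classifier
    }

    // Called as the single-finger gesture moves across the touch-sensitive surface.
    func fingerMoved(to point: CGPoint) {
        guard isScreenReaderAccessibilityModeOn else { return }
        // The path is only recorded; no key icon under `point` is activated.
        currentPath.append(point)
    }

    // Called when the finger lifts; the whole path is interpreted as one character.
    func fingerLifted() {
        defer { currentPath.removeAll() }
        guard isScreenReaderAccessibilityModeOn,
              let character = classifier.character(for: currentPath) else { return }
        // Enter the corresponding character in the character input area.
        characterInputArea.append(character)
    }
}
```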
Abstract:
While an electronic device with a display and a touch-sensitive surface is in a screen reader accessibility mode, the device displays an application launcher screen including a plurality of application icons. A respective application icon corresponds to a respective application stored in the device. The device detects a sequence of one or more gestures on the touch-sensitive surface that correspond to one or more characters. A respective gesture that corresponds to a respective character is a single finger gesture that moves across the touch-sensitive surface along a respective path that corresponds to the respective character. The device determines whether the detected sequence of one or more gestures corresponds to a respective application icon of the plurality of application icons, and, in response to determining that the detected sequence of one or more gestures corresponds to the respective application icon, performs a predefined operation associated with the respective application icon.
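A minimal sketch of the matching step, under the assumption that each drawn gesture has already been classified as a character. The types and names (ApplicationIcon, GestureLauncher, didRecognize) are illustrative only; the predefined operation is shown here as launching the matched application.

```swift
import Foundation

struct ApplicationIcon {
    let name: String
    let launch: () -> Void
}

final class GestureLauncher {
    private let icons: [ApplicationIcon]
    private var enteredCharacters = ""

    init(icons: [ApplicationIcon]) { self.icons = icons }

    /// Called each time a single-finger path has been classified as a character.
    func didRecognize(_ character: Character) {
        enteredCharacters.append(character)

        // Does the detected sequence correspond to exactly one application icon?
        let matches = icons.filter {
            $0.name.lowercased().hasPrefix(enteredCharacters.lowercased())
        }
        if matches.count == 1, let icon = matches.first {
            icon.launch()                  // predefined operation for the matched icon
            enteredCharacters.removeAll()
        }
    }
}

// Usage: drawing "m", "a", "p" uniquely selects "Maps" and launches it.
let launcher = GestureLauncher(icons: [
    ApplicationIcon(name: "Mail")  { print("Launching Mail") },
    ApplicationIcon(name: "Maps")  { print("Launching Maps") },
    ApplicationIcon(name: "Music") { print("Launching Music") },
])
launcher.didRecognize("m")
launcher.didRecognize("a")
launcher.didRecognize("p")
```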
Abstract:
The present disclosure generally relates to techniques and interfaces for generating synthesized speech outputs. For example, a user interface for a text-to-speech service can include ranked and/or categorized phrases, which can be selected to enter as text. A synthesized speech output is then generated to deliver any entered text, for example, using a personalized voice model.
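A minimal sketch of such a flow, assuming AVSpeechSynthesizer as the underlying engine. The quickPhrases dictionary is an illustrative stand-in for ranked/categorized phrases, and the personal-voice lookup via voiceTraits is an assumption based on the iOS 17+ API, not the method described in the disclosure.

```swift
import AVFoundation

// Categorized phrases a user might pick from the text-to-speech interface.
let quickPhrases: [String: [String]] = [
    "Greetings": ["Hello", "Good morning"],
    "Requests":  ["Could you repeat that?", "One moment, please"],
]

let synthesizer = AVSpeechSynthesizer()

func speak(_ text: String) {
    let utterance = AVSpeechUtterance(string: text)
    // Prefer a personal (user-trained) voice when one is available,
    // otherwise fall back to a default system voice.
    if let personalVoice = AVSpeechSynthesisVoice.speechVoices()
        .first(where: { $0.voiceTraits.contains(.isPersonalVoice) }) {
        utterance.voice = personalVoice
    } else {
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    }
    synthesizer.speak(utterance)
}

// A phrase selected from the "Requests" category is entered as text and spoken.
if let phrase = quickPhrases["Requests"]?.first {
    speak(phrase)
}
```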
Abstract:
The present disclosure generally relates to user interfaces and techniques for managing and visualizing sound reduction using a computer system. In accordance with some embodiments, the computer system displays an indication that a second noise level is less than a first noise level when the computer system is using a sound reduction device. In accordance with some embodiments, the computer system displays a representation of a noise level at a plurality of different times that indicates a first noise level when a sound reduction effect was in effect, and that indicates a second noise level when the sound reduction effect was not in effect. In accordance with some embodiments, the computer system displays a representation of a sound reduction level for a first time period and receives an input selecting a second time period different from the first time period, and in response to the input, the computer system displays a representation of the sound reduction level for the second time period.
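A minimal sketch of the underlying data handling, with illustrative types only (NoiseSample, NoiseHistory); it is not the disclosed user interface, but shows how per-time noise levels and a per-period sound reduction level could be derived for display.

```swift
import Foundation

struct NoiseSample {
    let date: Date
    let environmentalLevel: Double   // dB measured outside the sound reduction device
    let reducedLevel: Double         // dB reaching the ear while reduction is in effect
    let reductionWasActive: Bool
}

struct NoiseHistory {
    var samples: [NoiseSample] = []

    /// Level to show for each sample: the reduced level while the sound
    /// reduction effect was in effect, otherwise the environmental level.
    func displayedLevels(in period: ClosedRange<Date>) -> [Double] {
        samples
            .filter { period.contains($0.date) }
            .map { $0.reductionWasActive ? $0.reducedLevel : $0.environmentalLevel }
    }

    /// Average sound reduction (in dB) over a selected time period, e.g. the
    /// value redrawn when the user switches from one time period to another.
    func averageReduction(in period: ClosedRange<Date>) -> Double {
        let active = samples.filter { period.contains($0.date) && $0.reductionWasActive }
        guard !active.isEmpty else { return 0 }
        let total = active.reduce(0) { $0 + ($1.environmentalLevel - $1.reducedLevel) }
        return total / Double(active.count)
    }
}
```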