Abstract:
While a first view of a three-dimensional environment is visible, a computer system detects a first input meeting selection criteria. If, when the first input is detected, a user is directing attention to a first portion of the first view that has a spatial relationship to a viewport through which the three-dimensional environment is visible, the computer system displays a user interface object including affordances for accessing functions of the computer system; otherwise, the computer system forgoes displaying the user interface object. While a different view of the three-dimensional environment is visible, the computer system detects a second input meeting the selection criteria. If, when the second input is detected, the user is directing attention to a second portion of the different view that has the same spatial relationship to the viewport, the computer system displays the user interface object; otherwise, the computer system forgoes displaying the user interface object.
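The key point in this abstract is that the gaze test is viewport-relative rather than scene-relative, which is why the same check holds across different views of the environment. Below is a minimal sketch of that check in Swift; every name (GazeSample, isInTriggerRegion, the top-center region, the normalized coordinates) is an illustrative assumption, since the abstract specifies no concrete API or region.

    import Foundation

    struct GazeSample {
        // Gaze location in viewport coordinates, normalized to 0...1.
        var x: Double
        var y: Double
    }

    // The trigger region is fixed relative to the viewport, not the 3D scene,
    // so the same test applies no matter which view of the environment is shown.
    func isInTriggerRegion(_ gaze: GazeSample) -> Bool {
        // Assume the qualifying portion is the top-center strip of the viewport.
        return gaze.y < 0.1 && abs(gaze.x - 0.5) < 0.2
    }

    func handleSelectionInput(gaze: GazeSample, showControls: () -> Void) {
        if isInTriggerRegion(gaze) {
            showControls()   // display the UI object with its affordances
        }                    // otherwise, forgo displaying it
    }

    handleSelectionInput(gaze: GazeSample(x: 0.5, y: 0.05)) {
        print("Displaying affordances for system functions")
    }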
Abstract:
A device implementing a system for machine-learning-based gesture recognition includes at least one processor configured to receive, from a first sensor of the device, first sensor output of a first type, and to receive, from a second sensor of the device, second sensor output of a second type that differs from the first type. The at least one processor is further configured to provide the first sensor output and the second sensor output as inputs to a machine learning model, the machine learning model having been trained to output a predicted gesture based on sensor output of the first type and sensor output of the second type. The at least one processor is further configured to determine the predicted gesture based on an output from the machine learning model, and to perform, in response to determining the predicted gesture, a predetermined action on the device.
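A rough sketch of the two-modality inference flow this abstract describes: two differently typed sensor streams go into one trained model, the predicted gesture is taken from the model's output, and a predetermined action follows. The type names, the score dictionary, and the gesture-to-action mapping are assumptions for illustration; the abstract names no concrete sensors, model, or framework.

    import Foundation

    enum Gesture { case pinch, doubleTap, none }

    struct GestureModel {
        // Stand-in for a model trained on both sensor modalities; a real
        // implementation would evaluate learned weights here.
        func predict(motion: [Double], optical: [Double]) -> [Gesture: Double] {
            return [.pinch: 0.8, .doubleTap: 0.15, .none: 0.05]
        }
    }

    func recognizeAndAct(motion: [Double], optical: [Double], model: GestureModel) {
        let scores = model.predict(motion: motion, optical: optical)
        // Determine the predicted gesture from the model's output.
        guard let best = scores.max(by: { $0.value < $1.value })?.key else { return }
        switch best {
        case .pinch:     print("perform select action")     // predetermined actions,
        case .doubleTap: print("perform dismiss action")    // chosen here arbitrarily
        case .none:      break
        }
    }

    recognizeAndAct(motion: [0.1, 0.4], optical: [0.9, 0.2], model: GestureModel())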
Abstract:
The present disclosure generally relates to exploring a geographic region that is displayed in computer user interfaces. In some embodiments, a method includes, at an electronic device with a display and one or more input devices, displaying a map of a geographic region on the display and detecting a first user input to select a starting location on the map. After detecting the first user input, the method includes detecting a second user input to select a first direction of navigation from the starting location. In response to detecting the second user input, the method includes determining a path on the map that extends in the first direction of navigation and connects the starting location to an ending location, and providing audio that includes traversal information about traversing the path in the geographic region, in the first direction of navigation, from the starting location to the ending location.
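One way the direction-to-path step could look in code: select the map segments whose heading matches the requested direction of navigation, then narrate the traversal. The segment representation, the 30-degree tolerance, and the printed narration (standing in for audio output) are purely illustrative assumptions.

    import Foundation

    struct Segment { var name: String; var headingDegrees: Double; var lengthMeters: Double }

    // Keep the segments whose heading best matches the requested direction.
    func pathAlong(direction: Double, from segments: [Segment]) -> [Segment] {
        var path: [Segment] = []
        for seg in segments where abs(seg.headingDegrees - direction) < 30 {
            path.append(seg)
        }
        return path
    }

    func narrate(_ path: [Segment]) {
        for seg in path {
            // A real device would speak this via a text-to-speech engine.
            print("Continue on \(seg.name) for \(Int(seg.lengthMeters)) meters")
        }
    }

    let map = [Segment(name: "Main St", headingDegrees: 85, lengthMeters: 120),
               Segment(name: "Main St", headingDegrees: 92, lengthMeters: 200)]
    narrate(pathAlong(direction: 90, from: map))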
Abstract:
Methods for presenting symbolic expressions such as mathematical, scientific, or chemical expressions, formulas, or equations are performed by a computing device. One method includes: displaying a first portion of a symbolic expression within a first area of a display screen; while in a first state in which the first area is selected for aural presentation, aurally presenting first information related to the first portion of the symbolic expression; while in the first state, detecting particular user input; and in response to detecting the particular user input, performing the steps of: transitioning from the first state to a second state in which a second area of the display screen is selected for aural presentation; determining second information associated with a second portion of the symbolic expression that is displayed within the second area; and, in response to determining the second information, aurally presenting the second information.
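The described behavior is essentially a small state machine over display areas: the state is which area is selected for aural presentation, and the particular user input triggers a transition plus a new announcement. A minimal sketch under that reading follows; the types, the "next area" gesture, and the spoken strings are hypothetical.

    import Foundation

    struct ExpressionArea { var portion: String; var spokenDescription: String }

    final class ExpressionReader {
        private let areas: [ExpressionArea]
        private var selectedIndex = 0      // the state: which area is selected

        init(areas: [ExpressionArea]) { self.areas = areas }

        func presentCurrent() {
            // A real device would speak this description aloud.
            print(areas[selectedIndex].spokenDescription)
        }

        // Handle the "particular user input" by transitioning to the next area
        // and aurally presenting its associated information.
        func moveToNextArea() {
            guard selectedIndex + 1 < areas.count else { return }
            selectedIndex += 1
            presentCurrent()
        }
    }

    let reader = ExpressionReader(areas: [
        ExpressionArea(portion: "x^2", spokenDescription: "x squared"),
        ExpressionArea(portion: "+ 2x + 1", spokenDescription: "plus 2 x plus 1"),
    ])
    reader.presentCurrent()
    reader.moveToNextArea()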
Abstract:
An electronic device can provide an interactive map with non-visual output, thereby making the map accessible to visually impaired users. The map can be based on a starting location defined by the current location of the electronic device or by a location entered by the user. Nearby paths, nearby points of interest, or directions from the starting location to an ending location can be identified via audio output. Users can touch a screen of the electronic device in order to virtually explore a neighborhood. A user can be alerted when moving along or straying from a path, approaching an intersection or point of interest, or changing terrains. Thus, users can familiarize themselves with city-level spatial relationships without needing to physically explore unfamiliar surroundings.
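As a rough illustration of the "straying from a path" alert, one could compare the touch location against sampled points of the path polyline and emit non-visual feedback when the distance exceeds a threshold. The geometry, the 10-meter threshold, and the printed alert (standing in for audio or haptics) are assumptions.

    import Foundation

    struct Point { var x: Double; var y: Double }

    func distance(_ a: Point, _ b: Point) -> Double {
        ((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y)).squareRoot()
    }

    // Distance from the touch to the closest sampled point on the path.
    func offPathDistance(touch: Point, path: [Point]) -> Double {
        path.map { distance(touch, $0) }.min() ?? .infinity
    }

    func feedback(forTouch touch: Point, path: [Point]) {
        if offPathDistance(touch: touch, path: path) > 10 {   // meters, assumed
            print("Alert: you are straying from the path")    // e.g., spoken or haptic
        } else {
            print("On path")
        }
    }

    feedback(forTouch: Point(x: 3, y: 4),
             path: [Point(x: 0, y: 0), Point(x: 10, y: 0)])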
Abstract:
An electronic device with a display and a touch-sensitive surface displays, on the display, a first visual indicator that corresponds to a virtual touch. The device receives a first input from an adaptive input device. In response to receiving the first input from the adaptive input device, the device displays a first menu on the display. The first menu includes a virtual touches selection icon. In response to detecting selection of the virtual touches selection icon, the device displays a menu of virtual multitouch contacts.
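A compact sketch of the two-level menu flow driven by the adaptive input device: the first input opens the first menu, and selecting the virtual touches icon reveals the menu of virtual multitouch contacts. The event names and menu contents are invented for illustration.

    import Foundation

    enum AdaptiveInput { case menuButton, select(item: String) }

    final class VirtualTouchController {
        private(set) var visibleMenu: [String] = []

        func handle(_ input: AdaptiveInput) {
            switch input {
            case .menuButton:
                // First input from the adaptive device: show the first menu.
                visibleMenu = ["Virtual Touches", "Favorites", "Gestures"]
            case .select(let item) where item == "Virtual Touches":
                // Selecting the virtual touches icon reveals the contacts menu.
                visibleMenu = ["2 Fingers", "3 Fingers", "4 Fingers", "5 Fingers"]
            case .select:
                break
            }
            print("Now showing: \(visibleMenu)")
        }
    }

    let controller = VirtualTouchController()
    controller.handle(.menuButton)
    controller.handle(.select(item: "Virtual Touches"))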
Abstract:
A method for controlling a peripheral from a group of computing devices is provided. The method sets up a group of computing devices for providing media content and control settings to a peripheral device such as a hearing aid. The computing devices in the group are interconnected by a network and exchange data with each other regarding the peripheral. A master device in the group is directly paired with the peripheral device and can use the pairing connection to provide media content or to apply the control settings to the peripheral device. The peripheral device is paired with only the master device of the group. A slave device can request to directly pair with the peripheral device and become the master device in order to provide media content to the peripheral.
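A minimal sketch of the master-handoff idea: the group keeps a shared record of which device holds the direct pairing, and a slave requests mastership before it can stream to the peripheral. The class and method names are hypothetical; the abstract specifies no concrete pairing protocol.

    import Foundation

    final class DeviceGroup {
        private(set) var masterID: String
        private var members: Set<String>

        init(members: Set<String>, initialMaster: String) {
            self.members = members
            self.masterID = initialMaster
        }

        // A slave asks to become master so it can pair directly and stream media.
        func requestMastership(for deviceID: String) -> Bool {
            guard members.contains(deviceID), deviceID != masterID else { return false }
            print("\(masterID) unpairs from the hearing aid")
            masterID = deviceID
            print("\(masterID) pairs with the hearing aid and may now stream")
            return true
        }
    }

    let group = DeviceGroup(members: ["phone", "tablet", "laptop"], initialMaster: "phone")
    _ = group.requestMastership(for: "tablet")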