Abstract:
Some embodiments provide a navigation application. The navigation application includes an interface for receiving data describing junctures along a route from a first location to a second location. The data for each juncture comprises a set of angles at which roads leave the juncture. The navigation application includes a juncture simplifier for simplifying the angles of the received junctures. The navigation application includes an arrow generator for generating at least two different representations of each simplified juncture. The representations are for use in displaying navigation information describing a maneuver to perform at the juncture during the route. The navigation application includes an arrow selector for selecting one of the different representations of a simplified juncture for display according to the context in which the representation will be displayed.
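To make the simplify/generate/select pipeline concrete, the following sketch models one plausible reading of this abstract in Swift. The Juncture type, the 45-degree snapping rule, the ArrowStyle cases, and the width threshold are hypothetical illustrations, not the patented implementation.

```swift
// Hypothetical model of a route juncture: the angles (in degrees) at which
// roads leave the juncture, plus the angle of the branch the maneuver takes.
struct Juncture {
    let branchAngles: [Double]
    let exitAngle: Double
}

// Simplify a juncture by snapping each branch angle to the nearest 45-degree
// increment (one plausible reading of "simplifying the angles").
func simplify(_ juncture: Juncture) -> Juncture {
    func snap(_ angle: Double) -> Double {
        (angle / 45.0).rounded() * 45.0
    }
    return Juncture(branchAngles: juncture.branchAngles.map(snap),
                    exitAngle: snap(juncture.exitAngle))
}

// Two hypothetical representations of the maneuver arrow.
enum ArrowStyle { case detailed, compact }

struct Arrow {
    let style: ArrowStyle
    let exitAngle: Double
    let crossStreets: [Double]   // de-emphasized branches, kept only in the detailed style
}

// Generate both representations for a simplified juncture.
func generateArrows(for juncture: Juncture) -> [Arrow] {
    let others = juncture.branchAngles.filter { $0 != juncture.exitAngle }
    return [
        Arrow(style: .detailed, exitAngle: juncture.exitAngle, crossStreets: others),
        Arrow(style: .compact, exitAngle: juncture.exitAngle, crossStreets: [])
    ]
}

// Select a representation according to the display context; here the context
// is reduced to the width of the display area (compact arrows for small banners).
func selectArrow(from arrows: [Arrow], availableWidth: Double) -> Arrow {
    let wanted: ArrowStyle = availableWidth < 100 ? .compact : .detailed
    return arrows.first { $0.style == wanted } ?? arrows[0]
}

let juncture = Juncture(branchAngles: [2.0, 93.0, 181.0, 268.0], exitAngle: 93.0)
let arrows = generateArrows(for: simplify(juncture))
print(selectArrow(from: arrows, availableWidth: 80).style)   // compact
```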
Abstract:
A device that includes at least one processing unit and stores a multi-mode mapping program for execution by the at least one processing unit is described. The program includes a user interface (UI). The UI includes a display area for displaying a two-dimensional (2D) presentation of a map or a three-dimensional (3D) presentation of the map. The UI includes a selectable 3D control for directing the program to transition between the 2D and 3D presentations.
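A minimal sketch of the described UI state, assuming a hypothetical MapViewModel type rather than any actual framework API: the display area shows either a 2D or a 3D presentation of the map, and selecting the 3D control directs the program to transition between the two.

```swift
// Hypothetical presentation state for the map display area.
enum MapPresentation { case twoD, threeD }

final class MapViewModel {
    private(set) var presentation: MapPresentation = .twoD

    // Invoked when the user selects the 3D control.
    func toggle3DControl() {
        presentation = (presentation == .twoD) ? .threeD : .twoD
    }
}

let viewModel = MapViewModel()
viewModel.toggle3DControl()
print(viewModel.presentation)   // threeD
```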
Abstract:
For a mapping application, a method for reporting a problem related to a map displayed by the mapping application is described. The method identifies a mode in which the mapping application is operating. The method identifies a set of types of problems to report based on the identified mode. The method displays, in a display area of the mapping application, a graphical user interface (GUI) page that includes a set of selectable user interface (UI) items that represent the identified set of types of problems.
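The following sketch illustrates the mode-dependent selection of problem types; the mode names and problem categories are assumed for illustration and are not taken from the abstract.

```swift
// Hypothetical operating modes of the mapping application.
enum MapMode { case browsing, navigation, transit }

// Hypothetical problem types a user might report.
enum ProblemType: String {
    case incorrectLabel = "Incorrect label"
    case missingPlace = "Missing place"
    case badRoute = "Route was wrong"
    case wrongSchedule = "Schedule was wrong"
}

// Identify the set of reportable problem types based on the identified mode.
func problemTypes(for mode: MapMode) -> [ProblemType] {
    switch mode {
    case .browsing:   return [.incorrectLabel, .missingPlace]
    case .navigation: return [.badRoute, .incorrectLabel]
    case .transit:    return [.wrongSchedule, .missingPlace]
    }
}

// Each returned type would back one selectable UI item on the GUI page.
print(problemTypes(for: .navigation).map { $0.rawValue })
```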
Abstract:
Some embodiments provide a mapping application that provides a variety of UI elements for allowing a user to specify a location (e.g., for viewing or for serving as a route destination). In some embodiments, these location-input UI elements appear in succession on a sequence of pages, according to a hierarchy in which the UI elements that require less user interaction appear on earlier pages in the sequence than the UI elements that require more user interaction. In some embodiments, the location-input UI elements that successively appear in the mapping application include (1) selectable predicted-destination notifications, (2) a list of selectable predicted destinations, (3) a selectable voice-based search affordance, and (4) a keyboard. In some of these embodiments, these UI elements appear successively on the following sequence of pages: (1) a default page for presenting the predicted-destination notifications, (2) a destination page for presenting the list of predicted destinations, (3) a search page for receiving voice-based search requests, and (4) a keyboard page for receiving character input.
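One way to picture the page hierarchy is as an ordered sequence that the application walks through as the user asks for input methods that need progressively more interaction. The enum cases below mirror the four pages named in the abstract; the type and function names are hypothetical.

```swift
// Pages ordered from least to most required user interaction.
enum LocationInputPage: Int {
    case defaultPage = 0      // selectable predicted-destination notifications
    case destinationPage      // list of selectable predicted destinations
    case searchPage           // selectable voice-based search affordance
    case keyboardPage         // keyboard for character input
}

// Advance to the next page in the hierarchy when the user requests more input options.
func nextPage(after page: LocationInputPage) -> LocationInputPage? {
    LocationInputPage(rawValue: page.rawValue + 1)
}

var page: LocationInputPage? = .defaultPage
while let current = page {
    print(current)            // defaultPage, destinationPage, searchPage, keyboardPage
    page = nextPage(after: current)
}
```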
Abstract:
A multitouch device can interpret and disambiguate different gestures related to manipulating a displayed image of a 3D object, scene, or region. Examples of manipulations include pan, zoom, rotation, and tilt. The device can define a number of manipulation modes, including one or more single-control modes such as a pan mode, a zoom mode, a rotate mode, and/or a tilt mode. The manipulation modes can also include one or more multi-control modes, such as a pan/zoom/rotate mode that allows multiple parameters to be modified simultaneously.
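A sketch of how such mode selection might look, with entirely hypothetical thresholds and scaling factors: the gesture's pan, zoom, and rotate components are compared, the dominant component selects a single-control mode, and when no component clearly dominates the multi-control pan/zoom/rotate mode is used.

```swift
// Single-control and multi-control manipulation modes.
enum ManipulationMode {
    case pan, zoom, rotate, tilt      // single-control modes
    case panZoomRotate                // multi-control mode
}

// One sample of a multitouch gesture, reduced to three components.
struct GestureSample {
    let translation: (dx: Double, dy: Double)
    let pinchScaleChange: Double      // 1.0 means no change in scale
    let rotationChange: Double        // radians, 0.0 means no rotation
}

// Pick a mode from the dominant component of the gesture; if no single
// component dominates, fall back to the multi-control mode.
func disambiguate(_ sample: GestureSample) -> ManipulationMode {
    let panMagnitude = abs(sample.translation.dx) + abs(sample.translation.dy)
    let zoomMagnitude = abs(sample.pinchScaleChange - 1.0) * 100
    let rotateMagnitude = abs(sample.rotationChange) * 50
    let strongest = max(panMagnitude, zoomMagnitude, rotateMagnitude)
    let others = [panMagnitude, zoomMagnitude, rotateMagnitude].filter { $0 != strongest }
    // Require the strongest component to clearly dominate before locking to a single-control mode.
    if let runnerUp = others.max(), strongest < runnerUp * 2 {
        return .panZoomRotate
    }
    if strongest == panMagnitude { return .pan }
    if strongest == zoomMagnitude { return .zoom }
    return .rotate
}

let pinch = GestureSample(translation: (dx: 0.5, dy: 0.3),
                          pinchScaleChange: 1.4,
                          rotationChange: 0.02)
print(disambiguate(pinch))   // zoom
```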
Abstract:
A method of providing a sequence of turn-by-turn navigation instructions on a device traversing a route is described. Each turn-by-turn navigation instruction is associated with a location on the route. As the device traverses the route, the method displays the turn-by-turn navigation instruction associated with the current location of the device. The method receives a touch input through a touch input interface of the device while displaying a first turn-by-turn navigation instruction and a first map region that displays the current location and a first location associated with the first turn-by-turn navigation instruction. In response to receiving the touch input, the method displays a second turn-by-turn navigation instruction and a second map region that displays a second location associated with the second turn-by-turn navigation instruction. Without receiving additional input, the method automatically returns to the display of the first turn-by-turn navigation instruction and the first map region.
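The following sketch captures the stepping and auto-return behavior with hypothetical types; a real implementation would drive autoReturn() from a timer started by the touch input, which is elided here.

```swift
// One turn-by-turn instruction and the location its map region is centered on.
struct Instruction {
    let text: String
    let location: (lat: Double, lon: Double)
}

final class TurnByTurnDisplay {
    private let route: [Instruction]
    private var currentIndex = 0          // instruction for the device's current location
    private var shownIndex = 0            // instruction currently on screen

    init(route: [Instruction]) { self.route = route }

    var shownInstruction: Instruction { route[shownIndex] }

    // Touch input: step to the next instruction and its map region.
    func handleTouch() {
        shownIndex = min(shownIndex + 1, route.count - 1)
    }

    // Called when the peek interval elapses without additional input.
    func autoReturn() {
        shownIndex = currentIndex
    }
}

let display = TurnByTurnDisplay(route: [
    Instruction(text: "Turn right onto 1st St", location: (lat: 37.33, lon: -122.03)),
    Instruction(text: "Turn left onto Main St", location: (lat: 37.34, lon: -122.02)),
])
display.handleTouch()
print(display.shownInstruction.text)   // Turn left onto Main St
display.autoReturn()
print(display.shownInstruction.text)   // Turn right onto 1st St
```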