Abstract:
Some embodiments of the invention provide a navigation application that presents road signs during a navigation presentation. In presenting the road signs, the application of some embodiments differentiates the appearance of road signs at junctions that require a change of direction from road signs at junctions that do not require a change of direction. The application may perform processes that ensure that it arranges the road signs on the map in an aesthetically pleasing manner. In addition, the navigation application of some embodiments does not display too many road signs along the navigated route, so that the route is not occluded by too many road signs.
Abstract:
The subject technology provides a mapping view that enables a user to view, from their current location, a portion of a route to a destination including a graphical element corresponding to the current location and a second graphical element in the form of an arrow-shaped object to indicate a direction of a turn to be made on the route. As progress on the route is made, when the current location of the user is within a distance threshold of the location of the turn to be made, the subject technology may animate the second graphical element to move in the direction of the turn along the route to another location where a subsequent turn is to be made. The second graphical element may remain stationary at this other location to indicate a direction of the subsequent turn until the user's current location reaches the distance threshold for this subsequent turn.
Abstract:
A method of providing navigation instructions in a locked mode of a device is disclosed. The method, while the display screen of the device is turned off, determines that the device is near a navigation point. The method turns on the display screen and provides navigation instructions. In some embodiments, the method identifies the ambient light level around the device and turns on the display at a brightness level determined by the identified ambient light level. The method turns off the display after the navigation point is passed.
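A sketch of the wake-and-dim behavior described above follows. All constants (the wake radius, the brightness range) and the normalized ambient-light scale are invented for illustration; the abstract does not specify them.

```python
def wake_brightness(ambient_light, min_brightness=0.1, max_brightness=1.0):
    """Map a normalized ambient light level (0 = dark, 1 = bright daylight)
    to a display brightness, so the screen wakes dim at night and
    readable in sunlight. The linear mapping is an assumption."""
    level = min(max(ambient_light, 0.0), 1.0)
    return min_brightness + level * (max_brightness - min_brightness)

def on_location_update(distance_to_navigation_point, ambient_light,
                       wake_radius=200.0):
    """Return (screen_on, brightness) for the locked-mode behavior:
    the screen turns on only near a navigation point."""
    if distance_to_navigation_point <= wake_radius:
        return True, wake_brightness(ambient_light)
    return False, 0.0
```

The same predicate run in reverse (distance growing past the radius again) would trigger turning the display back off once the navigation point is passed.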
Abstract:
Space interaction (SI) functionality is described herein for assisting a user in interacting with a space. The SI functionality includes a sound generation module that generates three-dimensional sounds in various circumstances. A three-dimensional sound is perceived by a user as emanating from a particular location within the space. Different modules may leverage the three-dimensional sounds for different purposes. In one implementation, a path guidance module uses a three-dimensional beat sound to direct the user in a particular direction. An exploration module uses three-dimensional sounds to identify the locations of items of interest that lie within (or are otherwise associated with) a subspace to which an attention of the user is currently directed. An orientation module uses three-dimensional sounds to identify the locations of items of interest that are associated with an entire space around the user.
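The geometry behind placing a directional beat sound can be sketched as below. This is an assumption-laden toy: 2D positions, a heading in radians, and simple stereo panning standing in for the HRTF-style spatialization a real 3D-audio system would use.

```python
import math

def beat_azimuth(user_pos, user_heading, target_pos):
    """Angle of the target relative to the user's facing direction
    (0 = straight ahead, positive = to the right). Heading 0 = north."""
    dx = target_pos[0] - user_pos[0]
    dy = target_pos[1] - user_pos[1]
    bearing = math.atan2(dx, dy)                   # 0 = north, clockwise +
    az = bearing - user_heading
    return math.atan2(math.sin(az), math.cos(az))  # wrap to (-pi, pi]

def stereo_pan(azimuth):
    """Left/right gains so the beat is perceived on the azimuth side."""
    pan = math.sin(azimuth)                        # -1 (left) .. +1 (right)
    return (1.0 - pan) / 2.0, (1.0 + pan) / 2.0
```

A path guidance module would recompute the azimuth on each location update, so the beat appears to stay fixed in the direction the user should walk as they turn.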
Abstract:
A street-level view can realistically reflect that objects occlude depicted route paths. Such objects can include guardrails, buildings, or any of a variety of other objects as described herein. A superior user experience that portrays route paths while taking real-world geometry into account can result.
Abstract:
Some embodiments of the invention provide a navigation application that allows a user to peek ahead or behind during a turn-by-turn navigation presentation that the application provides while tracking a device's (e.g., a mobile device's, a vehicle's, etc.) traversal of a physical route. As the device traverses the physical route, the navigation application generates a navigation presentation that shows a representation of the device on a map traversing a virtual route that represents the physical route on the map. While providing the navigation presentation, the navigation application can receive user input to look ahead or behind along the virtual route. Based on the user input, the navigation application moves the navigation presentation to show locations on the virtual route that are ahead of or behind the displayed current location of the device on the virtual route. This movement can cause the device representation to no longer be visible in the navigation presentation. Also, the virtual route often includes several turns, and the peek-ahead or peek-behind movement of the navigation presentation passes the presentation through one or more of these turns. In some embodiments, the map can be presented as a two-dimensional (2D) or a three-dimensional (3D) scene.
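One way to realize a peek along the virtual route is to move the presentation's focus point a signed distance along the route polyline, leaving the device marker where it is. The sketch below assumes a 2D polyline and a signed peek distance (positive = ahead, negative = behind); the function name and representation are illustrative, not from the patent.

```python
import math

def point_along(route, start_dist, peek_dist):
    """Return the (x, y) point at start_dist + peek_dist along the
    polyline, clamped to the route's endpoints. Panning the camera to
    this point naturally carries the view through any intermediate
    turns, while the device marker stays at start_dist."""
    target = start_dist + peek_dist
    walked = 0.0
    for (x0, y0), (x1, y1) in zip(route, route[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if walked + seg >= target >= 0:
            t = max(0.0, (target - walked) / seg)
            return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
        walked += seg
    return route[0] if target < 0 else route[-1]

route = [(0, 0), (10, 0), (10, 10)]
print(point_along(route, 5, 3))    # → (8.0, 0.0), still on first segment
print(point_along(route, 5, 10))   # → (10.0, 5.0), peeked past the turn
```

Because the returned point can be arbitrarily far from the device's position, the device representation may leave the viewport, matching the behavior described above.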
Abstract:
Intersection guide systems, methods, and programs acquire information on a path of a vehicle and, on the basis of that information, acquire a travel direction of the vehicle at a guide intersection ahead of the vehicle. The systems, methods, and programs display a guide image that represents the travel direction, superimposed on a portion of the forward scene ahead of the vehicle other than the image of the guide intersection, and a connection line image superimposed on the forward scene, the connection line image connecting the image of the guide intersection in the forward scene and the guide image. The connection line image is superimposed on the forward scene such that its length becomes shorter as the vehicle approaches the guide intersection.
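The shrinking connection line can be expressed as a simple function of remaining distance. The linear mapping and the constants below (a 300 m "far" distance, a 120 px maximum length) are invented for illustration; the abstract only requires that the length decrease as the vehicle approaches.

```python
def connection_line_length(distance_to_intersection,
                           far_distance=300.0, max_length=120.0):
    """On-screen length of the line linking the guide image to the
    guide intersection: full length when far away, shrinking to zero
    as the vehicle reaches the intersection."""
    ratio = min(max(distance_to_intersection / far_distance, 0.0), 1.0)
    return max_length * ratio

print(connection_line_length(300.0))  # → 120.0
print(connection_line_length(150.0))  # → 60.0
print(connection_line_length(0.0))    # → 0.0
```

A production head-up display would likely ease this curve and clamp against the projected screen position of the intersection, but the monotone shrink is the essential behavior.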
Abstract:
Methods, systems and apparatus are described to render map data according to texture masks. A rendering device may obtain map data, which may include one or more shapes described by vector graphics data. Along with the one or more shapes, embodiments may include mask indicators corresponding to the one or more shapes. Embodiments may render the map data by creating a mask shape based upon mask indicators corresponding to the shapes described by the vector graphics data. For each created mask shape, a texture source may be determined according to the mask indicator for the mask shape. Embodiments may obtain a texture from the texture source and may apply the mask shape to the obtained texture to render a fill portion of the corresponding shape described by the vector graphics data. Some embodiments may display the rendered map data as a map view.
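A toy CPU-side version of the mask-and-texture fill can make the pipeline concrete. Everything here is an assumption for illustration: masks and textures are nested lists, the "shape" is an axis-aligned rectangle, and the texture source is tiled; a real renderer would rasterize vector shapes and sample textures on the GPU.

```python
def rasterize_rect(width, height, rect):
    """Create a mask shape from vector data (here just a rectangle
    (x0, y0, x1, y1)); True marks pixels inside the shape."""
    x0, y0, x1, y1 = rect
    return [[x0 <= x < x1 and y0 <= y < y1 for x in range(width)]
            for y in range(height)]

def apply_mask(mask, texture):
    """Fill the masked pixels by tiling the texture source; pixels
    outside the mask are left as background (0)."""
    th, tw = len(texture), len(texture[0])
    return [[texture[y % th][x % tw] if mask[y][x] else 0
             for x in range(len(mask[0]))]
            for y in range(len(mask))]

mask = rasterize_rect(4, 4, (1, 1, 3, 3))
out = apply_mask(mask, [[7, 8], [9, 5]])  # 2x2 texture tiled into the mask
```

The mask indicator in the abstract would select *which* texture source feeds `apply_mask` for each shape; here a single texture is passed directly.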
Abstract:
Some embodiments provide a method for generating intersection data for paths in a map region. The method receives a set of junctions at which paths intersect in the map region. For a particular junction of at least two paths, the method automatically determines whether any of the other junctions in the map region satisfy criteria to be part of a single intersection with the particular junction. When at least one of the other junctions satisfies the criteria, the method automatically combines the other junctions that satisfy the criteria with the particular junction into a single intersection for use in performing mapping operations.
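The abstract leaves the combination criteria unspecified; proximity is a plausible stand-in (e.g., the two close junctions where a road crosses a dual carriageway belong to one logical intersection). The greedy clustering below is an illustrative sketch under that assumption, not the patented method.

```python
import math

def combine_junctions(junctions, max_separation=25.0):
    """Greedy clustering of (x, y) junctions: each junction joins the
    first cluster whose members are all within max_separation of it,
    otherwise it starts a new cluster. Each resulting cluster is
    treated as a single intersection for mapping operations."""
    clusters = []
    for j in junctions:
        for cluster in clusters:
            if all(math.hypot(j[0] - k[0], j[1] - k[1]) <= max_separation
                   for k in cluster):
                cluster.append(j)
                break
        else:
            clusters.append([j])
    return clusters

# Two close junctions merge; the distant one stays separate:
print(combine_junctions([(0, 0), (10, 0), (500, 500)]))
# → [[(0, 0), (10, 0)], [(500, 500)]]
```

Real criteria would also consult road attributes (names, classes, carriageway pairing), but the merge-into-one-intersection structure is the same.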