Abstract:
A system and method are provided for detecting user manipulation (430) of an inanimate object (432) and interpreting that manipulation as input for a software application. In one aspect, the manipulation may be detected by an image capturing component (412) of a computing device (410), and the manipulation is interpreted as an instruction to execute a command, such as opening a drawing application in response to a user picking up a pen (432). The manipulation may also be detected with the aid of an audio capturing device, e.g., a microphone (122) on the computing device.
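The mapping from a detected object manipulation to a command could be sketched as a simple lookup. The object labels and command names below are illustrative assumptions, not terms from the abstract:

```python
# Hypothetical object-to-command table; labels and commands are assumptions.
OBJECT_COMMANDS = {
    "pen": "open_drawing_app",
    "book": "open_reader_app",
}

def interpret_manipulation(detected_object):
    """Return the command associated with a detected object, or None
    if the object has no associated command."""
    return OBJECT_COMMANDS.get(detected_object)
```

In practice the `detected_object` label would come from an image or audio recognition stage; here it is assumed to be a plain string.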
Abstract:
Provided is a process for personalizing an interactive map that includes the following: receiving a user request to view an interactive map; determining a map extent responsive to the request; obtaining a profile of the user; personalizing, with a computer, an interactive map based on the profile; and presenting the personalized map to the user. Personalizing includes determining whether to depict geographic features within the map extent in the personalized map based on the profile, and formatting a depiction of the features to have, for each respective feature, a prominence determined based on the profile.
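The personalization step, deciding whether to depict a feature and at what prominence based on the profile, could be sketched as follows. The feature and profile schemas are illustrative assumptions:

```python
def personalize(features, profile):
    """Decide whether to depict each feature and at what prominence.
    `profile` maps a feature category to an interest score in [0, 1];
    this schema is an assumption for illustration."""
    personalized = []
    for feature in features:
        interest = profile.get(feature["category"], 0.0)
        if interest <= 0.0:
            continue  # profile indicates no interest: omit the feature
        personalized.append({**feature,
                             "prominence": interest * feature["base_prominence"]})
    return personalized
```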
Abstract:
Systems and methods for navigating imagery, such as geographic imagery, are provided. A user can initiate an imagery pan in a viewport presented on a display of a computing device by throwing the imagery in the viewport. The motion of the imagery pan can be controlled based on content displayed in or near the viewport such that the imagery pan is more likely to land near predominant features depicted in the viewport. For instance, features depicted in the viewport can act as "friction" or "gravity" on the imagery pan, adjusting the pan rate and/or pan direction of the imagery as the imagery pans across the viewport. In particular aspects, the motion of the imagery pan can be adjusted based on weights associated with features depicted in or near the viewport. Features with higher weights will affect the motion of the imagery pan more than features with lower weights.
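The weight-as-friction idea can be illustrated in one dimension: each step of the pan, the velocity decays faster when the pan is near a heavily weighted feature, so the pan tends to land near it. All constants (base friction, influence radius, weight coefficient) are illustrative assumptions:

```python
def simulate_pan(position, velocity, features, base_friction=0.05, dt=1.0):
    """1-D sketch of a 'thrown' pan. `features` is a list of
    (position, weight) pairs; a feature within the influence radius adds
    friction proportional to its weight, slowing the pan so it tends to
    stop nearby. Constants are assumptions, not from the patent."""
    while abs(velocity) > 0.01:
        friction = base_friction
        for feature_pos, weight in features:
            if abs(position - feature_pos) < 50.0:  # influence radius
                friction += 0.02 * weight           # heavier features brake more
        velocity *= (1.0 - friction * dt)
        position += velocity * dt
    return position
```

With a weighted feature in its path, the pan comes to rest much closer to the feature than the same throw over empty imagery would.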
Abstract:
Aspects of the disclosure relate generally to determining the specularity of an object. As an example, an object or area of geometry (512, 712, 812, 1304) may be selected. A set of images (510, 520; 620; 710, 720; 810, 820; 1306) that includes the area of geometry (512) may be captured. This set of images may be filtered to remove images that do not show the area of geometry well, for example because the area is in shadow or occluded by another object. A set of average intensity values for the area is determined for each image. A set of angle values for each image is determined based on at least a direction of a camera that captured the particular image when the particular image was captured. The set of average intensity values and the set of angle values are paired and fit to a curve (636, 840, 1308). The specularity of the area may then be classified based on at least the fit.
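The intuition behind the classification is that a diffuse surface has near-constant intensity across viewing angles, while a specular one shows a sharp peak near the mirror angle. The sketch below substitutes a simple peak-ratio heuristic for the curve fit described in the abstract; the threshold is an assumption:

```python
def classify_specularity(samples, threshold=0.5):
    """`samples` is a list of (angle_degrees, intensity) pairs from the
    filtered image set. A large spread of intensity relative to the mean
    suggests a view-dependent (specular) surface. This peak-ratio rule
    stands in for the curve fit of the abstract; threshold is assumed."""
    intensities = [intensity for _, intensity in samples]
    mean = sum(intensities) / len(intensities)
    peak_ratio = (max(intensities) - min(intensities)) / mean
    return "specular" if peak_ratio > threshold else "diffuse"
```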
Abstract:
Systems and methods for applying one or more route-based modifications to a map are provided. In some aspects, a system includes a pathfinder module configured to determine a primary route from a beginning point to a destination point on the map. The pathfinder module is further configured to generate one or more primary modifications to the map based on the primary route. The system also includes a restyling module configured to apply the primary route and the one or more primary modifications to the map. The one or more primary modifications include at least one of a) adding a first object to the map that would otherwise be excluded from the map if the primary route is not applied to the map and b) excluding a second object from the map that would otherwise be added to the map if the primary route is not applied to the map.
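The include/exclude behavior can be sketched as a filter over map objects: objects near the route are added even when they would otherwise be excluded, and far-away low-priority objects are dropped. Positions are one-dimensional here for brevity, and the object schema is an illustrative assumption:

```python
def apply_route_modifications(map_objects, route, radius=1.0):
    """Sketch: objects within `radius` of any route point are included
    even if low priority; objects far from the route are kept only if
    high priority. Schema and radius are assumptions for illustration."""
    def near_route(obj):
        return any(abs(obj["pos"] - point) <= radius for point in route)

    result = []
    for obj in map_objects:
        if near_route(obj):
            result.append(obj)            # route makes this object relevant
        elif obj["priority"] == "high":
            result.append(obj)            # kept regardless of the route
    return result
```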
Abstract:
A capability for prominence-based feature generation and rendering for digital maps is provided. More specifically, embodiments relate to rendering map features such as buildings or landmarks in different rendering styles based on signals for how important a particular feature is to a search context. A search context may be, for example and without limitation, a general view of the map or a user-initiated search request for a particular point of interest or driving directions between different points of interest on the map. For example, the different rendering styles may include, but are not limited to, two-dimensional (2D) footprints, two-and-a-half-dimensional (2.5D) extruded polygons, as will be described further below, and full three-dimensional (3D) models. Furthermore, the style could include color and/or visual texture.
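Selecting a rendering style from a prominence signal could look like the threshold rule below. The style names follow the abstract; the numeric cutoffs are illustrative assumptions:

```python
def choose_style(prominence):
    """Map a prominence score in [0, 1] to a rendering style.
    The cutoff values are assumptions, not from the patent."""
    if prominence >= 0.8:
        return "3d_model"        # full three-dimensional model
    if prominence >= 0.4:
        return "2.5d_extrusion"  # extruded polygon
    return "2d_footprint"        # flat footprint
```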
Abstract:
A system and method for generating a content based, custom labeled map is provided. A request for a map is received. The request includes a geographical area to be displayed in the map and a type of content item to be displayed in the map. A plurality of orientation points to display on the map is determined based on a ranking of locations in the geographical area, and one or more pieces of content to be associated with each orientation point is determined. Each orientation point is ranked based in part on ranks of the one or more pieces of content associated with each orientation point. A map is generated to display, at the locations of the plurality of orientation points, the pieces of content associated with each orientation point at a level of prominence that is based on the ranking of each orientation point.
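Ranking orientation points by the ranks of their associated content can be sketched as below. The abstract does not say how the content ranks are combined; taking their mean is an assumption for illustration:

```python
def rank_orientation_points(points):
    """`points` maps an orientation point name to the ranks of its
    associated content items. Each point is scored by the mean of its
    content ranks (an assumed aggregation) and the names are returned
    from most to least prominent."""
    scored = sorted(points.items(),
                    key=lambda item: sum(item[1]) / len(item[1]),
                    reverse=True)
    return [name for name, _ in scored]
```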
Abstract:
A system and method may truncate entered text of a text box user interface element to display both a beginning portion of the text and an ending portion of the text. Within a displayed user interface, a collapsible text entry box may include a text entry field having a maximum size based on a spatial relationship between the field and the box. The text entry field may include a text entry capacity of a threshold number of characters based on a text entry field parameter. The field may receive a stream of characters, and the user interface may initially display all characters as the field receives them. When the displayed characters exceed the threshold number, the system and method may truncate the displayed characters at a truncation point. The truncation point may be positioned at a displayed character after the first displayed character of the received character stream.
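Middle truncation that preserves both the beginning and the end of the text can be sketched as below. The three-character ellipsis marker and the `tail` length are illustrative assumptions:

```python
def truncate_middle(text, max_chars, tail=3):
    """Once `text` exceeds `max_chars`, keep its beginning and its last
    `tail` characters, joined by a "..." marker, so both ends remain
    visible. Marker and tail length are assumptions for illustration."""
    if len(text) <= max_chars:
        return text
    head = max_chars - tail - 3  # reserve 3 characters for "..."
    return text[:head] + "..." + text[-tail:]
```

For example, `truncate_middle("a long file name.txt", 12)` keeps the start of the name and the "txt" ending.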
Abstract:
Methods and systems for navigating panoramic imagery are provided. If a user rotates panoramic imagery to a view having a view angle that deviates beyond a threshold view angle, the view of the panoramic imagery will be adjusted to the threshold view angle. In a particular implementation, the view is drifted to the threshold view angle so that a user can at least temporarily view the imagery that deviates beyond the threshold view angle. A variety of transition animations can be used as the imagery is drifted to the threshold view angle. For instance, the view can be elastically snapped back to the threshold view angle to provide a visually appealing transition to a user.
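The drift back to the threshold angle can be sketched as an exponential ease-out that generates animation frames. The easing rate and frame count are illustrative assumptions; a real implementation might use an elastic curve for the snap-back described in the abstract:

```python
def snap_back(view_angle, threshold, rate=0.3, steps=20):
    """If the view deviates beyond `threshold`, drift it back toward the
    threshold with an exponential ease-out, returning the frames of the
    transition. Rate and step count are assumptions for illustration."""
    if abs(view_angle) <= threshold:
        return [view_angle]                 # within limits: nothing to do
    target = threshold if view_angle > 0 else -threshold
    frames = [view_angle]
    for _ in range(steps):
        view_angle += (target - view_angle) * rate
        frames.append(view_angle)
    return frames
```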