Abstract:
A computing device receives, via a communication network, map data including (i) a description of geometries of a plurality of map features and (ii) a first description of visual characteristics defined separately and independently of the description of the geometries. The computing device applies the visual characteristics to the geometries to render a first digital map. The computing device then receives, via the communication network, a second description of visual characteristics for application to the geometries previously provided to the computing device as part of the map data, and applies the second visual characteristics to the previously received geometries of the plurality of map features to render a second digital map.
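The separation of geometry from styling that this abstract describes can be sketched as follows. The `Feature` class, `render_map` function, and style-table layout here are hypothetical illustrations, not from any actual implementation: the key point is that re-styling reuses geometries already on the device.

```python
from dataclasses import dataclass

# A map feature carries only its geometry and a style key;
# the visual characteristics live in a separate style table.
@dataclass
class Feature:
    kind: str                                # e.g. "road", "water"
    vertices: list[tuple[float, float]]

def render_map(features, styles):
    """Pair each feature's geometry with the current style table.

    Returns a flat draw list; a real renderer would rasterize it.
    """
    return [(f.vertices, styles[f.kind]) for f in features]

features = [Feature("road", [(0, 0), (10, 0)]),
            Feature("water", [(0, 5), (5, 5), (5, 10)])]

day_styles = {"road": {"color": "#ffffff", "width": 4},
              "water": {"color": "#a0c8f0", "width": 1}}
night_styles = {"road": {"color": "#3a3a48", "width": 4},
                "water": {"color": "#123a5f", "width": 1}}

# First render with the first style description...
first_map = render_map(features, day_styles)
# ...then re-render with a second style description, reusing the
# geometries already on the device -- no geometry re-download.
second_map = render_map(features, night_styles)
```

Because the second render touches only the style table, a server can push a new look (say, a night mode) without retransmitting any vertex data.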
Abstract:
A viewing window of a map surface is determined at a zoom level corresponding to a magnification of the map surface. A first set of style parameters is determined for application to a feature of the map surface, where the feature is described in a vector format using several interconnected vertices. The first set of style parameters corresponds to a first zoom level of the viewing window, and the first zoom level corresponds to a first magnification. A second set of style parameters for the feature is also determined, where the second set of style parameters corresponds to a second zoom level of the viewing window, and where the second zoom level corresponds to a second magnification. A third set of style parameters for displaying the feature is determined by interpolating between the first set of style parameters and the second set of style parameters.
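A minimal sketch of the interpolation step described above, assuming purely numeric style parameters and linear interpolation; the parameter names and zoom values are invented for illustration.

```python
def lerp(a, b, t):
    """Linear interpolation between scalars a and b for t in [0, 1]."""
    return a + (b - a) * t

def interpolate_style(style_lo, style_hi, zoom_lo, zoom_hi, zoom):
    """Derive a third set of style parameters for an intermediate
    zoom level by interpolating between two known sets."""
    t = (zoom - zoom_lo) / (zoom_hi - zoom_lo)
    return {key: lerp(style_lo[key], style_hi[key], t)
            for key in style_lo}

# Road width and opacity defined at zoom 10 and zoom 12...
style_z10 = {"width": 2.0, "opacity": 0.5}
style_z12 = {"width": 6.0, "opacity": 1.0}

# ...yield an interpolated style for an intermediate zoom 11.
style_z11 = interpolate_style(style_z10, style_z12, 10, 12, 11)
# style_z11 == {"width": 4.0, "opacity": 0.75}
```

Non-numeric parameters (colors, dash patterns) would need their own interpolation rules, such as per-channel blending for colors.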
Abstract:
Techniques for merging multiple filters into a single GPU program are described. The system includes a filtering engine, which receives an input to apply a plurality of filters to a source image. The filtering engine identifies a first type of filter and a second type of filter from the input. The filtering engine identifies a supplemental transformation filter from the input, implements the supplemental transformation filter using a custom function to generate a color value from the source image, and merges the first type of filter and the second type of filter based on the supplemental transformation filter. Finally, the filtering engine may apply the merged filter to the source image to generate a destination image.
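The filter-merging idea can be illustrated on the CPU with per-pixel functions; the filter names below are hypothetical stand-ins for the abstract's "first type" and "second type" of filter, and the fused GPU program is modeled as a single composed function.

```python
def brightness(factor):
    """First filter type: scale each channel, clamping at 255."""
    def f(rgb):
        return tuple(min(255, int(c * factor)) for c in rgb)
    return f

def invert():
    """Second filter type: invert each channel."""
    def f(rgb):
        return tuple(255 - c for c in rgb)
    return f

def merge_filters(*filters):
    """Merge several per-pixel filters into one function, so the
    image is traversed once -- analogous to fusing filters into a
    single GPU program instead of running one pass per filter."""
    def merged(rgb):
        for f in filters:
            rgb = f(rgb)
        return rgb
    return merged

source = [[(100, 100, 100), (200, 0, 0)]]        # tiny 1x2 "image"
merged = merge_filters(brightness(1.5), invert())
dest = [[merged(px) for px in row] for row in source]
# dest == [[(105, 105, 105), (0, 255, 255)]]
```

On a GPU the same fusion saves intermediate render targets and memory bandwidth, since the merged program writes the destination image in one pass.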
Abstract:
A graphics or image rendering system, such as a map image rendering system, receives image data from an image database in the form of vector data that defines various image objects, such as roads, geographical boundaries, etc., and textures defining text strings to be displayed on the image to provide, for example, labels for the image objects. The image rendering system renders the images such that the individual characters of the text strings are placed on the image following a multi-segmented or curved line. This rendering system enables text strings to be placed on a map image so that the text follows the center line of a curved or angled road or other image feature without knowing the specifics of the curvature of the line along which the text will be placed when creating the texture that stores the text string information. This feature provides enhanced visual properties within a map image as it allows, for example, road names to be placed anywhere inside a road following the curvature of the road, thus providing a pleasing visual effect within the map image.
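The character placement described above can be sketched by walking a multi-segment centerline and sampling a position and tangent angle for each glyph. The function names and the fixed per-character advance are illustrative assumptions; a real renderer would use per-glyph advance widths from the text texture.

```python
import math

def point_along(polyline, distance):
    """Walk a multi-segment line and return the (x, y) point and
    tangent angle at a given distance from its start."""
    for (x0, y0), (x1, y1) in zip(polyline, polyline[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if distance <= seg:
            t = distance / seg
            angle = math.atan2(y1 - y0, x1 - x0)
            return (x0 + (x1 - x0) * t, y0 + (y1 - y0) * t), angle
        distance -= seg
    # Past the end: clamp to the final vertex.
    (x0, y0), (x1, y1) = polyline[-2], polyline[-1]
    return (x1, y1), math.atan2(y1 - y0, x1 - x0)

def layout_text(text, polyline, advance):
    """Place each character at successive distances along the line,
    rotated to follow the local segment direction."""
    return [(ch, *point_along(polyline, i * advance))
            for i, ch in enumerate(text)]

road = [(0, 0), (10, 0), (10, 10)]   # an L-shaped centerline
placed = layout_text("MAIN ST", road, advance=2.0)
# Characters past the corner pick up the second segment's angle,
# so the label bends with the road.
```

Because placement happens at render time, the same pre-built text texture can follow any line geometry the client encounters.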
Abstract:
Multiple individually renderable map elements, each representing a respective physical entity in a geographic area, are rendered to generate a digital map of the geographic area. A description of an aggregate map feature that includes several but not all of the multiple map elements is received. The several map elements represent physical entities that form a common administrative unit. A selection of one of the several map elements is received via a user interface. In response to receiving the selection, the several map elements included in the aggregate map feature are automatically selected, and an indication that the aggregate map feature has been selected is provided on the user interface.
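The selection expansion described above can be sketched as a lookup from a clicked element to its aggregate's full membership; the aggregate data and element identifiers below are hypothetical examples.

```python
def expand_selection(selected_id, aggregate_features):
    """Given a clicked map element, return all element ids that
    belong to the same aggregate feature (e.g. a common
    administrative unit), or just the element itself if it is
    part of no aggregate."""
    for feature in aggregate_features:
        if selected_id in feature["members"]:
            return set(feature["members"])
    return {selected_id}

# Hypothetical aggregate: campus buildings forming one university.
aggregates = [{"name": "State University",
               "members": ["library", "gym", "dorm_a"]}]

# Clicking the gym automatically selects the whole campus...
campus = expand_selection("gym", aggregates)
# ...while an element outside any aggregate selects only itself.
solo = expand_selection("cafe", aggregates)
```

The UI would then highlight every returned element and surface the aggregate's name ("State University") as the selected feature.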
Abstract:
A graphics or image rendering system, such as a map image rendering system, may receive map data associated with a set of zoom levels, where the map data includes style attribute data corresponding to various features of a map surface at corresponding zoom levels. The system may interpolate at least some of the style parameter values from the received map data to provide style parameter values over a range of zoom levels.
Abstract:
To provide map data for rendering map images corresponding to a selected geographic region at a client device, a map server generates a set of base map tiles having vector descriptors, each of which indicates a geometry of a respective map element, in accordance with a non-raster format for rendering a first map image. The map server, at some point, provides the base map tiles to the client device. Upon receiving an indication that a specific map image for the selected geographic region is to be rendered at the client device, the map server further generates a set of difference map tiles that indicate changes to be made to the set of base map tiles and sends the difference map tiles to the client device for use, along with the set of base map tiles, in rendering the requested specific map image. The client device renders the new map view defined by the difference map tiles without needing to again pre-process all of the features or elements defined in the base map tiles, making the rendering faster and less demanding of processing power.
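The base-tile-plus-difference scheme can be sketched as a merge of two dictionaries of vector descriptors. The tile layout, `changed`/`removed` keys, and element ids below are invented for illustration; the point is that unchanged descriptors are reused without re-processing.

```python
def apply_difference(base_tile, diff_tile):
    """Produce a new map tile by applying a difference tile to a
    base tile: descriptors may be replaced, added, or removed,
    while untouched descriptors are reused as-is."""
    merged = dict(base_tile)               # id -> vector descriptor
    for elem_id in diff_tile.get("removed", []):
        merged.pop(elem_id, None)
    merged.update(diff_tile.get("changed", {}))
    return merged

base = {"road_1": {"vertices": [(0, 0), (5, 0)], "style": "minor"},
        "park_7": {"vertices": [(1, 1), (1, 4), (4, 4)],
                   "style": "green"}}

# A difference tile restyling the road and dropping the park;
# "road_1" keeps its previously transmitted geometry.
diff = {"changed": {"road_1": {"vertices": [(0, 0), (5, 0)],
                               "style": "highway"}},
        "removed": ["park_7"]}

new_tile = apply_difference(base, diff)
```

Only the difference tile crosses the network for the second view, so the client re-renders from descriptors it has already parsed.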