Abstract:
Implementations described and claimed herein provide systems and methods for interaction between a user and a machine. In one implementation, a system is provided that receives an input from a user of a mobile machine, where the input indicates or describes an object in the world. In one example, the user may gesture toward the object, with the gesture detected by a visual sensor. In another example, the user may verbally describe the object, with the description detected by an audio sensor. The system receiving the input may then determine which object near the user's location the user is indicating. Such a determination may include utilizing known objects near the geographic location of the user or of the autonomous or mobile machine.
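As a rough illustration of this kind of disambiguation, the Python sketch below picks the known object whose bearing from the user best matches the direction of a detected gesture. The names KnownObject and resolve_indicated_object, the bearing-matching approach, and the 15-degree tolerance are all assumptions made for illustration, not details from the abstract.

    import math
    from dataclasses import dataclass

    @dataclass
    class KnownObject:
        name: str
        lat: float  # degrees
        lon: float  # degrees

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial great-circle bearing from point 1 to point 2, in degrees."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return math.degrees(math.atan2(y, x)) % 360

    def resolve_indicated_object(user_lat, user_lon, gesture_bearing, candidates, tolerance=15.0):
        """Return the known object whose bearing best matches the gesture direction."""
        best, best_err = None, tolerance
        for obj in candidates:
            diff = bearing_deg(user_lat, user_lon, obj.lat, obj.lon) - gesture_bearing
            err = abs((diff + 180) % 360 - 180)  # fold to a signed angular difference
            if err < best_err:
                best, best_err = obj, err
        return best

    objects = [KnownObject("coffee shop", 37.7751, -122.4183),
               KnownObject("bookstore", 37.7760, -122.4170)]
    print(resolve_indicated_object(37.7749, -122.4194, 75.0, objects))

A verbal description ("the coffee shop on my right") could feed the same candidate set through a different matching function, so the selection step stays independent of the input modality.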
Abstract:
A multitouch device can interpret and disambiguate different gestures related to manipulating a displayed image of a 3D object, scene, or region. Examples of manipulations include pan, zoom, rotation, and tilt. The device can define a number of manipulation modes, including one or more single-control modes such as a pan mode, a zoom mode, a rotate mode, and/or a tilt mode. The manipulation modes can also include one or more multi-control modes, such as a pan/zoom/rotate mode that allows multiple parameters to be modified simultaneously.
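As an illustration of how such modes might behave, the Python sketch below derives pan, zoom, and rotate deltas from two moving touch points and lets the active manipulation mode decide which of them take effect; a single-control mode keeps one parameter, while the multi-control mode keeps all three. The decomposition and the names are assumptions, not the device's actual algorithm.

    import math

    def two_finger_deltas(p1_old, p2_old, p1_new, p2_new):
        """Compute pan (centroid shift), zoom (distance ratio), rotate (angle change)."""
        cx_old = ((p1_old[0] + p2_old[0]) / 2, (p1_old[1] + p2_old[1]) / 2)
        cx_new = ((p1_new[0] + p2_new[0]) / 2, (p1_new[1] + p2_new[1]) / 2)
        pan = (cx_new[0] - cx_old[0], cx_new[1] - cx_old[1])
        dist = lambda a, b: math.hypot(b[0] - a[0], b[1] - a[1])
        zoom = dist(p1_new, p2_new) / dist(p1_old, p2_old)
        ang = lambda a, b: math.atan2(b[1] - a[1], b[0] - a[0])
        rotate = math.degrees(ang(p1_new, p2_new) - ang(p1_old, p2_old))
        return pan, zoom, rotate

    def apply_gesture(mode, pan, zoom, rotate):
        """Single-control modes keep one parameter; the multi-control mode keeps all."""
        if mode == "pan":
            return pan, 1.0, 0.0
        if mode == "zoom":
            return (0.0, 0.0), zoom, 0.0
        if mode == "rotate":
            return (0.0, 0.0), 1.0, rotate
        return pan, zoom, rotate  # "pan/zoom/rotate" multi-control mode

    pan, zoom, rot = two_finger_deltas((0, 0), (100, 0), (10, 5), (120, 20))
    print(apply_gesture("pan/zoom/rotate", pan, zoom, rot))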
Abstract:
Some embodiments provide a mapping application for generating views of a three-dimensional (3D) map. The mapping application includes a geographic data module for identifying a set of geographic data that represents a portion of the 3D map. The set of geographic data includes a set of camera-captured images that correspond to the portion of the 3D map. The mapping application also includes an image processing module for rendering a view of the 3D map based on the geographic data by animating a type of map element in the view.
Abstract:
A mobile device including a touchscreen display can detect multiple points of fingertip contact made against the touchscreen concurrently. The device can distinguish this multi-touch gesture from other gestures based on the duration, immobility, and concurrency of the contacts. In response to detecting such a multi-touch gesture, the device can send a multi-touch event to an application executing on the device. The application can respond to the multi-touch event in a variety of ways. For example, the application can determine the distance of a path between points on a map that the user has concurrently touched with their fingertips and display this distance to the user.
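A minimal Python sketch of both steps, under stated assumptions: the gesture check uses placeholder duration and travel thresholds, and the path distance is approximated as the great-circle (haversine) distance between the two touched points; the abstract specifies neither.

    import math

    EARTH_RADIUS_M = 6371000.0

    def is_two_point_hold(contacts, min_duration=0.5, max_travel_px=10.0):
        """Distinguish a two-finger hold from other gestures by the concurrency,
        duration, and immobility of the contacts (thresholds are placeholders)."""
        return (len(contacts) == 2 and
                all(c["duration"] >= min_duration for c in contacts) and
                all(c["travel"] <= max_travel_px for c in contacts))

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two lat/lon points (degrees)."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
        return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    contacts = [{"duration": 0.8, "travel": 3.2}, {"duration": 0.7, "travel": 5.0}]
    if is_two_point_hold(contacts):
        # Fingertips resting on San Francisco and Oakland:
        print(f"{haversine_m(37.7749, -122.4194, 37.8044, -122.2712):.0f} m")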
Abstract:
A device is described that includes at least one processing unit and stores a multi-mode mapping program for execution by that processing unit. The program includes a user interface (UI). The UI includes a display area for displaying a two-dimensional (2D) presentation of a map or a three-dimensional (3D) presentation of the map. The UI also includes a selectable 3D control for directing the program to transition between the 2D and 3D presentations.
Abstract:
Some embodiments provide a non-transitory machine-readable medium that stores a mapping application which, when executed on a device by at least one processing unit, provides automated animation of a three-dimensional (3D) map along a navigation route. The mapping application identifies a first set of attributes for determining a first position of a virtual camera in the 3D map at a first instance in time and, based on the identified first set of attributes, determines that first position. The mapping application likewise identifies a second set of attributes for determining a second position of the virtual camera in the 3D map at a second instance in time and, based on those attributes, determines the second position. The mapping application then renders an animated 3D map view of the 3D map from the first instance in time to the second instance in time based on the first and second positions of the virtual camera.
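A minimal Python sketch of the two-keyframe idea, assuming each attribute set reduces to a camera pose (position plus orientation) and that intermediate frames come from linear interpolation; the abstract does not specify how the animation between the two positions is computed.

    from dataclasses import dataclass

    @dataclass
    class CameraPose:
        x: float; y: float; z: float      # position in map space
        heading: float; pitch: float      # orientation in degrees

    def lerp(a, b, t):
        return a + (b - a) * t

    def interpolate_pose(p0, p1, t):
        """Camera pose at normalized time t in [0, 1] between two keyframe poses."""
        return CameraPose(lerp(p0.x, p1.x, t), lerp(p0.y, p1.y, t), lerp(p0.z, p1.z, t),
                          lerp(p0.heading, p1.heading, t), lerp(p0.pitch, p1.pitch, t))

    start = CameraPose(0.0, 0.0, 300.0, 0.0, -30.0)     # pose at the first instance
    end = CameraPose(500.0, 200.0, 150.0, 45.0, -45.0)  # pose at the second instance
    frames = [interpolate_pose(start, end, i / 10) for i in range(11)]
    print(frames[5])  # pose halfway through the animated 3D map view

Swapping lerp for an eased or spline-based interpolation would change the feel of the animation without changing this structure.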
Abstract:
An AR system leverages a pre-generated 3D model of the world to improve the rendering of 3D graphics content for AR views of a scene, for example an AR view of the world in front of a moving vehicle. The pre-generated 3D model enables a variety of techniques that enhance the system's rendering capabilities. The AR system may obtain pre-generated 3D data (e.g., 3D tiles) from a remote source (e.g., cloud-based storage) and may use this data (e.g., a combination of 3D mesh, textures, and other geometry information) to augment local data (e.g., a point cloud collected by vehicle sensors). This allows the system to determine much more information about a scene, including information about occluded or distant regions, than is available from the local data alone.
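The Python sketch below illustrates the augmentation idea with a hypothetical per-pixel depth lookup: nearby depth comes from locally sensed points, and distant or out-of-range regions fall back to pre-generated tile data. The data layout, the function names, and the 60 m sensor range are assumptions, not details from the abstract.

    def depth_for_pixel(pixel, local_points, tile_mesh_depths, sensor_range_m=60.0):
        """Prefer locally sensed depth; use pre-generated tile depth beyond range."""
        local = local_points.get(pixel)       # depth from the vehicle's sensors
        if local is not None and local <= sensor_range_m:
            return local, "local"
        remote = tile_mesh_depths.get(pixel)  # depth from downloaded 3D tiles
        if remote is not None:
            return remote, "tile"
        return None, "unknown"

    local_points = {(320, 240): 18.5}                         # one sensed point, 18.5 m away
    tile_mesh_depths = {(320, 240): 18.7, (500, 220): 420.0}  # tiles cover distant geometry
    print(depth_for_pixel((320, 240), local_points, tile_mesh_depths))  # -> local data
    print(depth_for_pixel((500, 220), local_points, tile_mesh_depths))  # -> tile fallback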
Abstract:
A device provides a map and/or navigation application that displays items on the map and/or navigation instructions differently in different modes. The application of some embodiments provides a day mode and a night mode. In some embodiments the application uses the day mode as a default and activates the night mode when the time is after sunset at the location of the device. Some embodiments activate night mode only when multiple conditions are satisfied (for example, when (1) the time is after sunset at the location of the device and (2) the ambient light level is below a threshold brightness).
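A minimal Python sketch of the conjunctive activation rule, with a placeholder lux threshold; the abstract does not give a specific brightness value or sunset-lookup mechanism.

    from datetime import datetime

    def should_use_night_mode(now, sunset_time, ambient_lux, lux_threshold=50.0):
        """Activate night mode only when it is after local sunset AND dark enough."""
        return now > sunset_time and ambient_lux < lux_threshold

    now = datetime(2024, 6, 1, 21, 30)
    sunset = datetime(2024, 6, 1, 20, 27)
    print(should_use_night_mode(now, sunset, ambient_lux=12.0))  # True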