Abstract:
Methods and systems are described for new paradigms of user interaction with an unmanned aerial vehicle (referred to as a flying digital assistant or FDA) using a portable multifunction device (PMD) such as a smartphone. In some embodiments, a magic wand user interaction paradigm is described for intuitive control of an FDA using a PMD. In other embodiments, methods for scripting a shot are described.
Abstract:
Methods and systems are described for new paradigms of user interaction with an unmanned aerial vehicle (referred to as a flying digital assistant or FDA) using a portable multifunction device (PMD) such as a smartphone. In some embodiments, a user may control image capture from an FDA by adjusting the position and orientation of a PMD. In other embodiments, a user may input a touch gesture via a touch display of a PMD that corresponds with a flight path to be autonomously flown by the FDA.
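For illustration only, and not as the claimed method: the following Python sketch shows one simple way a touch gesture drawn on a PMD screen could be mapped to a flight path of horizontal waypoints at a fixed altitude. The scale factor, frame conventions, and function names are assumptions introduced for this example.

    def gesture_to_waypoints(screen_points, origin, meters_per_pixel=0.05, altitude=10.0):
        """Convert (u, v) touch points into (x, y, z) waypoints relative to `origin`."""
        u0, v0 = screen_points[0]
        waypoints = []
        for u, v in screen_points:
            dx = (u - u0) * meters_per_pixel   # screen right -> world x (assumed convention)
            dy = -(v - v0) * meters_per_pixel  # screen up -> world y (assumed convention)
            waypoints.append((origin[0] + dx, origin[1] + dy, altitude))
        return waypoints

    # A short swipe to the right and upward becomes a gently curving path:
    path = gesture_to_waypoints([(100, 400), (180, 380), (260, 340)], origin=(0.0, 0.0))
    print(path)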
Abstract:
In some examples, an unmanned aerial vehicle (UAV) may identify a scan target. The UAV may navigate to two or more positions in relation to the scan target. The UAV may capture, using one or more image sensors of the UAV, two or more images of the scan target from different respective positions in relation to the scan target. For instance, the two or more respective positions may be selected by controlling a spacing between the two or more respective positions to enable determination of parallax disparity between a first image captured at a first position and a second image captured at a second position of the two or more positions. The UAV may determine a three-dimensional model corresponding to the scan target based in part on the determined parallax disparity of the two or more images including the first image and the second image.
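As a worked illustration of the relationship the abstract relies on, the following sketch uses the standard pinhole stereo relation (depth = focal length x baseline / disparity) to show how the spacing between two capture positions governs the parallax disparity that can be measured. The numeric values and function names are assumptions, not parameters from the disclosure.

    def disparity_to_depth(disparity_px, focal_px, baseline_m):
        """Depth of a point from its disparity between two images (pinhole stereo model)."""
        return focal_px * baseline_m / disparity_px

    def baseline_for_min_disparity(target_depth_m, focal_px, min_disparity_px=2.0):
        """Smallest spacing between two capture positions that still produces a
        measurable disparity (min_disparity_px) at the farthest point of interest."""
        return min_disparity_px * target_depth_m / focal_px

    focal_px = 1400.0         # hypothetical focal length in pixels
    scan_target_depth = 30.0  # assumed distance to the scan target, in meters
    baseline = baseline_for_min_disparity(scan_target_depth, focal_px)
    print(f"suggested spacing between capture positions: {baseline:.2f} m")
    print(f"depth at 5 px disparity: {disparity_to_depth(5.0, focal_px, baseline):.1f} m")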
Abstract:
Techniques are described for controlling an autonomous vehicle such as an unmanned aerial vehicle (UAV) using objective-based inputs. In an embodiment, the underlying functionality of an autonomous navigation system is exposed via an application programming interface (API) allowing the UAV to be controlled through specifying a behavioral objective, for example, using a call to the API to set parameters for the behavioral objective. The autonomous navigation system can then incorporate perception inputs such as sensor data from sensors mounted to the UAV and the set parameters using a multi-objective motion planning process to generate a proposed trajectory that most closely satisfies the behavioral objective in view of certain constraints. In some embodiments, developers can utilize the API to build customized applications for the UAV. Such applications, also referred to as “skills,” can be developed, shared, and executed to control behavior of an autonomous UAV and aid in overall system improvement.
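The abstract does not publish the API itself, so the following Python sketch is hypothetical: it only illustrates the general shape of setting parameters for a behavioral objective through an API call and requesting a trajectory from a multi-objective planner. All class and method names are invented for this illustration.

    from dataclasses import dataclass, field

    @dataclass
    class BehavioralObjective:
        name: str                           # e.g. "follow_subject" or "orbit_point"
        parameters: dict = field(default_factory=dict)
        weight: float = 1.0                 # relative importance in multi-objective planning

    class SkillAPI:
        """Stand-in for the developer-facing API ("skills") described above."""
        def __init__(self):
            self.objectives = []

        def set_objective(self, objective):
            self.objectives.append(objective)

        def plan_trajectory(self, perception_inputs):
            # A real system would run multi-objective motion planning over the
            # objectives and perception inputs subject to constraints; this stub
            # only shows the shape of the call.
            return {"objectives": [o.name for o in self.objectives],
                    "inputs": list(perception_inputs)}

    api = SkillAPI()
    api.set_objective(BehavioralObjective("follow_subject",
                                          {"distance_m": 5.0, "azimuth_deg": 180.0}))
    print(api.plan_trajectory({"gps": None, "depth_images": None}))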
Abstract:
In some examples, an unmanned aerial vehicle (UAV) may access a scan plan that includes a sequence of poses for the UAV to assume to capture images of a scan target using one or more image sensors. The UAV may check a next pose of the scan plan for obstructions. Responsive to detection of an obstruction, the UAV may determine a backup pose based at least on a field of view of the next pose. The UAV may control a propulsion mechanism to cause the UAV to fly to assume the backup pose. The UAV may capture, based on the backup pose and using the one or more image sensors, one or more images of the scan target.
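One plausible (not claimed) way to derive a backup pose is to retreat along the camera's view axis until the pose is unobstructed, so the original field of view remains covered. The following sketch assumes a hypothetical obstruction test and step size.

    import numpy as np

    def backup_pose(position, view_dir, is_obstructed, step_m=0.5, max_backoff_m=5.0):
        """Move backwards along the viewing direction until the pose is clear."""
        view_dir = view_dir / np.linalg.norm(view_dir)
        backoff, candidate = 0.0, position.copy()
        while is_obstructed(candidate) and backoff < max_backoff_m:
            backoff += step_m
            candidate = position - backoff * view_dir  # retreat along the view axis
        return candidate if not is_obstructed(candidate) else None

    # Toy obstruction test: anything closer than 2 m to the origin is obstructed.
    blocked = lambda p: np.linalg.norm(p) < 2.0
    pose = backup_pose(np.array([1.5, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0]), blocked)
    print(pose)  # [2. 0. 0.]: farther back, still looking toward the target at the origin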
Abstract:
Methods and systems are disclosed for an unmanned aerial vehicle (UAV) configured to autonomously navigate a physical environment while capturing images of the physical environment. In some embodiments, the motion of the UAV and a subject in the physical environment may be estimated based in part on images of the physical environment captured by the UAV. In response to estimating the motions, image capture by the UAV may be dynamically adjusted to satisfy a specified criterion related to a quality of the image capture.
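As a hedged illustration of dynamically adjusting image capture from motion estimates, the sketch below caps exposure time so that the estimated apparent motion of the subject stays within a motion-blur budget expressed in pixels. The parameter names and values are assumptions, not the disclosed criterion.

    def max_exposure_s(subject_speed_px_per_s, blur_budget_px=1.0):
        """Longest exposure that keeps the subject's motion blur within the budget."""
        if subject_speed_px_per_s <= 0.0:
            return float("inf")
        return blur_budget_px / subject_speed_px_per_s

    # The subject appears to move 400 px/s in the image (estimated from the UAV's
    # and the subject's motion); keep blur under 1 px:
    print(f"exposure cap: {max_exposure_s(400.0) * 1000:.2f} ms")  # 2.50 ms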
Abstract:
Described herein are systems and methods for structure scan using an unmanned aerial vehicle. For example, some methods include accessing a three-dimensional map of a structure; generating facets based on the three-dimensional map, wherein each facet is a polygon on a plane in three-dimensional space that is fit to a subset of the points in the three-dimensional map; generating a scan plan based on the facets, wherein the scan plan includes a sequence of poses for an unmanned aerial vehicle to assume to enable capture, using image sensors of the unmanned aerial vehicle, of images of the structure; causing the unmanned aerial vehicle to fly to assume a pose corresponding to one of the sequence of poses of the scan plan; and capturing one or more images of the structure from the pose.
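For illustration, fitting a plane to a subset of map points can be done with a singular value decomposition of the centered points; the sketch below shows that standard technique, not the patented facet-generation procedure. The data and names are invented for this example.

    import numpy as np

    def fit_plane(points):
        """Fit a plane to an (N, 3) array of points; return (centroid, unit normal)."""
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid)
        normal = vt[-1]                    # direction of least variance
        return centroid, normal / np.linalg.norm(normal)

    # Roughly planar subset of points from a 3D map (toy data):
    pts = np.array([[0.0, 0.0, 0.00], [1.0, 0.0, 0.01], [0.0, 1.0, -0.02],
                    [1.0, 1.0, 0.00], [0.5, 0.5, 0.01]])
    centroid, normal = fit_plane(pts)
    print(centroid, normal)  # normal is approximately (0, 0, 1)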
Abstract:
In some examples, a computing apparatus may include one or more non-transitory computer-readable storage media and program instructions stored on the one or more computer-readable storage media that, when executed by one or more processors, direct the computing apparatus to perform various steps. For example, the program instructions may continually present a graphical user interface (GUI) at the computing apparatus including a display of a current view of the physical environment from a perspective of an aerial vehicle. The program instructions may detect user interactions with the GUI while the aerial vehicle is in flight. The user interactions may include instructions directing the aerial vehicle to maneuver within the physical environment and configure parameters for scanning a three-dimensional (3D) scan volume. The program instructions may then transmit, to the aerial vehicle, data encoding the instructions for performing a 3D scan of the 3D scan volume.
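The transmitted data encoding is not specified in the abstract; the following sketch assumes a simple serializable message for the scan-volume parameters, with illustrative field names that are not the actual protocol.

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class ScanVolumeRequest:
        center: tuple                    # (x, y, z) of the scan volume, in meters
        size: tuple                      # (dx, dy, dz) extents, in meters
        ground_sample_distance_m: float
        overlap_fraction: float

    request = ScanVolumeRequest(center=(10.0, 4.0, 6.0), size=(20.0, 12.0, 8.0),
                                ground_sample_distance_m=0.005, overlap_fraction=0.7)
    payload = json.dumps(asdict(request)).encode("utf-8")  # bytes sent to the vehicle
    print(payload)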
Abstract:
Systems and methods are disclosed for tracking objects in a physical environment using visual sensors onboard an autonomous unmanned aerial vehicle (UAV). In certain embodiments, images of the physical environment captured by the onboard visual sensors are processed to extract semantic information about detected objects. Processing of the captured images may involve applying machine learning techniques such as a deep convolutional neural network to extract semantic cues regarding objects detected in the images. The object tracking can be utilized, for example, to facilitate autonomous navigation by the UAV or to generate and display augmentative information regarding tracked objects to users.
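As a minimal, hedged illustration of tracking from per-frame detections (the deep convolutional detector itself is assumed and not shown), the sketch below associates detections into tracks by intersection-over-union, using class labels as the semantic cues mentioned in the abstract.

    def iou(a, b):
        """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union > 0 else 0.0

    def update_tracks(tracks, detections, threshold=0.3):
        """Greedily extend tracks with the best-overlapping detection of the same class."""
        for det in detections:                   # det = {"box": ..., "label": ...}
            best = max((t for t in tracks if t["label"] == det["label"]),
                       key=lambda t: iou(t["box"], det["box"]), default=None)
            if best and iou(best["box"], det["box"]) >= threshold:
                best["box"] = det["box"]         # extend the existing track
            else:
                tracks.append(dict(det))         # start a new track
        return tracks

    tracks = update_tracks([], [{"box": (0, 0, 10, 10), "label": "person"}])
    tracks = update_tracks(tracks, [{"box": (1, 1, 11, 11), "label": "person"}])
    print(len(tracks))  # 1: the new detection was associated with the existing track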