Abstract

Digital imaging is used extensively in today's world. That said, it is quite difficult to take acceptable pictures with a handheld camera in several situations, such as the inspection of large-scale structures including nuclear reactors and dams. Quadcopters can be used to image large surfaces from a close distance, thereby capturing detail. As a consequence of this proximity, we end up with a reduced field of view, and thus there is a need to create a panoramic representation from photos or videos taken from the flying craft. Which photos to take, and from where to take them, is a technical question that we address in this work.
Further, if there are regions in the scene with few or no features (e.g., gaps between regions, or even textureless regions), existing mosaicing techniques cannot construct a panorama. This lack of features, a situation we term vacant spaces, confounds the matching algorithm used to stitch the images. Likewise, if patterns repeat in a scene, the mosaicing algorithm is confounded, resulting in incorrect mosaics. We argue that the panoramic mosaicing problem is unsolved in such settings, and we present solutions using the additional kinematic information provided by the Inertial Measurement Unit (IMU) of even the most inexpensive quadcopters.
With these ideas, we can build an image mosaic of very large planar surfaces. In the real world, however, we expect to encounter multiple piecewise-planar surfaces, or even curved surfaces. Manually navigating the quadcopter around such large surfaces to capture orthographic video footage is tedious. We present a method for navigating a quadcopter to image large multiplanar surfaces: the method finds a path along which the quadcopter is autonomously maneuvered. The eventual result is an unrolled view of the scene, that is, an output mosaic rendered as if the entire scene lay on a single plane.
A quadcopter's limited battery is a hurdle we experienced while imaging large multiplanar scenes. We propose using multiple quadcopters simultaneously to overcome this limitation. To collaborate, the relative spatial positions of the quadcopters have to be identified, so that the work can be divided appropriately. Fiducials are a natural means of identifying objects, but in our environment the quadcopters are subject to quick and unstable motions. This causes significant motion blur in the captured images, which severely affects the identification of the quadcopters using existing fiducials. We therefore design fiducials that are resilient to motion blur, with the intention of placing them on the quadcopters.