Deep Convolutional Neural Network for flood extent mapping using unmanned aerial vehicles Data
Author(s): Asmamaw Gebrehiwot; Leila Hashemi-Beni; Gary Thompson; Parisa Kordjamshidi; Thomas Langan
Publication Series: Scientific Journal (JRNL)
Station: Southern Research Station
Download Publication (3.0 MB)
Flooding is one of the most destructive natural disasters, threatening human life and property especially in densely populated urban areas. Rapid and precise extraction of flooded areas is key to supporting emergency-response planning and providing damage assessment in both spatial and temporal measurements. Unmanned Aerial Vehicle (UAV) technology has recently been recognized as an efficient photogrammetric data acquisition platform that can quickly deliver high-resolution imagery because of its cost effectiveness, ability to fly at lower altitudes, and ability to enter hazardous areas. Different image classification methods, including Support Vector Machines (SVMs), have been used for flood extent mapping. In recent years, remote sensing image classification has improved significantly with Convolutional Neural Networks (CNNs). CNNs have demonstrated excellent performance on various tasks including image classification, feature extraction, and segmentation; they learn features automatically from large datasets through the organization of multiple layers of neurons and can implement nonlinear decision functions. This study investigates the potential of CNN approaches to extract flooded areas from UAV imagery. A VGG-based fully convolutional network (FCN-16s) was used in this research. The model was fine-tuned, and k-fold cross-validation was applied to estimate its performance on the new UAV imagery dataset. This approach allowed FCN-16s to be trained on a dataset containing only one hundred training samples and still produce a highly accurate classification. A confusion matrix was calculated to estimate the accuracy of the proposed method. The image segmentation results obtained from FCN-16s were compared with those obtained from FCN-8s, FCN-32s, and SVMs. Experimental results showed that the FCNs extracted flooded areas more precisely from UAV images than traditional classifiers such as SVMs.
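The k-fold evaluation protocol described above can be sketched as follows. This is a minimal illustration assuming a dataset of 100 labeled UAV image tiles (matching the study's sample count); the `train_and_score` helper is a hypothetical stand-in for fine-tuning FCN-16s on one fold and scoring it on the held-out fold, not the authors' actual training code.

```python
import numpy as np
from sklearn.model_selection import KFold

def train_and_score(train_idx, val_idx):
    # Placeholder: in the real workflow this would fine-tune the
    # pretrained FCN-16s on train_idx and evaluate on val_idx.
    # Here it simply returns the size of the held-out fold.
    return len(val_idx)

samples = np.arange(100)  # 100 training tiles, as in the study
kf = KFold(n_splits=5, shuffle=True, random_state=0)

# Each of the 5 folds holds out 20 of the 100 tiles for validation
fold_sizes = [train_and_score(tr, va) for tr, va in kf.split(samples)]
print(fold_sizes)
```

Cross-validating this way lets every tile serve once as validation data, which is what makes performance estimates on such a small dataset credible.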
The classification accuracy achieved by FCN-16s, FCN-8s, FCN-32s, and SVM for the water class was 97.52%, 97.8%, 94.20% and 89%, respectively.
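Per-class accuracies like those reported above can be read off a confusion matrix. The sketch below shows the computation for a water class on illustrative toy labels (not the paper's actual counts), using the producer's accuracy (recall) for the class of interest.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Toy per-pixel labels: 1 = water, 0 = non-water
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 1])  # one miss, one false alarm

# Rows ordered as [water, non-water] via the labels argument
cm = confusion_matrix(y_true, y_pred, labels=[1, 0])

# Producer's accuracy (recall) for the water class:
# correctly classified water pixels / all true water pixels
water_acc = cm[0, 0] / cm[0].sum()
print(f"water accuracy: {water_acc:.2%}")
```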
- You may send email to firstname.lastname@example.org to request a hard copy of this publication.
- (Please specify exactly which publication you are requesting and your mailing address.)
- We recommend that you also print this page and attach it to the printout of the article, to retain the full citation information.
- This article was written and prepared by U.S. Government employees on official time, and is therefore in the public domain.
Citation
Gebrehiwot, Asmamaw; Hashemi-Beni, Leila; Thompson, Gary; Kordjamshidi, Parisa; Langan, Thomas. 2019. Deep Convolutional Neural Network for flood extent mapping using unmanned aerial vehicles Data. Sensors. 19(7): 1486. https://doi.org/10.3390/s19071486.
Keywords
remote sensing, convolutional neural networks, floodplain mapping, fully convolutional network, unmanned aerial vehicles, geospatial data processing
- A convolutional neural network classifier identifies tree species in mixed-conifer forest from hyperspectral imagery
- Classification of CITES-listed and other neotropical Meliaceae wood images using convolutional neural networks
- Automated identification of avian vocalizations with deep convolutional neural networks