Can supervise: YES
Pradhan, B, Laser Scanning Systems in Highway and Safety Assessment, Springer.
Sameen, MI, Pradhan, B & Lee, S 2020, 'Application of convolutional neural networks featuring Bayesian optimization for landslide susceptibility assessment', Catena, vol. 186.
© 2019 Elsevier B.V. This study developed a deep learning based technique for the assessment of landslide susceptibility through a one-dimensional convolutional neural network (1D-CNN) and Bayesian optimisation in Southern Yangyang Province, South Korea. A total of 219 landslide inventories and 17 landslide conditioning variables were obtained for modelling. The data showed a complex scenario: some past landslides occurred on steep slopes, while others occurred on flat terrain. As a pre-processing step, random forest (RF) was used to retain only the important factors for further analysis. Bayesian optimisation was used to select the CNN hyperparameters. Three measures helped overcome the overfitting caused by the small training dataset in our research. First, the selection of key factors by RF reduced the dimensionality of the data. Second, the use of 1D convolutions considerably decreased the number of model parameters. Third, a high dropout rate (0.66) further regularised the CNN. Overall accuracy, the area under the receiver operating characteristic curve (AUROC) and 5-fold cross-validation were used to evaluate the models, and CNN performance was compared with that of ANN and SVM. The CNN achieved the highest accuracy on the testing dataset (83.11%) and the highest AUROC (0.880 and 0.893, using the testing set and 5-fold CV, respectively). Bayesian optimisation improved CNN accuracy by ~3% compared with the default configuration. The CNN could outperform ANN and SVM owing to its more complex architecture and its handling of spatial correlations through convolution and pooling operations. In complex situations where some variables make a non-linear contribution to the occurrence of landslides, the suggested method could thus help develop landslide susceptibility maps.
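The core operations a 1D-CNN applies to a vector of conditioning factors can be sketched in plain Python. This is an illustration only, not the paper's model: the kernel values, layer depth and Bayesian-optimised hyperparameters here are assumptions.

```python
def conv1d(x, kernel, bias=0.0):
    """Valid 1D convolution (cross-correlation, as deep-learning libraries define it)."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k)) + bias
            for i in range(len(x) - k + 1)]

def max_pool1d(x, size=2):
    """Non-overlapping max pooling over the feature map."""
    return [max(x[i:i + size]) for i in range(0, len(x) - size + 1, size)]

# Example: five conditioning-factor values passed through one learned filter.
factors = [0.2, 0.8, 0.5, 0.1, 0.9]
feature_map = conv1d(factors, kernel=[1.0, -1.0])   # responds to local differences
pooled = max_pool1d(feature_map, size=2)
```

Pooling halves the feature map while keeping the strongest local responses, which is one reason the 1D design needs far fewer parameters than a fully connected network of the same depth.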
© 2019 Elsevier B.V. Current practice in choosing training samples for landslide susceptibility modelling (LSM) is to randomly subdivide inventory data into training and testing samples. Where the inventory data have a complex distribution, random selection of training samples may lead to inefficient training of machine learning (ML)/statistical models. A systematic technique may, however, produce efficient training samples that represent the entire inventory well; this is particularly true when inventory data are scarce. This research proposed a systematic strategy to deal with this problem based on a fundamental distance between probability distributions (the Hellinger distance) and a novel graphical representation of the information contained in the inventory data (the inventory information curve, IIC). This graphical representation illustrates the relative increase in available information as the training sample size grows. Experiments on a dataset over the Cameron Highlands, Malaysia were conducted to validate the proposed methods. The dataset contained 104 landslide inventories and 7 landslide-conditioning factors (altitude, slope, aspect, land use, distance from the stream, distance from the road and distance from lineament) derived from a LiDAR-based digital elevation model and thematic maps acquired from government authorities. In addition, three ML/statistical models, namely k-nearest neighbour (KNN), support vector machine (SVM) and decision tree (DT), were utilised to assess the proposed sampling strategy for LSM. The impacts of the models' hyperparameters, noise and outliers on model performance and on the shape of the IICs were also investigated and discussed. To evaluate the proposed method further, it was compared with standard methods such as random sampling (RS), stratified RS (SRS) and cross-validation (CV). The evaluations were based on the area under the receiver operating characteristic curves. The results show that IICs a...
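The Hellinger distance underlying the sampling strategy has a short closed form for discrete distributions. A minimal sketch follows; how the paper bins factor values into histograms is an assumption not reproduced here.

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions (0 to 1)."""
    assert abs(sum(p) - 1.0) < 1e-9 and abs(sum(q) - 1.0) < 1e-9
    s = sum((math.sqrt(pi) - math.sqrt(qi)) ** 2 for pi, qi in zip(p, q))
    return math.sqrt(s) / math.sqrt(2.0)

# A training sample whose factor histogram matches the full inventory scores
# near 0 (well representative); disjoint histograms score the maximum, 1.
full_inventory = [0.5, 0.3, 0.2]
good_sample = [0.5, 0.3, 0.2]
bad_sample = [0.0, 0.0, 1.0]
```

A sampling routine built on this would prefer training subsets whose distance to the full inventory's factor histogram is smallest.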
Sameen, MI, Sarkar, R, Pradhan, B, Drukpa, D, Alamri, AM & Park, HJ 2020, 'Landslide spatial modelling using unsupervised factor optimisation and regularised greedy forests', Computers and Geosciences, vol. 134.
© 2019 Elsevier Ltd This study evaluates the contribution of an unsupervised factor optimisation based on sparse autoencoders (SAEs) to spatial landslide modelling with regularised greedy forests (RGFs). A total of 952 landslides were identified by field surveys, equally divided and used for training and testing of the proposed model. Ten conditioning factors related to landslides, including geo-morphometrical (i.e. altitude, slope, aspect, curvature, slope length, topographic wetness index and sediment transport index) and geo-environmental (i.e. lithology, nearness to roads and nearness to streams), were used to investigate the spatial relationships between the variables and landslides. The modelling comprised two steps. First, the factors were optimised by SAE to reduce information redundancy and correlation in the data. Second, RGF was used to create landslide susceptibility maps with the optimised feature representations. The area under the receiver operating characteristic curve (AUROC) was used to assess the predictive ability of the proposed models. Experimental results show that the proposed SAE–RGF outperforms the RGF and random forest (RF) models in terms of prediction rate and is less sensitive to overfitting and underfitting. The highest prediction rate (AUROC = 0.892) was obtained with only seven features by the SAE–RGF model, which is better than the two other methods (RGF and RF). The unsupervised factor optimisation approach not only reduces computation time but also improves the prediction accuracy of tree-based models, including RGF. The generated landslide susceptibility maps can be implemented to mitigate landslide hazards and to designate land use by stakeholders (e.g. planners and engineers).
Alzuhairi, M, Pradhan, B & Lee, S 2019, 'Self-Learning Random Forests Model for Mapping Groundwater Yield in Data-Scarce Areas', Natural Resources Research, vol. 28, no. 3, pp. 757-775.
© 2018, International Association for Mathematical Geosciences. Globally, groundwater plays a major role in supplying drinking water for urban and rural populations and is used for irrigation to grow crops and in many industrial processes. A novel self-learning random forest (SLRF) model is developed and validated for groundwater yield zonation within the Yeondong Province in South Korea. This study was conducted with inventory data initially divided randomly into 70% for training and 30% for testing, and with 13 groundwater-conditioning factors. SLRF was optimized using the Bayesian optimization method. We also compared our method to other machine learning methods, including support vector machine (SVM), artificial neural networks (ANN), decision trees (DT), and voting ensemble models. Model validation was accomplished using several methods, including a confusion matrix, receiver operating characteristics, cross-validation, and McNemar's test. Our proposed self-learning method improves random forest (RF) generalization performance by about 23%, with SLRF success rates of 0.76 and prediction rates of 0.83. In addition, the optimized SLRF performed better [according to a threefold cross-validated AUC (area under the curve) of 0.75] than that using randomly initialized parameters (0.57). SLRF outperformed all of the other models for the testing dataset (RF, SVM, ANN, DT, and voted ANN-RF) when the overall accuracy, prediction rate, and cross-validated AUC metrics were considered. The SLRF also estimated the contribution of individual groundwater-conditioning factors and showed that the three most influential factors were geology (1.00), profile curvature (0.97), and TWI (0.95). Overall, SLRF effectively modeled groundwater potential, even within data-scarce regions.
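AUROC is the headline metric across these studies. Its probabilistic definition, the chance that a randomly drawn positive site is scored higher than a randomly drawn negative one, can be sketched directly; library implementations use an equivalent ranking formulation, and the scores below are illustrative, not the paper's.

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney pairwise formulation."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count positive-vs-negative pairs ranked correctly; ties count half.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly separated scores give AUROC = 1.0.
print(auroc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # -> 1.0
```

A value of 0.5 corresponds to random ranking, which is why the reported 0.75 cross-validated AUC versus 0.57 for random initialisation is a meaningful gap.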
Ahmed, AA, Pradhan, B, Sameen, MI & Makky, AM 2018, 'An optimized object-based analysis for vegetation mapping using integration of Quickbird and Sentinel-1 data', Arabian Journal of Geosciences, vol. 11, no. 11.
© 2018, Saudi Society for Geosciences. This study proposed a workflow for an optimized object-based analysis for vegetation mapping using the integration of Quickbird and Sentinel-1 data. The method was validated on a set of data captured over a part of Selangor in Peninsular Malaysia. The method comprised four components: image segmentation, Taguchi optimization, attribute selection using random forest, and rule-based feature extraction. Results indicated the robustness of the proposed approach, as the area under the curve values for the forest, grassland, old oil palm, rubber, urban tree, and young oil palm classes were 0.90, 0.89, 0.87, 0.87, 0.80, and 0.77, respectively. In addition, results showed that SAR data are very useful for extracting rubber and young oil palm trees (given by the random forest importance values). Finally, further research is suggested to improve segmentation results and extract more features from the scene.
Hong, H, Pradhan, B, Sameen, MI, Kalantar, B, Zhu, A & Chen, W 2018, 'Improving the accuracy of landslide susceptibility model using a novel region-partitioning approach', Landslides, vol. 15, no. 4, pp. 753-772.
© 2017, Springer-Verlag GmbH Germany. Landslides are natural disasters that threaten human lives and properties worldwide. Numerous studies have been conducted on landslide susceptibility mapping (LSM), each attempting to improve the accuracy of the final outputs. This study presents a novel region-partitioning approach for LSM to understand the effects of partitioning a focused region into smaller areas on the prediction accuracy of common regression models. Results showed that partitioning the study area into two regions using the proposed method improved the prediction rate from 0.77 to 0.85 when the support vector machine was used, and from 0.87 to 0.88 when the logistic regression model was utilized. The spatial agreements of the models were also improved after partitioning the area into two regions based on Shannon entropy equations. Our comparative study indicated that the proposed method outperformed the geographically weighted regression model that considered the spatial variations in landslide samples. Overall, the main advantages of the proposed method are improved accuracy and the reduction of the effects of spatial variations exhibited in landslide-conditioning factors.
Lee, J-H, Sameen, MI, Pradhan, B & Park, H-J 2018, 'Modeling landslide susceptibility in data-scarce environments using optimized data mining and statistical methods', Geomorphology, vol. 303, pp. 284-298.
Nahhas, FH, Shafri, HZM, Sameen, MI, Pradhan, B & Mansor, S 2018, 'Deep Learning Approach for Building Detection Using LiDAR-Orthophoto Fusion', Journal of Sensors, vol. 2018, pp. 1-12.
© 2018 Faten Hamed Nahhas et al. This paper reports on a building detection approach based on deep learning (DL) using the fusion of Light Detection and Ranging (LiDAR) data and orthophotos. The proposed method utilized object-based analysis to create objects, a feature-level fusion, an autoencoder-based dimensionality reduction to transform low-level features into compressed features, and a convolutional neural network (CNN) to transform compressed features into high-level features, which were used to classify objects into buildings and background. The proposed architecture was optimized using the grid search method, and its sensitivity to hyperparameters was analyzed and discussed. The proposed model was evaluated on two datasets selected from an urban area with different building types. Results show that the dimensionality reduction by the autoencoder approach from 21 features to 10 features can improve detection accuracy from 86.06% to 86.19% in the working area and from 77.92% to 78.26% in the testing area. The sensitivity analysis also shows that the selection of the hyperparameter values of the model significantly affects detection accuracy. The best hyperparameters of the model are 128 filters in the CNN model, the Adamax optimizer, 10 units in the fully connected layer of the CNN model, a batch size of 8, and a dropout of 0.2. These hyperparameters are critical to improving the generalization capacity of the model. Furthermore, comparison experiments with the support vector machine (SVM) show that the proposed model with or without dimensionality reduction outperforms the SVM models in the working area. However, the SVM model achieves better accuracy in the testing area than the proposed model without dimensionality reduction. This study generally shows that the use of an autoencoder in DL models can improve the accuracy of building recognition in fused LiDAR-orthophoto data.
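A grid search of the kind described is an exhaustive loop over candidate settings, keeping the configuration with the best validation score. The sketch below seeds the grid with the hyperparameter values the study reports (128 filters, 10 dense units, batch size 8, dropout 0.2); `evaluate()` is a hypothetical stand-in for training the CNN and measuring validation accuracy, not the authors' code.

```python
import itertools

grid = {
    "filters": [32, 64, 128],
    "dense_units": [10, 20],
    "batch_size": [8, 16],
    "dropout": [0.2, 0.5],
}

def evaluate(cfg):
    # Hypothetical scorer: in practice this would train the model on the
    # training split and return accuracy on the validation split.
    best = {"filters": 128, "dense_units": 10, "batch_size": 8, "dropout": 0.2}
    return sum(cfg[k] == best[k] for k in best) / len(best)

# Enumerate every combination in the grid and keep the best-scoring one.
keys = sorted(grid)
candidates = [dict(zip(keys, vals))
              for vals in itertools.product(*(grid[k] for k in keys))]
best_cfg = max(candidates, key=evaluate)
```

The cost grows multiplicatively with each hyperparameter added (here 3 × 2 × 2 × 2 = 24 trainings), which is why the paper limits the search to a handful of sensitive settings.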
Sameen, MI, Pradhan, B & Aziz, OS 2018, 'Classification of very high resolution aerial photos using spectral-spatial convolutional neural networks', Journal of Sensors, vol. 2018.
© 2018 Maher Ibrahim Sameen et al. Classification of aerial photographs relying purely on spectral content is a challenging topic in remote sensing. A convolutional neural network (CNN) was developed to classify aerial photographs into seven land cover classes: building, grassland, dense vegetation, waterbody, barren land, road, and shadow. The classifier utilized the spectral and spatial contents of the data to maximize the accuracy of the classification process. The CNN was trained from scratch with manually created ground truth samples. The architecture of the network comprised a single convolution layer of 32 filters with a kernel size of 3 × 3, a pooling size of 2 × 2, batch normalization, dropout, and a dense layer with Softmax activation. The design of the architecture and its hyperparameters were selected via sensitivity analysis and validation accuracy. The results showed that the proposed model could be effective for classifying aerial photographs. The overall accuracy and Kappa coefficient of the best model were 0.973 and 0.967, respectively. In addition, the sensitivity analysis suggested that the use of dropout and the batch normalization technique in CNN is essential to improve the generalization performance of the model. The CNN model without the above techniques achieved the worst performance, with an overall accuracy and Kappa of 0.932 and 0.922, respectively. This research shows that CNN-based models are robust for land cover classification using aerial photographs. However, the architecture and hyperparameters of these models should be carefully selected and optimized.
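Batch normalization and dropout, the two regularisers the study found essential, have simple training-mode forward passes. A pure-Python sketch over a batch of scalar activations follows; the surrounding convolution and dense layers are omitted, and the example values are assumptions.

```python
import random
import statistics

def batch_norm(batch, eps=1e-5):
    """Normalise a batch of activations to zero mean and unit variance."""
    mu = statistics.fmean(batch)
    var = statistics.pvariance(batch)
    return [(x - mu) / (var + eps) ** 0.5 for x in batch]

def dropout(batch, rate, rng):
    """Inverted dropout: zero each unit with probability `rate`, rescale the rest
    so the expected activation is unchanged."""
    if rate == 0.0:
        return list(batch)
    return [0.0 if rng.random() < rate else x / (1.0 - rate) for x in batch]

activations = [2.0, 4.0, 6.0, 8.0]
normed = batch_norm(activations)
kept = dropout(normed, rate=0.5, rng=random.Random(0))
```

During inference, dropout is disabled (equivalent to `rate=0.0`) and batch normalization uses running statistics collected during training rather than per-batch values.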
Alzuhairi, M & Pradhan, B 2017, 'A Novel Road Segmentation Technique from Orthophotos Using Deep Convolutional Autoencoders', Korean Journal of Remote Sensing, vol. 33, no. 4.
Hong, H, Pradhan, B, Sameen, MI, Chen, W & Xu, C 2017, 'Spatial prediction of rotational landslide using geographically weighted regression, logistic regression, and support vector machine models in Xing Guo area (China)', Geomatics, Natural Hazards and Risk, vol. 8, no. 2, pp. 1997-2022.
Kalantar, B, Mansor, S, Khuzaimah, Z, Sameen, MI & Pradhan, B 2017, 'Modelling mean albedo of individual roofs in complex urban areas using satellite images and airborne laser scanning point clouds', International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives, vol. 42, no. 2W7, pp. 237-240.
© Authors 2017. CC BY 4.0 License. Knowledge of surface albedo at the individual roof scale is important for mitigating urban heat islands and understanding urban climate change. This study presents a method for quantifying the surface albedo of individual roofs in a complex urban area through the integration of Landsat 8 and airborne LiDAR data. First, individual roofs were extracted from airborne LiDAR data and orthophotos using optimized segmentation and supervised object-based image analysis (OBIA). Support vector machine (SVM) was used as the classifier in the OBIA process for extracting individual roofs, and the user-defined parameters required by the SVM classifier were selected using the v-fold cross-validation method. After that, surface albedo was calculated for each individual roof from the Landsat images. Finally, thematic maps of the mean surface albedo of individual roofs were generated in GIS and the results were discussed. Results showed that buildings, varying in roofing material types and conditions, cover 35% of the study area. The calculated surface albedo of buildings ranged from 0.16 to 0.65. More importantly, the results indicated that the types and conditions of roofing materials significantly affect the mean value of surface albedo: the mean albedo of new concrete, old concrete, new steel, and old steel roofs was 0.38, 0.26, 0.51, and 0.44, respectively. Replacing old roofing materials with new ones should be highly prioritized.
Kalantar, B, Mansor, SB, Sameen, MI, Pradhan, B & Shafri, HZM 2017, 'Drone-based land-cover mapping using a fuzzy unordered rule induction algorithm integrated into object-based image analysis', International Journal of Remote Sensing, vol. 38, no. 8-10, pp. 2535-2556.
© 2017 Informa UK Limited, trading as Taylor & Francis Group. Land-cover maps provide essential data for a wide range of practical and small-scale applications. A number of data sources appropriate for land-cover extraction are available. Among these, images captured using unmanned aerial vehicles (UAVs) are low cost, have very high resolution, and can be acquired at any time with few restrictions. Over the past two decades, various classification techniques have been developed to extract land-cover features from UAV images, and object-based image analysis (OBIA) is the preferred technique based on the recent literature. This study presents a novel method that integrates the fuzzy unordered rule induction algorithm (FURIA) into OBIA to achieve accurate land-cover extraction from UAV images. The images were segmented using a multiresolution segmentation algorithm with an optimized scale parameter. The scale parameter was optimized using a novel approach that integrated feature space optimization into the plateau objective function. During the classification stage, significant features were selected via random forest, and rule sets were developed using FURIA. For comparison, result of the proposed approach was compared with those of decision tree (DT) rules and the Support Vector Machine (SVM) classification method. The results of this study indicate that the proposed method outperforms DT and SVM with an overall accuracy of 91.23%. A transferability evaluation showed that FURIA achieved accurate classification results on different UAV image subsets captured at different times. The findings suggest that fuzzy rules are more appropriate than conventional crisp rules for land-cover extraction from UAV images.
Mezaal, MR, Pradhan, B, Sameen, MI, Mohd Shafri, HZ & Yusoff, ZM 2017, 'Optimized Neural Architecture for Automatic Landslide Detection from High-Resolution Airborne Laser Scanning Data', Applied Sciences, vol. 7, pp. 1-20.
An accurate inventory map is a prerequisite for the analysis of landslide susceptibility, hazard, and risk. Field survey, optical remote sensing, and synthetic aperture radar techniques are traditional techniques for landslide detection in tropical regions. However, such techniques are time consuming and costly. In addition, the dense vegetation of tropical forests complicates the generation of an accurate landslide inventory map for these regions. Given its ability to penetrate vegetation cover, high-resolution airborne light detection and ranging (LiDAR) has been used to generate accurate landslide maps. This study proposes the use of recurrent neural networks (RNN) and multi-layer perceptron neural networks (MLP-NN) in landslide detection. These efficient neural architectures require little or no prior knowledge compared with traditional classification methods. The proposed methods were tested in the Cameron Highlands, Malaysia. Segmentation parameters and feature selection were respectively optimized using a supervised approach and correlation-based feature selection. The hyper-parameters of the network architecture were defined based on a systematic grid search. The accuracies of the RNN and MLP-NN models in the analysis area were 83.33% and 78.38%, respectively. The accuracies of the RNN and MLP-NN models in the test area were 81.11% and 74.56%, respectively. These results indicated that the proposed models with optimized hyper-parameters produced the most accurate classification results. LiDAR-derived data, orthophotos, and textural features significantly affected the classification results. Therefore, the results indicated that the proposed methods have the potential to produce accurate and appropriate landslide inventories in tropical regions such as Malaysia.
Pradhan, B, Sameen, MI & Kalantar, B 2017, 'Optimized Rule-Based Flood Mapping Technique Using Multitemporal RADARSAT-2 Images in the Tropical Region', IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, no. 7, pp. 3190-3199.
Flood is one of the most common natural disasters in Malaysia. Preparing an accurate flood inventory map is the basic step in flood risk management. Flood detection is a complex process because of the limitations of methodological approaches and cloud coverage over tropical areas. An efficient approach is proposed to identify flooded areas using multitemporal RADARSAT-2 imagery. First, a multispectral Landsat image was used to extract and subtract permanent water bodies, and this image was later utilized to extract the same information from the multitemporal RADARSAT-2 imagery. Next, water bodies during a flood event were extracted from the RADARSAT-2 images. Permanent water bodies, shadow, and paddy were detected from the synthetic aperture radar (SAR) images by analyzing their temporal backscattering values. During feature extraction, a rule-based object-oriented technique was applied to classify both the SAR and Landsat images. Image segmentation during object-based analysis was performed to distinguish the boundaries of objects of various dimensions and scales. Moreover, a Taguchi-based method was employed to optimize the segmentation parameters. After segmentation, the rules were defined and the images were classified to produce an accurate flood inventory map for the 2014 Kelantan flood. A confusion matrix was generated to evaluate the performance of the classification method. An overall accuracy of 86.16% was achieved for RADARSAT-2 using the rule-based classification and optimization technique. The resulting flood inventory map supported the efficiency of the proposed methodology.
Sameen, MI & Pradhan, B 2017, 'A Simplified Semi-Automatic Technique for Highway Extraction from High-Resolution Airborne LiDAR Data and Orthophotos', Journal of the Indian Society of Remote Sensing, vol. 45, no. 3, pp. 395-405.
Information on highways is an essential input for various geospatial applications, including car navigation, forensic analysis on highway geometries, and intelligent transportation systems. Semi-automatic and automatic extractions of highways are critical for the regular updating of municipal databases and for highway maintenance. This study presents a semi-automatic data processing approach for extracting highways from high-resolution airborne LiDAR height information and aerial orthophotos. The method was developed based on two data sets. Experimental results for the first testing site showed that the accuracy of the proposed method for highway extraction was 74.50% for completeness and 73.13% for correctness. Meanwhile, the completeness and correctness for the second testing site were 71.20% and 70.72%, respectively. The proposed method was compared with an object-based approach on a different data set. The accuracy for highway extraction of the object-based approach was 64.29% for completeness and 63.11% for correctness, whereas that of the proposed method was 67.14% for completeness and 65.08% for correctness. This research aims to promote semi-automatic highway extraction from LiDAR data and orthophotos by proposing a new approach and a multistep post-processing technique. The proposed method provides an accurate final output that is valuable for a wide range of geospatial applications.
Sameen, MI & Pradhan, B 2017, 'A Two-Stage Optimization Strategy for Fuzzy Object-Based Analysis Using Airborne LiDAR and High-Resolution Orthophotos for Urban Road Extraction', Journal of Sensors, vol. 2017, pp. 1-18.
In the last decade, object-based image analysis (OBIA) has been extensively recognized as an effective classification method for very high spatial resolution images or integrated data from different sources. In this study, a two-stage optimization strategy for fuzzy object-based analysis using airborne LiDAR was proposed for urban road extraction. The method optimizes the two basic steps of OBIA, namely, segmentation and classification, to realize accurate land cover mapping and urban road extraction. This objective was achieved by selecting the optimum scale parameter to maximize class separability and the optimum shape and compactness parameters to optimize the final image segments. Class separability was maximized using the Bhattacharyya distance algorithm, whereas image segmentation was optimized using the Taguchi method. The proposed fuzzy rules were created based on integrated data and expert knowledge. Spectral, spatial, and texture features were used under fuzzy rules by implementing the particle swarm optimization technique. The proposed fuzzy rules were easy to implement and were transferable to other areas. An overall accuracy of 82% and a kappa index of agreement (KIA) of 0.79 were achieved on the studied area when results were compared with reference objects created via manual digitization in a geographic information system. The accuracy of road extraction using the developed fuzzy rules was 0.76 (producer), 0.85 (user), and 0.72 (KIA). Meanwhile, overall accuracy was decreased by approximately 6% when the rules were applied on a test site. A KIA of 0.70 was achieved on the test site using the same rules without any changes. The accuracy of the extracted urban roads from the test site was 0.72 (KIA), which decreased to approximately 0.16. Spatial information (i.e., elongation) and intensity from LiDAR were the most interesting properties for urban road extraction. The proposed method can be applied to a wide range of real applications through remote ...
In this paper, a deep learning model using a Recurrent Neural Network (RNN) was developed and employed to predict the injury severity of traffic accidents based on 1130 accident records that occurred on the North-South Expressway (NSE), Malaysia over a six-year period from 2009 to 2015. Compared to traditional Neural Networks (NNs), the RNN method is more effective for sequential data and is expected to capture temporal correlations among the traffic accident records. Several network architectures and configurations were tested through a systematic grid search to determine an optimal network for predicting the injury severity of traffic accidents. The selected network architecture comprised a Long Short-Term Memory (LSTM) layer, two fully-connected (dense) layers and a Softmax layer. Next, to avoid over-fitting, the dropout technique with a probability of 0.3 was applied. Further, the network was trained with a Stochastic Gradient Descent (SGD) algorithm (learning rate = 0.01) in the Tensorflow framework. A sensitivity analysis of the RNN model was further conducted to determine the input factors' impact on injury severity outcomes. Also, the proposed RNN model was compared with Multilayer Perceptron (MLP) and Bayesian Logistic Regression (BLR) models to understand its advantages and limitations. The results of the comparative analyses showed that the RNN model outperformed the MLP and BLR models. The validation accuracy of the RNN model was 71.77%, whereas the MLP and BLR models achieved 65.48% and 58.30%, respectively. The findings of this study indicate that the RNN model, in deep learning frameworks, can be a promising tool for predicting the injury severity of traffic accidents.
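The network's output stage, a Softmax layer trained with SGD on cross-entropy loss, can be sketched in plain Python. The LSTM and dense layers are omitted here: the two-feature inputs below are hypothetical stand-ins for the representations those layers would produce, and only the learning rate (0.01) comes from the paper.

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def train_step(w, x, y, lr=0.01):
    """One SGD update for a linear-softmax classifier.
    w[k] is the weight vector for class k; y is the true class index.
    Returns the cross-entropy loss before the update."""
    p = softmax([sum(wi * xi for wi, xi in zip(wk, x)) for wk in w])
    for k in range(len(w)):
        grad = p[k] - (1.0 if k == y else 0.0)   # dL/dlogit_k
        w[k] = [wi - lr * grad * xi for wi, xi in zip(w[k], x)]
    return -math.log(p[y])

w = [[0.0, 0.0], [0.0, 0.0]]                     # 2 classes, 2 features
data = [([1.0, 0.0], 0), ([0.0, 1.0], 1)]        # toy severity examples
losses = [sum(train_step(w, x, y) for x, y in data) for _ in range(50)]
```

With the small learning rate the per-epoch loss decreases slowly but steadily, which mirrors why the paper pairs SGD with many training epochs and dropout rather than a larger step size.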
Sameen, MI, Pradhan, B, Shafri, HZM, Mezaal, MR & bin Hamid, H 2017, 'Integration of Ant Colony Optimization and Object-Based Analysis for LiDAR Data Classification', IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, no. 5, pp. 2055-2066.
Light detection and ranging (LiDAR) data classification provides useful thematic maps for numerous geospatial applications. Several methods and algorithms have been proposed recently for LiDAR data classification. Most studies focused on object-based analysis because of its advantages over per-pixel-based methods. However, several issues, such as parameter optimization, attribute selection, and development of transferable rulesets, remain challenging in this topic. This study contributes to LiDAR data classification by developing an approach that integrates ant colony optimization (ACO) and rule-based classification. First, LiDAR-derived digital elevation and digital surface models were integrated with high-resolution orthophotos. Second, the processed raster was segmented with the multiresolution segmentation method. Subsequently, the parameters were optimized with a supervised technique based on fuzzy analysis. A total of 20 attributes were selected based on general knowledge on the study area and LiDAR data; the best subset containing 12 attributes was then selected via ACO. These attributes were utilized to develop rulesets through the use of a decision tree algorithm, and a thematic map was generated for the study area. Results revealed the robustness of the proposed method, which has an overall accuracy of ~95% and a kappa coefficient of 0.94. The rule-based approach with all attributes and the k nearest neighbor (KNN) classification method were applied to validate the results of the proposed method. The overall accuracy of the rule-based method with all attributes was ~88% (kappa = 0.82), whereas the KNN method had an overall accuracy of <70% and produced a poor thematic map. The selection of the ACO algorithm was justified through a comparison with three well-known feature selection methods. On the other hand, the transferability of the developed rules was evaluated by using a second LiDAR dataset at another study area. The overall accuracy and the kappa...
Jasim, MA, Shafri, HZM, Hamedianfar, A & Sameen, MI 2016, 'Land transformation assessment using the integration of remote sensing and GIS techniques: a case study of Al-Anbar Province, Iraq', Arabian Journal of Geosciences, vol. 9, no. 15.
© 2016, Saudi Society for Geosciences. Human activities and climate change significantly affect our environment, altering hydrologic cycles. Several environmental, social, political, and economic factors contribute to land transformation as well as environmental changes. This study first identified the most critical factors that affect the environment in Al-Anbar, including population growth, urbanization expansion, bare land expansion, and reduction in vegetation cover. The combination of remote sensing data and the fuzzy analytic hierarchy process (fuzzy AHP) enabled exploration of land transformations and environmental changes in the study area during 2001 to 2013 in terms of long- and short-term changes. The land transformation results showed that water bodies increased radically (94%) over the long-term change from 2001 to 2013 because of water policies. In addition, the urban class expanded in two short-term periods (2001–2007 and 2007–2013), representing net changes of 46% and 60%, respectively. Finally, barren land showed a 25% reduction in the first period because of the huge expansion of water in the lake; a small percentage of growth was observed in the second period. Based on the land transformation results, the environmental degradation assessment showed that the study area generally had a high level of environmental degradation, mostly in the centre and the northern part of the study area. This study suggests that further studies include other factors that are also responsible for environmental degradation, such as water quality and desertification.
Sameen, MI & Pradhan, B 2016, 'Assessment of the effects of expressway geometric design features on the frequency of accident crash rates using high-resolution laser scanning data and GIS', Geomatics, Natural Hazards and Risk, pp. 1-15.
Sameen, MI, Nahhas, FH, Buraihi, FH, Pradhan, B & Shariff, ARBM 2016, 'A refined classification approach by integrating Landsat Operational Land Imager (OLI) and RADARSAT-2 imagery for land-use and land-cover mapping in a tropical area', International Journal of Remote Sensing, vol. 37, no. 10, pp. 2358-2375.
Alzuhairi, M, Jena, R & Pradhan, B 2020, 'Geospatial Technology Applications in Environmental Disaster Management' in Sustainable Energy and Environment: An Earth System Approach, Apple Academic Press, USA, pp. 271-306.
Geospatial technologies such as GIS and GPS have grown rapidly and are widely utilized in disaster management. This chapter provides an overview of the roles and applications of various geospatial technologies in landslide, oil spill, and earthquake disaster management. It explains the use of each technology in the various stages of a disaster management process (pre-disaster, during disaster, and post-disaster) and discusses the current limitations of such technologies in reducing the impacts of these disasters. Finally, it presents examples of recent methods applied to disaster management using different geospatial technologies.
Alzuhairi, M & Pradhan, B 2017, 'Manifestation of SVM-Based Rectified Linear Unit (ReLU) Kernel Function in Landslide Modelling' in Space Science and Communication for Sustainability, pp. 185-195.
Pradhan, B & Sameen, MI 2017, 'Effects of the Spatial Resolution of Digital Elevation Models and Their Products on Landslide Susceptibility Mapping' in Laser Scanning Applications in Landslide Assessment, Springer, Germany, pp. 133-150.
Landslides are among the destructive natural disasters that cause significant damage to human life and properties worldwide. Numerous researchers have attempted to provide an understanding of landslide causes and related problems. An important and simple analysis method that has been used in landslide studies is landslide susceptibility mapping/modeling (LSM). LSM is fundamental to hazard and risk assessments, and it is widely used by governments for planning land use and strategic projects. LSM requires landslide conditioning factors and landslide inventories, which can be acquired using remote sensing and field surveying techniques. The output of LSM is a map that shows the degree of landslide susceptibility of an area.
Pradhan, B & Sameen, MI 2017, 'Landslide Susceptibility Modeling: Optimization and Factor Effect Analysis' in Laser Scanning Applications in Landslide Assessment, Springer, Germany, pp. 115-132.
Landslides are considered devastating natural geohazards worldwide; they pose significant threats to human life and result in socioeconomic losses in many countries (Mahalingam et al. 2016).
Pradhan, B & Sameen, MI 2017, 'Laser Scanning Systems in Landslide Studies' in Laser Scanning Applications in Landslide Assessment, Springer, pp. 3-19.
Pradhan, B, Sameen, MI & Kalantar, B 2017, 'Ensemble Disagreement Active Learning for Spatial Prediction of Shallow Landslide' in Laser Scanning Applications in Landslide Assessment, Springer, Germany, pp. 179-191.
In Malaysia, landslides are considered the most frequent and devastating natural disaster, causing losses of human life and property. The spatial prediction of landslides is the basic step required for hazard and risk assessments. Spatial prediction methods for landslides are established and documented in the literature. However, several research directions on this topic need to be developed and explored in depth. Current improvements in computer technology and laser scanning systems provide better data processing capabilities and topographic datasets, as well as new trends in landslide modeling and methods that can exploit such advanced technologies and datasets.
Ahmed, AA, Kalantar, B, Pradhan, B, Mansor, S & Sameen, MI 2017, 'Land use and land cover mapping using rule-based classification in Karbala City, Iraq', GCEC 2017 Proceedings of the 1st Global Civil Engineering Conference, Global Civil Engineering Conference, Springer, Kuala Lumpur, Malaysia, pp. 1019-1027.
© Springer Nature Singapore Pte Ltd. 2019. Land use and land cover are important and useful geographic information system (GIS) layers that have been used for a wide range of geospatial applications. These layers are usually generated by applying digital image processing steps to a satellite image or images captured from an aircraft. Several methods are available in the literature to produce such GIS layers. Image classification is the main method that has been used by many researchers to produce thematic maps. In the current study, a decision tree was used to develop rulesets at the object level. These rules were applied, and a thematic map of Karbala city was produced from a SPOT image. The overall accuracy of the classified image was 96%, and the kappa index was 0.95. The results indicated that the proposed classification method is effective and can produce promising results.
Sameen, MI, Pradhan, B, Shafri, HZM & Hamid, HB 2017, 'Applications of deep learning in severity prediction of traffic accidents', Global Civil Engineering Conference 2017, Malaysia, pp. 793-808.
© Springer Nature Singapore Pte Ltd. 2019. This study investigates the power of deep learning in predicting the severity of injuries from traffic accidents on Malaysian highways. Three network architectures based on simple feedforward Neural Networks (NN), Recurrent Neural Networks (RNN), and Convolutional Neural Networks (CNN) were proposed and optimized through a grid search to fine-tune the hyperparameters of the models that can best predict the outputs at lower computational cost. The results showed that, among the tested algorithms, the RNN model with an average accuracy of 73.76% outperformed the NN model (68.79%) and the CNN model (70.30%) based on a 10-fold cross-validation approach. The sensitivity analysis indicated that the best optimization algorithm is "Nadam" in all three network architectures. In addition, the best batch size was determined to be 4 for the NN and RNN and 8 for the CNN. Dropout with a keep probability of 0.2 and 0.5 was found critical for the CNN and RNN models, respectively. This research has shown that deep learning models such as CNN and RNN capture additional information inherent in the raw data, such as temporal and spatial correlations, and thereby outperform the traditional NN model in terms of both accuracy and stability.
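The grid search described in this abstract exhaustively evaluates every combination of hyperparameter values and keeps the best-scoring configuration. A minimal sketch follows; the `fake_cv_accuracy` function is a hypothetical stand-in for the study's 10-fold cross-validated network training, chosen so that it favours the settings the abstract reports for the RNN (Nadam, batch size 4, dropout keep probability 0.5).

```python
from itertools import product

def grid_search(grid, evaluate):
    """Exhaustive grid search: evaluate every hyperparameter combination
    and keep the one with the highest cross-validated accuracy."""
    names = sorted(grid)
    best_cfg, best_acc = None, float("-inf")
    for values in product(*(grid[n] for n in names)):
        cfg = dict(zip(names, values))
        acc = evaluate(cfg)
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc

# Hypothetical stand-in for 10-fold CV accuracy of a trained network.
def fake_cv_accuracy(cfg):
    score = 70.0
    score += 2.0 if cfg["optimizer"] == "nadam" else 0.0
    score += 1.0 if cfg["batch_size"] == 4 else 0.0
    score += 0.8 if cfg["dropout"] == 0.5 else 0.0
    return score

grid = {"optimizer": ["sgd", "adam", "nadam"],
        "batch_size": [4, 8, 16],
        "dropout": [0.2, 0.5]}
best_cfg, best_acc = grid_search(grid, fake_cv_accuracy)
```

The cost of grid search grows multiplicatively with each added hyperparameter (here 3 × 3 × 2 = 18 evaluations), which is why each evaluation being a full cross-validated training run makes the budget the limiting factor.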
Sameen, MI & Pradhan, B 2016, 'A novel built-up spectral index developed by using multiobjective particle-swarm-optimization technique', IOP Conference Series: Earth and Environmental Science, IOP Publishing, p. 012006.
Sameen, MI & Pradhan, B n.d., 'Forecasting severity of traffic accidents using road geometry extracted from mobile laser scanning data', Colombo, Sri Lanka.
Sameen, MI & Shariff, ARBM 2015, 'The use of genetic algorithm for palm oil fruit maturity detection', ACRS 2015 - 36th Asian Conference on Remote Sensing: Fostering Resilient Growth in Asia, Proceedings.
Palm oil is one of the most significant agricultural products in Southeast Asia, South Africa, and South America, and it plays a crucial role in economic development. The maturity, or ripeness, of palm oil fruits determines the quality, as well as the overall marketing, of the palm oil produced. The traditional method for detecting palm oil fruit maturity involves manual inspection of fresh palm oil fruits or counting the number of loose fruits in a bunch. Manual counting of fresh palm oil fruits is a labor-intensive and time-consuming process, leading to bias and human error, which dramatically affects the profitability of farmers. This study uses a genetic algorithm to develop a palm oil ripeness index. Genetic algorithms are adaptive heuristic search methods based on the evolutionary ideas of natural selection and genetics. The results of this study indicated that the genetic algorithm is useful for the detection of palm oil maturity: an overall accuracy of 67.10% was achieved, with accuracies of 60% and 73.60% for under-ripe and ripe palm oil fruits, respectively.
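The genetic-algorithm search this abstract relies on can be sketched in its classic form: a population of candidate solutions evolved through selection, crossover, and mutation. This is an illustrative toy, not the paper's ripeness index; the bitstring encoding, the `target` pattern, and the fitness function (a stand-in for ripeness-classification accuracy) are all assumptions.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=30, n_gens=40,
                      p_mut=0.02, seed=1):
    """Minimal generational GA: tournament selection, one-point
    crossover, and bit-flip mutation over fixed-length bitstrings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        # Size-2 tournament: the fitter of two random individuals wins.
        a, b = rng.choice(pop), rng.choice(pop)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(n_gens):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy fitness: a hypothetical "ideal" selection of spectral features;
# score is how many bits of a candidate match it.
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
fit = lambda bits: sum(b == t for b, t in zip(bits, target))
best = genetic_algorithm(fit, n_bits=len(target))
```

In the study's setting, the fitness evaluation would instead score how well a candidate index separates under-ripe from ripe fruits on labelled samples.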