Interpretable Machine Learning for IoT Security: Feature Selection and Explainability in Botnet Intrusion Detection using Extra Trees Classifier



Conference: 2024 1st International Conference on Innovative Engineering Sciences and Technological Research (ICIESTR)

Conference Location: Muscat, Oman

DOI: 10.1109/ICIESTR60916.2024.10798158

Abstract: 

Security is a paramount concern in the Internet of Things (IoT), particularly in the context of botnet intrusions. This research addresses that challenge through a machine learning approach with a strong emphasis on interpretability. The methodology begins with pre-processing of IoT botnet data. Feature selection, combining correlation analysis, mutual information, and Principal Component Analysis (PCA), is then applied to distill the most informative features. The selected features are used to train and evaluate an Extra Trees Classifier. Model interpretability is addressed by incorporating LimeTabularExplainer, which exposes the relationships between individual features and model predictions. The results demonstrate strong performance in both coarse and fine-grained botnet intrusion categorization, with 5-fold cross-validation yielding accuracy, precision, recall, and F1-score metrics consistently above 99%. This work marks a step toward resilient and comprehensible cybersecurity solutions for the ever-expanding IoT landscape.


Keywords—Botnet Intrusion Detection, Interpretable Machine Learning, Explainable Artificial Intelligence, IoT Threat Mitigation, Ensemble Learning for Security




#Security 

