Rectified Linear Unit Function and its Variants: Exploring Options for Web Phishing Classification
Abstract
In today's digital landscape, the rise of web phishing poses a significant cybersecurity threat, prompting urgent research into detection and prevention strategies. This study investigates the efficacy of different activation functions within Multilayer Perceptron (MLP) models for detecting phishing websites, using a dataset of 87 features reduced to 60 via Principal Component Analysis (PCA). Evaluation metrics including accuracy, precision, recall, F1-score, and AUC are computed for four activation functions: ReLU, Leaky ReLU (LReLU), Parametric ReLU (PReLU), and the Exponential Linear Unit (ELU). Results demonstrate consistently high performance across all activation functions, with slight improvements observed for LReLU and ELU, particularly in precision and F1-score. These findings underscore the robustness and adaptability of MLP models in handling complex classification tasks such as phishing detection. Moreover, the study highlights the importance of considering diverse activation functions in model design, offering insights for future optimization and exploration in cybersecurity research.
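The four activation functions compared in the abstract differ only in how they treat negative inputs. As a minimal illustrative sketch (not the authors' implementation, which is not given here), the standard definitions can be written with NumPy as follows; the `alpha` parameters and their defaults are conventional choices, not values taken from the study:

```python
import numpy as np

def relu(x):
    # ReLU: passes positive inputs through, zeroes out negatives
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU: small fixed slope alpha on the negative side
    return np.where(x > 0, x, alpha * x)

def parametric_relu(x, alpha):
    # PReLU: same form as Leaky ReLU, but alpha is learned during training
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # ELU: smooth exponential saturation toward -alpha for negative inputs
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))
```

Because Leaky ReLU, PReLU, and ELU keep a nonzero gradient for negative inputs, they avoid the "dying ReLU" problem, which is one common explanation for the small gains the abstract reports for LReLU and ELU.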
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.