Rectified Linear Unit Function and its Variants: Exploring Options for Web Phishing Classification

Adam Sagara
Teddy Surya Gunawan
Wanayumini

Abstract

In today's digital landscape, the rise of web phishing poses a significant cybersecurity threat, prompting urgent research into detection and prevention strategies. This study investigates the efficacy of different activation functions within Multilayer Perceptron (MLP) models for detecting phishing websites, utilizing a dataset with 87 features reduced to 60 using Principal Component Analysis (PCA). Evaluation metrics including accuracy, precision, recall, F1-score, and AUC are computed across four activation functions: ReLU, Leaky ReLU, Parametric ReLU, and Exponential Linear Unit (ELU). Results demonstrate consistently high performance across all activation functions, with slight improvements observed with Leaky ReLU (LReLU) and ELU, particularly in precision and F1-score metrics. These findings underscore the robustness and adaptability of MLP models in handling complex classification tasks like phishing detection. Moreover, the study highlights the importance of considering diverse activation functions in model design, offering insights for future optimization and exploration in cybersecurity research.
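The four activation functions compared in the study can be sketched as follows. This is a minimal NumPy illustration of the standard definitions, not the authors' implementation; the `alpha` defaults are common conventions and are not taken from the paper (in Parametric ReLU, `alpha` is learned during training rather than fixed).

```python
import numpy as np

def relu(x):
    # ReLU: passes positive inputs through, zeros out negatives
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU: small fixed slope alpha for negative inputs,
    # avoiding the "dying ReLU" problem of zero gradients
    return np.where(x > 0, x, alpha * x)

def prelu(x, alpha):
    # Parametric ReLU: same form as Leaky ReLU, but alpha is a
    # learned parameter rather than a fixed constant
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # ELU: smooth exponential saturation toward -alpha for
    # negative inputs, keeping mean activations closer to zero
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))
```

All four agree on positive inputs and differ only in how they treat negative ones, which is why their classification performance is often close, as the abstract reports.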


Article Details

How to Cite
Rectified Linear Unit Function and its Variants: Exploring Options for Web Phishing Classification. (2024). ASTEEC Conference Proceeding: Computer Science, 1(1), 190-193. https://www.proceedings.asteec.com/index.php/acp-cs/article/view/54
Section
Articles
