Measuring the Human Risk Factor: Developing Predictive Models for Insider Threat Detection Using Behavioral Analytics

Día Fayyad

Abstract

Insider threats represent one of the most complex challenges in modern cybersecurity, as they originate from authorized users who exploit legitimate access for malicious or negligent purposes. Unlike external attacks, insider incidents often evolve gradually through subtle behavioral changes that traditional rule-based systems fail to detect. This study investigates the human risk factor underlying insider threats and develops predictive models that leverage behavioral analytics to identify early indicators of malicious intent.
The research utilizes the Carnegie Mellon University CERT Insider Threat Dataset (v4.2), integrating system activity logs, communication patterns, and psychosocial proxies such as email sentiment and work-hour deviations. Data preprocessing involved normalization and feature selection across technical, behavioral, and psychological dimensions. Three machine learning models—Random Forest, Long Short-Term Memory (LSTM), and Autoencoder—were implemented to evaluate predictive performance. Model performance was assessed using precision, recall, F1-score, and ROC-AUC metrics.
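As a rough illustration of the evaluation pipeline described above, the sketch below trains one of the three models (a Random Forest) and reports the same four metrics. The synthetic feature matrix, class balance, and hyperparameters are stand-ins chosen for this example, not the study's actual preprocessing output or configuration.

```python
# Minimal sketch of the model-evaluation step, assuming features have already
# been engineered from the CERT v4.2 logs (logon times, file transfers, email
# sentiment, etc.). The data below is a synthetic stand-in for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

rng = np.random.default_rng(42)

# Synthetic stand-in: 5,000 user-days, 12 behavioral/technical features,
# ~2% labeled as insider activity (insider incidents are rare by nature).
X = rng.normal(size=(5000, 12))
y = (rng.random(5000) < 0.02).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

# Random Forest baseline; class_weight offsets the heavy class imbalance.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=42)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
y_score = clf.predict_proba(X_test)[:, 1]

print(f"Precision: {precision_score(y_test, y_pred, zero_division=0):.3f}")
print(f"Recall:    {recall_score(y_test, y_pred, zero_division=0):.3f}")
print(f"F1-score:  {f1_score(y_test, y_pred, zero_division=0):.3f}")
print(f"ROC-AUC:   {roc_auc_score(y_test, y_score):.3f}")
```

The LSTM and Autoencoder models would replace the classifier above with sequence-based and reconstruction-based learners, respectively, but the metric computation remains the same.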
Results show that the LSTM model achieved the highest overall accuracy of 93.2 percent with an AUC of 0.96, outperforming both Random Forest and Autoencoder models. Behavioral deviations such as unusual file transfers, abrupt login time changes, and communication tone shifts emerged as strong predictors of insider risk. The findings highlight the value of integrating human behavioral analytics into cybersecurity frameworks to enhance proactive threat detection.
This study contributes to the development of data-driven, ethically aligned security strategies that enable organizations to identify, quantify, and mitigate insider risks before they escalate into security incidents.
