
Intelligent Inside Threat Detection Framework Based on Digital Twin, Transformer Variant Models, and Transfer Learning


Publisher

Université d'Ottawa / University of Ottawa

Creative Commons

Attribution-NonCommercial-NoDerivatives 4.0 International

Abstract

With the rise of networked systems and modern hacker techniques, insider threats have become a greater concern than external hackers, as they often cause more significant damage and are harder to detect due to authorized access, complex behaviors, data imbalances, and a lack of explainability. To address these challenges, we proposed DTITD, a centralized learning framework that combines Digital Twin (DT) technology and transformer models. DTITD tackles data imbalance by utilizing contextual embeddings from pre-trained Large Language Models (LLMs), and it provides insights into user behavior through Digital Twin analysis, enhancing detection explainability. Extensive experiments on CERT r4.2 (dense) and CERT r6.2 (sparse) datasets show that DistilledTrans, a customized transformer model, outperforms baseline models in accuracy, precision, recall, F1-score, and AUC, while being computationally efficient. To overcome challenges like data privacy and resource costs, we introduced FedITD, a Federated Parameter-Efficient Tuning (PETuning) framework with Federated Learning (FL) and Transfer Learning. This framework allows for decentralized model learning without data transmission, safeguarding privacy and reducing resource costs. Combining DTITD and FedITD provides a highly accurate, efficient, and privacy-preserving solution for insider threat detection at an enterprise level.
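The abstract's FedITD framework combines federated learning with parameter-efficient tuning methods such as LoRA, Adapter, and BitFit, so that clients exchange only small adapter weights rather than raw data or full model parameters. The following NumPy sketch illustrates the LoRA idea in a FedAvg-style round; all dimensions, client counts, and variable names are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    # Output of a frozen linear layer W plus a low-rank LoRA update B @ A.
    return x @ (W + alpha * (B @ A)).T

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 4, 2                 # hypothetical layer sizes, LoRA rank r
W = rng.normal(size=(d_out, d_in))       # frozen pre-trained weight (never updated)

# Three hypothetical clients; each holds only its small adapter pair (A, B).
# B starts at zero, so every client begins from the pre-trained behaviour.
clients = [(rng.normal(size=(r, d_in)) * 0.01, np.zeros((d_out, r)))
           for _ in range(3)]

# FedAvg-style aggregation: average only the adapter matrices, never W or data.
A_avg = np.mean([A for A, _ in clients], axis=0)
B_avg = np.mean([B for _, B in clients], axis=0)

x = rng.normal(size=(1, d_in))
y = lora_forward(x, W, A_avg, B_avg)

# Communication saving: adapters hold r*(d_in+d_out) numbers vs d_in*d_out for W.
adapter_params = r * (d_in + d_out)
full_params = d_in * d_out
```

Because the frozen backbone never leaves the client and only the rank-r adapter matrices are transmitted, this kind of scheme preserves data privacy while keeping per-round communication small, which is the trade-off the abstract attributes to FedITD.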


Keywords

Digital Twin, Cybersecurity, Insider Threat, Deep Learning, Transformer, BERT, RoBERTa, GPT, Data Augmentation, Artificial Intelligence, Machine Learning, UEBA, XLNet, DistilBERT, LLM, Parameter-Efficient Tuning, LoRA, Adapter, BitFit, NLP, Transfer Learning
