Financial Fraud Detection with Self-Attention Mechanism: A Comparative Study

Graham Fletcher
Tao Shi

Abstract

Financial fraud detection has long been an important research topic in the financial industry. As financial transactions grow more complex, traditional detection methods struggle to cope with large-scale, heterogeneous data. This study proposes a model based on the self-attention mechanism to improve the accuracy of identifying fraudulent transactions. Self-attention can effectively capture long-range dependencies and global information in financial data, strengthening the model's ability to recognize fraudulent behavior. Experimental comparisons with several common models, including decision trees, MLP, XGBoost, and CNN, show that the self-attention-based model outperforms the alternatives in accuracy, recall, precision, and F1-score, with the largest gains in recall and F1-score. This study offers a new approach to financial fraud detection, particularly for complex and heterogeneous data, and has strong application potential. Future research can further optimize the model and combine it with other deep learning techniques, such as multimodal learning, to improve the accuracy and efficiency of fraud detection.
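To make the abstract's description concrete, the sketch below shows one plausible way to apply self-attention to tabular transaction data: each scaled numeric feature is embedded as a token, multi-head self-attention models the global interactions between features, and a small classification head outputs a fraud logit. This is a minimal illustrative sketch, not the authors' architecture; the class name, embedding scheme, feature count, and pooling choice are all assumptions.

```python
# Minimal sketch (not the paper's implementation) of a self-attention
# classifier for tabular transaction features.
import torch
import torch.nn as nn


class SelfAttentionFraudClassifier(nn.Module):
    """Embeds each transaction feature as a token, applies multi-head
    self-attention to capture global feature interactions, and classifies
    the pooled representation with a small MLP head."""

    def __init__(self, num_features: int, d_model: int = 32, num_heads: int = 4):
        super().__init__()
        # Learned per-feature embedding: each scalar feature x_i becomes
        # the token x_i * w_i + b_i of dimension d_model (an assumption,
        # one common way to tokenize tabular data).
        self.feature_embed = nn.Parameter(torch.randn(num_features, d_model) * 0.02)
        self.feature_bias = nn.Parameter(torch.zeros(num_features, d_model))
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.head = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features) of standardized numeric transaction features
        tokens = x.unsqueeze(-1) * self.feature_embed + self.feature_bias  # (B, F, d_model)
        attended, _ = self.attn(tokens, tokens, tokens)  # self-attention across features
        pooled = self.norm(attended).mean(dim=1)         # mean-pool the attended tokens
        return self.head(pooled).squeeze(-1)             # fraud logit per transaction


# Example usage: binary cross-entropy on fraud labels (0 = legitimate, 1 = fraud).
model = SelfAttentionFraudClassifier(num_features=30)
logits = model(torch.randn(8, 30))
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (8,)).float())
```

Because fraud datasets are typically highly imbalanced, in practice one would also weight the positive class (e.g., via `pos_weight` in `BCEWithLogitsLoss`) or resample, which is consistent with the abstract's emphasis on recall and F1-score.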



Article Details

How to Cite
Fletcher, G., & Shi, T. (2025). Financial Fraud Detection with Self-Attention Mechanism: A Comparative Study. Journal of Computer Science and Software Applications, 5(1), 10–18. Retrieved from https://mfacademia.org/index.php/jcssa/article/view/179