Financial Fraud Detection with Self-Attention Mechanism: A Comparative Study
Main Article Content
Abstract
Financial fraud detection has long been an important research topic in the financial industry. As financial transactions grow increasingly complex, traditional detection methods have become inadequate for large-scale, complex data. This study proposes a model based on the self-attention mechanism to improve the recognition accuracy of fraudulent financial transactions. The self-attention mechanism can effectively capture long-range dependencies and global information in financial data, thereby improving the model's ability to identify fraudulent behavior. In comparisons with several common models, including decision trees, MLP, XGBoost, and CNN, the experimental results show that the self-attention-based model outperforms the others in accuracy, recall, precision, and F1-score, with the largest gains in recall and F1-score. This study offers a new approach to financial fraud detection, particularly for complex and heterogeneous data, and has strong application potential. Future research could further optimize the model and combine it with other deep learning techniques for multimodal learning to improve the accuracy and efficiency of fraud detection.
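The self-attention mechanism the abstract refers to can be illustrated as scaled dot-product attention over a sequence of transaction feature vectors, where each transaction attends to every other and thus captures long-range dependencies. The following NumPy sketch is for illustration only and is not the paper's actual model; the dimensions, random weight matrices, and the framing of rows as transaction embeddings are assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention.

    X: (seq_len, d_model) matrix whose rows we treat as
    transaction feature vectors (an assumption for this sketch).
    Returns the attended outputs and the attention weights.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    # Pairwise affinities between transactions, scaled by sqrt(d_k).
    scores = Q @ K.T / np.sqrt(d_k)          # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V, weights

# Toy example: 5 "transactions", 8 raw features, 4-dim projections.
rng = np.random.default_rng(0)
d_model, d_k, seq_len = 8, 4, 5
X = rng.normal(size=(seq_len, d_model))
Wq = rng.normal(size=(d_model, d_k))
Wk = rng.normal(size=(d_model, d_k))
Wv = rng.normal(size=(d_model, d_k))

out, w = self_attention(X, Wq, Wk, Wv)
print(out.shape, w.shape)  # (5, 4) (5, 5)
```

Because every row of the attention-weight matrix is a distribution over all transactions in the window, the model can weight a suspicious transaction against distant context rather than only neighboring records, which is the property the abstract credits for the recall improvement.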
Article Details
This work is licensed under a Creative Commons Attribution 4.0 International License.
Mind forge Academia also operates under the Creative Commons Licence CC-BY 4.0. This allows anyone to copy and redistribute the material in any medium or format for any purpose, even commercially, provided that appropriate citation information is given.