Money Laundering Detection with Multi-Aggregation Custom Edge GIN
Pub. online: 12 June 2025
Type: Data Science In Action
Open Access
Received
30 September 2024
Accepted
23 May 2025
Published
12 June 2025
Abstract
Detecting illicit transactions in Anti-Money Laundering (AML) systems remains a significant challenge due to class imbalance and the complexity of financial networks. This study introduces the Multiple Aggregations for Graph Isomorphism Network with Custom Edges (MAGIC) convolution, an enhancement of the Graph Isomorphism Network (GIN) designed to improve the detection of illicit transactions in AML systems. MAGIC integrates edge convolution (GINE Conv) and multiple learnable aggregations, allowing for varied embedding sizes and improved generalization. Experiments were conducted on synthetic datasets that simulate real-world transactions, following the experimental setup of previous studies to ensure comparability. MAGIC, combined with XGBoost as a link predictor, outperformed existing models in 16 out of 24 metrics, with notable improvements in F1 scores and precision. On the most imbalanced dataset, MAGIC achieved an F1 score of 82.6% and a precision of 90.4% for the illicit class. While MAGIC demonstrated high precision, its recall was lower than or comparable to that of the other models, indicating a potential area for future enhancement. Overall, MAGIC presents a robust approach to AML detection, particularly in scenarios where precision and overall quality are critical. Future research should focus on optimizing the model’s recall, potentially by incorporating additional regularization techniques or advanced sampling methods. Additionally, exploring the integration of foundation models like GraphAny could further enhance the model’s applicability in diverse AML environments.
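The core idea combined in MAGIC can be sketched in plain Python: a GINE-style message step that mixes node and edge features, followed by several neighbourhood aggregators whose outputs are concatenated (yielding a larger embedding than the input). This is an illustrative sketch only; the paper's aggregations are learnable, whereas this toy version uses fixed sum/mean/max aggregators, and all names (`multi_agg_conv`, etc.) are hypothetical, not the authors' API.

```python
def relu(v):
    # Elementwise ReLU on a plain list of floats.
    return [max(0.0, x) for x in v]

def vec_add(a, b):
    return [x + y for x, y in zip(a, b)]

def multi_agg_conv(node_feats, edges, edge_feats, eps=0.0):
    """One toy message-passing step in the spirit of GINE + multi-aggregation.

    node_feats: {node: [float, ...]}
    edges:      list of (src, dst) pairs; messages flow src -> dst
    edge_feats: {(src, dst): [float, ...]} with the same dim as node features

    Returns {node: [sum_view | mean_view | max_view]} -- three aggregated
    views concatenated, so the output embedding is wider than the input
    (mirroring MAGIC's varied embedding sizes).
    """
    dim = len(next(iter(node_feats.values())))
    messages = {n: [] for n in node_feats}
    for (src, dst) in edges:
        # GINE-style message: ReLU(h_src + e_{src,dst})
        msg = relu(vec_add(node_feats[src], edge_feats[(src, dst)]))
        messages[dst].append(msg)
    out = {}
    for n, msgs in messages.items():
        if msgs:
            s = [sum(col) for col in zip(*msgs)]          # sum aggregator
            m = [x / len(msgs) for x in s]                # mean aggregator
            mx = [max(col) for col in zip(*msgs)]         # max aggregator
        else:
            s = m = mx = [0.0] * dim                      # isolated node
        # GIN-style self term, scaled by (1 + eps), added to each view.
        self_term = [(1.0 + eps) * x for x in node_feats[n]]
        out[n] = (vec_add(self_term, s)
                  + vec_add(self_term, m)
                  + vec_add(self_term, mx))
    return out
```

In a real implementation each concatenated view would pass through a learned MLP before being fed to a downstream link predictor such as XGBoost; the sketch stops at the aggregation step, which is the part that distinguishes a multi-aggregation convolution from a single-aggregator GIN layer.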
Supplementary material
The source code for this study is available on GitHub: https://github.com/maddataanalyst/Graph_MAGIC_Conv. The repository includes all the necessary components to reproduce the training results. A supplementary PDF file attached to this publication provides detailed analyses of the train/validation/test splits, hyperparameter tuning results, and a comprehensive breakdown of the model architecture for each dataset.
References
Alsuwailem AAS, Saudagar AKJ (2020). Anti-money laundering systems: A systematic literature review. Journal of Money Laundering Control, 23: 833–848. https://doi.org/10.1108/JMLC-02-2020-0018
Behera DK, Das M, Swetanisha S, Nayak J, Vimal S, Naik B (2021). Follower link prediction using the XGBoost classification model with multiple graph features. Wireless Personal Communications, 127(1): 695–714. https://doi.org/10.1007/s11277-021-08399-y
Brossard R, Frigo O, Dehaene D (2020). Graph convolutions that can finally model local structure. arXiv preprint: https://arxiv.org/abs/2011.15069.
Dumitrescu B, Baltoiu A, Budulan S (2022). Anomaly detection in graphs of bank transactions for anti money laundering applications. IEEE Access, 10: 47699–47714. https://doi.org/10.1109/ACCESS.2022.3170467
Eddin AN, Bono J, Aparício D, Polido D, Ascensão JT, Bizarro P, et al. (2021). Anti-money laundering alert optimization using machine learning with graphs. arXiv preprint: https://arxiv.org/abs/2112.07508.
Hamilton WL (2020). Graph representation learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 14: 1–159. https://doi.org/10.1007/978-3-031-01588-5
Han J, Huang Y, Liu S, Towey K (2020). Artificial intelligence for anti-money laundering: A review and extension. Digital Finance, 2: 211–239. https://doi.org/10.1007/s42521-020-00023-1
Hu W, Liu B, Gomes J, Zitnik M, Liang P, Pande VS, et al. (2019). Strategies for pre-training graph neural networks. arXiv preprint: https://arxiv.org/abs/1905.12265.
Johannessen F, Jullum M (2023). Finding money launderers using heterogeneous graph neural networks. arXiv preprint: https://arxiv.org/abs/2307.13499.
Opitz J, Burst S (2019). Macro F1 and macro F1. arXiv preprint: https://arxiv.org/abs/1911.03347.
Scarselli F, Gori M, Tsoi AC, Hagenbuchner M, Monfardini G (2009). Computational capabilities of graph neural networks. IEEE Transactions on Neural Networks, 20: 81–102. https://doi.org/10.1109/TNN.2008.2005141
Tailor SA, Opolka FL, Liò P, Lane ND (2022). Do we need anisotropic graph neural networks? arXiv preprint: https://arxiv.org/abs/2104.01481.
Velickovic P, Cucurull G, Casanova A, Romero A, Liò P, Bengio Y (2017). Graph attention networks. arXiv preprint: https://arxiv.org/abs/1710.10903.
Weber M, Chen J, Suzumura T, Pareja A, Ma T, Kanezashi H, et al. (2018). Scalable graph learning for anti-money laundering: A first look. arXiv preprint: https://arxiv.org/abs/1812.00076.
Weber M, Domeniconi G, Chen J, Weidele DKI, Bellei C, Robinson T, et al. (2019). Anti-money laundering in bitcoin: Experimenting with graph convolutional networks for financial forensics. arXiv preprint: https://arxiv.org/abs/1908.02591.
Wu Z, Pan S, Chen F, Long G, Zhang C, Yu PS (2021). A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32: 4–24. https://doi.org/10.1109/TNNLS.2020.2978386
Xu K, Hu W, Leskovec J, Jegelka S (2018). How powerful are graph neural networks? arXiv preprint: https://arxiv.org/abs/1810.00826.
Yang C, Xiao Y, Zhang Y, Sun Y, Han J (2020). Heterogeneous network representation learning: A unified framework with survey and benchmark. IEEE Transactions on Knowledge and Data Engineering, 34: 4854–4873. https://doi.org/10.1109/TKDE.2020.3045924
Yang Y, Li D (2020). NENN: Incorporate node and edge features in graph neural networks. In: Proceedings of the 12th Asian Conference on Machine Learning, ACML 2020, 18–20 November 2020, Bangkok, Thailand (SJ Pan, M Sugiyama, eds.), volume 129 of Proceedings of Machine Learning Research, PMLR, 593–608.
Zhao J, Mostafa H, Galkin M, Bronstein MM, Zhu Z, Tang J (2024). GraphAny: A foundation model for node classification on any graph. arXiv preprint: https://arxiv.org/abs/2405.20445.