Journal of Data Science

A Designed Look at Artificial Intelligence from the Lens of Fairness
Md Borhan Uddin, Mengqi Yin, Nairanjana Dasgupta

https://doi.org/10.6339/26-JDS1219
Pub. online: 4 February 2026      Type: Statistical Data Science      Open Access

Received: 30 November 2025
Accepted: 24 January 2026
Published: 4 February 2026

Abstract

As the use of Artificial Intelligence (AI), and especially Generative AI, becomes ubiquitous, we examine the performance of these methods. We focus specifically on fairness, one element of trustworthiness, and use Statistical Parity Difference and Equalized Odds Difference to measure it mathematically. To study systematically how factors such as bias, access to protected categories, and type of intervention affect fairness and accuracy, we performed a simulation structured as a multi-factor experiment. Our results indicate that accuracy and fairness (in terms of statistical parity and equalized odds) tend to move in opposite directions. This raises the question of whether methods can be developed that consider accuracy and fairness simultaneously.
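The two fairness measures named in the abstract have simple closed forms: Statistical Parity Difference is the gap in positive-prediction rates between two groups, and Equalized Odds Difference is the larger of the gaps in true-positive and false-positive rates. A minimal sketch in plain Python (the function names and list-based interface here are illustrative, not the paper's supplementary implementation):

```python
# Illustrative implementations of the two fairness metrics from the abstract.
# `group` holds a binary protected attribute (0/1); predictions and labels
# are binary 0/1 as well.

def statistical_parity_difference(y_pred, group):
    """SPD = P(yhat=1 | group=0) - P(yhat=1 | group=1)."""
    def positive_rate(g):
        preds = [p for p, a in zip(y_pred, group) if a == g]
        return sum(preds) / len(preds)
    return positive_rate(0) - positive_rate(1)

def equalized_odds_difference(y_true, y_pred, group):
    """Max over y in {0,1} of |P(yhat=1 | y, group=0) - P(yhat=1 | y, group=1)|,
    i.e. the larger of the TPR gap and the FPR gap between groups."""
    def conditional_rate(g, y):
        preds = [p for p, t, a in zip(y_pred, y_true, group) if a == g and t == y]
        return sum(preds) / len(preds)
    return max(abs(conditional_rate(0, y) - conditional_rate(1, y)) for y in (0, 1))

# Small worked example: group 0 receives positive predictions at rate 0.5,
# group 1 at rate 0.25, so SPD = 0.25; the TPR gap is 0.5 and the FPR gap
# is 0.0, so the equalized odds difference is 0.5.
group  = [0, 0, 0, 0, 1, 1, 1, 1]
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
print(statistical_parity_difference(y_pred, group))          # 0.25
print(equalized_odds_difference(y_true, y_pred, group))      # 0.5
```

A value of 0 on either metric indicates perfect fairness in that sense; the simulation study varies the data-generation factors and tracks how these gaps trade off against accuracy.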

Supplementary material

The supplementary materials include the data generation process described in Section 3.2 as well as the full Python code. The Python implementation is also available at https://github.com/borhan-stat/fairness-simulation-paper.



Copyright
2026 The Author(s). Published by the School of Statistics and the Center for Applied Statistics, Renmin University of China.
Open access article under the CC BY license.

Keywords
equalized odds, ethical principles of science, factorial design, statistical parity, unbiasedness

Funding
This work was partly supported by a grant from the Washington Student Achievement Council (AWD00499).


Journal of Data Science

  • Online ISSN: 1683-8602
  • Print ISSN: 1680-743X
