Journal of Data Science


Do Americans Think the Digital Economy is Fair? Using Supervised Learning to Explore Evaluations of Predictive Automation
Volume 20, Issue 3 (2022): Special Issue: Data Science Meets Social Sciences, pp. 381–399
Emilio Lehoucq  

https://doi.org/10.6339/22-JDS1053
Pub. online: 20 June 2022 · Type: Data Science In Action · Open Access

Received: 2 December 2021
Accepted: 26 May 2022
Published: 20 June 2022

Abstract

Predictive automation is a pervasive and archetypical example of the digital economy. Studying how Americans evaluate predictive automation is important because it affects corporate and state governance. However, relevant questions remain unanswered: we lack comparisons across use cases based on a nationally representative sample, and we have yet to determine what the key predictors of evaluations of predictive automation are. This article uses the American Trends Panel’s 2018 wave ($n=4,594$) to study whether American adults think predictive automation is fair across four use cases: helping credit decisions, assisting parole decisions, filtering job applicants based on interview videos, and assessing job candidates based on resumes. Results from lasso regressions trained with 112 predictors reveal that people’s evaluations of predictive automation align with their views about social media, technology, and politics.
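
For readers unfamiliar with the method, the sketch below shows one way such a model could be fit in R with the glmnet package: a lasso-penalized logistic regression predicting a binary fairness evaluation from a wide set of survey predictors, with the penalty chosen by cross-validation. This is a minimal illustration and not the author's replication code; the data frame atp, the outcome fair_credit, and the file name are hypothetical placeholders.

```r
# Minimal sketch (not the author's replication code): lasso logistic
# regression for one use case, with the penalty chosen by cross-validation.
# The data frame `atp`, the outcome `fair_credit`, and the input file are
# hypothetical placeholders standing in for the processed ATP data.
library(glmnet)

atp <- read.csv("atp_2018_processed.csv")  # hypothetical processed survey file

# Binary outcome: does the respondent rate automated credit decisions as fair?
y <- atp$fair_credit

# Model matrix of the remaining predictors (factors expanded to dummy variables)
X <- model.matrix(~ . - fair_credit, data = atp)[, -1]

set.seed(1)
cv_fit <- cv.glmnet(X, y, family = "binomial", alpha = 1, nfolds = 10)

# Predictors retained at the penalty that minimizes cross-validated deviance
coef(cv_fit, s = "lambda.min")
```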

Supplementary material

This article includes a replication file with an R project, unprocessed and processed data, and a table listing all the predictors used in the models, how they are measured, and their pre-processing. The online appendix referred to in the text is also available as a supplement online at the journal’s website.



Copyright
2022 The Author(s). Published by the School of Statistics and the Center for Applied Statistics, Renmin University of China.
Open access article under the CC BY license.

Keywords
algorithmic fairness · artificial intelligence · machine learning · public understanding of science and technology

