Pub. online: 26 Jan 2026 · Type: Philosophies of Data Science · Open Access
Journal:Journal of Data Science
Volume 24, Issue 1 (2026): Special Issue: Statistical aspects of Trustworthy Machine Learning, pp. 4–25
Abstract
A central focus of data science is the transformation of empirical evidence into knowledge. By “knowledge,” we mean claims that are (i) supported by data through an explicit inferential procedure and (ii) accompanied by calibrated measures of uncertainty. As such, the scientific insights and attitudes of deep thinkers like Ronald A. Fisher, Karl R. Popper, and John W. Tukey are expected to inspire exciting new advances in machine learning and artificial intelligence in years to come. Along these lines, the present paper advances a novel typicality principle which states, roughly, that if the observed data is sufficiently “atypical” in a certain sense relative to a posited theory, then that theory is unwarranted. This emphasis on typicality brings familiar but often overlooked background notions like model-checking to the inferential foreground. One instantiation of the typicality principle is in the context of parameter estimation, where we propose a new typicality-based regularization strategy that leans heavily on goodness-of-fit testing. The effectiveness of this new regularization strategy is illustrated in three non-trivial examples where ordinary maximum likelihood estimation fails miserably. We also demonstrate how the typicality principle fits within a bigger picture of reliable and efficient uncertainty quantification.
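The typicality-based regularization strategy is only described at a high level above. As an illustrative sketch (not the paper's actual procedure), one way to operationalize "the data are too atypical under the posited model" is to pair a maximum likelihood fit with a goodness-of-fit check and flag the fit when the check fails. The example below fits a normal model by MLE and computes a one-sample Kolmogorov–Smirnov distance to the fitted distribution; the KS statistic and the threshold `d_max` are choices made here for concreteness, not taken from the paper.

```python
import math

def normal_cdf(x, mu, sigma):
    # CDF of N(mu, sigma^2) via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(data, mu, sigma):
    # One-sample Kolmogorov-Smirnov distance between the empirical CDF
    # of `data` and the fitted N(mu, sigma^2)
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = normal_cdf(x, mu, sigma)
        d = max(d, abs(f - i / n), abs(f - (i + 1) / n))
    return d

def typical_mle_normal(data, d_max=0.15):
    """Plain MLE for a normal model, plus a typicality flag.

    Returns (mu_hat, sigma_hat, ks_distance, is_typical). A False flag
    means the fitted model makes the observed data look atypical, so the
    fit should not be reported as warranted knowledge.
    """
    n = len(data)
    mu = sum(data) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / n)  # MLE (biased) variance
    d = ks_statistic(data, mu, sigma)
    return mu, sigma, d, d <= d_max
```

For example, `typical_mle_normal([1.0, 1.2, 0.8, 1.1, 0.9, 1.0, 1.3, 0.7])` returns the MLE pair together with the KS distance, so downstream code can refuse (or regularize) estimates whose fitted model fails the goodness-of-fit check.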
Getting a machine to understand the meaning of language is an important goal for a wide variety of fields, from advertising to entertainment. In this work, we focus on YouTube comments from the top two hundred trending videos as a source of user text data. Previous sentiment analysis models rely on hand-labelled data or predetermined lexicons. Our goal is to train a model to label comment sentiment with emoticons by training on other user-generated comments containing emoticons. Naive Bayes and Recurrent Neural Network models are both investigated and implemented in this study; the validation accuracies for the Naive Bayes and Recurrent Neural Network models are found to be 0.548 and 0.812, respectively.
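The emoticon-as-label training scheme can be sketched with a small hand-rolled Naive Bayes classifier. This is an illustrative toy, not the study's actual pipeline: the tokenization, features, and data below are invented for the example, with emoticons such as ':)' and ':(' standing in as class labels and Laplace smoothing applied to word counts.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label) pairs, where each label is an
    emoticon (e.g. ':)' or ':(') harvested from the comment itself."""
    class_counts = Counter()              # how many comments per emoticon
    word_counts = defaultdict(Counter)    # word frequencies per emoticon
    vocab = set()
    for tokens, label in docs:
        class_counts[label] += 1
        for t in tokens:
            word_counts[label][t] += 1
            vocab.add(t)
    return class_counts, word_counts, vocab

def predict_nb(model, tokens):
    """Return the most probable emoticon label for a tokenized comment."""
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label, c in class_counts.items():
        lp = math.log(c / total)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for t in tokens:
            # Laplace (add-one) smoothing so unseen words don't zero out
            lp += math.log((word_counts[label][t] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

A comment's own emoticon supplies the training label, so no hand annotation is needed; at prediction time the classifier assigns the emoticon whose word distribution best explains the comment.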