Black-box machine learning models are recognized as useful tools for prediction, but the algorithmic complexity of some models makes them difficult to interpret. Explainability methods have been proposed to provide insight into these models, yet little research has focused on supervised modeling with functional data inputs. We argue that, especially in high-consequence applications, it is important to explicitly model the functional dependence in a black-box analysis so that explanations do not obscure or misrepresent patterns in the data. To this end, we propose the Variable importance Explainable Elastic Shape Analysis (VEESA) pipeline for training supervised machine learning models with functional inputs. The pipeline covers data preprocessing, modeling, and post-hoc explanation. Preprocessing is done using elastic functional principal components analysis, which accounts for both vertical and horizontal variability in functional data and ultimately allows for explanations in the original data space that identify the important functional variability without bias due to correlated variables. We demonstrate the pipeline on two high-consequence applications: explosives classification for national security and inkjet printer identification in forensic science. These applications exhibit the VEESA pipeline’s ability to provide an understanding of the characteristics of the functional data that are useful for prediction. Code for implementing the pipeline is available in the veesa R package (with supplemental Python code).
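The general shape of the pipeline — project functional data onto principal components, fit a model on the scores, and explain predictions in terms of those components — can be sketched in a few lines. The sketch below is illustrative only: it uses ordinary FPCA (via an SVD) in place of elastic FPCA, so it captures only vertical variability, and a nearest-centroid classifier with permutation importance stands in for the pipeline’s actual learners and variable-importance method. None of the names or data below come from the veesa package’s API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated functional data: 100 curves observed on a common grid.
# Class-1 curves have a taller bump (vertical variability); elastic FPCA
# would additionally separate horizontal (phase) variability before this
# step -- ordinary FPCA is used here as a simplified stand-in.
t = np.linspace(0, 1, 50)
y = rng.integers(0, 2, 100)                      # binary labels
amp = 1.0 + 0.5 * y + 0.1 * rng.standard_normal(100)
X = amp[:, None] * np.exp(-((t - 0.5) ** 2) / 0.02)
X += 0.05 * rng.standard_normal(X.shape)

# Step 1: FPCA via SVD of the centered data matrix.
mean_curve = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean_curve, full_matrices=False)
scores = U * s                                   # FPC scores (n x k)
k = 3                                            # retained components
Z = scores[:, :k]

# Step 2: a simple classifier on the scores (nearest class centroid).
centroids = np.stack([Z[y == c].mean(axis=0) for c in (0, 1)])
def predict(z):
    return np.argmin(((z[:, None, :] - centroids) ** 2).sum(-1), axis=1)
acc = (predict(Z) == y).mean()

# Step 3: permutation importance of each FPC; each component maps back
# to the original data space through its principal direction Vt[j].
imp = []
for j in range(k):
    Zp = Z.copy()
    Zp[:, j] = rng.permutation(Zp[:, j])
    imp.append(acc - (predict(Zp) == y).mean())
print("accuracy:", round(acc, 2), "importance:", np.round(imp, 2))
```

Because each score dimension corresponds to a direction `Vt[j]` in the original function space, an important component can be visualized as a mode of functional variability rather than as an opaque feature index — the property the pipeline exploits for explanations.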
The attention mechanism has become an almost ubiquitous component of deep learning architectures. One of its distinctive features is that it computes a non-negative probability distribution to re-weight input representations. This work reconsiders attention weights as bidirectional coefficients rather than probabilistic measures, for potential benefits in interpretability and representational capacity. After analyzing how attention scores evolve under backward gradient propagation, we propose a novel activation function, TanhMax, which possesses several favorable properties that satisfy the requirements of bidirectional attention. We conduct a battery of experiments on both text and image datasets to validate our analyses and the advantages of the proposed method. The results show that bidirectional attention is effective in revealing the semantics of input units, produces more interpretable explanations, and increases the expressive power of attention-based models.
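The contrast between probabilistic and bidirectional weighting can be made concrete. The abstract does not give TanhMax’s formula, so the sketch below uses one plausible form — tanh applied elementwise, normalized by total absolute mass — purely to illustrate signed attention weights; the paper’s actual definition may differ.

```python
import numpy as np

def softmax(x):
    """Standard attention normalization: all weights are non-negative
    and sum to 1, so no input can receive a negative coefficient."""
    e = np.exp(x - x.max())
    return e / e.sum()

def tanhmax_sketch(x, eps=1e-8):
    """Hypothetical bidirectional normalization (NOT necessarily the
    paper's TanhMax): tanh preserves the sign of each score, and the
    absolute-mass denominator keeps sum(|w_i|) = 1."""
    t = np.tanh(x)
    return t / (np.abs(t).sum() + eps)

scores = np.array([2.0, -1.0, 0.5])
w_soft = softmax(scores)         # all entries >= 0
w_tanh = tanhmax_sketch(scores)  # the negative score keeps its sign
```

Under this sketch, an input with a negative score contributes with a negative coefficient instead of being squeezed toward a small positive weight, which is the kind of behavior the abstract argues helps reveal input semantics.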
Pub. online: 2 May 2024 · Type: Data Science in Action · Open Access
Journal: Journal of Data Science
Volume 22, Issue 2 (2024): Special Issue: 2023 Symposium on Data Science and Statistics (SDSS): “Inquire, Investigate, Implement, Innovate”, pp. 191–207
Abstract
Attention Deficit Hyperactivity Disorder (ADHD) is a frequent neurodevelopmental disorder in children that is commonly diagnosed subjectively. Objective detection of ADHD from neuroimaging data has been a complex problem with low accuracy, possibly due to, among other factors, complex diagnostic processes, the high number of features considered, and imperfect measurements in data collection. Hence, reliable neuroimaging biomarkers for detecting ADHD have been elusive. To address this problem, we consider a recently proposed multi-model selection method called the Sparse Wrapper AlGorithm (SWAG), a greedy algorithm that combines screening and wrapper approaches to create a set of low-dimensional models with good predictive power. While preserving previous levels of accuracy, SWAG provides a measure of the importance of brain regions for identifying ADHD. Our approach also provides a set of equally performing, simple models which highlight the main feature combinations to be analyzed and the interactions between them. Taking advantage of the network of models resulting from this approach, we confirm the relevance of the frontal and temporal lobes and highlight how the different regions interact to detect the presence of ADHD. These results are fairly consistent across the different learning mechanisms employed within SWAG (i.e., logistic regression and linear- and radial-kernel support vector machines), thereby providing population-level insights as well as feature combinations that are smaller, and often perform better, than those that would be obtained by employing the original learners directly.
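The screen-then-wrap idea behind SWAG can be sketched on toy data: evaluate all single-feature models, keep the best few, and then grow larger models only from features that appeared in the kept models. The split-half nearest-centroid accuracy, the pool-growing rule, and all tuning constants below are simplifications chosen for illustration, not SWAG’s actual learners or screening thresholds.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: only 2 of the 8 features carry class signal.
n, p = 200, 8
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, p))
X[:, 0] += 1.2 * y
X[:, 3] -= 1.0 * y

def cv_accuracy(cols):
    """Split-half accuracy of a nearest-centroid rule on a feature
    subset (a stand-in for the learners used inside SWAG)."""
    Z = X[:, list(cols)]
    tr = np.arange(n) < n // 2
    c = np.stack([Z[tr & (y == k)].mean(axis=0) for k in (0, 1)])
    pred = np.argmin(((Z[~tr][:, None] - c) ** 2).sum(-1), axis=1)
    return (pred == y[~tr]).mean()

# Greedy SWAG-style search: screen single features, then only grow
# models from features appearing in the best low-dimensional models.
q = 3            # models kept per dimension
max_dim = 3
kept = {1: sorted(((cv_accuracy((j,)), (j,)) for j in range(p)),
                  reverse=True)[:q]}
for d in range(2, max_dim + 1):
    pool = {j for _, m in kept[d - 1] for j in m}
    cands = {tuple(sorted(set(m) | {j}))
             for _, m in kept[d - 1] for j in pool if j not in m}
    kept[d] = sorted(((cv_accuracy(m), m) for m in cands),
                     reverse=True)[:q]

best_acc, best_model = max(v for vs in kept.values() for v in vs)
print(best_model, round(best_acc, 2))
```

The output of the search is not a single winner but the whole `kept` network of small, similarly performing models — which is what allows the abstract’s analysis of feature combinations and their interactions.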
Abstract: The study of factor analytic models often has to address two important issues: (a) the determination of the “optimum” number of factors and (b) the derivation of a unique simple structure whose interpretation is easy and straightforward. The classical approach deals with these two tasks separately, and sometimes resorts to ad hoc methods. This paper proposes a Bayesian approach to these two important issues, and adapts ideas from stochastic geometry and Bayesian finite mixture modelling to construct an ergodic Markov chain having the posterior distribution of the complete collection of parameters (including the number of factors) as its equilibrium distribution. The proposed method uses an Automatic Relevance Determination (ARD) prior as the device for achieving the desired simple structure. A Gibbs sampler updating scheme is then combined with the simulation of a continuous-time birth-and-death point process to produce a sampling scheme that efficiently explores the posterior distribution of interest. The MCMC sample path obtained from the simulated posterior then provides a flexible ingredient for most of the inferential tasks of interest. Illustrations on both artificial and real tasks are provided, while major difficulties and challenges are discussed, along with ideas for future improvements.
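The dimension-changing ingredient — letting the chain add (“birth”) or remove (“death”) a factor so the number of factors is sampled rather than fixed — can be illustrated on a toy target. The sketch below replaces the paper’s continuous-time birth-and-death process and the intractable posterior p(k | data) with a simple discrete Metropolis chain targeting a Poisson(3) distribution over k; it shows only the mechanics of moving across model dimensions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy target over the number of factors k: a Poisson(3) distribution,
# standing in for the marginal posterior p(k | data) in the paper.
lam = 3.0
def log_target(k):
    return k * np.log(lam) - lam - np.sum(np.log(np.arange(1, k + 1)))

# Discrete birth-death moves: propose k -> k+1 ("birth") or k -> k-1
# ("death") and accept with the Metropolis ratio, so the chain moves
# across model dimensions (proposals below k = 0 are simply rejected).
k, draws = 3, []
for _ in range(20000):
    prop = k + rng.choice([-1, 1])
    if prop >= 0 and np.log(rng.random()) < log_target(prop) - log_target(k):
        k = prop
    draws.append(k)
post_mean = np.mean(draws[2000:])
```

After burn-in, the empirical distribution of `draws` approximates the target over k, which is the sense in which such a chain lets the number of factors be inferred alongside the other parameters.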