Piecewise linear-quadratic (PLQ) functions are a fundamental function class in convex optimization, especially within the Empirical Risk Minimization (ERM) framework, which employs a variety of PLQ loss functions. This paper provides a workflow for decomposing a general convex PLQ loss into its ReLU-ReHU representation, along with a Python implementation designed to make formulating and solving ERM problems more efficient, particularly when integrated with ReHLine (a powerful solver for PLQ ERMs). Our proposed package, plqcom, accepts three representations of PLQ functions and offers user-friendly APIs for verifying their convexity and continuity. The Python package is available at https://github.com/keepwith/PLQComposite.
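For context, a minimal sketch of the ReLU-ReHU form targeted by ReHLine-type solvers (coefficient symbols $u_l, v_l, s_h, t_h, \tau_h$ are generic placeholders, not necessarily the paper's notation): a convex PLQ loss $L(z)$ is written as a finite sum of composite ReLU and ReHU terms,
$$
L(z) = \sum_{l=1}^{L} \mathrm{ReLU}(u_l z + v_l) + \sum_{h=1}^{H} \mathrm{ReHU}_{\tau_h}(s_h z + t_h),
\qquad
\mathrm{ReHU}_{\tau}(z) =
\begin{cases}
0, & z \le 0, \\
z^2/2, & 0 < z \le \tau, \\
\tau\,(z - \tau/2), & z > \tau,
\end{cases}
$$
where $\mathrm{ReLU}(z) = \max(z, 0)$ and $\mathrm{ReHU}_{\tau}$ is a rectified Huber unit.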
Deep neural networks have a wide range of applications in data science. This paper reviews neural network modeling algorithms and their applications in both supervised and unsupervised learning. Key examples include: (i) binary classification and (ii) nonparametric regression function estimation, both implemented with feedforward neural networks ($\mathrm{FNN}$); (iii) sequential data prediction using long short-term memory ($\mathrm{LSTM}$) networks; and (iv) image classification using convolutional neural networks ($\mathrm{CNN}$). All implementations are provided in $\mathrm{MATLAB}$, making these methods accessible to statisticians and data scientists to support learning and practical application.
Historical data or real-world data are often available in clinical trials, genetics, health care, psychology, environmental health, engineering, economics, and business. Power priors have emerged as a useful class of informative priors for a variety of situations in which historical data are available. In this paper, an overview of the development of the power priors is provided. Several variations of the power prior are derived under a binomial regression model and a normal linear regression model. The development of software for the power priors is also briefly reviewed. Throughout the paper, data from the Kociba study and the National Toxicology Program study, as well as data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) study, are used to demonstrate the derivations of the power priors and their variations. Detailed analyses of the data from these studies are carried out to further demonstrate the usefulness of the power priors and their variations in these real applications. Finally, directions for future research on the power priors are discussed.
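For readers unfamiliar with the construction, a brief sketch of the standard (conditional) power prior with historical data $D_0$, initial prior $\pi_0(\theta)$, and discounting parameter $a_0 \in [0, 1]$, together with its normalized variant that treats $a_0$ as random:
$$
\pi(\theta \mid D_0, a_0) \propto L(\theta \mid D_0)^{a_0}\, \pi_0(\theta),
\qquad
\pi(\theta, a_0 \mid D_0) \propto \frac{L(\theta \mid D_0)^{a_0}\, \pi_0(\theta)}{\int L(\theta \mid D_0)^{a_0}\, \pi_0(\theta)\, d\theta}\, \pi(a_0),
$$
where $L(\theta \mid D_0)$ is the likelihood of the historical data; $a_0 = 0$ discards the historical data and $a_0 = 1$ pools it fully with the current data.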
The last decade has seen a vast increase in the abundance of data, fuelling the need for data analytic tools that can keep up with the data size and complexity. This has changed the way we analyze data: moving away from single data analysts working on their individual computers to large clusters and distributed systems leveraged by dozens of data scientists. Technological advances have been addressing the scalability aspects; however, the resulting complexity means that more people are involved in a data analysis than before. Collaboration and the leveraging of others’ work become crucial in the modern, interconnected world of data science. In this article we propose and describe RCloud, an open-source, web-based, collaborative visualization and data analysis platform. It decouples the user from the location of the data analysis while preserving security, interactivity, and visualization capabilities. Its collaborative features enable data scientists to explore, work together, and share analyses in a seamless fashion. We describe the concepts and design decisions that enabled it to support large data science teams in industry and academia.
Estimating healthcare expenditures is important for policymakers and clinicians. The expenditure of patients facing a life-threatening illness can often be segmented into four distinct phases: diagnosis, treatment, stable, and terminal. The diagnosis phase encompasses healthcare expenses incurred prior to the disease diagnosis, attributed to frequent healthcare visits and diagnostic tests. The treatment phase, following diagnosis, typically sees high expenditure due to various treatments, which gradually tapers off into a stable phase and, eventually, a terminal phase. In this project, we introduce a pre-disease phase preceding the diagnosis phase, serving as a baseline for healthcare expenditure, and thus propose a five-phase model to evaluate healthcare expenditures. We use a piecewise linear model with three population-level change points and $4p$ subject-level parameters to capture expenditure trajectories and identify transitions between phases, where $p$ is the number of covariates. To estimate the model’s coefficients, we apply generalized estimating equations, while a grid-search approach is used to estimate the change-point parameters by minimizing the residual sum of squares. In our analysis of expenditures for stages I–III pancreatic cancer patients using the SEER-Medicare database, we find that the diagnosis phase begins one month before diagnosis, followed by an initial treatment phase lasting three months. The stable phase continues until eight months before death, at which point the terminal phase begins, marked by a renewed increase in expenditures.
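One plausible form of such a model (our notation, not necessarily the authors' exact parameterization) writes the mean expenditure of subject $i$ at time $t$ as a linear spline in $t$ with covariate-dependent intercept and slopes:
$$
E\!\left[Y_i(t)\right] = x_i^{\top}\beta_0 + x_i^{\top}\beta_1\,(t - \tau_1)_{+} + x_i^{\top}\beta_2\,(t - \tau_2)_{+} + x_i^{\top}\beta_3\,(t - \tau_3)_{+},
$$
where $x_i$ is the $p$-dimensional covariate vector, $(u)_{+} = \max(u, 0)$, $\tau_1 < \tau_2 < \tau_3$ are the population-level change points, and the four coefficient vectors $\beta_0, \dots, \beta_3$ account for the $4p$ subject-level parameters.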
Large pretrained transformer models have revolutionized modern AI applications with their state-of-the-art performance in natural language processing (NLP). However, their substantial parameter count poses challenges for real-world deployment. To address this, researchers often reduce model size by pruning parameters based on their magnitude or sensitivity. Previous research has demonstrated the limitations of magnitude pruning, especially in the context of transfer learning for modern NLP tasks. In this paper, we introduce a new magnitude-based pruning algorithm called mixture Gaussian prior pruning (MGPP), which employs a mixture Gaussian prior for regularization. MGPP prunes non-expressive weights under the guidance of the mixture Gaussian prior, aiming to retain the model’s expressive capability. Extensive evaluations across various NLP tasks, including natural language understanding, question answering, and natural language generation, demonstrate the superiority of MGPP over existing pruning methods, particularly in high sparsity settings. Additionally, we provide a theoretical justification for the consistency of the sparse transformer, shedding light on the effectiveness of the proposed pruning method.
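As a rough sketch of the kind of prior involved (our notation; the hyperparameters shown are illustrative), each weight $w$ receives a two-component Gaussian mixture prior with a narrow "spike" and a wide "slab", and weights attributed to the spike component are treated as non-expressive and pruned:
$$
\pi(w) = \lambda\, \mathcal{N}(w; 0, \sigma_1^2) + (1 - \lambda)\, \mathcal{N}(w; 0, \sigma_0^2),
\qquad \sigma_0^2 \ll \sigma_1^2,\ \lambda \in (0, 1).
$$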
When computations such as statistical simulations need to be carried out on a high performance computing (HPC) cluster, typical questions arise among researchers and practitioners. How do I interact with an HPC cluster? Do I need to type a long host name and a password on every single login or file transfer? Why does my locally working code no longer run on the HPC cluster? How can I install the latest versions of software on an HPC cluster to match my local setup? How can I submit a job and monitor its progress? This tutorial provides answers to such questions, with experiments on an example HPC cluster.
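As a concrete illustration of the job-submission and monitoring questions above, a minimal sketch assuming a SLURM-managed cluster (the tutorial's cluster may use a different scheduler; `run_simulation.py` is a hypothetical entry point):

```python
# Hypothetical sketch: submit a simulation job via SLURM's sbatch and poll
# squeue until the job leaves the queue. Resource requests are placeholders.
import subprocess
import time

job_script = """#!/bin/bash
#SBATCH --job-name=sim
#SBATCH --time=01:00:00
#SBATCH --cpus-per-task=4
python run_simulation.py
"""

with open("sim.sbatch", "w") as f:
    f.write(job_script)

# sbatch prints "Submitted batch job <id>"; keep the id for monitoring.
out = subprocess.run(["sbatch", "sim.sbatch"],
                     capture_output=True, text=True, check=True)
job_id = out.stdout.strip().split()[-1]

# Poll until squeue no longer lists the job (i.e., it has finished).
while True:
    q = subprocess.run(["squeue", "-j", job_id, "-h"],
                       capture_output=True, text=True)
    if not q.stdout.strip():
        break
    time.sleep(30)
print(f"Job {job_id} finished.")
```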
Connections between subpar dietary choices and negative health consequences are well established in the field of nutritional epidemiology. Consequently, in the United States, there is a standard practice of conducting regular surveys to evaluate dietary habits. One notable example is the National Health and Nutrition Examination Survey (NHANES), conducted every two years by the Centers for Disease Control and Prevention (CDC). Several scoring methods have been developed to assess diet quality in the overall population, as well as in pertinent subgroups, using the dietary recall data collected in these surveys. The Healthy Eating Index (HEI) is one such metric, developed based on recommendations from the United States Department of Health and Human Services (HHS) and Department of Agriculture (USDA) and widely used by nutritionists. Presently, there is a scarcity of user-friendly statistical software implementing these standard scoring metrics. Herein, we develop an R package, heiscore, to address this need. Our carefully designed package, with its many user-friendly features, increases the accessibility of HEI scoring using three different methods outlined by the National Cancer Institute (NCI). Additionally, we provide functions to visualize multidimensional diet quality data via various graphing techniques, including bar charts and radar charts. Its utility is illustrated with many examples, including comparisons between different demographic groups.
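To give a sense of how such scoring works (a toy illustration only, with placeholder cutoffs, not the heiscore API): HEI adequacy components are typically scored on an energy-adjusted density scale, interpolating linearly between a zero-score standard and a maximum-score standard.

```python
# Toy density-based scoring of a single HEI adequacy component.
# The cutoff and point values below are illustrative placeholders.
def component_score(intake, energy_kcal, max_standard, max_points):
    """Score one adequacy component from intake per 1000 kcal."""
    if energy_kcal <= 0:
        return 0.0
    density = intake / energy_kcal * 1000.0          # intake per 1000 kcal
    return max_points * min(density / max_standard, 1.0)

# Example: a hypothetical fruit component worth 5 points, maxing out at
# 0.8 cup equivalents per 1000 kcal.
print(component_score(intake=1.2, energy_kcal=2000, max_standard=0.8, max_points=5))
```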
Yang et al. (2004) developed two-dimensional principal component analysis (2DPCA) for image representation and recognition, which has since been widely used in fields including face recognition, biometrics, cancer diagnosis, and tumor classification. 2DPCA has been shown to perform better and be computationally more efficient than traditional principal component analysis (PCA). However, some theoretical properties of 2DPCA remain unknown, including how to determine the number of principal components (PCs) from the training set, which is a critical step in applying 2DPCA. The lack of a rigorous criterion for determining the number of PCs hampers the wider application of 2DPCA. To address this issue, we propose a new method based on parallel analysis to determine the number of PCs in 2DPCA with statistical justification. Several image classification experiments demonstrate that the proposed method compares favourably with other state-of-the-art approaches in terms of recognition accuracy and storage requirements, with a low computational cost.
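A hedged sketch of how parallel analysis can be adapted to 2DPCA (the permutation scheme and quantile below are illustrative, not necessarily the paper's choices): compare the eigenvalues of the image covariance matrix with those obtained after randomly permuting pixel values, and retain components whose observed eigenvalues exceed the permutation quantile.

```python
# Parallel-analysis-style selection of the number of 2DPCA components.
import numpy as np

def image_cov_eigvals(images):
    """Eigenvalues of the 2DPCA image covariance G = mean((A - Abar)^T (A - Abar))."""
    A = np.asarray(images, dtype=float)               # shape (n, h, w)
    centered = A - A.mean(axis=0)
    G = np.einsum("nij,nik->jk", centered, centered) / A.shape[0]
    return np.sort(np.linalg.eigvalsh(G))[::-1]       # descending order

def parallel_analysis_2dpca(images, n_perm=50, quantile=0.95, seed=None):
    rng = np.random.default_rng(seed)
    obs = image_cov_eigvals(images)
    null = np.empty((n_perm, obs.size))
    for b in range(n_perm):
        # Permute pixel values within each image to build a "no structure" reference.
        perm = np.asarray([rng.permutation(img.ravel()).reshape(img.shape)
                           for img in images])
        null[b] = image_cov_eigvals(perm)
    threshold = np.quantile(null, quantile, axis=0)
    return int(np.sum(obs > threshold))               # number of retained PCs
```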
Ultrasonic testing has been considered a promising method for diagnosing and characterizing masonry walls. As ultrasonic waves tend to travel faster in denser materials, they are commonly used to evaluate the condition of various materials. The presence of internal voids, for example, would alter the wave path, and this distinct behavior can be exploited to identify unknown conditions within a material and assess its state. We therefore applied mixed models and Gaussian processes to analyze the behavior of ultrasonic waves in masonry walls and identify the relevant factors affecting their propagation. Under both models, we observed that the average propagation time differs depending on the material, and that the condition of the wall also influences the propagation time. We compare the performance of the Gaussian process and mixed models and conclude that these models can be useful within a classification model for automatically identifying anomalies in masonry walls.
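A minimal sketch of the Gaussian-process side of such an analysis (the feature names and measurements below are hypothetical, not the paper's dataset): propagation time is modeled as a smooth function of the test conditions, with categorical factors one-hot encoded.

```python
# Gaussian process regression of ultrasonic propagation time on test conditions.
import pandas as pd
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical measurements: material, wall condition, sensor distance, and time.
df = pd.DataFrame({
    "material":    ["brick", "brick", "concrete", "concrete", "brick", "concrete"],
    "condition":   ["intact", "void", "intact", "void", "void", "intact"],
    "distance_cm": [20.0, 20.0, 20.0, 20.0, 30.0, 30.0],
    "time_us":     [55.0, 80.0, 45.0, 70.0, 78.0, 62.0],
})

# One-hot encode the categorical factors; keep distance as a numeric feature.
X = pd.get_dummies(df[["material", "condition", "distance_cm"]]).to_numpy(dtype=float)
y = df["time_us"].to_numpy()

# RBF kernel for smooth dependence plus a white-noise term for measurement error.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)
pred, sd = gp.predict(X, return_std=True)
print(pred, sd)
```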