Abstract: Comparative mathematical textbook analysis aims at determining differences among countries in the development and transmission of mathematics. Textual statistics, on the other hand, provides a means to quantify a text by applying multivariate statistical techniques. So far this statistical approach has not been applied to comparative mathematical textbook analysis. The objective of this paper is to quantify and compare the style of a number of textbooks on differential calculus written in 18th-century Europe. To that purpose two multivariate statistical techniques have been applied: 1) simple correspondence analysis and 2) hierarchical clustering analysis. The results of both analyses help to detect some interesting associations among the analysed textbooks.
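The hierarchical clustering step mentioned above can be sketched as follows. This is a minimal single-linkage agglomerative clustering on Euclidean distances; the textbook names and frequency profiles are purely illustrative placeholders, not the paper's data.

```python
import math

# Hypothetical relative-frequency profiles of stylistic features
# (illustrative numbers only, not the study's actual measurements).
profiles = {
    "Euler":     [0.30, 0.25, 0.45],
    "Agnesi":    [0.28, 0.27, 0.45],
    "L'Hopital": [0.10, 0.55, 0.35],
}

def dist(a, b):
    """Euclidean distance between two feature profiles."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def single_linkage(items):
    """Agglomerative clustering with single linkage; returns the merge order."""
    clusters = {name: [name] for name in items}
    merges = []
    while len(clusters) > 1:
        best = None
        for u in clusters:
            for v in clusters:
                if u >= v:
                    continue
                # single linkage: distance between nearest members
                d = min(dist(items[a], items[b])
                        for a in clusters[u] for b in clusters[v])
                if best is None or d < best[0]:
                    best = (d, u, v)
        d, u, v = best
        clusters[u + "+" + v] = clusters.pop(u) + clusters.pop(v)
        merges.append((u, v, round(d, 3)))
    return merges

merges = single_linkage(profiles)
```

With these made-up profiles, the two stylistically similar textbooks merge first, which is the kind of association the dendrogram in such an analysis reveals.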
Abstract: Correlation coefficients are generally viewed as mere summaries, causing them to be underutilized. Creating functions from them extends their use to diverse areas of statistics. Because there are many correlation coefficients (see, for example, Gideon (2007)), this extension makes possible a very broad range of statistical estimators that rivals least squares. The whole area could be called a “Correlation Estimation System.” This paper outlines some of the numerous possibilities for using the system and gives some illustrative examples; detailed explanations are developed in earlier papers. The formulae needed for estimation, together with some of the computer code that implements them, are given. This approach has been taken in the hope that this condensed version of the work will make the ideas accessible, show their practicality, and promote further development.
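One standard way to turn a correlation coefficient into an estimator, in the spirit described above, is to take the regression slope b as the root of r(x, y - b*x) = 0. The sketch below does this for Kendall's tau via bisection; the data are invented for illustration, and the details of Gideon's own system may differ.

```python
def sign(v):
    return (v > 0) - (v < 0)

def kendall_tau(x, y):
    """Kendall's tau: proportion of concordant minus discordant pairs."""
    n = len(x)
    s = sum(sign(x[j] - x[i]) * sign(y[j] - y[i])
            for i in range(n) for j in range(i + 1, n))
    return 2 * s / (n * (n - 1))

def slope_from_correlation(x, y, r=kendall_tau, lo=-100.0, hi=100.0, tol=1e-9):
    """Estimate the slope b as the zero of b -> r(x, y - b*x) by bisection;
    for monotone correlations this function is non-increasing in b."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        resid = [yi - mid * xi for xi, yi in zip(x, y)]
        if r(x, resid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = [1, 2, 3, 4, 5]
y = [2.1, 4.0, 6.2, 8.1, 9.9]
b = slope_from_correlation(x, y)
```

Plugging in Kendall's tau recovers a Theil–Sen-type slope; plugging in Pearson's r would recover least squares, which is the sense in which the system "rivals" it.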
In this paper, a comparison is provided for volatility estimation in Bayesian and frequentist settings. We compare the predictive performance of the two approaches under the generalized autoregressive conditional heteroscedasticity (GARCH) model. Our results indicate that frequentist estimation provides better predictive potential than the Bayesian approach, a finding contrary to some of the work in this line of research. To illustrate the finding, we use six major foreign exchange rate datasets.
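The GARCH(1,1) conditional-variance recursion underlying both estimation approaches is compact enough to state directly. This sketch assumes hypothetical parameter values; the recursion itself, sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}, is the standard model form.

```python
def garch11_variances(returns, omega, alpha, beta):
    """Conditional variances of a GARCH(1,1) model:
    sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1}."""
    # Initialise at the unconditional variance omega / (1 - alpha - beta),
    # which exists when alpha + beta < 1 (covariance stationarity).
    sigma2 = [omega / (1 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r ** 2 + beta * sigma2[-1])
    return sigma2

def one_step_forecast(returns, sigma2, omega, alpha, beta):
    """One-step-ahead variance forecast, the quantity compared predictively."""
    return omega + alpha * returns[-1] ** 2 + beta * sigma2[-1]

# Illustrative daily returns and parameters (not the paper's FX data).
returns = [0.010, -0.020, 0.015, -0.005]
sigma2 = garch11_variances(returns, omega=1e-6, alpha=0.1, beta=0.85)
forecast = one_step_forecast(returns, sigma2, 1e-6, 0.1, 0.85)
```

The frequentist route maximizes the likelihood built from these variances, while the Bayesian route samples the posterior of (omega, alpha, beta); the paper compares the resulting forecasts.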
It is well known that under certain regularity conditions the bootstrap sampling distributions of common statistics are consistent with their true sampling distributions. However, the consistency results rely heavily on the underlying regularity conditions, and a failure to satisfy some of them may lead to a serious departure from consistency. Consequently, sampling distributions based on the ‘sufficient bootstrap’ method (which uses only the distinct units in a bootstrap sample in order to reduce the computational burden for larger sample sizes) will also be inconsistent. In this paper, we combine the ideas of the sufficient and m-out-of-n (m/n) bootstrap methods to regain consistency. We further propose an iterated version of this bootstrap method for non-regular cases, and our simulation study reveals that coverage accuracies similar to or even better than those of percentile bootstrap confidence intervals can be obtained through the proposed iterated sufficient m/n bootstrap with less computational time in each case.
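A minimal sketch of the combined idea, assuming a toy dataset and the sample maximum (a classic non-regular statistic): resample m < n units with replacement, then evaluate the statistic on the distinct units only. The iteration step and the paper's specific choice of m are omitted.

```python
import random

def sufficient_mn_bootstrap(data, stat, m, reps=2000, seed=1):
    """Sufficient m/n bootstrap: draw m (< n) units with replacement,
    then evaluate the statistic on the *distinct* units only."""
    rng = random.Random(seed)
    out = []
    for _ in range(reps):
        sample = [rng.choice(data) for _ in range(m)]
        distinct = list(set(sample))      # the 'sufficient' reduction
        out.append(stat(distinct))
    return sorted(out)

def percentile_ci(boot_stats, level=0.95):
    """Equal-tailed percentile interval from sorted bootstrap statistics."""
    k = len(boot_stats)
    lo = boot_stats[int((1 - level) / 2 * k)]
    hi = boot_stats[int((1 + level) / 2 * k) - 1]
    return lo, hi

data = list(range(100))                   # toy sample, n = 100
boot = sufficient_mn_bootstrap(data, max, m=25)   # m = o(n), e.g. sqrt-scale
ci = percentile_ci(boot)
```

Working with distinct units cuts the effective sample size per replicate, while taking m much smaller than n is what restores consistency for statistics like the maximum.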
Abstract: According to the available literature, the long-term survival and success rates of one-stage, non-submerged dental implants (implants that are not totally buried beneath the gum) are predictable. However, until now there has been no similar study in Taiwan regarding the efficacy of one-stage, non-submerged dental implants. This prospective study, from August 1997 to the end of 2005, includes 316 patients who received dental implants and prostheses and were followed up for at least 6 months, for a total of 717 implants. Life table analysis is used to analyze the effectiveness of the one-stage, non-submerged dental implant. Our results from this seven-year follow-up study indicate that the survival rate and success rate are 99.58% and 96.13%, respectively. This study strongly demonstrates that the efficacy of the one-stage, non-submerged dental implant is also predictable in Taiwan if patients are under regular follow-up after active treatment.
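The life table (actuarial) method referred to above multiplies interval-wise survival probabilities, with withdrawals counted as half at risk. The interval counts below are invented for illustration and are not the study's data.

```python
def life_table(intervals):
    """Actuarial life-table estimate of cumulative survival.
    Each interval is (number entering, failures, withdrawals)."""
    cumulative = 1.0
    rates = []
    for entering, failed, withdrawn in intervals:
        # Standard actuarial correction: withdrawals count as half at risk.
        at_risk = entering - withdrawn / 2.0
        cumulative *= 1.0 - failed / at_risk
        rates.append(cumulative)
    return rates

# Hypothetical yearly intervals for 717 implants (illustrative only).
intervals = [(717, 2, 40), (675, 1, 80), (594, 0, 120)]
rates = life_table(intervals)
```

The last entry of `rates` is the cumulative survival rate over the whole follow-up, the quantity reported as 99.58% in the study.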
In this article, we introduce an extension referred to as the exponentiated Weibull power function distribution based on the exponentiated Weibull-G family of distributions. The proposed model serves as an extension of the two-parameter power function distribution as well as a generalization of the Weibull power function presented by Tahir et al. (2016a). Various mathematical properties of the subject distribution are studied. General explicit expressions for the quantile function, expansion of density and distribution functions, moments, generating function, incomplete moments, conditional moments, residual life function, mean deviation, inequality measures, Rényi and q-entropies, probability weighted moments and order statistics are obtained. The estimation of the model parameters is discussed using the maximum likelihood method. Finally, the practical importance of the proposed distribution is examined through three real data sets. It is concluded that the new distribution works better than other competing models.
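The two-parameter power function distribution that the proposed model extends has CDF G(x) = (x/theta)^beta on (0, theta), which admits direct inverse-CDF sampling. This sketch covers only that baseline, not the full exponentiated Weibull construction, and the parameter values are illustrative.

```python
import random

def power_function_sample(theta, beta, n, seed=0):
    """Inverse-CDF sampling from the two-parameter power function
    distribution G(x) = (x/theta)**beta on (0, theta), the baseline
    that the exponentiated Weibull power function generalizes."""
    rng = random.Random(seed)
    # If U ~ Uniform(0,1), then theta * U**(1/beta) has CDF G.
    return [theta * rng.random() ** (1 / beta) for _ in range(n)]

xs = power_function_sample(theta=2.0, beta=3.0, n=5000)
mean = sum(xs) / len(xs)   # E[X] = theta * beta / (beta + 1) = 1.5 here
```

Sampling from the extended family would compose this quantile with the exponentiated Weibull-G transform, whose explicit quantile function the paper derives.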
Abstract: Identification of representative regimes of wave height and direction under different wind conditions is complicated by issues that relate to the specification of the joint distribution of variables that are defined on linear and circular supports and the occurrence of missing values. We take a latent-class approach and jointly model wave and wind data by a finite mixture of conditionally independent Gamma and von Mises distributions. Maximum-likelihood estimates of parameters are obtained by exploiting a suitable EM algorithm that allows for missing data. The proposed model is validated on hourly marine data obtained from a buoy and two tide gauges in the Adriatic Sea.
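The class-conditional independence assumption above means each mixture component's joint density is a product of a Gamma density (wave height) and a von Mises density (direction). A minimal sketch of that density, with the Bessel function I0 computed from its power series and all parameter values invented for illustration:

```python
import math

def bessel_i0(kappa, terms=30):
    """Modified Bessel function I0 via its power series."""
    return sum((kappa / 2) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

def gamma_pdf(x, shape, rate):
    """Gamma density for the linear variable (wave height)."""
    return (rate ** shape * x ** (shape - 1) * math.exp(-rate * x)
            / math.gamma(shape))

def von_mises_pdf(theta, mu, kappa):
    """von Mises density for the circular variable (direction)."""
    return math.exp(kappa * math.cos(theta - mu)) / (2 * math.pi * bessel_i0(kappa))

def mixture_density(height, direction, components):
    """Finite mixture of conditionally independent Gamma x von Mises
    components; each component is (weight, (shape, rate), (mu, kappa))."""
    return sum(w * gamma_pdf(height, a, b) * von_mises_pdf(direction, mu, k)
               for w, (a, b), (mu, k) in components)

# Two hypothetical regimes: calm seas vs. storm-driven waves.
components = [(0.7, (2.0, 2.0), (0.5, 4.0)),
              (0.3, (5.0, 1.5), (3.0, 2.0))]
dens = mixture_density(1.2, 0.4, components)
```

The EM algorithm in the paper alternates computing class responsibilities from this density (E-step) with reweighted parameter updates (M-step), with an extra marginalization to handle missing coordinates.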
Pupillary response behavior (PRB) refers to changes in pupil diameter in response to simple or complex stimuli. There are underlying, unique patterns hidden within complex, high-frequency PRB data that can be utilized to classify visual impairment, but those patterns cannot be described by traditional summary statistics. For such complex high-frequency data, the Hurst exponent, as a measure of the long-term memory of a time series, becomes a powerful tool to detect muted or irregular change patterns. In this paper, we propose robust estimators of the Hurst exponent based on non-decimated wavelet transforms. The properties of the proposed estimators are studied both theoretically and numerically. We apply our methods to PRB data to extract the Hurst exponent and then use it as a predictor to classify individuals with different degrees of visual impairment. Compared with other standard wavelet-based methods, our methods reduce the variance of the estimators and increase the classification accuracy.
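The general wavelet route to the Hurst exponent can be sketched with an ordinary (decimated) Haar transform: for fractional Brownian motion, the log2 of the mean squared detail coefficients grows roughly linearly in level with slope about 2H + 1. This is a simplified stand-in for the paper's non-decimated, robustified estimators, tested here on a simulated random walk (H near 0.5).

```python
import math
import random

def haar_details(x):
    """One Haar analysis step: returns (approximations, details)."""
    a = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    d = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    return a, d

def hurst_wavelet(x, levels=5):
    """Estimate H from the regression of log2(mean d_j^2) on level j;
    for fBm the slope is approximately 2H + 1, so H = (slope - 1) / 2."""
    js, ys = [], []
    a = list(x)
    for j in range(1, levels + 1):
        a, d = haar_details(a)
        energy = sum(c * c for c in d) / len(d)
        js.append(j)
        ys.append(math.log2(energy))
    n = len(js)
    jbar, ybar = sum(js) / n, sum(ys) / n
    slope = (sum((j - jbar) * (y - ybar) for j, y in zip(js, ys))
             / sum((j - jbar) ** 2 for j in js))
    return (slope - 1) / 2

# Simulated random walk: true Hurst exponent 0.5.
rng = random.Random(42)
walk, pos = [], 0.0
for _ in range(4096):
    pos += rng.gauss(0, 1)
    walk.append(pos)
H = hurst_wavelet(walk)
```

The non-decimated transform of the paper keeps all shifts at every level, which is what reduces the variance of the estimator relative to this decimated sketch.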
Abstract: Good inference for the random effects in a linear mixed-effects model is important because of their role in decision making. For example, estimates of the random effects may be used to make decisions about the quality of medical providers such as hospitals, surgeons, etc. Standard methods assume that the random effects are normally distributed, but this may be problematic because inferences are sensitive to this assumption and to the composition of the study sample. We investigate whether using a Dirichlet process prior instead of a normal prior for the random effects is effective in reducing the dependence of inferences on the study sample. Specifically, we compare the two models, normal and Dirichlet process, emphasizing inferences for extrema. Our main finding is that using the Dirichlet process prior provides inferences that are substantially more robust to the composition of the study sample.
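The Dirichlet process prior mentioned above is often represented by its stick-breaking construction, which makes concrete why it relaxes the normal-prior assumption: a DP draw is a discrete distribution whose atoms come from the base measure. A minimal truncated sketch, with illustrative parameter values:

```python
import random

def stick_breaking(alpha, base_draw, n_atoms=200, seed=0):
    """Truncated stick-breaking construction of a Dirichlet process draw.
    Returns (weights, atoms); weights sum to ~1 for large n_atoms."""
    rng = random.Random(seed)
    weights, atoms, remaining = [], [], 1.0
    for _ in range(n_atoms):
        v = rng.betavariate(1, alpha)   # Beta(1, alpha) stick fractions
        weights.append(remaining * v)
        remaining *= 1 - v
        atoms.append(base_draw(rng))
    return weights, atoms

# Base measure: standard normal (a typical choice for random effects).
weights, atoms = stick_breaking(alpha=2.0, base_draw=lambda r: r.gauss(0, 1))
```

Because the random-effects distribution is itself random and almost surely discrete, clusters of providers can share effects, which is one source of the robustness to sample composition the paper reports.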
Abstract: A multilevel model (allowing for individual risk factors and geographic context) is developed for jointly modelling cross-sectional differences in diabetes prevalence and trends in prevalence, and then adapted to provide geographically disaggregated diabetes prevalence forecasts. This involves a weighted binomial regression applied to US data from the Behavioral Risk Factor Surveillance System (BRFSS) survey, specifically totals of diagnosed diabetes cases, and populations at risk. Both cases and populations are disaggregated according to survey year (2000 to 2010), individual risk factors (e.g., age, education), and contextual risk factors, namely US census division and the poverty level of the county of residence. The model includes a linear growth path in decadal time units, and forecasts are obtained by extending the growth path to future years. The trend component of the model controls for interacting influences (individual and contextual) on changing prevalence. Prevalence growth is found to be highest among younger adults, among males, and among those with high school education. There are also regional shifts, with a widening of the US “diabetes belt”.
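The forecasting device described above, a linear growth path extended to future years, can be sketched on the logit scale of a binomial model. The prevalence series below is synthetic (not BRFSS data) and the model here is a single-cell simplification of the multilevel structure.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(z):
    return 1 / (1 + math.exp(-z))

def forecast_prevalence(years, prevalences, target_year):
    """Fit a straight line to logit(prevalence) over observed years
    (least squares), then extend the growth path to target_year."""
    zs = [logit(p) for p in prevalences]
    n = len(years)
    tbar, zbar = sum(years) / n, sum(zs) / n
    slope = (sum((t - tbar) * (z - zbar) for t, z in zip(years, zs))
             / sum((t - tbar) ** 2 for t in years))
    intercept = zbar - slope * tbar
    return inv_logit(intercept + slope * target_year)

# Synthetic rising prevalence for 2000-2010 (illustrative only).
years = list(range(2000, 2011))
prev = [0.065 + 0.0025 * (y - 2000) for y in years]
p2015 = forecast_prevalence(years, prev, 2015)
```

In the paper this extension is done cell by cell, within each combination of risk factors and geographic context, which is what yields the disaggregated forecasts.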