The Lindley distribution has been generalized by many authors in recent years. However, all generalizations known so far have restricted tail behaviors. Here, we introduce the most flexible generalization of the Lindley distribution to date, with its tails controlled by two independent parameters. Various mathematical properties of the generalization are derived. Maximum likelihood estimators of its parameters are obtained. Fisher's information matrix and asymptotic confidence intervals for the parameters are given. Finally, a real data application shows that the proposed generalization performs better than all known ones.
Abstract: The spread of crises from one country to another, named "contagion", has been one of the most debated issues in international finance in the last two decades. The presence of contagion can be detected by an increase in conditional correlation during the crisis period compared to the previous period. The paper presents a brief review of three of the most widely used techniques to estimate conditional correlation: the exponentially weighted moving average, multivariate GARCH models, and factor analysis with stochastic volatility models. These methods are applied to analyze contagion between the stock markets of three major Latin American economies (Brazil, Mexico and Argentina) and two emerging markets (Malaysia and Russia). The data cover the period from 09/05/1995 to 12/30/2004, which includes several crises. In general, the three methods yielded similar results, but there is no general agreement. All the methods agreed that contagion occurred mostly during the Asian crisis.
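An EWMA conditional-correlation recursion of the kind reviewed in this abstract can be sketched as follows; this is a minimal illustration, with the decay factor 0.94 taken as the common RiskMetrics daily choice (an assumption, not a value from the paper):

```python
import numpy as np

def ewma_corr(x, y, lam=0.94):
    """Exponentially weighted moving-average conditional correlation of two
    (zero-mean) return series, updated recursively with decay factor lam."""
    n = len(x)
    # initialize the 2x2 second-moment matrix from the sample (ddof=0 keeps it PSD)
    S = np.cov(np.vstack([x, y]), ddof=0)
    cov, var_x, var_y = S[0, 1], S[0, 0], S[1, 1]
    rho = np.empty(n)
    for t in range(n):
        cov = lam * cov + (1 - lam) * x[t] * y[t]
        var_x = lam * var_x + (1 - lam) * x[t] ** 2
        var_y = lam * var_y + (1 - lam) * y[t] ** 2
        rho[t] = cov / np.sqrt(var_x * var_y)
    return rho
```

A rise in `rho` during a crisis window relative to the preceding tranquil window is the kind of evidence for contagion the paper examines.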
Abstract: The aim of this paper is to investigate the flexibility of the skew-normal distribution for classifying the pixels of a remotely sensed satellite image. In most remote sensing packages, for example ENVI and ERDAS, it is assumed that the populations follow a multivariate normal distribution. A linear discriminant function (LDF) or quadratic discriminant function (QDF) is then used to classify the pixels, when the covariance matrices of the populations are assumed to be equal or unequal, respectively. However, data obtained from satellite or airplane images often suffer from non-normality. In this case, the skew-normal discriminant function (SDF) is one technique for obtaining a more accurate image. In this study, we compare the SDF with the LDF and QDF using simulation under different scenarios. The results show that ignoring the skewness of the data increases the misclassification probability and consequently yields a wrong image. An application is provided to identify the effect of wrong assumptions on image accuracy.
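A one-dimensional sketch of the comparison described above, with hypothetical data: scipy's `skewnorm` plays the role of the skew-normal model and `norm` the LDF-style normal model, each pixel value being assigned to the class with the larger fitted log-density.

```python
import numpy as np
from scipy.stats import norm, skewnorm

# Hypothetical skewed "pixel" populations (parameters are illustrative)
rng = np.random.default_rng(1)
class0 = skewnorm.rvs(a=5, loc=0, scale=1, size=2000, random_state=rng)
class1 = skewnorm.rvs(a=-5, loc=2, scale=1, size=2000, random_state=rng)

def classify(x, train0, train1, dist):
    """Fit `dist` to each class and label x by the larger fitted log-density."""
    p0, p1 = dist.fit(train0), dist.fit(train1)
    return (dist.logpdf(x, *p1) > dist.logpdf(x, *p0)).astype(int)

x = np.concatenate([class0, class1])
truth = np.repeat([0, 1], 2000)
err_normal = np.mean(classify(x, class0, class1, norm) != truth)
err_skew = np.mean(classify(x, class0, class1, skewnorm) != truth)
```

With strongly skewed populations, the misclassification rate under the normal model is typically at least as large as under the skew-normal model, mirroring the simulation finding of the paper.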
Abstract: Comparative mathematical textbook analysis aims at the determination of differences among countries concerning the development and transmission of mathematics. On the other hand, textual statistics provides a means to quantify a text by applying multivariate statistical techniques. So far this statistical approach has not been applied to comparative mathematical textbook analysis. The object of this paper is to quantify and compare the style of a number of textbooks on differential calculus written in 18th century Europe. To that purpose two multivariate statistical techniques have been applied: 1) simple correspondence analysis and 2) hierarchical clustering analysis. The results of both analyses help to detect some interesting associations among the analysed textbooks.
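Simple correspondence analysis, the first technique mentioned, reduces to an SVD of the standardized residuals of a term-by-textbook contingency table. A minimal sketch, with an illustrative table rather than the paper's data:

```python
import numpy as np

def correspondence_analysis(N):
    """Simple correspondence analysis of a contingency table N of counts.
    Returns row and column principal coordinates and the principal inertias."""
    P = N / N.sum()                        # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)    # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
    U, d, Vt = np.linalg.svd(S, full_matrices=False)
    rows = U * d / np.sqrt(r)[:, None]     # row principal coordinates
    cols = Vt.T * d / np.sqrt(c)[:, None]  # column principal coordinates
    return rows, cols, d ** 2              # d**2 are the principal inertias
```

Plotting the first two columns of `rows` and `cols` gives the usual joint map, in which nearby textbooks share a similar vocabulary profile.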
Abstract: Correlation coefficients are generally viewed as summaries, causing them to be underutilized. Creating functions from them leads to their use in diverse areas of statistics. Because there are many correlation coefficients (see, for example, Gideon (2007)), this extension makes possible a very broad range of statistical estimators that rivals least squares. The whole area could be called a "Correlation Estimation System." This paper outlines some of the numerous possibilities for using the system and gives some illustrative examples. Detailed explanations are developed in earlier papers. The formulae that make the estimation possible, together with some of the computer code to implement it, are given. This approach has been taken in the hope that this condensed version of the work will make the ideas accessible, show their practicality, and promote further developments.
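One concrete instance of such a correlation estimation system: define the simple regression slope as the value b at which a chosen correlation coefficient between x and the residuals y - b*x vanishes. A hedged sketch using Kendall's tau (substituting Pearson's r recovers least squares); the function name and root bracket are illustrative, not taken from the papers cited:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import kendalltau

def corr_slope(x, y, corr=lambda u, v: kendalltau(u, v)[0]):
    """Slope estimate defined by corr(x, y - b*x) = 0 for the chosen
    correlation coefficient (here Kendall's tau, giving a robust fit)."""
    f = lambda b: corr(x, y - b * x)
    # assumption: the root lies in a wide bracket; f decreases from ~+1 to ~-1
    return brentq(f, -1e3, 1e3)
```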
In this paper, a comparison is provided for volatility estimation in Bayesian and frequentist settings. We compare the predictive performance of these two approaches under the generalized autoregressive conditional heteroscedasticity (GARCH) model. Our results indicate that the frequentist estimation provides better predictive potential than the Bayesian approach. This finding is contrary to some of the work in this line of research. To illustrate our finding, we use six major foreign exchange rate datasets.
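For reference, the conditional-variance recursion of the GARCH(1,1) model underlying the comparison can be sketched as follows (the parameter values used below are illustrative assumptions, not fitted values from the paper):

```python
import numpy as np

def garch11_vol(returns, omega, alpha, beta):
    """Conditional volatility from the GARCH(1,1) recursion
    sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1},
    initialized at the sample variance."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty(len(r))
    sigma2[0] = r.var()
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return np.sqrt(sigma2)
```

In the frequentist setting the parameters are obtained by maximizing the (quasi-)likelihood; in the Bayesian setting they are sampled from the posterior, and the resulting predictive volatilities are compared out of sample.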
It is well known that under certain regularity conditions the bootstrap sampling distributions of common statistics are consistent with their true sampling distributions. However, the consistency results rely heavily on the underlying regularity conditions and, in fact, a failure to satisfy some of these may lead to a serious departure from consistency. Consequently, sampling distributions based on the 'sufficient bootstrap' method (which uses only the distinct units in a bootstrap sample in order to reduce the computational burden for larger sample sizes) will also be inconsistent. In this paper, we combine the ideas of the sufficient and m-out-of-n (m/n) bootstrap methods to regain consistency. We further propose an iterated version of this bootstrap method for non-regular cases, and our simulation study reveals that coverage accuracies similar to or even better than percentile bootstrap confidence intervals can be obtained through the proposed iterated sufficient m/n bootstrap with less computational time in each case.
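The combination described above can be sketched in a few lines (the interface and names are illustrative, not the paper's code): draw m indices out of n with replacement, then evaluate the statistic on the distinct units only.

```python
import numpy as np

def sufficient_mn_bootstrap(data, stat, m, B=1000, seed=None):
    """Sufficient m-out-of-n bootstrap distribution of `stat`:
    each of the B resamples draws m indices with replacement, but the
    statistic is computed on the distinct units only (the 'sufficient'
    reduction that cuts the computational burden)."""
    rng = np.random.default_rng(seed)
    n = len(data)
    out = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=m)      # m-out-of-n resample
        out[b] = stat(data[np.unique(idx)])   # distinct units only
    return out
```

Percentiles of the returned values give the bootstrap confidence interval; the iterated version of the paper nests a second level of such resampling inside each first-level resample.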
Abstract: According to the available literature, long-term survival and success rates of one-stage, non-submerged dental implants (implants that are not totally buried beneath the gum) are predictable. However, until now there has been no similar study in Taiwan regarding the efficacy of one-stage, non-submerged dental implants. This prospective study, from August 1997 to the end of 2005, included 316 patients who received dental implants and prostheses and were followed up for at least 6 months. A total of 717 implants were placed. Life table analysis is used to analyze the effectiveness of the one-stage, non-submerged dental implant. Our results indicate that the survival rate and success rate are 99.58% and 96.13%, respectively, from this seven-year follow-up study. This study strongly demonstrates that the efficacy of the one-stage, non-submerged dental implant is also predictable in Taiwan if the patients are under regular follow-up after active treatment.
In this article, we introduce an extension referred to as the exponentiated Weibull power function distribution, based on the exponentiated Weibull-G family of distributions. The proposed model serves as an extension of the two-parameter power function distribution as well as a generalization of the Weibull power function presented by Tahir et al. (2016a). Various mathematical properties of the proposed distribution are studied. General explicit expressions for the quantile function, expansions of the density and distribution functions, moments, generating function, incomplete moments, conditional moments, residual life function, mean deviation, inequality measures, Rényi and q-entropies, probability weighted moments and order statistics are obtained. The estimation of the model parameters is discussed using the maximum likelihood method. Finally, the practical importance of the proposed distribution is examined through three real data sets. It is concluded that the new distribution works better than other competing models.
Abstract: Identification of representative regimes of wave height and direction under different wind conditions is complicated by issues that relate to the specification of the joint distribution of variables that are defined on linear and circular supports and the occurrence of missing values. We take a latent-class approach and jointly model wave and wind data by a finite mixture of conditionally independent Gamma and von Mises distributions. Maximum-likelihood estimates of parameters are obtained by exploiting a suitable EM algorithm that allows for missing data. The proposed model is validated on hourly marine data obtained from a buoy and two tide gauges in the Adriatic Sea.
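The E-step of such an EM algorithm reduces, by conditional independence, to adding the Gamma and von Mises log-densities within each latent class. A minimal sketch for complete cases (all parameter values in the example are illustrative assumptions, not estimates from the buoy data):

```python
import numpy as np
from scipy.stats import gamma, vonmises

def responsibilities(height, direction, weights, gamma_pars, vm_pars):
    """Posterior class probabilities P(class k | wave height, wave direction)
    under a finite mixture of conditionally independent Gamma and von Mises
    components. gamma_pars: list of (shape, scale); vm_pars: list of (kappa, loc)."""
    K = len(weights)
    logp = np.empty((len(height), K))
    for k in range(K):
        a, scale = gamma_pars[k]
        kappa, loc = vm_pars[k]
        # conditional independence: the joint log-density is a sum
        logp[:, k] = (np.log(weights[k])
                      + gamma.logpdf(height, a, scale=scale)
                      + vonmises.logpdf(direction, kappa, loc=loc))
    logp -= logp.max(axis=1, keepdims=True)  # stabilize before exponentiating
    p = np.exp(logp)
    return p / p.sum(axis=1, keepdims=True)
```

The M-step then re-weights the Gamma and von Mises fits by these responsibilities; observations with a missing coordinate contribute only the log-density of the observed coordinate.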