Abstract: The study of factor analytic models often has to address two important issues: (a) the determination of the “optimum” number of factors and (b) the derivation of a unique simple structure whose interpretation is easy and straightforward. The classical approach deals with these two tasks separately, and sometimes resorts to ad-hoc methods. This paper proposes a Bayesian approach to these two important issues, and adapts ideas from stochastic geometry and Bayesian finite mixture modelling to construct an ergodic Markov chain having the posterior distribution of the complete collection of parameters (including the number of factors) as its equilibrium distribution. The proposed method uses an Automatic Relevance Determination (ARD) prior as the device for achieving the desired simple structure. A Gibbs sampler updating scheme is then combined with the simulation of a continuous-time birth-and-death point process to produce a sampling scheme that efficiently explores the posterior distribution of interest. The MCMC sample path obtained from the simulated posterior then provides a flexible ingredient for most of the inferential tasks of interest. Illustrations on both artificial and real tasks are provided, while major difficulties and challenges are discussed, along with ideas for future improvements.
A new distribution called the log generalized Lindley-Weibull (LGLW) distribution for modeling lifetime data is proposed. This model further generalizes the Lindley distribution and allows for hazard rate functions that are monotonically decreasing, monotonically increasing and bathtub shaped. A comprehensive investigation and account of the mathematical and statistical properties including moments, moment generating function, simulation issues and entropy are presented. Estimates of model parameters via the method of maximum likelihood are given. Real data examples are presented to illustrate the usefulness and applicability of this new distribution.
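The three hazard shapes named above can be checked numerically for any distribution via h(t) = f(t)/(1 − F(t)). The LGLW density itself is not reproduced in the abstract, so the following sketch uses scipy's exponentiated Weibull (`scipy.stats.exponweib`) as a stand-in family that likewise covers increasing, decreasing and bathtub hazard shapes.

```python
# Numerical hazard rate h(t) = f(t) / S(t) for frozen scipy distributions.
# exponweib stands in for the LGLW, whose density is not given here.
import numpy as np
from scipy.stats import exponweib

def hazard(dist, t):
    """Hazard rate f(t)/S(t) evaluated on a grid t."""
    return dist.pdf(t) / dist.sf(t)

t = np.linspace(0.1, 3.0, 50)
increasing = exponweib(a=1.0, c=3.0)   # Weibull shape > 1 -> increasing hazard
decreasing = exponweib(a=1.0, c=0.5)   # Weibull shape < 1 -> decreasing hazard

h_inc = hazard(increasing, t)
h_dec = hazard(decreasing, t)
assert h_inc[0] < h_inc[-1]   # monotonically increasing
assert h_dec[0] > h_dec[-1]   # monotonically decreasing
```

With `a=1` the exponentiated Weibull reduces to the ordinary Weibull, so the two shape regimes follow directly from the shape parameter `c`.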
Abstract: It is well known that the ordinary least squares (OLS) regression estimator is not robust. Many robust regression estimators have been proposed and inferential methods based on these estimators have been derived. However, for two independent groups, let θj(X) be some conditional measure of location for the jth group, given X, based on some robust regression estimator. An issue that has not been addressed is computing a 1 − α confidence interval for θ1(X) − θ2(X) in a manner that allows both within-group and between-group heteroscedasticity. The paper reports the finite sample properties of a simple method for accomplishing this goal. Simulations indicate that, in terms of controlling the probability of a Type I error, the method performs very well for a wide range of situations, even with a relatively small sample size. In principle, any robust regression estimator can be used. The simulations are focused primarily on the Theil-Sen estimator, but some results using Yohai’s MM-estimator, as well as the Koenker and Bassett quantile regression estimator, are noted. Data from the Well Elderly II study, dealing with measures of meaningful activity using the cortisol awakening response as a covariate, are used to illustrate that the choice between an extant method based on a nonparametric regression estimator, and the method suggested here, can make a practical difference.
Abstract: The present paper addresses the propensity to vote with data from the third and fourth rounds of the European Social Survey. The regression of voting propensities on true predictor scores is made possible by estimates of predictor reliabilities (Bechtel, 2010; 2011). This resolves two major problems in binary regression, i.e. errors in variables and imputation errors. These resolutions are attained by a pure randomization theory that incorporates fixed measurement error in design-based regression. This type of weighted regression has long been preferred by statistical agencies and polling organizations for sampling large populations.
We propose a new generator of continuous distributions, called the transmuted generalized odd generalized exponential-G family, which extends the generalized odd generalized exponential-G family introduced by Alizadeh et al. (2017). Some statistical properties of the new family, such as raw and incomplete moments, the moment generating function, Lorenz and Bonferroni curves, probability weighted moments, Rényi entropy, the stress-strength model and order statistics, are investigated. The parameters of the new family are estimated by the method of maximum likelihood. Two real applications are presented to demonstrate the effectiveness of the suggested family.
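The "transmuted" layer of such constructions is the quadratic rank transmutation map: given any baseline cdf G, the transmuted cdf is F(x) = (1 + λ)G(x) − λG(x)², with |λ| ≤ 1. The full generalized odd generalized exponential-G baseline of the paper is not reproduced in the abstract, so this minimal sketch applies the map to an exponential baseline instead.

```python
# Quadratic rank transmutation map applied to a simple baseline cdf;
# the paper's actual baseline family is not reproduced here.
import numpy as np

def transmuted_cdf(G, lam):
    """Return the transmuted cdf F(x) = (1 + lam) G(x) - lam G(x)^2."""
    if not -1.0 <= lam <= 1.0:
        raise ValueError("lam must lie in [-1, 1]")
    return lambda x: (1.0 + lam) * G(x) - lam * G(x) ** 2

G = lambda x: 1.0 - np.exp(-x)          # exponential(1) baseline cdf
F = transmuted_cdf(G, lam=0.5)

x = np.linspace(0.0, 10.0, 200)
Fx = F(x)
assert Fx[0] == 0.0 and abs(Fx[-1] - 1.0) < 1e-4
assert np.all(np.diff(Fx) >= 0)         # valid cdf: nondecreasing
```

Since dF/dG = 1 + λ − 2λG ≥ 1 − |λ| ≥ 0 on [0, 1], F is a valid cdf for any baseline G whenever |λ| ≤ 1.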
Abstract: Fisher’s exact test (FET) is a conditional method that is frequently used to analyze data in a 2 × 2 table for small samples. This test is conservative, and attempts have been made to modify the test to make it less conservative. For example, Crans and Shuster (2008) proposed adding more points in the rejection region to make the test more powerful. We provide another way to modify the test to make it less conservative by using two independent binomial distributions as the reference distribution for the test statistic. We compare our new test with several methods and show that our test has advantages over existing methods in terms of control of the Type I and Type II errors. We reanalyze results from an oncology trial using our proposed method and our software, which is freely available to the reader.
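The proposed test itself is not reproduced in the abstract, but the conditional-versus-unconditional contrast it draws can be illustrated with existing tools: scipy offers both the conditional Fisher exact test and Barnard's unconditional exact test, the latter likewise using two independent binomials as the reference distribution. The 2 × 2 counts below are hypothetical.

```python
# Conditional (Fisher) vs unconditional (Barnard) exact tests on a
# hypothetical 2x2 table; the paper's own modified test is not shown.
from scipy.stats import fisher_exact, barnard_exact

table = [[7, 3],
         [2, 8]]                       # hypothetical small-sample counts

_, p_fisher = fisher_exact(table)      # conditions on both margins
p_barnard = barnard_exact(table).pvalue  # two independent binomials

# The unconditional test is typically less conservative.
assert 0.0 < p_fisher <= 1.0
assert 0.0 < p_barnard <= 1.0
print(f"Fisher p = {p_fisher:.4f}, Barnard p = {p_barnard:.4f}")
```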
Abstract: In this article, we studied three types of time series methods for modeling and forecasting the severe acute respiratory syndrome (SARS) epidemic in mainland China. The first model was a Box-Jenkins model, an autoregressive model of order 1 (AR(1)). The second model was a random walk (ARIMA(0,1,0)) model on the log-transformed daily reported SARS cases, and the third was a combination of growth curve fitting and an autoregressive moving average model, ARMA(1,1). We applied all three methods to monitor the dynamics of SARS in China based on the daily probable new cases reported by the Ministry of Health of China.
In this paper, the Zografos–Balakrishnan Power Lindley (ZB-PL) distribution is obtained by generalizing the Power Lindley distribution using the technique of Zografos and Balakrishnan (2009). Under this technique, the density of upper record values arises as a special case. The probability density (pdf), cumulative distribution (cdf) and hazard rate (hrf) functions of the proposed distribution are obtained. The probability density and cumulative distribution functions are expanded as linear combinations of the density and distribution functions of the Exponentiated Power Lindley (EPL) distribution. This expansion is further used to study different properties of the new distribution. Some mathematical and statistical properties, such as asymptotes, the quantile function, moments, the moment generating function, mean deviation, Rényi entropy and reliability, are also discussed. The pdf, cdf and hrf are presented graphically for different values of the parameters. Finally, the method of maximum likelihood is used to estimate the unknown parameters, and an application to a real data set is provided. The proposed distribution is observed to give a superior fit to the given data set compared with many useful distributions.
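The maximum-likelihood step can be sketched for the Power Lindley baseline, whose density (Ghitany et al.) is f(x) = (αβ²/(β + 1)) x^(α−1)(1 + x^α)e^(−βx^α) for x, α, β > 0. The full ZB-PL likelihood adds the Zografos-Balakrishnan layer and is not reproduced in the abstract; the data below are simulated stand-ins.

```python
# Minimal ML sketch for the Power Lindley baseline via numerical
# optimization; the full ZB-PL likelihood is not shown here.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x):
    """Negative log-likelihood of the Power Lindley density."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    xa = x ** a
    return -np.sum(np.log(a) + 2 * np.log(b) - np.log(b + 1)
                   + (a - 1) * np.log(x) + np.log1p(xa) - b * xa)

rng = np.random.default_rng(2)
x = rng.weibull(1.5, size=500)   # stand-in lifetime data

res = minimize(neg_loglik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
a_hat, b_hat = res.x
assert a_hat > 0 and b_hat > 0 and np.isfinite(res.fun)
```

In practice one would compare the maximized log-likelihoods (or AIC values) of the candidate distributions on the real data set to judge the claimed superior fit.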
In this article, we introduce a new five-parameter model called the Exponentiated Weibull Lomax distribution, arising from the Exponentiated Weibull generated family. The new class contains some existing distributions as well as some new models. Explicit expressions for its moments, distribution and density functions, and moments of the residual life function are derived. Furthermore, Rényi and q-entropies, probability weighted moments, and order statistics are obtained. Three estimation procedures, namely maximum likelihood, least squares and weighted least squares, are used to obtain point estimators of the model parameters. A simulation study is performed to compare the performance of the different estimates in terms of their relative biases and standard errors. In addition, applications to two real data sets demonstrate the usefulness of the new model in comparison with other models.