Abstract: This paper evaluates and compares the heterogeneous balance-variation order pair of any two decision-making trial and evaluation laboratory (DEMATEL) theories, in which one has a larger balance and a smaller variation while the other has a smaller balance and a larger variation. To this end, the first author proposed a useful integrated validity index for evaluating any DEMATEL theory by combining Liu's balanced coefficient and Liu's variation coefficient. Applying this new validity index, three kinds of DEMATEL with the same direct relation matrix are compared: the traditional, shrinkage, and balance DEMATELs. Furthermore, a simple validity experiment is conducted. Results show that the balance DEMATEL has the best performance, and that the shrinkage DEMATEL's performance is better than that of the traditional DEMATEL.
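As a rough illustration of the traditional DEMATEL computation referenced above, the following sketch normalizes a small, purely hypothetical direct relation matrix and forms the total relation matrix; Liu's balanced and variation coefficients and the shrinkage and balance variants are not reproduced here.

```python
# Minimal sketch of the traditional DEMATEL total-relation computation,
# on a hypothetical 3x3 direct relation matrix (not the paper's data).
import numpy as np

A = np.array([[0, 3, 2],
              [1, 0, 4],
              [2, 1, 0]], dtype=float)                  # hypothetical direct relation matrix

X = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())   # normalization step
T = X @ np.linalg.inv(np.eye(len(A)) - X)               # total relation matrix T = X(I - X)^(-1)

D, R = T.sum(axis=1), T.sum(axis=0)                     # influence given / received
print("prominence D+R:", D + R)
print("relation   D-R:", D - R)
```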
In this article, various mathematical and statistical properties of the Burr type XII distribution (such as quantiles, moments, the moment generating function, the hazard rate, conditional moments, mean residual lifetime, mean past lifetime, mean deviations about the mean and the median, stochastic ordering, the stress-strength parameter, various entropies, Bonferroni and Lorenz curves, and order statistics) are derived. We discuss some exact expressions and recurrence relations for the single and product moments of upper record values. Further, using the relations for single moments, we tabulate the means and variances of upper record values from samples of sizes up to 10 for various values of α and β. Finally, a characterization of this distribution based on conditional moments of record values and a recurrence relation of kth record values is presented.
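For readers who want to reproduce basic quantities of the Burr XII distribution numerically, the sketch below uses SciPy's built-in implementation; mapping the paper's (α, β) onto SciPy's shape parameters (c, d) is an assumption about the parameterization, and the values chosen are arbitrary.

```python
# Hedged sketch: quantiles and moments of a Burr type XII distribution via SciPy.
from scipy.stats import burr12

c, d = 3.0, 2.0               # hypothetical shape parameters
dist = burr12(c, d)

print("median:", dist.ppf(0.5))   # quantile function
print("mean  :", dist.mean())     # first moment
print("var   :", dist.var())      # variance
print("P(X>1):", dist.sf(1.0))    # survival function 1 - F(1)
```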
The normal distribution is the most popular model in applications to real data. We propose a new extension of this distribution, called the Kummer beta normal distribution, which offers greater flexibility for modeling skewed data. The new probability density function can be represented as a linear combination of exponentiated normal pdfs. We also derive analytical expressions for some mathematical quantities: ordinary and incomplete moments, mean deviations, and order statistics. Parameter estimation is approached by the method of maximum likelihood and by Bayesian analysis. Likelihood ratio statistics and formal goodness-of-fit tests are used to compare the proposed distribution with some of its sub-models and with non-nested models. A real data set is used to illustrate the importance of the proposed model.
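The linear-combination representation mentioned above is built from exponentiated normal densities of the form a·φ(z)·Φ(z)^(a-1). The sketch below evaluates that building block only; the mixing weights of the Kummer beta normal expansion are not reproduced, and the exponent value is hypothetical.

```python
# Sketch of the exponentiated normal density, the component appearing in the
# stated linear-combination representation of the Kummer beta normal pdf.
import numpy as np
from scipy.stats import norm

def exp_normal_pdf(x, a, mu=0.0, sigma=1.0):
    """Density of the exponentiated normal: a * phi(z) * Phi(z)**(a - 1) / sigma."""
    z = (x - mu) / sigma
    return a * norm.pdf(z) * norm.cdf(z) ** (a - 1) / sigma

x = np.linspace(-4, 4, 9)
print(exp_normal_pdf(x, a=2.5))   # hypothetical exponent a
```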
Abstract: Affymetrix high-density oligonucleotide microarrays make it possible to simultaneously measure, and thus compare, the expression profiles of hundreds of thousands of genes in living cells. Genes differentially expressed under different conditions are very important to both basic and medical research. However, before such differentially expressed genes can be detected from a vast number of candidates, the microarray data must be normalized to remove the substantial variation caused by non-biological factors. Over the last few years, normalization methods based on probe-level or probeset-level intensities have been proposed in the literature, motivated by different purposes. In this paper, we propose a multivariate normalization method based on partial least squares regression that aims to equalize the central tendency and to reduce and equalize the variation of the probe-level intensities in any probeset across replicated arrays. In doing so, we hope to enable precise estimation of gene expression indexes.
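The following is only a toy illustration of the ingredient named above, partial least squares regression applied across replicate arrays; it uses simulated intensities and a simple "predict each replicate from the others" scheme, which is an assumption for illustration rather than the authors' full normalization procedure.

```python
# Hypothetical PLS-based smoothing of probe-level intensities across replicates.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
signal = rng.lognormal(mean=6.0, sigma=1.0, size=(500, 1))          # latent probe signal
arrays = signal * rng.lognormal(0.0, 0.2, size=(500, 4)) + 50.0     # 4 simulated replicate arrays

normalized = np.empty_like(arrays)
for j in range(arrays.shape[1]):
    others = np.delete(arrays, j, axis=1)                           # remaining replicates as predictors
    pls = PLSRegression(n_components=2).fit(others, arrays[:, [j]])
    normalized[:, j] = pls.predict(others).ravel()                  # fitted values as normalized intensities

print(arrays.std(axis=1).mean(), normalized.std(axis=1).mean())     # probe-wise spread before vs. after
```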
In the recent statistical literature, the difference between explanatory and predictive statistical models has been emphasized. One tenet of this dichotomy is that variable selection methods should be applied only to predictive models. In this paper, we compare the effectiveness of the acquisition strategies implemented by Google and Yahoo for the management of innovations. We argue that this is a predictive situation and thus apply lasso variable selection to a Cox regression model in order to compare the Google and Yahoo results. We show that the predictive approach yields different results from an explanatory approach and thus refutes the conventional wisdom that Google was always superior to Yahoo during the period under consideration.
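A lasso-penalized Cox model of the kind described above can be sketched with the lifelines package; since the acquisition data are not reproduced here, the example runs on lifelines' bundled Rossi recidivism dataset, and the penalty strength is an arbitrary choice.

```python
# Hedged sketch of lasso variable selection in a Cox regression using lifelines.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                                   # stand-in survival data
cph = CoxPHFitter(penalizer=0.5, l1_ratio=1.0)      # l1_ratio = 1.0 gives the pure lasso penalty
cph.fit(df, duration_col="week", event_col="arrest")
print(cph.params_)                                  # coefficients shrunk toward zero by the penalty
```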
Abstract: Epidemiological cohort studies that adopt a two-phase design raise a serious issue of how to treat a fairly large amount of missing values that are either missing at random (MAR) due to the study design or potentially missing not at random (MNAR) due to non-response and loss to follow-up. Cognitive impairment (CI) is an evolving concept that needs epidemiological characterization to reach maturity. In this work, we attempt to estimate the incidence rate of CI by accounting for the aforementioned missing-data process. We consider baseline and first follow-up data on 2191 African-Americans enrolled in a prospective epidemiological study of dementia that adopted a two-phase sampling design. We developed a multiple imputation procedure in the mixture model framework that can be easily implemented in SAS. A sensitivity analysis is carried out to assess the dependence of the estimates on specific model assumptions. It is shown that African-Americans aged 65-75 have a much higher incidence rate of CI than younger or older elderly. In conclusion, multiple imputation provides a practical and general framework for the estimation of epidemiological characteristics in two-phase sampling studies.
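As a generic illustration of multiple imputation with Rubin-combined estimates (not the authors' SAS-based mixture-model procedure for the two-phase design), the sketch below uses statsmodels' chained-equations MICE on simulated data with MAR-by-construction missingness.

```python
# Generic multiple-imputation sketch with statsmodels MICE on simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

rng = np.random.default_rng(1)
df = pd.DataFrame({"x1": rng.normal(size=200), "x2": rng.normal(size=200)})
df["y"] = 1.0 + 0.5 * df["x1"] - 0.3 * df["x2"] + rng.normal(scale=0.5, size=200)
df.loc[rng.random(200) < 0.3, "y"] = np.nan            # ~30% missing outcomes

imp = mice.MICEData(df)                                # chained-equation imputation engine
model = mice.MICE("y ~ x1 + x2", sm.OLS, imp)
results = model.fit(n_burnin=10, n_imputations=20)     # estimates pooled across imputations
print(results.summary())
```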
Abstract: True-value theory (Bechtel, 2010), as an extension of randomization theory, allows arbitrary measurement errors to pervade a survey score as well as its predictor scores. This implies that true scores need not be expectations of observed scores and that expected errors need not be zero within a respondent. Rather, weaker assumptions about measurement errors over respondents enable the regression of true scores on true predictor scores. The present paper incorporates Sarndal-Lundstrom (2005) weight calibration into true-value regression. This correction for non-response is illustrated with data from the fourth round of the European Social Survey (ESS). The results show that a true-value regression coefficient can be corrected even with a severely unrepresentative sample. They also demonstrate that this regression slope is attenuated more by measurement error than by non-response. Substantively, this ESS analysis establishes economic anxiety as an important predictor of life quality in the financially stressful year of 2008.
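A minimal sketch of linear (GREG-type) weight calibration in the spirit of the Sarndal-Lundstrom approach is given below: design weights are adjusted so that weighted totals of auxiliary variables match known population totals. The auxiliaries, totals, and weights are hypothetical, not the ESS round-four data.

```python
# Linear calibration: w_i = d_i * (1 + x_i' lambda), with lambda chosen so X'w = T_x.
import numpy as np

rng = np.random.default_rng(2)
n = 300
d = np.full(n, 50.0)                                           # design weights (hypothetical)
X = np.column_stack([np.ones(n), rng.integers(18, 90, n)])     # intercept + age as auxiliaries
T_x = np.array([15000.0, 15000 * 48.0])                        # assumed known population totals

lam = np.linalg.solve(X.T @ (d[:, None] * X), T_x - X.T @ d)   # calibration equations
w = d * (1.0 + X @ lam)                                        # calibrated weights

print(X.T @ w)   # reproduces the population totals T_x exactly
```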
Abstract: Markov chain Monte Carlo simulation techniques enable the application of Bayesian methods to a variety of models where the posterior density of interest is too difficult to explore analytically. In practice, however, multivariate posterior densities often have characteristics which make implementation of MCMC methods more difficult. A number of techniques have been explored to help speed the convergence of a Markov chain. This paper presents a new algorithm which employs some of these techniques for cases where the target density is bounded. The algorithm is tested on several known distributions to empirically examine convergence properties. It is then applied to a wildlife disease model to demonstrate real-world applicability.
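The sketch below illustrates one common generic device for MCMC on a bounded support, random-walk Metropolis on a logit-transformed variable with the Jacobian correction, targeting a Beta(2, 5) density; this is a baseline illustration only, not the specific algorithm proposed in the paper.

```python
# Random-walk Metropolis on a bounded support via a logit reparameterization.
import numpy as np
from scipy.stats import beta

def log_target_unconstrained(z, a=2.0, b=5.0):
    """Log density of Beta(a, b) for x = sigmoid(z), plus the log-Jacobian log(x(1-x))."""
    x = 1.0 / (1.0 + np.exp(-z))
    return beta.logpdf(x, a, b) + np.log(x) + np.log(1.0 - x)

rng = np.random.default_rng(3)
z, chain = 0.0, []
for _ in range(20000):
    prop = z + rng.normal(scale=1.0)                            # random-walk proposal
    if np.log(rng.random()) < log_target_unconstrained(prop) - log_target_unconstrained(z):
        z = prop                                                # Metropolis accept step
    chain.append(1.0 / (1.0 + np.exp(-z)))                      # back-transform to (0, 1)

print(np.mean(chain[5000:]))   # should be close to a/(a+b) = 2/7
```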
Abstract: In this article, a Bayesian model averaging approach for hierarchical log-linear models is considered. Posterior model probabilities are approximately calculated for hierarchical log-linear models. The dimension of the model space of interest is reduced by using the Occam's window and Occam's razor approaches. Road traffic accident data from Turkey for 2002 are analyzed using the considered approach.
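A common way to approximate posterior model probabilities is through BIC weights with uniform model priors; the sketch below applies this to two Poisson log-linear models for a fictitious 2x2 table of counts. The Occam's window and Occam's razor pruning steps and the Turkish accident data are not reproduced here.

```python
# BIC-based approximate posterior model probabilities for Poisson log-linear models.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({"A": [0, 0, 1, 1], "B": [0, 1, 0, 1], "count": [35, 15, 20, 30]})

models = {"independence": "count ~ C(A) + C(B)",
          "saturated":    "count ~ C(A) * C(B)"}
bic = {}
for name, fml in models.items():
    res = smf.glm(fml, data=df, family=sm.families.Poisson()).fit()
    k = res.df_model + 1                                   # number of fitted parameters
    bic[name] = -2.0 * res.llf + k * np.log(len(df))       # BIC from the log-likelihood

w = np.exp(-0.5 * (np.array(list(bic.values())) - min(bic.values())))
print(dict(zip(bic, w / w.sum())))                         # approximate posterior model probabilities
```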
Abstract: The detection of slope change points in wind curves depends on linear curve-fitting. Hall and Titterington's algorithm based on smoothing is adapted and compared to a Bayesian method of curve-fitting. After prior spline smoothing of the data, the algorithms are tested and the errors between the split-linear fitted wind curve and the observed one are estimated. In our case, the adaptation of the edge-preserving smoothing algorithm gives the same good performance as automatic Bayesian curve-fitting based on a Markov chain Monte Carlo algorithm yet saves computation time.
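A bare-bones version of split-linear fitting for a single slope change point is sketched below: the break location is chosen by brute force to minimize the total squared error of two line segments. The wind data here are simulated, not the curves analyzed in the paper, and neither the smoothing-based nor the Bayesian algorithm is reproduced.

```python
# Brute-force split-linear fit for one slope change point on simulated wind data.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 200)
wind = np.where(t < 6, 2.0 + 0.5 * t, 5.0 + 1.5 * (t - 6)) + rng.normal(0, 0.2, t.size)

def sse(x, y):
    """Residual sum of squares of a least-squares straight-line fit."""
    coef = np.polyfit(x, y, 1)
    return np.sum((y - np.polyval(coef, x)) ** 2)

candidates = range(10, t.size - 10)                          # keep both segments non-trivial
best = min(candidates, key=lambda k: sse(t[:k], wind[:k]) + sse(t[k:], wind[k:]))
print("estimated change point at t =", t[best])
```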