Abstract: This paper studies the effect of the tax environment on the health care coverage of individuals. The study adds to the current literature on health care policy by examining how individuals switch types of health care coverage given a change in the tax environment. The distribution of health care coverage is investigated using transition matrices. A model is then used to determine how individuals might be expected to switch insurance types given a change in the tax environment. Based on the results, the authors offer recommendations on what the findings may imply for health care policy makers.
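A minimal sketch of how such a coverage transition matrix could be tabulated in R, assuming a hypothetical data set in which each individual's coverage type is observed before and after a change in the tax environment (the variable names and categories are illustrative, not the authors'):

```r
# Hypothetical coverage type for the same individuals observed
# before and after a change in the tax environment.
before <- c("employer", "employer", "individual", "uninsured", "individual", "employer")
after  <- c("employer", "individual", "individual", "individual", "uninsured", "employer")

# Cross-tabulate the moves between coverage types.
moves <- table(before, after)

# Row-normalise to obtain an empirical transition matrix: entry (i, j) is the
# estimated probability of switching from coverage type i to coverage type j.
transition_matrix <- prop.table(moves, margin = 1)
round(transition_matrix, 2)
```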
Abstract: The Center for Neural Interface Design of the Biodesign Institute at Arizona State University conducted an experiment to investigate how the central nervous system controls hand orientation and movement direction during reach-to-grasp movements. ANOVA (analysis of variance), a conventional data analysis method widely used in neural science, was performed to categorize different neural activities. Preliminary studies of the data analysis methods have shown that a principal assumption of ANOVA is violated and that some characteristics of the data are lost when ratios of the recorded data are taken. To compensate for these deficiencies of ANOVA, ANCOVA (analysis of covariance) is introduced in this paper. By considering neural firing counts and temporal intervals separately, we expect to extract more useful information for determining how different types of neurons are correlated with motor behavior. Compared to ANOVA, ANCOVA goes one step further and can identify which direction or orientation is favored during which epoch. We find that a considerable number of neurons are involved in movement direction, hand orientation, or both combined, and that some are significant in more than one epoch, which indicates that a network with unknown pathways connects neurons in the motor cortex throughout the entire movement. For future studies, we suggest integrating this analysis with neural network modeling in order to simulate the whole reach-to-grasp process.
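A minimal ANCOVA sketch in R under assumed data: spike counts per trial as the response, movement direction and hand orientation as factors, and the epoch's temporal interval entered as a continuous covariate rather than divided out (the variable names and simulated values are illustrative, not the recorded data set):

```r
# Hypothetical single-neuron trial data.
set.seed(1)
trials <- data.frame(
  direction   = factor(sample(c("left", "right", "up", "down"), 200, replace = TRUE)),
  orientation = factor(sample(c("pronated", "supinated"), 200, replace = TRUE)),
  duration    = runif(200, 0.4, 1.2)   # epoch length (temporal interval) in seconds
)
trials$spikes <- rpois(200, lambda = 4 + 3 * trials$duration +
                              2 * (trials$direction == "left"))

# ANCOVA: firing counts modelled by direction and orientation while
# adjusting for the temporal interval instead of taking a ratio.
fit <- aov(spikes ~ duration + direction * orientation, data = trials)
summary(fit)
```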
Abstract: Factor analysis is one of the data mining methods that can be used to analyse mainly large-scale, multi-variable datasets. The main objective of this method is to derive a set of uncorrelated variables for further analysis when the use of highly inter-correlated variables may give misleading results in regression analysis. In the light of the vast advances in factor analysis that are due largely to the advent of electronic computers, this article attempts to provide researchers with a simplified approach to understanding how exploratory factor analysis works, and to provide a guide to its application using R. This multivariate mathematical method is an important tool that is very often used in the development and evaluation of tests and measures in biomedical research. The paper concludes that factor analysis is a suitable method for biomedical research, because it allows clinical readers to better interpret and evaluate its goals and results.
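As a pointer to the kind of R application the guide describes, the sketch below runs an exploratory factor analysis with the base function factanal(); the data set is only a stand-in for a biomedical test battery:

```r
# Exploratory factor analysis on standardised variables from a built-in data set
# (mtcars is used purely as a stand-in for a battery of correlated measures).
vars <- scale(mtcars[, c("mpg", "disp", "hp", "drat", "wt", "qsec")])

# Extract two factors by maximum likelihood with a varimax rotation.
efa <- factanal(vars, factors = 2, rotation = "varimax", scores = "regression")

print(efa$loadings, cutoff = 0.3)  # which variables load on which factor
head(efa$scores)                   # uncorrelated factor scores for further analysis
```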
Abstract: The concept of frailty provides a suitable way to introduce random effects into a model to account for association and unobserved heterogeneity. In its simplest form, a frailty is an unobserved random factor that multiplicatively modifies the hazard function of an individual or of a group or cluster of individuals. In this paper, we study the positive stable distribution as the frailty distribution together with two different baseline distributions, namely the Pareto and the linear failure rate distributions. We estimate the parameters of the proposed models with a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique. A simulation study is carried out to compare the true parameter values with the estimated values. We fit the proposed models to the real-life bivariate survival data set of McGilchrist and Aisbett (1991) on kidney infection. We also present a comparison study for the same data using model selection criteria and suggest the better model.
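For orientation, the standard shared positive stable frailty formulation reads as follows (the notation is illustrative; the paper's Bayesian specification is not reproduced here). Conditional on the frailty Z, the two recurrence times of a patient have proportional hazards, and integrating Z out with the positive stable Laplace transform gives the unconditional bivariate survival function:

```latex
h_j(t \mid Z) = Z\, h_{0j}(t), \quad j = 1, 2, \qquad
E\!\left[e^{-sZ}\right] = \exp\!\left(-s^{\alpha}\right), \quad 0 < \alpha \le 1,

S(t_1, t_2) = E\!\left[\exp\!\left\{-Z\bigl(H_{01}(t_1) + H_{02}(t_2)\bigr)\right\}\right]
            = \exp\!\left\{-\bigl(H_{01}(t_1) + H_{02}(t_2)\bigr)^{\alpha}\right\},
```

where H_{0j} is the cumulative baseline hazard (Pareto or linear failure rate in the proposed models).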
Abstract: In this paper, we introduce alternative methods of estimation for the parameters of the new Weibull-Pareto distribution. We discuss point estimation and interval estimation for the parameters, covering the methods of maximum likelihood estimation, least squares estimation, weighted least squares estimation, and maximum product spacing estimation. In addition, we discuss the raw moments of the random variable X and the reliability functions (the survival and hazard functions). Further, we compare the results of these methods using a Monte Carlo simulation and an application study.
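Of the four methods, maximum product spacing is perhaps the least familiar; its standard definition is sketched below, with the new Weibull-Pareto cdf F(x; θ) left unspecified since the abstract does not reproduce it:

```latex
D_i(\theta) = F\!\left(x_{(i)};\theta\right) - F\!\left(x_{(i-1)};\theta\right),
\qquad F\!\left(x_{(0)};\theta\right) = 0, \;\; F\!\left(x_{(n+1)};\theta\right) = 1,
\qquad
\hat{\theta}_{\mathrm{MPS}} = \arg\max_{\theta} \sum_{i=1}^{n+1} \log D_i(\theta),
```

where x_{(1)} ≤ ... ≤ x_{(n)} are the ordered observations.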
Abstract: Panel data transcend cross-sectional data by tapping pooled inter- and intra-individual differences, along with between- and within-individual variation separately. In the present study these micro variations in ill-being are predicted by psychological indicators constructed from the British Household Panel Survey (BHPS). Panel regression effects are corrected for errors-in-variables, which attenuate the slopes estimated by traditional panel regressions. These corrections reveal that unhappiness and life dissatisfaction are distinct variables with different psychological causations.
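A minimal sketch of how between- and within-individual variation can be separated in R, assuming a BHPS-like long panel and the plm package; the variable names are illustrative and the errors-in-variables correction itself is not shown:

```r
library(plm)

# Hypothetical long-format panel: one row per person per wave.
set.seed(2)
panel <- data.frame(
  id        = rep(1:100, each = 5),
  wave      = rep(1:5, times = 100),
  illbeing  = rnorm(500),
  indicator = rnorm(500)   # psychological indicator
)

# Within (fixed-effects) estimator: uses only intra-individual variation.
fit_within  <- plm(illbeing ~ indicator, data = panel,
                   index = c("id", "wave"), model = "within")

# Between estimator: uses only inter-individual variation (person means).
fit_between <- plm(illbeing ~ indicator, data = panel,
                   index = c("id", "wave"), model = "between")

summary(fit_within)
summary(fit_between)
```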
Abstract: In the face of global uncertainty and a growing reliance on third-party indices to gain a snapshot of a country’s operations, accurate decision making makes or breaks relationships in global trade. Under this aegis, we question the validity of traditional logistic regression using the maximum likelihood estimator (MLE) in classifying countries for doing business. This paper proposes that a weighted version of the Bianco and Yohai (BY) estimator is a superlative and robust (outlier-resistant) tool in the hands of practitioners to gauge the correct antecedents of a country’s internal environment and decide whether or not to do business with that country. In addition, this robust procedure is effective in differentiating between “problem” countries and “safe” countries for doing business. An existing R program for the BY estimation technique by Croux and Haesbroeck has been modified to fit our purpose.
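The authors adapt Croux and Haesbroeck's own R code; as a rough, hedged illustration of the same idea, a weighted Bianco-Yohai fit is also available (to the best of our knowledge) through glmrob() in the robustbase package, contrasted here with the classical MLE on simulated, contaminated data:

```r
library(robustbase)

# Hypothetical country-level data: a binary "do business" indicator and two
# internal-environment antecedents, with a few grossly outlying observations.
set.seed(42)
n <- 120
env1 <- rnorm(n)
env2 <- rnorm(n)
do_business <- rbinom(n, 1, plogis(0.8 * env1 - 1.2 * env2))
env1[1:3] <- env1[1:3] + 8   # contamination

countries <- data.frame(do_business, env1, env2)

# Classical maximum likelihood fit versus the weighted Bianco-Yohai (WBY) fit.
fit_mle <- glm(do_business ~ env1 + env2, family = binomial, data = countries)
fit_wby <- glmrob(do_business ~ env1 + env2, family = binomial,
                  data = countries, method = "WBY")
cbind(MLE = coef(fit_mle), WBY = coef(fit_wby))
```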
Abstract: The assumption usually made when modeling count data is that the response variable, the count, is correctly reported. Some counts, however, might be over- or under-reported. We derive the Generalized Poisson-Poisson mixture regression (GPPMR) model, which can handle accurate, under-reported, and over-reported counts. The parameters of the model are estimated via the maximum likelihood method. We apply the GPPMR model to a real-life data set.
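As a hedged sketch of the general structure only (the specific GPPMR reporting mechanism is not reproduced in this abstract): if the true count N_i follows a generalized Poisson regression model and the reported count Y_i depends on N_i, the observed-data probability is a mixture over the unobserved true counts, and maximum likelihood maximizes the corresponding log-likelihood:

```latex
P\!\left(Y_i = y \mid x_i;\theta\right)
  = \sum_{n=0}^{\infty} P\!\left(Y_i = y \mid N_i = n;\theta\right)
    P\!\left(N_i = n \mid x_i;\theta\right),
\qquad
\hat{\theta} = \arg\max_{\theta} \sum_{i=1}^{m} \log P\!\left(Y_i = y_i \mid x_i;\theta\right).
```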
Abstract: Researchers and practitioners in many areas of knowledge frequently struggle with missing data. Missing data are a problem because almost all standard statistical methods assume that the information is complete. Consequently, missing value imputation offers a solution to this problem. The main contribution of this paper lies in the development of a random forest-based imputation method (TI-FS) that can handle any type of data, including high-dimensional data with nonlinear complex interactions. The premise behind the proposed scheme is that a variable can be imputed using only those variables that feature selection identifies as related to it. This work compares the performance of the proposed scheme with two other imputation methods commonly used in the literature: KNN and missForest. The results suggest that the proposed method can be useful in complex scenarios with categorical variables and a high volume of missing values, while reducing the number of variables used and their corresponding preliminary imputations.
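A minimal R sketch of the two comparison baselines named above, run on a stand-in data set with artificially injected missing values; the proposed TI-FS scheme itself is not reproduced here:

```r
library(missForest)   # random-forest imputation plus the prodNA() and mixError() helpers
library(VIM)          # kNN() imputation

# Stand-in for a mixed-type data set (iris includes a categorical column).
set.seed(7)
incomplete <- prodNA(iris, noNA = 0.2)   # remove roughly 20% of the values at random

# Baseline 1: missForest (iterative random-forest imputation).
imp_rf  <- missForest(incomplete)$ximp

# Baseline 2: k-nearest-neighbour imputation.
imp_knn <- kNN(incomplete, k = 5, imp_var = FALSE)

# Since the complete data are known here, the imputation error can be assessed,
# e.g. with mixError() for mixed-type data.
mixError(imp_rf,  incomplete, iris)
mixError(imp_knn, incomplete, iris)
```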
Abstract: In this article we propose a further extension of the generalized Marshall-Olkin-G (GMO-G) family of distributions. The density and survival functions are expressed as infinite mixtures of the GMO-G distribution. Asymptotes, Rényi entropy, order statistics, probability weighted moments, the moment generating function, the quantile function, the median, random sample generation, and parameter estimation are investigated. Selected distributions from the proposed family are compared with those from four sub-models of the family as well as with some other recently proposed models by considering real-life data-fitting applications. In all cases the distributions from the proposed family come out on top.
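For orientation, the survival function of the underlying Marshall-Olkin-G construction on which the GMO-G family builds is shown below; the additional parameters and exact form of the proposed extension are not reproduced from this abstract:

```latex
\bar F_{\mathrm{MO}\text{-}\mathrm{G}}(x)
  = \frac{\alpha\,\bar G(x)}{1 - (1-\alpha)\,\bar G(x)},
\qquad \alpha > 0,
```

where Ḡ denotes the survival function of the baseline distribution G.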