Abstract: Inferences about the ratio δ of two lognormal means can depend on plausible values of ρ, the ratio of the normal standard deviations associated with these distributions. This aspect is usually overlooked in analyses carried out in the applied sciences. In this paper we propose a profile likelihood approach that allows a more exhaustive comparison of two independent lognormal data sets. Inferences about δ, ρ, and (δ, ρ) are jointly analyzed through a simple closed-form expression obtained for the profile likelihood function of the parameter vector (δ, ρ). A similar analysis is given for ψ and ρ, where ψ is the ratio of two lognormal medians, again yielding a simple closed-form expression for the profile likelihood function of these parameters. These expressions allow us to construct likelihood contour plots that capture most of the information provided by the samples and are valuable for identifying whether a trade-off between the parameters under study occurs; when it does, individual inferences should be interpreted carefully. A detailed series of Monte Carlo simulations is included, illustrating the performance of the profile likelihood and parametric bootstrap approaches for different sample sizes and parameter values.
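As a rough illustration of this approach, the R sketch below computes the relative profile likelihood of (δ, ρ) numerically for two simulated lognormal samples and draws the contour plot. It profiles out the nuisance parameters with optim rather than using the paper's closed-form expression, and all sample sizes and parameter values are illustrative.

```r
## Numerical profile log-likelihood of (delta, rho) for two lognormal samples.
## delta = ratio of lognormal means; rho = ratio of the normal (log-scale) SDs.
## On the log scale, log X ~ N(mu, sigma^2) and E[X] = exp(mu + sigma^2 / 2).
prof_loglik <- function(delta, rho, y1, y2) {
  # y1, y2: log-transformed samples. Fix (delta, rho); profile out (mu2, sigma2).
  nll <- function(par) {
    mu2 <- par[1]; s2 <- exp(par[2])            # keep sigma2 > 0
    s1  <- rho * s2
    mu1 <- log(delta) + mu2 + (s2^2 - s1^2) / 2 # implied by the definition of delta
    -sum(dnorm(y1, mu1, s1, log = TRUE)) - sum(dnorm(y2, mu2, s2, log = TRUE))
  }
  -optim(c(mean(y2), log(sd(y2))), nll)$value
}

set.seed(1)
y1 <- log(rlnorm(50, meanlog = 1.0, sdlog = 0.9))
y2 <- log(rlnorm(50, meanlog = 0.8, sdlog = 0.6))
delta <- exp(seq(log(0.5), log(6), length.out = 60))
rho   <- seq(0.5, 3, length.out = 60)
ll  <- outer(delta, rho, Vectorize(function(d, r) prof_loglik(d, r, y1, y2)))
rel <- exp(ll - max(ll))                        # relative profile likelihood
contour(delta, rho, rel, levels = c(0.05, 0.15, 0.5),
        xlab = expression(delta), ylab = expression(rho))
```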
Abstract: Li and Tiwari (2008) recently developed a corrected Z-test statistic for comparing trends in cancer age-adjusted mortality and incidence rates across overlapping geographic regions, properly adjusting for the correlation between the slopes of the fitted simple linear regression equations. One of their key assumptions is that the errors have an unknown but common variance. However, since age-adjusted rates are linear combinations of mortality or incidence counts, which arise naturally from an underlying Poisson process, this constant-variance assumption may be violated. This paper develops a weighted-least-squares based test that accommodates heteroscedastic error variances, and thus significantly extends the work of Li and Tiwari. The proposed test generally outperforms theirs, as shown through simulations and through application to age-adjusted mortality data from the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute.
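A minimal R sketch of the core idea, under the simplifying assumption of two independent regions (the overlap-correlation adjustment central to Li and Tiwari's correction is omitted): slopes are estimated by weighted least squares with weights inversely proportional to rate-dependent variances and compared by a Z statistic. All data are simulated.

```r
## Weighted-least-squares Z-test comparing two linear trends under
## heteroscedastic errors. NOTE: the adjustment for correlation between
## overlapping regions, central to Li and Tiwari (2008), is omitted here.
wls_slope <- function(year, rate, w) {
  fit <- lm(rate ~ year, weights = w)
  c(slope = unname(coef(fit)["year"]),
    se    = summary(fit)$coefficients["year", "Std. Error"])
}

set.seed(2)
year  <- 1990:2009
## Rates built from Poisson-like counts: variance grows with the rate,
## so weight each observation by the reciprocal of its estimated variance.
rate1 <- 50 - 0.8 * (year - 1990) + rnorm(20, sd = sqrt(50 - 0.8 * (year - 1990)) / 3)
rate2 <- 48 - 0.5 * (year - 1990) + rnorm(20, sd = sqrt(48 - 0.5 * (year - 1990)) / 3)
f1 <- wls_slope(year, rate1, 1 / rate1)
f2 <- wls_slope(year, rate2, 1 / rate2)
Z <- unname((f1["slope"] - f2["slope"]) / sqrt(f1["se"]^2 + f2["se"]^2))
p <- 2 * pnorm(-abs(Z))        # two-sided test of equal trends
c(Z = Z, p.value = p)
```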
Abstract: In this paper we consider clinical trials with two treatments and a non-normally distributed response variable. In addition, we focus on applications that include only discrete covariates and their interactions. For such applications, the semi-parametric Area Under the ROC Curve (AUC) regression model proposed by Dodd and Pepe (2003) can be used. However, because a logistic regression procedure is used to obtain parameter estimates and a bootstrapping method is needed to compute parameter standard errors, their method may be cumbersome to implement. In this paper we propose using a set of AUC estimates to obtain parameter estimates and combining DeLong's method with the delta method to compute parameter standard errors. Our new method avoids the heavy computation associated with Dodd and Pepe's method and hence is easy to implement. We conduct simulation studies showing that the two methods yield similar results. Finally, we illustrate our new method using data from urinary incontinence clinical trials.
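The R sketch below shows the building block this kind of method rests on: the Mann-Whitney AUC estimate for a single two-treatment comparison together with its DeLong (1988) standard error. The function name and simulated data are illustrative, not the authors' code.

```r
## AUC via the Mann-Whitney statistic with its DeLong standard error,
## for a single two-treatment comparison; names are illustrative.
auc_delong <- function(x, y) {          # x: treatment responses, y: control
  m <- length(x); n <- length(y)
  psi <- outer(x, y, function(a, b) (a > b) + 0.5 * (a == b))
  auc <- mean(psi)
  v10 <- rowMeans(psi)                  # structural components for x
  v01 <- colMeans(psi)                  # structural components for y
  se  <- sqrt(var(v10) / m + var(v01) / n)
  c(auc = auc, se = se)
}

set.seed(3)
auc_delong(rnorm(40, mean = 1), rnorm(40))  # true AUC = pnorm(1/sqrt(2)) ~ 0.76
```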
Abstract: Of interest in this paper is the development of a model that uses inverse sampling of binary data subject to false-positive misclassification to estimate a proportion. From this model, both the proportion of successes and the false-positive misclassification rate may be estimated. In addition, three first-order likelihood-based confidence intervals for the proportion of successes are mathematically derived and studied via Monte Carlo simulation. The simulation results indicate that the score and likelihood ratio intervals are generally preferable to the Wald interval. Lastly, the model is applied to a medical data set.
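A simplified R sketch of the Wald interval in this setting, under the extra assumption that the false-positive rate φ is known (the paper's fuller model also estimates the misclassification rate, which this sketch does not attempt):

```r
## Wald interval for a proportion under inverse (negative binomial) sampling,
## assuming the false-positive rate phi is KNOWN. An observed positive occurs
## with probability pstar = p + (1 - p) * phi; sampling stops at r positives
## after N total trials.
wald_inverse <- function(r, N, phi, level = 0.95) {
  pstar <- r / N                        # MLE of the observed-positive probability
  p_hat <- (pstar - phi) / (1 - phi)    # back out the true success probability
  ## Asymptotic Var(pstar) = pstar^2 * (1 - pstar) / r, then the delta method.
  v <- pstar^2 * (1 - pstar) / r / (1 - phi)^2
  z <- qnorm(1 - (1 - level) / 2)
  c(estimate = p_hat, lower = p_hat - z * sqrt(v), upper = p_hat + z * sqrt(v))
}

wald_inverse(r = 25, N = 90, phi = 0.05)
```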
Abstract: We study the spatial distribution of clusters associated with the aftershocks of the Mw 8.8 Maule megathrust earthquake of 27 February 2010. We use a recent clustering method, which hinges on a nonparametric estimate of the underlying probability density function, to detect subsets of points forming clusters associated with high-density areas. In addition, we estimate the probability density function of each of these clusters using a nonparametric kernel method. This allows us to identify a set of regions where frequency of events and coseismic slip are associated. Our results suggest that high coseismic slip is spatially related to high aftershock frequency.
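A minimal R sketch in this spirit, using kde2d from the MASS package on simulated epicenters to estimate the spatial density and outline a high-density region; the paper's actual clustering method and the Maule aftershock catalog are not reproduced here.

```r
## Kernel density estimate of simulated aftershock epicenters and a crude
## high-density region (top 5% of the estimated density).
library(MASS)

set.seed(4)
## Illustrative epicenters: two spatial clusters along a rupture zone.
lon <- c(rnorm(150, -73.0, 0.3), rnorm(100, -72.2, 0.2))
lat <- c(rnorm(150, -35.5, 0.4), rnorm(100, -34.3, 0.3))

dens  <- kde2d(lon, lat, n = 100)        # bivariate kernel density estimate
level <- quantile(dens$z, 0.95)          # density threshold for "high density"
image(dens, xlab = "Longitude", ylab = "Latitude")
contour(dens, levels = level, add = TRUE, lwd = 2)
```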
Abstract: The power generalized Weibull distribution due to Bagdonavičius and Nikulin (2002) is an alternative to, and always provides better fits than, the exponentiated Weibull family for modeling lifetime data. In this paper, we consider generalized order statistics (GOS) from this distribution. We obtain exact explicit expressions, as well as recurrence relations, for the single, product, and conditional moments of generalized order statistics from the power generalized Weibull distribution, and we then use these results to compute the means and variances of order statistics and record values for samples of different sizes and various values of the shape and scale parameters.
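As a numerical companion to such exact results, the R sketch below approximates the means and variances of order statistics by Monte Carlo, using inverse-transform sampling under one common parameterization of the power generalized Weibull survival function, S(t) = exp{1 - (1 + (t/σ)^ν)^(1/γ)}; the paper's parameterization may differ.

```r
## Monte Carlo check of order-statistic moments for the power generalized
## Weibull distribution via inverse-transform sampling from
## S(t) = exp{1 - (1 + (t/sigma)^nu)^(1/gamma)}.
rpgw <- function(n, sigma, nu, gamma) {
  u <- runif(n)                          # u plays the role of S(t)
  sigma * ((1 - log(u))^gamma - 1)^(1 / nu)
}

os_moments <- function(n, sigma, nu, gamma, B = 20000) {
  draws <- matrix(rpgw(n * B, sigma, nu, gamma), nrow = B, ncol = n)
  os <- t(apply(draws, 1, sort))         # each row: one sorted sample of size n
  list(means = colMeans(os), vars = apply(os, 2, var))
}

set.seed(5)
os_moments(n = 5, sigma = 1, nu = 1.5, gamma = 0.8)
```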
Abstract: This paper studies the effect the tax environment has on the health care coverage of individuals. It adds to the current literature on health care policy by examining how individuals switch types of health care coverage given a change in the tax environment. The distribution of health care coverage is investigated using transition matrices. A model is then used to determine how individuals might be expected to switch insurance types given a change in the tax environment. Based on the results of this study, the authors give some recommendations on the implications for health care policy makers.
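A small R sketch of the transition-matrix step, with entirely made-up coverage categories, sample size, and persistence, only to show the mechanics:

```r
## Estimating a coverage-transition matrix from two-period panel data;
## categories, sample size, and persistence are all illustrative.
set.seed(6)
types  <- c("Employer", "Individual", "Public", "Uninsured")
before <- sample(types, 500, replace = TRUE, prob = c(0.55, 0.10, 0.20, 0.15))
after  <- ifelse(runif(500) < 0.7, before,        # 70% keep their coverage type
                 sample(types, 500, replace = TRUE))

trans <- prop.table(table(before, after), margin = 1)
round(trans, 2)   # row i, column j: P(type j after change | type i before)
```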
Abstract: The Center for Neural Interface Design of the Biodesign Institute at Arizona State University conducted an experiment to investigate how the central nervous system controls hand orientation and movement direction during reach-to-grasp movements. ANOVA (analysis of variance), a conventional data analysis method widely used in neuroscience, was performed to categorize different neural activities. Preliminary studies of the data analysis methods have shown that the principal assumption of ANOVA is violated and that some characteristics of the data are lost by taking ratios of the recorded data. To compensate for these deficiencies of ANOVA, ANCOVA (analysis of covariance) is introduced in this paper. By considering neural firing counts and temporal intervals separately, we expect to extract more useful information for determining the correlations of different types of neurons with motor behavior. Compared to ANOVA, ANCOVA goes one step further in identifying which direction or orientation is favored during which epoch. We find that a considerable number of neurons are involved in movement direction, hand orientation, or both combined, and that some are significant in more than one epoch, indicating a network with unknown pathways connecting neurons in the motor cortex throughout the entire movement. For future studies we suggest integrating these results into neural network modeling in order to simulate the whole reach-to-grasp process.
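A hypothetical R sketch of such an ANCOVA, with simulated firing counts, a movement-direction factor, and the epoch's temporal interval as the covariate; variable names and effect sizes are illustrative.

```r
## ANCOVA on simulated neural data: firing counts modeled by a movement
## direction factor plus the epoch's temporal interval as covariate.
set.seed(7)
n <- 120
direction <- factor(sample(c("left", "center", "right"), n, replace = TRUE))
interval  <- runif(n, 0.3, 1.2)          # epoch length in seconds
counts    <- rpois(n, lambda = exp(1 + 0.8 * interval +
                                   0.4 * (direction == "right")))

fit <- lm(counts ~ direction + interval) # ANCOVA: factor + continuous covariate
anova(fit)                               # direction effect adjusted for interval
summary(fit)$coefficients                # which direction is favored, and by how much
```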
Abstract: Factor analysis is a data mining method that can be used to analyze large, multivariable datasets. Its main objective is to derive a set of uncorrelated variables for further analysis when the use of highly intercorrelated variables may give misleading results in regression analysis. In light of the vast advances that have occurred in factor analysis, due largely to the advent of electronic computers, this article attempts to provide researchers with a simplified approach to understanding how exploratory factor analysis works, and to provide a guide to its application using R. This multivariate method is an important tool that is often used in the development and evaluation of tests and measures in biomedical research. The paper concludes that factor analysis is an appropriate method for biomedical research because it enables clinical readers to better interpret and evaluate study goals and results.
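Consistent with the article's aim of providing an R guide, here is a minimal, self-contained example with factanal() on simulated data in which two latent factors drive six observed measures:

```r
## Minimal exploratory factor analysis with factanal(): two latent factors
## drive six observed measures in the simulated data.
set.seed(8)
n  <- 300
f1 <- rnorm(n); f2 <- rnorm(n)
X  <- cbind(x1 = f1 + rnorm(n, sd = 0.5), x2 = f1 + rnorm(n, sd = 0.5),
            x3 = f1 + rnorm(n, sd = 0.5), x4 = f2 + rnorm(n, sd = 0.5),
            x5 = f2 + rnorm(n, sd = 0.5), x6 = f2 + rnorm(n, sd = 0.5))

fa <- factanal(X, factors = 2, rotation = "varimax", scores = "regression")
print(fa$loadings, cutoff = 0.3)  # loadings recover the two-factor structure
head(fa$scores)                   # uncorrelated scores usable in later regressions
```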
Abstract: The concept of frailty provides a convenient way to introduce random effects into a model to account for association and unobserved heterogeneity. In its simplest form, a frailty is an unobserved random factor that multiplicatively modifies the hazard function of an individual or a group or cluster of individuals. In this paper, we study the positive stable distribution as the frailty distribution, with two different baseline distributions, namely the Pareto and the linear failure rate distributions. We estimate the parameters of the proposed models with a Bayesian estimation procedure using Markov chain Monte Carlo (MCMC) techniques. A simulation study is carried out to compare the true parameter values with their estimates. We fit the proposed models to the real-life bivariate survival data set of McGilchrist and Aisbett (1991) on kidney infection. We also present a comparison study for the same data using model selection criteria, and suggest the better model.
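A sketch in R of the data-generating side of such a model: a shared positive stable frailty (simulated with Kanter's 1975 method) acting multiplicatively on a Pareto-type baseline hazard. The Bayesian MCMC estimation described in the abstract is not attempted, and all parameter values are illustrative.

```r
## Bivariate survival times sharing a positive stable frailty (Laplace
## transform exp(-s^alpha)) that multiplies a Pareto (Lomax) baseline hazard.
rposstable <- function(n, alpha) {       # Kanter (1975) sampler, 0 < alpha < 1
  theta <- runif(n, 0, pi); w <- rexp(n)
  sin(alpha * theta) *
    (sin((1 - alpha) * theta) / w)^((1 - alpha) / alpha) / sin(theta)^(1 / alpha)
}

set.seed(9)
n <- 500; alpha <- 0.7
z <- rposstable(n, alpha)                # one shared frailty per pair
## Pareto baseline: H0(t) = a * log(1 + t/s), so inverting exp(-z * H0(T)) = U
## gives T = s * (exp(E / (a * z)) - 1) with E ~ Exp(1).
a <- 2; s <- 1
t1 <- s * (exp(rexp(n) / (a * z)) - 1)
t2 <- s * (exp(rexp(n) / (a * z)) - 1)
cor(t1, t2, method = "kendall")          # Kendall's tau should be near 1 - alpha
```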