In this paper, a new four-parameter zero-truncated Poisson-Fréchet distribution is defined and studied. Various structural mathematical properties of the proposed model, including ordinary moments, incomplete moments, generating functions, order statistics, and residual and reversed residual life functions, are investigated. The maximum likelihood method is used to estimate the model parameters, and its performance is assessed by means of a numerical simulation study. The new distribution is applied to two real data sets to illustrate its flexibility empirically.
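A common construction for compound zero-truncated Poisson-G families takes the maximum of N i.i.d. draws from the baseline distribution, where N follows a zero-truncated Poisson law; the sketch below simulates that construction with a Fréchet baseline. The maximum construction, the parameterization, and all numeric values are assumptions for illustration; the paper's exact definition may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def rztp(lam, size, rng):
    """Sample a zero-truncated Poisson(lam) by resampling any zeros."""
    n = rng.poisson(lam, size)
    while (n == 0).any():
        zero = n == 0
        n[zero] = rng.poisson(lam, zero.sum())
    return n

def rfrechet(alpha, beta, size, rng):
    """Frechet(shape alpha, scale beta) via inversion: F(x) = exp(-(x/beta)**-alpha)."""
    u = rng.uniform(size=size)
    return beta * (-np.log(u)) ** (-1.0 / alpha)

def rztp_frechet_max(lam, alpha, beta, size, rng):
    """X = max of N iid Frechet draws, with N ~ zero-truncated Poisson(lam)."""
    n = rztp(lam, size, rng)
    return np.array([rfrechet(alpha, beta, k, rng).max() for k in n])

x = rztp_frechet_max(lam=2.0, alpha=3.0, beta=1.0, size=10000, rng=rng)
```

Because N is truncated at zero, at least one Fréchet draw always contributes, so every simulated value is strictly positive.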
Abstract: This paper introduces the beta linear failure rate geometric (BLFRG) distribution, which contains a number of distributions, including the exponentiated linear failure rate geometric, linear failure rate geometric, linear failure rate, exponential geometric, Rayleigh geometric, Rayleigh, and exponential distributions, as special cases. The model further generalizes the linear failure rate distribution. A comprehensive investigation of the model's properties, including moments, conditional moments, deviations, Lorenz and Bonferroni curves, and entropy, is presented. Estimates of the model parameters are given, and real data examples are presented to illustrate the usefulness and applicability of the distribution.
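The linear failure rate (LFR) distribution named among the special cases has hazard h(t) = a + b·t and survival S(t) = exp(-(a·t + b·t²/2)), with b = 0 recovering the exponential and a = 0 the Rayleigh distribution. A minimal sketch of this baseline (not of the full BLFRG model, whose form is not reproduced here):

```python
import numpy as np

def lfr_hazard(t, a, b):
    """Linear failure rate hazard h(t) = a + b*t, with a, b >= 0."""
    return a + b * t

def lfr_survival(t, a, b):
    """Survival S(t) = exp(-(a*t + b*t**2/2)).
    b = 0 gives the exponential; a = 0 gives the Rayleigh distribution."""
    return np.exp(-(a * t + b * t**2 / 2.0))

# sanity check: with b = 0 the survival matches the exponential distribution
t = np.linspace(0.0, 5.0, 6)
assert np.allclose(lfr_survival(t, 0.5, 0.0), np.exp(-0.5 * t))
```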
Abstract: Accelerated degradation tests (ADTs) can provide timely reliability information about a product, and have therefore been widely used to assess the lifetime distribution of highly reliable products. To predict the lifetime distribution properly, modeling the product's degradation path plays a key role in a degradation analysis. In this paper, we use a stochastic diffusion process to describe the product's degradation path, and a recursive formula for the product's lifetime distribution is obtained from the first passage time (FPT) of the degradation path. In addition, two approximate formulas for the product's mean time to failure (MTTF) and median life (B50) are given. Finally, we extend the proposed method to the ADT setting, and a real LED data set is used to illustrate the proposed procedure. The results demonstrate that the proposed method performs well for LED lifetime prediction.
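The lifetime-as-FPT idea can be sketched by Monte Carlo: simulate degradation paths and record the first time each crosses a failure threshold. The sketch below uses a drifted Wiener process X(t) = μt + σB(t) as the diffusion, a simple stand-in rather than the paper's specific model; all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_fpt(mu, sigma, threshold, dt=0.01, t_max=20.0, n_paths=2000, rng=rng):
    """Monte Carlo first-passage times of X(t) = mu*t + sigma*B(t)
    over a fixed degradation threshold; NaN if a path never crosses."""
    n_steps = int(t_max / dt)
    steps = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    paths = steps.cumsum(axis=1)
    crossed = paths >= threshold
    hit = crossed.any(axis=1)
    first = crossed.argmax(axis=1)            # index of first crossing (0 if never)
    return np.where(hit, (first + 1) * dt, np.nan)

fpt = simulate_fpt(mu=1.0, sigma=0.2, threshold=5.0)
# for a drifted Wiener process the FPT is inverse Gaussian with mean threshold/mu
```

The sample mean of `fpt` approximates the MTTF; for this particular process the exact FPT distribution is inverse Gaussian with mean threshold/μ = 5.0.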
The paper deals with robust ANCOVA when there are one or two covariates. Let Mj(Y|X) = β0j + β1jX1 + β2jX2 be some conditional measure of location associated with the random variable Y, given X, where β0j, β1j, and β2j are unknown parameters. A basic goal is testing the hypothesis H0: M1(Y|X) = M2(Y|X). A classic ANCOVA method is aimed at addressing this goal, but it is well known that violating the underlying assumptions (normality, parallel regression lines, and two types of homoscedasticity) creates serious practical concerns. Methods are available for dealing with heteroscedasticity and nonnormality, and there are well-known techniques for controlling the probability of one or more Type I errors, but some practical concerns remain, which are reviewed in the paper. An alternative approach is suggested and found to have a distinct power advantage.
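The hypothesis H0: M1(Y|X) = M2(Y|X) amounts to comparing two groups' conditional location curves. A minimal classical sketch (ordinary least squares with one covariate, simulated data with H0 true) shows the quantity being compared: fitted conditional means from each group evaluated at a few covariate values. The robust estimators and the test itself are not reproduced here; data and design points are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_line(x, y):
    """OLS fit of y = b0 + b1*x; returns the coefficient vector (b0, b1)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# two simulated groups sharing the same regression line (H0 true)
x1, x2 = rng.uniform(0, 1, 100), rng.uniform(0, 1, 100)
y1 = 1.0 + 2.0 * x1 + rng.standard_normal(100)
y2 = 1.0 + 2.0 * x2 + rng.standard_normal(100)

b1_, b2_ = fit_line(x1, y1), fit_line(x2, y2)
# compare the fitted conditional means M1 and M2 at chosen covariate values
pts = np.array([0.25, 0.5, 0.75])
diff = (b1_[0] + b1_[1] * pts) - (b2_[0] + b2_[1] * pts)
```

A robust variant would replace the OLS fit with a robust location estimator and assess the differences in `diff` with a heteroscedasticity-consistent procedure.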
Abstract: Conceptually, a moderator is a variable that modifies the effect of a predictor on a response. Analytically, the common approach used in most moderation analyses is to add analytic interactions involving the predictor and moderator, in the form of cross-variable products, and to test the significance of such terms. The narrow scope of this procedure is inconsistent with the broader conceptual definition of moderation, leading to confusion in the interpretation of study findings. In this paper, we develop a new analytic approach that is consistent with the concept of moderation. The proposed framework defines moderation as a process that modifies an existing relationship between the predictor and the outcome, rather than simply as a test of a predictor-by-moderator interaction. The approach is illustrated with data from a real study.
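The conventional procedure the abstract contrasts against can be sketched in a few lines: regress the response on the predictor, the moderator, and their cross-variable product, then examine the product-term coefficient. Data and coefficient values below are simulated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.standard_normal(n)            # predictor
m = rng.standard_normal(n)            # moderator
# simulated moderated relationship: the slope of x depends on m
y = 1.0 + 2.0 * x + 0.5 * m + 1.5 * x * m + rng.standard_normal(n)

# regression with the cross-variable product (interaction) term
X = np.column_stack([np.ones(n), x, m, x * m])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[3] estimates the interaction coefficient (near 1.5 for this simulation)
```

Testing whether `beta[3]` differs from zero is exactly the narrow interaction test the paper argues is insufficient for the broader concept of moderation.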
Abstract: A Bayesian hierarchical model is developed for multiple comparisons in mixed models with missing values, where the population means satisfy a simple order restriction. We employ Gibbs sampling and Metropolis-within-Gibbs sampling to obtain parameter estimates and estimates of the posterior probabilities of equality of the mean pairs. The latter estimates are used to test whether any two means are significantly different, and to test the global hypothesis of the equality of all means. The performance of the model is investigated in simulations, both with multiple imputation and with missingness ignored. We also illustrate the utility of the model with a real data set. The results show that the proposed hierarchical model can effectively unify parameter estimation, multiple imputation, and multiple comparisons in one setting.
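Metropolis-within-Gibbs alternates exact conditional (Gibbs) draws for parameters with tractable conditionals and Metropolis updates for the rest. A toy sketch for a normal mean and scale, far simpler than the paper's hierarchical model with order restrictions and missing data, illustrates the alternation; priors and tuning constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.normal(3.0, 1.0, 200)          # simulated data: true mean 3, sd 1
n, ybar = len(y), y.mean()

def log_post_sigma(log_sig, mu):
    """Log-posterior of log(sigma) given mu, under a flat prior on log(sigma)."""
    sig2 = np.exp(2 * log_sig)
    return -n * log_sig - 0.5 * np.sum((y - mu) ** 2) / sig2

mu, log_sig, mu_draws = 0.0, 0.0, []
for it in range(3000):
    # Gibbs step: mu | sigma, y is conjugate Normal(ybar, sigma^2/n) (flat prior)
    mu = rng.normal(ybar, np.exp(log_sig) / np.sqrt(n))
    # Metropolis step: random-walk proposal for log(sigma), accept/reject
    prop = log_sig + 0.1 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post_sigma(prop, mu) - log_post_sigma(log_sig, mu):
        log_sig = prop
    mu_draws.append(mu)

posterior_mean = np.mean(mu_draws[1000:])  # discard burn-in draws
```

In the paper's setting the same alternation would cycle through the ordered means, the variance components, and the imputed missing values.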
In this paper an attempt has been made to analyze child mortality using a hazard model in a Bayesian framework; a family effect is also incorporated in the model through a multiplicative random effect (frailty). To fit the model, real data were taken from the District Level Household and Facility Survey (DLHS)-3 for Uttar Pradesh, the most populous state of India. The deviance information criterion is used to compare models, and the model with family frailty is found to give a better fit. All analyses were performed in the WinBUGS software, which uses Markov chain Monte Carlo simulation via Gibbs sampling.
Abstract: Count data often have excess zeros in many clinical studies. These zeros usually represent a "disease-free state". Although disease (event) free at the time, some of these individuals might be at high risk of having the putative outcome, while others may be at low or no such risk. We postulate these zeros to be of one of two types, either 'low risk' or 'high risk' zeros for the disease process in question. Low-risk zeros can arise from the absence of risk factors for disease initiation/progression and/or from a very early stage of the disease. High-risk zeros can arise from the presence of significant risk factors for disease initiation/progression or, in rare situations, from misclassification, more specific diagnostic tests, or levels below the limit of detection. We use zero-inflated models, which allow us to assume that zeros arise from one of two separate latent processes, one giving low-risk zeros and the other high-risk zeros, and subsequently propose a strategy to identify and classify them as such. To illustrate, we use data on the number of involved nodes in breast cancer patients. Of the 1152 patients studied, 38.8% were node-negative (zeros). The model predicted that about a third (11.4%) of the negative nodes are "high risk" and the remaining (27.4%) are at "low risk" of nodal positivity. Posterior-probability-based classification was more appropriate than other methods. Our approach indicates that some node-negative patients may be re-assessed for their diagnosis of nodal positivity and/or for future clinical management of their disease. The approach developed here is applicable to any scenario where the disease or outcome can be characterized by count data.
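Under a zero-inflated Poisson model with mixing probability π for the inflation component and Poisson rate λ for the count component, the posterior probability that an observed zero came from the inflation component is π / (π + (1 − π)e^(−λ)), by Bayes' rule. This is the kind of posterior probability the classification strategy relies on; which component maps to "low risk" versus "high risk" depends on the model specification, and the parameter values below are hypothetical, not the breast cancer estimates.

```python
import numpy as np

def prob_inflated_zero(pi, lam):
    """Posterior probability that an observed zero is from the inflation
    component of a zero-inflated Poisson: pi / (pi + (1-pi)*exp(-lam))."""
    return pi / (pi + (1.0 - pi) * np.exp(-lam))

p = prob_inflated_zero(pi=0.3, lam=2.0)
# classify an observed zero into that component if p exceeds a cutoff, e.g. 0.5
```

With π = 0.3 and λ = 2.0, for example, an observed zero is assigned to the inflation component with posterior probability of about 0.76.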