Choosing an appropriate bivariate parametric probability distribution for pairs of lifetime data in the presence of censored observations is usually not a simple task. Each bivariate lifetime distribution proposed in the literature has a different dependence structure. Classical or Bayesian discrimination methods can be used to select the best among several candidate distributions, but these techniques cannot establish that a particular model actually fits the data set well. In this paper, we explore a dependence measure for bivariate data recently introduced in the literature to propose a simple graphical criterion for choosing an appropriate bivariate lifetime distribution in the presence of censored data.
In this work, we introduce a new distribution for modeling extreme values. Some important mathematical properties of the new model are derived. We assess the performance of the maximum likelihood method in terms of bias and mean squared error by means of a simulation study. The new model outperforms several important competing models in fitting the repair-times data and the breaking-stress data.
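The abstract's simulation study, which measures the bias and mean squared error of maximum likelihood estimates, can be sketched generically. Since the new model itself is not specified here, the sketch below uses a standard two-parameter Weibull as a hypothetical stand-in; the true parameter values, sample size, and replication count are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical stand-in for the paper's model: Weibull(shape=2, scale=1).
rng = np.random.default_rng(0)
true_shape, true_scale = 2.0, 1.0
n, reps = 200, 300

est = np.empty((reps, 2))
for r in range(reps):
    x = weibull_min.rvs(true_shape, scale=true_scale, size=n, random_state=rng)
    c, loc, s = weibull_min.fit(x, floc=0)   # MLE with location fixed at 0
    est[r] = (c, s)

# Monte Carlo bias and MSE of the MLEs over the replications
bias = est.mean(axis=0) - np.array([true_shape, true_scale])
mse = ((est - np.array([true_shape, true_scale])) ** 2).mean(axis=0)
print("bias:", bias, "MSE:", mse)
```

The same loop, repeated over a grid of sample sizes, yields the bias/MSE tables such simulation studies typically report.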
In the recent statistical literature, the difference between explanatory and predictive statistical models has been emphasized. One of the tenets of this dichotomy is that variable selection methods should be applied only to predictive models. In this paper, we compare the effectiveness of the acquisition strategies implemented by Google and Yahoo for the management of innovations. We argue that this is a predictive situation and thus apply lasso variable selection to a Cox regression model in order to compare the Google and Yahoo results. We show that the predictive approach yields results that differ from those of an explanatory approach, refuting the conventional wisdom that Google was always superior to Yahoo during the period under consideration.
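Lasso variable selection in a Cox model can be sketched from first principles: minimize the negative partial log-likelihood plus an L1 penalty by proximal gradient descent (soft-thresholding each step). The data below are simulated, and the penalty weight, step size, and iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated survival data: two informative covariates, four noise covariates.
n, p = 300, 6
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, -1.0, 0.0, 0.0, 0.0, 0.0])
t = rng.exponential(1.0 / np.exp(X @ beta_true))   # event times
c = rng.exponential(2.0, size=n)                   # censoring times
time, event = np.minimum(t, c), (t <= c).astype(float)

# Sort by decreasing time so each risk set is a prefix of the array.
order = np.argsort(-time)
X, event = X[order], event[order]

def grad_neg_partial_loglik(beta):
    """Gradient of the averaged negative Cox partial log-likelihood."""
    w = np.exp(X @ beta)
    cw = np.cumsum(w)                      # risk-set sums of exp(eta)
    cwx = np.cumsum(w[:, None] * X, axis=0)
    return -(event[:, None] * (X - cwx / cw[:, None])).sum(axis=0) / n

# Proximal gradient (ISTA) with an L1 penalty: gradient step, then soft-threshold.
lam, step = 0.05, 0.2
beta = np.zeros(p)
for _ in range(3000):
    z = beta - step * grad_neg_partial_loglik(beta)
    beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
print(beta)   # lasso shrinks the noise coefficients toward zero
```

In practice one would use a dedicated package and choose the penalty by cross-validated partial likelihood; the sketch only shows the mechanics.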
The concept of ranked set sampling (RSS) is applicable whenever ranking on a set of sampling units can be done easily by a judgment method or based on an auxiliary variable. In this work, we consider a study variable Y correlated with an auxiliary variable X which is used to rank the sampling units. Further, (X, Y) is assumed to have a Morgenstern-type bivariate generalized uniform distribution. We obtain an unbiased estimator of a scale parameter associated with the study variable Y based on different RSS schemes and censored RSS. An efficiency comparison of these estimators is also performed and presented numerically.
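The RSS mechanism described above can be illustrated by simulation. The sketch below uses a simple correlated-uniform pair as a stand-in for the Morgenstern-type bivariate generalized uniform model (an assumption for brevity), and checks two qualitative facts: the RSS sample mean is unbiased, and ranking on X makes it more efficient than simple random sampling (SRS) of the same size.

```python
import numpy as np

rng = np.random.default_rng(2)

def draw_xy(m, rho=0.9):
    # Stand-in bivariate model: Y correlated with auxiliary X (not Morgenstern)
    x = rng.uniform(size=m)
    y = rho * x + (1 - rho) * rng.uniform(size=m)
    return x, y

def rss_mean(m, cycles):
    # One cycle: m sets of m units; from set k, measure the Y of the unit
    # whose auxiliary X has rank k. Only ranks of X are used, never Y itself.
    ys = []
    for _ in range(cycles):
        for k in range(m):
            x, y = draw_xy(m)
            ys.append(y[np.argsort(x)[k]])
    return np.mean(ys)

def srs_mean(m, cycles):
    return np.mean([draw_xy(1)[1][0] for _ in range(m * cycles)])

m, cycles, reps = 4, 5, 2000
rss = np.array([rss_mean(m, cycles) for _ in range(reps)])
srs = np.array([srs_mean(m, cycles) for _ in range(reps)])
print(rss.mean(), srs.var() / rss.var())   # ~E[Y]=0.5, efficiency ratio > 1
```

The paper's estimators target a scale parameter rather than the mean, but the ranking-by-auxiliary-variable mechanics are the same.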
Compound distributions gained their importance from the fact that natural factors have compound effects, as in medical, social, and biological experiments. Dubey (1968) introduced the compound Weibull by compounding the Weibull distribution with the gamma distribution. The main aim of this paper is to define a bivariate generalized Burr (compound Weibull) distribution whose marginals have univariate generalized Burr distributions. Several properties of this distribution, such as the marginals, conditional distributions, and product moments, are discussed. The maximum likelihood estimates of the unknown parameters of this distribution and their approximate variance-covariance matrix are obtained. Some simulations are performed to assess the performance of the MLEs. A data analysis is performed for illustrative purposes.
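The compounding step underlying the construction can be verified numerically: if T | θ is Weibull with survival exp(−θ t^c) and θ follows a gamma distribution with shape k and rate β, then marginally S(t) = E[exp(−θ t^c)] = (1 + t^c/β)^(−k), a Burr XII-type survival function. The parameter values below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
c, k, beta = 1.5, 2.0, 1.0                       # Weibull shape; gamma shape, rate

# Draw theta ~ Gamma(shape=k, rate=beta), then T | theta ~ Weibull(c) with
# conditional survival S(t | theta) = exp(-theta * t^c).
theta = rng.gamma(k, 1.0 / beta, size=200_000)   # numpy uses (shape, scale)
u = rng.uniform(size=theta.size)
t = (-np.log(u) / theta) ** (1.0 / c)            # inverse-cdf sampling

# Compare the empirical survival with the Burr (compound Weibull) form.
ts = np.array([0.5, 1.0, 2.0])
emp = np.array([(t > s).mean() for s in ts])
burr = (1.0 + ts ** c / beta) ** (-k)
print(emp, burr)
```

The two survival curves agree to Monte Carlo accuracy, which is exactly the gamma moment-generating-function identity E[e^(−sθ)] = (β/(β + s))^k evaluated at s = t^c.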
In this paper, a new five-parameter extended Burr XII model called the new modified Singh-Maddala (NMSM) is developed from the cumulative hazard function of the modified log extended integrated beta hazard (MLEIBH) model. The NMSM density function can be left-skewed, right-skewed, or symmetrical. The Lambert W function is used to study descriptive measures based on quantiles, moments, moments of order statistics, incomplete moments, inequality measures, and the residual life function. Different reliability and uncertainty measures are also established theoretically. The NMSM distribution is characterized via different techniques and its parameters are estimated using the maximum likelihood method. Simulation studies, supported by graphical results, illustrate the performance of the maximum likelihood estimates (MLEs) of the parameters. The significance and flexibility of the NMSM distribution are assessed through different measures in applications to two real data sets.
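The role of the Lambert W function in quantile work can be illustrated on a simpler cdf whose inverse needs it. The NMSM quantile function itself is not reproduced here; the sketch instead inverts F(x) = 1 − (1 + x)e^(−x) (the Gamma(2, 1) cdf), which requires the W₋₁ branch, the same device the abstract describes.

```python
import numpy as np
from scipy.special import lambertw

def quantile(p):
    # Solve 1 - (1 + x) e^{-x} = p for x:
    # let z = -(1 + x); then z e^{z} = -(1 - p)/e, so z = W_{-1}(-(1 - p)/e).
    return -1.0 - lambertw(-(1.0 - p) / np.e, k=-1).real

x = quantile(0.5)                    # median of Gamma(2, 1)
F = 1.0 - (1.0 + x) * np.exp(-x)     # plug back in: should recover p = 0.5
print(x, F)
```

Closed-form quantiles of this kind make moment-of-order-statistics and inequality-measure calculations tractable without numerical root-finding.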
In this paper, we introduce the generalized extended inverse Weibull finite-failure software reliability growth model, which accommodates both increasing and decreasing hazard functions. The increasing or decreasing behavior of the failure occurrence rate per fault is captured by the hazard of the generalized extended inverse Weibull distribution. We propose a finite-failure non-homogeneous Poisson process (NHPP) software reliability growth model and obtain the unknown model parameters using the maximum likelihood method for interval-domain data. Illustrations are given of parameter estimation using standard data sets taken from actual software projects. A goodness-of-fit test is performed to check statistically whether the fitted model agrees with the observed data; we base this test on the Kolmogorov-Smirnov (K-S) statistic. The proposed model is compared with some standard existing models through the error sum of squares, mean sum of squares, predictive ratio risk, and Akaike's information criterion using three different data sets. We show that the observed data fit the proposed software reliability growth model, and that the proposed model performs better than existing finite-failure-category models.
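The K-S goodness-of-fit step can be sketched generically. The data below are simulated and the fitted model is a plain exponential, a hypothetical stand-in for the paper's NHPP model; note also that when parameters are estimated from the same data, the nominal K-S p-value is conservative (a Lilliefors-type correction would be needed for exact inference).

```python
import numpy as np
from scipy.stats import expon, kstest

rng = np.random.default_rng(4)

# Hypothetical inter-failure times; exponential model fitted by MLE.
x = rng.exponential(2.0, size=100)
scale_mle = x.mean()                       # MLE of the exponential mean

# K-S test of the fitted cdf against the empirical cdf.
stat, pvalue = kstest(x, expon(scale=scale_mle).cdf)
print(stat, pvalue)
```

A large p-value (equivalently, a small K-S statistic relative to its critical value) means the fitted model cannot be rejected at the chosen level.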
In this paper, a comparison is provided for volatility estimation in Bayesian and frequentist settings. We compare the predictive performance of these two approaches under the generalized autoregressive conditional heteroscedasticity (GARCH) model. Our results indicate that frequentist estimation provides better predictive potential than the Bayesian approach, a finding contrary to some previous work in this line of research. To illustrate the finding, we use six major foreign exchange rate data sets.
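The frequentist side of the comparison is maximum likelihood on the GARCH recursion, which can be sketched from scratch. The path below is simulated with assumed parameter values; a Bayesian treatment would instead place priors on (ω, α, β) and sample the posterior, e.g. by MCMC.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Simulate a Gaussian GARCH(1,1): sigma2_t = omega + alpha r_{t-1}^2 + beta sigma2_{t-1}
omega, alpha, beta = 0.05, 0.10, 0.85
n = 3000
r = np.empty(n)
s2 = omega / (1.0 - alpha - beta)      # start at the unconditional variance
r_prev = 0.0
for t in range(n):
    s2 = omega + alpha * r_prev ** 2 + beta * s2
    r[t] = np.sqrt(s2) * rng.standard_normal()
    r_prev = r[t]

def nll(params):
    """Gaussian negative log-likelihood of the GARCH(1,1) recursion."""
    o, a, b = params
    if o <= 0 or a < 0 or b < 0 or a + b >= 1:   # stationarity constraints
        return np.inf
    s2 = np.empty(n)
    s2[0] = r.var()                              # conventional initialization
    for t in range(1, n):
        s2[t] = o + a * r[t - 1] ** 2 + b * s2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * s2) + r ** 2 / s2)

res = minimize(nll, x0=(0.1, 0.05, 0.80), method="Nelder-Mead")
print(res.x)   # frequentist (ML) estimates of (omega, alpha, beta)
```

Out-of-sample predictive comparisons then score one-step-ahead variance forecasts from the ML fit against those from the posterior predictive distribution.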
In this paper, we introduce alternative estimation methods for the parameters of the new Weibull-Pareto distribution. We discuss point and interval estimation for the parameters, covering the methods of maximum likelihood, least squares, weighted least squares, and maximum product spacing. In addition, we discuss the raw moments of the random variable X and the reliability functions (survival and hazard functions). Further, we compare the results of these methods using a Monte Carlo simulation and an application study.
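Of the methods listed, maximum product spacing (MPS) is the least familiar: it maximizes the product of cdf spacings D_i = F(x_(i)) − F(x_(i−1)) over the ordered sample. Since the new Weibull-Pareto cdf is not reproduced here, the sketch uses a plain two-parameter Weibull as a stand-in; the true parameters and sample size are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(6)
x = np.sort(weibull_min.rvs(1.5, scale=2.0, size=150, random_state=rng))

def neg_log_spacing(params):
    """Negative log product of spacings; MPS minimizes this."""
    c, scale = params
    if c <= 0 or scale <= 0:
        return np.inf
    F = weibull_min.cdf(x, c, scale=scale)
    # Spacings include the end gaps F(x_(1)) - 0 and 1 - F(x_(n)).
    D = np.diff(np.concatenate(([0.0], F, [1.0])))
    if np.any(D <= 0):
        return np.inf
    return -np.log(D).sum()

res = minimize(neg_log_spacing, x0=(1.0, 1.0), method="Nelder-Mead")
print(res.x)   # MPS estimates of (shape, scale)
```

Swapping in the Weibull-Pareto cdf for `weibull_min.cdf` gives the estimator the paper compares; the other methods differ only in the objective being minimized.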
In this paper, a new distribution is constructed by combining the cumulative distribution functions (cdfs) of the Lomax and Lindley distributions. Some mathematical properties of the new distribution are discussed, including moments, quantiles, and the moment generating function. Estimation of the model parameters is carried out using the maximum likelihood method. Finally, real data examples are presented to illustrate the usefulness and applicability of this new distribution.