Abstract: Missing values are common in longitudinal studies. Missingness may be due to withdrawal from the study (dropout) or may be intermittent. The missing data mechanism is termed non-ignorable if the probability of missingness depends on the unobserved (missing) observations. This paper presents a model for continuous longitudinal data with non-ignorable, non-monotone missing values. Two separate models are assumed: the response is modeled as multivariate normal, and a binomial model is used for the missingness process. Parameters of the adopted model are estimated using the stochastic EM algorithm. The proposed approach is then applied to an example from the International Breast Cancer Study Group.
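As a rough sketch of the stochastic EM idea, the Python fragment below alternates an S-step, which redraws each subject's missing entries from their conditional multivariate normal distribution, with an M-step that re-estimates the mean and covariance. It simplifies away the paper's binomial missingness model, and all function names are ours.

```python
import numpy as np

def draw_missing(mu, Sigma, y, obs, mis, rng):
    """Draw the missing entries of y from their conditional normal
    distribution given the observed entries (standard MVN formulas)."""
    A = Sigma[np.ix_(mis, obs)] @ np.linalg.inv(Sigma[np.ix_(obs, obs)])
    cond_mu = mu[mis] + A @ (y[obs] - mu[obs])
    cond_Sigma = Sigma[np.ix_(mis, mis)] - A @ Sigma[np.ix_(obs, mis)]
    return rng.multivariate_normal(cond_mu, cond_Sigma)

def stochastic_em(Y, n_iter=200, seed=0):
    """Toy stochastic EM for the mean and covariance of a multivariate
    normal response with missing entries (NaN-coded)."""
    rng = np.random.default_rng(seed)
    Y, mask = Y.copy(), np.isnan(Y)
    col_means = np.nanmean(Y, axis=0)
    Y[mask] = np.take(col_means, np.where(mask)[1])   # crude start
    mu, Sigma = Y.mean(axis=0), np.cov(Y, rowvar=False)
    for _ in range(n_iter):
        # S-step: impute by simulation instead of taking expectations
        for i in range(Y.shape[0]):
            mis = np.flatnonzero(mask[i])
            if 0 < mis.size < Y.shape[1]:
                obs = np.flatnonzero(~mask[i])
                Y[i, mis] = draw_missing(mu, Sigma, Y[i], obs, mis, rng)
        # M-step: complete-data maximum likelihood update
        mu, Sigma = Y.mean(axis=0), np.cov(Y, rowvar=False)
    return mu, Sigma
```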
Abstract: This paper estimates the interest rate term structures of Treasury and individual corporate bonds using a robust criterion. The Treasury term structure is estimated with Bayesian regression splines based on nonlinear least absolute deviation. The number and locations of the knots in the regression splines are adaptively chosen using the reversible jump Markov chain Monte Carlo method. Due to the small sample size, the individual corporate term structure is estimated by adding a positive parametric credit spread to the estimated Treasury term structure using a Bayesian approach. We present a case study of U.S. Treasury STRIPS (Separate Trading of Registered Interest and Principal of Securities) and AT&T bonds from April 1994 to December 1996. Compared with several existing term structure estimation approaches, the proposed method is robust to outliers in our case study.
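The core robust ingredient, fitting a regression spline under the least absolute deviation criterion for a fixed knot set, can be sketched as follows; the knot selection by reversible jump MCMC and the nonlinear pricing relation are beyond this fragment, and the truncated-power basis and optimizer choices are our own.

```python
import numpy as np
from scipy.optimize import minimize

def lad_spline(x, y, knots):
    """Cubic regression spline fitted under the least-absolute-deviation
    (LAD) criterion for a fixed set of interior knots."""
    def basis(z):
        cols = [np.ones_like(z), z, z**2, z**3]
        cols += [np.clip(z - k, 0.0, None) ** 3 for k in knots]
        return np.column_stack(cols)
    B = basis(x)
    beta0 = np.linalg.lstsq(B, y, rcond=None)[0]       # LS warm start
    l1_loss = lambda b: np.abs(y - B @ b).sum()        # robust criterion
    beta = minimize(l1_loss, beta0, method="Powell").x
    return beta, lambda z: basis(z) @ beta             # fitted curve
```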
Abstract: In this paper, we introduce an extended four-parameter Fréchet model, called the exponentiated exponential Fréchet distribution, which arises from the quantile function of the standard exponential distribution. Several of its mathematical properties are derived, including the quantile function, ordinary and incomplete moments, Bonferroni and Lorenz curves, mean deviations, mean residual life, mean waiting time, generating function, Shannon entropy, and order statistics. The model parameters are estimated by the method of maximum likelihood, and the observed information matrix is determined. The usefulness of the new distribution is illustrated by means of three real lifetime data sets. In fact, the new model provides a better fit to these data than the Marshall–Olkin Fréchet, exponentiated Fréchet, and Fréchet models.
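The full four-parameter quantile function is derived in the paper; as a minimal illustration of the quantile-function route, the snippet below draws variates from the baseline Fréchet by inverse transform and checks the fit with scipy, whose invweibull family is the Fréchet distribution (parameter values are arbitrary).

```python
import numpy as np
from scipy import stats

# Baseline Fréchet CDF G(x) = exp(-(sigma / x)**alpha), x > 0;
# inverting it gives the quantile function used for simulation.
def frechet_quantile(u, alpha, sigma):
    return sigma * (-np.log(u)) ** (-1.0 / alpha)

rng = np.random.default_rng(1)
x = frechet_quantile(rng.uniform(size=10_000), alpha=2.5, sigma=1.0)

# scipy's invweibull is the Fréchet; the ML fit should recover alpha
alpha_hat, _, sigma_hat = stats.invweibull.fit(x, floc=0)
print(round(alpha_hat, 2), round(sigma_hat, 2))
```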
The Pareto distribution is a power-law probability distribution used to describe social-scientific, geophysical, actuarial, and many other types of observable phenomena. A new weighted Pareto distribution is proposed using a logarithmic weight function. Several statistical properties of the weighted Pareto distribution are derived, including the cumulative distribution function; location measures such as the mode, median, and mean; reliability measures such as the reliability, hazard, and reversed hazard functions and the mean residual life; moments; shape indices such as the skewness and kurtosis coefficients; and order statistics. Parametric estimation is performed to obtain estimators of the distribution parameters using three methods: maximum likelihood, L-moments, and the method of moments. A numerical simulation is carried out to validate the robustness of the proposed distribution, and the distribution is fitted to a real data set to demonstrate its usefulness in real-life applications.
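The weighted-distribution construction behind such proposals is f_w(x) ∝ w(x)f(x). The sketch below applies it numerically to a Pareto baseline with an assumed logarithmic weight w(x) = 1 + log(x/x_m), since the exact weight is not specified in this summary.

```python
import numpy as np
from scipy import integrate

alpha, xm = 3.0, 1.0                                  # Pareto shape, scale
f = lambda x: alpha * xm**alpha / x**(alpha + 1)      # baseline Pareto pdf
w = lambda x: 1.0 + np.log(x / xm)                    # assumed log weight

c, _ = integrate.quad(lambda x: w(x) * f(x), xm, np.inf)  # E[w(X)]
f_w = lambda x: w(x) * f(x) / c                       # weighted Pareto pdf

# e.g. first two moments of the weighted distribution by quadrature
m1, _ = integrate.quad(lambda x: x * f_w(x), xm, np.inf)
m2, _ = integrate.quad(lambda x: x**2 * f_w(x), xm, np.inf)
print(c, m1, m2)
```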
Owing to its ability to model data with a high degree of positive skewness, a typical characteristic of claim amounts, the Weibull distribution is considered a versatile model for loss modeling in general insurance. In this paper, the Weibull distribution is fitted to a set of insurance claim data, and the probability of ultimate ruin is computed for Weibull-distributed claims using two methods: the fast Fourier transform and the four-moment gamma De Vylder approximation. The values obtained from the two methods are found to be consistent. For the same model, the first two moments of the time to ruin, the deficit at the time of ruin, and the surplus just prior to ruin are computed numerically, and these moments exhibit behavior consistent with what is expected in practice. When the surplus process is subject to interest earnings and tax payments, the probability of ultimate ruin is found to be higher than in the absence of these factors.
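A standard building block for such computations is the aggregate-claims distribution of a compound Poisson model, obtainable via the fast Fourier transform after discretizing the Weibull severity. The sketch below shows that step only (the ruin probability itself requires further work, and all parameter values are illustrative).

```python
import numpy as np
from scipy import stats

lam = 10.0                        # Poisson claim frequency
shape, scale = 0.9, 100.0         # Weibull severity parameters
h, n = 1.0, 2**16                 # discretization step and grid size

# discretize the severity: mass of each cell of width h around k*h
grid = np.arange(n) * h
cdf = stats.weibull_min.cdf(grid + h / 2, shape, scale=scale)
sev_pmf = np.diff(np.concatenate([[0.0], cdf]))

# compound Poisson transform: exp(lam * (phi_X - 1)) in the DFT domain
phi = np.fft.fft(sev_pmf)
agg_pmf = np.real(np.fft.ifft(np.exp(lam * (phi - 1.0))))

# tail probability of aggregate claims exceeding a given level
print(agg_pmf[grid > 2000.0].sum())
```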
Abstract: Comparison of more than two diagnostic or screening tests for prediction of presence vs. absence of a disease or condition can be complicated when attempting to simultaneously optimize a pair of competing criteria such as sensitivity and specificity. A technique for quantifying relative superiority of a diagnostic test when a gold standard exists in this setting is described. The proposed superiority index is used to quantify and rank the performance of diagnostic tests and combinations of tests. Development of a validated model containing a subset of the tests may be improved by eliminating tests having a very small value of this index. To illustrate, we present an example using a large battery of neuropsychological tests for prediction of cognitive impairment. Using the proposed index, the battery is reduced with favorable results.
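The superiority index itself is defined in the paper; as a stand-in, the toy fragment below scores simulated tests against a gold standard by sensitivity and specificity and ranks them with Youden's J, purely to illustrate the kind of ranking involved (data and names are invented).

```python
import numpy as np

def sens_spec(scores, truth, cutoff):
    """Sensitivity and specificity of a thresholded test."""
    pred = scores >= cutoff
    return ((pred & truth).sum() / truth.sum(),
            (~pred & ~truth).sum() / (~truth).sum())

rng = np.random.default_rng(0)
truth = rng.uniform(size=500) < 0.3                  # gold standard
tests = {f"test_{k}": truth + rng.normal(scale=s, size=500)
         for k, s in enumerate([0.5, 1.0, 2.0])}     # noisier = weaker

# Youden's J = sensitivity + specificity - 1 as a placeholder criterion
rank = sorted(tests, key=lambda t: -(sum(sens_spec(tests[t], truth, 0.5)) - 1))
print(rank)
```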
Abstract: Analysis of footprint data is important in the tire industry. Estimation procedures for multiple change points and unknown parameters in a segmented regression model with unknown heteroscedastic variances are developed for analyzing such data. Our approaches include both likelihood-based and Bayesian methods, with and without continuity constraints at the change points. A model selection procedure is also proposed to choose among competing models for fitting the middle segment of the data between change points. We study the performance of the two approaches and apply them to actual tire data examples. Our Maximization–Maximization–Posterior (MMP) algorithm and the likelihood-based estimation are found to be complementary to each other.
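As a minimal illustration of a continuity-constrained segmented fit, the sketch below grid-searches a single change point in a two-segment linear model by least squares; the paper's MMP algorithm, multiple change points, and heteroscedastic variances go well beyond this.

```python
import numpy as np

def two_segment_fit(x, y):
    """Least-squares fit of a continuous two-segment (broken-line)
    regression, with the change point chosen by grid search.
    Assumes x is sorted in increasing order."""
    best_sse, best_tau, best_beta = np.inf, None, None
    for tau in x[2:-2]:                                # candidate points
        # continuous broken-line basis: 1, x, (x - tau)_+
        B = np.column_stack([np.ones_like(x), x, np.clip(x - tau, 0, None)])
        beta, *_ = np.linalg.lstsq(B, y, rcond=None)
        sse = np.sum((y - B @ beta) ** 2)
        if sse < best_sse:
            best_sse, best_tau, best_beta = sse, tau, beta
    return best_tau, best_beta
```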