Abstract: Breast cancer is the second most common type of cancer in the world (World Cancer Report, 2014a, b). Advances in breast cancer treatment usually allow patients a longer life, and in many cases the disease relapses. Medical researchers are usually interested in analyzing data denoting the time until the occurrence of an event of interest, such as the time of death from cancer, in the presence of right-censored data and some covariates. In some situations, we could have two lifetimes associated with the same patient, for example, the disease-free time until recurrence and the total lifetime of the patient. In this case, it is important to assume a bivariate lifetime distribution that describes the possible dependence between the two observations. As an application, we consider different parametric bivariate lifetime distributions to analyze a breast cancer data set assuming continuous or discrete data. Inferences of interest are obtained under a Bayesian approach. We obtain the posterior summaries of interest using existing MCMC (Markov Chain Monte Carlo) methods. The main goal of the study is to compare the bivariate continuous and discrete distributions that best describe the breast cancer lifetimes.
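A minimal sketch of the kind of Bayesian MCMC fitting referred to above. The abstract does not state which bivariate distributions are used, so this illustration assumes Gumbel's bivariate exponential for the two lifetimes, vague priors, hypothetical stand-in data and no censoring (a censored pair would contribute the survival function instead of the density):

```python
# Random-walk Metropolis for Gumbel's bivariate exponential lifetimes.
# Assumptions: data (t1, t2) fully observed, Gamma-type priors on the rates,
# uniform prior on the dependence parameter theta in [0, 1].
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in data: two lifetimes per patient (e.g. disease-free
# time and total survival time); replace with the real, possibly censored data.
t1 = rng.exponential(scale=2.0, size=100)
t2 = rng.exponential(scale=3.0, size=100)

def log_post(lam1, lam2, theta):
    """Log-posterior: Gumbel bivariate exponential likelihood plus vague priors."""
    if lam1 <= 0 or lam2 <= 0 or not 0.0 <= theta <= 1.0:
        return -np.inf
    log_surv = -lam1 * t1 - lam2 * t2 - theta * lam1 * lam2 * t1 * t2
    dens = (lam1 + theta * lam1 * lam2 * t2) * (lam2 + theta * lam1 * lam2 * t1) \
           - theta * lam1 * lam2
    loglik = np.sum(log_surv + np.log(dens))
    logprior = -0.01 * (lam1 + lam2)          # roughly Gamma(1, 0.01) priors on the rates
    return loglik + logprior

# Random-walk Metropolis over (lam1, lam2, theta)
cur = np.array([1.0, 1.0, 0.5])
cur_lp = log_post(*cur)
draws = []
for i in range(20000):
    prop = cur + rng.normal(scale=[0.05, 0.05, 0.05])
    prop_lp = log_post(*prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:
        cur, cur_lp = prop, prop_lp
    if i >= 5000:                             # discard burn-in
        draws.append(cur.copy())

draws = np.array(draws)
print("posterior means (lam1, lam2, theta):", draws.mean(axis=0))
```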
Abstract: Price limits are applied to control risks in various futures markets. In this research, we propose an adapted autoregressive model for the observed futures return by introducing dummy variables that represent limit moves. We also propose a stochastic volatility model with dummy variables. These two models are used to investigate the existence of a delayed price discovery effect and a volatility spillover effect from price limits. We give an empirical study of the impact of price limits on copper and natural rubber futures in the Shanghai Futures Exchange (SHFE) using MCMC methods. It is found that price limits are efficient in controlling the copper futures price, but the rubber futures price is distorted significantly. This implies that the effects of price limits are significant for products with large fluctuations and frequent limit hits.
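A rough illustration of the first model only. Assumed details (not taken from the paper): an AR(1) for daily returns with dummies for the day after an up-limit or down-limit move, a hypothetical 5% limit, and simulated stand-in returns; the stochastic volatility model with dummies would additionally require MCMC and is not sketched here.

```python
# AR(1) for futures returns augmented with limit-move dummies, estimated by OLS.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical stand-in series; in practice use SHFE copper or rubber returns.
ret = pd.Series(rng.normal(0, 0.01, 500))
limit = 0.05                                          # illustrative daily price limit
up_dummy = (ret.shift(1) >= limit).astype(float)      # previous day hit the up limit
down_dummy = (ret.shift(1) <= -limit).astype(float)   # previous day hit the down limit

X = pd.DataFrame({"lag_ret": ret.shift(1),
                  "up": up_dummy,
                  "down": down_dummy}).dropna()
y = ret.loc[X.index]
model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())   # significant dummy coefficients suggest delayed price discovery
```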
Abstract: Nowadays, extensive amounts of data are stored which require the development of specialized methods for data analysis in an understandable way. In medical data analysis many potential factors are usually introduced to determine an outcome response variable. The main objective of variable selection is to enhance the prediction performance of the predictor variables and to identify correctly and parsimoniously the faster and more cost-effective predictors that have an important influence on the response. Various variable selection techniques are used to improve predictability and obtain the “best” model derived from a screening procedure. In our study, we propose a variable subset selection method which extends the idea of selecting variables to the classification case and combines a nonparametric criterion with a likelihood-based criterion. In this work, the Area Under the ROC Curve (AUC) criterion is used from another viewpoint in order to determine the important factors more directly. The proposed method leads to a modification of the modified Bayesian Information Criterion (mBIC). The introduced criterion is compared to existing variable selection methods through simulation experiments, and the Type I and Type II error rates are calculated. Additionally, the proposed method is applied successfully to a high-dimensional Trauma data analysis, and its good predictive properties are confirmed.
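A minimal sketch of the general idea, not the authors' exact criterion or search: a greedy forward selection for a binary outcome in which a candidate variable is kept only if it lowers a BIC-type penalised likelihood, while the AUC of the resulting classifier is monitored. The data set and all tuning values below are hypothetical.

```python
# Greedy forward selection combining a BIC-style criterion with AUC monitoring.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, log_loss

X, y = make_classification(n_samples=300, n_features=20, n_informative=4, random_state=0)

def bic_of(subset):
    """BIC = -2 log L + k log n for a logistic fit on the chosen columns."""
    mdl = LogisticRegression(max_iter=1000).fit(X[:, subset], y)
    p = mdl.predict_proba(X[:, subset])[:, 1]
    k = len(subset) + 1                                   # coefficients + intercept
    return 2 * log_loss(y, p, normalize=False) + k * np.log(len(y)), p

selected, best_bic = [], np.inf
improved = True
while improved:
    improved = False
    for j in range(X.shape[1]):
        if j in selected:
            continue
        bic, p = bic_of(selected + [j])
        if bic < best_bic:
            best_bic, best_j, best_p = bic, j, p
            improved = True
    if improved:
        selected.append(best_j)
        print(f"added {best_j}: BIC={best_bic:.1f}, AUC={roc_auc_score(y, best_p):.3f}")

print("selected variables:", selected)
```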
Unit root tests that are in common use today tend to over-reject the stationarity of economic ratios like the consumption-income ratio or rates like the average tax rate. The meaning of a unit root in such bounded series is not very clear. We use a mixed-frequency regression technique to develop a test for the null hypothesis that a series is stationary. The focus is on regression relationships, not so much on individual series. What is noteworthy about this moving average (MA) unit root test, denoted the z(MA) test and based on a variance difference, is that, instead of having to deal with non-standard distributions, it takes testing back to the normal distribution and offers a way to increase power without having to increase the sample size substantially. Monte Carlo simulations show minimal size distortions even when the AR root is close to unity, and the test offers substantial gains in power relative to some popular tests against near-null alternatives in moderate-sized samples. Applying this test to the log of the consumption-income ratio of 21 OECD countries shows that the z(MA) test favors stationarity of 15 series, the KPSS test 8 series, the Johansen test 6 series and the ADF test 5 series.
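The z(MA) test itself is new to the paper and is not reproduced here. As a point of reference, two of the standard benchmarks in the comparison can be run as follows, with a hypothetical stand-in series in place of an OECD log consumption-income ratio; note the opposite null hypotheses of the two tests.

```python
# ADF (H0: unit root) and KPSS (H0: stationarity) on a bounded-looking series.
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

rng = np.random.default_rng(0)
# Hypothetical stand-in for log(C_t / Y_t); replace with actual OECD data.
series = 0.01 * np.cumsum(rng.normal(size=200)) + 0.05 * rng.normal(size=200)

adf_stat, adf_p, *_ = adfuller(series, autolag="AIC")
kpss_stat, kpss_p, *_ = kpss(series, regression="c", nlags="auto")
print(f"ADF p-value  = {adf_p:.3f} (H0: unit root)")
print(f"KPSS p-value = {kpss_p:.3f} (H0: stationarity)")
```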
Though fertility is a biological phenomenon, it depends heavily on socioeconomic, demographic and cultural factors; therefore, this article describes a regression technique to estimate the TFR under different proposed model assumptions and the effects of socioeconomic and demographic factors on TFR as well. The developed methodology also allows estimation of the number of births averted due to the use of family planning methods and the percent increase in births in the absence of birth control devices for 29 states of India, using three different methods of birth aversion and the National Family Health Survey (NFHS-III) data. The findings show that there is variation in the number of births averted and the percent increase in births in the absence of family planning methods at the state level in India. The most effective use of contraception and the maximum number of births avoided due to the use of family planning are in Maharashtra and Uttar Pradesh. The highest percent increase in births in the absence of contraception is in Himachal Pradesh and Andhra Pradesh.
Abstract: In this article, a group acceptance sampling plan (GASP) for lot resubmitting is developed to ensure the lifetime quality of the product, assuming that the product's lifetime follows the half-logistic distribution. The parameters of the GASP are determined by satisfying the specified producer's and consumer's risks according to the experiment termination time and the number of testers. A comparison between the proposed plan and the ordinary group sampling plan is discussed. The proposed plan is justified with an illustration.
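A minimal sketch of the ordinary single-stage group plan that the proposal is compared against, not the resubmitted-lot GASP itself; the termination time, mean-lifetime ratios and risk levels below are hypothetical. The lot is accepted if at most c of the g groups of r testers fail by the termination time, with item lifetimes following a half-logistic distribution.

```python
# Search for the smallest (g, c) meeting both the producer's and consumer's risks
# under half-logistic lifetimes.
import numpy as np
from scipy.stats import binom

def half_logistic_cdf(t, mean):
    # CDF of the half-logistic distribution, parametrised by its mean (= sigma * ln 4)
    sigma = mean / np.log(4.0)
    z = np.exp(-t / sigma)
    return (1.0 - z) / (1.0 + z)

def p_accept(g, r, c, t0, true_mean):
    # Accept the lot if at most c of the g*r tested items fail by time t0
    return binom.cdf(c, g * r, half_logistic_cdf(t0, true_mean))

r = 5                        # testers per group
t0, mu0 = 0.5, 1.0           # termination time and unacceptable mean lifetime (hypothetical)
alpha, beta = 0.05, 0.25     # producer's and consumer's risks (hypothetical)
good_mean = 4.0 * mu0        # acceptable quality: mean-lifetime ratio of 4

for g in range(1, 21):
    for c in range(g * r + 1):
        if p_accept(g, r, c, t0, mu0) <= beta and \
           p_accept(g, r, c, t0, good_mean) >= 1 - alpha:
            print(f"design found: g={g} groups, c={c} allowed failures")
            break
    else:
        continue
    break
```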
Abstract: In this study, we compared various block bootstrap methods in terms of parameter estimation and the biases and mean squared errors (MSE) of the bootstrap estimators. The comparison is based on four real-world examples and an extensive simulation study with various sample sizes, parameters and block lengths. Our results reveal that the ordered and sufficient ordered non-overlapping block bootstrap methods proposed by Beyaztas et al. (2016) provide better results in terms of parameter estimation and its MSE compared to conventional methods. Also, the sufficient non-overlapping block bootstrap method and its ordered version have the smallest MSE for the sample mean among the others.
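For reference, a minimal sketch of the conventional non-overlapping block bootstrap for the sample mean; the ordered and sufficient variants of Beyaztas et al. (2016) are not reproduced, and the dependent series below is a hypothetical stand-in.

```python
# Non-overlapping block bootstrap: bias and MSE of the bootstrapped sample mean.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical AR(1)-type dependent series standing in for a real data set.
x = np.zeros(240)
for t in range(1, 240):
    x[t] = 0.5 * x[t - 1] + rng.normal()

def nbb_means(series, block_len, n_boot=2000):
    """Resample whole non-overlapping blocks and return the bootstrap sample means."""
    blocks = series[: len(series) // block_len * block_len].reshape(-1, block_len)
    k = blocks.shape[0]
    idx = rng.integers(0, k, size=(n_boot, k))        # blocks drawn with replacement
    return blocks[idx].reshape(n_boot, -1).mean(axis=1)

boot = nbb_means(x, block_len=6)
bias = boot.mean() - x.mean()
mse = np.mean((boot - x.mean()) ** 2)
print(f"bootstrap bias = {bias:.4f}, MSE of the sample mean = {mse:.4f}")
```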
Abstract: This paper reviews zero-inflated count models and applies them to modelling annual trends in incidences of occupational allergic asthma, dermatitis and rhinitis in France. Based on data collected from 2001 to 2009, the study uses incidence rate ratios (IRR) as percentages of change in incidences and plots them as a function of the years to obtain trends. The investigation reveals that the trend is decreasing for asthma and rhinitis and increasing for dermatitis, and that there is a possible positive association between the three diseases.
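A minimal sketch, on hypothetical data rather than the French surveillance data, of how a zero-inflated Poisson regression of case counts on calendar year yields an IRR: the exponentiated year coefficient is read as the multiplicative change in incidence per year.

```python
# Zero-inflated Poisson trend model and the implied incidence rate ratio (IRR).
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
years = np.repeat(np.arange(2001, 2010), 40)           # hypothetical reporting units per year
lam = np.exp(0.3 - 0.05 * (years - 2001))              # hypothetical declining trend
counts = np.where(rng.uniform(size=years.size) < 0.3,  # 30% structural zeros
                  0, rng.poisson(lam))

X = sm.add_constant(years - 2001)
fit = ZeroInflatedPoisson(counts, X, exog_infl=np.ones((counts.size, 1)),
                          inflation='logit').fit(maxiter=200, disp=0)
# Assumes the year coefficient is the last count-model parameter in fit.params.
irr = np.exp(fit.params[-1])
print(f"estimated IRR per year = {irr:.3f} "
      f"({(irr - 1) * 100:+.1f}% change in incidence per year)")
```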
Abstract: PSA measurements are used to assess the risk for prostate cancer. PSA range and PSA kinetics such as PSA velocity have been correlated with increased cancer detection and assist the clinician in deciding when prostate biopsy should be performed. Our aim is to evaluate the use of a novel maximum likelihood estimation - prostate specific antigen (MLE-PSA) model for predicting the probability of prostate cancer using serial PSA measurements combined with PSA velocity, in order to assess whether this reduces the need for prostate biopsy. A total of 1976 Caucasian patients were included. All these patients had at least 6 serial PSA measurements; all underwent trans-rectal biopsy with a minimum of 12 cores within the past 10 years. A multivariate logistic regression model was developed using maximum likelihood estimation (MLE) based on the following parameters: age, at least 6 serial PSA measurements, baseline median natural logarithm of the PSA (ln(PSA)) and PSA velocity (ln(PSAV)), baseline process capability standard deviation of ln(PSA) and ln(PSAV), significant special causes of variation in ln(PSA) and ln(PSAV) detected using control chart logic, and the volatility of the ln(PSAV). We then compared prostate cancer probability using MLE-PSA to the results of prostate needle biopsy. The MLE-PSA model with a 50% cut-off probability has a sensitivity of 87%, specificity of 85%, positive predictive value (PPV) of 89%, and negative predictive value (NPV) of 82%. By contrast, a single PSA value with a 4 ng/ml threshold has a sensitivity of 59%, specificity of 33%, PPV of 56%, and NPV of 36% using the same population of patients used to generate the MLE-PSA model. Based on serial PSA measurements, the use of the MLE-PSA model significantly (p-value < 0.0001) improves prostate cancer detection and reduces the need for prostate biopsy.
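A minimal sketch of the evaluation workflow only, not the MLE-PSA model itself: a logistic regression fitted by maximum likelihood on hypothetical stand-ins for the PSA-derived predictors, a 50% probability cut-off, and the resulting sensitivity, specificity, PPV and NPV against the biopsy outcome.

```python
# Logistic regression with a 50% cut-off and the four diagnostic summary measures.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Hypothetical stand-ins for age, ln(PSA), ln(PSAV) and related summaries.
X, biopsy = make_classification(n_samples=1976, n_features=8, n_informative=5,
                                random_state=0)

model = LogisticRegression(max_iter=1000).fit(X, biopsy)   # maximum likelihood fit
prob = model.predict_proba(X)[:, 1]
pred = (prob >= 0.5).astype(int)                           # 50% cut-off probability

tn, fp, fn, tp = confusion_matrix(biopsy, pred).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
print(f"PPV = {tp / (tp + fp):.2f}, NPV = {tn / (tn + fn):.2f}")
```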