Abstract: Foreign direct investment (FDI) has traditionally been considered an important channel for the diffusion of advanced technology. Whether it promotes technological progress in the host country remains an open question. This paper analyzes the relationship between FDI and regional innovation capability (RIC). We find that the spillover effects of FDI are not as significant as is usually thought: the impact of FDI on RIC is weak, and the entry of FDI does little to enhance indigenous innovation capability. Moreover, inward FDI may have a crowding-out effect on innovation and domestic R&D activity. The research shows that increasing domestic R&D inputs and strengthening the innovation capabilities and absorptive capacity of domestic enterprises are the key determinants of improving RIC.
Abstract: In this paper we introduce the generalized extended inverse Weibull finite-failure software reliability growth model, which accommodates both increasing and decreasing behavior of the hazard function. The increasing/decreasing behavior of the fault occurrence rate is captured by the hazard of the generalized extended inverse Weibull distribution. We propose a finite-failure non-homogeneous Poisson process (NHPP) software reliability growth model and obtain the unknown model parameters by the maximum likelihood method for interval-domain data. Illustrations are given in which the parameters are estimated from standard data sets taken from actual software projects. A goodness-of-fit test, based on the Kolmogorov-Smirnov (K-S) test statistic, is performed to check statistically whether the fitted model provides a good fit to the observed data. The proposed model is compared with some of the standard existing models through the error sum of squares, mean sum of squares, predictive ratio risk and Akaike's information criterion using three different data sets. We show that the observed data fit the proposed software reliability growth model, and that the proposed model performs better than the existing finite-failure category models.
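The K-S goodness-of-fit check described above can be sketched as follows. This is a hedged illustration, not the paper's implementation: the failure times and the fitted parameter values `alpha_hat`, `theta_hat` are made up, and a plain inverse Weibull (Fréchet) CDF stands in for the paper's generalized extended form. It relies on the standard fact that, in a finite-failure NHPP with mean value function m(t) = a·F(t), the failure times conditional on their total number behave like an i.i.d. sample from F.

```python
import numpy as np
from scipy import stats

# Hypothetical failure times (hours) from a software project -- illustrative only.
times = np.array([3.0, 7.5, 12.1, 20.4, 33.0, 41.2, 55.7, 70.3, 88.9, 110.5])

def inv_weibull_cdf(t, alpha, theta):
    """CDF of the inverse Weibull (Frechet) distribution: F(t) = exp(-(theta/t)^alpha)."""
    return np.exp(-(theta / t) ** alpha)

# Placeholder parameter estimates; a real analysis would obtain these by
# maximizing the NHPP likelihood for the interval-domain data.
alpha_hat, theta_hat = 1.2, 25.0

# One-sample K-S test of the observed failure times against the fitted CDF.
d_stat, p_value = stats.kstest(times, lambda t: inv_weibull_cdf(t, alpha_hat, theta_hat))
print(f"K-S statistic = {d_stat:.3f}, p-value = {p_value:.3f}")
```

A large p-value would indicate no evidence against the fitted model; the comparison criteria in the abstract (SSE, MSE, PRR, AIC) would be computed separately from the fitted mean value function.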
Abstract: In a Bayesian approach, uncertainty is expressed through a prior distribution that contains information about an uncertain parameter. Determination of the prior distribution is important because it affects the posterior inference. The objective of this study is to use meta-analysis for proportions to obtain prior information about patients with stage I breast cancer who underwent modified radical mastectomy, and to apply a Bayesian approach with this prior. The R and WinBUGS programs are used for the meta-analysis and the Bayesian analysis, respectively.
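One common way to turn a meta-analytic summary of proportions into a prior is to moment-match a Beta distribution to the pooled estimate. The sketch below (in Python rather than the R/WinBUGS used in the study) uses entirely hypothetical study counts and a simple fixed-effect pooling; it illustrates the idea, not the study's actual data or pooling method.

```python
import numpy as np

# Hypothetical published studies (events / sample size) -- illustrative numbers only.
events = np.array([12, 30, 8, 22])
n      = np.array([40, 95, 25, 70])

# Fixed-effect pooled proportion (weighted by sample size).
p_hat = events.sum() / n.sum()
var_hat = p_hat * (1 - p_hat) / n.sum()

# Moment-match a Beta(a, b) distribution to the pooled mean and variance,
# so the meta-analytic summary can serve as an informative prior.
common = p_hat * (1 - p_hat) / var_hat - 1
a, b = p_hat * common, (1 - p_hat) * common
print(f"pooled p = {p_hat:.3f}, Beta prior: a = {a:.1f}, b = {b:.1f}")
```

The resulting Beta(a, b) would then be supplied as the prior in the Bayesian model (for example, in a WinBUGS `dbeta(a, b)` node).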
Abstract: While conducting a social survey on stigmatized/sensitive traits, obtaining efficient (truthful) data is an intricate issue, and estimates in such surveys are generally biased. To obtain trustworthy data and to reduce false-response bias, a technique known as the randomized response technique is now being used in many surveys. In this study, we perform a Bayesian analysis of a general class of randomized response models. A suitable simple Beta prior and a mixture of Beta priors are used within a common prior structure to obtain the Bayes estimates of the proportion of a stigmatized/sensitive attribute in the population of interest. We also extend our proposal to stratified random sampling. The Bayes and maximum likelihood estimators are compared. For further understanding of variability, we also compare the prior and posterior distributions for different values of the design constants through graphs and credible intervals. The conditions for developing a new randomized response model are also discussed.
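To make the Bayes-versus-ML comparison concrete, here is a minimal sketch for Warner's classic randomized response design, which is a member of the general class the abstract considers (the paper's class, priors and design constants are more general; the survey counts below are invented). Because the binomial likelihood is in the "yes"-probability λ rather than the sensitive proportion π itself, the posterior under a Beta prior on π is evaluated on a grid.

```python
import numpy as np

# Warner's design: with probability p the respondent is asked the sensitive
# question, with probability 1 - p its complement.  p is a known design constant.
p = 0.7
n, yes = 200, 90              # hypothetical survey outcome

pi = np.linspace(1e-4, 1 - 1e-4, 2001)   # grid over the sensitive proportion
lam = p * pi + (1 - p) * (1 - pi)        # probability of a "yes" answer

# Beta(1, 1) (flat) prior on pi; a flat prior only adds a constant to the
# log-posterior, so the likelihood alone determines its shape on the grid.
log_post = yes * np.log(lam) + (n - yes) * np.log1p(-lam)
post = np.exp(log_post - log_post.max())
post /= post.sum()

pi_bayes = (pi * post).sum()                 # posterior mean (squared-error loss)
pi_mle = (yes / n - (1 - p)) / (2 * p - 1)   # Warner's ML/moment estimator
print(f"Bayes estimate = {pi_bayes:.3f}, ML estimate = {pi_mle:.3f}")
```

Replacing the flat prior with an informative Beta or a mixture of Betas only changes the `log_post` line, which is how the paper's different prior structures would enter.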
Abstract: We apply methodology robust to outliers to an existing event study of the effect of U.S. financial reform on the stock markets of the 10 largest world economies, and obtain results that differ from the original OLS results in important ways. This finding underlines the importance of handling outliers in event studies. We further review closely the population of outliers identified using Cook's distance and find that many of the outliers lie within the event windows. We acknowledge that those data points lead to inaccurate regression fitting; however, we cannot remove them, since they carry valuable information regarding the event effect. We study further the residuals of the outliers within event windows and find that the residuals change with the application of M-estimators and MM-estimators; in most cases they become larger, meaning that the main prediction equation is pulled back towards the main data population and away from the outliers, indicating a more proper fit. We support our empirical results with pseudo-simulation experiments and find significant improvement in the determination of both types of event effect: abnormal returns and change in systematic risk. We conclude that robust methods are important for obtaining accurate measurement of event effects in event studies.
Abstract: Longitudinal data often arise in clinical trials when measurements are taken from subjects repeatedly over time so that data from each subject are serially correlated. In this paper, we seek some covariance matrices that make the regression parameter estimates robust to misspecification of the true dependency structure between observations. Moreover, we study how this choice of robust covariance matrices is affected by factors such as the length of the time series and the strength of the serial correlation. We perform simulation studies for data consisting of relatively short (N=3), medium (N=6) and long time series (N=14) respectively. Finally, we give suggestions on the choice of robust covariance matrices under different situations.
Abstract: In the area of survival analysis the most popular regression model is the Cox proportional hazards (PH) model. Unfortunately, in practice not all data sets satisfy the PH condition and thus the PH model cannot be used. To overcome the problem, the proportional odds (PO) model (Pettitt 1982; Bennett 1983a) and the generalized proportional odds (GPO) model (Dabrowska and Doksum 1988) were proposed, which can be considered in some sense generalizations of the PH model. However, there are examples indicating that the use of the PO or GPO model is not appropriate. As a consequence, a more general model must be considered. In this paper, a new model, called the proportional generalized odds (PGO) model, is introduced, which covers the PO and GPO models as special cases. Estimation of the regression parameters as well as the underlying survival function of the PGO model is discussed. An application of the model to a data set is presented.
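For reference, the two assumptions being generalized can be written in their standard textbook forms (this is a reminder of the classical PH and PO definitions, with λ₀ and F₀ the baseline hazard and distribution functions; the PGO model itself is defined in the paper):

```latex
% Cox proportional hazards (PH): covariates act multiplicatively on the hazard
\lambda(t \mid x) = \lambda_0(t)\, \exp(\beta' x)

% Proportional odds (PO): covariates act multiplicatively on the odds of
% failure by time t
\frac{F(t \mid x)}{1 - F(t \mid x)} = \exp(\beta' x)\, \frac{F_0(t)}{1 - F_0(t)}
```

The GPO model of Dabrowska and Doksum replaces the odds F/(1−F) with a generalized odds function, and the PGO model introduced here generalizes in the complementary direction, covering both PO and GPO as special cases.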
Abstract: The unknown or unobservable risk factors in survival analysis cause heterogeneity between individuals. Frailty models are used in survival analysis to account for the unobserved heterogeneity in individual risks of disease and death. To analyze bivariate data on related survival times, shared frailty models have been suggested. The most common shared frailty model is one in which the frailty acts multiplicatively on the hazard function. In this paper, we introduce the shared inverse Gaussian frailty model with the reversed hazard rate, with the generalized inverted exponential distribution and the generalized exponential distribution as baseline distributions. We introduce a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique to estimate the parameters involved in the models. We present a simulation study to compare the true values of the parameters with the estimated values. We also apply the proposed models to the Australian twin data set, and a better-fitting model is suggested.
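The reversed-hazard frailty structure mentioned above has a standard form worth recalling (a textbook identity, with m₀ the baseline reversed hazard, F₀ the baseline distribution function, and Z the shared inverse Gaussian frailty; the paper's baseline choices then specify F₀):

```latex
% Frailty acting multiplicatively on the reversed hazard m(t) = f(t)/F(t):
m(t \mid Z) = Z\, m_0(t)

% Since m(t) = \tfrac{d}{dt}\log F(t), integrating gives the proportional
% reversed hazards form for the conditional distribution function:
F(t \mid Z) = \left[ F_0(t) \right]^{Z}
```

Integrating Z out against the inverse Gaussian frailty density then yields the unconditional bivariate distribution for the related survival times, whose parameters the MCMC procedure estimates.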
Abstract: This paper proposes a Bayesian approach for estimating the parameters of a Markov-based logistic model for analyzing longitudinal binary data. In Bayesian estimation, the selection of an appropriate loss function and prior density are the most important ingredients. Symmetric and asymmetric loss functions are used to estimate the parameters of the two-state Markov model, and better performance is observed for the Bayes estimates under the squared error loss function.
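A common way to write the two-state Markov logistic model for longitudinal binary data is the following (a standard transition-model formulation; the exact covariate structure used in the paper is an assumption here):

```latex
% Transition model: the current response depends on the previous state
% through a logistic link, for subject i at occasion t with covariates x_{it}
\operatorname{logit}\, P\left(Y_{it} = 1 \mid Y_{i,t-1} = y_{i,t-1},\, x_{it}\right)
  = \beta_0 + \beta_1\, y_{i,t-1} + \gamma' x_{it}
```

Under squared error loss the Bayes estimate of each parameter is its posterior mean, whereas asymmetric losses (such as LINEX) lead to other posterior functionals; the abstract's comparison is between these estimates.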
Abstract: Using financial ratio data from 2006 and 2007, this study uses a three-fold cross validation scheme to compare the classification and prediction of bankrupt firms by robust logistic regression with the Bianco and Yohai (BY) estimator versus maximum likelihood (ML) logistic regression. With both the 2006 and 2007 data, BY robust logistic regression improves both the classification of bankrupt firms in the training set and the prediction of bankrupt firms in the testing set. In an out-of-sample test, the BY robust logistic regression correctly predicts bankruptcy for Lehman Brothers; however, the ML logistic regression never predicts bankruptcy for Lehman Brothers with either the 2006 or 2007 data. Our analysis indicates that if the BY robust logistic regression significantly changes the estimated regression coefficients from ML logistic regression, then the BY robust logistic regression method can significantly improve the classification and prediction of bankrupt firms. At worst, the BY robust logistic regression makes no changes in the estimated regression coefficients and has the same classification and prediction results as ML logistic regression. This is strong evidence that BY robust logistic regression should be used as a robustness check on ML logistic regression, and if a difference exists, then BY robust logistic regression should be used as the primary classifier.
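The three-fold cross-validation scheme used for the comparison can be sketched as follows. This shows only the ML side of the comparison on invented data: the BY robust estimator is not available in scikit-learn (implementations exist in R), and the firm-ratio data here are simulated placeholders, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(2)

# Hypothetical financial-ratio data: 200 firms, 4 ratios, a minority bankrupt.
X = rng.normal(size=(200, 4))
beta = np.array([1.5, -1.0, 0.5, 0.0])
p = 1.0 / (1.0 + np.exp(-(X @ beta - 2.0)))
y = rng.binomial(1, p)                     # 1 = bankrupt

# Three-fold cross-validation of ML logistic regression; stratified folds keep
# the (rare) bankrupt class represented in every training and testing split.
ml = LogisticRegression(max_iter=1000)
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
acc = cross_val_score(ml, X, y, cv=cv, scoring="accuracy")
print("3-fold CV accuracy:", np.round(acc, 3))
```

In the study's design, the same folds would also be scored with the BY robust fit, and the two estimators' coefficients and fold-wise classification results compared.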