Pupillary response behavior (PRB) refers to changes in pupil diameter in response to simple or complex stimuli. Underlying, unique patterns hidden within complex, high-frequency PRB data can be used to classify visual impairment, but those patterns cannot be described by traditional summary statistics. For such complex, high-frequency data, the Hurst exponent, a measure of the long-term memory of a time series, is a powerful tool for detecting muted or irregular change patterns. In this paper, we propose robust estimators of the Hurst exponent based on non-decimated wavelet transforms. The properties of the proposed estimators are studied both theoretically and numerically. We apply our methods to PRB data to extract the Hurst exponent and then use it as a predictor to classify individuals with different degrees of visual impairment. Compared with other standard wavelet-based methods, our methods reduce the variance of the estimators and increase the classification accuracy.
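As background for the wavelet approach, here is a minimal sketch of the classical (decimated) Haar wavelet log-variance estimator of the Hurst exponent; the abstract's proposed estimators use non-decimated transforms and robust summaries, which this simplified version does not reproduce. The validation on white noise (where H is 0.5) is our own illustrative choice, not from the paper.

```python
import numpy as np

def haar_detail_log2_energies(x, max_level):
    """Mean energy of Haar detail coefficients at levels 1..max_level."""
    a = np.asarray(x, dtype=float)
    energies = []
    for _ in range(max_level):
        n = (len(a) // 2) * 2
        d = (a[0:n:2] - a[1:n:2]) / np.sqrt(2.0)  # detail coefficients
        a = (a[0:n:2] + a[1:n:2]) / np.sqrt(2.0)  # approximation coefficients
        energies.append(np.mean(d ** 2))
    return np.log2(np.array(energies))

def hurst_wavelet(x, max_level=6):
    """Regression slope of log2 energy vs. level; for fGn, slope = 2H - 1."""
    j = np.arange(1, max_level + 1)
    slope = np.polyfit(j, haar_detail_log2_energies(x, max_level), 1)[0]
    return (slope + 1.0) / 2.0

# White noise is fractional Gaussian noise with H = 0.5, a convenient sanity check
rng = np.random.default_rng(0)
h = hurst_wavelet(rng.standard_normal(2 ** 14))
```

A robust variant in the spirit of the paper would replace the level-wise mean energy with a median-based statistic before the regression.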
Abstract: Good inference for the random effects in a linear mixed-effects model is important because of their role in decision making. For example, estimates of the random effects may be used to make decisions about the quality of medical providers such as hospitals, surgeons, etc. Standard methods assume that the random effects are normally distributed, but this may be problematic because inferences are sensitive to this assumption and to the composition of the study sample. We investigate whether using a Dirichlet process prior instead of a normal prior for the random effects is effective in reducing the dependence of inferences on the study sample. Specifically, we compare the two models, normal and Dirichlet process, emphasizing inferences for extrema. Our main finding is that using the Dirichlet process prior provides inferences that are substantially more robust to the composition of the study sample.
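To make the Dirichlet process alternative concrete, the sketch below draws random effects from a truncated stick-breaking representation of a DP prior with a standard normal base measure; all constants (alpha, truncation level, sample size) are illustrative assumptions, not values from the paper. The discreteness of the DP induces ties, so the sampled random effects cluster rather than spread out as under a normal prior.

```python
import numpy as np

def dp_random_effects(n, alpha, K, rng):
    """Draw n random effects from a truncated DP(alpha, N(0,1)) prior.

    Stick-breaking: w_k = v_k * prod_{l<k} (1 - v_l), with v_k ~ Beta(1, alpha).
    """
    v = rng.beta(1.0, alpha, size=K)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    atoms = rng.standard_normal(K)              # draws from the base measure G0
    idx = rng.choice(K, size=n, p=w / w.sum())  # renormalize the truncated weights
    return atoms[idx]

rng = np.random.default_rng(1)
b = dp_random_effects(100, alpha=1.0, K=200, rng=rng)
n_clusters = len(np.unique(b))  # ties => the random effects cluster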
Abstract: A multilevel model (allowing for individual risk factors and geographic context) is developed for jointly modelling cross-sectional differences in diabetes prevalence and trends in prevalence, and then adapted to provide geographically disaggregated diabetes prevalence forecasts. This involves a weighted binomial regression applied to US data from the Behavioral Risk Factor Surveillance System (BRFSS) survey, specifically totals of diagnosed diabetes cases, and populations at risk. Both cases and populations are disaggregated according to survey year (2000 to 2010), individual risk factors (e.g., age, education), and contextual risk factors, namely US census division and the poverty level of the county of residence. The model includes a linear growth path in decadal time units, and forecasts are obtained by extending the growth path to future years. The trend component of the model controls for interacting influences (individual and contextual) on changing prevalence. Prevalence growth is found to be highest among younger adults, among males, and among those with high school education. There are also regional shifts, with a widening of the US “diabetes belt”.
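The core mechanics, a binomial regression on aggregated cases and populations with a linear time trend that is extended for forecasting, can be sketched as follows. This is a deliberately stripped-down, single-cell version with simulated data; the paper's model additionally includes individual and contextual risk factors and survey weights.

```python
import numpy as np

def fit_binomial_logit(X, cases, trials, iters=30):
    """Fisher scoring (IRLS) for a binomial GLM with logit link."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = trials * p * (1.0 - p)                       # IRLS working weights
        z = X @ beta + (cases - trials * p) / np.maximum(W, 1e-12)
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Hypothetical aggregated data: one cell per survey year, linear logit trend
rng = np.random.default_rng(2)
years = np.arange(11)                                    # 2000..2010 coded 0..10
trials = np.full(11, 100_000)                            # populations at risk
true_logit = -3.0 + 0.05 * years                         # assumed true growth path
cases = rng.binomial(trials, 1.0 / (1.0 + np.exp(-true_logit)))
X = np.column_stack([np.ones(11), years])
beta = fit_binomial_logit(X, cases, trials)
# Forecast by extending the linear growth path to a future year (here 2015)
p_2015 = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * 15)))
```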
Abstract: Foreign direct investment (FDI) has traditionally been considered an important channel for the diffusion of advanced technology. Whether it can promote technological progress in the host country is a question of particular interest. This paper analyzes the relationship between FDI and regional innovation capability (RIC). We find that the spillover effects of FDI are not as significant as is usually thought. The impact of FDI on RIC is weak; the entry of FDI does little to enhance indigenous innovation capability. Moreover, inward FDI may have a crowding-out effect on innovation and domestic R&D activity. The research shows that increasing domestic R&D inputs and strengthening the innovation capabilities and absorptive capacity of domestic enterprises are the determinants of improving RIC.
In this paper we introduce the generalized extended inverse Weibull finite-failure software reliability growth model, which accommodates both increasing and decreasing hazard functions. The increasing or decreasing behavior of the fault failure-occurrence rate is captured by the hazard of the generalized extended inverse Weibull distribution. We propose a finite-failure non-homogeneous Poisson process (NHPP) software reliability growth model and obtain the unknown model parameters using the maximum likelihood method for interval-domain data. Illustrations are given in which the parameters are estimated using standard data sets taken from actual software projects. A goodness-of-fit test based on the Kolmogorov-Smirnov (K-S) statistic is performed to check statistically whether the fitted model provides a good fit to the observed data. The proposed model is compared with some standard existing models through the error sum of squares, mean sum of squares, predictive ratio risk, and Akaike's information criterion using three different data sets. We show that the observed data fit the proposed software reliability growth model, and that the proposed model performs substantially better than existing finite-failure category models.
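A small sketch of the K-S goodness-of-fit idea for a finite-failure NHPP: if the mean value function is m(t) = a * F(t) for a distribution function F, then conditional on their number, the failure times behave like an i.i.d. sample from F, so they can be tested against the fitted F. The two-parameter inverse Weibull CDF and all parameter values below are illustrative assumptions (the paper's model is the generalized extended inverse Weibull, which has more parameters).

```python
import numpy as np
from scipy import stats

def inv_weibull_cdf(t, scale, shape):
    """CDF of the (two-parameter) inverse Weibull: F(t) = exp(-(scale/t)^shape)."""
    t = np.asarray(t, dtype=float)
    return np.exp(-(scale / t) ** shape)

# Simulate failure times from F by inverse transform: t = scale / (-log U)^(1/shape)
rng = np.random.default_rng(3)
u = rng.uniform(size=500)
times = 2.0 / (-np.log(u)) ** (1.0 / 1.5)

# K-S test of the sample against the (here, known) fitted distribution
D, p_value = stats.kstest(times, lambda t: inv_weibull_cdf(t, 2.0, 1.5))
```

In practice the parameters would come from maximum likelihood on the interval data, and the K-S statistic would be compared with its critical value at the chosen significance level.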
In a Bayesian approach, uncertainty is expressed through a prior distribution that contains information about an uncertain parameter. Determination of the prior distribution is important because it affects the posterior inference. The objective of this study is to use meta-analysis of proportions to obtain prior information about patients with stage I breast cancer undergoing modified radical mastectomy treatment, and then to apply a Bayesian approach. The R and WinBUGS programs are used for the meta-analysis and the Bayesian analysis, respectively.
Abstract: While conducting a social survey on stigmatized/sensitive traits, obtaining efficient (truthful) data is an intricate issue, and estimates in such surveys are generally biased. To obtain trustworthy data and to reduce false-response bias, a technique known as the randomized response technique is now used in many surveys. In this study, we perform a Bayesian analysis of a general class of randomized response models. A suitable simple Beta prior and mixtures of Beta priors are used in a common prior structure to obtain the Bayes estimates for the proportion of a stigmatized/sensitive attribute in the population of interest. We also extend our proposal to stratified random sampling. The Bayes and the maximum likelihood estimators are compared. For further understanding of variability, we also compare the prior and posterior distributions for different values of the design constants through graphs and credible intervals. The condition for developing a new randomized response model is also discussed.
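As one concrete member of such a class, Warner's randomized response design has each respondent answer the sensitive question with a known design probability p and its complement otherwise, so that P(yes) = p*pi + (1-p)*(1-pi). The grid-based posterior below, with a uniform Beta(1,1) prior and simulated data, is an illustrative sketch; the paper's class of models, priors, and estimators is more general.

```python
import numpy as np
from scipy import stats

p_design = 0.7                               # known randomization probability
rng = np.random.default_rng(4)
true_pi = 0.30                               # assumed true sensitive proportion
n = 2000
lam_true = p_design * true_pi + (1 - p_design) * (1 - true_pi)
y = rng.binomial(n, lam_true)                # observed "yes" count

# Posterior over pi on a grid, under a Beta(1, 1) prior
grid = np.linspace(1e-4, 1 - 1e-4, 2001)
lam = p_design * grid + (1 - p_design) * (1 - grid)
log_post = stats.beta.logpdf(grid, 1, 1) + y * np.log(lam) + (n - y) * np.log(1 - lam)
post = np.exp(log_post - log_post.max())
post /= post.sum()                           # normalize over the grid
pi_bayes = float((grid * post).sum())        # posterior mean of pi
```

The indirect observation (through lambda rather than pi) is what inflates the variance relative to a direct-question survey, which is why prior choice matters here.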
Abstract: We apply methodology robust to outliers to an existing event study of the effect of U.S. financial reform on the stock markets of the 10 largest world economies, and obtain results that differ from the original OLS results in important ways. This finding underlines the importance of handling outliers in event studies. We further review closely the population of outliers identified using Cook’s distance and find that many of the outliers lie within the event windows. We acknowledge that those data points lead to inaccurate regression fitting; however, we cannot remove them, since they carry valuable information regarding the event effect. We study further the residuals of the outliers within event windows and find that the residuals change with the application of M-estimators and MM-estimators; in most cases they become larger, meaning the main prediction equation is pulled back towards the main data population and further from the outliers, indicating a more proper fit. We support our empirical results by pseudo-simulation experiments and find significant improvement in the determination of both types of event effect: abnormal returns and change in systematic risk. We conclude that robust methods are important for obtaining accurate measurement of event effects in event studies.
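The mechanism described, outliers pulling the OLS fit while an M-estimator downweights them and enlarges their residuals, can be seen in a few lines. The Huber M-estimator via iteratively reweighted least squares below is a generic textbook sketch on simulated data, not the paper's event-study specification.

```python
import numpy as np

def huber_m_fit(X, y, c=1.345, iters=50):
    """Huber M-estimator via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]               # start from OLS
    for _ in range(iters):
        r = y - X @ beta
        scale = np.median(np.abs(r - np.median(r))) / 0.6745  # MAD scale estimate
        w = np.minimum(1.0, c * scale / np.maximum(np.abs(r), 1e-12))
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

# Simulated regression with 10% gross outliers at high-leverage points
rng = np.random.default_rng(5)
x = np.linspace(0.0, 10.0, 100)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, 100)
y[-10:] -= 15.0
X = np.column_stack([np.ones_like(x), x])
ols = np.linalg.lstsq(X, y, rcond=None)[0]
rob = huber_m_fit(X, y)   # slope recovered far better than OLS
```

The outliers' residuals under the robust fit are much larger in magnitude than under OLS, which is exactly the diagnostic pattern the abstract reports within event windows.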
Abstract: Longitudinal data often arise in clinical trials when measurements are taken from subjects repeatedly over time, so that data from each subject are serially correlated. In this paper, we seek some covariance matrices that make the regression parameter estimates robust to misspecification of the true dependency structure between observations. Moreover, we study how this choice of robust covariance matrices is affected by factors such as the length of the time series and the strength of the serial correlation. We perform simulation studies for data consisting of relatively short (N=3), medium (N=6) and long (N=14) time series, respectively. Finally, we give suggestions on the choice of robust covariance matrices under different situations.
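Two standard working covariance structures for serially correlated repeated measures, and the generalized least squares estimator computed under a chosen structure, can be sketched as follows; the specific structures compared in the paper may differ.

```python
import numpy as np

def exchangeable_cov(n, rho, sigma2=1.0):
    """Compound-symmetry working covariance: equal correlation rho between times."""
    return sigma2 * ((1.0 - rho) * np.eye(n) + rho * np.ones((n, n)))

def ar1_cov(n, rho, sigma2=1.0):
    """AR(1) working covariance: correlation decays as rho^|i-j|."""
    idx = np.arange(n)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

def gls(X, y, V):
    """Generalized least squares estimate under working covariance V."""
    Vinv = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)

V = ar1_cov(3, 0.5)                      # short series (N=3), AR(1) structure
E = exchangeable_cov(3, 0.3)
X = np.ones((3, 1))                      # intercept-only design, one subject
beta = gls(X, np.array([1.0, 2.0, 3.0]), np.eye(3))  # V = I reduces GLS to OLS
```

Robustness in the abstract's sense means the regression estimates change little when the working V in `gls` is not the true covariance of the serial errors.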
Abstract: In the area of survival analysis, the most popular regression model is the Cox proportional hazards (PH) model. Unfortunately, in practice not all data sets satisfy the PH condition, and thus the PH model cannot always be used. To overcome this problem, the proportional odds (PO) model (Pettitt 1982 and Bennett 1983a) and the generalized proportional odds (GPO) model (Dabrowska and Doksum 1988) were proposed, which can be considered in some sense generalizations of the PH model. However, there are examples indicating that the use of the PO or GPO model is not appropriate. As a consequence, a more general model must be considered. In this paper, a new model, called the proportional generalized odds (PGO) model, is introduced, which covers the PO and GPO models as special cases. Estimation of the regression parameters as well as the underlying survival function of the PGO model is discussed. An application of the model to a data set is presented.
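For contrast with PH, the PO assumption can be written directly in terms of survival odds: S(t|x)/(1-S(t|x)) = exp(-x'beta) * S0(t)/(1-S0(t)). The sketch below computes covariate-specific survival from a hypothetical baseline under one common sign convention for this special case; the PGO model generalizes the odds transform itself and is not reproduced here.

```python
import numpy as np

def po_survival(S0, beta, x):
    """Survival under the proportional odds model.

    PO assumption: the odds of surviving past t are scaled by exp(-x*beta),
    i.e. S(t|x)/(1-S(t|x)) = exp(-x*beta) * S0(t)/(1-S0(t)).
    """
    S0 = np.asarray(S0, dtype=float)
    odds = np.exp(-x * beta) * S0 / (1.0 - S0)
    return odds / (1.0 + odds)

S0 = np.array([0.9, 0.7, 0.5, 0.3])      # hypothetical baseline survival values
s_cov = po_survival(S0, beta=0.5, x=1.0) # survival for a covariate value x = 1
```

Setting x = 0 recovers the baseline, and a positive x*beta shrinks the survival odds uniformly over time, which is the "proportional odds" analogue of proportional hazards.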