Abstract: Until the late 1970s the spectral densities of stock returns and stock index returns exhibited a type of non-constancy that could be detected by standard tests for white noise. Since then these tests have been unable to find any substantial deviations from whiteness. But that does not mean that today’s returns spectra contain no useful information. Using several sophisticated frequency domain tests to look for specific patterns in the periodograms of returns series, we find nothing or, more precisely, less than nothing. In fact, there is a striking power deficiency, which implies that these series exhibit even fewer patterns than white noise. To unveil the source of this “super-whiteness” we design a simple frequency domain test for characterless, fuzzy alternatives, which are not immediately usable for the construction of profitable trading strategies, and apply it to the same data. Because the power deficiency is now much smaller, we conclude that our puzzling findings may be due to trading activities based on excessive data snooping.
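As a rough illustration of a frequency domain whiteness check of the kind mentioned above (the paper's own tests are not reproduced here), the sketch below computes Bartlett's cumulative periodogram statistic for a simulated return series; the simulated data and the 5% constant 1.358 are assumptions of the example.

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)
returns = rng.standard_normal(1000)          # placeholder for a demeaned return series

# Periodogram at the Fourier frequencies, excluding frequency zero
freqs, pgram = periodogram(returns, detrend="constant")
pgram = pgram[1:]

# Bartlett's cumulative periodogram test for white noise: under whiteness the
# normalized cumulative periodogram should hug the diagonal.
cum = np.cumsum(pgram) / np.sum(pgram)
uniform = np.arange(1, len(cum) + 1) / len(cum)
ks_stat = np.max(np.abs(cum - uniform))

# Approximate 5% critical value of the Kolmogorov statistic
critical = 1.358 / np.sqrt(len(cum))
print(f"KS-type statistic {ks_stat:.4f}, 5% critical value {critical:.4f}")
print("reject whiteness" if ks_stat > critical else "no evidence against whiteness")
```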
Abstract: Survival analysis is a widely used statistical tool for comparing new interventions in the presence of hazards in follow-up studies. However, it is difficult to obtain a suitable survival rate in the presence of a high level of hazard within a few days of surgery. Patients can be stratified directly into cured and non-cured strata, and mixture models are a natural choice for estimating the cure and non-cure rates. The cure rate is an important measure of the success of any new intervention. The cure rate model is illustrated by comparing surgery in liver cirrhosis patients who consented to participate in HFLPC (Human Fetal Liver Progenitor Cells) infusion versus those who consented to participation alone, in a South Indian population. Surgery is the best available technique for the treatment of liver cirrhosis, and its success is observed through a follow-up study. In this study, the MELD (Model for End-Stage Liver Disease) score is taken as the response of interest for the cured and non-cured groups, and the primary efficacy of surgery is considered as the covariate of interest. The distributional assumptions of the cure rate model are handled with Markov Chain Monte Carlo (MCMC) techniques. It is found that the cure model with a parametric approach yields more consistent estimates than standard procedures. The risk of death due to liver transplantation in liver cirrhosis patients, including time-dependent effect terms, has also been explored. The approach makes it possible to model different ages and sexes in both treatment groups.
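A minimal sketch of a standard mixture cure model, assuming an exponential latency distribution, administrative censoring, and a random-walk Metropolis sampler; the simulated data, censoring time, and vague priors are illustrative assumptions and do not reproduce the paper's MELD-based model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated illustration: cure fraction 0.3, exponential event times, censoring at t = 5
n = 300
cured = rng.random(n) < 0.3
t_event = rng.exponential(scale=1.0 / 0.8, size=n)   # hazard 0.8 for the non-cured
time = np.where(cured, 5.0, np.minimum(t_event, 5.0))
event = (~cured) & (t_event < 5.0)

def log_post(logit_pi, log_lam):
    """Log posterior of a mixture cure model with exponential latency and
    vague (flat on the transformed scale) priors."""
    pi = 1.0 / (1.0 + np.exp(-logit_pi))     # cure fraction
    lam = np.exp(log_lam)                    # hazard of the non-cured
    dens = (1 - pi) * lam * np.exp(-lam * time)      # observed events
    surv = pi + (1 - pi) * np.exp(-lam * time)       # censored cases
    return np.sum(np.where(event, np.log(dens), np.log(surv)))

# Random-walk Metropolis over (logit cure fraction, log hazard)
draws, cur = [], np.array([0.0, 0.0])
cur_lp = log_post(*cur)
for _ in range(5000):
    prop = cur + rng.normal(scale=0.15, size=2)
    prop_lp = log_post(*prop)
    if np.log(rng.random()) < prop_lp - cur_lp:
        cur, cur_lp = prop, prop_lp
    draws.append(cur.copy())

draws = np.array(draws[1000:])               # discard burn-in
cure_rate = 1.0 / (1.0 + np.exp(-draws[:, 0]))
print("posterior mean cure rate:", cure_rate.mean().round(3))
```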
Abstract: In the DEA framework there are many techniques for finding a common set of efficient weights, depending on the input and output values, in a set of peer Decision-Making Units (DMUs). Many papers have discussed multiple-criteria decision-making techniques and multiple-objective decision criteria for modeling. The objective function for a common set of weights is defined like the individual efficiency of a single DMU, with one basic difference: it tries to maximize the efficiency of all DMUs simultaneously, under unchanged restrictions. An ideal solution for a common set of weights can be the set closest to the individual solution derived for each DMU. A natural question is whether this closest set and the minimized set found by most of these techniques differ. They differ when the variance among the weights generated for a specific input (output) across the n DMUs is large. In this case we apply Singular Value Decomposition (SVD): first, degree-of-importance weights for each input (output) are defined and computed; then the Common Set of Weights (CSW) is obtained as the set closest to these weights. The degree-of-importance values directly affect the CSW of each DMU.
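The following sketch only illustrates how a leading singular vector can supply degree-of-importance scores for inputs and outputs; the data, the column normalization, and the crude CSW-style efficiency score are assumptions for illustration and not the paper's procedure.

```python
import numpy as np

# Illustrative data: 5 DMUs, 2 inputs and 2 outputs (hypothetical values)
X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0], [5.0, 4.0], [1.0, 2.0]])    # inputs
Y = np.array([[10.0, 4.0], [8.0, 6.0], [9.0, 5.0], [12.0, 7.0], [6.0, 3.0]])  # outputs

def importance(M):
    """Degree-of-importance scores from the leading right singular vector of the
    column-normalized data matrix; the leading vector captures the dominant
    common variation across DMUs."""
    Z = M / M.sum(axis=0)                    # scale columns to comparable size
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    v = np.abs(vt[0])                        # leading right singular vector
    return v / v.sum()

w_in, w_out = importance(X), importance(Y)
print("input importance :", w_in.round(3))
print("output importance:", w_out.round(3))

# A crude common-set-of-weights efficiency score built from these importances
efficiency = (Y @ w_out) / (X @ w_in)
print("CSW efficiency   :", (efficiency / efficiency.max()).round(3))
```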
Abstract: Hyperplane fitting factor rotations perform better than conventional rotations in attaining simple structure for complex configurations. Hyperplane rotations are reviewed and then compared using familiar examples from the literature, selected to vary in complexity. Included is a new method for fitting hyperplanes, hypermax, which updates the work of Horst (1941) and Derflinger and Kaiser (1989). Hypercon, a method for confirmatory target rotation, is a natural extension. These performed very well when compared with selected hyperplane and conventional rotations. The concluding sections consider the pros and cons of each method.
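Hypercon is described as a confirmatory target rotation; as a generic illustration of target rotation (not the hypermax or hypercon algorithms themselves), an orthogonal Procrustes rotation of a loading matrix toward a hypothesized target, with zeros marking the hyperplanes, might look as follows. Both matrices are hypothetical.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

# Hypothetical 6-variable, 2-factor loading matrix from an unrotated solution
A = np.array([[0.72, 0.30], [0.68, 0.25], [0.75, 0.35],
              [0.40, -0.60], [0.35, -0.65], [0.45, -0.55]])

# Hypothesized target: first three variables load on factor 1 only,
# last three on factor 2 only (the zeros define the hyperplanes)
T = np.array([[0.7, 0.0], [0.7, 0.0], [0.7, 0.0],
              [0.0, 0.7], [0.0, 0.7], [0.0, 0.7]])

# Orthogonal Procrustes: find the rotation R minimizing ||A R - T||_F
R, _ = orthogonal_procrustes(A, T)
rotated = A @ R
print(np.round(rotated, 2))
```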
Abstract: Friedman’s test is a rank-based procedure that can be used to test for differences among t treatment distributions in a randomized complete block design. It is well known that the test has reasonably good power under location-shift alternatives to the null hypothesis of no difference in the t treatment distributions. However, the power of Friedman’s test can be poor when the alternative hypothesis consists of a non-location difference in treatment distributions. We develop the properties of an alternative rank-based test that has greater power than Friedman’s test in a variety of such circumstances. The test is based on the joint distribution of the t! possible permutations of the treatment ranks within a block (assuming no ties). We show when our proposed test will have greater power than Friedman’s test, and provide results from extensive numerical work comparing the power of the two tests under various configurations for the underlying treatment distributions.
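A small sketch, on simulated data, of the quantities involved: Friedman's statistic uses only the average treatment ranks, while the counts of the t! within-block rank permutations carry the extra information that a permutation-pattern test can exploit. The scale alternative and sample sizes are assumptions of the example, and the proposed test itself is not implemented here.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

rng = np.random.default_rng(2)

# Randomized complete block design: b = 12 blocks, t = 3 treatments.
# A non-location (scale) difference: treatment 3 has larger spread, same center.
b = 12
data = np.column_stack([
    rng.normal(0.0, 1.0, b),      # treatment 1
    rng.normal(0.0, 1.0, b),      # treatment 2
    rng.normal(0.0, 3.0, b),      # treatment 3 (scale alternative)
])

# Friedman's test uses only the average rank of each treatment across blocks
stat, p = friedmanchisquare(data[:, 0], data[:, 1], data[:, 2])
print(f"Friedman chi-square = {stat:.3f}, p = {p:.3f}")

# Within-block rank vectors: one of the t! = 6 possible permutations per block
rank_patterns = np.apply_along_axis(rankdata, 1, data).astype(int)
patterns, counts = np.unique(rank_patterns, axis=0, return_counts=True)
for pat, cnt in zip(patterns, counts):
    print(pat, cnt)
```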
Abstract: Normality (symmetry) of the random effects and the within-subject errors is a routine assumption for the linear mixed model, but it may be unrealistic, obscuring important features of among- and within-subject variation. We relax this assumption by considering that the random effects and model errors follow skew-normal distributions, which include normality as a special case and provide flexibility in capturing a broad range of non-normal behavior. The marginal distribution of the observed quantity is derived in closed form, so inference may be carried out using existing statistical software and standard optimization techniques. We also implement an EM-type algorithm, which seems to provide some advantages over a direct maximization of the likelihood. Results of simulation studies and applications to real data sets are reported.
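A simplified illustration of the skew-normal error law, assuming an i.i.d. sample rather than the full mixed model: the density 2φ(x)Φ(αx) reduces to the normal density when α = 0 (normality as a special case), and direct likelihood maximization is available off the shelf.

```python
import numpy as np
from scipy.stats import skewnorm, norm

rng = np.random.default_rng(3)

# Skew-normal density: f(x) = 2 * phi(x) * Phi(alpha * x); alpha controls skewness
def sn_pdf(x, alpha):
    return 2.0 * norm.pdf(x) * norm.cdf(alpha * x)

# alpha = 0 recovers the standard normal density
x = np.linspace(-3, 3, 5)
print(np.allclose(sn_pdf(x, 0.0), norm.pdf(x)))      # True

# i.i.d. illustration of direct likelihood maximization
# (the paper's model adds random effects; this only shows the error law)
sample = skewnorm.rvs(a=4.0, loc=0.0, scale=1.0, size=500, random_state=rng)
a_hat, loc_hat, scale_hat = skewnorm.fit(sample)
print(f"estimated skewness parameter: {a_hat:.2f}")
```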
Abstract: Since the late 1930s, factorial analysis of a response measured on the real line has been well established and documented in the literature. No such analysis, however, is available for a response measured on the circle (or sphere in general), despite the fact that many designed experiments in industry, medicine, psychology and biology could result in an angular response. In this paper a full factorial analysis is presented for a circular response using the Spherical Projected Multivariate Linear model. Main and interaction effects are defined, estimated and tested. In analogy to the linear response case, two new effect plots, the Circular Main Effect plot and the Circular Interaction Effect plot, are proposed to visualize main and interaction effects on circular responses.
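As a much-simplified illustration (not the Spherical Projected Multivariate Linear model), the mean directions that a Circular Main Effect plot would display can be computed per factor level from the resultant of the unit vectors; the von Mises data and the two-level factor are assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-level factor with an angular response (radians), hypothetical data
factor_a = np.repeat([0, 1], 20)
angles = np.where(factor_a == 0,
                  rng.vonmises(mu=0.5, kappa=4.0, size=40),
                  rng.vonmises(mu=1.5, kappa=4.0, size=40))

def circular_mean(theta):
    """Mean direction: angle of the resultant of the unit vectors (cos, sin)."""
    return np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())

# A circular main effect compares mean directions between factor levels
for level in (0, 1):
    mu = circular_mean(angles[factor_a == level])
    print(f"level {level}: mean direction = {np.degrees(mu):.1f} degrees")
```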
Abstract: This paper investigates the return, volatility, and trading on the Shanghai Stock Exchange with high-frequency intraday five-minute Shanghai Stock Exchange Composite Index (SHCI) data. The random walk hypothesis is rejected, indicating there are predictable components in the index. We adopt a time-inhomogeneous diffusion model using log penalized splines (log P-splines) to estimate the volatility. A GARCH volatility model is also fitted for comparison. De-volatilized series are obtained using the de-volatilization technique of Zhou (1991), which resamples the data into de-volatilized series with properties more desirable for trading. A trading program based on local trends extracted with a State Space model is then implemented on the de-volatilized five-minute SHCI return series for profit. Volatility estimates from both models are found to be competitive for the purpose of trading.
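A hedged sketch of the GARCH comparison step, assuming the `arch` package and placeholder returns in place of the SHCI data; the standardization by fitted volatility shown here is a cruder stand-in for Zhou's (1991) de-volatilization, which resamples in volatility time.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(5)

# Placeholder for five-minute index returns (percent); the SHCI data would go here
returns = rng.standard_normal(2000) * 0.3

# GARCH(1,1) volatility model, fitted for comparison with the log P-spline estimate
am = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1)
res = am.fit(disp="off")
sigma = res.conditional_volatility

# A simple de-volatilized series: standardize each return by its fitted volatility
devol = returns / sigma
print(res.params)                                    # fitted GARCH(1,1) parameters
print("de-volatilized sd:", devol.std().round(3))    # near 1 if volatility is captured
```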
Abstract: Overdispersion, or extra variation as it is often referred to, is believed to be more prevalent in survey data due to heterogeneity among and between the units. One approach to address this phenomenon is to use a generalized Dirichlet-multinomial model. In its application, the generalized Dirichlet-multinomial model assumes that the clusters are of equal size and that the number of clusters remains the same over time. In practice this may rarely be the case when clusters are observed over time. In this paper the random variability and the varying response rates are accounted for in the model. This requires modeling another level of variation. In effect, this can be considered a hierarchical model that allows varying response rates in the presence of overdispersed multinomial data. The model and its applicability are demonstrated through an illustrative application to a subset of the well-known High School and Beyond survey data.
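A small simulation sketch of Dirichlet-multinomial overdispersion with unequal cluster sizes; the cluster sizes, Dirichlet parameters, and the variance comparison are assumptions chosen only to show the extra variation a hierarchical model must absorb.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical survey: 200 clusters with varying sizes, 3 response categories
sizes = rng.integers(10, 60, size=200)                 # cluster sizes differ
alpha = np.array([2.0, 3.0, 5.0])                      # Dirichlet parameters

# Dirichlet-multinomial: draw a cluster-specific probability vector, then counts
p_cluster = rng.dirichlet(alpha, size=len(sizes))
counts = np.array([rng.multinomial(n, p) for n, p in zip(sizes, p_cluster)])

# Compare the observed variance of the category-1 proportions with the pure
# multinomial variance p(1-p)/n; the excess is the overdispersion.
p_hat = counts[:, 0] / sizes
p_bar = alpha[0] / alpha.sum()
observed_var = p_hat.var()
multinomial_var = (p_bar * (1 - p_bar) / sizes).mean()
print(f"observed variance    : {observed_var:.4f}")
print(f"multinomial variance : {multinomial_var:.4f}")
```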