Semi-parametric Cox regression and parametric methods have been used to analyze cancer survival data; however, no study has compared survival models in genetic association analysis of age at onset (AAO) of cancer. The hepatocyte nuclear factor-1-beta (HNF1B) gene has been associated with risk of endometrial and prostate cancers, but no study has examined its effect on the AAO of cancer. This study examined 23 single nucleotide polymorphisms (SNPs) within the HNF1B gene in the Marshfield sample of 716 cancer cases and 2,848 non-cancer controls. Cox proportional hazards models in PROC PHREG and parametric survival models (exponential, Weibull, log-normal, log-logistic, and gamma) in PROC LIFEREG in SAS 9.4 were used to detect genetic associations of the HNF1B gene with AAO. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) were used to compare the Cox and parametric survival models. Both AIC and BIC showed that the Weibull distribution was the best model for all 23 SNPs, with the gamma distribution second best. The top two SNPs were rs4239217 and rs7501939, with time ratios (TR) of 1.08 (p&lt;0.0001 for both the AA and AG genotypes) and 1.07 (p=0.0004 and 0.0002 for the CC and CT genotypes, respectively) under the Weibull model. This study shows that the parametric Weibull distribution is the best model for genetic association analysis of the AAO of cancer and provides the first evidence that several genetic variants within the HNF1B gene are associated with the AAO of cancer.
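The model-selection step described above can be sketched outside SAS. The following is a minimal Python illustration, on hypothetical simulated ages at onset with no censoring, of comparing an exponential and a Weibull fit by AIC; the real analysis uses PROC LIFEREG with censored AAO data and SNP covariates.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical ages at onset drawn from a Weibull distribution (shape 4, scale 60)
aao = stats.weibull_min.rvs(4, scale=60, size=500, random_state=rng)

def aic(logl, k):
    """Akaike information criterion: 2k - 2 * log-likelihood."""
    return 2 * k - 2 * logl

# Exponential fit (1 free parameter; location fixed at 0)
logl_exp = stats.expon.logpdf(aao, scale=aao.mean()).sum()

# Weibull fit (2 free parameters; location fixed at 0)
shape, _, scale = stats.weibull_min.fit(aao, floc=0)
logl_wei = stats.weibull_min.logpdf(aao, shape, scale=scale).sum()

print(aic(logl_exp, 1), aic(logl_wei, 2))
# A lower AIC favors the Weibull model for data with a mode well away from zero.
```

BIC replaces the penalty 2k with k*log(n); with either criterion, the flexible Weibull shape parameter lets it dominate the exponential for age-at-onset data.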
Abstract: Until the late 1970s the spectral densities of stock returns and stock index returns exhibited a type of non-constancy that could be detected by standard tests for white noise. Since then these tests have been unable to find any substantial deviations from whiteness. But that does not mean that today’s returns spectra contain no useful information. Using several sophisticated frequency domain tests to look for specific patterns in the periodograms of returns series, we find nothing or, more precisely, less than nothing. There is a striking power deficiency, which implies that these series exhibit even fewer patterns than white noise. To unveil the source of this “super-whiteness” we design a simple frequency domain test for characterless, fuzzy alternatives, which are not immediately usable for the construction of profitable trading strategies, and apply it to the same data. Because the power deficiency is now much smaller, we conclude that our puzzling findings may be due to trading activities based on excessive data snooping.
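A standard frequency domain whiteness check of the kind referred to above is Bartlett's cumulative periodogram test: under whiteness the normalized cumulative periodogram behaves like an ordered uniform sample. The sketch below applies it to simulated white noise standing in for a demeaned returns series; it is a generic illustration, not the paper's more specialized tests.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
returns = rng.standard_normal(1024)  # stand-in for a demeaned returns series

# Periodogram at the positive Fourier frequencies
x = returns - returns.mean()
n = len(x)
per = np.abs(np.fft.rfft(x)[1:]) ** 2 / n
per = per[: (n - 1) // 2]  # drop the Nyquist ordinate for even n

# Under whiteness the ordinates are iid exponential, so the normalized
# cumulative periodogram is distributed like Uniform(0,1) order statistics.
cum = np.cumsum(per) / per.sum()
stat, pvalue = stats.kstest(cum[:-1], "uniform")
print(stat, pvalue)  # large p-values are consistent with white noise
```

The "power deficiency" finding amounts to such p-values clustering near 1 more often than a uniform null distribution would allow.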
Survival analysis is a widely used statistical tool for comparing new interventions in the presence of hazards in follow-up studies. However, it is difficult to obtain a suitable survival rate when the hazard is high within a few days of surgery. Patients can be stratified directly into cured and non-cured strata, and mixture models are a natural choice for estimating the cure and non-cure rates. The cure rate is an important measure of the success of any new intervention. The cure rate model is illustrated by comparing surgery for liver cirrhosis patients consenting to HFLPC (Human Fetal Liver Progenitor Cells) infusion with a consent-alone group in a South Indian population. Surgery is the best available treatment for liver cirrhosis, and its success is observed through a follow-up study. In this study, the MELD (Model for End-Stage Liver Disease) score is the response of interest for the cured and non-cured groups, and the primary efficacy of surgery is the covariate of interest. Distributional assumptions of the cure rate model are handled with Markov chain Monte Carlo (MCMC) techniques. The cure model with a parametric approach is found to give more consistent estimates than standard procedures. The risk of death due to liver transplantation in liver cirrhosis patients, including time-dependent effect terms, is also explored. The approach allows modeling of different ages and sexes in both treatment groups.
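The mixture cure idea can be made concrete with a small maximum-likelihood sketch. Here the population survival function is S(t) = π + (1 − π)exp(−λt): a cured fraction π that never experiences the event plus an exponential latency for the uncured. This is a simplified stand-in, assuming hypothetical simulated data and plain MLE rather than the paper's MCMC fitting with covariates.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, pi_true, lam_true, followup = 2000, 0.3, 0.5, 10.0

# Simulate: cured patients never fail and are censored at end of follow-up;
# uncured patients fail at exponential times, censored at the follow-up limit.
cured = rng.random(n) < pi_true
t_event = rng.exponential(1 / lam_true, n)
time = np.where(cured, followup, np.minimum(t_event, followup))
event = (~cured) & (t_event <= followup)

def negloglik(theta):
    pi, lam = theta
    if not (0 < pi < 1 and lam > 0):
        return np.inf
    surv = pi + (1 - pi) * np.exp(-lam * time)   # population survival
    dens = (1 - pi) * lam * np.exp(-lam * time)  # density for uncured failures
    return -(np.log(dens[event]).sum() + np.log(surv[~event]).sum())

res = minimize(negloglik, x0=[0.5, 1.0], method="Nelder-Mead")
pi_hat, lam_hat = res.x
print(pi_hat, lam_hat)  # estimates of the cure fraction and event rate
```

Censored observations contribute the mixture survival S(t), so a long plateau of censored times near the end of follow-up is what identifies the cure fraction.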
In the DEA framework there are many techniques for finding a common set of efficient weights that depend on the input and output values of a set of peer Decision-Making Units (DMUs). Many papers have discussed multiple-criteria decision-making techniques and multiple-objective decision criteria for this modeling. The objective function for a common set of weights is defined like the individual efficiency of a single DMU, with one basic difference: it tries to maximize the efficiency of all DMUs simultaneously, under unchanged restrictions. An ideal common set of weights is the set closest to the individual solution derived for each DMU. A natural question is: are the closest set and the minimized set found by most techniques different? They differ when the variance among the weights generated for a specific input (output) across the n DMUs is large. In this case, we apply Singular Value Decomposition (SVD): first, the degree-of-importance weight for each input (output) is defined and found; then, the Common Set of Weights (CSW) is found as the set closest to these weights. The degree-of-importance values directly affect the CSW of each DMU.
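The SVD step can be illustrated on a hypothetical matrix of individually derived weights, one row per DMU. The leading right singular vector summarizes the dominant direction of the rows, and normalizing it gives degree-of-importance weights for the inputs; this is a sketch of the idea, not the paper's full CSW procedure.

```python
import numpy as np

# Hypothetical weight matrix: each row holds the input weights that one of
# five DMUs derives for three inputs when maximizing its own efficiency.
W = np.array([
    [0.60, 0.25, 0.15],
    [0.55, 0.30, 0.15],
    [0.58, 0.22, 0.20],
    [0.62, 0.28, 0.10],
    [0.57, 0.26, 0.17],
])

U, s, Vt = np.linalg.svd(W, full_matrices=False)
v1 = np.abs(Vt[0])            # leading right singular vector, sign-corrected
importance = v1 / v1.sum()    # degree-of-importance weights, summing to one
print(importance)
```

When the rows disagree strongly (high variance in a column), the leading singular vector down-weights that input relative to a simple column average, which is exactly the situation where the closest set and the minimized set diverge.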
Abstract: Hyperplane fitting factor rotations perform better than conventional rotations in attaining simple structure for complex configurations. Hyperplane rotations are reviewed and then compared using familiar examples from the literature selected to vary in complexity. Included is a new method for fitting hyperplanes, hypermax, which updates the work of Horst (1941) and Derflinger and Kaiser (1989). Hypercon, a method for confirmatory target rotation, is a natural extension. These performed very well when compared with selected hyperplane and conventional rotations. The concluding sections consider the pros and cons of each method.
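The criterion that hyperplane-fitting rotations maximize can be shown in a few lines: the hyperplane count, the proportion of loadings falling inside a narrow band around zero. The loading matrices below are hypothetical, chosen only to show the count rising after a simple-structure rotation.

```python
import numpy as np

def hyperplane_count(loadings, band=0.10):
    """Proportion of loadings inside the +/- band hyperplane region,
    the quantity hyperplane-fitting rotations seek to maximize."""
    L = np.asarray(loadings)
    return float(np.mean(np.abs(L) <= band))

# Hypothetical 6-variable, 2-factor loading matrices before and after rotation
unrotated = np.array([[0.62, 0.45], [0.58, 0.40], [0.55, 0.38],
                      [0.50, -0.48], [0.47, -0.52], [0.44, -0.55]])
rotated = np.array([[0.76, 0.05], [0.70, 0.06], [0.66, 0.05],
                    [0.04, 0.69], [-0.02, 0.70], [-0.05, 0.70]])
print(hyperplane_count(unrotated), hyperplane_count(rotated))
```

Conventional criteria such as varimax maximize loading variance instead; the abstract's point is that directly fitting hyperplanes handles complex configurations better.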
Abstract: Friedman’s test is a rank-based procedure that can be used to test for differences among t treatment distributions in a randomized complete block design. It is well known that the test has reasonably good power under location-shift alternatives to the null hypothesis of no difference in the t treatment distributions. However, the power of Friedman’s test can be poor when the alternative hypothesis consists of a non-location difference in treatment distributions. We develop the properties of an alternative rank-based test that has greater power than Friedman’s test in a variety of such circumstances. The test is based on the joint distribution of the t! possible permutations of the treatment ranks within a block (assuming no ties). We show when our proposed test will have greater power than Friedman’s test, and provide results from extensive numerical work comparing the power of the two tests under various configurations for the underlying treatment distributions.
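For reference, the baseline procedure is available in scipy. The sketch below runs Friedman's test on hypothetical simulated blocks under a location-shift alternative, the setting where its power is good; the paper's proposed permutation-based competitor is not part of scipy.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
blocks, t = 30, 3

# Randomized complete block data: rows are blocks, columns are treatments.
# Location-shift alternative: the third treatment is shifted upward.
y = rng.standard_normal((blocks, t))
y[:, 2] += 1.0

# Friedman's test ranks treatments within each block
stat, pvalue = stats.friedmanchisquare(y[:, 0], y[:, 1], y[:, 2])
print(stat, pvalue)  # small p-value: the shift is detected
```

A non-location alternative (e.g. one treatment with inflated variance but the same median) leaves the within-block rank means nearly equal, which is why Friedman's test loses power there.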
Abstract: Normality (symmetry) of the random effects and the within-subject errors is a routine assumption for the linear mixed model, but it may be unrealistic, obscuring important features of among- and within-subject variation. We relax this assumption by assuming that the random effects and model errors follow skew-normal distributions, which include normality as a special case and provide flexibility in capturing a broad range of non-normal behavior. The marginal distribution of the observed quantity is derived in closed form, so inference may be carried out using existing statistical software and standard optimization techniques. We also implement an EM-type algorithm, which seems to provide some advantages over direct maximization of the likelihood. Results of simulation studies and applications to real data sets are reported.
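The key property, that the skew-normal family includes the normal as a special case, is easy to verify numerically with scipy's skew-normal density: setting the shape parameter to zero recovers the standard normal exactly. This illustrates only the distributional family, not the mixed-model fitting itself.

```python
import numpy as np
from scipy import stats

x = np.linspace(-4, 4, 9)

# Shape parameter 0 recovers the standard normal density:
# skewnorm pdf is 2 * phi(x) * Phi(a*x), and Phi(0) = 1/2.
sn0 = stats.skewnorm.pdf(x, 0)
nrm = stats.norm.pdf(x)
print(np.allclose(sn0, nrm))  # True

# A positive shape parameter skews the density to the right
sn3 = stats.skewnorm.pdf(x, 3)
```

Because zero shape reduces to normality, a likelihood-ratio or Wald check of the skewness parameters provides a direct test of the routine symmetry assumption.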