Abstract: Recently, He and Zhu (2003) derived an omnibus goodness-of-fit test for linear or nonlinear quantile regression models based on a CUSUM process of the gradient vector, and they suggested using a particular simulation method for determining critical values for their test statistic. But despite the speed of modern computers, execution time can be high. One goal in this note is to suggest a slight modification of their method that eliminates the need for simulations among a collection of important and commonly occurring situations. For a broader range of situations, the modification can be used to determine a critical value as a function of the sample size (n), the number of predictors (q), and the quantile of interest (γ). This is in contrast to the He and Zhu approach, where the critical value is also a function of the observed values of the q predictors. As a partial check on the suggested modification in terms of controlling the Type I error probability, simulations were performed for the same situations considered by He and Zhu, and some additional simulations are reported for a much wider range of situations.
Abstract: For two independent random variables, X and Y, let p = P(X > Y ) + 0.5P(X = Y ), which is sometimes described as a probabilistic measure of effect size. It has been argued that for various reasons, p represents an important and useful way of characterizing how groups differ. In clinical trials, for example, an issue is the likelihood that one method of treatment will be more effective than another. The paper deals with making inferences about p when three or more groups are to be compared. When tied values can occur, the results suggest using a multiple comparison procedure based on an extension of Cliff’s method used in conjunction with Hochberg’s sequentially rejective technique. If tied values occur with probability zero, an alternative method can be argued to have a practical advantage. As for a global test, extant rank-based methods are unsatisfactory given the goal of comparing groups based on p. The one method that performed well in simulations is based in part on the distribution of the difference between each pair of random variables. A bootstrap method is used where a p-value is based on the projection depth of the null vector relative to the bootstrap cloud. The proposed methods are illustrated using data from an intervention study.
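The effect size p defined above has a simple plug-in estimate obtained by comparing every observation in one sample with every observation in the other, with ties counting one half. A minimal sketch (the function name phat is ours, not from the paper):

```python
import numpy as np

def phat(x, y):
    """Plug-in estimate of p = P(X > Y) + 0.5 * P(X = Y)
    from two independent samples, counting ties as half."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Broadcasting compares every x[i] against every y[j].
    gt = np.mean(x[:, None] > y[None, :])
    eq = np.mean(x[:, None] == y[None, :])
    return gt + 0.5 * eq
```

When the two samples come from the same distribution the estimate is near 0.5; identical samples give exactly 0.5 because each tie contributes one half.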
Abstract: When comparing two independent groups, the shift function compares all of the quantiles in a manner that controls the probability of at least one Type I error, assuming random sampling only. Moreover, it provides a much more detailed sense of how groups compare, versus using a single measure of location, and the associated plot of the data can yield valuable insights. This note examines the small-sample properties of an extension of the shift function where the goal is to compare the distributions of two specified linear sums of the random variables under study, with an emphasis on a two-by-two design. A very simple method controls the probability of a Type I error. Moreover, very little power is lost versus comparing means when sampling is from normal distributions with equal variances.
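To illustrate the idea behind the shift function (this is only the quantile-by-quantile comparison, not the full method, which also supplies a simultaneous confidence band), the differences at the deciles can be sketched as follows; the function name decile_shifts is ours:

```python
import numpy as np

def decile_shifts(x, y):
    """Sample decile differences between two groups: how much the
    distribution of y is shifted relative to x at each decile."""
    qs = np.arange(0.1, 1.0, 0.1)  # the nine deciles
    return np.quantile(y, qs) - np.quantile(x, qs)
```

For a pure location shift, all nine differences estimate the same constant; distributions that differ in spread or shape produce deciles that shift by different amounts.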
Abstract: Let ρj be Pearson’s correlation between Y and Xj (j = 1, 2). A problem that has received considerable attention is testing H0: ρ1 = ρ2. A well-known concern, however, is that Pearson’s correlation is not robust (e.g., Wilcox, 2005), and the usual estimate of ρj, denoted rj, has a finite sample breakdown point of only 1/n. The goal in this paper is to consider extensions to situations where Pearson’s correlation is replaced by a particular robust measure of association. Included are results where there are p > 2 predictors and the goal is to compare any two subsets of m < p predictors.
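The breakdown-point concern can be demonstrated directly: because the sample breakdown point of rj is 1/n, a single aberrant pair can move the estimate essentially anywhere. A small sketch on simulated data (the data and seed are ours, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = x + rng.normal(size=50)        # strong positive association
r_clean = np.corrcoef(x, y)[0, 1]  # population value is about 0.71

# Replacing just one of the 50 points with an extreme outlier
# is enough to flip the sign of Pearson's correlation.
x_out, y_out = x.copy(), y.copy()
x_out[0], y_out[0] = 1000.0, -1000.0
r_out = np.corrcoef(x_out, y_out)[0, 1]
```

The cross-product term contributed by the single outlier dominates the sum, dragging the estimate toward −1 regardless of the remaining 49 points.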
Abstract: The paper deals with robust ANCOVA when there are one or two covariates. Let Mj (Y |X) = β0j + β1j X1 + β2j X2 be some conditional measure of location associated with the random variable Y , given X, where β0j , β1j and β2j are unknown parameters. A basic goal is testing the hypothesis H0: M1(Y |X) = M2(Y |X). A classic ANCOVA method is aimed at addressing this goal, but it is well known that violating the underlying assumptions (normality, parallel regression lines and two types of homoscedasticity) creates serious practical concerns. Methods are available for dealing with heteroscedasticity and nonnormality, and there are well-known techniques for controlling the probability of one or more Type I errors. But some practical concerns remain, which are reviewed in the paper. An alternative approach is suggested and found to have a distinct power advantage.
Abstract: It is well known that the ordinary least squares (OLS) regression estimator is not robust. Many robust regression estimators have been proposed and inferential methods based on these estimators have been derived. However, for two independent groups, let θj (X) be some conditional measure of location for the jth group, given X, based on some robust regression estimator. An issue that has not been addressed is computing a 1 − α confidence interval for θ1(X) − θ2(X) in a manner that allows both within group and between group heteroscedasticity. The paper reports the finite sample properties of a simple method for accomplishing this goal. Simulations indicate that, in terms of controlling the probability of a Type I error, the method performs very well for a wide range of situations, even with a relatively small sample size. In principle, any robust regression estimator can be used. The simulations are focused primarily on the Theil-Sen estimator, but some results using Yohai’s MM-estimator, as well as the Koenker and Bassett quantile regression estimator, are noted. Data from the Well Elderly II study, dealing with measures of meaningful activity using the cortisol awakening response as a covariate, are used to illustrate that the choice between an extant method based on a nonparametric regression estimator, and the method suggested here, can make a practical difference.
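For readers unfamiliar with it, the Theil-Sen estimator used in the simulations takes the slope to be the median of all pairwise slopes. A minimal single-predictor sketch (not the paper's implementation):

```python
import numpy as np
from itertools import combinations

def theil_sen(x, y):
    """Theil-Sen regression with one predictor:
    slope = median of all pairwise slopes,
    intercept = median of y - slope * x."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2)
              if x[i] != x[j]]
    b1 = np.median(slopes)
    b0 = np.median(y - b1 * x)
    return b0, b1
```

Because any single pairwise slope has little influence on a median, the estimator tolerates a substantial fraction of outliers, in contrast to OLS.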
Abstract: Motivated by a situation encountered in the Well Elderly 2 study, the paper considers the problem of robust multiple comparisons based on K independent tests associated with 2K independent groups. A simple strategy is to use an extension of Dunnett’s T3 procedure, which is designed to control the probability of one or more Type I errors. However, this method and related techniques fail to take into account the overall pattern of p-values when making decisions about which hypotheses should be rejected. The paper suggests a multiple comparison procedure that does take the overall pattern into account and then describes general situations where this alternative approach makes a practical difference in terms of both power and the probability of one or more Type I errors. For reasons summarized in the paper, the focus is on 20% trimmed means, but in principle the method considered here is relevant to any situation where the Type I error probability of the individual tests can be controlled reasonably well.
Abstract: The paper considers the problem of comparing measures of location associated with two dependent groups when values are missing at random, with an emphasis on robust measures of location. It is known that simply imputing missing values can be unsatisfactory when testing hypotheses about means, so the goal here is to compare several alternative strategies that use all of the available data. Included are results on comparing means and a 20% trimmed mean. Yet another method is based on the usual median but differs from the other methods in a manner that is made obvious. (It is somewhat related to the formulation of the Wilcoxon-Mann-Whitney test for independent groups.) The strategies are compared in terms of Type I error probabilities and power.
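The 20% trimmed mean referred to above discards the smallest and largest 20% of the observations before averaging. A minimal sketch (the helper name trimmed_mean is ours):

```python
import numpy as np

def trimmed_mean(x, prop=0.2):
    """prop-trimmed mean: sort, drop the lowest and highest
    floor(prop * n) observations, and average the rest."""
    x = np.sort(np.asarray(x, dtype=float))
    g = int(np.floor(prop * len(x)))
    return x[g:len(x) - g].mean()
```

A single extreme value that would badly distort the ordinary mean (for [1, 2, 3, 4, 100] the mean is 22) is simply trimmed away, leaving the average of the central observations.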