Analysis Methods for Supersaturated Design: Some Comparisons

Abstract: Supersaturated designs are very cost-effective with respect to the number of runs and as such are highly desirable in many preliminary studies in industrial experimentation. Variable selection plays an important role in analyzing data from supersaturated designs (SSDs). Traditional approaches, such as best subset variable selection and stepwise regression, may not be appropriate in this situation. In this paper, we introduce a variable selection procedure that screens active effects in SSDs via a nonconvex penalized least squares approach. An empirical comparison with Bayesian variable selection approaches is conducted. Our simulation shows that the nonconvex penalized least squares method compares very favorably with the Bayesian variable selection approach proposed in Beattie, Fong and Lin (2001).


Introduction
Supersaturated designs (SSDs) are useful in many preliminary studies in industry in the presence of a large number of potentially relevant factors, of which the actual active effects are believed to be sparse. The SSD was introduced by Satterthwaite (1959) and has received renewed attention since the appearance of Lin (1993), as evidenced by the growth of the literature. Lin (1993) provided a new class of supersaturated designs based on half-fractions of Hadamard matrices. Wu (1993) augmented Hadamard matrices by adding interaction columns. Algorithms for constructing SSDs have been studied by many authors, for instance, Lin (1991, 1995), Nguyen (1996), Li and Wu (1997), Tang and Wu (1997), Yamada and Lin (1997), Deng, Lin and Wang (1999) and Fang, Lin and Ma (2000). Since the number of experiments is less than the number of candidate factors in an SSD, the analysis is very challenging.
To find the sparse active effects, variable selection becomes fundamental in the analysis stage of such screening experiments. While stepwise variable selection may not be appropriate (see, for example, Westfall, Young and Lin, 1998), some traditional approaches, such as best subset variable selection, are known to be infeasible. Beattie, Fong and Lin (2001) discussed how to implement Bayesian variable selection for data analysis of SSDs. They first employed the stochastic search variable selection (SSVS) approach proposed by George and McCulloch (1993), and then applied the intrinsic Bayes factor (IBF) approach, proposed by Berger and Pericchi (1996), to select active effects from the factors remaining after the first stage. In this paper, we introduce an alternative approach via nonconvex penalized least squares. The theory behind our approach is different from that of the Bayesian variable selection approaches. Fan and Li (2001) proposed a class of variable selection procedures via nonconcave penalized likelihood. Extending the idea of the nonconcave penalized likelihood, Li and Lin (2002) suggested the use of nonconvex penalized least squares to screen active effects in supersaturated designs. From their comparison, the penalized least squares approach performs much better than stepwise procedures using various variable selection criteria. In this paper, the performance of the penalized least squares approach is compared with that of Bayesian variable selection procedures. Our simulation shows that the procedure proposed by Li and Lin (2002) compares very favorably with the Bayesian variable selection approaches.
This paper is organized as follows. In Section 2, we introduce nonconvex penalized least squares and discuss how to implement the approach in practice. Section 3 proposes a new approach to screening active effects via penalized least squares. A real example and an empirical comparison are given in Section 4. Final conclusions are given in Section 5.

Nonconvex penalized least squares
Suppose that observations $y_1, \cdots, y_n$ are an independent sample from the linear regression model
$$y_i = x_i^T \beta + \varepsilon_i, \qquad (2.1)$$
with $E(\varepsilon_i) = 0$ and $\mathrm{var}(\varepsilon_i) = \sigma^2$, where $x_i$ corresponds to a fixed experimental design. Denote $y = (y_1, \cdots, y_n)^T$ and $X = (x_1, \cdots, x_n)^T$. Following Fan and Li (2001), define the penalized least squares (PLS) as
$$Q(\beta) = \frac{1}{2}\|y - X\beta\|^2 + n \sum_{j=1}^{d} p_\lambda(|\beta_j|), \qquad (2.2)$$
where $p_\lambda(\cdot)$ is a penalty function and $\lambda$ is a tuning parameter which controls model complexity. Note that $\lambda$ may depend on the sample size $n$ and can be selected by some data-driven method, such as generalized cross-validation (GCV, Craven and Wahba, 1979). Some existing variable selection criteria are closely related to the penalized least squares criterion (2.2). Take the penalty function to be the entropy penalty, namely, $p_\lambda(|\theta|) = \frac{1}{2}\lambda^2 I(|\theta| \neq 0)$, also referred to as the $L_0$ penalty in the literature, where $I(\cdot)$ is an indicator function. Note that the dimension or size of a model equals the number of nonzero regression coefficients in the model, which is exactly $\sum_j I(|\beta_j| \neq 0)$. In other words, the PLS (2.2) with the entropy penalty can be rewritten as
$$Q(\beta) = \frac{1}{2}\|y - X\beta\|^2 + \frac{1}{2} n \lambda_n^2 |M|, \qquad (2.3)$$
where $|M| = \sum_j I(|\beta_j| \neq 0)$ is the size of the underlying candidate model. Hence, many popular variable selection criteria can be derived from the PLS (2.3) by choosing different values of $\lambda_n$. For instance, the AIC (Akaike, 1974) and BIC (Schwarz, 1978) correspond to $\lambda_n = \sigma\sqrt{2/n}$ and $\lambda_n = \sigma\sqrt{\log(n)/n}$, respectively, although these criteria were motivated by different principles. The $L_1$ penalty $p_\lambda(|\beta|) = \lambda|\beta|$ results in the LASSO (Tibshirani, 1996), and the $L_q$ penalty in general leads to bridge regression (see Frank and Friedman, 1993). Furthermore, the $L_q$ penalty can provide the resulting solution with a Bayesian interpretation.
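As an illustration (an informal Python sketch, not code from the paper; all function names are ours), the criterion (2.2) with the entropy and $L_1$ penalties can be written in a few lines. With the entropy penalty, the total penalty term is proportional to the model size $|M|$, as in (2.3):

```python
import numpy as np

# Entropy (L0) penalty: p_lambda(|b|) = (1/2) * lam^2 * I(b != 0).
# Summed over coefficients and multiplied by n, it simply counts model size.
def l0_penalty(beta, lam):
    return 0.5 * lam**2 * (np.abs(beta) > 0).astype(float)

# L1 penalty, which yields the LASSO.
def l1_penalty(beta, lam):
    return lam * np.abs(beta)

def pls_objective(X, y, beta, lam, penalty):
    """Penalized least squares (2.2): (1/2)||y - X beta||^2 + n * sum_j p(|b_j|)."""
    n = X.shape[0]
    rss = 0.5 * np.sum((y - X @ beta) ** 2)
    return rss + n * np.sum(penalty(beta, lam))
```

For example, with $X = I_3$, an exact fit and two nonzero coefficients, the entropy-penalized objective reduces to $n \cdot \frac{1}{2}\lambda^2 \cdot 2$, i.e., a pure model-size charge.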
To achieve the goal of variable selection, some conditions on $p_\lambda(|\cdot|)$ are needed. Fan and Li (2001) suggested the use of the smoothly clipped absolute deviation (SCAD) penalty, whose first-order derivative is defined by
$$p'_\lambda(\beta) = \lambda \left\{ I(\beta \le \lambda) + \frac{(a\lambda - \beta)_+}{(a-1)\lambda} I(\beta > \lambda) \right\}$$
for some $a > 2$ and $\beta > 0$, with $p_\lambda(0) = 0$. For simplicity of presentation, we will use the name SCAD for all procedures using the SCAD penalty. The SCAD penalty involves two unknown parameters, $\lambda$ and $a$. Fan and Li (2001) suggested using $a = 3.7$ from a Bayesian point of view. They found via Monte Carlo simulation that the performance of the SCAD is not sensitive to the choice of $a$, provided that $a \in (2, 10)$. They also found that the empirical performance of the SCAD with $a = 3.7$ is as good as that of the SCAD with the value of $a$ chosen by GCV. Hence, this value will be used throughout the entire paper.
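This derivative transcribes directly into code (an illustrative Python sketch, not code from the paper; the function name is ours). Note that $p'_\lambda$ vanishes for $\beta > a\lambda$, which is what keeps the SCAD from over-penalizing large coefficients:

```python
import numpy as np

def scad_derivative(beta, lam, a=3.7):
    """First-order derivative p'_lambda(beta) of the SCAD penalty
    for beta >= 0 (Fan and Li, 2001), with the default a = 3.7."""
    beta = np.asarray(beta, dtype=float)
    # p'(beta) = lam                               if beta <= lam
    #          = (a*lam - beta)_+ / (a - 1)        if beta >  lam
    #          = 0                                 if beta >  a*lam
    return lam * (
        (beta <= lam).astype(float)
        + np.maximum(a * lam - beta, 0.0) / ((a - 1.0) * lam) * (beta > lam)
    )
```

For instance, with $\lambda = 1$ and $a = 3.7$, the derivative equals $\lambda$ on $[0, 1]$, decays linearly on $(1, 3.7]$, and is 0 beyond $3.7$.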
It is challenging to find the solutions of the nonconvex penalized least squares with the SCAD penalty, because the target function is nonconvex and may be high-dimensional. Furthermore, the SCAD penalty function does not have a second-order derivative at some points. In order to apply the Newton-Raphson algorithm to the penalized least squares, Fan and Li (2001) suggested locally approximating the SCAD penalty function by a quadratic function as follows.
Given an initial value $\beta^{(0)}$ that is close to the true value of $\beta$, when $\beta_j^{(0)}$ is not very close to 0, the penalty $p_\lambda(|\beta_j|)$ can be locally approximated by the quadratic function
$$p_\lambda(|\beta_j|) \approx p_\lambda(|\beta_j^{(0)}|) + \frac{1}{2}\left\{ p'_\lambda(|\beta_j^{(0)}|)/|\beta_j^{(0)}| \right\}\left( \beta_j^2 - \beta_j^{(0)2} \right); \qquad (2.4)$$
otherwise, set $\hat{\beta}_j = 0$. With the local quadratic approximation, the solution of the penalized least squares can be found by iteratively computing the following ridge regression with initial value $\beta^{(0)}$:
$$\beta^{(1)} = \left\{ X^T X + n \Sigma_\lambda(\beta^{(0)}) \right\}^{-1} X^T y, \qquad (2.5)$$
where $\Sigma_\lambda(\beta^{(0)}) = \mathrm{diag}\left\{ p'_\lambda(|\beta_1^{(0)}|)/|\beta_1^{(0)}|, \cdots, p'_\lambda(|\beta_d^{(0)}|)/|\beta_d^{(0)}| \right\}$. A standard error formula can be derived from the iterative ridge regression algorithm (2.5); specifically,
$$\widehat{\mathrm{cov}}(\hat{\beta}) = \hat{\sigma}^2 \left\{ X^T X + n \Sigma_\lambda(\hat{\beta}) \right\}^{-1} X^T X \left\{ X^T X + n \Sigma_\lambda(\hat{\beta}) \right\}^{-1}.$$
This formula applies only to the nonvanishing components of $\hat{\beta}$.
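The iteration (2.5), with small components set to zero as in (2.4), can be sketched as follows. This is an illustrative Python implementation under simplifying assumptions of our own (a least squares start rather than a stepwise-selected initial value, a fixed threshold for declaring a component zero, and function names that are ours; the SCAD derivative is restated for self-containment):

```python
import numpy as np

def scad_deriv(beta, lam, a=3.7):
    # p'_lambda(|beta|) of the SCAD penalty (Fan and Li, 2001)
    b = np.abs(beta)
    return lam * ((b <= lam).astype(float)
                  + np.maximum(a * lam - b, 0.0) / ((a - 1.0) * lam) * (b > lam))

def scad_pls(X, y, lam, beta0=None, tol=1e-6, thresh=1e-4, max_iter=200):
    """Iterative ridge regression (2.5) with the local quadratic
    approximation (2.4); components below `thresh` are set to 0."""
    n, d = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0] if beta0 is None else beta0.copy()
    for _ in range(max_iter):
        active = np.abs(beta) > thresh
        beta[~active] = 0.0
        if not active.any():
            break
        Xa, ba = X[:, active], beta[active]
        # Sigma_lambda = diag{ p'(|b_j|) / |b_j| } over active components
        Sigma = np.diag(scad_deriv(ba, lam) / np.abs(ba))
        new = np.linalg.solve(Xa.T @ Xa + n * Sigma, Xa.T @ y)
        converged = np.max(np.abs(new - ba)) < tol
        beta[active] = new
        if converged:
            break
    return beta
```

On a toy full-rank problem with sparse true coefficients and small noise, the iteration zeroes the inactive coefficients and leaves the large ones essentially unshrunk, since $p'_\lambda$ vanishes beyond $a\lambda$.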

The proposed procedure
We now propose a procedure for screening active effects in SSDs. For a given value of $\lambda$ and an initial value of $\beta$, we iteratively compute the ridge regression (2.5), updating the local quadratic approximation (2.4) at each iteration. This can easily be implemented in many statistical packages. Some components of the resulting estimate will be exactly 0; these components correspond to the coefficients of inactive effects. In other words, the nonzero components of the resulting estimate correspond to the active effects in the SSD. Following Li and Lin (2002), we choose $\lambda$ by minimizing the corresponding GCV score.
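One plausible form of the GCV score, following Fan and Li (2001), takes the effective number of parameters $e(\lambda)$ to be the trace of the ridge-type hat matrix at convergence and sets $\mathrm{GCV}(\lambda) = \mathrm{RSS}(\lambda)/\{n(1 - e(\lambda)/n)^2\}$. The sketch below (illustrative Python, names ours, not code from the paper) computes this score for a fitted estimate; $\lambda$ would then be chosen by minimizing the score over a grid:

```python
import numpy as np

def gcv_score(X, y, beta, lam, a=3.7):
    """GCV score for a fitted SCAD estimate: RSS / (n * (1 - e/n)^2),
    with e the trace of the ridge-type hat matrix over active components."""
    n = X.shape[0]
    active = beta != 0
    Xa, ba = X[:, active], np.abs(beta[active])
    # SCAD first-order derivative p'_lambda(|b_j|) on the active set
    pprime = lam * ((ba <= lam).astype(float)
                    + np.maximum(a * lam - ba, 0.0) / ((a - 1.0) * lam) * (ba > lam))
    # Hat matrix of the ridge regression at convergence
    H = Xa @ np.linalg.solve(Xa.T @ Xa + n * np.diag(pprime / ba), Xa.T)
    e = np.trace(H)  # effective number of parameters
    rss = np.sum((y - Xa @ beta[active]) ** 2)
    return rss / (n * (1.0 - e / n) ** 2)
```

When all active coefficients exceed $a\lambda$, the penalty contributes nothing and $e$ reduces to the number of active effects, so the score coincides with ordinary GCV for the selected submodel.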
Compared with stepwise regression, the proposed procedure is an estimation procedure rather than a sequential selection procedure; stepwise selection ignores the stochastic errors incurred during the course of selecting variables. Compared with best subset variable selection, our procedure significantly reduces the computational burden. Our procedure simultaneously excludes inactive factors and estimates the regression coefficients of active effects. Assume that observations are taken from the model $y = X_1\beta_1 + X_2\beta_2 + \varepsilon$. Without loss of generality, further assume that all components of the first portion ($X_1$) are active, while the second portion ($X_2$) is not active. An ideal estimator is the oracle estimator: $\hat{\beta}_1 = (X_1^T X_1)^{-1} X_1^T y$ and $\hat{\beta}_2 = 0$.
The oracle estimator correctly specifies the true model and efficiently estimates the regression coefficients of the significant part. This is a desirable property in variable selection. It has been shown in Li and Lin (2002) that the SCAD yields an estimate possessing the oracle property in an asymptotic sense.

Numerical comparison and example
The goal of this section is to compare the performance of the penalized least squares approach with the SCAD penalty against two Bayesian variable selection procedures: SSVS, proposed by George and McCulloch (1993) and extended to SSDs by Chipman, Hamada and Wu (1997), denoted by CHW for short, and the two-stage procedure (SSVS/IBF) of Beattie, Fong and Lin (2001), denoted by BFL. We assess the performance of these variable selection procedures in terms of their ability to identify the true model, i.e., the rate of identifying the smallest active effect and the size of the selected model. In the following examples, we set the level of significance to 0.1 for both F-enter and F-remove in the stepwise variable selection procedure used to choose an initial value for the penalized least squares approach. All simulations are conducted using MATLAB code.
Example 1. In this example, we apply the SCAD to the supersaturated design first demonstrated in Lin (1993) and listed in Table 1. Table 2 depicts the five models with the highest posterior probabilities obtained in CHW using the SSVS approach. From Table 2, the first two models have very close posterior probabilities. A question arising here is whether Factor 10 is active. In fact, this is a very challenging example for various approaches; most existing approaches have difficulty determining whether or not Factor 10 is active. See BFL for a more detailed discussion.
We now apply the SCAD to this data set. The tuning parameter $\lambda$ is selected by GCV, resulting in $\lambda = 6.5673$. The final selected model, including estimated coefficients and their standard errors, is shown in Table 3. The SCAD identifies $X_{15}$, $X_{12}$, $X_{20}$ and $X_4$ as active effects. This is consistent with the model concluded in Williams (1968), in which 28 rather than 14 experiments are included in the analysis.
Example 2. The design matrix $X$ is displayed in Table 1. We generate data from the linear model $y = X\beta + \varepsilon$, where the random error $\varepsilon$ is $N(0, 1)$. We consider three cases for $\beta$. Cases I and II were used in BFL, while Case III was studied by Beattie (1999). Simulation results for the SCAD based on 1000 replicates are summarized in Table 4. For convenience, we also present the simulation results for the Bayesian variable selection approaches obtained in BFL and Beattie (1999). In Table 4, SSVS stands for the Bayesian variable selection approach proposed by George and McCulloch (1993) and extended to the context of screening variables by CHW. See BFL for the meaning of the hyper-parameters shown in parentheses. SSVS/IBF refers to the two-stage Bayesian model selection strategy proposed by BFL, which retains the good features of both the SSVS and IBF approaches.
There is only a single strong effect among the factors in Case I. All three approaches perform well in terms of the rate of identifying this single active effect. However, SSVS and SSVS/IBF tend to incorrectly identify inactive effects as active, which leads to a low rate of identifying the true model. In this case, the SCAD performs best.
The factors in Case II contain a very strong effect, a medium effect and a relatively small effect. This is challenging for various existing variable selection procedures. The two Bayesian variable selection approaches frequently fail to identify the smallest effect, $X_9$, as active and suffer a very low rate of identifying the true model. The performance of the SCAD is similar to that in Case I, and it outperforms the other two approaches.
In Case III, there are 5 active effects: two factors have very strong effects, two have moderate effects, and one has a small effect. In this case, SSVS/IBF loses its advantage over SSVS in identifying the smallest active effect. Again, the SCAD performs best.

Conclusion
In this paper, we have introduced the SCAD for analyzing data from SSDs. The popular Williams data were used for illustration. Although we do not know the true answer for the Williams data, the results from the proposed method are rather consistent with those of other approaches. The empirical performance of the nonconvex penalized least squares approach, based on extensive simulations, is also compared with that of Bayesian variable selection procedures for SSDs. The comparison shows that the SCAD performs better than the proposals in CHW and BFL.

Table 2: Posterior Model Probabilities in Example 1

Table 4: Summary of Simulation Results in Example 2