A Comparison of Propensity Score and Linear Regression Analysis of Complex Survey Data

We extend propensity score methodology to incorporate survey weights from complex survey data and compare the use of multiple linear regression and propensity score analysis to estimate treatment effects in observational data from a complex survey. For illustration, we use these two methods to estimate the effect of gender on information technology (IT) salaries. In our analysis, both methods agree on the size and statistical significance of the overall gender salary gaps in the United States in four different IT occupations after controlling for educational and job-related covariates. Each method, however, has its own advantages which are discussed. We also show that it is important to incorporate the survey design in both linear regression and propensity score analysis. Ignoring the survey weights affects the estimates of population-level effects substantially in our analysis.


Introduction
We compare the use of multiple linear regression and propensity score analysis to estimate treatment effects in observational data arising from a complex survey. To do this, we extend propensity score methodology to incorporate survey weights from complex survey data. Multiple linear regression is a commonly used technique for estimating treatment effects in observational data; however, the statistical literature suggests that propensity score analysis has several advantages over multiple linear regression (Hill, Reiter, and Zanutto, 2004; Perkins, Tu, Underhill, Zhou, and Murray, 2000; Rubin, 1997) and is becoming more prevalent, for example, in public policy and epidemiologic research (e.g., D'Agostino, 1998; Dehejia and Wahba, 1999; Hornik et al., 2002; Perkins et al., 2000; Rosenbaum, 1986; Rubin, 1997). Propensity score analysis techniques use observational data to create groups of treated and control units that have similar covariate values so that subsequent comparisons, made within these matched groups, are not confounded by differences in covariate distributions. These groups are formed by matching on the estimated propensity score, which is the estimated probability of receiving treatment given background covariates. For illustration, we use these two methods to estimate the effect of gender on information technology (IT) salaries. Although we may not consider the effect of gender on salary to be a treatment effect in the causal sense, because we cannot manipulate gender (Holland, 1986), both propensity score and linear regression methods can be used to make descriptive comparisons of the salaries of similar men and women. We estimate gender gaps in IT salaries using data from the U.S. National Science Foundation's 1997 SESTAT (Scientists and Engineers Statistical Data System) database. Because the SESTAT data were collected using a complex sampling design, the survey weights must be incorporated into both analyses.
The remainder of this paper is organized as follows. Multiple linear regression and propensity score methodologies are summarized in Sections 2 and 3, with a discussion of the necessary modifications to both methods to accommodate complex survey data in Section 4. The results of our data analysis are described in Section 5, with a discussion of the relative advantages of each of the methods in Section 6. Section 7 concludes with an overall discussion.

Multiple Linear Regression
Multiple linear regression can be used to estimate treatment effects in observational data by regressing the outcome on the covariates, including an indicator variable for treatment status and interactions between the treatment variable and each of the covariates. A statistically significant coefficient of treatment, or a statistically significant coefficient of an interaction involving the treatment variable, indicates a treatment effect. This is the most common method, for example, for estimating gender salary gaps after controlling for important covariates such as education, experience, job responsibilities, and other market factors such as region of the country (Finkelstein and Levin, 2001; Gastwirth, 1993; Gray, 1993).

Propensity Score Methodology
As an alternative to multiple linear regression, a propensity score analysis of observational data (Rosenbaum and Rubin, 1983, 1984; Rubin, 1997) can be used to create groups of treated and control units that have similar characteristics so that comparisons can be made within these matched groups. The propensity score is defined as the conditional probability of receiving treatment given a set of observed covariates. The propensity score is a balancing score, meaning that conditional on the propensity score the distributions of the observed covariates are independent of the binary treatment assignment (Rosenbaum and Rubin, 1983, 1984). As a result, subclassifying or matching on the propensity score makes it possible to estimate treatment effects, controlling for covariates, because within subclasses that are homogeneous in the propensity score, the distributions of the covariates are the same for treated and control units (i.e., are "balanced"). In particular, for a specific value of the propensity score, the difference between the treated and control means for all units with that value of the propensity score is an unbiased estimate of the average treatment effect at that propensity score, assuming conditional independence between treatment assignment and potential outcomes given the observed covariates (the "strongly ignorable treatment assignment" assumption) (Rosenbaum and Rubin, 1983). In other words, unbiased treatment effect estimates are obtained when we have controlled for all relevant covariates, which is similar to the assumption of no omitted-variable bias in linear regression.
Unlike other propensity score applications (D'Agostino, 1998; Rosenbaum and Rubin, 1984; Rubin, 1997), when estimating the effect of gender on salary we cannot imagine that, given similar background characteristics, the treatment (gender) was randomly assigned. Nevertheless, we can use the propensity score framework to create groups of men and women who share similar background characteristics to facilitate descriptive comparisons.
The estimated propensity scores can be used to subclassify the sample into strata according to propensity score quantiles, usually quintiles (Rosenbaum and Rubin, 1984). Strata boundaries can be based on the values of the propensity scores for both groups combined, or for the treated or control group alone (D'Agostino, 1998). To estimate gender salary gaps in IT, since we are interested in estimating gender salary gaps for women, and since there are many fewer women than men, we create strata based on the estimated propensity scores for women, so that each stratum contains an equal number of women. This ensures an adequate number of women in each stratum. As an alternative to subclassification, individual men and women can be matched using estimated propensity scores (Rosenbaum, 2002, chapter 10); however, it is less clear in this case how to incorporate the survey weights from a complex survey design, and so we do not use this approach here.
To estimate the average difference in outcomes between treated and control units using propensity score subclassification, we calculate the average difference in outcomes within each propensity score stratum and then average these differences across all five strata. In the case of estimating average IT salary differences, this is summarized by the following formula:

$$\hat{\Delta}_1 = \sum_{k=1}^{5} \frac{n_{Fk}}{N_F}\left(\bar{y}_{Mk} - \bar{y}_{Fk}\right), \qquad (3.1)$$

where $\hat{\Delta}_1$ is the estimated overall gender difference in salaries, $k$ indexes the propensity score stratum, $n_{Fk}$ is the number of women (treated units) in propensity score stratum $k$ (the total sample size in stratum $k$ is used here if quintiles are based on the treated and control units combined), $N_F = \sum_k n_{Fk}$, and $\bar{y}_{Mk}$ and $\bar{y}_{Fk}$, respectively, are the average salary for men (control units) and women (treated units) within propensity score stratum $k$. The estimated standard error of this estimated difference is commonly calculated as (Benjamin, 2003; Larsen, 1999; Perkins et al., 2000)

$$\hat{s}(\hat{\Delta}_1) = \left[\sum_{k=1}^{5}\left(\frac{n_{Fk}}{N_F}\right)^2\left(\frac{s^2_{Mk}}{n_{Mk}} + \frac{s^2_{Fk}}{n_{Fk}}\right)\right]^{1/2}, \qquad (3.2)$$

where $n_{Mk}$ and $n_{Fk}$ are the number of men and women, respectively, in stratum $k$, and $s^2_{Mk}$ and $s^2_{Fk}$ are the sample variances of salary for men and women, respectively, in stratum $k$. This standard error estimate is only approximate for several reasons (Du, 1998). It does not account for the fact that, since the subclassification is based on propensity scores estimated from the data, the responses within each stratum and between the strata are not independent. Also, the stratum boundary cut-points are sample-dependent, and so are the subsequent sample sizes $n_{Mk}$ and $n_{Fk}$. However, previous studies (Agodini and Dynarski, 2001; Benjamin, 2003) have found this standard error estimate to be a reasonable approximation.
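For concreteness, the estimator (3.1) and its approximate standard error (3.2) are easy to compute directly. The following is a minimal sketch (our illustration, not the authors' code), assuming a pandas DataFrame with hypothetical columns `salary`, `female` (1 = woman, 0 = man), and `stratum` (the propensity score stratum index):

```python
import numpy as np
import pandas as pd

def subclassification_estimate(df):
    """Overall gap (3.1) and approximate s.e. (3.2): within-stratum
    differences in mean salary (men minus women), averaged with
    weights proportional to the number of women in each stratum."""
    gaps, variances, n_women = [], [], []
    for _, g in df.groupby("stratum"):
        men = g.loc[g["female"] == 0, "salary"]
        women = g.loc[g["female"] == 1, "salary"]
        gaps.append(men.mean() - women.mean())
        variances.append(men.var(ddof=1) / len(men) + women.var(ddof=1) / len(women))
        n_women.append(len(women))
    share = np.array(n_women) / np.sum(n_women)   # n_Fk / N_F
    delta = float(np.sum(share * np.array(gaps)))
    se = float(np.sqrt(np.sum(share ** 2 * np.array(variances))))
    return delta, se
```

Note that the within-stratum weights use the number of women, matching the paper's choice of quintiles based on the treated (female) group.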
Simple diagnostic tests can be used to assess the degree of covariate balance achieved by the propensity subclassification (Rosenbaum and Rubin, 1984). If differences between the two groups remain after subclassification, the propensity score model should be re-estimated, including interaction or quadratic terms of variables that remain out of balance. If differences remain after repeated modeling attempts, regression adjustments can be used at the final stage to adjust for remaining covariate differences (Dehejia and Wahba, 1999; Rosenbaum, 1986). In this case, the regression-adjusted propensity score estimate of the average gender salary gap is:

$$\hat{\Delta}_2 = \sum_{k=1}^{5} \frac{n_{Fk}}{N_F}\,\hat{\beta}_{k,male}, \qquad (3.3)$$

where $\hat{\beta}_{k,male}$ is the coefficient of the indicator variable for male (1=male, 0=female) in the linear regression model fit in propensity stratum $k$ that predicts salary (outcome) from the indicator variable for male (treatment indicator) and any other variables that are out of balance after propensity score subclassification.
A standard error estimate is given by

$$\hat{s}(\hat{\Delta}_2) = \left[\sum_{k=1}^{5}\left(\frac{n_{Fk}}{N_F}\right)^2 \big(\mathrm{s.e.}(\hat{\beta}_{k,male})\big)^2\right]^{1/2},$$

where $\mathrm{s.e.}(\hat{\beta}_{k,male})$ is the usual estimate of the standard error of $\hat{\beta}_{k,male}$. Again, this estimate is only approximate due to the sample-dependent aspects of the propensity score subclassification.

Propensity score example
To briefly illustrate the propensity score subclassification method, we use the following simple example. We generated 1000 observations with two covariates, X_1 and X_2, both distributed as uniform(0, 2). Each observation was randomly assigned to either the treatment or control group. The probability of being assigned to the treatment group was given by p = (1 + exp(3 − X_1 − X_2))^{−1}, resulting in 30% of the sample being assigned to the treatment group (roughly comparable to the proportion of women in the gender salary data). These treatment assignment probabilities are such that observations with large X_1 + X_2 were likely to be assigned to treatment and those with small values were likely to be assigned to control. This created a dataset in which there were relatively few controls with large propensity score values and relatively few treated units with small propensity score values, a pattern often observed in practice. The outcome was generated as Y = 3Z + 2X_1 + 2X_2 + ε, where ε is N(0, 1) and Z = 1 for treated units and Z = 0 for control units, so that the treatment effect is 3. The unadjusted estimate of the treatment effect in the raw data, calculated simply as the difference in average outcomes for treated and control units, is 4.16 (s.e. = 0.12), with treated outcomes larger than control outcomes, which overestimates the treatment effect. This estimate is clearly confounded by differences in the values of the covariates between the two groups: the average difference between the treated and control units is 0.24 (s.e. = 0.04) for X_1 and 0.36 (s.e. = 0.04) for X_2, with covariate values larger in the treated group.
Using the propensity score subclassification method to estimate the average treatment effect, controlling for covariate differences, we estimated the propensity scores using a logistic regression model with X_1 and X_2 as covariates. Then we subclassified the data into five strata based on the quintiles of the estimated propensity scores for the treated units. The resulting estimates of stratum-specific and overall treatment effects and covariate differences, with corresponding standard errors (s.e.), are presented in Table 1. Table 1 shows that, within each stratum, the average values of X_1 and X_2 are comparable for treated and control units. A two-way ANOVA with X_1 as the dependent variable and the treatment indicator (Z) and propensity score stratum index as the independent variables yields a nonsignificant main effect of treatment and a nonsignificant interaction of treatment and propensity score stratum index, confirming that X_1 is balanced across treated and control groups within strata. Similar results are obtained for X_2. As a result, within each stratum, estimates of the treatment effect, calculated as the difference between the treated and control mean outcomes (Ȳ_T − Ȳ_C), are not confounded by differences in the covariates. As Table 1 shows, the treatment effect estimate is close to 3 within each stratum. The overall treatment effect estimate, calculated using formulas (3.1) and (3.2), is 2.97 (s.e. = 0.09), which is very close to the true value. Because propensity score subclassification balances both X_1 and X_2, no further regression adjustments are necessary.
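This toy example is easy to reproduce. The sketch below (our own illustration, not the authors' code) generates data as described, fits the propensity score model by a small Newton-Raphson logistic regression, subclassifies on quintiles of the treated units' estimated scores, and computes the subclassification estimate; up to simulation noise, the naive difference in means overestimates the effect while the subclassification estimate recovers a value near 3:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000
x1, x2 = rng.uniform(0, 2, n), rng.uniform(0, 2, n)
z = rng.binomial(1, 1 / (1 + np.exp(3 - x1 - x2)))   # treatment assignment
y = 3 * z + 2 * x1 + 2 * x2 + rng.normal(size=n)     # true effect = 3

naive = y[z == 1].mean() - y[z == 0].mean()          # confounded estimate

# Fit the propensity score model P(Z=1 | X1, X2) by Newton-Raphson.
X = np.column_stack([np.ones(n), x1, x2])
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += np.linalg.solve((X * (p * (1 - p))[:, None]).T @ X, X.T @ (z - p))
ps = 1 / (1 + np.exp(-X @ beta))

# Five strata from the quintiles of the treated units' estimated scores.
cuts = np.quantile(ps[z == 1], [0.2, 0.4, 0.6, 0.8])
stratum = np.searchsorted(cuts, ps)

# Subclassification estimate, weighting strata by treated counts.
gaps = [y[(stratum == k) & (z == 1)].mean() - y[(stratum == k) & (z == 0)].mean()
        for k in range(5)]
n_t = [int(((stratum == k) & (z == 1)).sum()) for k in range(5)]
est = float(np.dot(gaps, n_t) / np.sum(n_t))
```

The seed and sample size here are arbitrary; any run of moderate size shows the same qualitative pattern.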

Complex Survey Design Considerations
Both linear regression and propensity score analyses are further complicated when the data have been collected using a complex sampling design, as is the case with the SESTAT data. In complex surveys, each sample unit is assigned a survey weight, which in the simplest case is the inverse of the probability of selection, but is often modified to adjust for nonresponse and poststratification. These survey weights indicate the number of people that each sampled person represents in the population. A common strategy to incorporate survey weights into linear regression modeling is to fit the regression model using both ordinary least squares and survey-weighted least squares (e.g., Lohr, 1999, chapter 11). Large differences between the two analyses suggest model misspecification (DuMouchel and Duncan, 1983; Lohr and Liu, 1994; Winship and Radbill, 1994). If these differences cannot be resolved by modifying the model (e.g., by including more covariates related to the survey weights), then the weighted analysis should be used, since the weights may contain information that is not available in the covariates. Survey-weighted linear regression and the associated linearization variance estimates can be computed by statistical analysis software such as Stata and SAS (An and Watts, 1998).
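The weighted and unweighted fits in this check are both linear least squares, differing only in the weight matrix. A minimal sketch of the two estimators (illustrative only; production software such as Stata or SAS also supplies design-based standard errors):

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: solves (X'X) b = X'y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def wls(X, y, w):
    """Survey-weighted least squares: solves (X'WX) b = X'Wy,
    with W the diagonal matrix of survey weights."""
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)
```

Fitting both and comparing the coefficient vectors implements the DuMouchel-Duncan check: with constant weights the two estimators coincide, and a large divergence under the actual survey weights signals model misspecification.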
Although the implications of complex survey design for propensity score estimates of treatment effects have not been discussed in the statistical literature, similar advice of performing the analysis with and without survey weights should apply. Since the propensity score model is used only to match treated and control units with similar background characteristics in the sample, and not to make inferences about the population-level propensity score model, it is not necessary to use survey-weighted estimation for the propensity score model. However, to estimate a population-level treatment effect, it is necessary to consider the use of survey weights in equations (3.1) and (3.3). A survey-weighted version of (3.1) is:

$$\hat{\Delta}_{1w} = \sum_{k=1}^{5}\left(\frac{\sum_{i \in S_{Fk}} w_i}{\sum_{k'=1}^{5}\sum_{i \in S_{Fk'}} w_i}\right)\left(\frac{\sum_{i \in S_{Mk}} w_i y_i}{\sum_{i \in S_{Mk}} w_i} - \frac{\sum_{i \in S_{Fk}} w_i y_i}{\sum_{i \in S_{Fk}} w_i}\right), \qquad (4.1)$$

where $w_i$ denotes the survey weight for unit $i$, and $S_{Fk}$ and $S_{Mk}$ denote, respectively, the set of females and the set of males in propensity score stratum $k$. This formula allows for potential differences in distributions between the sample and the population, both within and between sample strata. Within a propensity score stratum, some types of people in the sample may be over- or underrepresented relative to other types of people. The use of weighted averages within each stratum ensures that these averages reflect the distribution of people in the population. This formula also weights each stratum by the estimated population proportion of women in that stratum, ensuring that our calculations reflect the population distribution of women across the five sample quintiles.
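The survey-weighted estimator can be sketched in a few lines. This illustration assumes arrays of salaries, survey weights, a female indicator, and a stratum index (hypothetical names):

```python
import numpy as np

def weighted_subclass_estimate(salary, weight, female, stratum):
    """Survey-weighted subclassification estimate in the spirit of (4.1):
    weighted mean salaries within each stratum, strata combined with
    weights equal to the estimated population share of women in each."""
    salary, weight, female, stratum = map(np.asarray, (salary, weight, female, stratum))
    gaps, women_weight = [], []
    for k in np.unique(stratum):
        men = (stratum == k) & (female == 0)
        women = (stratum == k) & (female == 1)
        ybar_m = np.average(salary[men], weights=weight[men])
        ybar_f = np.average(salary[women], weights=weight[women])
        gaps.append(ybar_m - ybar_f)
        women_weight.append(weight[women].sum())
    share = np.array(women_weight) / np.sum(women_weight)
    return float(np.dot(share, gaps))
```

With all weights equal this reduces to the unweighted estimator (3.1).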
Noting that (4.1) is a linear combination of subdomain (ratio) estimators, and assuming unequal probability sampling without replacement with overall inclusion probabilities $1/w_i$, an approximate standard error estimate that is analogous to (3.2) is (Lohr, 1999, p. 68)

$$\hat{s}(\hat{\Delta}_{1w}) = \left[\sum_{k=1}^{5}\left(\frac{\hat{W}_{Fk}}{\hat{W}_{F}}\right)^2\left(s^2_{Mk} + s^2_{Fk}\right)\right]^{1/2},$$

where $\hat{W}_{Fk} = \sum_{i \in S_{Fk}} w_i$ and $\hat{W}_F = \sum_k \hat{W}_{Fk}$,

$$s^2_{Mk} = \frac{n}{n-1}\,\frac{\sum_{i \in S_{Mk}} w_i^2\,(y_i - \bar{y}_{wMk})^2}{\left(\sum_{i \in S_{Mk}} w_i\right)^2},$$

$\bar{y}_{wMk}$ is the survey-weighted average salary for men in propensity score stratum $k$, and $n$ is the total sample size. A similar formula for $s^2_{Fk}$ applies for women. As in the simple random sampling case, this standard error estimate is only approximate because we are not accounting for the sample-dependent aspects of the propensity score subclassification. We are also not accounting for any extra variability due to sample-based nonresponse or poststratification adjustments to the survey weights. Replication methods can be used to account for this extra source of variability (Canty and Davison, 1999; Korn and Graubard, 1999, chapter 2.5; Wolter, 1985, chapter 2); however, this issue is beyond the scope of this paper.
Extensions of these formulas to include regression adjustments within propensity score strata, to adjust for remaining covariate imbalance, are straightforward. In this case, the vector of estimated regression coefficients in a survey-weighted linear regression model fit in propensity stratum $k$, predicting salary (outcome) from the indicator variable for male (treatment indicator) and any covariates that remain out of balance after subclassification on the propensity score, is given by

$$\hat{\beta}_{w_k} = (X_k^T W_k X_k)^{-1} X_k^T W_k y_k,$$

where $X_k$ is the matrix of explanatory variables, $W_k$ is a diagonal matrix of the sample weights, and $y_k$ is the vector of responses in propensity score stratum $k$.
The usual linearization variance estimate of $\hat{\beta}_{w_k}$ is given by (Binder, 1983; Shah, Holt, and Folsom, 1977)

$$\hat{V}(\hat{\beta}_{w_k}) = (X_k^T W_k X_k)^{-1}\,\hat{V}\!\left(\sum_{i \in S_k} w_i q_{ik}\right)(X_k^T W_k X_k)^{-1},$$

where $S_k$ denotes the set of sample units in propensity score stratum $k$, $q_{ik} = x_{ik}\,(y_{ik} - x_{ik}^T \hat{\beta}_{w_k})$, $x_{ik}^T$ is the $i$-th row of $X_k$, and $y_{ik}$ is the $i$-th element of $y_k$. The variance estimate in the middle of this expression depends on the sample design. For example, for unequal probability sampling without replacement, with overall inclusion probabilities $1/w_i$, we can use the following approximation for the $(j, \ell)$-th element of the variance-covariance matrix (Sarndal, Swensson, and Wretman, 1992, p. 99):

$$\hat{V}_{j\ell} = \frac{n}{n-1}\sum_{i=1}^{n}\left(w_i u_{ijk} - \frac{1}{n}\sum_{i'=1}^{n} w_{i'} u_{i'jk}\right)\left(w_i u_{i\ell k} - \frac{1}{n}\sum_{i'=1}^{n} w_{i'} u_{i'\ell k}\right),$$

where $u_{ijk} = q_{ijk}$ if unit $i$ is in propensity score stratum $k$ and zero otherwise, and $q_{ijk}$ is the $j$-th element of $q_{ik}$.
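Putting the pieces together, the weighted coefficients and a linearization (sandwich) variance can be sketched as follows. This is our illustration under the stated assumptions, using a with-replacement-style approximation for the middle term:

```python
import numpy as np

def survey_wls_sandwich(X, y, w):
    """Survey-weighted regression coefficients and a linearization
    (sandwich) variance estimate: A^{-1} V(sum_i w_i q_i) A^{-1},
    with A = X'WX and q_i = x_i (y_i - x_i' beta)."""
    Xw = X * w[:, None]
    A = Xw.T @ X                                # X'WX
    beta = np.linalg.solve(A, Xw.T @ y)
    q = X * (y - X @ beta)[:, None]             # rows are q_i'
    wq = w[:, None] * q
    centered = wq - wq.mean(axis=0)
    n = len(y)
    middle = n / (n - 1) * centered.T @ centered
    A_inv = np.linalg.inv(A)
    return beta, A_inv @ middle @ A_inv
```

In practice this would be applied separately within each propensity score stratum; dedicated survey software handles stratification and clustering more carefully.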
Letting $\hat{\beta}_{w_k,male}$ denote the coefficient of the indicator variable for male in the survey-weighted linear regression model in propensity score stratum $k$, we have the following estimate of the gender salary gap after regression adjustment within propensity score strata:

$$\hat{\Delta}_{2w} = \sum_{k=1}^{5} \frac{\hat{W}_{Fk}}{\hat{W}_{F}}\,\hat{\beta}_{w_k,male}, \qquad (4.5)$$

where $\hat{W}_{Fk} = \sum_{i \in S_{Fk}} w_i$ and $\hat{W}_F = \sum_k \hat{W}_{Fk}$.

Data Analysis
The field of Information Technology (IT) has experienced a dramatic growth in jobs in the United States, but there are concerns about women being underpaid in IT occupations (AAUW, 2000; Council of Economic Advisers, 2000; Gearan, 2000a, 2000b). To address this issue it is necessary to have an accurate estimate of the gender salary gap.

The data
We analyze data from the 1997 U.S. SESTAT database. This database contains information from several national surveys of people with at least a bachelor's degree in science or engineering, or at least a bachelor's degree in a non-science-and-engineering field but working in science and engineering. For a detailed description of the coverage limitations see NSF 99-337. Our analysis focuses on 2035 computer systems analysts (1497 men, 538 women), 1081 computer programmers (817 men, 264 women), 2495 software engineers (2096 men, 399 women), and 839 information systems scientists (609 men, 230 women) who were working full-time in the United States in 1997 and responded to the U.S. National Survey of College Graduates or the U.S. Survey of Doctoral Recipients. A total of 13 workers with professional degrees (e.g., doctor of medicine (M.D.), doctor of dental surgery (D.D.S.), juris doctor (J.D.)) were excluded from the analysis since this was too small a sample to draw conclusions about workers with professional degrees. Also, one extreme outlier was excluded from the sample of information systems scientists.
The sample designs for the component surveys making up the SESTAT database used unequal probability sampling. Although each survey has a different design, generally more of the sample is allocated to women, underrepresented minorities, the disabled, and individuals in the early part of their career, so that these groups of people are overrepresented in the database. Survey weights that adjust for these differential selection probabilities, and also for nonresponse and poststratification adjustments, are present in the database. We use these weights in the survey-weighted linear regression and propensity score analyses in Sections 5.3 and 5.4 to illustrate calculations for an unequal probability sampling design. Refinements to the standard error estimates are possible if additional information about stratification, poststratification, or nonresponse adjustments is available, but that is beyond the scope of this illustration.
A comparison of the weighted and unweighted linear regression and propensity score analyses yielded substantially different results that could not be resolved by modifying the models. Because the survey weights are correlated with salary, it is important to incorporate the survey weights into the analysis to accurately estimate the gender salary gap in these populations. Differences in the weighted and unweighted gender gap estimates seem to be related to the differential underrepresentation of lower paid men and women in these samples. We return to this issue in Section 5.5.
Table 2 presents survey-weighted unadjusted average differences in salaries for men and women in the four occupations. On average, women earn 7% to 12% less than men in the same occupation in this population. Similar results have been reported for IT salaries (AAUW, 2000) and engineering salaries (NSF 99-352). Revised estimates of the gender differences, which control for relevant background characteristics, are presented in Sections 5.4 and 5.5.

Confounding variables
To estimate gender differences in salary, it is necessary to control for educational and job-related characteristics. We control for the confounding variables listed in Table 3. Similar covariates have been used in other studies of gender gaps in careers (e.g., Kirchmeyer, 1998; Marini and Fan, 1997; Marini, 1989; Schneer and Reitman, 1990; Long, Allison, and McGinnis, 1993; Stanley and Jarrell, 1998; Hull and Nelson, 2000).
We comment here on a few of the variables for clarification. The work activities variables indicate whether each activity accounts for at least 10% of the employee's time during a typical workweek (1=yes, 0=no). The supervisory work variable indicates whether the employee's job involves supervising the work of others (1=yes, 0=no). Employer size is measured on a scale of 1-7 (1=under 10 employees, 2=10-24 employees, 3=25-99 employees, 4=100-499 employees, 5=500-999 employees, 6=1000-4999 employees, 7=5000 or more employees). We treat this as a quantitative variable in the regression since larger values are associated with larger employers. Finally, the regression models contain quadratic terms for years since most recent degree and years in current job, since the rate of growth of salaries may slow as employees acquire more experience (Gray, 1993).
To avoid multicollinearity, these variables have been mean-centered before squaring.
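The centering step can be sketched as follows; with a covariate symmetric about its mean the linear and quadratic terms become exactly uncorrelated, and in general the correlation is greatly reduced:

```python
import numpy as np

def centered_quadratic(x):
    """Mean-center a covariate, then square it, so the linear and
    quadratic regression terms are far less collinear."""
    xc = np.asarray(x, dtype=float) - np.mean(x)
    return xc, xc ** 2
```

Both returned columns would then enter the regression in place of the raw variable and its square.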

Regression results
Table 3 presents the survey-weighted regression results for each of the four IT occupations. To arrive at these final models, first a linear regression model predicting salary from all the covariates in Table 3, along with interactions between all of these covariates and the indicator variable for male, was fit. An F-test, using a Wald statistic appropriate for complex survey data (Korn and Graubard, 1990), was used to test whether the coefficients of the interactions with male were all simultaneously zero. Results are presented in Table 4. When this test was statistically significant, as it was for software engineers, a backward selection procedure was used to identify the significant (p < 0.05) interactions. Residual plots and other diagnostics for these models were satisfactory. Values of R-squared are comparable to those in similar studies (Schneer and Reitman, 1990; Stroh et al., 1992; Marini and Fan, 1997). The results in Table 3 show that, after controlling for educational and job-related characteristics, there are significant gender salary gaps in all four occupations. For computer systems analysts, computer programmers, and information systems scientists there is a statistically significant shift in the regression equation for men relative to women ($2,429, $3,577, and $4,555, respectively). For male software engineers, there is a shift in the regression equation in the Mid Atlantic ($7,278) and East South Central ($26,078) regions, combined with statistically significant interactions with years since most recent degree ($643) and the quadratic term for years since most recent degree (−$53), suggesting differential rewards for experience for male and female software engineers. Note, however, that the gender gap for software engineers in the East South Central region should be interpreted with caution since the sample contains only 40 men and only 6 women in this region.

Propensity score results
Data in the region of propensity score overlap were subclassified into five strata based on the quintiles of
the estimated propensity scores for women. As a check of the adequacy of the propensity score model, a series of analyses was conducted to assess the covariate balance in the five groups of matched men and women. For each continuous covariate, we fit a survey-weighted two-way ANOVA where the dependent variable was the covariate and the two factors were gender and propensity score stratum index. For each binary covariate, we fit an analogous survey-weighted logistic regression with the covariate as the dependent variable and gender, propensity score stratum index, and their interaction as predictors. In these analyses, nonsignificant main effects of gender and nonsignificant effects of the interaction between propensity score stratum index and gender indicate that men and women within the five propensity score strata have balanced covariates. A summary of the number of these gender and gender-interaction effects that are statistically significant before and after propensity score subclassification is shown in Tables 6 and 7. Before subclassification, using survey-weighted one-way ANOVAs for continuous covariates and analogous survey-weighted logistic regressions for binary covariates, we found more covariates to be out of balance (as indicated by a statistically significant gender main effect) than we would expect by chance alone. After subclassification, the balance statistics (summarized by the p-values of gender main effects and gender by propensity score stratum interactions) are much closer to what we would expect in a completely randomized experiment. Regression adjustments were used to adjust for remaining imbalances. Specifically, within each propensity score stratum, a survey-weighted linear regression model predicting salary from the indicator for male and any covariates that were out of balance was fit, and equations (4.2), (4.3), (4.4), and (4.5) were used to estimate the gender salary gap and its standard error.
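As a lighter-weight alternative to the survey-weighted ANOVAs described above, balance within strata is often summarized by standardized mean differences. The sketch below (unweighted, for illustration only) computes them for one covariate within each stratum; values near zero indicate balance:

```python
import numpy as np

def std_diff(x, treated):
    """Difference in group means in pooled-standard-deviation units."""
    x1, x0 = x[treated == 1], x[treated == 0]
    pooled_sd = np.sqrt((x1.var(ddof=1) + x0.var(ddof=1)) / 2)
    return (x1.mean() - x0.mean()) / pooled_sd

def balance_by_stratum(x, treated, stratum):
    """Standardized difference for one covariate within each stratum."""
    return {k: float(std_diff(x[stratum == k], treated[stratum == k]))
            for k in np.unique(stratum)}
```

In a survey setting, weighted means and variances would replace the unweighted ones, but the diagnostic logic is the same.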
The survey-weighted regression-adjusted propensity score estimates of the gender gaps are shown in Table 8. After controlling for educational and job-related covariates, the propensity score analyses show significant gender salary gaps for all four occupations. These results are similar to the results from the linear regression analysis. Note that when comparing the propensity score and linear regression analysis results for software engineers, the linear regression model predicts an overall average gap of $3,690 (s.e. = 1,472) when averaging over the women in this population, which is similar to the gap of $4,016 (s.e. = 1,627) estimated from the propensity score analysis.

Comparison of Weighted and Unweighted Analysis
To illustrate the effect of ignoring the complex survey design, we compare the results from survey-weighted and unweighted analyses. Summaries of these analyses are presented in Tables 9 and 10. The weighted and unweighted results differ quite substantially, in terms of the size of the estimated gender salary gaps and in terms of which interactions with male are significant in the linear regression models. The discrepancies between the weighted and unweighted analyses seem to be related to the differential underrepresentation of lower paid men and women. In particular, unweighted estimates of the salary gap are larger than the weighted estimates for computer programmers and software engineers, where lower paid men are more underrepresented than lower paid women (as seen by a larger negative correlation between the survey weights and salary for men in Table 11). In contrast, unweighted estimates of the salary gap are smaller than the weighted estimates for information systems scientists, where lower paid women are more underrepresented than lower paid men.

Comparison of Methodologies
There are several technical advantages of propensity score analysis over multiple linear regression. In particular, when covariate balance is achieved and no further regression adjustment is necessary, propensity score analysis does not rely on the correct specification of the functional form of the relationship (e.g., linearity or log linearity) between the outcome and the covariates. Although such specific assumptions may not be a problem when the groups have similar covariate distributions, when the covariate distributions in the two groups are very different, linear regression models depend on the specific form of the model to extrapolate estimates of gender differences (Dehejia and Wahba, 1999; Drake, 1993; Rubin, 1997). When regression adjustment is used to adjust for remaining covariate imbalances, previous research has found that such adjustments are relatively robust against violations of the linear model in matched samples (Rubin, 1973, 1979; Rubin and Thomas, 2000). Propensity score analysis depends on the specification of the propensity score model, but the diagnostics for propensity score analysis (checking for balance in the covariates) are much more straightforward than those for regression analysis (residual plots, measures of influence, etc.) and, as explained previously, enable the researcher to easily determine the range over which comparisons can be supported. Furthermore, propensity score analysis can be objective in the sense that propensity score modeling and subclassification can be completed without ever looking at the outcome variables. Complete separation of the modeling and outcome analysis can be guaranteed, for example, by withholding the outcome variables until a final subclassification has been decided upon, after which no modifications to the subclassification are permitted. These two aspects of the analysis are inextricably linked in linear regression analysis.
A nontechnical advantage of propensity score analysis is the intuitive appeal of creating groups of similar treated and control units. This idea may be much easier to explain to a nontechnical audience than linear regression. The groups formed by subclassifying or matching on the propensity score are also very similar in concept to the audit pairs commonly used in labor or housing discrimination experiments (Darity and Mason, 1998; National Research Council, 2002). In an audit pair study of gender discrimination in hiring, for example, one female and one male job candidate would be matched based on relevant characteristics (and possibly given the same resumes) and then would apply for the same jobs to determine whether their success rates are similar.
An advantage of multiple linear regression, however, is that a linear regression model may indicate a difference between the salaries of men and women due to an interaction with other covariates, such as industry or region of the country, as was the case for software engineers. A propensity score analysis estimates the gender gap averaged over the population, possibly obscuring important interactions. In addition to estimating any gender effects, the regression model also describes the effects of other covariates. For example, our regression models show that higher salaries are associated with more experience, more education, and more supervisory responsibilities. In contrast, propensity score analyses are designed only to estimate the overall gender effect. Of course, these interpretations of the linear regression coefficients are only reliable after a careful fitting of the regression model with appropriate diagnostic checks, including a check of whether there is sufficient overlap in the two groups to facilitate comparisons without dangerous extrapolations.
Both multiple linear regression and propensity score analyses are subject to problems of omitted variables, "tainted" variables, and mismeasured variables. A tainted variable is a variable like job rank that, for example, may be affected by gender discrimination in the same way that salary is affected (Finkelstein and Levin, 2001; Haignere, 2002). If we control for job rank, in linear regression or propensity score analysis, this may conceal gender differences in salary due to discrimination in promotion. For example, male and female supervisors may be similarly paid, but women may rarely be promoted to supervisory status. Rosenbaum (1984) discusses the possible biasing effect of controlling for a variable that has been affected by the treatment in the propensity score context. Mismeasured variables may also affect the assessment of gender differences. For example, years from an individual's first bachelor's degree or from their most recent degree is often used as a proxy for years of experience (Gray, 1993; NSF 99-352), but this may overstate the experience of anyone who may have temporarily left the workforce since graduating.
Both linear regression and propensity score analysis are also affected by complex survey designs. The survey design must be incorporated into estimates from both of these methods to obtain unbiased estimates of population-level effects.
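To illustrate how survey weights enter the estimates, the sketch below compares a survey-weighted and an unweighted salary gap within a single propensity score stratum (all salaries, weights, and variable names are hypothetical, not values from the SESTAT data):

```python
import numpy as np

def weighted_mean(values, survey_weights):
    """Survey-weighted mean: each respondent counts in proportion to the
    number of population units their survey weight represents."""
    return np.sum(values * survey_weights) / np.sum(survey_weights)

# Hypothetical salaries and survey weights within one propensity score stratum
salary_m = np.array([60000.0, 55000.0, 70000.0])
w_m = np.array([1200.0, 800.0, 500.0])
salary_f = np.array([52000.0, 50000.0, 65000.0])
w_f = np.array([1000.0, 900.0, 300.0])

gap_weighted = weighted_mean(salary_m, w_m) - weighted_mean(salary_f, w_f)
gap_unweighted = salary_m.mean() - salary_f.mean()
print(round(gap_weighted, 2), round(gap_unweighted, 2))  # → 7445.45 6000.0
```

The two gaps differ because the weights change how much each respondent contributes; in the full analysis, such within-stratum gaps would then be combined across propensity score strata, each stratum weighted by its survey-weighted share of the population.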

Discussion
The results from our linear regression and propensity score analyses agree on the size and statistical significance of the gender salary gaps in these four IT occupations after controlling for educational and job-related covariates. Results from our two different analysis methods may agree so closely in this example because there is good overlap in the distribution of covariates for the men and women in each of the four occupations. More specifically, the propensity score overlap regions used in the propensity score analysis do not differ much from the whole samples used by the regression analysis. An example by Hill et al. (2004) suggests that at least some of the benefit of propensity score methods may result from the restriction of the analysis to a reasonable comparison group. Other research has found statistical modeling to be relatively robust in well-matched samples (Rubin, 1973, 1979). These factors may have contributed to the similarity of the results in our analyses. Other studies have found propensity score analysis to estimate known experimental effects more closely than linear regression does (Dehejia and Wahba, 1999; Hill et al., 2004).
Our analysis also shows that it is important to incorporate survey weights from the complex survey design into both methodologies. Ignoring the survey weights affects gender salary gap estimates in both the linear regression and propensity score analyses, probably due to the differential underrepresentation of lower paid men and women in these samples.
Finally, the finding of significant gender salary gaps in all four IT occupations agrees with numerous other studies that have shown that gender salary gaps cannot usually be fully explained by traditional "human capital" variables such as education, years of experience, and job responsibilities (e.g., Bamberger, Admati-Dvir, and Harel, 1995; Jacobs, 1992; Marini, 1989; NSF 99-352; Stanley and Jarrell, 1998). Studies of workers in other fields have estimated similar-sized gaps after controlling for covariates similar to the ones used in our study (NSF 99-352; Stanley and Jarrell, 1998). It is possible that the gaps seen in our analysis could be explained by other covariates not available in the SESTAT data, such as quality or diversity of experience, number of years of relevant experience (as opposed to number of years of total experience), job performance, and willingness to move or change employers.

Table 4: Tests of interactions with male in the linear regression models. Degrees of freedom are: a (28, 2007); b (28, 1053); c (28, 2467); d (28, 811).

Table 6: Balance statistics before propensity score subclassification. a Main effect of gender; b interactions between gender and propensity score stratum index.

Table 8: Survey-weighted propensity score estimates of average gender salary gaps.

Table 9: Comparison of weighted and unweighted propensity score results.

Table 10: Comparison of weighted and unweighted regression results.

Table 11: Correlation between survey weights and salary.