When Surveys Lie and People Tell the Truth: How Surveys Oversample Church Attenders

by Robert D. Woodberry

American Sociological Review



Comment on Hadaway, Marler, and Chaves,
ASR, December 1993

Robert D. Woodberry

University of North Carolina, Chapel Hill

Hadaway, Marler, and Chaves (1993, henceforward HMC) argue that actual church attendance in the United States is only half the level reported by surveys. If this is true, surveys may misreport other behaviors and attitudes by similar margins. Unfortunately, such errors are difficult to detect because we have few reliable behavioral counts to compare with survey estimates (and we have no direct access to people's attitudes). Moreover, if social desirability causes gaps between what happens and what

" Direct all comments to Robert D. Woodberry, Sociology Department, Hamilton Hall, CB#3210, University of North Carolina, Chapel Hill, NC 275 10 (bobwood@email.unc.edu).I presented ex- panded versions of this paper at the Association for the Sociology of Religion in August 1996, the Society for the Scientific Study of Religion in November 1996, and the American Association for Public Opinion Research in May 1997. Spe- cial thanks to Mike Welch, David Leege, Kevin Christiano, Ken Bollen, Bill Kalsbeek, Chris Smith, Dave Sikkink, Dick Udry, Jim Cavendish, Tom Trozzolo, Andrew Smith, my sociology writ- ing class, and Andrew Grenville of Agnus Reid Polls. This research was aided by small grants from Religious Research Association, and the So- ciety for the Scientific Study of Religion.

surveys report, this would presumably bias correlations between socially desirable behaviors, and researchers would have difficulty distinguishing real correlations from spurious ones.

Thus, the church attendance gap observed by HMC has implications far beyond the sociology of religion. Previously I provided a comprehensive examination of this gap, and concluded that little of it appears to be caused by social desirability bias (Woodberry 1997a). In this study, I suggest that about 29 percent of Americans attend church or synagogue on an average week (i.e., adjusted head counts and reduced survey estimates meet at slightly under 29-percent attendance). However, in this comment I restrict my attention to how surveys oversample church attenders.

Most surveys oversample church-goers because they are easier to contact and more cooperative than non-church-goers. Regular attenders are easier to contact because people with nine-to-five jobs, married couples, families with children, and families in which the wife is a homemaker or works part time all tend to be more religiously active (Woodberry 1997b). Referrals probably accentuate this bias because easy-to-contact family members (e.g., homemakers) often tell researchers when difficult-to-contact members (e.g., husbands with busy schedules) are likely to be home. Thus, in the 1988-1992 National Election Study (NES) (Miller and NES 1995), when we regress the probability of attending church on the number of calls needed to contact respondents (variable "V9123"), the coefficient is negative and highly significant (b = -.01, S.E. = .002, p = .000, with a range of 1 to 33).1 For each additional call needed to contact a respondent, respondents are typically one percentage point less likely to have attended

1 The NES asks respondents how often they attend religious services. I recoded their responses into the literal probability of attending church during an average week: "every week" = 1, "almost every week" = 36/52 = .69, "once or twice a month" = 18/52 = .35, "a few times a year" = 4/52 = .08.
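The recode in footnote 1 can be sketched as follows. The category labels and fractions are taken from the footnote; the "never" category and the sample responses are illustrative additions, not NES data.

```python
# Sketch of the recode in footnote 1: each response category becomes
# the literal probability of attending church in an average week
# (weeks attended / 52).
ATTEND_PROB = {
    "every week": 52 / 52,             # 1.00
    "almost every week": 36 / 52,      # ~.69
    "once or twice a month": 18 / 52,  # ~.35
    "a few times a year": 4 / 52,      # ~.08
    "never": 0 / 52,                   # assumed category, not in the footnote
}

def weekly_attendance_rate(responses):
    """Mean probability of attending church in an average week."""
    probs = [ATTEND_PROB[r] for r in responses]
    return sum(probs) / len(probs)

# Illustrative (hypothetical) responses, not survey data:
sample = ["every week", "once or twice a month", "never", "a few times a year"]
print(round(weekly_attendance_rate(sample), 3))  # 0.356
```

Averaging these probabilities over a sample yields an estimate of the share of the population in church on an average week, which is the quantity compared with head counts throughout.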

church. Telephone surveys accentuate this problem because 5 to 8 percent of the U.S. population do not have a telephone in their homes, and on the 1988-1993 General Social Survey (GSS) (Davis and Smith 1996) those without phones are 16.4 percentage points less likely to attend church on an average week (t = 9.70, p = .000).

Highly religious respondents are generally also more cooperative and thus presumably are less likely to refuse an interview. On the GSS and the National Survey of Black Americans, interviewers consistently code highly religious respondents as more friendly and cooperative (Morgan 1983; Ellison 1992). Moreover, on the 1988-1992 NES, respondents who required a persuasion letter to induce participation were 5.1 percentage points less likely to attend church (t = 2.79, p = .005). This suggests that surveys with high refusal rates also oversample church attenders.

Unfortunately, current survey weighting techniques do not adequately adjust for noncontacts and refusals. Most researchers analyze these data problems and create weights with no theoretical rationale (Groves and Lyberg 1988:209).2 Researchers typically analyze and weight data by race, sex, region, age, and education level. These variables are easily measured and are correlated with noncontact or refusal, but they do not directly cause contactability or cooperativeness. For example, surveys generally oversample women, thus researchers often weight surveys to match the census gender ratio. But being female per se does not make women easier to contact; rather, the fact that more women are homemakers, work part time, or are home in the evening and on weekends caring for children makes women easier to contact. If researchers merely weight surveys by the census gender ratio (rather than the ratio of full-time workers to part-time

2 Presumably, this lack of theoretical foundation is why weighted surveys often do not match census data on other variables, and why identically weighted telephone and face-to-face survey samples remain significantly different (Massey and Botman 1988:159-60).


workers and homemakers), they will undersample women who work full time. Such undersampling inflates attendance estimates, because women who work full time attend church less regularly, as do their families (Hertel 1995).
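The weighting logic at issue can be sketched with a minimal post-stratification calculation: each stratum's weight is its population share divided by its sample share. All shares below are hypothetical, chosen only to illustrate the contrast between weighting by gender and weighting by the causally relevant strata.

```python
# Post-stratification: weight = population share / sample share,
# so the weighted sample matches the population distribution.
def poststratification_weights(pop_shares, sample_shares):
    """Return one weight per stratum (hypothetical shares only)."""
    return {s: pop_shares[s] / sample_shares[s] for s in pop_shares}

# Weighting by gender alone (the common practice criticized above):
gender_w = poststratification_weights(
    {"female": 0.52, "male": 0.48},   # assumed census shares
    {"female": 0.58, "male": 0.42},   # assumed sample shares
)

# Weighting by employment status instead, as the text recommends:
employment_w = poststratification_weights(
    {"full-time": 0.45, "part-time": 0.20, "homemaker": 0.35},
    {"full-time": 0.35, "part-time": 0.25, "homemaker": 0.40},
)

print(gender_w["female"])         # < 1: oversampled women weighted down
print(employment_w["full-time"])  # > 1: undersampled full-time workers weighted up
```

Under the gender-only scheme, all women share one weight, so full-time working women remain underrepresented relative to homemakers; the employment-based scheme weights them up directly.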

The sampling problems mentioned above accentuate the church attendance gap in two ways. First, they led HMC to overestimate the Catholic population.3 Thus when they compare head counts to inflated estimates of the Catholic population, the percentage attending church appears artificially low. Second, sampling problems inflate estimates of church attendance.
Overestimating the Catholic Population

HMC use the National Survey of Religious Identification (NSRI) (Kosmin 1991) to calculate the proportion Catholic within each diocese. They then multiply this proportion by the diocese's census population to estimate the number of Catholics in each. However, despite the NSRI's large sample size (N = 113,000), the survey is flawed. Among other problems, the data come from a series of 113 different bi-weekly telephone marketing polls that has a 50-percent cooperation rate, gives no information on contact rate, and attempts a maximum of only four calls per household (Kosmin and Lachman 1993). These procedures bias the NSRI sample toward easy-to-contact and highly cooperative respondents. The higher-quality GSS/NES estimates of the Catholic population are lower: 92.7 percent of the NSRI estimate.4

3 See Woodberry (1997b) for a description of problems with HMC's Protestant data.

4 The 1989-1990 NSRI estimate of the U.S. Catholic population is 26.2 percent (Kosmin and Lachman 1993:299), the 1988-1993 GSS is 24.5 percent, and the 1988-1992 NES is 24.1 percent. No GSS or NES survey has ever estimated the Catholic population as high as 26.2 percent. Moreover, in every year I have analyzed (1974-1992) the published Gallup estimate of the Catholic population is higher than the GSS and NES estimates. This suggests that telephone polls and low-quality face-to-face surveys consistently


Thus, if we multiply the NSRI estimates of the proportion Catholic in each diocese by .927 (see HMC, table 1, p. 745), Catholics attending church increase from 28.0 percent to 30.2 percent of the Catholic population, and 2.2 percentage points are removed from the attendance gap.
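The arithmetic behind this correction is worth making explicit: shrinking the denominator (the Catholic population) by a factor of .927 is equivalent to dividing the attendance rate by .927. The two figures (28.0 percent and .927) come from the text above.

```python
# Deflating the NSRI-based Catholic population raises the implied
# attendance rate among Catholics.
head_count_attendance = 0.280  # head counts / NSRI-based Catholic population
correction = 0.927             # GSS/NES estimate as a share of the NSRI estimate

adjusted_attendance = head_count_attendance / correction
gap_removed = adjusted_attendance - head_count_attendance

print(round(adjusted_attendance * 100, 1))  # 30.2
print(round(gap_removed * 100, 1))          # 2.2
```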
Overestimating Church Attendance

Sampling problems also make survey estimates of church attendance too high. HMC match 1990 Catholic head counts from 18 Catholic dioceses to a 1991 Gallup telephone poll; they match Protestant counts/estimates in Ashtabula County, Ohio, to their own telephone poll. However, telephone polls strongly oversample church attenders when compared with higher-quality face-to-face surveys. To test this I put the GSS attendance question on the October 1996 Southern Focus Poll (SFP), a national telephone poll with a maximum of 4 call attempts.5 The national sample of this poll estimates church attendance to be 14.9 percentage points higher than does the 1996 GSS (a face-to-face survey with over 33 call attempts) using exactly the same question (t = 8.84, p = .000).6

I assume this 14.9 percentage-point difference exaggerates the extent of the problem: The SFP has a less sophisticated weighting scheme than Gallup surveys, and the 1996 GSS attendance estimate is unusually low. However, even if we arbitrarily assume 5 percentage points of this difference are due to other causes, sampling problems still can explain away over half of the church attendance gap observed by HMC.
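A back-of-envelope accounting makes the "over half" claim concrete. The 14.9-point phone/face-to-face difference, the 5 points conceded to other causes, and the 2.2-point Catholic-population correction all come from the text; the roughly 20-point total gap is an assumption used here only for illustration (HMC's gap varies by group).

```python
# Back-of-envelope decomposition of the attendance gap, in
# percentage points. total_gap is an assumed illustrative figure.
phone_inflation = 14.9       # SFP minus 1996 GSS, same question
other_causes = 5.0           # arbitrarily conceded in the text
catholic_correction = 2.2    # from the Catholic-population adjustment
total_gap = 20.0             # assumed size of HMC's gap (illustrative)

sampling_explained = (phone_inflation - other_causes) + catholic_correction
share = sampling_explained / total_gap

print(round(sampling_explained, 1))  # 12.1
print(share > 0.5)                   # True: over half the assumed gap
```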

oversample Catholics. Both the GSS and NES are high-quality face-to-face surveys demanding at least 33 call attempts per respondent and showing completion rates of over 70 percent.

5 The SFP is a random-digit-dial telephone poll with a 55-percent cooperation rate among those contacted (N = 1,222). I weighted these data to compensate for the oversampling of people in the South, so the data reflect the national population.

6 On both surveys I calculated the literal probability of attending church on an average week as follows: "more than once a week" and "every week" as 1.0 (= 100-percent chance); "nearly every week" as 42/52 = .81; "2 to 3 times a month" as 30/52 = .58; "once a month" as 12/52 = .23; "several times a year" as 4/52 = .08; "once a year" as .002; and less than once a year as .001.

This was not obvious earlier because telephone polls and high-quality face-to-face surveys have used such differently worded questions that direct comparisons were impossible. Until recently, questions about church attendance "in the past seven days" were asked only on telephone polls, and detailed questions about average attendance were asked on face-to-face surveys. Literal translations of these latter questions into the probability of attending church during an average week masked the extent of these sampling problems (e.g., Smith 1991; HMC p. 746; see Woodberry 1997b for an empirically based translation).

I suggest that telephone polls seriously oversample church attenders and thus do not accurately reflect the U.S. population. To correct this problem, researchers should use high-quality surveys or more theoretically grounded weighting techniques (see Woodberry 1997b). Researchers also should analyze how sampling problems may distort survey information regarding behaviors other than church attendance (e.g., voting behavior and voluntarism). Many of the gaps scholars attribute to social desirability bias may actually be caused by sampling problems, methodological flaws, and errors in records (Woodberry 1997a, 1997b).

Robert D. Woodberry is a Ph.D. student at the University of North Carolina, Chapel Hill. With Christian Smith he is co-author of a chapter in the 1998 Annual Review of Sociology entitled "Fundamentalism et al.: Conservative Protestants in America"; with Michael Welch and David Leege he has co-authored a forthcoming article in Social Science Quarterly, "Pro-Life Catholics and Support for Political Lobbying by Religious Groups." His current research interests include examining the role of religious groups in the democratization process and assessing the rise of the voluntary sector.

REFERENCES

Davis, James A. and Tom W. Smith. 1996. General Social Surveys, 1972-1996 [MRDF]. Chicago, IL: National Opinion Research Center [producer]. Storrs, CT: Roper Center for Public Opinion Research [distributor].

Ellison, Christopher G. 1992. "Are Religious People Nice People? Evidence from the National Survey of Black Americans." Social Forces 71:411-30.

Groves, Robert M. and Lars E. Lyberg. 1988. "An Overview of Nonresponse Issues in Telephone Surveys." Pp. 191-211 in Telephone Survey Methodology, edited by R. M. Groves, P. P. Biemer, L. E. Lyberg, J. T. Massey, W. L. Nicholls II, and J. Waksberg. New York: John Wiley and Sons.

Hadaway, C. Kirk, Penny Long Marler, and Mark Chaves. 1993. "What the Polls Don't Show: A Closer Look at U.S. Church Attendance." American Sociological Review 58:741-52.

Hertel, Bradley R. 1995. "Work, Family, and Faith." Pp. 81-121 in Work, Family, and Religion in Contemporary Society, edited by N. T. Ammerman and W. C. Roof. New York: Routledge.

Kosmin, Barry A. 1991. The National Survey of Religious Identification: 1989-90 [MRDF]. New York: The Graduate School and University Center of the City University of New York [producer, distributor].

Kosmin, Barry A. and Seymour P. Lachman. 1993. One Nation Under God: Religion in Contemporary American Society. New York: Harmony Books.

Massey, James T. and Steven L. Botman. 1988. "Weighting Adjustments for Random Digit Dialed Surveys." Pp. 143-60 in Telephone Survey Methodology, edited by R. M. Groves, P. P. Biemer, L. E. Lyberg, J. T. Massey, W. L. Nicholls II, and J. Waksberg. New York: John Wiley and Sons.

Miller, Warren E. and the National Election Studies (NES). 1995. American National Election Studies Cumulative Data File, 1952-1994 [MRDF]. 8th ICPSR version. Ann Arbor, MI: University of Michigan, Center for Political Studies [producer]. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor].

Morgan, S. Philip. 1983. "A Research Note on Religion and Morality: Are Religious People Nice People?" Social Forces 61:683-92.

Smith, Tom W. 1991. "Counting Flocks and Lost Sheep: Trends in Religious Preference Since World War II." GSS Social Change Report No. 26, revised January 1991. Chicago, IL: National Opinion Research Center.

University of North Carolina. 1996. Southern Focus Poll [MRDF]. Chapel Hill, NC: University of North Carolina, Institute for Research in Social Science [producer and distributor].

Woodberry, Robert D. 1997a. "How Then Shall We Measure? Adjusting Survey Methodology to Remove the Gap between Head-Counts and Survey Estimates of Church Attendance." Paper presented at the annual meeting of the American Association for Public Opinion Research, May 15-18, Norfolk, VA.

Woodberry, Robert D. 1997b. "The Missing Fifty-Percent: Accounting for the Gap between Survey Estimates and Head-Counts of Church Attendance." Master's thesis, Department of Sociology, University of Notre Dame, Notre Dame, IN.

Reply to Caplow, Hout and Greeley, and Woodberry

C. Kirk Hadaway

United Church Board for Homeland Ministries

Penny Long Marler

Samford University

Mark Chaves

University of Illinois at Chicago

n "What the Polls Don't Show: A Closer Look at U.S. Church Attendance" (Hadaway, Marler, and Chaves 1993), we presented evidence that weekly church atten- dance in the United States is substantially below the 40-percent level reported by most social surveys and public opinion polls. We also concluded that the overreporting of church attendance by survey respondents ex- plains a major portion of the "gap" between attendance counts and poll-based estimates. Our findings are questioned by three critical comments. Woodberry (1998) agrees with us that survey-based attendance rates are in- flated, but claims that response bias accounts for most of the inflation. Caplow (1998) and Hout and Greeley (1998) argue that survey- based rates are not substantially inflated, so there is no inflation to be explained. These

* Direct correspondence to C. Kirk Hadaway, UCBHM, 700 Prospect Ave., Cleveland, OH 44115 (hadawayk@ucc.org), Penny Long Marler (plmarler@samford.edu), or Mark Chaves (chaves@uic.edu). We thank the Lilly Endowment, Inc. for funds to support this continuing research, and Karl Eschbach, Fred Kniss, Dan Olson, and Mark Shibley for comments on an earlier draft.
