
Pew Study of American Jewry: A Few Grains of Salt by Dr. Samuel Klausner

"Grains of Salt" (credit: kevin dooley, license: CC-BY)


Pew Study of American Jewry: A Few Grains of Salt [1]

Samuel Z. Klausner
Professor Emeritus of Sociology
University of Pennsylvania

Why have so many of my sociologist friends and leaders of the American Jewish community accepted the Pew report findings at face value? A Portrait of Jewish Americans has received wide attention. An article appeared in the Forward and Arnold Eisen discussed it in his blog. My listserv from the Association for the Social Scientific Study of Jewry (ASSJ) has had a running discussion of both findings and methods. Recently, I received a Board Briefing from the Memorial Foundation for Jewish Culture which describes the report as “important and impartial.” The subtext of “impartial” may account for some of the uncritical impact of the findings. Pew has published “raw” numbers, unexplained summaries of interview responses. The results evoked skepticism in this reader. An examination of how these results were obtained, a methodological critique, confirmed my skepticism.

The report depicts a community growing at a rate slower than that of the American population, an increase in the intermarriage rate, and a decline in common measures of religious beliefs and practices. These matters had been documented in all three (1970, 1990, 2000) National Jewish Population Studies (NJPS in future references).

The methodological appendix to the study warns against comparing the findings with those of other studies because of differing assumptions, different question wording, and so forth. This does not prevent the Pew team from providing time sequences of certain variables with their start points in the NJPS. The appendix suggests a way of adjusting the NJPS to the Pew categories. Instructions for these adjustments, a code, are available but have not yet been shared widely. Community leaders hope to develop social policies, but the study provides little guidance for relating the published marginals to action for social change. Pew was not expected to do this. The reader of the report is left to recall his or her experience or social knowledge and, not unusually, revives an idea arrived at before the Pew data were available. The Forward has published a small book of essays on the Pew report illustrating such inferences. Every author accepts the findings at face value and then offers ad hoc explanations of them. By sequencing several marginals of the response categories they devise a narrative. This is a customary way for the non-specialist to interpret poll results but not a valid path to propositions.

I will argue that (1) the phenomenally low response rate makes it unlikely that the findings are highly representative of the American Jewish community; (2) choosing an opinion poll rather than a research-oriented format contributes to an attitude that allows a mid-course report to be mistaken for a wrap-up; (3) the typology created to represent distinctly religious or secular Jewish orientations is conceptually ill-fitted to its subject; (4) the measures of support for Israel divert attention from the question of real public interest, change in the level of support for Israel; (5) assessing the validity of findings by applying a margin-of-error measure fails since the sample is not random; and (6) a social policy approach to intermarriage remains inchoate.

The Response Rate

Perhaps the most pernicious source of error is the response rate of 16%. If one could show that the 16% who responded do not differ materially from the 84% who did not, we might rest assured that the sample represents the population. Unfortunately, the researchers had not prepared for this outcome by collecting information from or about the non-respondents. Instead, they compared people who completed the questionnaire with those who did not. No information is published on these comparisons, if they are even relevant. The Pew team sought assurance by comparing their results with findings about Jews in other surveys.

Pew conducted the sampling in the standard manner. They began with a random sample of 1,175,367 landline and cell telephone numbers in the United States. After eliminating business, government, and other non-residential numbers, 400,751 remained. Of these, 71,151 brief screening interviews were completed. The quotient of 71,151 divided by 400,751, 17.8% (unweighted) or 16% (weighted), provided the published response rate. This list was combed for eligible respondents, resulting in 5,491 households with at least one Jew. A Jew was anyone who considered himself or herself Jewish or partially Jewish, was raised Jewish, or had a Jewish parent. Presumably, had all of the 400,751 been reached and responded, the study would have culled over 26,000 Jews. Perhaps this method of sampling needs to be reconsidered.
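
For concreteness, the response-rate arithmetic can be laid out in a few lines of Python. This is only a sketch of the unweighted figures quoted above; the weighted 16% reflects adjustments not reproduced here.

```python
# Unweighted response-rate arithmetic using the figures quoted above.
residential_numbers = 400_751   # numbers left after removing non-residential lines
screener_interviews = 71_151    # brief screening interviews completed

unweighted_rate = screener_interviews / residential_numbers
print(f"Unweighted response rate: {unweighted_rate:.1%}")  # -> 17.8%
```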

The Pew Study Format

Raw data are commonly presented in two ways: as a summation of people with one or another attribute, or as a summation of attributes or variables associated with individuals. The issue between them is not so much right or wrong as a matter of attitude toward the research goal. The principal tables of the Pew report are presented in the first way, as counts of individuals possessing one or another attribute. This form of presentation is traditional in a population poll. The task is considered finished when the poll results are published. A client running an advertising or political campaign would find this useful for selecting target populations. The “unit of analysis,” the object counted here, is the individual person. Age, for instance, is an attribute by which people are grouped. Each group is termed a partition of the population by some age or age range.

Alternatively, the attribute, such as age, may be the “unit of analysis.” Viewing the data in this way is preparation for analysis since research analysis seeks to establish propositions expressing the relations among attributes or variables. Thus “frequency of religious attendance” could be predicted as a function of “density of Jewish population,” “culture” or “general Sabbath observance” and so forth. These hypotheses become propositions when the relationships are confirmed.
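
The contrast between the two presentations can be made concrete with a small sketch. The data and variable names below are hypothetical illustrations, not items from the Pew file.

```python
import pandas as pd

# Hypothetical respondent-level records.
df = pd.DataFrame({
    "attends_often":     [1, 0, 1, 0, 1, 1, 0, 0],
    "dense_jewish_area": [1, 0, 1, 0, 1, 0, 1, 0],
})

# Poll format: a marginal for each item taken by itself.
print(df["attends_often"].value_counts(normalize=True))

# Analytic format: a cross-tabulation relating two variables,
# the first step toward confirming a proposition.
print(pd.crosstab(df["dense_jewish_area"], df["attends_often"], normalize="index"))
```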

The Pew interview schedule has about 140 items with response frequencies in Appendices B and C. These appendices are in a form for social scientists to take a first look at the data to establish hypotheses. When the frequencies of the variables are presented the researcher begins to plan for confirming propositions or arriving at explanations. As expected the discussion around the published findings has, almost exclusively, concerned the raw data published in the poll format.

Enumeration of the Jewish Population

The Pew project’s estimate of the size of the American Jewish population, 6.7 million, is reasonably accurate. “Reasonably,” that is, when compared with the “gold standard” US census. The figure includes children who are being raised as Jews in households with at least one Jewish adult. This estimate is a bit lower than Leonard Saxe’s 6.8 million and somewhat higher than the 5.2 to 5.7 million proposed by Sergio DellaPergola in 2009.

The US census counted Jews but once in recent history, in the interdecennial sample census of 1957. That sample of 35,000 did not suffer the response-rate problem. The government prefers not to enumerate members of religious groups, and Jews are defined as a religious community; enumeration by race is permitted. If I remember correctly, leaders of the Jewish community supported the government’s position out of fear of discrimination, while the scholarly community would have preferred to have the data for study. In this circumstance, each American religious community evolved its own approach to estimating the number of its adherents. Catholics count individuals living within the geographic bounds of a parish. Protestant denominations have depended on ministers’ estimates of church membership; since the definition of a member varies, comparability relies on arithmetic juggling. The Jewish community has relied on national and community surveys funded by community agencies and carried out by social survey scholars. Jews count families in calculating synagogue membership while their surveys count individuals.

Pew’s Methodological Appendix shows the division of the sample by several population characteristics. The subsample of Orthodox respondents is unusual. Of the 3,475 interviewees, 517 (14.9%) were Orthodox. These are unweighted data. Pew’s denominational tables report the Orthodox as 10% of the population; the 2000 NJPS had reported the same proportion. The Pew report gives a fertility rate of around 4 children for the Orthodox as compared with under 2 children per family in Conservative and Reform households. We are not told whether this is completed family size or an average at a point in time. Since the relative proportion of Orthodox among the denominations did not increase, we may suspect some defections from Orthodoxy or accretions to the more liberal denominations.

The denominational question was phrased as “Do you consider yourself Modern Orthodox, Hasidic, Yeshivish or some other type of orthodoxy.” The Hasidic and Yeshivish were classified as Ultraorthodox. The appendix table reports 326 as Ultraorthodox and 154 as Modern Orthodox. These two figures add to 480, which leaves 37 of the 517 total; these may belong to a variety of smaller groups. The Ultraorthodox constitute 73.8% of all Orthodox. Previous studies have found the Ultraorthodox to be a minority among the Orthodox. Did this occur because some Modern Orthodox synagogue members, wishing to appear zealous, describe themselves as Ultraorthodox? Is it a distortion caused by the low response rate?

The Typology

The principal dependent variables (judged by the direction of percentaging) are those classifying the Jewish population into types according to whether the subjects think of themselves as Jews on the basis of religion, on the basis of ancestry, or by culture. In practice the typologies are also treated as independent, causal, variables. Since assignment to a type is subjective, the boundaries between religion and atheism do not necessarily follow the criteria of theologians or social scientists. Nearly every researcher of the Jewish community is aware that being Jewish is a composite of religion, cultural tradition, ancestry, and social identification. Various weights are given to each of these and other components. Pew constructed the types a priori. Respondents were forced to select the category that most nearly fit their self-concept rather than their objective social status.

How did the researchers come to accept the notion that Jews could be clustered or differentiated in this way? Historically, these images are rooted in discussions by public intellectuals who were trying to rationalize the ideologies of certain social movements. The Hamburg Temple’s leadership in the mid-nineteenth century held that Jews are a religious community while they are nationally German. The proto-Zionist authors writing in Hebrew in the late nineteenth century saw Jews as a nation whether or not they were religious or possessed a territory. The question was reexamined through the 1940s and 1950s in Zionist circles. Mordechai Kaplan saw Jewry as a civilization which blended these elements.

Further, in defining religion respondents do not have one unique phenomenon in mind. One respondent may think of halakha, law, and mitzvoth, commanding behaviors. For another it has to do with spiritual or emotional experiences, and for yet another being a Jew allows one to be an atheist or, perhaps, to commune with American civil religion. The public discussion applied these labels to Jews as a community, whereas in this study the types classify individuals.

The report cites several comparisons between those who say they are Jewish by religion and those not oriented to religion. These are based, in part, on the typology. A question asks about the importance of religion in one’s life as assessed by the respondent. Of Jews by religion, 31% consider religion as playing a very important role in their lives, while only 8% of Jews with no religion assert that religion is very important to them. There must be an interpretation problem here. Perhaps “importance” needs to be unpacked and scaled. This may be accomplished by “pairwise comparison” of religion with other valued experiences, as sketched below. One might expect that a larger proportion of persons who are Jewish by religion would say it is important to them and so validate the typology. On reflection, however, we realize that the belief that one is a Jew by religion and the belief that religion is important are two ways of saying the same thing. The result is redundant, not a correlation of independent measures.
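
One way to carry out such a pairwise comparison is Thurstone-style scaling, sketched below. The compared life domains and the preference proportions are hypothetical illustrations, not Pew data.

```python
from statistics import NormalDist

domains = ["religion", "family", "career", "community"]

# Hypothetical proportion of respondents choosing the first domain over
# the second in a forced pairwise choice.
p = {
    ("religion", "family"): 0.30, ("religion", "career"): 0.55,
    ("religion", "community"): 0.60, ("family", "career"): 0.80,
    ("family", "community"): 0.85, ("career", "community"): 0.55,
}
for (a, b), v in list(p.items()):   # fill in the complementary proportions
    p[(b, a)] = 1 - v

# Thurstone Case V: a domain's scale value is the mean z-score of the
# proportions preferring it over each of the other domains.
z = NormalDist().inv_cdf
scale = {d: sum(z(p[(d, o)]) for o in domains if o != d) / (len(domains) - 1)
         for d in domains}
for d, s in sorted(scale.items(), key=lambda kv: -kv[1]):
    print(f"{d:>10}: {s:+.2f}")
```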

Asked about belief in God, 39% of Jews by religion believe in God while 18% of Jews not oriented to religion believe in God. The responses were formulated in terms of “certainty,” from “absolutely certain” to “not at all certain.” The last might accommodate an agnostic but not an atheist. Perhaps these respondents are telling us that they do not belong to religious organizations but believe in an abstract or universal deity such as Aristotle’s prime mover. What about institutional participation? We find that 58% of Jews by religion attend synagogue a few times a year, not more often, while 44% of those with no religion attend that often. Clearly, the typology is not assembling the respondents it intended under each type.

In reality, members of the sample should fall into several composite categories such as both by religion and by ancestry. Putting the word “mainly” into the question forces a single choice and obscures this reality. The interviewer is instructed not to ask about multiple bases of identity. However, should the respondent, unprompted, mention more than one category a record is kept. As far as I can see, these multiple category responses were placed “out of range,” that is, not counted.

A single-dimensional classification would have been available to the researchers had they taken an “abstract analytic variable” approach to creating a typology rather than one taking the concrete person as the unit of analysis. A single analytic dimension might be derived empirically from behavioral data in place of subjective self-assessment. The researcher, having immersed himself or herself in the literature on Jewish identity, might list a variety of attitudes and behaviors in which Jews, as Jews, might engage. The selections could be entered into a factor-analytic matrix in which they are grouped into patterned components. The emergent factors would represent Jewish types as they are actually enacted among Jews at this time in history.
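
A minimal sketch of this factor-analytic approach follows. The behavioral indicators, and the random data standing in for survey responses, are hypothetical; in practice the matrix would be built from the actual interview items.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Hypothetical respondent-by-behavior matrix (1 = reports the behavior).
behaviors = ["attends_synagogue", "lights_shabbat_candles", "keeps_kosher",
             "attends_seder", "reads_jewish_press", "belongs_to_jcc"]
X = rng.integers(0, 2, size=(500, len(behaviors))).astype(float)

# Extract two latent dimensions; the loadings show how observed behaviors
# cluster into patterned components, "types" as actually enacted.
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(X)

for name, loadings in zip(behaviors, fa.components_.T):
    print(f"{name:>24}: {loadings.round(2)}")
```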

The interview contains several behavioral measures of participation in religious life, which might have served better to define types. They are geared to measure a low level of religious involvement. One can qualify as quite observant by attending a Seder, lighting Ḥanukkah candles, and attending synagogue more than a few times a year. If I remember correctly, the 1970-1971 interview schedule allowed the respondent to report daily attendance and fasting on rabbinic fasts. Apparently, as religious observance in the community declines, research lowers its measurement standards to retain a meaningful level of observance. Notably, the interview schedule makes no mention of the pilgrimage festivals beyond a Passover Seder, and no item mentions observance of the dietary laws. Of course, such items would have to be treated differently for the Reform community. The problem here is a conflict between survey procedures and substantive interest. The proper survey way to set up the values of a variable is to assure a good distribution; something like a normal curve of responses should be sought, and here the responses occur in the very low range. The substantive concern, by contrast, is with the extent of decline from a traditional norm: a deviance measure. We posit a traditional norm of kashrut which includes rules of slaughter, two sets of dishes, and so on. So, we want a scale measuring the difference between the traditional norm and current behavior.
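
Such a deviance scale might look like the following sketch, which scores each respondent by distance from a posited traditional norm. The checklist items are hypothetical illustrations, not Pew questionnaire items.

```python
# A posited traditional-norm checklist: 1 means the norm expects the practice.
traditional_norm = {
    "fasts_on_yom_kippur": 1,
    "keeps_separate_meat_and_dairy_dishes": 1,
    "attends_synagogue_weekly": 1,
    "refrains_from_work_on_sabbath": 1,
    "attends_passover_seder": 1,
}

def deviance_score(respondent: dict) -> float:
    """Fraction of traditional-norm practices the respondent does not report."""
    expected = sum(traditional_norm.values())
    observed = sum(respondent.get(k, 0) for k in traditional_norm)
    return (expected - observed) / expected

# Example: a respondent reporting only a seder and the Yom Kippur fast.
print(deviance_score({"attends_passover_seder": 1, "fasts_on_yom_kippur": 1}))  # 0.6
```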

Support for Israel

Several studies over the past years have shown declining support for Israel, especially among American youth. The Pew study reports no significant change. We have here another example of how the choice of a measure can shape results to exclude the high end of a distribution. One could argue that the measure used by Pew creates an “edge effect,” with higher scores accumulating at an upper bound that is set quite low. Again, the measure of interest should be decline from an earlier norm. Pew uses a subjective indicator which sets so low a threshold for defining support that it is bound to hold steady. The item asks whether the respondent “feels attached” (very, somewhat…) to Israel. No “anchor,” as a tests-and-measurements specialist would say, grounds a scale of “feeling.” A respondent may say “very attached” as he or she is preparing for aliya, while another respondent selects that choice when disturbed on reading about an Israel boycott. A higher threshold would be invoked were the question about the respondent’s child making aliya, or whether the respondent would welcome an opportunity of employment in Israel for a year.

Fortunately, the study asks about the number of trips the respondent has taken to Israel. This is an objective measure of involvement with Israel. Fifty-seven percent of the respondents have never made a trip to Israel, 20% visited but once, while 23% made two or more trips. The idea of attachment could be associated with either support for or opposition to policy. People make trips despite unease about Israeli government policies. Those who go while angered by government policies still participate in Hashomer Hatsair’s year-long hakhshara program. They are attached and angry. This is not counter-intuitive to one sensitive to the basis of “attachment.” Perhaps “attachment” is ambiguous; its meaning for the respondent may be ascertained by analyzing its correlates.

The Role of Margins of Error

It is traditional in the world of social research to accompany each table with the number of cases on which the finding depends and with some margin of error or test of significance to ascertain whether a difference in response distribution between two or more items is statistically significant. Providing these figures is not customary in the world of polling. Nevertheless, a table of margins of error is provided in the Methodological Appendix. The margin of error for the respondent population of 3,475 is given as .03. This means that any percentage based on this sample has a 95% probability of lying within +/-3 percentage points of its value in the population as a whole. This calculation, though, depends on the sample being a random one. The true random sample selected by Pew, or Abt Associates, consists of the several hundred thousand residential telephone numbers, used as a filter for Jews, in the entire United States. Actual contact was made for a filtering interview with about 70,000 households. About 7,000 of these had at least one Jewish resident. Upon deeper examination of their Jewish bona fides, 2,000 were deemed ineligible for the study. The calculation of the margin of error is based on the sample and not upon the whole population. Obviously some randomness was lost along the way to the 3,475 who agreed to be interviewed, so the margin of error of .03 must be taken as an estimate. The Pew team added to their level of comfort with this number by comparing the characteristics of this group with those of Jews in other studies conducted at about the same time. The various partitions, such as by religion, denomination, and so forth, have fewer than 3,475 cases and so have larger margins of error; indeed, as the Pew report shows, they vary from .03 to .13. A partition at the latter figure requires a very large difference between two percentages for a finding to be statistically significant. The largest margins of error, .129 and .124, are associated with the categories of Ultraorthodox and Modern Orthodox Jews respectively. The sample contained 659 Conservative Jews, or 18%. The 2000 NJPS reported 26%. Were such a decline accurately described, a number of Conservative synagogues would be closing or merging. Were the base restricted to those who identify with one or another denomination, the ratio would be 26%, the same proportion reported by NJPS 2000, where the calculation was conducted in this way.
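
For reference, the textbook margin-of-error formula for a proportion under simple random sampling is sketched below. Pew’s published figure of .03 for the full sample is larger than this simple calculation yields, presumably because it also incorporates a design effect from weighting; the sketch does not reproduce that adjustment.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion p estimated from n cases (SRS assumption)."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"Full sample (n=3,475):        {margin_of_error(3475):.3f}")  # ~0.017
print(f"Conservative subsample (659): {margin_of_error(659):.3f}")   # ~0.038
```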

Intermarriage and Social Policy

In 58% of the marriages involving a Jew between 2000 and 2013, the spouse was a non-Jew. In 1990-1994 this rate was 46%. The report provides a table of intermarriages by denomination, but these respondents were not necessarily married in the previous decade. In this assessment, 2% of the Orthodox have non-Jewish spouses, as do 27% of the Conservatives and 50% of Reform Jews. These denominational affiliations are not necessarily those they identified with at the time of marriage. The Orthodox figure underestimates the number who may have been Orthodox just prior to the marriage, since an intermarried couple may not feel welcome and may defect from that denomination. They may move to Conservative or Reform congregations or drop out of religious practice. Orthodox Jews may intermarry with the assistance of a Reform rabbi, or civil authorities, or the non-Jewish party’s pastor. Unfortunately, Pew did not present the denominational identification of those who intermarried just prior to the study. Given the margins of error among the intermarried (6.3 points) and, say, among the Conservatives as a whole (6.5 points), the true figures may not be well estimated.

Reform Judaism has sought to attract the non-Jewish partner to participate in synagogue life with openness to conversion. Their experience might be instructive. Jews use the term “outreach” while Christians take pride in “mission,” the traditional term. How might such a missionary endeavor contribute to conversion to Judaism?

I taught at Union Theological Seminary in the early sixties. Up the stairs and to the right of the Broadway entrance is their mission library, a trove of mission experience. In Christianity, mission is the Great Commission, spreading the Gospel. This pursuit of the “other” has been credited with Christianity’s enjoying the largest number of adherents of any religion in the world. I tend to read the history of Christian, Islamic, and Buddhist missions as the history of the sword followed by the Book. This is illustrated by the third-century BCE conquests of Ashoka in India. The mass conversion of Persians to Islam is a well-documented example. Cultural and social power has not been on the side of the Jews. A mission program contributes to strengthening commitment among the missionaries, as the Mormon program illustrates. The proliferation of formal and informal organizations of the intermarried is worth study.

The Conservative movement has a dilemma in that the Rabbinical Assembly prohibits its rabbis from conducting the ceremony. How, then, is the non-Jewish partner to feel welcome thereafter? The pull into the group is balanced with a push away from some other religious society. We cannot learn how to construct a social policy respecting intermarriage from the Pew study. It is attractive to the policy person to focus on intermarriage as a point of leverage. It is also a late point in the process of institutional incorporation. The NJPS had a question about how one might react as a parent to the knowledge that one’s child intended to intermarry. Response choices varied from “prohibit” to “encourage” it. Over the several NJPS studies, the average response has tended toward the permissive end. Intervention might be the most effective at an earlier stage of the family life-cycle.

A social policy begins with “value neutral” social research, but since it is practical and goal-oriented it requires a value decision. A goal is value-based. This is how I might think academically about collective and individual assimilation, of which intermarriage is a stage. The collective context is reflected in parental attitudes. American society struggles to free mate selection from particularistic rules. To this end, formal legal norms prohibit invoking anti-miscegenation rules to prevent Caucasians and African Americans from marrying. On the positive side, this reflects the underlying value of social integration of groups in the workplace, in church, and, ultimately, in the family. On the individual level, social permission is a context that allows various individual motives for intermarrying to come into play. Indeed, conformity with the collective values may itself become a motive for intermarrying. What other goals might the individuals in an intermarrying couple have in mind? Is it the traditional exchange of economic status for social status? Is it a desire to escape anti-Semitism by joining the protected majority? Is it a matter of pheromones? The list goes on. Unfortunately, it is not useful to ask individuals for their motives, since the response will be formed in a way acceptable to the audience; any trained social researcher knows how to deal with this problem. This is but one type of analysis that might connect with a goal and begin shaping social policy.

A Closing Thought

I hasten to say that I do not believe that Pew intended to deceive its client nor can they be accused of shoddy work. Pew did what Pew always does, enumerate populations and summarize the population’s opinions on socially important questions. I suspect that they were contracted for a project that would, as I view it, stop in mid-course. They must have expected, though I am not certain, that social scientists would proceed with the analysis. A body of researchers is eager to do so, though not on Pew’s budget. A contract could have asked Pew, or another agency, to arrange funding for analysts and coordinate them as they prepared research proposals. As it is, the analysis will be haphazard as each researcher pursues a personal interest and seeks funding individually.

Pew subcontracted a portion of the work to Abt Associates. If their division of labor was similar to that of a sub-contract when I presided over a social research institute, Pew would have designed the study and created and pre-tested the instrument. The sub-contractor would have conducted the telephone interviews and coded the responses and Pew would have run the marginals and written the report. A serious question remains. Why did Pew not stop the process when they became aware that the sub-contractor was achieving a 16% response rate? The simple answer is that they have made peace with this problem. On October 22 Greg Smith, a co-author of the Pew report, wrote to Steve Cohen, then President of ASSJ, as follows.

The decline in response rates is a long-term issue that has affected the entire survey research world and the Pew Research Center has been very open and transparent about it. Over the past fifteen years or so, we have seen response rates in our national surveys fall from about 36% on average to less than 10%. The response rate for our recent survey of Jews (16%) was somewhat higher than we now typically obtain in national polls, and was similar to what we’ve obtained in other surveys of rare populations (including Asians).

Smith claims transparency when negotiating the contract. Why did this not give the client pause? The full data, the marginals in Appendices B and C, when ready to be downloaded will provide researchers with a rich source of variables ready for recoding and correlating. The low response rate will impose tentativeness on their findings. On the positive side, let me say that whoever formatted the interview schedule with its instructions to interviewers did splendid work despite the missing question on kashrut.

Repair?

Can anything be done to rescue aspects of the data? Yes! National samples have now been sought five times. It is time to consolidate around one sample. For the next study, instead of drawing another sample, the Pew group might be turned into a panel. Pew has 400,751 “good” numbers. From these they obtained screening interviews with 71,151 households, leaving 329,600 not interviewed. After a couple of personal interviews, the researchers will find that they can return to the telephone.

The Jews in the sample who declined to be interviewed, or were not available, or whose telephone numbers were incorrect might be located, interviewed briefly, and added to the 3,475 household interviews. To increase the response rate in the future, while remaining with the same telephone interview system, the effort might be housed in a prestigious university. I would begin with a shorter instrument; not all information needs to be gathered at one time. Engage interviewers with sophisticated voices and an engaging manner, such as those of the company retained by the Philadelphia Orchestra for its fund-raising. This might increase completed interviews by a few percentage points. The new interviews might be compared with the already-interviewed group simply to see whether any changes are introduced. This review could be conducted with a 1% sample as a trial run. I suspect that we are approaching a time when telephone interviews will give way to face-to-face interviews; at the outset this would be costly.

In addition, regarding measures, we might dispense with the typology and replace it with behaviorally based measures from the questionnaire, as suggested above. Replacement of the subjective measures by empirical ones, as in the cases of religious practice and relation to Israel, would increase their validity.

Notes

1. I am grateful for comments on the paper made by Charles Kadushin and Ira Sheskin.

 

 
