Seeking Good Debate: Religion, Science, and Conflict in American Public Life

Michael S. Evans

Print publication date: 2016

Print ISBN-13: 9780520285071

Published to California Scholarship Online: September 2016

DOI: 10.1525/california/9780520285071.001.0001


Methodological Appendix

Issue Selection

To find issues likely to provoke the public engagement between representatives of religion and science that most scholars of religion and science talk about, I worked from the premise that public conversations between representatives of religion and science were unlikely to occur if there were no claims at stake. So I started by looking for issues in which some participants make claims based on religious authority and some participants make claims based on scientific authority. Even this minimal approach excludes a variety of possible issues that involve something about religion or science, but are unlikely to produce religion-and-science debates because religion or science makes no public claim. For example, there are no significant religious claims about scientific or technical issues such as aerodynamics, and there are no significant scientific claims about religious issues such as veneration of saints.1

To maximize the chance of finding sustained public debates, I eliminated some issues that met the minimum criteria but that did not have significant public policy implications. For example, the bodily resurrection of Jesus is an issue concerning which some participants make claims based on religious authority (e.g., the Bible says that Jesus reanimated after three days in the tomb) and other participants make claims based on scientific authority (e.g., it is medically impossible to reanimate a body after three days). But there are essentially no public policy implications stemming from the bodily resurrection of Jesus, so I did not expect that issue to generate sustained public debate.

I also eliminated issues confined to a very small group of persons. For example, some members of the Church of Jesus Christ of Latter-day Saints claim that Native Americans are a “remnant of the House of Israel” descended from the Tribe of Manasseh through the Mormon prophet Lehi, whereas geneticists claim that there is no scientific evidence that the populations are linked.2 Since this is an issue confined to one part of a religious denomination that, as a whole, constitutes less than 2 percent of the U.S. population, I did not expect this issue to generate sustained public debate.

After eliminating many issues in which religion-and-science debate was unlikely to exist, I selected four issues for which I would expect to see sustained public conversation between representatives of religion and science in mass media. These four issues were human origins, stem cell research, environmental policy, and the origins of homosexuality. Each debate involves participants making claims based on religious authority and participants making claims based on scientific authority. Each debate also involves stakes not just for one particular group, but for a variety of different groups for different reasons. And each debate has broad policy implications at local and national levels.

Text Analysis

The process of retrieving and analyzing textual data for this project depended on a combination of manual and automatic processes. For technical purposes, I defined “debate” as the collection of articles on a given religion-and-science issue contained in a sample from mass media. For example, “stem cell research debate” refers to all articles retrieved based on keywords associated with stem cell research. An “article” is the complete body text of a single newspaper item. News articles, news commentary, and opinion pieces counted as articles, while book reviews and letters to the editor did not. I did not limit articles based on word count.
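
These working definitions translate naturally into a simple record structure. The following sketch is illustrative only; the field names are hypothetical rather than the project’s actual database layout:

```python
from dataclasses import dataclass

@dataclass
class Article:
    """One newspaper article, the basic unit of analysis."""
    article_id: str  # unique identifier assigned at retrieval (hypothetical)
    paper: str       # newspaper in which the article physically appeared
    date: str        # publication date, e.g., "2005-08-02"
    kind: str        # "news", "commentary", or "opinion"; reviews and letters excluded
    text: str        # full body text, with no word-count limit
```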

To create the data set for this study, I used a combination of keyword searches and (for debates that resist keyword search) categorical searches to retrieve a set of articles about each issue from the Lexis-Nexis Major US Newspapers database, which contains the full text of articles from approximately thirty of the highest-circulation papers in the United States. Obviously there are many other possibilities for source data, such as local town newspapers, alternative press papers and magazines, and new media sources. But even with these alternatives available in the public sphere, newspapers still provide the “master forum” of American mass media.3

To ensure that the data set included the parts of debate in which someone makes claims based on religious authority and someone makes claims based on scientific authority, each issue required slightly different search and retrieval criteria. For example, environmental policy search results had to be classified as “about” either religion or science, in addition to having the keywords “climate change” or “global warming.” This combination requirement also helped exclude more general articles about, say, the “environment” in which teachers operate. Similarly, for debate over the origins of homosexuality, a more general search for “gay” or “sexuality” would bring in many articles that did not involve either religion or science, so I chose keywords that reflected arguments about the origins of homosexuality from scientists and from religious groups in order to retrieve a more focused set of results.

For each issue, I initially sought to extract all relevant newspaper articles that had been published within the previous ten years. But because of limited computational resources, the system would crash if the sample was too large. In the case of environmental policy and stem cell research, the ten-year window yielded samples that were too large to process successfully. Rather than try to pick and choose which aspects or subsamples of debate should be included, for those debates I simply constrained the size of the data sample by going back five years rather than ten. This still provided enormous amounts of data for those debates.

I used the following keyword/category combinations and date ranges to retrieve the articles for this study from the Lexis-Nexis Major US Newspapers database (a code sketch of these retrieval filters follows the list):

  • Environmental Policy: “global warming” or “climate change,” with major category “religion” or “ecology/environmental science,” 2002–2007

  • Origins of Homosexuality: “gay gene” or “ex-gay” or “reparative therapy,” 1997–2007

  • Stem Cell Research: “stem cell,” 2002–2007

  • Human Origins: “creation*” or “intelligent design,” 1997–2007
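
The retrieval itself ran through the Lexis-Nexis interface, but the logic of the four searches can be restated as filter predicates over retrieved records. A minimal sketch, assuming each Article record (as above) also carries a hypothetical categories field holding Lexis-Nexis major-category labels:

```python
def keep_environment(a) -> bool:
    """'global warming' or 'climate change,' plus a religion or science category, 2002-2007."""
    text = a.text.lower()
    has_keyword = "global warming" in text or "climate change" in text
    has_category = bool({"religion", "ecology/environmental science"} & set(a.categories))
    return has_keyword and has_category and "2002" <= a.date[:4] <= "2007"

def keep_homosexuality(a) -> bool:
    text = a.text.lower()
    return any(k in text for k in ("gay gene", "ex-gay", "reparative therapy")) \
        and "1997" <= a.date[:4] <= "2007"

def keep_stem_cell(a) -> bool:
    return "stem cell" in a.text.lower() and "2002" <= a.date[:4] <= "2007"

def keep_human_origins(a) -> bool:
    # "creation*" is a truncation wildcard; a simple substring match approximates it here
    text = a.text.lower()
    return ("creation" in text or "intelligent design" in text) \
        and "1997" <= a.date[:4] <= "2007"
```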

I manually reviewed and corrected the search results. I deleted articles that were artifacts of the search engine (e.g., matching “openly gay Gene Robinson” to “gay gene”), contained only a reference to some other article (e.g., “her last article was about the gay gene”), or were otherwise technically correct but irrelevant to the substantive debate (e.g., a photo essay with matching caption text but no article text). I also deleted duplicate articles that appeared in the same paper (e.g., morning and afternoon editions, or national and local editions in the same city), but did not eliminate multiple instances of wire service articles as long as those articles had physically appeared in different papers.
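
The deduplication rule amounts to keying duplicates on the pair (paper, text) rather than on text alone, so that wire-service copies appearing in different papers survive. A minimal sketch, reusing the hypothetical Article record from above:

```python
def deduplicate(articles):
    """Drop same-paper duplicates (e.g., morning and afternoon editions) while
    keeping wire-service articles that physically appeared in different papers."""
    seen = set()
    kept = []
    for a in articles:
        key = (a.paper, a.text)
        if key not in seen:
            seen.add(key)
            kept.append(a)
    return kept
```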

Identifying and Ranking Representatives

I establish at the beginning of this book that everyone who appears in public debate in mass media is a “representative.” To identify and rank representatives across all of the debates in this study, I used a computational linguistics technique called named entity recognition (NER). In NER, entities such as places, persons, dates, organizations, and so forth are not predefined in lists as search terms, but are identified by semantic and/or grammatical rules as they appear in unstructured data. Unlike a traditional information retrieval method, in which one might search a document for “Focus on the Family,” NER allows one to search a document for all named entities without knowing the names or titles of such entities before the search begins.
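
My analysis used GATE and ANNIE, described below. Purely as an illustration of the general technique, here is the same idea using spaCy, a different open-source NER library, where entities are identified by a trained model rather than by a predefined search list:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline with an NER component
doc = nlp("Jerry Falwell and the ACLU clashed again in Washington on Tuesday.")

for ent in doc.ents:
    print(ent.text, ent.label_)
# Typical output includes labels such as PERSON, ORG, GPE, and DATE,
# none of which were supplied as search terms in advance.
```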

I analyzed the data set of newspaper articles for each debate using GATE, which is a free and open-source computational linguistics platform, and the ANNIE (A Nearly New Information Extraction) plugin for named entity recognition and extraction.4 ANNIE applies NER and adds contextual annotations to the original data set files. These annotations are additional tags embedded around named entities. For example, the identified personal name “Jerry Falwell” would be saved as “<PERSON>Jerry Falwell</PERSON>.” A custom Perl script processed the annotated file, extracted the tagged entities, and wrote a formatted text file for import into a PostgreSQL database. From there I used structured query language (SQL) to construct views and queries for analyzing the article and entity information.
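
I do not reproduce the original Perl script here, but its job, pulling the tagged entities out of the annotated text and loading them into a database table, can be sketched in a few lines of Python. The table schema, and any tag name beyond PERSON, are my assumptions:

```python
import re
import sqlite3

con = sqlite3.connect("debates.db")
con.execute("""CREATE TABLE IF NOT EXISTS entity_mentions
               (article_id TEXT, entity TEXT, kind TEXT)""")

def load_annotated(article_id: str, annotated_text: str) -> None:
    """Extract inline annotations like <PERSON>...</PERSON> and insert one row per mention."""
    for kind in ("PERSON", "ORGANIZATION"):  # ORGANIZATION is an assumed tag name
        for name in re.findall(rf"<{kind}>(.*?)</{kind}>", annotated_text):
            con.execute("INSERT INTO entity_mentions VALUES (?, ?, ?)",
                        (article_id, name, kind))
    con.commit()

load_annotated("example-01", "<PERSON>Jerry Falwell</PERSON> spoke to reporters.")
```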

I created views and queries to rank representatives by visibility within each debate as well as across all debates. After testing different ways of ranking representatives, I found that the measure most consistent with my research design is the number of articles in which a person is mentioned. The best alternative is total mentions, but it is possible that one feature article may contain dozens of references to a person, while being the only article ever written about that person. I think of articles as opportunities for readers to become aware of a representative. Many mentions in one article do not provide any visibility if a reader never sees the article, and are therefore misleading as a measure of visibility for representatives across a debate. Across a debate as a whole, the most important component of visibility is the number of opportunities a reader has to become aware of a representative.
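
In database terms, this measure is a count of distinct articles rather than of raw mentions. Against the hypothetical entity_mentions table sketched above, the ranking query would look something like:

```python
import sqlite3

con = sqlite3.connect("debates.db")
rows = con.execute("""
    SELECT entity, COUNT(DISTINCT article_id) AS articles_mentioning
    FROM entity_mentions
    WHERE kind = 'PERSON'
    GROUP BY entity
    ORDER BY articles_mentioning DESC
""").fetchall()
# A person quoted dozens of times in a single feature article counts once;
# a person mentioned once each in dozens of articles ranks far higher.
```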

I also created views and queries to look at the co-occurrence of representatives and organizations within the same article, to help contextualize how and why representatives showed up in these debates. In addition to providing information about who tends to show up in the same articles together, this query also made it possible to correct for co-mention of synonymous but differently named entities. For example, if an article uses the term “American Civil Liberties Union” once and “ACLU” thereafter, these are treated as separate entities with one article mention each. But since they co-occur in articles, and this is traceable through the query, I could disentangle the entities and correct any ranking counts accordingly.
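
Co-occurrence reduces to a self-join on the article identifier. A sketch against the same hypothetical table:

```python
import sqlite3

con = sqlite3.connect("debates.db")
pairs = con.execute("""
    SELECT a.entity AS e1, b.entity AS e2,
           COUNT(DISTINCT a.article_id) AS shared_articles
    FROM entity_mentions a
    JOIN entity_mentions b
      ON a.article_id = b.article_id
     AND a.entity < b.entity       -- count each unordered pair once
    GROUP BY e1, e2
    ORDER BY shared_articles DESC
""").fetchall()
# High-co-occurrence pairs such as ("ACLU", "American Civil Liberties Union")
# flag synonymous entities whose article counts should be merged.
```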

Because a lot of actual language use does not follow rules, named entity recognition is not perfect. At many different steps in the process I had to intervene to correct, update, and improve the process using my human cognitive skills, for example, by adding complex organizational names to a list of known entities. But NER performs very well. I started by doing an entirely manual analysis of one debate (approximately three hundred articles), then running a computational analysis of the same data set and comparing the results. The results were very similar, and once some ambiguous entities were disambiguated manually, the computational ranking of entities was nearly identical to the manual one, yet it took less than one-twentieth of the time.

Mapping Topics

The debate maps that I created for chapters 2 and 3 draw on a combination of computational linguistics techniques for “topic discovery” and an admittedly subjective translation of raw results in order to communicate the topical structure of public talk in the debates under study. For a human analyst looking at a sample of documents, qualitative discourse analysis usually involves identifying meaningful concepts, themes, or conversations, coding texts using these concepts and themes, then aggregating the resulting coding data to look for broader patterns in discourse. By contrast, computational topic discovery looks for textual patterns first, then presents those patterns (called “topics”) to the analyst for qualitative identification and interpretation.5

For topic discovery I used a technique called Latent Dirichlet Allocation (LDA).6 Given a text corpus, LDA calculates topics as a probability distribution over words. Topics are latent patterns in the corpus, rather than direct similarities between documents, or simple clusters or co-occurrence of words. In the LDA model, topics contain words, and a document may contain multiple topics. So, for example, an LDA analysis of scientific abstracts might find one topic that contains the words “genetic embryo somatic dna” and another that contains the words “viral allograft antigen lupus.” LDA could also calculate the probability that “viral” will be associated with “viral allograft antigen lupus,” the probability that this topic will show up in any document, and the exact mixture of topics in any given document. But it would be up to the analyst to interpret the first topic as reproductive genetics and the second topic as immunology.
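
For the analysis itself I used a MATLAB toolbox, described below; the same model is implemented in many libraries. Here is a minimal, self-contained sketch using scikit-learn in Python, with toy documents standing in for article texts:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [  # toy stand-ins for newspaper article texts
    "genetic embryo somatic dna cloning embryo dna",
    "viral allograft antigen lupus immune antigen",
    "genetic dna embryo somatic research cloning",
]

vec = CountVectorizer()
X = vec.fit_transform(docs)                    # document-term counts
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)                   # per-document topic mixtures

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):  # topic-word weights
    top = [terms[i] for i in np.argsort(weights)[::-1][:4]]
    print(f"T_{k}: {' '.join(top)}")           # the analyst must still name each topic
```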

The relationships between words, topics, and documents that LDA identifies are remarkably similar to the relationships that a human would identify in the same data.7 So an advantage of LDA is that it accurately identifies important qualitative differences over a much larger set of data, and in a much shorter time frame, than a human analyst. Even more important, however, is that because it is a quantitative (if probabilistic) method, the relationships among topics and across documents that it identifies are precisely measured, rather than simply associated. This allows types of analysis simply not possible for human analysts. For example, LDA can plot the relative significance of a topic to discourse by looking at the relationships among topics and how closely they are related to each other and to the corpus as a whole.

I analyzed the four debate corpora using the Topic Modeling Toolbox, a MATLAB toolbox for doing LDA analysis.8 The toolbox, which is free for scientific use, contains all of the functions necessary to create the debate maps in this book, including topic discovery, probability ranking, and calculation of document and topic relationships. The toolbox allows several parameters and hyperparameters to be set by the user, including how many topics should be discovered, how many iterations should be attempted to model the topics, and the cutoff for low-probability topics. There are ways to maximize the probable fit of a model mathematically, but this does not guarantee that the result will be comprehensible to people. LDA still requires a human analyst to make judgments about levels of abstraction and the coherence of qualitatively different topics. So for these settings I relied on conventions in the most current literature, then adjusted the level of abstraction until the topics were clear and coherent to human comprehension.

In raw output form, “topics” show up simply as collections of words identified by a designator (e.g., T_82) and accompanied by probability information (e.g., 0.03). LDA requires the human analyst to interpret the topics that it identifies in each corpus. For each debate, I have interpreted each topic and given it what I judge to be an accurate topic name that describes the subject of its contents (e.g., “Left Behind Series” or “Human Genetic Evolution”). While I have made every effort to select appropriate topic names, it is possible that another person might come up with different names for these topics.

To create the final debate maps, I used the toolbox to calculate symmetrized KL distance between probabilities of topics over documents, then created a spatial arrangement of substantive topics using multidimensional scaling. To reduce visual clutter, I removed the lowest-probability and the nonsubstantive topics from the visualization.9 The resulting debate maps are visual translations of the data generated by topic discovery into human-readable and informative maps that expose the topical structure of debate without swamping the reader with raw data. Inevitably there are some subjective elements in this translation. However, I judge the ability to analyze massive data sets for qualitative differences as more important than, on one hand, only reporting raw data or, on the other, executing a far more limited analysis using more conventional qualitative discourse analysis methods.10
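
Concretely, symmetrized KL distance here means KL(p||q) + KL(q||p), computed for each pair of topics with each topic treated as a probability distribution over documents; multidimensional scaling then embeds the resulting distance matrix in two dimensions. A minimal sketch, in which the normalization step deriving topic-over-document distributions is my assumption:

```python
import numpy as np
from sklearn.manifold import MDS

theta = np.array([   # toy document-topic mixtures (rows: documents, columns: topics)
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.2, 0.6],
    [0.5, 0.4, 0.1],
])

def sym_kl(p, q, eps=1e-12):
    """Symmetrized Kullback-Leibler distance: KL(p||q) + KL(q||p)."""
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Each topic as a probability distribution over documents.
topic_over_docs = theta.T / theta.T.sum(axis=1, keepdims=True)

K = topic_over_docs.shape[0]
D = np.zeros((K, K))
for i in range(K):
    for j in range(i + 1, K):
        D[i, j] = D[j, i] = sym_kl(topic_over_docs[i], topic_over_docs[j])

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
# coords gives each topic an (x, y) position; low-probability and nonsubstantive
# topics are then dropped before drawing the final debate map.
```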

Biographical and Archival Research

In chapter 4 I address the question of what representatives think counts as good debate. The ideal way to approach this research question would be to sit down and personally interview every representative in every debate directly, with follow-up questions and plenty of time for discussing the nuances of each representative’s approach. For two major reasons, this is not possible. First, it is physically impossible to interview the thousands of representatives who appear in these debates. Second, even taking a sample of these representatives, it is unreasonable to expect that a sociologist could gain interview access to, for example, George W. Bush, Laura Schlessinger, Pat Robertson, and Richard Dawkins.11 So I did not attempt to answer this question by interviewing elite representatives directly.

Instead I drew on a variety of secondary sources to answer the question. I began by creating a purposive sample of the representatives who showed up in the computational analysis. I stratified the sample by visibility (number of articles in which the representative was mentioned) and affiliation (science, religion, other). I established cutoff points for low, medium, and high visibility in each debate, relative to the overall distribution. I then randomly selected representatives in each category until each level of visibility included, at minimum, one religion representative, one science representative, and one representative who was neither.
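
Before the corrective substitutions described next, the basic draw can be sketched as follows; the sample-frame fields and the toy frame are hypothetical:

```python
import random

AFFILIATIONS = {"religion", "science", "other"}

frame = [  # hypothetical sample-frame rows derived from the entity rankings
    {"name": "R1", "debate": "stem cell research", "visibility": "high", "affiliation": "religion"},
    {"name": "S1", "debate": "stem cell research", "visibility": "high", "affiliation": "science"},
    {"name": "O1", "debate": "stem cell research", "visibility": "high", "affiliation": "other"},
    # ... rows for medium and low visibility, and for the other debates
]

def draw_stratum(debate, level, rng):
    """Randomly select within one visibility level until religion, science,
    and 'other' are each represented at least once."""
    pool = [r for r in frame if r["debate"] == debate and r["visibility"] == level]
    rng.shuffle(pool)
    chosen, covered = [], set()
    for rep in pool:
        if rep["affiliation"] not in covered:
            chosen.append(rep)
            covered.add(rep["affiliation"])
        if covered == AFFILIATIONS:
            break
    return chosen

rng = random.Random(0)
sample = [r for level in ("low", "medium", "high")
          for r in draw_stratum("stem cell research", level, rng)]
```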

Because representatives are not evenly distributed across debates, I took several corrective measures to ensure fair coverage for the analysis. If a given representative was no longer living (or had not been alive in recent memory), I selected a different representative from the same category and level of visibility. If a representative in one debate had already appeared in another debate, I selected an additional representative from the same category and level of visibility. If either religion or science representatives were not available in the same level of prominence (e.g., no highly prominent science representative), I selected the same category but from a lower level of visibility. And in cases in which the distribution of top representatives included multiple obvious nonreligion and nonscience representatives in different areas (e.g., politics, media), I selected additional representatives to ensure coverage of these other areas.

Constructing the sample in this way takes more care and judgment than a simple random sample. However, it is necessary because the distribution of religion and science representatives in these debates is not precisely comparable. Religion representatives tend to be relatively few in number, but more visible. For example, a few members of the Religious Right are highly visible in several debates. By contrast, science representatives tend to be relatively more numerous, but less visible. For example, each newspaper story about a new discovery in genetics tends to quote a scientist in the study, or a local college professor, rather than a single national figure. A random sample would entangle (potential) differences between religion and science representatives with (potential) differences in high- and low-visibility representatives. The purposive sample, in contrast to a random sample, guarantees inclusion of science and religion representatives of similar visibility.

In practice, this selection method resulted in a sample that included the top religion and science representatives in each debate, even if such representatives were not necessarily in the top ten for a given debate, supplemented with similarly prominent representatives in other areas. The resulting sample included forty-three representatives from four debates. Some are prominent across debates, some only in one.

For each representative in the sample, I constructed a biographical profile that brought together personal characteristics, samples of public speech or writing, and, where available, published biographical profiles, human-interest articles, and media interviews. Of course the amount of available data varied by person. For example, I found many more sources of data about Jerry Falwell than about John Haught, and many more sources of data about George W. Bush than about Christine Gregoire. This material was also used to construct the anonymized résumés and anonymized quotes used in interviews with ordinary respondents (see below).

In addition to constructing the biographical profiles, I also analyzed the discursive material for each representative to see what kinds of qualitative patterns emerged in the public talk of these figures. Most of these sources are transcripts or videos of interviews conducted by others with prominent (or not-so-prominent) representatives. In these texts I examined how representatives described their own approaches to public debate. The intent was to identify what representatives thought public debate ought to be. I looked at how each selected representative talked about what she or he did in public life, paying particular attention to indications of how a normative vision motivated the person’s participation. From that analysis I derived the broad categorical distinction between “public crusade” and “elevating the conversation” reported in chapter 4.

Interviews

The bulk of the data cited or quoted in support of what ordinary people think comes from sixty-two one-on-one, face-to-face interviews that I conducted between April 2008 and April 2009. The only universal restrictions on respondent selection were that subjects had to be at least eighteen years old and could not be a public media figure. All interviews were confidential, so all respondent names in this book are pseudonyms, and any personally identifying details have been edited from responses.

The interview itself consisted of five stages for each debate (see full interview schedule below). First, I asked open-ended questions about a given debate (named generically, e.g., “stem cell research debate”) such as “What is this debate about?” and “Who do you think is debating?” Second, I presented a sample of anonymous résumés (one at a time) and asked whom each person represents in that debate, and why the respondent thought so. Third, I presented a sample of quotes (one at a time) and asked whom the respondent thought the person who made that statement represents, and why he or she thought so. Fourth, I went through the top ten names mentioned in each debate and asked whom respondents thought each person represented in that debate, and why. Finally, I asked them to select their ideal committee for making decisions related to that debate.

I repeated this sequence for each debate as time and respondent availability permitted. Each stage represents a different dimension of evaluation: preexisting knowledge of the debate (open-ended), evaluation based on identity (résumés), evaluation based on interests (statements), evaluation based on recognition and association (top ten list), and finally, what qualities of representatives are most important (committee selection). I note that because respondents might understand different questions in different ways, despite interviewer guidance, the findings reported throughout the book do not depend solely on one dimension or interpretation of one question (e.g., not just the committee question), but hold across several different dimensions of evaluation.

I digitally recorded every interview and hired professional transcribers to create interview transcripts from those recordings. I reviewed each transcript and corrected the few misunderstandings or errors with reference to the original recordings. Sixty-two interviews yielded approximately 2,500 pages of transcribed text, or approximately forty pages per interview. In each interview I also took extensive notes about qualitative features of responses to provide a basis for a later coding scheme.12 After all interviews were complete, I followed generally accepted practices of open and axial coding to manually analyze each interview transcript. I initially identified important concepts and themes in the interview data, drawing on my interview notes as a guide to create a preliminary coding structure. When a new theme emerged in analysis, I updated the preliminary coding structure and reviewed prior interviews as necessary.

Sample Description

I used a purposive sampling strategy designed to maximize range rather than achieve statistical representativeness.13 Given the religion-and-science content of these debates, I set explicit recruitment targets using a two-dimensional matrix of religious affiliation and occupation. I purposively sought and enforced heterogeneity in those categories and informally sought heterogeneity in other categories.

(p.195) The target distribution for the religious affiliation dimension was proportional to the general U.S. population: approximately 20 percent mainline Protestant, 33 percent evangelical Protestant, 25 percent Catholic, and 20 percent Other/None. I initially classified respondents on this dimension with reference to denominational membership, and later adjusted each respondent’s classification in response to self-identification where appropriate, for example, in cases where a respondent attended a nondenominational church or was in transition between churches. The target distribution for the occupation dimension was approximately 80 percent nonscientific/technical and 20 percent scientific/technical. I initially classified respondents on this dimension with reference to occupational categories from the Bureau of Labor Statistics, and later adjusted as necessary based on personal knowledge of a given respondent’s specific occupational situation.

I recruited respondents at multiple sites to avoid skewing results toward the idiosyncrasies of one particular site.14 But given limited resources, I limited recruitment to two U.S. cities. I recruited 75 percent of respondents in a Southern California city with a population greater than 1.5 million and a strong high-tech and military employer base. I recruited the remaining 25 percent of respondents in a South Florida city with a population of approximately 200,000, a large tourism employer base, and a large retiree population. The demographic differences provided significant heterogeneity beyond expected geographic and regional differences. For example, the South Florida site skews higher in respondent age, and the distribution of religious affiliations within the Other/None category differs substantially between the two sites.

At each site I recruited the initial set of respondents through intermediaries with access to potential respondents in the target recruitment categories. At each site I also proceeded from that initial set of respondents using a snowball strategy for further recruitment. In Southern California I started by identifying and contacting personal acquaintances with access to local organizations (both religious and nonreligious). In South Florida I started by using public information to identify and contact local congregational leaders and other civic leaders who might act as intermediaries. At both sites I asked these intermediaries to identify (and, if necessary, introduce) potential respondents. I then contacted potential respondents directly, generally by email or telephone. I continued pursuing a snowball strategy and used selective recruitment to enforce heterogeneity for religion and occupation as necessary.

As table 8 shows, the resulting sample generally met the recruitment targets within each location and within the sample as a whole. There was some variation from the exact target percentages. For example, the sample contains a slightly higher than expected number of respondents in the Other/None category.15 This variation is not surprising given the sample size. Beyond the target categories, respondents varied widely in education (high school to PhD), age (18 to 79 years, average 40), and gender (34 women, 28 men). But the sample is not intended to be statistically representative, so there are important variations from the U.S. general population. For example, all respondents had completed high school, and a majority had earned AA/AS or BA/BS degrees. Given the limitations of the purposive sample, I do not make general claims about differences within these categories.

Table 8 Purposive Sample Breakdown by Location and Occupation (n = 62)

                        Mainline   Evangelical   Catholic   Other/None
Southern California
  Scientific                2            3            4            2
  Nonscientific             6           11            6           13
South Florida
  Scientific                1            2            0            0
  Nonscientific             3            4            3            2

Interview Schedule

I used an interview schedule to guide the sequence and subject of the interview questions, particularly in the evaluation exercises. In some cases questions (or later sections) were omitted for time, or otherwise altered for clarity. In each case, however, I took care to maintain question order, avoid priming for later responses, and avoid guiding respondents in a particular direction. As Luker points out, this is often overly cautious, since respondents are generally more willing to push back than interviewers assume.16 Note also that many of these questions are designed to prompt reflection. I sometimes asked situation-appropriate follow-up questions to provoke elaboration on brief answers (e.g., about an anecdote that a respondent recounted).

Preliminary Questions

  • Your age?

  • What is your educational background?

  • Have you studied scientific subjects? (if so) At what level?

  • Were you raised in a religious tradition? (if so) Which one?

  • Do you attend church regularly? (if so) How often do you attend? (if so) Did you go last ____?

  • What do you read or watch regularly?

  • Do you consider yourself to be politically active? (if so) In what ways are you active?

  • Are you a member of any clubs or organizations? (if so) Which ones (e.g., professional, hobby, game)?

Issues, Debates, and Media

  1. (1.1) We’re going to go through the same set of questions for four different religion-and-science debate topics: stem cell research, human origins, environmental policy, and origins of homosexuality.

  2. (1.2) Can you briefly describe the ____ debate as you see it?

  3. (1.3) Where did you learn about ____? Can you be specific?

  4. (1.4) Do you talk to other people about ____? (if so) Whom?

  5. (1.5) When is the last time you talked to other people about ____?

  6. (1.6) Does ____ matter (to you)?

  7. (1.7) Do you see yourself as participating in the debate over ____?

Representatives

In this section we’re going to talk about people, positions, and viewpoints in debates about the issues we discussed earlier. We’ll go through the questions for each issue separately, but it’s entirely possible that the answers will be similar for different issues. That’s completely okay.

  1. (2.1) When you think of ____, who do you think is debating? Does anyone come to mind?

  2. (2.2) I’m now going to provide you with some profiles of people who participate in the debate, but I’m not going to identify the person. For each profile, I would like to know whom you think the person represents. Also for each profile, I’m going to ask why you think this person represents those particular people or groups.

    1. (2.2.1) (give profile on index card, read out loud for record) Again, for ____, whom do you think this person represents?

    2. (2.2.2) Why do you think this person represents them? What is it about the profile that suggests this person represents them?

  3. (2.3) Now I’m going to provide you with some statements from people who participate in the debate, but I’m not going to identify the person. For each statement, I will ask a series of questions about your agreement or disagreement with the statement, and I’ll also ask questions about whom this person may represent. If you have any questions at all about the statement, I’m happy to try to clarify.

    1. (2.3.1) (give statement on index card, read out loud for record) Do you agree or disagree with this statement, even partially?

    2. (2.3.2) Why do you agree or disagree? (further) Which part of the statement makes the most or least sense to you?

    3. (2.3.3) Do you think that the person who said this represents you?

    4. (2.3.4) (if not you) Whom do you think that this person represents?

    5. (2.3.5) Why do you think this person represents them?

  4. (2.4) Now we’re going to name some names. I will provide some names of people participating in the debate. If you don’t recognize them, please say so. If you do recognize them:

    1. (2.4.1) What is your impression of this person?

    2. (2.4.2) Whom do you think this person represents? Why?

  5. (2.5) Okay, no more profiles or statements or lists of people. But here is a scenario: if you had to pick a five-person committee to make important decisions about ____, who would be on your committee? Why? (if roles) Can you think of a particular person who fills that role?

    1. (2.5.1) Why would these people be the best committee members?

    2. (2.5.2) (if specific) How did you hear about these people?

    3. (2.5.3) We’ve discussed the committee for ____. Would you want the same people regardless of the topic?

    4. (2.5.4) What changes would you make based on a topic change?

  6. (2.6) If you could have this committee, how would your life change? Would you do anything differently?

  7. (2.7) Do you think there is anything you can do to change the debate about ____, or the people involved in the debate? If so, what might that be?

Representation and Democracy

Okay, we’re on the home stretch. I’m going to ask a short set of questions that are a bit more abstract. I’d like you to think not just about what is happening, but what should happen, in your opinion.

  1. (3.1) Let’s go back to your committee, and let’s say that they came up with a position on ____. If this position went against your beliefs on ____, would you want to vote democratically on the proposal, for example, in a state referendum?

  2. (3.2) Should the committee be allowed to override a democratic vote? Why or why not?

  3. (3.3) Should anyone be allowed to override a democratic vote? Why or why not?

    1. (3.3.1) (if so) Who?

Science and Scientists

Finally, I have one last thought question for you. Let’s say that someone proposed a ten-year moratorium on basic scientific research in order to assess our current data, get consensus on policy positions, and think about moral or ethical implications of science. Would you support such a plan? Why or why not? What alternative might you suggest?

Notes:

(1.) I say “significant” only to guard against the possibility that someone, somewhere, has generated a claim about these issues that I have not seen emerge into public life.

(3.) For discussion of newspapers as the “master forum,” see Ferree et al. 2002.

(9.) For a discussion of how and why topics can be generated that are not relevant to the substantive analysis, see Ramage, Dumais, and Liebling 2010.

(10.) See also M. Evans 2014b.

(11.) Sociologist Michael Lindsay (2007) gained access to hundreds of elite respondents in government and industry, primarily by leveraging religious networks. But this feat is exceptionally difficult and rare. Consider that even top-level journalists rarely get the opportunity to interview more than one of the persons I cited as examples.

(13.) As Weiss (1994) notes, a random sample at this size might not actually capture enough different cases to derive useful theoretical insight.

(15.) This variation is actually consistent with recent findings about the nonreligious in the U.S. general population. See, for example, Hout and Fischer 2002; Baker and Smith 2009.