THE NEED FOR EVIDENCE

Almost all reasoning we encounter includes beliefs about the way the world was, is, or is going to be that the communicator wants us to accept as “facts.” These beliefs can be conclusions, reasons, or assumptions. We can refer to such beliefs as factual claims.

The first question you should ask about a factual claim is, “Why should I believe it?” Your next question is, “Does the claim need evidence to support it?” If it does, and if there is no evidence, the claim is a mere assertion, meaning a claim that is not backed up in any way. You should seriously question the dependability of mere assertions! If there is evidence, your next question is, “How good is the evidence?”

To evaluate reasoning, we need to remember that some factual claims can be counted on more than others. For example, you probably feel quite certain that the claim “most U.S. senators are men” is true, but less certain that the assertion “practicing yoga reduces the risk of cancer” is true. Because it is extremely difficult, if not impossible, to establish the absolute truth or falsity of most claims, rather than asking whether they are true, we prefer to ask whether they are dependable. In essence, we want to ask, “Can we count on such beliefs?” The greater the quality and quantity of evidence supporting a claim, the more we can depend on it, and the more we can call the claim a “fact.”

For example, abundant evidence exists that George Washington was the first president of the United States of America. Thus, we can treat that claim as a fact. On the other hand, there is much conflicting evidence for the belief “bottled water is safer to drink than tap water.” We thus can’t treat this belief as a fact. The major difference between claims that are opinions and those that are facts is the present state of the relevant evidence. The more supporting evidence there is for a belief, the more “factual” the belief becomes.
Before we judge the persuasiveness of a communication, we need to know which factual claims are most dependable. How do we determine dependability? We ask questions like the following:

What is your proof?
How do you know that’s true?
Where’s the evidence?
Why do you believe that?
Are you sure that’s true?
Can you prove it?

You will be well on your way to being among the best critical thinkers when you develop the habit of regularly asking these questions. They require those making arguments to be responsible by revealing the basis for their arguments. Anyone with an argument that you should consider will not hesitate to answer these questions. They know they have substantial support for their claims and, consequently, will want to share their evidence in the hope that you will learn to share their conclusions. When people react to simple requests for evidence with anger or withdrawal, they usually do so because they are embarrassed as they realize that, without evidence, they should have been less assertive about their beliefs.

When we regularly ask these questions, we notice that for many beliefs there is insufficient evidence to clearly support or refute them. For example, much evidence supports the assertion that taking an aspirin every other day reduces the risk of heart attack, although some other evidence disputes it. In such cases, we need to make judgments about where the preponderance of evidence lies as we decide on the dependability of the factual claim. Making such judgments requires us to ask the important question, “How good is the evidence?” Chapters 7 to 9 focus on questions we need to ask to decide how well communicators have supported their factual claims. The more dependable the factual claims, the more persuasive the communications.

LOCATING FACTUAL CLAIMS

We encounter factual claims as (a) descriptive conclusions, (b) reasons used to support either descriptive or prescriptive conclusions, or (c) descriptive assumptions.
Let’s examine an example of each within brief arguments.

(a) Frequent use of headphones may cause hearing loss. Researchers studied the frequency and duration of headphone use among 251 college students and found that 49 percent of the students showed evidence of hearing impairment.

Note that “frequent use of headphones may cause hearing loss” is a factual claim that is a descriptive conclusion supported by research evidence. In this case, we want to ask, “Is that conclusion—a factual claim—justified by the evidence?”

(b) This country needs tougher gun regulations. The number of gun-related crimes has increased over the last 10 years.

Note that the factual claim here is that “the number of gun-related crimes has increased over the last 10 years,” and it functions as a reason supporting a prescriptive conclusion. In this case, we want to ask, “Is that reason—a factual claim—justified by the evidence?”

(c) Professors need to include more active discussions in their classrooms because too many college graduates lack critical thinking skills.

An unstated descriptive assumption links the reason to the conclusion: Students learn how to think critically by participating in active classroom discussions. This factual claim is a descriptive assumption, which may or may not be dependable. Before we believe the assumption, and thus the reason, we want to ask, “How well does evidence support the assumption?” You will find that while many communicators perceive the desirability of supporting their reasons with evidence, they don’t see the need to make their assumptions explicit. Thus, evidence for assumptions is rarely presented, even though in many cases such evidence would be quite helpful in deciding the quality of an argument.

SOURCES OF EVIDENCE

When should we accept a factual claim as dependable? There are three instances in which we will be most inclined to agree with a factual claim:

1. when the claim appears to be undisputed common knowledge, such as the claim “weight lifting increases muscular body mass”;
2. when the claim is the conclusion from a well-reasoned argument; and
3. when the claim is adequately supported by solid evidence in the same communication or by other evidence that we know.

Our concern in this chapter is the third instance. Determining the adequacy of evidence requires us to ask, “How good is the evidence?” To answer this question, we must first ask, “What do we mean by evidence?”

Attention: Evidence is explicit information shared by the communicator that is used to back up or to justify the dependability of a factual claim (see Chapter 2). In prescriptive arguments, evidence will be needed to support reasons that are factual claims; in descriptive arguments, evidence will be needed to directly support a descriptive conclusion.

The quality of evidence depends on the kind of evidence it is. Thus, to evaluate evidence, we first need to ask, “What kind of evidence is it?” Knowing the kind of evidence tells us what questions we should ask. When used appropriately, each kind of evidence can be “good evidence.” It can help support an author’s claim. Like a gold prospector closely examining the gravel in her pan for potentially high-quality ore, we must closely examine the evidence to determine its quality. We want to know, “Does an author’s evidence provide dependable support for her claim?” Thus, we begin to evaluate evidence by asking, “How good is the evidence?” Always keep in the back of your mind that no evidence will be a slam dunk that gets the job done conclusively. You are looking for better evidence; searching for altogether wonderful evidence will be frustrating.
EXHIBIT 7-1 Major Kinds of Evidence

✓ intuition
✓ personal experiences
✓ case examples
✓ testimonials
✓ appeals to authorities or experts
✓ personal observations
✓ research studies
✓ analogies

In this chapter and in Chapter 8, we examine the kinds of questions we can ask of each type of evidence to help us decide its quality. Kinds of evidence examined in this chapter are intuition, personal experiences, case examples, testimonials, and appeals to authority.

INTUITION AS EVIDENCE

“I just sense that Janette is the right girl for me, even though my friends think we’re a bad match.”

“I just have this feeling that Senator Ramirez will surprise the pollsters and win the election.”

“I can tell immediately that this slot machine is going to be a winner for me today.”

When we use intuition to support a claim, we rely on “common sense,” or on our “gut feelings,” or on hunches. Listen to Jewel celebrating intuition as a source of understanding:

Follow your heart
Your intuition
It will lead you in the right direction
Let go of your mind
Your intuition
It’s easy to find
—Jewel, “Intuition”

When a communicator supports a claim by saying “common sense tells us” or “I just know that it’s true,” she is using intuition as her evidence. Intuition refers to a process in which we believe we have direct insights about something without being able to consciously express our reasons. A major problem with intuition is that it is private; others have no way to judge its dependability. Thus, when intuitive beliefs differ, as they so often do, we have no solid basis for deciding which ones to believe. Also, much intuition relies on unconscious processing that largely ignores relevant evidence and reflects strong biases. Consequently, we must be very wary of claims backed up only by intuition.
However, sometimes “intuition” may in fact be relying on some other kind of evidence, such as extensive relevant personal experiences and readings that have been unconsciously accessed from somewhere in our mind. For example, when an experienced pilot has an intuition that the plane doesn’t feel right as it taxis for takeoff, we might be quite supportive of further safety checks of the plane prior to takeoff. Sometimes, hunches are not blind, just incapable of explanation. As critical thinkers, we would want to find out whether claims relying on intuition have any other kinds of evidential support.

PERSONAL EXPERIENCE AS EVIDENCE

The following arguments use a particular kind of evidence to support a factual claim.

“My friend Judy does really well on her tests when she stays up all night to study for them; so I don’t see the need for getting sleep before taking tomorrow’s test.”

“I always feel better after having a big slice of chocolate cake, so I think that anyone who is depressed just needs to eat more chocolate cake.”

Both arguments appeal to personal experiences as evidence. Phrases like “I know someone who…” and “In my experience, I’ve found…” should alert you to such evidence. Because personal experiences are very vivid in our memories, we often rely on them as evidence to support a belief. For example, you might have a really frustrating experience with a car mechanic because she greatly overcharges you for her services, leading you to believe that most car mechanics overcharge. While the generalization about car mechanics may or may not be true, relying on such experiences as the basis for a general belief is a mistake! Because a single personal experience, or even an accumulation of personal experiences, is not enough to give you a representative sample of experiences, personal experiences often lead us to commit the hasty generalization fallacy.
A single striking experience or several such experiences can demonstrate that certain outcomes are possible; for example, you may have met several people who claim their lives were saved because they were not wearing their seat belts when they got into a car accident. Such experiences, however, cannot demonstrate that such outcomes are typical or probable. Be wary when you hear yourself or others arguing, “Well, in my experience….”

Fallacy: Hasty Generalization: A person draws a conclusion about a large group based on experiences with only a few members of the group.

We will revisit this fallacy in Chapter 8 when we discuss research evidence and issues of sampling.

CASE EXAMPLES AS EVIDENCE

President of a large university: “Of course our students can move on to high paying jobs and further study at large universities. Why, just this past year we sent one of our students, Mary Nicexample, off to law school at Harvard. In her first year, Mary remained in the top 5 percent of her class. Therefore, our students can certainly achieve remarkable success at elite universities.”

A frequently used kind of evidence is the use of a detailed, catchy description of, or story about, one or several individuals or events to support a conclusion. Such descriptions are usually based on observations or interviews and vary from being in-depth to being superficial. We call such descriptions case examples. Communicators often begin persuasive presentations with dramatic descriptions of some event to emotionally involve their audience. For example, one way to argue for the banning of cell phone use in cars is to tell heart-wrenching stories of young people dying in car accidents because the driver was talking on a cell phone. Case examples are often compelling to us because of their colorfulness and their interesting details, which make them easy to visualize.
Political candidates have increasingly resorted to case examples in their speeches, knowing that the rich details of cases generate an emotional reaction. Because dramatic cases appeal to our emotions, they distract us from paying close attention to their value as evidence and from seeking other, more relevant research evidence. For example, imagine a story about a man who tortured and murdered his numerous victims. The emotions triggered by such a story are likely to increase our desire for capital punishment. Yet, the human drama of these crimes may lead us to ignore the fact that such a case is rare and that over the past 30 years, 119 inmates with capital sentences were found to be innocent and released from prison. Be wary of striking case examples as proof!

Although case examples will be consistent with a conclusion, do not let that consistency fool you. Always ask yourself: “Is the example typical?” “Are there powerful counterexamples?” “Are there biases in how the example is reported?”

Are there times that case examples can be useful, even if they are not good evidence? Certainly! Like personal experiences, they demonstrate important possibilities and put a personal face on abstract statistics. They make it easier for people to relate to an issue and thus take more interest in it.

TESTIMONIALS AS EVIDENCE

Note on service station wall: “Jane did a wonderful job fixing the oil leak my car had. I strongly recommend that you take your car to Jane to fix any engine problem you have.”

This book looks great.
On the back cover, comments from readers say, “I could not put this book down.”

Commercials, ads for movies, recommendations on the backs of book jackets, and “proofs” of the existence of the paranormal or other controversial or extraordinary life events often try to persuade by using a special kind of appeal to personal experience; they quote particular persons, often a celebrity, as saying that a given idea or product is good or bad, or that extraordinary events have occurred, based upon their personal experiences. Such quoted statements serve as personal testimonials. You may have listened to personal testimonials from college students when you chose your college. Testimonials are thus a form of personal experience in which someone (often a celebrity) provides a statement supporting the value of some product, event, or service and the endorsement lacks any of the information we would need to decide just how much we should let it influence us.

How helpful is such evidence? Usually, it is not very helpful at all. In most cases, we should pay little attention to personal testimonials until we find out much more about the expertise, interests, values, and biases behind them. We should be especially wary of each of the following problems with testimonials:

• Selectivity. People’s experiences differ greatly. Those trying to persuade us have usually carefully selected the testimony they use. What we are most likely to see on the back of a book jacket is the BEST PRAISE, not the most typical reaction. We should always ask the question, “What was the experience like for those whom we have not heard from?” Also, people who provide the testimonials have often been selective in their attention, paying special attention to information that confirms their beliefs and ignoring disconfirming information. Often, believing is seeing! Our expectancies greatly influence how we experience events.
If we believe that aliens live among us, or that humans never really landed on the moon, then we are more likely to see ambiguous images as aliens or as proof of the government conspiracy regarding the moon landing.

• Personal interest. Many testimonials, such as those used for books, movies, and television products, come from people who have something to gain from their testimony. For example, drug companies often give doctors grants to do research, as long as they prescribe the drug company’s brands of medication. Thus, we need to ask, “Does the person providing the testimony have a relationship with what he is advocating such that we can expect a strong bias in his testimony?”

• Omitted information. Testimonials rarely provide sufficient information about the basis for the judgment. For example, when a friend of yours encourages you to go see this new movie because it is the “best movie ever,” you should ask, with warmth, about what makes the movie so impressive. Our standards for judgment may well differ from the standards of those giving the testimony.

• The human factor. One reason that testimonials are so convincing is that they come from very enthusiastic people, who seem trustworthy, well-meaning, and honest. Such people make us want to believe them.

APPEALS TO AUTHORITY AS EVIDENCE

According to my doctor, I should be taking antidepressant drugs to help me cope with my recent episodes of depression, and I don’t need to worry about side effects.

The speaker has defended his claim by appealing to authority—sources that are supposed to know more than most of us about a given topic—so-called experts. When communicators appeal to authorities or experts, they appeal to people who they believe are in a position to have access to certain facts and to have special qualifications for drawing conclusions from the facts. Thus, such appeals potentially provide more oomph to an argument than testimonials, depending on the background of the authority.
You encounter appeals to many forms of authority on a daily basis. And you have little choice but to rely on them because you have neither the time nor the knowledge to become adept in more than a few dimensions of our very complicated lives.

Movie reviewers: “One of the ten best movies of the year.” Valerie Viewer, Toledo Gazette.
Talk show pundits: “The economy is heading for a recession.”
Organizations: “The American Medical Association supports this position.”
Researchers: “Studies show…”
Relatives: “My grandfather says…”
Religion: “The Koran says…”
Magazines: “According to Newsweek…”

We can get expert advice from such sources on how to lose weight, achieve happiness, get rich, lower cholesterol, raise a well-adjusted child, and catch a big fish. You can easily add to our list. It should be obvious that some appeals to authority should be taken much more seriously as evidence than others. Why? Some authorities are much more careful in giving an opinion than others. For example, Newsweek and Time are much more likely to carefully evaluate the available evidence prior to stating an opinion than is the National Enquirer. Articles on schizophrenia are more likely to be based on carefully collected evidence if they are posted on the National Institute of Mental Health Web site than if they are posted on a personal Web page. Our relatives are much less likely than editorial writers for major newspapers to have systematically evaluated a political candidate.

You should remember that authorities are often wrong. Also, they often disagree. The following examples, taken from The Experts Speak, are clear reminders of the fallibility of expert opinion (Christopher Cerf and Victor Navasky, 1998, Rev. Ed., Villard Books, New York).

“I think there is a world market for maybe five computers.” —Thomas Watson, chairman of IBM, 1943.

“Video won’t be able to hold onto any market it captures after the first six months.
People will soon get tired of staring at a plywood box every night.” —Darryl F. Zanuck, Head of Twentieth Century Fox Studios, ca. 1946.

These quotes should remind us that we need to ask critical questions when communicators appeal to authority. We need to ask, “Why should we believe this authority?” More specifically, we should ask the following questions of authorities.

How much expertise, training, or special knowledge does the authority have about the subject about which he is communicating? Is this a topic the person has studied for a long time? Or, has the person had extensive experience related to the topic?

Was the authority in a position to have especially good access to pertinent facts? For example, was she involved firsthand with the events about which she makes claims? In general, you should be more impressed by an authority who is a primary source—someone having firsthand involvement with relevant events—than by secondary sources. Time and Newsweek, for example, are secondary sources, while research journals such as the Journal of the American Medical Association are primary sources.

Is there good reason to believe that the authority is relatively free of distorting influences? Among the factors that can influence how evidence is reported are personal needs, prior expectations, general beliefs, attitudes, values, theories, and ideologies. For example, if a public university president is asked whether cuts in funding for education are bad for the university, he will in all probability answer “yes” and give a number of good reasons. He may be giving an unbiased view of the situation. Because of his position, however, we would want to be concerned about the possibility that he has sought out only those reasons that justify his own biases.
By bias and prejudice, we mean the existence of a strong personal feeling about the goodness or badness of something up front, before we look at the evidence, such that it interferes with our ability to evaluate evidence fairly. Because many factors bias us in virtually all our judgments, we cannot expect any authority to be totally unbiased. We can, however, expect less bias from some authorities than from others and try to determine such bias by seeking information about the authority’s personal interest in the topic. For example, we want to be especially wary when an authority stands to benefit financially from the actions she advocates. We should not reject a claim simply because we suspect that the authority’s personal interests may interfere with her fairness. One helpful step we can take is to check to see whether authorities with diverse attitudes, prior expectations, values, and interests agree. Thus, it is also helpful to ask the question, “Has the authority developed a reputation for frequently making dependable claims?”

You will want to be especially concerned about the quality of authorities when you encounter factual claims on the Internet. When we go online, virtually everyone becomes a potential “authority” because people are free to claim whatever they wish, and there is no built-in process to evaluate such claims. It is clearly a “buyer beware” situation! You should strive to learn as much as you can about the purpose of Web sites, the credentials and experience of the contributors associated with them, and the nature of the reasoning support provided for their conclusions. Pay very close attention to the reasoning structure. Check to see whether the site is associated with or linked to highly reputable sites.
Further clues that the site may be undependable include a lack of dates associated with postings; an unprofessional look to the site; claims that are vague, sweeping (e.g., “always,” “never”), and emotional, rather than carefully qualified; a totally one-sided view; the absence of primary source evidence; the presence of hearsay evidence; and numerous reasoning fallacies. Finally, seek out evidence on the same topic from other sites.

Problems with Citers Citing Other Citers

A particularly troublesome kind of appeal to authority, which has become increasingly frequent as the sizes of many news staffs have dwindled, is a situation in which one authority supports an opinion by citing another authority. For example, one paper (e.g., the New York Times) cites another paper (e.g., the Washington Post), or one news service (e.g., Reuters) cites another news service (e.g., the Associated Press). Such citations give an illusion of supportive evidence but bypass the most basic question: How dependable was the original authority’s claim to begin with? Citing other citers is as informative as reading the same newspaper article over and over again hoping to learn something new. A related problem is the citing of “unnamed sources,” or the reference to “some say….” Be especially cautious when you encounter appeals to authority that make it very difficult to pin down the basis for the original claims.

USING THIS CRITICAL QUESTION

When you identify problems with intuition, personal experience, case examples, testimonials, and appeals to authority as evidence, you have a proper basis for hesitating to accept the conclusion based on that evidence. Knowing these problems gives you some protection against bogus reasoning. However, you do want to work hard to be fair to the arguments that people present for your consideration. So it makes sense to ask those who provide you with insubstantial evidence whether they can give you some better evidence.
Give arguments every chance they deserve.

EVIDENCE AND YOUR WRITING AND SPEAKING

As a writer, you should expect that your readers are also interested in arguments that are supported by strong evidence. Your readers may accept or reject your argument on the basis of your evidence. You should incorporate evidence into your writing as though your readers have your training and expectations. Let’s consider this suggestion further.

Anticipating Critical Readers

How can you be prepared for readers who have the same set of questions in their toolbox and the same expectations about evidence as you do? You step into their shoes. You anticipate the critical questions that you would ask about the evidence if you were the reader. Then try to answer those questions preemptively. Tell your readers as much as you can about the evidence you have provided. Who published it? Do the authors or the institution that funded the research have any clear biases? What is their background? How current are the data? How generalizable is an observation or experience? Did you notice any potential problems with the evidence, such as limited sample size or omitted information?

After you bring these concerns to the surface, you will be faced with a judgment call. You must decide whether you have provided enough evidence of strong quality. The decision is not an easy one—every piece of evidence comes with strengths and weaknesses. While we cannot give you a clear-cut set of rules to identify whether your argument needs more evidence or better evidence, we do have a couple of rules of thumb.

Determining Whether You Need More Evidence

The more controversial your conclusion or your reason, the more time you should dedicate to providing evidence. Your audience should quickly accept information that is relatively indisputable, for instance, the name of the governor of Massachusetts, the number of years the sitcom Friends aired, or the capital of Qatar.
Your audience will be less willing to accept a controversial point, such as “Deval Patrick should be reelected to the governorship,” “Friends influenced style in the late 1990s more than any other sitcom,” or “Qatar should host the FIFA World Cup in 2022.” These claims are disputable, and, as such, your readers will expect much more from you in terms of evidence before they accept your conclusion. Lastly, you should pay particular attention to arguments that rely on one testimonial, an appeal to authority, or other types of evidence less regarded in academic writing. These sections may warrant more evidence. Our next section will indicate why.

Your Academic Writing and Evidence

When you commit to a writing project, you also commit to adhering to a set of writing conventions and expectations. Many of these conventions and expectations relate to writing style, for instance, the decision whether to avoid contractions or obscenities. These conventions change based on the circumstances—it may be appropriate to insert an impassioned expletive on a Web forum with friends, but inappropriate to do so in a formal report to your supervisor. This guideline extends to the types of evidence you choose to include in your writing. Some of the evidence we outlined in this chapter tends to be more appropriate for casual writing and communicating, such as writing a review of a new restaurant on Urbanspoon.com or urging your fellow gamers to download the new expansion pack for your multiplayer online role-playing game. We suspect, however, that much of your writing over the next few years of your life will be academic writing. Academic writing comes with certain expectations about the quality of the evidence. Expectations vary depending on the discipline, but they share certain similarities. When you understand these expectations, they can guide you as you make decisions about whether to bulk up your argument with more evidence.
In academic writing, a high value is placed on research that is publicly verifiable, conducted according to the scientific method, and reviewed by the authors’ peers before publication. These standards improve the reliability of evidence. They make observations more generalizable. We will discuss why in Chapter 8. For now, keep an eye out in your academic writing for reasons supported by intuition, personal experience, testimonials, or appeals to authority. You will probably want to back up these sections with peer-reviewed studies, polls with rigorous research methods, and research conducted with academic standards in mind. In academic writing, your audience will expect and appreciate this evidence.

In this chapter, we have focused on the evaluation of several kinds of evidence used to support factual claims: intuition, personal experience and anecdotes, testimonials, and appeals to authorities. Such evidence must be relied on with caution. We have provided you with some questions you should ask to determine whether such evidence is good evidence. In Chapter 8, we discuss other kinds of evidence, as we continue to ask the question, “How good is the evidence?”

PRACTICE EXERCISES

Critical Question: How good is the evidence: intuition, personal experience, case examples, testimonials, and appeals to authority?

Evaluate the evidence in the following three passages.

Passage 1

Some well-known basketball players trying to get an edge on the competition have found a cheap and powerful device to improve their shooting ability, the HeadsUp Headband. According to the company that makes them, the band is made of material that interacts with the head’s natural energy field in such a way that shooting concentration is greatly enhanced. Star players now wearing the headband commented about them during interviews with an ESPN sports writer:

Lenny Bigscorer: “I wouldn’t play without one now.
I can actually feel each shot heading for the sweet spot of the basket.”

Dunkin Daniels: “It’s amazing. I’ve never had my head so into the game of basketball. I’m encouraging the entire team to use them.”

Passage 2

Are Botox injections a safe alternative to face-lifts? According to an interview with Dr. N.O. Worries published in Cosmo, there are no dangerous side effects associated with Botox injections. Dr. Worries performs hundreds of Botox injections each month, is well established as a physician in New York City, and has her own private practice. She claims she has never had a serious problem with any of her injections, and her patients have never reported any side effects. Furthermore, Hollywood’s Association for Cosmetic Surgeons officially stated in a press release that Botox has never been shown to cause any negative effects, despite what other physicians might argue.

Passage 3

Are Macs really better than PCs? The answer is a resounding yes! Computer Nerds Quarterly recently ran an article thoroughly outlining every advantage that Macs have over PCs. Furthermore, just ask Mac users and they will quickly explain how Macs are superior to PCs. For example, Sherry, a Mac user, states, “My Mac is the best thing I ever purchased. It is fast and easy to use. Plus, it has never crashed on me. All of my friends who have PCs have complained about all kinds of problems my Mac has never had.” More importantly, a recent report in Consumer Affairs states that more new businesses are using Mac-based systems than PC-based systems. Clearly, Macs are a cut above the PCs.

Sample Responses

Passage 1

Conclusion: Wearing HeadsUp Headbands is boosting performance in star basketball players.

Reason: Famous basketball players rave about the positive impact of the headbands.

We should not rely on these testimonials as good “proof.” This passage illustrates well the weaknesses of testimony as evidence as well as the power of expectations in affecting perceptions.
How typical are these success stories? Would randomly selected users of the headband have voiced so much praise? Have the players actually improved their shooting? If so, is the improvement just a chance event? Are there other causes for the improvement? Are these selected athletes highly suggestible? Until more systematic research data are collected, we should not conclude that these headbands cause improved shooting performance in basketball players. Passage 2 Conclusion: Botox injections are safe. Reason: A cosmetic surgeon and a professional organization claim Botox is safe. How much should we depend on these appeals to authority? Not much. First, both authorities are likely to be very biased. They stand to gain financially by making safety claims. Dr. Worries’s testimony is especially suspect because it is based on her experiences only. She has probably not sought out evidence of failures. The claims of the professional organization are as questionable as those of Dr. Worries because the organization is composed of cosmetic surgeons, who probably perform Botox injections. If the organization had offered some sort of systematic research showing that Botox is safe, perhaps its claims would be less suspect.
CHAPTER 8 How Good Is the Evidence: Personal Observation, Research Studies, and Analogies? In this chapter, we continue our evaluation of evidence. We focus on three common kinds of evidence: personal observation, research studies, and analogies. We need to question each of these when we encounter them as evidence. Critical Question: How good is the evidence: personal observation, research studies, and analogies? PERSONAL OBSERVATION AS EVIDENCE The policeman who shot and killed an unarmed man should be charged with a crime. Although he claims he thought the victim was reaching for a gun, onlookers reported that the victim was not making a threatening movement. How much can we count on the observation of such onlookers? One valuable kind of evidence is personal observation, the basis for much everyday reasoning as well as scientific research. For example, we feel confident of something we actually see. Thus, we tend to rely on eyewitness testimony as evidence. For many reasons, however, personal observations often prove to be untrustworthy evidence. Observers, unlike certain mirrors, do not give us “pure” observations. What we “see” and report is filtered through a set of values, biases, attitudes, and expectations. We tend to see or hear what we wish to see or hear, selecting and remembering those aspects of an experience that are most consistent with our prior experience and background. In addition, many situations present major impediments to seeing accurately, such as poor attention, rapid movement of the events observed, and stressful environments. Imagine, for example, the possible distortions in your observation if you were standing near a person waving a gun at a bank teller. When reports of observations in newspapers, magazines, books, television, and the Internet, as well as in research studies, are used as evidence, you need to determine whether there are good reasons to rely on such reports. 
The most reliable reports will be based on recent observations made by several people observing under optimal conditions who have no apparent, strong expectations or biases related to the event being observed. RESEARCH STUDIES AS EVIDENCE “Studies show …” “Research investigators have found in a recent survey that …” “A report in the New England Journal of Medicine indicates …” One form of authority that relies heavily on observation and often carries special weight is the research study: usually a systematic collection of observations by people trained to do scientific research. How dependable are research findings? As is true for appeals to authority in general, we cannot know the answers until we ask lots of questions. Society has turned to the scientific method as an important guide for determining the facts because the relationships among events in our world are very complex, and because humans are fallible in their observations and theories about these events. The scientific method attempts to avoid many of the built-in biases in our observations of the world and in our intuition and common sense. What is special about the scientific method? Above all, it seeks information in the form of publicly verifiable data—that is, data obtained under conditions such that other qualified people can make similar observations and get the same results. Thus, for example, if one researcher reports that she was able to achieve cold fusion in the lab, the experiment would seem more credible if other researchers could obtain the same results. A second major characteristic of the scientific method is control—that is, the use of special procedures to reduce error in observations and in the interpretation of research findings. For example, if bias in observations is likely to be a major problem, researchers might try to control this kind of error by using multiple observers and checking how well they agree with one another. 
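The idea of using multiple observers as a control can be made concrete with a small sketch. The data and the simple percent-agreement measure below are hypothetical illustrations, not part of the text:

```python
def percent_agreement(obs_a, obs_b):
    """Fraction of observed events on which two observers recorded the same thing."""
    matches = sum(1 for a, b in zip(obs_a, obs_b) if a == b)
    return matches / len(obs_a)

# Hypothetical codings of ten events by two independent observers:
observer1 = ["threat", "no threat", "threat", "no threat", "threat",
             "threat", "no threat", "no threat", "threat", "no threat"]
observer2 = ["threat", "no threat", "threat", "threat", "threat",
             "threat", "no threat", "no threat", "no threat", "no threat"]
print(percent_agreement(observer1, observer2))  # 0.8
```

When agreement is low, the researcher has direct evidence that individual observations are being filtered through different expectations, which is exactly the kind of error the scientific method tries to control.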
Physical scientists frequently maximize control by studying problems in the laboratory so that they can minimize extraneous factors. Unfortunately, control is usually more difficult in the social world than in the physical world; thus, it is very difficult to successfully apply the scientific method to many questions about complex human behavior. Precision in language is a third major component of the scientific method. Concepts are often confusing, obscure, and ambiguous. The scientific method tries to be precise and consistent in its use of language. While there is much more to science than we can discuss here, we want you to keep in mind that scientific research, when conducted well, is one of our best sources of evidence because it emphasizes verifiability, control, and precision. Problems with Research Findings Unfortunately, the fact that research has been applied to a problem does not necessarily mean that the research evidence is dependable or that the interpretations of its meaning are accurate. Like appeals to any source, appeals to research evidence must be approached with caution. Also, some questions, particularly those that focus on human behavior, can be answered only tentatively even with the best of evidence. Therefore, we have to ask a number of important questions about research studies before we decide how much to depend on their conclusions. When communicators appeal to research as a source of evidence, you should remember the following: 1. Research varies greatly in quality. There is well-done research and there is poorly done research, and we should rely more on the former. Because the research process is so complex and subject to so many external influences, even those well trained in research practices sometimes conduct studies with important deficiencies; publication in a scientific journal does not guarantee that a study is free of important flaws. 2. 
Research findings often contradict one another. Thus, single research studies presented out of the context of the family of research studies that investigate the question often provide misleading conclusions. The research findings that most deserve our attention are those that have been replicated by more than one researcher or group of researchers. Many claims never get retested, and many of those that are retested fail to replicate the original results. For example, a recent study published in a prestigious medical journal found that 41 percent of retestings of highly regarded research claims about successful medical interventions convincingly showed the original claims to be wrong or greatly exaggerated (see “Lies, Damned Lies, and Medical Science,” The Atlantic, November 2010). We always need to ask: “Have other researchers verified the findings?” 3. Research findings do not prove conclusions. At best, they support conclusions. Such findings do not speak for themselves! Researchers must always interpret the meaning of their findings, and all findings can be interpreted in more than one way (see Chapter 7). Hence, researchers’ conclusions should not be treated as demonstrated “truths.” When you encounter statements such as “research findings show…,” you should retranslate them into “researchers interpret their research findings as showing…” 4. Like all of us, researchers have expectations, attitudes, values, and needs that bias the questions they ask, the way they conduct their research, and the way they interpret their findings. For example, scientists often have an emotional investment in a particular hypothesis. When the American Sugar Institute is paying for your summer research grant, it will be very difficult for you to find that sugar consumption among teenagers is excessive. Like all fallible human beings, scientists may find it difficult to be objective about data that conflict with their hypothesis. 
A major strength of scientific research is that it tries to make its procedures and results public so that others can judge the merit of the research and then try to replicate it. However, regardless of how objective a scientific report may seem, important subjective elements are always involved. 5. Speakers and writers often distort or oversimplify research conclusions. Major discrepancies may occur between the conclusion merited by the original research and the use of the evidence to support a communicator’s beliefs. For example, researchers may carefully qualify their conclusions in their original research report only to have others use those conclusions without the qualifications. 6. Research “facts” change over time, especially claims about human behavior. For example, the following research “facts” have been reported by major scientific sources, yet have been refuted by recent research evidence: • Prozac, Zoloft, and Paxil are more effective than a placebo for most cases of depression. • Taking fish oil, exercising, and doing puzzles help fend off Alzheimer’s disease. • Measles vaccine causes autism. 7. Research varies in how artificial it is. Often, to achieve the goal of control, research loses some of its real-world quality. The more artificial the research, the more difficult it is to generalize from the research study to the world outside. The problem of research artificiality is especially evident in research studying complex social behavior. For example, social scientists may have people sit at a computer playing games designed to test their reasoning processes, hoping to figure out why people make certain decisions when confronted with different scenarios. However, we should ask, “Is sitting at a computer while thinking through hypothetical situations too artificial to tell us much about the way people make decisions when confronted with real dilemmas?” 8. 
The need for financial gain, status, security, and other factors can affect research outcomes. Researchers are human beings, not computers. Thus, it is extremely difficult for them to be totally objective. For example, researchers who want to find a certain outcome may interpret their results in such a way as to find it. Pressures to obtain grants, tenure, or other personal rewards might ultimately affect the way in which researchers interpret their data. For example, research studies funded by a pharmaceutical company tend to have a much higher rate of positive findings for interventions using that company’s drugs than does research on the same drugs funded by sponsors not associated with the company, such as federal government funding agencies. As you can see, despite the many positive qualities of research evidence, we need to avoid embracing research conclusions prematurely. However, you should not REJECT a scientifically based conclusion just because there is SOME doubt associated with it. Certainty is often an impossible goal, but not all conclusions are equally uncertain, and we should be willing to embrace some conclusions much more than others. Thus, when critically evaluating research conclusions, be wary of the reasoning error of demanding certainty from a conclusion when some uncertainty is to be expected and does not negate the conclusion. We label this reasoning error the impossible certainty fallacy. Fallacy: Impossible Certainty: Assuming that a research conclusion should be rejected if it is not absolutely certain. Clues for Evaluating Research Studies Apply the following questions to research findings to determine whether the findings are dependable evidence. 1. What is the quality of the source of the report? 
Usually, the most dependable reports are those published in peer-reviewed journals, in which a study is not accepted until it has been reviewed by a series of relevant experts. Usually—but not always—the more reputable the source, the better designed the study. So, try to find out all you can about the reputation of the source. 2. Other than the quality of the source, are there other clues in the communication suggesting the research was well done? For example, does the report detail any special strengths of the research? Unfortunately, most reports of research findings encountered in popular magazines, newspapers, television reports, and blogs fail to provide sufficient detail about the research to warrant a judgment of its quality. 3. How recently was the research conducted, and are there any reasons to believe that the findings might have changed over time? Many research conclusions change over time. For example, the causes of depression, crime, or heart disease in 1980 may be quite different from those in 2010. 4. Have the study’s findings been replicated by other studies? When an association is repeatedly and consistently found in well-designed studies—for example, the link between smoking and cancer—then there is reason to believe it, at least until those who disagree can provide persuasive evidence for their point of view. 5. How selective has the communicator been in choosing studies? For example, have relevant studies with contradictory results been omitted? Has the researcher selected only those studies that support his point? 6. Is there any evidence of strong-sense critical thinking? Has the speaker or writer shown a critical attitude toward earlier research that was supportive of her point of view? Most conclusions from research need to be qualified because of research limitations. Has the communicator demonstrated a willingness to qualify? 7. Is there any reason for someone to have distorted the research? 
We need to be wary of situations in which the researchers need to find certain kinds of results. 8. Are conditions in the research artificial and therefore distorted? Always ask, “How similar are the conditions under which the research study was conducted to the situation the researcher is generalizing about?” 9. How far can we generalize, given the research sample? Because this is such an important issue, we discuss it in depth in our next section. 10. Are there any biases or distortions in the surveys, questionnaires, ratings, or other measurements that the researcher uses? We need to have confidence that the researcher has measured accurately what she wanted to measure. The problem of biased surveys and questionnaires is so pervasive in research that we discuss it in more detail in a later section. GENERALIZING FROM THE RESEARCH SAMPLE Speakers and writers usually use research reports to support generalizations, that is, claims about events in general. For example, “the medication was effective in treating cancer for 75 percent of the patients in the study” is not a generalization; “the medication cures pancreatic cancer” is. Most publicized generalizations that we encounter need to be closely examined for the possibility of overgeneralizing! Let’s see why. First, how we sample is crucial in determining to what extent we can generalize. The ability to generalize from research findings depends on the number, breadth, and randomness of events or people in the researcher’s study. The process of selecting events or persons to study is called sampling. Because researchers can never study all the events or people about which they want to generalize, they must choose some way to sample, and some ways are preferable to others. You need to keep several important considerations in mind when evaluating the research sample: 1. The sample must be large enough to justify the generalization or conclusion. 
In most cases, the more events or people researchers observe, the more dependable their conclusion. If we want to form a general belief about how often college students receive help from others on term papers, we are better off studying 1,000 college students than studying 100. 2. The sample must possess as much breadth, or diversity, as the types of events about which conclusions are to be drawn. For example, if researchers want to generalize about college students’ drinking habits in general, their evidence should be based on a sampling of a variety of kinds of college students in a variety of college settings. 3. The more random the sample, the better. When researchers sample randomly, they try to make sure that all events about which they want to generalize have an equal chance of being sampled; they try to avoid a biased sample. Major polls, such as the Gallup Poll, always try to sample randomly. This keeps them from getting groups of events or people with biased characteristics. Do you see how each of the following samples has biased characteristics? a. People who volunteer to be interviewed about frequency of sexual activity. b. People who have land-line phones only. c. Students in an introductory psychology class. d. Viewers of particular television networks, such as Fox or MSNBC. Thus, we want to ask of all research studies, “How many events or people did they sample, how much breadth did the sample have, and how random was the sample?” Failure to attend sufficiently to the limits of sampling leads to overgeneralizing research findings, stating a generalization that is much broader than warranted by the research. In Chapter 7, we referred to such overgeneralization as the hasty generalization fallacy. Let’s take a close look at a research overgeneralization: People who join online dating services tend to succeed in finding a good match. 
An online survey of 229 people, aged 18 to 65, who had used Internet dating sites asked them about the main relationship they had formed online. The research showed that 94 percent of those surveyed saw their ‘e-partners’ again after first meeting them, and the relationships lasted for an average of at least seven months. Sampling procedures prohibit such a broad generalization. The research report implies the conclusion can be applied to all users of online dating services, when the research studied only one online Web site and only a total of 229 people. The study fails to describe how the sample was selected; hence, the randomness and breadth of the sample are unknown. It is quite possible, for example, that those who volunteered to participate were those who had been most successful in finding a good match. The research report is flawed because it greatly overgeneralizes. Attention: We can generalize only to people and events that are like those that we have studied in the research. BIASED SURVEYS AND QUESTIONNAIRES It’s early evening. You have just finished dinner. The phone rings. “We’re conducting a survey of public opinion. Will you answer a few questions?” If you answer “yes,” you will be among thousands who annually take part in surveys—one of the research methods you will encounter most frequently. Think how often you hear the phrase “according to recent polls.” Surveys and questionnaires are usually used to measure people’s behavior, attitudes, and beliefs. Just how dependable are they? It depends! Survey responses are subject to many influences, so one has to be very cautious in interpreting their meaning. Let’s examine some of these influences. First, for survey responses to be meaningful, they must be answered honestly. That is, verbal reports need to mirror actual beliefs and attitudes. Yet, for many reasons, people frequently shade the truth. 
For example, they may give answers they think they ought to give, rather than answers that reflect their true beliefs. They may experience hostility toward the questionnaire or toward the kind of question asked. They may give too little thought to the question. If you have ever been a survey participant, you can probably think of other influences. Attention: You cannot assume that survey responses accurately reflect true attitudes. Second, many survey questions are ambiguous in their wording; the questions are subject to multiple interpretations. Different individuals may in essence be responding to different questions! For example, imagine the multiple possible interpretations of the following survey question: “Do you think there is quality programming on television?” The more ambiguous the wording of a survey, the less credibility you can place in the results. You should always ask, “How were the survey questions worded?” Usually, the more specifically a question is worded, the more likely it is that different individuals will interpret it similarly. Third, surveys contain many built-in biases that make them even more suspect. Two of the most important are biased wording and biased context. Biased wording of a question is a common problem; a small change in how a question is asked can have a major effect on how it is answered. Let’s examine a conclusion based on a recent poll and then look at the survey question. A college professor found that 56 percent of respondents attending his university believe that the Obama healthcare program is a major mistake for the country. Now look closely at the survey question: “What do you think about the president’s misguided efforts to impose Obamacare socialism on the nation?” Do you see the built-in bias? 
The “leading” words are “the president’s misguided efforts” and “impose Obamacare socialism.” Wouldn’t the responses have been quite different if the question had read, “What do you think about the president’s attempt to provide expanded health care coverage at lower costs to Americans?” Thus, the responses obtained here are a distorted indicator of attitudes concerning the new healthcare program. Survey and questionnaire data must always be examined for possible bias. Look carefully at the wording of the questions! The effect of context on an answer to a question can also be powerful. Even answers to identical questions can vary from poll to poll depending on how the questionnaire is presented and how the question is embedded in the survey. The following question was included in two recent surveys: “Do you think we should lower the drinking age from 21?” In one survey, the question was preceded by another question: “Do you think the right to vote should be given to 18-year-olds, as it currently is?” In the other survey, no preceding question occurred. Not surprisingly, the two surveys showed different results. Can you see how the context might have affected respondents? Another important contextual factor is length. In long surveys, people may respond differently to later items than to earlier items simply because they get tired. Be alert to contextual factors when evaluating survey results. Because the way people respond to surveys is affected by many unknown factors, such as the need to please the interviewer or the interpretation of the question, should we ever treat survey evidence as good evidence? There are heated debates about this issue, but our answer is “yes,” as long as we are careful and do not generalize further than warranted. Some surveys are more reputable than others. The better the quality of the survey, the more you should be influenced by its results. 
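One aspect of survey quality can be checked with simple arithmetic: how much sampling error to expect from a given sample size. The sketch below uses the standard margin-of-error formula and assumes a simple random sample, which real surveys only approximate:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p observed
    in a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A 56% result carries very different uncertainty at different sample sizes:
print(round(margin_of_error(0.56, 1000), 3))  # 0.031 -> about plus or minus 3 points
print(round(margin_of_error(0.56, 100), 3))   # 0.097 -> about plus or minus 10 points
```

Note that this arithmetic says nothing about the biases discussed above; a perfectly computed margin of error cannot rescue a survey whose questions were loaded or whose sample was self-selected.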
Our recommendation is to examine survey procedures carefully before accepting survey results. Once you have ascertained the quality of the procedures, you can choose to generate your own qualified generalization—one that takes into account any biases you might find. Even biased surveys can be informative, but you need to know the biases in order not to be unduly persuaded by the findings. CRITICAL EVALUATION OF A RESEARCH-BASED ARGUMENT Let’s now use our questions about research to evaluate the following argument, in which research evidence has been used to support a conclusion. It is time to abolish tenure in the public school system, according to a Time magazine poll that asked Americans what they think of the current state of public education. Among the questions the survey addressed was the following: How can policy be changed to make the public-education system better? The following reported results showed major discontent among the American public with the tenure system: 28% of those surveyed support the current system of tenure for teachers, which makes it difficult to remove them from their jobs; and 56% think tenured long-time teachers are not motivated to work hard. The poll was conducted by telephone in August of 2010 among a national random sample of 1,000 Americans aged 18 and older. How good is the evidence? The research is presented here in an uncritical fashion. We see no sign of strong-sense critical thinking. The report makes no references to special strengths or weaknesses of the study, although it does provide some brief detail about the sampling procedures so that we can speculate about its worth as the basis of a generalization. There is no indication of whether the study has been replicated or how it fits into a broader context of studies about what is needed to improve public education. We do not know what benefits publishing these findings may have had for the person making the argument. Is there any evidence of overgeneralizing? 
The sample is relatively large and is described as random, two strengths. The survey was done by telephone, however, and thus we don’t know what kinds of selective factors led people to choose to take part in the survey. Also, we don’t know whether cell phones were called, so participation may be biased against “cell phone only” people. It is impossible to determine sampling breadth because we don’t know what aspects of the general population were represented by the responses to the phone calling. For example, were some areas of the country, some occupations, or some age groups more represented than others? More information about how the survey was introduced to those called, and about the characteristics of those who volunteered, would be helpful. Could there have been a bias in those willing to cooperate with the callers? Such questions suggest that we should be wary of overgeneralizing from these results. Are the survey questions biased? The argument omits the specific wording of the questions used for the two findings and also fails to list the other questions included in the survey, so we can’t determine what kinds of order or context biasing effects might be present. The phrase “which makes it difficult to remove them from their jobs” highlights a negative aspect of tenure, suggesting that the question asked was loaded against tenure. We have raised enough questions about the given passage to be wary of the generalizability of its factual claims. We would want to take a close look at the entire research study and also rely on much more related research before we could conclude that these claims are dependable. Let’s now look at a very different source of evidence. ANALOGIES AS EVIDENCE Look closely at the structure of the following brief arguments, paying special attention to the reason supporting the conclusion. There is no need to fear that the Internet will lead to the disappearance of newspapers and magazines. 
After all, TV dinners didn’t make cooking disappear. As an educator, it is important to weed out problem students early and take care of the problems they present because one bad egg ruins the omelet. Both arguments use analogies as evidence, a very different kind of evidence from what we have previously been evaluating. At first glance, analogies often seem very persuasive. But they often deceive us, and we need to ask, “How do we decide whether an analogy is good evidence?” Before reading on, try to determine the persuasiveness of these two arguments. Did you note that the analogies involve comparisons? They rely on resemblance as the major form of evidence. The reasoning is as follows: “We know a lot about something in our world (X), and another event of interest (Y) seems to be like X in some important way. If these two things are alike in one or more respects, then they will probably be alike in other respects as well.” For example, when people get depressed, many psychiatrists treat the depressive behavior as a form of mental disease because they see the behavior as having important similarities to physical illness, and thus they treat the person with antidepressant medications. They see the mental problems as symptoms of a physical disorder. If they were to see the behavior as “experiencing a problem in living,” they might treat the patient very differently. We reason in a similar fashion when we choose to buy a CD because a friend recommends it. We reason that because we resemble each other in a number of likes and dislikes, we will enjoy the same music. An argument that uses a well-known similarity between two things as the basis for a conclusion about a relatively unknown characteristic of one of those things is an argument by analogy. Analogies both stimulate insights and deceive us. For example, analogies have been highly productive in scientific and legal reasoning. 
When we infer conclusions about humans on the basis of research with mice, we reason by analogy. Much of our thinking about the structure of the atom is analogical reasoning. When we make a decision in a legal case, we may base that decision on the similarity of that case to preceding cases. For example, when judges approach the question of whether restricting corporate contributions to political candidates violates the constitutional protection of free speech and freedom of expression, they must decide whether financial contributions are analogous to freedom of speech; thus, they reason by analogy. Such reasoning can be quite insightful and persuasive. Identifying and Comprehending Analogies You can identify an argument by analogy by noticing that something with well-known characteristics is being used to help explain something that has some similar characteristics. In doing so, the arguer assumes that if the event being explained is like the event to which it is compared in important ways, it will be like that event in other important ways. For example, consider the analogy, “Relearning geometry is like riding a bike. Once you start, it all comes back to you.” Riding a bicycle, an activity with well-known characteristics, is used to explain relearning geometry, the unknown, an activity with some, but not all, similar characteristics. We are familiar with the idea of getting on a bike after a long period and having “it all come back to us” as we start to ride again. The analogy explains relearning geometry in the same way, arguing that if one starts to do geometry problems, remembering how to do them will simply come back. Note that we started with a similarity—both activities involve learning a skill—and assumed that they would therefore have other important similarities. Once the nature and structure of analogies are understood, you should be able to identify analogies in arguments. 
It is especially important to identify analogies when they are used to set the tone of a conversation. Such analogies are used to “frame” an argument. To identify framing analogies, look for comparisons that are used not only to explain a point, but also to influence the direction a discussion will take. For example, in the 2004 presidential election, the war in Iraq was an important issue. Opponents of the war compared the war in Iraq to the Vietnam War. The analogy was not only an attempt to explain what was happening in Iraq, but also an attempt to cause people to look negatively upon the war in Iraq. Conversely, proponents of the war in Iraq compared the war to World War II. World War II carries more positive connotations than does the Vietnam War, so this analogy was used to reframe the discussion in terms more favorable to the war in Iraq. Always look for comparisons that attempt to direct your reaction to an object through framing. A careful evaluation of framing analogies will prevent you from being misled by a potentially deceptive analogy.

Framing analogies are not the only comparisons to be wary of when looking for analogies in arguments. One must also be careful when evaluating arguments that use overly emotional comparisons. For example, some politicians arguing against the recent health care bill compared end-of-life planning to death panels. Who could possibly be in favor of a bill that called for death panels? Overly emotional analogies cloud the real issues in arguments and prevent substantive discourse. Try to identify comparisons that carry significant emotional connotations to avoid being deceived by such analogies.

Evaluating Analogies

Because analogical reasoning is so common and has the potential to be both persuasive and faulty, you will find it very useful to recognize such reasoning and know how to systematically evaluate it. To evaluate the quality of an analogy, you need to focus on two factors.
1. The ways the two things being compared are similar and different.
2. The relevance of the similarities and the differences.

A word of caution: You can almost always find SOME similarities between any two things. So, analogical reasoning will not be persuasive simply because of many similarities. Strong analogies will be ones in which the two things we compare possess relevant similarities and lack relevant differences. All analogies try to illustrate underlying principles. Relevant similarities and differences are ones that directly relate to the underlying principle illustrated by the analogy. Let’s check out the soundness of the following argument by analogy.

I do not allow my dog to run around the neighborhood getting into trouble, so why shouldn’t I enforce an 8 o’clock curfew on my 16-year-old? I am responsible for keeping my daughter safe, as well as responsible for what she might do when she is out. My dog stays in the yard, and I want my daughter to stay in the house. This way, I know exactly what both are doing.

A major similarity between a pet and a child is that both are thought of as not being full citizens with all the rights and responsibilities of adults. Also, as the speaker asserts, he is responsible for keeping both his dog and his daughter safe. We note some relevant differences, however. A dog is a pet that lacks higher-order thinking skills and cannot assess right and wrong. A daughter, however, is a human being with the cognitive capacity to tell when things are right or wrong and when she should not do something that might get her (or her parents) in trouble. Also, as a human, she has certain rights and deserves a certain amount of respect for her autonomy. Thus, because a daughter can do things a dog cannot, the differences are relevant in assessing the analogy. Because the analogy fails to allow for these distinctions, it provides only weak support for the conclusion.
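The two-factor evaluation can be given a toy formalization. The sketch below is our own illustration, not a method from the text: it simply tallies similarities and differences, counts only those the reader has judged relevant to the underlying principle, and reports the proportion of relevant similarities. The relevance judgments themselves remain the reader's work; the code only keeps the bookkeeping honest.

```python
# Toy scorer for the two evaluation factors (an illustration of ours,
# not the authors' procedure). Each entry pairs a point of comparison
# with the reader's judgment of whether it is relevant to the
# underlying principle of the analogy.

def analogy_strength(similarities, differences):
    """Return the share of relevant points of comparison that are
    similarities. Values near 1.0 suggest a stronger analogy; values
    near 0.0 suggest a weak one."""
    rel_sim = sum(1 for _, relevant in similarities if relevant)
    rel_diff = sum(1 for _, relevant in differences if relevant)
    total = rel_sim + rel_diff
    return rel_sim / total if total else 0.0

# The dog/daughter curfew analogy from the text:
similarities = [
    ("owner/parent is responsible for keeping each safe", True),
    ("neither is a full citizen with adult rights", True),
]
differences = [
    ("a daughter can assess right and wrong; a dog cannot", True),
    ("a daughter has autonomy deserving respect", True),
    ("a dog has four legs", False),  # true, but irrelevant to the principle
]

print(analogy_strength(similarities, differences))  # -> 0.5
```

A score of 0.5 here just restates the verdict reached in the text: the relevant differences are as weighty as the relevant similarities, so the analogy gives only weak support.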
Another strategy that may help you evaluate reasoning by analogy is to generate alternative analogies for understanding the same phenomenon that the author or speaker is trying to understand. Such analogies may either support or contradict the conclusions inferred from the original analogy. If they contradict the conclusion, they reveal problems in the initial reasoning by analogy. A productive way to generate your own analogies is the following:

1. Identify some important features of what you are studying.
2. Try to identify other situations with which you are familiar that have some similar features. Brainstorm. Try to imagine diverse situations.
3. Try to determine whether the familiar situation can provide you with some insights about the unfamiliar situation.

For example, in thinking about pornography, you could try to think of other situations in which people believe something is demeaning because of the way people are treated in a given situation, or because of what watching something might cause others to do. Do segregation, racist/sexist jokes, or employment discrimination come to mind? How about arguments that claim playing violent video games, watching action movies, or listening to heavy metal music causes children to act violently? Do such arguments trigger other ways to think about pornography?

You should now be able to systematically evaluate the two brief analogical arguments at the beginning of this section. Ask the questions you need to ask to recognize an argument by analogy. Then, ask the questions to evaluate the argument. Look for relevant similarities and differences. Usually, the greater the proportion of relevant similarities to relevant differences, the stronger the analogy. An analogy is especially compelling when you can find no relevant difference and you can find good evidence that the relevant similarities do indeed exist. We found a relevant difference that weakens each of our two initial sample analogies.
Check your evaluation against our list.

(First example) Both TV dinners and the Internet made it quicker and easier to accomplish complex, time-consuming tasks. Reading magazines and newspapers, however, may not provide the same kind of pleasure as cooking a gourmet meal.

(Second example) The interactions of students in a classroom environment are very complex. The effect any one student might have on the group cannot easily be determined, just as the effects the group might have on the individual are difficult to predict. Conversely, a rotten egg will definitely spoil any food made from it. Also, it is problematic to think of people as unchanging objects, such as rotten eggs, that have no potential for growth and change.

Analogies that trick or deceive us fit our definition of a reasoning fallacy; such deception is called the faulty analogy fallacy.

Fallacy: Faulty Analogy: Occurs when an analogy is proposed in which there are important relevant dissimilarities.

In one sense, all analogies are faulty because they make the mistaken assumption that because two things are alike in one or more respects, they are necessarily alike in some other important respect. It is probably best for you to think of analogies as varying from very weak to very strong. But even the best analogies are only suggestive. Thus, if an author draws a conclusion about one case from a comparison to another case, she should provide further evidence to support the principle revealed by the most significant similarity.

USING EVIDENCE IN YOUR OWN WRITING

To help you improve the quality of evidence in your own writing, we have a suggestion. When conducting your own research, you need to observe and record consistently. Before starting independent research, a researcher develops a set of procedures or rules to guide the process. The formal name for these procedures is methodology.
When you carefully decide on a methodology, you often preemptively avoid problems we discussed earlier in this chapter, such as biased questionnaires and sampling issues. Another aspect of conducting your own research is keeping accurate and accessible records. Our memories are fallible and prone to error when we try to recall what we have seen and heard. Technology, however, has created some very useful tools to address this concern. You can video or audio record interviews or observations. You can use Web-based survey tools. Remember that you should always date your observations, surveys, or interviews, and you should keep them organized, either electronically or in hard copy. Your readers should be able to look over your findings. You may even want to return to them for other projects.

Lastly, keep in mind the limitations of your findings. We discussed the risk of overgeneralization earlier in the chapter. If you incorporate your own research into your writing projects, this concern applies especially to you. The implications of your research are limited to the regions you surveyed or observed. If you seek to demonstrate that your findings have far-reaching implications, you may want to supplement your writing with other authors’ findings.

Research and the Internet

It’s the 21st century. We suspect that you are light-years ahead of technological half-wits like Homer Simpson, who marveled, “They have the Internet on computers, now?” We’d be surprised if you were not taking advantage of the Internet when you prepare to write. Internet research has fundamentally changed evidence gathering for most of us, making information exponentially more accessible. What’s the trade-off for this unprecedented level of availability? We have to consider the evidence we gather with even greater scrutiny. Keep these tips in mind to help you address the particular difficulties that arise with Internet research.
Earlier in this chapter, we discussed the importance of investigating an author’s background. We urged you to determine potential biases or conflicts of interest. To weigh the opinion of an authority, we need to know that person’s credentials and potential biases. The Onion, the popular satirical news site, illustrates how the Internet makes this task particularly difficult. In its 2008 mock article “Local Idiot to Post Comment on Internet,” it quotes the “local idiot” as he divulges his plans: “Later this evening, I intend to watch the video in question, click the ‘reply’ link above the box reserved for user comments, and draft a response, being careful to put as little thought into it as possible, while making sure to use all capital letters and incorrect punctuation […]. Although I do not yet know exactly what my comment will entail, I can say with a great degree of certainty that it will be incredibly stupid.” If only all contributors to the Internet were so forthright!

The importance of investigating a source’s credibility is even greater when we add Internet sources to the equation. The Internet often draws comparisons to the Wild West. There is no sheriff in town making sure that only true and fair statements are published by respectable folk. In its current form, the Internet is relatively unrestricted. Anyone can create a Web page or a blog. Web pages can appear trustworthy when they are actually published by someone with a hidden agenda. Take a look at some of the Web sites created by the social activists known as The Yes Men, such as http://www.dowethics.com, a site they created to look and sound like the real deal. Upon investigation, visitors to the site discovered that it was not created by Dow. In fact, the site was a biting critique of the chemical company’s environmental practices. While this example is unusual, we hope it reminds you that the creators of a Web site may have a political, commercial, or even artistic agenda that is not readily apparent.
Even after you decide that a Web-based author is reliable, you should ask more questions. Because the Web does not have a sheriff, evidence that is questionable or untrue can easily be posted. Comedy Central’s satirical pundit Stephen Colbert wanted to demonstrate how easily false information can be posted on the Internet. In one episode of his Colbert Report, he edited the public Internet encyclopedia Wikipedia. For five hours, Wikipedia entries stated that George Washington did NOT own slaves and that the population of African elephants had tripled in the previous six months. (For another satire of this very real concern, check out the Onion’s 2002 article “Factual Error Found on Internet,” which begins, “The Information Age was dealt a stunning blow Monday, when a factual error was discovered on the Internet.”) To combat this problem, avoid writing about evidence that has not been credited to a specific source. Take the time to look up the original source. When a snippet of another article is posted or cited, the author who posted the snippet may have misunderstood or taken the information out of context.

PRACTICE EXERCISES

Critical Question: How good is the evidence?

Evaluate each of these practice passages by examining the quality of the evidence provided.

Passage 1

Are children of alcoholics more likely to be alcoholics themselves? In answering the question, researchers sampled 451 people in Alcoholics Anonymous to see how many would say that one, or both, of their parents were alcoholics. The AA members in the study currently attend AA somewhere in Ohio, Michigan, or Indiana and were asked by people in charge of the local AA programs to volunteer to fill out a survey. The research found that 77 percent of the respondents had at least one parent they classified as an alcoholic. The study also surveyed 451 people randomly from the same states who claim not to be heavy drinkers.
Of the nonheavy drinkers, 23 percent would label at least one of their parents as alcoholic.

Passage 2

Why shouldn’t law students taking a difficult exam be permitted to use their laptop computers? Attorneys can use their computers to look up information relevant to difficult cases.

Passage 3

One of the greatest symbols of the United States is the American flag. While cases in the past have defended desecration of the flag as symbolic speech, I ask, “Where is the speech in such acts?” If you have something bad to say about the United States, say it, but do not cheapen the flag with your actions. Many Americans died to keep that flag flying. Those who want to support flag burning and other such despicable acts are outnumbered. Last month, 75 people were surveyed in a restaurant in Dallas, Texas, and were asked if they supported the unpatriotic desecration of the American flag in an attempt to express some sort of anti-American idea. Ninety-three percent responded that they were not in favor of desecration of the American flag. Therefore, our national lawmakers should pass a law protecting the American flag against such horrible actions.

Sample Responses

Passage 1

Conclusion: Children of alcoholics are more likely to become alcoholics than are children of nonalcoholics.

Reason: More alcoholics than nonalcoholics reported having at least one alcoholic parent.

Note that the results presented are from one study, without reference to how typical these results are. We also do not know where this information was published, so we can make no assessments regarding how rigorously the study was reviewed before publication. However, we can ask some useful questions about the study. The sample size is quite large, but its breadth is questionable. Although multiple states were sampled, to what extent are the people in the AA programs in these states typical of alcoholics across the nation? Also, how do alcoholics in AA compare to alcoholics who have not sought help?
Perhaps the most important sampling problem was the lack of a random sample. While the self-reported nonalcoholics were randomly selected in the three states, the respondents in AA were selected on a voluntary basis. Do those who volunteered to talk about their parents differ greatly from those who did not volunteer? If there is a difference between the volunteers and nonvolunteers, then the sample is biased.

How accurate are the rating measurements? First, no definition of alcoholic is given beyond the respondents’ currently being in AA. In addition, we are not told of any criteria given to the research participants for rating parents as alcoholic. Thus, we are uncertain of the accuracy of the judgments about whether someone was an alcoholic. Also problematic is the fact that the selection of the supposed control group of nonalcoholics is based on self-assessment. We know that not being an alcoholic is the socially acceptable answer, and people tend to give socially acceptable answers when they know them. This response tendency could also bias the sampling in the supposed control group. We would want to know more about the accuracy of these ratings before we could have much confidence in the conclusion.

Passage 2

Conclusion: Students taking exams should be able to use their laptops.

Reason: Students using laptops to look up answers on difficult exams is like attorneys being able to use laptops to find answers to difficult cases.

First, we note that the reasoning is based on a comparison. Something we are familiar with, attorneys using their laptops to help with difficult cases, is used to help us better understand an event that is similar in some ways: Both situations involve using laptops to look up answers to difficult problems. A significant difference, however, is that students taking an exam are being tested for knowledge that they are supposed to possess without external help.
This difference is sufficient for us to reject the analogy as proof for the conclusion.
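Returning briefly to Passage 1: it is worth seeing that the size of a reported difference is a separate question from the quality of the sampling. The sketch below is our own addition, not part of the text; it runs a standard two-proportion z-test on the reported figures (77 percent of 451 versus 23 percent of 451). The statistic comes out enormous, yet, as the sample response explains, no test statistic can repair a volunteer-based, self-assessed sample.

```python
# Two-proportion z-test on Passage 1's reported figures (our addition,
# illustrating that statistical significance cannot cure sampling bias).
import math

def two_proportion_z(x1, n1, x2, n2):
    """Standard two-proportion z statistic using a pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

x1 = round(0.77 * 451)  # about 347 AA respondents with an alcoholic parent
x2 = round(0.23 * 451)  # about 104 nonheavy drinkers with an alcoholic parent
z = two_proportion_z(x1, 451, x2, 451)
print(round(z, 1))  # -> 16.2, far beyond any conventional threshold
```

Even a z statistic above 16 tells us only that the two groups, as surveyed, differ; it says nothing about whether volunteers in AA resemble alcoholics generally, which is exactly the question the sample response presses.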