sumptions. The fundamental difficulty of empirical research is to decide what assumptions to maintain. Given that strong conclusions are desirable, why not maintain strong assumptions? There is a tension between the strength of assumptions and their credibility. I have called this (Manski 2003, p. 1):

The Law of Decreasing Credibility: The credibility of inference decreases with the strength of the assumptions maintained.

This "law" implies that analysts face a dilemma as they decide what assumptions to maintain: Stronger assumptions yield conclusions that are more powerful but less credible.

I will use the word credibility throughout this book, but I will have to take it as a primitive concept that defies deep definition. The second edition of the Oxford English Dictionary (OED) defines credibility as "the quality of being credible." The OED defines credible as "capable of being believed; believable." It defines believable as "able to be believed; credible." And so we come full circle.

Whatever credibility may be, it is a subjective concept. Each person assesses credibility on his or her own terms. When researchers largely agree on the credibility of certain assumptions or conclusions, they may refer to this agreement as "scientific consensus." Persons sometimes push the envelope and refer to a scientific consensus as a "fact" or a "scientific truth." This is overreach. Consensus does not imply truth. Premature scientific consensus sometimes inhibits researchers from exploring fruitful ideas.

Disagreements occur often. Indeed, they may persist without resolution. Persistent disagreements are particularly common when assumptions are nonrefutable—that is, when alternative assumptions are consistent with the available data. As a matter of logic alone, disregarding credibility, an analyst can pose a nonrefutable assumption and adhere to it forever in the absence of disproof. Indeed, he can displace the burden of proof, stating "I will maintain this assumption until it is proved wrong." Analysts often do just this. An observer may question the credibility of a nonrefutable assumption, but not the logic of holding on to it.

To illustrate, American society has long debated the deterrent effect of the death penalty as a punishment for murder. Disagreement persists in part because empirical research based on available data has not been able to settle the question. With this background, persons find it tempting to pose their personal beliefs as a hypothesis, observe that this hypothesis cannot be rejected empirically, and conclude that society should act as if their personal belief is correct. Thus, a person who believes that there is no deterrent effect may state that, in the absence of credible evidence for deterrence, society should act as if there is no deterrence. Contrariwise, someone who believes that the death penalty does deter may state that, in the absence of credible evidence for no deterrence, society should act as if capital punishment does deter. I will discuss deterrence and the death penalty further in Chapter 2.

1.2. Incentives for Certitude

A researcher can illuminate the tension between the credibility and power of assumptions by posing alternative assumptions of varying credibility and determining the conclusions that follow in each case. In practice, policy analysis tends to sacrifice credibility in return for strong conclusions. Why so? A proximate answer is that analysts respond to incentives.
I have earlier put it this way (Manski 2007a, 7–8):

The scientific community rewards those who produce strong novel findings. The public, impatient for solutions to its pressing concerns, rewards those who offer simple analyses leading to unequivocal policy recommendations. These incentives make it tempting for researchers to maintain assumptions far stronger than they can persuasively defend, in order to draw strong conclusions. The pressure to produce an answer, without qualifications, seems particularly intense in the environs of Washington, D.C. A perhaps apocryphal, but quite believable, story circulates about an economist's attempt to describe his uncertainty about a forecast to President Lyndon B. Johnson. The economist presented his forecast as a likely range of values for the quantity under discussion. Johnson is said to have replied, "Ranges are for cattle. Give me a number."

When a president as forceful as Johnson seeks a numerical prediction with no expression of uncertainty, it is understandable that his advisers feel compelled to comply. Jerry Hausman, a longtime econometrics colleague, stated the incentive argument this way at a conference in 1988, when I presented in public my initial findings on policy analysis with credible assumptions: "You can't give the client a bound. The client needs a point." (A bound is synonymous with a range or an interval. A point is an exact prediction.)

Hausman's comment reflects a perception that I have found to be common among economic consultants. They contend that policy makers are either psychologically unwilling or cognitively unable to cope with uncertainty. Hence, they argue that pragmatism dictates provision of point predictions, even though these predictions may not be credible.

This psychological-cognitive argument for certitude begins from the reasonable premise that policy makers, like other humans, have limited willingness and ability to embrace the unknown. However, I think it too strong to draw the general conclusion that "the client needs a point." It may be that some persons think in purely deterministic terms. However, a considerable body of research measuring expectations shows that most make sensible probabilistic predictions when asked to do so; see Chapter 3 for further discussion and references. I see no reason to expect that policy makers are less capable than ordinary people.

Support for Certitude in Philosophy of Science

The view that analysts should offer point predictions is not confined to U.S. presidents and economic consultants. It has a long history in the philosophy of science. Over fifty years ago, Milton Friedman expressed this perspective in an influential methodological essay. Friedman (1953) placed prediction as the central objective of science, writing (p. 7): "The ultimate goal of a positive science is the development of a 'theory' or 'hypothesis' that yields valid and meaningful (i.e. not truistic) predictions about phenomena not yet observed." He went on to say (p. 10):

The choice among alternative hypotheses equally consistent with the available evidence must to some extent be arbitrary, though there is general agreement that relevant considerations are suggested by the criteria "simplicity" and "fruitfulness," themselves notions that defy completely objective specification.

Thus, Friedman counseled scientists to choose one hypothesis (that is, make a strong assumption), even though this may require the use of "to some extent . . . arbitrary" criteria.
He did not explain why scientists should choose a single hypothesis out of many. He did not entertain the idea that scientists might offer predictions under the range of plausible hypotheses that are consistent with the available evidence.

The idea that a scientist should choose one hypothesis among those consistent with the data is not peculiar to Friedman. Researchers wanting to justify adherence to a particular hypothesis sometimes refer to Ockham's Razor, the medieval philosophical declaration that "plurality should not be posited without necessity." The Encyclopaedia Britannica Online (2010) gives the usual modern interpretation of this cryptic statement, remarking that "the principle gives precedence to simplicity; of two competing theories, the simplest explanation of an entity is to be preferred." The philosopher Richard Swinburne writes (1997, 1):

I seek . . . to show that—other things being equal—the simplest hypothesis proposed as an explanation of phenomena is more likely to be the true one than is any other available hypothesis, that its predictions are more likely to be true than those of any other available hypothesis, and that it is an ultimate a priori epistemic principle that simplicity is evidence for truth.

The choice criterion offered here is as imprecise as the one given by Friedman. What do Britannica and Swinburne mean by "simplicity"?

However one may operationalize the various philosophical dicta for choosing a single hypothesis, the relevance of philosophical thinking to policy analysis is not evident. In policy analysis, knowledge is instrumental to the objective of making good decisions. When philosophers discuss the logical foundations and human construction of knowledge, they do so without posing this or another explicit objective. Does use of criteria such as "simplicity" to choose one hypothesis among those consistent with the data promote good policy making? This is the relevant question for policy analysis. As far as I am aware, philosophers have not addressed it.

1.3. Conventional Certitudes

John Kenneth Galbraith popularized the term conventional wisdom, writing (1958, chap. 2): "It will be convenient to have a name for the