OPINION article

Front. Psychol., 30 April 2015
Sec. Psychology of Language
This article is part of the Research Topic "Context in communication: A cognitive view".

Communicating numeric quantities in context: implications for decision science and rationality claims

David R. Mandel*

  • Socio-Cognitive Systems Section, Defence Research and Development Canada and Department of Psychology, York University, Toronto, ON, Canada

Perhaps more than most areas of cognitive psychology, the study of human judgment and decision-making relies heavily on experimental tasks communicated through written descriptions that convey numeric quantifiers as primary sources of information. Subjects in such studies are usually required to use that information to make choices, indicate preferences, or offer judgments. These responses are compared to normative benchmarks, resulting in the researchers drawing conclusions about the quality, coherence, or rationality of human judgment and decision-making (Arrow, 1982; Tversky and Kahneman, 1986; Stanovich and West, 2000). Most of this body of research has paid little attention (a) to how subjects interpret numeric information conveyed in writing and (b) to how those interpretations are influenced by context (Mandel and Vartanian, 2011; Teigen, in press). More often than not, researchers simply assume both that subjects will interpret the numeric quantities conveyed in experimental tasks as exact values and that they should interpret them as precise quantities.

Yet it is uncontroversial in linguistics that numeric quantifiers may be treated as exact or approximate values, and where their interpretations are approximate, they may be treated as one-sided (e.g., at least, which is lower bounded, or at most, which is upper bounded) or two-sided (e.g., roughly or about). Linguistic accounts of numeric quantifiers (e.g., Horn, 1989; Carston, 1998; Levinson, 2000; Geurts, 2006; Breheny, 2008) do not support the normative claim (or assumption) that a precise “bilateral” reading of such quantifiers consistent with exactly is the proper reading. Although linguistic accounts differ in what they posit as possible semantic defaults, even those proposing a bilateral semantics, such as Breheny (2008), specify pathways for pragmatically derived unilateral interpretations, such as interpreting a numeric quantifier, x, as at least x or at most x.

More generally, the degree to which decision researchers seem confident in defining the meaning of linguistic terms for others runs counter to a fundamental idea in the philosophy of language, which holds that the meanings of words are definable only through their actual use in language (e.g., Wittgenstein, 1953; Austin, 1979). It also runs counter to psycholinguistic evidence indicating that even 5-year-olds understand that numeric quantifiers should be interpreted as "at least" in some contexts (Musolino, 2004). And it runs counter to work in experimental pragmatics indicating that people come to draw context-sensitive scalar implicatures as they develop. For instance, they come to understand that although some is logically compatible with all, it usually pragmatically excludes all because it would be infelicitous to use some if one meant all (Moxey and Sanford, 2000; Noveck, 2001; Noveck and Reboul, 2008).

Studies of Option Framing as a Case in Point

Consider the following influential test of the coherence of decision-making:

Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:

[Positive Frame]

If Program A is adopted, 200 people will be saved.

If Program B is adopted, there is 1/3 probability that 600 people will be saved, and 2/3 probability that no people will be saved.

[Negative Frame]

If Program C is adopted, 400 people will die.

If Program D is adopted, there is 1/3 probability that nobody will die, and 2/3 probability that 600 people will die.

According to Tversky and Kahneman (1981), options A and C in the Asian disease problem (ADP) are extensionally equivalent and likewise for options B and D. The former, moreover, are regarded by virtually all researchers who have used or commented on the problem as “certain” or “sure,” whereas the latter are regarded as “uncertain” or “risky.” Coherent choices thus require that a decision-maker who chooses A over B would also choose C over D (or vice versa).

Yet Tversky and Kahneman (1981) and others (e.g., see Levin et al., 1998, for an overview) found that most subjects choose A in the first pair and D in the second, ostensibly violating one of the most consensual normative principles of choice (Tversky and Kahneman, 1986)—description invariance, which states that extensionally equivalent events should not be differentially regarded merely because of the way in which they are described.

I say “ostensibly” because the claim that subjects presented with this problem violate description invariance (and, hence, are incoherent in their decision-making) rests on a shaky argument I call proof by arithmetic, which goes like this:

1. There are exactly 600 people at risk.

2. Option A will save exactly 200 people.

3. Option C will let exactly 400 people die.

4. Option A implies that exactly 400 people will die because exactly 600 minus exactly 200 is equal to exactly 400.

5. Option C implies that exactly 200 people will be saved because exactly 600 minus exactly 400 is equal to exactly 200.

6. Therefore, option A is equal to option C.

A similar argument can be expressed for the claim that options B and D are equivalent.

The reason the proof-by-arithmetic argument is shaky is that it assumes people interpret numeric quantifiers as exact values when, as noted earlier, this reflects a naïve view of quantifiers in particular and of language in general.
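To see concretely how the proof by arithmetic hinges on the exact reading, consider the following minimal sketch (a Python illustration of my own, not part of the original studies). It represents each "sure" option by the set of outcomes, expressed as numbers of people saved out of 600, that a given reading of its quantifier leaves open. Options A and C pick out the same outcome set only under the "exactly" reading; under "at least" or "at most" readings the sets differ, so the claimed extensional equivalence no longer follows.

```python
# Illustrative sketch only (not from the article): each option is mapped to the
# set of possible outcomes, expressed as the number of people saved out of 600,
# that a given reading of its numeric quantifier leaves open.
TOTAL = 600

def outcome_set(n, attribute, reading):
    """Outcomes (people saved) compatible with '<n> people will be <attribute>'."""
    # The quantified value for an outcome s: people saved, or people who die.
    quantified = (lambda s: s) if attribute == "saved" else (lambda s: TOTAL - s)
    tests = {
        "exactly": lambda q: q == n,
        "at least": lambda q: q >= n,
        "at most": lambda q: q <= n,
    }
    return {s for s in range(TOTAL + 1) if tests[reading](quantified(s))}

for reading in ("exactly", "at least", "at most"):
    option_a = outcome_set(200, "saved", reading)  # "200 people will be saved"
    option_c = outcome_set(400, "die", reading)    # "400 people will die"
    print(f"{reading:8s} -> A equivalent to C? {option_a == option_c}")

# exactly  -> True  (both reduce to exactly 200 saved)
# at least -> False (A leaves open 200-600 saved; C leaves open 0-200 saved)
# at most  -> False (A leaves open 0-200 saved; C leaves open 200-600 saved)
```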

To put that view to a proper test requires asking subjects not only about their choices but also about their interpretations of the quantifiers in the options presented to them. This approach was adopted in a recent experiment (Mandel, 2014, Experiment 3). Subjects were presented with a choice problem much like the ADP (except that it focused on 600 people at risk in a war-torn region rather than 600 people at risk due to an unusual Asian disease) and were then asked whether they interpreted "200" in the positive frame or "400" in the negative frame as meaning (a) "at least [n]," (b) "exactly [n]," or (c) "at most [n]." Sixty-four percent responded "at least," 30% responded "exactly," and the remaining 6% responded "at most."

This finding shows how untenable the proof-by-arithmetic argument is as a basis for the claim that subjects violate description invariance in framing problems like the ADP. Simply put, the researchers' interpretation was not shared by most subjects, who instead viewed the quantifiers presented to them as lower bounds. That result has profound consequences for the interpretation of subjects' choice data. Take the modal response: it is evident that saving at least 200 people [out of 600] is objectively better than letting at least 400 people [out of 600] die. For subjects who interpret the options in this way, the pattern of choosing A over B and D over C may maximize subjective expected utility rather than constituting a "preference reversal" of dubious decision-making quality. In fact, when subjects' interpretations of the quantifiers in both options (i.e., A and B in the positive frame and C and D in the negative frame) were taken into account, a majority (76%) chose the option that was utility maximizing. Moreover, the framing effect reported by Tversky and Kahneman (1981) was found only in the subsample that reported a lower-bound interpretation of the quantifiers in options A or C. For those subjects who interpreted the quantifiers as exact values, there was no effect of frame.
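The utility-maximization point can also be made concrete with a small sketch (again my own illustration; the assumed uniform credence over compatible outcomes is an expository assumption, not part of Mandel's analysis). Under the modal "at least" reading, the expected number of lives saved under option A exceeds that of the risky option, while the expected number saved under option C falls below it, so the common pattern of choosing A over B and D over C maximizes expected lives saved rather than revealing incoherence.

```python
# Illustrative sketch with an assumed uniform credence over compatible outcomes
# (my assumption for exposition; the article makes no such distributional claim).
TOTAL = 600

def compatible_saved(n, attribute, reading):
    """Numbers of people saved compatible with '<n> people will be <attribute>'."""
    quantified = (lambda s: s) if attribute == "saved" else (lambda s: TOTAL - s)
    tests = {"exactly": lambda q: q == n,
             "at least": lambda q: q >= n,
             "at most": lambda q: q <= n}
    return [s for s in range(TOTAL + 1) if tests[reading](quantified(s))]

def expected_saved(n, attribute, reading):
    outcomes = compatible_saved(n, attribute, reading)
    return sum(outcomes) / len(outcomes)  # uniform credence over the outcomes

risky = (1 / 3) * TOTAL + (2 / 3) * 0     # options B and D: expected 200 saved

print(expected_saved(200, "saved", "at least"))  # option A, "at least": 400.0
print(expected_saved(400, "die", "at least"))    # option C, "at least": 100.0
print(risky)                                     # 200.0
# A (400 expected saved) beats the gamble (200), and the gamble beats C (100),
# so choosing A over B and D over C is the utility-maximizing pattern here.
```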

Teigen and Nikolaisen (2009) also found evidence that numeric quantifiers are often interpreted as lower bounds. In one framing experiment that used a financial version of the ADP, subjects were asked which of two financial forecasters would be more accurate. Forecaster A predicted that NOK 250,000 (of 600,000) would be saved (in the positive frame) or that NOK 350,000 (of 600,000) would be lost (in the negative frame). Forecaster B predicted that NOK 150,000 (of 600,000) would be saved (in the positive frame) or that NOK 450,000 (of 600,000) would be lost (in the negative frame). In fact, NOK 200,000 was saved (NOK 400,000 was lost). In other words, the experiment was set up so that one forecaster overestimated the outcome and the other underestimated it, but they did so by the same amount (NOK 50,000). Supporting the hypothesis that people often spontaneously adopt a lower-bound interpretation of numeric quantifiers, the forecaster who overestimated the actual amount was judged to be more accurate. This was so regardless of whether the outcome entailed saving money or losing it (thus ruling out an alternative explanation based on desirability).

Contexts Matter

The preceding examples suffice to show that it is untenable for decision researchers to assume that subjects interpret numeric quantifiers as exact values. The next examples further demonstrate how aspects of context can moderate those interpretations. First, quantifier interpretations may be affected by the degree to which decision options are explicated. Consider the so-called certain options in the ADP: in A, nothing is said about the remaining 400 people; in C, nothing is said about the remaining 200. In contrast, the so-called uncertain options better (if not fully) resolve the uncertainty resulting from partial explication. That is, in options B and D the explicit probabilities add up to unity and for each possible outcome all 600 people in the focal set are accounted for—either they are all saved or else they all die. In this sense, the certain options seem less certain than the uncertain options. Mandel (2014, Experiment 3) resolved the uncertainty by filling in the missing information:

If Plan A is adopted, it is certain that 200 people will be saved and 400 people will not be saved.

If Plan C is adopted, it is certain that 400 people will die and 200 people will not die.

The effect of explicating the missing information on subjects' numeric quantifier interpretations was striking: only 24% selected "at least [n]," whereas 59% selected "exactly [n]." The percentage of subjects who indicated "at most [n]" also nearly tripled (from 6 to 17%). The direction of these context-induced shifts is predictable: when all members of a focal set are referenced, it is likely that the speaker intends for the quantifiers to be exact. Thus, one might expect the bilateral interpretation to be modal, as was found. Yet one might also expect a smaller shift in favor of "at most," which may reflect the reader's appreciation that the sum of the quantified subsets cannot exceed the value of the total set.

Moreover, the effect of sentential context (via the manipulation of explication) extends to choice: when both of the paired options were fully explicated, there was no effect of frame on subjects' choices (also see Kühberger, 1995; Mandel, 2001; Tombu and Mandel, 2015). Evidently, the interpretation of numeric quantifiers depends on aspects of sentential context, such as the explication of complementary implicit numeric quantifiers, and these context effects also affect subjects' choices. In this regard, the present discussion adds to a small literature that has highlighted the influence of context on decision-making (e.g., Wagenaar et al., 1988; Hilton, 1995; Goldstein and Weber, 1997; Rettinger and Hastie, 2001; Mandel and Vartanian, 2011).

Numeric quantifier interpretations are also affected by linguistic inferences that may be drawn from the broader semantic context of the decision-making problem. For instance, when a rationale for the values presented in the ADP was provided to subjects—namely, that only 200 vaccines for the disease would be available—then a majority (71%) interpreted "200" as an upper bound ("at most") in the positive frame, whereas a majority (64%) interpreted "400" as a lower bound ("at least") in the negative frame (this experiment is reported in the General Discussion of Mandel, 2014). In contrast, when the standard ADP was presented, 58 and 54% gave the "at least" response in the positive and negative frames, respectively.

Once again, the direction of these interpretational shifts (both as a function of frame and whether a rationale was provided to subjects) is predictable, reflecting subjects' awareness that maximum quantities (i.e., having only 200 vaccines) set upper bounds on positive expected outcomes. And, once again, there is evidence that the effect of context on linguistic interpretation, in turn, influences the choices people make. When the vaccine rationale was provided in the ADP, no effect of frame on choice was found (Jou et al., 1996).

Conclusion

William James wrote:

The great snare of the psychologist is the confusion of his own standpoint with that of the mental fact about which he is making his report. I shall hereafter call this the ‘psychologist's fallacy’ par excellence (1890/1950, p. 196, italics in original).

Decision researchers have perennially committed this fallacy by projecting their understanding of decision-task structure and meaning onto their subjects and then assessing the rationality of their subjects' judgments and choices as if their subjects invariably shared their views.

Over the years, a minority of psychologists have objected to that approach, having noted how subjects' task construals often differ from those assumed by experimenters (e.g., Henle, 1962; Berkeley and Humphreys, 1982; Phillips, 1983; Hilton, 1995). For instance, Gigerenzer (1996) stated:

Semantic inferences—how one infers the meaning of polysemous terms such as probable from the content of a sentence (or the broader context of communication) in practically no time—are extraordinarily intelligent processes. They are not reasoning fallacies (p. 593).

The research and arguments summarized here continue in a similarly critical vein, extending the alternative-task-construal argument to problems involving the linguistic interpretation of numeric quantifiers. As the examples provided here have illustrated, those linguistic interpretations are not only, at times, modally different from experimenters' interpretations, but also predictably moderated by multiple aspects of context. Such findings certainly do not prove that humans are rational, but they do show that some influential claims about human irrationality in decision-making are unwarranted. Such claims would benefit from careful consideration of possible linguistic effects on people's judgments and decisions.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Arrow, K. J. (1982). Risk perception in psychology and economics. Econ. Inq. 20, 1–9. doi: 10.1111/j.1465-7295.1982.tb01138.x

Austin, J. L. (1979). “The meaning of a word,” in Philosophical Papers, 3rd Edn., eds J. O. Urmson and G. J. Warnock (Oxford, UK: Oxford University Press), 55–75.

Berkeley, D., and Humphreys, P. (1982). Structuring decision problems and the “bias heuristic.” Acta Psychol. 50, 201–252. doi: 10.1016/0001-6918(82)90042-7

Breheny, R. (2008). A new look at the semantics and pragmatics of numerically quantified noun phrases. J. Semant. 25, 93–139. doi: 10.1093/jos/ffm016

Carston, R. (1998). “Informativeness, relevance and scalar implicature,” in Relevance Theory: Applications and Implications, eds R. Carston and S. Uchida (Amsterdam: Benjamins), 179–236.

Geurts, B. (2006). “Take ‘five’: The meaning and use of a number word,” in Non-Definiteness and Plurality, eds S. Vogeleer and L. Tasmowski (Philadelphia, PA: Benjamins), 311–329.

Gigerenzer, G. (1996). On narrow norms and vague heuristics: a reply to Kahneman and Tversky (1996). Psychol. Rev. 103, 592–596.

Goldstein, W. M., and Weber, E. U. (1997). “Content and discontent: indications and implications of domain specificity in preferential decision making,” in Research on Judgment and Decision Making, eds W. M. Goldstein and R. M. Hogarth (Cambridge, UK: Cambridge University Press), 566–617.

Henle, M. (1962). On the relation between logic and thinking. Psychol. Rev. 69, 366–378. doi: 10.1037/h0042043

Hilton, D. J. (1995). The social context of reasoning: conversational inference and rational judgment. Psychol. Bull. 118, 248–271. doi: 10.1037/0033-2909.118.2.248

Horn, L. R. (1989). A Natural History of Negation. Chicago, IL: University of Chicago Press.

Jou, J., Shanteau, J., and Harris, R. J. (1996). An information processing view of framing effects: the role of causal schemas in decision making. Mem. Cogn. 24, 1–15. doi: 10.3758/BF03197268

Kühberger, A. (1995). The framing of decisions: a new look at old problems. Organ. Behav. Hum. Decis. Process. 62, 230–240. doi: 10.1006/obhd.1995.1046

Levin, I. P., Schneider, S. L., and Gaeth, G. J. (1998). All frames are not created equal: a typology and critical analysis of framing effects. Organ. Behav. Hum. Decis. Process. 76, 149–188. doi: 10.1006/obhd.1998.2804

Levinson, S. C. (2000). Presumptive Meanings. Cambridge, MA: MIT Press.

Mandel, D. R. (2001). Gain-loss framing and choice: separating outcome formulations from descriptor formulations. Organ. Behav. Hum. Decis. Process. 85, 56–76. doi: 10.1006/obhd.2000.2932

Mandel, D. R. (2014). Do framing effects reveal irrational choice? J. Exp. Psychol. Gen. 143, 1185–1198. doi: 10.1037/a0034207

Mandel, D. R., and Vartanian, O. (2011). “Frames, brains, and content domains: neural and behavioral effects of descriptive content on preferential choice,” in Neuroscience of Decision Making, eds O. Vartanian and D. R. Mandel (New York, NY: Psychology Press), 45–70.

Moxey, L. M., and Sanford, A. J. (2000). Communicating quantities: a review of psycholinguistic evidence of how expressions determine perspectives. Appl. Cogn. Psychol. 14, 237–255. doi: 10.1002/(SICI)1099-0720(200005/06)14:3<237::AID-ACP641>3.0.CO;2-R

Musolino, J. (2004). The semantics and acquisition of number words: integrating linguistic and developmental perspectives. Cognition 93, 1–41. doi: 10.1016/j.cognition.2003.10.002

Noveck, I. A. (2001). When children are more logical than adults: experimental investigations of scalar implicature. Cognition 78, 165–188. doi: 10.1016/S0010-0277(00)00114-1

Noveck, I. A., and Reboul, A. (2008). Experimental pragmatics: a Gricean turn in the study of language. Trends Cogn. Sci. 12, 425–431. doi: 10.1016/j.tics.2008.07.009

Phillips, L. D. (1983). “A theoretical perspective on heuristics and biases in probabilistic thinking,” in Analyzing and Aiding Decision Processes, eds P. C. Humphries, O. Svenson, and A. Vari (New York, NY: Elsevier), 525–543.

Rettinger, D. A., and Hastie, R. (2001). Content effects on decision making. Organ. Behav. Hum. Decis. Process. 85, 336–359. doi: 10.1006/obhd.2000.2948

Stanovich, K. E., and West, R. F. (2000). Individual differences in reasoning: implications for the rationality debate? Behav. Brain Sci. 23, 645–665. doi: 10.1017/S0140525X00003435

Teigen, K. H. (in press). “Framing of numerical quantities,” in Blackwell Handbook of Judgment and Decision Making: An Interdisciplinary Perspective, eds G. Keren and G. Wu (Oxford, UK: Blackwell).

Teigen, K. H., and Nikolaisen, M. I. (2009). Incorrect estimates and false reports: how framing modifies truth. Think. Reason. 15, 268–293. doi: 10.1080/13546780903020999

Tombu, M., and Mandel, D. R. (2015). When does framing influence preferences, risk perceptions, and risk attitudes? The explicated valence account. J. Behav. Decis. Mak. doi: 10.1002/bdm.1863. [Epub ahead of print].

Tversky, A., and Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science 211, 453–458. doi: 10.1126/science.7455683

Tversky, A., and Kahneman, D. (1986). Rational choice and the framing of decisions. J. Bus. 59, S251–S278. doi: 10.1086/296365

Wagenaar, W. A., Keren, G., and Lichtenstein, S. (1988). Islanders and hostages: deep and surface structures of decision problems. Acta Psychol. 67, 175–189.

Wittgenstein, L. (1953). Philosophical Investigations. Oxford, UK: Blackwell.

Keywords: decision-making, rationality, numeric quantifiers, context, linguistic interpretation

Citation: Mandel DR (2015) Communicating numeric quantities in context: implications for decision science and rationality claims. Front. Psychol. 6:537. doi: 10.3389/fpsyg.2015.00537

Received: 20 February 2015; Accepted: 14 April 2015;
Published: 30 April 2015.

Edited by:

Marco Cruciani, University of Trento, Italy

Reviewed by:

Pietro Perconti, University of Messina, Italy

Copyright © 2015 Her Majesty the Queen in Right of Canada, as represented by Defence Research and Development Canada. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: David R. Mandel, david.mandel@drdc-rddc.gc.ca
