

Perspective Article

Front. Ecol. Evol., 19 May 2015

Blind trust in unblinded observation in Ecology, Evolution, and Behavior

Melissa R. Kardish1,2*, Ulrich G. Mueller1, Sabrina Amador-Vargas1, Emma I. Dietrich1, Rong Ma1, Brian Barrett1 and Chi-Chun Fang1
  • 1Department of Integrative Biology, University of Texas at Austin, Austin, TX, USA
  • 2Center for Population Biology, University of California, Davis, CA, USA

We surveyed 492 recent studies in the fields of ecology, evolution, and behavior (EEB) to evaluate the potential for observer bias and the need for blind experimentation in each study. While 248 articles included experiments that could have been influenced by observer bias, only 13.3% of these articles indicated that experiments were blinded. The use of blind observation therefore was either grossly underreported in the surveyed articles, or many EEB studies were not blinded. We hope that a concerted effort by the field of EEB—including researchers, peer-reviewers, and journal editors—will help promote and institute routine blind observation as an essential standard practiced across all sciences.

Training in the scientific method emphasizes accurate observation, unbiased experimentation, and objective thinking. Despite this training, much research remains infused with unconscious biases (Rosenthal and Rosnow, 2007; Fanelli, 2010), resulting in wasted effort (i.e., the need to rectify wrong conclusions) and even medical harm (Foster et al., 1975; Hróbjartsson et al., 2012; van Wilgenburg and Elgar, 2013; Kozlov et al., 2014; Tuyttens et al., 2014). It is unclear how much published science is affected by bias and might therefore be false (Ioannidis, 2005; Fanelli, 2012).

Unconscious confirmation bias is a common source of inaccurate observation where observers interpret what they see in a way that supports their expectations or preferred hypotheses. To eliminate confirmation bias, experimenters should blind themselves, for example by concealing any information that could bias observations (e.g., awareness of the hypothesis tested or of the treatment condition of a specific sample). While blinding is routine in medical and psychological research (Burghardt et al., 2012; Hróbjartsson et al., 2012; Begley, 2013), blinding is unfortunately not a universal practice in biology. For example, a survey of kin-recognition studies—a cornerstone of animal behavior—found that 71% of studies testing for kin recognition in ants did not report the use of blind observation, and, more disconcerting, studies that did not report blind observation reported significantly greater effect sizes than studies that reported blinding (van Wilgenburg and Elgar, 2013). Likewise, herbivory of woody plants was rated nearly 500% higher with unblinded methods compared to blinded methods (Kozlov et al., 2014). We here expand on such previous surveys by evaluating the methods of 492 research articles published recently in the general area of ecology, evolution, and behavior (EEB), specifically assessing the use of blind observation and the potential for observer bias to affect results.

For our survey, we selected nine prominent journals publishing in the area of EEB (e.g., Ecology, Evolution, American Naturalist, Animal Behaviour) and EEB articles in four general-interest journals (Science, Nature, Proceedings of the National Academy of Sciences, Proceedings of the Royal Society B) (see Supplementary Materials, Appendix A). For each study, we first evaluated whether observations could potentially be influenced by observer bias, then asked whether blind observation was reported in the published methods. Because blinding is sometimes logistically difficult, for studies that did not report blind observation we also evaluated whether blinding would be (i) easy, (ii) easy with a naïve experimenter, or (iii) difficult to implement. We attempted to evaluate the relative importance of blind observation between studies; however, criteria for the importance of blind observation differ between experimental designs and between EEB research areas (see Table S1), and exact methodological details were sometimes difficult for readers to evaluate (a key reason why blinding should be routine: it assures readers of unbiased observation). Two readers independently read and scored each article; if scores differed, the readers discussed the respective article in detail to reach a consensus (see Supplementary Materials for detailed protocols). Readers also identified experimental steps where blind observation would reduce any bias and evaluated possible methodologies that could have been used to reduce observer bias. Table S1 summarizes the rationale for scores of each article.

Across all 492 EEB articles surveyed, we judged 50.4% (n = 248) to have potential for observer bias, but only 13.3% (n = 33 of 248) of these articles stated the use of blind observation. Some articles explicitly stated the use of blind observation in the methods (n = 24), while others indicated indirectly that experiments had been done blind (n = 9; e.g., use of a naïve experimenter; Figure 1). In the remaining articles (n = 244 of 492), it seemed unlikely that observer bias could have affected the results, though many studies could easily have been blinded (Figure S1) (Balph and Balph, 1983). Therefore, either the use of blind observation was grossly underreported in the surveyed articles, or many studies were not conducted blind.


Figure 1. Of 492 articles published in January and February 2012 in 13 prominent journals covering the fields of ecology, evolution, and behavior (EEB), 248 articles reported on experiments that had potential for observer bias. Only 13.3% (n = 33 of 248) of these articles stated the use of blinding in the methods. The remaining 244 articles were judged unlikely to be affected by observer bias (see Figure S1). The use of blind observation therefore was either grossly underreported in the surveyed articles, studies had not been blinded, or a combination of both.
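The headline proportions follow directly from the counts above; a minimal arithmetic check, using only the numbers reported in the text:

```python
# Sanity-check of the survey's headline numbers (all counts from the article).
surveyed = 492           # EEB articles surveyed
bias_potential = 248     # judged to have potential for observer bias
blinded = 24 + 9         # explicit + indirect statements of blinding (n = 33)

print(round(100 * bias_potential / surveyed, 1))   # 50.4% with bias potential
print(round(100 * blinded / bias_potential, 1))    # 13.3% reporting blinding
print(surveyed - bias_potential)                   # 244 judged unlikely biased
```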

In a surprisingly high percentage of the studies where blind observation might have reduced bias (55.8%; n = 120 of 215), observations would have been easy to blind (e.g., by coding video recordings or samples before scoring), yet such blinding was either not reported or did not occur (Figure S1). This percentage increases to 78.6% (n = 169 of 215) if we include studies that could have been blinded easily by adding a naïve experimenter.

While summarizing all information, we noted differences between journals in the frequency of reporting blind observation. Despite a wide range in impact factor among the surveyed journals (Figure 2), we found that (a) impact factor does not correlate with the percentage of articles explicitly reporting the use of blind observation; (b) the percentage of articles reporting blind observation does not differ significantly between journals (G-test with Williams correction, df = 12, p = 0.7323); and (c) journals of comparable impact factor differ widely in the proportion of published articles reporting the use of blind observation. Figure S2 presents the data for individual journals. No journal had more than 25% of its articles report blinding (Figure 2).


Figure 2. Of the 13 surveyed journals publishing in ecology, evolution, and behavior, the percentage of articles reporting the use of blind observation does not differ significantly between journals (G-test with Williams correction, df = 12, p = 0.7323). Percentage is calculated as the number of articles reporting blind observation out of the total number of articles with potential for observer bias (both numbers are reported next to each journal name). Journals of comparable impact factor differ widely in the percentage of articles that reported blind observation.
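The between-journal comparison rests on a G-test of independence with Williams' correction (hence df = 12 for a 13-journals × 2-outcomes table). The sketch below implements that statistic in plain Python; the per-journal (blinded, unblinded) counts are hypothetical stand-ins invented for illustration, and the article's real counts appear in Figure 2 and Table S1.

```python
import math

# Hypothetical (blinded, unblinded) counts for four stand-in journals;
# the article's real 13-journal counts are in its Figure 2 / Table S1.
table = [
    [3, 20],
    [2, 15],
    [5, 25],
    [1, 18],
]

r, c = len(table), len(table[0])
row_totals = [sum(row) for row in table]
col_totals = [sum(table[i][j] for i in range(r)) for j in range(c)]
n = sum(row_totals)

# G statistic: 2 * sum of O * ln(O/E), with expected counts E = row * col / n.
G = 0.0
for i in range(r):
    for j in range(c):
        o = table[i][j]
        if o > 0:
            e = row_totals[i] * col_totals[j] / n
            G += 2.0 * o * math.log(o / e)

# Williams' correction for an r x c table: divide G by q (q > 1 shrinks G
# slightly, countering the G-test's anticonservative small-sample behavior).
q = 1.0 + ((n * sum(1.0 / t for t in row_totals) - 1.0)
           * (n * sum(1.0 / t for t in col_totals) - 1.0)
           / (6.0 * n * (r - 1) * (c - 1)))
G_adj = G / q
df = (r - 1) * (c - 1)
print(df, round(G, 3), round(G_adj, 3))
```

Comparing G_adj against a chi-square distribution with df degrees of freedom (e.g., `scipy.stats.chi2.sf(G_adj, df)`) then yields a p-value of the kind reported in the article.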

While the average number of articles scored per journal was 37.8, we recognize that the number of articles surveyed for some journals was small (e.g., of 12 articles surveyed in PLoS Biology, only two articles had the potential for observer bias, but neither addressed the use of blind observation; Figure 2 and Table S1). The small sample sizes for some journals preclude strong inferences about journal-level differences in the valuing and reporting of blind observation. However, if the 492 surveyed articles reflect general practices in the field of EEB, blind observation seems consistently unreported or underutilized.

To address the apparent blind spot for blind observation in EEB, we recommend that EEB researchers design their experiments more carefully and train each other consistently in the importance of conducting and reporting blind observations. No one would trust a novel medical treatment or drug unless the clinical research had been conducted blind and had been reported as such (Hróbjartsson et al., 2012), yet the field of EEB appears to overlook a habitual lack of reporting blind observation in published methods. Previous surveys have reported on the lax attitude toward blind observation specifically in behavioral research (Balph and Balph, 1983; Rosenthal and Rosnow, 2007; Burghardt et al., 2012), but similar methodological problems apparently pertain to the broader field of EEB. Blind observation is routine in medical and psychological research (Begley, 2013), and it would seem important that EEB conduct research with the same kind of experimental rigor that should be routine across all sciences. Moreover, EEB researchers are aware that the human mind did not evolve to be unbiased in perception and cognitive processing (Trivers, 2011), yet they appear to operate as if biases or self-deception do not affect EEB research.

To remedy underreporting of blind experimentation, we recommend that EEB researchers report for each experiment whether the study was blinded (or not), and explain how any blinding was accomplished (or why blinding was not possible). We also recommend that peer-reviewers and editors require accurate reporting of blinding in the methods section and require that authors reveal in their methods any unblinded experimentation. Such accurate reporting of methods will permit readers to gain a better understanding of the strengths of a study and should facilitate progress in future research building on published work. Finally, we recommend that editorial policies of journals require reporting of both blinded and unblinded observation, and that journals improve guidelines that help peer-reviewers evaluate the need for blind observation. We hope that a concerted effort by the field of EEB will soon match the routine, standardized use of blind experimentation in other fields, stimulate a more critical reading of the published literature, and thus establish a firm tradition of blind experimentation in ecology, evolution, and behavior.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.


Acknowledgments

We thank the Mueller and Stachowicz labs for comments on the manuscript. MRK and UGM conceived and designed this study and wrote the manuscript. All authors scored articles, edited the manuscript, and reviewed the final Table S1. The work was supported by a National Science Foundation grant (DEB-0919519).

Supplementary Material

The Supplementary Material for this article can be found online at:


References

Balph, D. F., and Balph, M. H. (1983). On the psychology of watching birds: the problem of observer-expectancy bias. Auk 100, 755–757.

Begley, C. G. (2013). Six red flags for suspect work. Nature 497, 433–434. doi: 10.1038/497433a

Burghardt, G. M., Bartmess-LeVasseur, J. N., Browning, S. A., Morrison, K. E., Stec, C. L., Zachau, C. E., et al. (2012). Perspectives - Minimizing observer bias in behavioral studies: a review and recommendations. Ethology 118, 511–517. doi: 10.1111/j.1439-0310.2012.02040.x

Fanelli, D. (2010). Do pressures to publish increase scientists' bias? An empirical support from US States data. PLoS ONE 5:e10271. doi: 10.1371/journal.pone.0010271

Fanelli, D. (2012). Negative results are disappearing from most disciplines and countries. Scientometrics 90, 891–904. doi: 10.1007/s11192-011-0494-7

Foster, G. G., Ysseldyke, J. E., and Reese, J. H. (1975). “I wouldn't have seen it if I hadn't believed it.” Except. Children 41, 469–473.

Hróbjartsson, A., Thomsen, A. S., Emanuelsson, D., Tendal, B., Hilden, J., Boutron, I., et al. (2012). Observer bias in randomised clinical trials with binary outcomes: a systematic review of trials with both blinded and non-blinded outcome assessors. Br. Med. J. 344:e1119. doi: 10.1136/bmj.e1119

Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Med. 2:e124. doi: 10.1371/journal.pmed.0020124

Kozlov, M. V., Zverev, V., and Zvereva, E. L. (2014). Confirmation bias leads to overestimation of losses of woody plant foliage to insect herbivores in tropical regions. PeerJ 2:e709. doi: 10.7717/peerj.709

Rosenthal, R., and Rosnow, R. (2007). Essentials of Behavioral Research: Methods and Data Analysis. San Francisco, CA: McGraw-Hill.

Trivers, R. (2011). Deceit and Self-Deception: Fooling Yourself the Better to Fool Others. London: Allen Lane.

Tuyttens, F. A. M., de Graaf, S., Heerkens, J. L. T., Jacobs, L., Nalon, E., Ott, S., et al. (2014). Observer bias in animal behaviour research: can we believe what we score, if we score what we believe? Anim. Behav. 90, 273–280. doi: 10.1016/j.anbehav.2014.02.007

van Wilgenburg, E., and Elgar, M. A. (2013). Confirmation bias in studies of nestmate recognition: a cautionary note for research into the behaviour of animals. PLoS ONE 8:e53548. doi: 10.1371/journal.pone.0053548

Keywords: blind observation, bias, experimental design, confirmation bias, ecology, evolution, behavior

Citation: Kardish MR, Mueller UG, Amador-Vargas S, Dietrich EI, Ma R, Barrett B and Fang C-C (2015) Blind trust in unblinded observation in Ecology, Evolution, and Behavior. Front. Ecol. Evol. 3:51. doi: 10.3389/fevo.2015.00051

Received: 01 April 2015; Accepted: 04 May 2015;
Published: 19 May 2015.

Edited by:

Matjaz Kuntner, Scientific Research Centre of the Slovenian Academy of Sciences and Arts, Slovenia

Reviewed by:

Shawn M. Wilder, The University of Sydney, Australia
Timothy Hume Parker, Whitman College, USA

Copyright © 2015 Kardish, Mueller, Amador-Vargas, Dietrich, Ma, Barrett and Fang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Melissa R. Kardish, Center for Population Biology, University of California at Davis, One Shields Avenue, Davis, CA 95616, USA,