10 Cherry Picking Data

What is cherry picking?

Cherry picking is the deliberate practice of presenting only the results of a study or experiment that best support a hypothesis or argument, instead of reporting all of the findings (Morse, 2010, p. 1). Cherry picking can also occur in the process of conducting an experiment, where the researcher intentionally selects participants from narrow categories in order to steer the outcome of the experiment (Morse, 2010, p. 1).

Here is an example to better explain what cherry picking is: say a researcher conducts an experiment to prove a claim. After conducting the experiment, they find that only 20% of the results actually support the claim, while the other 80% contradict it. A researcher who employed cherry picking would present only the 20% of results that supported the claim, rather than all of the results obtained in the experiment.
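The arithmetic in this example can be sketched in a few lines of Python. This is a purely hypothetical illustration; the 20/80 split and the variable names are invented for demonstration:

```python
# Hypothetical illustration: an experiment yields 100 trial outcomes,
# of which only 20 support the researcher's claim (numbers invented).
results = [True] * 20 + [False] * 80  # True = outcome supports the claim

# Honest reporting: the claim is supported in 20% of trials.
honest_support_rate = sum(results) / len(results)

# Cherry picking: only the supportive trials are reported,
# so the claim appears to be supported 100% of the time.
reported = [r for r in results if r]
cherry_picked_rate = sum(reported) / len(reported)

print(honest_support_rate)   # 0.2
print(cherry_picked_rate)    # 1.0
```

The same data produce two very different impressions depending on which subset is reported, which is exactly the distortion cherry picking introduces.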

When is cherry picking used?

Here are some reasons people choose to cherry-pick:

  • They want to appear more credible and successful in their field.
  • There is a strong notion in the field of science that unsuccessful studies are not useful (Couzin-Frankel, 2013, p. 68).

But regardless of the reason, cherry picking should never be an option.

Why is cherry picking a problem?

Cherry picking is not only dishonest and misleading to the public; it also reduces the credibility of experimental findings because it does not present all the results of an experiment. This can make it seem as though an experiment was entirely successful when in fact it was not. Cherry picking results also makes it harder to find patterns and subtleties in the presented data, since the focus is on certain positive outcomes rather than the larger picture (Morse, 2010, p. 1).

Cherry picking often leads to overgeneralization: because little data is presented (the researcher has chosen only the most supportive aspects of the experiment), the researcher stretches the findings to apply to a larger population. Cherry picking subjects for a study is especially problematic in experiments involving people, because when a researcher selects candidates based on how well they believe those subjects will support the claim, considerable bias enters the research process. This bias limits the credibility of the study because it shapes the outcome of the experiment (Morse, 2010, p. 1).

How is Cherry Picking Viewed in the Scientific Community?

In the scientific community, cherry picking is a contentious topic. In theory, it is widely discouraged, but in practice it is very common (Couzin-Frankel, 2013, p. 68). There seems to be a widespread feeling among researchers in the field of science that failed studies are not useful. Some researchers even feel incompetent when their experiments fail to support their claims. Unfortunately, it appears that "studies yielding null results are less likely to be written up, submitted, and published than those with positive findings" (Franco, Malhotra, & Simonovits, 2015, p. 1).

These sentiments can run so strong among researchers that they feel compelled to present only the successful aspects of their experiments to the public (Couzin-Frankel, 2013, p. 69). Although most researchers understand that this practice is dishonest, they would rather present a study that is not credible than face the "embarrassment" of an unsuccessful one. The reality is that all experiments have value in the scientific community, even when the results do not support the researcher's claim. There is something to learn from both successes and failures, and more researchers should embrace the idea of presenting all aspects of their research.

Case Studies/Examples of Cherry Picking

Case Study #1

In 2011, author Aric Sigman submitted a paper about the effects of daycare on young children to the scientific journal The Biologist (Goldacre). Sigman's paper featured information from various sources and studies, but he only used information that supported his claim (Goldacre). Unfortunately, the Daily Mail picked up on his paper and published the headline "Sending babies and toddlers to daycare could do untold damage to the development of their brains and their future health" (Goldacre). The issue with cherry picking is made quite obvious in this example, as Sigman's unethical behavior resulted in a provocative headline that introduced misleading information into the public sphere.

Case Study #2

In the 1980s, Cynthia Mulrow revealed that academic sources and textbooks were cherry picking information, and therefore presenting incomplete representations of various topics (Goldacre). This case of cherry picking is especially problematic because people view textbooks and academic sources as credible; when the public is presented with incomplete information that it believes to be true, significant misunderstanding takes place.

Case Study #3

Researcher Glenn Begley was trying to replicate a study of tumor growth in animals, based on a scientific paper written by an unnamed researcher (Couzin-Frankel, 2013, p. 68). But for some reason, Begley could not seem to get the same outcome that the researcher did (Couzin-Frankel, 2013, p. 68). So in 2011, Begley met with the unnamed researcher and described his dilemma, to try to understand what he was doing wrong (Couzin-Frankel, 2013, p. 68). The researcher explained that of the 12 trials conducted in the experiment, only one was successful, and that was the trial that was published (Couzin-Frankel, 2013, p. 68). In this case, the unnamed researcher's cherry picking in the presentation of their results misled other researchers into believing that the study was entirely successful when in fact it was not. Not only was the researcher's conduct unethical, but it was also a blatant misrepresentation of the progress being made in that particular branch of cancer research.
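The pattern in this case study, running many trials and publishing only the best one, can be sketched as a small simulation. This is a hypothetical illustration, not a reconstruction of the actual experiment: it measures a made-up "effect" that is pure noise, so any apparent success is chance:

```python
import random

# Hypothetical sketch: a treatment with no real effect is tested in 12
# independent trials. By chance, some trials will look "successful";
# publishing only the best one misrepresents the evidence.
random.seed(42)  # fixed seed so the sketch is reproducible

def run_trial(n=30):
    """Return a fake 'effect size' that is pure noise around zero."""
    return sum(random.gauss(0, 1) for _ in range(n)) / n

effects = [run_trial() for _ in range(12)]

published = max(effects)      # cherry-picked: only the single best trial
average = sum(effects) / 12   # honest: the average across all 12 trials

print(published, average)
```

Because the published value is the maximum of twelve noisy outcomes, it always looks at least as strong as the honest average, even though the underlying effect here is zero. That is why a single published "success" out of many unreported trials is so misleading.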

These case studies demonstrate how cherry picking in the presentation of data can be very misleading and misinforming to the public. It is important to understand the significant role that scientific resources have in our society. People trust that the information these sources provide them with is credible, so researchers have a responsibility to present the full range of their results instead of just the successes. By employing methods of cherry picking, researchers contribute to the many scientific rumors and myths that circulate in the public sphere, and also help to create an environment of distrust between the people and those working in the field of science.

How Should We Deal with Cherry Picking?

Can cherry picking be avoided?

The easiest way to avoid cherry picking is simply not to do it. Cherry picking is a deliberate act, not an accident. Researchers should always present the full range of their findings, not just the results that make them seem most credible.

To avoid cherry picking in the process of conducting a study, researchers should use participants from a wide range of backgrounds (when possible), in order to limit the bias that comes along with limited perspective. Or, if their study requires a more limited group of participants, they should not try to make their findings apply to a larger audience through generalization (Morse, 2010, p. 1).

Also, when reporting on findings, researchers should be careful to avoid choosing words that imply meaning beyond what actually happened in the experiment (Couzin-Frankel, 2013, p. 68). Words that exaggerate certain aspects of an experiment act like a magnifying glass, highlighting some parts of a study while the rest is overlooked.

By employing good research techniques such as increasing sample size, conducting double-blind experiments, and using careful word choice, researchers can avoid cherry picking both when selecting candidates for a study and when presenting their research (Goldacre).

How Should We View Cherry Picking?

But why is cherry picking dishonest? Shouldn't researchers highlight the aspects of their experiments that support their claim? The only acceptable answer to this question is: no.

The goal of science is not to be right; it is to accurately present the findings of research or experimentation. By withholding the parts of an experiment that do not support a claim, researchers mislead others into believing that claim.

Richard Somerville, an American climate scientist, said, "Choosing to make selective choices among competing evidence, so as to emphasize those results that support a given position, while ignoring or dismissing any findings that do not support it, is a practice known as 'cherry picking' and is a hallmark of poor science or pseudo-science" (Coghill).

As this quote explains, researchers should view cherry picking as a practice of bad researchers who are not truly committed to the field of science. In no case is cherry picking an effective way of presenting data or conducting a study. In fact, cherry picking only serves to undermine the field of science and discredit researchers and their work.


References

Coghill, Graham. (2012, April 12). Devious deception in displaying data: Cherry picking [Web log post]. Retrieved from http://scienceornot.net/2012/04/03/devious-deception-in-displaying-data-cherry-picking/

Couzin-Frankel, Jennifer. (2013). The power of negative thinking. Science, 342, 68-69. Retrieved from http://www.sciencemag.org/content/347/6222/619.1.full.pdf?sid=fb0199de-cd32-41bc-badc-ee53968fe5d8

Franco, Annie, Malhotra, Neil, & Simonovits, Gabor. (2015). Underreporting in psychology experiments: Evidence from a study registry. Social Psychological and Personality Science, 1. doi: 10.1177/1948550615598377

Goldacre, Ben. (2011, September 29). Cherry picking is bad. At least warn us when you do it [Web log post]. Retrieved from http://www.badscience.net/2011/09/cherry-picking-is-bad-at-least-warn-us-when-you-do-it/

Johnson, James J.S. (2015). Cherry picking data is the pits [Web log post]. Retrieved from http://www.icr.org/article/cherry-picking-data-pits/

Morse, Janice M. (2010). Cherry picking: Writing from thin data. Qualitative Health Research, 20(1), 1.