As we navigate the landscape of arguments on theism, science, and politics, it’s common to get caught up in playing logical fallacy bingo, naming every fallacy that comes up in our informal debates. We pride ourselves on recognizing a bad argument, particularly when formal logical fallacies pop up, but one move that gets swept up in the mix is the appeal to scientific consensus. This is commonly cited when arguing about topics such as climate change, vaccine research, and even mythicism. On the surface, an appeal to scientific consensus looks like both an argument from authority and an argument from popularity. Just because an argument comes from experts in the field doesn’t mean it’s right, and just because lots of experts believe something doesn’t mean it’s correct.
But is that really all we are saying? Are we making an argument that simply states, “Lots of scientists believe A, therefore A”? While it’s true that this conclusion doesn’t necessarily follow from the premise, there are a few intermediate steps and a few assumptions tucked away in there that aren’t included in that argument. The power of citing scientific consensus is tied to the fundamental philosophy of scientific thinking: how scientists determine which models are accurate and which are not, so that our knowledge approaches the truth.
Where scientific consensus comes from
First of all, let’s review where scientific consensus comes from. Modern science relies upon the peer-review process. When a researcher submits a paper to a journal for publication, experts in the same research area volunteer to review the paper to ensure that it’s fit for publication. Being asked to be a reviewer means you are considered an expert in the field, and referee experience earns you respect. It is part of your responsibility to ensure that the journal publishes quality work and that your field puts out reliable information. It’s also important to keep in mind the “publish or perish” mentality. The reviewers of a paper are likely to be competing for similar funding sources, so they have every reason to go through it with a fine-tooth comb to find any flaw in the methodology. It also doesn’t help when two different researchers are racing to be first to publish similar data. It pays to have the most novel research in a field, so researchers fight tooth and nail with each other, which means an independent reviewer has a strong motivation to pick apart anything wrong they can find in an article. There are a variety of motivations for making sure that what gets through peer review is quality work, so there’s pressure for good data to come out of any given paper.
Not only is there basic filtering in the peer-review process for any given journal, but articles may also be retracted. Retractions are for articles with so many problems that they shouldn’t be used as a reference for further research. This can be due not only to terrible scientific methodology, but also to fraud or unethical behavior by the researcher in the course of the experiment. Such was the case when The Lancet retracted Andrew Wakefield’s study linking the MMR vaccine to childhood autism.
These procedures are a pretty good start, but there are a few problems. The publish-or-perish mentality also pushes people to submit as many papers as they can. This causes the same problem you expect anywhere that values quantity over quality: the quality suffers. Journals can often be overwhelmed with papers, and as a result the referees share this burden. I’ve mentioned that the reviewers are volunteers, so professors are spending hours on unpaid work. This dilutes the motivation to let only quality papers through the process: a researcher can save time by being less meticulous so they can get back to their paid job. This is one of many reasons why the peer-review process is far from perfect. In fact, a famous analysis argues that “Most Published Research Findings Are False” (keep in mind that individual publications won’t necessarily support what the consensus turns out to be).
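The arithmetic behind that famous claim is worth seeing. A minimal sketch, with illustrative numbers of my own choosing (not figures from the paper): if only a small fraction of tested hypotheses are true, even well-run studies produce many false positives.

```python
# Sketch of the positive-predictive-value arithmetic behind claims like
# "most published findings are false". All numbers below are invented
# assumptions for illustration, not data from any paper.

def ppv(prior, power, alpha):
    """Probability that a 'statistically significant' finding is actually true."""
    true_pos = prior * power          # real effects correctly detected
    false_pos = (1 - prior) * alpha   # null effects flagged by chance
    return true_pos / (true_pos + false_pos)

# Suppose only 1 in 10 tested hypotheses is true, studies have 50% power,
# and the usual 5% significance threshold is used:
print(ppv(prior=0.10, power=0.50, alpha=0.05))  # ≈ 0.53
```

Under those assumed numbers, barely half of "significant" results would reflect a real effect, even before any fraud or sloppiness enters the picture.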
What? So most of science is a lie? Not exactly; there’s a lot more to the system of modern science than that.
Independent verification and reproducibility
Reproducibility is the heart and soul of the scientific process. A researcher should run multiple trials of an experiment for verification; one of the easiest ways to get rejected from a journal is by not performing enough of them. The error bars you see on a graph come from this repetition. The onus is on the individual researcher to verify that their own conclusions are valid. If an experiment is performed twice with wildly different results, the scientist has to examine the methodology closely and make sure it is capable of consistent results before putting them down on paper.
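To make the error-bar remark concrete, here is a hedged sketch of how repeated trials become the "value ± error" you see on a plot. The measurements are made-up numbers for illustration.

```python
# How repeated trials turn into error bars: report the mean across trials
# plus the standard error of the mean. The data below are invented.
import statistics

trials = [102.1, 98.7, 101.4, 99.9, 100.3]  # e.g. five repeats of one measurement

mean = statistics.mean(trials)
# Standard error of the mean: sample standard deviation / sqrt(number of trials).
sem = statistics.stdev(trials) / len(trials) ** 0.5

print(f"{mean:.1f} ± {sem:.1f}")  # the "value ± error bar" a paper would report
```

More trials shrink the standard error roughly as 1/√n, which is why reviewers ask for them.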
Then it becomes a point of independent verification. Each scientific paper has a section describing experimental methodologies, equipment used, software, algorithms, and statistical tests. These are supposed to be written such that any lab with comparable means could do the experiment in a similar manner and achieve similar results. In reality, there isn’t a lot of funding available simply to redo somebody else’s study. That said, research is often heavily interrelated from lab to lab, so a lot of work will tend to support similar conclusions.
Let’s give an example. A ligand is a molecule, typically in a biochemical system, that can physically “bind” with another molecule known as a receptor to give a signal or response. Say you are trying to compare the strength of binding between three different ligands: A, B, and C. You could use an Atomic Force Microscope to measure the amount of force it takes to unbind a ligand from its receptor. A plot for that experiment might look something like this:
Don’t worry about what exactly this graph is telling you; the important thing is that B has the highest force to pull apart a ligand from its receptor, while C has the lowest. Another lab may not be interested in the force between two molecules, but maybe they have a device that produces a specific signal as long as the molecules are bound. They also do a different study on the same molecules, and come up with data that look like this:
Again, don’t worry about the specifics. To a person not in the field, this may look like nonsense, but to an expert, these data sets corroborate each other. In the second graph, the distribution shows that ligand B has the longest signal duration, and the numbers in the table below support this quantitatively. Basically, the first data set shows that B binds the most strongly, and the second shows that B stays together the longest. Even if you’re unfamiliar with ligand-receptor pairs, it makes sense that things held together more strongly will stick together longer, so it’s clear that both of these results agree with each other.
This is how science tends to work: it gains strength from multiple studies independently verifying each other. On its own, either of these studies may not be worth much. If I were a grad student studying ligand receptors and these two papers showed opposite trends, I wouldn’t have much to work with. That’s ok. Science builds and builds on what it has to develop more knowledge over time. You get science bonus points if you can verify things through alternative methodologies. These two data sets said essentially the same thing about the strength of binding. Imagine a third research group tested these same ligand-receptor pairs in living cells and found that the pairs with ligand B performed a desired cellular function best. That result measures something quite different from the previous experiments, but it provides even better evidence that the “B” pair is the most functional, further affirming the earlier results. Since everything ties together with what we already know, it confirms that this piece of the puzzle is in the right spot.
Here’s another important thing to keep in mind: research necessarily works at the very cutting edge of what we know, where everything is still inconclusive. It is not one person’s responsibility to advance an entire field or body of knowledge, and any given paper is usually not conclusive on its own. This is one of the reasons why science reporting is notoriously bad: a lot of it is done on preliminary work, and some findings simply had poor methodologies that slipped through peer review. This is not an uncommon occurrence.
This leads to why the consensus is important. Even though science continues to be the most reliable method for investigating reality, it will never make us 100% certain of anything. But we can have better and better ideas of what’s true over time. At a certain point, if I wanted to build a device that required strong binding to a particular molecule, I would be forced to make a decision. I might just have to choose B. In the future, it might turn out that for certain reasons it’s not the best, and that’s ok. New science depends on older science, and it’s a slow, gradual process. If I had to create an image to represent that idea, it might look something like this*:
If some scientific findings were not accurate, then further research based on them wouldn’t be possible. For example, we can use ammonite species fossils to date ancient geological layers. This wouldn’t be feasible if (a) evolution weren’t true or (b) radiometric dating were inaccurate. But rapid speciation can and does happen over geologically short periods of time (a few million years), and we have a pretty firm grasp of many forms of radiometric decay. Many, many publications verify this dating methodology, and many more papers can be built on those to establish a better natural history, one that also tends to corroborate findings in other fields.
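The radiometric half of that argument rests on a simple formula. Here is a minimal sketch, assuming an idealized closed system (no daughter isotope at formation, nothing leaks in or out); the measured ratio in the example is invented, though the potassium-40 half-life is a real figure.

```python
# Idealized radiometric dating: parent isotope P decays into daughter D
# with half-life t_half. Age follows from the measured D/P ratio:
#   t = (t_half / ln 2) * ln(1 + D/P)
import math

def age(daughter_to_parent_ratio, half_life_years):
    """Age of a closed-system sample from its daughter/parent ratio."""
    return half_life_years / math.log(2) * math.log(1 + daughter_to_parent_ratio)

# K-40 has a half-life of about 1.25 billion years (real figure);
# the D/P ratio of 0.5 below is an invented measurement.
print(age(0.5, 1.25e9) / 1e6)  # sample age in millions of years
```

A sanity check on the formula: a ratio of exactly 1 (half the parent gone) returns exactly one half-life, as it should.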
This is where the scientific consensus is most important. There needs to be a human element that helps decide when we “know” some science is accurate. There are several ways to determine when science is conclusive. One is the “systematic review”, which compiles multiple studies in a related field to form a stronger conclusion. But the way these reviews compare articles can be quantitatively simplistic, often just categorizing studies as “supporting” or “not supporting” a hypothesis. These problems can be mitigated by “meta-analyses”, which statistically combine the data across studies. For areas without such analyses, there is going to be a human element deciding when something is confirmed or not.
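To show what "statistically combine" means, here is a minimal sketch of one common approach, a fixed-effect inverse-variance meta-analysis: each study's effect estimate is weighted by how precise it is. The effect sizes and standard errors are invented.

```python
# Minimal fixed-effect meta-analysis: pool several studies' effect estimates,
# weighting each by the inverse of its variance (1/se^2). Data are invented.

studies = [  # (effect estimate, standard error)
    (0.30, 0.15),
    (0.45, 0.20),
    (0.25, 0.10),
]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect {pooled:.2f} ± {pooled_se:.2f}")
```

Notice that the pooled standard error is smaller than any single study's: combining independent evidence tightens the conclusion, which is the quantitative face of consensus-building.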
This is why the consensus of a field is actually quite important. It really takes an expert to know which science “works” and which doesn’t. To advance a field, scientists have to have a firm grasp on the related research; things simply cannot progress if the researchers don’t build on conclusive findings. Science is always subject to change, of course, and nothing is 100% settled, but nobody knows better what is “confirmed” than these experts. They are in the know, their research relies on accurate models, and they have read all of the papers (with apologies to Sarah Palin).
When the evidence is not strong, multiple competing hypotheses may be thrown around. At the moment, for example, we don’t really know what causes the hydrophobic force. Part of the effect is known to come from the entropy of how water molecules aggregate around hydrophobic materials, but some forces involved in protein and surface interactions don’t appear to be due simply to that phenomenon. One hypothesis is an orientational ordering due to hydrogen bonds, while another is a spontaneous nucleation of capillaries between the surfaces (obviously, dear reader, these are easy to mix up). As more publications come out on these topics and the evidence supporting one hypothesis grows stronger and stronger, the field will shift to a consensus.
An example of this was the debate over the best treatment for neuromuscular disease: intravenous immunoglobulin or plasmapheresis. At one point it was not clear which treatment was more effective, because the research had not been thorough enough to draw a conclusion. Yet over time, as more work was put into both methods, it was found that both returned positive responses, and the medical field has accepted both as effective.
Other small things to consider
Becoming a researcher takes a lot of work over a long time. It takes four years to earn a bachelor’s degree in a broad field of study, then at least four years (almost always more) to earn a Ph.D., then 1-3 years as a postdoc gaining more research experience, before you can finally become a non-tenured professor with the opportunity to try to become a respected expert. It takes a long time to become acquainted with even a small field of study. Freethinkers take pride in thinking for themselves, but in almost every case they will simply never have the expertise needed to be well-informed on any given topic. There is simply too much information for anyone to be well-versed in everything. As a practical necessity, we must almost always defer to experts. This means your level of certainty on a topic probably won’t be as high as an expert’s, and that’s ok; you won’t have a high degree of certainty about most things. If you understand certain lines of evidence, by all means use them in your discussions and conversations wherever you can. Just realize there’s far more to any given topic than you will probably ever know.
Another point is that there’s simply no strong mechanism that would push the scientific consensus to a false conclusion. For evolution to be false, there would have to be massive cover-ups on a completely unprecedented scale. Entire fields of genetics, agriculture, ecology, virology, geology, biochemistry, paleontology, medicine, and zoology would have to be wrong for creationism to be true, and I’m certain I’m leaving fields out. Climate change deniers claim that scientists accept climate change to keep government jobs and grants, despite an overwhelming monetary pressure in the opposite direction. The most lucrative oil and gas companies would love for most scientists to state that the Earth is not, in fact, warming, yet with all their hundreds of billions of dollars of annual revenue they are still apparently incapable of overturning the consensus. To think that scientists worldwide are engaged in a secret cover-up that pays less than the alternative is to imply a scientific conspiracy on an enormous, practically unbelievable scale. Anyone who discounts a consensus like these is effectively committed to a bigger conspiracy theory than the Kennedy assassination or chemtrails; there are simply far too many people involved across independent fields.
Another important point is that the consensus is rarely wrong. People bring up the flat Earth as a counterexample, but let’s be honest: there was no scientific methodology back then, so there was no scientific consensus, only a consensus among proclaimed authorities. A common cry against global warming is that in the 70s scientists thought the Earth was headed toward a massive cooling event, but while some scientists predicted that, there was hardly a consensus; in fact, most scientists even then still thought the Earth was going to heat up over time, not cool. Even in past cases where scientists tended to overwhelmingly agree on something false, there hadn’t been thorough studies on the topic, and once those studies were performed the common view was overturned. Again, science is self-correcting.
Back to the formal argument
At the beginning of this post, I set up an argument that looked something like this:
- People who believe in X are authorities in the field.
- There are a lot of them.
- Therefore, X is right.

When we bring up the scientific consensus, it’s not in the form of a logical, formally stated argument like this. There are many unstated assumptions and implications that come with the consensus. To summarize:
- These people produce work that must be independently verified.
- They are critiqued by other experts who rely on quality information and are often in competition with each other for novel results.
- Consensus only shows up when a variety of lines of evidence support something.
- These people have spent years becoming experts with deep knowledge of their field, and they have to stay in the know to keep their jobs.
- Consensus appears even when the financial incentive is against it.
- When applicable, consensus appears when widely different branches of study converge on the same answer.
- You would have to manufacture a conspiracy on a massive scale to get so many scientists to agree on something otherwise.
These points don’t automatically mean the consensus is true, and they shouldn’t be taken to mean that. But if you’re in an argument with someone, it is very likely that the experts are, in fact, privy to far more understanding than either you or the person you are talking to. The appeal isn’t a logical conclusion from stated premises; it’s a large body of evidence leaning in a particular direction. Understanding the actual science and arguments behind a position will always be preferable, of course. But understand that the scientific consensus is a very useful argumentative weapon that we shouldn’t be shy about using once we know what’s involved.
*There’s a lot more that we don’t know than what we do know. So a more accurate picture would look like this.