The Scientific Method, Pt. 1 – Refutations

*EDIT: I’ve added links now, and I realized I misremembered one of the established scientific facts, so I got rid of it. That’s what happens when you write by the seat of your pants, I guess. My bad.*

I have to begin by reminding you that I’m an engineer and that, yes, while we are not technically scientists, engineering certainly has a strong, positive relationship with science and the scientific method.

But the scientific method has gaps.

I am very fond of science, both as a mode of inquiry and as a subject of study in its own right. The reason I’m reminding you of my own inclination toward the scientific method is that if you are of the disposition that science is the guiding light of salvation and progress in an otherwise dismal history, this will come off as criticizing science as a methodology, which is not my intention. If that’s what you think I’m saying, just remember that I am also telling you that science is predictive to degrees that surpass human intuition, and that it can be argued by many metrics that science has contributed more to the welfare of humanity than any other method.

What is the scientific method? Grade school taught us four to seven steps as part of a formal method, depending on where you went to school.

  1. Generally, you start with making an observation, often something interesting or quirky you notice about the natural world.
  2. You then formulate a hypothesis, your best guess as to what is going on. This hypothesis must be testable, meaning it has to be falsifiable, empirically measurable, and ideally accounts for confounding variables, i.e. other crap that might be mucking up your observations.
  3. If you have formulated a valid hypothesis, you should then devise an experiment that measures a result with integrity.
  4. Next comes analysis: the careful thought and parsing of the data your experiment returned.
  5. And the final step is what I refer to as sharing. Peer review, publishing, teaching – however it happens, this information or insight must be distributed in order to be part of scientific engagement.

I would add that replication is an important aspect of the sharing stage, but that depends more on the scientific community than it does on the individual experimenter. Cool, the scientific method 101. Not a mystery, right?

It really isn’t. I sometimes get annoyed when others try to dress up science as something that can never be fully understood. If it couldn’t be understood, it would do a pretty lousy job of achieving its core goal, which I believe to be the deepening of human understanding and the accumulation of knowledge. I think the one aspect that wasn’t conveyed well to me when I was a kid was that the scientific method isn’t a linear “do-this-then-this-then-this” sort of thing. It’s a cycle and an ecosystem of thought. A very regimented form of thought.
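
To make the “cycle, not a checklist” point concrete, here’s a toy sketch of my own (nothing in it computes anything scientific; the step names just echo the list above, and the whole thing is an illustration, not anyone’s official model):

```python
# A toy illustration of the scientific method as a cycle rather than a checklist:
# after "share", the loop wraps back around to "observe" instead of terminating.

STEPS = ["observe", "hypothesize", "experiment", "analyze", "share"]

def next_step(current: str) -> str:
    """Return the step that follows `current`, wrapping around at the end."""
    return STEPS[(STEPS.index(current) + 1) % len(STEPS)]

step = "observe"
for _ in range(11):  # a bit more than two full trips around the cycle
    print(step, end=" -> ")
    step = next_step(step)
print(step)
```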

A better way to think about it (for me at least) is as a spectrum: fiddling at one end, then tinkering, then engineering, and science at the extreme end. They differ in intensity and purpose, but operate on similar principles.

But at almost every step, there are significant reasons to doubt the validity of the information acquired, which leads to doubt of the method itself. At the observation stage, the two criticisms one could levy are 1) framing and 2) the observer effect. One might immediately say, in regards to framing, “Well, that’s why we have double-blind and single-blind experiments,” but my argument is that if the observation is the impetus for the study, you have already committed to a “framed” study. If, for example, you make an observation about the correlation between a certain ethnicity and some undesirable trait, the discussion around your work, even if resoundingly in the negative, has framed the discussion to associate that ethnicity with those undesirable traits. This is the “Does your mother know that you’re (some insult)?” argument – it’s really hard to deal with these sorts of inquiries, especially if they’re made insincerely.

Equally troubling at the observation stage is the observer effect – the presence of the observer changes the behavior of the observed. This is a noted confounding effect in the social sciences, and is often the cited reason for practices like deceiving experimental participants and maintaining anonymity while observing subjects.

But even on an empirical level this is seen in the infamous double-slit experiment in physics. We actually did this experiment in 11th (12th?) grade, and it happens in real time. If you have a plate with two narrow slits and you shine a laser at the slits, you would expect two lines on the wall, one for each slit, right?

But what you get is an interference pattern of multiple bands of light. Unless you try to directly observe which slit each photon passes through – then the pattern collapses into two bands of light. The formal explanation is that the light is acting as a wave in the former case, but as a particle in the latter case. But what it signals to me is that experimental results are dependent on the presence of the observer.
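
For reference (my addition – the post itself doesn’t include the math), the bright bands sit where the path difference between the two slits is a whole number of wavelengths:

```latex
d \sin\theta = m\lambda, \qquad m = 0, \pm 1, \pm 2, \ldots
```

where d is the slit separation, λ is the wavelength of the laser, and θ is the angle to the m-th bright band. Gaining which-slit information destroys the conditions for this interference, which is why the extra bands vanish.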

At the next stage is formulating a hypothesis. Robert Pirsig, in Zen and the Art of Motorcycle Maintenance, discusses how he went on a lifelong journey to free himself of the scientific method because he realized that anyone can formulate any number of hypotheses, all equally valid and equally deserving of testing. I’m not sure I could come up with infinitely many hypotheses, but I don’t doubt that there are those who can. If you need any proof, talk to an exceptionally curious child sometime, and you’ll see what Pirsig means.

This is an epistemologically existential problem because it simultaneously acknowledges two facts. One is that the scientific method will never encapsulate all falsifiable statements. A theory is never true, just not false for now. The other is that it won’t even encapsulate human curiosity/stupidity. If we view the scientific method as an algorithm, it is faster than doing nothing, but too slow and easily outpaced by hypothesis formation.

Hypothesis formation itself is a source of epistemological angst for the scientific method. The aforementioned framing issue comes into play here again. Our hypotheses can be framed by implicit biases we aren’t even aware of, like cultural or gendered biases. But more unnervingly, hypotheses are formulated from knowledge we have already assimilated. What I mean is that with observation, one could theoretically make a novel observation, something no one’s noticed before. But a hypothesis is limited by what you already know. As Donald Rumsfeld would say:

Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.

Hypotheses are good for exploring known-knowns or known-unknowns, but even in a non-political context, the unknown-unknowns are what we truly point to as discoveries, the revolutionary stuff, and what hypotheses are not optimized to discover. Unknown-unknowns are the prize catches of science, but the very thing that cannot be attained by virtue of a fundamental step of the scientific method.

So, how have we gotten unknown-unknowns previously in science? Generally, by accident. This brings us to experimentation. Experimentation is the least theoretically unsound of the steps of the method, but in practice the hardest to get right. It is so easy to gather crappy data, or not account for a confounding variable, or just have overall experimental error creep in.

This is why I view science as having similarities with tinkering and engineering – experimentation. But as I said, so many discoveries are the result of happy accident (one professor I had once estimated between a third and half of all discoveries). And while I understand that the method exists to properly capitalize on these accidents, it still changes the way we view knowledge, especially in light of the whole tinkering-science spectrum argument.

What I mean here is that, if you are just tinkering around and you discover something interesting, you might play with it until you get a little doo-dad or understand a cool little factoid about whatever you were tinkering with. If you are engineering, you are actively creating a method of reproducing your discovery, of standardizing and optimizing it, maybe even monetizing it. If you find something in science, it becomes a theory. What this means is the knowledge isn’t “discovered” – it’s manufactured.

If this strikes you as blasphemous, consider how often pharmaceuticals barely perform better than placebos. Look to the proven failures of psychoanalysis and operant conditioning. Maybe this is why there seems to be a half-life to facts – they aren’t maintained nor “designed” for sustainability. And just note that I’m talking about theories and facts, not laws like the conservation of energy. And again, this is not a call to be anti-science or to doubt the established body of scientific literature that many dedicated people work hard to build. I’m simply trying to figure this all out and lay down my concerns.

Analysis is next, and it is prone to many issues, mostly regarding manufactured data, falsified data, misinterpretation, inept data analysis, playing games with statistics, etc. You get the picture. I hope you’ve noticed that I’m not trying to assume malice on the part of the experimenter, even though most of the criticisms levied here do make that assumption. It’s worth noting, but it doesn’t really call the epistemological foundation of the scientific method into question, so I’ll make a note to be aware that it happens but not dwell on it.

The final point of epistemological failure is in the sharing phase. In an ideal world, you share your findings, other intelligent experts discuss them in the greater context of the scientific literature itself, and maybe a few get excited enough to replicate and tweak your experiment to come across their own unique insights. But in reality, the marketplace of ideas leans more heavily toward marketplace than ideas.

The three ways I see this are in 1) funding bias, 2) publication bias, and 3) the replication crisis.

Funding bias means that the direction of research will be influenced by the source of funding. In practice, this has led to the corporatization of academic research, a decline in the diversity of research (especially basic research), and in extreme cases, lobbyist data (think Big Tobacco, Big Oil, etc.).

Publication bias is the phenomenon of studying what will get published. This is partially the result of the whole “publish or perish” model in academia, but it is also the same factor driving shoddy click-bait journalism. More people seeing your work means more funding, more personal accolades, more opportunities for further publication, etc. This also decreases diversity in research and forces experimenters to self-edit anything that wouldn’t make a clean or polished publication, so the more nuanced, messier, but important details get left out.

And the replication crisis is the startling lack of replicability, especially in the social sciences. It means that the original experiments and their conclusions may be one-offs, or filled with biases and confounding factors that the mainstream has simply accepted. An entire corpus of work may be built on well-intentioned but over-generalized phenomena.
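
To see how publication bias and the replication crisis feed each other, here’s a minimal toy simulation of my own (the sample size, the 0.4 cutoff, and everything else are made-up illustrative numbers, not figures from any real study): if a field only publishes results that clear an arbitrary “interesting” threshold, the published record shows a healthy-looking effect even when the true effect is zero, and replications drawn without that filter will appear to “fail.”

```python
import random
import statistics

# Toy model: 1000 small studies of an effect whose TRUE size is zero.
# Only results above an arbitrary threshold get "published".
random.seed(42)

def run_study(true_effect: float = 0.0, n: int = 20) -> float:
    """Simulate one small study and return its observed mean effect."""
    samples = [random.gauss(true_effect, 1.0) for _ in range(n)]
    return statistics.mean(samples)

all_results = [run_study() for _ in range(1000)]

# Crude stand-in for "statistically significant and in the hoped-for direction".
published = [r for r in all_results if r > 0.4]

print("True effect:                     0.0")
print(f"Mean effect, all 1000 studies:   {statistics.mean(all_results):+.3f}")
print(f"Mean effect, published studies:  {statistics.mean(published):+.3f}")
print(f"Number of studies published:     {len(published)}")
# A replication drawn from the unfiltered pool will, on average, find ~0,
# which is exactly what a "failure to replicate" looks like from the outside.
```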

The other thing about the sharing phase that bothers me is its existence. This portion will sound authoritarian, but remember, the truth is always true. It should not change because we want it to. The entropy of a perfect crystal at absolute zero is exactly zero. It doesn’t matter how I feel about entropy or crystals; the entropy of a perfect crystal at absolute zero is zero.
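
For anyone curious why that particular statement is so unambiguous (the post just asserts it, so this is my added context), it is the statistical form of the third law of thermodynamics: Boltzmann’s formula ties entropy to the number of accessible microstates, and a perfect crystal at absolute zero has exactly one.

```latex
S = k_B \ln \Omega, \qquad \Omega = 1 \;\Rightarrow\; S = k_B \ln 1 = 0
```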

What bothers me about the sharing phase is that it is promoted as democratic and consensus-based. Which is great if we are speaking about governance, but we are talking about science and truth. The truth is not something that is voted on or decided; it just is true.

This isn’t absolutism; it’s literally necessary for the definition of the truth science seeks to establish – observable, objective, impartial.

Dan Shechtman, winner of the 2011 Nobel Prize in Chemistry for his discovery of quasicrystals, actually made that discovery back in 1982. His findings were not accepted for years, in part because they were disputed by Linus Pauling, himself a two-time Nobel laureate. Shechtman faced great hostility from the scientific community, and although he was eventually proved correct, you can see how this community aspect only hindered progress.

Time and time again, the Copernican rebels who bring forth the “unknown unknowns” are cast aside, only to be recognized in hindsight. Another fascinating look at this phenomenon is how the classification of homosexuality in the DSM was settled. It was put to a vote. Not scientific inquiry, not a longitudinal study, not a meta-analysis of quality data, but a vote. Remember, the truth should be the truth, regardless of how we feel about it.

I hope I haven’t dissuaded you from believing in science. I still hold the scientific method in high regard. I only posted all of this because I think it is important to be skeptical – so much so that we should be skeptical of our own skepticism. Please read this post in that light, not in a “gotcha” sense, looking for ammo to fuel preconceived beliefs.

Be skeptical of your own skepticism.
