As a reminder of what I posted:
“Observational bias, for those who haven’t come across the term, is a trait where someone allocates much more importance to information that backs up their strongly held point of view than they do to information that is contrary to it.”
Now, given some recent posts on the board perhaps this snippet is more pertinent:
“…I spend a lot of time thinking about experimental configurations and how not to let bias creep in. This often brings me into conflict with the theoretical physicists who want to ‘validate’ their models. When the experimental results don’t fit with the modellers’ predictions they’re often forced to specify additional parameters that they haven’t accounted for previously – or ‘fudge the shit out of it’ as the empiricists would call the process. It can often get to the point where the model only replicates the experimental results over such a narrow parameter space that it’s clear to everyone that the hypothesis is broken (we have another saying: “if you want a straight line result then only make two measurements”). So what’s the relevance of this? Casting physics, to me, is still riddled with observational bias, including some often quoted published papers.”
A classic example of this is the ‘big spring’ hypothesis, i.e. casts are made by flexing the rod, thus giving it the elastic potential energy required to propel the fly-line forward once a stop is made. Thankfully this is seldom heard these days, but the myth survived for many, many years because people wanted to believe it (perhaps due to respect owed to the ‘big names’ that were telling them?). The disappointing thing is that the experiment to test the hypothesis was so ludicrously easy: measure the bend seen in a fly rod being cast (relatively straightforward with photographic equipment), then pull that same bend into a fly rod that is fixed by the handle, and let go. Compare the results and dismiss the hypothesis. Clearly this didn’t happen, and the observation that short casts could be made via the ‘bow and arrow’ technique somehow grew into an explanation for all casts, and thus ‘casting physics’ headed down a poorly lit cul-de-sac until it was dragged, kicking and screaming, back out.
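To see why the ‘let go of a pre-bent rod’ experiment was always going to be so decisive, a back-of-the-envelope energy comparison helps. All of the numbers below are illustrative assumptions (a plausible line mass and loop speed, a rough effective tip spring constant and deflection), not measurements, but the orders of magnitude make the point:

```python
# Rough energy budget for the 'big spring' hypothesis.
# Every figure here is an assumed, illustrative value - not measured data.

line_mass = 0.015        # kg: assumed mass of a fly-line head
line_speed = 30.0        # m/s: assumed loop speed in a decent cast
kinetic_energy = 0.5 * line_mass * line_speed ** 2   # KE the flying line carries

rod_stiffness = 4.0      # N/m: assumed effective spring constant at the tip
tip_deflection = 0.5     # m: assumed bend pulled into the rod
stored_energy = 0.5 * rod_stiffness * tip_deflection ** 2  # elastic PE in the bend

print(f"line kinetic energy:  {kinetic_energy:.2f} J")   # 6.75 J
print(f"rod elastic energy:   {stored_energy:.2f} J")    # 0.50 J
print(f"shortfall factor:     {kinetic_energy / stored_energy:.1f}x")
```

With these (assumed) figures the spring energy falls short by more than an order of magnitude, which is exactly what letting go of a pre-bent rod demonstrates physically: the line barely moves.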
There are plenty of other examples where simple validation tests have not been performed, which leaves me slightly perplexed, especially in the more academic papers. The thing with the peer review process is that the reviewers, in the case of fly casting, will very likely have no knowledge of the subject. Sure, they’ll be experts in mechanics, aerodynamics, physics or mathematics etc., but I don’t know how many of them will have actually made a fly cast themselves. Perhaps that’s why, although the most complex physics and maths constructs are thoroughly checked, opportunities for simple validation tests are seemingly missed. Take, for example, the ‘climbing loop’ idea that is mentioned on the board: a simple side cast is an obvious test to apply (but perhaps not obvious to a mathematician etc.), since it turns any supposed vertical climb into a sideways veer. I’ve certainly never seen my cast veer to the right when performing this cast. There are others I could mention also…
The hardest thing in science is coming up with what you think is a great idea and then putting in the effort to break the hypothesis. If you don’t try to find the holes in your own idea then you are almost certainly going to be guilty of observational bias, and it’s a given that others will find the holes for you. If you ignore this feedback and publish anyway then you deserve all the flak you’ll undoubtedly get. Experiments should be designed to produce an unbiased result – not thought up to prove a point.
Right, back to those rabbit fur flies.