Familiar with this term?
It’s the subject of a post on the blog Armed and Dangerous. I wouldn’t have seen it had it not been picked up by Judith Curry. She has a flair for original research and thoughtful perspective (and for taking care to separate the two). But her contributions don’t stop there. She has an energy and an instinct for ferreting out, and then sharing, the interesting work of others, often with commentary appended or interposed. The latest example is her post of yesterday on error cascade.
Briefly put, according to Armed and Dangerous (there’s much more to this and I hope you’ll get a chance to read the links thoroughly), “A scientific error cascade happens when researchers substitute the reports or judgment of more senior and famous researchers for their own, and incorrectly conclude that their own work is erroneous or must be trimmed to fit a ‘consensus’ view.”
Get the idea? A scientist of established reputation publishes an error. Subsequent researchers, for a variety of reasons, place too much trust in the erroneous result, and “build” on it, or tweak their own findings into conformity with that earlier finding, so that the error is perpetuated instead of fixed. This goes on until some other thread of scientific inquiry, proceeding along independent lines, happens to stumble across the error, and it’s finally corrected.
The authors have covered this topic (and its implications for climate science and the politicization of that science in particular) far better than I ever could. But it prompts some personal observations:
Error cascade has always been with us. My father, the research statistician, used to warn my brother and me about these dangers back in the early 1960s, when we were still living at home. He’d talk about how scientists would measure this or that fundamental physical constant, and how, for several subsequent papers, all the estimates would fall within a certain small, consistent range of error bars, until a new measurement, based on some novel method, would fall outside those error bars. Then a similar herd mentality (20th-century language) would set in with respect to the newly accepted value, until the next correction, and so on. He presented this to us as a widespread and never-ending challenge for all scientific disciplines.
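To make that dynamic concrete, here is a minimal, purely illustrative sketch (in Python, with every number invented for the purpose): each lab nudges its raw reading toward the running consensus before publishing, so an early erroneous report persists until an independent method reports without anchoring.

```python
import random

# Purely illustrative: every number here is invented, not a real measurement.
TRUE_VALUE = 10.0     # the "real" physical constant
FIRST_REPORT = 10.8   # an early, erroneous, but prestigious result
ANCHOR_WEIGHT = 0.7   # how strongly later labs lean toward the published consensus
NOISE = 0.3           # honest measurement scatter

random.seed(1)
reports = [FIRST_REPORT]

for paper in range(2, 13):
    raw = random.gauss(TRUE_VALUE, NOISE)       # what the lab actually measured
    consensus = sum(reports) / len(reports)     # the currently "accepted" value
    if paper == 10:
        published = raw                         # a novel, independent method: no anchoring
        tag = "independent"
    else:
        published = ANCHOR_WEIGHT * consensus + (1 - ANCHOR_WEIGHT) * raw  # trimmed to fit
        tag = "anchored"
    reports.append(published)
    print(f"paper {paper:2d} ({tag:11s}): published {published:5.2f}   (raw reading {raw:5.2f})")

print(f"\ntrue value {TRUE_VALUE}, early erroneous report {FIRST_REPORT}")
```

Run it and the anchored papers stay biased toward that first erroneous report, clustering in a narrow band above the true value; only the un-anchored measurement lands near the truth, and even then the running consensus drifts back only slowly.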
And it goes back further. Take a glance at the Darwin quote on the right-hand margin of the home page for this blog. Darwin laments false facts and their long-enduring, negative impact on the progress of science. Same idea.
Or go back further still, to the origins of science, and physicians like Galen asserting that the heart was a furnace, heating the body. Some 1500 years would elapse before William Harvey would show that the heart is a pump. Or Empedocles suggesting that all matter was composed of four elements: earth, water, air, and fire. It took more than two millennia to arrive at a true table of elements.
Error cascade can throw a monkey wrench into the wisdom of crowds. James Surowiecki’s book by that title examines how crowds can produce superior outcomes, provided that no single personality dominates the discussion and the sharing of perspectives. But that caveat turns out to be a big one.
Peer review is an imperfect instrument for coping with error cascade. In a brief series of three posts back on February 3, 2011, February 8, 2011, and February 9, 2011, I looked at peer review and some of its difficulties. One that I didn’t emphasize at the time is that when the reviewers are anonymous but the author isn’t, the playing field is not level…and it’s tipped in favor of established scientists. So peer review as currently constituted cannot be relied upon to deal with this problem. Moreover – and this was treated in those earlier posts – as science increases in complexity and advances more rapidly, peer review struggles to keep up.
Social networking may help make up for this shortcoming. This is at most a surmise…or maybe simply a wistful hope. But it seems that as the pace of science accelerates, as the sharing of knowledge becomes more rapid and extensive, and as science grows more crowded, any development or piece of new knowledge today quickly finds in its train not only reviewers but an army of scientists exploring its applications and testing its limits, in myriad ways that may well be more penetrating and insightful than any formal peer review. This swarm activity may prove able to catch errors and correct them more rapidly and effectively than the more deliberate methods of the past.
Just an (admittedly) hopeful thought.
Ever heard of the Clovis Barrier in American archeology? The herd mentality lasted from 1933 until 1997. Not only did researchers (supposedly) never find anything older than Clovis (about 11,000 years ago), but at Clovis sites they wouldn’t even LOOK below the Clovis layers. Finally, when the Clovis Barrier was breached in 1997, by the research at Monte Verde in Chile, one site manager actually looked under the Clovis layer – and found earlier artifacts.
At about the same time, DNA researchers found that not only did people arrive in the Americas earlier than Clovis, but they came in FIVE waves, including one at the time of Clovis. One of the waves came from Europe.
For six and one-half decades, all evidence earlier than Clovis was ignored and careers were diverted – some destroyed. And in the end, the Clovis Barrier was a wrong conclusion. Many an archeologist or anthropologist died believing that no one came here before Clovis man.
Now we just have to get them away from the other error cascade – that Clovis man killed all the mammoths.