For a discipline that prides itself on its ability to question the status quo, the way in which we do science has not changed much since the scientific revolution. This conservatism in methods has become so entrenched that it can start to feel unavoidable, and a whole ecosystem of self-interested parties has emerged to defend it. It feels almost transgressive to suggest that we can do better, to the point that I often feel like a madman when talking about the subject.
The scientific publishing industry has become adept at defending its archaic practices, from hiring a PR firm to smear open-access journals to pursuing astronomical damages against anyone foolhardy enough to make research papers freely available and easy to access.
This monopoly on the framework of science is not only unfair; it actively hinders research. It favours a conservative approach to science by punishing exploratory studies and limiting access to information. No credit is given to routine but crucial work (such as reproducing experiments or maintaining databases), and timid “safe” research is more likely to get funded.
The good news is that these problems are relatively easy to solve. The internet makes sharing information trivial and offers many more tools for conveying an idea. A huge variety of content curation methods have emerged, and they could provide a way to deal with the exponentially growing number of published papers. (To those scoffing at the idea: rating a cat picture is not substantially different from rating a scientific paper.)
Cracks are already starting to appear in the wall of conservative science. Every time a paper is viewed on Sci-Hub, a copy is stored on its servers. Thanks to its popularity, Sci-Hub’s repository of freely accessible papers has become so large that journals will never be able to put them back behind a paywall, and since the site is based in Russia, damages awarded in American courts do not affect its operation. A new journal on AI research, Distill (distill.pub), has done away with the archaic periodical format, allowing reports to include interactive or self-updating graphs.
I’ll now go over the main ideas in modern science that I find especially anachronistic. The list is by no means exhaustive, so feel free to share others in the comments!
Reporting framework (periodicals)
Ever since the 1800s, science has been reported in a periodical format: a small series of selected papers is published at regular intervals in a static format (PDF). This publishing method notoriously discards negative results, silences less popular research, and prevents reliable iteration on, or curation of, papers that have already been published. In the age of the internet, this is akin to chopping wood with a stone axe while an industrial mill is available.
Peer review
In economics, an assumption is often made that all actors act in their optimal selfish interest. This idealised actor is referred to as Homo economicus, and the idea has been widely criticised because humans often act irrationally. Peer review makes the opposite mistake: it assumes that all actors are altruistic, when they often are not. The process lets research be stolen, and high-profile authors are treated differently because of their seniority. Although technically anonymous, peer review (especially in smaller fields of research) is seldom impartial, since the identity of the author can often be inferred. These flaws are made obvious in the all-too-common retraction scandals which plague science. Why should a few people decide which research can and cannot be published? Their assessment should instead be shown alongside the publication, and the audience can decide whether it was fair.
Publish or perish
While their actions are detrimental to research, it is important to remember that most of these self-interested actors act from good intentions. For example, research institutions often rely on the impact factor to evaluate the quality of research. This deeply flawed indicator scores a journal by how often its recent papers are cited, so each paper effectively inherits the score of the journal it appears in. Glossing over the dubious logic of equating research quality with popularity, the impact factor also embeds the publishing industry in the scientific process, since publication in particular journals confers a higher score. Publishers therefore have a built-in incentive to make the impact factor more important than it should be.
The impact factor was developed with the laudable goal of helping librarians choose which journals to stock, but it ended up deciding which researchers get funded, giving us the sad saying “publish or perish”. When assessing it, another saying comes to mind: “The road to hell is paved with good intentions”.
The gentleman scientist
I think one of the less talked-about pernicious ideas in modern science is a legacy of its early days. The scientific revolution was led by rich or noble members of society: Darwin was the son of a wealthy financier, Lavoisier (the father of modern chemistry) was a nobleman, and scientific academies were staffed by a rich (male) elite. This legacy lingers in the form of a resilient glass ceiling for women in science, and in the assumption that one does not pursue a career in research expecting a substantial salary. It is no surprise, then, that the emerging field of AI research is being led by tech giants like Google rather than by conventional academic institutions.
Finally, while the experience of older researchers is certainly valuable, age is often mistaken for ability. In academia, older researchers are given complete control over younger researchers’ careers. This wildly unbalanced power structure has led to numerous horror stories of abuse and regularly pushes valuable talent out of science.
We need to discard this misconception and accept that the researchers we refer to as “students” (postgraduates) are in fact staff scientists, and should be treated as such. A modern lab is run like a conventional venture, with the lead professor acting as CEO: managing the lab and its staff while constantly seeking funding. We wilfully deny this because it would imply that the academic career path is flawed; someone who has spent their life doing research has not necessarily gained the skills needed to act as a de facto CEO.
Instead of a rigid structure, we must accept that a modern lab needs a wide variety of skills other than research, and fill those roles with people who have the appropriate specialisations. Management, research reporting, and teaching are separate functions of a modern lab, each of which should be performed by someone with the right skill set.
I do not pretend to hold all the answers to these issues, and the internet certainly would not solve every problem with peer review, but I think we should at least acknowledge that there could be a better way to do science, and try to find what it might be. I will talk about what that solution could look like in my next post. In the meantime, please let me know what you think!