We Won’t Publish That

By Mia Hanuska

If you were to Google “how to make a nuclear bomb,” what would come up? Would you see a specific instruction manual on how to actually make a nuclear bomb yourself, or just the general steps? Hint: it’s not the former. But why? Why can’t I, an average US American citizen, learn how to make a nuclear bomb from the internet? You can thank scientific censorship.

What does it mean to censor science? Scientific censorship, broadly the restriction of certain findings, information, or studies from publication, encompasses peer review, retractions, biased funding, and self-censorship.

Peer review is one of the most common places for censorship to sneak in: bias and chance help determine the outcome of a review, whether that is approval or rejection for publication. While most scientists agree that peer review greatly reduces major errors in published work, many also concur that because reviewers decide what gets published, a reviewer with a strong belief or viewpoint on an issue may censor any paper opposing it, effectively suppressing unpopular and controversial opinions.

Furthermore, censorship from peer review quickly morphs into self-censorship as scientists chase positive reviews. Instead of tackling issues that might draw backlash from peers or faculty members, scientists stick to conservative topics with few opposing arguments in order to avoid rejection and social ostracism. Reputational damage and informal punishments (such as delays in equipment approval, denial of grants, and being left out of important communications) for reaching unpopular conclusions, even data-supported ones, also feed the self-censorship habit scientists often fall into. Some scientists even endorse the moral censorship enforced by society, claiming that certain issues lack scientific necessity and instead harm the public simply by appearing in scientific journals.

This brings back the question: what should the public have access to, and what causes harm? Why can't I make a nuclear bomb at home? Often, the government censors access to certain information to prevent malevolent actors from using it for harm, such as people interested in building a nuclear bomb. Many accept that some scholarly information is too dangerous to pursue and must be censored for the safety of vulnerable groups. However, even granting that some information truly causes more harm than good when accessible, where does the line fall? What constitutes "harming vulnerable groups"? Objections often arise when findings paint historically disadvantaged groups in a negative light, but despite how it may look, is that not what the science shows? Should science be regulated simply because its results might be unfavorable?

Similarly, papers can be rejected for "failing to meet conventional standards," yet these conventional standards are typically subjective and open to interpretation. That subjectivity lets reviewers exaggerate flaws to justify rejecting controversial findings. Moreover, with the increase since 2000 in scientific retractions, where a paper is removed from a journal after publication, scientists now also fear their work being retroactively pulled for present-day political motivations. Although retractions provide a beneficial remedy for old, inaccurate work, an uninformed public tends to see them as "evidence of scientific incompetence," tarnishing scientists' reputations. Thus, the illegitimate retraction of articles harms scientists and prevents the spread of accurate information.

Finally, the government, corporations, and educational institutions don't want to fund issues that aren't seen as "important" or "valuable." This leads to a lack of substantial research in non-prioritized areas, while prioritized topics accumulate an abundance of repetitive information as scientists follow the money. For example, lots of research focuses on automotive transport (cars, buses, roads), while bike transport and effective city planning are neglected. This prioritization of funds results in a lack of diverse findings and the effective censorship of topics deemed "unworthy."

Ultimately, scientific censorship not only produces worse studies and a less diverse body of information, but also discourages scientists from working on under-researched topics and from staying truthful in their findings, all while creating public distrust in science: how do we know what they discover isn't altered to avoid censorship? It's a lose-lose-lose on all sides. Find something too controversial? Yeah…they won't publish that.
