Peripheries: The limits of free speech online

This week, the British government unveiled a proposal to punish technology giants such as Facebook and Google that “fail to stop the spread of harmful content online.” “Harmful content” includes terrorist activity, violence and fake news. The proposal would create a regulatory body with the power to fine tech companies, impose other deterrents and penalties and order such posts removed from public view. Tech company executives could themselves be held liable for content published on their platforms. Officials involved have characterized the plan as part of an effort to make the United Kingdom “the safest place in the world to be online.” While there are clear dangers in allowing governments to lay down such rules, a third party could play an effective mediating role.

The question of where to draw the line between free speech and public safety is never clear-cut, but allowing totally unfiltered “free speech” often has deleterious consequences. For example, video of the recent terrorist attack in Christchurch, New Zealand spread virally on 8chan, an anonymous message board where terrorist sympathizers and other groups can communicate freely across borders. Many sources have concluded that activity on Facebook influenced political outcomes, including the U.K.’s 2016 Brexit referendum and the 2016 U.S. presidential election. Facebook was also pivotal in organizing the 2017 Charlottesville rally. And a fake video purporting to show an attack by a Pakistani terrorist group is just one example of fake news profoundly shaping India’s upcoming general election.

In March, Facebook officially banned content supporting white nationalism and white separatism. The company had faced public pressure to do so after it came to light that the prior rules allowed users to advocate for the creation of a whites-only nation. Facebook’s position changed after extensive consultation with academic experts; the company concluded that white nationalism “cannot be meaningfully separated from … organized hate groups.” Perhaps it is time to extend the scope of these consultations beyond experts. We need to have a national, even a global, conversation about where free speech crosses the line. The impact of technology and social media is largely unprecedented, and our previous conceptions of rights and free speech should not remain unyielding, absolute principles but should evolve with the changing discourse of the 21st century. We should also harness emerging tools such as artificial intelligence to aid in such regulation.
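To make the idea concrete, here is a rough sketch in Python of what AI-assisted moderation might look like, using the open-source Hugging Face transformers library. The specific model and the 0.8 confidence threshold are illustrative assumptions, not any platform’s actual system; notably, the machine’s judgment here only flags content for human review rather than removing it outright.

    # A rough sketch of AI-assisted content flagging using the open-source
    # Hugging Face "transformers" library. The model name and threshold are
    # illustrative assumptions, not any platform's actual moderation system.
    from transformers import pipeline

    # Load a pretrained classifier; "unitary/toxic-bert" is one publicly
    # available model trained to detect toxic language.
    classifier = pipeline("text-classification", model="unitary/toxic-bert")

    def flag_for_review(post: str, threshold: float = 0.8) -> bool:
        # Route a post to human moderators only when the model is confident
        # it is toxic; low-confidence posts are left alone rather than
        # automatically removed.
        result = classifier(post)[0]  # e.g. {"label": "toxic", "score": 0.97}
        return result["label"] == "toxic" and result["score"] >= threshold

Keeping a human in the loop in this way addresses one obvious objection to automated regulation: a classifier can triage the enormous volume of posts, while accountable people make the final call on what crosses the line.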

Tech companies have long enjoyed a special degree of autonomy and the ability to self-regulate. The U.K.’s recent announcement shows that this era may be coming to a close. In too many cases, platforms that should in theory serve as forums for open communication have instead become vehicles for propaganda and harmful messaging. Regulations like the U.K.’s, however, are minimally effective if enacted in one country alone; the U.S. and other countries should seriously consider similar proposals. The specifics of implementation can be debated, but we must not let the principle of free speech preclude this important conversation from happening in the first place.