Written by: Sytske Post
After weeks of speculation about whether Elon Musk would join the board of directors or remain only an investor, the business magnate struck a deal to buy the media giant Twitter for $44 billion on April 25, 2022. The deal has reignited debates around the right to freedom of expression. Where does society draw the line? What happens when this right clashes with other rights, such as the rights to life and non-discrimination? The public response to the deal reveals the complexities of internet governance, as well as security concerns regarding the limits of online freedom of expression. Musk's idea of creating an arena for free speech perhaps oversimplifies the complicated politics, socio-economic diversity and geography in which the platform operates. Twitter and other social media platforms have had very positive impacts on democratic practices. However, the darker sides of these platforms should not be underestimated: they are also spaces where hate speech, disinformation and conspiracies are amplified and spread. These discourses may not be dangerous in and of themselves, but they can create environments in which violence against others can be legitimised.
The reality of local free speech laws
Musk claimed that the objective of purchasing Twitter is to secure the site's role as a "digital town square where matters vital to the future of democracy are debated". Although the platform's actual future direction remains unclear, especially since the deal has been temporarily put on hold, the takeover has triggered worldwide reactions.
Advocates of Musk's takeover anticipate that the transition will have a positive impact on free speech, democracy and freedom of the press. Human rights advocates, by contrast, have been alarmed by the prospective changes to the platform and are concerned about an increase in hate speech and misinformation. They see the takeover as a worrying development that may signal the unravelling of key initiatives put in place to protect users from individual and collective harm.
What has Musk himself actually said that has brought about scepticism from these human rights advocates? And how do these two sides provide insights into the complexity of content moderation?
In the scenario that the sale proceeds, let's first consider what Musk has said about Twitter's future direction. Musk, who calls himself a 'free speech absolutist', has indicated a desire to loosen the platform's content moderation policies. However, in an interview with Chris Anderson, responding to a question about free speech, Musk also stated that Twitter, or any forum, is bound by the laws of the country in which it operates: "In my view, Twitter should match the laws of the country," Musk said. In a later tweet, Musk reaffirmed this rather legalistic attitude to free speech: "By 'free speech', I simply mean that which matches the law. If people want less free speech, they will ask the government to pass laws to that effect. Therefore, going beyond the law is contrary to the will of the people."
This stance, however, perhaps reveals a limited understanding of content moderation in general, as well as of the complex reality of free speech laws in countries outside the United States. Twitter has faced political restraints and regulations based on local laws, and in certain cases abiding by local laws can actually narrow the inclusivity of free speech.
Nigeria, which banned the site from operating in the country from June 4, 2021 to January 13, 2022, is a recent example of this complexity. The ban was seen as something that had been a long time coming: Twitter played a large role in supporting the youth-led #EndSARS protests against police brutality, which swept through Nigeria and its diasporic communities in 2020 and posed challenges to Buhari's government.
The youth movement used the platform to propel, mobilise and fundraise for the protests. This social media activism soon spiralled into street protests in various Nigerian cities. The government, displeased with the support the digital platform was providing, banned it, accusing it of threatening the country's national security. The ban was eventually lifted after months of negotiations, with Twitter agreeing to abide by Nigerian laws in moderating prohibited content and to establish a local legal entity.
Nigerian laws on social media regulation, however, have previously been criticised. In 2019, the government proposed a social media bill that would allow law enforcement to shut down parts of the internet and limit Nigerian users' access to any 'online location' (including WhatsApp, Facebook or Twitter). It also sought to criminalise statements that the government deemed "prejudicial to the security of Nigeria". The bill has stalled in the face of criticism from digital rights groups. Nevertheless, such regulations demonstrate that relying primarily on local laws for Twitter's content moderation could actually infringe on a person's freedom of expression.
Twitter and other social media platforms are indeed arenas where public debate can help hold governments accountable and mobilise support for political and social causes. But content moderation and the politics of platform regulation are complex. A free speech philosophy grounded solely in following local law is therefore a simplistic way of thinking about how such regulations work in practice, and it can jeopardise users' digital rights in certain countries, such as Nigeria.
Escalation to violence
Another challenging concern in debates over freedom of expression is the potential impact of words on the violent escalation of conflict. What does unrestricted freedom of speech look like? Even in the United States, where free speech is protected as a constitutional right, there is a vital distinction between one's freedom to say whatever one wants and others' right to impose consequences when that speech causes harm. A line would certainly be drawn where speech directly incites violence, but the difficulty is that words are rarely as clear as an order to shoot someone. Incitement to violence frequently, though not always, overlaps with hate speech, and it is also intertwined with disinformation, misinformation and conspiracies.
One of the most prominent cases showing this relationship between online free speech and the incitement of violence is Myanmar. The use of Facebook to spread anti-Muslim messages and to dehumanise the Rohingya community played a significant role in the Rohingya genocide, as documented in a United Nations investigation. Facebook was able to play such a significant role in Myanmar because of the country's socio-historical context and its treatment of the Rohingya. The Rohingya's acute vulnerability is the result of decades of state policies and practices that have gradually marginalised them, creating an environment in which the group could be easily targeted and used as a scapegoat.
This intensified in 2012, when a Buddhist ultranationalist group emerged and built on these existing divisions in society, creating a community united through shared anxieties. These fears were formed around two targets: 1) the broader Muslim community in urban and rural regions across the country, and 2) the Muslim Rohingya people centred in Rakhine State. The group framed the Muslim population as both a personal threat and a threat to the Buddhist-majority nation. It disseminated claims that Muslim birth rates were rising, that Muslim economic power was growing and that Muslims were plotting to take over the country. These conspiracies and disinformation campaigns fuelled hate speech against the Rohingya community.
The rise of the Buddhist ultranationalist group in Myanmar was thus partly enabled by Facebook, which served as a useful instrument for spreading its narratives. The platform's wide reach, the speed with which information can be disseminated, and the participatory nature of sharing and commenting all contributed to a climate of fear, hatred and anger among Myanmar's Buddhists and other non-Muslims. When such a climate is created through online discourse, violence against a group, in this case the Rohingya, can be instigated.
Why can an unrestricted space for freedom of speech pose a security threat?
Offensive language, conspiracy theories and disinformation are not necessarily dangerous in and of themselves. The speaker, the audience, the situation and the medium must all be taken into account. If the remarks are hateful but the speaker is a minor figure, intercultural tolerance is well ingrained, and people seek information from a variety of sources, then words are unlikely to have any serious influence. However, when such speech comes from speakers who are highly regarded, or circulates in a society already divided between groups, offensive speech can become particularly dangerous, especially when the medium through which it is broadcast, such as a social media platform, can reach a huge audience.
In the case of Twitter, the amount of interest in the platform's sale vastly outweighs the site's economic importance. Twitter does not even rank among the ten most popular social networking sites in the world. However, it has a unique and important reputation that sets it apart from competitors. It is the preferred platform of journalists and government officials, two significant groups whose engagement allows the site to reach far beyond its actual users. Audiences pay attention to these figures and often regard their information as trustworthy, which gives the site significant power to organise and influence. Moreover, because these groups are present, traditional media often pick up specific tweets and rebroadcast them on television, radio or in newspapers, amplifying their impact beyond the site's users.
These characteristics have also given Twitter and other social media platforms the ability to contribute to democratic practices, providing a space that can be used for public dialogue and mobilised for social change. However, the concept of freedom of speech is complex, and deciding where the legal and/or societal borders of free speech should be drawn can be extremely challenging.
Musk's desire to loosen content moderation glosses over coordinated disinformation and propaganda, endemic harassment, and pervasive hate speech, all of which can drown out and silence other voices. If the objective is to create a more inclusive environment for public debate and to ensure that free speech is as free as possible, the question that should be asked is: an inclusive environment for whom?
If absolute free speech is the aim, then abiding by local laws might have the opposite effect, as many countries outside the United States have far more restrictive speech laws that could endanger users' digital rights. And if inclusivity means allowing disinformation, hate speech or conspiracies, which increasingly target minority communities, the promise of unrestricted free speech could feel more like a danger to the individuals who have faced the social and political ramifications of these discourses. Content moderation and identifying dangerous speech are challenging because Twitter and other social media platforms serve a global audience spanning different geographies, varied politics and socio-economic diversity. But a degree of moderation is necessary, as the case of Myanmar and many others (e.g. the U.S. Capitol riot or the conflict in Ethiopia) make evident.
The reality of content moderation is thus tricky, as is the concept of freedom of expression.
The future of Twitter is still unclear, and the sale of the company might not happen at all. However, these debates provide deeper insight into the complexity of freedom of speech, content moderation and the impact technologies are having on our society. To create an inclusive arena for speech, it is perhaps more productive to think about the future governance of media platforms. What lessons can we learn from the mistakes that were made and the opportunities that were created? Beyond Twitter, what rules and systems will we need in place to prevent the next iteration of social media from repeating the harms of the platforms we have now, while simultaneously grasping the opportunities?
Disclaimer: Any analysis or views expressed in this article are personal and do not represent any positions of © Dyami B.V.
About the Author: Sytske Post
Sytske is a graduate of International Studies and is currently enrolled in the Master's programme in Conflict Studies and Human Rights at Utrecht University. This educational background has provided her with an interdisciplinary understanding of violent conflict and security. She is particularly interested in the intersection of technology and conflict, ranging from digital disinformation to the shifting nature of warfare powered by artificial intelligence.
The article was edited by Ruben Pfeijffer