In recent years, Twitter has consistently come under fire for doing very little to tackle abuse on its platform.
While anonymous abuse or ‘trolling’ is on the rise across social media, Twitter attracts a disproportionate share of it compared with Facebook and other platforms, making this largely a Twitter-specific problem. So, when Twitter presented at CES 2020 in Las Vegas, it was no surprise that the issue of abuse was front and centre.
At the show, the social giant announced that it would be experimenting with a fundamental change to the platform, which has been designed to bridge the gap between having a totally private account or a public one. The change is to roll out slowly over the next few years and will give the author power over who can reply to their tweet in four different ways: Global, Group, Panel and Statement.
- Global: Anyone and everyone can respond to your tweet
- Group: Only your followers and anyone you mention can respond to your tweet
- Panel: Only the people you mention in the tweet can respond
- Statement: No one can respond
Perhaps the most concerning change is the ‘Statement’ function. There’s no doubt about it: we live in divisive times, in which debate has become the natural mode of discourse. Debate can be healthy and constructive, and it’s important for society. The Statement function removes this discussion entirely, giving users no opportunity to respond, which means that even those who set out to educate and inform, rather than attack, cannot add value in response to a tweet.
This sounds harmless enough, until you consider the spreading of fake news and misinformation. The statement function will allow people to say what they want, unchallenged, without facing correction or consequence. If Twitter follows its trend of rolling out to verified accounts first, this will give some incredibly influential people the ability to have an unchallenged voice on the platform, which raises some important questions about how these tweets will be fact-checked, if at all.
Twitter should already be doing more to address this issue. We repeatedly see far-right activists tweeting racist, sexist and anti-LGBTQ content without facing any penalty. We see presidents of countries inciting war crimes, and climate-change deniers and anti-vaccine activists peddling a plethora of fake news and doctored statistics in order to alter people’s worldview.
However, the message from the presentation is clear: Twitter is now putting the onus on its users to deal with the misinformation, trolls and bullies that lurk on the platform. Instead of removing these problem accounts, Twitter’s support team can now default to showing users how to limit their own reach so that these accounts can no longer interact with them.
This runs directly counter to the purpose of the platform, which is, at its heart, about bringing people together for discussion, whether about politics, culture or memes. It’s true that not all of that conversation has been positive or intelligent, but we cannot ignore that Twitter has managed to grow a culture of community that spans national boundaries.
While trying to come across as forward-thinking and innovative, Twitter has actually shown its hand. The platform is out of its depth, drowning in a tide of vitriol, and it can’t seem to get its stance on this key issue right. It needs to develop a structured approach to handling online abuse, and to start getting it right more often than it gets it wrong.
To do this, Twitter doesn’t need to remove people’s ability to debate and discuss. It needs to remove those who intend harm and spread hatred on the platform.
It’s that simple.