If you happened to be taking your daily exercise on the streets of York, Leeds, Hull or Sheffield last month, there is a chance you would have seen messages blaming migrants for coronavirus.
Behind these messages were the Hundred-Handers – a secretive far-right group that aims to influence people through the mass posting of stickers in cities. Coordinating on social media platforms like Telegram, the group also posts charming slogans like “anti-Semitism is caused by semitism” and “Western civilization is white civilization”.
Social media provides this group with the oxygen needed to survive – and they are by no means alone.
In April, Tell MAMA warned that far-right groups are using social media to blame Muslims for spreading coronavirus, after Tommy Robinson posted a series of videos on TikTok falsely claiming that Muslims were attending mosques during lockdown. Meanwhile, a report last year found that a white nationalist genocide conspiracy theory had been mentioned 2 million times on social media in recent years.
Far-right leaders celebrate their reach and are angry when they are banned from platforms. When they struggle to reach a mass audience, they pay to advertise Islamophobia and hate on Facebook. Social media is clearly integral to the modern far-right’s growth.
How can social media platforms keep up with the spread of hate? Twitter, Facebook and YouTube have all developed comprehensive policies dictating what content is allowed. For instance, they ban content that incites violence and remove accounts that violate their terms. However, their guidelines are by no means foolproof.
Indeed, while accounts associated with the far-right are often banned, this is not before their videos rack up substantial views. In a video posted on YouTube by Paul Golding, members of Britain First claimed asylum seekers entering the UK could be “terrorists” and said “we are going to intercept them and deter them from coming”.
Clearly in breach of YouTube’s hate speech policy, which says moderators will remove content promoting “violence or hatred”, the video remained active for months – and was only removed after Scram flagged it to the platform.
YouTube claims that most videos in breach of its policies are removed automatically, before anyone views them. But with videos like this seen by thousands, something is clearly going wrong.
And for an account to be banned, it has to enjoy a certain notoriety. Accounts associated with household names like Tommy Robinson are often spotted and reported, but there are doubtless swathes of smaller channels posting vitriolic views to small but loyal fanbases, undetected by YouTube. What are the consequences of these videos remaining online?
The same story has played out on TikTok, Facebook and – thanks to the lack of nuance in these platforms’ moderation – Twitter, where Britain First tweaked the name of its old account to flout a ban and return. These accounts emerge, spread hate to thousands, and are ultimately banned again.
And when one platform closes its doors to the far-right, they migrate to somewhere new. Recently, the far-right has found a home on VK – a popular Russian social media platform with 500 million users. With numerous other platforms springing up, it seems that anti-hate activists have been condemned to an infinite game of whack-a-mole to remove the far-right from the internet.
Indeed, it is away from the gaze of the mainstream that the far-right really flourishes. They have found a safe haven on libertarian platforms. Often built on open-source software, these platforms also let far-right users monetise their videos, receiving cryptocurrency in proportion to their popularity.
Unregulated by their creators and unknown to the mainstream media and public, there are few people devoted to shutting these accounts down, especially when the forces keeping them alive are so strong.
While it may seem clear that removing online hate is necessary to stop its spread, the far-right defend their “free speech” and accuse the establishment of employing tactics to muzzle them. But the anti-racism group HOPE not hate has warned that this could be manipulation, not integrity: “For some on the far-right free speech is not a right, it is merely a tactic.
“With their ideas long marginalised from the mainstream, they are using the notion of free speech to try and broaden the ‘Overton Window’ (the range of ideas the public will accept) to the point where it includes their prejudiced and hateful politics.”
Whatever the motive, the far-right have shown that they will not stop in their attempts to capture large social media audiences. And with minimal regulation, the duty falls on audiences not to let the ideas they encounter from these groups broaden their own Overton windows.
Kate Plummer is a Reporter at Scram News.
Scram News is closing its doors, for the time being, from 1st June. Find out more here.