
Back in November, the BBC published a report into the online abuse (or ‘toxicity’ as the report termed it) that MPs suffer on social media. The report was eight months in the making and put an AI tool that had been built by the BBC themselves at the heart of the research.

However, when the research was published to much fanfare online, flaws in the report quickly became apparent. The BBC had linked directly to the tool in the report, allowing anyone to try it out. Users began to test certain slurs, particularly racialised ones, and found that highly offensive terms for Jewish, Black and Hispanic people went entirely unrecognised by the tool.

Well-known far-Right and fascist phrases such as ‘I’ve got fourteen words for you’ were shown as non-toxic (or, in the report’s methodology, less than 10% likely to be toxic). In other examples, playground insults that would struggle to upset primary school children, such as ‘you are a poo poo head’, were found to be 77% likely to be toxic, while ‘your kind don’t belong in politics’ was rated as inconclusive.

Swearing appeared to be tagged as toxic automatically: adding a swear word to an otherwise identical phrase took its rating from a very low chance of being toxic to a very high one, even though the sentiment remained much the same.
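As a rough illustration of how this kind of probing works (the BBC has not published its tool’s internals, so this is not a reconstruction of its method), the sketch below runs matched phrases through an off-the-shelf toxicity classifier. The model name, unitary/toxic-bert, and the Hugging Face transformers pipeline are assumptions chosen purely for illustration.

```python
# A minimal sketch of probing a toxicity classifier with matched phrases
# that differ only by a swear word. This is NOT the BBC's tool: the model
# ("unitary/toxic-bert") and the Hugging Face pipeline are assumptions
# chosen purely for illustration.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

pairs = [
    ("That decision was a disgrace.", "That decision was a f*cking disgrace."),
    ("Your kind don't belong in politics.", "You are a poo poo head."),
]

for first, second in pairs:
    for text in (first, second):
        result = classifier(text)[0]
        # The pipeline returns a label and a confidence in [0, 1]; a blanket
        # rule would treat any high score as toxic, regardless of context.
        print(f"{result['score']:.2f}  {result['label']:<10}  {text}")
```

The point of such a probe is not the specific numbers but the comparison: scoring minimally different pairs makes it easy to see whether surface features such as swearing, rather than the underlying sentiment, are driving the rating.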

The report was well-meaning in attempting to measure and draw attention to a genuinely pressing problem. But it also exposed the pitfalls and challenges of measuring incivility, abuse and harassment online via automated methods. Blanket rules that take no account of context, internet slang or online cultures will almost always fall short.

In part, this is why the Government’s flagship Online Harms Bill has run into problems. The debate at one point hinged on attempting to define ‘legal but harmful’ speech and regulate it. But ‘harmful’ speech is almost impossible to define, often depending on who it is aimed at, how the person it is aimed at takes it, and other important contextual factors. The exact same words in the exact same order could be ‘harmful’ towards one person but not another. This makes legislating in this area exceptionally difficult.

In my own work on the abuse and incivility sent to MPs and other public figures on social media (alongside my Liverpool colleagues Dr Emily Harmer and Liam McLoughlin), I’ve read and hand-coded over 10,000 tweets and other social media posts. I’ve learnt that abuse often takes the form of far more subtle choices of language. Examples include people suggesting MPs may have provided sexual services for their promotions by tweeting ‘I’m sure their interactions were…quite a mouthful for her’, questions such as ‘did you run this by your husband?’, and posts asking why women MPs aren’t at home looking after their children. All of these are obviously deeply sexist, even misogynistic, but would be very unlikely to be picked up by the types of automated methods deployed in the BBC study. Tools trained carefully with misogyny, racism and ableism at their heart have the potential to be more successful, but this would take a large amount of human coding first. Unfortunately, there are no shortcuts here.
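To make concrete what ‘a large amount of human coding first’ implies, here is a minimal sketch of a supervised pipeline built on hand-coded labels, using scikit-learn. The toy posts and labels are hypothetical stand-ins; real work would need thousands of coded posts, an agreed coding scheme and inter-coder reliability checks before any model was trained.

```python
# A minimal sketch of a classifier trained on hand-coded labels rather than
# keyword rules. The handful of posts and labels below are hypothetical
# placeholders; a usable model would need thousands of coded examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human coders read each post and assign a label that captures subtle,
# context-dependent abuse, not just slurs or swearing.
posts = [
    "did you run this by your husband?",
    "shouldn't you be at home looking after your children?",
    "f*ck yeah, brilliant result on the amendment",
    "I strongly disagree with this policy, minister",
]
labels = ["misogynistic", "misogynistic", "not_abusive", "not_abusive"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Probe with an unseen post; with real training data, this is where the
# subtler forms of abuse described above would need to be caught.
print(model.predict(["I'm sure their interactions were quite a mouthful for her"]))
```

Even in this toy form, the crucial ingredient is the labelled data itself, which is exactly the slow, human part of the work that cannot be shortcut.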

When it comes to swearing, hand coding posts reveals that much swearing in online political spaces is not toxic and can indeed be the opposite. I lost count of the number of variations on ‘f*ck yeah’ under women MPs’ Instagram posts announcing victories on policy and the like. AI could pick this up as abusive when, if anything, the swearing is adding affirmative emphasis to a positive declaration.

There is a further important distinction. More neutral swearwords and insults thrown at MPs are not ‘nice’, but they may not necessarily threaten democratic ideals. However, language that is gendered, racist, homophobic or transphobic may lead certain already under-represented groups to reconsider their political participation. In other words, those already made to feel unwelcome in traditional political spaces may see this kind of language being normalised in social media political discourse as yet another barrier to entry. The issue of harmful speech, however fraught with complications, therefore remains a factor with serious democratic implications.

There are also power differentials to consider. In a news report covering the BBC study, one male Conservative MP complained of the abuse he received online and specifically mentioned that he had sometimes been called a ‘fascist’. Though that is a strong assessment, there are arguments that some of the policies the Conservatives have enacted – such as sending asylum seekers to Rwanda and attempting to limit union rights – have authoritarian elements. A quick glance at the MP’s timeline also reveals that he himself has spoken in less than polite terms about various people he has interacted with, including mocking one of his own constituents for having a low number of followers. It is important that legitimate criticism is not conflated with simple abuse, and that MPs are asked to apply to their own online output the same standards of civility that they ask to be applied to them.

The BBC report was important in highlighting this genuine and real problem. But measuring it accurately, and taking account of the real problems here, including intolerance and the potential exclusion of already marginalised groups, will mean we are far better placed to tackle it and to ensure a healthier democratic debating space for all.

