A Dialogic Turn in Online Far-Right Activism
Far-right groups, which encompass a broad array of cognate paramilitary groups, political parties and protest movements with nativist, authoritarian and populist policy platforms (Mudde 2007; Carter 2018), have increasingly been able to mobilise, exploit, and weaponise the online space for activism and campaigning. Recent research suggests that such groups exercise an ‘opportunistic pragmatism’ in their use of online platforms, creating new hubs of convergence and influencing elections in countries like Germany, Italy and Sweden (Davey & Ebner 2017, 2018; Colliver et al. 2018). Just as examples of recent far-right electoral successes now abound, these cases also demonstrate a shift away from the parochial concern of using the internet to organise and mobilise a particular group’s activist base, and towards the more transnational ambition of disseminating far-right messages and ideology to a broader audience (Froio & Ganesh 2018; Caiani & Kröll 2015).
Indeed, this dialogical turn is symptomatic of the abundance of social media platforms that characterise the modern internet. Far-right groups are no longer content to talk amongst themselves on the bulletin boards, chat forums, and closed online spaces of the early internet. Increasingly, these actors use ‘likes’, ‘retweets’ and ‘pins’ to disseminate (usually sanitised) versions of their messages to a wider audience. What makes this content problematic is its often banal and coded nature: notions of tradition, heritage, and support for the military are used to boost followership and widen the pool of those exposed to more nativist narratives (Brindle & MacMillan 2017; Copsey 2017). Added to this, far-right actors have exploited the voyeuristic nature of the ‘live-broadcast’ function on sites like Facebook and Twitter, using it to launch video attacks on journalists, politicians, and minorities.
The dialogical turn has posed an increasing challenge to policing communities seeking to detect and disrupt potentially illegal activity online. De-platforming has now become the norm among social media companies aiming to tighten the net around far-right groups and actors. Indeed, such measures have helped to limit the far-right’s ability to spread anti-minority propaganda; for example, one UK-based group, Britain First, was stripped of over two million Facebook followers after its account was taken offline in March 2018 (Dearden 2018). More challenging, however, is the ‘whack-a-mole’ nature of de-platforming, with many far-right groups shifting their online activities to smaller, less regulated platforms, such as Gab.ai and Minds. Another persistent grey area surrounds the review of posts that are not overtly racist or harmful in nature, but which fuel wider anti-diversity narratives. Indeed, messages once associated with far-right fringe groups have become mainstream in the contemporary authoritarian populist moment, with governments in America and Europe now supporting anti-immigration and anti-internationalist policies (Norris and Inglehart 2019).
Policing the Far-Right Online: The case of the United Kingdom
In a recent article for Studies in Conflict & Terrorism, we examined how policing communities in the UK and Hungary are dealing with far-right groups online, the strategies they are using, and the impact these are having on far-right activism – both online and, more importantly, offline. Based on interviews with practitioners and case studies of the online behaviour of far-right groups in both contexts, we found that policing communities are using increasingly sophisticated tactics to combat far-right content, and that these efforts are increasingly being supplemented by social media companies themselves. However, this reliance on the platforms has also handcuffed policing actors, ceding responsibility to an increasingly vigilant set of social media companies keen to remove offending pages and content.
One of the key challenges identified in our interviews related to the practicalities of monitoring and vetting far-right content online. In particular, the issue of definitions and what constitutes ‘far-right’ content online was raised by one interviewee, who stated: “in terms of regulating or policing the online sphere, that’s where the definitions come into play… it is difficult to establish whether content is harmful or not. The most harmful content might not be the most violent or sensitive content. The most harmful exactly might be the most mundane topics that you might like. So that’s an issue.” This interviewee suggested that online databases are needed, tying definitions to the symbols and messages of such groups, to help law enforcement, government and tech companies regulate the relatively new prevalence of far-right groups online.
Another challenge is the sheer volume and proliferation of far-right content online. As the Trump election and Brexit have shown, the messages and political platforms of such movements have become more mainstream, expanding the possible channels to monitor and disrupt. As one interviewee suggested: “You know, generally when we’re looking at Jihadist searches we’re looking in the low thousands, and when we look at violent far-right searches, particularly in the US, we’re looking at the hundreds of thousands.” This challenge extends to the closed groups and chat forums used by far-right actors, which are also harder to detect and police due to their secretive nature.
Finally, interviewees also mentioned the need for responses more proactive than simply taking down far-right content and sites. This would involve using algorithms and natural language processing to anonymously identify individuals who are searching for and accessing far-right content online, and then targeting them with counter-narrative videos and one-to-one interventions to prevent them from becoming further radicalised or even committing violent extremist acts. Such preventative interventions have been pioneered by Google and YouTube through the Redirect Method. However, as one interviewee pointed out, it is hard to develop counter-narratives that address all permutations of far-right ideology: “There are so many different grievances that are out there, that you can’t necessarily hone in on and that’s why when we discuss the far right, the narrative you are going to give to the generic population is going to be slightly different.”
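To make the logic of such a redirect-style pipeline concrete, the toy sketch below flags text against a lexicon of coded terms and maps the dominant flagged theme to a counter-narrative resource. Everything here is invented for illustration – the lexicon entries, theme labels, thresholds, and content IDs are placeholders, and real systems use trained language models and human review rather than keyword lists.

```python
# Illustrative sketch only: keyword-based flagging and counter-narrative
# matching. All terms, themes, and content IDs below are hypothetical.
import re
from collections import Counter

# Hypothetical lexicon mapping coded terms to broader narrative themes.
LEXICON = {
    "great replacement": "demographic-conspiracy",
    "remigration": "anti-immigration",
    "globalist elite": "conspiracy",
}

# Hypothetical mapping from themes to counter-narrative content IDs.
COUNTER_NARRATIVES = {
    "demographic-conspiracy": "video-017",
    "anti-immigration": "video-042",
    "conspiracy": "video-103",
}

def flag_text(text: str, threshold: int = 1) -> dict:
    """Return themes whose lexicon terms appear at least `threshold`
    times in `text`, with per-theme hit counts."""
    lowered = text.lower()
    counts = Counter()
    for term, theme in LEXICON.items():
        counts[theme] += len(re.findall(re.escape(term), lowered))
    return {theme: n for theme, n in counts.items() if n >= threshold}

def recommend_counter_narrative(text: str):
    """Pick the counter-narrative for the most prominent flagged theme,
    or None if nothing is flagged."""
    themes = flag_text(text)
    if not themes:
        return None
    top_theme = max(themes, key=themes.get)
    return COUNTER_NARRATIVES[top_theme]
```

In a redirect-style deployment, the recommendation step would be attached to search queries or ad placements rather than run over posts directly; the point of the sketch is only the two-stage structure of detection followed by targeted counter-messaging.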
In addition, there have been other proactive efforts to micro-target and identify individuals in need of further counselling and guidance to disengage them from extremist content. For example, one initiative called ‘counter conversations’ engaged with individuals from both Islamist and far-right backgrounds (Davey et al 2018). As one interviewee commented: “I think it’s a promising approach. I think there needs to be greater work in how this can be employed in different settings. But I think it [is] a useful starting point.” Such online interventions could complement existing counter-terrorism policing programmes, such as Prevent in the UK, which uses one-to-one counselling as part of the de-radicalisation process in the offline space.
Policing the Far-Right Online: The case of Hungary
Turning to Hungary, far-right street activist organisations and political parties have been active in the online space for some time, yet analytical examination of this activity is lacking. However, Jamie Bartlett and colleagues (2012) conducted a pioneering study surveying supporters of the political party Jobbik Movement for a Better Hungary (Jobbik Magyarországért Mozgalom; Jobbik) on Facebook. The study found that, at the time, a significant percentage (22 percent) of Jobbik supporters had a university education and that Jobbik Facebook followers under 30 were less likely to be unemployed than the national average. The main concerns of Jobbik Facebook supporters were the integration of Roma (28 percent) and crime (26 percent, compared to the national average of 3 percent). One of the biggest differences between Jobbik’s Facebook followers and European far-right activist movements was the lack of concern among Jobbik supporters about immigration and Islamic extremism, the top two concerns of Western European far-right movements (Bartlett et al. 2012). These attitudes have since shifted, as immigration and anti-Muslim sentiments have taken over the rhetoric of the far right following the 2015 migrant crisis.
Like the UK interviewees, our Hungarian expert interviewee was quick to point out that it is not always clear what is meant by the far right in Hungary. Several far-right organisations exist in Hungary, but far-right messages are also promoted online by the Hungarian government, as the messaging “comes much more from the state-run media.” In recent years the Fidesz party has pushed anti-migrant and xenophobic messages, leaving little room for far-right parties and essentially ‘stealing’ their main themes. In fact, “largely what Fidesz says is exactly what the [far-right] organisations say.”
In Hungary, therefore, far-right messages are not only pushed by organisations, but also by the government. As the Hungarian government has a monopoly on the media, with very few non-government controlled media outlets left, it can easily promote its own messages. These messages are promoted both on social media sites and on the websites of Hungarian news outlets. Comparing the Hungarian media to the infamous American Breitbart, our interviewee pointed out that some news outlets went so far as to create quasi-fake anti-refugee videos ahead of Hungary’s 2018 national elections.
Unfortunately, the future of regulating far-right online messages in Hungary looks bleak. While there is a need for much stronger regulation, this is clearly not in the interest of the government. Nor is it in the interest of tech companies to regulate the messages in Hungary per se, as this could lead to potential conflict with the government. Indeed, it is “hopeless to work against the government’s propaganda; as long as the crosswinds are so strong,” there is not much that civil society organisations can do. Our interviewee suggests that, in Hungary, regulating far-right messages online is not the most useful strategy because of the difficulty of regulating messages promoted by an authoritarian government. Instead, efforts by technology companies and civil society to monitor and counter such misinformation campaigns may be the more realistic way forward.
Conclusion: Boosting the Effectiveness of Policing the Far-Right Online
Comparing the two cases of policing the far right online in the UK and Hungary may seem an odd pairing at first glance, but such different contexts point to a possible way forward for the new and developing field of regulating far-right content online. In both cases, expert interviewees involved in practical intervention and analysis have found that clear definitions of what constitutes a far-right group and far-right content are needed to identify and take down potentially harmful content. Our first interviewee, from a UN-based counter-terrorism project, suggested a sensible solution to this: international standards, in the form of lists of these actors and their symbols, would make it easier to flag and remove such content when it becomes problematic. Difficulties arise, of course, at the state level of implementation and in the contextual specificities of each country, but global definitions seem to be the way forward in such a grey area.
In addition to these standards, better international cooperation between state actors and tech companies in sharing intelligence and methodologies could also help to reduce and de-risk far-right political content online. The Global Internet Forum to Counter Terrorism is an important example of an initiative that brings together willing parties and shares best practices to reduce the spread of terrorist and extremist content, but more can and should be done at the international level to develop innovative approaches in this area. New methodologies piloted by tech companies and NGOs using advanced computing techniques could move policing communities away from more reactive modes of combating such groups and towards a more proactive stance in tackling this issue. The use of algorithmic and natural language processing techniques to monitor and target individuals at risk of moving towards violent forms of extremism appears to be a key way of intelligently preventing more problematic forms of far-right activism. Moreover, this first stage could be complemented by a second, counselling stage – using the online space as an equivalent to the offline social work needed to rehabilitate individuals who might otherwise follow a criminal path.
There are obvious ethical implications both of keeping lists of far-right activists and messages, and of using technology for monitoring. However, if this is done in a way that is sensitive to individual privacy and context-specific, the infringement of civil liberties can be kept to a minimum. Well-defined limits on what policing actors can do, as demonstrated in the UK, are therefore the basis for gaining citizens’ acceptance in this area. Equally, the effectiveness of online techniques in meaningfully deterring individuals from engaging in illegal activities in the offline space can be questioned. However, in conjunction with offline efforts, such online technologies could become a key tool for resource-poor authorities in tackling the early stages of violent radicalisation. If such ethical difficulties can be overcome, then these more proactive methods will be crucial in combating more problematic forms of far-right activism in the months and years to come.
Note: This article reflects the views of the author and not the position of the DPIR or the University of Oxford.