In a little over a month, the #StopHateForProfit campaign has become global. Hundreds of brands are joining the pledge to suspend advertising on Facebook and Facebook-owned platforms until the company comprehensively addresses hate speech on its platforms.
Such calls to outlaw platform speech deemed hateful are not new, but over the past 15 years they have gathered strong momentum. As this graph from Freedom House illustrates, freedom of expression has been in constant decline.
But surprisingly, it’s not just state legislation that reduces freedom of expression. With the growing importance of social media, those platforms have become the targets of those who seek to silence dissenting voices.
For example, in 2019, the Christchurch shooter used the website 8chan to publish his manifesto. In the aftermath, Cloudflare, which provided security services for 8chan, was pressured into dropping the site.
Similarly, mainstream social media platforms such as Facebook or Twitter ban and remove extremist accounts on a regular basis. Usually, those clean-up exercises take place in the wake of highly publicised incidents. Despite these measures, extremist activists keep returning to those platforms under new profiles.
By going after its revenue stream, #StopHateForProfit aims to hurt Facebook badly enough that it will enforce a strict policy on hate speech. However, with a user base of over 1.5 billion people holding different world views, values, and beliefs, Facebook doesn’t want to get involved in policing people’s views.
This steady decline in freedom of expression has made OSINT in general, and SOCMINT in particular, increasingly complicated and cumbersome.
Driving the conversation underground
With every round of censorship, extremists not only go further underground but also strengthen their security. Encrypted platforms such as Telegram have become central to the communication strategy of jihadists and far-right activists. But infiltrating such channels is far more complicated than running a search on Facebook or Twitter.
To start with, researchers need to rely on techniques that fall into a grey area, potentially putting themselves in legal jeopardy. Most legislation restricting the possession or sharing of hateful or extremist material includes exemptions for researchers. Even so, such legislation ultimately relies on a judgement call by police officers and judges as to whether the material is held for research purposes or for extremist activities.
Depending on the depth of the analysis, analysts will create more or less complex sockpuppet accounts to access those encrypted channels. But to infiltrate the most secure channels or chat rooms, they need to build and interact through a complex legend (a fake identity with a fictitious background and character), which takes time to construct and maintain.
While such techniques have long been used by specialist analysts to infiltrate operational channels and chat rooms, more and more surface OSINT analysts need to resort to a similar toolkit and skillset. However, most surface analysts don’t have the skills and training to conduct that sort of research safely. Often, it requires the use of virtual machines, VPNs, and specialised tools such as the Tor browser.
In short, driving extremist conversations underground doesn’t stop them. Instead, it makes monitoring those conversations more complicated and dangerous for OSINT analysts.
A knowledge management nightmare
Most de-platforming efforts are useless. For every account taken down, five more crop up. And while hateful messaging keeps being published on social media, tracking those users across platforms and usernames is extremely time-consuming from an intelligence perspective. Yet if you want a good understanding of how a specific network evolves, such an exercise in knowledge management is essential.
Extremist movements are made up of highly complex webs of connections between individuals and organisations with many diverging agendas. Whether Islamist, far-right, or far-left, extremist networks are living things. They constantly evolve through their interactions with one another.
More importantly, they are riven by disagreements and internal debates. For instance, Islamists regularly bicker over whether they should focus on the ‘far enemy’ (usually the US or Israel) or the ‘near enemy’ (the local regime).
In the US, the emergence of the Boogaloo movement and its incoherent approach to the Black Lives Matter movement also illustrates such internal debates. Proponents of the Boogaloo are split between those who advocate a civil war against the government and therefore support BLM, and those who advocate a race war and therefore oppose it.
From an intelligence perspective, tracking those internal debates is essential to understanding how a given risk profile can change. Understanding which narrative has the most momentum can help recognise and predict target types and modus operandi.
Constant disruption of extremists’ online presence (through platform takedowns and de-platforming), however, means we lose the ability to track those networks and conversations. The only alternative is to invest considerable time and resources in building this institutional knowledge manually.
First they came for the extremists...
Finally, this drive for political correctness can ultimately affect the analysts themselves. Whether they produce analysis for political or business decision-makers, their job sometimes involves telling their audience things it doesn’t want to hear.
In the current context of extreme political correctness, speaking truth to power can be highly damaging to an analyst’s career. There is a fine line between explaining a phenomenon and condoning it. And sometimes, it is the analyst’s job to warn their audience that its decisions might drive the risk of terrorism or civil unrest up.
The risks associated with infuriating bosses or clients have always been inherent to the intelligence analyst’s job. A famous example is Valerie Plame, who was outed as a CIA officer in retribution for her husband’s articles casting doubt on the Bush administration’s assertion that Saddam Hussein had sourced uranium from Niger.
The current climate of extreme political correctness could lead to many more good analysts being fired or worse – jailed – for producing the ‘wrong’ intelligence analysis.
This post has focused on the implications of curbing freedom of expression for OSINT. But of course, the question of freedom of expression is much larger than OSINT.
Words alone don’t kill, nor do they turn people violent. For words to turn people violent, they must echo a desperate reality. Criminalising or banning dissenting voices does not shut them down. On the contrary, it strengthens their resolve and gives them the legitimacy of the martyr.
Now, this is not to say that the explosion of communications and the echo-chamber effects of social media do not have a real-world impact. They do. But having those conversations in the open allows OSINT analysts to monitor that chatter, pick up indicators of imminent threats, and act on them.
Having spent countless days watching, reading, and deconstructing propaganda from Islamists and far-right extremists, I’ve learnt to grow a thick skin and not take offence. But most importantly, it was a great exercise in critical thinking. I’ve learnt to know my target, to understand its rationale, and to deconstruct its message.
And in doing so, I’ve realised that as a society we are doing to those extremists what they dream of doing to us. We are forcing them to subscribe to a certain set of values and principles.
And this raises the question of who decides what speech is hateful. Such a question cannot be resolved through objective metrics, as it is inherently political. While today it is mostly liberal progressives who decide what is hateful and politically incorrect, tomorrow it might well be the far right or the far left. If we tolerate liberal censorship today, we give legitimacy to someone else’s censorship tomorrow.