Someone recently asked me whether they could take the scale of the public outcry on social media in relation to so-called ‘outrage events’ (events that touch on fault-line issues relating to race, gender, religion, etc.) at face value. My reply was that one really needs to put the role and dynamics of social media into context. There’s not a one-to-one relationship between what happens on Facebook, Twitter, Instagram and TikTok and what the general public tends to feel, although there is a feedback loop. In addition, so many interests are trying to manipulate these platforms that one can’t take what happens on them at face value. One needs to ask: who is being outraged? What other issues have they promoted or responded to in the past? What is their political alignment? To what extent does the outrage exist only within specific echo chambers? One needs to understand these dynamics, and more, before making a judgment. This is not to say that social media responses are irrelevant; far from it. But one does need to understand how social media and the ‘real world’ feed into, and play off of, each other.
I have been researching South African social media for over a decade, with much of my research published on this website. I have also published academic papers in this area, and my research has been covered in the news media in the past. In addition, I have a social media intelligence consultancy where we provide our clients with insight into the socio-political landscape that their organisation operates within on social media (you can contact me via the Contact page if you want to know more about this).
We use a mixture of data science techniques to make sense of large amounts of data in order to identify the patterns of behaviour that characterise our online discourses. In my time researching it, I have become intimately familiar with the South African social media landscape, and I have had a front-row seat to observe many past influence campaigns that have attempted to sway our national discourse. The ways in which our country’s conversations have been buffeted by vested interests are instructive for citizens, countries and organisations all over the world.
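To make one such technique concrete, here is a toy sketch (with invented data, and illustrative field names and thresholds — this is not our actual tooling) of a simple pattern-detection approach: flagging identical posts shared by many distinct accounts within a short time window, a common signature of co-ordinated amplification.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Toy example: flag message texts posted verbatim by many distinct accounts
# within a short window. Field names ("user", "text", "time") and the
# thresholds are illustrative assumptions, not a production detector.
def find_coordinated_bursts(posts, window=timedelta(minutes=10), min_accounts=3):
    by_text = defaultdict(list)
    for p in posts:
        by_text[p["text"]].append(p)
    bursts = []
    for text, group in by_text.items():
        group.sort(key=lambda p: p["time"])
        accounts = {p["user"] for p in group}
        span = group[-1]["time"] - group[0]["time"]
        if len(accounts) >= min_accounts and span <= window:
            bursts.append(text)
    return bursts

# Invented sample data: three accounts push the same slogan within minutes,
# while an unrelated organic post appears once.
posts = [
    {"user": "a1", "text": "Hands off our hero! #outrage", "time": datetime(2023, 1, 1, 12, 0)},
    {"user": "a2", "text": "Hands off our hero! #outrage", "time": datetime(2023, 1, 1, 12, 2)},
    {"user": "a3", "text": "Hands off our hero! #outrage", "time": datetime(2023, 1, 1, 12, 5)},
    {"user": "b1", "text": "Lovely weather today", "time": datetime(2023, 1, 1, 9, 0)},
]
print(find_coordinated_bursts(posts))  # → ['Hands off our hero! #outrage']
```

Real analyses are far more involved (near-duplicate text, account metadata, timing distributions), but the underlying idea is the same: behaviour that is statistically unlikely to arise organically.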
Influence agents… leverage emotions and network effects to create a dangerous cocktail of artificial influence and social manipulation
My research experience has taught me that accurately gauging social media users’ responses to a specific issue, and how those responses relate to the perceptions of the nationally-representative population, is incredibly difficult. It is often unclear to what extent the users who respond to an outrage event on social media would have felt so strongly about it had they encountered it in isolation. Often, users are swept up by the power of the bandwagon effect (or mob mentality), which social media is uniquely built to encourage and leverage. This can produce runaway conversations that spike for a few days before being replaced by another topic, and that don’t necessarily translate into real-world changes in perception within the general population.
Social media massively increases the scale of connections between people compared to the past. In addition, it sometimes connects us in ways that wouldn’t occur naturally in “real life” (although, granted, the distinction between social media and “real life” is blurry these days). Influence agents often try to artificially connect like-minded people who support a specific worldview in order to create echo chambers, where users spur each other on to ever-greater heights of anger and outrage. These echo chambers can be wielded as instruments of narrative control to achieve specific agendas, and they are present within most discussions, whether we realise it or not. Indeed, even members of these echo chambers often don’t realise that they have been co-opted into them.
Researchers, social media platform owners and influence agents all know that highly emotive content spreads best, which gives them an incentive to share content that engenders anger, fear, outrage, happiness, etc. Outrage events are one kind of highly emotive content. Like many complex issues, such highly emotive topics bring users together; in the process, facts and motivations are simplified so that they can be internalised by larger and larger communities. Complex, grey issues become cast as simple dichotomies in which one is either for or against an issue.
Influence agents are well aware of these social effects and are well-versed in harnessing them for their own ends. They leverage emotions and network effects to create a dangerous cocktail of artificial influence and social manipulation that should be approached with extreme caution when drawing conclusions about the real world and how people really feel.
Can outrage be manipulated?
It is well established by this point that, throughout the world, pressure groups, special interest groups, foreign actors and those with specific financial interests have used their resources to create debates where none exist, inflame otherwise civil debates, encourage polarisation within societies, and suppress discussions that run contrary to their interests. Across Brazil, the USA, Europe, Asia and Africa, dozens of examples of covert influence by such groups can be found in the literature on this topic. Indeed, many countries and special interest groups have teams dedicated to using such inauthentic techniques to push their agendas, to fighting against such manipulation by others, or both. And, more often than not, social media platforms allow them to go about their business completely anonymously, making it very difficult to attribute specific campaigns to specific groups with any certainty.
What forms of manipulation are possible?
Conversations might be artificially amplified or manipulated in a variety of ways including, but not limited to:
Bots

When the topic of social media manipulation comes up, computer-controlled robots – or “bots” – that behave like real social media users are the first thing that comes to many people’s minds. However, bot usage is declining over time as platform owners get better at detecting them (so far, bots’ behaviour has differed substantially from humans’; although, with the rise of modern AI, these differences are likely to shrink).
As it stands, though, bots tend to be used mainly to artificially amplify content, giving it an air of legitimacy. For example, they will retweet or Like a post a few hundred or thousand times so that real users think the content is popular and worth engaging with.
Actual conversations between users, where one of the users has an influence agenda, are still left to real actors, often operating under assumed identities (sockpuppets). Again, though, this is likely to change in the near future as AI becomes better able to converse like a real human.
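As a hedged illustration of the kind of heuristic that can separate amplification accounts from ordinary users, consider the sketch below. The thresholds, field names and weighting are invented for the example; real detection systems combine many more signals.

```python
# Toy heuristic sketch (all thresholds and fields are illustrative
# assumptions, not a real detector): accounts that almost exclusively
# reshare others' content at very high volume look more like
# amplification bots than like ordinary users.
def amplification_score(account):
    total = account["original_posts"] + account["reshares"]
    if total == 0:
        return 0.0
    reshare_ratio = account["reshares"] / total
    volume_per_day = total / max(account["account_age_days"], 1)
    # A high reshare ratio combined with unusually high daily volume
    # raises the score; 50 posts/day is an arbitrary saturation point.
    return reshare_ratio * min(volume_per_day / 50.0, 1.0)

# Invented profiles: a month-old account reposting ~165 times a day
# versus an ordinary long-lived account.
suspect = {"original_posts": 5, "reshares": 4995, "account_age_days": 30}
normal = {"original_posts": 300, "reshares": 200, "account_age_days": 900}
print(amplification_score(suspect) > amplification_score(normal))  # → True
```

The point is not the specific formula but the shape of the analysis: bot detection rests on behavioural statistics that organic use rarely produces.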
Real-world co-ordination of key influencers
In this scenario, a group of people might create a content plan outlining the topics to be discussed or attacked on a specific schedule, including hashtags to use and memes to share, in order to build up a narrative. The actors who push this content are then often organised via private ‘dark social’ channels such as WhatsApp groups (this was, for example, how the Zuma RET faction co-ordinated much of its social media output in 2016 and 2017).
What makes this approach different from regular marketing is that the content is not labelled as marketing but is instead passed off as organic content from real users. At the same time, the actual identities of those involved are hidden.
Dedicated influence agencies
Similar to the above co-ordination of influencers, this approach takes things to the next level by outsourcing the influence work to a third party. Two well-known examples are Russia’s Wagner-linked Internet Research Agency (IRA) and its associates, which are currently very active in Africa, and South Africa’s “Guptabots”, a network of hundreds of fake accounts across Twitter, Facebook, Instagram, Tumblr and Disqus.
The Guptabot fake accounts were “inhabited” by a team of real people likely based in India with ties to the Gupta family. When controlled by real people, such fake accounts are known as “sockpuppet” accounts. The Guptabot network of accounts relentlessly attacked critics of the Zuma regime and the Gupta family (including journalists and opposition politicians) and helped to popularise the hitherto little-known concept of “white monopoly capital” within South Africa’s mainstream discourse.
Their actions are often erroneously attributed to the Bell Pottinger agency. While the bots did build on a narrative that Bell Pottinger fleshed out with Duduzane Zuma, there is no evidence that Bell Pottinger was actively involved in running this campaign via the sockpuppet accounts operated out of India. The “Manufacturing Divides” report (PDF) gives a detailed overview of how this campaign was deployed.
Curated influence networks
Perhaps the most widely used form of narrative manipulation today is the phenomenon of curated influence networks that are created and leveraged for specific agendas. Political and other interest groups create such networks by artificially connecting like-minded users (through techniques such as ‘follow trains’) into a hyper-connected echo chamber that moves in lock-step around specific issues and narratives, and that can be wielded as an influence tool. Some ‘influence merchants’ build these networks and then rent them out for financial gain.
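One measurable signature of such curated networks is abnormally high internal follow density: ‘follow trains’ push a group’s mutual-follow rate far above what organic following produces. Here is a minimal sketch with invented data (the function, accounts and edge list are illustrative only):

```python
# Toy sketch: compare the internal edge density of a suspected 'follow
# train' group against an organically connected group. All data invented.
def internal_density(follows, members):
    """Fraction of possible directed follow edges that exist within a group."""
    members = set(members)
    possible = len(members) * (len(members) - 1)
    actual = sum(1 for a, b in follows if a in members and b in members)
    return actual / possible if possible else 0.0

# Hypothetical directed follow edges: (follower, followed).
follows = [
    ("u1", "u2"), ("u2", "u1"), ("u1", "u3"), ("u3", "u1"),
    ("u2", "u3"), ("u3", "u2"),            # fully meshed 'train' trio
    ("u4", "u5"), ("u5", "u6"),            # sparse organic follows
]
print(internal_density(follows, ["u1", "u2", "u3"]))  # → 1.0 (every possible edge exists)
print(internal_density(follows, ["u4", "u5", "u6"]))  # ≈ 0.33
```

At the scale of thousands of accounts, organically formed communities almost never approach full mesh, so sustained near-complete internal density is a strong hint of curation.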
Perhaps the most well-known example of a curated influence network in a South African context is the creation of the anti-foreigner political platform on Twitter. amaBhungane has published an extensive exposé on this topic and how it relates to the #PutSouthAfricansFirst and Operation Dudula movements. Their article shows who the early adopters and key beneficiaries of this campaign were, including ActionSA, the ATM party, and the South Africa First Party (although this is still circumstantial evidence). The article also gives a detailed case study and breakdown of how such networks were created and curated in the first place, as well as linking to examples of this same tactic being used by Donald Trump’s supporters in the 2016 US elections (here and here).
[Curated influence networks] play a large role in obfuscating the real-world impact of an outrage event since these networks are embedded within real user networks, because they are made up of real – albeit radicalised – users…
Such networks play a large role in obfuscating the real-world impact of an outrage event. Because they are made up of real – albeit radicalised – users, they are embedded within genuine user networks. This has the distorting effect of increasing polarisation between communities and heightening outrage levels within each polarised community, or echo chamber, across a far greater variety of issues than the one the curated network was originally created to influence.
In short, pervasive curated influence networks distort real conversations, simplifying them and driving users to extreme positions. It would be foolhardy to take these reactions at face value.
Conclusion: Can we accurately gauge the scale of authentic social media responses?
Given this brief overview of social media manipulation tactics, the pervasive manner in which they are embedded across our online discourse, and the examples of how they have been used in South Africa in the past – often by the same set of actors – we clearly cannot take the size of a social media outrage response at face value. This, in turn, means that we cannot accurately determine a commensurate charge or punishment for those accused of creating ‘social harm’ based purely on the social media response alone.
Instead, one needs to take a more nuanced view of an outrage event. Who is being outraged? What other issues have they promoted or responded to in the past? What is their political alignment? To what extent does the outrage exist within specific echo chambers only? These are the kinds of questions that need to be asked before taking social media responses at face value.