So, here is a little story: STRATCOM is the NATO Strategic Communications Centre of Excellence. Recently, they re-ran a test of “the ability of social media companies to identify and remove manipulation”. The results have been published; the report, titled “Social Media Manipulation 2020”, can be viewed and downloaded here.
What did they do, in a nutshell? The researchers used thirty-nine authentic posts on Facebook, Instagram, Twitter, YouTube, and TikTok. As far as I can see from related news coverage, they cooperated for this with two U.S. politicians, one a Republican, the other a Democrat. German coverage can be found on the tech news site HEISE. Using these thirty-nine posts, STRATCOM bought fake engagement from three specialised Russian service providers. Providers like these offer, for example, manipulated engagement with existing posts. I guess they offer much more, but since this is NATO, the experiment used only authentic posts with legitimate and appropriate content. The researchers paid the ridiculously small amount of 300 euros to these Russian providers.
What did they get in return? 1,150 comments, 9,690 likes, 323,202 views, and 3,726 shares! Meaning, plain and simple: I can create content on social media, with good, questionable, or malicious intentions, and instead of hoping to attract many comments, likes, and views on my own, I can buy fake ones. Comments, likes, and views increase the “digital weight” of a post. The more of this “digital value” it has, the more likely other people are to look at these pieces of information or disinformation, and the more likely these posts are to be ranked higher by Internet search engines such as Google. Finally, I can also buy distribution, through shares, of these artificially boosted posts. The more “oomph” I have in getting these pieces of information out to the right target groups, the better my chances of further distribution. And I pay very little money for it.
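To put these numbers in perspective, here is a quick back-of-envelope calculation. A caveat: the report breaks the spend down by platform and provider, which I do not reproduce here; the sketch below simply divides the total of 300 euros by each reported total, so the per-unit figures are rough upper bounds for illustration only.

```python
# Back-of-envelope arithmetic based on the totals reported by STRATCOM:
# 300 euros bought 1,150 comments, 9,690 likes, 323,202 views, and 3,726 shares.
# The real spend was split across platforms and providers, so dividing the
# full total by each count only yields a rough upper bound per engagement type.

total_spend_eur = 300

engagement = {
    "comments": 1_150,
    "likes": 9_690,
    "views": 323_202,
    "shares": 3_726,
}

for kind, count in engagement.items():
    eur_per_thousand = total_spend_eur / count * 1000
    print(f"{kind:>8}: {count:>7,} bought -> at most ~{eur_per_thousand:6.2f} EUR per 1,000")
```

Even as a crude upper bound, the order of magnitude is what matters: fake views come out at under one euro per thousand, and since the same 300 euros also bought all the other engagement, the true figures are lower still.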
So, let me use a hypothetical example, but of a kind that is commonly used for manipulation purposes: I create a story with specific target groups in mind. Examples are countless. Take the fake news stories that were designed for target groups of color in the U.S. in 2016. They were designed to raise doubt within these groups that a contender, in that case the Democratic presidential candidate Hillary Clinton, was really interested in matters of grave concern to those communities. These inauthentic or fake pieces of news were designed to reduce the willingness of people in these target groups to participate in the elections. Many of us, I guess and hope, know about this form of manipulation. It also happened in 2020, and it is fair to say that both domestic actors and foreign forces all over the world use this tactic to influence the outcome of electoral processes. It happens everywhere; it is commonplace. At the very least it is manipulation, and it often violates the law in many jurisdictions. But chiefly it does this: it burns the credibility of truth to ashes. It leads to so much confusion about which information can be trusted and which cannot that people may give up, or simply decide to believe only what their “friends” say.
By the way, come to think of it: when sites like Facebook (I don’t remember who came up with it first) began to abuse the term “friend” for the process of clicking a button asking somebody to “be my friend”, with the invited party just accepting this request, I had a revolting feeling in my stomach. It lasts to this day and is one of the reasons why I engage only minimally on social media. Reducing “friend” to a digital connection with no real deeper meaning feels like the stroke of an evil genius to me. It feeds the craving to have as many friends as possible; it is part of the addictive design of social media sites. It hollows out any real understanding of what friendship means. Terms like “friend” and “follower” have become almost interchangeable. I need these digital friends for status and validation. Having friends has become an online currency, a material possession rather than an inner value that gives me the comfort of a deeper emotional, intellectual, and spiritual connection to another person in whose well-being I take an interest. It adds to real-world isolation because it strips the notion of having friends of what the term means in real social life. Maybe that is material for deeper reflection in another blog article.

But ultimately, this mechanism can be used to pull people into a network of “friends” for the purpose of influencing them, depriving them of other sources of information, making them pawns in a game they do not understand but crave to be part of. Emotions like the wish to belong mix with emotions such as fear and anger, and fake information is used as a narrative that gives them a feeling of meaning.
Back to the STRATCOM report: the first remarkable fact in the results of this research, for me, is the price of this form of manipulation. 300 euros is so cheap that this service can be used as a mass tool whenever it suits someone’s devious interests.
Secondly, this is a shadowy grey, and partly criminal, market. The methodology can be used by State actors with sophisticated technology and staff at hand, or simply with money, hiding their traces and buying the service from groups like those above. Likewise, non-State actors can use it for political, ideological, or religious purposes, or as a marketing tool. Which is a hint at how wide the scope of potential manipulation is. The targets are you and I, and very often we may never know that we were manipulated. So this is a profound ethical issue, with consequences for whether, and how, we want to regulate it and how we want to deal with it.
Thirdly, STRATCOM re-ran a test they had done before. They repeated it because the industry had promised to get better at identifying manipulation like this, better at identifying fake or robot accounts, better at curbing influence. The conclusions of the report state that “platforms continue to vary in their ability to counter manipulation of their services”. Read the details; I am not going to rank the services here myself. But there are platforms where manipulation requires little effort and where costs are one tenth of what has to be invested on other platforms.
Fourthly, the report makes it clear that these actors are not just a few; they are an industry. One chapter is titled “The Social Media Manipulation Industry”, an industry whose market rules have survived the efforts to fight it. Quoting the report: “Social media manipulation remains widely available, cheap, and efficient, and continues to be used by antagonists and spoilers seeking to influence elections, polarise public opinion, sidetrack legitimate political discussions, and manipulate commercial interests online.”
Fifthly, the report states that this industry prospered during 2020. I finish by quoting the three core insights the researchers came up with:
- “The scale of the industry is immense. The infrastructure for developing and maintaining social media manipulation software, generating fictitious accounts, and providing mobile proxies is vast. We have identified hundreds of providers. Several have many employees and generate significant revenues. It is clear that the problem of inauthentic activity is extensive and growing.
- During the past year the manipulation industry has become increasingly global and interconnected. A European service provider will likely depend on Russian manipulation software and infrastructure providers who, in turn, will use contractors from Asia for much of the manual labour required. Social media is now a global industry with global implications.
- The openness of this industry is striking. Rather than lurking in a shadowy underworld, it is an easily accessible marketplace that most web users can reach with little effort through any search engine. In fact, manipulation service providers still advertise openly on major social media platforms and search engines.”
What to do? Of course, we have a regulation debate. Part of the findings relate to the fact that the social media companies promised earlier to root out this phenomenon, but have not become good at it. To me it feels like the contrary, maybe because of unwillingness, sloppiness, or the sheer size of the problem, or any combination of the three. However, regulation always leads to escalation, or to attempts at evasion, as long as the business model generates revenue. Which it clearly does.
My take is to focus on education. I am not a young nerd; I am a nerd in my early sixties. This world changes rapidly, and often I don’t like the course it takes. But disengagement is, I feel, not an option. It is about giving people the knowledge and skills to make their own informed decisions. That is a core principle of open societies based on democratic rules. Truth matters, so we need to know how truth is demolished in the invisible digital world of the Internet. We need to keep learning and stay curious about it, engage in meaningful discussions, empower people to better identify manipulation when it occurs, and give them the skill set needed for sound decisions in their own lives.
When I talk to my youngest children about how much of their personal information is sucked from their smartphones, iPads, and computers without their knowledge, I am often met with a sense of “why bother, I don’t feel it, I don’t feel harmed or hurt”. Maybe we need to find ways to reinforce the understanding of the extent to which the digital and the real world are interconnected. People seem to treat the two as separate worlds.
Education is more relevant than regulation, which is what motivated me to write this article. I hope you enjoyed reading it.