Role of AI in Psychological Warfare
How Cambridge Analytica and SCL Group Pioneered Data-Driven Psychological Warfare
Psychological warfare has been around as long as humans have waged war. In the 6th century B.C., Persians drew images of cats on their shields, knowing that Egyptians worshipped the cat goddess Bastet. In the Middle Ages, Genghis Khan used terror as a psychological weapon, decapitating enemies and parading their heads to spread fear.
For psychological weapons, the narrative is the payload: a rumor or piece of disinformation engineered to sow confusion or shift behavior. In World War I and II, information operations were undervalued because the decisive advantages lay in missiles, tanks, atomic bombs, and submarines.
As a result, psychological warfare rarely went beyond paper leaflets dropped from airplanes as mass propaganda. That all changed with access to the largest distribution channel in human history: the Internet. Anyone with a mobile device can now be reached.
Even so, making a message go viral on the Internet is difficult: a broad strategy costs a lot of money and is largely ineffective. This is where AI is powerful. It learns which messages stick with which people. AI-led targeting yields highly efficient messaging, which in turn yields behavioral shifts. A perfect weapon for psychological warfare.
Psychological Warfare In The Modern Age
Cambridge Analytica and the SCL Group have been the pioneers and prominent practitioners of psychological warfare using AI. One of their first use cases was in Trinidad and Tobago.
Trinidad and Tobago was an ideal pilot: an isolated island nation with a population divided between Afro-Caribbean and Indo-Caribbean communities. Cambridge Analytica worked for the Indo-Caribbean side during the elections, and their strategy was simple: spread apathy among young Afro-Caribbean voters.
They designed a resistance-themed, ostensibly non-political campaign called “Do So!”. It meant “I am not going to vote” and served as a salute of resistance against politics and voting. The campaign was targeted at a young audience, and it caused a massive shift in behavior.
Turnout among 18–35-year-old Afro-Caribbeans was 40% lower, and that was all the Indo-Caribbean side needed to clinch victory. You can see a short snippet from The Great Hack describing the movement on YouTube. It is honestly quite eye-opening, so I highly recommend the three-minute watch.
The company went on to do further work on Brexit and the MAGA campaign in the U.S.
The Methodology
Cambridge Analytica combined psychology and sociology, microtargeting people as “personalities.” This approach works because political ideology is deeply tied to personality.
Initially, the data available on the mass population was superficial. For instance, they tried to predict voter behavior from a person’s address or family structure, but it didn’t work. The best predictor of voter behavior turned out to be personality.
Hence, Cambridge Analytica looked to social media. To collect personality data on the mass population, Facebook proved to be the best instrument.
Dr. Aleksandr Kogan at Cambridge University worked on an app to collect the maximum amount of data for each person.
“Facebook knows more about you than any other person in your life, even your wife” — Kogan
The app requested special permissions to collect data not only on the person using it but also on their entire friend network. This allowed Cambridge Analytica to gather data on 50–60 million people in a 2–3 month period. The data covered activity on Facebook: likes, profile information, and private messages.
Social media data turns out to be the best predictor of who you are.
In fact, a 2015 study by Youyou, Kosinski, and Stillwell showed that using Facebook likes, a computer model reigned supreme in predicting human behavior. With ten likes, the model predicted a person’s behavior more accurately than one of their co-workers. With 150 likes, better than a family member. And with 300 likes, the model knew the person better than their own spouse. — Christopher Wylie, Mindf*ck
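To make the likes-to-traits idea concrete, here is a minimal sketch of how such a model could work in principle. Everything in it is hypothetical: the like names, the trait scores, and the toy averaging scheme (each like is weighted by the average trait score of the training users who clicked it) are illustrative assumptions, not the study's or Cambridge Analytica's actual method, which used far richer models and data.

```python
# A minimal, hypothetical sketch of likes-based trait prediction.
# Each like gets a weight: the average trait score of training users
# who clicked it. A new user's trait estimate is the mean weight of
# their observed likes.

def fit_like_weights(training_users):
    """training_users: list of (set_of_likes, trait_score) pairs."""
    totals, counts = {}, {}
    for likes, score in training_users:
        for like in likes:
            totals[like] = totals.get(like, 0.0) + score
            counts[like] = counts.get(like, 0) + 1
    return {like: totals[like] / counts[like] for like in totals}

def predict_trait(weights, likes):
    """Average the learned weights over the likes we have data for."""
    known = [weights[like] for like in likes if like in weights]
    return sum(known) / len(known) if known else 0.0

# Hypothetical training data: trait = extraversion on a 0-1 scale.
train = [
    ({"skydiving", "parties", "karaoke"}, 0.9),
    ({"parties", "travel"}, 0.8),
    ({"chess", "reading"}, 0.2),
    ({"reading", "museums", "chess"}, 0.1),
]
weights = fit_like_weights(train)

# Each additional observed like folds more evidence into the estimate.
print(round(predict_trait(weights, {"parties"}), 2))           # → 0.85
print(round(predict_trait(weights, {"parties", "chess"}), 2))  # → 0.5
```

The sketch also hints at why more likes help: every extra like is another noisy measurement of the same underlying trait, so the averaged estimate stabilizes as the count grows, which is consistent with the study's finding that accuracy rises from co-worker level to spouse level as likes accumulate.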
With nearly complete profiles of individuals across a massive population, all that remained was to create content that changed a person’s perceptions (through reinforcement, countering, etc.) and thereby shifted voter behavior.
The content (tone, information, length, etc.) and the consumption medium (blog, newspaper, video, etc.) were also suggested through data science and AI. Websites, blogs, and videos were then created by professional creators, designers, and videographers.
Final Thoughts
The methodologies of Cambridge Analytica have undoubtedly contributed to a fragmented society. There is deep polarization in the countries where AI-driven microtargeting has been used.
The dangers of AI have been demonstrated, but how can we prevent its misuse? Investment in data analytics and AI is at unprecedented levels. Masayoshi Son of SoftBank has built a $100bn fund that invests mainly in AI.
Cambridge Analytica faced severe scrutiny, but what really separates GAFA from Cambridge Analytica when they wield the same technologies?
And how can we keep terrorist groups such as ISIS, Boko Haram, and AQAP from acquiring the same capabilities?
There are many open questions around this phenomenon. It is a positive development that Google and Apple have started to phase out third-party cookies. On the other hand, there is no question that large corporations will come to know even more about users as sensors (collecting vitals), computer vision (collecting behavioral data), and smarter algorithms proliferate.
As a consumer, it is crucial to be aware of the power and momentum of such technology.