Part 1: Misuses of AI and Election Interference
Interview with Vivian Pavlopoulou, cognitive linguist and AI specialist
by Linda Manney
The potential for social media platforms to spread disinformation has been well documented; however, the reach, speed, and sheer ugliness of the current social media blitz campaigns that aim to denigrate Democratic Party presidential candidate Kamala Harris are unprecedented (see https://www.nytimes.com/2024/07/30/technology/kamala-harris-toxic-internet-politics.html?searchResultPosition=6).
In this interview, Vivian Pavlopoulou, cognitive linguist and AI specialist, describes elements of Artificial Intelligence (AI) that have been misused to create disinformation and disparage public figures online.
Vivian is currently working on a PhD in Cognitive Linguistics at Aristotle University of Thessaloniki, where she is an active member of the Cognitive Linguistics Research Group. During Spring 2024, she delivered a series of lectures on metaphor identification and interpretation in natural language processing in both the English Department and the Information Technology Department at Aristotle University of Thessaloniki. She has also presented her research on metaphor and natural language processing at a number of international conferences. Apart from her academic work, Vivian is the founder and CEO of a linguistics consulting firm, Council of Owls, whose mission is to provide linguistic services for AI training.
LM: Vivian, perhaps you could begin by defining key terms for us. For starters, what exactly is a bot?
VP: “Bot” is short for software robot, and bots have been around for decades. A bot is not intrinsically good or bad; it is a computer algorithm, that is to say, a set of steps for a computer to follow in order to complete a procedure. How we use bots is what makes them benign or malignant. The benign category includes bots that automatically aggregate content from different sources, such as news feeds, and automatic responders that help with customer service.
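To make the idea of a benign bot concrete, here is a minimal sketch of a content-aggregation bot in Python. It assumes the third-party feedparser library, and the feed URLs are hypothetical placeholders, not real sources.

```python
# A minimal sketch of a benign content-aggregation bot.
# Assumes the third-party "feedparser" library (pip install feedparser);
# the feed URLs below are illustrative placeholders.
import feedparser

FEEDS = [
    "https://example.com/news/rss",   # hypothetical news feed
    "https://example.org/tech/rss",   # hypothetical tech feed
]

def aggregate(feeds):
    """Collect (title, link) pairs from each RSS/Atom feed."""
    items = []
    for url in feeds:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            items.append((entry.get("title", ""), entry.get("link", "")))
    return items

if __name__ == "__main__":
    for title, link in aggregate(FEEDS):
        print(f"{title} -> {link}")
```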
A prime example of a bot is a chatbot. If you go to a website and a pop-up window, usually in the bottom right corner, asks you a question like “How can I help you today?”, that is a chatbot. Chatbots are computer algorithms designed to hold a conversation with a human being, usually to answer questions or guide you to the service you need.
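At its simplest, a customer-service chatbot of this kind can be a keyword matcher. The sketch below is a toy illustration of the pattern, not the code behind any real product; the rules and the placeholder URL are invented.

```python
# A toy rule-based chatbot: a tiny illustration of the pop-up
# "How can I help you today?" pattern, not any real product's code.
RULES = {
    "hours":    "We are open 9:00-17:00, Monday through Friday.",
    "refund":   "Refunds are handled at example.com/refunds.",  # placeholder URL
    "shipping": "Standard shipping takes 3-5 business days.",
}

def reply(message: str) -> str:
    """Return the first canned answer whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't understand. Could you rephrase that?"

if __name__ == "__main__":
    print("Bot: How can I help you today?")
    while True:
        user = input("You: ")
        if user.strip().lower() in {"quit", "bye"}:
            print("Bot: Goodbye!")
            break
        print("Bot:", reply(user))
```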
If we’re talking about social media, another type of bot is a social bot, which is a computer algorithm that automatically produces content that emulates human language behavior. A social bot can interact effectively with humans on social media, and it mimics the behavior of humans quite skillfully. In some cases, social bots can influence or alter the behavior of humans as well.
Of these two types, social bots are more worrisome, and we should be aware of their possible behaviors and intended uses.
LM: How have social bots been used to create disinformation online? How do they influence public opinion or alter human behavior?
VP: On social media, people or organizations often deploy social bots that pose as real people, with accounts that write messages. And if we put a lot of coordinated bots together, it’s possible to create a buzz around a certain person or issue, so as to push a certain point of view or a certain agenda. There are also people who pay for bots to push their products or ideology, or to promote their posts. For example, someone could falsely generate one million likes of a particular post on social media, so as to create a buzz. The more a post circulates, the greater the chance for you and me and anyone else to see it and believe it.
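The amplification mechanism is easy to see in a schematic simulation. The ranking formula below is purely illustrative, not any platform’s actual algorithm; it simply shows how inflated engagement lifts a post’s visibility, which in turn attracts real engagement.

```python
# A schematic simulation of engagement-based amplification.
# The ranking formula is purely illustrative -- it is NOT any
# platform's actual algorithm -- but it shows the mechanism:
# inflated likes and shares push a post up the feed, so more
# real users see it, engage, and compound the effect.
import math

def rank_score(likes: int, shares: int, age_hours: float) -> float:
    """Toy score: engagement grows the score, age decays it."""
    engagement = likes + 3 * shares      # shares weighted more heavily
    return math.log1p(engagement) / (1 + age_hours)

organic = rank_score(likes=120,       shares=15, age_hours=6)
botted  = rank_score(likes=1_000_000, shares=15, age_hours=6)

print(f"organic post score: {organic:.3f}")
print(f"bot-inflated score: {botted:.3f}")  # far higher -> far more visible
```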
Some bots are specifically designed to create harm. These bots mislead, exploit, and manipulate social media discourse through rumors, malware, spam, slander, or just noise, and they can damage society at several levels.
In the case of a national election campaign, a bot can artificially inflate support for a particular candidate. Typically, such candidates live and feed on human fear and terror, and a bot can inflate that kind of emotional response. A bot can also disparage the candidate’s opponent by creating and spreading disinformation about the opponent. So the activity of malicious bots can damage the foundation of democracy by influencing the outcome of elections. And of course, this has already been observed, in both the US and Europe.
In addition to their potential to undermine democracy, social bots can harm our society in more subtle ways. For example, if we are convinced by a bot to reveal our personal information, we could also be the victim of cybercrime.
LM: How does the design of a bot interface with the emotional and cognitive makeup of an individual or group of individuals?
VP: Social bots posing as human users on social media platforms can mimic human behavior quite effectively. Social bots are elusive, so they can easily infiltrate a population of unaware humans and manipulate their perception of reality, with results that we cannot really predict.
A crucial fact about bots is that they are designed to tap into the emotions of the followers of a particular group. They can spark an intensely emotional discussion in a moment, and you, the follower, won’t even know why.
In this regard, emotions on social media are contagious. If you put this feature of uncontrolled emotional response into political discourse, which is almost always emotional, the result is immense and unpredictable. And the more intense the emotion, the more contagious it becomes.
Bots can also outperform human behavior in certain respects. For example, a social bot can be programmed to create a post at any moment, on any day, whereas a human, like a journalist, needs to wake up, verify the information and then post it. So the bot can easily beat the human with regard to speed of posting.
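As a rough illustration of that speed advantage, here is a minimal sketch of a scheduled poster using Python’s standard sched module. The post_to_platform function is a hypothetical placeholder; a real bot would call a social media API at that point.

```python
# A minimal sketch of why bots beat humans on posting speed:
# a scheduled poster that fires at any hour, with no human in the loop.
# "post_to_platform" is a hypothetical placeholder, not a real API.
import sched
import time

def post_to_platform(text: str) -> None:
    # Placeholder: a real bot would call a social media API here.
    print(f"[{time.strftime('%H:%M:%S')}] posted: {text}")

scheduler = sched.scheduler(time.time, time.sleep)

# Queue posts at fixed offsets (seconds) -- 3 a.m. is no obstacle for a bot.
for delay, message in [(1, "Breaking: ..."), (2, "Update: ..."), (3, "More: ...")]:
    scheduler.enter(delay, priority=1, action=post_to_platform, argument=(message,))

scheduler.run()  # fires each post on schedule, around the clock
```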
And bots can tag or mention influential figures in the hope that these tags will help spread the content to thousands of followers, whereas a journalist or researcher who verifies their information has a very limited following by comparison. So it’s up to the people who read that journalist or researcher to repost the information, and bots can definitely outperform humans in this regard.
LM: Given the speed and efficiency of bots, can anything be done to counter the flow of disinformation online, especially during national election campaigns?
VP: We as individuals cannot do much except try to verify information. Computer companies have created bot-detection software, and it works, up to a certain point. But in recent years, bots have become so sophisticated that their detection has become more and more difficult, and the boundary between human-like behavior and bot-like behavior is now even fuzzier.
For example, social bots can search the web and scan the media for information to use in their profiles, and they can post collected material at a predetermined time. They emulate how and when a real person produces and consumes content, including patterns of daily activity.
There are also bots that tamper with the identity of legitimate people. So we have identity theft, and we have fake accounts that contain elements of real user accounts: their personal information, pictures, and links. The more advanced bots can clone the behavior of legitimate users, interact with their friends, and post content with similar patterns, based on the actual user’s account, and then it becomes much harder to distinguish a bot from a human.
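Detection systems look for exactly the kinds of regularities Vivian describes. The toy heuristic below is a deliberately simplified sketch in the spirit of, but far simpler than, real detectors such as Botometer; the features and thresholds are invented for illustration.

```python
# A toy bot-detection heuristic, in the spirit of (but far simpler than)
# real detectors such as Botometer. Features and thresholds here are
# invented for illustration, not taken from any deployed system.
import statistics

def posting_regularity(post_times: list[float]) -> float:
    """Std. dev. of gaps between posts (seconds).
    Machine-scheduled accounts tend to post at suspiciously even intervals."""
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    return statistics.pstdev(gaps) if len(gaps) > 1 else float("inf")

def looks_like_bot(post_times: list[float], followers: int, following: int) -> bool:
    regular = posting_regularity(post_times) < 5.0             # near-clockwork posting
    skewed = following > 0 and followers / following < 0.01    # follows many, followed by few
    return regular or skewed

# An account that posts exactly every 3600 s, with 12 followers
# and 5000 followed accounts, trips both signals:
times = [i * 3600.0 for i in range(24)]
print(looks_like_bot(times, followers=12, following=5000))  # True
```

In practice, real detectors combine many such behavioral features with machine learning, which is part of why the boundary between human-like and bot-like behavior keeps getting fuzzier.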
With regard to fake news or false accounts, we can protect ourselves from malignant bots by always verifying information, and always checking where our information comes from. (We should do this for all news, not just the items we suspect are fake news). However, if we’re talking about influencing people’s opinions and behavior with bots, then it’s really difficult to protect human users from the effects of malicious bots, because bots are designed to tap into human psychology, into people’s needs, wants and desires.
In short, we need to learn how to navigate and react to the online environment, as bots are a part of that environment. And of course, in the case of online discussions during national election campaigns, it is especially important to be proactive: we can verify the information we see, and we can help others understand the ubiquitous presence of bots. Most important, we can make sure we always exercise our right to vote.