September 30, 2024

AI & Election Interference: Local Experts Weigh In



Part 2: Social media in a democratic society


Interview with Dr. Giorgos Pallas

Chief Information Security Officer (CISO), Aristotle University of Thessaloniki

By Linda Manney

Dr. Giorgos Pallas is Chief Information Security Officer (CISO) at Aristotle University of Thessaloniki, where he also serves as Associate Professor in the Computer Science Department, teaching Advanced Computer Networking to postgraduate students.  As both a university professor and the father of school-age children, Dr. Pallas is especially concerned with the role of technology in the lives of young people.

LM: Do you think that the widespread use of social media platforms is undermining democratic values? 

GP: Before we talk about the role of social media in a democracy, I would first like to share my personal view on the number one danger undermining democracy, apart from all the technological dangers.  At present, a small number of owners control a large part of the traditional international media, with the ability to set an agenda and impose it country-wide in a way that no ordinary citizen or group of citizens could match. In my view, this is an instance of oligarchy.

And then we have the new generation of media owners, the owners of the social media platforms.  Social media platforms are owned by single individuals or by stockholders, and they are motivated by profit, so they are not independent platforms for engaging in political discourse. In theory, they could be used for social engagement and democratic conversation, but in practice they are platforms that push the agenda and the modus operandi of their owners.

In my view, this is the number one danger to democracy, the control of the majority of media by a very few select individuals.  Once we acknowledge this, we can then address the technological dangers to democracy.

LM: How is technology used on social media platforms in a way that undermines democracy? 

GP: Well, let’s consider a common media platform, like Facebook, whose algorithms promote a particular post once it has been identified as one with provocative content.

The basic structure of a social media platform is designed to promote provocative content.  For this reason, social media owners are not motivated to exercise restraint and present a more balanced view of a particular issue, or to circulate a number of perspectives on a particular issue. 

On the contrary, they prefer the more provocative posts, which are clearly more harmful to democratic discussion but which also engage a greater number of users and therefore generate more revenue for their enterprise.
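The ranking dynamic Dr. Pallas describes can be illustrated with a toy sketch. This is not any platform’s actual algorithm; the post data, field names, and weights below are all invented for illustration. The point is only that when a feed is sorted by predicted engagement, a provocative post that draws many comments and shares outranks a balanced one, regardless of accuracy.

```python
# Hypothetical sketch of engagement-driven feed ranking.
# All posts and weights are invented; no real platform's code is shown.

def engagement_score(post):
    """Score a post by predicted engagement, not by quality or accuracy."""
    return (post["comments"] * 3     # heated arguments drive reply counts
            + post["shares"] * 2     # provocative posts get reshared more
            + post["reactions"])     # any reaction counts toward the score

posts = [
    {"id": "balanced-analysis", "comments": 4,  "shares": 2,  "reactions": 30},
    {"id": "provocative-claim", "comments": 50, "shares": 40, "reactions": 90},
]

# The feed simply surfaces whatever maximizes engagement.
feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(post["id"], engagement_score(post))
# → provocative-claim 320
# → balanced-analysis 46
```

Nothing in the scoring function rewards balance or truthfulness; the provocative post wins simply because it generates more interactions, which is the structural incentive described above.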

LM: How are we going to counter this misinformation, given that so many people get their news on current events or their information about the world from social media? 

GP: The root of the problem is that a large number of people expect to know what’s happening in the world solely by interacting with their computers and engaging with unknown entities. 

These people never really know for sure with whom or with what they are interacting - are they humans, are they bots?  Detecting AI bots is a cat-and-mouse problem: the detection algorithms get better, and then a new generation of AI bots is created that is better at evading detection.

Of course, apart from the work done by AI bots, which spread disinformation automatically and on a large scale, disinformation can also be spread on social media platforms very easily by ordinary users.

For example, whenever someone writes a provocative text, that post will spread very quickly.  This is because the social media platform’s algorithm, by its very design, recognizes that the post will attract attention, so it spreads the post to a much wider audience than it would a post with a more balanced view. 

In this way, disinformation can also be spread by single individuals who post content which the algorithm judges to be provocative and therefore shows to many more people.

LM: How can this problem of disinformation be addressed?

GP: I think that people need to start experiencing the world outside of their computers, to engage more directly with their local communities, to organize at the level of their neighborhoods and get engaged in everyday politics in their area.  They need exposure to alternative views and opinions which are unmediated by algorithms that are controlled by a couple of media owners. 

Democracy is about people engaging personally in politics in their everyday lives, every day, not just once every four years, or whenever there is a national or local election.  If we aim to get all our information about our political lives from online sources, there is no solution that will save us. This is my ultimate opinion.

LM: The appeal of technology is that it can reach millions of people instantaneously.  So, how can we compete with its speed and efficiency, and get people to participate in their local communities?

GP: I would advocate the use of technology to facilitate community organization, in order to help people find each other in the public space.  

For example, where I live, a lot of people with children in the same local schools are members of the same Viber group, and we can stay in close contact and address problems that come up in our children’s schools.  We have also seen large scale meetings conducted via Zoom to link hundreds of thousands of people with common concerns in a number of different locations and time zones. 

But even with these innovative uses of contemporary technology, we should not fall into the trap of believing that we can experience or control all the subtleties of the real world via the digital world. 

LM: Apart from these innovative uses of technology, is there any way that social media platforms can coexist with the democratic process?

GP: Of course, with legislation, we can offer a number of protections to the citizens of a democracy.  For example, the EU enacted the Digital Services Act in 2022, which imposes restrictions on social media platforms. 

In the EU, the DSA requires platforms to routinely review posts and moderate content to limit the spread of disinformation.  There are mechanisms to detect and remove fake accounts and fake content, including videos and photos as well as text.

In addition, the EU is currently implementing a process of watermarking photos for purposes of identification.  Watermarks will be applied to manufactured photos and videos, so that people will know that what they see is fake.

There is also a mechanism for AI bot detection and removal, and even though it is not foolproof, it represents a solid effort, a start.  And the DSA applies to all major media platforms, like Facebook, X, YouTube, Instagram, and Snapchat, and also to major search engines like Google.

LM: Apart from the DSA, are there any other EU laws that require the content on social media platforms to be monitored?

GP: There are some additional laws that the EU has recently passed, such as the AI Act, the Artificial Intelligence Act, which imposes restrictions on how AI can be used and how it can be configured. It became effective in August of this year, 2024.  At this time, both the DSA and the AI Act are in effect in all EU nations.

There is one more law that I am aware of which is very important, the Act on Transparency and Targeting of Political Advertising, which was enacted in March of 2024 and should be fully operational by the end of 2025.  This law addresses a problem that has arisen on social media platforms in recent years: the ability to pay for content to be posted and seen by hundreds of thousands of viewers. 

This practice allows politically slanted content to reach a far wider audience than before, and it could pose severe risks for the entire democratic process, since the reach of content would be determined by economic factors, not by its quality or relevance.

So, the Act on Transparency and Targeting requires the platform to disclose whether a post was paid for by a particular individual.  Such a post is marked as an advertisement, paid for by a specific group or individual who is identified in the post.

LM: Apart from legislation, what else can be done?  Do you have any other ideas in this regard?

GP: I think that technology must be publicly regulated, not privately controlled.  By design and by definition, technology - the media platforms and the internet - conceals how it works behind the scenes.  For example, on Facebook, you see a lot of posts, some posted by your friends and some by people unknown to you, but you cannot know how these posts are selected.  So, I think that a Facebook-like platform that is publicly maintained and transparent would be a very positive development.

It’s an illusion to think that social media posts are selected at random or by some objective criterion. The fact is that an algorithm behind social media feeds manipulates content in such a way as to increase the platform’s profit.

If social media platforms remain unregulated, they will definitely move us away from democracy.  The real problem isn’t the technology itself; the challenge is to protect democracy with laws that define the appropriate use of technology. 

###