Disinformation in the AI era
Disinformation has become a global phenomenon, with the potential to influence elections, polarize societies and put a strain on international relations.
Corina Cristea, 05.12.2025, 14:00
Information travels at dizzying speeds. Anyone can send a message that instantly reaches many people, social networks share and repeat ideas at a pace that was hard to imagine until recently, and all of this has led to a surge in disinformation. Disinformation has acquired a new magnitude: it is now a global phenomenon, most often financed and coordinated, with the potential to influence elections, polarize societies and put a strain on international relations. Those who study the phenomenon describe it as a war of the mind and of perception, an invisible war. University professor Alina Bârgăoanu, a member of the Advisory Board of the European Digital Media Observatory and an expert in combating disinformation, explains:
“What I would like to point out from the very beginning is that the tools used to wage this information war are increasingly diverse: disinformation, propaganda, hostile influence campaigns, cyberattacks, algorithmic warfare, data collection for microtargeting and even hyper-personalization. This is not just about disinformation, but about a much more complex arsenal. Secondly, people call this information war the invisible war, and this description is partially correct. It is a deceptive form of confrontation and has an invisible character, but, at the same time, it is also very visible. When describing the narratives hostile to the West, I say they are almost ambient. By this I mean that they are nowhere and everywhere at the same time. As soon as a conversation starts, there is bound to be someone among our friends or colleagues who has been exposed to such narratives. At the same time, you cannot put your finger on them: where did I see them? Where did I hear them? So, it is a war that is both visible and invisible. I think that the targets of political and informational warfare have started to be predominantly the EU and NATO, with a very precise focus, according to my observations, on the EU and especially on certain of its member states. So, it is a war that is both invisible and very visible, and it is difficult to bring to the surface. At the same time, when you start studying it, you are impressed by how sophisticated the techniques are, how well the algorithms are trained, and how much artificial intelligence is starting to be used.”
While mass manipulation once required huge teams and vast amounts of time and resources, artificial intelligence has profoundly changed the dynamics of information warfare: a single person can now generate thousands of persuasive messages in just a few hours. And with the advent of large language models (LLMs), the world has changed again. Alina Bârgăoanu:
“All structural problems can be amplified and accelerated to the maximum with these new artificial intelligence technologies. First of all, artificial intelligence allows content to be generated at much higher speed, with much more flexibility and with a great capacity to adapt to local audiences. Stories and conspiracies no longer have to be produced by people; they can be generated using these methods. And I don’t just mean text, but text, audio, video and especially the combination of the three. That would be one element. The second element, which in my opinion is much more spectacular, is what some articles have begun to call LLM grooming. This means that the online space is flooded with misleading ideas, with things that did not happen, with untruths, with a multitude of different and very inflammatory opinions. The aim is not to catch the user’s attention, but to pollute large language models, so that when we ask ChatGPT, for example, why Ukraine was invaded by the Russian Federation, the chatbot gives us an answer that reflects these injected narratives. When we did a Google search, we might come across links that reproduced the Kremlin’s official rhetoric, but we could make a choice: we could read and compare different links. Now, however, the answer that ChatGPT provides is the prevailing one, and our ability to check it is much lower.”
A study by the Institute for Strategic Dialogue, a prompt-writing exercise with questions about Ukraine conducted on the largest language models (ChatGPT, Grok, Gemini and DeepSeek), showed that, across all four, 18% of the answers were aligned with the official rhetoric of the Russian Federation, narratives that had been assimilated by the respective models, says disinformation expert Alina Bârgăoanu.