AI: Ally or Foe?

[Photo: artificial intelligence, geralt / pixabay.com]

Artificial intelligence (AI) is defined as ‘the ability of a system to correctly interpret external data, learn from such data, and use what it has learned to achieve specific goals and tasks through flexible adaptation’. It involves the development of algorithms and models that enable machines to perceive and reason about their environment and take appropriate actions. These algorithms draw on large volumes of data and advanced techniques such as machine learning, deep learning, natural language processing, and computer vision. Founded as an academic discipline in 1956, artificial intelligence is increasingly present in everyday life, and its benefits are undeniable, ranging from better healthcare to more efficient manufacturing, from safer and cleaner transport to cheaper and more sustainable energy. For companies, AI can support, for example, the development of a new generation of products and services and improve workplace safety, since robots can take over dangerous tasks.

Artificial intelligence, however, also brings negative aspects, such as potential threats to security, democracy, businesses, and jobs. Have we reached the point of fearing that artificial intelligence, and the tools with which it alters our reality, will decide our future? Some voices claim this is not out of the question. One example comes from the political sphere: the possibility of influencing the electorate, for instance through deepfakes created with artificial intelligence. University lecturer Dr. Flavia Durach, a specialist in the study of disinformation, explains how the mechanism works:

“There are several risks related to deepfakes. First of all, there is the exacerbation of the emotional component, of certain emotional dispositions within the electorate, which at a given moment may favor a certain candidate or political formation. Emotional manipulation plays an important role in disinformation because it can lead to irrational decisions, to the undermining of critical thinking, to buying into a certain type of message with a strong emotional charge, for example fear, or this feeling of being scandalized. In terms of election integrity, we can genuinely have situations where a candidate, a campaign team, or certain actors with a vested interest create deepfakes to discredit opposing candidates and sow doubt in voters’ minds. And here the biggest stake is the undecided voters, those who are perhaps subject to cross-pressures from their environment to tilt the vote one way or the other; they are probably the most susceptible. Deepfakes by nature have a high viral potential and are easy to watch; they usually have all the characteristics of audio-video content with strong virality, so they can spread easily.”

In an attempt to counter such practices, a group of 20 tech companies, including major developers of artificial intelligence software, recently signed an agreement committing them to combat election disinformation this year. Major signatories include OpenAI, Microsoft, and Meta. Finalized at the Munich Security Conference, the agreement brings together, alongside the AI companies, some of the most popular social networks. Meta, along with TikTok and X (formerly Twitter), is to ensure that harmful or fake content is removed from their platforms, while OpenAI, Microsoft, and the other AI companies will work to identify AI-generated images, videos, and audio and to inform users correctly. The measure agreed by most companies is to label AI-generated content, mainly through a watermark. It is a good start, believes Flavia Durach:

“However, we must take into account that we also have the experience of less sophisticated fake news during the COVID-19 pandemic, when such promises existed and content-labeling measures were indeed taken, but independent studies from think tanks and researchers unaffiliated with the digital platforms found a host of limitations to those measures, in the sense that a good part of the misinforming content in those contexts managed to slip past the moderation policies and detection measures undetected. Therefore, without knowing the technical details, I have a dose of skepticism regarding the effectiveness and efficiency of these measures in light of previous experience.”
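For illustration only: the simplest form of such labeling is a machine-readable tag embedded in a file's metadata, which is also the easiest to strip, echoing the limitations described above. The minimal Python sketch below, using the Pillow library, writes and checks a hypothetical "ai_generated" text chunk in a PNG file; the key name and value are assumptions for this example, not any signatory's actual scheme, and real deployments rely on more robust mechanisms such as signed provenance manifests or invisible watermarks.

```python
# Minimal sketch of metadata-based AI-content labeling (illustrative only).
# The "ai_generated" key is a hypothetical convention, not an industry standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Copy a PNG, adding a text chunk that marks it as AI-generated."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")
    image.save(dst_path, pnginfo=metadata)

def is_labeled_ai_generated(path: str) -> bool:
    """Return True if the PNG carries the hypothetical label."""
    return Image.open(path).info.get("ai_generated") == "true"

# Note: re-encoding or screenshotting the image silently drops the label,
# one reason independent studies found such measures easy to evade.
```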

Flavia Durach, specialist in the study of disinformation, believes that “in the absence of legislative measures, of regulations established at the national or supranational level, if we do not base our efforts on policies for developing artificial intelligence on ethical foundations and with the minimization of risks, we will not be able to do anything”.

Some important steps have already been taken in this regard at the EU level: the new Artificial Intelligence Act, approved this week in the plenary of the European Parliament, provides, among other things, an obligation for developers to disclose that sounds, images, and texts produced by AI are artificial. It will take some time, however, as the law becomes fully applicable 24 months after its entry into force, with its provisions phased in gradually.
