The Future Starts Today: The power of algorithms
Artificial intelligence is an extremely influential tool that shapes everyday life.
Corina Cristea, 06.02.2026, 14:00
It analyzes data, identifies patterns and delivers results in a timeframe impossible for the human mind. In medicine, it can help with the early detection of diseases. In economics, it can anticipate risks. In education, it can personalize learning. We are talking about artificial intelligence – an extremely influential tool that shapes everyday life. In many areas, decisions are no longer made solely by humans: which route to take in traffic or whether a loan is approved are just two examples. Behind them are algorithms, which are, essentially, sets of rules created by humans to solve problems. As tools, they are extremely valuable. Problems arise, however, when algorithms are no longer just consultants or assistants, but end up making the decisions themselves.
Adrian-Victor Vevera, director of the National Institute for Research and Development in Informatics in Bucharest (ICI), gives us details: “In Ukraine, First Person View drones have appeared, guided by an operator who sees through the drone’s camera, and some of them are already testing an artificial intelligence mode that can select targets. The question has been raised whether this is the future. After all, letting a non-human entity make decisions about a human life is something we had not thought about until recently. There is a very long way from artificial intelligence in the public sphere as a support, an assistant, a tool by means of which we can become more productive, cheaper to operate and faster, to the point where you leave the decision about a human life in the “arms” of artificial intelligence. It means leaving the final decision in the hands of a non-human entity. Yes, the medical field is clearly one of the areas where AI is most successful, and where it will establish itself the fastest. It is much easier for an application that can scan and compare millions of images than for a human who learns this over many years and acquires the expertise to give a diagnosis only after a long, long time.”
An algorithm that selects candidates for a job can discriminate unintentionally. An automated risk assessment system can treat people in similar situations differently. How much control should be given to artificial intelligence? Where does utility stop and risk begin? Artificial intelligence should remain a supportive, controllable tool – it should analyze, recommend or warn, but the final decision, especially in sensitive areas such as health or security, should remain with humans, experts say.
Here is Adrian-Victor Vevera back at the microphone: “We should treat artificial intelligence neither as an absolute gift that will bring only well-being, nor as a potential enemy to be condemned from the start. I think that, first and foremost, when we speak about the use of artificial intelligence, we should look deep into our souls and think about the rules that should be imposed when developing and using it. This means the ethical way to use artificial intelligence, the limits you set for it, the protective measures. There should always be a button to stop something that goes off the path you thought it should follow. Remember that, being a tool, artificial intelligence can be used by people who want to commit acts of terrorism, or who want to manipulate the mood of a nation. It is a tool. Think of a knife: you can use it in the kitchen to cook, or on a battlefield to kill someone. That is why I was referring to a stop button, to means of protection, to security measures around the way artificial intelligence is used and developed, and, of course, to the ethical part, which you clearly cannot impose on everyone who has the access, the means and the knowledge to develop artificial intelligence applications. But, based on a code of ethics, you can create a framework within which you can more easily track the level at which it is developed and, of course, the results.”
Another aspect is responsibility: more precisely, who answers for a decision made by an algorithm when that decision turns out to be wrong? The programmer? The company? The user? Or no one? Unlike humans, algorithms cannot be held morally accountable. They have no conscience, intention or empathy; they simply do what they are asked to do. And the risks increase as the use of artificial intelligence becomes more widespread. According to studies conducted in 2024, 4% of the world’s population was using ChatGPT two years after its launch. As for Romania, almost half of Romanians use artificial intelligence in their daily activities, according to a study conducted in 2025 by Reveal Marketing Research, a full-service market research company. (LS)