Artificial Intelligence: balancing efficiency and responsibility
A discussion of the effects of our increasing reliance on generative AI
Corina Cristea, 02.01.2026, 14:00
The swift evolution of robotics and artificial intelligence has fundamentally changed how we work, learn and interact with technology.
Robots are now a common sight in factories, hospitals, schools and even our own homes, contributing to greater efficiency and smoother daily routines. They handle repetitive, dangerous or high-precision tasks, significantly boosting productivity while reducing the risks faced by human workers.
Intelligent systems are no longer just tools executing pre-set commands: they are technologies capable of analyzing information and delivering complex solutions. A prime example is the rise of conversational assistants such as ChatGPT or Google's Gemini, which are increasingly used to find information, support education or even guide everyday decision-making.
In medicine, for instance, intelligent systems can analyze medical images to support diagnoses, while surgical robots perform minimally invasive procedures with incredible precision. Meanwhile, in education, AI-based platforms are personalizing learning, adapting content to meet the needs of every single student.
AI-based technologies are gaining ground rapidly, and their applications are becoming increasingly diverse. However, specialists warn that to truly benefit from them, we must first understand how they work.
The reality is, in a world dominated by talk of artificial intelligence, very few people actually grasp its true nature. And the consequences, experts say, can be dramatic.
Ana-Maria Stancu, CEO of Bucharest Robots, the founder of Robohub, and the first Romanian on the EU Robotics board, explains:
“That is the scary part. We are already seeing quite serious issues. For example, some individuals have taken their own lives. We also have a study showing people developing psychoses after conversations with these generative AIs. What you need to realize is that the sole mission of these generative AIs is to give you the most relevant answer based on your input. That means that if you are depressed, for example, and you write to ChatGPT: ‘I fought with my colleagues today, I’m very sad and they don’t understand me’, it will feed into that narrative. It won’t challenge you by saying, ‘Maybe you’re the problem’ or ‘Think about it, maybe you’re just having a bad day’. It won’t respond like that. It is not a coach – a coach can call you out when you’re going off track. The AI won’t. Most AIs are programmed to avoid using the word ‘suicide’. So if you write ‘I want to kill myself’, it will avoid that exact word, but it will still suggest phrases that head in that direction. The computer doesn’t understand human emotion. It’s not a person. It is purely a sequence of numbers and algorithms”.
Artificial intelligence fundamentally operates on advanced statistical and mathematical mechanisms. AI models are trained on massive volumes of text and data, which allows them to ‘learn’ the relationships between words, ideas and contexts. Ana-Maria Stancu explains that, while the responses are coherent and fluent, they are not the product of consciousness or any genuine understanding of the world, but simply of calculating probabilities. More specifically, the system estimates the most likely next word in a sentence, based on the user’s prompt and the patterns found in its training data. This sometimes leads to incomplete, inaccurate or contextually inappropriate responses.
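To make that mechanism concrete, here is a minimal sketch in Python of the same idea at toy scale: a ‘bigram’ model that counts which word follows which in a tiny corpus and turns those counts into probabilities. The corpus and function names here are invented for illustration; systems like ChatGPT use neural networks trained on incomparably more data, but the underlying principle of estimating the most likely next word is the same.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the massive text collections real models train on.
corpus = (
    "the robot cleans the floor . "
    "the robot assists the surgeon . "
    "the robot answers the student ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def next_word_probabilities(word):
    """Turn the raw follow-counts for `word` into a probability distribution."""
    counts = following[word]
    total = sum(counts.values())
    return {w: n / total for w, n in counts.items()}

# After "the", this corpus makes "robot" the most probable continuation
# (3 of 6 occurrences): a statistical pattern, not an act of understanding.
print(next_word_probabilities("the"))
# {'robot': 0.5, 'floor': 0.166..., 'surgeon': 0.166..., 'student': 0.166...}
print(following["the"].most_common(1)[0][0])  # -> 'robot'
```

Seen through this lens, a confident but false answer is simply a statistically plausible continuation that happens to be wrong, which is exactly the ‘hallucination’ Stancu describes next.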
“This is going to be a real problem. Why? Because those of us who are older have enough life experience to spot when a platform is rambling or making things up in a given field. The issue is that younger people, lacking that experience, can’t tell whether what ChatGPT writes is accurate or not. Teachers have observed this in practice: they told me that their students type near-identical prompts into ChatGPT and often get back exactly the same type of response, which they then submit. On top of that, they don’t grasp that ChatGPT can be wrong. By the way, in technical terms this kind of error is called a ‘hallucination’. We don’t say the system lies, because that would anthropomorphize it – make it seem human”.
Robots and AI systems are undoubtedly a crucial step toward efficiency and progress. Yet the challenge remains: how do we use them responsibly? Specialists warn that excessively automating thought and delegating personal judgment to technology can breed dependence and erode critical thinking. There are also potential social concerns, such as the displacement of the human workforce, as well as a key ethical dilemma: when an autonomous robot makes a wrong decision, who is morally and legally responsible – the programmer, the user or the system itself? (VP)