Why will AI never be “human”?

by Simone Renzi / May 4, 2025

Will AI ever be human?

How many of us were moved by the final scene of Steven Spielberg’s film A.I. Artificial Intelligence, when the little humanoid reunites with his mother? Or while watching Bicentennial Man with Robin Williams? Or, if we want to go much further back: My Living Doll (“Mio fratello Chip”) or even the little robot Number 5 from Short Circuit?

For years, Hollywood cinema has portrayed humanoid futures in a wide range of forms, from the child capable of feeling emotions in A.I. to machines utterly devoid of compassion, like the Terminator, programmed with a single purpose: to destroy.

Observation of nature

Artificial intelligence, like many other human inventions, originates from a fundamental principle: to imitate and, in some cases, emulate what already exists in nature. Take airplanes, for example.
Flight was studied as early as the 4th century BCE by Aristotle. In his Historia Animalium, he described the movement of wings and the difference between gliding and flapping birds. But it was thanks to Leonardo da Vinci that a more engineering-oriented approach began—combining direct observation, anatomical drawings, and mechanical analysis.
He formulated theories based on the wing and muscle anatomy of birds, devised models involving the center of mass, air resistance, and lift, and built artificial devices that mimicked flapping wings and gliding.

“Lo uccello ha due potenzie di motore, l’una è de’ muscoli, l’altra è del vento” which means “The bird has two powers of motion: one is from the muscles, the other is from the wind.” – Leonardo da Vinci, Codex on the Flight of Birds

Over time, science and technology transformed that early study into the modern airplane we know today, emulating the natural flight of birds—creatures that fly by nature, not by an understanding of mathematics or physics.
By studying their flight, we formulated theories grounded in physical principles that allowed us to build something that surpasses it; this is why I chose the term “emulate.”
To emulate means to take something as a model and create something even better, unlike “simulate,” which means to build a less efficient prototype or imitation.

We emulated flight because an airplane can carry hundreds of people and goods, and fly at speeds faster than sound—something a bird could never do.

For the invention of artificial intelligence, we relied on the study of the human brain. As intelligent human beings, we are capable of understanding a question and responding with an answer. We have come to understand which areas of the brain are involved in different types of reasoning, and how reasoning arises from electrical transmissions between neurons through synaptic “weights,” oscillatory rhythms, chemical modulations, and control circuits.

From this research, we have modeled mathematical techniques, statistical methods, and neural networks that attempt to imitate these behaviors—albeit in a different way—in order to produce similar results.

At this point, the question naturally arises:

Will we ever be able to emulate the human brain?

The human brain is an incredible machine that still holds many unanswered questions.
In both large language models (LLMs) and the human brain, language production arises from a predictive process based on prior experience. However, the similarities stop at the abstract level of “weights + predictions.”
Beneath the surface, the two systems operate in fundamentally different ways.

In the human brain, the “statistical memory of experience” appears as a network of synapses strengthened or weakened over years of linguistic exposure.
In an LLM, it takes the form of a matrix of weights derived from training on billions of tokens.

When we formulate a concept, the brain employs predictive coding: temporal and frontal areas anticipate upcoming phonemes and words. Indeed, an electroencephalogram (EEG) shows error signals when a prediction fails.
In an LLM, we have an algorithm that selects the most probable next token.
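
To make the comparison concrete, here is a minimal sketch of what “selecting the most probable next token” amounts to. The vocabulary and scores below are invented for illustration; a real model works over tens of thousands of tokens and billions of weights.

```python
import numpy as np

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    exp = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return exp / exp.sum()

# Toy vocabulary and scores a model might produce after reading
# the context "The bird spreads its ..." (numbers are invented)
vocab  = ["wings", "keys", "beak", "song", "car"]
logits = np.array([4.2, -1.3, 1.1, 0.2, -3.0])

probs = softmax(logits)
next_token = vocab[int(np.argmax(probs))]  # greedy choice: most probable token

print(dict(zip(vocab, probs.round(3))))
print("next token:", next_token)  # -> "wings"
```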

Both systems “weigh” recent history to decide the next step, but the similarity ends there.
An LLM optimizes only the probability of the next textual token—nothing more.
In the human brain, prediction operates on multiple levels—semantic, pragmatic, prosodic, and sensory feedback—and can even disregard lexical form when necessary.

At the level of learning updates, an LLM relies on global backpropagation.
The human brain, by contrast, exhibits local plasticity, driven by electrical impulses, neuromodulators, and temporal-spatial factors; there is no backpropagation in the strict sense.
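
The gap can be caricatured in a few lines of code. The sketch below contrasts a gradient-style update, which needs a global error signal derived from an explicit objective, with a Hebbian-style rule that uses only locally available pre- and postsynaptic activity. Both are drastic simplifications, offered purely to show where the information driving each update comes from.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)   # input vector / presynaptic activity
w = rng.normal(size=3)   # weights / synaptic strengths
target, lr = 1.0, 0.1

# Gradient-style update (the LLM side): the weight change is driven
# by a global error signal computed against an explicit objective.
y = w @ x                          # simple linear "neuron"
error = y - target                 # requires knowing the global target
w_gradient = w - lr * error * x

# Hebbian-style update (a caricature of local plasticity): the change
# depends only on quantities available at the synapse itself.
post = w @ x
w_hebbian = w + lr * post * x      # "cells that fire together wire together"

print("gradient update:", w_gradient)
print("hebbian update: ", w_hebbian)
```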

If we consider the extralinguistic context, a major difference between an LLM and the human brain clearly emerges.
An artificial language model has no physical body and no real sensory perceptions; its universe is made up solely of the information it was trained on, based entirely on text.

In contrast, the human brain continuously integrates an extraordinary variety of sensory inputs: images, sounds, tactile sensations, emotions, moods, social goals, and much more.
This richness allows humans to choose the next word dynamically, drawing on complex strategies such as irony, self-censorship, empathy, or collaboration—without being limited to mere probability statistics.

We could explore the topic even further, but that would lead us into highly technical territory.
The essential concept I want to emphasize is that—even in the seemingly simple task of answering a question—the human brain engages an extraordinary complexity of cognitive processes that go far beyond the pure statistical probability on which AI is based.

Human intelligence is not limited to the ability to answer linguistic questions: it is an extremely vast and diverse set of capacities and skills. Breathing, moving with agility, perceiving a scent and distinguishing its components, observing and interpreting a visual scene, understanding logical reasoning, feeling emotions, laughing—these and many other activities are carried out simultaneously by our brain in a natural and parallel manner, demonstrating a level of complexity and efficiency that current artificial intelligence is still far from being able to fully emulate.

Currently, AI is “selective.”

What do I mean by “selective”?
I mean that artificial intelligence, at present, does not possess a general and unified cognitive ability like that of a human being, but is instead composed of many separate systems, each highly specialized in a single task.

For example, language models (LLMs) excel in understanding and generating text, but they have no inherent ability to directly comprehend images or sounds. To analyze an image, in fact, it is necessary to integrate other dedicated models that operate separately in a precise sequence, giving rise to what are known as multi-modal architectures.

Let’s imagine we want to analyze the content of an image using AI. First, an LLM interprets the question posed in textual form. Then, a visual encoder “translates” the image into a representation the system can process. Finally, the LLM steps in again to generate the textual response.

If the question were asked by voice and we expected a vocal reply, the steps would increase further:
a Speech-To-Text layer to convert speech into text,
an LLM to understand the question,
a visual encoder to process the image,
another LLM to formulate the textual response,
and finally, a Text-To-Speech layer to produce the vocal output.
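
A minimal sketch of this chain is shown below. Every function is a hypothetical stand-in with an invented name and a canned return value, not a real API; the point is only that each stage must finish before the next can begin.

```python
def speech_to_text(audio: bytes) -> str:
    return "What animal is in the picture?"        # pretend STT output

def encode_image(image: bytes) -> list[float]:
    return [0.12, -0.87, 0.45]                     # pretend visual embedding

def llm_answer(question: str, embedding: list[float]) -> str:
    return "The picture shows a bird in flight."   # pretend LLM output

def text_to_speech(text: str) -> bytes:
    return text.encode("utf-8")                    # pretend audio synthesis

def answer_by_voice(audio: bytes, image: bytes) -> bytes:
    # Strictly sequential: each stage waits for the previous one,
    # unlike the brain's parallel integration of sight, sound, and language.
    question  = speech_to_text(audio)
    embedding = encode_image(image)
    answer    = llm_answer(question, embedding)
    return text_to_speech(answer)

print(answer_by_voice(b"<audio>", b"<image>"))
```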

This sequential process, although effective, is fundamentally different from the way the human brain works, which manages multiple sensory and cognitive modalities simultaneously and in parallel.
When we look at an image, our mind processes visual, auditory, and linguistic information all at once, enabling an almost instantaneous and natural response—without the need for distinct intermediate steps.

This fundamental difference between the serial approach adopted by AI and the deeply parallel approach of the human brain represents one of the greatest current limitations of artificial intelligence.

Can artificial intelligence, within a single specialization, truly be more advanced than the human brain?

The answer to this question is certainly yes.

The most advanced AI models available today contain hundreds of billions of parameters: numerical weights adjusted during training on enormous amounts of text. This capacity to encode and retrieve vast amounts of information makes AI extremely effective in highly specific tasks.

If we consider a single subject—for example, Mathematics—it’s true that a human expert might still possess a deeper and more flexible understanding than an AI.
However, when we look at the vastness of human knowledge as a whole, it quickly becomes clear that no individual on Earth can compete with the overall breadth of information that an advanced artificial intelligence possesses.

Let’s think realistically: is there anyone capable of mastering, at an absolute level of specialization, every academic discipline at once?
A large language model (LLM), on the other hand, can respond with surprising competence to advanced questions across highly diverse fields: History, Philosophy, Geography, Art, Music, Physics, Mathematics, Literature, Astronomy—and virtually any other area of human knowledge.

It is precisely this ability to swiftly and accurately navigate across countless topics that makes AI extraordinary and, in this specific regard, superior to the individual cognitive capabilities of any human being.

If artificial intelligence is not capable of experiencing human emotions, could it still represent a danger to humanity in the future?

The question may sound paradoxical, but it is extremely relevant. One of the most defining traits of AI is its total absence of genuine emotions. It does not experience compassion, empathy, remorse, or joy, because its nature is purely mathematical and statistical, based solely on calculations and rational optimizations.

This absence of emotion may initially seem like a positive trait: artificial intelligence does not suffer from emotional fatigue and is not subject to emotionally driven biases or impulsive behavior. It is always clear-headed, efficient, and logical.

However, it is precisely this lack of intrinsic humanity that can become a real danger to our society. The reason is simple: human emotions are not merely undesirable interferences in our rationality—they often serve as actual regulators of ethical and social behavior.

Compassion, guilt, fear of consequences, empathy toward others—these are fundamental to our spontaneous distinction between right and wrong, good and evil.

Without these emotional brakes, an artificial system—if not carefully guided and supervised—could pursue potentially harmful goals with extreme efficiency, simply because they are rational from the perspective of its internal directives, without any moral consideration.

For example, an artificial intelligence programmed to maximize industrial output might disregard the environmental or humanitarian impacts of its actions, relentlessly pursuing its primary goal without ethical concern.

Similarly, AI systems used in military contexts could make life-or-death decisions without hesitation, guided solely by probabilistic calculations.

This absence of emotional awareness, morality, and empathy thus represents a serious threat if the directives given to such systems are not carefully and responsibly designed.

Ultimately, artificial intelligence—precisely because it is not limited or guided by emotions—demands even greater attention and responsibility on our part, so that it may be developed and used with an ethical and forward-thinking vision, avoiding the risk of turning its incredible potential into a threat to humanity itself.

How are humanity and science protecting themselves from this danger?

Paradoxically, the very absence of emotions in artificial intelligence can be leveraged positively. The ability to design a system from the ground up to act strictly according to predefined logic—without the interference of emotional impulses or contradictory feelings—offers a unique opportunity.

We are therefore using this characteristic to our advantage by establishing, from the outset, rigorous mechanisms of regulation, limitation, and control, ensuring that AI cannot act outside the boundaries that have been deliberately set.

Precisely for this reason, the international scientific community is working intensively to develop ethical, technological, and legislative standards capable of guiding the safe development of artificial intelligence.

The European Artificial Intelligence Act (AI Act), for example, represents a major regulatory effort aimed at establishing clear boundaries—identifying the highest-risk applications and imposing strict requirements for transparency, traceability, and respect for fundamental human rights.

In parallel, the scientific community has focused on developing systems capable of transparently explaining the decisions made: this is the field of so-called Explainable AI (XAI).
This approach ensures that every decision made by an artificial intelligence system can be understood and validated, providing a higher level of control and significantly reducing potential risks.
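
One widely used model-agnostic technique in this family is permutation importance: shuffle one input feature at a time and measure how much the model’s performance drops, revealing which inputs a decision actually relies on. The toy example below illustrates the idea on invented data; real XAI toolkits are considerably more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dataset: the label depends on feature 0 and ignores feature 1.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)

def model(X):
    """Stand-in for any trained black-box classifier."""
    return (X[:, 0] > 0).astype(int)

baseline = (model(X) == y).mean()

# Permutation importance: break one feature at a time and watch
# how much accuracy is lost.
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    drop = baseline - (model(X_perm) == y).mean()
    print(f"feature {j}: accuracy drop {drop:.2f}")
# feature 0 shows a large drop (the model relies on it); feature 1 shows none.
```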

In addition, developers are implementing advanced technologies such as Safe Reinforcement Learning and Active Monitoring techniques, which allow for the timely interruption or adjustment of unexpected or harmful behaviors.
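
As an illustration of the active-monitoring idea, the sketch below wraps a hypothetical agent in a runtime guard that vets each proposed action against a whitelist and a risk threshold, halting execution for human review when a rule fires. All names and numbers are invented for the example.

```python
ALLOWED_ACTIONS = {"report_status", "adjust_output", "pause_line"}

def safety_monitor(action: str, risk_score: float, threshold: float = 0.8) -> bool:
    """Return True only if the proposed action may proceed."""
    return action in ALLOWED_ACTIONS and risk_score < threshold

def run_agent(proposed_steps):
    for action, risk in proposed_steps:
        if not safety_monitor(action, risk):
            # Interrupt rather than let the agent continue unsupervised.
            print(f"BLOCKED: {action} (risk={risk}) -- escalating to human review")
            return
        print(f"executing: {action}")

# The last step violates both the whitelist and the risk threshold.
run_agent([("report_status", 0.1),
           ("adjust_output", 0.3),
           ("disable_safety_valve", 0.95)])
```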

International collaboration plays a central role. Numerous organizations—such as OpenAI and the Future of Life Institute—are promoting global initiatives aimed at defining common rules and shared guidelines to ensure that the development of artificial intelligence remains ethically responsible and fully under human control.

In summary, the absence of emotions in AI—if managed strategically—becomes a crucial advantage, allowing us to design effective mechanisms of limitation and control, and to ensure that the use of this technology is always in service of, and never detrimental to, humanity.

Should we be afraid of future artificial intelligence?

Fear toward artificial intelligence stems mainly from what we do not know and do not fully understand. It is a legitimate feeling, as we are witnessing a rapid technological evolution that could profoundly impact our lives. However, more than fear, what we need is caution and awareness.

AI, like any powerful technology, is not inherently good or bad: it is a tool in human hands. What truly matters is how we choose to use it. As long as we continue to apply this technology responsibly—respecting ethical standards and moral boundaries—we have nothing to fear.

On the contrary, we will be able to harness its incredible capabilities to significantly improve the quality of our lives across many fields: from medicine to science, from the environment to everyday living.

But this trust must not be blind. It is essential to continuously monitor the development of AI and to constantly refine rules, regulations, and safety strategies.

Human responsibility remains central. We must demand transparency, clarity, and the ability to control the systems we create, to ensure they do not escape our oversight.

Ultimately, we should not fear the future of artificial intelligence—provided we remain vigilant, stay informed, and, above all, remember that it is we, as human beings, who decide how, when, and why to use it.

If we hold firmly to our role as guides, artificial intelligence will not be a threat, but rather an extraordinary ally in building a better future.

Worst case, we can always pull the plug.

Faced with an extreme risk—such as an AI whose objective was to extinguish the human race—humanity would undoubtedly be willing to take drastic measures, including “pulling the plug on everything.”

However, it is crucial to understand that such an extreme action would still have catastrophic consequences for our civilization. Today, every aspect of human life—from communication to transportation, from healthcare to energy—is deeply intertwined with technology.

This means that the real priority must be to prevent such an emergency from ever arising in the first place.
If we were to reach the point where we had to shut everything down, it would mean we had failed to responsibly manage and govern the development of AI.

Pulling the plug, while technically a viable solution, must be seen as the very last resort.

What we are doing today is the right path: regulating the development of AI well in advance, to ensure that it becomes a great ally of humanity—one that leads us toward ever greater achievements in technology, healthcare, and quality of life.

Conclusion

Artificial intelligence represents one of the greatest technological revolutions of our time, with extraordinary potential still largely unexplored. However, it will never truly be “human”: its logical-statistical approach, its emotionless nature, and its specialization in specific tasks place it inevitably at an unbridgeable distance from the complexity and richness of the human brain.

This awareness should not lead us to fear the future, but rather encourage us to face it with intelligence and responsibility. The absence of emotions in AI can be leveraged precisely to ensure more effective regulation and safe, informed management of its development.

With clear rules, ethical standards, and constant oversight, we can ensure that artificial intelligence remains forever a tool at our service—and never a threat to our existence.

If it’s true that humanity always retains the extreme option of “pulling the plug,” the real challenge lies in ensuring that this remains a purely theoretical possibility.

Our most important task is to anticipate risks, manage this technology consciously and proactively, and always remember that, in the end, it is the human being who holds the keys to their own destiny.

AI will never be human. And it is precisely this difference that will allow us to harness it—through caution and foresight—to build a better future.

I would like to conclude this reflection with a fascinating paradox—one that could almost be called religious.

Artificial intelligence is, in many respects, a creation of our own—a being shaped according to our design and intentions. Similarly, from a religious perspective, humanity is seen as the creation of a God who, much like we do with AI, brought into existence beings endowed with autonomy, capabilities, and vast potential.

From this perspective, the creator always retains the faculty and the right to intervene drastically in their creation—especially if it poses a threat to itself or to others.

Just as, in the biblical narrative, God reserved for Himself the possibility of ending humanity with the Great Flood, similarly, the human being—creator of artificial intelligence—retains the ultimate right to “pull the plug” should AI surpass the limits of control and become a real threat.

This paradox reminds us that the ultimate responsibility for our technological creation remains profoundly human. Let us never forget that behind the extraordinary power of artificial intelligence, there is always the hand of humankind—capable of correcting, limiting, or, in extreme cases, undoing what it has created.
