
AI and Quantum Computers: the next leap in computing

by Simone Renzi / April 30, 2025

From time to time, I find myself wondering what scenario awaits us over the next ten years.

My generation—the one born in the 1980s—has had the unique privilege of witnessing a momentous transition: from an analog world, we were catapulted into a rapidly expanding digital ecosystem, with an innovation rate that has followed an almost exponential trajectory.

Each advancement has triggered new developments, which in turn have led to further discoveries, in a chain reaction comparable to a controlled technological explosion.

Although artificial intelligence has only recently entered the public spotlight, its theoretical foundations date back decades. At the time, the main obstacle was computing power: training neural networks required computational capabilities that, when the concepts were first developed, were simply unimaginable.

Taking a step back and returning to the metaphor of the chain reaction, it’s important to note that the evolution of traditional CPUs has now slowed.

This stagnation is primarily due to the physical limits of silicon: integration density, thermal dissipation, and leakage thresholds pose barriers that prevent the current paradigm of miniaturization from continuing indefinitely.

But in truth, there’s more to it than that.

CPU

Physical limits

When we talk about the slowdown in CPU evolution, the blame is often hastily assigned to the “physical limits of silicon.”

In reality, behind that simplified phrase lies a complex web of constraints arising from three distinct domains—physics, electronics, and computer science—which, when layered together, form a true technological glass ceiling.

It’s worth examining these limits narratively, weaving the three perspectives together, to understand why we can no longer rely on the periodic doubling of clock speed or transistor count to achieve higher performance.

From quantum mechanics to thermodynamics: unforgiving physics

For decades, we benefited from what’s known as Dennard scaling: shrink the channel length, lower the supply voltage, keep power density constant, and you get faster, more efficient chips.

That fairytale, however, ends around the 90 nm node, when voltage can no longer decrease proportionally, and the heat generated per unit area begins to rise.
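To make the arithmetic behind Dennard scaling concrete, here is a minimal Python sketch with purely illustrative numbers (the shrink factor and voltages are assumptions, not process data): dynamic power scales as C·V²·f, and once the supply voltage stops shrinking along with the geometry, power density climbs at every node.

```python
# Toy illustration of Dennard scaling breaking down (illustrative numbers only).
# Dynamic power per transistor: P = C * V^2 * f. Under ideal scaling, C, V and the
# transistor area all shrink by the same factor k, so power density C*V^2*f / area
# stays constant. Once V can no longer drop, density rises with every node.

def power_density(c, v, f, area):
    """Dynamic power per unit area (arbitrary units)."""
    return c * v * v * f / area

k = 0.7  # linear shrink per node (hypothetical)

# Ideal Dennard scaling: everything scales together.
ideal = power_density(c=1 * k, v=1 * k, f=1 / k, area=1 * k**2)
# Post-90 nm reality: capacitance and area still shrink, voltage barely moves.
stalled = power_density(c=1 * k, v=0.95, f=1 / k, area=1 * k**2)

print(f"ideal scaling power density : {ideal:.2f}  (unchanged, ~1.0)")
print(f"voltage stalled             : {stalled:.2f}  (rises every node)")
```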

Meanwhile, the thinning of the gate oxide slips below one nanometer: at that point, electrons no longer “jump” over the barrier (the thin insulating layer that separates the gate from the channel), but tunnel directly through it.

This gives rise to leakage currents that waste energy even when the transistor is logically switched off.

The most frequently cited thermodynamic constraint is Landauer’s limit: erasing a single bit of information at room temperature requires at least kBT ln 2 of energy—about 3 × 10⁻²¹ joules.

Today, we’re still several orders of magnitude above that limit, but the gap is closing fast: each further reduction in switching energy becomes painfully expensive in terms of materials, layout complexity, and process control.
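As a quick sanity check of the figure quoted above, here is a minimal sketch computing Landauer's bound; the femtojoule switching energy used for comparison is an assumption for illustration, not a measured value for any specific process.

```python
import math

# Landauer's limit: erasing one bit at temperature T costs at least k_B * T * ln 2.
k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

landauer_joules = k_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {landauer_joules:.2e} J per bit erased")
# ≈ 2.9e-21 J, the "3 × 10⁻²¹ joules" figure quoted above.

# Hypothetical comparison: assume a logic operation today costs on the order of a
# femtojoule (1e-15 J) — an illustrative assumption, not a datasheet value.
switching_energy = 1e-15
gap = math.log10(switching_energy / landauer_joules)
print(f"Orders of magnitude above the limit: {gap:.1f}")
```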

The last, less glamorous but decisive protagonist is interconnection. As copper wires shrink, their resistivity increases due to surface and grain boundary scattering. The RC delay of metal interconnects does not scale along with the transistor—on the contrary, it worsens. This is why clock frequencies have been stuck below the 5 GHz threshold for years: exceeding it would mean generating more heat than the package can dissipate.

Microfabrication and packaging: electronics challenges itself

Process engineers, for their part, have responded with two major feats of ingenuity. The first is the evolution of device architecture: from planar CMOS to FinFET, and now to Gate-All-Around FETs (nanosheets), which wrap the channel from all sides to improve electrostatic control. This approach works—but it introduces quantum confinement effects that degrade carrier mobility, eroding part of the expected performance gain.

The second is shifting power delivery to the backside of the wafer—known as backside power delivery—to reduce voltage drops. This is micrometer-scale surgery, and it comes with new challenges: through-silicon vias (TSVs) that add parasitic capacitance, and more critically, vertical thermal gradients that can exceed 40 K per millimeter. This is why, in parallel, 3D-IC chiplet integration is gaining momentum: if we can’t spread transistors out across a surface anymore, we stack them. But a three-dimensional chip brings with it a puzzle of cooling, cache coherence, and clock distribution that keeps designers up at night.

Architecture and software: when the bottleneck lies in the algorithm

Computer scientists are not standing still. They’ve long since exhausted the performance gains from out-of-order execution and instruction-level parallelism (ILP). Increasing pipeline width beyond six to eight simultaneous instructions yields diminishing returns, as data dependencies and conditional branches choke the flow. The response is to shift progress toward massive parallelization and heterogeneity: small and large cores on the same die, integrated GPUs, dedicated tensor accelerators.

Here, however, another barrier emerges: the so-called memory wall. The ALU performs operations in just a few picoseconds, but reading from DRAM takes around 50 nanoseconds—a thousand times longer—quickly erasing any computational advantage gained. As a result, more chip area is now dedicated to cache than to compute units, at the cost of enormous complexity in maintaining coherence and of algorithms that must be data-locality aware from the very earliest stages of design.
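A back-of-the-envelope sketch of the memory wall, using the rough figures above (about 50 ps per ALU operation is an assumption for illustration; real latencies vary widely by platform):

```python
# The memory wall in one back-of-the-envelope calculation.
# Assumptions for illustration: an ALU operation takes ~50 ps and a DRAM access ~50 ns,
# the "thousand times longer" ratio mentioned above.

alu_op_s = 50e-12      # ~50 picoseconds per ALU operation (assumed)
dram_access_s = 50e-9  # ~50 nanoseconds per DRAM read (assumed)

stall_ratio = dram_access_s / alu_op_s
print(f"One cache miss costs roughly {stall_ratio:.0f} ALU-operation slots")

# This is why caches, prefetching and data-locality-aware algorithms dominate:
# hiding even a fraction of these stalls is worth more than a faster ALU.
```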

Here lies the paradox: we can add thousands of cores, but silicon is now dark—only a fraction of the chip can be powered on simultaneously without overheating it. Programming for this fragmented universe requires models like OpenMP, SYCL, or asynchronous task programming, and above all, a new mindset: performance now comes from explicitly exposing parallelism and data locality, not from waiting for a faster clock.
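As a small illustration of that mindset, here is a minimal sketch of task-based parallelism in plain Python; concurrent.futures stands in here for the models mentioned above (OpenMP tasks, SYCL queues), purely as a language-neutral analogy.

```python
# A minimal sketch of the "task" mindset: express work as independent chunks and let a
# runtime map them onto whatever cores (or accelerators) can actually be powered on.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """One independent task: good data locality, no shared mutable state."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_tasks=8):
    chunk_size = (len(data) + n_tasks - 1) // n_tasks
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1_000_000))))
```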

The future beyond silicon?

Some are betting on two-dimensional materials—graphene, MoS₂—which promise order-of-magnitude gains in electron mobility, but are still far from large-scale manufacturing due to unstable band gaps and immature deposition processes. Others look to spintronics for ultra-fast non-volatile memories. In the short term, the realistic trajectory points to hardware-software co-design, 3D chiplets, and even lower supply voltages—possibly assisted by adiabatic logic or reversible circuits to squeeze out a few more orders of magnitude in energy efficiency.

The end of “discount silicon” is not marked by a single wall, but by a cascade of obstacles. Thermodynamic limits set the ultimate threshold; interconnect bottlenecks and process variability raise the cost of every additional nanometer; memory bandwidth and energy efficiency become as much a software problem as a hardware one. Understanding this integrated complexity not only explains why we won’t see 10 GHz CPUs in tomorrow’s laptops, but also points to where research must focus: in the tight symbiosis of physicists, electronic engineers, and computer scientists, united in the hunt for any remaining margin of freedom in a world rapidly approaching its fundamental limits.

So what, in my view, will the solution be, at least in the initial phase and in the enterprise world?

The quantum computer

Quantum computers represent the only foreseeable platform capable of surpassing, for specific classes of problems, the thermodynamic and architectural limits of silicon. But this is not an immediate leap: it will take a decade of materials science, cryogenic engineering, and algorithmic formalization to reach the “logical scale” truly needed to replace or integrate classical HPC in cutting-edge applications—from cancer drug design to post-quantum cryptography. Preparing now, with cross-disciplinary skills and hybrid projects, means being ready when the qubit finally becomes the new transistor.

What is a qubit – let’s clarify

Imagine a coin balanced on its edge: it is neither heads nor tails, yet it holds both possibilities until it falls. A qubit is born from a similar concept, but applied to quantum mechanics: it is the elementary unit of information that can exist in a simultaneous combination of the “0” and “1” states. This condition of controlled ambiguity is called superposition, and it opens up computational possibilities that traditional bits—fixed at a single value at any given time—cannot even come close to touching.

Superposition: parallelism in wave amplitudes

With a classical bit, computation proceeds one value at a time: the bit is 0, then it is 1. In a qubit, the two values coexist as amplitudes that describe the probability of observing the system in one outcome or the other. During computation, the qubit explores both paths simultaneously—a form of parallelism that doesn’t depend on the number of cores or the clock speed of the processor.
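To make the idea of amplitudes tangible, here is a minimal sketch that models a single qubit as a two-component vector of complex amplitudes, with no quantum SDK involved; it is a didactic toy, not how real hardware is driven.

```python
import numpy as np

# A qubit as a 2-component vector of complex amplitudes:
# |ψ⟩ = α|0⟩ + β|1⟩ with |α|² + |β|² = 1; the squared magnitudes are measurement probabilities.

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Equal superposition, like the coin balanced on its edge.
psi = (ket0 + ket1) / np.sqrt(2)

probabilities = np.abs(psi) ** 2
print("P(measure 0) =", probabilities[0])   # 0.5
print("P(measure 1) =", probabilities[1])   # 0.5

# Measurement collapses the state: sample one outcome and the superposition is gone.
outcome = np.random.choice([0, 1], p=probabilities)
print("measured:", outcome)
```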

Entanglement: connecting qubits beyond classical physics

By linking two or more qubits, entanglement is created—a profound connection whereby the measurement outcome of one instantly affects the other, even if they are separated by kilometers. From this property arise breathtaking computational accelerations, because a register of n qubits can simultaneously address a set of possibilities that a classical computer would need to explore one by one.
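The same toy representation extends to two qubits: the sketch below builds a Bell state with a Hadamard followed by a CNOT, using plain matrices, and shows that only the correlated outcomes 00 and 11 survive. Again, a didactic sketch, not an SDK example.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard on one qubit
I = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)                # control = first qubit

ket00 = np.zeros(4, dtype=complex); ket00[0] = 1              # |00⟩

bell = CNOT @ np.kron(H, I) @ ket00                           # (|00⟩ + |11⟩)/√2
print(np.round(bell, 3))                                      # [0.707 0 0 0.707]

# The outcomes are perfectly correlated: only |00⟩ and |11⟩ have non-zero probability,
# so measuring one qubit instantly fixes what the other will show.
print("P(00), P(01), P(10), P(11) =", np.round(np.abs(bell) ** 2, 3))
```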

How do you “program” a qubit?

Logical operations—equivalent to NOT or AND gates in traditional chips—become state rotations: targeted pulses—microwaves, lasers, or magnetic fields depending on the technology—manipulate the qubit, causing it to oscillate among its internal combinations. Designing a quantum algorithm means orchestrating sequences of these rotations to concentrate, through interference, the probability of obtaining the correct answer upon measurement.
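A minimal sketch of what “state rotation” means in practice: the matrix R_x(θ) below plays the role of a calibrated pulse, and sweeping θ moves the qubit continuously from “certainly 0” to “certainly 1”. On real hardware the angle is set by the pulse shape and duration, not by a software parameter.

```python
import numpy as np

def rx(theta):
    """Rotation of angle theta around the X axis: R_x(θ) = exp(-i θ X / 2)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s],
                     [-1j * s, c]])

ket0 = np.array([1, 0], dtype=complex)

for theta in (0, np.pi / 2, np.pi):            # no pulse, "half" pulse, full NOT
    psi = rx(theta) @ ket0
    print(f"θ = {theta:.2f} rad → P(1) = {abs(psi[1])**2:.2f}")

# θ = π behaves like a NOT gate: P(1) = 1. Interference between such rotations is what a
# quantum algorithm orchestrates to concentrate probability on the correct answer.
```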

Not all that glitters is gold

Superposition is fragile: vibrations, electromagnetic fields, even a single stray photon can cause the qubit to collapse into a classical value—a phenomenon known as decoherence. To prevent the information from evaporating:

  • We isolate the hardware (cryogenics, ultra-high vacuum, shielding).
  • We shorten operation times: the fewer microseconds that pass, the fewer chances noise has to interfere.
  • We use redundancy: groups of physical qubits monitor each other’s errors, giving rise to a more robust “logical” qubit.
    This redundancy is currently the heaviest cost factor: it takes dozens—sometimes hundreds—of physical qubits to obtain a single reliable one.

The qubit represents the most ambitious bet of the post-silicon era: a tiny unit of information that can be simultaneously “here” and “there,” and entangle with other units in ways that defy classical intuition.

Harnessing it means venturing into a territory where physics, electronic engineering, and computer science converge into a single technological landscape.
And it is this very convergence that could give rise to the next true revolution in computing.

 

Microsoft Majorana 1 – The topological qubit with which Microsoft aims to break the glass ceiling

We’ve seen that superposition and entanglement make the qubit incredibly powerful… but also terribly fragile. Noise, heat, and complex wiring turn every step forward into a chess match against the laws of physics. With Majorana 1, Microsoft is trying to move the chessboard itself: it introduces a topological qubit based on Majorana Zero Modes (MZM) that, by design, is far less vulnerable to the factors that plague current quantum platforms.

Decoherence under control thanks to topological protection

In conventional architectures, information resides “on site”: a local disturbance is enough to cause the qubit to collapse. In Majorana 1’s InAs-Al nanowire, the logical state is distributed between two Majorana quasiparticles positioned at the wire’s ends. Any noise that affects only one end cannot alter the overall parity, so loss of coherence would require a simultaneous event on both ends—an event with dramatically lower probability. The promised result is a coherence time measured in tens of milliseconds, compared to the few tens of microseconds typical of superconducting transmon qubits.

Gate errors an order of magnitude lower

Logical operations do not rely on ultra-precise analog pulses, but on sequences of braidings or parity measurements involving four Majorana modes.
Topological physics inherently dampens small amplitude and phase inaccuracies, aiming for error rates on the order of one in ten thousand.

In practice, this means the error correction system has less work to do, and the need for redundant qubits is significantly reduced.

Fewer physical qubits per logical qubit

In classical surface codes, hundreds of physical qubits are required to obtain a single reliable logical qubit.
The native protection of Majorana 1 reduces this overhead to roughly one hundred physical qubits per logical qubit: this means that a processor with one million physical qubits could provide thousands of usable logical qubits, not just a few dozen.
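In rough numbers (the 1,000:1 surface-code figure is a commonly cited ballpark that depends heavily on the physical error rate, and the 100:1 ratio is Microsoft's target, not a demonstrated result):

```python
# Rough overhead arithmetic using the figures discussed above.
physical_qubits = 1_000_000

surface_code_overhead = 1_000   # ballpark physical-per-logical ratio for surface codes
majorana_overhead = 100         # the roughly 100:1 ratio targeted by Majorana 1

print("logical qubits, surface code :", physical_qubits // surface_code_overhead)
print("logical qubits, topological  :", physical_qubits // majorana_overhead)
```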

Simpler wiring and cryogenics

Read and write operations occur at lower frequencies compared to the microwaves used in transmons; as a result, the number of RF lines entering the cryostat drops significantly. Fewer cables mean less thermal load and a more scalable architecture. Microsoft envisions “H-shaped” tiles that allow thousands of topological qubits to be packed onto a single die and then stacked in 3D—without a tangled mess of connectors.

Industry roadmap compatible with EUV lithography

The “topoconductor” of Majorana 1 is fabricated using techniques similar to those employed in advanced 2 nm nodes: epitaxial deposition of the nanowire, EUV patterning for the aluminum contacts, and 3D interposers to connect to the control logic at 4 K. This means that, if the prototype holds up under laboratory testing, the manufacturing infrastructure already exists to scale it up.

Why follow Microsoft’s insights on Majorana 1?

If the bet succeeds, Majorana 1 will offer a qubit that is less noisy, more scalable, and already partially fault-tolerant even before applying traditional error-correcting codes. In other words, it will do for quantum computing what the MOSFET did for classical electronics: turn a lab prototype into a repeatable industrial building block.

It’s not yet the magic wand that solves every problem, but it represents a paradigm shift: instead of fighting noise with increasingly complex layers of error correction, Microsoft sidesteps it by designing the qubit so that noise simply has nowhere to take hold. If the model holds, the path to the millions of logical qubits needed to revolutionize chemistry, cryptography, and global optimization could be shortened by many years.

All wonderful—if it weren’t for the fact that…

Quantum software is not classical software

AI will have to wait. Laboratories are racing to stabilize quantum hardware, but the other half of the game is played in the programming paradigm.

A qubit-based computer, even when fully reliable, won’t speak x86 assembly, won’t support a traditional operating system, and won’t execute loops and conditionals in the conventional way.

With silicon, we’re used to the “fetch-decode-execute” model: the CPU fetches an instruction, executes it, then reads from or writes to memory.

In a quantum processor, the program is monolithic: a fixed sequence of gates is defined before execution; the qubit cannot be continuously read and written without destroying its state.

Loops are simulated by duplicating portions of the circuit—not through dynamic jumps.
There is no erasable memory: every operation must be reversible, or it must end with a measurement that collapses the affected qubits, causing the loss of their superposition.
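The point about loops can be shown with a toy sketch in which a “circuit” is just an ordered list of gate labels: the classical loop runs at compile time, and its body is duplicated into the fixed gate sequence. The gate names are placeholders, not calls to any real quantum SDK.

```python
# Loops on a quantum processor: there is no program counter jumping back, so a repeated
# block is literally unrolled into the gate sequence before execution.

def body(qubit):
    """One iteration of the 'loop' body as a fixed gate sequence."""
    return [("H", qubit), ("RZ(pi/4)", qubit), ("H", qubit)]

def unrolled_circuit(qubit, iterations):
    circuit = [("PREPARE", qubit)]
    for _ in range(iterations):          # classical loop at circuit-build time...
        circuit += body(qubit)           # ...becomes duplicated gates in the circuit
    circuit.append(("MEASURE", qubit))   # the only point where information leaves the qubit
    return circuit

print(unrolled_circuit(qubit=0, iterations=3))
# The circuit depth grows with the iteration count — one reason deep "loops" are expensive.
```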

We will have entirely new development stacks—alongside Python, C#, and Java, quantum-specific languages such as Q# will likely take over the parts that actually run on the qubits.

Even once we have stable qubits, we’ll still need to wrap them in surface codes: every logical operation will become a delicate dance of hundreds of physical operations.

As a result, the software will need to schedule millions of gate operations without accumulating delay, manage fast measurement and correction cycles that depend on classical processors located ultra-close to the cryostats, and insert ancilla qubits that don’t even appear in the high-level code.

This overhead means that error decoding routines occupy a large portion of the controller’s compute time, reducing the window available for the application’s actual useful work.

Why will AI be slow to arrive on quantum computers?

At the moment, neural networks have two key figures: millions of parameters and billions of multiplications per second.
To transform them into quantum circuits, we need:

  • Tens of thousands of error-corrected logical qubits to represent tensors through quantum factorization
  • Circuit depth (number of gate levels) on the order of millions to emulate activation functions, normalization, and backpropagation.

Today, we have only a few logical qubits and a few hundred levels of tolerable circuit depth.

The quantum machine learning algorithms that show promising advantages—such as quantum kernels, Boltzmann state sampling, or speedups in combinatorial problems—function as co-processors: they accelerate a specific step within a workflow that remains predominantly classical, running mainly on GPUs and CPUs.
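The co-processor pattern looks roughly like the sketch below: a classical outer loop owns the workflow and delegates one expensive evaluation per iteration. Here that evaluation is a plain classical stub standing in for a quantum kernel or sampling call, since no real quantum backend is assumed.

```python
import random

def expensive_step(params):
    """Placeholder for the piece a quantum co-processor would accelerate
    (e.g. a kernel evaluation or a sampling step). Purely classical here."""
    return sum((p - 0.5) ** 2 for p in params)

def classical_outer_loop(n_params=4, iterations=200, step=0.05):
    params = [random.random() for _ in range(n_params)]
    best = expensive_step(params)
    for _ in range(iterations):                      # GPUs/CPUs keep owning this loop
        candidate = [p + random.uniform(-step, step) for p in params]
        score = expensive_step(candidate)
        if score < best:                             # classical logic decides what to keep
            params, best = candidate, score
    return params, best

print(classical_outer_loop())
```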

Bringing artificial intelligence to quantum hardware is a marathon, not a sprint. The qubit must be programmed with a new language, using control infrastructures that operate at cryogenic temperatures, and a software “orchestrator” that incorporates error correction and topological mapping.

Until we have thousands of logical qubits and compilers capable of hiding this intricate ecosystem, AI will remain firmly anchored to GPUs and TPUs.

In the meantime, however, experimenting with hybrid approaches—using qubits as accelerators for targeted tasks—is the best way to prepare for the day when quantum computing moves from a promise to a general-purpose platform.

What will happen when we can run AI on quantum computers?

We will definitely get there—it’s only a matter of time. It will likely take around 10 years, but even now, artificial intelligence is already making a significant contribution.

For example, just a few days ago, a Swedish team discovered a method—thanks to the help of artificial intelligence—to perform clinical analyses using urine to detect prostate cancer in men at its earliest stages.

The example of early prostate cancer diagnosis developed in Stockholm is just a taste of what could happen when AI gains access to quantum hardware. In that project, neural networks sifted through thousands of tumor transcriptome profiles and, by cross-referencing them with urine samples, identified a panel of biomarkers that achieves 92% accuracy—far exceeding that of the current PSA test. The protocol is set to enter clinical trials involving 250,000 patients over the next eight years. AI is also already being used to read CT scans and X-rays, and its “eye” is proving to be remarkably accurate.

But what will happen when we have virtually unlimited computational power at our disposal?

Naturally, we now have to venture into uncharted territory—one of pure imagination…

Artificial intelligence will likely be able to find a personalized cure for cancer.

With a fault-tolerant quantum computer, the next step will be to simulate—at the level of electronic interactions—how each specific patient mutation alters the conformation of proteins involved in carcinogenesis. Today, a supercluster takes weeks to model just a few dozen atoms; a quantum solver could do it on entire enzyme binding pockets in hours, delivering in near real-time the small molecule best suited to block them.
Proof-of-concept studies already exist demonstrating anticancer compounds generated using hybrid quantum–classical workflows.

Shall we summarize in simple terms?

  • Ultra-precise diagnoses from blood or urine, guided by AI.

  • Screening of millions of molecules, without animal testing.

  • “Tailor-made” therapy crafted around the individual patient’s genetic signature.

But AI combined with quantum computing won’t bring benefits only in the medical field.

Robotics: swarms and manipulation optimized by quantum computing

Motion planning for an industrial arm—or worse, for a swarm of drones—is a combinatorial optimization problem that scales explosively with the number of degrees of freedom. Quantum algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) are already showing significant reductions in computation time for scenarios involving multi-robot path planning and coverage of complex environments (a toy formulation of this kind of problem is sketched after the list below). When latencies drop below milliseconds, we will be able to have:

  • Autonomous warehouses where hundreds of AMRs (Autonomous Mobile Robots) instantly recalculate their routes when a human bottleneck appears between the shelves.

  • Rescue drones capable of replanning the exploration of collapsed buildings in under a second, reducing the time needed to locate survivors.
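As promised above, here is a toy version of the kind of formulation QAOA consumes: a route-assignment problem written as a QUBO (Quadratic Unconstrained Binary Optimization) and brute-forced classically. The savings and penalty values are invented for illustration; a quantum optimizer would search the same cost landscape.

```python
import itertools
import numpy as np

# x[i] = 1 means "robot i takes the shortcut aisle"; sharing the aisle costs a collision
# penalty. Illustrative numbers only. We brute-force the optimum classically here.

travel_saving = np.array([3.0, 2.0, 2.5])      # benefit if each robot uses the shortcut
penalty = 4.0                                  # cost when two robots share the aisle

n = len(travel_saving)
Q = np.diag(-travel_saving)                    # linear terms: reward taking the shortcut
for i, j in itertools.combinations(range(n), 2):
    Q[i, j] = penalty                          # quadratic terms: pairwise conflicts

best = min(itertools.product([0, 1], repeat=n),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print("best assignment:", best,
      "cost:", float(np.array(best) @ Q @ np.array(best)))
```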

Economy: real-time decision-making with markets that never close

Portfolio building, hedging, and risk management are driven by covariance matrices, which grow quadratically with the number of assets.

In 2025, IQM and DATEV have already demonstrated that a prototype with just a few dozen qubits can produce portfolios that are 3% more efficient than classical methods at the same level of risk. Moody’s, in its annual report, forecasts the first “alpha-generating” adoption within three years, specifically in currencies and complex derivatives. At full scale, the AI + QC combination will be able to do the following (for context, a classical mean-variance baseline is sketched after the list):

  • Optimize portfolios of thousands of assets over time horizons of just a few minutes, not end-of-day.

  • Simulate macroeconomic shocks using a quantum stochastic model, improving the resilience of pension funds.

  • Reduce fraud and insider trading through quantum pattern matching on real-time transaction streams.
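The classical baseline being accelerated looks roughly like the sketch below: a minimum-variance portfolio computed from a covariance matrix whose size grows quadratically with the number of assets. The returns are synthetic random data, purely for illustration, not market data.

```python
import numpy as np

rng = np.random.default_rng(0)

def minimum_variance_weights(returns):
    """Classical minimum-variance portfolio: w ∝ Σ⁻¹·1, normalized to sum to 1."""
    cov = np.cov(returns, rowvar=False)       # n x n covariance matrix
    ones = np.ones(cov.shape[0])
    raw = np.linalg.solve(cov, ones)          # the O(n³) linear-algebra step that
    return raw / raw.sum()                    # quantum routines aim to accelerate

for n_assets in (10, 100, 1000):
    returns = rng.normal(0.001, 0.02, size=(2 * n_assets, n_assets))  # synthetic daily returns
    w = minimum_variance_weights(returns)
    entries = n_assets * (n_assets + 1) // 2  # independent covariance entries: quadratic growth
    print(f"{n_assets:5d} assets → {entries:8d} covariance entries, weights sum = {w.sum():.2f}")
```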

Materials science and fundamental research

Australia’s victory at the 2024 Gordon Bell Prize showed that achieving “lab-grade” accuracy in the simulation of biochemical systems is possible—but it requires exascale computing and weeks of processing. With quantum hardware, these simulations will become routine:

  • Solid-state batteries optimized by computing ion diffusion in lattices of hundreds of unit cells in just a few minutes.

  • Green catalysts for ammonia production at room temperature, reducing the CO₂ footprint of the entire fertilizer supply chain.

  • Multiscale climate forecasts where AI trains local models and quantum kernels solve Navier-Stokes equations on selected turbulent domains.

Astronomy and cosmology: new eyes on the universe

The search for Earth-like exoplanets requires sifting through terabytes of light curves to detect transit dips just a few photons deep. Variational quantum models are already classifying Kepler data with higher precision than classical algorithms. Looking ahead:

  • Real-time optimized telescope scheduling: selecting where to point an interferometric array based on atmospheric conditions and dynamic scientific opportunities.

  • Streaming analysis of gravitational waves using quantum neural networks capable of identifying signals buried in noise that elude traditional pipelines.

Conclusion

The arrival of fault-tolerant quantum computers won’t replace the good old CPU—rather, it will unlock for artificial intelligence those computational margins that are currently out of reach, allowing it to explore entire solution spaces in a few hours that would otherwise take years of work—or remain simply unattainable.

The technological horizon is likely about a decade away, perhaps less; that’s why it is essential to start designing hybrid algorithms and workflows now, so that the software will be ready when the hardware is.

At that point, ethical, philosophical, and social questions will come into play: AI must not replace human labor, but rather become an ally capable of accelerating scientific discovery and reviving that upward curve of progress which currently seems to have flattened.

The real challenge will be to make all of this coherent, safe, and protected—ensuring that unprecedented computational power does not fall into hands willing to bend it toward destructive ends, much like the shift that turned Einstein’s formula—originally a statement about the equivalence of mass and energy—into the trigger for the atomic bomb.

To wisely govern this new frontier means ensuring that the quantum era becomes a multiplier of knowledge, not of risk.
