Foundations of Quantum Programming: A Conversation with Prof. Elías F. Combarro
From qubit states to Shor’s algorithm: how quantum programming really works—and where it’s headed.
From quantum search and error correction to tooling constraints and software reproducibility, programming for quantum computers is unlike anything in classical systems. In this conversation, we speak with Professor Elías F. Combarro—co-author of A Practical Guide to Quantum Computing (Packt, 2025)—about what it means to write, reason about, and teach quantum software in a world where the hardware is still catching up. The new book also serves as a prequel to the 2023 title A Practical Guide to Quantum Machine Learning and Quantum Optimization, which Combarro co-authored.
Combarro is a full professor in the Department of Computer Science at the University of Oviedo in Spain. With degrees in both mathematics and computer science, his research spans computation theory, logic, quantum optimization, and algebraic structures. He has held research appointments at CERN and Harvard, and served on the advisory board of CERN’s Quantum Technology Initiative from 2021 to 2024. His recent work focuses on bridging mathematical formalism with executable quantum systems.
In this interview, we cover foundational algorithms like Shor’s and Grover’s, why Qiskit emerged as the most practical tool for teaching and experimentation, and how to build mental models that scale from toy examples to real circuits. Along the way, we explore entanglement, measurement, abstraction, simulation, and what quantum advantage might realistically mean in the coming decade—for engineers, researchers, and systems designers alike.
You can watch the full conversation below—or read on for the complete transcript.
The Book: A Practical Guide to Quantum Computing
1: What prompted this decision to go back to basics with your second book?
Elías F. Combarro: I would say there are two main reasons for going back to these foundational algorithms in quantum computing with our second book. The first is that we actually wanted to include these algorithms in the first book, but it was impossible. If you’ve read it, you’ll know it’s almost 700 pages long—much more than we expected when we started writing. So it just wasn’t feasible to include anything else.
We were always thinking, “Oh, we should have included those very important algorithms.” It wasn’t possible, so we had this lingering idea to come back and write another book—or maybe an extended edition—that would include them. These foundational algorithms are very important in quantum computing. They’re probably the first algorithms that everyone studies when they start learning the field. But we chose to focus on more modern algorithms in the first book, like those used for optimization and machine learning, because at the time those were hot topics and there weren’t many books covering them.
Then, in addition to that desire to write about these foundational algorithms, new courses in quantum computing have been introduced—at our university and many others. I’m currently teaching two different courses, and possibly a third is coming next year. We felt the need to have a textbook for these classes. The courses include content from our first book on quantum machine learning and optimization, but they also cover foundational topics like Shor’s algorithm, Grover’s algorithm, and protocols like quantum teleportation and quantum key distribution. These are quite different from optimization and machine learning, but equally important.
So we felt both a personal and practical need: we wanted to write about these topics, and we also needed good materials for our students—and for anyone, anywhere in the world, who wants an introduction to quantum computing.
2: Was there any feedback that you received for your first book that influenced your approach in terms of pedagogy or technical depth in the second book, which is more foundational?
Elías F. Combarro: Well, I must start by saying we were overwhelmed by the reception of the book. Even yesterday, I received a message from a master’s student here in Spain—I’ve never met him—but he wrote to say he was defending his master’s thesis in quantum machine learning, and that one of the main reasons he chose the topic was our book. That’s just one example of the many wonderful messages we’ve received. We’ve been really overwhelmed and happy with the response.
I think the main problem—if you can call it a problem—with the first book was that we had a lot of code intertwined with the explanations of the concepts and algorithms. We’re hands-on people. We like to learn by doing, so we thought it was important that readers could fire up Anaconda or JupyterLab and run code to reinforce the concepts they were learning. That was our approach.
But the drawback is that quantum software libraries evolve very quickly. They change versions frequently, and some of the code may become outdated or need small modifications to keep working. We discovered this after publishing the book. Readers appreciated having code alongside the explanations, but it also made the book harder to update because everything was so interwoven.
So, in the new book, we decided to separate the code out. We have chapters that focus only on code, and others that focus only on explanations. That way, readers can learn the algorithm in one chapter and then go to a separate chapter to see how to implement it—run it, modify it, and do exercises. At the same time, this structure makes it easier to update the code since it’s not embedded in the explanatory text.
I still think both approaches have merit and can be useful for students, but this new format is likely easier for someone who picks up the book two, three, or five years from now. They can learn the algorithm, then check online for updated notebooks if needed.
3: In many ways, the second book is like a prequel, isn't it, for those who may find it more challenging to start with the first one? You've used the same hands-on approach and focused more on Qiskit. What makes Qiskit the right choice for teaching foundational quantum computing, in your view?
Elías F. Combarro: Well, this was a difficult decision because nowadays there are several quantum programming languages—or rather, packages or libraries—to choose from. All of them are nice and have their own advantages and drawbacks.
In our first book, we included three different languages: Qiskit, of course, but also PennyLane and D-Wave’s Ocean. We were focusing on quantum machine learning in that book, and for that, PennyLane is probably even better than Qiskit. And if you want to program D-Wave’s quantum annealers, you need Ocean—there’s no other way to access those machines.
But in this new book, we’re going back to basics, so we didn’t need three languages—just one was enough. For foundational algorithms, almost any quantum programming language would suffice. That said, Qiskit has the largest number of features, and it’s the easiest one for accessing quantum computers online. For us, that was very important. You can run code locally on simulators, which are not real quantum computers but simulate their behavior. But at the same time, it’s great to be able to access actual quantum computers online, and Qiskit makes that very easy.
For instance, one of the exercises we propose in the book is to take a quantum protocol, run it locally on a simulator, and then run it on a real quantum computer. You only need to change three or four lines of code to make that switch, but it’s very satisfying to say, “I’m running this on an actual quantum computer.”
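To make the "three or four lines" concrete, here is a minimal Qiskit sketch of that switch; the commented-out Qiskit Runtime lines are an assumption about a typical account setup, not code taken from the book.

```python
# Minimal sketch: the same Bell-state circuit, run locally or (with a few
# changed lines) on IBM hardware. The Runtime setup below is an assumption.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2)
qc.h(0)          # superposition on qubit 0
qc.cx(0, 1)      # entangle qubits 0 and 1
qc.measure_all()

# Option A: local, noise-free simulator
backend = AerSimulator()
job = backend.run(transpile(qc, backend), shots=1024)
print(job.result().get_counts())

# Option B (assumed setup): a real device through Qiskit Runtime
# from qiskit_ibm_runtime import QiskitRuntimeService, SamplerV2 as Sampler
# backend = QiskitRuntimeService().least_busy(operational=True, simulator=False)
# job = Sampler(mode=backend).run([transpile(qc, backend)], shots=1024)
# print(job.result()[0].data.meas.get_counts())
```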
There’s one exercise in particular that I really like. It’s based on a protocol used to explore whether nature is quantum or classical. This is called the CHSH game, and we explain it in full detail in the book. We give the code and ask readers to run it on a quantum computer. The result is a figure of merit—the ratio of times you win the game—and this number exceeds what’s possible with classical systems. To me, that’s fascinating. The physicists who originally performed this kind of experiment won the Nobel Prize in 2022. And now, you can run something like it yourself, just with a laptop connected to the internet.
Core Quantum Concepts
4: How should software developers think about a single qubit and its state? For example, why use the Bloch sphere or state vector representation? And how does that view change when moving to two or more qubits?
Elías F. Combarro: That’s a very important question. When you think about qubits and their states—especially with a large number of qubits, like in today’s quantum computers—intuition becomes very difficult.
For example, when you access quantum computers online, some of them have 127 qubits. That means you’re implicitly working with 2^127 numbers. That’s such a huge number, it’s hard to even imagine. So developing intuition about those kinds of structures is really tough.
With just one qubit, though, we have a nice geometric representation called the Bloch sphere. I must say I’m not a very visual thinker—geometry isn’t something I’m particularly intuitive about. I prefer symbolic and algebraic representations. But the Bloch sphere is helpful: every point on the surface of the sphere represents a possible state of your qubit, and quantum gates—operations—can be visualized as rotations of this sphere. It’s a nice way to see what’s happening when you apply operations to a single qubit.
Personally, I prefer thinking in terms of state vectors—ordinary vectors, in this case with complex numbers, though often you can think of them as real numbers. So for one qubit, you only need two numbers. For more qubits, it becomes a longer vector, and the system’s state is just this vector of numbers. Any operation you perform is just a matrix multiplication on this vector. To me, that’s the most useful mental model for what’s happening in the computer.
There are other geometric representations that extend beyond one qubit, but I find them more exotic than helpful—though that may just be because I’m not a geometrical thinker.
What I always tell my students is this: the amount of math you need to get started with quantum computing is surprisingly small. You just need to know what a vector is, what a matrix is, and how to multiply a matrix by a vector. And even if you don’t know that, we cover it in an appendix. So you don’t need advanced math or physics to begin—you can start right away if you understand vectors and matrices.
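That "vectors and matrices" starting point can be shown with nothing but NumPy. The following sketch (not from the book) represents a qubit as a two-component vector and applies a Hadamard gate as a matrix-vector product.

```python
import numpy as np

# A qubit starts in |0>, represented by the vector (1, 0).
state = np.array([1, 0], dtype=complex)

# A gate is a matrix; applying it is a matrix-vector multiplication.
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

state = H @ state
print(state)               # [0.707..., 0.707...]: an equal superposition
print(np.abs(state) ** 2)  # measurement probabilities: [0.5, 0.5]
```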
5: What does an entangled two-qubit state look like in this representation? Why can’t it be factored into independent single-qubit states?
Elías F. Combarro: Yeah, this is a very surprising aspect of quantum systems. Entangled systems can't be described by just looking at the states of their individual parts. So you can have a system with two qubits, and there’s a global state that describes both together. But if you look at either one in isolation, and then try to reconstruct the full system’s state from those parts—you can’t. You need the full global state.
This is exactly why you need 2ⁿ amplitudes for an n-qubit system. If each part could be described on its own, you’d only need two numbers per qubit. But the correlations between the qubits—the entanglement—are encoded in the rest of the numbers. They're not locally accessible from just the individual qubit states.
This was something that was very surprising in the early days of quantum physics. Even Einstein was baffled by it. He called it “spooky action at a distance,” because when you have entangled particles or qubits, a modification to one part instantly affects the other, even if they're far apart. But Einstein also developed the theory of relativity, which imposes a speed limit on how fast information can travel. So the idea of this instantaneous change really disturbed him. But it’s been experimentally confirmed, over and over—including in the CHSH game I mentioned earlier.
Now, going back to representations: Bloch spheres are great for individual qubits, but they break down for entangled states with two or more qubits. That’s why I find the vector representation more useful. It gives you a mathematical way to check whether a state is entangled. If you can factor the state into a product of individual qubit states, it’s not entangled. But if you can’t—if the state isn’t a product state—then it’s entangled, and you must treat the system as a whole.
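For readers who want to see this numerically, here is a small sketch using Qiskit's quantum_info module (our choice for illustration): the reduced state of one qubit of a Bell pair is maximally mixed (purity 0.5), while a product state factors cleanly (purity 1).

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, partial_trace

bell = QuantumCircuit(2)
bell.h(0)
bell.cx(0, 1)        # (|00> + |11>)/sqrt(2): an entangled Bell state

product = QuantumCircuit(2)
product.h(0)         # (|0> + |1>)/sqrt(2) on qubit 0, |0> on qubit 1: a product state

for name, qc in [("Bell", bell), ("Product", product)]:
    state = Statevector.from_instruction(qc)
    reduced = partial_trace(state, [1])   # trace out qubit 1, keep qubit 0
    print(name, "purity of qubit 0:", round(reduced.purity().real, 3))
# Expected output: Bell ~ 0.5 (entangled), Product = 1.0 (factorizable)
```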
6: Why is entanglement considered such a crucial resource in quantum algorithms?
Elías F. Combarro: Because this property—entanglement—only exists in quantum systems. It doesn’t happen in classical physics. And that means you can use it to implement protocols and algorithms that are simply impossible with classical resources.
We describe some of these in the book—for example, how to send information using superdense coding, or how to teleport quantum states. These kinds of applications absolutely require entangled states. You can’t do them with classical means alone.
From the perspective of quantum computing and quantum information science, these are concrete ways to exploit entanglement. It could even be central to a future quantum internet. With entanglement, you can teleport states over long distances, which could be a useful communication tool. So these ideas and protocols are practical ways in which entanglement becomes a real computational and informational resource.
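As one concrete example of such a protocol, here is a short, illustrative Qiskit version of superdense coding (sending two classical bits with one transmitted qubit plus a shared Bell pair); it is a sketch in the spirit of the book, not the book's own code.

```python
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

def superdense(bits: str) -> QuantumCircuit:
    qc = QuantumCircuit(2, 2)
    qc.h(0); qc.cx(0, 1)          # Alice (q0) and Bob (q1) share a Bell pair
    if bits[1] == "1": qc.x(0)    # Alice encodes both bits on her qubit alone
    if bits[0] == "1": qc.z(0)
    qc.cx(0, 1); qc.h(0)          # Bob applies the inverse Bell transform
    qc.measure([0, 1], [1, 0])    # bit order chosen so the output string matches "bits"
    return qc

sim = AerSimulator()
for bits in ["00", "01", "10", "11"]:
    counts = sim.run(transpile(superdense(bits), sim), shots=100).result().get_counts()
    print(bits, "->", counts)     # each message is recovered deterministically
```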
7: In what fundamental way does measuring a qubit differ from reading a classical bit?
Elías F. Combarro: This is one of the most surprising things for people new to quantum computing—or quantum physics in general. In classical computing, you take it for granted that you can always inspect your data. You can look at variables, data structures, lists, trees—whatever—and see exactly what values they hold.
But in quantum computing, it's completely different. Quantum states can be in superposition, possibly entangled, and described by a large number of amplitudes. But when you perform a measurement, you can't access all that information. You only get a small part of it.
Take a single qubit, for example. Its state is described by two complex numbers. In theory, that’s an infinite amount of information—real numbers can have infinite decimal places. But when you measure it, you only get a single classical bit: 0 or 1. The act of measuring collapses the state probabilistically into either 0 or 1.
And once you’ve measured it, you’ve destroyed the original state. If you measure it and get 0, and then measure it again, you’ll just keep getting 0—you’ve lost everything about the prior superposition. The system collapses, and that collapse is irreversible.
And this randomness is fundamental. If you run the exact same quantum algorithm twice, with the same input, you might get different outcomes. For people used to classical programming, that's very strange—how can the same inputs give different outputs? But it’s intrinsic to quantum mechanics. It’s not like classical randomized algorithms where the randomness comes from a pseudo-random number generator. In quantum computing, the probabilistic behavior is built into the physics.
So quantum measurement differs from classical data retrieval in two big ways: first, it’s probabilistic; and second, it changes the state of the system. You can’t measure the same system multiple times and expect to extract more information. Once you measure, the original state is gone.
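A three-line circuit already shows both points: the outcome of each shot is random, and all you ever see is a classical bit. This is a minimal sketch using the Aer simulator.

```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(1, 1)
qc.h(0)            # equal superposition of 0 and 1
qc.measure(0, 0)   # collapses the state: each shot yields a single random bit

counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)      # roughly {'0': ~500, '1': ~500}; individual shots differ
```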
8: How would you say developers can decide which qubits to measure, and at what points in a circuit?
Elías F. Combarro: It depends on the algorithm or the application. But until very recently, with actual quantum computers, you could only measure qubits at the end of a circuit. That might sound like a limitation, but it’s really not—because of something called the deferred measurement principle.
There’s a theorem in quantum computing that says if you want to measure something in the middle of a circuit, you can simulate that effect by deferring the measurement to the end and adjusting the circuit accordingly. So in terms of computational power, there’s no difference.
Now, some platforms like Qiskit do allow mid-circuit measurements. You can measure certain qubits partway through the execution of a circuit. But in practice, it's still often simpler to just measure at the end.
9: What strategies, according to you, can help manage the randomness introduced by quantum measurement?
Elías F. Combarro: First, we need to distinguish between two different sources of randomness. One is intrinsic to quantum theory—this is the probabilistic nature of measurement itself, and it’s unavoidable. The second is due to imperfections in actual quantum hardware—noise, gate errors, and environmental interactions.
To handle the intrinsic randomness, you need to apply statistical methods. For example, suppose you're trying to determine whether a certain element with a specific property is present in a vector—maybe you're looking for a client from Spain in a customer database. You might use Grover’s algorithm for this. Even if the client exists, Grover’s only gives a probabilistic guarantee that you'll find it. Maybe the probability is 99%, but if you’re unlucky and only run it once, you might miss it.
So what do you do? You repeat the algorithm multiple times and either take the best result or use a voting scheme. For example, say you’re using a quantum classifier to determine whether an image is a cat or a dog. If you measure the output qubit once and get 0 (cat), you can’t be sure. But if you repeat the process 100 times and get 70 zeros and 30 ones, then you can conclude it’s most likely a cat.
Similarly, in quantum phase estimation—which is important in many fields—you repeat the procedure to get better and better approximations. The more you repeat it, the more accurate the estimate.
Now, regarding noise from hardware imperfections: in most of the book, we work with idealized quantum computers. But in the last part, we introduce quantum error correction. There are also simpler techniques like error mitigation. One method involves calibrating the machine—measuring how often errors occur when you input known states like 0 or 1. With that data, you can adjust your measurements afterward to account for those errors.
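The calibration idea can be sketched with plain NumPy: measure how often known inputs are misread, store those rates in a small "confusion matrix," and invert it to correct later results. The numbers below are made up for illustration.

```python
import numpy as np

# Column j: probabilities of reading 0 or 1 when the qubit was prepared in state j
# (illustrative values obtained from a calibration run).
confusion = np.array([[0.97, 0.05],
                      [0.03, 0.95]])

# Raw measurement frequencies observed on the noisy device (also illustrative).
raw = np.array([430, 570]) / 1000.0

# Invert the calibration to estimate the distribution before readout errors.
mitigated = np.linalg.solve(confusion, raw)
print(mitigated)
```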
And then there’s full quantum error correction, which is more sophisticated and harder to implement, but also more powerful. It's essential for working with real quantum hardware in the long term.
Key Quantum Algorithms
10: Grover’s algorithm offers a quadratic speedup for unstructured search. Could you explain the core idea of amplitude amplification and what assumptions or resources it requires?
Elías F. Combarro: Grover’s algorithm is probably my favorite quantum algorithm. It’s mathematically beautiful and, from a computer science perspective, very surprising.
The core idea is this: suppose you have some data, like a vector, and you want to find an element that satisfies a certain property. If the data is unstructured—for example, if the entries are in random order—then classically, the only option is to search one by one. If there are a million entries, you might have to check all million.
But with Grover’s algorithm, even if the data is completely unstructured, you can find the correct entry with only about a thousand checks—because of its quadratic speedup. That’s a huge difference, and it gets more dramatic as the data size increases.
How does it work? First, you create a superposition of all possible inputs. Then, through amplitude amplification, you increase the probability of measuring the correct solution. And this happens through a series of geometric transformations—rotations, essentially. With each step, you rotate closer to the solution vector. If you stop at the right point, your probability of measuring the correct answer is very high.
But—and this is important—if you keep going beyond that optimal point, you start rotating away from the solution again. So running Grover’s too many times actually makes your results worse. That’s very different from classical search, where more effort generally improves your chances.
As for resources: like most quantum algorithms, Grover’s assumes you can represent your input as a quantum oracle. You can’t just read a file from a hard drive. You have to encode the information into a function that can be queried by the quantum computer.
For example, if you’re searching for a client from Spain, your oracle takes an index and checks whether that client meets the condition. It returns true or false. In the book, we explain how to implement such oracles for different problems so you can actually use Grover’s in practice.
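Here is a minimal two-qubit Grover sketch in Qiskit, assuming the "marked" entry is the state 11; at this size, a single iteration of oracle plus diffusion already drives the success probability to essentially one.

```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h([0, 1])              # superposition over all four indices

qc.cz(0, 1)               # oracle: flip the phase of |11>, the marked entry

qc.h([0, 1])              # diffusion: reflect amplitudes about their average
qc.x([0, 1])
qc.cz(0, 1)
qc.x([0, 1])
qc.h([0, 1])

qc.measure([0, 1], [0, 1])
counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)             # ideally every shot returns '11'
```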
11: Can Grover’s technique be applied to problems beyond literal database search?
Elías F. Combarro: Yes, definitely. One application we didn’t cover in this book—but did in our previous one—is optimization. The idea of search naturally extends to optimization problems.
For example, suppose you’re looking for the client in your database who spent the most money last year. You don’t know in advance what that maximum value is, so you can’t search for a specific threshold. But what you can do is iteratively refine your threshold using Grover’s algorithm.
You might start by looking for clients who spent at least $1,000. If you find one, you raise the bar—$2,000, $3,000, and so on—until you no longer find anyone who meets the threshold. That gives you a way to zero in on the maximum.
This approach is called Grover Adaptive Search, and we explain it in our first book. It’s a straightforward extension of Grover’s ideas to optimization scenarios.
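The classical outer loop of that idea can be sketched in a few lines of Python. The helper grover_find_above below is hypothetical: it stands in for a Grover run whose oracle marks entries above the current threshold.

```python
def grover_adaptive_max(spending, grover_find_above):
    """Return the index of the largest value by repeatedly raising a threshold.

    grover_find_above(threshold) is a hypothetical routine that returns the
    index of some entry with spending[index] > threshold, or None if none exists.
    """
    best, threshold = None, float("-inf")
    while True:
        candidate = grover_find_above(threshold)
        if candidate is None:
            return best                     # nothing above the bar: best is the maximum
        best = candidate
        threshold = spending[candidate]     # raise the bar and search again
```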
12: What are the current limitations of Grover’s algorithm on near-term hardware?
Elías F. Combarro: The limitations are similar to those affecting Shor’s algorithm and most of the algorithms in this book. These algorithms were designed for ideal quantum computers—machines with no noise and perfect gates.
But today’s hardware is noisy, and these algorithms typically involve long circuits with many operations. The longer the circuit, the more likely it is that errors will accumulate. That makes it hard to get reliable results.
Another issue is connectivity. Algorithms like Grover’s often require operations involving all qubits at once. But on current machines, you can’t directly entangle distant qubits. You have to insert additional gates just to move information around so that qubits can interact—and that inflates the circuit even more, making it more error-prone.
So the main problems are noise, long circuit depth, and limited qubit connectivity—all of which make it very hard to run Grover’s algorithm at useful scales today.
13: Shor’s algorithm factors large integers exponentially faster than classical methods. Can you outline how it uses period finding and the quantum Fourier transform to achieve this speedup?
Elías F. Combarro: Yes. I have to say this was the most difficult part of the book to write. I think it’s Chapter 11—almost at the end—and we build up to it across the earlier chapters. I’ve studied Shor’s algorithm for many years, so it's second nature to me. But trying to explain it clearly, from first principles, was a real challenge. At the same time, it was a lot of fun, because it forced me to restructure my understanding and find the simplest possible way to present the ideas.
Shor’s algorithm is incredibly important. On the surface, factoring integers may not seem all that exciting, but it underpins much of our modern cryptography. The security of online communication—including the connection we're using now—is based on cryptographic protocols that assume factoring large numbers is computationally hard. Even with powerful classical computers, it would take millions of years to factor large keys. But a quantum computer running Shor’s algorithm could break that encryption much more quickly. That’s why there's a global push to develop post-quantum cryptographic protocols.
The key idea behind Shor’s algorithm is that factoring can be reduced to a problem of period finding. That is, given a number a, you want to find the period r such that a^r mod N = 1. This gives you a periodic function.
Classical computers are bad at finding the period of such functions efficiently. But quantum computers can do it using the quantum Fourier transform, which is extremely fast. And whenever you hear “Fourier transform,” think: “we’re trying to extract frequencies or periodicity.”
So, you create this periodic function by raising numbers to powers and taking remainders modulo N, and then you apply the quantum Fourier transform to extract the period. Once you have the period, you can compute the factors of the original number. That’s the heart of the algorithm.
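The reduction itself can be followed with a purely classical toy example. The sketch below brute-forces the period r of a^x mod N (the step Shor's algorithm speeds up with the quantum Fourier transform) and then recovers the factors of N from it.

```python
from math import gcd

N, a = 15, 7                       # tiny example; a must share no factor with N

# Period finding (brute force here; this is the quantum part of Shor's algorithm):
r = 1
while pow(a, r, N) != 1:
    r += 1
print("period r =", r)             # r = 4 for a = 7, N = 15

# If r is even and a^(r/2) is not -1 mod N, two gcds reveal the factors.
if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
    print("factors:", gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N))  # 3 and 5
```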
14: What are the main challenges in implementing Shor’s algorithm on today’s quantum hardware?
Elías F. Combarro: It’s very similar to what we discussed with Grover’s algorithm. You need a lot of gates, which means long circuits—and that introduces a lot of noise.
Also, to factor large numbers, you need to store those numbers in the quantum computer. If you're working with cryptographic keys that are 2,000, 3,000, or 4,000 bits long, you’ll need at least that many qubits. Today’s largest quantum computers only have a few hundred physical qubits—and those are not error-corrected.
To get 2,000 or more reliable logical qubits, you’d need many times that number in physical qubits—perhaps hundreds of thousands. That’s far beyond current capabilities. So both the qubit count and noise levels are major obstacles to running Shor’s algorithm on real hardware today.
15: Are there any smaller-scale or simplified versions of Shor’s algorithm that can be run on current hardware? Or perhaps other quantum algorithms for number-theoretic problems that might be practical sooner?
Elías F. Combarro: Yes, actually. In the last few weeks, I’ve read several papers proposing simplified versions of Shor’s algorithm that reduce the number of qubits needed so it can run on smaller quantum computers. But even with those simplifications, the qubit requirements are still far beyond what we currently have.
As for other number-theoretic problems, there’s Simon’s problem, which is a purely academic problem with no practical application—but it's been used to demonstrate quantum advantage in a limited sense. I think just recently, maybe a day or two ago, I saw a paper where researchers ran a reduced version of Simon’s problem on real quantum hardware and showed some advantage.
The challenge is that most of the quantum advantage demonstrations we’ve seen so far are still for academic problems that don’t have real-world applications. They’re very interesting to researchers like me, but they’re not useful yet in a practical sense.
Quantum Error Correction and Quantum Advantage
16: Quantum error correction, or QEC, is essential for scaling up quantum computers. What are the basic principles behind QEC—for example, the distinction between logical and physical qubits?
Elías F. Combarro: The idea behind quantum error correction is similar to classical error correction. In classical computing, error correction happens all the time, mostly through redundancy. If you’re sending a bit over a noisy channel, and you send it just once, you can’t be sure it was received correctly. But if you send it three times—like 000 or 111—and the receiver gets 001, they can infer that the message was probably 0 and correct it.
The more redundancy you add, the more resilient the message becomes. If you use 1,000 bits instead of 3, you can drive the probability of an incorrect message down as far as you like.
Quantum error correction works on the same principle, with some important twists. Instead of storing information in a single qubit, you spread it across many qubits. The individual ones are called physical qubits, and the combined, encoded unit is a logical qubit. It’s an abstraction that behaves like a perfect qubit, even though it’s built from noisy ones.
But there’s a catch: in classical computing, you can check the value of a bit directly. In quantum computing, you can’t do that—you can’t measure a qubit without collapsing its state. So quantum error correction uses partial measurements—what we call syndrome measurements—that only reveal limited information about what kind of error may have occurred, without disturbing the actual quantum information.
From that limited data, you can then infer what correction to apply and restore the logical state. So it’s similar in spirit to classical error correction, but it has to work under stricter constraints due to the nature of quantum mechanics.
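The simplest quantum code, the three-qubit bit-flip code, shows these ideas in a few lines of Qiskit: the logical information is spread over three physical qubits, and two ancillas measure parities (the syndrome) without touching the encoded state. This is an illustrative sketch, not the full treatment from the book.

```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(5, 2)          # qubits 0-2: data, qubits 3-4: syndrome ancillas
qc.ry(0.8, 0)                      # some single-qubit state to protect (angle arbitrary)
qc.cx(0, 1)
qc.cx(0, 2)                        # spread it over three physical qubits

qc.x(1)                            # deliberately flip qubit 1 to simulate an error

qc.cx(0, 3); qc.cx(1, 3)           # ancilla 3 records the parity of qubits 0 and 1
qc.cx(1, 4); qc.cx(2, 4)           # ancilla 4 records the parity of qubits 1 and 2
qc.measure([3, 4], [0, 1])         # measuring only the ancillas leaves the data intact

counts = AerSimulator().run(qc, shots=100).result().get_counts()
print(counts)                      # '11': both parities disagree, so the error is on qubit 1
```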
17: The term “quantum advantage” or “supremacy” gets a lot of attention. How would you define quantum advantage rigorously? And can you cite examples of problems or tasks where even current noisy quantum devices might outperform classical ones?
Elías F. Combarro: I think there are several ways to think about quantum advantage, and that’s part of why the term creates confusion—people use it to mean different things.
One kind is mathematical quantum advantage. That’s when you can prove, in theory, that a quantum algorithm outperforms any classical algorithm for a given task. Grover’s algorithm and Shor’s algorithm are examples. If you have a large enough, error-corrected quantum computer with the right connectivity, then mathematically, you can run these algorithms faster than on a classical computer. There’s no debate there.
But then there’s practical quantum advantage: showing, in real-world experiments, that a quantum computer solves a particular problem faster than the best known classical algorithms. That’s much harder to pin down, because classical computers are improving too. New classical algorithms appear all the time. So even if a quantum computer beats classical systems today, someone might develop a better classical algorithm next month—and then that quantum advantage disappears.
This actually happened in 2019 when Google claimed quantum supremacy for a specific problem. At the time, classical computers couldn’t solve it. But now they can—so that instance is no longer an example of quantum advantage. Of course, Google and others keep improving their quantum systems too, so it’s a race.
Eventually, for some problems, quantum computers will pull ahead for good. But we’re not there yet. So practical quantum advantage is a moving target—and a subtle one.
Another point to keep in mind: the problems used in today’s quantum advantage experiments are not practical. They're interesting academically, but they don't solve real-world problems. We expect that to change in the future, once we have larger and more stable quantum machines.
18: Conversely, what are some of the most common misconceptions people have when they hear claims about quantum advantage?
Elías F. Combarro: The biggest one is assuming that if quantum advantage has been demonstrated, then quantum computers can now solve all problems faster. That’s just not true.
In reality, these demonstrations apply to very specific, often artificial problems that don’t have practical applications. So people hear “quantum advantage” and think it means we can now simulate molecules faster or break encryption—but we’re not there yet.
Another misconception is assuming that quantum advantage, once demonstrated, is permanent. As I mentioned earlier, it’s not. It can vanish if a better classical algorithm is developed. So it’s not a one-time milestone—it’s part of an ongoing race between classical and quantum approaches.
Quantum Software and Engineering Practices
19: As quantum computing frameworks like Qiskit mature, what programming abstractions have emerged? How do quantum circuits and gates, for example, map to classical programming concepts?
Elías F. Combarro: This often surprises classical programmers. I remember the first computer science student who came to my office interested in quantum computing. He asked, “How do you implement a loop in a quantum computer?” And I had to say, “Come in and sit down—I have bad news.”
Quantum programs are fundamentally different. You don’t have loops. You don’t have persistent memory or data structures in the way you do in classical programming. What you have is a quantum circuit—a finite sequence of operations that runs once, from start to finish. You can't stop, inspect, or loop within the circuit. You run it, you measure, and then you’re done.
That’s why many quantum algorithms require a classical computer to post-process the results or control repeated executions. You might run a quantum circuit hundreds or thousands of times and then use a classical routine to aggregate the measurements and make decisions.
This structure makes it hard to build higher-level abstractions. Quantum circuits are more like assembly code—very low-level, with no branches or loops. That said, some reusable quantum subroutines have emerged, like amplitude amplification or the quantum Fourier transform. These can be treated as modular building blocks—like functions or libraries in classical programming.
But the core challenge is that quantum circuits must be executed in full. You can’t pause, inspect, or reuse intermediate results, because measurement collapses the state. That makes composition and modular design harder than in classical systems.
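The "library subroutine" point is visible in Qiskit itself: standard building blocks such as the quantum Fourier transform ship as ready-made circuits that you can compose into your own. A minimal sketch (recent Qiskit versions may prefer the newer QFTGate class instead):

```python
from qiskit import QuantumCircuit
from qiskit.circuit.library import QFT

qc = QuantumCircuit(3)
qc.x(0)                                      # prepare some basis state
qc.compose(QFT(3), range(3), inplace=True)   # reuse the library QFT like a function call
print(qc.draw())
```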
20: Are there any emerging design patterns or standard libraries of quantum operations that developers can adopt to manage complexity in quantum code?
Elías F. Combarro: Yes, libraries like Qiskit include some useful abstractions. In our book, for instance, we build a kind of design pattern for Boolean functions and oracles. These let you express conditions or constraints within quantum circuits, and they’re essential for algorithms like Grover’s.
That said, circuit design is still tricky. I’m not an expert in hardware-level optimization, but I’ve collaborated with people who are—one of our technical reviewers works on Qiskit and specializes in designing efficient quantum circuits for arithmetic operations like addition and multiplication. These optimizations are important for reducing the number of gates and minimizing noise.
There are some emerging design patterns, but it’s still early. Circuit construction is very problem-specific, and often requires deep insight into both the quantum algorithm and the hardware limitations. So we’re still a long way from having general-purpose, high-level abstractions like you’d find in classical software engineering.
21: How does the notion of a quantum compiler or transpiler differ from a classical compiler? And what should a developer know about optimizing circuits for a given hardware backend?
Elías F. Combarro: The concept of transpiling or compiling is quite similar to classical programming. In classical computing, you write code in a high-level language like C or Java, and then it gets compiled into machine code. In quantum computing, it’s the same idea: your code—written, say, in Qiskit—is translated into a sequence of low-level quantum operations that can be executed on hardware.
However, there are important differences. First, quantum “high-level” languages are still very low-level by classical standards. You don’t have loops, branches, or complex data structures. So the abstraction gap is smaller.
Second, unlike classical compilers, quantum transpilers can’t completely shield you from hardware details. In classical computing, you usually don’t have to think about the processor’s internal wiring. But in quantum computing, that kind of detail really matters. Not all qubits in a quantum computer are connected to each other. So if you want to apply a gate to two distant qubits, the transpiler has to insert extra operations to move data around—introducing noise and increasing circuit depth.
That’s why, as a quantum developer, you need to know something about the machine you’re targeting. For instance, suppose you’re writing a circuit that uses qubit 0 and qubit 10, and those two qubits aren’t physically adjacent. The transpiler will find a way to bring them together using swap gates—but that adds overhead and error risk.
In fact, even the quality of individual qubits varies. Quantum hardware is calibrated daily, and some qubits perform better than others. So if your algorithm is sensitive to noise, you may want to restrict it to the highest-quality qubits on a given device. Qiskit lets you see this kind of diagnostic information, and we explain how to use it in the book.
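A small experiment makes the routing overhead visible: transpile a single CNOT between two qubits that are not adjacent on a toy linear device and count the extra gates the transpiler has to add. This is an illustrative sketch, not an example from the book.

```python
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

qc = QuantumCircuit(3)
qc.cx(0, 2)                                # a gate between two non-adjacent qubits

coupling = CouplingMap.from_line(3)        # toy device with connectivity 0 - 1 - 2
routed = transpile(qc, coupling_map=coupling,
                   basis_gates=["cx", "rz", "sx", "x"],
                   optimization_level=0)

print(routed.count_ops())                  # extra CX gates appear to route the interaction
print("depth:", routed.depth())
```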
22: Given that quantum states can’t be directly copied or fully observed, how do developers test and debug quantum algorithms in practice?
Elías F. Combarro: That’s where classical simulation becomes absolutely essential. If you write quantum code and run it on real hardware and get unexpected results, it’s very hard to know why. Is it because of noise? Is it because of a bug in your logic? Is it just the randomness of quantum measurement?
To untangle that, you start by running your code on a classical simulator. These simulators are deterministic and noise-free—they give you the exact mathematical result of the circuit, assuming perfect qubits. This lets you validate whether your logic is correct before moving to actual quantum hardware.
The limitation is scale. Classical simulators require a lot of memory. For example, to simulate 38 qubits, we needed 8 terabytes of RAM. To simulate 39, we’d need 16 terabytes. So there’s an exponential wall. But up to 30 or so qubits, simulation is still feasible, and it’s extremely useful for debugging.
In practice, you’ll often go through three stages: first, run the code on a perfect simulator; second, use a simulator that includes noise; and third, move to real hardware. That way, you can isolate where the errors come from—whether they’re in your code, in the noise model, or in the physical device.
23: What kinds of bugs or errors are most common in quantum code? And does Qiskit have any specific tools for debugging such issues?
Elías F. Combarro: Since quantum programs are often run on classical simulators during development, you can use standard debugging tools—like breakpoints, inspection, or logging—just as you would in regular Python code. One of the most useful techniques is inspecting the state vector at different points in the circuit, especially when using a simulator.
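In code, that inspection is one line per checkpoint. A minimal sketch using Qiskit's Statevector on an ideal simulation:

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)
print(Statevector.from_instruction(qc))   # amplitudes after the Hadamard only

qc.cx(0, 1)
print(Statevector.from_instruction(qc))   # amplitudes after entangling: (|00> + |11>)/sqrt(2)
```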
There are also visual simulators that let you build circuits by dragging and dropping gates. These tools allow you to observe the state of the system at each step without performing a destructive measurement. That’s extremely helpful. You can see how the state evolves and whether it's behaving as expected.
Of course, if you’re working with 20 qubits, your state vector has over a million complex amplitudes, so inspecting the full state isn't always practical. But in many cases, the circuits have structure or symmetry that helps you reason about what's going on without needing to track every number.
As for common errors: they’re often conceptual. For example, misunderstanding how measurement affects state, or using gates that don’t preserve intended entanglement. Indexing errors can also creep in—especially when you’re copying parts of circuits or trying to modularize components.
24: Quantum computations yield probabilistic outcomes, and results can differ across hardware backends. How can developers ensure reproducibility of quantum experiments?
Elías F. Combarro: Well—they can’t, at least not in the strict sense. Quantum computations are inherently probabilistic, so you can’t reproduce the exact same measurement result every time. What you can do is ensure a high probability of success.
Any useful quantum algorithm comes with some success guarantee. For example, Grover’s algorithm might give you a 99.9% chance of finding the right answer. But there’s always a non-zero chance it won’t. You could run it 1,000 times and still miss the correct result—it’s very unlikely, but possible. That’s just how quantum mechanics works.
However, reproducibility is possible in simulations. Simulators use pseudorandom number generators, so if you set a fixed random seed, you’ll get the same result every time—as long as the simulator version and environment stay the same. That’s what we do in the book: we specify the random seed so that readers can reproduce the results exactly.
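With the Aer simulator, fixing the seed is a single run option. A minimal sketch (the seed value itself is arbitrary):

```python
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(1, 1)
qc.h(0)
qc.measure(0, 0)

sim = AerSimulator()
a = sim.run(qc, shots=100, seed_simulator=1234).result().get_counts()
b = sim.run(qc, shots=100, seed_simulator=1234).result().get_counts()
print(a == b)   # True: identical seeds give identical counts
```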
So in summary: reproducibility is possible in simulations with fixed seeds, but not on real quantum hardware, because the randomness is fundamental and unavoidable.
25: Let’s talk about developer experience. Qiskit and other tools have come a long way, but what gaps do you see remaining in terms of usability, documentation, or tooling?
Elías F. Combarro: I mentioned one issue earlier—the rapid pace of change in quantum software. Qiskit, for example, used to update frequently, and each new version could break existing code. Maybe after just two or three months, your scripts would stop working if you upgraded to the latest version.
That situation has improved a lot. When we were writing this book, we knew that Qiskit 2.0—a major new version—was scheduled for release around the time we’d be finishing. We were nervous because we hadn’t written the book using that version, and we didn’t know what might change. That’s one of the reasons we separated the code from the explanatory chapters—to make updates easier. Fortunately, when Qiskit 2.0 came out, we only had to make a few changes. Most of the code ran out of the box.
Still, documentation is an area that needs more work—not just for Qiskit, but also for tools like PennyLane, which we used in our previous book. The reality is that many of these projects rely heavily on volunteers. Even at companies like IBM and Xanadu, which develop these libraries, not everything is fully documented—especially the newest features. Sometimes you have to read the source code to understand how something really works.
For that reason, in both our books we try to explain not just how to use the tools, but what’s happening under the hood. That way, readers don’t get stuck when something unexpected happens. For example, while writing this book, we discovered that the name of the result variable in Qiskit changes depending on whether you declare measurements at the beginning or at the end of a circuit. That wasn’t documented, and it took us a while to figure it out. So we included a clear explanation in the book to save others the trouble.
Hardware and Roadmap
26: The hardware landscape includes superconducting qubits, trapped ions, photonics, spin qubits, and more. Could you compare some of the leading approaches in terms of qubit connectivity, coherence times, gate fidelities, and scalability?
Elías F. Combarro: I should say upfront: I’m not a quantum hardware expert. My background is in algorithms and quantum software. But you do need to understand some hardware basics to run code effectively—otherwise, you’ll get results that are hard to explain.
Each technology has its strengths and weaknesses. The systems you can access with Qiskit, for example, are based on superconducting qubits. That’s the most mature platform right now. It’s close to classical hardware in terms of fabrication, which helps with scalability. But it has limitations. Coherence times—how long a qubit stays in a useful quantum state—are very short, usually in the microseconds. That limits how many gates you can apply before decoherence ruins your computation.
Gate fidelities on superconducting systems are getting quite good—99.9% or better—but they still introduce errors, especially in long circuits. Connectivity is also an issue: not all qubits are directly connected, which means more swap operations and more noise.
Trapped ions offer better coherence times—sometimes up to seconds—and generally better connectivity. But they’re harder to scale. You might get a few dozen high-quality qubits, but not hundreds or thousands yet.
Photonics is another promising direction. These systems maintain coherence for longer and can operate at room temperature, which is a big plus. But certain operations—like entangling gates—are harder to implement.
There are also newer platforms, like Rydberg atoms and neutral atom arrays. These use lasers to trap and manipulate individual atoms, and they have some unique advantages. For instance, with optical tweezers, you can physically move atoms around, allowing qubits to interact even if they’re far apart—solving some connectivity issues. But the operations are slower than in superconducting systems.
So each platform has trade-offs. And honestly, I think the technology that will enable large-scale, practical quantum computing probably hasn’t been invented yet. Many research teams are exploring different directions, and the winning approach may be something entirely new.
27: Looking ahead 5 to 10 years, what do you consider realistic timelines for quantum computing to deliver practical benefits?
Elías F. Combarro: That’s a very difficult question. People have been asking me this for years, and I still find it hard to estimate. Some things have moved faster than I expected—for example, we’re now seeing early demonstrations of quantum error correction with a few logical qubits constructed from many physical ones. That’s exciting.
At the same time, many of the limitations that quantum hardware had 20 years ago are still with us. So it’s a mixed picture.
Just recently, I read a paper by IBM researchers claiming they had matched classical accuracy on real quantum hardware for quantum chemistry problems. That’s not quantum advantage yet—they’re just reaching parity—but it’s a milestone. They’ve even said they aim to achieve practical advantage by 2026. That seems optimistic to me, but if it happens, it would be amazing.
Personally, I would guess five years. But if you ask me again next year, I might still say five. So it’s hard to pin down. Some breakthroughs could accelerate things dramatically, but until then, it’s wise to remain cautiously optimistic.
28: What advice would you give to software architects and engineering teams who want to prepare for integrating quantum computing into their technology stack within the next five years?
Elías F. Combarro: That’s a great question. We work with many companies that are trying to integrate quantum technologies into their workflows—not because they expect immediate results, but because they want to be ready when the time comes.
My advice is simple: start now. If you think quantum computing might be relevant to your domain, begin exploring it as early as possible. The learning curve is steep. Even if you already know Python, Java, C++, and Rust, quantum computing requires a different mindset. There are no loops, no traditional data structures, no copying of information. Measurement changes everything. You have to relearn how to think about programming.
In both our books, we’ve tried to make the field accessible. And based on the feedback we’ve received, I think we’ve been successful to some extent. But it’s still not easy. If you wait until quantum computing is mainstream, it may be too late to catch up.
The earlier you start, the better positioned you’ll be—both to understand the field and to take advantage of it when it becomes practically useful.
To explore the ideas discussed in this conversation—including how to model multi-qubit systems, implement protocols like quantum key distribution, and run foundational algorithms like Grover’s and Shor’s on simulators and real hardware—check out A Practical Guide to Quantum Computing by Elías F. Combarro and Samuel González-Castillo, available from Packt. This self-contained introduction uses Qiskit 2.1 to walk readers from single-qubit concepts to full quantum applications, with runnable code, clear mathematical explanations, and examples ranging from quantum money to fault-tolerant computation. It’s an ideal starting point for students, professionals, and self-learners preparing to engage with quantum programming in practice.