The 6 Most Common Quantum Computer Mistakes


Quantum computing is a field in its infancy, and the mistakes people make when using quantum computers are continually evolving. Sometimes I think we should call it a “quantum computer art” instead of a science. Here are the six mistakes I run into most often (in no particular order):

1. Not realizing that qubits are noisy, and that every single gate you run has some chance of introducing an error.

2. Not taking advantage of the available error correction strategies to reduce these errors.

3. Thinking you can easily simulate a quantum computer without considering the impact of noise and errors (even with error correction).

4. Trying to create the best gate sequence or quantum algorithm without considering the impact of noise and errors (even with error correction).

5. Trying to solve a problem instance that is exponentially larger than what your hardware can handle (e.g., assuming your 30-qubit quantum computer will be able to factor a 1000-digit number).

6. Thinking that a quantum computer is like a classical computer but faster for all problems; certain problems may actually become slower on early generations of quantum computers (e.g., integer factoring).
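Mistake 1 above is easy to underestimate. A back-of-the-envelope sketch (assuming independent, uniform gate errors, which real devices only approximate) shows how quickly noise compounds with circuit depth:

```python
# Rough estimate of circuit success probability under uniform gate noise.
# Assumes every gate errs independently with the same probability;
# real device noise is more structured than this.
def circuit_success_probability(num_gates: int, gate_error: float) -> float:
    """Probability that no gate in the circuit errs."""
    return (1.0 - gate_error) ** num_gates

# Even with a 0.1% per-gate error rate, a 1000-gate circuit already
# fails more often than it succeeds (~63% failure).
for gates in (10, 100, 1000):
    p = circuit_success_probability(gates, 0.001)
    print(f"{gates:5d} gates -> success probability {p:.3f}")
```

This is why error rates and circuit depth have to be considered together: a per-gate error rate that sounds tiny still dominates once circuits get deep.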


Anyone who has worked with quantum computers knows that they can sometimes be strange. Even the most experienced programmers have been caught off guard by some of their unexpected quirks. After spending many hours debugging and thinking about the common pitfalls when working with quantum computers, we’ve compiled a list of six mistakes we see people make most often. We hope this guide will help you avoid these mistakes in your own programming journey!

1) Assuming Qiskit is a simulator

The most common mistake is assuming that Qiskit is a simulator. While it is true that Qiskit includes a powerful set of simulators, they are not the same thing as the real hardware! The differences between simulation and hardware can quickly lead to bugs if they are not taken into account, and the key difference is noise: hardware has it and ideal simulators do not. For example, using a simulator, you could implement Grover’s algorithm to find an item in an unordered list of size N in O(sqrt(N)) time with near certainty (if you don’t know what Grover’s algorithm is, check out this awesome tutorial!). On actual quantum hardware, however, noise on every gate lowers the success probability, so the algorithm only returns the right answer some of the time and may need to be repeated.
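The simulator-versus-hardware gap in the Grover example can be sketched numerically. The ideal success probability after k iterations is sin²((2k+1)θ) with sin θ = 1/√N; the `gate_fidelity` damping below is a toy stand-in for hardware noise, not a model of any real device:

```python
import math

def grover_success(n_items: int, iterations: int,
                   gate_fidelity: float = 1.0) -> float:
    """Probability of measuring the single marked item after `iterations`
    Grover iterations. gate_fidelity < 1 crudely damps the ideal success
    probability per iteration -- a toy noise model, not a real channel."""
    theta = math.asin(1.0 / math.sqrt(n_items))
    ideal = math.sin((2 * iterations + 1) * theta) ** 2
    return ideal * gate_fidelity ** iterations

N = 1024
# Optimal iteration count is roughly pi/4 * sqrt(N).
k = math.floor(math.pi / (4 * math.asin(1.0 / math.sqrt(N))))
print("iterations:", k)
print("simulator (noiseless):", grover_success(N, k))
print("toy noisy hardware   :", grover_success(N, k, gate_fidelity=0.98))
```

On the noiseless path the success probability is essentially 1; with even a 2% per-iteration fidelity loss in this toy model it drops substantially, which is exactly the kind of gap that surprises people who only ever ran their circuit on a simulator.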

Quantum computers are in their infancy. They are still very limited, very expensive and very complex to use. In this article, I will list the 6 most common mistakes people make when using quantum computers.

Mistake 1: Using a quantum computer as a traditional computer

One of the most common mistakes is to try to use a quantum computer as if it were a traditional one. Although both process information, they work in fundamentally different ways. Quantum computers can solve certain problems with an exponential speedup over the best known classical algorithms.

Examples of such problems include factoring large numbers and solving certain systems of linear equations; unstructured database search also benefits, though the speedup there is quadratic (via Grover’s algorithm) rather than exponential. It is important to note that these problems can only be solved efficiently on a quantum computer if they are formulated in such a way that the solution becomes apparent by measuring the output state of the quantum computer (see mistake 3 for more details).

Mistake 2: Using too few qubits

Another common mistake is not using enough qubits. One can think about this from two perspectives:

Using too few qubits for a particular algorithm: for example, Shor’s algorithm for factoring an n-bit number requires roughly twice as many qubits as there are bits (not decimal digits) in the number.
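As a rough sketch of that qubit budget (using the common ~2n+3 logical-qubit estimate for factoring an n-bit number; exact counts vary by circuit construction, and error-correction overhead multiplies them further):

```python
import math

def shor_logical_qubits(n_bits: int) -> int:
    """Approximate logical-qubit count for Shor's algorithm on an n-bit
    integer, using the common ~2n+3 estimate. Implementations differ,
    and physical-qubit counts with error correction are far larger."""
    return 2 * n_bits + 3

# A 1000-digit number is about 3322 bits, so even the *logical* qubit
# count runs into the thousands -- far beyond a 30-qubit device.
digits = 1000
bits = math.ceil(digits * math.log2(10))
print(f"{digits}-digit number ~ {bits} bits "
      f"-> ~{shor_logical_qubits(bits)} logical qubits")
```

Running this kind of estimate before choosing an algorithm makes it obvious when a problem instance is simply out of reach for the hardware at hand.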

Our team at Microsoft Quantum has now spent the last few years working with some of the world’s most advanced quantum computer systems, and one thing that continues to surprise us is how many common mistakes people make when using them.

We want to help prevent these mistakes from happening, so we thought it would be helpful to highlight the six most common errors that we see our customers making, along with some suggestions for how to avoid them.

Mistake 1: Running the wrong quantum algorithm

One of the most common mistakes we see is running a quantum algorithm that doesn’t actually solve your problem. For example, while quantum machine learning algorithms may offer advantages for certain classification tasks, they can be ill-suited to regression problems.

So it’s important to understand whether your problem is a classification or regression problem before you attempt to create a quantum algorithm for it.

Mistake 2: Using an unrealistic dataset

In general, users tend not to think about their dataset until they’re ready to run their simulation. But this is a huge mistake!

The way you prepare your data will have a significant impact on your results, so you should start thinking about this question as early as possible in the process. This will also allow you to catch data-preparation problems before they skew your results.

As quantum tech becomes more mainstream, it’s important to be aware of some of the mistakes that can occur when you’re implementing your own algorithm.

Here, we’ve compiled our list of the most common mistakes made with quantum computing.

1. Assuming that we have an all-powerful QPU (quantum processing unit)

2. Thinking that a qubit is equal to a bit

3. Trying to run a classical algorithm on a QPU

4. Calling your code “decoherence resistant”

5. Using bad randomness

6. Misunderstanding entanglement
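Mistake 2 in the list above is worth making concrete. A bit is 0 or 1; a qubit’s state is a pair of complex amplitudes, and reading it out is probabilistic. A minimal sketch in plain Python (not tied to any quantum framework):

```python
import math
import random

def measure(alpha: complex, beta: complex, rng: random.Random) -> int:
    """Measure a single-qubit state (alpha, beta) with
    |alpha|^2 + |beta|^2 = 1: returns 0 with probability |alpha|^2,
    else 1. One measurement destroys the superposition."""
    p0 = abs(alpha) ** 2
    return 0 if rng.random() < p0 else 1

# Equal superposition (the state H|0>): each outcome about half the time.
alpha = beta = 1 / math.sqrt(2)
rng = random.Random(42)
counts = [0, 0]
for _ in range(10_000):
    counts[measure(alpha, beta, rng)] += 1
print(counts)
```

A bit would give the same answer every time; the qubit only reveals one sampled outcome per measurement, which is why treating qubits as bits leads to broken programs.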

Quantum computers have a lot of real potential to solve problems that would be intractable for classical computers. The speedup promised by Shor’s algorithm, for example, would enable the factoring of large numbers that is unfeasible with existing technology, once sufficiently large fault-tolerant machines are built.

In order to get the most out of a quantum computer, we need to avoid making mistakes when developing our quantum programs. This is a blog post around some of the most common mistakes I’ve seen people make when learning or exploring quantum computing.

The first two mistakes listed in this blog post are general programming mistakes, and you are likely to make them regardless of whether you are developing on a classical or a quantum computer. The last four mistakes are specific to developing on quantum hardware and simulators.

Let’s get started!

A lot of discussion in the quantum computing community concerns whether or not we’re seeing error rates that are achievable with a useful quantum computer. I think progress on this question is hampered by the fact that there’s a wide range of different ways to make mistakes in implementing a quantum algorithm.

The way I think about this is how you might implement a given quantum algorithm on an actual device. Maybe you need to use some new kind of gate, which you haven’t used before. Maybe you need to do different kinds of error correction, or prepare states in a different way, or estimate expectation values differently.

I think it’s important to keep track of these things separately because we’ll want to be able to use different combinations of them for different tasks. If we just say “We’ve done it, we’re below threshold!”, without thinking about what exactly was used to get there, then we won’t be able to later do better things with the same hardware!

Here’s my list, with examples from recent papers:
