Quantum states of the art

Mike Olson
Mar 4, 2021

This is the second article in my series on quantum computing. Part one is here.

I didn’t start out to write a series.

After my original post on quantum, though, I got to talk to a bunch of folks working in the industry. That made me realize that there are two very different schools of thought on what a quantum computer ought to be.

I wanted to understand the state of the art in quantum computing, but the two different approaches mean the art has two different states. The two schools are really active, with people researching and building two very different kinds of quantum computers.

Qubits

I explained in part one what qubits are and how they behave. Entanglement and superposition are fundamental to the field.

The two schools have very different opinions on what you ought to do with qubits.

Quantum gate computers

Traditional digital computers can perform a lot of different operations on numbers represented by binary digits, or bits. They can add them. They can invert them, turning all the ones into zeroes, and zeroes into ones. These transformations are called SUM and NOT. There are others, like AND, OR, XOR and more. For historical hardware reasons, these operations are called “gates.” Classical computers can apply lots of different gates to bits.
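
If you want to see those classical gates in action, Python's bitwise operators map onto them directly:

```python
a, b = 0b1010, 0b0110    # two 4-bit values

print(bin(a & b))        # AND -> 0b10
print(bin(a | b))        # OR  -> 0b1110
print(bin(a ^ b))        # XOR -> 0b1100
print(bin(~a & 0b1111))  # NOT, masked back to 4 bits -> 0b101
print(bin(a + b))        # addition, itself built from gates in hardware -> 0b10000
```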

There are lots of different operations we can apply to qubits, as well. “Hadamard” turns a qubit with a known value of zero or one into a qubit in superposition — could be zero, could be one, you won’t know until you read it. Hadamard gates are the fundamental way we introduce randomness in quantum programming.

It’s also possible to rotate a qubit in each of three spatial dimensions — ROTX, ROTY, ROTZ are three more gates. There’s an equivalent to the classical NOT gate, but for qubits instead of bits. There are gates that take more than one qubit and entangle them. And so on!
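
Under the hood, these gates are just small matrices acting on vectors of amplitudes. Here's a minimal sketch in plain Python and NumPy (no vendor SDK involved; the matrices are the standard textbook definitions) showing a Hadamard, a Z rotation, and a CNOT entangling two qubits:

```python
import numpy as np

# Single-qubit states as length-2 amplitude vectors: |0> = [1, 0], |1> = [0, 1]
zero = np.array([1, 0], dtype=complex)

# Hadamard gate: turns a definite 0 or 1 into an equal superposition
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Rotation about the Z axis by angle theta (the ROTZ mentioned above)
def rotz(theta):
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

plus = H @ zero                       # equal superposition of 0 and 1
print(np.abs(plus) ** 2)              # measurement probabilities: [0.5, 0.5]

rotated = rotz(np.pi / 2) @ plus      # ROTZ changes the phases, not the probabilities
print(np.abs(rotated) ** 2)           # still [0.5, 0.5]

# CNOT entangles two qubits; applied after a Hadamard it produces a Bell pair
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
two_qubits = np.kron(plus, zero)      # qubit 0 in superposition, qubit 1 at |0>
bell = CNOT @ two_qubits
print(np.abs(bell) ** 2)              # [0.5, 0, 0, 0.5]: you only ever read 00 or 11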

The quantum school of thought that gets the most press these days is quantum gate computing. It’s analogous to the classical gate computers we’ve grown accustomed to for bits. Quantum gate machines manage a bunch of qubits and supply different gates that can apply to them.

Doing all of Hadamard, ROTX, NOT and so on is complicated. You could build a single physical device to perform all of these different transformations. But that’s a lot of flexibility and generality in a single package. Making it reliable is harder still.

You could build different physical devices, each of which implements just one or a few of these operations. That makes the overall system more complicated. There is more gear inside the chassis, qubits need to be routed to the right device at the right time and in the right order, and so on.

A quantum program, in this architecture, would apply a series of gates to a collection of qubits, in sequence, to solve a problem. The machine has to keep the qubits in superposition for the duration of the program. It’s hard to keep qubits isolated for long periods of time.
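
For a sense of what such a program looks like in code, here's a minimal sketch using IBM's open-source Qiskit library (assuming it's installed; the particular gates and rotation angle are arbitrary). It only builds and prints the circuit, rather than running it on real hardware:

```python
from qiskit import QuantumCircuit

# A tiny quantum "program": a fixed sequence of gates applied to two qubits
qc = QuantumCircuit(2, 2)
qc.h(0)           # Hadamard: put qubit 0 into superposition
qc.rz(0.7, 0)     # rotate qubit 0 about the Z axis
qc.cx(0, 1)       # CNOT: entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])   # read both qubits out into classical bits

print(qc.draw())  # show the circuit as a sequence of gates
```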

There’s a twist to this story: You’ll often read in the literature about universal quantum gate computers. Adding that word “universal” pulls in a slab of theory that applies to the field. It also demands a level of reliability, delivered via error correction, that current-generation quantum gate computers just can’t provide. They simply have too few qubits to spare any for error correction.

Google and IBM are two of the companies building quantum gate computers. This is the architecture I concentrated on in my first post. It’s interesting and progress has been good. But it’s only one of the possibilities.

Quantum annealing

A basic rule in physics says that objects tend to seek the lowest-energy state they can find. Lifting weights in the gym is hard, because the low-energy state for the weight is on the floor. You have to do work (add energy) to change that. Pizza fresh from the oven burns your mouth because it’s got a lot of heat energy stored in it; wait a bit, and that heat will leak out, warming up the plate and the air a little bit, as the pizza seeks a lower-energy state.

Quantum systems follow this rule, too. If you have a bunch of entangled qubits, each of them in superposition, so that they are each either zero or one with some probability, the whole collection will tend to find the lowest-energy point — the likeliest state for the entire network — when you actually read the value of any of the qubits. That instantaneous result takes into account the degree of entanglement of the various qubits (how tightly each is linked to the others) and the probabilities attached to each of them.

There’s a branch of mathematics called optimization theory. It works on questions like, how can we pack the most different-sized boxes into a shipping container? What’s the best route for a traveling salesperson to use to visit all her customers? As the number of boxes or customers gets large, these problems get very hard to solve using classical methods. If you’re familiar with mathematics, you’ll know about linear programming, and gradient descent, and Markov chains, and queuing theory, and so on. They’re all tools for attacking different kinds of optimization problems.

It turns out that many optimization problems can be represented as a network of nodes, with differently weighted links among them and different probabilities assigned to the nodes. Translating the traveling salesperson problem into its network representation is a little tricky, but just doing the translation is much easier than exhaustively solving the original problem using a classical computer.
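
To make that concrete, here's a toy version in Python. The numbers are made up purely for illustration, but the shape of the thing is real: diagonal entries are node weights, off-diagonal entries are link weights, and the answer is the assignment of zeroes and ones with the lowest total energy.

```python
from itertools import product

# A made-up toy "network": diagonal entries are node weights,
# off-diagonal entries are weights on the links between pairs of nodes.
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,   # node weights
    (0, 1):  2.0, (1, 2):  2.0,                 # link weights (penalties)
}

def energy(bits):
    """Total energy of one assignment of 0s and 1s to the nodes."""
    return sum(w * bits[i] * bits[j] for (i, j), w in Q.items())

# Brute force is fine for three nodes (eight possible states). An annealer is
# for when the number of nodes makes this enumeration hopeless.
best = min(product([0, 1], repeat=3), key=energy)
print(best, energy(best))   # -> (1, 0, 1) with energy -2.0
```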

That’s interesting, because if we could somehow turn the network representation into a collection of entangled qubits, we could rely on physics to give us an instantaneous solution to what would otherwise be very time-consuming to compute.

And that brings us to the second school of thought in quantum computing: quantum annealing computers. These are simpler than quantum gate computers. They have qubits, yes, but you can only do two things to them: entangle them more or less closely, and use superposition to set their probabilities.

The only kind of answer you can get from a quantum annealing computer is the final energy state of a network of qubits. If you set up your quantum network right, though, that’s an almost instantaneous answer to a complicated optimization problem. The quantum computation happens in a single step, called annealing.

The name comes from a physical process of getting something very hot, letting it cool down slowly, and then looking at the structure that results. If you do that with a metal, for example, you’ll generally get a nice regular crystalline structure. Engineers have used digital analogues of this process for decades. Laying out complicated circuits to minimize power consumption and signaling delays relies on simulated annealing on classical computers.

Of course, quantum systems are probabilistic. You might not get the same answer every time you run the annealing step. That’s true for hot metal, too. You won’t get exactly the same crystal structure every time you heat and cool it. But if you do it over and over again, you’ll get to look at different solutions, and the most common ones are likely to be the best.
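
Here's a rough sketch of classical simulated annealing in Python on a made-up toy problem, run ten times. Each run wanders through the possible states, accepting uphill moves less and less often as it "cools", and the lowest-energy answer across the runs is the one you keep. The cooling schedule and weights here are invented for illustration.

```python
import math
import random

random.seed(0)

# A made-up toy problem: 8 binary variables with random node and link weights.
n = 8
h = [random.uniform(-1, 1) for _ in range(n)]                                   # node weights
J = {(i, j): random.uniform(-1, 1) for i in range(n) for j in range(i + 1, n)}  # link weights

def energy(x):
    """Total energy of one assignment of 0s and 1s."""
    return (sum(h[i] * x[i] for i in range(n))
            + sum(w * x[i] * x[j] for (i, j), w in J.items()))

def anneal(steps=2000, t_start=2.0, t_end=0.01):
    """One simulated-annealing run: propose bit flips, accept uphill moves
    less and less often as the temperature falls."""
    x = [random.randint(0, 1) for _ in range(n)]
    e = energy(x)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)   # geometric cooling schedule
        i = random.randrange(n)
        x[i] ^= 1                                           # propose flipping one bit
        e_new = energy(x)
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new                                       # accept the move
        else:
            x[i] ^= 1                                       # reject: flip it back
    return e

# Run it several times: the runs won't all agree, but the lowest energy
# seen across runs is the answer to keep.
energies = sorted(anneal() for _ in range(10))
print([round(e, 2) for e in energies])
```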

Quantum annealing computers support far fewer operations, which naturally makes them easier to build. You don’t do lots of different steps on each qubit — you set up entanglement and probabilities, and then step back. That reduces the time you need to keep the qubits in superposition, which makes these machines easier to operate.

D-Wave Systems is building quantum annealing computers.

Which is better?

Google is at the forefront of the state of the art in quantum gate hardware. Its most sophisticated quantum processing unit, the Sycamore, has 54 qubits. Google has claimed quantum supremacy with it (that is, it performed a task no classical system could finish in any practical amount of time) by doing in 200 seconds a computation that, by Google’s estimate, would have taken the world’s fastest supercomputer 10,000 years.

Quantum gate systems can perform more different operations on qubits than annealing systems, and thus in principle should solve a wider variety of problems. That’s not a solid theoretical result; maybe any job you can do on one, you can do on the other. But you can at least do different things on a quantum gate machine, and continued experimentation here is worthwhile.

In particular, we’re still working on the programming tools and abstractions we need to use quantum gate machines easily and well.

A key current limitation of quantum gate machines is their relatively small scale. The state-of-the-art Sycamore chip has just 54 qubits; that’s not enough for computation and error correction. For the near term at least, that small number of qubits is a real limitation of gate architectures.

D-Wave Systems leads the state of the art in quantum annealing. Its largest system, the Advantage, offers more than 5,000 qubits. It can represent systems about a hundred times larger than the Sycamore can. Each additional qubit doubles the number of states the machine can represent, so this is a vastly more powerful processor, able to store and operate on much larger and more complex systems of qubits.

Of course, it can only be used for one thing — finding low-energy states in those complex systems. If you need to solve an optimization problem, though, it’s fast and powerful, and pretty easy to use. You figure out what your network looks like, upload it into the processor, and hit the “anneal” button.
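
That workflow is only a few lines in D-Wave's open-source Ocean SDK. The sketch below is a guess at the minimal version: it assumes the dwave-ocean-sdk package is installed and a D-Wave Leap API token is configured, and the QUBO itself is the same made-up toy from earlier.

```python
from dwave.system import DWaveSampler, EmbeddingComposite

# A made-up toy network: node weights on the diagonal, link weights off it
Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,
     (0, 1): 2.0, (1, 2): 2.0}

# EmbeddingComposite maps the logical network onto the chip's physical qubits
sampler = EmbeddingComposite(DWaveSampler())

# "Hit the anneal button": run the anneal many times and collect the samples
sampleset = sampler.sample_qubo(Q, num_reads=100)

# The lowest-energy sample seen across all the reads
print(sampleset.first.sample, sampleset.first.energy)
```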

In practical terms, there are more real use cases of quantum annealing deployed in production today. Programming these machines is easier because we already understand the math that underlies optimization theory. I’ll write more about this in another post, but there are lots of optimization problems in the world, and folks are already turning them into networks that quantum annealing can solve for them.

In a sense, a quantum gate machine is analogous to our classical computers. It can perform a wide variety of computations involving qubits. A quantum annealing machine is more like a calculator. You load your equation into it, and get an answer.

There’s another analogy, a little less precise but nevertheless apt: Before the advent of digital computers, engineers built analog systems using gears and motors or vacuum tubes or hydraulics to model real-world systems. Slide rules, the Antikythera mechanism and the perpetual calendar at the Cappella dei Mercanti are examples. Quantum annealing is in a rough sense (there are lots of digital components involved, but forget about them for a minute) an analog computer constructed from qubits. It uses the fundamental laws of quantum physics to model real-world quantum systems.

Quantum gate and quantum annealing start from very different viewpoints about how we should apply quantum phenomena to our programming problems. Both have made huge strides in recent years.

It may be that quantum gate machines and quantum annealing machines converge one day. It may be that each stakes out an eventual domain in which it is dominant. For at least the near term, though, quantum annealing will beat quantum gate for its special kind of problem.

Both states of the art will continue to evolve over the next decade. Quantum computing will stay interesting!

Acknowledgments

My secret professional superpower is finding smart folks working on interesting things and hanging out with them. Once again, this strategy has paid off. Useful insights and correct ideas in this post are from, or inspired by, the folks below. Errors, naturally, stay right here at home.

Geordie Rose, founder and CEO at Sanctuary.ai, was previously founder and CTO at D-Wave Systems. His thoughtful feedback on the first post in this series made clear to me that I’d done a lot of work to understand quantum gate computing, but had entirely ignored quantum annealing.

Michael Helander, President and CEO of OTI Lumionics, spent a delightful (for me!) pandemic Sunday morning on the telephone with me, talking about practical applications of quantum annealing in materials engineering. I only got through about half of the good stuff I learned from him in this post. Stay tuned!

Tom Erickson at the University of Wisconsin roped me into Creative Destruction Lab. Ajay Agrawal founded CDL. He introduced me to Geordie and Michael.

Grateful to all.
