Tuesday, December 9, 2014

D-Wave, Boss, D-Wave!



Some of my colleagues (and my followers on both social and anti-social media) have been asking me to comment on the D-Wave Systems (so-called) quantum computer, in light of all the seemingly contradictory press releases, scientific papers, and blog posts that have appeared over the years. This request is similar to asking me to assess and comment on the status of a pond filled with ravenous, blood-thirsty piranhas by stripping down to my Speedos, slathering myself with the blood of a fatted calf, and then diving into the pond and thrashing about. However, I can assure you that the sight of me wearing Speedos would keep even the most ravenous piranha — or perhaps even Scott Aaronson? — at bay.

Just kidding Scott! Love ya bro! (Let the flames begin….)

In my recent book, I do indeed have a section on the D-Wave phenomenon, but that was written in 2012 and perhaps needs a light-hearted update. When writing the book, my friend and colleague and known Grand Poobah of fine D-Wave Systems hardware, Daniel Lidar, asked me if I wanted him to read over my section on D-Wave Systems computers. I told him, flat out, no. He pressed on and said, with a wry grin, “You don’t want to say something you might later regret.” I replied, “Dan, I can imagine many things I would like to have engraved on my tombstone but one of them certainly would not be…

HERE LIES JONATHAN P. DOWLING —
HE NEVER SAID ANYTHING HE WOULD LATER REGRET!

In addition, for reasons I will leave nebulous for the moment, in the past few months I have had to give myself a crash course on the status and performance of the D-Wave Systems machines and perhaps I have a clearer view of what is (or is not) going on.

Firstly let me say that this vigorous and open debate about the D-Wave machines is good for science and is the hallmark of the scientific method at work. At times I wish the tone of the debate were more civil but history is filled with uncivilized scientific debates and perhaps civility is too much to wish for. Who can forget the baleful Boltzmann-Loschmidt debates on the statistical mechanics reversibility question? How about the forceful Huxley-Wilberforce debate on evolution? (“So tell us, Dr. Huxley, is it from your father’s side of the family that you claim your descent from an ape or is it from your mother’s?”) What about Cantor’s cantankerous feud with Kronecker over the nature of mathematical infinities?

Anybody? Anybody? (Tap, tap, tap....) Is this thing even on!?

Like scientific debates of yore this D-Wave business is not without its polemics. I attended a workshop at Harvard in March where another invited speaker, a friend and colleague of mine, when asked about what he thought about D-Wave, replied in no uncertain terms, “I think they’re a bunch of charlatans.” (“So tell us, D-Wave Two computer, is it from your father’s side of the family that you claim your descent from an abacus, or is it from your mother’s?”) On the other hand, my friends and colleagues like Colin Williams are pro (or at least not totally anti), while others, like Daniel Lidar, are steadfastly neutral. I can assure that particular Harvard-workshop-speaker colleague of mine that neither of them is (at least currently known to be) a charlatan.

I will try to give here an abridged history of what is going on, where I will fill in the gaps in my knowledge with the usual wild-assed conjectures. So as far as I can tell, as usual, this all can be blamed on us theorists. In 2001, Ed Farhi at MIT and his collaborators published a paper in Science that proposed a new model for quantum computation called Adiabatic Quantum Computing (AQC). This model was radically different from the standard circuit model for quantum computing, which we’ll call Canonical Quantum Computing (CQC), developed by Deutsch and others.

The standard circuit-gate model describes how a universal quantum computer can be built up out of circuits of one- and two-qubit gates, in a way analogous to how a classical digital computer is built. It is called ‘universal’ because any quantum algorithm, such as Shor’s factoring algorithm, can be programmed into its circuitry. (By contrast, it is strongly believed that the factoring algorithm cannot be programmed into a classical computer’s circuitry without suffering an exponential slowdown in run time.) While the circuit-gate model is very likely more powerful than a classical computer for this problem, to factor large numbers of interest to, say, the National Security Agency (NSA), the circuit would have to contain billions or even trillions of qubits and gates, allowing for quantum error correction and all that. To date, the record number of qubits assembled this way in the lab, held by the ion-trap quantum computer, is about 16, not a trillion. Every few years or so, the ion trappers carefully polish up their trap and add another qubit, but the pace is very slow.

Adiabatic Quantum Computing (AQC) seemed completely different from the circuit-gate model or Canonical Quantum Computing (CQC). In AQC, instead of building gates and circuits in the usual sense, one identifies a large physical quantum system, say a collection of interacting quantum spins, with a particular Hamiltonian structure. The problem to be solved is then “programmed” into the state of these spins. Then the system is allowed to evolve adiabatically, by slowly tuning some parameter such as temperature or time, into its ground state. If set up properly, the spin state of the final ground state of the Hamiltonian contains the answer to the computational question you programmed in at the higher temperature (or earlier time).
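For readers who like to see the knobs, the textbook way to write this adiabatic interpolation (the generic form, not a description of any one machine) is

H(t) = (1 − t/T) H_B + (t/T) H_P ,

where H_B is a simple “beginning” Hamiltonian whose ground state is easy to prepare, H_P is the “problem” Hamiltonian whose ground state encodes the answer, and T is the total run time. The adiabatic theorem then says that if T is long compared to the inverse square of the smallest energy gap encountered along the way, the system tracks the instantaneous ground state and ends up in the ground state of H_P, answer and all.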

What set the quantum computing community off into a paroxysm was that Farhi and the gang claimed, in their original Science paper, that AQC could solve NP-complete problems (travelling-salesman-type problems) with an exponential speedup over a classical computer. This was an astounding claim, as it is widely and strongly believed that the CQC model gives no exponential speedup on NP-complete problems over a classical computer. And so, in a nutshell, Farhi and collaborators were claiming that the AQC approach was exponentially more powerful than CQC, at least on this one particular class of important optimization problems.

A vigorous scientific debate ensued, rotten tomatoes were thrown, and finally, in 2007, Dorit Aharonov and collaborators proved that AQC was computationally equivalent to ordinary circuit-gate quantum computing. That is, AQC is not more powerful than CQC. (They are polynomially isomorphic and hence both equally universal models of quantum computation.) Hence the claim that AQC would give an exponential speedup on NP-complete problems was very likely incorrect, since it is strongly believed that no such speedup occurs in the CQC model.

Got it?  

The situation reminds me of quantum mechanics in 1926, when you had Heisenberg’s matrix mechanics and Schrödinger’s wave mechanics, and nobody knew if they were the same theory, or gave different predictions, or what, until Dirac (and others) showed that they were transformable into each other, and hence made exactly the same physical predictions, and hence were not two different theories of quantum mechanics but rather two different pictures of quantum mechanics. However, as is well known to the quantum mechanic on the street, some quantum theory problems are much easier to work out in the Heisenberg picture than in the Schrödinger picture (and vice versa). As a consequence, by the Gottesman-Knill theorem, the Heisenberg and Schrödinger pictures are not computationally equivalent even though they are physically equivalent. That is, a quantum computing circuit composed only of gates from the Clifford group, one that looks like it would be impossible to simulate efficiently on a classical computer in the Schrödinger picture, is in fact efficiently simulatable in the Heisenberg picture.
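To make the Gottesman-Knill point concrete, here is a toy sketch of that Heisenberg-picture bookkeeping (my own illustrative Python, not anybody's production stabilizer simulator). Instead of storing a state vector with 2^n amplitudes, you just track how each Pauli operator gets relabeled when it is conjugated by the Hadamard, phase, and CNOT gates; for brevity the sketch ignores the plus-or-minus signs that a full Aaronson-Gottesman-style simulation would also carry along.

# Toy Heisenberg-picture bookkeeping for Clifford circuits (signs ignored).
# A Pauli string on n qubits is stored as two bit lists (x, z):
#   x[q]=1, z[q]=0 -> X on qubit q;  x=0, z=1 -> Z;  x=1, z=1 -> Y;  x=0, z=0 -> I.
# Conjugation by a Clifford gate just shuffles these bits: O(n) work per gate,
# which is the whole point of the Gottesman-Knill theorem.

def conj_h(x, z, q):        # Hadamard on qubit q: X <-> Z
    x[q], z[q] = z[q], x[q]

def conj_s(x, z, q):        # phase gate on qubit q: X -> Y, Z -> Z
    z[q] ^= x[q]

def conj_cnot(x, z, c, t):  # CNOT with control c and target t
    x[t] ^= x[c]            # X_c -> X_c X_t
    z[c] ^= z[t]            # Z_t -> Z_c Z_t

def pauli_string(x, z):
    return "".join("IXZY"[xi + 2 * zi] for xi, zi in zip(x, z))

n = 3
x, z = [0] * n, [0] * n
z[0] = 1                    # start with the Pauli operator Z on qubit 0
conj_h(x, z, 0)             # push it through the GHZ-preparation circuit:
conj_cnot(x, z, 0, 1)       # H on qubit 0, then CNOT(0,1), then CNOT(1,2)
conj_cnot(x, z, 1, 2)
print(pauli_string(x, z))   # prints "XXX": three qubits' worth of correlation,
                            # tracked with six bits instead of eight amplitudes

Throw in even a single non-Clifford gate (a T gate, say) and this cheap bookkeeping breaks down, which is why the trick does not hand you a classical simulation of a full quantum computer.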

Okay, so the AQC model of quantum computation is neither more nor less powerful than circuit-gate quantum computing, but because of its mapping onto fairly simple spin systems, it looks more like a design for a straightforward physics experiment than a design for a gigantic digital quantum computer. The open question is whether AQC is easier to implement than CQC. That is, instead of pursuing the circuit-gate paradigm, adding one ion qubit to your trap every few years until you hit a trillion of them, perhaps you could cook up some simple spin-like physical system with a whole lot of qubits (maybe hundreds but not trillions) interacting in a certain way, code in your problem, cool the whole thing down slowly, and read out your answer — build your quantum computer that way! That potential, I surmise, is how and why D-Wave Systems got involved in the quantum computing game.

In 2007 D-Wave Systems demonstrated a 16-qubit prototype system called The Orion and used it to solve some computational problems such as a pattern-matching problem related to drug design and a Sudoku puzzle. In 2011 they unveiled their first commercial machine, the D-Wave One, a 128-qubit machine that sold for $1,000,000. They have sold precisely one of these, and that was to Lockheed-Martin, which then immediately handed it over to a research group at the University of Southern California, consisting of Daniel Lidar and collaborators, who operate the machine with complete autonomy from that company. (USC does provide the physical infrastructure for the machine and its operation, but that role is compartmentalized and separate from the research activities.) In 2013 D-Wave Systems produced the D-Wave Two, a 512-qubit machine, and they have sold precisely two of those. The first was also bought by Lockheed-Martin and is run by USC, and the second was bought by Google and is located at the Quantum Artificial Intelligence Lab at the NASA Ames Research Center at Moffett Field, California (near San Jose). This particular NASA machine has a front-end user interface, run on a Google proxy server, that allows the user to submit batch jobs to the D-Wave Two hardware via that route.

So just what are these machines and what are they doing? This question is difficult to answer, as we are dealing not with a science experiment but with a product produced by an industrial concern. If a scientist, say at a university, claims to see such and such a result in an experiment, then they are required to publish sufficient details about the experiment so that independent researchers at other universities can construct identical experiments and confirm or refute the result. But the D-Wave machines are industrial products, not science experiments, and details about their design, construction, performance, and even the software that runs them are proprietary. They don’t want other companies to know all the details of how they are built.

So scientists, instead of constructing their own duplicate of the machine, are forced to use the D-Wave machine to test itself — to run tests or “experiments” treating the thing as a black box. One bit of front-end software that interfaces with the machine is in fact called “black box”. And for some years now the game has been to choose some computationally hard problem, run it on the D-Wave machine, compare the run time with the run time when the problem is run on a fast classical computer, and then publish the results of the race. The results greatly depend on which problem you choose, and hence this is why there are so many conflicting reports appearing in the popular press. One day we hear that the D-Wave machine is more powerful than a classical computer and the next we hear it is no more powerful. Well, this is because the term “powerful” is problem dependent. It makes little sense to say one machine is more powerful than another in general. It only makes sense to say one machine is more powerful than another when computing a specific problem or class of problems. When computer scientists say a quantum computer is more powerful than a classical one, they mean on a specific set of problems, for example factoring. For some problems, say computing the permanent of a matrix, a quantum computer is likely not any more powerful than a classical one.

Let’s take an analogy. Suppose in 1984, ten years before Shor discovered his factoring algorithm, an alien spacecraft lands on the roof of the physics building at the University of Colorado (where I was a grad student at that time) and out from the UFO emerges a luminous, one-eyed, green-skinned alien named Xiva the Decompiler of Words. Xiva hands me a pocket-sized device that she calls the Xblox, claims it is a universal quantum computer, and tells me to try it out. I decide to pit it against the nearby Cray X-MP supercomputer. If I first run the oracular ordered search problem (don’t ask), the Xblox will perform better than the X-MP by at most a factor of 3, and I would exclaim, “This quantum computer sucks!” If instead I randomly chose to try and factor a large number on the two machines, I’d find the Xblox has an exponentially shorter run time than the X-MP. I then would exclaim, “This quantum computer really rocks!” On other problems (NP-complete or unordered search) the Xblox would show only a quadratic improvement over the X-MP. The question of which computer has the better performance is meaningless unless you specify the particular problem they are performing the computation on.

What is inside the D-Wave machines? I suspect that D-Wave had originally hoped to implement something like full-blown Adiabatic Quantum Computing on their machines. Since the 2007 result of Aharonov, et al., we know that AQC is equivalent to ordinary QC, as discussed above, but AQC is perhaps easier to implement. Hence if the D-Wave machine turned out to be a general-purpose AQC machine it would be truly a universal quantum computer. This hope may have been the source of early (and somewhat naive) press reports from D-Wave that they were building a true universal quantum computer with entanglement and the whole nine yards.

The speed of hype may often greatly exceed the speed of write.

As mentioned above, AQC could be implemented by building a controllable lattice of coupled spin-half particles. However, given that the spin-half particle (a neutron for example) is a quantum two-level system, any array of coupled two-level systems will do. D-Wave decided to go with superconducting flux qubits. These are little micron-size loops of superconducting current-carrying wire where all the electrons can be made to circulate counterclockwise (the qubit |0⟩ state) or all clockwise (the qubit |1⟩ state). If the little buggers are sufficiently cold and sufficiently protected from the environment then they have a long enough coherence time so you can make a cat state, or the superposition |0⟩+|1⟩, which is then the starting point for generating massive amounts of entanglement across the lattice, via qubit-qubit interactions, and away you go to implement AQC (or even CQC). The particular Hamiltonian system D-Wave has chosen is something called the Ising spin model (ISM), whose Hamiltonian has a binary and tunable coupling between each pair of qubits.
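Written out in the generic textbook form (this is the standard transverse-field Ising expression you will find in any quantum-annealing review, not a claim about the exact knob set on any particular chip), that Hamiltonian reads

H(t) = Σ_i h_i σ_i^z + Σ_{i<j} J_ij σ_i^z σ_j^z − Γ(t) Σ_i σ_i^x ,

where the local fields h_i and pairwise couplings J_ij are the programmable knobs that encode the problem, and the transverse field Γ(t) supplies the quantum fluctuations and is swept down to zero over the course of the anneal.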

This Hamiltonian shows up in the theory of quantum spin glasses, for example. There is one additional hardware implementation step. This Ising (pronounced “ee-sing” not “icing”) Hamiltonian presupposes that, via the qubit-qubit coupling, any qubit (in, say, a square lattice) can be coupled to any other qubit in the lattice. That is not the case in the current D-Wave One and Two. Only nearest neighbors are coupled. In addition, some of the qubits don’t work due to manufacturing issues, but you know in advance which of them are going to be offline, as D-Wave characterizes the chip and tells you.

To handle this hardware issue an additional wiring diagram is needed to map the Ising problem (with arbitrary qubit-qubit coupling and with all the qubits in play) onto the D-Wave hardware (with only nearest-neighbor coupling and where some of the qubits are no-shows). That wiring diagram is called a Chimera graph. (I suppose because the graph looks like the fire-breathing hybrid of a goat and a lion and a serpent to the unaided eye.) It is important to note that each different computational problem will require a different Chimera mapping, which can be quite hard to work out for a large number of qubits, but the D-Wave black-box front end will help you find one if you are too dumb to figure it out on your own.

So the idea is that you program your input as an initial state of this Hamiltonian (time t = 0), then slowly tune the time parameter t until the thing reaches its ground state (time t = T), and then read out your answer, and you have a type of AQC. The problem is that, for this protocol to work as universal AQC, the qubits must have coherence times that are at least as long as (and preferably longer than) the adiabatic run time T. This might be the case if the qubits were cold enough and protected from the environment well enough, but that is not the case. The qubits in the current D-Wave machines are quite noisy and have very short coherence times, due to temperature fluctuations, imperfections, and other technical noise. Hence the qubits are not sufficiently coherent for long-range entanglement to be spread across the lattice, full-blown AQC cannot yet be implemented, and, by the Aharonov et al. proof, neither is the machine universal for quantum computing. That is, the current D-Wave machines cannot solve all the problems a general quantum computer can solve. Well, if they can’t do that, what can they do?

There is a watered-down version of AQC called quantum simulated annealing, and the good folks at D-Wave Systems claim their machines can do this. Annealing is a physical process, used for example in the manufacturing of swords, whereby the metal alloy of the sword is cooled very slowly into its ground state. The idea is that if you cool the sword too rapidly it will freeze into some state of energy higher than its ground state, a local minimum of the alloy structure, instead of the global minimum. These higher-energy local minima produce defects that weaken the final product. Hence you want to cool it slowly to get a strong sword. The physics is that while the alloy is cooling, if it does find itself in a local minimum energy well, because it is still at a relatively high temperature, the thermal fluctuations will cause it to leap over the well’s barrier height, allowing it to leave the local minimum and continue the search for the true minimum.

Computer scientists have emulated this physical process by designing an optimization procedure called simulated annealing. You are seeking the optimal state of some large computational problem, and so you simulate the annealing process by slowly lowering a fictitious temperature parameter over time, and you add in by hand random fluctuations to mimic the thermal fluctuations of the sword-making process. Simulated annealing often works much better than other optimization methods, such as steepest descent, in finding the true global minimum of a large problem.
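For the programmers in the audience, here is a bare-bones sketch of classical simulated annealing on a small Ising-type cost function (my own toy code; the problem instance and parameter values are made up for illustration). The whole algorithm is in the inner loop: propose a random spin flip, always accept it if the energy goes down, and accept it with a Boltzmann-like probability if the energy goes up, with that probability shrinking as the fictitious temperature is lowered.

import math
import random

def ising_energy(h, J, s):
    """E(s) = sum_i h[i]*s[i] + sum_{i<j} J[(i,j)]*s[i]*s[j], with spins s[i] = +/-1."""
    return (sum(h[i] * s[i] for i in range(len(h)))
            + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))

def simulated_annealing(h, J, n_sweeps=2000, T_start=5.0, T_end=0.01, seed=0):
    rng = random.Random(seed)
    n = len(h)
    s = [rng.choice((-1, 1)) for _ in range(n)]   # random starting configuration
    E = ising_energy(h, J, s)
    for sweep in range(n_sweeps):
        # exponential cooling schedule from T_start down to T_end
        T = T_start * (T_end / T_start) ** (sweep / (n_sweeps - 1))
        for _ in range(n):
            k = rng.randrange(n)                  # propose flipping one random spin
            s[k] = -s[k]
            E_new = ising_energy(h, J, s)         # (a real code would update E locally)
            if E_new <= E or rng.random() < math.exp(-(E_new - E) / T):
                E = E_new                         # accept the move
            else:
                s[k] = -s[k]                      # reject it: flip the spin back
    return s, E

# A tiny frustrated four-spin example with made-up couplings.
h = [0.0, 0.0, 0.0, 0.0]
J = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (0, 3): 1.0, (0, 2): -1.0}
print(simulated_annealing(h, J))

The quantum annealer’s pitch, as we will see below, is that the accept-or-reject coin flip above gets replaced by actual quantum dynamics, with tunneling through the barriers instead of hopping over them.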

In Quantum Simulated Annealing (QSA) the bits are replaced with qubits and the thermal fluctuations are replaced with the quantum fluctuations inherent in any quantum system. Qualitatively, it is claimed that quantum simulated annealing is more efficient than classical simulated annealing since, instead of fluctuations driving the system up and over the potential barrier of the local minimum it is stuck in, quantum mechanics also allows the system to tunnel through the potential barrier. For this reason claims are made that quantum simulated annealing should find the global minimum of a system faster than classical simulated annealing. Quantum simulated annealing should be viewed as strictly contained in AQC, and hence is not universal for quantum computing. What remains to be seen is whether it is more powerful than a classical computer on some specific problem.

And so we come to the core idea. As far as I can tell, D-Wave Systems claims their machines implement quantum simulated annealing. The question then is: is this version of QSA, with noisy qubits, short coherence times, and no entanglement, good for anything? To answer this question, D-Wave farms out time on their machines to various researchers to see if there are any computational problems, mapped into QSA, for which the D-Wave machines outperform a comparable classical machine. So in addition to being problem dependent, the answer also depends on what you mean by a ‘comparable classical machine’. It is due to these ambiguities that we have what appear to be contradictory publications and press releases on the power of the D-Wave machines.

I’m not going to go through the list of all the computational problems that have been tried on the D-Wave computers but focus on the one that seems currently to show the most promise. This computational problem is known as QUBO. A few months ago I had never heard of QUBO and I was shocked to find out, after I had, that the Q in QUBO does not stand for ‘quantum’ but rather ‘quadratic’ as in Quadratic Unconstrained Binary Optimization.

Borzhemoi! This I knew from nothing!

Well, there was a steep learning curve, but without going into all the details, a QUBO problem is a type of optimization problem whose mathematical structure maps directly onto the Ising Hamiltonian that is native to the D-Wave hardware. It seems likely that if the D-Wave machine gives any computational advantage, and if that advantage is quantum in origin — the jury is still out on both these points — then it will only be on problems of the QUBO type. However, this is an interesting class of problems, whose applications seem primarily to be in image recognition. In particular, certain types of synthetic-aperture-radar image processing, of NASA Earth Science interest, map onto the QUBO form.
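To see why that mapping is so direct: a QUBO asks you to minimize x^T Q x over binary variables x_i in {0,1}, and the substitution x_i = (1 + s_i)/2 turns it into exactly the Ising form above, over spins s_i = ±1, plus a constant offset nobody cares about. Here is a minimal sketch of that bookkeeping, with a brute-force sanity check on a toy problem (the matrix Q is made up purely for illustration):

import itertools

def qubo_to_ising(Q):
    """Rewrite minimizing x^T Q x (x_i in {0,1}) as minimizing
    sum_i h[i]*s_i + sum_{i<j} J[(i,j)]*s_i*s_j + const, with s_i = +/-1,
    via the substitution x_i = (1 + s_i)/2."""
    n = len(Q)
    h, J, const = [0.0] * n, {}, 0.0
    for i in range(n):
        for j in range(n):
            q = Q[i][j]
            if q == 0.0:
                continue
            if i == j:
                h[i] += q / 2.0          # x_i^2 = x_i = (1 + s_i)/2
                const += q / 2.0
            else:
                a, b = min(i, j), max(i, j)
                J[(a, b)] = J.get((a, b), 0.0) + q / 4.0   # x_i x_j = (1+s_i)(1+s_j)/4
                h[i] += q / 4.0
                h[j] += q / 4.0
                const += q / 4.0
    return h, J, const

def qubo_energy(Q, x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(len(Q)) for j in range(len(Q)))

def ising_energy(h, J, s):
    return (sum(h[i] * s[i] for i in range(len(h)))
            + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))

Q = [[-1.0,  2.0,  0.0],                 # a made-up three-variable QUBO
     [ 0.0, -1.0,  2.0],
     [ 0.0,  0.0, -1.0]]
h, J, const = qubo_to_ising(Q)
for x in itertools.product((0, 1), repeat=3):      # check every bit string agrees
    s = [2 * xi - 1 for xi in x]
    assert abs(qubo_energy(Q, x) - (ising_energy(h, J, s) + const)) < 1e-9
print("h =", h, " J =", J, " offset =", const)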

If the D-Wave machine gives an advantage it will be only on practical problems that can be mapped (with little computational overhead) to a QUBO problem. Whether that set of problems is the empty set, is still unknown.

Here is a summary of the steps required to map a practical problem like image recognition onto the D-Wave machine:

1.)  Find a practical problem that has an efficient mapping to a QUBO problem and carry out that mapping. 

2.)  Next the QUBO problem must be mapped to the specifics of the D-Wave circuitry via a Chimera graph that is essentially a wiring diagram for the quantum computer. This is done by an intermediate mapping to a Quantum Ising Spin Model (QuIS), which the D-Wave machine was designed to solve in the generic AQC model. In this way programming the D-Wave machine is akin to programming a classical computer in assembly or machine language, as was done decades ago, before the development of higher-level compilers and programming languages. That is why the programming process is somewhat convoluted to the uninitiated. To carry out the Chimera mapping for nontrivial problems, D-Wave provides the user access to a front-end server called “black box” that carries out a search for an optimal Chimera mapping.

3.)  Run the problem on the D-Wave machine. For the D-Wave Two purchased by Google and located at the NASA Ames Research Center, Google has developed a proxy server that interfaces with the actual D-Wave machine and allows the user to submit batch jobs to the quantum machine.

So we can see that there is a lot of overhead between the physical problem to be solved and the final run on the machine. If that overhead is too large, any quantum advantage of the D-Wave machine may very well be swamped.

In particular, my colleagues ask, “What’s the deal with these announcements that come out every few weeks or so? Some claim to see a speedup and some claim they do not.” We are now prepared to answer this. Experiments on the D-Wave machine that test algorithms that are not of the QUBO form show no speedup. Some problems that do have an efficient mapping to QUBO seem, at least in a number of publications, to show some speedup. Whether you see a speedup depends on what problem you choose and how efficiently you can map your problem to a QUBO problem.

Let me be frank and say that it is still possible that the D-Wave machines will show no speedup of a quantum nature, but the jury is out. If they are shown to give a quantum-enabled speedup, it will be only on a specific, narrow class of important problems that have an efficient QUBO implementation. This means that the current D-Wave machines are clearly not universal quantum computers but are potentially a very narrow-scope, special-purpose, possibly quantum, QUBO-problem-solving engine.

 I should say that I'm in favor of a top-down approach like this to compete, say, with the ion trappers that add a very high-fidelity qubit to their trap once every year or two. I think the competition and the associated discussions are good for the field. Let's avoid calling each other charlatans and instead let's all sit down and try to figure out what is going on and see how we can reach our common goal — a universal quantum computer.

Thursday, November 20, 2014

You Can't Teleporter Anything He Doesn't Want to Hear

The early days of the experimental demonstrations of quantum teleportation, particularly the first three experiments by Zeilinger, DeMartini, and Kimble, were not without controversy. This I learned first hand in 1999 when I attended the Sixth International Conference on Squeezed States and Uncertainty Relations in Naples, Italy. As the three different experimental groups’ publications had just appeared, the conference decided to have a panel discussion on the three different experiments. Curiously, the organizers asked me to chair the panel discussion. I was a bit puzzled by this request, and carefully stated that while I was honored by the invitation, I was hesitant to accept, since I had never carried out research (either theoretical or experimental) on quantum teleportation, and was certainly not an expert on the matter. The organizers looked about the room nervously and then in hushed tones took me aside and explained their rationale — apparently there was a very contentious debate raging among the three experimental groups as to who had done the first “true” demonstration of quantum teleportation, and the organizers expected the panel discussion to be loud and chaotic and they feared the panelists might become unruly — quantum physicists run amok! Therefore, they explained to me nervously while averting their eyes from my gaze, that they needed a moderator who would be able to run the thing with a strong hand (and a loud voice) in order to keep order. “The organizers are unanimous in our decision that you, Dr. Dowling, are our only hope.” I had been in a moment demoted from world’s expert in quantum physics to the quantum mechanical equivalent of either Obi-Wan Kenobi or a barroom bouncer.

The panel was chaotic from the inception. As I expected, there was one panelist per experimental group: Austrian physicist Gregor Weihs from the Zeilinger group, DeMartini from his own group, and Australian physicist Samuel Braunstein from the Kimble group. (American physicist Marlan Scully once called me the “Bob Hope” of theoretical physics. If that is the case then my good friend and colleague Sam Braunstein is surely the “Woody Allen” of theoretical physics.) Since the debate was to be among these three competing groups, I was again puzzled when the organizers, at the last minute, added two additional Italian physicists, other than DeMartini, to the panel. I was even more puzzled that these two last-minute additions seemed to have even less experience with quantum teleportation than me! I gently inquired as to why they should be on the panel — my role as bouncer was clear — but their roles were not. Again more hushed tones and averted gazes, and the organizers explained that they were both big-shot Italian professors who had asked to be on the panel discussion on quantum teleportation, in spite of knowing absolutely nothing about quantum teleportation, for the simple reason that they were big-shot Italian professors and thought it would look prestigious to insert themselves onto the panel. I had lived in Italy for a year as a graduate student, and I knew a thing or two about Italian politics in the universities, and conceded to their admittance to the panel in the interest of keeping the peace. Peace, however, was not long kept.

The night before the panel discussion I was in a bit of a panic myself in my hotel room as I wondered how to organize things. I decided that in the hour-long time slot I would give each of the six participants, including myself, five minutes to speak or present a few slides, and then reserve the second half hour for questions from the audience. Now I knew there was this debate between the three experimental groups about whose teleportation experiment was the “best” experiment, where “best” is very subjective, but I had not followed this debate at all, and did not really have time to read through all three of the experimental papers and try to figure it all out. Hence I had an experimentalist friend and colleague, German physicist Andreas Sizmann, give me a quick tutorial on the nature of the experimental debate, at the bar, and then went back and carefully reread the original theory proposal by Bennett and colleagues, and with the help of Sizmann constructed a few overhead transparencies on how I thought a quantum teleporter should work and how, if handed such a device, one might be able to tell if it was working properly, and if handed three of them, how I might gauge which of the three was “best”. It was then, for the first time, that I devised the story of the mythical National Institute of Quantum Standards and Technology (NIQuIST) and the equally mythical quantum teleporter-testing machine NIQuIST had constructed to test the three claimed experimental implementations. In other words I did not compare the three experiments at all, I figured the panelists could do that themselves, but instead I put up a series of tests that each teleporter should pass to get the NIQuIST seal of approval, or more accurately something like a Consumer Reports rating: Recommended, Best Buy, or Not Acceptable. The quantum teleporter-testing machine that I drew up by hand looked like the one in the figure.



In the figure we show a compactified version of the quantum teleporter being slowly lowered into the NIQuIST teleportation-testing machine (brownish orange). As before we have Doug, now a NIQuIST employee (purple, left), who provides the teleporter with a photon whose quantum polarization state is unknown to Alice, Bob, or Charlie in the teleporter. Doug produces a large number of such unknown states using a machine called the “ensembler” that produces ensembles or collections of single-photon polarization states, where the ensembler can be programmed to choose the states at random, or in a pre-selected sequence. There are an infinite number of possible states, so the ensembler should produce a large number of different ones so that the test of the teleporter will be statistically significant. That is, the more random the sample of states, and the more of that infinite space it covers in a way the teleporter operators cannot anticipate, the more likely it is to be a fair test, one the teleporter operators cannot somehow cheat on by using some inside knowledge of the states being transmitted.
The next part of the tester will be to see how well the teleporter is doing. If an unknown state |Ψ⟩ is sent in by Doug to Alice, we want the state that emerges on Bob’s side of the teleporter to be as close to |Ψ⟩ as possible. This is quality control, and one measure of the quality of such a state transport is called the quantum fidelity. The fidelity is 100% if the outputted teleported state is identical to the inputted state to be teleported. The fidelity is 0% if the outputted teleported state is as far from the inputted state as possible, which would be hard to arrange without trying. If the teleporter simply completely scrambles the input state then (on average) the fidelity of the output states, with respect to the input states, will be 50%. That is, if the teleporter sucks, on average the outputted state agrees with the inputted state only half the time. Hence anything better than 50% is considered good; I would call it the bronze standard of teleportation. The gold standard would be 82% fidelity, that is, the minimum required for the teleporter to be used to teleport one half of an entangled photon pair and still violate Bell’s inequality. Anything less than 82% can be modeled with a classical local hidden variable and hence does not really test quantum mechanics. A new character in our pantheon, Ellen, on the right (purple), extracts the fidelity; she runs a machine called the “fidellerator” that measures the state of the teleported photon and compares it with the state that was actually teleported. Doug sends the complete information on the states he provided, and Ellen sends the complete information on the states she received, to François (bottom green), who runs a machine called the comparator (which I do not put in quotes because “comparator” is a real word), which compares what Doug sent to what Ellen got and then ranks the teleporter in the test.
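To put a number on what “the teleporter sucks” looks like, here is a toy Monte Carlo (my own illustrative Python, not a model of any of the three actual experiments): draw Haar-random single-qubit polarization states, the way the ensembler would, and compare the average fidelity of a teleporter whose output has nothing to do with its input against one that works perfectly. The scrambler lands right at the 50% bronze floor quoted above.

import numpy as np

rng = np.random.default_rng(1)

def random_qubit_state():
    """A Haar-random single-qubit (polarization) pure state as a 2-vector."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

def fidelity(psi_in, psi_out):
    """Fidelity |<psi_in|psi_out>|^2 between the sent and the received pure states."""
    return abs(np.vdot(psi_in, psi_out)) ** 2

n_trials = 50_000
# A "teleporter" that completely scrambles its input: Ellen receives a random
# state that has nothing to do with what Doug sent.
scrambler = np.mean([fidelity(random_qubit_state(), random_qubit_state())
                     for _ in range(n_trials)])
print("scrambling teleporter, average fidelity: %.3f" % scrambler)   # about 0.50

# A perfect teleporter hands Ellen exactly the state Doug sent: fidelity 1 every time.
psi = random_qubit_state()
print("perfect teleporter, fidelity on one trial: %.3f" % fidelity(psi, psi))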
 
So I presented this slide and this spiel in about five minutes at the Naples panel discussion, along with an additional slide and discussion from my partner in crime and sidekick, Andreas Sizmann. Then I allowed each of the remaining five panelists to speak. As expected, the two big-shot Italian professors who were admitted to the panel for political reasons gave presentations that had nothing to do with quantum teleportation, much to the consternation of the audience, consternation that was on display through audience-member protestations consisting of gnarled grimaces (or audible snoring). Then in rapid fire came (in alphabetical order) the three true experimental teleporter panelists, Braunstein, DeMartini, and Weihs. This took up about a half hour of time, as anticipated. As the other panelists spoke, Sam Braunstein scribbled away furiously on a little notepad and, periodically with great showmanship and dash, he would loudly tear off the note sheet with the scribbles on it, neatly fold it in half, and hand it to me, while he displayed a very serious face to the audience. On the notepad sheets, which I would carefully unfold like a poker hand of two cards (so that nobody else could see them), Braunstein had drawn cartoon caricatures of the various panelists (other than himself), with their names and such artistic embellishments as crossed eyes, drool, maniacal grins, or feathered and bloodied arrows sticking through their craniums from ear to ear.
After he handed these cartoons to me, he sitting and I standing, I would peer intently at them with great seriousness. Astonished, I realized that he was simply trying to get me to laugh out loud during the panel discussion! Not to be fooled, I would inspect the crude cartoons carefully, out of eyesight of the other panelists and the audience, then fold them neatly, tuck them into my shirt pocket, and announce, “A very good point, Prof. Braunstein, I will be sure to bring it up in the discussion session to follow!” All the panelists, the audience, and Braunstein would then nod in solemn approval of this curious ritual, and Braunstein would return madly to his work of furiously scribbling out the next cartoon caricature on his notepad.
   
Each panelist had spoken their piece and then it was time for the grilling from the audience. I did my best to maintain order, timing the questions, cutting people off when sufficient, deafening others (with my ribald vocabulary of Italian curse words) when necessary, but all was lost when a young man (whose name I never got) with a Germanic accent (German, Swiss, or Austrian) grabbed the portable microphone and launched into a diatribe of his own. The young Teuton first assailed the structure and organization of the panel, “This panel discussion was badly organized. I have no idea who these first two Italian guys were or why they were even on the panel, as they had nothing to say about the topic of quantum teleportation!” The Italians looked at the German-accented Wünderkindt in horror as he dared speak the obvious. (I liked this Kindt already.) Then he continued, “Even the presentations from the panelists who have done work on quantum teleportation were mostly incomprehensible.” Now all the hackles were raised. “In fact,” continued this spawn of Odin, “the only person on this panel who made any sense was Dr. Jonathan Dowlings (sic)!” (I liked this Kindt even more.) I grinned from ear to ear and smiled at all the organizers, but they were not smiling back. Then the blond-haired, German-accented, intemperate audience participant went in for the kill, “And you, DeMartini, your teleportation experiment meets none of Dowlings’ criteria for success at all!” I stared at this guy in horror as a shudder went down my spine and spread rapidly through the floor and across the audience.
That was it! DeMartini snatched up his transparencies and stormed off the podium, off the stage, down the steps, and out of the auditorium side door, never to be seen again (at least at that conference). I, jokingly, announced, “Did anybody see where DeMartini went? Can somebody check the toilet? No? Okay then I officially declare the panel discussion to be at an end!”

— The unedited story adapted from Schrödinger's Killer App: Race to Build the World's First Quantum Computer


 

Tuesday, November 4, 2014

Einstein's Least Famous Equation?


In 1980, my first year in graduate school, the English physicist Paul Dirac, winner of the 1933 Nobel Prize, came to the University of Colorado to give a popular talk as the Gamow Memorial Lecture. As a big fan of Dirac, I dragged all my non-physicist friends to the “popular” lecture early to get good seats in the middle of the second row from the front. The place was packed with the mayor, the chancellor, the provost, the deans, all the physics professors, a blonde woman from the Sufi community dressed in a turban and a white cloak sporting a ceremonial dagger in her waistband, and so forth. (This is Boulder, Colorado, after all.) Dirac gave what I thought was a very interesting talk on the history of quantum theory, but with no slides, no notes, no audiovisual aids, and no nothing. He just stood at the podium and talked for an hour. He was 78 years old at the time and he spoke in a very soft, high-pitched, English-accented, mouse-like voice. So soft it was that you could barely hear him at all, and the technicians kept cranking up the amplifiers until the system screeched periodically from the feedback. The talk put all the non-physicists in the audience immediately to sleep. Then Dirac got to the part where he discovered the Dirac equation predicting the existence of antimatter.

He clearly gets a bit excited and impossibly goes up an octave, whereupon the feedback kicks in, waking everybody up, and Dirac says, “I was led to the idea of the discovery of antimatter by considering Einstein’s most famous equation, E = ….” All my buddies from the English department began to nudge me and the crowd visibly perked up. They had not understood a goddamn thing, but for sure even the English majors knew what “…Einstein’s most famous equation, E = ???” was going to be. Dirac continues triumphantly onward to the hushed auditorium, “E equals the square root of p-squared times c-squared plus m-squared times c to the fourth!” (That is, E = √(p²c² + m²c⁴), of which the famous E = mc² is just the special case of a particle at rest, with p = 0.)

(Einstein’s least famous equation?) The audience visibly collapsed upon themselves in utter disappointment—they understood nothing—and I, in the tomb-like quiet that followed in the hallowed Rocky Mountain granite of the vast Macky Auditorium, burst out laughing uncontrollably. (And I was the only one.) Dirac, normally an endearing bird-like little man, scowled, halted the talk, stepped out from behind the podium, and stared down at me in silence, vulture-like, for a full minute. The rest of the audience looked back and forth between Dirac and me as they coughed and inspected their watches. Then, without a word, after my torturous minute was up, he returned behind the podium and finished his talk as if nothing had happened at all.

From: Schrödinger's Killer App: Race to Build the World's First Quantum Computer (Page 42). Taylor & Francis Press.


Monday, June 30, 2014

Sampling arbitrary photon-added or photon-subtracted squeezed states is in the same complexity class as boson sampling



Jonathan P. Olson, Kaushik P. Seshadreesan, Keith R. Motes, Peter P. Rohde, Jonathan P. Dowling

Boson sampling is a simple model for non-universal linear optics quantum computing using far fewer physical resources than universal schemes. An input state comprising vacuum and single photon states is fed through a Haar-random linear optics network and sampled at the output using coincidence photodetection. This problem is strongly believed to be classically hard to simulate. We show that an analogous procedure implements the same problem, using photon-added or -subtracted squeezed vacuum states (with arbitrary squeezing), where sampling at the output is performed via parity measurements. The equivalence is exact and independent of the squeezing parameter, and hence provides an entire class of new quantum states of light in the same complexity class as boson sampling.

See: arXiv:1406.7821

Thursday, June 26, 2014

An introduction to boson-sampling

Bryan T. Gard, Keith R. Motes, Jonathan P. Olson, Peter P. Rohde, Jonathan P. Dowling

Boson-sampling is a simplified model for quantum computing that may hold the key to implementing the first ever post-classical quantum computer. Boson-sampling is a non-universal quantum computer that is significantly more straightforward to build than any universal quantum computer proposed so far. We begin this chapter by motivating boson-sampling and discussing the history of linear optics quantum computing. We then summarize the boson-sampling formalism, discuss what a sampling problem is, explain why boson-sampling is easier than linear optics quantum computing, and discuss the Extended Church-Turing thesis. Next, sampling with other classes of quantum optical states is analyzed. Finally, we discuss the feasibility of building a boson-sampling device using existing technology.

See: arXiv:1406.6767

Friday, June 20, 2014

Scalable boson-sampling with time-bin encoding using a loop-based architecture

Keith R. Motes, Alexei Gilchrist, Jonathan P. Dowling, Peter P. Rohde

We present an architecture for arbitrarily scalable boson-sampling using two nested fiber loops. The architecture has fixed experimental complexity, irrespective of the size of the desired interferometer, whose scale is limited only by fiber and switch loss rates. The architecture employs time-bin encoding, whereby the incident photons form a pulse train, which enters the loops. Dynamically controlled loop coupling ratios allow the construction of the arbitrary linear optics interferometers required for boson-sampling. The architecture employs only a single point of interference and may thus be easier to stabilize than other approaches. The scheme has polynomial complexity and could be realized using demonstrated present-day technologies.

See: arXiv:1403.4007

A Quantum Phase Representation of Heisenberg Limits and a Minimally Resourced Quantum Phase Estimator


Scott Roger Shepard, Frederick Ira Moxley III, Jonathan P. Dowling

Within the quantum phase representation we derive Heisenberg limits, in closed form, for N00N states and two other classes of states that can outperform these in terms of local performance metrics relevant for multiply-peaked distributions. One of these can also enhance the super-resolution factor beyond that of a N00N state of the same power, at the expense of diminished fringe visibility. An accurate phase estimation algorithm, which can be applied to the minimally resourced apparatus of a standard interferometer, is shown to be resilient to the presence of additive white-Gaussian noise.



See: arXiv:1403.2313

On The Future of Quantum Computing as Seen from the Recent Past


During the period 2011–2013 I was honored to participate as a co-investigator on one of the four teams funded under the Intelligence Advanced Research Projects Activity (IARPA) Quantum Computer Science Program (QCS). To quote from the IARPA web page, “The IARPA Quantum Computer Science (QCS) Program explored questions relating to the computational resources required to run quantum algorithms on realistic quantum computers.”

First let me state that this blog post represents my own viewpoint and opinion of the program and not that of IARPA or of the other participating teams or government agencies involved. However, as most of my readers know, given my involvement in the US Department of Defense (DoD) and Intelligence Agency programs in quantum computing from their inception in 1994, both as an advisor to the government and as a government-funded researcher, I perhaps have a unique perspective that, hopefully, can help us see the big picture of what emerged from the QCS program. (For readers who don’t know my background in this area, please pick up a copy of my new book, Schrödinger’s Killer App: Race to Build the World’s First Quantum Computer, and then very carefully set it down again.)

Quantum computing is not merely the sum of a qubit hardware technology and quantum control and quantum error correction and an algorithm, but rather is a system where the combination of those elements may work better or worse than the sum of the parts.  This, of course, leads to the research questions related to where those nonlinearities are, how to avoid or exploit them as appropriate, and how to build tools to allow easier control of those nonlinearities. Our particular team’s work on decoherence-free subspaces, in this context, was an early example of what we hoped to achieve in that it ignores the line between quantum computing and quantum error correction and focuses on how to build a protocol that performs both functions in an integrated way better than you'd get with individual pieces.

In particular I would first like to spell out that our team was a unified and integrated research program and not just a loose federation of independent research efforts. Frankly, going into this, I thought this unification would be difficult, if not impossible, to attain. My fear was that our team, consisting of tens of researchers with backgrounds in classical computer science, quantum physics, engineering, and so forth, would have a great deal of trouble even communicating with each other — to get beyond the specialty-specific jargon issue. This was perhaps a problem in the first few months, but we all learned from each other very rapidly, and by the end of the first few months of Phase I everybody was on the same page and had a much better view of the integrated whole than, say, when we wrote the original proposal. Sometimes throwing a bunch of people from disparate backgrounds into a room and leaving them to figure out how to communicate actually does work.

As this intra-team communication progressed, I personally came to better understand in just what ways a quantum computer is more than the sum of its individual parts. That is, a quantum computer is not just a particular qubit hardware platform cobbled together with quantum control, quantum error correction, quantum compilers, and a programming language. For example, as a theoretical physicist who has worked close to the qubit hardware for many years, it was difficult for me to wrap my brain around what a quantum programming language would even look like. Hence it was quite comforting to me, when I met our team member Peter Selinger in person for the first time at the kick-off meeting, that he confessed that, in spite of taking the lead, both internationally and on our team, in developing exactly such a programming language (Quipper), he had trouble wrapping his brain around some of the things we found in the government-furnished information, such as the description of errors in terms of Kraus operators. Hence it was that, while our team had experts in all of the disparate sub-areas needed to carry out the requisite resource estimates, at least initially I am sure none of us really saw the big picture. That changed very quickly as the project began to pick up steam, and our team was really able to integrate these disparate sets of expertise, so that what we were able to produce was a type of interactive and nonlinear systems analysis in which the various pieces of the quantum computer were being tailored in real time to work well with all the other pieces, in a way that at the outset I had not expected to happen so quickly.

An outsider once asked me what the QCS program was all about and, somewhat flippantly, I replied that it was the Manhattan Project for quantum computing. (The only difference is that, for the Manhattan Project, the end goal was to ensure that something blew up, but for the QCS program the end goal was to make sure that nothing ‘blew up’.) In hindsight I do not think this remark is flippant at all. Surely we were not, unlike the Manhattan Project, trying to build an actual working device, but like the Manhattan Project we had experts from disparate fields all working on different aspects of the development of the resource estimation. Let me give a particular example. Going into this program, no one on our team really understood the interplay between the resource estimates, the type of quantum error correction (QEC) code employed, and the type of error mitigation techniques, such as quantum control (QC) and decoherence-free subspaces (DFS), that needed to be used. We all understood that there was going to be a tremendous amount of overhead coming from the QEC codes, particularly if the errors were near threshold, where large amounts of concatenation would be required, but we did not really have a quantitative feel for how this all played out. As the baseline resource estimates of our team and the other three teams began to give these initially somewhat long time scales for running various algorithms, we were then able to quantitatively go back and investigate the quantum error mitigation methods with a much better understanding. As it was unclear which mitigation would work best with which government-provided physical machine description (PMD), we tried a number of them in parallel, including active quantum control with feedback, dynamical decoupling, and DFS. This is similar to the Manhattan Project, where they pursued both uranium and plutonium designs in parallel, as well as spherical-implosion and gun designs for inducing detonation. One of the things that we quickly realized is that for most of the PMDs the noise and decoherence were not well enough understood or characterized in the experimental literature for the theorists to actually optimize over the differing mitigation protocols.

Another interesting result to come out of our team’s ‘divide and conquer’ strategy was that, going into the program, I had the feeling that each of the QEC encodings would perform reasonably well. I would wager this was the assumption of many in the field. What clearly came out of this program is that some QEC encodings, for the particular PMDs investigated here, really do perform much better, and across all PMDs. Although resistant to this idea at first, we finally did convince ourselves that what some vocal members of other teams had been saying all along was likely correct: surface codes are clearly the best shot at moving towards a scalable quantum computer with current levels of errors in the gates and the qubit storage. However, we were able to quantify just in what way that was correct using the tools developed in our research program, so we did not have to just take another team’s word for it.

It is interesting to reflect on the role of error correction codes in the development of classical computers. In the 1950s von Neumann (and others) developed rudimentary classical error correction codes to address the problem that the digital computers of the time, such as the ENIAC and the EDVAC, were blowing out vacuum tubes about as fast as they could be replaced. These codes are not used today because of the technological growth curve that has taken us from vacuum tubes to the integrated circuit, where fault tolerance is built into the hardware. To foresee that classical technology curve in the 1950s, von Neumann would have had to predict the invention of the integrated circuit (1960s), the Mead-Conway VLSI design rules (1980s), and the consequent path of Moore’s Law that has given us the fault-tolerant computers of today. In many ways the QCS program is much like an attempt to carry out resource estimates for various classical computer algorithms while assuming that the hardware is some variant of the ENIAC or other vacuum tube technology. Indeed, a single ion-trap quantum computer with millions or billions of qubits, along with all the requisite trapping and gate-implementation hardware, would easily fill a large warehouse and weigh several tons, just as the ENIAC did in its day.

Very likely, if resource estimates for now-common classical algorithms had been carried out in the 1950s, the researchers would also have run into long time scales for their baseline resource estimates (for, say, the algorithm for playing World of Warcraft on an ENIAC) — particularly if large amounts of classical error correction needed to be deployed. Yet we now know that the then-unforeseen breakthroughs leading up to Moore’s Law allowed us to reduce the size and weight of such a computer so that it fits in my pocket (iPhone), and the run time for most algorithms of interest is measured in milliseconds. The wrong message to take away from a baseline resource estimate made in the 1950s for classical computers would have been that classical computing was doomed. In the same way it is the wrong message to take away from the QCS program that the large baseline resource estimates that PLATO and all the teams found are a sign that quantum computing is doomed. These baseline resource estimates (BREs) form an upper bound against which all unforeseen (but highly probable to be found) future technological advances in quantum computer hardware or architecture will be measured. All new technologies initially have a slow growth, but inevitably new discoveries and innovations come to bear and the growth then settles into an exponential pace of development. There will someday be a quantum Moore’s Law as these new discoveries are deployed, and in tens of years the warehouse-sized quantum computer will fit in my new grandnephew’s pocket.

Our team's BRE work came out of the need to understand the problem space and develop a baseline. Most work outside the algorithms community that has considered quantum algorithms at all has looked at Shor's or possibly Grover's. These algorithms do have the advantage that they are relatively simple and their practical value would be immediate. However, if we presume that quantum computers are going to be generally useful, rather than dedicated problem solvers, then we also assume the space of algorithms will continue to grow and that we'll have other, more complex algorithms that are useful. Focusing on the simplest algorithms has, in computer science, traditionally led to bad decisions about what is possible and what resources are necessary. Thus, the QCS program has examined a range of algorithms — from some that would be practical to run if a quantum computer of modest resources could be built, up to some that will likely never be practical to run.

Thus the QCS program is similar to the classical computer algorithm analysis performed in HACKMEM in 1972, in “Proposed Computer Programs, In Order Of Increasing Running Time” (problems 77–96). They consider several games up to checkers and what it would take to “solve” them. Then they also consider hex, chess, and go, analyzing their complexity. In particular, their results seemed to indicate that chess and go would be intractable on a classical computer. In contrast to what some might take away from the QCS baseline resource estimates, the HACKMEM group did not, therefore, conclude that classical computing is impossible or worthless. Rather, this sort of analysis is generally taken as being useful for having a broad view of what is and is not feasible, as well as presenting the basis for understanding when heuristic shortcuts will be necessary. Thus, chess is an example of a game where classical computational brute-force techniques are insufficient, so chess-playing computers use a combination of techniques — from stored game openings and endings to search techniques that increase the probability of finding a good move in a reasonable amount of time, at the possible expense of finding the best move. Contrary to what one might have taken away from the HACKMEM results in 1972, the best chess players in the world have been computers ever since IBM's Deep Blue beat world chess champion Garry Kasparov in 1997.
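To see why brute force looked hopeless, a quick back-of-the-envelope calculation suffices. The little Python sketch below uses standard order-of-magnitude estimates for branching factors and game lengths (my numbers, not HACKMEM's) to estimate raw game-tree sizes:

```python
# Back-of-the-envelope game-tree sizes: branching_factor ** typical_game_length_in_plies.
# The branching factors and game lengths are standard order-of-magnitude estimates of my
# own choosing, not numbers taken from the HACKMEM memo.
games = {
    "checkers": (8, 70),     # ~8 legal moves per position, ~70-ply games
    "chess":    (35, 80),    # ~35 legal moves per position, ~80-ply games
    "go":       (250, 150),  # ~250 legal moves per position, ~150-ply games
}

for name, (branching, plies) in games.items():
    digits = len(str(branching ** plies))
    print(f"{name:8s}: roughly 10^{digits - 1} positions to search by brute force")
```

Even checkers blows past anything a classical machine could ever enumerate, which is exactly why heuristic shortcuts, and not raw search, carried the day.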

The QCS BREs are also necessary as a baseline for comparison. For networking audiences, I typically point out that in networking we already have such a baseline: in networking programs we're either asked how much we can improve over TCP (for pure networking) or how much worse than TCP we are (for security-focused programs). For quantum computing, we haven't had a similar baseline to compare against. Even if it's imperfect, this is something that the community very much needs, to make it easier to compare claims and to make it harder for people to report results that work only on their carefully chosen problem. (It may be going too far to point out that various people have started referring to what D-Wave presents results against as "the D-Wave problem," since it was carefully chosen to match the capabilities of their machine.)

So what did we learn by the end of the QCS program? Along the lines of the Manhattan Project analogy, we have learned that it is indeed possible to put together a team of smart people with disparate backgrounds, ranging from physics to computer science, and hammer out a common language and mode of thinking that allowed us to successfully make reasonable BREs for the wildly different PMDs and algorithms. Going into this at the kick-off meeting, I was not sure even that would be possible. And yet here we are at the end of Phase II, and all four teams, taking often different approaches, found similar patterns for all of the algorithms investigated.

We learned in a quantitative way just how bad the QEC overhead is and in just what way that overhead affects the various run times for different choices off the menu. Before QCS there was only a qualitative feeling for how this would play out. In particular, QCS gave us a robust way to compare different QEC schemes to each other. Before QCS, different researchers chose QEC schemes based on what was either popular at the time or what they were most familiar with. I was not convinced that one QEC scheme, the surface code, would be dramatically better than the others — in spite of dramatic claims made by members of other teams. Where were the numbers to back up these claims? Well, the QCS program produced them, and the numbers produced by our own team told us that surface error-correcting codes were indeed probably the way to go.
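To give a flavor of the kind of comparison I mean (this is my own toy sketch, not a QCS result, and every constant in it is an assumption chosen for illustration), here is a back-of-the-envelope Python calculation using the standard rule-of-thumb scaling formulas for a surface code versus a concatenated code:

```python
# A toy, illustration-only comparison of two QEC strategies using standard rule-of-thumb
# scaling formulas. Every constant below (error rates, thresholds, prefactors, qubit
# counts) is an assumption chosen for illustration; these are not the QCS numbers.

p = 1e-3            # assumed physical error rate per gate
target = 1e-15      # assumed target logical error rate for a deep algorithm

# Surface code: p_L ~ 0.1 * (p/p_th)^((d+1)/2), threshold p_th ~ 1e-2, and very roughly
# ~2*d^2 physical qubits per logical qubit once ancillas are counted.
p_th_surface = 1e-2
d = 3
while 0.1 * (p / p_th_surface) ** ((d + 1) / 2) > target:
    d += 2          # surface-code distances are conventionally odd
print(f"surface code: distance {d}, ~{2 * d * d} physical qubits per logical qubit")

# Concatenated code (e.g. Steane [[7,1,3]]): p_L ~ p_th * (p/p_th)^(2^k) with an assumed
# effective threshold p_th ~ 1e-4 and 7^k data qubits per logical qubit at level k.
p_th_concat = 1e-4
if p >= p_th_concat:
    print("concatenated code: physical error rate is above the assumed threshold, "
          "so no amount of concatenation converges")
else:
    k = 1
    while p_th_concat * (p / p_th_concat) ** (2 ** k) > target:
        k += 1
    print(f"concatenated code: {k} levels, ~{7 ** k} data qubits per logical qubit")
```

With an assumed physical error rate of one part in a thousand, the surface code converges at a modest distance while the concatenated code, whose assumed effective threshold sits below that error rate, never converges at all. That, in caricature, is why the surface code won.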

Another critical point that came out of the QCS program was the interplay between quantum error correction (QEC), quantum control (QC), and other error reduction methods such as decoherence-free subspaces (DFS) and quantum dynamical decoupling (QDD). The physical machine descriptions (PMD) and government-furnished information (GFI) were presented to us in such a way that, unless something was done, the native error rates in the hardware were all just right at the threshold where the QEC would work. My understanding is that the government folks expected all the teams to use QC, DFS, and QDD to push these given error rates well below threshold, to the point where large-depth QEC codes could be avoided. It did not become clear until the meeting in Princeton in July of 2012 that nearly all the noise models provided to the performing teams by the government teams were of a sort where quantum control techniques would not work.

The most critical point to come out of that meeting was that, in fact, in most quantum computing experiments these noise spectra are either unknown or unmeasured, which I suspect led to the simplest assumption, which was to make them all Gaussian, precisely the type of noise that the most powerful quantum control methods are useless against. At one review meeting it was deemed important to get the experimenters and the quantum controllers all into the same room for a two-day meeting to hash out just what the theorists needed from the experiments to optimize the control techniques. Such a meeting, I think, would still be critical for the advancement of the field of quantum computing. It was, in my opinion, this being stuck right at the error correction threshold that led to the huge code depths and the long time scales of the resource estimates.
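A toy calculation makes the point. Using the same assumed surface-code scaling as in the sketch above (again, illustrative constants, not the program's GFI), one can solve for the code distance needed to reach a fixed target logical error rate as the physical error rate creeps up toward the threshold:

```python
import math

# Why sitting at the threshold hurts: with the assumed scaling p_L ~ 0.1 * (p/p_th)^((d+1)/2),
# solve for the distance d needed to reach a target logical error rate as the physical
# error rate p approaches the threshold p_th.
target = 1e-15
for ratio in (0.1, 0.5, 0.9, 0.99):                 # ratio = p / p_th
    d = 2 * math.log(target / 0.1) / math.log(ratio) - 1
    d = max(3, math.ceil(d) | 1)                    # round up to the next odd distance
    print(f"p/p_th = {ratio:4.2f}: distance ~{d}, ~{2 * d * d} physical qubits per logical qubit")
```

The required distance, and with it the qubit overhead and the run time, blows up as the ratio approaches one, which is exactly what happened when the native error rates were pinned right at threshold.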

It appears to me that, rather than having a generic quantum control or other decoherence-mitigation methodology that would be applied to all hardware platforms, it will be critical in the short term to develop quantum control techniques that are specifically tailored to the noise spectrum of each of the physical devices. This suggests a program where the quantum control and QEC community sit down with the experimenters and hash out just what experiments need to be done and what type of data is needed in what form. Next, specific noise-spectrum models would be developed from those data, and finally the QEC and QC theorists would produce tailored quantum control techniques to match the noise spectra. Close collaboration with the experimentalists is also needed so that the QC theorists understand, particularly for active quantum control, just what types of weak and strong measurements are possible in each experimental implementation. For example, in a recent experiment with ion traps that demonstrated Uhrig dynamical decoupling (UDD) as a decoherence mitigation protocol, the experimenters deliberately added noise to their system, noise with a spectrum that UDD was designed to correct, and then corrected it. This is clearly not the way to go. The QC methods should be tailored to the native noise spectrum and not the reverse.
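To illustrate why matching the control sequence to the noise spectrum matters so much, here is a toy Monte Carlo simulation (my own sketch, unrelated to that ion-trap experiment) of a single dephasing qubit: a lone spin echo essentially eliminates slow, quasi-static frequency noise but does nothing at all against white noise.

```python
import numpy as np

# A toy Monte Carlo showing why control sequences must match the noise spectrum: a single
# spin echo removes quasi-static dephasing noise almost perfectly but is useless against
# white noise. All parameters are assumed, dimensionless toy values.
rng = np.random.default_rng(0)
n_shots, n_steps, dt, sigma = 20000, 200, 1.0, 0.05

def coherence(noise, echo):
    """Average coherence |<exp(i*phi)>| over the given frequency-noise realizations."""
    sign = np.ones(n_steps)
    if echo:
        sign[n_steps // 2:] = -1.0        # the echo pulse flips the sign of later phase
    phase = (noise * sign * dt).sum(axis=1)
    return abs(np.exp(1j * phase).mean())

white = rng.normal(0.0, sigma, size=(n_shots, n_steps))                        # uncorrelated step to step
slow = np.repeat(rng.normal(0.0, sigma, size=(n_shots, 1)), n_steps, axis=1)   # frozen within each shot

for label, noise in (("white noise", white), ("quasi-static noise", slow)):
    print(f"{label:18s}: no echo {coherence(noise, False):.3f}, "
          f"with echo {coherence(noise, True):.3f}")
```

Swap in the actual measured noise spectrum of a given device and the same kind of exercise tells you which decoupling sequence, if any, is worth the pulses.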

Another big success of the QCS program was the ability to exploit the Gottesman-Knill theorem in order to characterize errors in large-depth codes that cannot be handled by a straightforward Monte Carlo method, due to the exponentially large Hilbert space. In the proposal-writing stage in 2010 we had some vague ideas that something like this would be possible, in that the gates in the Clifford group, while efficiently simulatable classically, might still span enough of the computational circuit space that we could compute errors in large-depth circuits without having to build a quantum computer to do so. The concrete result was the resource-estimating tool (QASM-P) that we developed, and I was really surprised how well this worked. Such a tool will be a mainstay for quantum circuit characterization for years to come, at least until we have enough quantum computers around that we can start using few-qubit machines to characterize few-more-qubit machines in a bootstrapping approach. QASM-P was in fact just one of several stand-alone tools developed by our team on the fly to address particular computations as needed. What I find amazing is that even though these tools were developed by different team members, it was possible to integrate their usage across all the resource estimates and produce coherent results.
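For readers who have not seen the trick, here is a minimal sketch of the underlying idea (an illustration of the Gottesman-Knill-style bookkeeping, not QASM-P itself): because Clifford gates map Pauli operators to Pauli operators, an injected Pauli error can be pushed through an arbitrarily deep Clifford circuit with a few bit flips per gate, with no 2^n-dimensional state vector in sight.

```python
import random

# Minimal Clifford-based error tracking. Each qubit's accumulated Pauli error is stored as
# an (x, z) bit pair: I=(0,0), X=(1,0), Z=(0,1), Y=(1,1). Signs are ignored since they do
# not change which qubits end up carrying an error.

def propagate(gate, qubits, frame):
    """Conjugate the current Pauli error frame through one Clifford gate."""
    if gate == "H":                                  # H swaps X and Z
        q = qubits[0]
        frame[q] = (frame[q][1], frame[q][0])
    elif gate == "S":                                # S: X -> Y, Z -> Z
        q = qubits[0]
        x, z = frame[q]
        frame[q] = (x, z ^ x)
    elif gate == "CNOT":                             # CNOT: X_c -> X_c X_t, Z_t -> Z_c Z_t
        c, t = qubits
        (xc, zc), (xt, zt) = frame[c], frame[t]
        frame[t] = (xt ^ xc, zt)
        frame[c] = (xc, zc ^ zt)

# Toy Monte Carlo: after every gate, inject a random Pauli error on each touched qubit
# with probability p_err, and count how often any qubit ends the circuit with an error.
n_qubits, p_err, shots = 4, 1e-2, 20000              # assumed toy parameters
circuit = [("H", (0,)), ("CNOT", (0, 1)), ("S", (1,)), ("CNOT", (1, 2)), ("CNOT", (2, 3))]

faulty = 0
for _ in range(shots):
    frame = [(0, 0)] * n_qubits
    for gate, qubits in circuit:
        propagate(gate, qubits, frame)
        for q in qubits:
            if random.random() < p_err:              # depolarizing: X, Y, or Z equally likely
                ex, ez = random.choice([(1, 0), (0, 1), (1, 1)])
                frame[q] = (frame[q][0] ^ ex, frame[q][1] ^ ez)
    faulty += any(f != (0, 0) for f in frame)
print(f"fraction of shots ending with a residual Pauli error: {faulty / shots:.4f}")
```

A full tool has to track stabilizers, signs, measurements, and the decoder as well, but the scaling advantage is already visible here: the cost grows with the number of gates and qubits, not with the dimension of the Hilbert space.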

This issue of large-scale circuit performance characterization, to my mind, is the real roadblock to building large quantum computers. It is related to the issues that currently plague, for example, ion-trap computers with just about 10 qubits. The known protocols for characterizing the gates and states are quantum process and quantum state tomography, protocols that require a number of data points that grows exponentially in the number of qubits. Blatt is fond of saying that he could easily build a 16-qubit ion-trap register but the classical resources for characterizing the device's performance are lacking. Here the Clifford-group approach does not help. The example I like to use is that when Intel tries to characterize the performance of its latest chip, it does not do so using the ENIAC. Hence there will come a time when we will have to use few-qubit machines to characterize the performance of few-more-qubit machines and in this way bootstrap our way up to many-qubit machines. However, this quantum-to-quantum bootstrapping characterization protocol does not yet exist, even in theory. Until we get around this problem we'll be limited to machines of only 10–20 qubits by pursuing the bottom-up approach of adding a new qubit to the device every year or so.
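The arithmetic behind that complaint is easy to make explicit. A rough count, using the textbook prescriptions of about 3^n Pauli measurement bases for state tomography and about 12^n preparation-and-measurement settings for process tomography, and ignoring the many shots needed per setting, looks like this:

```python
# The exponential cost of full tomography: roughly 3^n measurement bases for state
# tomography and roughly 12^n preparation-and-measurement settings for process tomography
# (textbook prescriptions; shot counts per setting are ignored entirely).
for n in (4, 8, 10, 16, 20):
    print(f"{n:2d} qubits: ~{float(3**n):.2e} state-tomography settings, "
          f"~{float(12**n):.2e} process-tomography settings")
```

At 16 qubits, state tomography alone needs tens of millions of measurement settings and process tomography needs on the order of 10^17, which is Blatt's point.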

So, in summary: when I first read the Broad Agency Announcement (BAA) call for proposals for the QCS program, I thought the program was wildly overambitious in scope. I also thought that any winning team would quickly fragment into the different team-member groups working in isolation and that it would be impossible to extract any big picture from the outcome. As my fellow team members will attest, I was hesitant to even join the project for these reasons. I now, quite frankly, state that I was wrong. Our team, and in fact all the teams, pulled together on what really was an immense project, and the output, as stated above, was clearly much greater than the sum of its parts. As readers of this document will know, I have been involved in the DoD and Intelligence Community program in quantum computing since its inception in 1994 (an inception I myself helped to incept). In those nearly 20 years I have witnessed physics experiment groups, physics theory groups, computer science and algorithm groups, complexity theory groups, and error correction groups all working, really, mostly in isolation from each other.

Until the QCS program I did not really have even an intuitive sense of how all of this work would somehow fit together into the development of a quantum computer. One of the critical successes of the QCS program is that it developed the tools and, more importantly, the common language needed for me (and now many others) to see the big picture. To mangle an old metaphor, I feel like a blind man who for 20 years has been groping his way around an elephant, only to find that a cruel trick had been played on me and that I was not blind but just blindfolded. The QCS program has removed that blindfold for me, and I'm sure for many others. I have always been an outspoken proponent of the government program to pursue the development of a quantum computer. But I have always had some nagging doubts. Perhaps I had missed something. Perhaps there is some fundamental physical reason or practical technological reason why a large-scale universal quantum computer could never be built — as many distinguished scientists still claim to this day. (See Chapter 15 of Aaronson's book for a list of such claims.)

Well, after participating in the QCS program I have gone from a somewhat hesitant promoter of quantum computing development to being all in. There is nothing that we have missed. The development of a large-scale universal quantum computer is a formidable technological challenge, and it will likely take tens of years to complete, but there is no chance that it is a technological or physical impossibility. To my mind the QCS program results have ruled the latter out — or at least painted it into a very small corner of improbability. The development of a large-scale universal machine is a mathematical certainty, and it is just a matter of time before we get there.

Of this I have no longer any doubt.

Acknowledgements: I would like to thank Scott Alexander from Applied Communication Sciences, our team's Principal Investigator, for comments and suggestions, and particularly for adding the bit about HACKMEM. This blog post was supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior National Business Center contract number D12PC00527. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/NBC, or the U.S. Government.