Originally published April 29, 2024
I’ve used my experience as a marathon runner before as an introductory way to illustrate some aspects of quantum computing and post-quantum cryptography—but the reality is, the marathon is not my favourite distance. I much prefer the half-marathon—still a significant distance at 21.1 km, and a serious tactical challenge to complete successfully. Training for a half-marathon is easier to fit into a busy professional and personal schedule, but it nonetheless requires dedication and effort. After taking up running as a hobby in mid-2009, I ran my first half-marathon in early 2010, notching a respectable finish of 1:46:07. In my next three attempts at the distance, I made incremental improvements of a minute or two, eventually running 1:40:21 in the fall of 2012.
In 2012 and 2013 I began taking a much more serious approach to my running, committing to workouts, coaching and regular strength training. I felt my fitness improving and set an aggressive personal goal to break 1:36:00 in my next half-marathon which I ran in May 2013. It turned out to be a very good day—imagine my delight when I crossed the finish line in 1:34:27—beating my previous time by almost six minutes, finishing in the top 10 overall and placing second in my age category. It would be over five years and 11 more attempts until I would beat this personal record, and then by only 20 seconds.
Mind your step
Such a sudden, dramatic improvement is known as a step change, a term borrowed from mathematics but now commonly used in business and economics to describe any large discontinuous change. Wiktionary, the online dictionary, helpfully suggests quantum leap as a possible synonym. It’s an appropriate suggestion—three announcements in the early months of 2024 lead me to believe that we might be coming close to a step change in how noise-induced errors in quantum computing can be corrected. It has even been claimed that this step change may take us out of the current industry state of NISQ—Noisy Intermediate-Scale Quantum—to the stage of resilient quantum computing, which in turn may well usher in the long hoped-for goal of quantum advantage. But just as my improvement in half-marathon performance came only after a great deal of effort, dedication and training, a step change in the reliability of quantum computing will also be the result of significant investments in research and development by large and small industry players alike. And let’s not hold our breath just yet—I’m not convinced that the step change has occurred, despite claims made by a number of quantum companies. But I do believe we’re coming close.
Remember, qubits are extremely delicate and sensitive to even the slightest disturbance in their environment, which causes decoherence and errors. Three methods are being explored to get around this problem. The first is error suppression—trying to reduce noise by shielding the qubits, including super-cooling them to temperatures just a fraction of a degree above absolute zero. Although somewhat effective for now, this approach has its limitations—the power requirements alone are impractical for commercial use, and scalability is also a problem. A second method is the very interesting idea of error mitigation, introduced by IBM in mid-2023 with a couple of papers published in Nature. Error mitigation doesn’t try to reduce the noise; rather, it uses mathematical techniques to undo the effects of noise on quantum calculations. It has been successful enough that IBM has coined the term quantum utility, suggesting that current quantum computers can move beyond the experimental to be commercially or scientifically useful, performing at least as well as or slightly better than their classical counterparts. But error mitigation introduces its own overhead and has so far only proven viable for a few types of use cases.
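To make the idea of error mitigation a little more concrete, here is a minimal sketch of one of the better-known techniques in this space, zero-noise extrapolation: run the same circuit at deliberately amplified noise levels, then extrapolate the measurements back toward the zero-noise limit. The numbers below are made up purely for illustration, and real implementations are considerably more sophisticated.

```python
import numpy as np

# Hypothetical results of running one circuit at amplified noise levels.
noise_scale = np.array([1.0, 1.5, 2.0, 3.0])      # noise amplification factors
measured    = np.array([0.82, 0.74, 0.66, 0.51])  # noisy expectation values (made up)

# Fit a straight line through the noisy points and read off its value at
# zero noise -- the mitigated estimate of what an ideal machine would return.
slope, intercept = np.polyfit(noise_scale, measured, 1)
print(round(intercept, 3))   # ~0.973, better than any single noisy run
```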
Then, we have the ultimate goal of fault tolerance, also known as error correction. In classical computing, fault tolerance is achieved through redundancy—adding extra bits and spreading out our data so that in the unlikely event that one bit accidentally changes value, the data can still be reconstructed. The beauty of classical computing is its inherent robustness. Classical bits only hold a value of zero or one, nothing else—and they fail very infrequently, at a rate of approximately one in a quintillion, or 10⁻¹⁸. Therefore, it doesn’t require a lot of redundancy to provide extremely high reliability.
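As a toy illustration of that classical redundancy (a three-copy repetition code with a majority vote; real systems use far more efficient codes, but the principle is the same):

```python
def encode(bit: int) -> list[int]:
    """Store three identical copies of a single bit."""
    return [bit, bit, bit]

def decode(copies: list[int]) -> int:
    """Majority vote recovers the original value even if one copy has flipped."""
    return 1 if sum(copies) >= 2 else 0

codeword = encode(1)           # [1, 1, 1]
codeword[0] ^= 1               # simulate a stray bit-flip -> [0, 1, 1]
assert decode(codeword) == 1   # the data survives the error
```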
Fault tolerance is just a bit (pun intended) more complicated in quantum computing, partly because of the high failure rate of qubits—estimated at one in 1,000 or worse. And, while classical errors are just bit-flips (zero becomes one or vice versa), quantum errors can be much more subtle due to the ability of qubits to store so much more information. Superposition also means that classical redundancy techniques don’t work directly—an unknown quantum state can’t simply be copied, and measuring a qubit to check it disturbs the very state you are trying to protect. Instead, quantum computing engineers have introduced the notion of logical qubits assembled out of multiple physical qubits, so that the quantum software, or gate logic, can operate on the multiple physical qubits as if they were a single unit. Additional operations and measurements, known as the error correction code, must also be executed to detect and correct errors in the physical qubits underlying the logical qubit. The best error correction codes until now have required the number of physical qubits per logical qubit to increase quadratically with the number of errors corrected—driving a need for hundreds of thousands, or even millions, of physical qubits before we can even begin thinking about quantum advantage or quantum supremacy.
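To see where that quadratic growth comes from, here is a rough back-of-envelope sketch. The constants are approximations commonly quoted for surface codes (a distance-d code uses roughly 2d² physical qubits per logical qubit and corrects up to (d-1)/2 errors), not any vendor’s exact figures:

```python
def surface_code_overhead(errors_to_correct: int) -> int:
    """Approximate physical qubits per logical qubit for a surface code
    that can correct the given number of simultaneous errors."""
    d = 2 * errors_to_correct + 1   # required code distance
    return 2 * d * d                # ~2*d^2 physical qubits (rough rule of thumb)

for t in (1, 2, 5, 10):
    print(t, surface_code_overhead(t))
# Prints: 1 18, 2 50, 5 242, 10 882 -- the overhead grows quadratically,
# which is why estimates for useful machines run to hundreds of thousands
# or millions of physical qubits.
```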
Stepping up to the challenge
Let’s look at the three announcements claiming significant improvements in quantum error correction.
First, in February 2024, the Canadian company Nord Quantique announced what they called a one-to-one mapping of physical to logical qubits. The company has developed a technique of using microwave pulses to control photon-based qubits and claims that it can achieve error correction within the physical qubit itself—hence the one-to-one mapping, potentially reducing the number of physical qubits required for useful quantum computing by a factor of thousands. However, what was actually achieved at the time of the announcement was increasing the lifespan of a single qubit by 14 per cent, with simulations indicating that the technology should be scalable to multiple qubits. Nord Quantique intends to demonstrate a multi-qubit system later in 2024. I think that Nord Quantique’s results are very promising, and I am proud to see this kind of work being done in my home country, but it’s also too early to qualify as a true step change. Let’s wait and see.
IBM came next with a paper published in Nature in March 2024, claiming a ten-fold improvement in the overhead needed for error correction. IBM’s approach is to rethink the mathematics and geometry of how error correction codes operate, making them more efficient. Current codes operate on qubits arranged on a two-dimensional grid, and thus are known as surface codes. Surface codes belong to a class of error correction codes known as qLDPC—quantum Low-Density Parity Check—which simply means that each error check involves only a handful of qubits, and each qubit takes part in only a handful of checks.
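To make “low-density” a little more concrete, here is a toy classical analogue (not a quantum code): each parity check looks at only two bits, each bit appears in at most two checks, and yet the pattern of failed checks pinpoints the error.

```python
codeword = [1, 1, 1]          # three-bit repetition encoding of the value 1
checks = [(0, 1), (1, 2)]     # each parity check touches just two bits

def syndrome(bits, parity_checks):
    """Return 0 for each satisfied check; a 1 flags a detected error."""
    return [bits[i] ^ bits[j] for i, j in parity_checks]

print(syndrome(codeword, checks))   # [0, 0] -> no errors detected
codeword[1] ^= 1                    # flip the middle bit
print(syndrome(codeword, checks))   # [1, 1] -> both checks fire, locating the flip
```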
The scalability of qLDPC depends on how many connections you can make between qubits in the system. But the two-dimensionality of surface codes implies that you can only connect a qubit to its four immediate neighbours, and this in turn drives the quadratic increase in overhead as you try to add more qubits to correct more errors. IBM researchers considered the possibility of arranging qubits on a three-dimensional donut shape known as a torus to enable more connections between physical qubits. Now, IBM didn’t really build a torus-shaped quantum computer—although that would have been very impressive to see. Rather, they were able to fold the surface grid of qubits and use some creative engineering to build additional connections, making a virtual, mathematical torus that worked equally well.
The result of IBM’s work is the creation of a system of 12 fault-tolerant logical qubits based on only 288 physical qubits, where the old surface code method would have required some 3,000 qubits—thus the ten-fold improvement in error correction overhead. This scale of quantum hardware is already feasible. IBM has already deployed 127-qubit Eagle computers in several locations around the world, and the 433-qubit Osprey was delivered in 2022. (The 1,121-qubit Condor was shown in late 2023 but has turned out to be more of an engineering proof of concept.) IBM’s Flamingo architecture, due in 2024, will enable multiple 133-qubit chips to work in parallel, a better approach to scalability. I’m not sure that the step change we’re looking for is here yet, but IBM is certainly putting in all the work—the scientific and engineering research necessary—to make it happen sooner rather than later.
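Using the figures quoted above (288 and roughly 3,000 are the numbers reported in this context; the surface-code estimate depends on the target error rate), the per-logical-qubit overhead works out roughly as follows:

```python
logical_qubits   = 12
qldpc_physical   = 288      # IBM's qLDPC-based result
surface_physical = 3_000    # approximate surface-code equivalent quoted above

print(qldpc_physical / logical_qubits)     # 24.0  physical qubits per logical qubit
print(surface_physical / logical_qubits)   # 250.0 physical qubits per logical qubit
print(surface_physical / qldpc_physical)   # ~10.4 -> the "ten-fold" reduction in overhead
```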
Not to be outdone, Microsoft and Quantinuum made their own announcement of quantum error correction in April 2024. The two companies claim to have achieved “Reliable Quantum Computing” as a next step beyond the current industry state of NISQ, or Noisy Intermediate-Scale Quantum, but not yet at the level of Quantum Advantage or Quantum Supremacy. Whether you call it Reliable Quantum or Quantum Utility, I think it is a very promising approach—I’m all for achieving and learning from incremental gains rather than trying to make an enormous but risky leap all at once.
Microsoft, not known as a hardware vendor itself, has been partnering with Quantinuum and others for a few years already. Microsoft’s contribution to the Reliable Quantum Computing partnership was the development of what they call a qubit-virtualization system which, when layered on Quantinuum’s hardware, allows the assembly of logical qubits from physical qubits. To make a long story short, Quantinuum’s physical qubits exhibit a pretty normal error rate of 0.008, or on average eight errors per 1,000 executions of a quantum circuit (a set of software instructions). Using Microsoft’s qubit virtualization, the team created logical qubits and observed an error rate of 10⁻⁵, or one error per 100,000 executions, roughly an 800x improvement.
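A quick sanity check of that improvement factor, using the two error rates quoted above:

```python
physical_error_rate = 0.008    # ~8 errors per 1,000 circuit executions
logical_error_rate  = 1e-5     # ~1 error per 100,000 executions

print(round(physical_error_rate / logical_error_rate))   # 800 -> the ~800x improvement
```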
As with the other companies, Microsoft and Quantinuum claim that when scaled to 100 or 1,000 logical qubits, their system will definitively outpace classical computers, at least for certain types of problems. However, the experiment that was published created four logical qubits out of 30 physical qubits. It’s a step in the right direction, and the improvement in quality is very impressive, but once again I’m hard pressed to recognize it as a true step change just yet.
One step at a time
Nord Quantique, IBM, and the Microsoft-Quantinuum coalition are to be congratulated for the advances they have demonstrated in quantum error correction. Each has taken a different approach, but all have shown significant progress in dealing with errors and in advancing the industry toward fault-tolerant quantum computing. But let’s recognize that in all three cases, the work is still in its early stages and will take more time and effort to prove that it can scale up.
Just as in racing half-marathons, the hard work being put in is yielding incremental improvements for now; sooner or later, when we least expect it, the step change will happen.