IBM’s TechXchange community brings together people from around the world who share an interest in the company’s technology and its practical applications. Regular webcasts provide updates on new product announcements, client case studies and other topics of interest to the broader IBM network. One highlight is the annual TechXchange conference, held this year from October 21 to 24 in Las Vegas, and for the first time in my career I was able to attend.
IBM reached out to me back in March 2024, asking if I would be interested in submitting a proposal for a presentation at TechXchange on the subject of quantum computing. At the time, I had just started providing advisory services to Leap QuantiK, a Toronto startup with an interest in developing thought leadership around governance and responsible use of quantum technology. To make a long story short, in collaboration with the Quantum Algorithms Institute (QAI) based in Victoria, British Columbia, our proposed presentation, entitled “Quantum for Good,” was accepted, and I found myself on my way to Nevada.
TechXchange is a deeply technical conference, with many presentations focused on the how-to aspects of technology—new coding techniques, hands-on lab sessions and opportunities to take certification exams were the norm. In that sense my presentation was a bit of an outlier, since questions of governance usually consider not the how but rather the why, the what-for, or better yet “what is it good for?” I planned to discuss beneficial use cases for quantum computing, mitigating the risks of malicious use, and finally a governance structure based on work done by IBM and the World Economic Forum.
The first rule of quantum computing
While building the presentation, my collaborator, Dr. Shohini Ghose, the CTO of QAI, emphasized that one of the guiding principles should be avoidance of hype. I wholeheartedly agreed, and the presentation led off with the statement that the first rule of quantum computing is that we speak very carefully about quantum computing. The British academic Carolyn Ten Holter and her colleagues put it best in a recent paper entitled Reading the Road: “Projects and researchers have a responsibility to manage expectations through careful use of language and the avoidance of hype.”
Given the trajectory that AI has taken over the last couple of years, this is sound advice. Personally, I feel like business and society at large have rushed to adopt AI, especially generative AI, without enough consideration for governance and responsible deployment, or sometimes even a thorough consideration of benefit and return on investment. Some companies, like IBM and Microsoft, have stayed active in the AI governance community, and it is encouraging to see more follow suit—I recently attended an excellent event held by Deloitte entitled “Trustworthy AI,” for example.
Quantum is, of course, a very different technology from AI and seeks to solve different problems. My analogy is that AI is similar to a self-driving car, while quantum looks more like an electric vehicle. Generative AI is a different kind of interface, taking over tasks that people used to do for themselves—and like self-driving cars, this can still occasionally lead to a crash. Quantum on the other hand is essentially a different, faster kind of engine under the hood, regardless of who is driving the vehicle. Governance concerns will be different, but avoidance of hype is a common theme. The nice thing is that, since quantum is still not yet ready for large-scale production deployment, we have the time now to establish a governance framework up front.
I suggest that there are five key points you need to remember about quantum computing as we avoid hype and consider its governance and responsible deployment:
It’s a radical paradigm shift. Computing with qubits is completely different from computing with binary bits. Aside from the potential speed gains, it also implies a whole new approach to software development, right down to the lowest level of machine instructions.
It will break Moore’s Law. The famous rule of thumb that the number of transistors on a chip roughly doubles every two years will not hold up much longer, because transistors simply cannot be miniaturized much further. Large-scale, fault-tolerant quantum computers will bypass this restriction and, for certain classes of problems, deliver exponentially faster computation.
It’s good at deep math problems. And by extension, not good at much else. If you have complex optimization problems, search algorithms, or large sets of linear equations to solve, quantum can probably help.
It’s inherently stochastic. Quantum computer scientists like to speak of expectation values: essentially, averages over the probability distribution of results returned by a quantum algorithm. This is OK because many of the deeply complex problems quantum can help with normally admit approximate solutions. But if you’re looking for absolute certainty, you won’t find it in quantum.
It’s not a general-purpose computing solution. You won’t have a quantum cell phone or quantum laptop, and you will not be running a quantum spreadsheet or word processor. Quantum will find its role as one service in a hybrid computing environment where AI and classical high-performance computing also contribute to the solution architecture.
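The stochastic point above can be made concrete in a few lines of Python. This is a purely classical sketch: the 50/50 coin flip is an illustrative stand-in for measuring a qubit in equal superposition, where any single “shot” yields 0 or 1 and only the aggregate over many shots approximates the expectation value.

```python
import random

def measure_superposition(shots, p_one=0.5, seed=42):
    """Simulate repeated measurement of a qubit in superposition:
    each shot 'collapses' to 1 with probability p_one, else 0."""
    rng = random.Random(seed)
    results = [1 if rng.random() < p_one else 0 for _ in range(shots)]
    # The expectation value emerges only as the average over many shots.
    return sum(results) / shots

# A single shot tells you almost nothing; the distribution is the answer.
print(measure_superposition(10))       # small sample: noisy estimate
print(measure_superposition(100_000))  # large sample: converges toward 0.5
```

This is why quantum results are quoted with shot counts and error bars rather than as single definitive answers.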
But really, what’s it good for?
It’s easy to get caught up in hype thinking about possible use cases for quantum computing. Too often, it’s promised that quantum will solve global warming or world hunger, with no thought of what it would really take to even get started. So, let’s not fall into the trap of inflated expectations followed by inevitable disappointment. I prefer to think of quantum computing in three phases: what’s possible now, what will be feasible in the intermediate term, and what is probable in the longer term. Throughout, I want to outline the practical uses for quantum computing that will maximize social good.
First, the most immediate use cases for quantum computing are in mathematical optimization. There are two classic problem types: the “travelling salesperson” problem, which seeks the best route through a network, and the “knapsack problem,” which tries to maximize total value under a fixed set of constraints. Mathematically they are closely related, and problem formulations like Quadratic Unconstrained Binary Optimization (QUBO), paired with algorithms like the Quantum Approximate Optimization Algorithm (QAOA), already exist and can run on both simulators and smaller-scale quantum hardware with error mitigation, showing promising accuracy and performance relative to classical methods.
Let me give an illustration. Earlier this winter, my doorbell rang at 8:30 one evening. When I answered, it was a DHL delivery driver dropping off a package. Trying to be friendly, I commented on how he was working late, and he answered that he had to make 150 stops that day. It didn’t take me long to realize that even if he worked a ten-hour shift, that would mean one stop every four minutes with no breaks[1]. Optimizing large-scale delivery networks is a very complicated problem requiring enormous computing resources to deliver a reasonable approximation—so if quantum computing can do better, I would hope that it would make that DHL driver’s life easier. One of the key points of quantum governance is that the benefits of the technology should accrue equitably to all stakeholders including employees and the public as well as business owners.
A typical example of the knapsack problem is in financial services: optimizing a portfolio, for example, where there are many kinds of investments available, each with a different risk profile and potential return. A financial advisor must adhere to a “Know Your Client” (KYC) form that documents the client’s investment preferences and risk tolerance. Maximizing returns while keeping risk within the client’s comfort level, and taking into consideration personal, subjective ethical guidelines such as, perhaps, avoiding military stocks or preferring environmentally friendly companies, is a complicated problem. In early trials, quantum algorithms have already yielded improved optimizations faster than classical methods. Increased stability and lower risk, for the financial institution and investor alike, is a positive outcome, especially in the wake of the 2007-08 financial crisis.
An additional benefit of using quantum to improve optimization is environmental. For example, an optimized delivery route will use vehicles and fuel more efficiently, eliminating ‘empty miles’ and reducing emissions. Airlines are also experimenting with quantum optimization to better balance the cargo load in aircraft. It is hoped that an improved distribution of the cargo weight will reduce fuel consumption, especially during takeoff and landing, again reducing emissions.
Second, quantum computing is a promising approach for modeling and simulations. I’ve written about this before, in the context of battery design for EVs and power grids as well as life sciences and pharmaceutical discovery. The idea is that in any kind of materials discovery, determining molecular structure and interactions is extremely expensive and time-consuming in the laboratory. Computer simulations should provide savings, but classical computers are not powerful enough. IBM has estimated that a relatively simple molecule like caffeine, with only 24 atoms (C8H10N4O2), would require 10^48 classical bits, approximately as many as there are atoms in the Earth. A quantum computer, the company figures, would need about 160 coherent qubits. IBM’s 127-qubit Eagle processor still needs error mitigation, so we’re getting close but not there yet. Proteins and other molecules needed for pharmaceutical research are far more complicated than caffeine, but at least the research is going in the right direction.
Lithium-ion batteries are pretty good for everything from your cell phone to your e-bike to your EV. But they do have safety problems, and the typical EV driving range is still only about 500 kilometers. As power grids rely more and more on renewable energy sources like wind and solar, they need to make significant investments in storage capacity to manage the variability of supply and demand. In Canada, for example, the Globe and Mail newspaper has reported that storage will need to increase ten-fold for the country to meet its stated goal of a net-zero grid by 2035. Automotive manufacturers and grid operators alike are investing in quantum computing research to help model new battery materials for better reliability.
If quantum computing can make pharmaceutical discovery faster and more efficient, the obvious social benefit would be a healthier population with equitable access to medicines that are cheaper, safer and more effective—as well as reduction of potentially harmful human and animal testing. And batteries based on new molecular compounds, that are safer, longer lasting and of higher capacity, will lead to reduced carbon emissions and more stable, resilient power grids.
The third application space for quantum computing is in machine learning and search. This is probably the longest-term prospect, given the scale of quantum hardware required, but it nonetheless holds the promise of significant benefits. Machine learning (ML) and neural networks depend heavily on matrix manipulation, linear algebra and a certain amount of calculus, especially when Nobel laureate Geoffrey Hinton’s technique of backpropagation is applied. Quantum Machine Learning (QML) and Quantum Neural Networks (QNNs) are promising areas of research. Grover’s Algorithm, dating from 1996, is a quantum approach to unstructured search that provides a quadratic speedup over classical methods.
Financial and credit card fraud hurts everyone, and costs the banking industry billions annually. Many banks are applying machine learning in an effort to identify fraud more accurately and eliminate false positives. They are researching QML approaches to speed up fraud detection and improve customer service. Wells Fargo is partnering with IBM in exactly this use case.
I recently had to have an MRI scan done on my knee due to a running-related injury and I found the process disconcerting. I was in the machine for a good 20 minutes while it worked noisily, and despite my best efforts to keep still, my leg trembled slightly, causing some distortion in the image. This is not uncommon, and the medical industry is applying machine learning to try to enhance poor-quality MRI images. Adding quantum to this ML use case can make image interpretation faster and image enhancement more accurate. Putting aside my torn meniscus, a much better example of the beneficial use of QML was recently documented in the scientific journal Nature—interpretation of brain MRI scans for the early detection of Alzheimer's Disease.
Another expected beneficial application of quantum to ML is in large-scale modeling. IBM, collaborating with NASA, has built a geospatial model that will help predict and manage the effects of climatic events like floods, droughts, wildfires and hurricanes. I’ve seen it demonstrated at various conferences and in my conversations with the IBM team, they are beginning to work with IBM Quantum to enhance the ML algorithms underlying the model. The Bank of Canada recently described applying quantum computing to Monte Carlo simulations and machine learning, achieving up to 20-fold improvement in speed when executing their macro-economic models and commercial bank stress tests.
Where could it all go wrong?
Having seen several potential beneficial uses for quantum computing, the next question, clearly, is where can it be put to nefarious purposes? I’ll highlight two use cases and two other areas of concern.
First is the obvious threat quantum computing poses to cryptography. It’s been known for years that Shor’s Algorithm, when coupled with a cryptographically relevant quantum computer, will be able to find the prime factors of large integers very quickly and thus render cryptographic standards like RSA-2048 vulnerable. My own best estimate is that we have not more than 10 years left before the threat materializes.
Second, just as quantum modeling of molecular states yields benefits in areas such as pharmaceuticals and materials science, it could just as easily be misused to create new types of chemical or biological weapons. This may not be an imminent threat, but it is real and will require a defence.
I also worry about privacy. Quantum sensing technology has applicability in everything from facial recognition to surveillance, so we should be concerned about its use in public spaces. And if QML and QNNs can enable much faster processing of large data sets, then even more personally identifiable information (PII) will surface, increasing our vulnerability to data breaches and identity theft.
My other concern is what’s known as hoarding the benefits. Granted, quantum is an expensive technology with a high barrier to entry, and those who made the investment have the right to earn a return. But if access to it is restricted to a small first-world elite, then social disparity will grow and the greater good will not be realized.
How we mitigate these risks posed by quantum technology provides a good lead-in to a deeper discussion of governance.
Let’s begin with security. Since we know that quantum computing will eventually be able to break today’s cryptographic locks, we will need new and better keys. Luckily, the National Institute of Standards and Technology (NIST) has recently pointed us in the right direction, finalizing three quantum-resistant algorithms that can be used now to deploy post-quantum cryptography. Organizations that have not already begun a quantum risk assessment should start immediately.
Following the moral compass of governance is ultimately voluntary. We can’t prevent someone from using quantum (or, indeed, any technology) maliciously but we can create a framework of public engagement, education, incentives and legislation. Controlling inappropriate use by bad actors or even hostile countries will take the same pattern as has been used for other technologies such as nuclear. International non-proliferation treaties with monitoring and economic penalties, although imperfect, are probably the best tools we have—so legislators need to be aware of the technology and its risk profile. The European Union’s AI Act may also provide a model, as it classifies risk from minimal to high and ultimately unacceptable, and categorizes use cases accordingly.
There is already a legislative framework for managing privacy. In Canada, we have PIPEDA—the Personal Information Protection and Electronic Documents Act—and in Europe, GDPR—the General Data Protection Regulation. Admittedly they are old and will need to be updated for quantum computing but at least we don’t have to start from scratch.
Governance of the people, by the people and for the people
Ultimately, governance is not so much a question of technology as it is of people—who make the technology, who use the technology, and who stand to win or lose as a result. Government, the private sector, academia, NGOs and the general public are all stakeholders. The World Economic Forum together with IBM has published a governance position paper outlining seven core values, many of which I’ve already discussed in this post:
Common good
Accountability
Inclusiveness
Equitability
Non-maleficence
Accessibility
Transparency
Surrounding the core values are nine governance principles:
Transformative Capability: This is self-explanatory given the five key points and the use cases I discussed in this post.
Access to Hardware Infrastructure: IBM, for example, offers an open plan with 10 minutes per month of execution time on its 127-qubit Eagle processor at no charge.
Open Innovation: Quantum needs an interdisciplinary approach to research. Beyond physicists, mathematicians and engineers, the field needs input from philosophers, ethicists, social scientists and other academic specialties.
Creating Awareness: Public education is important, and quantum mechanics as well as quantum computing need to be in school curricula. A games-based approach can make it more accessible—at a conference at the University of Waterloo’s Institute for Quantum Computing earlier this year, I was introduced to quantum chess, a fun way to teach the concepts of superposition and entanglement.
Workforce Development and Capability Building: The industry must cultivate a diverse and inclusive talent pool for quantum research and development.
Cybersecurity: Critically important and becoming increasingly well known.
Privacy: As discussed, this will probably need to be addressed by updating existing legislation and governance frameworks.
Standardization: There is a wide range of quantum hardware architectures, each with its own strengths and weaknesses, and applicability to different types of problems. For quantum computing to become mainstream, standards will be needed to facilitate software development. OpenQASM is a good first step: it provides a standardized assembly language for quantum computing across hardware platforms, enabling portability of higher-level software.
Sustainability: It may still be difficult to measure the carbon footprint of quantum computers, but that doesn’t mean we shouldn’t make it a priority to ensure that quantum is no worse than classical computing. And that may be setting a low bar, given the enormous power consumption of the data centres driving AI and cryptocurrencies.
Governance, of course, will evolve as the technology matures. But finally, I think it comes down to three key elements: quantum computing must be environmentally friendly, accessible to all, and beneficial to all.
Otherwise, what’s the point?
[1] 15 stops per hour. If he had an eight-hour shift, it would be one stop every three minutes and 12 seconds.