Hrant Gharibyan, co-founder and CEO of BlueQubit, is interviewed by Yuval Boger. Hrant describes the company's focus on developing quantum algorithms and software for hard classical and quantum problems, such as material simulation and optimization. He discusses key projects, including a DARPA collaboration involving neutral-atom quantum computers and work with Honda Research Institute on quantum algorithms for image classification. He highlights the importance of hardware-agnostic approaches and the potential for achieving quantum advantage in the near future, and addresses the trade-offs of different quantum hardware modalities, the evolving landscape of quantum computing tools and techniques, and much more.
Full Transcript
Yuval: Hello, Hrant, and thank you for joining me today.
Hrant: Hi, Yuval, pleasure being here. Thank you for this opportunity.
Yuval: So who are you and what do you do?
Hrant: So I am a quantum scientist. I did a PhD in quantum physics. I’ve done a lot of quantum computing research for the past decade. And a year and a half ago, I started a company in the quantum computing space. The company is called BlueQubit. We are a quantum algorithms and software company, and I’m the co-founder and CEO of BlueQubit.
Yuval: What kind of algorithms and software, or maybe we start with who would your users be?
Hrant: Yeah, great question. I think there are two components to developing quantum applications. One is much more scientific: developing novel algorithms for complex classical or quantum problems that are hard to solve with current computing devices like CPUs and GPUs. In that direction, we looked into material simulation and optimization problems, for instance, which are areas where classical systems hit their limits, and we believe quantum computers bring a completely new toolkit to the table. The users are enterprises that have those problems, or governments who deal with such heavy problems at scale and are interested in hiring researchers and scientists to focus on those algorithms. Some of those algorithms are quite well established and are adapted to specific enterprise use cases. Others require novel solutions that end up as scientific papers.
But the other component of quantum software is actually implementing algorithms on the quantum devices we can access. That part is extremely software-engineering heavy. We write code in Python using libraries for quantum algorithm development, and we maintain an enterprise-grade product for running quantum workflows: implementing them, managing the data, and tracking the execution of various quantum algorithms on real quantum hardware and on quantum simulators.
Yuval: Do you have a customer example where maybe you can talk about what their issue was or what the challenge was and how you helped them address it?
Hrant: Yeah, actually I can talk about two, because we are publicizing those; we got a green light from the customers. One is a DARPA project on a specific graph optimization problem that utilizes neutral-atom quantum computers, like the ones developed by QuEra Computing. Their machines have around 250 atoms, and we developed an algorithm for Gibbs sampling of the independent set problem. This is a very theoretical problem we have been working on, but once you solve it, or have an idea of how to solve it on a quantum computer, you can hope to extend it to practically interesting problems. For example, in defense it could be sensor placement optimization: finding the minimal number of sensors that provides maximal coverage over some generic 2D graph describing a neighborhood or a set of possible locations. A lot of the work we do is still very theoretical, and a lot of benchmarking happens on simulators as well as on quantum computers, but this gives you a flavor of one specific use case.
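To give a flavor of that mapping in code, here is a minimal classical sketch, assuming a toy geometric graph built with networkx. It only illustrates the independent-set framing of sensor placement; the actual DARPA work uses Gibbs sampling on neutral-atom hardware, where the Rydberg blockade natively enforces the independence constraint.

```python
# Toy sketch of the sensor-placement-as-independent-set framing.
# Not the DARPA algorithm: just a classical greedy baseline on a
# random 2D graph, for intuition about what the quantum sampler targets.
import networkx as nx

# Nodes are candidate sensor sites; an edge joins sites that are too
# close together to both host a sensor (the "repulsion" constraint).
G = nx.random_geometric_graph(30, radius=0.3, seed=7)

# An independent set picks mutually compatible sites. We take the best
# of many randomized maximal independent sets as a crude baseline.
best = max(
    (nx.maximal_independent_set(G, seed=s) for s in range(200)),
    key=len,
)
print(f"Placed {len(best)} compatible sensors out of {G.number_of_nodes()} sites")
```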
Another use case, which we worked on closely with a customer and which is quite compute heavy (I'm actually going to present it in a couple of weeks, and we're going to publish a paper soon; we're in the approval process), has to do with loading images into a quantum computer with very high accuracy and a minimal number of gates. This is a first stepping stone toward quantum algorithms for image classification. We demonstrated loading 2-million-pixel images into about 20 qubits, developing a novel method that breaks the image into blocks before loading it into the quantum computer. This is work we did with Honda Research Institute. They have a few team members interested in quantum use cases, and we developed the algorithm together, then used our GPU simulators to benchmark and test it on a real dataset, the Honda Scenes dataset, which is a standardized image classification dataset in the traditional machine learning setting. We adapted it and carried out many experiments on quantum simulators as well as quantum hardware like IBM and Quantinuum machines. For instance, we used Quantinuum's H2 chip, a 56-qubit quantum computer, and IBM Heron, a 156-qubit quantum computer with really good two-qubit error rates, which is quite remarkable. We were able to run thousands of two-qubit gates for loading images and then classifying them afterward.
These are more concrete examples, but the use cases differ across verticals. Financial institutions look at optimization problems like portfolio optimization, or at fraud detection with quantum machine learning. A lot of those use cases are extremely heuristic. Particularly in quantum machine learning, there's no real evidence whether it will work or not. There are some results, under niche assumptions, that things will or won't work, but those assumptions are often violated or not accurate enough. It really comes down to practical experiments on hardware and simulators to figure out the real potential of whatever strategy you develop.
Yuval: That’s actually very interesting because data loading has been considered a big problem for quantum computers. So you said two million pixels loaded on 20 qubits? And can they also be unloaded? Meaning, do you just load them, or can you reconstruct the image to a reasonably high quality afterward?
Hrant: That's a great question. You can't read it back out at exponentially low cost, right? You'd need to do tomography. But you can do operations: if you have a dataset that's already quantum, you can do classification on it without needing to read out the answer. To process the images, you don't always need to reconstruct them completely; you can do different operations in the quantum space. That's one of the main messages of this work: loading is a one-time cost you pay, but after that you open the door to other explorations and use cases. Indeed, sometimes there is a cost to pay to read out the data, but a lot of the time you don't need to read it out fully. In a quantum-compressed fashion, you can process and operate on the data: for example, classify it, or compute things like dot products between images, which represent the distances between different images.
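As a toy illustration of comparing images through overlaps rather than full readout (a sketch for intuition, not the paper's method): two images amplitude-encoded as normalized vectors and compared via the squared inner product, the quantity a swap test would estimate on hardware.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two flattened 32x32 "images" (1024 pixels -> 10 qubits' worth of
# amplitudes); the second is a slightly noisier copy of the first.
img_a = rng.random(1024)
img_b = img_a + 0.1 * rng.random(1024)

# Amplitude encoding: pixel values become amplitudes of a normalized state.
psi_a = img_a / np.linalg.norm(img_a)
psi_b = img_b / np.linalg.norm(img_b)

# |<a|b>|^2 plays the role of a similarity score; on hardware it would
# be estimated with a swap test instead of reading out either state.
overlap = abs(np.vdot(psi_a, psi_b)) ** 2
print(f"overlap = {overlap:.4f}  (1.0 means identical images)")
```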
Yuval: Is there significant classical pre-processing? I mean, are you taking this 2-million-pixel image and trying to figure out what 20-qubit state you need to load into the quantum computer during state preparation? Or does the loading require something like 2 million operations on the quantum computer itself?
Hrant: Oh, no, no. You don't have that many operations on the quantum computer. It's classical training of a quantum circuit. You train a quantum circuit that's sufficiently shallow to be hardware-friendly on modern quantum hardware, and you train it to find the amplitudes that mimic the classical distribution. Indeed, the cost is polynomial in the size of the original classical dataset, and it is a classical cost; we do the training on GPUs, essentially. But once you've trained it, you have a circuit description of your data, which is extremely efficient: a pretty shallow quantum circuit together with its parameters.
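Here is a minimal sketch of that idea, assuming PennyLane and a small random "image". It is not BlueQubit's code, and for simplicity it matches the circuit's output probabilities (squared amplitudes) rather than the amplitudes themselves.

```python
# Classically train a shallow parameterized circuit so its output
# distribution mimics an image's pixel distribution (a toy version
# of the loading scheme described above).
import pennylane as qml
from pennylane import numpy as pnp   # autograd-wrapped numpy for gradients
import numpy as onp

n_qubits, n_layers = 8, 3            # 2**8 = 256 "pixels"
dev = qml.device("default.qubit", wires=n_qubits)

# Target: a small image's pixel intensities, normalized to a probability vector.
target = onp.random.rand(2**n_qubits)
target = target / target.sum()

@qml.qnode(dev)
def circuit(weights):
    # Shallow, hardware-friendly entangling ansatz.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.probs(wires=range(n_qubits))

def cost(weights):
    # L2 distance between the circuit's output distribution and the image.
    return pnp.sum((circuit(weights) - target) ** 2)

shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
weights = pnp.array(onp.random.uniform(0, 2 * onp.pi, shape), requires_grad=True)

opt = qml.AdamOptimizer(stepsize=0.05)
for _ in range(300):
    weights = opt.step(cost, weights)

# `weights` is now a compact circuit description of the image:
# a few layers of parameters instead of 2**n_qubits raw amplitudes.
```

The training loop runs entirely on a classical simulator (CPU or GPU), which is the "classical cost" Hrant refers to; only the final shallow circuit would be sent to quantum hardware.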
Yuval: Would you consider your company a generalist or a specialist? Specialist meaning, oh, we are primarily machine learning or primarily optimization or something else.
Hrant: Yeah, that's a good question. I think we are a generalist. Our R&D effort has different interest areas, including optimization and machine learning. On the product side, we want to get it out there for quantum users who run large-scale, expensive simulation experiments, such as training QAOA with thousands of parameters beyond 30 qubits. We have a platform that lets you do that with minimal effort. We have some super users: academic PhD students who do a lot of numerical research, and quantum companies who are specialists. We have a few in the bio-pharmaceutical direction, for example, who realize they can leverage other companies for the platform and infrastructure components. In that sense, we are a generalist. We have tooling for quantum workflows, specifically workflows that are analytically hard to track and require numerical experiments; they're typically heuristic methods. For those companies, we could simply be a tool, another quantum tool to run GPU experiments, deploy different workflows on actual quantum hardware, and compile circuits.
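For context, this is the kind of workload meant by "training QAOA": a generic variational loop, sketched here in PennyLane at a toy 6-qubit scale. The names and sizes are illustrative; the platform described runs the same pattern at 30+ qubits on GPUs.

```python
# Generic QAOA-for-MaxCut training loop, shrunk to 6 qubits.
# A sketch of the workload class, not BlueQubit's platform code.
import pennylane as qml
from pennylane import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 3)]
n_wires, p = 6, 4                                # p = number of QAOA layers
dev = qml.device("default.qubit", wires=n_wires)

# Minimizing the sum of <Z_a Z_b> over edges maximizes the cut.
cost_h = qml.Hamiltonian(
    [1.0] * len(edges),
    [qml.PauliZ(a) @ qml.PauliZ(b) for a, b in edges],
)

@qml.qnode(dev)
def cost(params):
    gammas, betas = params
    for w in range(n_wires):
        qml.Hadamard(wires=w)                    # uniform superposition
    for layer in range(p):
        for a, b in edges:
            qml.IsingZZ(2 * gammas[layer], wires=(a, b))  # cost unitary
        for w in range(n_wires):
            qml.RX(2 * betas[layer], wires=w)    # mixer unitary
    return qml.expval(cost_h)

params = np.array(np.random.uniform(0, np.pi, (2, p)), requires_grad=True)
opt = qml.AdamOptimizer(stepsize=0.1)
for _ in range(150):
    params = opt.step(cost, params)
print(f"final <H_cost> = {cost(params):.3f}  (lower means a bigger cut)")
```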
But in terms of use case discovery and exploration, we have a more research-oriented part of the team, mostly in the US, looking into use cases that could lead to quantum advantage. Our team believes the hardware is really interesting because we got really good at quantum simulation, like tensor networks on GPUs. We know how hard it is to simulate the latest 156-qubit IBM devices with thousands of gates, and we are at the cusp of crossing into the regime that cannot be simulated classically. Then it comes down to: is there anything interesting you can fit on such a computer, in that non-simulable regime, that you cannot do classically? I think that's going to be the first quantum advantage example. Nobody really knows what use case it's going to be. It could be some very niche machine learning task, or some molecular simulation like preparing the ground state of a target system. There was a paper just today from Google's team on a very interesting, hardware-friendly condensed matter model whose ground state you could prepare even though it's classically not tractable with tensor networks. The R&D side of our team is focused on that: not fault-tolerant devices, but the mission of finding out whether, with 100 qubits and 10,000-plus gates, you can do something scientifically or commercially interesting that's classically unsolvable. I think that's the real path to quantum advantage in the next five years. There are a lot of quantum computers now with these capabilities, and they keep getting better. I think we'll see a lot of very interesting things, including cool stuff in error correction and interesting results on niche problems that quantum computers can solve.
Yuval: What’s your favorite quantum modality?
Hrant: Oh, by modality, I assume you mean hardware technology? Is that what you mean?
Yuval: Yes. I mean, some people like superconducting, some people like others.
Hrant: I want to be agnostic. Each of the different technologies has pros and cons; I'll list some of them. One thing that's clear to me is that the gate-based formalism is here to stay, both for algorithms and for hardware implementing gates. Even people who started with analog approaches now realize that you need digital quantum computers: you need to implement gates, and we have a formalism for gate-based algorithm development. I think there's a consensus now and growing interest. That's why we exist now: there are a lot more people who care about gates, want to develop algorithms with gates, and test them on GPUs. In terms of performance and scalability of hardware, I love neutral atoms because they are scalable, and there are great examples of really cool ways to do error correction with neutral atoms, which is extremely promising; that's a limitation that is much more severe in trapped ions and superconducting chips. But the fidelities are not there yet.
In terms of physical fidelities, with neutral atoms it's much harder to get error rates below certain thresholds, but they're easy to scale, and we have pretty good ideas about how to get to thousands or even up to 10,000-qubit systems. Perhaps with a few layers of error correction they can beat all the other technologies. Trapped ions are extremely slow at operating, doing gates, and reading out answers, but we were impressed with Quantinuum's two-qubit gate fidelities and the ability to do all-to-all interactions, which is quite important. And that additional slowdown, I think, is going to be irrelevant in the big picture. Sure, it might be 100x slower, but if you can get an exponential advantage on a certain problem (we're talking about a 2-to-the-100 timescale difference), the 100x cost, whether it takes a day to run or 100 days, is irrelevant if you're solving otherwise unsolvable problems.
In that sense, I like how trapped ions address that, but they have a huge scalability question; I don't think people have a good idea of how to scale those systems consistently to thousands of qubits. However, there might be a good way to get to 100 qubits and 10,000 gates first and actually do the first quantum advantage experiment, because all-to-all connectivity lets you make circuits much shorter and build more complex entanglement with very few gates. Doing some interesting condensed matter problem that is maximally entangled, with the really sophisticated entanglement structure they are able to produce, at 100 qubits, I think, is doable. The Quantinuum 56-qubit system is quite stable and works well, and they have a clear understanding of how they would scale it. Others, like IonQ and an Israeli company we work with, Quantum Art, have a very interesting approach that leverages both local gates and all-to-all gates, with trade-offs in speed. I think trapped ions might get to an actual advantage use case first, but they are going to be much harder to scale to production-scale quantum computers; the bet that you can run millions of gates, do Shor's algorithm, or solve generic problems will be much harder to realize. Superconductors are great too, but scalability is challenging due to their 2D geometries: compiling any interesting circuit ends up producing a much deeper circuit, which becomes challenging. Their fidelities are now roughly comparable to trapped-ion fidelities, but all-to-all connectivity is missing.
Yuval: How do you decide on which hardware to use? If a customer comes in with a problem, let’s assume you’ve even developed an algorithm and ran it on a simulator. Now the customer wants to see it on real hardware. It sounds like the implementation would be very much hardware-dependent. Do you do multiple implementations? What’s your thought process on deciding on the hardware platform?
Hrant: Yeah, I think it depends on the problem. The cool thing is that customers sometimes think of us as independent arbiters because we are agnostic to the hardware. We look at the problem and then decide which problem we want to work on. Depending on the problem, different hardware might have the right advantage. If the problem has an independent-set structure, and your device natively embeds a blockade-like repulsion and offers more qubits, that kind of problem is obviously more natural in that context. For the other project, the one we did with Honda, for example, you could go either way. We did experiments on Quantinuum, and we did some on IBM. We actually benchmarked them to see, when you compile different types of circuits, whether the connectivity has a big impact on the accuracy of loading the dataset, because it completely changes your ansatz architecture. So yes, we did independent benchmarking.
There are trade-offs between the two, like the cost factor, the performance factor, and the depth of circuit you can run. It really depends on what the customer is after. If they want one implementation, a really good one, then we must make sure we understand the problem well, understand the mapping we want to proceed with, and then find the best-fitted hardware for it. For example, we recently got an award from the Air Force for an optimization problem, and we are still deciding, in conversation with the customer, whether they prefer independent but somewhat shallow benchmarking of different technologies, or making a bet on one and going really deep in adjusting the algorithm. It comes down to the customer's needs and what they're after. Sometimes customers already have a preference in advance: they have a relationship with one of the hardware companies, and they say, "Oh, we would really love to try this hardware, but we don't know what algorithm to run." Then it's a matchmaking process to see if we have the right algorithm for their needs and whether some existing algorithm can be adjusted to their use case.
Yuval: You sound optimistic about reaching quantum advantage, but I guess the question is when? What is your best estimate?
Hrant: To be clear, what I mean by advantage is something like what IBM tried to do with their utility paper, but pushed to a regime where no one can come up with a classical tensor-network technique that matches it within six months or a year. I don't think it's going to be a problem where we can prove an analytic scaling difference; I doubt it. I think it's going to be a heuristic thing: tensor networks giving a completely different answer, and maybe you can do a physical experiment in the lab to check. My optimism comes primarily from the fact that if you have 100 qubits and can do 10,000 two-qubit gates with really good fidelity, the regime we're approaching now, then reproducing the observables at the end classically is extremely hard. We did some work on tensor-network methods, which are the state-of-the-art classical techniques. Once the circuit is deep enough and has built up enough entanglement, there are really no shortcuts: you need to track all 2-to-the-100 possible trajectories.
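To put rough numbers on that intuition (a back-of-the-envelope sketch, not from the interview): a brute-force statevector stores 2^n complex amplitudes, which is why tensor-network compression is the only classical hope, and why it fails once entanglement saturates.

```python
# Memory needed to store a full statevector: 2**n amplitudes,
# 16 bytes each (complex128). Tensor networks avoid this only while
# the circuit's entanglement stays low enough to compress.
for n in (30, 56, 100, 156):
    gib = (2**n) * 16 / 2**30
    print(f"{n:>3} qubits: {gib:.3e} GiB")
# 30 qubits fits on a workstation (16 GiB); 56 is ~1e9 GiB; 100+ is hopeless.
```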
That's where my optimism comes in: we are there. In my view, once you have that kind of capability to build entanglement, there are a lot of condensed matter problems, very scientific problems, to address. I don't think the first one is going to be a commercially useful problem; it's going to be some scientific problem similar to the Ising model, or a variation of it that's very hardware friendly. We'll do some experiment, measure some observable, some diffusion speed or other quantity, and then we can say, "Hey, this is really intractable in that regime." Then we can explore the phase numerically with quantum hardware, without any classical analog to predict these behaviors, and that would be the first step toward commercial quantum advantage. I want to be clear: I think commercial quantum advantage, like a properly commercially useful molecular simulation, will take a while. But getting to a problem where we can show real superiority of quantum devices over classical, I think that's within a five-year window, just from the quality, scale, and gate fidelities of the hardware we're able to run.
Yuval: What do you think of today’s software tools? Are they good enough to get you what you need, or do you need a software engineering breakthrough in how to write quantum software?
Hrant: Oh, yeah, that's a great one. I think we have the fundamentals. I think quantum is going to evolve pretty similarly to how AI evolved. There is a standardization of gate-based methods; initially there was a lot of argument against it, but people realized it's a universal framework, and most hardware can compile gate-based circuits quite well. So there's standardization in that domain, and we don't need to invent a new programming language; I don't think so. It's going to be quite similar to how neurons became a standard in AI, and, based on neurons, we built more sophisticated architectures like CNNs, tensor networks, transformers, and so on. Now we have the building blocks, and they're becoming more standardized and accepted, and more people know how to use them.
Now I think it comes down to algorithms, testing those algorithms, and hardware implementing those circuits efficiently. There's a lot of improvement and cool work going on in quantum control, doing quantum control better, and in automating error mitigation to push things further. Obviously, when error correction comes in, you will write your code as a circuit, and it will be mapped to the error-corrected version and executed instead. But I think it's all going to stay within the gate-based formalism. It will be expanded, and there will be more software needed to do it faster and better, but I don't think we'll invent a new programming language. At least from the perspective of a user building algorithms, I don't see that it's necessary: you basically have a universal set of building blocks, gates, with which to build a quantum algorithm.
Yuval: What do you know today that you didn’t know six months ago as it relates to quantum computing?
Hrant: Oh, that's a great one. Six months ago, let's see where we were. Six months ago, I hadn't run quantum circuits as big as the ones we can run on the machines we have access to now. It happened much faster than we thought it would. Running 4,000-gate circuits on 156 qubits with very good fidelity, still retaining a meaningful signal: I didn't expect that to happen within six months. A year ago, we were running things on 27 qubits with hundreds of gates. Things moved much faster than we thought, which is encouraging, and it gives me optimism that there are a lot of people working hard on the hardware side to make this happen. The other thing: I also didn't expect error correction to happen so quickly on three different platforms: neutral atoms, Quantinuum on trapped ions, and recently Google's results on a superconducting system. That was also extremely fast; all three results came within a little over six months, I think.
Yuval: And one last hypothetical: if you could have dinner with one of the quantum greats, dead or alive, who would that be?
Hrant: Oh, that's a good one. Among the living ones, I've actually had dinner with Peter Shor and John Preskill, and I was lucky enough to work through my academic career with some of the best, like Seth Lloyd. Among those no longer with us, I would probably pick Bell, as in Bell pairs. I think separating entanglement from classical correlation was one of the coolest ideas. He was a very interesting physicist; he came from a particle physics background and invented criteria for entanglement that really established quantum information, which led to quantum computing. So I would probably want to grab dinner with him if I had the chance.
Yuval: Very good. Hrant, thank you so much for joining me today.
Hrant: Yeah. Thank you for the opportunity. This was fun.