Bob Sorensen, chief analyst for quantum computing at Hyperion Research, is interviewed by Yuval Boger. They discuss the growing interest in on-premises quantum computing, Hyperion's recent survey showing substantial potential ROI for enterprises adopting quantum optimization, the reality of quantum AI, useful lessons that quantum vendors can learn from HPC vendors, and much more.
Full Transcript
Yuval Boger: Hello Bob, and thank you for joining me today.
Bob Sorensen: Hi Yuval, good to talk to you again. Always a pleasure to have a chat with you.
Yuval: Likewise. So who are you and what do you do?
Bob: So, yeah, my name is Bob Sorensen. I work for a consulting firm of sorts called Hyperion Research, and there I am predominantly the chief analyst for quantum computing.
Quick background on Hyperion. We are more or less an advanced computing shop writ large. We look at anything that's at the pointy end of the computing sphere, whether it be quantum computing, GPU-enabled mod-sim (modeling and simulation) at exascale, AI developments at scale, or interesting processor developments.
So it’s a natural fit for us to look at quantum not as a standalone technology, but ultimately as part of the overall landscape of ways to solve the most compute-intensive or computationally important applications and use cases that are out there in the real world.
So quantum fits nicely into what I consider to be the next generation of really interesting advanced compute capabilities.
Yuval: We’ve known each other for a number of years, and I think you’ve been doing HPC-related work much longer than that. So if you look at the past 12 months, what’s new? What have you learned in the last 12 months that you didn’t know or the market didn’t tell you before that?
Bob: Well, I don't want to start out with a humblebrag, but when we started looking at quantum, there was huge enthusiasm for the low-cost, low-barrier-to-entry cloud access model. And I actually did a study a while back that looked at all the myriad ways an interested potential end user could access a quantum system, or multiple quantum systems: through a cloud-based model directly from the vendor, through a CSP (cloud service provider), or through companies I call curators, like Strangeworks, which act as an interesting value-added layer.
And so the cloud access model was considered kind of the sine qua non of accessing quantum computers. But I was always intrigued by the fact that ultimately people are going to want on-prem quantum, and I was never satisfied with our numbers when I saw a lack of enthusiasm for on-prem. And, as I said, vindication: the pendulum, I think, is starting to swing a little bit. I'm seeing much more interest and concern, or at least the desire to explore the advantages of bringing a quantum system into an on-prem facility versus the cloud.
And the things that really justify it for me from the on-prem perspective: first off, you have to deal with the economics of a quantum system via the cloud versus on-prem, and if you really start to use that system consistently, the pendulum again swings toward the economic advantage of having it on-prem. Then there's data protection and confidentiality: not having data you hold dear flying around in a CSP, or at least across the internet.
The other thing that's frequently cited is that quantum-classical hybrid algorithms can't really perform well if you have internet time-of-flight delays inside a tightly integrated hybrid loop. But to me, one of the great unappreciated aspects of all this right now: when we ask organizations why they are looking at quantum today, even though it's pretty clear we're at the leading edge of what people are starting to call utility-scale quantum systems, a third of them are saying, "We're looking at this because we have no in-house expertise, and we want to build in-house expertise in this capability."
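To put rough numbers on that time-of-flight concern, here is a toy model of a tightly integrated hybrid loop, in which every optimizer iteration pays a network round trip. All the latency and QPU-time figures below are illustrative assumptions, not anything from Hyperion's data:

```python
# Toy model: a variational quantum-classical loop pays the network round
# trip on every iteration, so latency can dominate total runtime in the cloud.
CLOUD_RTT_S = 0.050    # assumed ~50 ms round trip over the public internet
ONPREM_RTT_S = 0.0005  # assumed ~0.5 ms round trip on a local network
QPU_BATCH_S = 0.010    # assumed QPU execution time per parameter update

def hybrid_runtime(rtt_s: float, iterations: int = 10_000) -> float:
    """Wall-clock time for an optimizer that sends new circuit parameters
    to the QPU and waits for measurement results on each iteration."""
    return iterations * (rtt_s + QPU_BATCH_S)

print(f"cloud:   {hybrid_runtime(CLOUD_RTT_S) / 60:5.1f} minutes")
print(f"on-prem: {hybrid_runtime(ONPREM_RTT_S) / 60:5.1f} minutes")
```

Under these assumptions, the cloud run spends most of its wall-clock time waiting on the network, which is the delay being described here.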
And I firmly believe this, having come up in the early days of organizations like Cray Research and IBM, where you had a system in the basement you could kick the tires on. You can't put a value on the advantage of having an on-prem system that you own, that you can play with constantly, that your experts can tear the hood off of and ask, "Why is this happening? Why isn't this happening?" You have a degree of intimacy with the hardware and software of an on-prem system that you can never really hope to replicate in a cloud-based access environment.
And so this need, this desire to build in-house expertise, I think is going to drive on-prem in the early days. And as people start to think about, “Wow, this system is really exactly what we need,” or, “We need to be looking somewhere else,” that enthusiasm for on-prem is only going to grow. That’s the first thing I saw.
The second one is this, and for those listening who don't subscribe to Travis Humble's newsletter out of Oak Ridge National Lab, please do. He's kind of the lead guy, if you will, in quantum computing down at Oak Ridge. He has openly admitted that every Sunday morning, in his pajamas, he puts together a summary of interesting things that happened in the previous week, and it's great.
And I wanted to do a word-count analysis of it, because the word I'm seeing most often now is "partnering." There is an awful lot of work going on right now where different quantum computing suppliers, from the hardware perspective, the software perspective, and the end-user perspective, are starting to partner. It's no longer the full-stack organization going it alone. It's no longer an organization that has one thin slice of the quantum stack and is more or less trying to figure out how it fits in. It's, "Let's go out and find the missing elements of what we need to do."
Or, I think more appropriately, "Let's see where the synergy between different organizations matters, and let's start the partnership." So I think what we're starting to see is the coalescence of a community of interest, a community of the willing, who want to take the next step in quantum computing development and are looking to form partnerships to make that happen. And I think we're going to see more and more of that. In this case, one plus one isn't going to equal two; it's going to equal ten. So I really like the potential for clever partnerships to start to accelerate the value of, and the movement toward, this goal of utility-scale quantum.
Yuval: Starting from the on-prem point that you mentioned, these computers are fairly expensive. And so it would make for a really expensive college degree if you buy one just to train your workforce. Couldn’t you get the same value on the cloud, not to mention that on the cloud you could potentially access different types of computers and compare and contrast?
Bob: Again, I think there's a certain issue with cost, and we can't ignore that. But the advanced computing crowd is used to pretty expensive systems. When people have gotten over the hump of spending $40,000 on an individual NVIDIA GPU, a full-up quantum system, in perspective, may not seem all that bad.
So I think there is a certain amount of budgetary discretion at the highest end of computing, especially if you consider yourself a research organization, where the obligation is to continue the trajectory of increased performance capabilities, which is really not coming from the classical HPC world.
As for that next step: we saw the TOP500 list come out at ISC in Hamburg last month, and it is becoming much more difficult to realize classical computer performance gains. To realize those gains, you're talking upwards of $500 or $600 million a system and 25 megawatts or more. So organizations that have unmet computational demand are eager to look for new ways to meet it. It's one of the ways we typify the advanced computing sector: price-performance is not as critical as performance, with a little bit of price in there. So I don't think price right now is going to be a big issue.
But as I stated earlier, the value of having complete access to a system is, I think, incalculable when it comes to really building that in-house expertise. You're always a little bit hands-off, a little more, I don't know, sanitized, if you will, when you're going through a cloud environment, because there's a mediator trying to make that system look appropriate for every end user who comes down the street. You're kind of getting a vanilla version of the system.
With an on-prem system, you're really able to customize and drive at the things you want most, as opposed to a system that has had all the edges smoothed out so no one's too offended by the architecture, but that may not be well suited to your particular workload. And right now, if you're going to spend multiple tens of millions of dollars on a system, you don't want a bland, general-purpose, one-size-fits-all architecture. You want a system that is at least in some sense targeted at the kinds of workloads and end uses you envision.
And, I think most importantly, the kind of system where you can start to consider: how do I integrate this into my existing classical workload environment? That's something, again, that becomes much simpler in an on-prem environment than if you're bringing it in from a cloud and continually dealing with the vagaries of being one more step removed from the hardware, when your main goal is to integrate it into your existing on-prem or even cloud-based architecture. So again, I think the on-prem advantage is right now pretty persuasive.
Yuval: Who’s paying for it? Where do the budgets for these on-prem machines come from?
Bob: Well, right now, governments are certainly pretty interested in this. I thought it was interesting that QuEra just cut a deal with Japan's AIST for a $40-million-plus system. That's a research system, and AIST is clearly a government-funded organization. RIKEN, also in Japan, just signed a deal with IBM to bring a System Two over there. I find it interesting that the Japanese government, unlike many governments around the world, is not too concerned with buying domestically right now.
And I say right now mainly because I feel Japan doesn't have the kind of indigenous quantum computing supply capability that it may have three or four years hence. So for those foreign vendors: enjoy it while it lasts. And of course there's the PsiQuantum deal in Australia, a utility-scale system worth nearly a billion Australian dollars, geared for a 2027 procurement. So we're seeing government procurements, which is how the world is supposed to work.
For nascent, risky technologies, in some sense, it's the role of government not only to fund research, because that's a wonderful abstract concept, but to fund procurements, because that's what generates interest. That's what establishes a stable commercial capability within the sector. On the other hand, of course, there are organizations that are pure research activities, places like Oak Ridge National Lab or university systems, whose charter, in essence, is to explore the technology to understand exactly what it's supposed to do.
We're going to start to see, I think, more and more aggressive research organizations. I point to the recent announcement by BMW and Airbus, who are very interested in running a competition to draw out compelling use cases for the automotive and aerospace sectors. Those kinds of research-intensive organizations, the ones where some portion of the budget is research-oriented, and perhaps more importantly, where a significant portion of that research budget is geared toward access to computational capability to support the research agenda, are going to be, I think, the ones taking the first steps toward commercial on-prem procurements.
Yuval: As we move from research organizations to real value, how much are enterprises getting value from quantum computing today? I think you told me you ran a survey recently about perhaps optimization. What are you seeing from the enterprise customers?
Bob: Well, thanks for bringing that up, because we did a study recently for a quantum computing maker, and they were very interested in the business-process value-add of quantum computing optimization. So it was a fairly narrow field in some sense. They weren't looking at scientific and engineering endeavors as much as at organizations interested in business processes: things like supply chain management, workload scheduling, personnel scheduling, maintenance operations, and minimizing the logistics of moving materials, say, in a plant.
And we asked these organizations: what do you think your investment in quantum is going to be, and what do you think your return on investment is in terms of increased revenue? The results were actually pretty shocking. We talked to over 300 organizations, 303 to be exact, and what we found was this wonderful little sweet spot in terms of how much they think they have to spend to stand up an operational, business-oriented quantum optimization process versus how much that will translate into increased revenues for the overall organization.
And we saw that for every dollar invested, these organizations were looking at $10 to $20 in return in increased revenue, and some of them were very, very optimistic about that. And yesterday, while I was preparing some summaries of this, I added up all the increased revenues these 300 companies expected once they have a stable, QC-optimized process, and it came to over $51 billion in increased revenue, simply from quantum being able to optimize their existing set of business problems.
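As a quick sanity check, here is the back-of-the-envelope arithmetic behind those figures. The per-company average and the implied investment are derived from the numbers quoted in the interview; they are not separate survey results:

```python
# Figures quoted in the interview.
companies = 303
total_gain = 51e9  # > $51B in expected increased revenue, in aggregate

# Derived: average expected revenue gain per surveyed company.
print(f"average per company: ${total_gain / companies / 1e6:.0f}M")

# Derived: implied total investment at the reported $10-$20 return per dollar.
for roi in (10, 20):
    print(f"implied investment at {roi}:1 ROI: ${total_gain / roi / 1e9:.2f}B")
```

That works out to an average of roughly $170 million in expected annual gain per company, and an implied aggregate investment somewhere between about $2.5 billion and $5 billion.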
That doesn’t even address all the value added in terms of additional research capabilities, driving innovation, and really advancing the state of the art in science and engineering applications. So that is literally one slice of the overall quantum value-added pie that we’re looking at. Certainly, I think one that’s interesting in the near term and certainly one that has a lot of application. Every business out there right now can do better in terms of optimizing a process.
But we made sure the ones we talked to had significant revenues and significant R&D budgets, to ensure that their hopes and dreams and aspirations had enough money behind them. So we set a relatively high cutoff. But it speaks, I think, to the value here and the interest. And these are organizations that are not traditional HPC end users. They don't have the guys in the basement who code 24 hours a day and whose diet consists of vending machine output. These are primarily enterprises that are just interested in running their lines of business more effectively. It shows the reach, I think, of what quantum computing can bring not only to the HPC world, but to the enterprise world as well.
Yuval: $51 billion sounds like a lot. Is that $51 billion in fiscal year 2024 or 2034?
Bob: Okay, I was very, very cautious in how I cast this. First, thanks for pointing that out: it is an annual revenue increase. But what we said was, this is when you reach steady state. So we're not talking about a given calendar year. Some organizations may say, "We're not going to reach a standard, stable, quantum-based optimization capability until such-and-such a year, but once we do, this is where we see it settling." I don't expect anyone was making projections down to 2035 or so; I think the expectations are within the next two to three years, maybe five years max. So yes, it's an annual number, but it's predicated on a stable operation. We also collected some budgetary numbers on what organizations are spending now to explore the technology, and those were, appropriately, much lower. So take it for what it's worth, but what it does show is a degree of optimism in the technology. And for most businesses, their idea of long-term is two quarters away anyway, so we kept it a little loose when we asked about projections.
Yuval: Tying it to our earlier discussion, did you ask the 300-odd organizations you surveyed whether they're going to use cloud or on-prem?
Bob: We did, as a matter of fact. And what we came up with was a 52-48 split in terms of where the quantum system would be, and I believe the 52 was on-prem and the 48 was cloud. Now, remember, we're asking organizations to project a little bit into the future. And the fact that on-prem was about half actually surprised me, because the easy answer is to say, "Yeah, it's cloud. We're just going to straight-line project from where the world is right now." But the fact that more than half of them said, "Once this becomes stable, it's going to be on-prem," I think was a rather interesting output from that study.
Yuval: At Hyperion, you also track AI vendors and AI accelerators for HPCs and so on. Is quantum AI just a marketing buzzword to ride the AI wave or is there some meat behind it?
Bob: For those of us old enough to remember the internet explosion, suddenly everything you mentioned had to have ".com" at the end. If you weren't something-something-dot-com, you were nowhere. I feel that way right now with AI. Suddenly everything is AI-enabled, AI-powered. Organizations have really embraced the enthusiasm for AI, and quite frankly, I don't think the dust has settled yet. I think there's going to be a day of reckoning for some organizations, with unmet expectations and the realization that AI doesn't come cheap, so ROI numbers are going to become harder to justify. We'll see if that happens.
There was a certain amount of enthusiasm. In the survey, we actually asked these organizations, "What kind of activities do you have in terms of merging AI and QC?" And the enthusiasm for AI is pervasive right now, so organizations are thinking, "Well, of course quantum is going to have AI capabilities." AI capabilities are important, they show some promise, and perhaps most importantly, they're near-term. People don't have to wait a few more years the way they do with quantum: if you can write a big enough check, you can get on board with NVIDIA and start doing AI work right away. So it's real. But at this point, I think quantum AI is unproven.
And unfortunately, we didn't ask the organizations we surveyed, "What do you mean by AI?" Is it natural language translation, or is it AI for science, where you use surrogate models to shortcut complex modeling and simulation calculations by training an AI on known results? There's a whole span of what AI can be, and I think to really understand quantum's place in it, we need better definitions of how organizations see AI. If you're just looking to replace a voice in a call center, you don't need a human anymore; an AI chatbot can do it for you. Probably not quantum-ready. If you're thinking about, as I said, surrogate models to run advanced science and engineering workloads and accelerate some of your real-life classical computations, I think there's a place for quantum computing to really contribute to that.
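To illustrate the surrogate-model idea in that AI-for-science sense, here is a minimal sketch. The "expensive simulation" is just a stand-in function, and a simple polynomial fit stands in for a trained neural network; everything here is illustrative, not anything from the survey:

```python
import numpy as np

def expensive_simulation(x: np.ndarray) -> np.ndarray:
    """Stand-in for a costly modeling-and-simulation code."""
    return np.sin(3 * x) * np.exp(-x ** 2)

# "Known results": a modest number of real simulation runs.
x_train = np.linspace(-2, 2, 25)
y_train = expensive_simulation(x_train)

# The surrogate: cheap to evaluate once trained on the known results.
surrogate = np.polynomial.Polynomial.fit(x_train, y_train, deg=12)

# Query the surrogate at many new points instead of re-running the simulation.
x_new = np.linspace(-2, 2, 1000)
max_err = np.max(np.abs(surrogate(x_new) - expensive_simulation(x_new)))
print(f"max surrogate error over {x_new.size} cheap evaluations: {max_err:.1e}")
```

The payoff is that each surrogate query costs almost nothing compared with a full simulation run, which is exactly the shortcut described above.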
Yuval: As you think about budgets, is there a short blanket syndrome where people are pulling the blanket towards AI budgets and less for quantum or is the blanket just becoming bigger every year?
Bob: It's interesting. I think different people are pulling on the blanket from different places. We did a survey a while back in which we asked quantum computing suppliers whether they were concerned that the bright, shiny object of AI would distract the venture capital sector from the flow of quantum computing funding. And the answer really was, yeah, we kind of worry. About 40 percent of the organizations we spoke to said there is some concern that the bright, shiny object, which is near-term and has some huge upside right now, will detract from what quantum has going on.
But I also think that the quantum computing sector has done a wonderful job outlining its potential performance advantages and its rollout schedule, which means that prudent organizations understand this is a long-term commitment. And so I think what we're looking at is, I don't want to say a tortoise and a hare, because that may be a trite metaphor, but it kind of is. I think quantum has appropriately positioned itself by saying, we're not the hare, we're the tortoise, and there's no finish line here.
But I think as quantum rolls out, there will continue to be some classical opportunities here. Remember, as I stated earlier, the classical HPC world is looking for anything right now that can accelerate computing capability. People have just about gotten through the GPU-for-mod-sim enthusiasm; now they're looking to AI to accelerate their jobs. There may very well be some kind of new classical midlife kicker coming down the road. But I think quantum can exist outside that stream of enthusiasm, mainly because the sector has done so well not only, as I said, in expressing the potential trajectory of performance, but in making the scheduling more realistic, so you can make plans. I always like to say that the real gambler is willing to make a bet on anything as long as they can calculate the risk, because that gives them the odds. And the QC sector has given out enough information that anybody can calculate the risk and understand what it's going to take to win down the road.
Yuval: A few years ago, the quantum sector was accused of overhyping the potential. Oh, we’re going to change the world and we’re going to change the world tomorrow. Do you feel that that has subsided? Do you feel that enterprises have a more realistic view of the timeline and the capabilities of today’s computers and what is expected in two or three years?
Bob: Yeah, I think what happened was that a lot of organizations realized that overhype and underdelivery, despite their potential short-term advantages in terms of getting funding, whether from VCs or even government, meant that that thread was going to run out long before the value added could be demonstrated clearly.
And so we saw, I think, a certain amount of retrenching and a much more realistic posture. We started to see companies coming out with roadmaps that went out to 2029 or 2030 or beyond. You don't see that in the classical compute world; those kinds of roadmaps are two to three years out at most.
And so I think what the quantum guys said was: look, we can fight all we want about whose modality is better or who's got a better T1 time, but we have to deliver realistic expectations to the end users, who really want to know when, how much, and what do I have to do to get there.
Yuval: As we get closer to the end of our conversation today, what can quantum computing vendors learn from the experience of classical vendors when it comes to on-premises deployments of these supercomputers?
Bob: It's all about integration. How hard is it going to be to drop in a really interesting new technology so it can be, I won't say seamlessly, but less painfully introduced into the overall workload environment? That, to me, is the single greatest takeaway here.
You can become enamored with the technology and all the minutiae of what it takes to eke every little bit of performance out of it. But ultimately, it's going to sit in a compute center, and people are going to access it for its specific capabilities, but not in a way that requires significant rewrites of code, or a large team of subject matter experts who can deal with the vagaries of the quantum architecture versus the classical architecture. So it's really about integration capabilities. That also speaks, again, to the concerns about in-house expertise.
You're not going to be able to walk into a traditional HPC environment and say, "Hi, we'd like to talk to your quantum team," without hearing, "Well, he took the afternoon off." That's going to be as close to reality, I think, as you're going to find. So it's integration concerns and the ability to address the existing suite of workloads. What are you doing today, and how can we accelerate it? Not, "Here's an algorithm you've never heard of. Look what it can do. Now go write some code to take advantage of it." It's not going to work that way.
Yuval: Next to last question, you mentioned 25 megawatts for a supercomputer. We know that for quantum computers, it’s more like 25 kilowatts. You’ve been a proponent of the big energy saving and cost saving that results from that, but are enterprises noticing that or is it sort of a tertiary reason at best?
Bob: No, I think the cost of energy right now is almost top of mind. We see organizations starting to think about things like: if we entice a user to use fewer processors, or to slow the process down as a tradeoff against energy consumption, are they willing to do it? Unix has long had something called nice values, where you could submit a job with a priority value: how nice are you? Put me at the end of the queue, because I'm a nice guy. I think we're going to see energy nice values at a lot of HPC sites. Certainly in Europe, with recent political developments there, energy costs are considered a huge deal.
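For readers who haven't run into them, here is a minimal sketch of the Unix nice mechanism being referenced, via Python's standard library (Unix-only); an "energy nice value" would be an analogous knob for power rather than CPU priority:

```python
import os

# os.nice(increment) adds the increment to this process's nice value and
# returns the new value; a higher nice value means lower scheduling
# priority, i.e. "being nicer" to everyone else on the machine.
current = os.nice(0)   # an increment of 0 just reads the current value
print(f"current nice value: {current}")

# Volunteer for the back of the queue: unprivileged processes may raise
# (but not lower) their own nice value.
print(f"after os.nice(10): {os.nice(10)}")
```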
It’s interesting now, if you look at budgets for HPC procurements, certainly at the EU level, they’re now giving budgets in terms of total cost of operation, which means it’s a $500 million system, but they’re paying $300 million for the machine and the rest of the budget is the long tail of operational expenses, primarily energy. So they’re literally starting to think about budget procurements that include overall energy costs. And we’re seeing a lot of countries who are much more sensitive to that. It’s not just politically correct anymore. It’s a reality in terms of how much is this machine actually going to cost me?
We use a rule of thumb here that I don't even know is still valid, but it's basically that a megawatt of power costs you about a million dollars a year. If you have a 25-megawatt system, you're spending $25 million a year just on electricity. I think that's an old number; the real number may be 2 or 3x that. Suddenly, if you run a machine for four or five years, your energy costs are greater than the actual cost of the initial system.
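Here is that rule-of-thumb arithmetic worked out, using the ~$1M-per-megawatt-year figure from above together with the caveat that today's real number may be two to three times higher:

```python
# Rule of thumb from the interview: ~$1M per megawatt-year of power.
power_mw = 25
cost_per_mw_year = 1_000_000

for multiplier in (1, 2, 3):  # 1x is the old rule of thumb; 2-3x is the caveat
    annual = power_mw * cost_per_mw_year * multiplier
    print(f"{multiplier}x: ${annual / 1e6:.0f}M/year, "
          f"${5 * annual / 1e6:.0f}M over a 5-year lifetime")
```

At the 2x to 3x figure, five years of electricity alone ($250M to $375M) rivals or exceeds the $300 million hardware line item in the EU-style procurement described above.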
So, if you're looking at total cost of ownership over three, four, five, six, or seven years, which is becoming increasingly difficult to project, those energy costs start to become the significant determinant of how much that machine actually costs to run over its lifetime.
Yuval: Last hypothetical, if you could have dinner with one of the quantum greats or the HPC greats, who would that person be?
Bob: That's a really good question, and one I wasn't expecting. I hate to take the easy answer, but it would have to be Richard Feynman. The guy who more or less started all this, and, as I always like to say, probably the only Nobel-winning physicist who was an unpaid intern at an HPC company. He actually hung out at Thinking Machines for a number of years while his son was going to school in Boston. And he's just such a character.
For those of you who saw Oppenheimer, he's the one sitting in the corner playing bongos, which is what he did. He was also notorious for opening classified safes at Los Alamos and leaving them open to drive security crazy. But his vision of where the technology would go was remarkable. I also did meet Seymour Cray many years ago, though I was scared to death to actually talk to him. Another visionary.
And I know this is really strange, but one of my favorite executive types was Lou Gerstner, who basically rebuilt IBM while the train was running down the track, transitioning it from kind of a big-iron company to an organization focused on professional services, application support, and really delivering solutions as opposed to hardware, and really rejiggered how IBM would go forward.
So it would have to be a relatively large table. And as soon as we finish this call, I'm going to think of at least three more people I'd like to talk to. By the way, I did meet Grace Hopper once at National Airport. She was struggling with a suitcase. I went up and said, "Can I carry that for you?" And she said, "Yes, dear." And I carried it to the baggage check. And I said, "I appreciate your work." And she gave me one of her famous foot-long pieces of wire, how far light travels in a nanosecond, which I still have somewhere in the house.
Yuval: Wonderful. So we’ll have to pick this up another time for your next three dinner guests. And Bob, thank you very much for joining me today.
Bob: Always a pleasure to talk to you, Yuval. You have a good day now.