David Rivas, Chief Technology Officer at Rigetti Computing, joins Yuval Boger to discuss Rigetti’s approach to building superconducting quantum computers. He highlights the company’s full-stack capabilities, including its captive quantum fab and proprietary control systems, emphasizing the advantages of vertical integration for optimizing performance. David explains Rigetti’s chiplet-based approach to scaling quantum processors, a key strategy for achieving large-scale, fault-tolerant quantum computing. He also introduces the Novera QPU, a modular 9-qubit system designed for flexibility and interoperability with third-party hardware and software. The conversation covers the evolution of quantum computing architectures, Rigetti’s move to 3D signaling for better qubit connectivity, and the role of logical qubits in scaling quantum processors. David reflects on the importance of continued hands-on experimentation, comparing today’s quantum computing landscape to the early days of classical computing. He also shares insights into the challenges of being a public company, the necessity of clear technical progress over short-term sales metrics, and the impact of emerging error correction techniques on the industry.
Transcript
Yuval Boger: Hello David, thank you for joining me today.
David Rivas: Hey Yuval, it’s good to be here.
Yuval: So who are you and what do you do?
David: My name is David Rivas, I’m the Chief Technology Officer at Rigetti Computing. I’ve been at Rigetti for about six years now and I’ve been CTO for a little over two.
Yuval: And Rigetti makes superconducting quantum computers, right?
David: We are a full stack superconducting quantum computing company and by full stack we mean we start by manufacturing and fabbing our own chips. We have our own captive quantum fab, in fact it’s the first captive quantum fab in the country. And we do everything else but for the dilution refrigerators. We don’t make all of our cables, we don’t make all of the sort of trellis components inside the dilution refrigerator, although we make quite a lot of hardware. But we also make our own control systems and we do a tremendous amount of software to support it.
Yuval: And so if someone is into superconducting qubits and believes they want to buy or use a computer that uses superconducting qubits, why should they use yours?
David: Well, actually it’s a great question. There are a couple of reasons we would certainly point to. First and foremost, these are high-performance qubits, so if you want to buy an 84-qubit system from someone, there are not that many people you can go to. Of those you can go to, the folks that provide literally everything are even fewer in number. It really does come down to maybe one, two, possibly three of us that can give you that capable a system. Our systems are competitive from the standpoint of things like fidelities, coherence times, just general performance. But the other thing that’s true is we distinguish ourselves in two important ways. One is we’ve been building complete computing systems pretty much from the beginning. One of the things that’s interesting about this particular industry is that, because of the importance, and the newness too, of the underlying qubit technology, that’s where so much of the focus is. As it should be, of course; this is a hard problem to solve. But at the end of the day, these are computing systems that people need and want, which means there’s an awful lot of additional work that needs to go into supporting the integration of, you know, a dilution refrigerator with some qubits into a full computing environment. And that’s something you get from us. The business of all that software and the control systems, coupled with our expertise in putting it all together to build hybrid systems, is a big part of it.
Yuval: So you also build your own control systems?
David: We do.
Yuval: And this concept of being almost completely vertically integrated seems very attractive to some. But then if I were a company making dilution fridges or control systems, I would say, well, you know, we’ve got 100 people and that’s their expertise. There’s no way a vertically integrated company of your size could really be as good as we are. What do you say to those?
David: Well, I guess the short answer is that you’re looking at the problem from the wrong end. And the reason people do this, I think, is that when they think about quantum computing, they tend to contextualize it within the last, say, 25 or even 40 years of classical computing. So for a lot of people, they think about computers and they think about the mid-to-late 80s. And at that point, the systems were developed enough, capable enough, the technology was mature enough, that the interfaces between various components were very well-defined. Performance of the entire system across those interfaces was very well-defined and could be made quite high. And so it made sense for focus to be put on individual components, manufactured separately and then integrated.
If you go back another 40 years, call it the mid-40s of classical computing, when these things were first being invented, nothing like that, of course, existed. And in fact, it didn’t really start coming into existence until, you know, the late 60s or early 70s, at which point people started thinking about high-level components being delivered as subsystems into the integration of computing. Operating systems didn’t really exist as a separate thing until much, much later in the game. So I remind people that what we’re trying to do here is quite difficult. It hasn’t been done yet. The machines themselves aren’t really full computational elements yet in the traditional sense of solving problems for enterprises. And in addition to that, every single element of the system, from the transmons we etch onto a chip, through the compiling technology and operating systems, with everything in between, including the integration of the dilution refrigerators, the components that go into the readout chain, the shielding that goes into the system to protect the QuICs, the control systems themselves, all of that has an effect on overall performance. And at the end of the day, it’s a computing system.
As I said, this is about performance. Right now, we tune every little bit of that to get the highest possible performance. My feeling is it doesn’t yet make sense, especially at the state-of-the-art system level, to say, well, we’re going to pull all that apart. Now, I’m not saying that won’t come, and we’re starting to see some of it with the smaller-scale systems. One of the things we’ve done recently was introduce something called Rigetti’s Novera QPU. It’s a 9Q system made from the same technology as our 84Q and above systems. It is designed as a unit: a cold finger, a tower that plugs straight into a dilution refrigerator. You just bolt it onto the mixing chamber plate of a dilution refrigerator. It’s got cabling that goes right up to the mixing chamber, so you attach the cables you’re used to attaching. You take those cables and plug them into the control system of your choice, run the software of your choice, and you have much more of an integrated system in that case. Performance is less of an issue at that point, because a lot of what’s going on there is science and people experimenting with tools to get to performance. That’s less true in the 24-qubits-and-above kind of situation.
One last comment on all of this. There are people who want to do the work themselves. Our computers go into national quantum computing centers and national labs and such. And there are folks that really do want to do work when it comes to building a new control system or playing with algorithms or low-level software, or even the scheduling or signaling elements that go into the software of the control system. We don’t keep people from doing that; it’s not like we lock our systems down. But there has yet to be a contract we’ve signed that didn’t also include a component that said, “Well, acceptance is dependent on performance. Your two-qubit gates have to perform at a particular level. We expect a certain amount of coherence out of these systems.” And to really deliver all of that, we’re not yet in the position of trusting third-party components for the most high-performance systems. Does that make sense?
Yuval: It does. How did this nine-qubit system come about? Was it some organized product management process that said, “Oh, people are looking for sub-$1 million systems, and that’s how we go about it”? Or was it just that you ran into enough customers saying, “Can I just buy this, and why are your systems so expensive?”
David: It’s a great question. As usual with these things, there’s not one perfect answer here. It was a little bit of a combination of both of those things. So we build a lot of 9Q processors in the run-up to building a 24 or an 80 or an 84 or a 100+Q system. We’re constantly testing technologies, the material stack, the underlying architecture of the qubits. We use those qubits to test new gate designs, so the small systems make an awful lot of sense for a company like us. So we build a lot of those things anyway. Moreover, because we have a fab and because we’ve been working closely with the likes of the Air Force Research Lab and the SQMS Center at Fermilab and other places, other universities that have used some government money to use our fab, we had the ability to provide these kinds of subsystems to individual customers. We didn’t think at the time about marketing them, and partially that was because four years ago there wasn’t quite as robust a market for the third-party components. You couldn’t necessarily just go buy a high-performance, off-the-shelf control system for that. Software wasn’t as broadly available. That really has changed. And yeah, we have had a few customers say, you know, it was less about price. Okay, price was a part of it, but it was more about: for the experiments we’re doing, 24Q is a lot; we don’t actually need all 24Q right now. I think that’s beginning to change a little bit. But there was a lot of call for five-, six-, seven-, eight-, nine-qubit systems. And we kept saying, “No, we don’t really feel like there’s value in building an entire 9Q system for you, including everything.”
So all that came together, and we kind of looked at the market and said, “Well, maybe now’s the time.” So we did put some proper product marketing into figuring out what the right system would be and what the approach to the market would be, and that process came up with Novera. And I have to say, it’s been fabulous. Not only have we sold a few, which is nice, of course, but it has enabled us to create a partner program around it, and so we feel really connected to the broader industry right now. I announced this two Q2Bs ago, not quite two years ago, and immediately I had interest from half a dozen people who wanted to partner, become part of this, and go to market together. And so that’s been quite nice. There are Novera systems being considered, and a few that have been bought, that are being put together with a pretty wide variety of hardware and software. And like I said, that’s a good thing, I think.
Yuval: Let me take the counterargument. You said that some scientists come in and say, “Well, we don’t need 24 qubits, nine qubits is enough.” I think that over time you’re going to find fewer people that want 24 qubits and more people that want 24,000 qubits.
David: Yes.
Yuval: How do you get there? How do you get to 24,000 or however many thousands or hundreds of thousands of qubits?
David: So we’re particularly proud of our answer to this question. We’ve known that scaling was going to be complex and something that had to be solved to really field proper quantum computing. And so for years we’ve been working on scaling technologies to support a chiplet approach to building out high-performance quantum computing. In fact, we were the first in the industry, I think in any technology but certainly in superconducting, to actually build relatively large multi-qubit systems out of multiple chips. Three or four years ago, we had 80Q processors up on Amazon and as part of our own QCS that were built out of two 40-qubit chiplets. And we found that entanglement across the chiplet boundary performed as well as entanglement within a chip; our two-qubit error rates were the same across the boundary as on-chip. We’ve recently announced that we’re going to tile four 9Qs to produce a 36Q system, which is intended to be released sometime in the middle of this year. And we expect to release a 108Q system, or more than that actually, by the end of the year, certainly made out of multiple chips. We know that’s the only way to go. As you get to a million qubits, or even a few thousand qubits, it’s really the only possible solution to the problem, and we’re pretty far ahead of the curve. I think IBM is the only other one that’s really announced a chiplet approach here, but we’ve seen some of the others finally following suit, and we feel pretty confident about our abilities here.
Yuval: What’s the limitation that’s driving the number of qubits you can put on a single chiplet? Is it yield? Is it something else?
David: Well, I mean, yes, it’s yield, but yield is a complex problem. It’s not a binary question, right? Yield is really a matter of the performance and capability of any particular chip that you have. And so what we have found is that we get to a reliable yield at a particular chip size, and that grows over time. I don’t think we’ve reached a place where we can say with certainty, this is too big for us to build a viable quantum processor out of. But the other thing is that by producing QPUs with somewhat smaller qubit counts on them, we can do an awful lot of cherry-picking to get the very best possible qubits for our systems. And there is variability in the performance of individual qubits or pairs of qubits at this stage of the game; all of us are still pushing the boundaries of what we want in terms of performance. So this provides us a tool and an opportunity to build relatively large systems out of the highest-performing qubits on a wafer or collection of wafers.
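David’s yield-and-cherry-picking point can be illustrated with a toy binomial model. This is a minimal sketch, assuming each qubit independently meets spec with some probability; all numbers are hypothetical, not Rigetti’s actual yield figures:

```python
# Toy model: if each qubit independently meets spec with probability p,
# the chance that *every* qubit on a die is good falls off exponentially
# with die size. Smaller chiplets let you bin the good dies and combine
# them, which is the cherry-picking David describes.
# All numbers here are hypothetical, for illustration only.

def all_good_prob(p_pass: float, n_qubits: int) -> float:
    """Probability that all n_qubits on one die meet spec."""
    return p_pass ** n_qubits

p = 0.95  # hypothetical per-qubit pass probability
print(f"9-qubit chiplet fully good:   {all_good_prob(p, 9):.3f}")   # ~0.63
print(f"84-qubit monolith fully good: {all_good_prob(p, 84):.4f}")  # ~0.013
```

Under these toy assumptions, roughly two out of three small chiplets are fully usable, while a monolithic 84-qubit die almost never is, which is why tiling selected chiplets is attractive.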
Yuval: If I think about a chip that has, say, 25 qubits, I could say, almost like the Google announcement, “Oh, I can actually make a high-quality logical qubit from that chip.” Do you see logical qubits cut across chiplet boundaries? I mean, how are you thinking about quantum error correction, qubit connectivity, and sort of the codes that you can run?
David: Yeah, one of the things that chiplets provide us with, too, is an architectural path to memory versus computation versus logical qubits, et cetera. It also potentially provides us with an architectural path to get creative with our connectivity schemes. I’m going to do a little bit of a sidebar, if that’s all right. There’s another bit of technology here that we’ve been driving for a while that’s going to help us with this. For most of the other superconducting players, though I think IBM is finally in the business of doing this, their connectivity schemes are planar. What I mean by that is they go to the edges of the chip, they bond pads at the edges of the chip, and then they present the connectivity to the chip at its edges. A long time ago now, five years at least, maybe more, we went to a 3D signaling scheme. So we’re coming from the top of the chip down onto the QuIC itself. Moreover, because we’re doing that, we’re actually leveraging what we call the cap, essentially another layer on top of the QuIC, for signaling purposes. This means that we can get creative, as I said, with architecture in terms of multiple layers of chip to support things like non-nearest-neighbor connectivity, the long-range connectivity that we’re likely to need for some of the latest error correction schemes that are coming up. So both of those things are going to be brought to bear on solving your architectural problem of how to build a very large scale, fully error-corrected system. I, like everyone else, was simply awestruck by the work that QuEra did with respect to quantum computer architecture for error correction with the collection of qubits that they have, and I suspect you’re going to see things like that get mapped into the kinds of systems I was just describing for superconducting qubits.
Yuval: Do you feel that a quantum computer in its current stage or in a couple of years ahead is going to be a general purpose quantum computer? Or do you think that it should be built to support a particular type of algorithm very efficiently?
David: I’ll provide you with an answer and you tell me if I answered your question. The way I see this going is based first on the fundamental belief that we need to build these systems and get them into people’s hands in order to find the best use cases for them as we move towards fully fault-tolerant, error-corrected systems. If I hand you an array of, say, a thousand qubits, connected four-to-one, running at, say, 0.1% error, or actually 0.01% error or something like that, you have a computational resource the world hasn’t seen before and hasn’t been able to simulate. It’s not a fully fault-tolerant, error-corrected system, but we don’t really know all the things we’re going to be able to do with it, and I suspect there will be many, many things. Now, as we move towards that, as I give you 200 or 300 qubits with half a percent or a percent of error, you’re going to be closer to crossing that Rubicon of a really impossibly powerful system. At that point, you’ll be trying lots of different things. And yes, we believe there will be opportunity for us to tailor the architecture of those systems to the problems at hand. How tailored, again, remains to be seen. I don’t know. Is it a general-purpose device that we can configure cleverly through multiple layers of chip architecture and switches or something? Can we incorporate some amount of classical logic onto that to do the configuration? Or are we going to tailor the underlying architecture specifically to a collection of problems, with the lattice layout very peculiar to that and the control electronics defined on that? I don’t have an answer, but I believe it will probably land somewhere in the middle.
And then over time, we may end up with something more general purpose. But the most important point is that we will only get there through continual experimentation with the resources we have at hand at any given point in time. And that’s sort of how we view the job here at Rigetti: to build those systems that way, get them into people’s hands, with as complete an environment as possible for them to do the work that needs to be done to use them properly.
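The trajectory David sketches, from sub-percent physical error rates toward fault tolerance, is often summarized with the standard surface-code scaling heuristic, where logical error is suppressed exponentially in the code distance once physical error is below threshold. A minimal illustration; the prefactor, threshold, and error rates below are textbook-style assumptions, not Rigetti’s measured numbers:

```python
# Standard surface-code scaling heuristic for logical error per round:
#   p_L ≈ A * (p / p_th) ** ((d + 1) // 2)
# where p is the physical error rate, p_th the threshold (roughly 1% for
# the surface code), and d the code distance. A, p, and p_th below are
# illustrative values only, not measured Rigetti figures.

def logical_error(p: float, d: int, p_th: float = 0.01, A: float = 0.1) -> float:
    """Heuristic logical error rate for a distance-d surface code."""
    return A * (p / p_th) ** ((d + 1) // 2)

for p in (0.005, 0.001, 0.0001):  # 0.5%, 0.1%, 0.01% physical error
    rates = {d: logical_error(p, d) for d in (3, 5, 7)}
    print(f"p = {p}: {rates}")
```

The point of the heuristic is the one David makes: below threshold, every halving of the physical error rate buys disproportionately more at higher code distances, so pushing fidelities matters more than raw qubit count alone.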
Yuval: You’ve been doing this for a number of years. What have you learned in the past year? Or in other words, what has most surprised you in the past year?
David: I was very hopeful that we would be in a position to get away from surface codes at some point. What has been a thrill has been the work being done by a broad collection of folks, not at Rigetti, to move away from them. It opens up a whole avenue of opportunity for us, I think, and really does bring the idea of a fully fault-tolerant system closer. It surprised me that it happened quite as fast as it did. And what I feel like is we’re in this really wonderful period of time where, over the next five years or so, tremendous amounts of good work are going to be done there. And it’ll get even better than it already is with some of these LDPC codes, the bivariate bicycle codes and others. That’s something that surprised me this year. What didn’t surprise me, and what I think is unique about this industry, is that because it’s still based on a lot of science, there’s a lot of sharing of information going on. One of the things I like about being in quantum is there’s a lot of information being shared and published. Everybody has their own secret sauce, but because it comes from an academic background, there’s as much joy in saying, “Hey, we got this thing working. Check it out.” And I think that’s healthy. I really do think that’s a nice and very good thing.
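The appeal of moving beyond surface codes is qubit overhead. As a rough back-of-the-envelope sketch, using the published parameters of the [[144,12,12]] bivariate bicycle ("gross") code, which encodes 12 logical qubits in 144 data plus 144 check qubits, against the usual count of about 2d² − 1 physical qubits per distance-d surface-code logical qubit; routing and ancilla details are ignored:

```python
# Back-of-the-envelope qubit overhead: distance-d surface code vs the
# published [[144,12,12]] bivariate bicycle ("gross") qLDPC code.
# Counts ignore routing, magic-state factories, and other real-system
# overheads; this is a rough comparison, not a full resource estimate.

def surface_code_per_logical(d: int) -> int:
    """d*d data qubits + (d*d - 1) measurement ancillas per logical qubit."""
    return 2 * d * d - 1

# Gross code: 144 data + 144 check qubits encode 12 logical qubits, distance 12.
gross_per_logical = (144 + 144) / 12  # physical qubits per logical qubit

d = 13  # nearest odd surface-code distance with comparable protection
print(surface_code_per_logical(d))                       # physical per logical
print(gross_per_logical)
print(surface_code_per_logical(d) / gross_per_logical)   # overhead ratio
```

At comparable distance, the sketch suggests roughly an order of magnitude fewer physical qubits per logical qubit, which is why these codes generated the excitement David describes, and why their long-range connectivity demands favor the 3D signaling he mentioned earlier.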
Yuval: What if the superconducting thing doesn’t work? I mean, how confident are you that this is going to be a winning modality, if not the winning modality?
David: You know, as confident as I can be, to be honest with you. Here’s the thing: the trick with superconducting is that it leverages a collection of technologies that are well-known. We have a fab here that doesn’t look too different from a fab you would have built, I don’t know, 20 or 30 years ago. There’s a little bit of difference in materials and such, but the fundamentals we have to bring to bear on solving our problem are not insurmountable. In addition to that, we get to leverage everything that’s happened in the last 80 years of classical computing. So I feel pretty good that we will produce something very, very useful, and we will be one of the technologies that support full-scale, general-purpose quantum computing. Every one of the major obstacles that people would throw in our way, it seems, we find a way to get around, right? For a while there, it was like, “Oh, well, you’re not going to be able to scale those things.” Well, it turns out that’s not going to be the problem. “Oh, you’re never going to get the error rates down. It’ll never be as good as, pick your favorite optical technology.” Or, “Oh, the coherence times aren’t as good.” Well, the problems are being solved, and part of the reason is that the tooling we’re using to solve them is pretty well understood. There are hard engineering problems to solve to get there, but they’re engineering problems we’re capable of bringing a collection of smart people to turn the crank on. So I’m pretty confident. I really am.
Yuval: In terms of your scientific work, has becoming a public company made an impact, a change in the way you work scientifically? You know, liquidity aside, and having to meet quarterly reports and so on, how have things changed for you scientifically?
David: Yeah, scientifically I would say I can’t think of anything, honestly, that we’re doing differently now. From a scientific perspective, the thing that is most challenging in some ways about being a public company is that you’ve got to face the Wall Street investors on a regular basis and tell them what you’re doing. We changed leadership here a little over two years ago, and so I got to go front row center with our new CEO to a bunch of those investors. And he did a remarkable thing, and it was surprising to me that it was as supported by the investor community as it was. He started by saying, “Look, you need to stop paying attention to our sales numbers right now. You need to be looking at us and saying, are they making the technical achievements that they say they are going to make?” And he said, “What you need to know is that by next year, I’m going to have significantly improved our error rates. That’s what I’m doing.” And it was a bold thing to say to the Wall Street community, right? But most of them nodded their heads and said, “All right.” So the analysts that follow us and the investors that are paying close attention have been hearing this message, and we’re doing what we said we’re going to do. I have a slide I like to show of how we’ve been halving our error rates in less than a year, on average about every nine months or so. And I love showing that slide because it’s exactly the thing we said we were going to do. Now we’re going to start adding a little bit of scale to that. The thing that is difficult about being a public company is that the pressure from those folks to say the things they want to hear is really high. What I’m most proud of the executive team for is sticking to our guns and just saying the thing we think is right and true, and weathering the storm, as it were, when we get asked those questions.
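The halving cadence David mentions compounds quickly; a one-line projection makes the point. The nine-month figure is his approximate claim, and the starting error rate here is hypothetical:

```python
# If error rates halve roughly every 9 months (David's approximate cadence),
# a hypothetical 1% two-qubit error rate projects forward as follows.

def projected_error(start: float, months: int, halving_months: int = 9) -> float:
    """Error rate after `months`, assuming halving every `halving_months`."""
    return start * 0.5 ** (months / halving_months)

start = 0.01  # hypothetical 1% two-qubit error rate today
for months in (9, 18, 36):
    print(f"after {months:2d} months: {projected_error(start, months):.5f}")
# After 36 months (four halvings) the rate is 1/16 of where it started.
```

That exponential cadence is why the CEO's pitch to investors, track error rates rather than sales, is a coherent milestone: a few sustained halvings change what the machines can do.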
Yuval: As we bring our conversation to a close today, I wanted to ask you a hypothetical. If you could have dinner with one of the quantum greats dead or alive, who would that be?
David: That’s a really hard one. I’ve met a couple of them. I’m a big Will Oliver fan. It’s funny, my VP of business development, Mike Peach, took his class a while ago and then found a way to meet him. So he spent more time with him than any of us. But I think what I like about Will is a combination of pragmatism and willingness to move beyond. I haven’t met him yet, but I’ve read a bunch of his papers and they were very useful to me. As you probably know, I don’t have a PhD in physics. I’m an engineer by trade and mostly a software engineer by trade. And so having those kinds of resources and seeing a mind work on the problems and then explain them in a way that makes sense for somebody that’s trying to build one of these things was very valuable to me.
Yuval: David, thank you so much for joining me today.
David: Yuval, thank you. I really enjoyed the conversation.