Renaud Béchade, founder and CEO, Anzaetek

Renaud Béchade, founder of Anzaetek, a quantum software company based in Korea, is interviewed by Yuval Boger. Renaud shares insights about the company’s work in quantum machine learning (QML) for hospitals, focusing on managing limited medical data, federated learning, and potential quantum solutions for personalized medicine. They also discuss hardware-software co-design, the quantum ecosystem in Korea, and the future of quantum applications beyond healthcare, including in finance and optimization. Renaud reflects on key learnings from recent conferences, potential breakthroughs in quantum error correction, and much more.

Transcript

Yuval Boger: Hello, Renaud, and thank you so much for joining me today.

Renaud Béchade: Hello, it’s a pleasure to be on your program.

Yuval: Wonderful. So who are you and what do you do?

Renaud: So my name is Renaud Béchade, and I’m doing quantum software. In particular, there is one project the team is working on right now, which is QML for a hospital.

Yuval: Where are you geographically?

Renaud: We are based in Seoul, and part of the team is also in Daejeon, at KAIST (Korea Advanced Institute of Science and Technology).

Yuval: What’s the name of the company?

Renaud: The name of the company, the Korean name, is Anzaetek, which is basically a joke about the ansatz.

Yuval: What could be achieved today with quantum machine learning? And by the way, is it expected to run on real hardware, or is it more of a quantum-inspired or simulator environment? Tell me about the work if you can.

Renaud: Okay, so the first step of the work is actually to do it completely classically, so we can make sure we have managed the data properly, cleaned it up, and done whatever is needed to make sure that the data is handled properly.

Then we go on a simulator, and we’ll dig into that a bit later. And of course, the step that we are trying to propose to the end user is to try a real computer, so of course you have the QuEra solutions that we are very eager to try, both in the blockade mode as well as in the FTQC mode.

Yuval: People talk about criteria for developing quantum solutions, that there’s no good classical solution for it or not sufficiently good, or that the problem requires the power of a quantum computer. Where does the work that you are doing fall on that spectrum? Is it a proof of concept, experiment just to see how quantum works, or is there a real problem that the customer believes you can solve?

Renaud: So basically, for everything that’s related to the medical world, you have limitations on how much data you can have. It depends exactly on what you are getting, but say, for instance, you want to try to do a personalized treatment: the number of samples is going to be limited.

So there are all kinds of limitations, and sometimes it’s just that by nature you will not have that many samples available anyway. That’s where quantum, from what I understand, offers an improvement, with few-shot learning. That’s one thing.

And another thing, which is probably more of a long-term prospect because it needs to be researched and engineered, is that we have good hopes of drastically reducing the power requirements, which is, for instance, a big campaign subject in the States.

So it’s really a very serious subject, and in fact, if we can combine the green aspect and the high-tech aspect, in this case the improvement of health, that would be quite a great achievement.

Yuval: You mentioned the absence of lots of data, and I could envision that happening in two instances. One is if you’re doing a clinical trial, and say you only have 15 subjects, you really would ideally want to have more, but it takes a lot of time or money to get it, and therefore you’re hoping to use quantum machine learning to extract everything you can from these 15 patients. Or that you have a single patient and there’s just a limited amount of data, and you’re trying to extrapolate or get information about that one.

The difference I would see is that in the 15-patient example, you execute the program essentially one time, whereas in a patient-by-patient case, you might do it multiple times. So I’m trying to understand where the power savings would happen. Is it power savings for a one-time execution or is it part of an ongoing production process?

Renaud: It’s more like, well, you clearly have the training time, which is one of the biggest consumption sources right now. And inference, of course, is less of a consumption problem at this stage, because you also have alternative technologies like the Groq LPU, for instance, that can improve inference.

Now, unlike classical models, some of the quantum models are not really obvious to emulate, so you can’t really run them efficiently on classical hardware. So basically, at some point, you have to make a choice, and we don’t know yet what is going to be best.

What I suspect is that we’ll probably end up having some ensemble models, basically meaning that you have, say, one classical model, one quantum-inspired model with very specific hardware, and one model that is actually fully quantum. You put them together in an ensemble method, and the hope, which needs to be tested for pretty much each data set at this stage, is that you get something better than if you were using each approach separately.

So that’s something we hope to target, but of course, we’ll do that step by step.
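To make the ensemble idea concrete, here is a minimal Python sketch. The three models are stand-in stubs (not Anzaetek’s actual models), and the combination rule is a plain probability average, the simplest ensemble method:

```python
# Minimal ensemble sketch: three already-trained binary classifiers,
# each represented here by a stub returning a probability of the
# positive class for an input record x.

def classical_model(x):
    return 0.62   # stand-in for, e.g., a gradient-boosted tree model

def quantum_inspired_model(x):
    return 0.55   # stand-in for, e.g., a tensor-network-style model

def quantum_model(x):
    return 0.71   # stand-in for a variational QML circuit's output

def ensemble_predict(x):
    """Average the three probabilities; whether the ensemble beats each
    member has to be tested per data set, as noted above."""
    probs = [classical_model(x), quantum_inspired_model(x), quantum_model(x)]
    return sum(probs) / len(probs)

print(ensemble_predict({"age": 54, "biomarker": 1.3}))  # ~0.627
```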

Yuval: I’m curious how the company got started. What made you form the company? Is there a particular focus area? Is it quantum machine learning, or is it quantum algorithms in general? What can you share with me?

Renaud: Okay, so something interesting happened. It started with me being a trader in Tokyo about 15 years ago. I had friends at the bank I was working at, and one of them happened to be the co-founder of a small company called QuantFi, which still exists today and is more focused on quantum computing for finance.

So I helped them in the beginning. I think the exact title was senior scientist for the finance part, because I used to work in finance, and I grew into the deputy CTO role. The co-founder was doing both CEO and CTO.

And then I was basically in a situation where I was kind of independent in Seoul; I went to Seoul for completely different reasons. Okay, so second event: I befriended a postdoc at KIAS (Korea Institute for Advanced Study) called Adel Sobhi, who is now a team lead at ORCA, working also, by the way, on fault-tolerant quantum computation.

And so we did some mini-research together. The results themselves were not that interesting. There are plenty of reasons, but basically, as you know, if you try to do any form of research, sometimes you find things that are interesting, and sometimes you find things that just work, but not enough to become a product. It was the second category.

But still, I found that quantum computing was interesting. And maybe something I should have mentioned, too, is that I used to do hardware acceleration at a company called Maxeler. I mentioned the Groq LPU; Maxeler is actually building the compiler for the LPU chips, and I used to work on their compiler, maybe 15 years ago again, right after the bank in Tokyo.

And so, all in all, I decided I wanted to do some form of hardware acceleration. And it happened that there was apparently a niche of interesting things to do in quantum. There are a few examples that could work for finance. Until we have proper FTQC, there are a number of things that will not be easy.

For instance, anything related to pricing is going to be very, very difficult, for just the same reason as if you wanted to do CFD without fault tolerance: the sheer scale of things. But still, I think there is huge potential in it.

And what is lacking in the few projects I’ve seen so far is having more people that are connecting the end users with the actual hardware, which corresponds relatively well to what I did in the past as a hardware accelerator developer. And also, there was the potential of doing things in finance.

Things happened in such a way that the first customer to be really interested in our project, enough to build a product, is actually a hospital. That being said, I would say that most of the things we’re working on for the hospital could be translated to finance domains. In particular, there is one sub-project on federated learning.

And if you do QML and federated learning, you could obviously do that on sensitive data like patient data. But another thing would be fraud detection: the fraud data itself is by definition something you don’t want to share much, simply because it is either very personal or, let us say, embarrassing data. That could be a domain of application, as in the sketch below.
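As a rough illustration of that federated pattern, here is a plain-Python sketch of federated averaging: each site (a hospital, or a bank holding fraud data) trains locally and shares only model parameters, never raw records. The local update is a dummy stand-in for real training, not the actual sub-project:

```python
def local_update(weights, local_data):
    # Dummy stand-in for local training: nudge each weight toward
    # the mean of this site's private data.
    local_mean = sum(local_data) / len(local_data)
    return [w + 0.1 * (local_mean - w) for w in weights]

def federated_round(global_weights, sites):
    # Each site trains on its own data; only parameters leave the site.
    updates = [local_update(global_weights, data) for data in sites]
    # A coordinator averages the parameter vectors (FedAvg-style).
    return [sum(ws) / len(ws) for ws in zip(*updates)]

sites = [[1.0, 1.2, 0.9], [2.1, 1.9], [1.5, 1.4, 1.6]]  # private per site
weights = [0.0, 0.0]
for _ in range(5):
    weights = federated_round(weights, sites)
print(weights)  # both weights drift toward the cross-site mean
```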

And similarly, once you have the capacity to train on multimodal data, you could imagine taking the Bloomberg data, the machine-readable data, adding a few images to it, maybe video if you feel lucky. That could also be an application.

In practice, analysis of video for surgical data is also an ongoing project. Now, doing that for analyzing news is probably the next level. So probably not happening immediately, but that’s something that could be attempted.

Yuval: So it sounds like you’ve been at this for quite some time. What have you learned about quantum in the last six months that you didn’t know before?

Renaud: There were a few interesting things. I went to a few conferences, technical conferences.

And I think what was interesting, and one point which is unfortunate for people without FTQC (you’ll have to refer to the papers themselves), is that there were a few papers presented, for instance at AQIS (Asian Quantum Information Science Conference) in Sapporo, that basically claimed that if you have too much noise, or even very little noise for that matter, the depth of a circuit that will actually be exploitable for learning is going to be very limited. So you need very wide circuits, not only deep but very wide, if you want to be able to do anything without the correction. It validates the kind of choice I wanted to make. That’s one thing.

And maybe another interesting result was a CLEO paper that mentioned that you could actually have complex entropy, as in complex numbers. The reason is that, depending on the kind of model, you have pseudo-probability functions, which means that you will not have just positive numbers appearing in your distribution. And then, using the same kind of formulas as for ordinary entropy, you can construct a complex entropy and prove a useful result with it. So it’s maybe more of a mathematical construction at this stage. And another thing, speaking of entropy, is that you could also have a similar thermodynamic style of modeling with the entanglement itself.
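To sketch the construction being described (my notation, assuming the Shannon form is applied verbatim to a quasi-probability distribution; the paper’s actual definition may differ):

```latex
% Shannon-form entropy applied to a quasi-probability distribution
% \{p_i\} with \sum_i p_i = 1, where some p_i may be negative:
S = -\sum_i p_i \ln p_i
% On the principal branch, \ln p_i = \ln\lvert p_i\rvert + i\pi when
% p_i < 0, so S acquires an imaginary part:
\operatorname{Im} S = \pi \sum_{p_i < 0} \lvert p_i \rvert
```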

So you have all kinds of interesting results, both qualitative and quantitative, appearing. How to exploit them properly algorithmically is a different story, of course, but it’s very interesting to see that, in fact, even the basics are not that settled anymore.

Of course, there is a full corpus of things that we’ve discovered about quantum mechanics, but as soon as you’re analyzing what’s happening in a bigger system, not just a few atoms and a few hundred states, things become very complicated.

So as you can imagine, when you have these Rydberg atoms together in dynamic arrangements, it becomes very, very interesting.

And lastly, another one that you’re probably aware of: I read part of a paper about error correction, some of the error correction methods that you can use when you have the Rydberg blockade interaction available.

And it’s interesting to see that there is really a strong, strong link between the hardware, what you can do with it, and how you should use it, which also means that eventually software makers like me will have to do a lot of work to link their solutions to a few select hardware platforms to get maximum performance.

Well, actually, “maximum performance computing” used to be the motto of Maxeler. But that’s pretty exciting, and it makes the whole venture valid, and it kind of builds a long-term barrier to entry if we do our job properly.

Likewise, if you have a good coupling between the way you do your error correction and the hardware that you’re using, you’re going to have some advantages.

And another one which is interesting, on the Japanese side, I think it was Fujitsu, was some alternative scheme for error correction. So you know that if you want to represent rotations, you need to decompose them; by this, I mean the rotation gate, if the audience is not super familiar with that.

And usually you have just very few gates available in error correction, which means that if you have some, I would say, complicated rotation, which is something that’s going to happen with QML in general, you need very long circuits. That becomes a problem even if you do the correction, because it becomes slower.

And the proposal of the Japanese team is to add an arbitrary rotation gate to the set of gates that are used. Then you have different ways to exploit that: either you accept a partial error correction, where there is a residual error in the rotation you’re applying, or, and I think it’s a bit similar in spirit to the GKP code, you multiply the number of rotations and try to do some correction using this multiplication of resources, which of course looks obvious after the fact, but you need to have the whole scheme working.
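For a rough sense of the overhead being addressed (a back-of-the-envelope sketch, not the Fujitsu scheme itself): in a standard Clifford+T fault-tolerant gate set, every arbitrary Rz rotation has to be synthesized from discrete gates, with a T-count that scales roughly as 3·log2(1/ε) for target precision ε (the Ross-Selinger scaling). Making the rotation native collapses that cost to a single gate:

```python
import math

def t_count_per_rotation(epsilon: float) -> int:
    """Approximate T-gate count to synthesize one Rz rotation to
    precision epsilon in Clifford+T, using the rough 3*log2(1/eps)
    Ross-Selinger scaling (an estimate, not an exact bound)."""
    return math.ceil(3 * math.log2(1 / epsilon))

# A QML-style circuit with many data-dependent rotations:
n_rotations = 200
for eps in (1e-2, 1e-4, 1e-6):
    decomposed = n_rotations * t_count_per_rotation(eps)
    print(f"eps={eps:.0e}: ~{decomposed} T gates if decomposed, "
          f"vs {n_rotations} gates if arbitrary rotations are native")
```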

So there are really plenty of different things to discover and test in the domain. And it’s not only in neutral atoms, of course; there are other interesting results happening in photonics and in trapped ions. The bad news is that it means more competition, but the good news is that we will be able to bring something really useful to mankind in time.

Yuval: You mentioned several things, and one of them was this idea of co-design of an algorithm with the hardware vendor to make maximum impact of using the hardware given the limitations and the opportunities today. So what is your advice for a software company that says, “Okay, well, I can’t go through this process with seven different vendors.” How do you choose the vendor that you do the co-design with?

Renaud: Okay, so basically one of my prospects says he wants to have a solution with at least two vendors in terms of hardware. That’s the way many people think today. So it means that as a software vendor, unfortunately, I will have to support at least two or three architectures.

So of course, some of the architectures will be more performant for some problems than others. And that’s probably one way to do things. So far, what I’m telling the friends I have in Korea in the software domain, and I think they kind of agree, so I cannot go into too much detail about what we’re discussing, is that what probably makes sense is to have a few specialists for different domains, or specific sets of hardware, and bring in some form of consortium approach to what we provide to the end users.

And that’s what I’m trying to do right now. So there might be some announcements coming along these lines; we’ll see. But in all cases, I think we should try to have some collaboration and build some relatively, I would say, sensible standards on the way you interface with the computer, for instance.

And agreeing on the types of problems that are interesting is something that is going to bring value to the end customer and make the whole domain viable. In a sense, what you’ve seen with GPU acceleration is kind of the pattern we’re probably going to see.

You need to have these low-level things, just like the CUDA libraries that people need to get the big performance. And that’s the reason why NVIDIA has been dominating for so long: the other vendors maybe didn’t have the opportunity, or didn’t make the effort, I don’t know, to get to this level of performance, which meant having not only the hardware performance but also the software that comes with it, so that people who actually want to solve real problems can just download PyTorch or TensorFlow, in the case of AI, and start working on their problems instead of having to develop something new from scratch.

So it’s more this question of integration and collaboration, which is something that happened in the classical hardware domain. For instance, PyTorch works both with NVIDIA and with AMD; that’s clear. So there will need to be some form of proper and, I would say, higher-goal kind of collaboration to get things going and have good results.

Yuval: You mentioned that you are in Korea and mentioned several of your Korean friends. What can you tell me about the quantum ecosystem in Korea, whether the national program or even before the national program?

Renaud: So the national program is something I’m still discovering, mostly through my Korean friends, because I have neither the competence nor the patience to go through all the papers that describe this program.

So basically, like in any country, when you try to apply for a program, if you’re not a native, it can be quite stressful. And it’s not just Korea. If you try to do things in France, for instance, you will soon discover that the amount of paperwork you have to do can also be substantial.

So at this stage, all I know is that we have good news in terms of the money that will be invested long term in this project. That’s a given. And then the question is how things are going to be validated. I know for a fact that the hospitals will have their part in it, so I’m not too concerned about the long term. Of course, the short term is, like in any small company, making sure we can make ends meet and continue to have fun and talk to interesting people. And then there are the other domains. You have basically a lot of things that are already happening, like drug discovery.

So I would say relatively classical domains. But I think that what remains to be seen is how many new applications are going to be tackled. Once we have proven that quantum works, something that will be interesting is to try to develop new applications that effectively were not possible without quantum.

Off the top of my head, I would say quantum scientific computing, which is something I’m trying to advocate for; as a disclaimer, I graduated in scientific computing, so that kind of fits. But more than that, once you have this capacity to actually do real simulation, it means you could try to do, for instance, everything related to optimized design, with the whole simulation, or at least a big part of it, happening in the computer.

So if you have the quantum CFD that we know is going to bring a certain number of advantages, you can also optimize: for instance, you could imagine optimizing the global shape of an airplane, optimizing turbines, optimizing rocket engines, etc.

Or it could be something maybe more down to earth, like doing reinforcement learning with an integrated simulator, which is something that’s going to be very helpful if you want to have advanced robots.

Of course, it’s something that’s going to happen in the long run. As you well know from the roadmaps of pretty much everyone, we’re not going to have thousands or tens of thousands of logical qubits for some time, so we have to get there progressively.

But I think it’s going to be very, very interesting. Just like the introduction of GPUs enabled, I think it was 2012, AlexNet and basically the CNN Cambrian explosion that led to AI as we know it. It happened also because we started to have the computing power for it, in the form of GPUs.

And if we have, again, some explosion in terms of, I would say, equivalent processing power, we might also see very, very interesting applications. But of course, it’s not until we have actually tried that we will really know what is going to work.

Yuval: Excellent. So as we come to the end of our conversation today, I wanted to ask you a hypothetical. If you could have dinner with one of the quantum greats dead or alive, who would that person be?

Renaud: I don’t know. I think all of them are interesting, so it’s a difficult choice. Great names. The people whose names I know, I’ve already met, so it’s a bit complicated.

Technically, it’s interesting because I haven’t met Alain Aspect, even though he used to be a teacher at my alma mater. So that could be one interesting gentleman to meet. Now, if we go to the more mythical people, I would hesitate, I guess, between Oppenheimer and Feynman. Because they are really the guys that brought, maybe not, I would say, hyper-civilian applications, but still, they brought the whole science to the next level.

So, for instance, if you take Oppenheimer’s work, it’s not only the work we know from the end of World War II; he was also a pioneer of basically quantum chemistry. It’s something that I really have difficulty getting introduced to, because I did something other than chemistry altogether during my studies.

The interesting thing is we’re still seeing references to Oppenheimer in today’s introductory books, at least. Maybe not in all the publications, but still, that’s quite a long time.

And, of course, Feynman. My teacher, when I was in what France calls the classes prépas, said that if there was only one book you wanted to read beyond the courses we had to go through in the prépa system, it was basically Feynman’s Lectures on Physics.

Yuval: Very good. Renaud, thank you so much for joining me today.

Renaud: Very cool. Thank you. It was a pleasure.