Sergio Gago, CTO of Cloudera and former Head of AI & Quantum at Moody’s, is interviewed by Yuval Boger. They discuss a practical, problem-first path to quantum in finance. Sergio contrasts annealers and gate-based systems, emphasizes hybrid workflows for portfolio optimization and Monte Carlo risk, and explores fraud detection and knowledge-graph use cases. He stresses rigorous benchmarking against top-end GPUs/FPGAs, explainability for regulators, and total cost of ownership over lab metrics. They also touch on post-quantum cryptography (offense vs. defense), realistic definitions of “quantum advantage,” why disciplined pilots and candid vendor comparisons matter more than hype, and much more.
Transcript
Yuval: Hello Sergio, thank you for joining me today.
Sergio: Hi, Yuval, great being with you today and back in the quantum sphere, even if it’s for an hour.
Yuval: Absolutely, we’ve missed you. So remind everyone, who are you and what do you do?
Sergio: So my name is Sergio. This is not the first time talking to you, right? Some people know me more from the Quantum Pirate newsletter, which is still alive and kicking and growing. I used to be the head of AI and quantum computing at Moody’s, and now, for going on six months, I am the chief technology officer at Cloudera.
Yuval: So let’s talk about quantum at Moody’s. I think Moody’s is unique in that a lot of people in the quantum market are just trying to sell computers or services to other people, but Moody’s Quantum had a different goal and a different customer base. Maybe you could talk about that a little bit?
Sergio: Sure. Absolutely. And of course I cannot represent the company anymore, but I can talk about the world of quantum for finance and why rating agencies and analytics companies like Moody’s and others can benefit from quantum computing, and why I felt that was fundamental for that company. It all started when I joined the company. I joined through an acquisition; I was running a company in AI called Acquire Media. It was AI before the generative AI craze. Back then we were running machine learning and expert systems and things like that. It was a great acquisition for Moody’s, and I joined the company. But when I joined, I saw the amount of data that the company has, the amount of models, quantitative models and machine learning models, that they run for anything that relates to making better decisions, specifically in the financial world, in the risk management world, from all the different angles, whether it is credit risk, which is the most apparent for a ratings agency, but also supply chain, systemic risk, cash flow risk, natural disasters, weather modeling, and so on. So it was not only the perfect playground for a data scientist like myself, but also the perfect playground for a quantum algorithm developer, which is what I had started doing a few years prior to joining the company. So it was a pretty clear use case, right? On one hand, the hypothesis of: can we use new technology to improve, exponentially or linearly, the modeling techniques that we do today? And we can add more details on some of the use cases. And also, of course, the cryptographic problem that many institutions in the financial industry, many banks and public institutions, face with the advent of quantum computers. So how do we, on one hand, use it on the offense, build competitive advantage and be better and more accurate at our calculations, and at the same time play the defensive side, so that we protect ourselves and our data against a potential quantum cyber attack, like the ones that we know are going to be coming sometime soon.
Yuval: You mentioned cryptography and I got to ask you, obviously it’s critically important for all the reasons that you mentioned, but isn’t it just so much more boring than computing? Isn’t computing so much more interesting than cryptography or quantum cryptography?
Sergio: You’re getting into dangerous territory, but I guess it depends on the persona, right? For someone like me, with a software engineering background and data science and so on, I definitely prefer the computing side. Tinkering with algorithms and playing with new machines and so on. I did that from playing with GPUs when we started using them for machine learning, FPGAs and other types of compute systems. And then when I got to know that you could tinker with quantum computers, that was a new paradise for me. But lo and behold, for companies, especially private companies, Fortune 500s and so on, when you’re playing on a very level playing field, the promise of quantum advantage from an algorithmic perspective is still intangible and very far away, and obviously there’s a lot of noise in the sector, it’s really difficult to keep sustained investment for the long term on the promise of quantum advantage someday. And that is why we have seen many companies opening and closing their quantum teams, or refocusing and re-strategizing them and so on, especially in banking. On the other hand, the cybersecurity space is a better sale, especially for the CEO. And I think we have a mantra in this industry where we are able to ask the CEOs or the CIOs: are you the one willing to risk the data estate of your company just for a one-, two-, three-, or five-percent chance that there is going to be a cryptographically relevant quantum device in the next, say, five years? And the answer is always: absolutely not, what do I have to do in order to protect myself from this risk? So to your question, is it a bit more boring? For some of my colleagues in the cybersecurity space, no. They love it, they are super engaged, and many people in the industry that we all know are really good at that. Me personally, I enjoy playing on the offense more, but I think you need both, especially for a long-term and sustained quantum strategy in the private sector.
Yuval: Boring does not mean it’s not important, so I think we agree on the important part. You started by saying that you joined Moody’s and you wanted to see if these new technologies could help them analyze their vast amounts of data better. A few years into this, what’s your answer? Can quantum computing help analyze data better, and if not now, when?
Sergio: Right, again with the super easy questions. I think it’s dangerous territory, because many companies keep making those claims of quantum advantage, right? And you always have to take it with a pinch of salt, because quantum advantage for what exactly, and under what circumstances? So if I take a business perspective, a pure business one, where I ask the question: is there anything in the private sector, specifically in finance today, as in September 2025, that I can do better with quantum computers than with classical systems? And by classical, I mean anything: CPUs, GPUs, FPGAs and so on. The answer is no. Plain and simple. And I’m happy to get contradicted or convinced otherwise by anybody in the industry with actual benchmark numbers. Now, it is true that many companies and researchers come with ideas and experiments that show traces of quantum advantage, either from a mathematical perspective or even from the algorithmic perspective. We do know that there are algorithms that provide exponential or linear speedups on the problems that we have at hand. But from the enterprise perspective, we cannot just look at the algorithm. We need to look at total cost of ownership and the end-to-end cost of a whole flow, right? So imagine you’re running a fraud detection algorithm for a credit card merchant or a credit card provider. You need to ingest millions of credit card transactions from all over the world, billions actually, you need to build a classifier that has certain thresholds and is explainable, and then you need to build the right model to classify something as potentially fraudulent or not. And you can come up with tens or dozens of use cases like those. To start with, mapping big amounts of data onto quantum devices is simply not possible. We don’t have enough qubits. And we most likely won’t have that many qubits to deal with, say, training data sets such as the ones that we use for large language models. So maybe that is not the way of looking into the problem. Many people work on feature reduction, for example principal component analysis on machine learning problems, which in principle is supposed to help our classical algorithms run better and in a more efficient way. So those are the ones that I’m most interested in, where quantum takes a small part of the process or the problem and delivers speedups or accuracy improvements in that area. But for the most part, these will be heuristic approaches rather than algorithmic. That means we will have a given problem, say a Monte Carlo simulation of any kind, and we will run it the classical way, effectively throwing dice by the trillions, and we will run it on a quantum device, and then we will see: oh look, the quantum answer is either faster, more accurate, or more economical to run. None of those three exists yet. But we’re seeing the light at the end of the tunnel. And I think this year we have seen amazing breakthroughs, both from the scientific side, from the research side, and from the quantum hardware companies, both the pure-play companies and the large ones. And I’m really, really optimistic on the timelines before the end of the decade.
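To make the heuristic benchmark Sergio describes concrete, here is a minimal Python sketch of the classical side of that comparison: a plain Monte Carlo estimate of correlated credit losses that records accuracy and wall-clock time, the baseline a quantum run (for example via amplitude estimation) would have to beat on the “faster, more accurate, cheaper” test. The one-factor copula, parameter values, and function names are illustrative assumptions, not anything from Moody’s.

    import time
    import numpy as np
    from scipy.stats import norm

    def classical_mc_credit_loss(n_obligors=1_000, n_paths=20_000,
                                 rho=0.2, p_default=0.02, seed=7):
        """Expected loss, 99% VaR and standard error under a one-factor Gaussian copula."""
        rng = np.random.default_rng(seed)
        threshold = norm.ppf(p_default)              # default barrier per obligor
        z = rng.standard_normal(n_paths)             # one systematic factor per scenario
        losses = np.empty(n_paths)
        for i, zi in enumerate(z):
            eps = rng.standard_normal(n_obligors)    # idiosyncratic shocks
            assets = np.sqrt(rho) * zi + np.sqrt(1.0 - rho) * eps
            losses[i] = np.mean(assets < threshold)  # fraction of the portfolio defaulting
        return losses.mean(), np.quantile(losses, 0.99), losses.std() / np.sqrt(n_paths)

    start = time.perf_counter()
    expected_loss, var_99, std_err = classical_mc_credit_loss()
    elapsed = time.perf_counter() - start

    # These numbers are the classical baseline a quantum pipeline has to beat
    # on at least one of: runtime, accuracy (standard error), or cost to run.
    print(f"expected loss {expected_loss:.4f}  99% VaR {var_99:.4f}  "
          f"std err {std_err:.5f}  wall clock {elapsed:.2f} s")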
Yuval: Speaking of timelines, you know, it’s easy to put together a PowerPoint slide. It’s of course a little bit more difficult to actually build a working quantum computer. So in your previous job at Moody’s, you probably had many companies come to you and talk about their roadmap and their inherent advantage and this, that, or the other. How do you evaluate what’s fluff and what’s real?
Sergio: That’s a fantastic question. And it is really difficult. When we built the quantum team at Moody’s, I wanted from the very beginning to make it extremely realistic, pun intended, with an application focus. Many other companies take a more research focus: we want to publish papers, we want to participate with academia, we want to do fundamental research, whether it is on error correction or on new algorithms and things like that. From my perspective, I thought that we needed to take a much more practical approach, which is effectively what I was saying before: take a whole end-to-end process, a business process, whether it’s fraud detection or a credit simulation using Monte Carlo or traversing a knowledge graph, any of those problems that we know either from machine learning or from quantitative finance and so on. And once you have that problem, take a top-down approach. Start trying to apply existing research to the problem, your Grover’s algorithms, your quantum machine learning models and so on, and see where exactly the bottleneck is. Is it the gate speed on the available quantum devices? Is it the number of qubits? Is it the coherence times? The usual clock speed, the usual bottlenecks that we see on quantum computers today, which effectively boil down to those three, right? Coherence times, number of qubits, and the lattice and shape of the chip, the type of computer, of course, and the error correction techniques that we have. You put all those in a bag and then you try to run your problem. Of course, for an industrial problem of the size companies run today, you will not be able to run it. So you start making the problem smaller and smaller and smaller until it becomes what I call a Mickey Mouse problem. And that is the problem that you will be able to run on a quantum device, gate-based (neutral atoms, trapped ions, you name it) or annealers. And you compare, right? You have your benchmark, again, a business benchmark: faster, cheaper, more accurate, that’s it. Turn those metrics into whatever makes sense in your specific problem and then evaluate. If, at the size of a small problem, none of those metrics wins in a comparison against the classical algorithm, you may be in trouble, right? And that for me is the starting conversation with any company that we worked with: can you solve this Mickey Mouse problem, this experimental problem? It’s not always like that 100%, because sometimes what happens is that when your problem grows, you have to simplify the classical part. You have to make assumptions, you have to simplify to remove variables, constraints and so on. The promise is that the quantum computer will be able to run those problems without all those constraints being eliminated from the problem. So you incorporate that into your model. And effectively what you’re asking yourself is: if I can run this today for this problem, and it makes me this much money because of the ROI of this product or platform, then when I have a working quantum device in three, five, 10, 20 years, what will I need? And then I can go to companies like you guys or any of the other quantum players, and I can say, look, I need a machine with these characteristics; when do you think you can give it to me? And of course, roadmaps are not that specific, for a purpose and for a reason. But that is how I think you convince your executive team and board that you still need sustained investment in your quantum team.
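The “Mickey Mouse problem” loop he describes can be written down in a few lines. The sketch below is a hypothetical harness rather than Moody’s code: it runs the classical and the quantum pipeline at each problem size and records the metrics that end up mattering to the business, runtime and accuracy (and, by extension, cost per run). The solver and reference callables are placeholders for whatever stack you are evaluating; no vendor SDK is assumed.

    import time

    def shrink_and_compare(problem_sizes, classical_solver, quantum_solver, reference):
        """Run both pipelines at each problem size and record runtime and error."""
        records = []
        for n in problem_sizes:
            row = {"size": n}
            for name, solver in (("classical", classical_solver),
                                 ("quantum", quantum_solver)):
                t0 = time.perf_counter()
                answer = solver(n)                                 # each solver takes a problem size
                row[f"{name}_seconds"] = time.perf_counter() - t0
                row[f"{name}_error"] = abs(answer - reference(n))  # accuracy vs. known truth
            records.append(row)
        return records

    # Usage sketch: start at toy ("Mickey Mouse") sizes that fit today's devices, grow
    # until one pipeline breaks, then extrapolate what hardware the full problem needs.
    # results = shrink_and_compare([8, 16, 32, 64], classical_pipeline, quantum_pipeline, exact_answer)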
Yuval: But that brings up a couple of different questions. I mean, first, some companies say, well, we don’t have a computer right now, but we’ve got this technology and lots of funding, and we’re building a computer with a gazillion qubits; right now we have zero, but in a couple of years we’ll have a gazillion. So the first question is, what do you do about those? The second question is scaling. All right, so I showed you a 20-qubit computer or a 50-qubit computer, and you decide that you need half a million qubits. How do you assess the scaling problem? Does the company have a feasible, credible strategy for scaling the machine?
Sergio: That’s a fantastic question. I don’t think you can always estimate that 100%. At some point you’re going to have to trust the team and their chops, their research capabilities. Funding is really, really important: can this company survive for the next X amount of years and raise the required capital to build these types of devices? I always relied on how accurate they have been with their promises and delivery so far. Is this a public company or a private one? That tends to affect the decisions a company makes, not only in quantum, in any industry. But at the end of the day you’re not marrying anybody. At this stage, companies, specifically in the enterprise, in the financial industry, for the most part, and I know this is not true for everyone, but for the most part, are not buying the device. You’re still experimenting and trying to do this kind of control and benchmark and capacity estimation so that you can still grow with that. And what happens, and it has happened continuously, is that you estimate you need, say, a million and five hundred thousand qubits, and such-and-such coherence times and so on. And then someone like Mr. Bacon comes in with a new error-correcting code, or the Alice & Bob guys come in with a new type of qubit, a cat qubit, or any other technique, right? This industry changes so quickly, even your own research on the qubit side, that anything you did six months ago has to be revisited continuously, right? So the only common denominator is the problem itself. So to answer your question, I think you rely on those factors: how capitalized the company is, how serious they are, have they been fulfilling their promises? But all in all, we don’t know yet which way this Betamax-versus-VHS war is going to go.
Yuval: For a company like Moody’s, and it doesn’t have to be specifically Moody’s, maybe you don’t know, maybe you can’t talk about that: if a computer existed that could actually solve these problems, do you need it on-premises or is cloud just fine?
Sergio: So I wouldn’t be able to talk about Moody’s anymore, but many, many companies are using computers in the cloud today, through some of the hyperscalers, continuously. And that requires these companies, whether it’s AWS, Google Cloud, or Azure, to have certain controls on data privacy and cybersecurity and so on, and liabilities, effectively. If these companies already rely on these providers, and it can go beyond that, right, we can be talking about Salesforce, we can be talking about any other third-party provider that is also in the cloud, why wouldn’t you use a computer in the cloud? Now, certain institutions, certain banks in certain regions, do require a lot of on-prem workloads, and they will not be able to go into the cloud. Even public institutions are now very heavy on on-shoring clouds, on being the owner of your own destiny. So those are the cases where companies will require certain on-prem capacity. Now here’s the trick. When you talk about classical compute, all the infrastructure that we have created, say you want to build an AI cluster with GPUs and storage and the like so you can run your own LLM: you are the owner of your own destiny, you control everything, you don’t rely on the prices that any of the hyperscalers change, or on access windows and so on. That’s fantastic, but all the work we have been putting into that infrastructure over the last decades assumes that we run on pure commodity hardware. You put together a cluster of a thousand nodes, five nodes die, and it doesn’t matter, because five other nodes take the load immediately. They are exactly the same. Now with quantum computers, it’s not like that. It’s everything but commodity hardware, right? It’s very, very specialized, very, very specific. So when you go into the data center, the first question that you will get is: what about the high availability and redundancy strategies that you have for your big data systems? And you don’t have that in quantum today, right? So when you’re thinking about bringing a quantum device on-prem today, it’s more on the experimentation side, so that you don’t rely on the cycles or time windows that the cloud providers give you. And it’s actually a pain, right? You want to launch a specific problem, an algorithm, and you have to wait three hours, which is not at all uncommon in today’s quantum world. So look, if you have the budget and you have a large enough quantum team, you just buy one for your premises, you keep all your data safe, you build your algorithms, you keep your IP safe, and you work with that. My take is that the vast majority of, say, Fortune 500 companies will not require on-prem quantum devices.
Yuval: You spoke about credit card transactions, and I think some people will say that’s probably not the ideal application for quantum computers: billions of transactions, super high speed. You could think about mortgage transactions, maybe fewer of them, or maybe not as time-sensitive. But if you survey the applications in the financial services world, what do you think would be the first ones that would benefit from quantum computing, both because of the requirements and also because we don’t have a good enough solution classically today?
Sergio: You’re hitting the nail on the head there. The vast majority of the canonical use cases fall into that very category. Could we do better on fraud transaction classification? Yes, but today we do pretty well. You mentioned mortgage credit calculation. That was actually the title of my thesis, calculating consumer credit risk for mortgages. The way you do that is by running Monte Carlo calculations over millions and millions of potential people defaulting, right, and all their correlations. What you see is that you can model more correlations on a quantum computer, because of entanglement and the way you run the algorithm, than on classical devices. The more correlations you put in, the more constraints you put in, the more simulations you have to run, and at some point you want to have your calculation in less than a week. So in that case it’s more about energy consumption, effectively the time you need to run that calculation. But then you hit one little problem: for the most part, credit risk, especially with Basel-like compliance regulations, needs explainable and interpretable algorithms. And quantum computing doesn’t yet have a similarly explainable model, whereas when you run your classical algorithm, you can explain everything on the long tail, everything on your probability distribution and so on. The canonical use case, the one that most people start with, especially with annealers, is on the combinatorial side: portfolio optimization. It’s a beautiful problem because it’s easy to grasp and it has the promise of making a lot of money. Imagine I can use a quantum computer to build for you the perfect portfolio of stocks or bonds or a combination of cryptocurrencies, you name it. That would be amazing, right? If a quantum computer could give me an edge just to rebalance my portfolios and maybe get a 1% improvement, or a 0.1%, that would make me a lot of money. The reality is that in the industry, rebalancing portfolios, especially in index funds, doesn’t happen that often. And it doesn’t take that much time, because most financial institutions really work at the cluster-of-assets level. So they run that in minutes. I remember once when I was talking to a potential partner, I was saying, look, we have this algorithm, this possibility, and they said, I don’t need that. I run that every quarter and it takes five minutes. So the reality is that in some cases the industry does not need it. I think about this like the circus elephant allegory, right? There’s this circus that has a baby elephant tied to a pole, and the elephant cannot leave the place because it’s tied to the pole. The years go by and now it’s a big elephant, still tied to the pole; it could leave any time and break the pole, but because it doesn’t know it can, it stays tied to it. So in that sense, many financial institutions are still running credit risk, combinatorial problems, fraud problems, like 20 years ago. They haven’t even reached the peak of what’s possible classically, let alone with advanced machine learning models or neural networks with clusters of GPUs and so on. I have met many financial institutions in the banking sector, insurance sector, asset management that wanted to go into quantum. But when I asked them, “What are you doing today with GPUs?”, they had nothing. Have you ever tried anything with FPGAs or any array system like that? They had nothing.
So the promise of quantum is beautiful, but there’s a lot of homework that you have to do in order to have a true benchmark against the top of the line. And that takes us back to your previous question. When someone claims quantum advantage, I always ask: quantum advantage against what? Against a toy model that you have created with a cherry-picked data set, or against the state of the art of what is really possible with a cluster of CPUs, or even HPC, traditional supercomputing power? And many companies that claim quantum advantage, when you ask that, go silent or hide their benchmarks.
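To make the canonical annealer use case Sergio mentions concrete: portfolio selection can be cast as a QUBO, picking K of N assets while trading expected return against risk, which is the form both annealers and classical heuristics accept. The sketch below is illustrative only; the toy returns, covariance, and penalty weights are assumptions, and the brute-force loop at the end is exactly the kind of trivial classical baseline he argues any quantum claim must be benchmarked against.

    import numpy as np

    def portfolio_qubo(mu, sigma, k, risk_aversion=0.5, penalty=10.0):
        """Q such that x^T Q x = -return + risk_aversion*risk + penalty*(sum(x) - k)^2 for binary x."""
        n = len(mu)
        Q = risk_aversion * sigma
        Q -= np.diag(mu)                      # reward expected return (we minimize)
        Q += penalty * np.ones((n, n))        # expand (sum_i x_i - k)^2 using x_i^2 = x_i
        Q -= np.diag(np.full(n, 2.0 * k * penalty))
        return Q

    rng = np.random.default_rng(0)
    n_assets = 6
    mu = rng.uniform(0.01, 0.10, size=n_assets)   # toy expected returns
    A = rng.normal(size=(n_assets, n_assets))
    sigma = A @ A.T / n_assets                    # toy positive semi-definite covariance
    Q = portfolio_qubo(mu, sigma, k=3)

    # Brute-force all 2^6 selections: the trivial classical check a quantum answer must beat.
    best_x, best_val = None, float("inf")
    for bits in range(2 ** n_assets):
        x = np.array([(bits >> i) & 1 for i in range(n_assets)], dtype=float)
        val = x @ Q @ x
        if val < best_val:
            best_x, best_val = x, val
    print("selected assets:", np.nonzero(best_x)[0], "objective:", round(best_val, 4))

A real study would hand the same Q matrix to an annealer or a gate-based heuristic and compare both pipelines on the same faster, cheaper, more accurate metrics discussed above.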
Yuval: I wanted to ask you about education. I think that at Moody’s you and other people on your team spent a lot of time educating some of your customers and partners about quantum. So the first question is: why did you do that? And the second is: how much time do you have to spend educating internally? And if you have limited resources, who would you focus on for the internal education?
Sergio: Right, I think we focused a lot on both the internal and the external education. Quantum is a nascent technology, right? And it will take years for it to become mainstream, even years after we get true quantum advantage in a handful of use cases. So getting people to understand, not necessarily how it works, you don’t necessarily need to teach what entanglement and interference are and how you run Rabi oscillations to create quantum gates, you don’t need all that, but you definitely need to teach people where exactly to use this and why they should care, specifically for their use cases. At Moody’s we found something really interesting. A big part of the company are quants, or what we call quantitative analysts: effectively, people doing math to build and run and calculate mathematical models, mainly for credit risk, or economic scenarios and so on. And as it happens, most of these people are either mathematicians or physicists, maybe trained in the 80s and 90s, because in the 80s that was the way of making money, right? Going into finance. And when we created the quantum team, many of them came to me and said, “Hey, are you telling me that this is real now? This was my thesis in the 90s. Do quantum computers exist?” And we were like, “Yes, they exist. Here, let me give you some books or some training or some education.” So the only way for people to understand the value of such a technology, and by the way, the same applies to generative AI, there’s nothing specific to quantum here, is to spend a lot of time communicating and convincing people that there is something here. Quantum is especially difficult because it is kind of esoteric to some extent and requires a bit more proactive understanding. So getting people who are good at communicating, at linking the use cases, at finding the potential future ROI, is fundamental. And thanks to that, we actually convinced many banks and financial institutions to start investing in and researching quantum, because they saw the value of that with or without quantum scientists. So yes, we spent a lot of time, and I would do it again, because that is how you then bring domain experts into the field. And one of the things that we did pretty well was that we had experts in specific areas of credit risk, domain expertise and so on; we had the quantum experts on our team; and then we would partner with an external client to build something jointly. You need the domain knowledge. The same happens in pharma, for example, or in logistics: without the knowledge of how the problem is actually used in the field, you’re only playing with mathematics that are fantastic and interesting, but they’re not going to get you an ROI.
Yuval: Do you think there are too many quantum companies? I mean, you read the news and, oh, yet another superconducting computer is born. Or conversely, do you think there are areas that are ripe for new companies and new ideas in the quantum field?
Sergio: I think there should be many more quantum companies. The more people there are, the more and the better we evolve. There are many ideas to be researched. There are many gaps in our knowledge still. Many, many opportunities. I think that as we move into the future, companies will start finding specific niches, like specific domains or, in the case of hardware, a specific focus on one type of hardware. And some would argue that we are already at that stage of consolidation today. We have already started seeing some of the first acquisitions and mergers and so on, which I think is beautiful. We’re in the second or third era of quantum startups, depending on how you look at it. But it’s a normal cycle. And the quantum industry sometimes is a little bit endogamic. Now that I’m slightly outside, I can say that. We like to think that we are huge and doing a lot of stuff and all that. But compare the size of the quantum industry with any other industry; AI, obviously, is huge. We’re all seeing the sizes of the funding rounds that tiny AI companies are getting today, with no assets, no nothing. And then quantum companies have been building actual devices that you can touch, with very expensive equipment, for a fraction of that money. So there’s still a lot of work to do, both in the private sector and in the investment sector. Different countries are investing more than ever, and that’s also really good news. So the money keeps flowing. We need to make sure that the money starts flowing from the private sector as well, rather than subsidies, support, grants, and so on.
Yuval: You’re familiar with the B-2 plane, right? The B-2 bomber. I think 20 of those have been built, maybe 22. And I think the reason was that originally someone, the U.S. government, ordered 70 of these, then they found out it’s $2 billion a piece, and so they had to cut down. On the flip side, a 777 is 300 million, an order of magnitude less, and of course there are cheaper planes than that. Do you think it matters? I mean, we see quantum computing companies that say, yeah, we’re going to reach a million qubits, but it’s going to take a football field, 100 megawatts of power and a billion dollars. Does it matter, or if it really works, then it doesn’t matter and people will use it anyway?
Sergio: I mean, think about it. If you had a cryptographically relevant quantum device (and the cryptographically relevant part is important, though we would probably be protected by then), the amount of problems that you can solve with that on the proactive side, on the competitive-advantage side, is massive. So many companies would pay a billion dollars and much more for that device. The value that it brings to the business is huge. Now, the question is, the moment every company has one of those devices, obviously the value goes down. So there’s a commoditization piece that is relevant. I will not dare to say how many quantum computers will exist in the world. I think that claim was made for classical computers in the past, and it did not go well. I think there’s one difference, though. Quantum devices are not generic devices for absolutely everything, like generative AI is, for example. Right, so you use ChatGPT, you can use it for absolutely everything. A quantum device, you will use for solving a specific problem. And even some of those big data centers, whether it’s photonic data centers and all that we see today, they’re still niche, right? Sure, they are universal computers, but they’re still going to be niche, for very specific data centers, for very specific use cases. So I imagine it more like HPC, really. Every country has their handful of supercomputer data centers that will be hybridized with quantum devices and other things. Maybe at some point we’ll have a little quantum chip in our phones and laptops for very specific computations, but the big ones, the ones with millions and millions of qubits, and at some point we will need billions: we think today about billions of qubits in the same way that we thought about megabytes in the past. I’m sure you have seen this picture of an IBM hard disk that was a megabyte being moved into an office, right? People were taking it off a truck and putting it in an office, and that was one megabyte. I think the way we talk about the million-qubit goal is the same. At some point, in a few years, people will look back and say, “What are you talking about? A million qubits? We have, you know, tera-qubits or something like that.”
Yuval: Forgive me for mixing metaphors here, but now that the Quantum Pirate is sailing away from Fantasy Island, what advice do you have for those who remain on the island?
Sergio: Well, I still intend to keep the newsletter and keep sending it; it’s my way of staying in the industry. I love quantum computing. I think at some point, even at Cloudera, we will have something to say about that. Not yet, it’s not yet the right moment. But at Cloudera, we’re a big data platform and we link data centers and cloud providers and so on, so there’s a lot to say in that area. So for me, the Quantum Pirate newsletter is number one, my way of staying in the industry, of staying connected with people like you and the like, right? And of making sure that I am still inside this little family, because it is a little family. We’re not that many people working on this. Growing, but still a handful. The same ethos that I had when I created the newsletter still remains. There is even more hype and smoke and mirrors and bullshit than ever before. And that makes sense: the more a specific technology becomes mainstream, the more people come into the field and talk without any specific knowledge. And don’t get me wrong, I am not a scientist, I’m not a physicist, even though I did my master’s and so on. But the community keeps growing, and I keep learning from it, and learning from my readers, and my readers keep learning from me. And I think it’s a fundamental part of what everyone needs to do. You need to keep reading continuously, with a real balance between science and research, industry and applications. If you focus only on one of those things, you’re going to be missing out on a lot of what the world has to offer. So yeah, like and subscribe.
Yuval: And lastly, you know it’s coming, and I forget what you answered the last couple of times I asked you. Hypothetically, if you could have dinner with one of the quantum greats, dead or alive, who would that be?
Sergio: So my guess, I would have to go back and check who I said, but probably I said Feynman, right? He has been my longtime hero for his combination of physics and music, arguably. I’m not sure if I would change it now. I’ve become more and more of a fan of von Neumann lately, well, lately and forever. I think he was up there together with some other big ones like Newton and so on, working on so many different areas of knowledge: information, mathematics, physics, electronics, everything. Saying that the man was a genius is an understatement, because a genius is someone who excels from a knowledge perspective at something, but he excelled at absolutely everything. So yeah, I would have a hard time, but I would probably say von Neumann for this year.
Yuval: Sergio, thank you so much for joining me today.
Sergio: Thanks for having me again, and thank you everyone for listening again to this, the Quantum Pirate.