Charles Roques-Carmes, a Science Fellow at Stanford University, is interviewed by Yuval Boger. Charles discusses his work on using optical parametric oscillators as a form of random number generator with controllable bias. He elaborates on the potential applications of this technology in trainable randomness for Bayesian neural networks and logistics planning, previews the next steps for this research, and much more.

## Full Transcript

**Yuval Boger:** Hello Charles, and thank you for joining me today.

**Charles Roques-Carmes:** Hello Yuval.

**Yuval:** So who are you and what do you do?

**Charles:** So, my name is Charles Roques-Carmes, and I’m currently a Science Fellow at Stanford University in the Department of Applied Physics. I did my Ph.D. at MIT, which I finished in 2022, and I’ve been studying various aspects of light-matter interaction, quantum optics, and what we call nanophotonics, which is essentially nanoscience applied to the control of light, so we make tiny structures that can control the flow of light incident on them.

**Yuval:** How does that connect to quantum or to quantum computing for instance?

**Charles:** So I think the most recent thing we’ve been doing that’s related to quantum optics actually uses bulk optics, so fairly conventional and old-fashioned optics, in an optical parametric oscillator. That’s essentially a nonlinear optical device that converts one frequency into another. What we discovered is that it can be used as a random number generator. I would not call this quantum computing, because it’s still very classical. It can be used for some forms of photonic computing, but to the best of my knowledge, there is no demonstration of any quantum advantage with those. People have certainly used quantum systems for random number generation, right, and the assumption is that the distribution is uniform and therefore you can get true random numbers; it’s not like a seeded pseudorandom generator, where you have to bake in the seed and hope that you get something that’s sufficiently random.

**Yuval:** So are you able to control the distribution of the randomness? What is the difference sort of before and after the work that you’ve been doing?

**Charles:** Yeah, we are. I would say that’s the main thing we’ve demonstrated. Now we’re working on a lot more directions that are enabled by this original demonstration, but that’s the main thing we’ve discovered. So essentially, we saw that if you add what we call a small bias field, essentially on the order of magnitude of the quantum fluctuations of the system, you can slightly nudge the system to go towards one state or another. So if you think of a binary outcome, essentially you can push the system to go a little bit more often towards zero or a little bit more often towards one, depending on the phase of that bias. And so yeah, you’re right in saying that quantum random number generators have been around for a long time. There’s an entire zoo of them. They use different forms of quantum noise, and some of them have different advantages. I would say that the one we’re building upon, which is optical parametric oscillators, is fairly robust because even though they’re using quantum noise, the signal that encodes the information is macroscopic. It’s the steady state of a bulk optical system, it can be microwatts or more, so it’s very easy to measure. You don’t need to measure vacuum-level signals for this.
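*[Editor’s note: a toy numerical sketch of the idea Charles describes — not the actual OPO dynamics. Here the “quantum fluctuations” are modeled as unit Gaussian noise, and a small coherent `bias` of comparable magnitude tilts which binary state the draw settles into. All names and parameters are illustrative.]*

```python
import random

def biased_bit(bias: float, rng: random.Random) -> int:
    """One binary draw: noise plus a small bias field.

    The sign of (noise + bias) selects the 0 or 1 state, loosely
    mimicking how a small coherent seed nudges which of two states
    a bistable system settles into.
    """
    noise = rng.gauss(0.0, 1.0)  # fluctuations, std-dev 1
    return 1 if noise + bias > 0 else 0

def p_one(bias: float, n: int = 100_000, seed: int = 0) -> float:
    """Empirical probability of drawing 1 for a given bias."""
    rng = random.Random(seed)
    return sum(biased_bit(bias, rng) for _ in range(n)) / n

# No bias: ~50/50. A bias of a fraction of the noise amplitude
# nudges the distribution, following the Gaussian CDF: P(1) = Phi(bias).
print(p_one(0.0))  # ≈ 0.5
print(p_one(0.3))  # ≈ 0.62
```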

**Yuval:** Why would I want to nudge the distribution more towards one or more towards zero? And related to that, what level of control do I have? I mean, could I basically say, oh, I want this particular distribution, and I can sort of select it? Or is it just introducing a little bit of bias?

**Charles:** So that’s a great question, and you’re not the first one to ask. So you can think of many applications in computer science where you need trainable randomness. And perhaps the simplest example is what some people call Bayesian neural networks. So for instance, you want to have some metric of uncertainty when you do classification, and that can be done by training a neural network that doesn’t have fixed weights, but rather weights that are stochastic random variables. So they are encoded into some probability distribution. In the simplest case, you have a bunch of Gaussian weights, and you want to train their means and variances, or the covariance matrix, of that neural network in order to learn both the mean outcome, which is going to be “this picture is a cat or a dog,” and some amount of certainty or uncertainty on whether it’s a cat or a dog. And to finish with that example, let’s say you’re training a model that’s only seeing cat and dog pictures and now you’re sending the picture of an elephant. What a human would say is that it’s probably neither, but what a deterministic neural network is going to do is say it’s one of those two. But if you have uncertainty that’s baked in, it can say, well, it’s probably a cat or a dog, but I have complete uncertainty on the outcome. So it could be either, which means that probably you should check that new example.
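*[Editor’s note: a minimal sketch of the stochastic-weight idea, reduced to a single Gaussian weight. `mu` and `sigma` stand in for the trainable mean and variance parameters Charles mentions; repeated forward passes sample fresh weights, and the spread of the outputs is the uncertainty estimate. Everything here is a hypothetical toy, not the actual trained model.]*

```python
import random
import statistics

def forward(x: float, mu: float, sigma: float, rng: random.Random) -> float:
    """One stochastic forward pass with a single Gaussian weight.

    The weight is resampled each call (w = mu + sigma * eps), so the
    model's prediction is itself a random variable.
    """
    w = rng.gauss(mu, sigma)
    return w * x

rng = random.Random(42)
outputs = [forward(2.0, mu=1.5, sigma=0.4, rng=rng) for _ in range(10_000)]

# Mean of the outputs is the "mean outcome"; their spread is the
# model's uncertainty about that outcome.
print(statistics.mean(outputs))   # ≈ 3.0
print(statistics.stdev(outputs))  # ≈ 0.8
```

A hardware source of trainable randomness would replace the `rng.gauss` call: instead of sampling in software, each forward pass would read a physical random draw whose distribution has been tuned by the bias field.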

**Yuval:** And doing better randomness or tilted randomness would help me realize that an elephant is neither a cat nor a dog?

**Charles:** Well, there are examples that would be a bit more important than those. For instance, when you want to do planning and logistics, you want to predict likely outcomes of some scenarios also with neural networks. But in all of those cases, you need some reconfigurable or trainable probability distribution, which is the building block that we have demonstrated with quantum optics.

**Yuval:** How much control do I have over the random number generator or how close do I need to be to it? In other words, could I maliciously tilt a third-party random number generator if I knew how it was working inside from a distance?

**Charles:** Yeah, you probably could. But I guess we would have a way to detect it. So for instance, if you configure your random number generator to be exactly 50/50 and then you measure it and it’s not 50/50, you know that there is someone sending another bias signal that’s biasing the random outcome of the OPO. So yeah, in principle, you could definitely hack this type of system by sending essentially another external bias. The thing is that for our system, it’s not so easy because you need to be in sync in time, you need to be phase-locked. So for our case, we actually had to work quite a bit to build the bias signal that would bias our probability distribution. So it’s not like you can just come with a flashlight and skew the random number generator.
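*[Editor’s note: the detection check Charles describes amounts to a standard frequency test. A sketch, with an illustrative z-score threshold: under a fair 50/50 generator the count of ones over n draws is approximately N(n/2, n/4), so a large deviation suggests an injected bias.]*

```python
import math

def suspicious(n_ones: int, n_total: int, z_threshold: float = 4.0) -> bool:
    """Flag a nominally-50/50 generator whose output drifts too far.

    For fair draws, n_ones ~ N(n/2, n/4); a z-score well beyond the
    threshold suggests an external signal is biasing the outcomes.
    """
    expected = n_total / 2
    std = math.sqrt(n_total) / 2
    z = (n_ones - expected) / std
    return abs(z) > z_threshold

print(suspicious(50_300, 100_000))  # z ≈ 1.9, within statistical drift → False
print(suspicious(52_000, 100_000))  # z ≈ 12.6, far outside it → True
```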

**Yuval:** How did this research come about? Did you set out to create biased random number generators or did you start from a completely different place?

**Charles:** We started from a fairly different place. So I would say in the first few years of my PhD, I happened to work on integrated photonics for neuromorphic computing and combinatorial optimization. So my PhD advisor, Marin Soljačić, had this paper in Nature Photonics in 2017, I think, which demonstrated deep neural networks with integrated photonic circuits. Back then, and even today, people were very excited about it, and we started to think of possible other applications of those integrated photonic systems. And one that we came up with back then was combinatorial optimization. So typically something that people in physics study in the form of Ising problems, because they’re nice toy models for combinatorial optimization; they’re NP-hard. So in principle, they’re as hard as it gets. And if you can solve those, in principle you can solve most other NP-hard problems efficiently. So we found a way to do this with integrated photonics. One of the interesting things we saw is that since it was a heuristic algorithm, essentially, we needed to have some noise on the chip for this algorithm to work. Otherwise, we would get stuck in local minima. And well, back then we thought we would have to add the noise artificially, but we realized that there was always some noise on the chip that would help us. Usually, noise in physics is not something you work with, or how to put it, it’s always there, whether you like it or not. So we realized that and we thought, oh, okay, maybe there is something a bit deeper here. Is there a way to control that noise, and is there a way in general to use noise as a computational resource? Because that’s already what was happening in that experiment, but just in a fairly unpredictable way, and that’s not something we had planned for originally. So that’s where the original idea came from, and as you can tell, we’re not doing integrated photonics for this, and we’re using a very, very different system.
So a lot of work was put into finding the ideal system to demonstrate that original idea. We went for optical parametric oscillators because there’s a lot of work from the photonic community in combinatorial optimization in making Ising machines with coupled networks of OPOs. There’s work by Yoshihisa Yamamoto from Stanford, Alireza Marandi, and many of their colleagues, such as Peter McMahon from Cornell, and Hideo Mabuchi from Stanford. And so we were really inspired by all of this work, especially because there were also demonstrations of using them as random number generators.
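*[Editor’s note: a toy sketch of the mechanism Charles describes — noise letting a heuristic Ising solver escape local minima. The 4-spin frustrated ring and its couplings `J` are hypothetical; the acceptance rule is the standard Metropolis criterion, with the noise level playing the role of temperature. On larger instances, purely downhill flips get trapped; the noisy acceptance keeps the search exploring.]*

```python
import math
import random

# Hypothetical frustrated 4-spin ring: three ferromagnetic bonds and
# one antiferromagnetic bond, so not all bonds can be satisfied at once.
J = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): -1.0}

def energy(s):
    """Ising energy E(s) = -sum_ij J_ij s_i s_j over the coupled pairs."""
    return -sum(j * s[a] * s[b] for (a, b), j in J.items())

def solve(noise: float, steps: int = 2000, seed: int = 1):
    """Noisy single-spin-flip search; returns the best energy found."""
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(4)]
    e = energy(s)
    best = e
    for _ in range(steps):
        i = rng.randrange(4)
        s[i] = -s[i]               # propose flipping one spin
        d_e = energy(s) - e
        # Accept downhill moves always; accept uphill moves with a
        # noise-dependent probability so the search can escape traps.
        if d_e <= 0 or (noise > 0 and rng.random() < math.exp(-d_e / noise)):
            e += d_e
            best = min(best, e)
        else:
            s[i] = -s[i]           # reject: undo the flip
    return best

print(solve(noise=0.5))  # -2.0, the ground-state energy of this ring
```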

**Yuval:** Could you control the amount of noise to create the equivalent of an annealing schedule?

**Charles:** Yeah, so in principle you can. There are different ways to control the intrinsic amount of noise in a system. So for instance in quantum optics, one way to do this is to control the effective volume of the mode that you’re looking at. So that would be a way to control the baseline for the amount of noise you need to put in, and for instance to define an effective temperature, which is what people use for typical annealing schedules in combinatorial optimization as well.
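*[Editor’s note: a minimal sketch of what such an annealing schedule looks like in software. The tunable noise level plays the role of an effective temperature ramped down geometrically, as in standard simulated annealing; all names and values are illustrative.]*

```python
def schedule(t_start: float = 1.0, t_end: float = 0.01, steps: int = 10):
    """Yield a geometrically decaying effective temperature.

    Each step multiplies the temperature by a fixed ratio chosen so the
    schedule runs from t_start down to t_end in the given number of steps.
    """
    ratio = (t_end / t_start) ** (1 / (steps - 1))
    t = t_start
    for _ in range(steps):
        yield t
        t *= ratio

temps = list(schedule())
print(temps[0])   # 1.0  (hot: lots of noise, broad exploration)
print(temps[-1])  # ≈ 0.01 (cold: noise nearly frozen out)
```

In a hardware annealer, each temperature in this list would correspond to a setting of the physical noise baseline rather than a software parameter.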

**Yuval:** What’s the next step for this research? Does it end with this paper or is there a commercial application that you’re working on or someone is working to implement this research?

**Charles:** So I would say overall there’s quite some work on photonic computing at the moment, not necessarily probabilistic stuff, but there are a few companies that are working on integrated photonics for computing. There’s one that came out of my group quite a few years ago, and they have quite a few competitors on the market. So I would say there are already quite a few players in that field in terms of markets. In terms of research, what we’re currently pursuing is, well, first of all, there will be some follow-ups, because it took us quite some time to build that experiment, so we want to use it for other things. But one of the things we’re excited about is to integrate this photonic probabilistic bit into a computing system. So that’s what we’re doing at the moment, and we want to do some proof of concept for Bayesian neural networks and other probabilistic neural network models. In terms of more fundamental research, we think it’s an interesting platform to look at how very small coherent fields can drive complicated dynamics in those dissipative systems and nonlinear quantum optics. And so we have a few ideas along these lines, for instance, doing a tomographic reconstruction of certain statistics of the quantum fields. And so that’s another area we’re working on at the moment.

**Yuval:** As we come to a close of our conversation today, I wanted to ask you a hypothetical. If you could have dinner with one of the quantum greats, dead or alive, or maybe even one of the photonics greats, dead or alive, who would that person be?

**Charles:** I think I would go for Richard Feynman because, I mean, of course he was a great physicist, but he also seemed to be a great person to have a fun dinner with.

**Yuval:** Wonderful. Well, Charles, thank you so much for joining me today.

**Charles:** Thank you very much for having me.