Summary
In this week’s episode, Anna chats with Jens Groth and Daniel Marin from Nexus. They catch up on all things Groth16 with the author himself before diving into a variety of topics, such as formal verification in the context of ZKPs, the Nexus architecture, the benefits and challenges of building a system from the ground up, folding and IVC plus the properties these offer in a zkVM context, and much more.
Here are some additional links for this episode:
- ZKProof Conference in Berlin
- Nova: Recursive Zero-Knowledge Arguments from Folding Schemes by Kothapalli, Setty, and Tzialla
- Nexus zkVM
- Episode 284: Using Formal Verification on ZK Systems with Jon Stephens
- Jens Groth Publication List
- Nexus Docs
- Nexus 1.0 Machine
- Enabling General-Purpose Verifiable Computing | Daniel Marin (Oct 2023) on YouTube
- Nexus 2.0
- SETI@home
zkSummit12 is happening in Lisbon on Oct 8th! Applications to speak or attend are now open at zksummit.com, speaker applications close TODAY (Aug 14th) and early bird tickets for attendance are limited!
Launching soon, Namada is a proof-of-stake L1 blockchain focused on multichain, asset-agnostic privacy, via a unified shielded set. Namada is natively interoperable with fast-finality chains via IBC, and with Ethereum using a trust-minimized bridge.
Follow Namada on Twitter @namada for more information and join the community on Discord.
Aleo is a new Layer-1 blockchain that achieves the programmability of Ethereum, the privacy of Zcash, and the scalability of a rollup.
As Aleo is gearing up for their mainnet launch in Q1, this is an invitation to be part of a transformational ZK journey.
Dive deeper and discover more about Aleo at http://aleo.org/.
If you like what we do:
- Find all our links here! @ZeroKnowledge | Linktree
- Subscribe to our podcast newsletter
- Follow us on Twitter @zeroknowledgefm
- Join us on Telegram
- Catch us on YouTube
Transcript
Anna Rose:
Today I'm here with Jens Groth and Daniel Marin, both from Nexus. Welcome to the show.
Daniel Marin:
Thank you, Anna.
Jens Groth:
Hi, very nice to be here.
Anna Rose:
So I want to start with you, Jens. I want to do a bit of a catch up, folks might be familiar with you. Your last name Groth, or Groth, I'm assuming will be very familiar to a lot of our listeners. And yes, you are indeed the author of the Groth16 proving system. When we did our initial episode many episodes ago, we actually spoke in depth about the Groth16 system and we talked about what happened leading up to that, as well as what happened after. But I think it might be fun to catch up with you on that topic. I'm kind of wondering, are you tracking all of the implementations of Groth16? Yeah, is there anything maybe new you can share?
Jens Groth:
Yeah, so I'm not following all the implementations of Groth16. I focus more on the science side than the implementation side. But we were recently at the ZKProof Conference in Berlin, and ZKProof is this standardization process where we're trying to standardize zero-knowledge proofs. And as part of that there's now an effort to standardize Groth16. So I think that's very exciting. That's the first standard that ZKProof wants to get going.
Anna Rose:
What would that mean to standardize it? Is it to choose one iteration or is it to make it a standard itself? Like just the research part.
Jens Groth:
So I think it's still open what exactly it means to standardize. That's sort of like also part of the standardization process to decide. But the goal is to be very clear about what is a Groth16 verification, and also set this standard so people can adopt it in good faith, they know they're getting the right thing. Right? So the standard is a lot about confidence building in zero-knowledge proofs.
Daniel Marin:
Yeah. Also formal verification of the verifier to know that exists. Yeah, that was one of the efforts.
Anna Rose:
Cool. Other than the Groth16 world, what has happened for you since we last spoke?
Jens Groth:
...to Nexus. So I joined in December...
Anna Rose:
Nice. And what are you working on there? Are you creating a new Groth16 like proving system? Are you working on a different part of the research? Yeah, what kind of thing are you doing?
Jens Groth:
Yeah, so the goal really is to work on zero-knowledge proofs, and that's why I joined Nexus because I really wanted to work on zero-knowledge proofs. And we are building a zkVM, which we hope to have working at really massive scale so we can prove a trillion CPU cycles per second. So it's basically getting all of the technologies together. And of course a lot has happened in the space in recent years. It's really moving very fast. So it's about picking a lot of different techniques and putting it all together and choosing some points in the design space that we hope will get us there to this very high throughput.
Anna Rose:
That's cool. I want to come back to exactly the types of thing you're working on when we talk a bit more about Nexus and the architecture there. But Daniel, welcome to the show for the first time. Tell us a bit about what you were working on before this and what your role is at Nexus.
Daniel Marin:
Before that, I'll just say that probably when you asked Jens if he was working on a new Groth16, you wanted to hear like, oh yes, we're working on Groth24 or something like that.
Anna Rose:
Maybe.
Daniel Marin:
But interestingly, it's not the case. But maybe in the future, maybe in the future. I actually do track very closely all of the implementations of Groth16, but that's something else we can discuss later.
Anna Rose:
Oh, cool.
Daniel Marin:
Yeah, so a little bit about my background. I studied cryptography at Stanford, was part of the Stanford Blockchain Club, and, well, essentially started Nexus while being at school. And the goal is just to provide a hyperscale prover for verifiable computation. That's it. To enable verifiable computation and bring it to life in a really sort of large scale, really professional sort of fashion. But you can consider it just verifiable cloud computing. And our approach is to essentially combine scientific efforts and breakthroughs from mathematics, cryptography and engineering, as well as crypto-economic efforts, which we can discuss later as well, to really build this super powerful machine, which we call the Nexus zkVM.
Anna Rose:
What was it originally? Was that the original plan when you first created this project, or did it have a different focus?
Daniel Marin:
Well, it started about two years ago, but more formally about a year and a half. And the goal was to enable verifiable computation. But we were in fact taking a different approach for a short period of the company's life, which was essentially crypto-economic security, actually a topic that is pretty popular nowadays. And the idea there was to use something like WebAssembly, as well as staking mechanisms, to provide a stateless off-chain computational system that would allow smart contracts to outsource compute to economically incentivized off-chain agents.
Anna Rose:
Oh, do you put this in the category of ZK coprocessors? Was it more like ZK coprocessors and now it's changed?
Daniel Marin:
So we usually don't call it like ‘coprocessor’. So usually a ZK coprocessor is like you use some general purpose verifiable computation system to outsource compute from a smart contract, but it's only one application of verifiable computation. Right? Our approach has always been fully general, general-purpose from day one. And so our goal is to allow developers to build things like ZK coprocessors of different flavors, actually, and that is the primary goal.
Anna Rose:
When you first started, though, when you were talking about this cryptographic security, did you have a more narrow focus then?
Daniel Marin:
...paper, which was published in...
Anna Rose:
I feel like I hear the term verifiable compute a lot, or verifiable computation, but can you walk us through what that actually is?
Daniel Marin:
Yeah, absolutely. So I think when people mention verifiable computation, I actually think about a traditional cloud computing service, just that it comes with a proof. That's simple.
Anna Rose:
Okay. And what does the proof represent, though? That is that instead of the cloud? Is it instead of a third party kind of guaranteeing it, or is it just -- it never was guaranteed and now it can be? It can be verifiable.
Daniel Marin:
Well, it was never a guarantee. Right? Now the proof is something that provides you as a consumer a guarantee, but that you can also share with anyone in order to convince them about something. It's always been sort of like part of our philosophy that developers shouldn't know whether they are computing in a traditional environment or in a verifiable environment, and the user experience should be exactly the same. And the Nexus zkVM was designed exactly with this user experience in mind. It's like you shouldn't know that this is a verifiable machine, that this is a verifiable world. It should be essentially taken for granted. Of course that's not the case, because performance, right? But that's exactly what we're working on.
Anna Rose:
Let's define that verifiable here. I'm trying to picture a use case of this. A developer creates a Rust application. What kind of application would need to be verified or verifiable, or what element of the application?
Daniel Marin:
So, yeah. So first of all, verifiability is about verifiability of the execution, and we verify that the output actually corresponds to running the program on a given input. This differs from formal verification, where verification is static. Here it's a dynamic process; it happens after execution.
Anna Rose:
Okay.
Daniel Marin:
Essentially, you're trying to convince someone that some algorithm being computed is being computed correctly. Or more abstractly, you're trying to convince someone that something is true. And so there are obviously a lot of examples in the blockchain space. We also find it super interesting to look at what the possibilities are outside of the blockchain space, even though the market is less developed. But answering your question, just a simple thing that you can prove is, for example, properties about your Google account without revealing your password. For example, I could prove that I have 100 emails from Jens sent last night to me without giving you...
Anna Rose:
What happened?
Daniel Marin:
Maybe he was too worried about the ZK Podcast. What are we going to talk about? So, identity, high security computing, of course, all of private computation falls into a realm of verifiable computation. There is also the field of re-execution. So this is quite an interesting property that has never existed in the history of humanity, right? Which is the fact that if you compute a proof for some computation, then no one in the whole future of humanity has to rerun that computation, because someone already spent a lot of resources in producing this proof.
Jens Groth:
To add on this, right? So there has been this notion of double execution. Right? If you have to have high confidence in a computation, well, how would you build that high confidence? You would execute it on multiple machines, maybe with different pieces of software developed by different teams of developers and so forth. Right? And you would then compare those outputs and see do they match. In that case, you would believe the computation. And it's really cool that -- even though it's much more expensive to just do a double computation, but it's really cool that you can now create a proof, and now everybody has a guarantee that this is the right result and you don't need to worry about was there some random bug during the execution or whatever that otherwise could happen.
And then to add on that, if you just do two executions, then you are not able to go to anybody else and convince them that this was the correct result. But now when you have a proof that's verifiable by anybody, then you can take this piece of data and you can go around and show, here is a proof that it's correctly computed and everybody will believe that data. So you can send it around to other systems and can compute on this data and trust the data because it has a proof.
Daniel Marin:
And everybody in the future as well. So the proof is theoretically valid for an infinite amount of time. And that's why actually going back to our conversation of standards, standards are important because if we all agree on a standard for a final, let's say, proof system like Groth16, then my Groth16 proof hopefully will also convince people living 100 years into the future.
Anna Rose:
So, like, in this case, you can create a proof, but then people need to trust that the proving system you used is correct -- is working correctly. And you're saying if there's a standard around Groth16 or something, and that is sort of an accepted strong proving system that people accept as correct, then they wouldn't also have to audit the proving system because I guess otherwise they would potentially.
Daniel Marin:
Exactly, exactly. No, you're totally right. And I think this is something that people don't talk about too often, which is like, people talk about proofs and proving systems, but no one talks about how do we convince ourselves that any of this makes sense, right? Like you can tell me this is a proof, here's a million bytes of data, and the proof system is, I don't know, like Halo2, so to say. Right? Well, I don't know if the verifier that I'm using is correct, if Halo2 is sound, if the prover for Halo2 is not cheating. So, I mean, it just doesn't mean to me anything. So hopefully formal verification can help us with this, as well as more concrete estimates of soundness and completeness properties of proof systems. But just like this is a social problem, how do I convince to you that this proof actually proves what I'm trying to prove? It's very hard.
Anna Rose:
And so far with Groth16, the proof of that basically comes because a lot of teams have audited and adopted it. It's sort of like they checked it, they implemented it, those implementations, we assume, have been extremely carefully checked and battle tested, and then a lot of people use them. So they've made lots of -- kind of there's lots of exposure of that system. So if it was attackable the assumption goes, someone would have already done it. I always find that so odd that that's how we get our trust in a weird way. It's just sort of like it hasn't broken yet, therefore we believe.
Jens Groth:
I mean, in some sense, this is how the field of cryptography works, right?
Anna Rose:
Yeah.
Jens Groth:
We built our systems based on some assumptions about cryptographic primitives, that the discrete logarithm problem is hard, or you cannot break this lattice assumption or something like that. And the basis for that trust is essentially, people have thought hard about those problems and not found a way to attack them. So there's a social process in getting to the trust. Right? We don't have a process by which we can get trust that does not rely at all on human effort. So I do think this is something that is part of the system. And it's interesting, actually, when people talk about proof systems, they say, well, you get mathematical guarantees of truth and so forth. And that is true. You've sort of made some part of the trust process mathematical, but you still do go back to some underlying assumptions, and you do need to have some human trust, especially if you're not an expert in the field. Right? You sort of have to take some cryptographers at the word that you know, yes, this is a secure proof system, this is a secure assumption to rely on.
Daniel Marin:
So this is actually part of, like our company strategy, so to say, as well, which is like, we recognize that a lot of this math is pretty unparsable for a lot of people. And so a lot of trust in a proof system and its implementation, being sound -- you know complete and sound and being secure, comes down to social proof. And for that, it's important that the people developing it are, well, world experts across multiple fields, whether that's compilers, cryptography, obviously, to other realms, like formal verification. Because otherwise, how will you as a user know that you're buying into a system that actually provides security, which is the core product of a system like verifiable computation?
If the people that are developing are not cryptographers, or at least don't have expertise on this, how do you convince yourself? My hope is that there will be a way in the future in which trust in experts, such as the people that we have in our team, is not fully necessary because of new advances as well, and formal verification, which can provide undebatable proof of soundness. But that is too futuristic nowadays. Today we just have to trust that the people that implemented a proof system didn't miss a constraint here and there. And that's pretty hard to do.
Anna Rose:
I feel like we should define formal verification. I've done two episodes on this at least. I know it's come up a lot, but can you explain what that actually is? Because it does seem like it's a step further than just the social. It's not just building something, putting it out into the wild, seeing if someone can break it. It's actually almost like reverse engineering, creating almost like a mirror project that just makes sure that this executes the way it's supposed to. Yeah, but let's explore that a little bit.
Daniel Marin:
Yeah. I'll just say that I was recently seeing -- there was a talk that Vitalik gave at the Google event, and he was saying, we want to move out from just the ‘trust me bro’ scenario. And that's exactly what I'm trying to say. A lot of the systems are like, just trust me bro. This is sound. I've written all the constraints that I've promised to write. The verifier is correct. No one actually has the time or knowledge to go and check it out themselves. Like I could build a prover today and say that it runs at a billion hertz. It's going to be really hard for anyone to contradict that claim because they would need to find a soundness exploit. And so the hope is, with formal verification, that that process can actually become automated and that it doesn't rely on social proof.
Jens Groth:
I think there are several layers of formal verification. So if you look at papers in cryptography, they will often come with a security theorem, there will be a written proof, and this is something that's developed by hand, but it is a formal verification process. Right? It's just processed by humans that develop that formal verification of the proof. And we have seen in some cases that there are subtle details and the proofs are flawed and so forth, right? So it's not a fail-proof procedure, but it certainly is much better than not having a proof. Right? If you take a cryptographic paper that publishes some new scheme and it doesn't have a proof, then it's typically a sign that it's broken.
Anna Rose:
Okay.
Jens Groth:
That is my experience.
Anna Rose:
Like, they haven't been able to provide the proof that it works.
Jens Groth:
Exactly. So either it's a sign that it's developed by somebody who does not have the technical competence to do it, or somebody who was too sloppy to do it or something else. Right? But really, I think, more than 50% of the time, if there's no security proof, it's just because it's broken.
Now, if you look at the papers that do have security proofs, then they're typically good, especially if it's been peer-reviewed and so forth. But we do have exceptions to that as well. In a few cases there are really horrible flaws, and it's just completely broken. Quite often there are subtle details and there's some corner case where things are not quite as you expected from the security theorem. So if you go a step further than that, right, then you would say humans are flawed processors, right? Humans do make mistakes. Humans do have a hard time keeping track of a lot of small details. So you could say, well, can we have a computer system that can help us? And you do have formal verification tools like Lean and Coq and Isabelle/HOL that can help you really check in detail that a proof is correct.
...not so many lines of code...
Anna Rose:
I always thought of formal verification as more on the engineering side of things, like that you've implemented it, and then you create formal verification to guarantee that the implementation is acting correctly. But is it more on the research side?
Jens Groth:
So I would think of it as these tools allow you to prove theorems, and they prove theorems about some model, right? So you have to model what it is that you want to prove something about. So you would have to write a model of, sort of, the system that you're building and provide some theorems about that system. And then you can prove that. And you can actually go in the other direction too: there are some tools that allow you to extract code from the proof that implements these systems. But that's sort of like the opposite direction.
People are also interested in, can you tie an implementation to the model that you're formally verifying, right? And that would be really useful, because one thing that happens is that you spend a lot of effort in creating this model and proving some theorems, and now you have an implementation. That implementation goes out of sync with the model you have, because while you're updating something, you come up with some performance trick, but now it's actually a different system that you have than what you have in the model. So it's a very hard thing to keep those two in sync with each other, and people are also thinking about how to make sure that they actually do stay in sync with each other.
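For readers who haven't seen the tools Jens mentions (Lean, Coq, Isabelle/HOL), here is a toy illustration of what "machine-checked" means. It is a generic Lean 4 snippet, not tied to any proof system discussed in the episode; the theorem name is made up and the proof reuses a lemma from Lean's standard library.

```lean
-- Toy example of a machine-checked proof: Lean only accepts this file
-- if the proof term really establishes the stated theorem.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- If the statement or the proof were wrong (say, `a + b = b + a + 1`),
-- the checker would reject it; there is no "trust me" step.
```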
Anna Rose:
That's interesting. I remember like I've spoken with some auditors. I think auditors will sometimes use formal verification tools, and there's almost like this spectrum of the way things get checked. Rust, in a way, as a language, is also doing some sort of check that things are correct, that's like a built in -- I don't know if you'd call it like a debugger, but it's doing something to make sure that your programs execute -- you've written it in a way that it's executing correctly and it will kind of correct you if it's wrong. Then there were things like fuzzing, and then on the sort of far end there was formal verification. And I guess that's why I thought of it more in the coding, in like the engineering implementation side of things.
Jens Groth:
Yes. So I wanted to point out that the type checking system in Rust indeed gives you some tools to formally verify things. It doesn't allow you to express everything, but it does allow you to express some security properties. And that's exactly the strength of Rust, that there are in particular some memory safety issues that don't occur in Rust because you have this type checking system. There are also attempts to build more expressivity into that type checking -- into what you can prove in Rust. So there are several projects that have tried to build sort of like advanced type checking, so you can annotate in the code that these are the pre- and postconditions I expect from this function or method. And then, if you have built this system, you can also use the type checker to check that those properties hold.
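A minimal plain-Rust sketch of the idea Jens describes: encoding a precondition in a type so the compiler enforces it. Dedicated annotation-based verification tools go further than this; everything below is ordinary Rust, and all names are made up for illustration.

```rust
// Plain-Rust sketch: encode the precondition "divisor is non-zero" in a type,
// so the type checker rules out the bad case instead of a runtime contract.
#[derive(Debug)]
struct NonZero(u64);

impl NonZero {
    // The invariant is established once here; every later use can rely on it.
    fn new(x: u64) -> Option<NonZero> {
        if x == 0 { None } else { Some(NonZero(x)) }
    }
}

// This division can no longer panic on a zero divisor: the precondition
// is carried by the type of `b`.
fn divide(a: u64, b: &NonZero) -> u64 {
    a / b.0
}

fn main() {
    let d = NonZero::new(7).expect("non-zero");
    let q = divide(42, &d);
    // The kind of postcondition an annotation-based tool would prove statically.
    debug_assert!(q * d.0 <= 42);
    println!("42 / 7 = {q}");
}
```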
Daniel Marin:
Yeah, I will say that here at Nexus, we're like huge fans of Rust. I guess at this point it's not very unique, but we really are. So, for example, formal verification tools in Rust to verify Rust code. We've been experimenting with some of those. All of the Nexus zkVM is built in Rust. We're working -- we recently launched, of course, like a Rust SDK that works with async Rust to allow you to compute multiple proofs in parallel. We're working on a system that allows you to write these precompile extensions to the instruction set purely in Rust. And it's a little bit mind-boggling to think what you have is a piece of Rust code that verifies arbitrary Rust code and might be formally verified with Rust code. So, I don't know if the --
Anna Rose:
Rust in Rust on Rust, somehow. Okay.
Daniel Marin:
Exactly. Actually, our team is composed of Mozillians, people that came out of Mozilla, the project. So, for example, one of our team members, actually, his name is Sean, Sean Martell, he designed the Rust logo itself. But yeah, we're huge believers into that. And the reasons are multiple, not only from the memory safety and the compiler, but also just the developer experience. It's just amazing.
Anna Rose:
Nice.
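On the parallel proving Daniel mentioned a moment ago: the actual Nexus SDK is async Rust, but the shape of the idea can be sketched with plain threads. Everything below is hypothetical; the `prove` function and `Proof` type are placeholders, not the Nexus API.

```rust
// Hypothetical sketch of proving several programs in parallel.
// `Proof` and `prove` are placeholders; the real SDK exposes an async
// interface, while std::thread keeps this example dependency-free.
use std::thread;

#[derive(Debug)]
struct Proof {
    program: &'static str,
}

// Stand-in for a proving call; a real SDK would run the guest program
// inside the zkVM and return a proof of its execution.
fn prove(program: &'static str) -> Proof {
    Proof { program }
}

fn main() {
    let programs = ["fibonacci", "sha256", "matrix_mul"];

    // Kick off one proving job per program.
    let handles: Vec<_> = programs
        .into_iter()
        .map(|p| thread::spawn(move || prove(p)))
        .collect();

    // Collect the proofs as the jobs finish.
    for handle in handles {
        let proof = handle.join().expect("proving thread panicked");
        println!("finished proof for {}", proof.program);
    }
}
```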
Daniel Marin:
And I also wanted to say, Anna, on the point of formal verification, what you said, this is what we see a few teams doing, that it's like we implement, then we formally verify some subset of the constraints. Right? Interestingly, the approach that we've taken since day one is we first essentially formally define the system. And you will see that in the Nexus whitepaper, we provide a mathematical description of the machine. So quite literally, the machine is represented as a Turing machine, so to say, as an algorithm -- as a non-interactive, succinct SNARK, etcetera. But it is actually represented as an algorithm, and all of the algorithms are described therein. And how we do it is we compose algorithms from multiple open papers. Just now, I was mentioning Nova, HyperNova, incrementally verifiable computation systems, so on and so forth. And we actually combine these building blocks in a very modular way, so to say, to build a mathematical definition of the machine using these papers.
And this allows us as well to prove that the construction is complete and sound, because the proofs of soundness, so to say, come from the respective papers. And then once the machine is mathematically defined, we implement it. Once it's implemented, because it's well documented from a mathematical point of view, this gives us essentially two benefits. Number one, formal verification is simpler because we know exactly what we're formally verifying. We didn't go out and implement something and then try to define it afterwards. We already knew what we wanted to implement, and then we implemented it, number one. And number two, we hope that it will allow us to build multiple clients. So not only a Rust-based zkVM, but just as with Ethereum's client diversity, for which there are like Rust and Go clients.
Anna Rose:
So I want to dive into sort of the proving system and the Nexus system. But before I do that, I just want one last thought on sort of the formal verification in this auditing sort of space. Have there been any proposals for AI auditing? Like, if you think of going back to that social model that you described, where you just put it out there, you make it as strong as you can, you wait for someone to break it. Can't they just -- and maybe not yet, but wouldn't it make sense to just run AI agents at this thing, just throw as many hacker brains at it? Like has anyone done that?
Jens Groth:
So, people have used AI tools to find bugs, and there are many different tools that you can use to find bugs. So it's also a question like, how do they stack up against those other approaches? And I think I'm a little hazy here. I don't know this field very well, but my understanding is that AI is not quite as proficient as other tools in finding bugs. So you would probably find more bugs by other testing methods, fuzz testing and so forth, than throwing an AI at a system.
But they do find some bugs, and they might be different types of bugs. So it does give you a bit more coverage in the bug finding approach. What it doesn't give you, and what formal verification gives you, is the elimination of all bugs, assuming your model is correct. Right? Again, there's an assumption here, right? But under that assumption, I don't know of any other tool than formal verification by a machine that can give you so close to a 100% guarantee that there are no bugs.
Anna Rose:
That's cool. All right, let's make the move over to the Nexus system. I feel like we got a little, you know, we got into the verification auditing world, but now I want to kind of dig down into what you are building. The proving system that you are talking about, or even that mathematical model. Now, I'm really curious what that looks like, because you mentioned IVC and Nova. When I think of a zkVM, I feel like usually there is sort of a central proving system. Sometimes there's some sort of recursion built into it. But tell me what yours -- yeah, the Nexus zkVM or the Nexus machine really looks like and what proving systems are in there.
Daniel Marin:
Yeah. Yeah. So, first, I will say that we have built essentially everything from scratch.
Anna Rose:
Okay.
Daniel Marin:
You know, it's been essentially taking zero amount of code from anyone else except for Arkworks. Pratyush and, you know, all of those guys, the folks from Aleo, have contributed greatly to the Arkworks ecosystem. So we provided the first open source implementation of the Nova proof system after Microsoft's own prototype from Srinath Setty. So that was great. We decided to essentially re-implement it and productionize it using Arkworks tooling. We've always wanted to have a very solid foundation for the zkVM upon which we can iterate very fast. And actually, we did. We provided an implementation of SuperNova actually very easily after the implementation of Nova was completed.
Implementation of Nova took quite a bit. There were a lot of engineering complications that I can talk about as well, especially around 2-cycles of elliptic curves. We implemented CycleFold, essentially techniques which allow for more efficient verification on a 2-cycle of curves. And then we recently announced and released an open sourcing of the HyperNova proof system. It's more generalized, it has multiple benefits, as well as the arithmetization of the machine. So we wrote the full RISC-V instruction set in R1CS.
Anna Rose:
Wow.
Daniel Marin:
As well as the memory checking mechanism, which is based on Merkle trees. All of this is part of the R1CS constraints. We also have a part of the zkVM that has what we call the proof compression mechanism, which is essentially, after we've computed multiple rounds of IVC -- and IVC means incrementally verifiable computation, for the audience here -- we compress the proofs, because the Nova proofs are quite large. And that as well we've essentially done from scratch.
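For context on what "writing the instruction set in R1CS" means: an R1CS instance is a set of rank-1 constraints over a vector $z$ that stacks a constant 1, the public inputs and the private witness. With constraint matrices $A$, $B$, $C$, a satisfying assignment must obey

$$ (A z) \circ (B z) = C z, $$

where $\circ$ is the entry-wise (Hadamard) product. Roughly speaking, each RISC-V instruction and each Merkle-tree memory check described above is compiled into rows of $A$, $B$ and $C$.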
Anna Rose:
What do you use there? Is it a STARK? Because isn't it usually like -- I feel like there's usually like a SNARK and a STARK. Do you do that, or do you just take another SNARK?
Daniel Marin:
Yes, people sometimes do like a STARK and a SNARK. We have done what you could consider -- this is not entirely precisely true, but what you could consider neither a SNARK nor a STARK, more like a NARK.
Anna Rose:
Okay.
Daniel Marin:
So without the S. So we're not SNARK, not STARK, more like NARK without the succinct part. And that is folding. One can be more precise there. But essentially we make the trade-off on the prover side of things to not be succinct. The folding proof is actually the size of the circuit that is being proven, which is pretty big, but it allows us to do highly efficient proof aggregation. That's what folding schemes like Nova, SuperNova and HyperNova are good at. And then we grab that really large proof and actually compress it with a SNARK. In fact, actually three composed, recursive SNARKs. The first one of which is a modified version of Spartan developed by Srinath Setty, and then an implementation by Zhenfei Zhang in Arkworks. And then we apply a couple of other rounds, the last one being, well, Groth16.
Anna Rose:
Okay.
Daniel Marin:
And this is what allows us to get to the three-group element proof size.
Anna Rose:
Would you say then, are you SuperNova or HyperNova under the hood, and then a Groth16?
Daniel Marin:
Yes. Right now it's HyperNova.
Anna Rose:
HyperNova. Okay.
Daniel Marin:
HyperNova, and then the proof -- what we call the proof compression sequence, out of which the last part is Groth16. So we don't go directly to Groth16. We need to do some mental gymnastics here and there to compress the proofs through multiple rounds of recursion.
Anna Rose:
Okay.
Daniel Marin:
Yeah. Because otherwise it's too much for Groth16 to handle. There would be like a gigantic circuit.
Anna Rose:
Is this recursion done in HyperNova itself, or is there like another proof in between there?
Daniel Marin:
Exactly, exactly. So HyperNova, Nova, SuperNova and so on and so forth are exactly the same for recursion. Right?
Anna Rose:
Okay.
Daniel Marin:
And this is actually super important. For example, our machine is able to do IVC. Just keep computing, keep computing, keep computing, keep computing within a bounded and reasonably small amount of memory. Even your browser, for example. So for example, you can do recursion on the browser. And this is important because for monolithic SNARKs or STARKs, sure, maybe you can prove a little bit on the browser if your computer is pretty powerful. But to do recursion, you do need to essentially use a lot of memory. Here, that's not the case. But the trade-off that we're doing is that the proofs are large, and so they need to be compressed before they can be submitted to Ethereum.
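A schematic sketch of the pipeline described above, with made-up types and shrink factors (this is not the Nexus codebase, and the three composed SNARKs mentioned earlier are collapsed into two stages here for brevity): folding absorbs execution steps into an accumulator whose size stays roughly fixed at the size of the step circuit, and only at the end is that large object compressed by a short chain of SNARKs finishing with Groth16.

```rust
// Schematic only: placeholder types and numbers, not the Nexus API.
struct FoldingAccumulator { size_bytes: usize } // large, roughly circuit-sized
struct SpartanProof { size_bytes: usize }       // first compression stage
struct Groth16Proof;                            // 3 group elements at the end

// Folding absorbs one more execution step without growing the accumulator.
fn fold_step(acc: FoldingAccumulator) -> FoldingAccumulator {
    FoldingAccumulator { size_bytes: acc.size_bytes }
}

// Compression stages: each recursive SNARK shrinks the object it verifies.
fn compress_spartan(acc: &FoldingAccumulator) -> SpartanProof {
    SpartanProof { size_bytes: acc.size_bytes / 1000 } // illustrative factor
}

fn compress_groth16(_p: &SpartanProof) -> Groth16Proof {
    Groth16Proof
}

fn main() {
    // 1) IVC phase: keep folding; memory stays bounded, proof stays large.
    let mut acc = FoldingAccumulator { size_bytes: 1 << 26 };
    for _step in 0..1_000_000u32 {
        acc = fold_step(acc);
    }
    // 2) Compression phase: recursive SNARKs, Groth16 last, so the final
    //    proof is small and cheap to verify on-chain.
    let mid = compress_spartan(&acc);
    let _final_proof = compress_groth16(&mid);
    println!("folded 1,000,000 steps into a constant-size final proof");
}
```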
Anna Rose:
I see. I want to understand the comparison here with some of the other zkVM teams that have come out recently. So the first one I've heard of that had the RISC-V instruction set was RISC Zero. And then SP1 came out earlier this year. Now we have Jolt. All of those are using STARKs, as far as I understand, or some like in the case of SP1 --
Daniel Marin:
Except Jolt.
Anna Rose:
Oh, Jolt is not a STARK.
Daniel Marin:
No, no --
Anna Rose:
Jolt is a sumcheck-based SNARK.
Daniel Marin:
It uses a Spartan. Yeah.
Anna Rose:
Oh, it uses Spartan. Okay. So is that -- would you say of those three then, would you -- are you closer in spirit to Jolt?
Daniel Marin:
Because of the large field universe, yes. So right now, the Nexus zkVM, exactly. It's based on 256-bit field SNARKs. So, yeah, actually, we're pretty close there in terms of the actual field size, but we don't have any allegiance to large fields. In fact, we have been exploring quite actively small field universe. It just has many downsides, but it also has a lot of upsides.
Anna Rose:
So what kind of research are you doing into the small field stuff? It sounds like -- would you be able to use something like Binius somewhere, or is everything too built so you wouldn't be able to sort of hot swap these things?
Jens Groth:
So indeed we are looking into that, right? We are very agnostic about the techniques and there are many techniques out in the field that we are looking at. So this would be one of them. So I see this as a question of which kind of commitment scheme do you want to use to commit to field elements and then the trade-offs between large field elements and small field elements and the options you have for commitment schemes, because Pedersen commitments do require large fields, but then they have nice homomorphic properties and so forth
So that could -- again, on the small field side, I don't know quite where the landscape will settle yet. So Binius seems promising, but is that what is ultimately going to win or not? That's not clear. People are also looking at lattice-based schemes for instance. But we're definitely looking at that as an option. And I think the question you could ask is sort like what is the cost per bit that we are committing? At some point we have to look at what is the raw efficiency we can get out of the proof system, and we want to maximize that raw efficiency. And one metric is perhaps to say what is the cost per bit that you are committing? Another metric would be then, okay, within the proof system, how many bits do we actually need to commit to at all? And there are also techniques that try to reduce the number of bits that are committed. So GKR has been successful in reducing the overall commitment surface.
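As a concrete reference for the homomorphic property Jens mentions: a Pedersen commitment to a message $m$ with randomness $r$, in a group of large prime order with generators $g$ and $h$, is

$$ \mathsf{Com}(m; r) = g^{m} h^{r}, \qquad \mathsf{Com}(m_1; r_1)\cdot\mathsf{Com}(m_2; r_2) = \mathsf{Com}(m_1 + m_2;\ r_1 + r_2), $$

which is one reason the committed values naturally live in a large (roughly 256-bit) field, and why folding-style accumulation can add commitments together rather than reopening them.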
Daniel Marin:
Answering your question, Anna, I would say that one of the core properties of the Nexus zkVM in our philosophy from day one has been essentially the observation that the zkVM can be thought of as a modular stack of components, each of which can be individually optimized. And so for example, Jens and his theory team here at Nexus and others, we have been working extensively on what we call the arithmetization system, which is inspired by Jolt-like techniques based on lookup arguments. And so what that essentially does is work out how to arithmetize a full CPU like RISC-V in a set of constraints such that we can pack as many CPU cycles as possible per recursion batch. Another direction of optimization is the prover itself, which, as Jens just mentioned, we're agnostic to whatever prover we use. We have built our own prover, right, as mentioned: Nova, SuperNova, HyperNova. We've built those from scratch and they couple directly with the circuits being implemented.
In addition to that, we also have what we call the memory checking subsystem, which is essentially an independent module that handles not CPU verification, but memory checking. And there are multiple techniques for memory checking. The Nexus 1 machine that we released back in September sort of silently was using a naive memory checking mechanism that was based on Merkle trees. Right now, that's still the case for the Nexus 2, but we are about to upgrade as well. And then there are multiple other independent components, like the proof compression mechanism, right? So that is completely decoupled and independent of all of this.
And then there are many other components, like what we call the Nexus Compiler, which is a transpilation from RISC-V to a simpler, better-suited instruction set architecture, which we call the Nexus Virtual Machine. And for example, the Nexus Virtual Machine was actually inspired by the Ethereum Virtual Machine, and it allows us to do two things. Number one, essentially simplify RISC-V in a way that is just better suited for proving. And number two, simplify it from a human perspective to make it easier to audit. There are some discussions here and there about whether that makes sense or not. Right now, the Nexus Virtual Machine is essentially RISC-V with just tiny modifications.
Next iterations of the Nexus Virtual Machine will most likely be supersets of RISC-V. And this gives us many, many benefits. But all of these components are modular, from the compiler, to the CPU architecture, to the memory checking mechanism, to the prover, to the arithmetization system, to the commitment scheme, to the compression system. And how we work on this is we have multiple versions of those in our library. All of it is open source. We independently optimize them, we couple them together, and that is what gives you a given choice of a zkVM that might be optimized for either proving in large computer clusters with a lot of memory usage, or proving on constrained environments like a browser with small memory usage, but perhaps more CPU intense. This has been challenging because building everything from scratch obviously gives you a lot of slowdown initially. But it has been very helpful to us to be able to iterate fast now that it's been built.
Anna Rose:
And it sounds like you had a lot of control. But I want to -- so I think on the proving side, proving system side, what it sounded like was that you have this somewhat modular, you have components and you can use these optimization techniques. So you wouldn't necessarily implement a new proving system, you would just use the technique that some new proving system has showcased and then include it in what you've built.
Daniel Marin:
Potentially, yeah.
Anna Rose:
If you can. And sort of, you mentioned like lookup tables, and I'm assuming there's like other techniques, maybe if you decide to do this small field thing, I'm guessing you wouldn't just re-implement something, but rather you just try to extract what they've done and add it to your system.
Daniel Marin:
Exactly. So it just gets better and better and better. Right?
Anna Rose:
Yeah.
Daniel Marin:
So from a business perspective, so to say, our hope is that this -- like the Nexus zkVM -- can be thought of more as an ecosystem for cryptography, where multiple implementations of multiple different schemes and components can be found. And then one, as a developer using the SDK, can choose the optimal configuration for a given program. And the good thing is that as the codebase matures and the system progresses, essentially all of the applications that are compatible with the Nexus zkVM just get better and better and better. Right? So that's all Rust programs for now, right? That's good, because it gives the developer like zero friction, right? This has also been one of our core philosophies, that you should just be able to write it, run it and get a proof, right?
Anna Rose:
Yeah.
Daniel Marin:
That simple. And that modularity has always been top of mind. In fact I would say, Anna, that our story as a company has been fairly distinct from that of others, in the sense that unlike other folks that have systems that go like super fast, perhaps initially when they launched, when we launched, we were not ashamed to say that the zkVM ran at one CPU cycle proved per second.
Anna Rose:
Okay.
Daniel Marin:
One hertz.
Anna Rose:
I guess this is slow.
Daniel Marin:
And this was back in September...
And so it ran at 100 hertz, reason being -- there are multiple reasons, but the sizes of the circuits that we were proving were actually super large in the Nexus 1. There were like 30,000 constraints, which just happens to be a lot per CPU cycle. Here with the Nexus 2, we have essentially incorporated advancements from the Jolt techniques. And this is actually something that we were looking into since more than half a year ago, which is to incorporate essentially Jolt's arithmetization techniques to reduce the size of the circuits so that we could pack more cycles per recursion step. This is exactly what we've done. And so that was the Nexus 2 announcement that we did probably three weeks ago or something like that. And our hope is to continue going in that direction across many verticals of optimization.
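A rough way to see why shrinking the per-cycle circuit matters (illustrative formula, not Nexus benchmark data): if each CPU cycle costs $c$ constraints, and one folding step can absorb a circuit of about $S$ constraints in time $t$, then

$$ \text{cycles per step} \approx \frac{S}{c}, \qquad \text{throughput (Hz)} \approx \frac{S}{c \cdot t}, $$

so cutting $c$ by a factor of about 100, which is roughly what the Jolt-style arithmetization is credited with above, raises throughput by about the same factor.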
Anna Rose:
Okay.
Daniel Marin:
And collaboration with other teams as well.
Anna Rose:
But how fast is that one? You sort of said the 100 hertz was the Nexus 1.
Daniel Marin:
Oh, yeah. So, okay -- so with Jolt, it gets about 100 times faster than the Nexus 1. So probably like --
Anna Rose:
10,000?
Daniel Marin:
10,000 hertz in our system. But the problem is that being fully transparent, it also has some downsides, like doing Ethereum verifiability is significantly more complex. So it's tricky to just talk about hertz without it being a full end-to-end system. So for example, there are some systems, that -- like you can make them go super, super fast, but they're just extremely hard to recurse, extremely hard to compress. So it's like, okay, sure, you can go super fast, but it's not useful to anyone. And then there are systems that are probably slower, but at least are practical, and you can recurse them efficiently and you can compress them efficiently. That's what we had for the Nexus 1 machine.
So our hope is like how can we combine all of this together so that you have a system that is fast, compressible, recursion friendly, so on and so forth. Let me give you another example. Something that people don't talk about is you want a universal verifier. And this has always been the goal with Nexus, is that a single smart contract implementing the verifier for the machine can verify any program execution. And some systems out there have the problem that you need a different verifier for different --
Anna Rose:
Types, okay.
Daniel Marin:
Exactly. And that is huge. If you're a developer, you would have to deploy a different smart contract for each different program. A lot of non-zkVM systems in the past had exactly that problem as well. I myself faced this personally, because there was a time when I was a student that I was trying every possible ZK system out there, like Circom, Noir -- I think it was in its initial days right then -- also Leo from the Aleo team. Great products, great tool chains, etcetera, but universality is a very important factor in practice for any deployed system. You just want to have one verifier to rule them all, so to say.
Anna Rose:
I see.
Daniel Marin:
And this has always been part of our core philosophy, that is like, how do we maintain that while improving performance?
Anna Rose:
When you talk about the universal verifier, though, I guess you're talking here about the zkVMs that are non RISC-V based, right? Like in the RISC-V based, do those also have universal verifiers, usually like the single smart contract, like SP1, RISC Zero and stuff like that?
Daniel Marin:
Yes and no. For example, the verifier may change when adding precompiles to the architecture, because the circuit that is being verified actually changes. So if you're a developer that uses a machine architecture that's different from another machine architecture, your verifier will be different and you will need to deploy a different smart contract, which is a big deal. So right now, the best systems, so to say, have a universal verifier for a given machine architecture. There might be a way in the future to even have universality across machine architectures, but that's not the case right now.
What I would say is the universality property is not always maintained by RISC-V machines. Sometimes, for example, the verifier needs to know some kind of data from the program, and that's just a huge trouble. With the specifics of RISC Zero or SP1, honestly, we've spent a lot of time obviously here being in this space and reading the code --
Anna Rose:
Looking at it.
Daniel Marin:
And honestly we're not sure. We're not sure about how the systems operate, but what we know is how our system operates. And so with us it is like that.
Anna Rose:
It's single.
Daniel Marin:
So for a given machine architecture -- exactly, for a given machine architecture, the verifier is universal.
Anna Rose:
And this means if someone deployed an application on the Nexus VM, they wouldn't have to deploy a smart contract themselves. It would just use yours. Right?
Daniel Marin:
Exactly. Just one universal smart contract.
Anna Rose:
Got it.
Daniel Marin:
And that's pretty much it.
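A sketch of what a "one universal verifier" interface amounts to, with placeholder types (this is not the actual Nexus contract or API): the verification key is fixed once per machine architecture, and the program enters only as data, for example a hash, so no per-program deployment is needed.

```rust
// Placeholder types: the point is the shape of the interface, not real crypto.
struct VerifyingKey;    // fixed once for a given machine architecture
struct Proof(Vec<u8>);  // compressed proof, e.g. a Groth16 proof

/// One routine serves every program and every execution length:
/// the program is identified by a hash passed in as public data.
fn verify(
    _vk: &VerifyingKey,
    _program_hash: [u8; 32],
    _public_io: &[u8],
    _proof: &Proof,
) -> bool {
    // Real pairing checks elided in this sketch.
    true
}

fn main() {
    let vk = VerifyingKey;
    let ok = verify(&vk, [0u8; 32], b"public inputs and outputs", &Proof(Vec::new()));
    assert!(ok);
    println!("same verifier, any program: {ok}");
}
```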
Anna Rose:
Going back to your -- the speed up that you're talking about, what are other areas -- you sort of mentioned the Nexus 2 comes out, but there's a few -- it's like it's faster, but there's some other drawbacks. What's next on your roadmap in that regard, in terms of adding new techniques? Are there any areas you're paying attention to?
Daniel Marin:
Well, a lot of exciting things are coming on many, many fronts. So, not only is the zkVM, as mentioned, like modular, so we're working on essentially every part of the modular zkVM stack. So our vision has always been to do large scale proof aggregation for the zkVM and to essentially build an ecosystem of applications that consume compute and supply compute. So to allow people to essentially lend their compute to participate in a really large scale computation. So going back to our discussion that Jens brought up about re-execution, Anna, are you familiar with the project called 'SETI at home'?
Anna Rose:
SETI at home.
Daniel Marin:
...NASA and Berkeley in the years...
What our zkVM is optimized to do -- our goal is just to prove really, really, really large statements. And so what our zkVM is optimized to do is essentially large scale joint collaborative proving. And that is where the recursion part comes in as a core part of the system, which is, can we essentially grab multiple proofs from multiple different people and bring them together into a single proof that verifies a whole large computation? So for the Nexus 3, we'll see what we'll have. There are improvements from the developer experience to multiple parts of the proving stack, but there will potentially be components as well on what you could call the large scale proof aggregation system. We'll have more details there, but I prefer to keep those secret.
Anna Rose:
Yeah, I want to hear -- so when you say proof aggregation system, I'm wondering, do you have in mind also like a prover network, a prover marketplace kind of scenario? We've seen a lot of the zkVM sort of point in that direction, so I'm wondering if that's also on your roadmap.
Daniel Marin:
We've seen a few of those conversations. It has always been our goal here at Nexus. Essentially, we want to be able to do the largest verifiable computation ever performed. That's what we want to do. Let's say you look ten years into the future, we would want to say Nexus was able to run an extremely, extremely large computation that has never been seen before. And while people talk about ZK marketplaces, ZK networks here and there, we are concerned about the problem of building a really, really, really large scale machine, a single machine, having many people work together as one. So we don't call it like a market or something like that. We'll see. I don't want to spoil a lot of things in this ZK Podcast, but let's say that we are concerned about the problem of making the zkVM as powerful as possible through any means possible.
Anna Rose:
That's your focus. So you're not focused right now on building a marketplace. Could you potentially work with a marketplace like, you talk about this sort of largest verifiable computation possible, a joint collaborative proof, but I guess the proof itself would still ideally be smallish.
Daniel Marin:
Exactly.
Anna Rose:
Would you, like, need a special actor to actually create that?
Daniel Marin:
Exactly. A special actor. And so this actor, we call it the orchestrator.
Anna Rose:
The orchestrator, okay.
Daniel Marin:
Yeah. So this is a system that orchestrates compute. Quite interestingly -- so in our whitepaper, actually, Anna, this is all public. In our whitepaper we define the zkVM, as mentioned, mathematically, using multiple mathematical components. And one of those components is called PCD, which stands for Proof Carrying Data, which is a generalization of IVC, which as mentioned stands for incrementally verifiable computation. So PCD is essentially a distributed incrementally verifiable computation.
Anna Rose:
Distributed. What was it again? What's the I?
Daniel Marin:
Distributed --
Anna Rose:
Incremental verifiable computation.
Daniel Marin:
Incrementally verifiable computation.
Anna Rose:
Okay.
Daniel Marin:
Exciting. Just like you and Jens and me and maybe a few other friends, perhaps we can do a computation in pieces, right? And essentially use our pieces to combine together to build a proof and then keep computing on it. Keep computing, keep computing.
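A toy sketch of the "compute in pieces and combine" idea Daniel describes (all types and functions hypothetical, not the Nexus orchestrator): each participant proves a chunk, and partial proofs are merged pairwise until one proof covers the whole computation.

```rust
// Hypothetical illustration of proof-carrying-data style aggregation:
// prove chunks independently, then merge partial proofs pairwise.
#[derive(Clone, Debug)]
struct PartialProof {
    steps_covered: u64,
}

// Stand-in for proving one chunk of the overall computation.
fn prove_chunk(steps: u64) -> PartialProof {
    PartialProof { steps_covered: steps }
}

// Stand-in for folding/merging two partial proofs into one.
fn merge(a: PartialProof, b: PartialProof) -> PartialProof {
    PartialProof { steps_covered: a.steps_covered + b.steps_covered }
}

fn main() {
    // Four participants each prove a chunk...
    let mut proofs: Vec<PartialProof> = (0..4).map(|_| prove_chunk(1_000)).collect();

    // ...and the chunks are merged pairwise until one proof remains.
    while proofs.len() > 1 {
        let a = proofs.remove(0);
        let b = proofs.remove(0);
        proofs.push(merge(a, b));
    }

    println!("single proof covering {} steps", proofs[0].steps_covered);
}
```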
Anna Rose:
That's the folding itself? Is it that, like, each one of us would do one of the folds, or, like, whatever, one of these iterations, and then they're combined?
Daniel Marin:
Exactly. And then they're combined and then we can keep combining them and combining them ad infinitum, so to say. So actually, Anna, this is something else that we've been super focused on, because I feel like this is something that people don't talk about is the ability to prove unbounded computations. So what this means is that the zkVM is able, like if you have enough patience and time to prove a really, really, really large computation. In other systems, you have to set up a static bound on the computation. You say, we're going to prove any program up to 100,000 steps. If you want to do that, you can do it. But if you want to prove like 101 --
Anna Rose:
It doesn't work.
Daniel Marin:
You need to either -- Exactly. It won't work. You need either a new verifier or two proofs or something like that. What we've been concerned with is, can we have a verifier that is not only universal for any program, but also universal for any length of execution. And the folding property, as well as the recursion, exactly, allows us to do that. We can keep folding and folding and folding, and this is the whole premise of incrementally verifiable computation.
Anna Rose:
Interesting.
Daniel Marin:
You might see that zk-Rollups actually fall very nicely in this category. I think zk-Rollups are naturally incrementally computable machines.
Anna Rose:
Are they unbounded, though? Do they need unbounded computation?
Daniel Marin:
Well, you're going to run your zk-Rollup for -- hopefully for the whole future of humanity. So you can use traditional recursion to essentially combine zk-Rollup proofs. There are some challenges. Maybe the recursed, aggregated proof needs a different verifier than the proof for the actual execution, which is a challenge. It would be really nice if zk-Rollups were essentially just one proof. And that proof was tiny, and it just got updated, updated, updated, updated on every block. You just have a new proof, you take the old proof, combine them, and that's your new proof. That's the premise of incrementally verifiable computation, right? It's that you need just one verifier and you can combine the proofs over and over and over. Yeah, that is sort of like our goal, to enable this incrementally machine -- incrementally computable machine.
Anna Rose:
I want to go back to the unbounded computation because it caught my interest, which is like, what is an example of that? Maybe that -- and we can also talk about this re-execution. I kind of want to dig into what that would even look like. What's a use case there? What's the use case for unbounded computation? What would that -- yeah, what kind of application needs that?
Daniel Marin:
Yeah, absolutely. Ethereum. Ethereum is an incrementally computable machine. The EVM or any other virtual machine based blockchain system. And it would be very nice, for example, if -- actually Vitalik has been talking about it. He asked at some point, what if we all achieved consensus on a proof system? Let's just say there's a million out there, what if we decided we synchronize as humans into choosing one? Well, perhaps if we wanted to choose one, it would be one that is potentially number one, very simple, number two, widely understood as a scientific breakthrough, so to say, and number three, that is optimized for recursion and incrementally verifiable computation. We would want to have a proof for a block. For this proof to be convincing, hopefully this proof system is very simple, and for this proof to be updatable, we would want that if we have essentially a proof for block 1 and a proof for block 2, we can just cheaply merge them together and have a proof for block 1 and block 2. So I would say that our main goal has been particularly proving really large computations, like rollups, so to say.
Anna Rose:
Okay. But sorry, Daniel, I keep trying -- it's the unbounded computation that I'm trying to understand, though. It's not the IVC so much. So what is that? Is Ethereum unbounded computation?
Daniel Marin:
Exactly, exactly.
Anna Rose:
Okay. That's what you're saying. Or rollups would need unbounded computation.
Daniel Marin:
Exactly, exactly. So if you're launching a rollup, an L2, an L3, even an L1, these machines are essentially running an unbounded computation that will keep computing, and computing every day, more and more and more and more. One would want to have a proof system that is able to prove unbounded computations without changes to the universality of the verifier. The verifier should remain the same, and the proofs should be easily aggregatable. So the unboundedness also helps in another thing, which is on collaborative proving. Collaborative proving makes a lot of sense if you're proving an unbounded computation, because you can keep doing it and keep doing it, keep doing it and use computational resources from multiple people. And these people, you know, they might connect, they might disconnect, but you can just reallocate computational resources and keep computing. So it doesn't matter who is doing the computation. It can be, let's say, stopped for a moment, but restarted by any other agent. This is a discussion about liveness, for example.
Jens Groth:
I'm wondering if it's useful also to give sort of like the original motivation for incrementally verifiable computation, which was for an unbounded computation. And the example was, suppose humanity wants to compute something that will take 100 years, or whatever it will take, you don't know. That's essentially an unbounded computation. And the question is then, how do you sort of trust what happened 100 years ago, and why would you trust that the computation has been correct up to this point in time? And in incrementally verifiable computation, you sort of do the computation in chunks, and once a chunk has completed, you spit out a proof, and then you do another chunk, and you spit out a proof, and those proofs aggregate all the previous proofs. So now you know that all the previous chunks of computation are correct, and that led to this point where we are now and what the state of the computation is at this time.
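In symbols, the loop Jens describes looks roughly like this: if $F$ is one chunk of the computation and $z_i$ is the state after $i$ chunks, the prover maintains a single proof $\pi_i$ and updates it as

$$ z_{i+1} = F(z_i), \qquad \pi_{i+1} = \mathsf{Prove}\big(F,\; z_i \to z_{i+1},\; \pi_i\big), $$

where verifying $\pi_{i+1}$ convinces you both that the latest chunk was computed correctly and that $\pi_i$, covering everything before it, was valid.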
Anna Rose:
I want to actually, just to clarify there, because what it sounds like is more recursion, just like recursive proofs, like Mina, the Mina system, where it always had a proof, and then it made a new proof with new information and the old proof. But in the case of IVC, it also sounds like you're kind of adding lots of different pieces of almost like a partial computation of a proof with an existing proof. Is that what it actually looks like, or is it straight up recursion in the same way?
Daniel Marin:
So it's simpler recursion, actually. One can achieve IVC with recursion of monolithic STARKs, there are just some downsides. Folding schemes are optimized for essentially enabling IVC in a very cheap fashion. There are other technical details there as well, but the folks from Mina were the original ones who proposed it. They didn't call it that way. They didn't call it IVC, they just called it recursion. But essentially, they proposed the first succinct blockchain, an IVC system.
Anna Rose:
Yeah, that's nice you say that, actually. Yeah, I feel like during the IVC boom of last year, they kind of got a little bit forgotten. People needed to be reminded that there was a system that did something like this. Yeah.
Daniel Marin:
Yeah, I remember reading the Mina whitepaper when I was a student, and I had a panel some months ago with the Mina folks, and, yeah, this is what I was telling them. It was like, guys, you were the original ones to propose this. And I think this is going to be the future for a long time. We'll see what people call it, but for now it's incrementally verifiable computation. Right? And technically, what we have is an incrementally verifiable zero-knowledge virtual machine.
Anna Rose:
Hmm. Cool.
Daniel Marin:
Yeah. I would say that another topic is just, will the proofs that we compute for Ethereum convince someone a year into the future or two years into the future? We would want these proofs to be fairly simple, mathematically speaking, so that they could convince anyone. For example, when you say, oh, I use this random proof system, it's like, the proof system could be sound, etcetera, etcetera. But how do you convince other people that this is correct? As we had discussed before, we believe that a system that uses very simple mathematical primitives and is very simple to understand from a mathematical perspective -- maybe it's not as efficient, but it's simple -- leads to many advantages in terms of human synchronization.
A lot of Ethereum is about synchronizing people. We believe that something like that is important in terms of standards, so to say. Not standards, but more universally accepted standards for what a proof is. And so this is important for the point that I was going to make before about liveness, right? So if for some reason there's a company that is computing a lot of proofs, but then they die or whatever, hopefully the proofs are, number one, easy to understand and easy to produce, and then anyone can continue producing those proofs. So just as people are computing blocks, and there's like block proposers and all of that, the same, right? People would compute proofs and they would propose a proof, the proof would get accepted, and there's a way to decide on a proof. This is a different topic of conversation that I am particularly very passionate about, which is sort of like the future of Ethereum. Like Ethereum wasn't designed for zero-knowledge proving.
Anna Rose:
Yeah.
Daniel Marin:
Right. So what will happen in the next five years of Ethereum? Well, we'll see. But Ethereum is certainly, here and there, very ZK unfriendly, I would say, both on the side of proving and on the side of verification.
Anna Rose:
Yep.
Daniel Marin:
So. Well, we'll see. I'm very optimistic that the Ethereum Foundation folks --
Anna Rose:
Will make changes.
Daniel Marin:
Yeah.
Anna Rose:
It's funny, because I was going to ask you if you are solely in the Ethereum world or if as a zkVM, you could be on any chain. So, yeah. Have you ever thought of deploying elsewhere that's more ZK friendly? It's so -- what is the word? It's heresy to suggest such a thing, I realize, but yeah.
Daniel Marin:
Ethereum, probably since like... For example, we would want precompiles for BLS12-381 so that we could do verification of BLS signatures on Ethereum, but that just hasn't been enabled. There's been an EIP for it that has not been enabled. We would want other precompiles to be able to do other different things. This would massively simplify our proof compression process, because for now we're bound to essentially have a system that makes the proofs compatible with BN254, which is not nice.
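For context on why BN254 matters here: the EVM today only has pairing precompiles for the BN254 curve, so the final wrapped proof in a compression pipeline typically ends up as a Groth16 proof over BN254. As a hedged sketch of what checking such a final proof off-chain can look like, here is a small Rust function assuming the Arkworks crates (ark-groth16, ark-bn254, ark-snark, roughly the 0.4-era API); this is illustrative, not the Nexus code.

```rust
// Sketch: verifying a final BN254 Groth16 proof off-chain with Arkworks.
// Assumes the ark-groth16 / ark-bn254 / ark-snark crates (circa 0.4 API).
use ark_bn254::{Bn254, Fr};
use ark_groth16::{Groth16, Proof, VerifyingKey};
use ark_snark::SNARK;

/// Check a Groth16 proof over BN254 -- the curve the EVM can verify cheaply
/// via its pairing precompile. `vk`, `public_inputs` and `proof` would come
/// out of a compression pipeline; constructing them requires a concrete
/// circuit, which is omitted here.
fn verify_final_proof(
    vk: &VerifyingKey<Bn254>,
    public_inputs: &[Fr],
    proof: &Proof<Bn254>,
) -> bool {
    Groth16::<Bn254>::verify(vk, public_inputs, proof).unwrap_or(false)
}
```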
Anna Rose:
Yeah, yeah, yeah, I know. I've heard this from lots of teams that have worked with the Ethereum stack.
Daniel Marin:
Exactly, exactly. Other blockchains, they have their problems as well, just because sometimes they just don't have BN254 precompiles. So it's just like worse. So you have to deploy a full Rust verifier. But sometimes those blockchains just happen to be so cheap that maybe doing that is fine. There were recently some innovations on the side of Bitcoin and Dogecoin.
Anna Rose:
True.
Daniel Marin:
Right. You saw that, right? To add an opcode for a Groth16 verifier. And all of those innovations on the execution environment are good. So, answering your question, even though in my heart Ethereum occupies a special place, our system is fully general purpose. Our verifier can be deployed pretty much anywhere. And in fact, one of the core goals of the Nexus project, so to say, is to, what we call, democratize Ethereum's execution environment. And by that I mean to allow developers to have rollups or other execution environments that are distinct through the use of precompiles and other virtual machine architectures. For example, this limitation that we've had on precompiles on the EVM would be solved if the EVM were a zkVM built with the Nexus zkVM.
Anna Rose:
Oh, so you would almost like triple -- like you'd put yourself on the third level or something?
Daniel Marin:
No, what I mean -- sorry, what I mean is that people have not been able to develop their own precompiles to the EVM and then merge them into the Ethereum protocol.
Anna Rose:
So maybe they have to move it to the L2.
Daniel Marin:
Exactly. Innovation will probably be faster in an L2, where you can enable new precompiles, because you don't have to wait for the Ethereum Foundation to pass something. So innovation will likely improve on that side by just allowing people to write their own precompiles.
Anna Rose:
Cool.
Jens Groth:
I wanted to add to what Daniel described for the blockchain space, both Ethereum but also potentially other chains. I think we should also think beyond the blockchain space. So our goal is to enable verifiable computation everywhere. If you made verifiability really cheap, so it was almost at the cost of computing, then you could spread verifiability everywhere. So you could imagine the Internet and all the data sort of floating around, and the Internet came with proofs. So it was all based on verifiable information rather than just whatever you believe that the other computer is giving you. So I think the end goal, and that's probably going to be many years into the future, is really that you sort of increase the trustworthiness of everything through verifiable computation.
Anna Rose:
Yeah, actually, Jens, there are a few others. There's Pluto, who's doing sort of the TLS web proof thing. There are a few teams -- we've been describing them as closer to the use case. They're not quite hardcore infrastructure, but they're a bit higher up the stack towards the use case. And I do think Mina, in a way, going back to Mina, I think they also had that vision of being able to prove something like the Internet, which is a much bigger picture. So I like that you're also thinking about that, because I think it's important that we do think past blockchain stuff. I think the intersection between ZK and blockchain has been epic for the acceleration of ZK. It's been the funding, I think it's also brought a lot to blockchain, but, yeah, I love the idea that it even transcends that.
Jens Groth:
I mean, I think there's a lot of things happening in the application space because what people are doing, they're going out and finding these trust anchors. You need to sort of prove something about something.
Anna Rose:
Yeah, exactly.
Jens Groth:
Right. And you have people that are going out and so like stripping out passport data using the TLS connection or so like Google account --
Anna Rose:
Or Twitter. Yeah, yeah. For sure.
Jens Groth:
So some information, and so forth. Right? And proving things about that. And it's interesting, because this is exactly something that they can do because we have improved the efficiency of zero-knowledge proofs. Right? If you tried to do that ten years ago, you would just have failed, because you couldn't do it. You didn't have the performance to do it. But now we have the performance to do it. And they can even take general-purpose proofs and throw them at it. So it's much faster innovation and prototyping. You don't need to sort of handcraft a proof system that is efficient enough for this application, you can just go out and take a general-purpose one. And I think it's very interesting what the application space and the matching zero-knowledge techniques are going to look like. I think we are really seeing very rapid improvements in performance, which is going to unlock a lot of things.
Daniel Marin:
Pretty much everybody on our team is primarily excited about the real-world applications of ZK and about bringing it into industry. That actually goes back to our initial conversation about verifiable cloud computing. We care about making it very, very professional, with high degrees of observability -- what you can consider an elastic cloud that is highly available, highly optimized, very professional, that just computes proofs in connection with ecosystem partners and so on and so forth, which is where the network discussion comes in, in the future. But bringing verifiable computation in full generality to the Internet is our primary goal as a company.
Anna Rose:
Cool. Well, thank you Daniel, thank you Jens, for coming on the show and sharing with us all these things. I mean, we covered formal verification, audits and security all the way to sort of the Nexus system and this verifiable compute, and what that actually means, this largest verifiable computation… Yeah, thanks for coming on.
Daniel Marin:
Yeah, thank you.
Jens Groth:
Yeah, thanks, Anna. A pleasure as always.
Anna Rose:
All right, I want to say thank you to the podcast team, Rachel, Henrik and Tanya, and to our listeners, thanks for listening.
Daniel Marin:
Yeah. Also formal verification of the verifier to know that exists. Yeah, that was one of the efforts.
Anna Rose:
Cool. Other than the Groth16 world, what has happened for you since we last spoke?
Jens Groth:
o Nexus. So I joined December:Anna Rose:
Nice. And what are you working on there? Are you creating a new Groth16 like proving system? Are you working on a different part of the research? Yeah, what kind of thing are you doing?
Jens Groth:
Yeah, so the goal really is to work on zero-knowledge proofs, and that's why I joined Nexus because I really wanted to work on zero-knowledge proofs. And we are building a zkVM, which we hope to have working at really massive scale so we can prove trillion CPU cycles per second. So it's basically getting all of the technologies together. And of course this happened a lot in the space in the recent years. It's really moving very fast. So it's about picking a lot of different techniques and putting it all together and choosing some points in the design space that we hope will get us there to this very high throughput.
Anna Rose:
That's cool. I want to come back to exactly the types of thing you're working on when we talk a bit more about Nexus and the architecture there. But Daniel, welcome to the show for the first time. Tell us a bit about what you were working on before this and what your role is at Nexus.
Daniel Marin:
Before that, I'll just say that probably when you asked Jens if he was working on a new Groth16, you wanted to hear like, oh yes, we're working on Groth24 or something like that.
Anna Rose:
Maybe.
Daniel Marin:
But interestingly, it's not the case. But maybe in the future, maybe in the future. I actually do track very closely all of the implementation of Groth16, but that's something else we can discuss later.
Anna Rose:
Oh, cool.
Daniel Marin:
Yeah, so a little bit about my background. Studied at Stanford cryptography, was part of the Stanford Blockchain Club, and, well, essentially while being at school started Nexus. And the goal is just to provide a hyperscale prover for verifiable computation. That's it. To enable verifiable computation and bring it to life in a really sort of large scale, really professional sort of fashion. But you can consider just verifiable cloud computing. And our approach is to essentially combine scientific efforts and breakthroughs from mathematics, cryptography and engineering, as well as crypto-economic efforts, which we can discuss later as well, to really build this super powerful machine, which we call the Nexus zkVM.
Anna Rose:
What was it originally? Was that the original plan when you first created this project, or did it have a different focus?
Daniel Marin:
Well, it started about two years ago, but more formally about a year and a half. And the goal was to enable verifiable computation. But we were in fact taking a different approach for a short period of time of the company, which was essentially crypto-economic security, actually a topic that is pretty popular nowadays. And the idea there was to use something like WebAssembly, as well as staking mechanisms to provide a stateless off-chain computational system that would allow smart contracts to outsource compute to economically incentivize off-chain agents.
Anna Rose:
Oh, do you put this in the category of ZK coprocessors? Was it more like ZK coprocessors and now it's changed?
Daniel Marin:
So we usually don't call it like ‘coprocessor’. So usually a ZK coprocessor is like you use some general purpose verifiable computation system to outsource compute from a smart contract, but it's only one application of verifiable computation. Right? Our approach has always been fully general, general-purpose from day one. And so our goal is to allow developers to build things like ZK coprocessors of different flavors, actually, and that is the primary goal.
Anna Rose:
When you first started, though, when you were talking about this cryptographic security, did you have a more narrow focus then?
Daniel Marin:
paper, which was published in:Anna Rose:
I feel like I hear the term verifiable compute a lot, or verifiable computation, but can you walk us through what that actually is?
Daniel Marin:
Yeah, absolutely. So I think when people mention verifiable computation, I actually think about a traditional cloud computing service, just that it comes with a proof. That's simple.
Anna Rose:
Okay. And what does the proof represent, though? That is that instead of the cloud? Is it instead of a third party kind of guaranteeing it, or is it just -- it never was guaranteed and now it can be? It can be verifiable.
Daniel Marin:
Well, it was never a guarantee. Right? Now the proof is something that provides you as a consumer a guarantee, but that you can also share with anyone in order to convince them about something. It's always been sort of like part of our philosophy that developers shouldn't know whether they are computing in a traditional environment or in a verifiable environment, and the user experience should be exactly the same. And the Nexus zkVM was designed exactly with this user experience in mind. It's like you shouldn't know that this is a verifiable machine, that this is a verifiable world. It should be essentially taken for granted. Of course that's not the case, because performance, right? But that's exactly what we're working on.
Anna Rose:
Let's define that verifiable here. I'm trying to picture a use case of this. A developer creates a Rust application. What kind of application would need to be verified or verifiable, or what element of the application?
Daniel Marin:
So, yeah. So first of all, verifiability is about verifiability of the execution, and we verify that the output actually corresponds to running the program on a given input. This differs from formal verification, where verifiability is static. Here it's a dynamic process, it goes after execution.
Anna Rose:
Okay.
Daniel Marin:
Essentially, you're trying to convince someone that some algorithm being computed is being computed correctly. Or more abstractly, you're trying to convince someone that something is true. And so there are obviously a lot of examples in the blockchain space. We you also find super interesting outside of the blockchain space, what the possibilities there are, even though the market is less developed. But answering your question, just a simple thing that you can prove is, for example, properties about your Google account without revealing your password. For example, I could prove that I have 100 emails from Jens sent last night to me without giving you...
Anna Rose:
What happened?
Daniel Marin:
Maybe he was too worried about the ZK Podcast. What are we going to talk about? So, identity, high security computing, of course, all of private computation falls into a realm of verifiable computation. There is also the field of re-execution. So this is quite an interesting property that has never existed in the history of humanity, right? Which is the fact that if you compute a proof for some computation, then no one in the whole future of humanity has to rerun that computation, because someone already spent a lot of resources in producing this proof.
Jens Groth:
To add on this, right? So there has been this notion of double execution. Right? If you have to have high confidence in a computation, well, how would you build that high confidence? You would execute it on multiple machines, maybe with different pieces of software developed by different teams of developers and so forth. Right? And you would then compare those outputs and see do they match. In that case, you would believe the computation. And it's really cool that -- even though it's much more expensive to just do a double computation, but it's really cool that you can now create a proof, and now everybody has a guarantee that this is the right result and you don't need to worry about was there some random bug during the execution or whatever that otherwise could happen.
And then to add on that, if you just do two executions, then you are not able to go to anybody else and convince them that this was the correct result. But now when you have a proof that's verifiable by anybody, then you can take this piece of data and you can go around and show, here is a proof that it's correctly computed and everybody will believe that data. So you can send it around to other systems and can compute on this data and trust the data because it has a proof.
Daniel Marin:
And everybody in the future as well. So the proof is theoretically valid for an infinite amount of time. And that's why actually going back to our conversation of standards, standards are important because if we all agree on a standard for a final, let's say, proof system like Groth16, then my Groth16 proof hopefully will also convince people living 100 years into the future.
Anna Rose:
So, like, in this case, you can create a proof, but then people need to trust that the proving system you used is correct -- is working correctly. And you're saying if there's a standard around Groth16 or something, and that is sort of an accepted strong proving system that people accept as correct, then they wouldn't also have to audit the proving system because I guess otherwise they would potentially.
Daniel Marin:
Exactly, exactly. No, you're totally right. And I think this is something that people don't talk about too often, which is like, people talk about proofs and proving systems, but no one talks about how do we convince ourselves that any of this makes sense, right? Like you can tell me this is a proof, here's a million bytes of data, and the proof system is, I don't know, like Halo2, so to say. Right? Well, I don't know if the verifier that I'm using is correct, if Halo2 is sound, if the prover for Halo2 is not cheating. So, I mean, it just doesn't mean to me anything. So hopefully formal verification can help us with this, as well as more concrete estimates of soundness and completeness properties of proof systems. But just like this is a social problem, how do I convince to you that this proof actually proves what I'm trying to prove? It's very hard.
Anna Rose:
And so far with Groth16, the proof of that basically comes because a lot of teams have audited and adopted it. It's sort of like they checked it, they implemented it, those implementations, we assume, have been extremely carefully checked and battle tested, and then a lot of people use them. So they've made lots of -- kind of there's lots of exposure of that system. So if it was attackable the assumption goes, someone would have already done it. I always find that so odd that that's how we get our trust in a weird way. It's just sort of like it hasn't broken yet, therefore we believe.
Jens Groth:
I mean, in some sense, this is how the field of cryptography works, right?
Anna Rose:
Yeah.
Jens Groth:
We built our systems based on some assumptions about cryptographic primitives, that the discrete logarithm problem is hard, or you cannot break this lattice assumption or something like that. And the basis for that trust is essentially, people have thought hard about those problems and not found a way to attack them. So there's a social process in getting to the trust. Right? We don't have a process by which we can get trust that does not rely at all on human effort. So I do think this is something that is part of the system. And it's interesting, actually, when people talk about proof systems, they say, well, you get mathematical guarantees of truth and so forth. And that is true. You've sort of made some part of the trust process mathematical, but you still do go back to some underlying assumptions, and you do need to have some human trust, especially if you're not an expert in the field. Right? You sort of have to take some cryptographers at the word that you know, yes, this is a secure proof system, this is a secure assumption to rely on.
Daniel Marin:
So this is actually part of, like our company strategy, so to say, as well, which is like, we recognize that a lot of this math is pretty unparsable for a lot of people. And so a lot of trust in a proof system and its implementation, being sound -- you know complete and sound and being secure, comes down to social proof. And for that, it's important that the people developing it are, well, world experts across multiple fields, whether that's compilers, cryptography, obviously, to other realms, like formal verification. Because otherwise, how will you as a user know that you're buying into a system that actually provides security, which is the core product of a system like verifiable computation?
If the people that are developing are not cryptographers, or at least don't have expertise on this, how do you convince yourself? My hope is that there will be a way in the future in which trust in experts, such as the people that we have in our team, is not fully necessary because of new advances as well, and formal verification, which can provide undebatable proof of soundness. But that is too futuristic nowadays. Today we just have to trust that the people that implemented a proof system didn't miss a constraint here and there. And that's pretty hard to do.
Anna Rose:
I feel like we should define formal verification. I've done two episodes on this at least. I know it's come up a lot, but can you explain what that actually is? Because it does seem like it's a step further than just the social. It's not just building something, putting it out into the wild, seeing if someone can break it. It's actually almost like reverse engineering, creating almost like a mirror project that just makes sure that this executes the way it's supposed to. Yeah, but let's explore that a little bit.
Daniel Marin:
Yeah. I'll just say that I was recently seeing -- there was a talk that Vitalik gave at the Google event, and he was saying, we want to move out from just ‘trust me bro’ scenario. And that's exactly what I'm trying to say. A lot of the systems are like, just trust me bro. This is sound. I've written all the constraints that I've promised to write. The verifier is correct. No one actually has a time or knowledge to go and check it out themselves. Like I could build a prover today and say that it runs at a billion hertz. It's going to be really hard for anyone to contradict that claim because they would need to find a soundness exploit. And so the hope is, with formal verification, that that process can actually become automated and that it doesn't rely on social proof.
Jens Groth:
I think there are several layers of formal verification. So if you look at papers in cryptography, they will often come with a security theorem, there will be a written proof, and this is something that's developed by hand, but it is a formal verification process. Right? It's just processed by humans that develop that formal verification of the proof. And we have seen in some cases that there are subtle details and the proofs are flawed and so forth, right? So it's not a failed proof procedure, but it certainly is much better than not having a proof. Right? If you take a cryptographic paper that publishes some new scheme and it doesn't have a proof, then it's typically a sign that it's broken.
Anna Rose:
Okay.
Jens Groth:
That is my experience.
Anna Rose:
Like, they haven't been able to provide the proof that it works.
Jens Groth:
Exactly. So either it's a sign that it's developed by somebody who does not have the technical competence to it, or somebody who was too sloppy to do it or something else. Right? But really is, I think, more than 50% of times, if there's no security proof, it's just because it's broken.
Now, if you look at the papers that do have security proofs, then they're typically good, especially if it's been peer-reviewed and so forth. But we do have exceptions to that as well. Either few cases there are really horrible flaws, and it's just completely broken. Quite often there's subtle details and there's some corner case where things are not quite as you expected from the security theorem. So if you go a step further than that, right, so then you would say humans are flawed processors, right? Humans do make mistakes. Humans do have a hard time keeping track of a lot of small details. So you could say, well, can we have a computer system that can help us? And you do have formal verification tools like Lean and Coq and Isabelle/HOL that can help you really check in detail that a proof is correct.
ly not so many lines of code,:Anna Rose:
I always thought of formal verification as more on the engineering side of things, like that you've implemented it, and then you create formal verification to guarantee that the implementation is acting correctly. But is it more on the research side?
Jens Groth:
So I would think of it as these tools allow you to prove theorems, and they prove theorems about some model, right? So you have to model what it is that you want to prove something. So you would have to write a model of what are sort of like the system that you're doing and provide some theorems about that system. And then you can prove that, and you can actually go from direction of like, if you have proof of that. There are some tools that allow you to extract code from that that implements these systems. But that's sort of like the opposite direction.
People are also interested in, can you tie an implementation to the model that you're formally verifying, right? And that would be really useful, because one thing that happens is that you spend a lot of effort in creating this model and proving some theorems, and now you have an implementation. That implementation goes out of sync with the model you have, because while you're updating something, you come up with some performance trick, but now it's actually a different system that you have than what you have in the model. So it's a very hard thing to keep those two in sync with each other, and people also thinking about how do you try to make sure that they actually do stay in sync with each other.
Anna Rose:
That's interesting. I remember like I've spoken with some auditors. I think auditors will sometimes do formal verification tools, and there's almost like this spectrum of the way things get checked. Rust, in a way, as a language, is also doing some sort of check that things are correct, that's like a built in -- I don't know if you'd call it like debugger, but it's doing something to make sure that your programs execute -- You've written it in a way that it's executing correctly and it will kind of correct you if it's wrong. Then there were things like fuzzing, and then on the sort of far end there was formal verification. And I guess that's why I thought of it more in the coding, in like the engineering implementation side of things.
Jens Groth:
Yes. So I wanted to point out that the type checking system in Rust indeed gives you some tools to formally verify things. It doesn't allow you to express everything, but it does allow you to express some security properties. And that's exactly the strength of Rust, that there are particular some memory safety issues that don't occur in Rust because you have this type checking system. There are also attempts to build in more expressivity of that type checking. What you can prove into Rust. So there are several projects that have tried to build sort of like advanced type checking, so you can annotate in the code that these are the post and preconditions I expect from this function or method. And then you can actually, if you have built this system and you can also use the type checker to check that those properties hold.
Daniel Marin:
Yeah, I will say that here at Nexus, we're like huge fans of Rust. I guess at this point it's not very unique, but we really are. So, for example, formal verification tools in Rust to verify Rust code. We've been experimenting with some of those. All of the Nexus zkVM is built in Rust. We're working -- we recently launched, of course, like a Rust SDK that works with async Rust to allow you to compute multiple proofs in parallel. We're working on a system that allows you to write this precompiled extensions to the instruction set purely in Rust. And it's a little bit mind-boggling to think what you have is a piece of Rust code that verifies arbitrary Rust code and might be formally verified with Rust code. So, I don't know if the --
Anna Rose:
Rust in Rust on Rust, somehow. Okay.
Daniel Marin:
Exactly. Actually, our team is composed of Mozillians, people that came out of Mozilla, the project. So, for example, one of our team members, actually, his name is Sean, Sean Martell, he designed the Rust logo itself. But yeah, we're huge believers into that. And the reasons are multiple, not only from the memory safety and the compiler, but also just the developer experience. It's just amazing.
Anna Rose:
Nice.
Daniel Marin:
And I also wanted to say, Anna, on the point of formal verification, what you said, this is what we see a few teams doing, that it's like we implement, then we formally verify some subset of the constraints. Right? Interestingly, the approach that we've taken since day one is we first essentially formally define the system. And you will see that in the Nexus whitepaper, we provide a mathematical description of the machine. So quite literally, the machine is represented as a Turing machine, so to say, as an algorithm -- as a non interactive, succinct SNARK, etcetera. But it is actually represented as an algorithm, and all of the algorithms are described therein. And how we do it is we compose algorithms from multiple open papers. Just now, I was mentioning Nova, Hypernova, incrementally verifiable computation systems, so on and so forth. And we actually combined this building blocks in a very modular way, so to say, to build a mathematical definition of the machine using these papers.
And this allows us as well to prove that the construction is complete and sound, because the proofs of soundness, so to say, come from the respective papers. And then once the machine is mathematically defined, we implement it. Once it's implemented, because it's well documented from a mathematical point of view, this gives us essentially two benefits. Number one, formal verification is simpler because we know exactly what we're formally verifying. We didn't go out and implement something and then try to do it instead. We already knew what we wanted to implement, and then we implemented it, number one. And number two, we hope that it will allow us to build multiple clients. So not only a Rust-based zkVM, but just as with Ethereum's client diversity, for which there are like Rust and Go clients.
Anna Rose:
So I want to dive into sort of the proving system and the Nexus system. But before I do that, I just want one last thought on sort of the formal verification in this auditing sort of space. Has there been any proposals for AI auditing? Like, if you think of going back into that social model that you described of just, like you put it out of, you make it as strong as you can. You wait for someone to break it. Can't they just -- and maybe not yet, but wouldn't it make sense to just run AI agents at this thing, just throw as many hacker brains at it? Like has anyone done that?
Jens Groth:
So, people have used AI tools to find bugs, and there are many different tools that you can use to find bugs. So it's also a question like, how do they stack up against those other approaches? And I think I'm a little hazy here. I don't know this field very well, but my understanding is that AI is not quite as proficient as other tools in finding bugs. So you would probably find more bugs by other testing methods, fast testing and so forth, than throwing an AI at a system.
But they do find some bugs, and they might be different types of bugs. So it does give you a bit more coverage in the bug finding approach. What it doesn't give you and what formal verification gives you, so the elimination of all bugs, assuming your model is correct. Right? Again, there's an assumption here, right? But under that assumption, I don't know of any other tool than formal verification by a machine that can give you, so close to 100% guarantee that there are no bugs.
Anna Rose:
That's cool. All right, let's make the move over to the Nexus system. I feel like we got a little, you know, we got into the verification auditing world, but now I want to kind of dig down into what you are building. The proving system that you are talking about, or even that mathematical model. Now, I'm really curious what that looks like, because you mentioned IVC and Nova. When I think of a zKVM, I feel like usually there is sort of a central proving system. Sometimes there's some sort of recursion built into it. But tell me what yours -- yeah, the Nexus zkVM or the Nexus machine really looks like and what proving systems are in there.
Daniel Marin:
Yeah. Yeah. So, first, I will say that we have built essentially everything from scratch.
Anna Rose:
Okay.
Daniel Marin:
You know, it's been essentially taking zero amount of codes from anyone else except for Arkworks. Pratyush and, you know, all of those guys, the folks from Aleo, have contributed greatly to the Arkworks ecosystem. So we provided the first open source implementation of the Nova proof system after Microsoft's own prototype from Srinath Setty. So that was great. We decided to essentially re-implement it, productionize using Arkworks tooling. We've always wanted to have a very solid foundation for the zkVM up on which we can iterate very fast. And actually, we did. We provided an implementation of SuperNova, actually very easily after the implementation of Nova was completed.
Implementation of Nova took quite a bit. There were a lot of engineering complications that I can talk about as well, especially around two cycles of elliptic curves. We implemented the cycle fold, essentially techniques which allow for more efficient verification on two cycle of curves. And then we recently announced and released an open sourcing of the HyperNova proof system. It's more generalized, it has multiple benefits as well as the arithmetization of the machine. So we wrote in R1CS, the full RISC-V instruction set.
Anna Rose:
Wow.
Daniel Marin:
As well as the memory checking mechanism, which is based on Merkle trees. All of this is part of R1CS constraints. We also have a part of the zkVM that has, what do we call the proof compression mechanism, which is essentially after we've computed multiple rounds of IVC, and IVC means incrementally verifiable computation for the audience here, we compress the proofs because the Nova proofs are quite large. And so that as well, we essentially, we've done it from scratch.
Anna Rose:
What do you use there? Is it a STARK? Because isn't it usually like -- I feel like there's usually like a SNARK and a STARK. Do you do that, or do you just take another SNARK?
Daniel Marin:
Yes, people sometimes do like a STARK and a SNARK. We have done what you could consider -- this is not entirely precisely true, but what you consider like I'm neither a SNARK nor a STARK, more like a NARK.
Anna Rose:
Okay.
Daniel Marin:
So without the S. So we're not SNARK, not STARK, more like NARK without the succinct part. And that is folding. One can be more precise there. But essentially we make the trade-off on the prover side of things to not be succinct. The falling proof is actually the size of the circuit that is being proven, which is pretty big, but it allows us to do highly efficient proof aggregation. That's what folding schemes in Nova, SuperNova and HyperNova are good at. And then we grab that really large proof and actually compress it with a SNARK. In fact, actually three composed, regressive SNARKs. The first one of which is a modified version of Spartan developed by Srinath Setty, and then implementation by Zhenfei Zhang in Arkworks. And then we apply a couple of other rounds, the last one being, well, Groth16.
Anna Rose:
Okay.
Daniel Marin:
And this is what allows us to get to the three-group element proof size.
Anna Rose:
Would you say then, are you SuperNova or HyperNova under the hood, and then a Groth16?
Daniel Marin:
Yes. Right now it's HyperNova.
Anna Rose:
HyperNova. Okay.
Daniel Marin:
HyperNova, and then the proof -- what we call the proof compression sequence, out of which the last part is Groth16. So we don't go directly to Groth16. We need to do some mental gymnastics here and there to compress the proofs through multiple rounds of recursion.
Anna Rose:
Okay.
Daniel Marin:
Yeah. Because otherwise it's too much for Groth16 to handle. There would be like a gigantic circuit.
Anna Rose:
Is this recursion done in HyperNova itself, or is there like another proof in between there?
Daniel Marin:
Exactly, exactly. So HyperNova, Nova, SuperNova and so on and so forth are exactly the same for recursion. Right?
Anna Rose:
Okay.
Daniel Marin:
And this is actually super important. For example, our machine is able to do IVC. Just keep computing, keep computing, keep computing, keep computing within a bounded and reasonably small amount of memory. Even your browser, for example. So for example, you can do recursion on the browser. And this is important because for monolithic SNARKs or STARKs, sure, maybe you can prove a little bit on the browser if your computer is pretty powerful. But to do recursion, you do need to essentially use a lot of memory. Here, that's not the case. But the trade-off that we're doing is that the proofs are large, and so they need to be compressed before they can be submitted to Ethereum.
Anna Rose:
I see. I want to understand the comparison here with some of the other zkVM teams that have come out recently. So the first one I've heard of that had the RISC-V instruction set was RISC Zero. And then SP1 came out earlier this year. Now we have Jolt. All of those are using STARKs, as far as I understand, or some like in the case of SP1 --
Daniel Marin:
Except Jolt.
Anna Rose:
Oh, Jolt is not a STARK.
Daniel Marin:
No, no --
Anna Rose:
Jolt is a sumcheck-based SNARK.
Daniel Marin:
It uses a Spartan. Yeah.
Anna Rose:
Oh, it uses Spartan. Okay. So is that -- would you say of those three then, would you -- are you closer in spirit to Jolt?
Daniel Marin:
Because of the large field universe, yes. So right now, the Nexus zkVM, exactly. It's based on 256-bit field SNARKs. So, yeah, actually, we're pretty close there in terms of the actual field size, but we don't have any allegiance to large fields. In fact, we have been exploring quite actively small field universe. It just has many downsides, but it also has a lot of upsides.
Anna Rose:
So what kind of research are you doing into the small field stuff? It sounds like -- would you be able to use something like Binius somewhere, or is everything too built so you wouldn't be able to sort of hot swap these things?
Jens Groth:
So indeed we are looking into that, right? We are very agnostic about the techniques and there are many techniques out in the field that we are looking at. So this would be one of them. So I see this as a question of which kind of commitment scheme do you want to use to commit to field elements and then the trade-offs between large field elements and small field elements and the options you have for commitment schemes, because Pedersen commitments do require large fields, but then they have nice homomorphic properties and so forth
So that could -- again, on the small field side, I don't know quite where the landscape will settle yet. So Binius seems promising, but is that what is ultimately going to win or not? That's not clear. People are also looking at lattice-based schemes for instance. But we're definitely looking at that as an option. And I think the question you could ask is sort like what is the cost per bit that we are committing? At some point we have to look at what is the raw efficiency we can get out of the proof system, and we want to maximize that raw efficiency. And one metric is perhaps to say what is the cost per bit that you are committing? Another metric would be then, okay, within the proof system, how many bits do we actually need to commit to at all? And there are also techniques that try to reduce the number of bits that are committed. So GKR has been successful in reducing the overall commitment surface.
Daniel Marin:
Answering your question, Anna, I would say that one of the core properties of the Nexus zkVM in our philosophy from day one has been essentially the observation that the zkVM can be thought of as a modular stack of components, each of which can be individually optimized. And so for example, Jens and his theory team here at Nexus and others, we have been working extensively on what we call the arithmetization system that is inspired by Jolt-like techniques based on lookup arguments. And so what that essentially does is work into how to arithmetize a full CPU like RISC-V in a set of constraints such that we can pack as many CPU cycles per recursion batch. Another direction of optimization is the prover itself, which as Jens just mentioned, we're agnostic to whatever prover we use. We have built our own prover, right, as mentioned Nova, SuperNova, HyperNova. We've built those from scratch and they couple directly with the circuits being implemented.
In addition to that, we also have what we call the memory checking subsystem, which is essentially like an independent module that does not just CPU verification, but memory checking. And there's multiple techniques to memory checking. The Nexus 1 machine that we released back in September sort of silently was using a naive memory checking mechanism that was based on Merkle trees. Right now, that's still the case for the Nexus 2, but we are about to upgrade as well. And then there's multiple other independent components like the proof compression mechanism, right? So that is completely decoupled and independent of all of this.
And then there's many other components like what we call the Nexus Compiler is a transpilation from RISC-V to just more and better and simpler suitable instruction set architecture, which we call the Nexus Virtual Machine. And for example, the Nexus Virtual Machine was actually inspired by the Ethereum Virtual Machine, and it allows us to further to do two things. Number one, essentially you simplify RISC-V in a way that is just better suited for proving. And number two, to simplify it from a human perspective for it to make it easier to audit. There are some discussions here and there about whether that makes sense or not. Right now, the Nexus Virtual Machine is essentially RISC-V with just tiny modifications.
Next iterations of the Nexus Virtual Machine will be most likely supersets of RISC-V. And this gives us many, many benefits. But all of these components are modular, from the compiler, to the CPU architecture, to the memory checking mechanism, to the prover, to the arithmetization system, to the commitment scheme to the compression system. And how we work on this, we have multiple versions of those in our library. All of it is open source. We independently optimize them, we couple them together, and that is what gives you a given choice of a zkVM that might be optimized for either proving in large computer clusters with a lot of memory usage, or proving on constrained environments like a browser with small memory usage, but perhaps more CPU intense. This has been challenging because not taking code from scratch obviously gives you a lot of slowdown initially. But it has been very helpful to us to be able to iterate fast now that it's been built.
Anna Rose:
And it sounds like you had a lot of control. But I want to -- so I think on the proving side, proving system side, what it sounded like was that you have this somewhat modular, you have components and you can use these optimization techniques. So you wouldn't necessarily implement a new proving system, you would just use the technique that some new proving system has showcased and then include it in what you've built.
Daniel Marin:
Potentially, yeah.
Anna Rose:
If you can. And sort of, you mentioned like lookup tables, and I'm assuming there's like other techniques, maybe if you decide to do this small field thing, I'm guessing you wouldn't just re-implement something, but rather you just try to extract what they've done and add it to your system.
Daniel Marin:
Exactly. So it just gets better and better and better. Right?
Anna Rose:
Yeah.
Daniel Marin:
So from a business perspective, so to say, our hope is that this -- like the Nexus zkVM, can be thought of more as an ecosystem for cryptography, where multiple implementations of multiple different schemes and components can be found. And then one is a developer using the SDK can choose the optimal configuration for a given program. And the good thing is that as the codebase matures and the system progresses, essentially all of the applications that are compatible with the Nexus zkVM just get better and better and better. Right? So that's all Rust programs for now, right? That's good, because it gives the developer like zero friction, right? This has been also one of our core philosophy is that you should just be able to sell it, run it and get approved, right?
Anna Rose:
Yeah.
Daniel Marin:
That simple. And that modularity has always been top of mind. In fact I would say, Anna, that our story as a company has been fairly distinct from that of others in the sense that unlike other folks that have systems that go like super fast, perhaps initially when they launched, when we launched, we were not ashamed to say that the zkVM ran at one CPU cycle proofed per second.
Anna Rose:
Okay.
Daniel Marin:
One hertz.
Anna Rose:
I guess this is slow.
Daniel Marin:
nd this was back in September:And so it ran at 100 hertz, reason being -- there's multiple reasons, but the sizes of the circuits that we were proving are actually super large in the Nexus 1. There were like 30,000 constraints, which it happens just to be a lot per CPU side. Here at the Nexus 2, we have essentially incorporated advancements from the Jolt techniques. And this is actually something that we were looking into since more than half a year ago, which is to incorporate essentially Jolt's arithmetization techniques to reduce the size of the circuits so that we could pack more cycles per recursion step. This is exactly what we've done. And so that was the Nexus 2 announcement that we did probably three weeks ago or something like that. And our hope is to continue going in that direction across many verticals of optimization.
Anna Rose:
Okay.
Daniel Marin:
And collaboration with other teams as well.
Anna Rose:
But how fast is that one? You sort of said the 100 hertz was 1.
Daniel Marin:
Oh, yeah. So, okay -- so with Jolt, it gets about 100 times faster than the Nexus 1. So probably like --
Anna Rose:
10,000?
Daniel Marin:
10,000 hertz in our system. But the problem is that being fully transparent, it also has some downsides, like doing Ethereum verifiability is significantly more complex. So it's tricky to just talk about hertz without it being a full end-to-end system. So for example, there are some systems, that -- like you can make them go super, super fast, but they're just extremely hard to recurse, extremely hard to compress. So it's like, okay, sure, you can go super fast, but it's not useful to anyone. And then there are systems that are probably slower, but at least are practical, and you can recurse them efficiently and you can compress them efficiently. That's what we had for the Nexus 1 machine.
So our hope is like how can we combine all of this together so that you have a system that is fast, compressible, recursion friendly, so on and so forth. Let me give you another example. Something that people don't talk about is you want a universal verifier. And this has always been the goal with Nexus, is that a single smart contract implementing the verifier for the machine can verify any program execution. And some systems out there have the problem that you need a different verifier for different --
Anna Rose:
Types, okay.
Daniel Marin:
Exactly. And that is huge. If you're a developer, you would have to deploy a different smart contract for each different program. A lot of non-zkVM as well in the past had exactly that problem. I myself faced this personally because there was some time when I was a student there that I was trying any possible ZK system out there, like Circom, Noir, I think it was in its initial days right there. Also Leo from the Aleo team. Great products, great tool chains, etcetera, but the universality is a very important factor in practice for any deployed system. You just want to have one verifier to rule them all, so to say.
Anna Rose:
I see.
Daniel Marin:
And this has always been part of our core philosophy, that is like, how do we maintain that while improving performance?
Anna Rose:
When you talk about the universal verifier, though, I guess you're talking here about the zkVMs that are non RISC-V based, right? Like in the RISC-V based, do those also have universal verifiers, usually like the single smart contract, like SP1, RISC Zero and stuff like that?
Daniel Marin:
Yes and no. For example, the verifier may change when adding precompiles to the architecture because the circuit that is being verified actually changes. So if you're a developer that uses machine architecture that's different from other machine architecture, your verifier will be different and you will need to deploy a different smart contract, which is a big deal. So right now, the best system, so to say, have a universal verifier for a given machine architecture. There might be a way in the future to even have universality across machine architectures, but that's not the case right now.
What I would say is the universality property is not always maintained by RISC-V machines. Sometimes, for example, the verifier needs to know some kind of data from the program, and that's just a huge trouble. With the specifics of RISC Zero or SP1, honestly, we've spent a lot of time obviously here being in this space and reading the code --
Anna Rose:
Looking at it.
Daniel Marin:
And honestly we're not sure. We're not sure about how the systems operate, but what we know is how our system operates. And so with us it is like that.
Anna Rose:
It's single.
Daniel Marin:
So for a given machine architecture -- exactly, for a given machine architecture, the verifier is universal.
Anna Rose:
And this means if someone deployed an application on the Nexus VM, they wouldn't have to deploy a smart contract themselves. It would just use yours. Right?
Daniel Marin:
Exactly. Just one universal smart contract.
Anna Rose:
Got it.
Daniel Marin:
And that's pretty much it.
Anna Rose:
Going back to your -- the speed up that you're talking about, what are other areas -- you sort of mentioned the Nexus 2 comes out, but there's a few -- it's like it's faster, but there's some other drawbacks. What's next on your roadmap in that regard, in terms of adding new techniques? Are there any areas you're paying attention to?
Daniel Marin:
Well, a lot of exciting things are coming on many, many fronts. So, not only is the zkVM, as mentioned, like modular, so we're working on essentially every part of the modular zkVM stack. So our vision has always been to do large scale proof aggregation for the zkVM and to essentially build an ecosystem of applications that consume compute and supply compute. So to allow people to essentially lend their compute to participate in a really large scale computation. So going back to our discussion that Jens brought up about re-execution, Anna, are you familiar with the project called 'SETI at home'?
Anna Rose:
SETI at home.
Daniel Marin:
ASA and Berkeley in the years:What our zkVM is optimized to do, our goal is just to prove really, really, really large statements. And so what our zkVM is optimized to do is essentially to do large scale joint collaborative proving. And that is where the recursion part comes in as a core part of the system, which is, can we essentially grab multiple proofs from multiple different people and bring them together into a single proof that verifies a whole large computation? So for the Nexus 3, we'll see what we'll have. There are improvements from the developer experience to multiple parts of the proving stack, but there will be components potentially as well on the what you can call the large scale proof aggregation system. We'll have more details there, but I prefer to keep those secret.
Anna Rose:
Yeah, I want to hear -- so when you say proof aggregation system, I'm wondering, do you have in mind also like a prover network, a prover marketplace kind of scenario? We've seen a lot of the zkVM sort of point in that direction, so I'm wondering if that's also on your roadmap.
Daniel Marin:
We've seen a few of those conversations. It has always been our goal here at Nexus. Essentially, we want to be able to do the largest verifiable computation ever performed. That's what we want to do. Let's say you look ten years into the future, we would want to say Nexus was able to run an extremely, extremely large computation that has never been seen before. And while people talk about ZK marketplaces, ZK networks here and there, we are concerned about the problem of building a really, really, really large scale machine, a single machine, having many people work together as one. So we don't call it a market or something like that. We'll see. I don't want to spoil a lot of things in this ZK Podcast, but let's say that we are concerned about the problem of making the zkVM as powerful as possible through any means possible.
Anna Rose:
That's your focus. So you're not focused right now on building a marketplace. Could you potentially work with a marketplace? Like, you talk about this sort of largest verifiable computation possible, a joint collaborative proof, but I guess the proof itself would still ideally be smallish.
Daniel Marin:
Exactly.
Anna Rose:
Would you need, like, a special actor to actually create that?
Daniel Marin:
Exactly. A special actor. And so this actor, we call it the orchestrator.
Anna Rose:
The orchestrator, okay.
Daniel Marin:
Yeah. So this is a system that orchestrates compute. Quite interestingly -- so in our whitepaper, actually, Anna, this is all public. In our whitepaper we define the zkVM, as mentioned, mathematically, using multiple mathematical components. And one of those components is called PCD, which stands for Proof Carrying Data, which is a generalization of IVC, which as mentioned stands for incrementally verifiable computation. So PCD is essentially a distributed incrementally verifiable computation.
Anna Rose:
Distributed. What was it again? What's the I?
Daniel Marin:
Distributed --
Anna Rose:
Incremental verifiable computation.
Daniel Marin:
Incrementally verifiable computation.
Anna Rose:
Okay.
Daniel Marin:
Exciting. Just like you and Jens and me and maybe a few other friends, perhaps we can do a computation in pieces, right? And essentially use our pieces to combine together to build a proof and then keep computing on it. Keep computing, keep computing.
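As a rough illustration of the PCD idea described here, with hypothetical types rather than the actual construction from the Nexus whitepaper: each participant proves a chunk of the computation, and chunk proofs can be merged pairwise until a single proof covers everything computed so far.

```rust
// Hypothetical sketch of distributed IVC / PCD: prove chunks locally,
// then let an orchestrator merge the proofs into one.

#[derive(Clone)]
struct ChunkProof {
    steps_covered: u64, // how many VM steps this proof attests to
}

/// Each participant proves one chunk of the execution.
fn prove_chunk(steps: u64) -> ChunkProof {
    ChunkProof { steps_covered: steps } // stand-in for real folding/proving
}

/// Merge two proofs into one covering both chunks (a PCD "node").
fn merge(left: &ChunkProof, right: &ChunkProof) -> ChunkProof {
    ChunkProof { steps_covered: left.steps_covered + right.steps_covered }
}

fn main() {
    // Three people each prove a piece...
    let pieces = vec![prove_chunk(1000), prove_chunk(1000), prove_chunk(500)];
    // ...and the orchestrator combines them into a single proof.
    let total = pieces.iter().skip(1).fold(pieces[0].clone(), |acc, p| merge(&acc, p));
    assert_eq!(total.steps_covered, 2500);
}
```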
Anna Rose:
That's the folding itself? Is that like each one of us would do one of the folds, or, like, whatever, one of these iterations, and then they're combined?
Daniel Marin:
Exactly. And then they're combined and then we can keep combining them and combining them ad infinitum, so to say. So actually, Anna, this is something else that we've been super focused on, because I feel like this is something that people don't talk about, which is the ability to prove unbounded computations. So what this means is that the zkVM is able, if you have enough patience and time, to prove a really, really, really large computation. In other systems, you have to set up a static bound on the computation. You say, we're going to prove any program up to 100,000 steps. If you want to do that, you can do it. But if you want to prove, like, 100,001 --
Anna Rose:
It doesn't work.
Daniel Marin:
You need to either -- Exactly. It won't work. You need either a new verifier or two proofs or something like that. What we've been concerned with is, can we have a verifier that is not only universal for any program, but also universal for any length of execution? And the folding property, as well as the recursion, exactly, allows us to do that. We can keep folding and folding and folding, and this is the whole premise of incrementally verifiable computation.
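A minimal sketch of the unbounded IVC loop being described, with stand-in types rather than real folding math: the prover folds one step at a time into an accumulator, never fixing the number of steps up front, and the verifier's check stays the same small size no matter how many steps were folded.

```rust
// Hypothetical sketch of incrementally verifiable computation:
// no static bound on the number of steps, one constant-size check.

struct Accumulator {
    steps: u64,
    state: u64, // stand-in for a commitment to the VM state
}

/// Fold "one more step" of the computation into the running accumulator.
fn fold_step(acc: Accumulator, next_state: u64) -> Accumulator {
    Accumulator { steps: acc.steps + 1, state: next_state }
}

/// The verifier checks the same small statement regardless of length.
fn verify(acc: &Accumulator, claimed_final_state: u64) -> bool {
    acc.state == claimed_final_state // stand-in for the real check
}

fn main() {
    let mut acc = Accumulator { steps: 0, state: 0 };
    // No bound set in advance: keep folding for as long as you have patience.
    for i in 1..=100_001u64 {
        acc = fold_step(acc, i);
    }
    assert!(verify(&acc, 100_001));
    assert_eq!(acc.steps, 100_001);
}
```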
Anna Rose:
Interesting.
Daniel Marin:
You might see that zk-Rollups actually fall very nicely in this category. I think zk-Rollups are naturally incrementally computable machines.
Anna Rose:
Are they unbounded, though? Do they need unbounded computation?
Daniel Marin:
Well, you're going to run your zk-Rollup for -- hopefully for the whole future of humanity. So you can use traditional recursion to essentially combine zk-Rollup proofs. There are some challenges. Maybe the recursed, aggregated proof needs a different verifier than the proof for the actual execution, which is a challenge. It would be really nice if zk-Rollups were essentially just one proof, and that proof was tiny, and it just got updated, updated, updated, updated on every block. You just have a new proof, you take the old proof and combine them, and that's your new proof. That's the premise of incrementally verifiable computation, right? It's that you need just one verifier and you can combine the proofs over and over and over. Yeah, that is sort of like our goal, to enable this incrementally -- incrementally computable machine.
Anna Rose:
I want to go back to the unbounded computation because it caught my interest, which is like, what is an example of that? Maybe that -- and we can also talk about this re-execution. I kind of want to dig into what that would even look like. What's a use case there? What's the use case for unbounded computation? What would that -- yeah, what kind of application needs that?
Daniel Marin:
Yeah, absolutely. Ethereum. Ethereum is an incrementally computable machine. The EVM or any other virtual machine based blockchain system. And it would be very nice, for example, if -- actually Vitalik has been talking about it. He asked at some point, what if we all achieved consensus on a proof system? Let's just say there's a million out there, what if we decided we synchronize as humans into choosing one? Well, perhaps if we wanted to choose one, it would be one that is potentially number one, very simple, number two, widely understood as a scientific breakthrough, so to say, and number three, that is optimized for recursion and incrementally verifiable computation. We would want to have a proof for a block. For this proof to be convincing, hopefully this proof system is very simple, and for this proof to be updatable, we would want that if we have essentially a proof for block 1 and a proof for block 2, we can just cheaply merge them together and have a proof for block 1 and block 2. So I would say that our main goal has been particularly proving really large computations, like rollups, so to say.
Anna Rose:
Okay. But sorry, Daniel, I keep trying -- it's the unbounded computation that I'm trying to understand, though. It's not the IVC so much. So what is that? Is Ethereum unbounded computation?
Daniel Marin:
Exactly, exactly.
Anna Rose:
Okay. That's what you're saying. Or rollups would need unbounded computation.
Daniel Marin:
Exactly, exactly. So if you're launching a rollup, an L2, an L3, even an L1, these machines are essentially running an unbounded computation that will keep computing, and computing every day, more and more and more and more. One would want to have a proof system that is able to prove unbounded computations without changes to the universality of the verifier. The verifier should remain the same, and the proofs should be easily aggregatable. So the unboundedness also helps in another thing, which is on collaborative proving. Collaborative proving makes a lot of sense if you're proving an unbounded computation, because you can keep doing it and keep doing it, keep doing it and use computational resources from multiple people. And these people, you know, they might connect, they might disconnect, but you can just reallocate computational resources and keep computing. So it doesn't matter who is doing the computation. It can be, let's say, stopped for a moment, but restarted by any other agent. This is a discussion about liveness, for example.
Jens Groth:
I'm wondering if it's useful also to give sort of like the original motivation for incrementally verifiable computation, which was for an unbounded computation. And the example was, suppose humanity wants to compute something that will take 100 years, or whatever it will take, you don't know. That's essentially an unbounded computation. And the question is then, how do you sort of trust what happened 100 years ago, and why would you trust that the computation has been correct up to this point in time? And in incrementally verifiable computation, you sort of do the computation in chunks, and once a chunk has completed, you spit out a proof, and then you do another chunk, and you spit out a proof, and those proofs aggregate all the previous proofs. So now you know that all the previous chunks of computation are correct, and that led to this point where we are now and what the state of the computation is at this time.
Anna Rose:
I want to actually, just to clarify there, because what it sounds like is more recursion, just like recursive proofs, like Mina, the Mina system, where it always had a proof, and then it made a new proof with new information and the old proof. But in the case of IVC, it also sounds like you're kind of adding lots of different pieces, almost like a partial computation of a proof, with an existing proof. Is that what it actually looks like, or is it straight up recursion in the same way?
Daniel Marin:
So it's simpler recursion, actually. One can achieve IVC via recursion using monolithic STARKs, there are just some downsides. Folding systems are optimized for essentially enabling IVC in a very cheap fashion. There are other technical details there as well, but the folks from Mina were the original ones who proposed this. They didn't call it that way, they didn't call it IVC, they just called it recursion. But essentially, they proposed the first succinct blockchain IVC system.
Anna Rose:
Yeah, that's nice you say that, actually. Yeah, I feel like during the IVC boom of last year, they kind of got a little bit forgotten. People needed to be reminded that there was already a system that did something like this. Yeah.
Daniel Marin:
Yeah, I remember reading the Mina whitepaper when I was a student, and I had a panel some months ago with the Mina folks, and, yeah, this is what I was telling them. It was like, guys, you were the original ones to propose this. And I think this is going to be the future for a long amount of time. We'll see how people call it, but now it's incrementally verifiable computation. Right? And technically, what we have is an incrementally verifiable zero-knowledge virtual machine.
Anna Rose:
Hmm. Cool.
Daniel Marin:
Yeah. I would say that another topic is just like, will the proofs that we compute for Ethereum convince someone a year into the future, or two years into the future? We would want these proofs to be fairly simple, mathematically speaking, so that they could convince anyone. For example, when you say, oh, I use this random proof system, it's like, the proof system could be sound, etcetera, etcetera, but how do you convince other people that this is correct? As we had discussed before, we believe that a system that uses very simple mathematical primitives, that is very simple to understand from a mathematical perspective -- maybe it's not as efficient, but it's simple -- leads to many advantages in terms of human synchronization.
A lot of Ethereum is about synchronizing people. We believe that something like that is important in terms of standards, so to say. Not standards, but more universally accepted standards for what a proof is. And so this is important for the point that I was going to make before about liveness, right? So if for some reason there's a company that is computing a lot of proofs, but then they die or whatever, hopefully the proofs are, number one, easy to understand and easy to produce, and then anyone can continue producing those proofs. So just as people are computing blocks, and there's like block proposers and all of that, the same, right -- people would compute proofs and they would propose a proof, the proof would get accepted, and there's a way to decide on a proof. This is a different topic of conversation that I am particularly very passionate about, which is sort of like the future of Ethereum. Like, Ethereum wasn't designed for zero-knowledge proving.
Anna Rose:
Yeah.
Daniel Marin:
Right. So what will happen in the next five years of Ethereum? Well, we'll see. But Ethereum is certainly, here and there, very ZK-unfriendly, I would say, on the side of proving and on the side of verification.
Anna Rose:
Yep.
Daniel Marin:
So. Well, we'll see. I'm very optimistic that the Ethereum Foundation folks --
Anna Rose:
Will make changes.
Daniel Marin:
Yeah.
Anna Rose:
It's funny, because I was going to ask you if you are solely in the Ethereum world or if as a zkVM, you could be on any chain. So, yeah. Have you ever thought of deploying elsewhere that's more ZK friendly? It's so -- what is the word? It's heresy to suggest such a thing, I realize, but yeah.
Daniel Marin:
Ethereum, probably since... For example, we would want precompiles for BLS12-381 so that we could do BLS verification for Ethereum, of BLS signatures, but that just hasn't been enabled. There's been an EIP that has not been enabled. We would want other precompiles to be able to do other different things. This would massively simplify our proof compression process, because for now we're bound to essentially have a system that makes the proofs compatible with BN254, which is not nice.
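As a hedged sketch of the proof compression pipeline Daniel is referring to, with hypothetical types and function names rather than the Nexus implementation: the large, cheap-to-produce folded proof gets wrapped in a final SNARK over BN254, since that is the curve the EVM's pairing precompiles support today.

```rust
// Hypothetical sketch of "proof compression" for on-chain verification:
// wrap the folded proof in a final SNARK over BN254 so the EVM can check it.

struct FoldedProof { bytes: Vec<u8> }  // large, fast-to-produce IVC proof
struct Bn254Snark { bytes: Vec<u8> }   // small proof the EVM can verify

/// Final wrap: prove "I know a valid folded proof" inside a BN254 circuit.
fn compress_to_bn254(folded: &FoldedProof) -> Bn254Snark {
    // Stand-in: a real implementation would run a SNARK prover here.
    Bn254Snark { bytes: folded.bytes[..folded.bytes.len().min(32)].to_vec() }
}

fn verify_on_evm(snark: &Bn254Snark) -> bool {
    // Stand-in for a Solidity verifier calling the BN254 pairing precompile.
    !snark.bytes.is_empty()
}

fn main() {
    let folded = FoldedProof { bytes: vec![0u8; 1024] };
    let wrapped = compress_to_bn254(&folded);
    assert!(verify_on_evm(&wrapped));
}
```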
Anna Rose:
Yeah, yeah, yeah, I know. I've heard this from lots of teams that have worked with the Ethereum stack.
Daniel Marin:
Exactly, exactly. Other blockchains, they have their problems as well, just because sometimes they just don't have BN254 precompiles. So it's just like worse. So you have to deploy a full Rust verifier. But sometimes those blockchains just happen to be so cheap that maybe doing that is fine. There were recently some innovations on the side of Bitcoin and Dogecoin.
Anna Rose:
True.
Daniel Marin:
Right. You saw that, right? To add an opcode for a Groth16 verifier. And all of those innovations on the execution environment are good. So, answering your question: even though in my heart Ethereum occupies a special place, our system is fully general purpose. Our verifier can be deployed like anywhere. And in fact, one of the core goals of the Nexus project, so to say, is to, what we call, democratize Ethereum's execution environment. And by that I mean to allow developers to have rollups or other execution environments that are distinct through the use of precompiles and other virtual machine architectures. For example, this limitation that we've had on precompiles on the EVM would be solved if the EVM were a zkVM built with the Nexus zkVM.
Anna Rose:
Oh, so you would almost like triple -- like you'd put yourself on the third level or something?
Daniel Marin:
No, what I mean -- sorry, what I mean is that people have not been able to develop their own precompiles to the EVM and then merge them into the Ethereum protocol.
Anna Rose:
So maybe they have to move it to the L2.
Daniel Marin:
Exactly. Innovation will probably be faster in an L2, where you can enable new precompiles, because you don't have to wait for the Ethereum Foundation to pass something. So innovation will likely improve on that side by just allowing people to write their own precompiles.
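A minimal sketch of the "bring your own precompile" idea. The trait and names here are hypothetical, not the Nexus API: the point is only that a developer extends the machine with a custom operation, and the zkVM then treats that extended machine as the architecture it proves.

```rust
// Hypothetical sketch: extending a zkVM's machine architecture with a
// developer-supplied precompile.

trait Precompile {
    fn name(&self) -> &'static str;
    fn execute(&self, input: &[u8]) -> Vec<u8>;
}

struct Sha256Precompile;

impl Precompile for Sha256Precompile {
    fn name(&self) -> &'static str { "sha256" }
    fn execute(&self, input: &[u8]) -> Vec<u8> {
        // Stand-in: a real precompile would hash here and ship with a circuit.
        vec![input.len() as u8]
    }
}

struct Vm { precompiles: Vec<Box<dyn Precompile>> }

impl Vm {
    fn with_precompile(mut self, p: Box<dyn Precompile>) -> Self {
        // Adding a precompile changes the machine architecture being proven,
        // which is why the verifier is universal only per architecture.
        self.precompiles.push(p);
        self
    }
}

fn main() {
    let vm = Vm { precompiles: vec![] }.with_precompile(Box::new(Sha256Precompile));
    assert_eq!(vm.precompiles[0].name(), "sha256");
}
```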
Anna Rose:
Cool.
Jens Groth:
I wanted to add to what Daniel described for us about the blockchain space, both Ethereum but also potentially other chains. I think we should also think beyond the blockchain space. Our goal is to enable verifiable computation everywhere. If you made verifiability really cheap, so it was almost at the cost of computing, then you could spread verifiability everywhere. You could imagine the Internet and all the data sort of floating around, and the Internet came with proofs, so it was all based on verifiable information rather than just whatever you believe the other computer is giving you. So I think the end goal, and that's probably going to be many years into the future, is really that you sort of increase the trustworthiness of everything through verifiable computation.
Anna Rose:
Yeah, actually, Jens, there's a few others. There's Pluto, who's doing sort of the TLS web proof thing. There's a few teams -- we've been describing them as closer; they're not quite hardcore infrastructure, but they're a bit higher up the stack, towards the use case. And I do think Mina, in a way -- going back to Mina -- I think they also had that vision of being able to prove something like the Internet, which is a much bigger picture. So I like that you're also thinking about that, because I think it's important that we think past blockchain stuff. I think the intersection between ZK and blockchain has been epic for the acceleration of ZK. It's been the funding, and I think it's also brought a lot to blockchain, but, yeah, I love the idea that it even transcends that.
Jens Groth:
I mean, I think there's a lot of things happening in the application space because what people are doing, they're going out and finding these trust anchors. You need to sort of prove something about something.
Anna Rose:
Yeah, exactly.
Jens Groth:
Right. And you have people that are going out and, like, pulling out passport data using the TLS connection, or, like, a Google account --
Anna Rose:
Or Twitter. Yeah, yeah. For sure.
Jens Groth:
Some information source, and so forth, right? And proving things about that. And it's interesting, because this is exactly something that they can do because we have improved the efficiency of zero-knowledge proofs. Right? If you tried to do that ten years ago, you would just have failed, because you couldn't do it. You didn't have the performance to do it. But now we have the performance to do it. And even better, they can take general-purpose proofs and throw them at it. So it's much faster innovation and prototyping. You don't need to sort of handcraft a proof system that is efficient enough for this application, you can just go out and take a general-purpose one. And I think it's sort of very interesting what the application space and the matching zero-knowledge techniques are going to look like. I think we are really seeing very rapid improvements in performance, which is going to unlock a lot of things.
Daniel Marin:
Pretty much everybody in our team, we are primarily excited about the real-world applications of ZK and to bring it into industry. That actually goes back to our initial conversation about verifiable cloud computing. We care about making it very, very professional, high degrees of observability, what you can consider an elastic cloud that is highly available, highly optimized, very professional, that just computes proofs in connection with ecosystem partners and so on and so forth, which is where the network discussion comes into the future. But bringing verifiable computation to full generality to the Internet is our primary goal as a company.
Anna Rose:
Cool. Well, thank you, Daniel, thank you, Jens, for coming on the show and sharing all these things with us. I mean, we covered formal verification, audits and security all the way to sort of the Nexus system and this verifiable compute, and what that actually means, this largest verifiable computation… Yeah, thanks for coming on.
Daniel Marin:
Yeah, thank you.
Jens Groth:
Yeah, thanks, Anna. A pleasure as always.
Anna Rose:
All right, I want to say thank you to the podcast team, Rachel, Henrik and Tanya, and to our listeners, thanks for listening.