Summary
In this week’s episode Anna and Kobi chat with Muthu Venkitasubramaniam and Carmit Hazay from Ligero. They discuss their work on MPC and ZK over the last 20 years and how the research has evolved. They then dive into a nuanced conversation on how MPC & ZK are interrelated. They discuss Ligero, what led to the project and its early phases, as well as the new Ligetron system and how they plan to get this technology into the wild.
Here are some additional links for this episode:
- Ligero
- Ligero: Lightweight Sublinear Arguments Without a Trusted Setup by Ames, Hazay, Ishai and Venkitasubramaniam
- Ligetron by Ligero
- Ligetron: Lightweight Scalable End-to-End Zero-Knowledge Proofs. Post-Quantum ZK-SNARKs on a Browser by Wang, Hazay and Venkitasubramaniam
- ℓ-Diversity: Privacy Beyond k-Anonymity by Machanavajjhala, Gehrke, Kifer and Venkitasubramaniam
- Efficient RSA Key Generation and Threshold Paillier in the Two-Party Setting by Hazay, Mikkelsen, Rabin, Toft and Nicolosi
- MeshCal.com
- Zero-Knowledge from Secure Multiparty Computation by Ishai, Kushilevitz, Ostrovsky and Sahai
- Introduction to MPC-in-the-Head by Carmit Hazay
- ZKBoo: Faster Zero-Knowledge for Boolean Circuits by Giacomelli, Madsen and Orlandi
- Episode 322: Definitions, Security and Sumcheck in ZK Systems with Justin Thaler
- Communication complexity of secure computation by Franklin and Yung
ZK Hack Montreal has been announced for Aug 9 – 11! Apply to join the hackathon here.
Aleo is a new Layer-1 blockchain that achieves the programmability of Ethereum, the privacy of Zcash, and the scalability of a rollup.
As Aleo is gearing up for their mainnet launch in Q1, this is an invitation to be part of a transformational ZK journey.
Dive deeper and discover more about Aleo at http://aleo.org/
If you like what we do:
- Find all our links here! @ZeroKnowledge | Linktree
- Subscribe to our podcast newsletter
- Follow us on Twitter @zeroknowledgefm
- Join us on Telegram
- Catch us on YouTube
Transcript
Welcome to Zero Knowledge. I'm your host, Anna Rose. In this podcast, we will be exploring the latest in zero knowledge research and the decentralized web, as well as new paradigms that promise to change the way we interact and transact online.
This week I chat with Shumo and Yi from NEBRA. We discuss the ways that the high price of putting zero-knowledge proofs on-chain could be mitigated. We then talk about their proposed solution to this: the NEBRA Universal Proof Aggregation product, or NEBRA UPA. Our conversation covers prover marketplaces, verification aggregation systems, and the design space that this all opens up. We discuss what it takes to incorporate extra proving systems into NEBRA UPA, the benefits that these systems will bring, how developers are meant to interact with them, and the future work they are doing at NEBRA to enable seamless cross-zkRollup applications. This is something that a number of different groups are working on, but often approaching from a different perspective. As a quick disclosure, I'm an investor in NEBRA and invested pretty early on. As you can hear, they have evolved a lot since then, and so it was really fun to catch up on what NEBRA is all about today and the direction they're headed.
Now, before we kick off, I want to share a message from one of our recent ZK Summit 11 sponsors, the Web3 Foundation. They are bringing back the legendary conference series, the Web3 Summit. The next edition will be happening in Berlin on August 19th to 21st. And it's programmed by the community, so if you have a groundbreaking talk or workshop or session, then apply to be part of it. We're also helping to program a ZK track at the Web3 Summit, so you can head over to web3summit.com and grab your tickets today. Now Tanya will share a little bit about this week's sponsors.
[:Namada is the shielded asset hub rewarding you to protect the multichain. Built to give you full control over sharing your personal information, Namada brings data protection to existing assets, applications, and networks. Namada ends the era of transparency by default, enabling shielded transfers and shielded cross-chain actions to protect your data even when interacting with transparent chains. Namada also introduces a novel ‘shielding rewards’ system: by holding your assets in the shielded set, you help strengthen Namada’s data protection guarantees and collect NAM rewards in return. Namada will initially support IBC & Ethereum-based assets, but will ultimately serve as a single shielded hub for all assets across the multichain. Learn more and follow the Namada mainnet launch at namada.net.
Aleo is a new Layer 1 blockchain that achieves the programmability of Ethereum, the privacy of Zcash, and the scalability of a rollup. Driven by a mission for a truly secure internet, Aleo has interwoven zero-knowledge proofs into every facet of their stack, resulting in a vertically integrated Layer 1 blockchain that's unparalleled in its approach. Aleo is ZK by design. Dive into their programming language, Leo, and see what permissionless development looks like, offering boundless opportunities for developers and innovators to build ZK apps. This is an invitation to be part of a transformational ZK journey. Dive deeper and discover more about Aleo at aleo.org.
And now here's our episode.
[:Today I'm here with Shumo Chu and Yi from NEBRA. Welcome to the show, both of you.
[:Thank you, Anna.
[:Thank you, Anna. Great to be on.
[:Shumo, you've been on the show before. Last time you were here you were talking about another project that you were part of, Manta. But I guess since then you've started this new project NEBRA, and I'd love to hear a little bit about what prompted the change, what inspired you to start on this project.
[:Yeah. So for the audience who is not familiar with Manta: Manta was previously a Polkadot parachain, and it's still a Polkadot parachain right now, but it has also pivoted into an Ethereum L2, currently using OP Stack and moving to ZK Stack. For Manta, the vision previously was building the privacy layer for Polkadot. That privacy work is still kind of dear to my heart; the situation is that the Polkadot ecosystem has had a big downturn and doesn't get, in my view, still in my view, the attention it deserves. And then basically Manta pivoted to an Ethereum L2. And on the Ethereum L2 side, I think Manta is building a great community using a similar strategy to Blast, this native yield strategy. By the way, I can talk about that, but it probably shouldn't be the major focus here. But the TLDR is that Manta has moved more into an ecosystem-building phase, and, as a researcher, exploring the cutting edge of ZK cryptography is still dear to my heart. Maybe that's why I'm here today. And also I'm super, super glad Manta is doing very, very well after I left, which means I'm probably not that important to the team.
[:All right, well, I see. But I guess what you're saying here is that research is where your heart is, and they didn't need research anymore because they were using an existing, out-of-the-box stack.
[:I think they still need research. It's just that the kind of research they need to do is more about the protocol side and the developer tooling side. I wouldn't call that not research. It's just that I want to be standing on the absolute bleeding edge. That's just my personal preference.
[:Got it. Cool. Yi, this is the first time you're on the show, so why don't you share a little bit about your background and what led you to work on NEBRA?
[:into the tech itself. But in:[:For those people, yeah.
[:Yes. But for the rest of us, I think that was the first exposure to register on the back of your radar and make you understand: oh, there's something new, there's a digital currency that's not fiat. So fast forward: after a couple of years in traditional finance, I joined Coinbase and started building consumer tooling, then focused on building the trading engine. And while there, I was always looking for like-minded builders, and I wanted to build something new. After that I joined Galaxy on their investment team and started looking for ideas at a high level, to see what's on the table, what's in the landscape. And the hope there was that I would find a project that's actually multi-cycle, very sustainable, very innovative, and that's building something completely undefined, where we don't even know if we can build it. I think that's how ZK caught my interest. And then I met Shumo at Zuzalu last year, and that's where I initially got ZK pilled and started to understand what ZK can do, not only to scale the Layer 1s that we're familiar with, but also in a lot of use cases extending beyond blockchain.
[:Nice.
[:Yeah. By the way, I also got pilled by Brian Armstrong's 100 BTC airdrop when I was in graduate school. I never told people the actual story, but I guess kudos to Brian Armstrong.
[:Wow.
[:For this, getting so many people into cryptocurrencies. Yeah.
[:The crypto user acquisition cost is exactly $100.
[:Tell me more about that actual story because you both have experienced this. Actually, I'm vaguely familiar. I feel like it hit my radar because I was in tech at the time, so I vaguely remember this kind of thing happening. But what exactly was it? Was it to grad students? Was it only certain schools? Could you sign up from anywhere? What was this exactly?
[:The one I was talking about, I think was only for MIT undergrad students. And then it was done by Dan Elitzer and also Jeremy Rubin because they founded the MIT Bitcoin Club. I actually did not receive the money myself, but I have a lot of friends who just used it on boba tea and, yeah, they wish they held onto it.
[:So my experience was that's... That was like a two-tier airdrop from Coinbase. It's very similar to how Mark Zuckerberg founded Facebook: basically, he did the first kind of speed-dating thing at Harvard. So I think all the MIT students got a $100 airdrop.
[:Yeah.
[:And then there was a second wave airdropping to all the major US universities. I was a graduate student at the University of Washington, and I received, I forget the exact amount, maybe $20, maybe less than $100, and then the Bitcoin price went straight up and that turned into, I think, $2,000 ish of Bitcoin. I mean, as a poor graduate student, that was kind of a big deal. Right? So every single one of my fellow graduate students started talking about it.
[:Yeah, this is the best marketing ever. Especially if by chance it also just went up really quickly afterwards so you could see the potential. Nice. All right, so I think now would be a great time for us to introduce NEBRA. I met with you, Shumo, a long while ago to talk about NEBRA. And actually back then, NEBRA, at least when we were talking, was sort of presented as a prover marketplace. It was one of the first times I was kind of talking about this idea. But I know that since then you've moved a little bit away from this idea. Why don't we talk a bit about what it was, what was NEBRA originally pitched as, and what has it become?
[:So I think previously, when we were talking, the NEBRA vision was a two-piece Lego. One piece is doing proof aggregation, and the other piece is doing a prover marketplace. And we actually never thought of the prover marketplace as our primary goal. Take a step back: the prover marketplace is important in the proof supply chain, but we've always been focused on the fact that ZK proofs are so expensive on-chain. For example, you need $20 today to verify a single proof on Ethereum mainnet, and probably 1 to 2 dollars even on the Layer 2s. So we want to reduce this cost to $0.10 or even less, and we think this is an ecosystem-wide unlock for all the ZK projects. In our view, in the next ten years, every single project will become a ZK project. You may not even be aware that you are using some kind of ZK technology, it might be invisible to you, but it's very important, because ZK is the only way to trustlessly scale computation on-chain so that everyone can share the same coordination layer, which is the blockchain.
And then there is an interesting interrelationship between the proof aggregation layer and the prover marketplace, because the proof aggregation layer requires some off-chain computation, which is recursive proving, to compute the recursive, or as we call them, aggregated proofs. Right? So it's totally possible for us to build a vertically integrated stack doing both proof aggregation and a prover marketplace. In our previous thinking, we were planning to first build the proof aggregation layer and then build a prover marketplace. But now we don't think we want to build a prover marketplace anymore, because there are so many great people building prover marketplaces. I can name a few, like Succinct, Gevulot and so many others. And also, from our own point of view, our value capture was never on the prover market side. Our value capture is always on the on-chain side. So for us, it's great that someone else can build that, and we can just use it, yeah.
[:Okay. I really liked how you just put the proof supply chain concept out there. And you've now talked about prover marketplaces and a proof aggregator. But I want to define this a little bit more clearly. And we... I mean, we have talked about it a few times on the show, but let's start with the prover marketplace, which is a little bit better understood. I mean, I've definitely talked more about it. So yeah, let's start there. What is a prover marketplace? I know there are different versions of this out there.
[:Yeah, great. I think my definition of a prover marketplace is very simple. Basically, if you want to prove something and put it on-chain, and, for example, you don't have a beefy AWS cluster or great GPU-accelerated machines, you delegate your proving work to third parties. And that's the beauty of ZK proofs: you can do so if you don't care about privacy. For example, you can delegate a publicly verifiable workload to a third party, they compute it for you, you may pay some money for the computation power, and then you put the proof on-chain. Right? So after the prover marketplace computes the proof, you still need to put it on-chain to get its usage. Either, say, you want to attest to some historical data, which is like a ZK coprocessor, or you want to do a zkRollup, and there are many other use cases. So you can see the proof supply chain is: first, the user generates the demand; second, someone needs to compute the proof; and third, you need to put the proof on-chain. Our first product, which we call NEBRA UPA, primarily focuses on the third part.
[:I see.
[:And I also want to add a little bit here. We don't do the proving part, and maybe in the future we won't need to do the proving part ourselves. The point is that we want to have the most universal interface for the third step, which is putting the proof on-chain. For example, only a portion of our users or customers can delegate their proofs. A huge chunk of our users cannot delegate their proofs because of privacy. One example is Worldcoin: every single Worldcoin proof has to be generated on the consumer device, which is their cell phone. And we are talking to many other privacy projects as well, because we can lower the cost of privacy for these privacy-preserving proofs. For those ZK proofs, you cannot delegate proof generation to a third party. So we want to find the common denominator for the interfaces people use to put their proofs on-chain. That's why we designed NEBRA UPA's interface purely as an interface for people verifying their proofs.
[:All right. I think I want to dig down into what that actually looks like, though. So, from what I've gathered, the proving marketplaces could use a proof aggregation tool like yours, like they would potentially be a partner. But also the ZK applications, like the people who are actually creating proofs, and I'm curious about this, maybe even client-side, would they then use this proof aggregator to get it on-chain?
[:Yeah, yeah, precisely.
[:Okay.
[:That's precisely what it is. So basically we found the most generic interface for people to put their proofs on-chain, and we try to be as un-opinionated as possible. So both a prover marketplace and someone who runs their own beefy servers can put their proofs on-chain, and we can support client-side ZKP as well.
[:Nice.
[:And we're putting them in the same pool.
[:Would rollups also use you, or would they use something else?
[:Yes. So maybe that's good timing for me to take a step back and give the big picture, a more generic view of what NEBRA UPA is. Right? So NEBRA UPA is the first universal proof aggregation protocol live on-chain. We're an on-chain protocol. We're not an off-chain protocol.
[:You are a set of smart contracts that I guess are operating on an L1 or an L2?
[:Yeah, precisely. And there are some off-chain components. Our primary goal is to reduce people's ZK verification costs. Like I said, today on Ethereum Layer 1 it costs $20; on an L2, maybe $2 ish. That's too much for ZK to go mainstream. So our approach here is that you send your proof to us, and then we aggregate your proof, possibly with proofs from all different parties. That's why the universal is important. By universal we mean we can aggregate proofs from all kinds of different parties: ZK coprocessors, client-side proofs, zkRollups, zkVMs, you name it. It's kind of a carpooling, which is a great way of thinking about it, because we can aggregate the proofs from different parties into the same aggregated proof. And the beauty of ZK proofs is that the verification cost is basically independent of how many proofs you're aggregating.
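To make the carpooling economics concrete, here is a rough cost model. The $20 figure comes from the conversation; the batch size and per-proof input cost are illustrative assumptions, not NEBRA's published numbers:

$$ \text{cost per proof} \;\approx\; \frac{C_{\text{verify}}}{N} + c_{\text{inputs}} $$

where $C_{\text{verify}}$ is the fixed on-chain cost of verifying the single aggregated proof, $N$ is the number of proofs in the batch, and $c_{\text{inputs}}$ is the small per-proof cost of publishing that proof's public inputs. With $C_{\text{verify}} \approx \$20$ and $N = 100$, the amortized share is roughly \$0.20 per proof, and it keeps falling as the batch grows, which is how aggregation can approach the \$0.10 target mentioned above.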
[:And when you say aggregate, do you mean recurse? Like, you'll pool together a lot of these different proofs and then do a proof of all of them into a single proof, so the outcome is just a smaller proof?
[:Precisely, precisely. So from a high-level point of view, proof aggregation just means we are generating a proof of proofs. That's actually the Twitter tag of one of our engineers: I generate proof of proofs.
[:Nice.
[:But practically speaking, on the tech side, it's not... there are more clever innovations that help us do that more efficiently. It's not just like, hey, we write a verifier and then we run this circuit. Right? There are many, many clever technical innovations we're doing to make it fast. But at a high level, it's like a proof of proofs.
[:Okay.
[:It's using ZK to beat ZK.
[:Yeah, yeah.
[:Do you live on an L1 or an L2?
[:We have a testnet running on Sepolia, which is an Ethereum testnet, and we're going to deploy on Ethereum mainnet pretty soon. And we're very likely to deploy on all the major Ethereum L2s, like Optimism, Arbitrum, Base, etcetera. Our philosophy is we follow our users. Wherever our users want us to deploy, we'll deploy.
[:You'll deploy there.
[:Yeah.
[:If you're deploying in all these different places, does that make your system in any way inefficient? I know it's not pools or assets, but if you're working on the L1, but you're also working on the L2, you're not aggregating the same proofs on both of these, I'm assuming. So some proofs are going to come to the L1, you'll aggregate those there, and then other proofs, you're aggregating those on the L2. But is there any connection between these two things, or are they just separate tools?
[:That's a very good question. Currently, each settlement layer or chain, L1 or L2, has its own pool if we deploy there. That's kind of inevitable. It's not that we want it that way; it's more that with the current architecture of L1s and L2s, you have to have separate pools. And we are working on some interesting designs to see whether, in the future, we can combine them, but that's in the outlook. And practically speaking, there are some more technicalities to this. Our protocol on an Ethereum L2 would be a little bit different, because the ratio between the gas cost of calldata and computation is a little bit different there. So if you want an optimized protocol, you need to design it a little bit differently. But not too much differs between the L1 and L2 sides.
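In other words, the quantity being optimized per chain is roughly the following (an illustrative formulation, not NEBRA's actual cost model):

$$ \text{gas} \;\approx\; g_{\text{data}} \cdot |\text{calldata}| \;+\; g_{\text{compute}} \cdot \text{ops} $$

and because the ratio $g_{\text{data}} / g_{\text{compute}}$ differs between Ethereum L1 and the L2s, the batch sizes and proof encodings that minimize cost differ too.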
[:Got it. I noticed at some point you also kind of talked about supporting different languages, like Honk. You made an announcement where you're like, we're supporting Honk, therefore we're supporting Noir. But I was pretty confused by what that even means, because if you're aggregating the proofs, what does that have to do with the ZK DSL?
[:Oh, so here is the general picture of how to think about this. Currently, NEBRA UPA v1 only supports the Groth16 proof system. Groth16 accounts for 70% of on-chain verification right now. There are a few people, like Worldcoin, like Succinct, like RISC Zero, who all use Groth16. Moving forward, NEBRA UPA v2 will have multi-proof-system support, and we want to support as many proof systems as possible. In that regard, we need a sequence of proof systems to support. We see a lot of people are very excited about Noir, and we want to support Noir. But to be able to support Noir, the crucial part is to support Honk. For the audience who is not super familiar with Honk: Honk is kind of an evolution of HyperPlonk, with some Aztec-only secret sauce making it more efficient. So by supporting Honk, we'll be able to support Noir, which is a growing ecosystem of people doing exciting things in ZK. That's the context.
[:That was the context. So really the question should be about the proving systems. So let's actually do that. So you mentioned you're supporting Groth16, actually, that was one of my other questions, and now you're going to be supporting Honk. But I also, like you mentioned Succinct, but actually they're using Plonky3. So would you be adding Plonky3 support?
[:Possibly, but right now... I had a pretty deep conversation with the Succinct team, including Uma. Right now they have a two-layer system. Their top layer is Plonky3; their bottom layer is Plonk or Groth16. Right? So we will be supporting SP1's either Plonk or Groth16, depending on whichever proof system they choose for the second layer. As far as I understand, they're either going to use gnark's Groth16 or gnark's Plonk. So we will support either of these.
[:Got it. But do you have Plonk then already? Because you mentioned Honk, but not Plonk.
[:We have vanilla Plonk. Okay, so v2 is still a prototype, but it's working end-to-end. Also, we have another flavor of Plonk called fflonk...
[:The FF.
[:Which is... Yeah, FF, and that is used by many zkRollups, including Polygon zkEVM, and as far as I understand zkSync as well.
[:Okay, cool.
[:So you can see, multi-proof-system support will make us applicable to many, many more applications and zkRollups.
[:Nice. Part of your work is to implement these. What does implementing them even mean in your case? Like, what do you actually have to build to be able to interface with Groth16 proofs versus Plonk proofs?
[:I could write a 100-page PhD thesis on that, but let me be succinct. From a high-level point of view, you want to build a verifier circuit for the proof system, right? Because we're doing recursive proving, which means we're generating a proof of a batch of proofs. The hard question here is: how do you make the verifier universal? This has been a great journey for us. We actually published a paper last year called UniPlonk; it's a universal verifier. The key question is: how can you make a universal verifier circuit? In our first iteration, we used Halo2 to build the universal verifier circuit for Groth16, which means we can aggregate your proof no matter what your circuit is. It's a circuit-agnostic proof aggregation protocol.
So our second generation is where your question comes in, Anna: what does it mean to support a new proof system? In our second generation, we're actually moving to a zkVM-plus-precompile architecture, which means supporting a new proof system is relatively easy. Basically, we just implement a verifier for the new proof system in Rust. Of course, there is more to it than that, because this is a zkVM plus precompiles. Take a step back: the base case is that you can just use a zkVM to implement a verifier, and then you can already aggregate proofs, but it's very, very slow. We're talking about an hour-ish, and we definitely want to move faster than that. So we take what we learned in NEBRA UPA v1 about building efficient verifiers in a circuit, and we add proof-aggregation-specific precompiles, you could say ECC precompiles or hash precompiles, to the zkVM, so that we can aggregate these more generic proofs from different proof systems more efficiently. That's the high level. And specifically, adding Honk support means we are adding a Honk verifier to our second-generation architecture, which is the zkVM plus precompiles.
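As a rough mental model of the "one verifier per proof system behind a common interface" idea described here, the sketch below shows a registry that dispatches each submitted proof to the right verifier and accepts a batch only if every member proof verifies. This is purely illustrative TypeScript with hypothetical names; it models the dispatch structure, not NEBRA's actual circuits or zkVM guest code.

```typescript
// Illustrative sketch only: hypothetical types, not NEBRA's actual code.
type Bytes = Uint8Array;

interface ProofSystemVerifier {
  /** e.g. "groth16", "plonk", "fflonk", "honk" */
  readonly id: string;
  /** Check one proof against a verification key and its public inputs. */
  verify(vk: Bytes, proof: Bytes, publicInputs: bigint[]): boolean;
}

// The registry plays the role of the universal verifier: the aggregator
// dispatches each submitted proof to the matching per-system verifier
// before folding the batch into one aggregated proof.
class UniversalVerifier {
  private verifiers = new Map<string, ProofSystemVerifier>();

  register(v: ProofSystemVerifier): void {
    this.verifiers.set(v.id, v);
  }

  verifyBatch(
    proofs: { system: string; vk: Bytes; proof: Bytes; inputs: bigint[] }[]
  ): boolean {
    // Every proof in the batch must verify, whatever system produced it.
    return proofs.every(({ system, vk, proof, inputs }) => {
      const v = this.verifiers.get(system);
      if (!v) throw new Error(`unsupported proof system: ${system}`);
      return v.verify(vk, proof, inputs);
    });
  }
}
```

In the architecture described above, the analogue of `verifyBatch` runs inside a zkVM guest program, with the expensive elliptic-curve and hash operations handed off to precompiles.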
[:This is leaving me with two questions. One is, are you building your own zkVM, or are you going to use something out of the box?
[:So I think we are using something out of the box. We tried a few open-source zkVMs, like Succinct's SP1 and RISC Zero and also Jolt. We haven't decided which one to use in the end, but to be honest, their tech stacks are very, very similar. They all have a very similar Rust compilation toolchain. So right now we are building a zkVM-agnostic backend for this proof aggregation. So again, our aim is not to reinvent the wheel; we want to stay focused on where we can improve things on the technical side.
[:Got it. What you described though, the fact that you're actually building this universal verifier circuit makes me start to think like are you then becoming a verifier aggregation layer yourself? Because that does seem to be a new type of project that we're also seeing like Aligned Layer and Hyle. So they're doing more aggregation on the verification side. Would you then consider Aligned Layer or Hyle competitors to what you're doing?
[:By the way, my understanding of Aligned Layer and also Hyle might be a little bit rusty. I think my understanding of Aligned Layer is that they are using EigenLayer's crypto-economic security to do the verification. And I would say we have a similar interface for developers but with different security guarantees. Basically, their security guarantee to users is EigenLayer's crypto-economic security, and our security guarantee is the recursive ZKP guarantee, which is pure mathematics. For people who don't need full Ethereum security, there might be some use cases for Aligned Layer, but we do see a similar developer interface with a different security guarantee.
[:Got it.
[:And for Hyle, it's a very similar situation. I think they're using a TEE, which is SGX, as their security guarantee for doing the verification. So we would love to see more and more people working on this verification side. Because, to be honest, if you look at the cost structure of a ZK project, especially if you talk to the rollup-as-a-service providers like AltLayer, Caldera, Conduit, they get a lot of complaints from the user side. People say ZKP proving is a cost, but ZKP verification costs even more. Right? So we'd love to see more and more players join the space to help reduce the cost of putting a proof on-chain.
[:Cool. This is actually helpful also for folks who are trying to group together the projects working in this space. Some of the projects use different language, but as you've been going through this, I'm starting to hear echoes of what I understood they were doing. And it does sound like you're supplying a similar service to the end user, to the developer, but you're doing it in different ways.
[:Yeah, yeah, yeah.
[:Cool. I've also seen you sometimes refer to what you're building as a shared settlement layer, but so far in what we've talked about, it hasn't come up. So is that still what you're doing?
[:That's right. So we are kind of still finding the best, most accurate way to describe the solution. For us it's more about the last mile that gets the proof on-chain, and so at one point we called it a shared settlement layer. But for us, we want to be the last mile that pulls everyone together to get their proofs on-chain. And we don't care where the proof comes from; we don't care which zkVM generated it, rollups and even consumer applications. That's why we're interested in the carpool analogy: you can be very different people, very different groups, but if you're going to the same L1 or the same L2, we're carpooling you together. Right now, as Shumo said earlier, even if you're going to the same destination, we still have separate pools, but ultimately we get the passengers to the destination faster. And by faster I mean that if we're able to aggregate more proofs, then the waiting window, the rolling window that we wait in order to aggregate proofs and amortize the cost, will be much shorter. So if, let's say, an application or a VM, or even a prover network, wants to build the aggregation layer themselves, I think that's totally fair, but they might have to wait, let's say, an hour for 50 proofs. Whereas if you use a third-party, credibly neutral proof aggregator, you only have to wait, let's say, ten minutes or five minutes, and you get the same cost reduction. That's essentially why we find the value prop very appealing for users.
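The waiting-window point can be put in one line, using the illustrative numbers from the conversation rather than measurements: if a batch needs $N$ proofs before aggregation pays off, and proofs arrive at a rate of $r$ per hour, the expected wait is

$$ t_{\text{wait}} \;\approx\; \frac{N}{r} $$

A single app producing 50 proofs per hour waits about an hour to fill a 50-proof batch on its own; a shared pool receiving, say, 300 to 600 proofs per hour from many parties fills the same batch in five to ten minutes, with the same per-proof cost reduction.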
[:So what I'm hearing is you're not actually using the shared settlement layer term, but when I read that, I was kind of curious if you positioned this in any way, like towards the AggLayer or, at first maybe I thought it had something to do with shared sequencers, but it sounds like it's pretty far away from shared sequencers. But in the case of the AggLayer and that sort of aggregation that they're talking about over mostly in the Polygon ecosystem, but I know they have a vision for this in a bigger way, they talk about settlement. And, yeah, I'm just curious and sort of what we talked about before, where you're deployed on various L2s and an L1, and you've kind of hinted that you might think of talking to each of these deployments. Yeah, is there any sort of connection then to something like the AggLayer?
[:Yes, I think we're a lot more similar to AggLayer than comparing to Aligned Layer or Hyle.
[:Okay.
[:It's just that we're taking a more ecosystem-agnostic approach to it. I think Shumo and the team probably had the idea of building proof aggregation around the same time last year. And I think the fact that people are all building in this space is extra validation that this is the direction that we see making ZKP accessible for end users. And then eventually we'll have a killer app that's ZK-enabled.
[:Yeah, I think one addition to that is that both we and AggLayer are still at a very early stage of protocol development.
[:Got it.
[:And take a look at the AggLayer documentation; it's pretty high level. So take what I say with a grain of salt. My understanding right now is that to solve this rollup interoperability problem, you need many things. You need proof aggregation; proof aggregation is the most important technical piece in this puzzle. But you need many, many other things. For example, you want fast finality. You need something like, I'm not sure you've covered this, Anna, but Justin Drake keeps talking about pre-confirmations. Right? To get fast finality, you need pre-confirmations. And it's a very huge design space. I think both we and AggLayer are just scratching the surface. There are many, many interesting problems to be solved. I can name a few. First, how do you even define cross-rollup transactions? We know how to define transactions in a single rollup. What even is the right definition of a cross-rollup transaction? That's the first question.
[:And at the moment it's just like it feels like bridging all the time.
[:Exactly. But I think the bridge is the wrong abstraction. I did a talk at DBA research today called Bridges are a Scam. I think people need to think about things from first principles in order to solve the question. It's really not about, hey, AggLayer versus NEBRA. It's all about: can we actually solve the interoperability problem for the Ethereum rollups? And I want to give everyone a higher-level view of this; maybe that's helpful. Right? For Ethereum, it's very important to have a rollup-centric roadmap. It's very different because Ethereum wants to keep its L1 truly decentralized; it's not a performance-optimized L1. In that case, rollups are very important. But think about it: if we have all these rollups, but these rollups don't have interoperability, then they become separate kingdoms. That's fine, but if you think about the longer term, that leads to a winner-take-all situation, which is actually not healthy for the entire Ethereum ecosystem, because these rollup ecosystems cannot share network effects with each other. What both we and, of course, AggLayer are doing is trying to design innovative solutions to help rollups interoperate with each other. I'll hand it back to Yi on this, but Yi has a beautiful analogy: maybe in the future every home will run a rollup, every coffee shop will run a rollup. Right? In that case, we definitely need more interoperability.
[:So the original quote was actually from Bill Gates, because when they started Microsoft decades ago, they talked about a computer on every desk and in every home. And then I joked to Shumo that we need a zkRollup on every desk and in every home.
[:And every phone.
[:And every phone, every light client. The idea here is that we want to lower the barrier to entry for building a zkRollup. Our thesis is that once zkRollups have a stack that's performant and reaches feature parity with OP Stack, we will have a lot more use cases for them. A lot more consumer apps that haven't thought about building a rollup themselves will, once they reach adoption, once they have enough volume, enough traffic using their application, consider building their own rollup. And it will be a zkRollup, or a ZK-enabled rollup. Or maybe in the future we'll just call it a rollup, because we no longer say 'I'm building an internet startup'; it's just 'I'm building a startup.' So I think that's how we envision ZK becoming a very fundamental piece of underlying infrastructure that you no longer talk about. Right now we treat ZK as a separate sector, but in reality it will just be an element that's useful for people to do a lot more off-chain compute, to be able to prove that something has happened, to have native interoperability between rollups. It's a huge design space with very different attack angles being built by a lot of the innovators in this space.
[:I'm still a bit confused on how you go from the aggregation of proofs to what I've understood AggLayer as kind of proposing, which was a little bit more like settlements and movements. Maybe not movements around the chains, but even how you described it, Shumo, where you'd build an application that's actually interacting with two or more rollups at the same time, and a transaction between those rollups is actually happening. How do you go from just proof aggregation, which to me is sort of like: there's all this proving going on, then you do an operation and then you write it to chain. How do you go from that to actually interacting between the different rollups?
[:That's the right time to share a little bit more about what we're building next. We previously called it a shared settlement layer; now we are calling it NEBRA OS, where OS stands for, you can view it as operating system, or you can view it as Open Stack. The whole idea here is that if you look at how you can make rollups interoperable, our definition is that you need cross-rollup transactions, and these cross-rollup transactions need to be first-class citizens. They need to be as safe as possible. And how can you make a cross-rollup transaction as safe as possible? You have to make the transaction settle at the same time as normal transactions. Which gets back to: how does a rollup transaction settle? Assuming it's a zkRollup, a rollup transaction settles when its validity proof is verified by Ethereum Layer 1.
[:By the L1. You wouldn't say it's when the sequencer puts it in the sequence.
[:I mean, this opens a can of worms about all the semantics, the pros and cons of zkRollups and Optimistic rollups. Let's put it simply: when the sequencer includes it, that's a soft guarantee. And if you want to settle a huge amount of assets, I think the better way is to have a much safer guarantee, which is...
[:The final writing to the chain.
[:The final writing to the chain. And we also act as a coordination layer on top of Ethereum but below all the zkRollups. Right? So basically we want to enforce the safety guarantee of cross-rollup transactions. In my view at least, the only meaningful way of enforcing that is that, together with all the validity proofs coming from the different rollups, we have safety proofs for the cross-rollup transactions. Together, we aggregate all these proofs into the same proof and settle on Ethereum. I'm actually not sure this is how AggLayer is doing it, but in our view this is the most un-opinionated way of doing it, because we are using Layer 1 security to ensure the safety of the cross-rollup transactions. And you also brought up shared sequencers, and I think that's a great angle as well. You can think of a shared sequencer as a way of enforcing this atomic composability at a much earlier stage, while we enforce the cross-rollup transaction at a much later stage. I think that's more un-opinionated and gives more flexibility and sovereignty to the rollups, in the sense that rollups don't have to use a shared sequencer. We're happy for them to use a shared sequencer if they want to, but they don't have to. All they need to do is give some guarantees, sign some confirmations saying, hey, I'm going to put this cross-rollup transaction in the next batch. So in this way it gives them more sovereignty, which I generally think is a good thing. You may know the Cosmos thesis, and in practice, this also comes from my time at Manta: rollup sequencing is one of the most important revenue sources for all the rollups. So I think the idea of shared sequencing is great, but if you want to convince other rollups to actually adopt shared sequencing, it's kind of an uphill battle. And to use an interesting analogy, it's somewhat similar to central planning economics versus the free market.
[:Oh wow.
[:In our view, we should facilitate the free-market approach and build the infrastructure to let people form this free market, instead of having a sort of centralized planner for these cross-rollup transactions.
[:Yeah, I think to some extent we think that by adopting this approach, we preserve the integrity and the sovereignty that each rollup might want to maintain. And I think our general approach is that we want to cohabitate with existing infrastructure and people's design choices. In the sense that the reason we're not doubling down on the decentralized prover market is that we think that is a very important part for a zkVM to own, that's the meaty part of their supply chain, and we're more focused on the last-mile verification. And here, in building an Open Stack for zkRollups, we believe that sequencing is a lot of the revenue retention that each rollup might want to maintain. So our approach is more focused on offering native interoperability and potentially a canonical bridge to make the user experience and developer experience a lot more seamless.
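As a conceptual sketch of the settlement condition Shumo describes, consider the check below: a cross-rollup transaction only settles if every rollup it touches includes it in the batch covered by the same aggregation round, with the aggregated proof then settling everything atomically on L1. All types and names here are hypothetical illustrations, not NEBRA OS's actual design.

```typescript
// Hypothetical types modeling the cross-rollup safety condition.
interface RollupBatch {
  rollupId: string;
  validityProof: Uint8Array;   // proves the batch's state transition
  txCommitments: Set<string>;  // commitments to txs included in the batch
}

interface CrossRollupTx {
  id: string;
  // the transaction must be included by every rollup it touches
  touches: string[];
}

// A cross-rollup tx is safe to settle only if every involved rollup's
// batch in the same aggregation round includes it; the single aggregated
// proof then settles all batches and the cross-rollup txs together on L1.
function canSettle(tx: CrossRollupTx, batches: RollupBatch[]): boolean {
  return tx.touches.every((rollupId) =>
    batches.some((b) => b.rollupId === rollupId && b.txCommitments.has(tx.id))
  );
}
```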
[:With this Open Stack, though, are you also thinking about it a little bit like ZK Stack or OP Stack? Would people use your stack to deploy a full end-to-end rollup? Or is it more like there's a piece that they would take from NEBRA OS and then add to one of those deployments?
[:It's still in the very early stage.
[:Oh yes, true, true.
[:So I think there are two things. First, it is an entire stack. People can deploy their appchain or zkRollup, whatever you call it. The big problem we're solving here is that with this integrated stack, people can really just click a few buttons to deploy a new zkRollup. We are kind of halfway there, but not really. The problem right now, if you're talking about people who are building appchains, is that, yes, they can use Caldera, Conduit, AltLayer to do so, but they need to have a lot of business development negotiations with Chainlink, with Pyth, with all these oracles. And they also need a lot of business negotiations with bridges like LayerZero, Across, et cetera. So one of the benefits of using NEBRA OS is that they don't have to do that. They can truly do one-click deployment, and in the future even permissionless deployment of zkRollups.
And secondly, what's our relationship with OP Stack, ZK Stack and Polygon AggLayer? I do think we actually want to collaborate with them and also make ourselves interoperable with them. Like I said before, we're in this early exploration phase, figuring out the design space of how cross-rollup transactions can work. I think everyone is just scratching the surface right now. I think open means two things to us. First is open source: the pure stack will be open source from day one. And second, being open to collaboration and interoperability, which means we will actively seek collaborations and cross-stack interoperability between us and the other stacks. I think that's where we're coming from. And also, I don't view this as a zero-sum game, so I think we want to fulfill Yi's vision, which is to put a rollup on every desk. If you think about that, we're only at 1%...
[:Not even.
[:...of the progress bar.
[:Fair. I want to bring it back to what you have today. So kind of bringing it back to the UPA version 2, can developers already use that?
[:UPA v1, people can already use. UPA v2, we have a working prototype in-house, and people can probably use it after one or two months, which is not too long. And we have some initial, very close users who are already trying it. We also just want to say that UPA v1 and UPA v2 have the exact same interface. For example, you can just build a very small Circom circuit and try it today.
[:Cool. I'm recording this... it's the first episode I'm recording after wrapping up ZK Hack Kraków, our IRL hackathon. Do you think that hackers building applications should already be experimenting with this stuff, or do you think...
[:100%
[:Yeah. Okay.
[:Yeah. So we'd love to see if we can come to more ZK hacker houses and help them. I can tell you one thing I'm really, really happy about. We are working with a partner called Project Galaxy. We just threw our developer doc at them, and zhanghz didn't get back to us for one week. I was like, are they still interested in using us? Then after just one week, boom, they had already integrated NEBRA UPA v1 into their SDK, no questions asked. And that made me super happy, because it means our developer documentation is actually good enough that people can do the integration with no questions asked.
[:Interesting.
[:To say a little bit more about that, NEBRA UPA has a very simple developer interface. Basically three steps: as a ZK developer, you register your verification key, change your proof verification contract to NEBRA, and change your application logic to subscribe to the NEBRA UPA event. Then that's it. So we try to be plug-and-play for all the ZK applications, and we'd love to support ZK hacker houses in the future.
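The three-step integration might look roughly like the following, using ethers v6. The contract address, ABI fragments, and the method and event names (registerVK, submitProof, ProofVerified) are hypothetical placeholders standing in for whatever the NEBRA UPA docs actually specify.

```typescript
// Hypothetical sketch of the three-step UPA integration; names are placeholders.
import { ethers } from "ethers";

const upaAbi = [
  "function registerVK(bytes vk) returns (uint256 circuitId)",
  "function submitProof(uint256 circuitId, bytes proof, uint256[] publicInputs)",
  "event ProofVerified(uint256 indexed circuitId, bytes32 proofId)",
];

async function main(): Promise<void> {
  const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
  const signer = await provider.getSigner();
  // Placeholder address standing in for the deployed UPA contract.
  const upa = new ethers.Contract(
    "0x0000000000000000000000000000000000000000",
    upaAbi,
    signer
  );

  // Step 1: register your circuit's verification key (done once per circuit).
  const vk = "0x" + "00".repeat(64); // dummy bytes standing in for a real VK
  const registerTx = await upa.registerVK(vk);
  await registerTx.wait();

  // Step 2: send proofs to the aggregator instead of verifying them directly.
  const circuitId = 1n;
  const proof = "0x" + "00".repeat(128); // dummy proof bytes
  await upa.submitProof(circuitId, proof, [42n]);

  // Step 3: drive your application logic off the aggregator's event.
  upa.on("ProofVerified", (id: bigint, proofId: string) => {
    console.log(`proof ${proofId} for circuit ${id} settled on-chain`);
  });
}

main().catch(console.error);
```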
[:So that's for the sort of first part. This was the UPA v1 and the UPA v2. What is your timeline for the NEBRA OS?
[:We don't really have a timeline right now. I think we probably want to first get a good design and ask as many people as possible, hey, what do you think about the design? And then we can have a more concrete timeline. In my view, getting the design right and building something that is meaningful for people to use is more important than rushing to have something people can use, yeah.
[:Yeah, yeah, yeah.
[:And then I think one last call to action: not only are we excited about going to more hacker houses and supporting more hackers, there are also specific design spaces we're actively exploring. Aside from all the amazing infra partners that we have, we're also looking into specific consumer application spaces. It could be in the DeFi space, where there are private smart contracts, decentralized voting, different kinds of scaling solutions, private transactions and even partial KYC. And also in social applications, where there are in-person attestations, badging systems, passports, and ways for people to have PVP interactions online and offline, or I guess on-chain and off-chain, and be able to utilize that. These are all the areas where we're actively expanding the scope and the utility function to allow other applications to be not ZK-first but ZK-integrated. And that's how UPA could be the backend for the components that you don't want to touch.
[:Nice. Cool. Well, thank you both of you, for coming on the show and sharing with us the story of NEBRA, kind of the work that you were doing early on, how that's evolved and where you're going. Thanks so much.
[:Thanks for having us.
[:Thanks a lot, Anna. It's a great pleasure to be here, as always.
[:All right, and I want to say thank you to the podcast team, Rachel, Henrik, and Tanya, and to our listeners, thanks for listening.
Transcript
Welcome to Zero Knowledge. I'm your host, Anna Rose. In this podcast, we will be exploring the latest in zero knowledge research and the decentralized web, as well as new paradigms that promise to change the way we interact and transact online.
This week I chat with Shumo and Yi from NEBRA. We discussed the ways that the high price of putting zero-knowledge proofs on-chain could be mitigated. We then talk about their proposed solution to this the NEBRA Universal Proof Aggregation product, or NEBRA UPA. Our conversation covers prover marketplaces, verification aggregation systems, and the design space that this all opens up. We discuss what it takes to incorporate extra proving systems into NEBRA UPA, the benefits that these systems will bring, how developers are meant to interact with them, and the future works they are doing at NEBRA to enable seamless cross zkRollup applications. This is something that a number of different groups are working on, but often approaching it from a different perspective. As a quick disclosure, I'm an investor in NEBRA and invested pretty early on. As you can hear, they have evolved a lot since then and so it was really fun to catch up on what NEBRA is all about today and the direction that they're headed.
Now, before we kick off, I want to share a message from one of our recent ZK Summit11 sponsors, the Web3 Foundation. They are bringing back the legendary conference series the Web3 Summit. This next edition will be happening in Berlin on August 19th to 21st. And it's programmed by the community, so if you have a groundbreaking talk or workshop or session, then apply to be part of it. We're also helping to program a ZK track at the Web3 Summit, so you can head over to web3summit.com and grab your tickets today. Now Tanya will share a little bit about this week's sponsors.
[:Namada is the shielded asset hub rewarding you to protect the multichain. Built to give you full control over sharing your personal information, Namada brings data protection to existing assets, applications, and networks. Namada ends the era of transparency by default, enabling shielded transfers and shielded cross-chain actions to protect your data even when interacting with transparent chains. Namada also introduces a novel ‘shielding rewards’ system: by holding your assets in the shielded set, you help strengthen Namada’s data protection guarantees and collect NAM rewards in return. Namada will initially support IBC & Ethereum-based assets, but will ultimately serve as a single shielded hub for all assets across the multichain. Learn more and follow Namada mainnet launch at namada.net
Aleo is a new Layer 1 blockchain that achieves the programmability of Ethereum, the privacy of Zcash, and the scalability of a rollup. Driven by a mission for a truly secure internet, Aleo has interwoven zero-knowledge proofs into every facet of their stack, resulting in a vertically integrated Layer 1 blockchain that's unparalleled in its approach. Aleo is ZK by design. Dive into their programming language, Leo, and see what permissionless development looks like, offering boundless opportunities for developers and innovators to build ZK apps. This is an invitation to be part of a transformational ZK journey. Dive deeper and discover more about Aleo at aleo.org.
And now here's our episode.
[:Today I'm here with Shumo Chu and Yi from NEBRA. Welcome to the show, both of you.
[:Thank you, Anna.
[:Thank you, Anna. Great to be on.
[:Shumo, you've been on the show before. Last time you were here you were talking about another project that you were part of, Manta. But I guess since then you've started this new project NEBRA, and I'd love to hear a little bit about what prompted the change, what inspired you to start on this project.
[:Yeah. So to the audience who is not familiar with Manta, Manta previously is Polkadot Parachain and then still a Polkadot Parachain right now also pivoted into Ethereum L2, currently using OP Stack. It's moving to ZK Stack. I think for Manta previously the vision was building the privacy layer for Polkadot. I think this privacy is still kind of dear in my heart, the situation is that the Polkadot ecosystem gets a lot of downturn and don't get, in my view, still in my view, the attention it deserve. And then basically Manta is pivoting to Ethereum L2. And for the Ethereum L2 side, I think Manta is building a great community using this similar strategy to Blast, which is this native yield strategy. By the way, I mean, I can talk about that, but probably shouldn't be the major focus on that. But the TLDR is that Manta is becoming more sort of ecosystem building phase and still dear in my heart as a researcher, I want to do more exploring of the cutting edge ZK cryptography technology side. Maybe that's why I'm here today. And also I'm super, super glad Manta is doing very, very well after I left. So which means I'm probably not that important to the team.
[:All right, well, I see. But I guess what you're saying here is that research is where your heart is, and they didn't need research anymore because they were using existing out of the box stack.
[:I think they still need research. It's just the kind of research they need to do is more about the protocol side and more about developer tooling side. I wouldn't call that not research. It's just like the kind of research... I'm kind of more want to standing on the absolute bleeding edge. That's just my personal preference.
[:Got it. Cool. Yi, this is the first time you're on the show, so why don't you share a little bit about your background and what led you to work on NEBRA?
[:into the tech itself. But in:[:For those people, yeah.
[:Yes. But I think for the rest of us, I think that's the first exposure to register on your back of the radar and then to understand, oh, there's something new, there's a digital currency that's not fiat. And then so fast forward, after a couple years in traditional finance, I joined Coinbase and started building consumer tooling and then focused on building the trading engine. And while there, I was always looking for like-minded builder, and I wanted to build something new. And then after that I joined Galaxy for their investment team and then started looking for ideas on a high level to see what's on the table, what's on the landscape. And then the hope there is that I want to find a project that's actually multi-cycle, very sustainable, very innovative, and then that's building something completely undefined. And then we don't even know if we can build it, and I think that's how ZK caught my interest. And then I met Shumo in Zuzalu last year, and then that's where I initially got ZK pilled and started to understand what ZK can do, not only to scale Layer 1s that we're familiar with, but also there are a lot of use cases extending beyond blockchain.
[:Nice.
[:Yeah. By the way, I was also get pilled by Brian Armstrong's 100 BTC airdrop when I was at graduate school. I never told people the actual story, but I guess kudos to Brian Armstrong.
[:Wow.
[:For this, getting so many people into cryptocurrencies. Yeah.
[:The crypto user acquisition cost is exactly $100.
[:Tell me more about that actual story because you both have experienced this. Actually, I'm vaguely familiar. I feel like it hit my radar because I was in tech at the time, so I vaguely remember this kind of thing happening. But what exactly was it? Was it to grad students? Was it only certain schools? Could you sign up from anywhere? What was this exactly?
[:The one I was talking about, I think was only for MIT undergrad students. And then it was done by Dan Elitzer and also Jeremy Rubin because they founded the MIT Bitcoin Club. I actually did not receive the money myself, but I have a lot of friends who just used it on boba tea and, yeah, they wish they held onto it.
[:So my experience was that's... That was like a two-tier airdrop from Coinbase. It's very similar to how Mark Zuckerberg found Facebook. Basically, he did the first kind of speed dating thing at Harvard. So I think all the MIT students get $100 airdrop.
[:Yeah.
[:And then there's a second wave airdropping all the major US universities. I was a graduate student at University of Washington, then I received, I forget the exact amount, maybe $20, maybe less than $100, and then the Bitcoin price went straight up and that turned into, I think, $2,000 ish Bitcoin. I mean, as a poor graduate student, kind of, that was a big deal. Right? So every single of my fellow graduate students started talking about that.
[:Yeah, this is the best marketing ever. Especially if by chance it also just went up really quickly afterwards so you could see the potential. Nice. All right, so I think now would be a great time for us to introduce NEBRA. I met with you, Shumo, a long while ago to talk about NEBRA. And actually back then, NEBRA, at least when we were talking, was sort of presented as a prover marketplace. It was one of the first times I was kind of talking about this idea. But I know that since then you've moved a little bit away from this idea. Why don't we talk a bit about what it was, what was NEBRA originally pitched as, and what has it become?
[:So I think previously we were talking the NEBRA vision was two piece Lego. One is doing the proof aggregation and the other piece is doing a prover marketplace. And I think we actually never think about like prover marketplace is our primary goal. Take a step back. Prover marketplace is important in the proof supply chain, but we've always been focused on, hey, ZK proof is so expensive on-chain. For example, you need $20 today to verify a single proof on Ethereum mainnet today, and probably 1 to 2 dollars even on the Layer 2s. So we want to reduce this cost to $0.10 or even less. And that we think this is a ecosystem wide unlock for all the ZK project. And in our view, in the next ten years, every single project will become a ZK project. You may even not aware that you are using some kind of ZK technology, might be invisible to you, but it's very important because ZK is the only way can trustlessly scale computation on-chain so that everyone can share the same coordination layer, which is blockchain.
And then there is an interesting inter-relationship of the proof aggregation layer and prover marketplace because proof aggregation layer requires some off-chain computation, which is a recursive proving to basically compute the recursive, or we call it aggregated proofs, in that regard. Right? So, it's totally possible for us to build a vertical integrated stack both doing proof aggregation and prover marketplace. So in our previous kind of thought, we are planning to first build the prover aggregation layer and then build a prover marketplace. But now we don't think we want to build prover marketplace anymore because there are so many great people are building prover marketplace. I can name a few, like Succinct, Gevulot and so many others. And also from our own point of view, like our value capture is always not on the prover market side. Our value capture is always on the on-chain side. So to us, it's glad someone else can build that, and we can just like use that, yeah.
[:Okay. I really liked how you just put that proof supply chain concept out there. And you've now talked about prover marketplaces and a proof aggregator. But I want to define this a little bit more clearly. And we... I mean, we have talked about it a few times on the show, but let's start with the prover marketplace, which is a little better understood. I mean, I've definitely talked more about it. So yeah, let's start there. What is a prover marketplace? I know there are different versions of this out there.
[:Yeah, great. I think my definition of a prover marketplace is very simple. Basically, if you want to prove something and put it on-chain, and, for example, you don't have a beefy AWS cluster or great GPU-accelerated machines, you delegate your proving work to third parties. And that's the beauty of ZK proofs: you can do so if you don't care about privacy. You can just delegate the publicly verifiable workload to a third party, they compute it for you, you pay some money for the computation power, and then you put the proof on-chain. Right? So after the prover marketplace computes the proof, you still need to put it on-chain to get any use out of it. Either you want to attest to some historical data, which is the ZK coprocessor case, or you want to run a zkRollup, and there are many other use cases. So you can see the proof supply chain: first, a user generates the demand; second, someone needs to compute the proof; and third, you need to put the proof on-chain. Our first product, which we call NEBRA UPA, primarily focuses on that third part.
[:I see.
[:And I also want to add a little bit here about why we don't do the proving part, and may never need to do it ourselves: we want to have the most universal interface for that third step, which is putting the proof on-chain. Only a portion of our users or customers can delegate their proving. A huge chunk of our users cannot delegate their proofs because of privacy. One example is Worldcoin. Every single Worldcoin proof has to be generated on the consumer device, which is the user's cell phone. And we're talking to many other privacy projects as well, because we can lower the cost of these privacy-preserving proofs. With those, you cannot delegate your proof generation to a third party. So we want to find the common denominator in the interfaces people use to put their proofs on-chain. That's why we designed NEBRA UPA's interface purely as an interface for verifying proofs.
[:All right. I think I want to dig down into what that actually looks like, though. So, from what I've gathered, the proving marketplaces could use a proof aggregation tool like yours, so they would potentially be a partner. But also the ZK applications, the people actually creating proofs, and I'm curious about this, maybe even client-side, would they then use this proof aggregator to get it on-chain?
[:Yeah, yeah, precisely.
[:Okay.
[:That's precisely what it is. So basically we provide the most generic interface for people to put their proofs on-chain, and we try to be as unopinionated as possible. So both a prover marketplace and someone who runs their own beefy servers can put their proofs on-chain through us, and we can support client-side ZKPs as well.
[:Nice.
[:And we're putting them in the same pool.
[:Would rollups also use you, or would they use something else?
[:Yes. So maybe this is good timing for me to take a step back and give the big picture, a more general view of what NEBRA UPA is. Right? So NEBRA UPA is the first universal proof aggregation protocol live on-chain. We're an on-chain protocol, not an off-chain protocol.
[:You are a set of smart contracts that I guess are operating on an L1 or an L2?
[:Yeah, precisely, with some off-chain components. And our primary goal is to reduce people's ZK verification costs. Like I said, today on Ethereum Layer 1 it costs $20, and on an L2 maybe $2-ish. That's too much for ZK to go mainstream. So our approach is that you send your proof to us, and then we aggregate it, possibly together with proofs from all kinds of different parties. That's why the 'universal' is important. By universal we mean we can aggregate proofs from all kinds of different parties: ZK coprocessors, client-side proofs, zkRollups, zkVMs, you name it. It's kind of like carpooling, which is a great way of thinking about it, because we can aggregate the proofs from different parties into the same aggregated proof. And the beauty of ZK proofs is that the verification cost is basically independent of how many proofs you're aggregating.
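To make the carpooling economics concrete, here is a minimal back-of-the-envelope sketch, not NEBRA's actual pricing: the fixed on-chain verification cost is shared across the whole batch, while each proof keeps a small per-proof overhead. The $20 figure is from the conversation; the $0.05 overhead and the batch size of 256 are assumptions purely for illustration.

```rust
// Amortized verification cost under the carpooling model.
fn amortized_cost_per_proof(fixed_verify_cost: f64, per_proof_overhead: f64, batch_size: u32) -> f64 {
    // One verification of the aggregated proof is shared by the whole
    // batch; each proof still pays its own small calldata overhead.
    fixed_verify_cost / batch_size as f64 + per_proof_overhead
}

fn main() {
    // $20 fixed verification, assumed $0.05 per-proof overhead, 256 proofs per batch.
    let cost = amortized_cost_per_proof(20.0, 0.05, 256);
    println!("~${:.2} per proof", cost); // prints "~$0.13 per proof"
}
```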
[:And when you say aggregate, do you mean recurse? Like, you'll pool together a lot of these different proofs and then do a proof of all of them, into a single proof, so the outcome is just a smaller proof?
[:Precisely, precisely. So from a high-level point of view, proof aggregation just means we're generating proofs of proofs. That's actually the Twitter tagline of one of our engineers: I generate proofs of proofs.
[:Nice.
[:But practically speaking, on the tech side, it's not... there's more clever innovation that helps us do it more efficiently. It's not just, hey, we write a verifier and then we run this circuit. Right? There are many, many clever technical innovations we're doing to make it fast. But at a high level, it's proofs of proofs.
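As a rough illustration of the proof-of-proofs idea, here is a minimal, hypothetical sketch: the aggregator asserts, inside a circuit or zkVM, that every submitted proof verifies, and then proves that single statement once. All types and function names below are placeholders, not NEBRA's actual code.

```rust
// Hypothetical shapes for the inner proofs and the aggregate.
struct Proof { bytes: Vec<u8> }
struct VerifyingKey { bytes: Vec<u8> }
struct AggregatedProof { bytes: Vec<u8> }

// Stand-in for running a verifier of the inner proof system *inside*
// the recursive circuit; the real constraint logic is elided.
fn verify_in_circuit(_vk: &VerifyingKey, _proof: &Proof, _public_inputs: &[u8]) -> bool {
    true
}

// The recursive statement: "every proof in this batch verifies under
// its verifying key". Proving this statement yields one proof that the
// chain verifies a single time, regardless of the batch size.
fn aggregate(batch: &[(VerifyingKey, Proof, Vec<u8>)]) -> AggregatedProof {
    for (vk, proof, inputs) in batch {
        assert!(verify_in_circuit(vk, proof, inputs));
    }
    AggregatedProof { bytes: Vec::new() }
}
```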
[:Okay.
[:It's using ZK to beat ZK.
[:Yeah, yeah.
[:Do you live on an L1 or an L2?
[:We have a testnet running on Sepolia, which is an Ethereum testnet, and we're going to deploy on Ethereum mainnet pretty soon. And we're very likely to deploy on all the major Ethereum L2s, like Optimism, Arbitrum, Base, etcetera. Our philosophy is that we follow our users. Wherever our users want us to deploy, we'll deploy.
[:You'll deploy there.
[:Yeah.
[:If you're deploying in all these different places, does that make your system in any way inefficient? I know it's not like pools of assets, but if you're working on the L1 and you're also working on an L2, you're not aggregating the same proofs on both of these, I'm assuming. So some proofs will come to the L1 and you'll aggregate those there, and other proofs you're aggregating on the L2. But is there any connection between these two things, or are they just separate tools?
[:That's a very good question. Currently, each settlement layer, each chain, L1 or L2, has its own pool, if we deploy there. That's kind of inevitable. It's not that we want it that way; it's more that with the current architecture of L1s and L2s, you have to have separate pools. We are working on some interesting designs to see whether, in the future, we can combine them, but that's further out. And practically speaking, there are some additional technicalities here. Our protocol on an Ethereum L2 will be a little bit different, because the ratio between the gas cost of calldata and computation is a little bit different there. So if you want an optimized protocol, you need to design it slightly differently, but not too differently between L1 and L2.
[:Got it. I noticed at some point you also talked about supporting different languages, like Honk. You made an announcement where you said, we're supporting Honk, therefore we're supporting Noir. But I was pretty confused by what that even means, because if you're aggregating the proofs, what does that have to do with the ZK DSL?
[:Oh, so here's the general picture of how to think about this. Currently, NEBRA UPA v1 only supports the Groth16 proof system. Groth16 makes up about 70% of on-chain verification right now; a number of teams, like Worldcoin, Succinct and RISC Zero, all use Groth16. Moving forward, NEBRA UPA v2 will have multi-proof-system support, and we want to support as many proof systems as possible. So in that regard, we need to sequence which proof systems we support. We see a lot of people who are very excited about Noir, and we want to support Noir. But to be able to support Noir, the crucial part is supporting Honk. For the audience who isn't super familiar with Honk: Honk is kind of an evolution of HyperPlonk, with some Aztec secret sauce that makes it more efficient. So by supporting Honk, we'll be able to support Noir, which has a growing ecosystem of people doing exciting things in ZK. That's the context.
[:That was the context. So really the question should be about the proving systems. So let's actually do that. You mentioned you're supporting Groth16, which was actually one of my other questions, and now you're going to be supporting Honk. But you also mentioned Succinct, and they're actually using Plonky3. So would you be adding Plonky3 support?
[:Possibly, but right now... I had a pretty deep conversation with the Succinct team, including Uma. Right now they have a two-layer system. Their top layer is Plonky3; their bottom layer is Plonk or Groth16. Right? So we will be supporting SP1's Plonk or Groth16, depending on whichever proof system they choose for the second layer. As far as I understand, they're either going to use gnark's Groth16 or gnark's Plonk, so we will support either of those.
[:Got it. But do you have Plonk then already? Because you mentioned Honk, but not Plonk.
[:We have vanilla Plonk. Okay, so in v2, it's still a prototype, but it's working end-to-end. Also, we have another flavor of Plonk called fflonk...
[:The FF.
[:Which is... yeah, FF. And that's used by many zkRollups, including Polygon zkEVM and, as far as I understand, zkSync as well.
[:Okay, cool.
[:So you can see, multi-proof-system support will make us applicable to many, many more applications and zkRollups.
[:Nice. Part of your work is to implement these. What does implementing them even mean in your case? Like, what do you actually have to build to be able to interface with Groth16 proofs versus Plonk proofs?
[:I could write a 100-page PhD thesis on that, but let me be succinct. From a high-level point of view, you want to build a verifier circuit for the proof, right? Because we're doing recursive proving, which means we're generating a proof of a batch of proofs. The hard question here is: how do you make the verifier universal? This has been a great journey for us. We actually published a paper last year called UniPlonk; it's a universal verifier. The key question is: how can you make a universal verifier circuit? In our first iteration, we used Halo2 to build the universal verifier circuit for Groth16, which means we can aggregate your proof no matter what your circuit is. It's a circuit-agnostic proof aggregation protocol.
So in our second generation, and that's where your question comes in, Anna, what does it mean to support a new proof system? In our second generation we're moving to a zkVM-plus-precompiles architecture, which means supporting a new proof system is relatively easy: basically, we just implement a verifier for the new proof system in Rust. Of course there's more to it than that, because of the precompile part. Take a step back: the base case is that you can just use a zkVM to implement a verifier, and then you can already aggregate proofs, but it's very, very slow. We're talking about an hour-ish, and we definitely want to move faster than that. So we take what we learned in NEBRA UPA v1 about building efficient verifiers in circuits, and we add proof-aggregation-specific precompiles, you could say ECC precompiles or hash precompiles, to the zkVM, so that we can aggregate these more generic proofs from different proof systems more efficiently. That's the high level. And specifically, adding Honk support means we're adding a Honk verifier to our second-generation architecture, the zkVM plus precompiles.
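To sketch what that architecture might look like, here is a hedged, purely illustrative shape for such a zkVM guest program: each supported proof system gets a plain-Rust verifier, the guest asserts that the whole batch verifies, and the hot elliptic-curve and hash operations inside each verifier would be routed to accelerated precompiles. None of these names are NEBRA's real code; they are assumptions for the example.

```rust
// The proof systems mentioned in the conversation.
enum ProofSystem { Groth16, Plonk, Fflonk, Honk }

// One submitted proof, with its verification key and public inputs.
struct Submission {
    system: ProofSystem,
    vk: Vec<u8>,
    proof: Vec<u8>,
    public_inputs: Vec<u8>,
}

// Dispatch to the right verifier. "Supporting a new proof system" then
// mostly means adding one more arm here plus its Rust verifier below.
fn verify(sub: &Submission) -> bool {
    match sub.system {
        ProofSystem::Groth16 => verify_groth16(sub),
        ProofSystem::Plonk => verify_plonk(sub),
        ProofSystem::Fflonk => verify_fflonk(sub),
        ProofSystem::Honk => verify_honk(sub),
    }
}

// Each verifier is ordinary Rust; a production guest would call the
// zkVM's ECC/hash precompiles inside these instead of software field math.
fn verify_groth16(_s: &Submission) -> bool { /* pairing checks elided */ true }
fn verify_plonk(_s: &Submission) -> bool { true }
fn verify_fflonk(_s: &Submission) -> bool { true }
fn verify_honk(_s: &Submission) -> bool { true }

// Guest entry point: assert every submission verifies; proving this
// execution yields the single aggregated proof.
fn guest_main(batch: Vec<Submission>) {
    for sub in &batch {
        assert!(verify(sub), "proof failed to verify");
    }
}
```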
[:This is leaving me with two questions. One is: are you building your own zkVM, or are you going to use something out of the box?
[:So I think we're using something out of the box. We tried a few open source zkVMs, like Succinct's SP1 and RISC Zero and also Jolt. We haven't decided which one to use in the end, but to be honest, their tech stacks are very, very similar. They all have a very similar Rust compilation toolchain. So right now we're building a zkVM-agnostic backend for this proof aggregation. Again, our aim is not to reinvent the wheel; we want to stay focused on where we can improve things on the technical side.
[:Got it. What you described, though, the fact that you're actually building this universal verifier circuit, makes me start to think: are you then becoming a verifier aggregation layer yourself? Because that does seem to be a new type of project that we're also seeing, like Aligned Layer and Hyle. They're doing more aggregation on the verification side. Would you then consider Aligned Layer or Hyle competitors to what you're doing?
[:By the way, my understanding of Aligned Layer and also Hyle might be a little bit rusty. But I think my understanding of Aligned Layer is that they're using EigenLayer cryptoeconomic security to do the verification. And I would say we have a similar interface for developers but different security guarantees. Their security guarantee to users is EigenLayer cryptoeconomic security; our security guarantee is the recursive ZKP guarantee, which is pure mathematics. For people who don't need full Ethereum security, there might be some use cases for Aligned Layer. So we do see a similar developer interface but a different security guarantee.
[:Got it.
[:And for Hyle, it's a very similar situation. I think they're using a TEE, specifically SGX, as their security guarantee for the verification. So we would love to see more and more people working on this verification side. Because, to be honest, if you look at the cost structure of a ZK project, especially if you talk to the rollup-as-a-service providers like AltLayer, Caldera and Conduit, they get a lot of complaints from the user side: people say ZKP proving is a cost, but ZKP verification costs even more. Right? So we'd love to see more players join the space to help reduce the cost of putting a proof on-chain.
[:Cool. This is actually also helpful for folks who are trying to group together which projects are working on what. Some of the projects use different language, but as you've been going through this, I'm starting to hear echoes of what I understood they were doing. And it does sound like you're supplying a similar service to the end user, to the developer, but you're doing it in different ways.
[:Yeah, yeah, yeah.
[:Cool. I've also seen you sometimes refer to what you're building as a shared settlement layer, but so far in what we've talked about, it hasn't come up. So is that still what you're doing?
[:That's right. We're still finding the best, most accurate way to describe the solution. For us it's about the last mile that gets the proof on-chain, so at one point we called it a shared settlement layer. But really, we want to be the last mile that pulls everyone together to get their proofs on-chain. And we don't care where the proof comes from, we don't care which zkVM generated it, whether it's from rollups or even consumer applications. That's why we like the carpool analogy: you can be very different people, in very different groups, but if you're going to the same L1 or the same L2, we're carpooling you together. Right now, as Shumo said earlier, different destinations still mean separate pools, but ultimately we get the passengers to their destination faster. And by faster, I mean that if we're able to aggregate more proofs, then the waiting window, the rolling window we wait through to aggregate proofs and amortize the cost, will be much shorter. If, let's say, an application or a VM, or even a prover network, wants to build an aggregation layer themselves, and I think that's totally fair, they might have to wait, say, an hour to gather 50 proofs. But if you use a third-party, credibly neutral proof aggregator, then you only have to wait, say, five or ten minutes, and you get the same cost reduction. That's essentially why we find the value prop will be very appealing for users.
[:So what I'm hearing is you're not actually using the shared settlement layer term anymore. But when I read that, I was kind of curious whether you positioned this in any way towards the AggLayer. At first, maybe, I thought it had something to do with shared sequencers, but it sounds like it's pretty far away from shared sequencers. But in the case of the AggLayer and the sort of aggregation they're talking about, mostly in the Polygon ecosystem, though I know they have a vision for this in a bigger way, they talk about settlement. And, yeah, I'm just curious, given what we talked about before, where you're deployed on various L2s and an L1, and you've hinted that these deployments might talk to each other. Is there any sort of connection to something like the AggLayer?
[:Yes, I think we're a lot more similar to the AggLayer than to Aligned Layer or Hyle.
[:Okay.
[:It's just that we're taking a more ecosystem-agnostic approach to it. I think Shumo and the team had the idea of building proof aggregation around the same time last year. And the fact that people are all building in this space is extra validation that this is the direction that we think will make ZKPs accessible to end users. And then eventually we'll have a killer app that's ZK-enabled.
[:Yeah, and one addition to that: both we and the AggLayer are still at a very early stage of protocol development.
[:Got it.
[:And take a look at the AggLayer documentation; it's pretty high level. So take what I say with a grain of salt. My understanding right now is that to solve this rollup interoperability problem, you need many things. You need proof aggregation; proof aggregation is the most important technical piece in this puzzle. But you need many, many other things. For example, you want fast finality, so you need something like, I'm not sure you've covered this, Anna, but Justin Drake keeps talking about pre-confirmations. Right? To get fast finality, you need pre-confirmations. And it's a very big design space. I think both we and the AggLayer are just scratching the surface. There are many, many interesting problems to be solved. I can name a few. First, how do you even define cross-rollup transactions? We know how to define transactions in a single rollup. But what is the right definition of a cross-rollup transaction? That's the first question.
[:And at the moment it just feels like bridging all the time.
[:Exactly. But I think the bridge is the wrong abstraction. I did a talk at DBA research today called Bridges are a Scam. I think people need to think about these things from first principles in order to solve the question. It's really not about, hey, AggLayer versus NEBRA. It's about whether we can actually solve the interoperability problem for Ethereum rollups. Let me give everyone a higher-level view of this; maybe that's helpful. Right? For Ethereum, it's very important to have a rollup-centric roadmap. Ethereum is different because it wants to keep its L1 truly decentralized; it's not a performance-optimized L1. In that case, rollups are very important. But think about it: if we have all these rollups and they don't have interoperability, they become separate kingdoms. That's fine, but if you think longer term, that leads to a winner-take-all situation, which is actually not healthy for the entire Ethereum ecosystem, because these rollup ecosystems cannot share network effects with each other. What both we and, of course, the AggLayer are doing is trying to design innovative solutions to help rollups interoperate with each other. I'll hand it back to Yi on that, but Yi has a beautiful analogy: maybe in the future every home will run a rollup, every coffee shop will run a rollup. Right? In that case, we definitely need more interoperability.
[:So the original quote was actually from Bill Gates, because when they started Microsoft decades ago, they talked about a computer on every desk and in every home. And then I joked to Shumo that we need a zkRollup on every desk and in every home.
[:And every phone.
[:And every phone, every light client. The idea here is that we want to lower the barrier to entry for building a zkRollup. Our thesis is that once zkRollups have a stack that's performant and reaches feature parity with the OP Stack, we will have a lot more use cases for them. And then a lot more consumer apps that haven't thought about building a rollup themselves will, once they reach adoption, once they have enough volume and enough traffic using their application, consider building their own rollup. And it will be a zkRollup, or a ZK-enabled rollup. Or maybe in the future we'll just call it a rollup, because we no longer say 'I'm building an internet startup'; it's just 'I'm building a startup'. That's how we envision ZK becoming very fundamental underlying infrastructure that you no longer talk about. Right now we treat ZK as a separate sector, but in reality it will just be an element that's useful for doing a lot more off-chain compute, for proving that something has happened, for native interoperability between rollups. It's a huge design space, with very different angles of attack being built by a lot of the innovators in this space.
[:I'm still a bit confused about how you go from the aggregation of proofs to what I've understood the AggLayer as proposing, which is a little bit more like settlement and movement. Maybe not movement across the chains, but even how you described it, Shumo, where you'd build an application that's actually interacting with two or more rollups at the same time, and a transaction between those rollups is actually happening. How do you go from just proof aggregation, which to me is sort of: there's all this proving going on, then you do an operation and then you write it to chain. How do you go from that to actually interacting between the different rollups?
[:That's the right time to share a little bit more about what we're building next. We previously called it a shared settlement layer; now we're calling it NEBRA OS, where you can read OS as operating system or as Open Stack. The whole idea is this: if you look at how you can make rollups interoperable, our definition is that you need cross-rollup transactions, and these cross-rollup transactions need to be first-class citizens. They need to be as safe as possible. And how can you make a cross-rollup transaction as safe as possible? You have to make it settle at the same time as normal transactions. So then, how does a rollup transaction settle? Assuming it's a zkRollup, a rollup transaction settles when its validity proof is verified by Ethereum Layer 1.
[:By the L1. You wouldn't say it's when the sequencer puts it in the sequence.
[:I mean, this opens a can of worms about all the semantics, the pros and cons of zkRollups and Optimistic rollups. Let's put it simply: when the sequencer includes it, that's a soft guarantee. And if you want to settle a huge amount of assets, I think the better way is to have a much safer guarantee, which is...
[:The final writing to the chain.
[:The final writing to the chain. And we also act as a coordination layer on top of Ethereum but below all the zkRollups. Right? So basically we want to enforce the safety guarantee of the cross-rollup transactions. In my view, at least, the only meaningful way of enforcing that is to have, together with all the validity proofs coming from the different rollups, safety proofs for the cross-rollup transactions. We then aggregate all of these proofs into the same proof and settle it on Ethereum. I'm actually not sure this is how the AggLayer does it, but in our view this is the most unopinionated way, because we're using Layer 1 security to ensure the safety of the cross-rollup transactions. And you brought up shared sequencers, and I think that's a great angle as well. You can think of a shared sequencer as a way of enforcing this atomic composability at a much earlier stage, while we enforce the cross-rollup transaction at a much later stage. I think that's more unopinionated and gives more flexibility and sovereignty to the rollups, in the sense that rollups don't have to use a shared sequencer. We're happy for them to use one if they want to, but they don't have to. All a rollup needs to do is give some guarantees, sign some confirmation that says, hey, I'm going to put this cross-rollup transaction in the next batch. In this way, rollups keep more sovereignty, which I generally think is a good thing. You may know the Cosmos thesis. And in practice, from when I used to work with Manta, I know rollup sequencing is one of the most important revenue sources for rollups. So shared sequencing, the idea is great, but if you want to convince other rollups to actually adopt shared sequencing, it's kind of an uphill battle. To put it in an interesting analogy, it's somewhat like central planning economics versus the free market.
[:Oh wow.
[:In our view, we should facilitate the free market approach, building the infrastructure that lets people form this free market, instead of having a sort of central planner for these cross-rollup transactions.
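As a purely speculative sketch of the settlement idea Shumo describes, the aggregated proof would attest both that each rollup's validity proof checks out and that every cross-rollup transaction in the batch carries a safety proof. All names below are illustrative assumptions; NEBRA OS's actual design is still being worked out.

```rust
// Hypothetical inputs to a combined settlement statement.
struct ValidityProof { rollup_id: u32, proof: Vec<u8> }
struct CrossRollupSafetyProof { source_rollup: u32, dest_rollup: u32, proof: Vec<u8> }

// Stand-ins for the real verifiers, elided here.
fn verify_validity(_v: &ValidityProof) -> bool { true }
fn verify_safety(_x: &CrossRollupSafetyProof) -> bool { true }

// The statement that would be proven recursively and settled once on L1:
// every rollup advanced its state correctly, AND every cross-rollup
// transaction is consistent with its source and destination rollups.
fn settlement_statement(
    validity: &[ValidityProof],
    cross_rollup: &[CrossRollupSafetyProof],
) -> bool {
    validity.iter().all(verify_validity) && cross_rollup.iter().all(verify_safety)
}
```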
[:Yeah, and I think to some extent, by adopting this approach, we preserve the integrity of the sovereignty that each rollup might want to maintain. Our general approach is that we want to cohabitate with existing infrastructure and people's design choices. The reason we're not doubling down on the decentralized prover market is that we think that's a very important part for a zkVM to own, the meaty part of their supply chain, while we're more focused on the last-mile verification. And here, in building an Open Stack for zkRollups, we believe sequencing is something each rollup might want to retain control of. So our approach is more focused on offering native interoperability, and potentially a canonical bridge, to make the user experience and developer experience a lot more seamless.
[:With this Open Stack, though, are you also thinking about it a little like the ZK Stack or the OP Stack? Would people use your stack to deploy an end-to-end rollup? Or is it more that there's a piece they would take from NEBRA OS and add to one of those deployments?
[:It's still in the very early stage.
[:Oh yes, true, true.
[:So I think there are two things. First, it is an entire stack. People can deploy their appchain or zkRollup, whatever you want to call it. The big problem we're solving is that with this integrated stack, people can really just click a few buttons to deploy a new zkRollup. We're kind of half there, but not fully. The problem right now, if you're talking about people building appchains, is that, yes, they can use Caldera, Conduit or AltLayer to do so, but they need a lot of business development negotiations with Chainlink, with Pyth, with all these oracles, and also a lot of business negotiations with bridges like LayerZero, Across, et cetera. So one of the benefits of using NEBRA OS is that they won't have to do that. They can truly do one-click deployment, and in the future even permissionless deployment of zkRollups.
And secondly, what's our relationship with the OP Stack, the ZK Stack and Polygon's AggLayer? We actually want to collaborate with them and make ourselves interoperable with them. Like I said before, we're in an early exploration phase, figuring out the design space of how cross-rollup transactions can work. I think everyone is just scratching the surface right now. 'Open' means two things to us. First, open source: the stack will be open source from day one. And second, being open to collaboration and interoperability, which means we will actively seek collaborations and cross-stack interoperability between us and the other stacks. That's where we're coming from. And also, I don't view this as a zero-sum game. We want to fulfill Yi's vision, which is to put a rollup on every desk. If you think about that, we're only at 1%...
[:Not even.
[:...of the progress bar.
[:Fair. I want to bring it back to what you have today. So kind of bringing it back to the UPA version 2, can developers already use that?
[:UPA v1, people can already use. For UPA v2, we have a working prototype in-house, and people can probably use it in one or two months, which is not too long. And we have some initial, very close users who are already trying it. We also want to say that UPA v1 and UPA v2 have the exact same interface. For example, you can just build a very small Circom circuit and try it today.
[:Cool. I'm recording this as the first episode after wrapping up ZK Hack Kraków, our IRL hackathon. Do you think that hackers building applications should already be experimenting with this stuff, or do you think...
[:100%
[:Yeah. Okay.
[:Yeah. So we'd love to see if we can come to more ZK hacker houses and help out. And I can tell you one thing I'm really, really happy about. We're working with a partner called Project Galaxy. We just threw our developer doc over, and zhanghz didn't get back to us for a week. I was like, are they still interested in using us? Then, after just one week, boom, they had already integrated NEBRA UPA v1 into their SDK, no questions asked. That made me super happy, because it means our developer documentation is actually good enough that people can do the integration with no questions asked.
[:Interesting.
[:To say a little bit more about that: NEBRA UPA has a very simple developer interface, basically three steps. As a ZK developer, you register your verification key, you point your proof verification at the NEBRA contract instead of your own verifier, and you change your application logic to subscribe to NEBRA UPA events. That's it. We try to be plug-and-play for all ZK applications, and we'd love to support ZK hacker houses in the future.
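For a concrete feel of those three steps, here is a minimal, hypothetical sketch of the application-side flow. `UpaClient` and its methods are illustrative placeholders invented for this example, not NEBRA's published SDK; the real interface lives in their developer docs.

```rust
// Illustrative, made-up client types for the three-step flow described above.
struct UpaClient;
struct CircuitId([u8; 32]);

impl UpaClient {
    // Step 1: register your circuit's verification key once, on-chain,
    // and get back an identifier for it.
    fn register_vk(&self, _vk: &[u8]) -> CircuitId {
        CircuitId([0u8; 32])
    }

    // Step 2: instead of calling your own verifier contract, submit the
    // proof and its public inputs to the UPA contract for aggregation.
    fn submit_proof(&self, _circuit: &CircuitId, _proof: &[u8], _public_inputs: &[u8]) {}

    // Step 3: have your application logic react to the UPA event saying
    // the aggregated batch containing your proof was verified on-chain.
    fn on_proof_verified(&self, _circuit: &CircuitId, _handler: fn(&[u8])) {}
}
```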
[:So that's for the sort of first part. This was the UPA v1 and the UPA v2. What is your timeline for the NEBRA OS?
[:We don't really have a timeline right now. I think we first want to get to a good design and ask as many people as possible, hey, what do you think about this design? Then we can have a more concrete timeline. In my view, getting the design right and building something that's meaningful for people to use is more important than rushing to have something people can use, yeah.
[:Yeah, yeah, yeah.
[:And then I think one last call to action: not only are we excited about going to more hacker houses and supporting more hackers, there are also specific design spaces we're actively exploring. Aside from all the amazing infra partners that we have, we're also looking into specific consumer application spaces. It could be in DeFi, where there are private smart contracts, decentralized voting, different kinds of scaling solutions, private transactions and even partial KYC. And also in social applications, where there's in-person attestation, there are badging systems and passports, and people can have PVP interactions online and offline, or I guess on-chain and off-chain, and be able to utilize that. These are all the ways we're actively expanding the scope and the utility function, to allow other applications to be not ZK-first but ZK-integrated. And that's how UPA could be the backend for the components that you don't want to touch.
[:Nice. Cool. Well, thank you both of you, for coming on the show and sharing with us the story of NEBRA, kind of the work that you were doing early on, how that's evolved and where you're going. Thanks so much.
[:Thanks for having us.
[:Thanks a lot, Anna. It's a great pleasure to be here, as always.
[:All right, and I want to say thank you to the podcast team, Rachel, Henrik, and Tanya, and to our listeners, thanks for listening.