In this week’s episode, Anna chats with Alex Gluchowski, CEO of Matter Labs & co-creator of the zkSync network. They catch up on the zkSync project, including zkSync Era, which launched in Feb 2023. They dive into recent initiatives like the ZK Stack framework, Hyperchains, and the ZK Credo mission statement. They also explore the upcoming Boojum proof system upgrade planned for zkSync Era and discuss the future of the zkSync project as a whole.
Here are some additional links for this episode:
- Introducing the ZK Stack
- Introduction to Hyperchains
- Matter Labs Era Boojum GitHub
- zkSync Era: Everything you need to know about ZK Credo, ZK Stack, & Boojum Upgrade
- Episode 72: zkSNARKs for Scale with Matter Labs
- Episode 116: zkSync and Redshift: Matter Labs update
- Episode 175: zkEVM & zkPorter with Matter Labs
- Introducing zkSync: the missing link to mass adoption of Ethereum
- The different types of ZK-EVMs by Vitalik Buterin
- Different types of layer 2s
- L2BEAT
- emailwallet.org
- prove.email
Check out the latest in ZK Jobs on our Jobs Board here.
Launching soon, Namada is a proof-of-stake L1 blockchain focused on multichain, asset-agnostic privacy, via a unified shielded set. Namada is natively interoperable with fast-finality chains via IBC, and with Ethereum using a trust-minimised bridge.
Follow Namada on Twitter @namada for more information and join the community on Discord discord.gg/namada
If you like what we do:
- Find all our links here! @ZeroKnowledge | Linktree
- Subscribe to our podcast newsletter
- Follow us on Twitter @zeroknowledgefm
- Join us on Telegram
- Catch us on YouTube
Transcript
00:05: Anna Rose
Welcome to Zero Knowledge. I'm your host, Anna Rose. In this podcast, we will be exploring the latest in Zero Knowledge research and the decentralized web, as well as new paradigms that promise to change the way we interact and transact online.
This week, I chat with Alex Gluchowski, CEO of Matter Labs and one of the creators of the zkSync network. In this episode, we catch up about all things zkSync. The last time he was on the show was over two years ago, and so there is a lot to talk about. We cover the ZK Stack framework, Hyperchains, the ZK Credo mission statement, and the Boojum Proof System Upgrade, planned for zkSync Era. We also chat about the future of the project and more. Now, before we kick off, Tanya will share a little bit about this week's sponsor.
00:59: Tanya
Launching soon, Namada is a proof-of-stake L1 blockchain focused on multi-chain, asset-agnostic privacy via a unified shielded set. Namada is natively interoperable with fast finality chains via IBC and with Ethereum using a trust-minimized bridge. Any compatible assets from these ecosystems, whether fungible or non-fungible, can join Namada's unified shielded set, effectively erasing the fragmentation of privacy sets that has limited multi-chain privacy guarantees in the past. By remaining within the shielded set, users can utilize shielded actions to engage privately with applications on various chains, including Ethereum, Osmosis, and Celestia, that are not natively private. Namada's unique incentivization is embodied in its shielded set rewards. These rewards function as a bootstrapping tool, rewarding multi-chain users who enhance the overall privacy of Namada participants. Follow Namada on Twitter, @namada for more information, and join the community on Discord, discord.gg/namada. And now, here's our episode.
02:00: Anna Rose
Today I'm here with Alex Gluchowski, the CEO and one of the co-founders of Matter Labs, as well as a creator of the zkSync network. Welcome back, Alex.
02:10: Alex Gluchowski
Hi, Anna. It's an honor and a pleasure to be here again.
02:13: Anna Rose
Our earlier episode was over two years ago, so there is a lot to catch up on. Maybe you can catch us up on what's happened with zkSync since then?
03:20: Alex Gluchowski
Absolutely, with pleasure. So the zkSync network that you were using back then is now known as zkSync Lite. It was the first protocol that we developed and launched on mainnet. It was a simple protocol for sending tokens, and later token swaps were added to it. And the fact that you used it for Gitcoin Grants also captures a very important aspect of what zkSync is all about. So zkSync is not really a specific network or just the name of a protocol; it's the name of the project. It's the name of a bigger vision. And this has been the vision of the project since the very beginning, since the inception. We've always been very deeply focused on this core mission, and we've been very consistent about it.
And it's all about scaling crypto to everyone in the world without losing the properties that make crypto valuable in the first place: the decentralization, self-custody, security, permissionlessness, this inclusiveness where anyone can participate, no matter what, without asking anyone for permission to join. So we had that in the beginning, and it was also very, very clear that in order to fulfill this mission we have to make it very nice and user-friendly and convenient. And I think this was the experience that was very impressive back then with Gitcoin Grants, because you could do just one click in MetaMask and the system would process dozens of transactions to different recipients of the grants, and it felt like magic. This is part and parcel of what ZK is all about. It's the magic moon math, which adds magic to blockchain applications and scales them too.
05:11: Anna Rose
It is funny though, that like back then, that was very user-friendly, but it was such a novel way of the... Like MetaMask before that point, you would never choose networks. You wouldn't actually make a distinction. I mean, there was only one network you were usually working on, and I remember there was a learning curve even back then. Nowadays, though, I feel like a lot of that has become, people are quite used to it. And actually, I think that conversation about usability, I want to come back to that later on. Because as much as I feel like the goal is usability, getting there still is at times very challenging. I feel a lot of users, even today, even with easier tools, are still having a hard time getting into it. So let's come back to that. I do want to say, you mentioned that from the very start, you were focused on the scaling of blockchains. And in, I think, our first interview, we talked about how a lot of your team and a lot of the founders of zkSync were coming from the Plasma world, had moved over into roll-ups. At what point did the idea of the zkEVM really come into your mind? Was it already back then that that was the goal of using ZK? Or was it like instead of Plasma, we're going to use ZK roll-ups and then eventually this idea dawned on you that you could actually create an environment where people could deploy apps the way they do on mainnet Ethereum on an actual ZK roll-up?
06:41: Alex Gluchowski
This vision was there from the very beginning. It was very obvious that the end game has to be full scalability.
06:48: Anna Rose
You knew it was going to be zkEVM.
06:50: Alex Gluchowski
If you read the very first blog post introducing zkSync, where we mentioned zkSync for the very first time, and it was close to five years ago, it's called Introducing zkSync. One of the points that we make there is that eventually we envision full EVM bytecode being executable and provable in zero knowledge proofs. The difference between where we are today and where we were back then is that it became feasible. It was not feasible five years ago, and it was not really clear how specifically we were going to manage that. It was clear that it would eventually come; we needed breakthroughs in protocol development, in the fundamental cryptographic primitives of zero knowledge proofs. Those breakthroughs eventually came in the form of Plonk, recursive Plonk, RedShift, and then the Cambrian explosion of systems after that, so that we can finally have performance high enough to process essentially zkEVM-level computation. And this is what happened since the last time we spoke. We launched a zkEVM on Ethereum, and it is now live and flourishing as zkSync Era. It was the first L2 where you could deploy Ethereum contracts without modifications and just let them work in exactly the same way users interact with Ethereum mainnet or with Optimistic roll-ups, which were launched earlier. And now we're still at the beginning of a long journey to make it universally available and universally usable.
08:33: Anna Rose
I want to ask you about, given that I feel like the audience now knows much more about the zkEVM space and the different kinds of zkEVMs, I know a few years ago, or maybe a year ago, Vitalik published this zkEVM landscape with these different types. I'm curious, where does zkSync fall?
08:54: Alex Gluchowski
Yes, zkSync Era and the ZK Stack, the technology that we currently have, still fall under type 4, meaning you have to compile your contracts and deploy them in this network. But these borders are blurring. I think we will have systems that support different types of virtual machines in the near future, where it will not really matter. You will be able to execute any type of programming environment, from EVM-native bytecode, to something that is natively compiled for this system for maximum performance, to something like RISC-V or WASM or other types of virtual machines with different bytecodes. All of that will live together and will cooperate, with different trade-offs for different purposes. That is definitely where zkSync is heading, and where everyone is heading.
09:47: Anna Rose
Yeah, yeah. Let's talk about the launch of the zkEVM. This was the second network, ZK Era. What was that like? And actually, when did it launch? Was it a year ago?
10:01: Alex Gluchowski
That was in February of this year. It was very exciting. It was the first time that you could deploy EVM contracts with the exact same invariants, the exact same interfaces, the same user experience, on a ZK roll-up. So it was obviously a very... A moment of high responsibility, because we were thinking a lot about security, about different ways the system could break. We were putting a lot of precautionary measures in there. And so it was a mix of this responsibility and alertness, and actual excitement for going live. And we absolutely did not anticipate such a fast pace of onboarding users and capital and applications. We thought it was going to be a lot slower and take a lot longer for people to gain trust in the system. Gradually, over time, things would start moving, but it was like a snowball. It was really, really fast.
10:59: Anna Rose
Because I remember this was Lisbon, and you were already talking about the launch back then. Did you have to postpone the date at all?
11:25: Alex Gluchowski
We've set the dates a couple of times, we had to postpone it because you're building something completely new and you cannot be focusing on several different conflicting priorities all at the same time. For us, security was paramount. So we knew we're not going to launch something that is not meeting the standards that we expected in terms of security and diligence and code quality. But the moment we solved all of these issues and we actually had a live testnet going on in a very stable way with partners launching things working properly, we gained more and more confidence and then we just put a line saying like specifically by this date we will feel that the system is battle tested enough and mature enough to be launched. And so it was a gradual improvement from that point. We were just very, very conservative. We did not, really did not want to rush. So the final line we've set was actually from a point where we were very, very confident that the system is functional.
12:31: Anna Rose
And maybe this is why I confused it a little. I think back then you might have said something like, it's coming sooner, like around November. And then I guess, is it possible that then you pushed it to February?
12:41: Alex Gluchowski
No, no, no. I think the initial estimates for building zkEVM were, I think, from the point where we started working on it, we thought it could be completed in one year. It took us closer to two years. And so within this time period, I think there were a couple of points where we had to postpone.
13:01: Anna Rose
I see. It's very common to all projects. I feel like we do hear that often, that idea of predicting how long it's going to take. What do you think took longer? Was it because of needing to do more audits? Maybe can you point to any of the most complicated or challenging parts of building this?
13:19: Alex Gluchowski
I think it was just a combination of things. I cannot single out one specific thing. You know, like you start building stuff, and then you do it iteratively, and you discover challenges on the way. You cannot just foresee all of them purely in your imagination and say, oh, here is the perfect system. All of the world's best products that are being shipped fast are being shipped in a cyclic way, where you make an iteration, you launch your MVP, you get the feedback, you see what works, what doesn't work, what you have not thought about, and then you work over and over with further iterations.
13:55: Anna Rose
In that time too though, did you change the ZK under the hood at all? Like was there any sort of adaptivity on the ZK part or would you say you already locked that in kind of at the beginning of that two-year process?
14:09: Alex Gluchowski
The primitives were locked in. The circuits did change; we discovered more ways to make things more efficient, and so some of them were rewritten, but mostly it was just iterative work of building the complete body of all components required to make a complete zkEVM. But we now have a really good sense of all of these components. It turned out that a lot of complexity was not in the ZK circuits themselves, and people over-emphasize the complexity of zero knowledge proofs specifically. There is a lot of complexity in roll-ups, in just the platform side of things, in your core node, in scaling the storage, in scaling the transaction processing, in scaling the APIs for querying transactions. So that required a lot of work, and luckily we have a very, very talented team, and we made some right choices from the beginning, doing everything in Rust and using best engineering practices to make sure that we can eventually scale with the demand, not putting in any artificial limits where we say, oh, 10 transactions per second is going to be enough. And then all of a sudden you hit this wall of 10 transactions per second, and then what? Are you going to shut down your chain and invest time in making the system more scalable? That's not how you build in a sustainable way. You have to anticipate spikes in demand. You have to anticipate growth, and you have to put in a lot of buffer, actually orders of magnitude of buffer, in order to be able to accommodate unexpected black swans. This is what the work was all about.
15:54: Anna Rose
That's fascinating. I was just... In prep for this, I was actually listening to our first interview together with Alex V, who was also on that one. And I think you actually mentioned that thought. I think he said something along the lines of like, plasma was built for this kind of capacity, with ZK roll-ups, it will be a lower capacity, but these are the trade-offs that are made. But what you're describing is like, as you went on, you realized, actually, those spikes of usage, like you can't actually have a chain halt or not be able to use it. It may not need to maintain that capacity all the time, but it has to be able to maintain that upper boundary. So this sounds like real learning, even from that episode. What exists today? Like, what is the ZK system in ZK Era today? Because I know there's something coming, but I'm curious what it is right now.
16:44: Alex Gluchowski
As of today, on mainnet, we use Plonk with recursion. What we're switching to now is the proof system called Boojum, or rather the implementation called Boojum. This is now live on testnet, and we're making the full switch on mainnet in a couple of weeks. And Boojum is the implementation of a construction we originally called RedShift. Alex V published it together with Konstantin and Aki very shortly after Plonk was announced. It was an extension of Plonk which enabled FRI to be used for the polynomial commitments, essentially turning Plonk into a transparent proof system.
17:32: Anna Rose
Which then, I mean, this idea was very influential in general, because we've seen lots of systems since then go with that idea. We actually covered RedShift on one of the episodes we did together as well. But I remember last time we spoke, you had actually talked about shelving it, and now it sounds like you brought it back in action. What did you need to do to RedShift to get it into the state that it would be Boojum?
17:55: Alex Gluchowski
So we postponed the RedShift implementation back then because it was not very efficient with the primitives that were available at the time when RedShift appeared. This has changed since then with a couple of breakthroughs. An important one came from the Plonky2 team. They came up with an idea to use a shorter field in a really cool way, which boosted the performance of the hashing. And also there were a couple of other insights. There was... I'm not the right person to answer this question; you should talk to Alex V. But there was some research on better cached quotients and multivariate lookups from Ariel Gabizon and a couple of others. Those things combined gave Boojum the performance necessary to be used in production.
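For context, the "shorter field" idea from the Plonky2 team refers to the 64-bit Goldilocks prime, whose elements fit in a single machine word. A minimal sketch of arithmetic in that field (illustrative only, not Boojum's actual field implementation):

```python
# The 64-bit Goldilocks prime used by Plonky2-style provers. Field
# elements fit in a single machine word, which is what makes hashing
# inside the proof system much faster than over a ~256-bit field.
P = 2**64 - 2**32 + 1

def fadd(a: int, b: int) -> int:
    return (a + b) % P

def fmul(a: int, b: int) -> int:
    return (a * b) % P

# Every element fits in a u64:
assert P.bit_length() == 64
# Reduction is cheap because 2^64 is congruent to 2^32 - 1 (mod P):
assert fmul(2**63, 2) == 2**32 - 1
```

In practice provers exploit the special shape of this prime for fast modular reduction without general division; the `%` here is just the readable stand-in.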
18:45: Anna Rose
And you mentioned that it's coming in a few weeks. What does it mean to change out the proving system? This is what we're changing the full like ZK proving system under the hood, right?
18:55: Alex Gluchowski
Yes, we're changing the prover behind the roll-up system. But it's actually relatively easy for us to do, because the new prover follows the block structure one-to-one. So we're actually running it now in parallel: for all the blocks that are being produced on the system and proved with Plonk, in parallel we're also proving them with Boojum. And what we're going to do is just shift to the new prover and drop the old one. So it's a very smooth transition.
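The parallel-run-then-switch approach Alex describes can be modeled with a toy sketch (all names here are hypothetical, not zkSync's actual pipeline): a shadow prover consumes the same blocks as the active one until the operator flips them.

```python
from dataclasses import dataclass, field

# Toy model of a shadow-proving rollout. Both provers consume the same
# block structure, so they can run side by side until the cutover.

@dataclass
class Block:
    number: int
    txs: list = field(default_factory=list)

class PlonkProver:
    name = "plonk"
    def prove(self, block: Block):
        return (self.name, block.number)   # stand-in for a real proof

class BoojumProver:
    name = "boojum"
    def prove(self, block: Block):
        return (self.name, block.number)

def prove_block(block: Block, active, shadow=None):
    proof = active.prove(block)   # the proof that is actually posted
    if shadow is not None:
        shadow.prove(block)       # dry run to build confidence
    return proof

blocks = [Block(n) for n in range(3)]
active, shadow = PlonkProver(), BoojumProver()
proofs = [prove_block(b, active, shadow) for b in blocks]

# Cutover: the shadow prover becomes active, the old one is dropped.
active, shadow = shadow, None
```

The one-to-one block structure is what makes the flip safe: nothing about the chain's state changes, only which prover's output is posted.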
19:29: Anna Rose
In the case of a roll-up and sort of making that change, does the change just happen in a smart contract? Do you just upgrade a smart contract, basically?
19:38: Alex Gluchowski
You essentially upgrade the verifier, and you upgrade the prover that produces the new proofs. So one thing to mention here is that it was very important to keep the system proof-system agnostic. What we did is we on purpose used the SHA-256 hash function for the Merkle trees in the storage, so that we can change the proof system later. We did not use Poseidon, which would depend on the field; whenever you change the field, whenever you change the proof system, you would have to rehash and do a re-genesis of the entire system, which would have to be a trusted operation. We did not want that. So we built the system from the beginning slightly less efficient, but with more future-proofness, if you want. And this is how we always approach things. So this is why it's so easy for us to make this complete switch, and this is why it's going to be easy for us to switch in the future to any new innovation that might be coming, and it will definitely be coming in the years ahead of us, while preserving the system, keeping it intact with all the value and all the contracts, all the state that has been accumulated there.
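The design choice Alex describes can be illustrated with a toy Merkle tree (a hypothetical sketch, not zkSync's actual storage layout): because the tree is built with SHA-256, the committed root is independent of any proof system's field, so swapping provers never requires re-hashing the state.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list, hash_fn=sha256) -> bytes:
    """Root of a binary Merkle tree over the given leaves (toy model)."""
    assert leaves, "tree must have at least one leaf"
    level = [hash_fn(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:      # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hash_fn(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

state = [b"account:1|balance:100", b"account:2|balance:50"]
root = merkle_root(state)
# This root commits to the state independently of any proof system's
# field. A new prover only needs a circuit for SHA-256; no re-genesis
# (re-hashing the whole state under a new hash) is ever required.
```

A field-dependent hash like Poseidon is cheaper to prove, but the root it produces is tied to one field, which is exactly the trade-off Alex mentions.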
20:49: Anna Rose
What does it actually do? What does the upgrade actually do? Does it make it faster? Does it make the proofs smaller? Is it cheaper? What's the actual benefit?
21:00: Alex Gluchowski
It's going to make the proofs more efficient, meaning for the end user that the proof generation is going to be cheaper and the overall throughput of the system is going to be higher. Although throughput is not really a constraint here, because zero knowledge proof generation is really well parallelizable; you can go and add more and more machines. We are running GPU provers, and we have optimized the Boojum GPU prover to be consumer-friendly, with the aim of future decentralization of the proving space. You only need under 16 gigabytes of RAM and a decent gaming GPU to be able to generate the proofs for Boojum.
And so eventually, yes, it boils down to the cost. So now, this is interesting, because the cost of the proof generation is invisible. It's negligible compared to the cost of data availability. This is going to change with the usage of Validium or hybrid systems like Volition, like zkPorter. There it will make a big difference. Like today, on Ethereum, users are paying 10, 20 cents per transaction on average, so they don't really notice the cost, because it's tiny. And Boojum makes it orders of magnitude cheaper, so you will eventually be able to have very, very cheap transactions on Validiums.
22:28: Anna Rose
Interesting. You just sort of talked about like the prover and the actual proof creation potentially being decentralized. This is a space that I've at least heard people talking about for, well, publicly for a few years. I know like Mina and Aleo have always talked about these SNARK marketplaces, like proving marketplaces somehow. I've heard of a few projects doing just that, where that's their entire business at the moment is developing these marketplaces for proof generation. What's your vision for that? Do you eventually see other teams or some third party creating the proofs actually in zkSync-like networks? Or do you... Would it need to come from your org?
23:13: Alex Gluchowski
So let's start with the basics. We see the decentralization of all aspects, all components of the system, as an absolute non-negotiable requirement. Remember, the mission of zkSync is to scale blockchains while preserving the core values, the core philosophy, the core valuable properties that we have there. These properties include decentralization; they include resilience. The only way to become resilient is to decentralize. Decentralization, paradoxically, is actually not a value in itself; it's a means to enable several important values. And it's a means to give your systems, your blockchain networks, credible neutrality. Like if all of the proof generation is happening on one cloud provider, or just on a few big cloud providers that completely control it, then they have this power over you: they can always threaten to switch off your proof generation, your system, and then your blockchain just shuts down. So you will be forced to follow whatever orders, whatever subtle requirements they impose on you. And we want to oppose that. We want to build truly resilient systems. So you have to decentralize the sequencer, you have to decentralize the prover, and you have to decentralize also the development process and the community that watches over the system and stewards the development, and points in which direction the system should be developed. And all of that brings us to the idea, or the document, called ZK Credo, which we also published this summer.
24:57: Anna Rose
Yeah. I want to talk about ZK Credo, but right before that I just want to ask you what your thoughts are on the decentralized sequencer space or maybe even the shared sequencers. As a major ZK roll-up at this point, do you see yourselves working with one of those shared sequencers or do you imagine actually a decentralized sequencer on your side?
25:18: Alex Gluchowski
So as I said, we definitely will have a decentralized sequencer. In some form, we will have many chains as part of the bigger zkSync ecosystem. We call them hyperchains. The priority for us in building the ZK Stack that powers the hyperchains is the sovereignty of the chains, giving maximum freedom to all of these chains to decide their parameters, to decide their structure, configuration, and so on. And this includes the sequencing. So they will be able to choose the same decentralized sequencing approach as zkSync Era. They will be able to use centralized sequencing. They will be able to opt in to shared sequencing schemes, to go for other providers. There are a number of talented teams working on the shared sequencer design space. So I think we'll see a lot of experimentation, and we'll definitely see different chains pursuing different strategies. And it's not gonna be one-size-fits-all. It's gonna be some variety, some different trade-off space.
26:23: Anna Rose
Makes sense. Let's talk about ZK Credo. I actually found this, I wasn't sure what it was. I actually asked you before the interview, is this a mission statement? Is this a kind of guiding document? So yeah, what is ZK Credo?
26:38: Alex Gluchowski
So ZK Credo is a statement. It is a statement about the mission, philosophy, and values of the zkSync project. It's not bringing in something new; those are the values we've always held. And I don't think the first draft of the ZK Credo is going to be the last draft. We have the community discussing it, getting involved, and it will be an evolving process. But what we set out to do is to articulate those principles in a very specific way, which can serve as the basis for the formation of the community of zkSync governance. Because we're building systems that have to be credibly neutral, we are striving to use math instead of relying on humans, on validators, on centralized authorities, or even on decentralized groups of people, because there will inevitably be some forms of politics, some subtle decision-making choices. We want to make them as neutral, as transparent, as immutable as possible.
And this actually works for blockchain systems. It doesn't really work well for evolving those blockchain systems. We are not yet at the point where the code can write itself. The systems do not evolve on their own; we don't yet have artificial general intelligence. They are evolved and developed by people. And those people make subtle choices which might affect entire ecosystems, which might affect specific applications, which might affect specific groups of people. And so it's really, really important to have some guiding principles and a strong decentralized community that enforces those principles. So for blockchains, I think we have a really nice analogy in the real world, which is called charter cities. It's this idea that you can go and create a new territory where no one lives, and you just write new rules of the game. And then whoever likes those rules can join, move in, and start living and working there and build something interesting. And if you don't like them, you leave, and those rules can also define how the rules can be changed.
And so this enables you to experiment, to go and create a lot of different communities with different approaches, different lifestyles, different values, different governance systems. And we see this in the world of blockchains. It's a little harder to do in the L2 world than in the L1 world, because in L1s you can always easily fork a system. In L2s, forkability is not really possible. You can migrate, but you cannot really fork the assets. If you have some ether, it's locked in one contract; it's gonna stay there.
29:39: Anna Rose
Unless the base chain is forked, which is a much bigger...
29:42: Alex Gluchowski
Unless the base chain is forked, but then everything is being forked, right? But even layer 1 forkability has its limits. So ideally we want to come up with a system which has some minimum common ground, because blockchains are for universal coordination. So we're gonna have a lot of wildly different people participating in those systems, and they don't have to agree on everything. We only have to agree on the consensus of the blockchain state: that I own so much ETH, you own so much ETH, I send some transaction to you. It's objective. You can believe in completely different ideologies or economic systems or whatever, but we all agree on this objective consensus state. So we want to come together with this minimum form of governance that will enable us to move on, to iterate on the system design into the future, while not scaring off some of these groups, and enabling them to fork away and live their own lives if they want to. And so the ZK Credo is this foundational, if you want, declaration of independence, or declaration of values, for the digital community that will form around the zkSync idea.
31:05: Anna Rose
In this case though, using the term fork kind of is confusing because you wouldn't actually be forking any particular L2 or zkEVM. You could create a new one, I guess is what you sort of mean, right? Like using the ZK Stack framework and you could create a new chain.
31:23: Alex Gluchowski
Correct. It's a little... The word itself is a little ambiguous, but let's illustrate this. So we're starting off with a single deployment of zkSync. Let's say zkSync Era is the first hyperchain in this universe of interconnected hyperchains; we should talk about that separately. So now, if you like the rules of the system, you can just move in and participate. The rules are built in a way that gives you sovereignty. Like the validators, the proof generators, whatever stakeholders are in the system that are necessary for its operation, cannot mess with your assets. They cannot change the rules. They cannot rewrite contracts, rewrite state, or even prevent you from withdrawing your assets back to Ethereum or to some other L2, right?
So the upgradability of the systems is going to be done in a way that gives users a lot of time to withdraw if they don't like a new upgrade. If an upgrade is coming that changes the rules of the system in a way that you don't like, there must be a way for you to withdraw. Now, you will probably not want to withdraw to layer 1, because the costs of using layer 1 will become prohibitively expensive with time. So everyone will be living in L2s, L3s, some kind of scalability systems. That's Ethereum's vision of the future. So you will want to withdraw to a different layer 2, and if you don't see any layer 2 that is fulfilling the promise that you want, you can just take the code, fork the code, deploy an instance of this new layer 2, and then invite your fellow citizens of this network state, whoever was using the first L2 in the first place, to join you and say, look, this new change that's coming is actually contradicting the values we stand for, and it's our obligation to prevent that. So we should all just vote with our feet and migrate to this new fork.
33:42: Anna Rose
Yeah, it's so interesting. So it's not forking in the way that we've understood blockchains in the past, but it is still forking in a way, right? Because would you be able to also maintain the state of what's the existing balances on that L2 if you did that?
33:56: Alex Gluchowski
Well, at the very least, you fork the code.
33:59: Anna Rose
Yeah.
33:59: Alex Gluchowski
The forkability of the code is the very essence of the open source movement. Everything we do for zkSync is obviously fully free and open source for this very reason, in order to enable this forkability. And then, forking the state: yeah, you cannot fork the state, but you can migrate. And it serves the same purpose. Kind of like, you move away. You create your own version of this universe, which you like more, and you just move there.
34:26: Anna Rose
Let's actually dive into the ZK Stack, the hyperchains, because I want to understand how these chains interface and interact with one another. When you talk about walking away, I want to understand a little more what that means, or if you were to migrate. So ZK Stack. You sort of mentioned it a few times. Is it the Cosmos SDK equivalent, the OP stack equivalent? Like this is the builder library that anyone can use to deploy another zkEVM, zkSync?
34:53: Alex Gluchowski
This is correct. I would call it a framework.
34:56: Anna Rose
Okay, a framework.
34:57: Alex Gluchowski
Which is a ready-to-use complex of code that you can deploy to start your own hyperchain. We call them hyperchains for a specific reason. The difference between a hyperchain and some random roll-up is that hyperchains are hyperlinkable. We're using a technology called hyperbridges to connect hyperchains in a very interesting way, which enables what we call, surprise, hyperscalability: the ability of the system to grow indefinitely large, accumulating or absorbing as many users and as many transactions as will be necessary for the growth of Ethereum. You want 1 million users? Sure. You want 10 million users? You can do that. You just keep adding blockchains, keep adding systems, and it just grows. And hyperbridges use zero-knowledge proofs and a specific architecture, which is a little hard to illustrate without video, but it enables you to move assets from any hyperchain to any other hyperchain without friction. That is, without adding security assumptions or trust assumptions, without adding any capital cost, doing all of that very fast, and without adding any footprint on the underlying layers, on Ethereum, for example.
36:26: Anna Rose
Is it similar at all to some of the other ZK bridge kind of technologies, like having a very compressed light client on one side basically communicating across these two hyperchains? Is it similar to anything maybe we've already heard about on the show?
36:44: Alex Gluchowski
This is similar, but it has a very, very big and important distinction. So let's try to illustrate it. Imagine that you have two systems, two roll-ups on Ethereum for simplicity. And you want to move assets, native assets minted on Ethereum, from one of these roll-ups to another. Let's call them roll-up A and roll-up B. So roll-up A has, let's say, 100 ETH locked into it. What does that mean? It means that there is a contract on layer-1 that governs the treasury of roll-up A, and it has a balance of 100 ETH. And there is a similar contract for roll-up B, also on Ethereum, which does not have this balance. So now, if you want to use zero-knowledge bridging to move these assets from roll-up A to roll-up B, you first need some way to pass a message from roll-up A to roll-up B in a trustless way, and you can use those ZK bridges to do this.
So what you do is you make some commitment in roll-up A, this commitment is propagated through Merkle trees down to Ethereum, and then it can be read by roll-up B. So all of that works. One contract from one roll-up can talk to some other contract on the other roll-up completely trustlessly. So that's all good and great. But how do we actually get the 100 ETH to move from treasury contract A to treasury contract B? There's no way to do it. You cannot just burn it and mint it there, because those are separate contracts. You have to move this Ether on Ethereum itself. You cannot have the treasury contract of roll-up B go to Ethereum and tell Ethereum: mint me 100 ETH. Where from? Are you a miner?
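The commitment-propagation step described here can be sketched as a toy Merkle tree in Python. This is an illustrative model only, not zkSync's actual message-passing code; the function names and the choice of SHA-256 are assumptions made for the sketch.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree (duplicate last node if a level is odd)."""
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes needed to recompute the root for leaves[index]."""
    level = [h(l) for l in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index + 1 if index % 2 == 0 else index - 1
        proof.append((level[sibling], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Roll-up B's side: check one message against the L1-published root."""
    node = h(leaf)
    for sibling, leaf_is_left in proof:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

# Roll-up A commits its outgoing messages; only the root lands on Ethereum.
messages = [b"transfer 100 ETH to rollup B", b"some other message"]
l1_root = merkle_root(messages)

# Roll-up B verifies an individual message against that root.
assert verify(messages[0], merkle_proof(messages, 0), l1_root)
```

The point of the sketch is that only a single small root has to touch layer-1, while any individual message can later be proven against it.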
38:38: Anna Rose
Yeah, yeah.
38:38: Alex Gluchowski
So you need to somehow move... And this is a fundamental problem. So what you have to do in order to enable hyperbridging is have all of these chains share a single contract on layer-1 that manages the treasury of all of them in a single place. And then you can use this magic ZK bridging to pass arbitrary messages between system contracts, and the system contracts can trust each other, because all of these hyperchains have to run the exact same circuits, at least for the system contracts. So there is some minimum viable shared state of the circuits. And then they can instruct the treasury to release a certain amount of assets, because the instruction is coming from the system. So it's a consensus across all of these hyperchains that this actually happened, secured by math and cryptography.
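The shared-treasury idea can be illustrated with a purely hypothetical Python model. Here `proof_ok` stands in for on-chain verification of a ZK proof that the burn really happened on the source chain; the class and method names are invented for the sketch and bear no relation to zkSync's real contract interfaces.

```python
class SharedBridgehub:
    """Toy model of a single L1 contract holding assets for many hyperchains."""

    def __init__(self, initial):
        self.balances = dict(initial)  # chain id -> ETH credited to that chain

    def transfer(self, src, dst, amount, proof_ok):
        # In the real system, a ZK proof of the burn on `src` would be
        # verified on-chain; here a boolean stands in for that check.
        if not proof_ok:
            raise ValueError("invalid ZK proof of burn on source chain")
        if self.balances.get(src, 0) < amount:
            raise ValueError("source chain lacks funds")
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

hub = SharedBridgehub({"rollup_A": 100, "rollup_B": 0})
hub.transfer("rollup_A", "rollup_B", 100, proof_ok=True)
print(hub.balances)  # {'rollup_A': 0, 'rollup_B': 100}
```

With two separate treasury contracts, the same trustless message could still be passed, but treasury B would have no ETH to release; pooling the assets in one layer-1 contract is what makes the burn-and-credit accounting possible without moving Ether on Ethereum itself.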
39:33: Anna Rose
I want to just clarify it to see if I understand this, but like, so you have the two contracts that represent the two roll-ups. That's the original case. In this new case, is there like a third contract that manages the treasury of everyone? Or is there a joint contract that manages both of those roll-ups?
39:51: Alex Gluchowski
There is a joint contract that manages both of those roll-ups.
39:53: Anna Rose
Really?
39:54: Alex Gluchowski
Yes.
39:54: Anna Rose
Okay, but do they also... Like, so just to check, though, does roll-up A still have its own contract, or is it just attached to this joint contract?
40:05: Alex Gluchowski
It can have its own contract for managing the state or maybe for some other activities, but all of the assets have to be in one shared contract.
40:15: Anna Rose
Got it. I don't know how some of the other ecosystems have developed, but does the OP Stack do something like that too?
40:23: Alex Gluchowski
They do exactly the same thing. So we pioneered this with our hyperchain vision a year ago, and since then we saw the OP Stack and the Polygon CDK embracing the same approach.
40:33: Anna Rose
I see.
40:34: Alex Gluchowski
So there will be a few of these big ecosystems that are perfectly seamless inside, like zkSync's hyperchains, the OP Superchain, something from Polygon, maybe something from others. These will be big super-networks of roll-ups that can easily talk to each other. But it's much more expensive, and requires more time and cost and trust assumptions, to talk between those different ecosystems.
41:04: Anna Rose
Yeah, interesting. I mean, there are a lot of bridge projects like there's Axelar and Hyperlane who are doing kind of that interfacing between these different ecosystems.
41:16: Alex Gluchowski
They will still remain, they will fulfill their role. They will be connecting these completely separate ecosystems. You can think of these ecosystems as countries. You have a country, maybe on an island, then you have different cities in that country. Each city is a roll-up, and you have a network of high-speed railways that connects them. And you can move goods and people really fast inside, or maybe you can even think of it as one city, right? You can move things really seamlessly inside. But if you want to go to a different continent, or to a country overseas, you have to take a plane. And the plane is going to be necessarily more expensive, and it will take you longer to get there. So you're not going to be using planes, hopefully, just to get dinner some place and then come back, right? But you will be using them when you have to move something big, once in a while. Planes are still important; we'll still have these big continents and countries connected via planes, and I see those bridge systems more like this alternative, the airports.
42:27: Anna Rose
I'm trying to figure out what it is in the case of the zkSync universe. There's a lot of small roll-ups all on Ethereum. But in this case, a lot of the value will just be moving in between these chains and not really touching the main chain, even though, yes, the original funds might be locked and held in that smart contract. It's all kind of happening under the hood.
42:51: Alex Gluchowski
Exactly.
42:51: Anna Rose
Is there a moment where actually you start minting native tokens on the L2 that aren't on the L1?
42:58: Alex Gluchowski
Of course, you will have a lot of native tokens.
43:00: Anna Rose
Yeah. And then I have a question about like, so in that case, then are you just using the Ethereum blockchain as your data availability space? Because at that point, you're not even using it for the financial, it's not holding the original funds or the original tokens.
43:16: Alex Gluchowski
Yes, all roll-ups and validiums are going to be using Ethereum for a couple of functions. Number one is consensus on the state. So all of them will agree on what the final state is, what the sequence of transactions is that have been executed, and that's going to be final. Then you're going to be using Ethereum as the source of security for the validity of your computations. All of the ZK systems are eventually verified by every single validator on the Ethereum network, by every single full node of the Ethereum network. So this is how you ensure that the math is actually correct. Someone needs to verify it, and it's gonna be Ethereum. And in addition to that, roll-ups are going to use Ethereum for data availability. That's gonna be the most censorship-resistant source of data availability out there. You might have seen Vitalik's recent post about L2s, and he points to this specifically: that for high-value transactions, for high-value interactions, for example for your most valuable tokens, but also for things like your account and private keys, you will be using Ethereum. You want this data to be completely censorship resistant and unlosable.
But then in addition to that, Ethereum's data availability bandwidth is going to remain limited. No matter how performant, even if we get to full sharding, it's still going to have some limits. And we'll still need systems that can go without limits. And those systems will be validiums. You will be able to extend it. Some of the hyperchains will be using at least part of their state stored or made available through some external data availability solutions, either managed by them or managed by external providers or managed by decentralized systems like Celestia. There will be some way for them to offload data off-chain, growing indefinitely, absorbing arbitrary demand for Ethereum.
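The roll-up versus validium trade-off outlined above can be sketched with a toy cost model in Python. The byte sizes here are made-up illustrative constants, not real zkSync or Ethereum figures, and the mode names are just labels for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Batch:
    state_root: bytes   # new state commitment posted to L1
    tx_data: bytes      # compressed transaction data for the batch

def l1_footprint(batch: Batch, mode: str) -> int:
    """Bytes this batch puts on Ethereum, in a toy model.

    'rollup'   -> validity proof + state root + full tx data on L1
    'validium' -> validity proof + state root only; tx data lives in an
                  external DA layer (e.g. a committee, or a system like Celestia)
    """
    PROOF_SIZE = 1_000  # a SNARK proof is small and roughly constant-size
    if mode == "rollup":
        return PROOF_SIZE + len(batch.state_root) + len(batch.tx_data)
    if mode == "validium":
        return PROOF_SIZE + len(batch.state_root)
    raise ValueError(f"unknown mode: {mode}")

batch = Batch(state_root=b"\x00" * 32, tx_data=b"\x01" * 50_000)
print(l1_footprint(batch, "rollup"))    # 51032
print(l1_footprint(batch, "validium"))  # 1032
```

Both modes inherit validity from the proof verified on Ethereum; the difference is purely where the transaction data lives, which is why validiums can scale past Ethereum's data availability bandwidth.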
45:29: Anna Rose
Wild. You sort of mentioned that these hyperchains can be any systems. You could have kind of any language. It could be a Rust-based system. It could be... I don't know, basically the app developers, if there's all these hyperchains with these different language requirements, they can actually deploy in native languages on these hyperchains, not always using Solidity and sort of that EVM basis. That's correct, right?
45:57: Alex Gluchowski
Yes, that's correct.
45:57: Anna Rose
These hyperchains could be anything. I kind of want to bring the conversation a little bit back to languages, to ask you about a language that was created by Matter Labs a long while ago called Zinc, right? Is that ever going to make a reappearance, do you think? Like, that was also something around the time of RedShift. You were doing RedShift and you were doing Zinc. Yeah, I'm just curious if you see any further development on that, and if you could imagine a hyperchain that actually uses it, potentially.
46:25: Alex Gluchowski
No, we abandoned Zinc. It's not going to be developed further. It lost its justification. The original reason we were creating Zinc was that we needed a language for non-Turing-complete computation. And since we can now do fully Turing-complete stuff with zkEVM, with RISC-V or WASM or other virtual machines, why would you use something Rust-like if you can just use Rust, with all the tooling of Rust? It just doesn't make any sense.
46:58: Anna Rose
Do you imagine though maybe Matter Labs itself deploying a hyperchain with a different base language?
47:05: Alex Gluchowski
Ourselves, we're focusing on the core protocol, but I think that some hyperchains could be launching with languages that are better suited for smart contracts. I can think of Move, for example, the language developed for Facebook's Libra, later Diem. It was inspired by Rust, but was actually optimized for making smart contracts more secure and easier to develop. If that takes off, I can totally imagine a hyperchain using Move, or something else entirely. Sure, existing languages such as Rust were created for specific purposes, like systems programming: making sure that you have safe memory, using it for parallelism, and so on. All of that does not really matter in the world of smart contracts. You want different elements of the language to support your development, to make it more expressive. So I can totally imagine that in the future we'll see that coming.
48:09: Anna Rose
You launched the zkEVM in February 2023.
48:26: Alex Gluchowski
Oh, we not only still have a lot of adoption, we are the most actively used L2 on Ethereum today.
48:34: Anna Rose
That's wild.
48:35: Alex Gluchowski
If you go to L2BEAT and just switch to activity: we're number four by TVL, but we're consistently number one by far, with 24 million average monthly transactions. Ethereum stands at 30 million, and all the other chains start at 14-16 million and below. So there is a lot of growth happening, also in terms of protocols adopting zkSync and building new stuff, entirely new stuff that was not done before, going in the direction of native account abstraction, which we haven't covered yet, but it's an important aspect of making blockchains usable by a mainstream audience. And we see a lot of projects that are expanding in this mainstream audience space. So we see things like Pudgy Penguins launching their NFT campaigns with Walmart, where millions of users will be able to just scan a QR code and get their NFTs, and connect the physical world with the virtual space, with the metaverse.
We see projects like the city government of Buenos Aires, Argentina, using zkSync for an ID system for its citizens, where they will be using blockchain to connect to goods and services and providers. We're seeing hyperchains being used in interesting mainstream audience spaces. So I think what contributed to this popularity of zkSync is, on the one hand, this focus on technological innovation, on being future-proof, building systems for the future: not focusing so much on being backwards compatible, but on being future compatible, and building with the end consumer in mind. And on the other hand, just this consistency with the mission, with the values. People know that they can rely on us. People know that when we say decentralization, we actually mean it. It's not a project that was created just to pump and dump, promote some idea and burn it in one year. We are here for a very long future. We, as Matter Labs, are really well funded, so we will be able to build everything we need for the final vision: to make this a reality where anyone in the world can access Ethereum in an affordable way, fully preserving the properties of Ethereum. And all of that is owned by the community. All this network governance, the direction in which it's going, it's all controlled not by a single party, but by the broad community. So we've been very consistent with this messaging, and I think it has also contributed to where we are today.
51:22: Anna Rose
Do you think there is a moat? This is something I've sort of been curious about with a lot of the L2 projects. What do you think will keep people on zkSync? Because if it's EVM compatible, they can deploy elsewhere, right? They can do it on Optimism, they could do it on the new zkEVMs that are coming out. Yeah, what's the moat?
51:42: Alex Gluchowski
First of all, I don't care that much about the moat, I care about the mission. We have very specific criteria for when the mission is complete and what we need to do for this mission. If you ask me how I think development is going to play out on Ethereum, I think that you will see network effects accruing to these big hyperchain and superchain ecosystems. And that is gonna be what matters. They will not be bound to individual chains, but to these big ecosystems. I mean, individual chains will also have their network effects, obviously, for instant composability with synchronous transactions and so on. But this access to users and liquidity from one hyperchain to all the other hyperchains, in just one click, without any capital or trust or security friction, is what's going to matter. So you will be measuring not just the TVL or transactions or users, active daily users, all the parameters by which we shine with zkSync Era today; you will not be measuring that on individual chains, but on these chain ecosystems.
52:56: Anna Rose
That's cool. What's your vision then for the future of this space? One of the big questions, and I kind of hinted at it before, but it was this idea of like, if you have the native assets on the block, on the L2s themselves, in your vision of the future, is Ethereum still at the center, or does it look more like a mesh or a net where like more interconnection and actually value has shifted a little bit further up the L2 stack?
53:23: Alex Gluchowski
Ethereum is definitely going to be at the center, as the backbone connecting all the L2s. I believe that the internet of value, the eventual form of Web3 that embraces all forms of value transactions in the world, will be in Ethereum L2s and L3s. It will be in these networks. I don't think it's going to be far outside of Ethereum. You will have a couple of projects outside of Ethereum, a couple of blockchains, still doing some things and being used for different purposes, but I believe that the bulk of the world's value transactions will be happening on Ethereum networks. Not on layer-1 itself, as it is Ethereum's vision to use layer-1 as the foundation, the connection layer. It's not where the actual end-user transactions will happen, except for maybe some super, super high-value transactions.
54:19: Anna Rose
Yeah. In the ZK Credo, there is one term, which is privacy. It's something we talked a lot about when we first met. And then I know that there was more of a focus on scaling. Do you see privacy kind of coming back for zkSync? Maybe it's always been there, and sorry if I... But at least in the messaging, it hasn't been as much of a focus.
54:40: Alex Gluchowski
Sure, it's not been at the forefront, but it's always been there. If you read that same introductory post about zkSync from five years ago, one of the key points there is privacy. Because you can't be using blockchains as Twitter for your bank account. You cannot have transactions where everyone in the world can see all the assets you have and all the people you interact with. In order to reach mainstream adoption, you have to implement privacy. The reason we've not been focused on privacy comes down to two things. One, scalability is a prerequisite for privacy. Specifically, the way zkSync and the ZK Stack are architected is going to make privacy-preserving transactions, recursive zero-knowledge proof verification on these chains, super cheap, because we will not need to use data availability from Ethereum, even on roll-ups, to verify zero-knowledge proofs. It's all going to be embedded; you will be paying just a fraction of a cent for this verification.
And this paves the path for privacy-preserving applications. But the second reason is that we as an organization, as Matter Labs, as one of the contributors to the ZK Stack (and there are now more, and there will be more and more organizations and individuals contributing to this open source code base of the ZK Stack), are embracing Ethereum's philosophy of subtraction. We don't want to become an empire like Google or Facebook or whatnot, with thousands of employees doing all kinds of different things. We want to focus on one problem, which is scalability, and do it really, really well. So what we're building is this internet protocol for this internet of chains. The rest should be done by other people. We don't want to be building wallets, we don't want to be building privacy extensions, we don't want to be building individual dApps, any kind of DeFi, NFTs, whatever, on top of zkSync. That is all for others to build, and we want to support them.
56:51: Anna Rose
Okay, so it's in the Credo more as guidance, but not as something that you're building currently. Like you're not going to build the privacy modules or the privacy hyperchains, but someone else could build, I guess, a potentially private hyperchain.
57:07: Alex Gluchowski
I don't see it coming for now, unless no one really builds it for a long period of time, in which case we will have to intervene. So eventually I want to see all the points from ZK Credo being built. Like actually live and being used by millions of people. That is success for us. I believe that other people will do it, but if no one builds, I don't know, like a fantastic wallet or a point of sale system or something else, we will have to help this happen. Maybe not in a way that Matter Labs builds it, maybe in a way that the zkSync ecosystem supports some teams with grants and funds the development and just helps the systems to be created. But eventually, all of that is going to be built. I can promise you this.
57:57: Anna Rose
That's cool. Are you also paying attention to some of the coprocessor or this other way that chains and off-chain computation is happening, not really the traditional L2s? And I'm curious if you'll be interacting with any of that.
58:17: Alex Gluchowski
Sure. I think eventually we will see coprocessing natively and seamlessly integrated into blockchain systems. So you will be able to just make a call to a smart contract and then execute arbitrarily complex functions on some public data set, plugging in external oracles that will provide access to this data and relying on some networks to do the coprocessing. That is gonna be very easy from the programmability point of view.
58:49: Anna Rose
Can you imagine building something like that within the ZK Stack? Or would you imagine that living somewhere else being kind of the third party groups that are putting it together?
59:00: Alex Gluchowski
For now, we're focusing on making the ZK Stack come to life with all the promises of the ZK Stack: with hyperscalability, connection between the hyperchains via hyperbridges, different data availability models like validium and zkPorter. All of that has to work in a very, very smooth way and with fantastic user experience. All the extensions, like adding Rust programs, coprocessing, whatever, that is not the primary focus. The primary focus is to make the foundation work. Meanwhile, as we're approaching the completion of the foundation, we'll probably have a lot of projects working on those things, and we'll be happy to support them and integrate.
59:47: Anna Rose
Cool, cool. Is there actually a foundation for zkSync? Like, who gives the grants? Is it a treasury that's part of the governance of the actual network or is it some sort of existing organization?
Alex Gluchowski
It must be rooted in the governance system of the chain, if you will. So, obviously, as long as... Like, Matter Labs is a centralized organization, it's just a private company. So we can give grants at our own discretion to whomever we want, and we are supporting some teams and doing some partnerships to accelerate the development. But I think you're asking about this network community governance.
Anna Rose
Exactly that... Like, following that question on privacy, I don't really imagine Matter Labs going and funding a privacy project, given that it's not in the exact space that you're working in, like you just said. But yeah, is there going to be some larger treasury pool or something that the community can actually make decisions on, on what gets funded?
Alex Gluchowski
Yeah, I think that every big protocol has to come up with some form of governance that can also be financially sustainable over a long period of time. And this is a big challenge. We see a lot of experiments, but we don't see a final system that is perfect yet. No one has come up with anything that I believe is sustainable yet. So this is a big research challenge that we're still in.
Anna Rose
Interesting. Alex, thank you so much for coming on the show and sharing with us all of the updates on zkSync. I feel like this was a very overdue episode. Hopefully it won't take another two years for us to get on one of these again.
Alex Gluchowski
Thank you, Anna. I really enjoyed the questions, and it's always fun to be here.
Anna Rose
Thanks. I want to say thank you to the podcast team, Henrik, Jonas, Rachel, and Tanya, and to our listeners, thanks for listening.
00:59 : Tanya
Launching soon, Namada is a proof-of-stake L1 blockchain focused on multi-chain, asset-agnostic privacy via a unified shielded set. Namada is natively interoperable with fast-finality chains via IBC and with Ethereum using a trust-minimized bridge. Any compatible assets from these ecosystems, whether fungible or non-fungible, can join Namada's unified shielded set, effectively erasing the fragmentation of privacy sets that has limited multi-chain privacy guarantees in the past. By remaining within the shielded set, users can utilize shielded actions to engage privately with applications on various chains, including Ethereum, Osmosis, and Celestia, that are not natively private. Namada's unique incentivization is embodied in its shielded set rewards. These rewards function as a bootstrapping tool, rewarding multi-chain users who enhance the overall privacy of Namada participants. Follow Namada on Twitter, @namada, for more information, and join the community on Discord, discord.gg/namada. And now, here's our episode.
02:00: Anna Rose
Today I'm here with Alex Gluchowski, the CEO and one of the co-founders of Matter Labs, as well as a creator of the zkSync network. Welcome back, Alex.
02:10: Alex Gluchowski
Hi, Anna. It's an honor and a pleasure to be here again.
02:13: Anna Rose
Our earlier episode was from April...
03:20: Alex Gluchowski
Absolutely, with pleasure. So the zkSync network that you were using back then is now known as zkSync Lite. It was the first protocol that we developed and launched on mainnet. It was a simple protocol for sending tokens, and later token swaps were added to it. And that you used it for Gitcoin grants also captures a very important aspect of what zkSync is all about. zkSync is not really a specific network, or just the name of the protocol; it's the name of the project. It's the name of a bigger vision. And this has been the vision of the project since the very beginning, since its inception. We've always been very deeply focused on this core mission, and we've been very consistent about it.
And it's all about scaling crypto to everyone in the world without losing the properties that make crypto valuable in the first place: the decentralization, self-custody, security, permissionlessness, this inclusiveness where anyone can participate, no matter what, without asking anyone for permission to join. So we had that in the beginning, and it was also very, very clear that in order to fulfill this mission, we have to make it very nice and user-friendly and convenient. And I think this was the experience that was very impressive back then with Gitcoin grants, because you could do just one click in MetaMask and the system would process dozens of transactions to different recipients of the grants, and it felt like magic. And this is part and parcel of what ZK is all about. It's the magic moon math, which adds magic to blockchain applications and scales them too.
05:11: Anna Rose
It is funny though, that like back then, that was very user-friendly, but it was such a novel way of the... Like MetaMask before that point, you would never choose networks. You wouldn't actually make a distinction. I mean, there was only one network you were usually working on, and I remember there was a learning curve even back then. Nowadays, though, I feel like a lot of that has become, people are quite used to it. And actually, I think that conversation about usability, I want to come back to that later on. Because as much as I feel like the goal is usability, getting there still is at times very challenging. I feel a lot of users, even today, even with easier tools, are still having a hard time getting into it. So let's come back to that. I do want to say, you mentioned that from the very start, you were focused on the scaling of blockchains. And in, I think, our first interview, we talked about how a lot of your team and a lot of the founders of zkSync were coming from the Plasma world, had moved over into roll-ups. At what point did the idea of the zkEVM really come into your mind? Was it already back then that that was the goal of using ZK? Or was it like instead of Plasma, we're going to use ZK roll-ups and then eventually this idea dawned on you that you could actually create an environment where people could deploy apps the way they do on mainnet Ethereum on an actual ZK roll-up?
06:41: Alex Gluchowski
This vision was there from the very beginning. It was very obvious that the end game has to be full scalability.
06:48: Anna Rose
You knew it was going to be zkEVM.
06:50: Alex Gluchowski
If you read the very first blog post, Introducing zkSync, where we mentioned zkSync for the very first time (and it was close to five years ago), one of the points that we make there is that eventually we envision full EVM bytecode being executable and provable in zero-knowledge proofs. The distinction between where we are today and where we were back then is that it became feasible. It was not feasible five years ago, and it was not really clear how specifically we were gonna manage it. It was clear that it would eventually come; we needed breakthroughs in protocol development, in the fundamental crypto primitives of zero-knowledge proofs. Those breakthroughs eventually came, in the form of Plonk, recursive Plonk, RedShift, and then the Cambrian explosion of systems after that, so that we can finally have performance high enough to process essentially zkEVM-level computation. And this is what happened since the last time we spoke: we launched the zkEVM on Ethereum, and it is now live and flourishing as zkSync Era. It was the first blockchain, the first L2, where you could deploy Ethereum contracts without modifications and just let them work in exactly the same way users interact with Ethereum mainnet, or with Optimistic roll-ups, which launched earlier. And now we're still at the beginning of a long journey to make it universally available and universally usable.
08:33: Anna Rose
I want to ask you about, given that I feel like the audience now knows much more about the zkEVM space and the different kinds of zkEVMs, I know a few years ago, or maybe a year ago, Vitalik published this zkEVM landscape with these different types. I'm curious, where does zkSync fall?
08:54: Alex Gluchowski
Yes, zkSync Era and the ZK Stack, the technology that we currently have, still fall under type 4, meaning you have to compile your contracts and deploy them on this network. But these borders will eventually blur. I think we will have systems that support different types of virtual machines in the near future, where it will not really matter. You will be able to execute any type of programming environment, from native EVM bytecode, to something that is natively compiled for this system for maximum performance, to something like RISC-V or WASM or other types of virtual machines with different bytecodes. All of that will live together and will cooperate, with different trade-offs for different purposes. That is definitely where zkSync is heading, and where everyone is heading.
09:47: Anna Rose
Yeah, yeah. Let's talk about the launch of the zkEVM. This was the second network, zkSync Era. What was that like? And actually, when did it launch? Was it a year ago?
10:01: Alex Gluchowski
That was in February of this year. It was very exciting. It was the first time that you could deploy EVM contracts with the exact same invariants, exact same interfaces, same user experience on a ZK roll-up. So it was obviously a very... A moment of high responsibility, because we were thinking a lot about security, about different ways the system can break. We were putting a lot of precautionary measures in there. And so it was a mix of this responsibility and alertness and actual excitement for going live. And we absolutely did not anticipate such a fast pace of onboarding users and capital and applications. We thought it was going to be a lot smoother and take a lot longer for people to gain trust in the system. Gradually, over time, things would start moving, but it was like a snowball. It was really, really fast.
10:59: Anna Rose
Because I remember this was Lisbon...
11:25: Alex Gluchowski
We set the dates a couple of times, and we had to postpone, because you're building something completely new and you cannot focus on several different conflicting priorities all at the same time. For us, security was paramount. So we knew we were not going to launch something that did not meet the standards we expected in terms of security and diligence and code quality. But the moment we solved all of these issues, and we actually had a live testnet running in a very stable way, with partners launching things that worked properly, we gained more and more confidence, and then we just drew a line saying, specifically by this date we will feel that the system is battle-tested enough and mature enough to be launched. And so it was a gradual improvement from that point. We were just very, very conservative. We really did not want to rush. So the final line we set was actually from a point where we were very, very confident that the system was functional.
12:31: Anna Rose
And maybe this is why I confused it a little. I think back then you might have said something like, it's coming sooner, like around November. And then I guess, is it possible that then you pushed it to February?
12:41: Alex Gluchowski
No, no, no. I think the initial estimates for building zkEVM were, I think, from the point where we started working on it, we thought it could be completed in one year. It took us closer to two years. And so within this time period, I think there were a couple of points where we had to postpone.
13:01: Anna Rose
I see. It's very common to all projects. I feel like we do hear that often, that idea of predicting how long it's going to take. What do you think took longer? Was it because of needing to do more audits? Maybe can you point to any of the most complicated or challenging parts of building this?
13:19: Alex Gluchowski
I think it was just a combination of things. I cannot single out one specific thing. You know, you start building stuff, and then you do it iteratively, and you discover challenges on the way. You cannot just foresee all of them in some completely imaginary way and say, oh, here is the perfect system. All of the world's best products that ship fast are shipped in a cyclic way, where you make an iteration, you launch your MVP, you get the feedback, you see what works, what doesn't work, what you have not thought about, and then you work over and over with further iterations.
13:55: Anna Rose
In that time too though, did you change the ZK under the hood at all? Like was there any sort of adaptivity on the ZK part or would you say you already locked that in kind of at the beginning of that two-year process?
14:09: Alex Gluchowski
The primitives were locked. There were changes to the circuits, we discovered more ways to make things more efficient and so some of them were rewritten, but mostly it was just iterative work of building the complete body of all components required to make a complete zkEVM. But we now have a really good sense of all of these components. It turned out that a lot of complexity was not in the ZK circuits themselves, and people over-emphasize the complexity of the zero knowledge proofs specifically. There is a lot of complexity in roll-ups, in just the platform side of things, in your core node, in scaling the storage, in scaling the transaction processing, in scaling the APIs for querying transactions. So that required a lot of work, and luckily we have a very, very talented team, and we made some right choices from the beginning, doing everything in Rust and using engineering best practices to make sure that we can eventually scale with the demand, not putting in any artificial limits where we say, oh, 10 transactions per second is going to be enough. And then all of a sudden you hit this wall of 10 transactions per second, and then what? Are you going to shut down your chain and invest time in making the system more scalable? That's not how you build in a sustainable way. You have to anticipate spikes in demand. You have to anticipate growth, and you have to put in a lot of buffer, actually orders of magnitude of buffer, in order to be able to accommodate unexpected black swans. This is what the work was all about.
15:54: Anna Rose
That's fascinating. I was just... In prep for this, I was actually listening to our first interview together, with Alex V, who was also on that one. And I think you actually mentioned that thought. I think he said something along the lines of, plasma was built for this kind of capacity; with ZK roll-ups, it will be a lower capacity, but these are the trade-offs that are made. But what you're describing is that, as you went on, you realized that with those spikes of usage, you can't actually have a chain halt or not be usable. It may not need to maintain that capacity all the time, but it has to be able to handle that upper boundary. So this sounds like real learning, even from that episode. What exists today? What is the ZK system in zkSync Era today? Because I know there's something coming, but I'm curious what it is right now.
16:44: Alex Gluchowski
As of today, on mainnet, we use Plonk with recursion. What we're switching to now is a proof system, or rather an implementation, called Boojum. This is now live on testnet, and we're making the full switch on mainnet in a couple of weeks. Boojum is the implementation of a construction we originally called RedShift. Alex V published it together with Konstantin and Aki very shortly after Plonk was announced. It was an extension of Plonk which enabled FRI to be used for the polynomial commitments, essentially turning Plonk into a transparent proof system.
17:32: Anna Rose
Which then, I mean, this idea was very influential in general, because we've seen lots of systems since then go with that idea. We actually covered RedShift on one of the episodes we did together as well. But I remember last time we spoke, you had actually talked about shelving it, and now it sounds like you brought it back in action. What did you need to do to RedShift to get it into the state that it would be Boojum?
17:55: Alex Gluchowski
So we postponed the RedShift implementation back then because it was not very efficient with the primitives which were available at the time RedShift appeared. This has changed since then, with a couple of breakthroughs. An important one came from the Plonky2 team. They came up with an idea to use a smaller field in a really cool way, which boosted the performance of the hashing. And there were a couple of other insights. There was... I'm not the right person to answer this question, you should talk to Alex V, but there was some research on cached quotients and multivariate lookups from Ariel Gabizon and a couple of other guys. Those things combined gave Boojum the performance necessary to be used in production.
18:45: Anna Rose
And you mentioned that it's coming in a few weeks. What does it mean to change out the proving system? This is where you're changing the full ZK proving system under the hood, right?
18:55: Alex Gluchowski
Yes, we're changing the prover behind the roll-up system. But it's actually relatively easy for us to do, because the new prover just follows the block structure one-to-one. So we're actually running it now in parallel: all the blocks being produced on the system that we prove with Plonk, we're also proving in parallel with Boojum. And what we're going to do is just shift to the new prover and drop the old one. So it's a very smooth transition.
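The shadow-proving rollout described here can be sketched in a few lines of Python. Everything below is illustrative: `prove_plonk`, `prove_boojum`, and `ACTIVE_PROVER` are hypothetical stand-ins, not zkSync's actual code, but they show why the cut-over amounts to flipping a switch once both provers track the same block structure.

```python
from dataclasses import dataclass

@dataclass
class Block:
    number: int
    txs: tuple

def prove_plonk(block: Block) -> str:
    # stand-in for the legacy Plonk-based prover
    return f"plonk-proof({block.number})"

def prove_boojum(block: Block) -> str:
    # stand-in for the new Boojum prover, run in parallel ("shadow mode")
    return f"boojum-proof({block.number})"

ACTIVE_PROVER = "plonk"  # flipped to "boojum" at cut-over time

def prove_block(block: Block) -> str:
    # both provers see every block; only the active one's proof is published
    proofs = {
        "plonk": prove_plonk(block),
        "boojum": prove_boojum(block),
    }
    return proofs[ACTIVE_PROVER]
```

Because the shadow prover processes every real block before the switch, the new system is exercised against production traffic with zero user-facing risk.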
19:29: Anna Rose
In the case of a roll-up and sort of making that change, does the change just happen in a smart contract? Do you just upgrade a smart contract, basically?
19:38: Alex Gluchowski
You essentially upgrade the verifier, and you upgrade the prover that produces the new proofs. One thing to mention here is that it was very important to keep the system proof-system agnostic. So what we did is we on purpose used the SHA-256 hash function for the Merkle trees in the storage. We did not use Poseidon, which would depend on the field, and whenever you change the field, whenever you change the proof system, you would have to rehash and do a re-genesis of the entire system, which would have to be a trusted operation. We did not want that. So we built the system from the beginning slightly less efficiently, but with more future-proofness, if you want. And this is how we always approach things. This is why it's so easy for us to make this complete switch, and this is why it's going to be easy for us to switch in the future to any new innovation that might be coming, and it will definitely be coming in the years ahead of us, while preserving the system, keeping it intact with all the value and all the contracts, all the state that has been accumulated there.
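A toy example of why SHA-256 Merkle commitments are proof-system agnostic: the root is a function of raw bytes only, so it stays valid no matter which proving field the circuits use, whereas a field-native hash like Poseidon would bind the tree to one field and force a re-hash and re-genesis on every proof-system change. This is a sketch only, not the actual zkSync storage layout.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    # hash the leaves, then pair-hash level by level up to a single root;
    # nothing here depends on any prover's finite field
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Swapping the proof system only requires a new circuit that can verify SHA-256 openings against this same root; the committed state never has to be rebuilt.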
20:49: Anna Rose
What does it actually do? What does the upgrade actually do? Does it make it faster? Does it make the proofs smaller? Is it cheaper? What's the actual benefit?
21:00: Alex Gluchowski
It's going to make the proofs more efficient, meaning for the end user that the proof generation is going to be cheaper and the overall throughput of the system is going to be higher. Although throughput is not really a constraint here, because zero knowledge proof generation is really well parallelizable. You can go and add more and more machines. We are running GPU provers, and we have optimized the Boojum GPU prover to be consumer friendly, with the aim of future decentralization of the proving space. You only need under 16 gigabytes of RAM and a decent gaming GPU to be able to generate the proofs for Boojum.
And so eventually, yes, it boils down to the cost. Now, this is interesting, because the cost of the proof generation is invisible. It's negligible compared to the cost of data availability. This is going to change with the usage of Validium or hybrid systems like Volition, like zkPorter. There it will make a big difference. Today, on Ethereum, users are paying 10, 20 cents per transaction on average, so they don't really notice the proof costs because they're tiny. And Boojum makes them orders of magnitude cheaper, so you will eventually be able to have very, very cheap transactions on Validiums.
22:28: Anna Rose
Interesting. You just sort of talked about like the prover and the actual proof creation potentially being decentralized. This is a space that I've at least heard people talking about for, well, publicly for a few years. I know like Mina and Aleo have always talked about these SNARK marketplaces, like proving marketplaces somehow. I've heard of a few projects doing just that, where that's their entire business at the moment is developing these marketplaces for proof generation. What's your vision for that? Do you eventually see other teams or some third party creating the proofs actually in zkSync-like networks? Or do you... Would it need to come from your org?
23:13: Alex Gluchowski
So let's start with the basics. We see the decentralization of all aspects, all components of the system as an absolute non-negotiable requirement. Remember, the mission of zkSync is to scale blockchains while preserving their core values, the core philosophy, the core valuable properties that we have there. These properties include decentralization, they include resilience. The only way to become resilient is to decentralize. Decentralization, paradoxically, is actually not a value in itself, it's a means to enable several important values. And it's a means to give your systems, your blockchain networks, credible neutrality. If all of the proof generation is happening on one cloud provider, or just on a few big cloud providers that completely control it, then they have this power... They can just always threaten to switch off your proof generation, your system, and then your blockchain just shuts down. So you will be forced to follow whatever orders, whatever subtle requirements they impose on you. And we want to oppose that. We want to build truly resilient systems. So you have to decentralize the sequencer, you have to decentralize the prover, and you have to decentralize also the development process and the community that watches over the system and guards... stewards the development, and points in which direction the system should be developed. And all of that brings us to the idea, or the document, called ZK Credo, which we published this summer.
24:57: Anna Rose
Yeah. I want to talk about ZK Credo, but right before that I just want to ask you what your thoughts are on the decentralized sequencer space or maybe even the shared sequencers. As a major ZK roll-up at this point, do you see yourselves working with one of those shared sequencers or do you imagine actually a decentralized sequencer on your side?
25:18: Alex Gluchowski
So as I said, we definitely will have a decentralized sequencer. In some form, we will have many chains as part of the bigger zkSync ecosystem. We call them hyperchains. The priority for us in building the ZK Stack that powers the hyperchains is the sovereignty of the chains: giving maximum freedom to all of these chains to decide their parameters, to decide their structure, configuration and so on. And this includes the sequencing. So they will be able to choose the same decentralized sequencing approach as zkSync Era. They will be able to use centralized sequencing. They will be able to opt in to shared sequencing schemes, to go for other providers. There are a number of talented teams working on the shared sequencer design space. So I think we'll see a lot of experimentation, and we'll definitely see different chains pursuing different strategies. It's not gonna be one-size-fits-all. It's gonna be some variety, some different trade-off space.
26:23: Anna Rose
Makes sense. Let's talk about ZK Credo. I actually found this, I wasn't sure what it was. I actually asked you before the interview, is this a mission statement? Is this a kind of guiding document? So yeah, what is ZK Credo?
26:38: Alex Gluchowski
So ZK Credo is a statement. It is a statement about the mission, philosophy, and values of the zkSync project. It's not bringing in something new. Those are the values we articulated in the first draft of the ZK Credo, and I don't think it's going to be the last draft. We have the community discussing it, getting involved, and it will be an evolving process. But what we set out to do is to articulate those principles in a very specific way, which can serve as the basis for the formation of the community of zkSync governance. Because we're building systems that have to be credibly neutral, we are striving to use math instead of relying on humans, on validators, on some centralized authorities or even on decentralized groups of people, because there will inevitably be some forms of politics, some subtle decision-making changes. We want to make them as neutral, as transparent, as immutable as possible.
And this actually works for blockchain systems. It doesn't really work well for evolving those blockchain systems. We are not yet at the point where the code can write itself. The systems do not evolve on their own. We don't have artificial general intelligence yet. They are being evolved and developed by people. And those people make subtle choices which might affect entire ecosystems, might affect specific applications, might affect specific groups of people. And so it's really, really important to have some guiding principles and a strong decentralized community that enforces those principles. For blockchains, I think we have a really nice analogy in the real world, which is called charter cities. It's this idea that you can go and create a new territory where no one lives, and you just write new rules of the game. And then whoever likes those rules can join, move in and start living and working there and build something interesting. And if you don't like them, you leave, and those rules can also define how the rules can be changed.
And so this enables you to experiment, to go and create a lot of different communities with these different approaches to different lifestyles, different values, different governance systems. And we see this in the world of blockchains. It's a little harder to do in L2 world than in the L1 world. Because in L1s you can easily fork a system always. In L2s the forkability is not really possible. You can migrate, but you cannot really fork the assets. If you have some ether, it's locked in one contract, it's gonna stay there.
29:39: Anna Rose
Unless the base chain is forked, which is a much bigger...
29:42: Alex Gluchowski
Unless the base chain is forked, but then everything is being forked, right? But even layer-1 forkability has its limits. So ideally we want to come up with a system which has some minimum common ground, because blockchains are for universal coordination. We're gonna have a lot of wildly different people participating in those systems, and they don't have to agree on everything. We only have to agree on the consensus of the blockchain state: that I own so much ETH, you own so much ETH, I send some transaction to you. It's objective. You can believe in completely different ideologies or economic systems or whatever, but we all agree on this objective consensus state. So we want to come together with this minimum form of governance that will enable us to move on, to iterate on the system design into the future, while not scaring off some groups of these people, while enabling them to fork away and have their own lives if they want to. And so the ZK Credo is this foundational, if you want, declaration of independence, or declaration of values, for the digital community that will form around the zkSync idea.
31:05: Anna Rose
In this case though, using the term fork kind of is confusing because you wouldn't actually be forking any particular L2 or zkEVM. You could create a new one, I guess is what you sort of mean, right? Like using the ZK Stack framework and you could create a new chain.
31:23: Alex Gluchowski
Correct. It's a little... The word itself is a little ambiguous, but let's illustrate this. So we're starting off with a single deployment of zkSync. Let's say zkSync Era is the first hyperchain in this universe of interconnected hyperchains. We should talk about that separately. So now if you like the rules of the system, you can just move in and participate. The rules are built in a way that give you sovereignty. Like the validators, the proof generators, whoever, like whatever stakeholders are in the system that are necessary for operation of the system, cannot mess with your assets. They cannot change the rules. They cannot rewrite contracts, rewrite state, or even prevent you from withdrawing your assets back to Ethereum or to some other L2, right?
So the upgradability of the systems is going to be done in a way that gives users a lot of time to withdraw if they don't like a new upgrade. If an upgrade is coming that changes the rules of the system in a way that you don't like, there must be a way for you to withdraw. Now, you will probably not want to withdraw to layer-1, because the cost of using layer-1 will become prohibitively expensive with time. Everyone will be living on L2s, L3s, some kind of scalability system. That's Ethereum's vision of the future. So you will want to withdraw to a different layer-2, and if there is no layer-2 that you see fulfilling the promise that you want, you can just take the code, fork it, deploy an instance of this new layer-2, and then invite your fellow citizens of this network state, whoever was using the first L2 in the first place, to join you and say, look, this new change that's coming is actually contradicting the values we stand for, and it's our obligation to prevent that. So we should all just vote with our feet and migrate to this new fork.
33:42: Anna Rose
Yeah, it's so interesting. So it's not forking in the way that we've understood blockchains in the past, but it is still forking in a way, right? Because would you be able to also maintain the state of what's the existing balances on that L2 if you did that?
33:56: Alex Gluchowski
Well, at the very least, you fork the code.
33:59: Anna Rose
Yeah.
33:59: Alex Gluchowski
The forkability of the code is the very essence of open source movement. Everything we do for zkSync is obviously full free open source for this very reason in order to enable this forkability. And then, forking the state, yeah, you cannot fork the state, but you can migrate. And it serves the same purpose. Kind of like, you move away. You create your own version of this universe, which you like more, and you just walk there.
34:26: Anna Rose
Let's actually dive into the ZK Stack, the hyperchains, because I want to understand how these chains interface and interact with one another. When you talk about walking away, I want to understand a little more what that means, or if you were to migrate. So ZK Stack. You sort of mentioned it a few times. Is it the Cosmos SDK equivalent, the OP stack equivalent? Like this is the builder library that anyone can use to deploy another zkEVM, zkSync?
34:53: Alex Gluchowski
This is correct. I would call it a framework.
34:56: Anna Rose
Okay, a framework.
34:57: Alex Gluchowski
Which is a ready-to-use complex of code that you can deploy to start your own hyperchain. We call them hyperchains for a specific reason. The difference between a hyperchain and some random roll-up is that hyperchains are hyperlinkable. We're using a technology called hyperbridges to connect hyperchains in a very interesting way, which enables what we call, surprise, hyperscalability: the ability of the system to grow indefinitely large, accumulating or absorbing as many users, as many transactions, as will be necessary for the growth of Ethereum. You want 1 million users? Sure. You want 10 million users? You can do that. You just keep adding blockchains, keep adding the systems, and it just grows. And hyperbridges use zero knowledge proofs and a specific architecture, which is a little hard to illustrate without video, but it enables you to move assets from any hyperchain to any other hyperchain without friction. Not adding security assumptions or trust assumptions, not adding any capital cost, doing all of that very fast, and not adding any footprint on the underlying layers, on Ethereum, for example.
36:26: Anna Rose
Is it similar at all to some of the other ZK bridge kind of technologies, like having a very compressed light client on one side basically communicating across these two hyperchains? Is it similar to anything maybe we've already heard about on the show?
36:44: Alex Gluchowski
This is similar, but it has a very, very big and important distinction. So let's try to illustrate it. Imagine that you have two systems, two roll-ups on Ethereum, for simplicity. And you want to move assets, native assets minted on Ethereum, from one of these roll-ups to the other. Let's call them roll-up A and roll-up B. Roll-up A has, let's say, 100 ETH locked into it. What does that mean? It means that there is a contract on layer-1 that governs the treasury of roll-up A, and it has a balance of 100 ETH. And there is a similar contract for roll-up B, also on Ethereum, which does not have this balance. So now, if you want to use zero knowledge bridging to move these assets from roll-up A to roll-up B, you first need some way to pass a message from roll-up A to roll-up B in a trustless way, and you can use those ZK bridges to do this.
So what you do is you make some commitment in roll-up A, this commitment is propagated through Merkle trees down to Ethereum, and then it can be read by roll-up B. So all of that works. One contract on one roll-up can talk to some other contract on the other roll-up completely trustlessly. So that's all good and great. But how do we actually get the 100 ETH to move from treasury contract A to treasury contract B? There's no way to do it. You cannot just burn and mint it there, because those are separate contracts. You have to move this Ether on Ethereum itself. You cannot have the treasury contract of roll-up B go to Ethereum and tell Ethereum, mint me 100 ETH. Where from? Are you a miner?
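The message-passing half of this can be sketched with a toy Merkle inclusion check (hypothetical helpers, not the real bridge interface): roll-up A commits its outgoing messages into a root that is posted to L1, and roll-up B accepts a message only with a valid inclusion proof against that root.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def verify_inclusion(root: bytes, leaf: bytes, proof: list, index: int) -> bool:
    # walk from the leaf up to the root, hashing with each sibling;
    # `index` says whether the current node is a left or right child
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root
```

Roll-up B never trusts roll-up A directly; it only trusts the root that Ethereum finalized, which is the trustless part described here. The remaining problem, as the conversation goes on to explain, is that a verified message alone cannot move ETH between two separate L1 treasury contracts.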
38:38: Anna Rose
Yeah, yeah.
38:38: Alex Gluchowski
So you need to somehow move... And this is a fundamental problem. What you have to do in order to enable hyperbridging is to have all of these chains share a single contract on layer-1 that manages the treasury of all of them in one place. And then you can use this magic ZK bridging to pass arbitrary messages between system contracts, and the system contracts can trust each other, because all of these hyperchains have to run the exact same circuits, at least for the system contracts. So some minimum viable shared set of circuits. And then they can instruct the treasury to release a certain amount of assets, because the instruction is coming from the system. So it's a consensus across all of these hyperchains that this actually happened, secured by math and cryptography.
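A toy model of the shared-treasury idea: instead of each roll-up holding its own ETH in its own L1 contract (so bridging would require an actual L1 transfer), one shared contract keeps per-chain balances, and a hyperbridge transfer becomes internal bookkeeping authorized by a verified cross-chain message. Entirely illustrative; `SharedTreasury` and its methods are not the real contract interface.

```python
class SharedTreasury:
    """One L1 contract holding funds on behalf of every hyperchain."""

    def __init__(self):
        self.balances = {}  # chain id -> ETH held for that chain

    def deposit(self, chain: str, amount: int):
        self.balances[chain] = self.balances.get(chain, 0) + amount

    def hyperbridge_transfer(self, src: str, dst: str, amount: int,
                             message_verified: bool):
        # `message_verified` stands in for checking the ZK proof that the
        # source chain's system contract really authorized this transfer
        if not message_verified:
            raise ValueError("unverified cross-chain message")
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient treasury balance")
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount
```

Because the funds never leave the shared contract, the transfer costs no L1 capital movement; only the bookkeeping and the proof verification touch Ethereum.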
39:33: Anna Rose
I want to just clarify it to see if I understand this, but like, so you have the two contracts that represent the two roll-ups. That's the original case. In this new case, is there like a third contract that manages the treasury of everyone? Or is there a joint contract that manages both of those roll-ups?
39:51: Alex Gluchowski
There is a joint contract that manages both of those roll-ups.
39:53: Anna Rose
Really?
39:54: Alex Gluchowski
Yes.
39:54: Anna Rose
Okay, but do they also... Like, just to check, does roll-up A still have its own contract, or is it just attached to this joint contract?
40:05: Alex Gluchowski
It can have its own contract for managing the state or maybe for some other activities, but all of the assets have to be in one shared contract.
40:15: Anna Rose
Got it. I don't know how some of the other ecosystems have developed, but does the OP Stack do something like that too?
40:23: Alex Gluchowski
They do exactly the same thing. So we pioneered this with our hyperchain vision a year ago, and since then we saw the OP Stack and the Polygon CDK embracing the same approach.
40:33: Anna Rose
I see.
40:34: Alex Gluchowski
So there will be a few of these big ecosystems that are perfectly seamless inside, like the zkSync hyperchains, the OP Superchain, something from Polygon, maybe something from others, that will be these big super networks of roll-ups that can easily talk to each other. But it's much more expensive, and requires more time and trust assumptions, to talk between those different ecosystems.
41:04: Anna Rose
Yeah, interesting. I mean, there are a lot of bridge projects like there's Axelar and Hyperlane who are doing kind of that interfacing between these different ecosystems.
41:16: Alex Gluchowski
They will still remain, and they will fulfill their role. They will be connecting these completely separate ecosystems. You can think of these ecosystems as countries. You have a country, maybe on an island, then you have different cities in that country. Each city is a roll-up, and you have a network of high-speed railways that connects them. You can move goods and people really fast inside, or maybe you can even think of it as one city, right? You can move things really seamlessly inside. But if you want to go to a different continent, or to a country overseas, you have to take a plane. And the plane is going to be necessarily more expensive, and it will take you longer to get there. So you're not going to be using planes, hopefully, just to get dinner in some place and then come back, right? But you will be using them when you have to move something big, once in a while. And planes are still important. We'll still have these big continents, countries, connected via planes, and I see those bridge systems more like this alternative, like the airports.
42:27: Anna Rose
I'm trying to figure out what it is in the case of the zkSync universe. There's a lot of small roll-ups all on Ethereum. But in this case, a lot of the value will just be moving in between these chains and not really touching the main chain, even though, yes, the original funds might be locked and held in that smart contract. It's all kind of happening under the hood.
42:51: Alex Gluchowski
Exactly.
42:51: Anna Rose
Is there a moment where actually you start minting native tokens on the L2 that aren't on the L1?
42:58: Alex Gluchowski
Of course, you will have a lot of native tokens.
43:00: Anna Rose
Yeah. And then I have a question about like, so in that case, then are you just using the Ethereum blockchain as your data availability space? Because at that point, you're not even using it for the financial, it's not holding the original funds or the original tokens.
43:16: Alex Gluchowski
Yes, all roll-ups and validiums are going to use Ethereum for a couple of functions. Number one is consensus on the state. So all of them will agree on what the final state is, what the sequence of transactions that have been executed is, and that's going to be final. Then you're going to use Ethereum as the source of security for the validity of your computations. All of the ZK systems are eventually verified by every single validator on the Ethereum network, by every single full node of the Ethereum network. This is how you ensure that the math is actually correct. Someone needs to verify it, and it's gonna be Ethereum. And in addition to that, roll-ups are going to use Ethereum for data availability. That's gonna be the most censorship-resistant source of data availability out there. You might have seen Vitalik's recent post about L2s, and he points to this specifically: that for high-value transactions, for high-value interactions, for example for your most valuable tokens, but also for things like your account private keys, you will be using Ethereum. You want this data to be completely censorship resistant and unlosable.
But beyond that, Ethereum's data availability bandwidth is going to remain limited. No matter how performant it gets, even if we get to full sharding, it's still going to have some limits. And we'll still need systems that can grow without limits. Those systems will be validiums. You will be able to extend things there: some of the hyperchains will have at least part of their state stored, or made available, through some external data availability solutions, either managed by themselves, or managed by external providers, or managed by decentralized systems like Celestia. There will be some way for them to offload data off-chain, growing indefinitely, absorbing arbitrary demand for Ethereum.
45:29: Anna Rose
Wild. You sort of mentioned that these hyperchains can be any systems. You could have kind of any language. It could be a Rust-based system. It could be... I don't know. Basically, if there are all these hyperchains with different language requirements, the app developers can actually deploy in native languages on these hyperchains, not always using Solidity and that EVM basis. That's correct, right?
45:57: Alex Gluchowski
Yes, that's correct.
45:57: Anna Rose
These hyperchains could be anything. I kind of want to bring the conversation a little bit back to languages, to ask you about a language that was created by Matter Labs a long while ago called Zinc, right? Is that ever going to make a reappearance, do you think? Like, that was also something around the time of RedShift. You were doing RedShift and you were doing Zinc. Yeah, I'm just curious if you see any further development on that, and if you could imagine a hyperchain that actually uses it, potentially.
46:25: Alex Gluchowski
No, we abandoned Zinc. It's not going to be developed further; it lost its justification. The original reason we created Zinc was that we needed a language for non-Turing-complete computation. And since we can now do fully Turing-complete stuff with the zkEVM, with RISC-V or WASM or other virtual machines, why would you use something Rust-like if you can just use Rust with all the tooling of Rust? It just doesn't make any sense.
46:58: Anna Rose
Do you imagine though maybe Matter Labs itself deploying a hyperchain with a different base language?
47:05: Alex Gluchowski
Ourselves, we're focusing on the core protocol, but I think that some hyperchains could be launching with languages that are better suited for smart contracts. I can think of Move, for example, the language developed for Facebook's Libra, later Diem, which was inspired by Rust but was actually optimized for making smart contracts more secure and easier to develop. If that takes off, I can totally imagine a hyperchain using Move or something else entirely. Sure, existing languages such as Rust were created for specific purposes, like systems programming: making sure that you have safe memory, using it for parallelism and so on. All of that does not really matter in the world of smart contracts. You want different elements of the language to support your development, to make it more expressive. So I can totally imagine that in the future we'll see that coming.
48:09: Anna Rose
You launched the zkEVM in February. Since then, have you seen a lot of adoption?
48:26: Alex Gluchowski
Oh, we have not just a lot of adoption; we are the most actively used L2 on Ethereum today.
48:34: Anna Rose
That's wild.
48:35: Alex Gluchowski
If you go to l2beat.com, just switch to activity. We're number four by TVL, but we're consistently number one by far, with 24 million average monthly transactions. Ethereum stands at 30 million, and all the other chains are at like 14-16 million and below. So there is a lot of growth happening, also in terms of protocols adopting zkSync and building entirely new stuff that was not done before, going in the direction of native account abstraction, which we haven't covered yet, but it's an important aspect of making blockchains usable by a mainstream audience. And we see a lot of projects that are expanding in this mainstream audience space. So we see things like Pudgy Penguins launching their NFT campaigns with Walmart, where millions of users will be able to just scan a QR code and get their NFTs, and connect the physical world with the virtual space, with the metaverse.
We see projects like the city government of Buenos Aires, Argentina, using zkSync for an ID system for its citizens, where they will be using blockchain to connect to goods and services and providers. We're seeing hyperchains being used for interesting mainstream use cases. So I think what contributed to this popularity of zkSync is, on the one hand, this focus on technological innovation, on being future-proof, building systems for the future: not focusing so much on being backwards compatible, but focusing on being future compatible and building with the end consumer in mind. And on the other hand, just this consistency with the mission, with the values. People know that they can rely on us. People know that when we say decentralization, we actually mean it. It's not a project that was created just to pump and dump, promote some idea and burn it in one year. We are here for a very long future. We, as Matter Labs, are really well funded, so we will be able to build everything we need for the final vision: to make this a reality where anyone in the world can access Ethereum in an affordable way, fully preserving the properties of Ethereum. And all of that is owned by the community; the network governance, the direction in which it's going, it's all controlled not by a single party, but by the broad community. So we've been very consistent with this messaging, and I think that's also contributed to where we are today.
51:22: Anna Rose
Do you think there is a moat? This is something I've sort of been curious about with a lot of the L2 projects. What do you think will keep people on zkSync? Because if it's EVM compatible, they can deploy elsewhere, right? They can do it on Optimism, they could do it on the new zkEVMs that are coming out. Yeah, what's the moat?
51:42: Alex Gluchowski
First of all, I don't care that much about the moat; I care about the mission. We have very specific criteria for when the mission is complete and what we need to do for this mission. If you ask me how I think development on Ethereum is going to play out, I think that you will see network effects accruing to these big hyperchain, superchain ecosystems. And that is gonna be what matters. They will not be bound to individual chains, but to these big ecosystems. But I mean, individual chains will also have their network effects, obviously, for instant composability with synchronous transactions and so on. But this access to users and liquidity from one hyperchain to all the other hyperchains, in just one click, without any capital or trust or security friction, is what's going to matter. So you will be measuring not just the TVL or transactions or users, active daily users, all the parameters by which we shine with zkSync Era today; you will not be measuring that on individual chains, but across these chain ecosystems.
52:56: Anna Rose
That's cool. What's your vision then for the future of this space? One of the big questions, and I kind of hinted at it before, is this idea that if you have the native assets on the L2s themselves, in your vision of the future, is Ethereum still at the center? Or does it look more like a mesh or a net, with more interconnection, where value has actually shifted a little further up the L2 stack?
53:23: Alex Gluchowski
Ethereum is definitely going to be at the center, as the backbone connecting all the L2s. I believe that the internet of value, the eventual form of Web3 that embraces all forms of value transactions in the world, will be in Ethereum L2s and L3s. It will be in these networks. I don't think it's going to be far outside of Ethereum. You will have a couple of projects outside of Ethereum, a couple of blockchains, still doing some things and being used for different purposes, but I believe that the bulk of the world's value transactions will be happening on Ethereum networks. Not on layer 1 itself: it is Ethereum's vision to use layer 1 as the fundament, the connection layer. It's not where the actual end-user transactions will happen, except for maybe some super, super high-value transactions.
54:19: Anna Rose
Yeah. In the ZK Credo, there is one term, which is privacy. And it's something we talked a lot about when we first met. And then I know that there was more of a focus on scaling. Do you see privacy kind of coming back for zkSync? Maybe it's always been there, and sorry if I... but at least in the messaging, it hasn't been as much of a focus.
54:40: Alex Gluchowski
Sure, it's not been at the forefront, but it's always been there. If you read the original introductory post about zkSync from five years ago, one of the key points there is privacy. Because you can't be using blockchains as Twitter for your bank account. You cannot have transactions where everyone in the world can see all the assets you have and all the people you interact with. In order to reach mainstream adoption, you have to implement privacy. The reason we've not been focused on privacy comes down to two things. One, scalability is a prerequisite for privacy. Specifically, the way zkSync and the ZK Stack are architected is going to make privacy-preserving transactions, recursive zero-knowledge proof verification on these chains, super cheap, because we will not need to use data availability from Ethereum, even on roll-ups, to verify zero-knowledge proofs. It's all going to be embedded; you will be paying just a fraction of a cent for this verification.
And this paves the path for privacy-preserving applications. But the second reason is that we, as an organization, as Matter Labs, as one of the contributors to the ZK Stack (and there are now more, and there will be more and more organizations and individuals contributing to this open source codebase of the ZK Stack), are embracing Ethereum's philosophy of subtraction. We don't want to become an empire like Google or Facebook or whatnot, with thousands of employees doing all kinds of different things. We want to focus on one problem, which is scalability, and do it really, really well. So what we're building is this internet protocol for this internet of chains. The rest should be done by other people. We don't want to be building wallets, we don't want to be building privacy extensions, we don't want to be building individual dApps, any kind of DeFi, NFTs, whatever, on top of zkSync. That is all for others to build, and we want to support them.
56:51: Anna Rose
Okay, so it's in the Credo more as guidance, but not as something that you're building currently. Like you're not going to build the privacy modules or the privacy hyperchains, but someone else could build, I guess, a potentially private hyperchain.
57:07: Alex Gluchowski
I don't see it coming for now, unless no one really builds it for a long period of time, in which case we will have to intervene. So eventually I want to see all the points from ZK Credo being built. Like actually live and being used by millions of people. That is success for us. I believe that other people will do it, but if no one builds, I don't know, like a fantastic wallet or a point of sale system or something else, we will have to help this happen. Maybe not in a way that Matter Labs builds it, maybe in a way that the zkSync ecosystem supports some teams with grants and funds the development and just helps the systems to be created. But eventually, all of that is going to be built. I can promise you this.
57:57: Anna Rose
That's cool. Are you also paying attention to coprocessors, or these other ways that on-chain and off-chain computation is happening, not really the traditional L2s? And I'm curious if you'll be interacting with any of that.
58:17: Alex Gluchowski
Sure. I think eventually we will see coprocessing being natively, seamlessly integrated into blockchain systems. So you will be able to just make a call to a smart contract and then execute arbitrarily complex functions on some public data set, plugging in external oracles that will provide access to this data and relying on some networks to do the coprocessing. That is gonna be very easy from the programmability point of view.
58:49: Anna Rose
Can you imagine building something like that within the ZK Stack? Or would you imagine that living somewhere else being kind of the third party groups that are putting it together?
59:00: Alex Gluchowski
For now, we're focusing on making the ZK Stack come to life with all the promises of the ZK Stack: with hyperscalability, connection between the hyperchains with hyperbridges, different data availability models like Validium and zkPorter. All of that has to work in a very, very smooth way, and with a fantastic user experience. All the extensions, like adding Rust programs, coprocessing, whatever, that is not the primary focus. The primary focus is to make the foundation work. Meanwhile, as we're approaching the completion of the foundation, we'll probably have a lot of projects working on those things, and we'll be happy to support them and integrate.
59:47: Anna Rose
Cool, cool. Is there actually a foundation for zkSync? Like, who gives the grants? Is it a treasury that's part of the governance of the actual network or is it some sort of existing organization?
Alex Gluchowski
It must be rooted in the governance system of the chain, if you want. So, obviously, as long as... Matter Labs is a centralized organization, it's just a private company. So we can give grants at our own discretion to whoever we want, and we are supporting some teams and doing some partnerships to accelerate the development. But I think you're asking about this network community governance.
Anna Rose
Exactly that. Like, following that question on privacy, I don't really imagine Matter going and funding a privacy project, given that it's not in the exact space that you're working in, like you just said. But yeah, is there going to be some larger treasury pool or something, so that the community can actually make decisions on what gets funded?
Alex Gluchowski
Yeah, I think that every big protocol has to come up with some form of governance that can also be financially sustainable over a long period of time. And this is a big challenge. We see a lot of experiments, but we don't see a final system that is perfect yet. No one has come up with anything that I believe is sustainable yet. So this is a big research challenge that we're still in.
Anna Rose
Interesting. Alex, thank you so much for coming on the show and sharing all of these updates on zkSync with us. I feel like this was a very overdue episode. Hopefully it won't take another two years for us to do one of these again.
Alex Gluchowski
Thank you, Anna. I really enjoyed the questions, and it's always fun to be here.
Anna Rose
Thanks. I want to say thank you to the podcast team, Henrik, Jonas, Rachel, and Tanya, and to our listeners: thanks for listening.