In this week’s episode, Anna catches up with Jordi Baylina, OG Ethereum contributor and Polygon zkEVM Technical Lead. They cover what Jordi has been working on since he was last on the show in 2021. Back then, zkEVMs were still just an idea. Now that many of these systems have launched, they have a chance to look at how these fit into the general L2 landscape.

They cover Jordi’s view on engineering decentralized systems and how these are rolled out, as well as recent research from Polygon, including their AggLayer proposal. They wrap up with what inspires him to keep contributing to the space.

Here are some additional links for this episode:

The next ZK Hack IRL is happening May 17-19 in Kraków. Apply to join now at zkkrakow.com

Gevulot is the first decentralized proving layer. With Gevulot, users can generate and verify proofs using any proof system, for any use case.

Gevulot is offering priority access to ZK Podcast listeners: register on gevulot.com and write “ZK Podcast” in the note field of the registration form!

Aleo is a new Layer-1 blockchain that achieves the programmability of Ethereum, the privacy of Zcash, and the scalability of a rollup.

Dive deeper and discover more about Aleo at http://aleo.org/


Transcript

00:05: Anna Rose:

Welcome to Zero Knowledge. I'm your host, Anna Rose. In this podcast, we will be exploring the latest in zero knowledge research and the decentralized web, as well as new paradigms that promise to change the way we interact and transact online.

00:26:

This week, I catch up with the legendary Jordi Baylina, Ethereum OG and Polygon's zkEVM Technical Lead. Last time he was on the show, we talked about the concept he had at the time for zkEVMs and how these would be built in the future. With this episode, we check in on what these systems look like now that they've been launched. I get Jordi's take on the general L2 landscape, decentralized systems, and how these are rolled out, the research that has come out of Polygon, and what keeps him inspired working in the space.

Now, before we kick off, I just want to let you know about an upcoming hackathon we are getting very excited about. ZK Hack Kraków is now set to happen from May 17th to 19th in Kraków. In the spirit of ZK Hack Lisbon and ZK Hack Istanbul, we will be hosting hackers from all over the world to join us for a weekend of building and experimenting with ZK tech. We already have some amazing sponsors confirmed, like Mina, O(1) Labs, Polygon, Aleph Zero, Scroll, Avail, Nethermind, and more. If you're interested in participating, apply as a hacker. There will be prizes and bounties to be won, new friends and collaborators to meet, and great workshops to get you up to date on ZK tooling. Hope to see you there. I've added the link in the show notes, and you can also visit zkhackkrakow.com to learn more. Now Tanya will share a little bit about this week's sponsors.

01:48: Tanya:

Gevulot is the first decentralized proving layer. With Gevulot, users can generate and verify proofs using any proof system, for any use case. You can use one of the default provers from projects like Aztec, Starknet, and Polygon, or you can deploy your own. Gevulot is on a mission to dramatically decrease the cost of proving by aggregating proving workloads from across the industry to better utilize underlying hardware while not compromising on performance. Gevulot is offering priority access to ZK Podcast listeners. So if you would like to start using high-performance proving infrastructure for free, go register on gevulot.com and write ZK Podcasts in the note field of the registration form. So thanks again, Gevulot.

Aleo is a new layer-1 blockchain that achieves the programmability of Ethereum, the privacy of Zcash, and the scalability of a rollup. Driven by a mission for a truly secure internet, Aleo has interwoven zero-knowledge proofs into every facet of their stack, resulting in a vertically integrated layer-1 blockchain that's unparalleled in its approach. Aleo is ZK by design. Dive into their programming language, Leo, and see what permissionless development looks like, offering boundless opportunities for developers and innovators to build ZK apps. This is an invitation to be part of a transformational ZK journey. Dive deeper and discover more about Aleo at aleo.org. And now here's our episode.

03:16: Anna Rose:

Today I'm here with Jordi Baylina, who has asked to be introduced as a casual developer, but is really an Ethereum OG and is a Polygon zkEVM Technical Lead, someone who's been in the space a really long time. Jordi, I'm so excited to invite you back to the show. Welcome.

03:31: Jordi Baylina:

It's a huge pleasure to be here, Anna.

03:33: Anna Rose:

The first time was in September…

03:56: Jordi Baylina:

Yeah, it's a long time ago.

03:57: Anna Rose:

…if anyone wants to check that out. In…

04:42: Jordi Baylina:

You recorded one day before the announcement or something like that.

04:46: Anna Rose:

The reason I wanted to have you back on the show was I really wanted to do a catch up with you. It's been almost three years since you've been on. And just going back to the topics that we were talking about at the time, like back then, L2s and zkEVMs were still kind of a figment of our imagination. This was like a future plan. These were not really real yet. They were still kind of being spec’d out. And I really thought it would be cool to catch up with you and learn what you make of everything that's developed, what's been happening on the Polygon side. Yeah, and just generally kind of catch up on all things ZK.

05:22: Jordi Baylina:

Yeah, that was a moment where ZK rollups were theoretically possible. I mean, we were already past the phase where, probably... I don't know, a few months before, they looked like impossible projects. They were theoretically possible, but nobody knew how to build them in a practical manner. And at that point we realized, okay, this should be practical. I mean, this should be possible and doable. And at that time, we started working on engineering this, putting everything that was theoretical into practice. And that was the point. I mean, it's like the moment where you think, okay, I have a solution for everything, but then you start working on it and you start to face hundreds of challenges.

06:06: Anna Rose:

Yeah. So this is what I want to hear about. I think maybe we can pick up the story from where we were. At that time, Iden3 and Hermez were kind of standalone entities. And as we learned right around that time, you merged them into Polygon. What's happened since? What was joining Polygon like? What did that open up for you? How did it change the vision for what you were building?

06:31: Jordi Baylina:

I mean, at that time, we had already started building the zkEVM, but Polygon changed a couple of things. The first is that it accelerated the process. Being in an organization like Polygon allowed us to hire a much better team and to put in all the resources to accelerate. It also gave us access to other teams. Here it's very important to mention the Zero team. At that time it was the Mir team, but it became the Zero team, and we borrowed a lot of the technology they were building: all these small prime fields, STARKs and so on. And also Bobbin, I mean Bobbin with Miden, was also very fundamental in getting STARKs the right way, let's say. We were already building STARKs, but Bobbin was a super expert at the time. So these two teams that were already part of Polygon were fundamental to accelerating and bringing the zkEVM to reality.

This was two years' work, more or less. It was hard work by a set of engineers working days and nights and solving challenges. I can tell you that from the engineering perspective, and even from the personal perspective, I enjoyed it a lot. It's probably the first time in a while that you do the work as an engineer. In general, when you are managing, you do a lot of things that are not engineering, but at that point, we were doing engineering. We were engineering the system and working through all the solutions. I think we were able to create an amazing team that was a mix of people who came from the blockchain world, from the ZK world, but also people who came from other industries and were excellent developers. I would say that the average age of this team was relatively high, especially by blockchain standards. Of course there was variety, but it was quite a mature team, and we started working very well together. And I would say this has been the key to the success of the project.

08:41: Anna Rose:

I want to sort of take it back to Hermez. Hermez was not a zkEVM, was it?

08:48: Jordi Baylina:

It was a rollup. It was a payments-only rollup, where the prover was a huge proof written fully in Circom. It was verified with Groth16.

08:57: Anna Rose:

I see.

08:57: Jordi Baylina:

But there we proved a set of... It was already a rollup, and a working rollup. And all the experience that we got building Hermez was fundamental for building the Polygon zkEVM, all the learnings. And it was really, really, really important... I mean, people think, oh, you just do a... When you put a system in production, there are a lot of things you need to take into account. And people who have never put anything in production cannot realize that, okay? There is a lot of engineering work to do there. We had a lot of experience on that side, and this was the starting point. Of course, the state of ZK at that point was Circom. The number of constraints was limited. Actually, we made a huge improvement in Circom, especially for big circuits like the Hermez one and how to build them, the machines. We did a huge amount of work on that front, but that was not enough for a zkEVM. And that is where, I would say, a new generation of provers started. It's a STARK-based proof with recursion, and we can talk a little bit about that, but that's what the zkEVM is about. It's a change in the generation of provers.

10:16: Anna Rose:

…my mind was just thinking of…

11:08: Jordi Baylina:

Yeah, I mean, I'm an engineer. So even before I was in crypto, my work was to put things in production. I was building things for real, for solving real people's problems. This is what an engineer is about. So for me, this is what I've been doing. You just go on and you ship things. Of course this can be a success, or... You ship things and you can mess up. But if you don't try it, you don't know what's going to happen; you need to ship things. And shipping things sometimes requires some sacrifices, maybe on the research part, because when you're putting something in production, at some point you need to cut and say, okay, this is what I'm going to launch. I'm going to go for that. And...

11:51: Anna Rose:

Yeah, you freeze it.

11:52: Jordi Baylina:

You need to freeze that. And for people who are very research-based, I mean, they want to learn more. They want to improve more, okay? And doing this, freezing things, okay? Let's just stop here. This is what we have. I know that we can do things much better, and new things are going to be discovered, but let's launch this. Let's freeze this, let's put this in production, and let's start solving problems with it. For me, this is important. This is part of my DNA, I would say, and it's about phases. Taking things that are in the research phase and bringing them to production, this has been my personal role, in my life in general, not only in this ecosystem. Everyone has their own role. There are people doing research, and they are doing amazing work in research. We owe them a lot. There are people who are more maintainers of the systems; once systems are in production, at some point they need to be maintained, kept, and improved. Okay, this is also a role, and it's an important role. And depending on your profile, you will fit better in one place or the other. But for me, this has been my place.

13:03: Anna Rose:

Have you seen, like, is there a phenomenon of engineers who cannot accept those sorts of compromises? What you're almost saying is, once you freeze it, there might be things you don't like about it, and you still have to ship it and be public. Do you know an engineer profile that has a lot of trouble doing that? Like, does that happen sometimes in our ecosystem, where people are kind of precious? They want it to be truly perfect, and it stops them.

13:30: Jordi Baylina:

Yeah, I think so, but this is... I mean, again, depending on the environment you are in, this can be a really good thing, but it can also be a really bad thing. So it depends where you are. If you are in a research group and your goal is to ship protocols, that's perfectly fine, but shipping or creating engineering products means much more than that. It means putting out things that are maintainable, things that are well documented, things that work well, that are safe, that are audited. It requires a lot of things that you need to take into account. And it's a learning process too, but you need to be focused on the product, on what you are shipping, on what problem you are actually solving. This is very important if you really want to build things that are useful for humanity. And don't get me wrong. It's super useful to create protocols. Actually, it's the base of everything. So all the pieces are important, but we need to understand what the different stages are and what the goals of the different stages are. And do not underestimate any person who's in one of these stages. I mean, sometimes there are some phases that are maybe less...

14:44: Anna Rose:

Known.

14:45: Jordi Baylina:

Well, less known, or maybe they have a somewhat worse reputation, I don't know. But for example, giving support to the users. This is a fundamental role. Absolutely fundamental, and the people... It's super important to have well-trained people and people who are motivated. And this is part of the product. If you don't have this, a product just cannot succeed. That's just one example, but again, we can talk about documentation, we can talk about maintenance, we can talk about auditing... There are many, many roles, many, many pieces that need to work perfectly. And if one of those doesn't work well, then the full system may suffer.

15:25: Anna Rose:

Back when we were talking... so actually in that episode, we did go quite deep into the zkEVM model, in that it was a SNARK and a STARK. In implementing that, I almost want to ask now, at what point did you freeze it? Because back then, I think you probably had a vision for it and you started to go out and build it. But were there changes that happened along the way? Like, did you use a different system in the end because you connected with Bobbin? Yeah, stuff like that.

15:55: Jordi Baylina:

Yeah, at some point we were... So there was very much a decision. Of course, here there is also trial and error, okay? Because you know what you want to build, and then you do things, you test things, and maybe you see, okay, this line doesn't work, and then you need to change, because what you're doing doesn't work. But I would say that in terms of big pieces, the last important change we made was switching from a 256-bit prime field to Goldilocks, a 64-bit prime field. This was an important factor in the performance limits of the system. And this probably was the last thing we changed. Of course, right now we are working on new things, and we are going to the next steps, okay? But that was the last freezing point.

16:41: Anna Rose:

How updatable is the zkEVM? Like, have you built it in a way that you can change the system under the hood quite easily? Or is it like a rewrite every time you'd want to do an upgrade like that? Or would you have to even almost redeploy it?

16:55: Jordi Baylina:

Well, actually it's just an upgradable smart contract. Right now it's limited with a timelock. Currently I think it's a 10-day timelock. It should be 30 days at some point. But again, from the technical perspective, you decide how you want to upgrade. The problem with upgrades is that they are quite dangerous and quite centralized. So here is the importance of governance systems for these upgrades, and of being sure that the users of the network are not affected, or at least that they have the right to exit before an upgrade.

Here, I would say that ZK proofs especially are still in a maturing phase. Of course, the maturity of the systems is much, much better than it was one year ago, but they still need to mature. We're still learning. I mean, we're still finding some bugs. And because of that, we need to set some protections. These protections are, if you want, somewhat centralized systems, things that you'd rather not have, but for security reasons, you have to have them there.

18:06: Anna Rose:

Do you mean like committees and stuff like that?

18:09: Jordi Baylina:

Yeah, there are committees, and there are even escape hatches and things like that. I mean, they're not nice from the decentralization perspective, or from the idea of what we are building: an unattended system that just works, that everybody wants to use. It's a common good. This is the final goal. But we are a little bit far from this ideal scenario. So it's a path. The idea is to walk that path, and there is no other way to do it than getting maturity into the systems, and we're improving a lot.

18:43: Anna Rose:

I want to talk about some of those systems. So I know that your team is focused mostly on the zkEVM side, so really the engineering. But from the Polygon group, there's been Plonky2 that's come out, Plonky3 that's come out. More recently, there's Circle STARKs. There's been work, I think, just generally on STARK-like constructions. At what point did those research pieces become something you could imagine implementing? And are any of them in the works, I guess, is kind of a follow-up to that.

19:11: Jordi Baylina:

Yeah, yeah, I mean, clearly we are taking advantage of all this research that these groups are doing. Just to mention a few of them: we are working a lot on what we call VADCOPs. VADCOP stands for variable-degree composite proofs. The idea is that instead of having a huge monolithic proof, you divide the proof into sub-proofs, and with aggregation, using the recursion part of ZK, we convert all these sub-proofs into a single proof. This is much more modular, and it allows variable sizes, because you can aggregate as many as you want. So this allows us to build infinite circuits, I mean, circuits that can be as big as you want. You are not limited by a number of constraints. You're just aggregating different proofs. And also, you use the polynomials much better, because they don't all need to be exactly the same degree. So you have a lot of advantages there.

Here, for example, there is research from the Polygon team, but also from other people in the STARKs world, on connecting GKR to STARKs, along the lines of the idea I just described. This is an interesting thing that's happening. There is also the work that Polygon Zero is currently doing on Plonky3 and 32-bit small prime fields, the Mersenne prime and all these Circle STARKs, which looks really promising. There is also the Binius system, which looks very promising too. Again, we need to see how everything fits together. But there is a lot of research and a lot of interesting things happening. Of course, they are evolving and making these proof systems even faster. But what I can tell you is that the real revolution in ZK is not going to come... I mean, with all these systems, with all these new protocols, you can get, I don't know, maybe a 20% speedup, a 50% speedup, a 10% speedup. Some of them may not even pan out. But I would say the big breakthrough is going to come from hardware acceleration.
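
To make the VADCOP-style aggregation concrete, here is a minimal Rust sketch of the pattern Jordi describes: many variable-size sub-proofs folded pairwise by recursion into a single root proof. Everything is mocked (a struct stands in for a real STARK, and a plain hash stands in for a circuit that verifies two inner proofs); this is purely illustrative, not Polygon's actual implementation.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Mock stand-in for a proof: just a commitment to what it proves.
#[derive(Clone)]
struct Proof { claim: u64 }

fn hash_pair(a: u64, b: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (a, b).hash(&mut h);
    h.finish()
}

// "Prove" one sub-computation (in reality: a STARK over one sub-trace).
fn prove_segment(segment_id: u64) -> Proof {
    Proof { claim: segment_id }
}

// Recursively fold two proofs into one (in reality: a circuit that
// verifies both inner proofs and emits a single outer proof).
fn aggregate(a: &Proof, b: &Proof) -> Proof {
    Proof { claim: hash_pair(a.claim, b.claim) }
}

fn main() {
    // Any number of segments, not limited by one circuit's constraint
    // count, which is the point of the composite-proof design.
    let mut layer: Vec<Proof> = (0..5u64).map(prove_segment).collect();
    while layer.len() > 1 {
        layer = layer
            .chunks(2)
            .map(|c| if c.len() == 2 { aggregate(&c[0], &c[1]) } else { c[0].clone() })
            .collect();
    }
    println!("single root proof claim: {:#x}", layer[0].claim);
}
```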

21:22: Anna Rose:

Ooh, interesting. Okay.

21:25: Jordi Baylina:

So if we really want to, I don't know, 100x speed up in the ZKs...

21:30: Anna Rose:

It's going to be hardware.

21:31: Jordi Baylina:

I don't know. Maybe tomorrow somebody invents something really cool and...

21:36: Anna Rose:

Changes your mind.

21:38: Jordi Baylina:

Exactly. But it does not look like that will happen. What I see really giving one or two orders of magnitude of improvement is hardware acceleration. There are some interesting projects working in that direction, building ASICs, building specific processors for this. Of course, hardware takes time, and it also requires a bit of stability, at least in the primitives of ZK, but here it clearly looks like the primitives are becoming quite stable. Here it's merkelization, NTTs, and MSMs in the case of SNARKs; in the case of STARKs, less so the MSMs. These three primitives are really important, and we are starting to see hardware that focuses on them. And this is where probably the next evolution of ZK will happen in the coming years.

22:32: Anna Rose:

I like that you're saying that. I also want to quickly throw to a few episodes that we just did actually in the last six months. So we had Ulrich on talking about the Mersenne 31. We did an episode on Binius. We've done a few episodes on hardware acceleration over the years and recently we did one with Ingonyama. I think in that we also mentioned that there's now at least the first sort of ASIC prototype. I don't know exactly what it's for. In what you're seeing in hardware, is it mostly hardware for SNARKs or is it also hardware for sort of the kinds of constructions that you've made?

23:05: Jordi Baylina:

Yeah, probably the first thinking was for SNARKs, because they were the standard ZK provers a few years ago. But now STARKs are becoming the standard. You see that most of the projects are moving to STARKs. So hardware is following this trend. And yeah, clearly the big acceleration will come from hardware. Also, STARKs have a lot of advantages for hardware acceleration. Hashing functions are in general faster to implement, and you can work with small prime fields, which is very good for hardware. You can even work with binary fields; this is what Binius is very much about. You have a lot of options. And this starts to scale much, much better in the hardware world. The problem is that hardware, doing an ASIC, takes a while. You cannot do an ASIC in a week.

24:00: Anna Rose:

Yeah, a couple of years.

24:01: Jordi Baylina:

Yeah, so that's, we need to see. But that's clearly gonna change the way we write the ZK circuits.

24:08: Anna Rose:

Back in the day, there was this real battle between the SNARK world and the STARK world. And then I know projects like yours were some of the first to really start to incorporate... You use both, so you're kind of like, there's a SNARK that then uses a STARK, I think, right? Both of those things are being used. Which one was it actually? Was the STARK the thing that does the recursion at the end?

24:27: Jordi Baylina:

I would say the big breakthrough of STARKs is that recursion is very, very natural. You can do recursion with SNARKs, but it's harder. You end up doing these folding schemes, and you need to work with different curves, and there are these chains of curves. It's not as natural as with a STARK. With STARKs, you do recursion really fast. And this is super important, because recursion enables a lot of things. The real revolution of these new zero-knowledge systems is recursion, and this is what STARKs are excellent at. And the other thing that actually changed the balance a lot was the introduction of these small prime fields. So the combination of recursion plus small prime fields, this actually wins a lot. They just won the battle between SNARKs and STARKs, which at some point was in doubt. I would even say there were some reactions from the SNARK side, if we say it's a competition, which it absolutely is not. They are just technologies.

25:38: Anna Rose:

Yeah, yeah, but there was a time, right? There was a time where it was like camps that sort of claimed one of these things.

25:44: Jordi Baylina:

Yeah, yeah, yeah. People like these kinds of competitions and so on, but in our case, for example, we use both. The idea is that we're using STARKs for everything: for doing the basic proofs, for recursion and so on. But SNARKs are very good for short proofs. If you want a short proof, and if you want to verify it cheaply on a blockchain, then you want to build a SNARK, I mean, an elliptic-curve-based proof. So what we did is, okay, let's build everything on STARKs, with recursion and speed and all those capabilities. And just at the last moment, we convert this... we prove the STARK with a SNARK. And then we have a SNARK, and that's what we verify on-chain. So actually we just use the best of each technology. So I would not say it's a competition.
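
Since this exchange is exactly the "prove the STARK with a SNARK" pipeline, a tiny sketch may help. Both proof objects are mocked as plain structs, and the byte sizes are made-up orders of magnitude, not measurements of any real prover; the transcript later confirms Polygon uses Plonk for the final wrap.

```rust
// A fast-but-large STARK proves the whole execution; a small
// elliptic-curve SNARK then proves "I verified that STARK", and only
// the SNARK is posted on-chain. All numbers are illustrative.
struct StarkProof { size_bytes: usize }
struct SnarkProof { size_bytes: usize }

fn prove_execution_stark() -> StarkProof {
    StarkProof { size_bytes: 200_000 } // STARKs: fast to produce, big
}

// Stand-in for a circuit that verifies the STARK inside a SNARK.
fn wrap_in_snark(_inner: &StarkProof) -> SnarkProof {
    SnarkProof { size_bytes: 800 }     // SNARKs: small, cheap to verify
}

fn main() {
    let stark = prove_execution_stark();
    let snark = wrap_in_snark(&stark);
    println!("posted on-chain: {} bytes (instead of {})",
             snark.size_bytes, stark.size_bytes);
}
```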

26:29: Anna Rose:

I think to put it to bed, the SNARK versus STARK competition, who won? Both.

26:34: Jordi Baylina:

Yeah, I mean, each one is useful for its own thing, but for doing these big circuits, STARKs have probably been much, much superior. I mean, for these big circuits, there was a reaction with Nova, and here, for example, the Ethereum Foundation had a lot of interest in Nova.

26:55: Anna Rose:

This is the IVC like folding stuff.

26:57: Jordi Baylina:

Yeah, all these folding schemes and all that. But even with the performance of these systems being much, much better than traditional recursion in SNARKs, they cannot beat... At least at this point, they cannot beat...

27:12: Anna Rose:

STARKs.

27:13: Jordi Baylina:

STARKs.

And mainly it's because of the underlying primitive: even if you are doing folding, you will end up doing a multi-scalar multiplication. And a multi-scalar multiplication is always going to be slower than a hashing function.

27:27: Anna Rose:

Yeah, and that's that MSM that you mentioned earlier, the multi-scalar multiplication?

27:31: Jordi Baylina:

Yeah. That's the main reason why elliptic curve systems are in general slower than hash-function systems. On the other side, elliptic curves have arithmetic structure. With hashes, you cannot do anything; you cannot add them. But with an elliptic curve, you have some algebra there. So there are some systems that are trying to use these algebraic properties of elliptic curves to get better. The basic thing is: you have a big circuit, you have a big witness, and you need to run through all of this witness, okay? And what's the basic function you are using there? In the case of STARKs, it's just a merkelization, an NTT and the merkelization, in a small prime field. With an elliptic curve, you need to go to a 256-bit field and you need to do an MSM, which is by definition slower. Again, here, hardware can change things, because...
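
For readers curious what "small prime field" means in code: the Goldilocks prime mentioned earlier, 2^64 - 2^32 + 1, fits in one 64-bit machine word, so field arithmetic needs only native integer operations instead of multi-limb 256-bit arithmetic. A minimal sketch follows; the naive u128 modulo is for clarity, while real provers use a specialized reduction exploiting the shape of this prime.

```rust
const P: u64 = 0xFFFF_FFFF_0000_0001; // Goldilocks prime: 2^64 - 2^32 + 1

// One widening multiply plus one reduction, all in native words.
fn mul_mod(a: u64, b: u64) -> u64 {
    ((a as u128 * b as u128) % (P as u128)) as u64
}

fn add_mod(a: u64, b: u64) -> u64 {
    ((a as u128 + b as u128) % (P as u128)) as u64
}

fn main() {
    let a = 0xDEAD_BEEF_u64;
    let minus_one = P - 1;                  // -1 mod p
    assert_eq!(mul_mod(a, minus_one), P - a); // (-1) * a == -a mod p
    println!("a * (-1) mod p = {:#x}", mul_mod(a, minus_one));
    println!("a + (-1) mod p = {:#x}", add_mod(a, minus_one));
}
```

By contrast, a 256-bit curve field element needs four such words per value and many word multiplications per field multiplication, which is one concrete reason the MSM-based systems are slower.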

28:27: Anna Rose:

It's almost like if the SNARK hardware comes out sooner, maybe SNARKs reach sort of like... They sort of reach parity with STARKs.

28:35: Jordi Baylina:

Yeah, but I don't think so. I mean, essentially the computation you need is smaller with STARKs than with SNARKs.

28:45: Anna Rose:

…what you make of it. Back in…

29:38: Jordi Baylina:

Our biggest contribution here was the architecture we used for building the zkEVM. We built a processor, a processor built in a STARK. A processor means a processor with a program written in assembly that runs on this processor. This was a processor that was tailor-made for the EVM, that supports the zkEVM. So we made all the modifications we needed to this processor in order to optimize the computation of the EVM circuits. But the fact that... Okay, I don't know if it was the first project. But...

30:22: Anna Rose:

Pioneers in a way.

30:23: Jordi Baylina:

Probably. I don't know about other projects, and it's very difficult to say, but we just took this idea. So instead of building a circuit, we built a circuit that was a processor, with memory, with ROM, with coprocessors. And we engineered it like an electronic system, of course with ZK. We didn't have transistors; we had polynomials instead of transistors. But we built the processor, we wrote an assembly, we wrote the code that runs on that processor, and we built the coprocessors required for building the zkEVM. And this architecture has been proven to be the right architecture.

Right now everybody's doing this. We're talking about ZK processors in different ways, with different strategies, but this was probably our biggest contribution. It was something that was theoretical. I didn't invent ZK processors, but we took this idea and actually built a full system, a full operational system, that implements an EVM based on this idea. That was an important breakthrough. And doing it this way, you have the flexibility to build the full EVM. The EVM is really complex. It was not designed to be built in ZK. But doing things this way allows you to build... I mean, if you can build an EVM, you can build anything.
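
A toy illustration of the "processor built in a STARK" idea: a fixed program (the ROM) runs on a minimal stack machine, and every step appends a row to an execution trace, which is the table a STARK would then arithmetize and prove. The opcodes, registers, and trace layout below are invented for illustration and are not the Polygon zkEVM design.

```rust
#[derive(Clone, Copy, Debug)]
enum Op { Push(u64), Add, Mul, Halt }

// One row of the execution trace: the witness a STARK would constrain.
#[derive(Debug)]
struct TraceRow { pc: usize, op: Op, stack_top: Option<u64> }

fn run(rom: &[Op]) -> Vec<TraceRow> {
    let mut stack: Vec<u64> = Vec::new();
    let mut trace = Vec::new();
    let mut pc = 0usize;
    loop {
        let op = rom[pc];
        match op {
            Op::Push(v) => stack.push(v),
            Op::Add => { let b = stack.pop().unwrap(); let a = stack.pop().unwrap(); stack.push(a + b); }
            Op::Mul => { let b = stack.pop().unwrap(); let a = stack.pop().unwrap(); stack.push(a * b); }
            Op::Halt => { trace.push(TraceRow { pc, op, stack_top: stack.last().copied() }); break; }
        }
        trace.push(TraceRow { pc, op, stack_top: stack.last().copied() });
        pc += 1;
    }
    trace
}

fn main() {
    // The "assembly program" burned into the ROM: (3 + 4) * 2.
    let rom = [Op::Push(3), Op::Push(4), Op::Add, Op::Push(2), Op::Mul, Op::Halt];
    for row in run(&rom) { println!("{:?}", row); }
}
```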

31:51: Anna Rose:

And we've seen that, actually. We're seeing more of that, like Rust... Full Rust, you know?

31:57: Jordi Baylina:

Actually, yes.

31:58: Anna Rose:

RISC-V, yeah.

31:59: Jordi Baylina:

RISC-V, SP1, this is something we can talk about. This is probably the future of the... ZK goes in that direction.

32:07: Anna Rose:

Yeah, I think even, like, there's VMs, there's ZK rollups, there's the Move language, there's Cairo with its own VM. There's like a whole thing.

32:17: Jordi Baylina:

Yeah, Cairo is kind of a language processor on that.

32:20: Anna Rose:

I'm sorry, yeah, it's not the VM. What's the VM of Cairo? Is Starknet the VM?

32:26: Jordi Baylina:

Yeah, something like that. I don't know.

32:28: Anna Rose:

I don't know where the VM is in that stack, but OK.

32:31: Jordi Baylina:

But it's... No, no, but in the end, you have some sort of VM that is executing these basic instructions on these basic processors. So this idea is fundamental. And taking it further, this is what SP1 and RISC Zero are doing. It's, okay, let's go further: let's not build a specific processor, let's build a generic processor, and then put Rust on top of that, and then you have everything. This is kind of the holy grail. The problem is: how optimal is that? It has a lot of advantages; at that point, to write ZK circuits, you just use Rust, okay? So this is super interesting, and this is the future of it. The problem is how optimal it is. And here I would say two things are important.

One is, of course, hardware acceleration. This is where all these RISC-V SP1-style systems can really take off, because you will not care about optimization. The systems will go so fast that you just write it in Rust, you run it, and that's it. And meanwhile, there is this idea of coprocessors. So for certain specific functions, you have a Keccak coprocessor, an arithmetic coprocessor, a SHA-256 coprocessor, an ecrecover coprocessor, or an elliptic curve coprocessor. You can have different specific functions, functions that you use many times inside the processor, and each is like a special opcode of your processor that does something specific. So probably it's going to be this.

So the next stage will probably not be one single processor; it's going to be one processor with many coprocessors. So designing these coprocessors, or having a toolset of standard coprocessors for different pieces that you can join together, this is where the future of ZK is going.
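
To sketch the coprocessor pattern: the main processor delegates a heavy primitive to a dedicated unit that keeps its own log of (input, output) pairs. In a real system, that log would become a separate table proven by its own circuit and tied to the main trace, typically by a lookup argument. The trait and names below are hypothetical, not any project's API.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical interface: each heavy primitive gets its own circuit;
// the main processor only records calls into it.
trait Coprocessor {
    fn name(&self) -> &'static str;
    fn execute(&mut self, input: &[u64]) -> u64;
}

struct HashCoprocessor { log: Vec<(Vec<u64>, u64)> }

impl Coprocessor for HashCoprocessor {
    fn name(&self) -> &'static str { "hash" }
    fn execute(&mut self, input: &[u64]) -> u64 {
        let mut h = DefaultHasher::new();
        input.hash(&mut h);
        let out = h.finish();
        // This log becomes the coprocessor's own trace/table.
        self.log.push((input.to_vec(), out));
        out
    }
}

fn main() {
    let mut hash_coproc = HashCoprocessor { log: Vec::new() };
    // The main VM would hit this as if it were a single special opcode.
    let digest = hash_coproc.execute(&[1, 2, 3]);
    println!("{} coprocessor output: {:#x}, {} logged row(s)",
             hash_coproc.name(), digest, hash_coproc.log.len());
}
```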

34:30: Anna Rose:

Yeah, that's so interesting to think of it from that perspective. We've done episodes with RISC Zero, I think in October, and we did a recent one with Succinct, who released SP1.

34:40: Jordi Baylina:

Yeah, I mean, SP1... People may not know, but SP1 is fully built on top of Plonky3. So I would say that the core of SP1 is very much Plonky3, and here we are collaborating and working together with SP1, very much working in that direction. Because this is clearly the future of ZK systems.

34:59: Anna Rose:

And I think it sort of follows the same story that you had, which is the realization that for certain parts of these systems, you will need a STARK, because RISC Zero always had sort of a STARK angle. And I guess SP1 will now, too. And in the case of those coprocessors, is there actually also a SNARK component? Or is it similar to what you've done with the zkEVM, where it's like many STARKs recursed and then wrapped in a SNARK, but just that what's going into those STARKs is not EVM bytecode, but rather the RISC-V instruction set?

35:32: Jordi Baylina:

Yeah, I mean, these processors require a couple of, well, let's say, different pieces. One is what's called continuations. Continuations means, okay, you have code that executes, but you want your code to keep going. You want a long program to run. So you need a way to take one execution, another execution, another execution, one after the other. You have different proofs, and then you want to pack them together into a single proof, okay? This idea of continuations exists because you are executing something long, and you may even want to compute all these proofs in parallel on different systems, while the result is one huge computation proven in a single place. So there are different techniques for doing continuations, which in the end come down to recursion. And again, if you then want to verify this computation on Ethereum or on a blockchain, you will probably want a small proof, a proof that's cheap to verify. So you take this final proof and convert it to a SNARK. Okay?
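
A minimal sketch of continuations as described here: a long run is cut into segments, each segment proof commits to its input and output state so segments can be proven in parallel, and a later recursion step only has to check that the states chain up. All types are illustrative stand-ins, not a real prover's API.

```rust
// The machine state carried across segment boundaries (mocked as one word).
#[derive(Clone, Copy, Debug, PartialEq)]
struct State(u64);

// A segment proof commits to (state_in, state_out) for its slice of steps.
struct SegmentProof { state_in: State, state_out: State }

// Stand-in for running `steps` VM steps from `state_in` and proving them.
fn prove_segment(state_in: State, steps: u64) -> SegmentProof {
    SegmentProof { state_in, state_out: State(state_in.0 + steps) }
}

// The recursion layer's job: adjacent segments must agree at the seam.
fn chain_is_consistent(segments: &[SegmentProof]) -> bool {
    segments.windows(2).all(|w| w[0].state_out == w[1].state_in)
}

fn main() {
    let mut state = State(0);
    let mut proofs = Vec::new();
    for steps in [1000u64, 1000, 500] {  // variable-length segments
        let p = prove_segment(state, steps);
        state = p.state_out;
        proofs.push(p);   // each could be proven on a different machine
    }
    assert!(chain_is_consistent(&proofs));
    println!("final state after all segments: {:?}", state);
}
```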

36:30: Anna Rose:

To a STARK?

36:32: Jordi Baylina:

To a STARK, no, to a SNARK. So you have a STARK and then you convert it to a SNARK. I mean to...

36:36: Anna Rose:

At the very end.

36:36: Jordi Baylina:

In our case we are using Plonk.

36:38: Anna Rose:

Okay, okay.

36:39: Jordi Baylina:

But you can use Groth16, or Plonk, or other elliptic-curve-based systems, whose biggest properties are that the proof is really small and cheap to verify on-chain. And the other piece, of course, is going to be these specific opcodes or precompiles, these specific coprocessors, which will be important, especially at the beginning. Maybe when hardware acceleration gets fast enough, all this will become less important. But I would say that in the next generation, all these coprocessors are going to make a lot of difference in making these systems really practical.

37:22: Anna Rose:

I mean, there's also a lot of coprocessor projects out there, so maybe that's good.

37:26: Jordi Baylina:

Yeah, I mean, that's the cool thing about contribution and open source. SP1 just opened this up, and people are writing these coprocessors and so on. We just want to remind people that all these coprocessors need to be audited. You need to do it the right way, but that's the way.

37:44: Anna Rose:

I feel like we used to see this image of the mainnet Ethereum with these rollups that came off it. With the coprocessor, it actually adds this different landscape to what's around this mainnet. I'm curious how you imagine all of these tools working together. Do you think that eventually these tools are, for example, connected by a shared sequencer or connected by an AggLayer or there's like a mesh of bridging happening between all of these elements? Can you paint a bit of a picture of what you see coming in that regard?

38:18: Jordi Baylina:

I remember at the beginning, in…

38:30: Anna Rose:

Sharding, exactly.

38:31: Jordi Baylina:

…sharding, all the sharding, but…

38:38: Anna Rose:

The execution layers of Ethereum.

38:40: Jordi Baylina:

Execution environments and I mean... But even, I mean, execution environment even was a...

38:45: Anna Rose:

It wasn't a concept.

38:45: Jordi Baylina:

It wasn't a word at that time. But the full idea is: okay, we have one Ethereum, we can have many Ethereums, they can run in parallel, and they communicate with each other. This is the original Ethereum design. This was the Serenity phase at the time, and this sharding. Here, two important things happened in between. The most important is the separation between data availability and the execution environment, or validity proofs if you want. So you prove the execution on one side, and you keep the data available on the other side. And Ethereum right now is focused very much on data availability; that's what they are doing. And then ZK gives us all these processors, and we are starting to build these processors. The first consequence is that these processors don't need to be uniform. They can be different processors, and you can have different chains, and these layers can be very different. Each one can have its own token, each one can have a different data availability model, they can have a centralized sequencer or a decentralized sequencer, some of them... One can be a VM; sorry, some of them can be things that are not even a VM. You can do maybe zkWASM, or Miden-like, or designs that work in a different way. So you already have different processors that can all run on top of Ethereum.

But the next challenge is, okay, you have a lot of processors, a lot of chains running on top of Ethereum, but how do you connect them with each other? How do they interact? How do you build this world where a zkWASM private rollup connects to a zkEVM, or to Miden, or to an Optimistic rollup? How do they connect with each other? And here is the whole idea. This is the main idea of the aggregation layer that most of the teams are somehow building. Until now, we didn't have processors, so talking about how to connect the processors when you don't have any looks, okay, very theoretical. But now we are starting to have these processors, so we are starting to worry about that. And here at Polygon, our proposal is the aggregation layer. It's this piece where you can have different chains with a single bridge, a unified liquidity bridge, a place where you put all the liquidity. And then the idea is how you can do transfers between systems, but with the guarantee that the systems are correct, because there is a zero-knowledge proof that guarantees that what you're doing is correct.

This is the big difference, for example, with an Optimistic rollup. A ZK rollup gives you this immediacy. You generate the proof, you show the proof, and you are there. You don't need to wait for anything else. And here, of course, from the ZK perspective, having low-latency ZK becomes really important. So this is where zero knowledge takes on a lot of importance: this bridging, this inter-rollup communication, even inter-Layer-1 communication. In our case, it's this inter-rollup piece. And here is how I see the world: okay, this works within Polygon, but what happens with things that are outside Polygon? What happens with the people working with Optimism, or with Arbitrum, or with zkSync, or even with Starknet? How do you connect? The way I see it, it's a matter of friction. So we have the...

The first level is you are on the same chain. If you are on the same chain, you have full composability. You are there, okay? The next level is you are in the same constellation; in this case, you are in the same aggregation layer. Here we can expect that transferring between chains can be on the order of a few seconds. I don't know, three, four, five seconds per transfer. So it's relatively low friction. If you want an inter-constellation transfer, then you will have to go through the layer 1. And with the layer 1, we are talking about maybe tens of seconds, or even a minute or a few minutes, each way.

42:46: Anna Rose:

You're talking about like writing to Ethereum and then writing back out to another constellation.

42:50: Jordi Baylina:

Yeah, if you need to wait for finality. I hope that Ethereum improves finality at some point. Finality is relatively bad currently. I think it's two epochs or something like that, so it's between six and twelve minutes. But I think this can be improved at some point. In any case, going through a layer 1, you have a little bit more friction. And even beyond that, you can have inter-layer-1 connections. I don't think Bitcoin, but probably between Solana and Ethereum, or between Ethereum and other layer 1s, there could also be some... Let me not call them bridging systems; let me call them double-pegging systems. Because when we talk about bridges, sometimes we see these multi-sig things... I'm talking about systems that are truly...

43:32: Anna Rose:

They're closer almost, right? They're more interconnected.

43:35: Jordi Baylina:

Yeah. I mean, they don't depend on anybody. They are just protocol-based. You don't need to trust anybody as part of the protocol. It's a double peg. I mean, this is a good...

43:46: Anna Rose:

What's the word you're saying there?

43:49: Jordi Baylina:

Double peg. I mean, this is the double pegging.

43:50: Anna Rose:

Double pegging?

43:51: Jordi Baylina:

Yeah, double pegging. The idea is that you can lock funds in one network, and then you start using them on the other side. And then you can burn them on the other network and unlock them on the first one. All of that is very much a bridge, but without a multi-sig. Because, for example, if you lock funds on Bitcoin, okay, who has the multi-sig? Somebody has the keys to the funds you are locking. So can you do that purely in protocol? Well, if you are Bitcoin, you probably need some fork, and it's going to be complex. But if you are two Ethereum networks, or maybe Solana and Ethereum, or two other layer 1s, you may want to do that. And that's definitely possible. And for this bridging, ZK is also very, very important, because you can build these systems, okay? This path is going to have more friction. But even with different friction levels, you will have a single space where all the chains connect with each other. And we are designing that.
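
The lock/mint/burn/unlock cycle Jordi describes can be sketched as a small state machine. The proof check below is mocked with a function that always accepts; in a real double peg it would verify a ZK proof of the event on the other chain. The invariant to notice is that the amount locked on one side always equals the amount minted on the other.

```rust
#[derive(Debug)]
struct Chain { name: &'static str, locked: u64, minted: u64 }

// Stand-in for verifying a ZK proof that an event happened on the
// other chain. Mocked: a real system would run a proof verifier here.
fn verify_proof_of_event(_proof: &str) -> bool { true }

fn main() {
    let mut a = Chain { name: "A", locked: 0, minted: 0 };
    let mut b = Chain { name: "B", locked: 0, minted: 0 };

    // 1. Lock 10 units on A; 2. prove it to B, which mints a pegged 10.
    a.locked += 10;
    if verify_proof_of_event("proof: locked 10 on A") { b.minted += 10; }

    // 3. Burn 4 on B; 4. prove it to A, which unlocks 4.
    b.minted -= 4;
    if verify_proof_of_event("proof: burned 4 on B") { a.locked -= 4; }

    println!("{:?}", a);
    println!("{:?}", b);
    // No custodian holds keys; the peg is enforced by this invariant.
    assert_eq!(a.locked, b.minted);
}
```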

44:54: Anna Rose:

Are there two AggLayers then? Is there sort of the AggLayer for Polygon and then another AggLayer for everything else?

45:00: Jordi Baylina:

When we refer to the AggLayer, we refer to this single liquidity bridge, and the idea is that you don't move the funds on the Layer 1. You just keep the funds in a single place, you point from one side to the other, and you can go really fast.

45:13: Anna Rose:

It's like a settlement layer above the L1.

45:15 Jordi Baylina:

A settlement layer, yeah.

45:16: Anna Rose:

Okay.

45:17: Jordi Baylina:

Exactly. And then, of course, if you have different settlement layers, you need some mechanism to interconnect them too. Here we have the layer 1. This is a world where everything still needs to be designed. We are starting to talk with other teams about these protocols, but it's at a very, very early stage. Again, rollups are very, very new. The networks are being created now. They are connecting now. We are starting to connect many layers to the system, but it's a very early stage. There are also a lot of opportunities here, but it's an early stage.

45:59: Anna Rose:

Yeah. You can't tell yet kind of how it's going to shake out. I want to make one quick note, you mentioned constellations. I think on previous episodes, Tarun had been referring to these different kind of groupings of rollups as like federations almost.

46:13: Jordi Baylina:

Federations. Just take the word you want. I mean, I'm not really good in wording, but that's the idea.

46:19: Anna Rose:

No, just for the listener, to maybe make a little link to what we were talking about in those other episodes. Because, at least now, there are people deploying chains of a certain nature in, like you said, a constellation or federation, these sorts of spaces in L2s. So you have, like, the ZK Stack rollup builder. You could build rollups, release them, and they'll be immediately tied into the zkSync ecosystem very easily. But to connect them to outside rollups, you might have to do extra stuff. But that seems like a temporary setup, don't you find? Right now you have all these different federations with new rollups coming out of them. But I do think there's going to need to be some connection, like a reconnection.

47:08: Jordi Baylina:

Yeah, I mean, this is the aggregation layer. This is very much what it is. It's important to understand that the aggregation layer is the layer that allows these different processors, these different rollups, not only to communicate with each other, but also to send value to each other. Two years ago, there were almost no processors, okay? And now we are starting to have a bunch of processors. So all this value transfer between the processors, and the standards for transferring value between these processors, will become more and more important every day. Here, the ZK tech is fundamental, because in the end you have a processor, you want to commit to a state, and then you want to prove to another processor that this happened, that you committed, and that this is what it is. And here ZK is important, especially for scaling. If you have four processors, you can always try to follow all the rollups and all the chains. But in a world of many processors, you don't want to follow all the chains. Here the chains will be sovereign; they will generate a ZK proof, and you will be able to import their state into your state. That means you will be able to use the funds, the transfers, that they have already committed to over there. And that's why it's so important. The aggregation layer we're building is very much about that: it's about guaranteeing these transfers and proposing a standard for chains of different natures to transfer value to each other.

48:41: Anna Rose:

In the case of the AggLayer, the aggregation layer, like the level that it holds is really that settlement layer. But there are other ideas that have been floated like the shared sequencer model, where I think there they're kind of trying to also unify lots of different L2s, but at a different level. And I'm very curious, do you imagine those things working together? Do you not think that the shared sequencer is the right approach? Like, yeah, how are you thinking about it?

49:09: Jordi Baylina:

I see it... well, here there are two models. When you want to communicate, and even more when you want to transfer value, there are these two models: the asynchronous model and the synchronous model, okay? Shared sequencers are very much a synchronous model. Here you have some locking problems, okay? And the biggest doubt about shared sequencers is how they will scale. I'm very worried about the impact, because if you are waiting for the other network, you need to put in timeouts, the other network needs to block, and they have specific timings. And these systems... and this is a more general observation: in general, the world works better... The world is asynchronous. And the world works better when you don't depend on the others. You're just transferring messages; you're sending messages.

And of course, you have these mechanisms where you lock yourself, you transfer, you know that these funds are available, you take your time, and then there is this idea of asynchronous calls, things that you can build on top of this message-passing system. We need to see, in the end; this is part of these algorithms, these protocols, that we want to implement between chains. What's the right solution? It's difficult to say. I think both of them are complementary. If you ask me, I would bet more on the asynchronous model. I think it's much easier. If we take the example of the internet: at the beginning, there were a lot of protocols that were synchronous, and they didn't work. Here I want to mention DCOM, CORBA, even RPC, the first Linux RPC; they never worked well on the internet. It was not until REST APIs and the web, which are very much asynchronous systems, that the internet started working well. Even in programming languages, asynchronous programming works much better when you are in an environment where you cannot trust the other side.

But again, this is a very theoretical discussion. I want to see them happen. We are at a very early stage here. We need to be very open to all the ideas. In the case of the Polygon aggregation layer, the idea is very much an asynchronous layer, but it easily allows you to create these bundles, which are very much synchronous; it's like a synchronous sequencer. And it also allows you to connect to a synchronous sequencer. So we are not closing the door to any model. But again, this is a discussion that's happening right now, and a lot of work is happening. I'm super excited to see these things start happening. Right now in the Polygon aggregation layer, we can already transfer between the chains, and we're actually bridging between the chains in a way. But we need to see how all these applications work in this new world, in this new space that we are all building.
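
A sketch of the asynchronous model Jordi favors: chains never block on each other. They append outgoing messages to an outbox, and a relay (an aggregation layer, say) moves them to the destination's inbox, where they are applied whenever the destination chain gets around to it, after checking the accompanying proof. Everything below is illustrative; no real protocol's types or names are used.

```rust
use std::collections::VecDeque;

struct Chain { name: &'static str, outbox: VecDeque<String>, inbox: VecDeque<String> }

impl Chain {
    fn send(&mut self, msg: &str) {
        // Fire-and-forget: no lock, no timeout, no waiting on the peer.
        self.outbox.push_back(msg.to_string());
    }
    fn process_inbox(&mut self) {
        // Applied at this chain's own pace (after proof verification,
        // which is elided here).
        while let Some(m) = self.inbox.pop_front() {
            println!("{} applies message: {}", self.name, m);
        }
    }
}

// The relay (e.g. an aggregation layer) moves messages between chains;
// neither side ever blocks on the other.
fn relay(from: &mut Chain, to: &mut Chain) {
    while let Some(m) = from.outbox.pop_front() { to.inbox.push_back(m); }
}

fn main() {
    let mut a = Chain { name: "rollup-A", outbox: VecDeque::new(), inbox: VecDeque::new() };
    let mut b = Chain { name: "rollup-B", outbox: VecDeque::new(), inbox: VecDeque::new() };
    a.send("transfer 5 tokens to rollup-B");
    relay(&mut a, &mut b);
    b.process_inbox();
}
```

A synchronous model would instead lock state on rollup-A until rollup-B confirms, which is the scaling concern raised above.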

52:08: Anna Rose:

And I sort of... I think I said this before, I think things still need to shake out on a few different levels. Obviously the tech needs to exist, and we need to see how people use it. But I want to talk a little bit about usage in this L2 present that we live in, where I feel like it's often hard to tell, when you're looking at the metrics that are used, what the users are there for. You have these huge flows of people running into one chain, joining an L2, bridging everything into an L2, spending some time in there. But then, I don't know, another L2 beckons and they all sort of head over there. I'm very curious what you make of that kind of activity. Was this what you thought would happen, or was it unexpected that this is the way it's playing out? And how much of it do you think is real? I think that's the real question.

52:57: Jordi Baylina:

I mean, probably a lot of these things you are mentioning are happening. It's very easy to set up bots generating transactions, or to throw a lot of your own money into your chain just so you can say that you have a huge TVL. Well, that one is a little more difficult, because maybe you are risking your money, but besides that, I would say that from the technical perspective, it doesn't add a lot of value. I would say it's kind of a honeypot; the only useful thing is that it could be a honeypot for somebody else. And if your TVL is based on your own token, I don't think it's even useful. For me, what I'm interested in is real applications, even if they have low TVL, even if they have few transactions, and being able to see how they are using the system, what their real needs are, and the learnings.

In the case of the zkEVM, for example, we have been working with a lot of different projects, listening to them and their concerns, and a lot of the hidden work we did during the last year was very much about solving these things. For example, we had some issues with the accuracy of the timestamp. There are, for example, a lot of DeFi protocols where timing is really important.

54:10: Anna Rose:

We need that.

54:11: Jordi Baylina:

Yeah. So here, for example, we made some adjustments on that front in order to support these applications. Also the way people are using it; for example, people using a single transaction with many sub-transactions. So how can we help them do these things better? The idea is to work closer with the different applications, understand the developers building on top of it, and make things better for them, scaling and preparing things. This is what I care most about. I mentioned engineering at the beginning. Engineering is about solving real problems. Well, an airdrop is an application, but I don't think that building applications for doing airdrops is what we are here for. It's an application, I respect all applications, and this is one interesting application. It's interesting because it can test the network and it can stress the network at certain points, but I hope more real applications will come.

And for me, I mean, it's very sentimental here, but for me it's like: you just need to make the systems available, show them, explain them. And slowly, people will pick up real applications. People who really need to use those chains will start coming and using them. But it's a matter of time. It can be months, even years, before all these applications and new applications start happening on-chain. So, on one side, it's about improving our systems: making the systems more reliable, making them safer, all the auditing, making them more stable, making them more decentralized. This is one path we are taking. And the other is that adoption should come from the different chains, from different networks, created for different applications, building on top of that. I believe in solid roads, not in fast gurus. But again, we'll see. The market is very... Sometimes it's very, it's very...

56:23: Anna Rose:

…environment akin to maybe November…

56:56: Jordi Baylina:

I mean, on one side, I'm sometimes a little bit disappointed in the industry, but this is not new, probably, especially in the last years. You see a lot of projects, applications, that don't care at all about the values. This is a little bit annoying, but I'm an optimistic guy. I always have the hope that it just takes some time to get the full point of what this industry is about. I think people need to go through this phase. If you come from a different industry, you come here and you think in the terms of other industries, how they work, meaning the competitive spirit and products and these things. But at some point, and it can take a while, you realize that what you're building goes beyond a normal business product, beyond just trying to get richer than the other one. Here you are trying to build a different way for all humans to organize, and you are trying to build a common good: something that just works, something that's there, something that's part of humanity, that doesn't belong to anybody, and that's useful.

58:10: Anna Rose:

Doesn't belong to a country, doesn't belong to a company.

58:13: Jordi Baylina:

Exactly.

58:14: Anna Rose:

Yeah, it's kind of amazing.

58:15: Jordi Baylina:

And this is... I mean, when you discover that, and when you also see how far away we are from that, this is when it clicks and you stop. You see things in a very different way. Of course, you need to build things in a sustainable way, you need to figure out how to do things, but your view changes very much and then you prioritize other things. It's disappointing to see it, but at the same time you are convinced that, okay, it's a phase. I mean, it's normal that people come here and see it that way. All these people in a few years will become ambassadors of this technology; it's part of the understanding process. Nobody fully understands on the first day what this tech is about. Decentralization, I mean, these values of decentralization, this censorship resistance, this freedom, it's just there. It's like air. You just take the air and use it. Okay? And nobody will ask you anything; these are permissionless systems. I mean, nobody will even ask your name for using Bitcoin or Ethereum or any of the dApps that we are building.

When you get to this point, then your life changes. I mean, your vision of the system changes, and there are more of us all the time, and this is what we are building. The steps to get there may be tough. I mean, sometimes we are building things that look very centralized, with systems that you don't really like: centralized sequencers, centralized provers, escape hatches, all these things. But we need to understand what we're building. And I think people are progressively getting it. We are a minority, maybe, at this point, but we are a minority very much because a lot of new people came to the industry. So this is the good news.

Anna Rose:

Exactly. It's like a minority that's still growing along with the whole industry, because there are way more people now than there were before. Even just talking with you about this landscape, there are so many things we could cover: all these different ways of looking at what's out there, and all the spaces that now exist for people to experiment. That's so cool. Jordi, thank you so much for coming on the show.

Jordi Baylina:

It's a pleasure, Anna.

Anna Rose:

Thanks for sharing an update on all that you're doing, yeah.

Jordi Baylina:

Thank you very much and see you next time.

Anna Rose:

Sounds good. All right, I want to say thank you to the podcast team, Henrik, Rachel and Tanya. And to our listeners, thanks for listening.

Transcript

00:05: Anna Rose:

Welcome to Zero Knowledge. I'm your host, Anna Rose. In this podcast, we will be exploring the latest in zero knowledge research and the decentralized web, as well as new paradigms that promise to change the way we interact and transact online.

00:26:

This week, I catch up with the legendary Jordi Baylina, Ethereum OG and Polygon's zkEVM Technical Lead. Last time he was on the show, we talked about the concept he had at the time for zkEVMs and how these would be built in the future. With this episode, we check in on what these systems look like now that they've been launched. I get Jordi's take on the general L2 landscape, decentralized systems, and how these are rolled out, the research that has come out of Polygon, and what keeps him inspired working in the space.

Now, before we kick off, I just want to let you know about an upcoming hackathon we are getting very excited about. ZK Hack Kraków is now set to happen from May 17th to 19th in Kraków. In the spirit of ZK Hack Lisbon and ZK Hack Istanbul, we will be hosting hackers from all over the world to join us for a weekend of building and experimenting with ZK tech. We already have some amazing sponsors confirmed, like Mina, O(1) Labs, Polygon, Aleph Zero, Scroll, Avail, Nethermind, and more. If you're interested in participating, apply as a hacker. There will be prizes and bounties to be won, new friends and collaborators to meet, and great workshops to get you up to date on ZK tooling. Hope to see you there. I've added the link in the show notes, and you can also visit zkhackkrakow.com to learn more. Now Tanya will share a little bit about this week's sponsors.

01:48: Tanya:

Gevulot is the first decentralized proving layer. With Gevulot, users can generate and verify proofs using any proof system, for any use case. You can use one of the default provers from projects like Aztec, Starknet, and Polygon, or you can deploy your own. Gevulot is on a mission to dramatically decrease the cost of proving by aggregating proving workloads from across the industry to better utilize underlying hardware while not compromising on performance. Gevulot is offering priority access to ZK Podcast listeners. So if you would like to start using high-performance proving infrastructure for free, go register on gevulot.com and write ZK Podcasts in the note field of the registration form. So thanks again, Gevulot.

Aleo is a new layer-1 blockchain that achieves the programmability of Ethereum, the privacy of Zcash, and the scalability of a rollup. Driven by a mission for a truly secure internet, Aleo has interwoven zero-knowledge proofs into every facet of their stack, resulting in a vertically integrated layer-1 blockchain that's unparalleled in its approach. Aleo is ZK by design. Dive into their programming language, Leo, and see what permissionless development looks like, offering boundless opportunities for developers and innovators to build ZK apps. This is an invitation to be part of a transformational ZK journey. Dive deeper and discover more about Aleo at aleo.org. And now here's our episode.

03:16: Anna Rose:

Today I'm here with Jordi Baylina, who has asked to be introduced as a casual developer, but is really an Ethereum OG and is a Polygon zkEVM Technical Lead, someone who's been in the space a really long time. Jordi, I'm so excited to invite you back to the show. Welcome.

03:31: Jordi Baylina:

It's a huge pleasure to be here, Anna.

03:33: Anna Rose:

e first time was in September:

03:56: Jordi Baylina:

Yeah, it's a long time ago.

03:57: Anna Rose:

e wants to check that out. In:

04:42: Jordi Baylina:

You recorded one day before the announcement or something like that.

04:46: Anna Rose:

The reason I wanted to have you back on the show was I really wanted to do a catch up with you. It's been almost three years since you've been on. And just going back to the topics that we were talking about at the time, like back then, L2s and zkEVMs were still kind of a figment of our imagination. This was like a future plan. These were not really real yet. They were still kind of being spec’d out. And I really thought it would be cool to catch up with you and learn what you make of everything that's developed, what's been happening on the Polygon side. Yeah, and just generally kind of catch up on all things ZK.

05:22: Jordi Baylina:

Yeah, that was a moment where ZK rollups were theoretically possible. I mean, we were already in the phase because I mean, probably... I don't know, a few months ago were like impossible projects. And I mean, they were theoretically possible, but nobody knew how to build it in a practical manner. And at that point we realized, okay, this should be practical. I mean, this should be possible and doable. And at that time, we were starting working in engineering this and putting all this thing that was theoretical in a practical way. And that was the point. I mean, it's like the moment where you think that, okay, I think they have a solution for everything, but then you start working on that and then you start to face hundreds of challenges.

06:06: Anna Rose:

Yeah. So this is what I want to hear about. I think maybe we can pick up the story from where we were. At that time, Iden3 and Hermez were kind of standalone NTTs. And as we learned right around that time, you merged then into Polygon. What happened since? What was the joining of Polygon like? What did that open up for you? How did it change sort of the vision for what you were building?

06:31: Jordi Baylina:

I mean, at that time, we already started building the zkEVM, but Polygon, very much what we did was a couple of things. Well first is to accelerate the process. I mean, being in an organization like Polygon allows us to hire a much better team and to put all the resources to accelerate. It also gave us access to other teams. Here is very important to mention the Zero team. At that time was the Mir team, but it was the Zero team where we just borrowed with a lot of the technology they were building all these small prime fields, STARKs and so on. And also Bobbin, I mean Bobbin with Miden was also very fundamental in getting STARKs the right way, let's say. So we were building already STARKs, but I mean, Bobbin was a super expert on the time. So these two teams that they were already part of Polygon were fundamental to accelerate and to bring the zkEVM reality.

This was two years' work, more or less. It was a hard work of a set of engineers working days and nights and solving challenges. I can tell you that from the engineering perspective and even from the personal perspective, I enjoy it a lot. I mean, it's doing... It's probably the first time that you do the work as an engineer. I mean, you do... In general, when you are managing, you do a lot of things that are not engineering, but at that point, we were building engineering. I mean, we were engineering the system and going to all the solutions. We, I mean, I think we were able to create an amazing team that was a mix of people that came from the blockchain world, from the ZK world, but also people that came from other industries, but they were excellent developers in there. Team, I would say that the average age of this team was relatively high, especially for a blockchain, the blockchain standards. Say of course there is a variety here, but there was a huge, quite mature team, and we started working very well together. And I mean, this has been a... I would say this has been the key of the success on the project.

08:41: Anna Rose:

I want to sort of take it back to Hermez. Hermez was not a zkEVM, was it?

08:48: Jordi Baylina:

It was a rollup. It was an only payment rollups, where the prover was huge proof written fully in Circom. It was verified in Groth16.

08:57: Anna Rose:

I see.

08:57: Jordi Baylina:

But here we proof a set of... It was already a rollup, and a working rollup. And all the experience that we got building Hermez was a fundamental for building the Polygon zkEVM, all the learnings. And it was really, really, really important having... I mean, people think, oh, you just do a... I mean, putting a system in production, there is a lot of things you need to take in account. And people that has never put anything in production cannot realize that, okay? But there is a lot of work to do in the engineering part on that. And this... We had a lot of experience in that side and this was the starting point. Of course, the ZK state at that point was Circom. I mean, the number of constraints was limited. Actually, we did a huge improvement in Circom, especially for big circuits like the Hermez one and how to build them, the machines. I mean, we did a huge work on that front, but that was not enough for a zkEVM. And there is where I would say that they started like a new generation of provers. It's a stark-based proof with recursion and we can talk a little bit about that, but that's what the zkEVM is about. It's just changing the generation of provers.

10:16: Anna Rose:

my mind was just thinking of:

11:08: Jordi Baylina:

Yeah, I mean, I'm an engineer. So even before I was in crypto, my work was to put things in production. I mean, I was building things for real, for solving real people problems. This is what's an engineer about. So for me, this is what I've been doing. You just go on and you ship things. Of course this can be a success or they can... You ship things and you can mess up. But if you don't try it, I mean, you don't know what's gonna happen, but you need to ship things. And shipping things sometimes requires some sacrifices, maybe in the research part, because when you're putting something in production, at some point you need to cut and say, okay, this is what I'm going to launch. I'm going to go for that. and...

11:51: Anna Rose:

Yeah, you freeze it.

11:52: Jordi Baylina:

You need to freeze that. And this is, I mean, for people that's very research-based, I mean, they want to learn more. They want to improve more. Okay? And doing these, I mean, freeze things, okay? Let's just stop here. This is what we have. I know that we can do things much better, and things are going to be discovered, but let's launch this. Let's freeze this, let's put this in production, and let's start solving problems with that. For me, this is important. I mean, this is part of my DNA, I would say, and it's about phases. I mean, just taking things that are in research phase and bringing them to production. This has been my personal role here in my life in general. I mean, again, and not only in this ecosystem. Everyone has their own roles. I mean, there are people that's doing research and they are doing an amazing work in research. We owe them a lot. There are people that are more maintainers, people that... Of the systems, they need to be, okay, they are in production, but at some point they need to be maintained and they need to be kept and improved. Okay, this is also a role and it's also an important role. And depending on the profiles, you will fit better in one place or the other. But for me, this has been my place.

13:03: Anna Rose:

Have you seen, like, is there a phenomenon of engineers who cannot accept those sort of mistakes? What you almost are saying is like, once you freeze it, there might be things you don't like about it, and you still have to ship it and be public. Do you know sort of an engineer profile that has a lot of trouble doing that? Like, does that happen sometimes in our ecosystem where people are kind of precious? They want it to be truly perfect, and it stops them.

13:30: Jordi Baylina:

Yeah, I think this, but this is... I mean, again, depending on the environment that you are, this can be a really good thing, but this can be also a really bad thing. So that is where you are. I mean, if you are in a research group and your goal is to ship protocols, or that's perfectly fine, but shipping or creating engineering products means much more than that. I mean, just putting things that are maintainable, things that are well documented, things that works well, that are safe, that are audited. I mean, it requires a lot of things that you need to take into account. And it's a learning process also, but you need to be focused in the product, in what you are shipping, what problem actually you are solving. This is very important if you really want to build things that are useful for humanity. And don't understand me wrong. I mean, it's super useful to generate protocols. Actually, it's the base of everything. So all the pieces are important, but we need to understand what are the different stages and what are the goals of the different stages. And do not underestimate any person that's in one of these stages. I mean, sometimes there are some faces that they are maybe less...

14:44: Anna Rose:

Known.

14:45: Jordi Baylina:

Well, less known, or maybe they have maybe worse reputation on that. I don't know. But for example, giving support to the users. This is a fundamental role. Absolutely fundamental, and the people... It's super important to have a good train of people and people that's motivated. And this is part of the product. If you don't have this, a product just cannot succeed. Just to put some examples, but again, we can talk about documentation, we can talk about maintaining, we can talk about auditing, we can talk about... So there are many, many, many roles, many, many pieces that they need to work perfect. And if one of those doesn't work good, then the full system may suffer.

15:25: Anna Rose:

Back when we were talking about, so actually in that episode we did go quite deep into the zkEVM model in that it was a SNARK and a STARK. In implementing that, like I almost want to now ask, at what point did you freeze it? Because back then I think you probably had a vision for it and you started to go out and build it, but yeah, was there changes that happened along the way? Like did you use a different system in the end because you connected with Bobbin? Yeah, stuff like that.

15:55 :Jordi Baylina:

Yeah, at some point we were... So there was very much a decision. Of course, here is also trial and error, okay? Because you know what you want to build and then you do things, you test things, and maybe you see, okay, this line doesn't work and then you need to change because maybe there is, you are doing what you're doing doesn't work. But I would say that big pieces, the last important change that we made was switching from 256 bits to Goldilocks to 64 bits prime field. This was an important the factor of the limitation of the system. And this probably was the last thing that we changed. Of course, right now we are working on new things and yeah, we are going to the next steps, okay? But that was the last freezing point.

16:41: Anna Rose:

How updatable is the zkEVM? Like, have you built it in a way that you can change the system under the hood quite easily? Or is it like a rewrite every time you'd want to do an upgrade like that? Or would you have to even almost redeploy it?

16:55: Jordi Baylina:

Well, actually it's just an upgradable smart contract. Right now it's limited with time lock. Currently I think it's a 10 days time lock. It should be 30 days at some point. But again, I mean, this is from the technical perspective, I mean, you decide how you want to upgrade. The problem of the upgrades is that they are quite dangerous and quite centralized. So here is the importance of governance systems for these upgrades and being sure that the users of the network are not affected, or at least they have the right to exit before an upgrade.

Here, I would say that, especially ZK proof are still in getting mature phase. Of course, the maturity of the systems is much, much, much, much, much better than it was one year ago, but still requires to get some maturity. We're still learning. I mean, we're still finding some bugs. And because of that, we need to set some protections. These protections are if you want some centralized systems or things that you don't want them to have, but for security reasons, you have to have them in there.

18:06: Anna Rose:

Do you mean like committees and stuff like that?

18:09: Jordi Baylina:

Yeah, there's committees and even there's escape hatches and things like that. I mean, they're not nice from the decentralization perspective or from the idea of what we are building. I mean, an attended system that just works, and everybody wants to use. It's a common good. I mean, this is the final goal. But we are a little bit far from this ideal scenario. But so it's a path. I mean, the idea is to go to that path, and there is no other way to go to that path than getting maturity on the systems, and we're improving a lot.

18:43: Anna Rose:

I want to talk about some of those systems. So I know that your team is focused mostly on the zkEVM side, so really the engineering. But from the Polygon group, there's been Plonky2 that's come out, Plonky3 that's come out. More recently, there's Circle STARKs. There's been work, I think, just generally on STARK-like constructions. At what point did those research pieces become something you could imagine implementing? And are any of them in the works, I guess, is kind of a follow-up to that.

19:11: Jordi Baylina:

Yeah, yeah, I mean, clearly we are taking advantage of all this research that these groups are doing. Just to mention a few ones of them is we are working very much in what we call VADCOPs. VADCOPs is variable degree composite proofs. The idea is instead of having a huge monolithic proof, the idea is to divide the proof in sub-proofs and with aggregation with the ZK, I mean using the recursion part, we convert all these sub-proofs to a single proof. And this is much more modular, and this allows you to do variable size, because you can aggregate as many as you want. So this allows us to build infinite circuits. I mean, circuits that can be as big as you want. You are not limited by a number of constraints. You're just aggregating different proofs, and you can do that. And also, you are using much better other polynomials, they don't need to be exactly the same degree. So you have a lot of advantages on there.

Here, for example, there is research from Polygon team, but other people in the STARKs world that just connecting GKR to the STARKs just to do the idea of the past. This is an interesting thing that's happening. There is also the work that currently Polygon Zero is doing about the Plonky3 and 32-bit small prime fields, Mersenne proof and all these Circle STARKs and all that, that looks really promising for that. I mean there is also the Binius system that looks also very promising. Again, we need to see how everything matches. But there is a lot of research and a lot of things, interesting things that are happening. Of course, they are evolving and getting these proof systems even much faster. But what I can tell you is that the real revolution of the ZKs is not going to come... I mean, with all these systems, with all these new protocols, you can get, I don't know, maybe a 20% speed up, 50% speed up, 10% speed up. They may even not work on that, but I would say the big breakthrough is going to come from the hardware acceleration.

21:22: Anna Rose:

Ooh, interesting. Okay.

21:25: Jordi Baylina:

So if we really want to, I don't know, 100x speed up in the ZKs...

21:30: Anna Rose:

It's going to be hardware.

21:31: Jordi Baylina:

I don't know. Maybe tomorrow somebody invents something really cool and...

21:36: Anna Rose:

Changes your mind.

21:38: Jordi Baylina:

Exactly. But it does not look like this will happen. And what I see are really, I mean, one or two orders of magnitude increase is in the hardware acceleration. I mean, there are some interesting projects that are working in that direction, having ASICs, having specific processors for that. Of course, hardware requires its time, and it requires also a little bit of stability, at least in the primitives of the ZKs, but here clearly, it looks like the primitives are becoming quite stable. Here, the merkalization NTTs and MSM in the case of SNARKs, in the case of STARKs-less, but these three primitives are really important, and we are starting seeing hardware that are focusing on this. And this is where probably the next evolution of ZK will happen in the coming years.

22:32: Anna Rose:

I like that you're saying that. I also want to quickly throw to a few episodes that we just did actually in the last six months. So we had Ulrich on talking about the Mersenne 31. We did an episode on Binius. We've done a few episodes on hardware acceleration over the years and recently we did one with Ingonyama. I think in that we also mentioned that there's now at least the first sort of ASIC prototype. I don't know exactly what it's for. In what you're seeing in hardware, is it mostly hardware for SNARKs or is it also hardware for sort of the kinds of constructions that you've made?

23:05: Jordi Baylina:

Yeah, probably the first thinking is for the SNARKs, because this was the standard ZK provers a few years ago. But now, the STARKs are becoming the standard. You see that most of the projects, they are moving to STARKs. So hardware, it's following this trend. And yeah, yeah, that is clearly the big acceleration will go to hardware. Also, the STARKs, I mean, they have a lot of advantages for the hardware acceleration. I mean, it's hashing functions are in general faster to implement, you can work with small prime fields. This is very good for hardware. You can even work with binary fields. This is the Binius, I mean, it's very much about that. You can... I mean, you have a lot of options. And these scales, I mean, it starts to scale much, much better in the hardware world. The problem is that the hardware, being doing an ASIC, it takes a while. I mean, you cannot do an ASIC in a week.

24:00: Anna Rose:

Yeah, a couple of years.

24:01: Jordi Baylina:

Yeah, so that's, we need to see. But that's clearly gonna change the way we write the ZK circuits.

24:08: Anna Rose:

Back in the day, there was this real battle between the SNARK world and the STARK world. And then I know projects like yours were some of the first to really start to incorporate... You use both, so you're kind of like, there's a SNARK that then uses a STARK, I think, right? Both of those things are being used. Which one was it actually? Was the STARK the thing that does the recursion at the end?

24:27: Jordi Baylina:

I would say the big breakthrough of a STARK is that recursion is very, very natural. You can do a recursion with a SNARKs, but it's harder. And then you end up doing these folding schemes, and you need to do, I mean, you need to work with different curves and there's exchange of curves. It's not as natural as with a STARK. STARK, you do just recursion really fast. And this is super important because recursion enables a lot of things. I mean, the real revolution of these new zero knowledge systems is recursion, and this is what the STARKs are excellent for doing that. And the other thing that actually makes... Just changed the balance a lot was the introduction of these small prime fields. So the combination of these recursion plus the small prime fields, this actually beats a lot. I mean, they just win the battle of the SNARKs, the STARKs that at some point could be some dopes. I would say that even some try to reactions from the SNARK team, I mean, if we say that's a competition, which is absolutely not, I mean, just they are technologies.

25:38: Anna Rose:

Yeah, yeah, but there was a time, right? There was a time where it was like camp that sort of claimed one of these things.

25:44: Jordi Baylina:

Yeah, yeah, yeah, you can... People like that, these kind of competitions and so on, but I mean, in our case, for example, we use both and the idea is that we're using a STARKs for everything and for doing the basic proofs and for recursion and so on, but the SNARKs are very good in short proofs. I mean, if you want a short proof and if you want to verify them cheaply in a blockchain, then you want to build a SNARK. I mean, an elliptic curve based proof. So what we did is, okay, just let's build on a SNARK, and recursion and fast and all the probabilities. And just in the last minute, in the last moment, okay, we convert this... We just prove this as the STARK with the SNARK. And then we have a SNARK and that's what we prove on-chain. So actually is we just use the best of each technology. So I would not say that that's a competition.

26:29: Anna Rose:

I think to put it to bed, the SNARK versus STARK competition, who won? Both.

26:34: Jordi Baylina:

Yeah, I mean, each one is useful for its own thing but for doing these big circuits and SNARKs probably has been much, much, much superior. Even I mean, there was... For this big circuit, there was a reaction of Nova and here for example, Ethereum Foundation had a lot of interest on the Nova.

26:55: Anna Rose:

This is the IVC like folding stuff.

26:57: Jordi Baylina:

Yeah, all these folding schemas and all that. But even the performance of these systems being much, much, much better than traditional recursion in STARKs, they cannot beat... At least at this point, they cannot beat...

27:12: Anna Rose:

STARKs.

27:13: Jordi Baylina:

STARKs.

And mainly it's because the primitive, underlying primitive, even if you are doing a folding, you will end up doing a multi-scalar multiplication. And multi-scalar multiplication is going to be always slower than a hashing function.

27:27: Anna Rose:

Yeah, and that's that MSM that you mentioned earlier, the multi-scalar multiplication?

27:31: Jordi Baylina:

Yeah. That's the main reason why elliptic curves systems are in general slower than hash functions systems. And the other side, the elliptic curves, they have the arithmetic structure. I mean, the hashes, you cannot do anything with a hash. You cannot add them. But in the elliptic curve, you have some algebra there. So there are some systems that they are trying to use all these algebraic properties of elliptic curves and trying to get better. The basic thing, I mean, is you have a big circuit, you have a big witness, and you need to run to all these witness. Okay? And what's the basic function that you are using there? In the case of a STARKs is just a merkelization... An NTT and the merkelization in a small prime field. In the elliptic curve, you need to go to 256 bits a field and you need to do an MSM, which is by definition is slower. Again here, hardware can change the things. I mean, because...

28:27: Anna Rose:

It's almost like if the SNARK hardware comes out sooner, maybe SNARKs reach sort of like... They sort of reach parity with STARKs.

28:35: Jordi Baylina:

Yeah, but I don't think, I mean essentially the computation you need is smaller in the STARKs versus SNARKs.

28:45: Anna Rose:

what you make of it. Back in:

29:38: Jordi Baylina:

Our biggest contribution here was the architecture that we were using for building the zkEVM. I mean, here we built the processor, we were using a processor built in a STARK. A processor means a processor with a program written in assembly that runs on this processor. This was a processor that was tailor-made for the EVM. That supports the zkEVM. So actually we just did all the modifications we need to this processor in order for optimize the computation of the EVM circuits. But the fact that... Okay, so we were like... I don't know if it's the first project. But...

30:22: Anna Rose:

Pioneers in a way.

30:23: Jordi Baylina:

Probably, I don't know about the projects and it's very difficult to say that, but so we just took this idea. So instead of building a circuit, we built a circuit that was a processor with memory, with ROM, with coprocessors. And we engineered it like an electronic system of course with a ZK. We didn't have transistors, we had polynomials instead of transistors. But we built the processor, we written an assembly, we built a code that was running on that processor, we built the coprocessors required for building the zkEVM. And this architecture has been proven that was the right architecture.

Right now everybody's doing that at this point. We're talking about the ZK processors in different ways, different strategies, but this was, probably the biggest contribution. I mean, and being something that was theoretical, I mean, I didn't invent the ZK processors, but we just took this idea and actually built a full system, a full operational system that builds an EVM based on this idea that was an important breakthrough. And that doing this way, you have the flexibility to build the full EVM. EVM is really complex. It's really... It was not designed for a ZK... To be built on ZK. But doing the things this way, this allows you to even build... I mean, if you can build an EVM, you can build anything.

31:51: Anna Rose:

And we've seen that, actually. We're seeing more of that, like Rust... Full Rust, you know?

31:57: Jordi Baylina:

Actually, yes.

31:58: Anna Rose:

RISC-V, yeah.

31:59: Jordi Baylina:

RISC-V, SP1, this is something we can talk about. This is probably the future of the... ZK goes in that direction.

32:07: Anna Rose:

Yeah, I think even like there's VMs, there's like ZK rollups, there's Move language, there's Cairo is its own VM. There's like a whole thing.

32:17: Jordi Baylina:

Yeah, Cairo is kind of a language processor on that.

32:20: Anna Rose:

I'm sorry, yeah, it's not the VM. What's the VM of Cairo? It's the Starknet is the VM?

32:26: Jordi Baylina:

Yeah, something like that. I don't know.

32:28: Anna Rose:

I don't know where the VM is in that stack, but OK.

32:31: Jordi Baylina:

But it's... No, no, but at the end, you have some sort of VM that is doing these basic instructions, these basic processors. So this idea is fundamental. Again, and bringing them further. This is what SP1 and RISC Zero is doing. I mean, it's OK, let's go further, not do a specific processor. Let's put just a generic processor, and then just put a Rust on top of that, and that you have everything. I mean, this is kind of the holy grail. The problem is how optimal is that? I mean, this has a lot of advantages. I mean, at the point that brightening ZK circuits, you just use Rust. Okay? So this is super interesting, and this is the feature of that. The problem is how optimal is that? And here I would say two things that are important.

One is, of course, hardware acceleration. This is where really all these RISC-V SP1s, this one can really explode these systems because you will not care about optimization. I mean, the systems will go so fast that you just design it in Rust, you run it and that's it. And maybe in the wild, there is this idea of the coprocessors. Okay, so there are four certain functions or I mean, these specific functions having a Ketcher coprocessor, Arithmetic coprocessor, a SHA-256 coprocessor or a C-recover coprocessor or an Elliptic Curve coprocessor. I mean, you can have like different specific functions, functions that you're using many times inside the processor and it's like a special code, a special opcode of your processor that's doing something specific. So probably it's going to be this.

So the next stage will not be like one single processor, probably it's going to be one processor with many coprocessors. So, probably designing these coprocessors, or having a set of tooling of standard coprocessors for doing different pieces so that you can join them together, this is where the future of ZK is going at.

34:30: Anna Rose:

Yeah, that's so interesting to think of it from that perspective. We've done episodes with RISC Zero, I think in October, we did a recent one with Succint who released SP1.

34:40: Jordi Baylina:

Yeah, I mean, SP1... People may don't know, but SP1 is fully built on top of Plonky3. So I would say that the core of SP1 is very much Plonky3, and I mean, here we are collaborating and working together with SP1, very much working in that direction. Because this is clearly the future of the ZK systems.

34:59: Anna Rose:

And I think it sort of follows the same story that you had, which is the realization that for certain parts of these systems, you will need a STARK, because RISC Zero always has sort of a STARK angle. And I guess SP1 will now, too. And in the case of those coprocessors, are there actually also the SNARK component? Or is it similar to what you've done with the zkEVMs, where it's like many SNARKs recursed by a STARK, but just that what's going into those SNARKs is not EVM bytecode, but rather like the RISC-V instruction set?

35:32: Jordi Baylina:

Yeah, I mean, this process, they require a couple of, so well, let's say different pieces. One is what's called continuations. Continuations means like, okay, you have a code that's executed, but you want your code to keep going. I mean, you want a long program to run. So you need a way to take an execution, another execution, another execution that goes after each other. You have like different proofs and then you want to pack them together in the single proof. Okay? So this idea of continuation is because you are executing something that's long and you may want even though all this proof to compute them in parallel in different systems, but you generate a huge computation in a single place. Okay, so then there is different techniques for doing this continuation, which at the end is recursion. And again, if you want to then prove this computation in Ethereum or in a blockchain, you probably will want to do a small proof and a proof that's cheap to verify. And then you just take this final proof and then you convert it to a SNARK. Okay?

36:30: Anna Rose:

To a STARK?

36:32: Jordi Baylina:

To a STARK, no, to a SNARK. So you have a STARK and then you convert it to a SNARK. I mean to...

36:36: Anna Rose:

At the very end.

36:36: Jordi Baylina:

In our case we are using Plonk.

36:38: Anna Rose:

Okay, okay.

36:39: Jordi Baylina:

But I mean you can use Groth16 or you can use Plonk or other elliptic curve based systems which are the biggest properties that the proof is really small and is cheaper to verify it on-chain. And the other piece, of course, is going to be these specific opcodes or pre-compiles or these specific coprocessors that will be important, especially at the beginning. Maybe when hardware acceleration is so fast, maybe all this will become less important. But I would say that in the next generation, all these coprocessors are going to be... They're going to make a lot of difference for the systems to being real practical.

37:22: Anna Rose:

I mean, there's also a lot of coprocessor projects out there, so maybe that's good.

37:26: Jordi Baylina:

Yeah, I mean, that's a cool thing of the contribution and open source. I mean, people is writing, SP1 just opened this and people are just writing these coprocessors and so on. I mean, we just want to remind people that all these coprocessors need to be audited. You need to do it in the right way, but that's the way.

37:44: Anna Rose:

I feel like we used to see this image of the mainnet Ethereum with these rollups that came off it. With the coprocessor, it actually adds this different landscape to what's around this mainnet. I'm curious how you imagine all of these tools working together. Do you think that eventually these tools are, for example, connected by a shared sequencer or connected by an AggLayer or there's like a mesh of bridging happening between all of these elements? Can you paint a bit of a picture of what you see coming in that regard?

38:18: Jordi Baylina:

remember at the beginning in:

38:30: Anna Rose:

Sharding, exactly.

38:31: Jordi Baylina:

arding, all the sharding, but:

38:38: Anna Rose:

The execution layers of Ethereum.

38:40: Jordi Baylina:

Execution environments and I mean... But even, I mean, execution environment even was a...

38:45: Anna Rose:

It wasn't a concept.

38:45: Jordi Baylina:

It wasn't a word at that time. But the full idea is, OK, we have one Ethereum, we can have many Ethereums, and they can run in parallel, and they communicate with each other. I mean, this is the original Ethereum design. This was the serenity phase by the time. And this sharding, but this thing. Here, two important things happen in between. The most important thing is this separation between data availability and execution environment or validity proof if you want. So you proof in one side the execution and you keep the data available in one side. And Ethereum right now is focused very much in data availability, that's what they are doing. And then the ZKs is all these processors. And we are start building these processors. The first impact of that is that these processors, they don't need to be uniform. They can be different processors and you can have different chains and different like these layers, they can be very different. I mean, each one can have their own token, each one can have different data availability models, they can have some centralized sequencer, decentralized sequencer, some of them... One can be VM. Sorry, some of them can be things that are not even VM. You can do maybe zkWASM or Maiden like, or they are designed in a different way. So you already have like different processors that can run all them on top of Ethereum.

But the next challenge is, OK, you have a lot of processors, a lot of chains that are running on top of Ethereum, but how you connect with each other? How they interact with each other? How you build this world that, I mean, zkWASM, private rollup is connecting zkEVM or to a Maiden or to Optimistic? So how they connect with each other? And here is the full idea. This is the main idea of the aggregation layer that most of the teams are building somehow. We are... Until now we didn't have processors. So talking about how we connect the processors, when you don't have processors, it looks like, OK, it's very theoretical. But now we are starting having this processor. So we are starting worrying about that. And here in Polygon, our proposal is the aggregation layer, is this piece that what it does is, you can have different chains, you have a single bridge, you have a unified liquidity bridge, or a place where you put all the liquidity. And then the idea is how you can do this transfer between systems, but with the warranty that the systems are good because there is a zero-knowledge proof that give you the warranty that what you're doing is correct.

This is the big difference, for example, with an Optimistic rollup. I mean, ZK rollup, they give you this immediate thing. So you change the proof, you show the proof, and you are there. You don't need to write anything else. And here, of course, from the ZK perspective, having a low latency ZK becomes really important. So this is where zero knowledge takes a lot of importance, this bridging, this inter-rollup communication, even inter-Layer 1s, it's important. But in our case, it's this inter-rollup. And here is how I see the world, because of here, okay, it's in Polygon, but what happened with things that are outside Polygon? So what happened with the people that's working with Optimism or that's working with Arbitrum or with zkSync or with, or even the Starknet. I mean, how you connect. So here, the way that I see, it's a matter of friction. So we have the...

The first level is you are in the same chain. If you are the same chain, you have a full composability. I mean, you are there. Okay? Next thing is you are in the same constellation. You are in the same aggregation layer in this case. Here we can expect that the transferring between chains can be in the order of few seconds. I don't know, three, four, five seconds the transfer. So it's like relatively low friction on that. If you want the inter-constellation rollup, then you will have to go through the layer 1. And layer 1, here we are talking about maybe lots of seconds or even a minute, few minutes each way.

42:46: Anna Rose:

You're talking about like writing to Ethereum and then writing back out to another constellation.

42:50: Jordi Baylina:

Yeah, if you need to wait for finality, I hope that Ethereum improves the finality at some point. Finality is relatively bad currently. It's just, I think it's like two eras or something like that. So it's between six and 12 minutes. But I think this can be improved at some point. But in any case, so going to a layer 1, you have a little bit more friction. And even beyond that, you can even have inter-layer 1. We can mean just, I don't think Bitcoin, but probably between Solana and Ethereum or between Ethereum and other layer 1s, it could be also some... Let me not call it bridging systems. So let me call it double pegging systems. Because when we talk about bridge, sometimes we see this multi-sig things... I'm talking about systems that are truly...

43:32: Anna Rose:

They're closer almost, right? They're more interconnected.

43:35: Jordi Baylina:

Yeah. I mean, they don't depend on anybody. They are just protocol based. You don't need to trust anybody that's part of the protocol. It's a double-pegging. I mean, this is a good...

43:46: Anna Rose:

What's the word you're saying there?

43:49: Jordi Baylina:

Double peg. I mean, this is the double pegging.

43:50: Anna Rose:

Double pegging?

43:51: Jordi Baylina:

Yeah, double pegging. This is the idea is that you can lock phones in one network, and then you start using in the other side. And then you can burn it in the other network and then unlock it in that. But all that, I mean, it's very much a bridge. But without having a multi-sig or, for example, if you lock that in Bitcoin, okay, who has the multi-sig? Somebody has the keys of the phones that you are locking. So can you do that in a protocol base? Well, if you are Bitcoin, you probably do some fork and it's gonna be complex. But if you are two Ethereum networks or maybe Solana and Ethereum or other two layer 1s, you may want to do that. And that's definitely possible. And for this bridging, the ZK is also very, very, very important because you can build these systems, okay? But this problem is gonna take longer friction. But even with different friction levels, you will have a single space where all the chains will connect with each other. And we are designing that.

44:54: Anna Rose:

Is there two AggLayers then? Is there sort of the AggLayer for the Polygon and then another AggLayer for all?

45:00: Jordi Baylina:

When we refer to AggLayer, we refer to this single liquidity bridge, and the idea is that you don't move the funds in the Layer 1. You just keep the funds in a single place, and you are just pointing one side to the other, and you can go really fast.

45:13: Anna Rose:

It's like a settlement layer above the L1.

45:15 Jordi Baylina:

A settlement layer, yeah.

45:16: Anna Rose:

Okay.

45:17: Jordi Baylina:

Exactly. And then, of course, if you have different settlement layers, then you need some mechanism also to interconnect them. Here we have a layer 1. Here, it's a world where everything needs to be designed here. This is thinking about how to start talking with other teams about these protocols, but it's very, very early stage. I mean, we actually, again, rollups are very, very, very new. We are even putting them, the networks are creating now. They are connecting now. We are start connecting in point, where are start connecting many layers to the system, but it's very early stage, and it's also a lot of opportunities are here, but I mean, it's like a nearly stage.

45:59: Anna Rose:

Yeah. You can't tell yet kind of how it's going to shake out. I want to make one quick note, you mentioned constellations. I think on previous episodes, Tarun had been referring to these different kind of groupings of rollups as like federations almost.

46:13: Jordi Baylina:

Federations. Just take the word you want. I mean, I'm not really good in wording, but that's the idea.

46:19: Anna Rose:

No, just for the listener to maybe make a little link to what we were talking about in those other episodes, because it's like I think of them, at least now, there's people who are deploying kind of chains of a certain nature in, like you said, Constellation or Federation, these sort of spaces in L2s. So you have like the ZK stack rollup builder. So you could build rollups, release them, and they'll be immediately tied into the zkSync ecosystem very easily. But connect them to outside rollups, you might have to do extra stuff. But that seems like a temporary setup, don't you find? I feel like it's right now you have all of the different federations, sort of like new rollups kind of coming out of them. But I do think there's going to need to be some connection, like a reconnection.

47:08: Jordi Baylina:

Yeah, I mean, this is the aggregation layer. This is very much what's that. It's important to understand that the aggregation layer is this layer that allows these different processors, these different rollups, not only to communicate with each other, also to send value between one, with each other. And right now, I mean, two years ago, there was almost no processors, OK? And now we are starting having a bunch of processors. So all this value transfer between the processors and the standards for transferring value between these processors, this will become, every day, more and more and more important. Here, the ZK tech is fundamental because at the end is you have a processor, you want to commit to a state and then you want to prove to another processor that this happened and you are committed and this is what it is and here ZK is important, especially for scaling. I mean, if you have four processors, you can always like try to follow all the rollups and follow all the chains. But if you have a world of many processors, you don't want to follow all the chains. Here the chain will be sovereign, they will generate a ZK and you will be able to import this state in your state, and that means that you will be able to use your, I mean, the funds, the transfers that they already commit to happen here. And that's why it's so important. And this is the aggregation layer that we're building is very much about that. It's about guaranteeing that these transfers and proposing a standard for different chains with different nature by itself to transfer value between one each other.

48:41: Anna Rose:

In the case of the AggLayer, the aggregation layer, like the level that it holds is really that settlement layer. But there are other ideas that have been floated like the shared sequencer model, where I think there they're kind of trying to also unify lots of different L2s, but at a different level. And I'm very curious, do you imagine those things working together? Do you not think that the shared sequencer is the right approach? Like, yeah, how are you thinking about it?

49:09: Jordi Baylina:

I see it, well here there is like two models. I mean, when you want to communicate this, especially when you want to communicate, but even more when you want to transfer values, there is these two models. I mean, the asynchronous model and the synchronous model. Okay? Shared sequencers is very much asynchronous model. Here you have some locking problems and locking problematics. Okay? And here the biggest doubt on the shared sequencers is how will they scale? I mean, I'm very worried on the impact because I mean, if you are waiting for the other network, you need to put some time outs, the other network needs to block, they have some specific times. And these systems, and this is more generic things, in general, the world works better... The world is asynchronous. And the world works better when you don't depend on the others. You're just transferring messages, you're sending messages.

And of course, if you have these mechanisms for you lock yourself, you transfer, you know that these funds are valuable, you have your time and then there is this idea of asynchronous calls or things that you can build on top of this message passing system. We need to see at the end, this is part of these algorithms or these protocols that we want to implement in between chains. What's the right solution? It's difficult to see. I think both of them are complementary. If you ask me, I would bet more for an asynchronous model. I think it's much easier. If we take the idea of internet, I mean, the protocols, there were at the beginning a lot of protocols that were synchronous that didn't work. And here I want to mention, DICOM, Corba, even the RPC, the first Linux, I mean, the first Linux RPC, they never work well on the internet. It was until the REST API, web, that are very much synchronous systems that the internet start working well. Even in the programming language, I mean, synchronous programming works much better when you are in an environment where you cannot trust the other side.

But again, I mean, this is very theoretical discussion. I want to see them to happen. We are very little stage here. We need to be very open to all the ideas. In the case of the Polygon aggregation layer, the idea is very... I mean, it's very much an asynchronous layer, but it allows easily to create these bundles, which is very much a synchronous, it's a synchronous sequencer. And it also allows you to connect to synchronous sequencer. So we are not closing the doors to any model. But again, I mean, this is a current discussion that's happening, a lot of work is happening. I'm super excited to see these things start happening. Right now in the Polygon the aggregation layer, we already can transfer between the chains and we're actually bridging between the chains in a way. But we need to see how all these applications work on this new world, in this new space that we are all building.

52:08: Anna Rose:

And I sort of, I think I said this before, I think things still also need to shake out in a few different levels. Like obviously the tech needs to exist, we need to see how people use it. But I want to talk a little bit about usage in this like L2 present that we live in, where I feel like it's hard to tell often when you're looking at the metrics that are used what the users are there for. Like you have these huge flows of people running into one chain, joining an L2, like they're bridging everything into an L2, spending some time in there. But then, I don't know, another L2 beckons and they all sort of head over there. I'm very curious what you make of that kind of activity. Was this kind of what you thought would happen or was this kind of unexpected that this is the way it's playing out? And how much of it do you think is real? I think that's the real question.

52:57: Jordi Baylina:

I mean, probably there is a lot of these things you are mentioning happening. I mean, here is very easy to put bots generating transactions or just throwing a lot of your own money in your chain just saying, just because you're saying that you have a huge TVL. I mean, well, this is a little bit more difficult because maybe you are risking your money, but besides that, I would say that from the technical perspective, it doesn't add a lot of value. And I would say that's kind of a honeypot. So the only useful thing is that could be a honeypot for somebody else. But if your TVL is based on your token, I don't think it's even useful. For me, what I'm interested in is in real applications, even if they have low TVL, even if they have low transactions, and being able to see how they are using it, what are their real needs and learnings.

In the case of the zkEVM, for example, we have been working in a lot of different projects and listening to them and what are their concerns, and a lot of the hidden work that we did during the last year was very much about solving all these things. For example, we had some issues with accuracy of the timestamp. I mean, there is, for example, a lot of DeFi protocols that the timing is really important.

54:10: Anna Rose:

We need that.

54:11: Jordi Baylina:

Yeah. So here, for example, we did some adjustments on that front in order to have these applications. Also the way that the people is using, I mean, people that's using like a single transaction with many sub-transactions. So who can we have them to do these things better? And the idea is working closer with the different applications and understanding these developers that are building on top of that and making the things better for them, scaling and preparing the things. This is what I'm most worried about. I mean, I mentioned at the beginning about the engineering. Engineering is about solving real problems. Well, an airdrop is an application, but I don't think that building applications for doing airdrops is what we are here for. I mean, it's an application, I respect all applications. This is one interesting applications. It's interesting because it tests... It can test the network and it can stress the network at certain point, but I hope that there are more real applications that will happen.

And for me, I mean, it's very sentimental here, but for me, it's like, okay, you just need to put the systems available and show them, explain them. And slowly the people will pick up real applications, real... People that really needs to use those chains will start coming and using. But it's a matter of time. It can be months, even years that all these applications and new applications sort of start happening on-chain. So slowly, in one side, it's like improving our systems, making the systems more reliable, even making the system more safe, all the auditing, making the systems more stable, making the system more decentralized. This is one path that we are taking. And the other is that option should come from the different chains, from different networks, just create for different applications, just building on top of that. I believe in solid rods, not in fast guru. But again, I mean, we'll see the market is very... Sometimes it's very, it's very...

56:23: Anna Rose:

nment akin to maybe November,:

56:56: Jordi Baylina:

I mean, on one side, I'm sometimes a little bit disappointed in the industry, though this is not new, especially in the last years. You see a lot of projects and applications that don't care at all about the values. This is a little bit annoying, but I'm an optimistic guy. I always have the hope that it just takes some time to get the full point of what this industry is about, and I think people need to go through this phase. If you come from a different industry, you arrive here thinking in the terms of other industries, how they work: the competitive spirit, products, these things. But at some point, and it can take a while, you realize that what you're building goes beyond a normal business product, beyond just trying to get richer than the other one. Here you are trying to build a different way for all humans to organize. You are trying to build a common good, something that just works, something that's there, something that's part of humanity, that doesn't belong to anybody, and that's useful.

58:10: Anna Rose:

Doesn't belong to a country, doesn't belong to a company.

58:13: Jordi Baylina:

Exactly.

58:14: Anna Rose:

Yeah, it's kind of amazing.

58:15: Jordi Baylina:

And when you discover that, and when you also see how far away we are from it, that's when something clicks and you stop and see things in a very different way. Of course, you need to build things in a sustainable way and figure out how to do them, but your view changes very much and you start to prioritize other things. It's disappointing to see, but at the same time you're convinced that, okay, it's a phase. It's normal that people come here and see it that way; in a few years all these people will become ambassadors of this technology. It's part of the understanding process. Nobody fully understands on day one what this tech is about. Decentralization, these values of decentralization, censorship resistance, freedom, they're just there. It's like air: you just take it and use it. These are permissionless systems; nobody will even ask your name for using Bitcoin, or Ethereum, or any of the dApps that we are building.

When you get to this point, your life changes. Your vision of the system changes. And every day there are more of us, and this is what we are building. The steps to get there may be tough. Sometimes we build things that look very centralized, systems that you don't really like: centralized sequencers, centralized provers, escape hatches, all these things. But we need to understand what we're building toward, and I think people are progressively getting it. We are a minority at this point, maybe, but very much because a lot of new people have come into the industry. So this is the good news.

Anna Rose:

Exactly. It's like a minority that's still growing along with the whole industry, because there are way more people nowadays than there were before. Even just talking to you about this landscape, there are so many things we could discuss: all these different ways of looking at what's out there, and all the spaces that now exist for people to experiment. That's so cool. Jordi, thank you so much for coming on the show.

Jordi Baylina:

It's a pleasure, Anna.

Anna Rose:

Thanks for sharing an update on all that you're doing, yeah.

Jordi Baylina:

Thank you very much and see you next time.

Anna Rose:

Sounds good. All right, I want to say thank you to the podcast team, Henrik, Rachel and Tanya. And to our listeners, thanks for listening.