This week, Anna catches up with Tarun Chitra and Sreeram Kannan during a spontaneous session recorded at Devconnect 2023 in Istanbul! They cover a variety of topics discussed at the event, including ZK toolkits, intents, and data availability (DA), and the challenges and opportunities these technologies present in the current ecosystem. Later, they explore the complexities and nuances of EigenLayer, offering detailed insights into its functionality, applications, and potential impact on the industry.

Here are some additional links for this episode:

Aleo is a new Layer-1 blockchain that achieves the programmability of Ethereum, the privacy of Zcash, and the scalability of a rollup.

As Aleo is gearing up for their mainnet launch in Q4, this is an invitation to be part of a transformational ZK journey.

Dive deeper and discover more about Aleo at aleo.org


Transcript

00:05: Anna Rose:

Welcome to Zero Knowledge. I'm your host, Anna Rose. In this podcast, we will be exploring the latest in zero knowledge research and the decentralized web, as well as new paradigms that promise to change the way we interact and transact online.

00:28:

This week, we bring you an interview with Tarun, Sreeram, and me. This was a spontaneous one, recorded during the Devconnect event in Istanbul. In fact, we were sitting just outside the Aleo Hackathon for the recording. Like, we were just in the hall, so you as a listener may notice some background noise, people walking by, you know what I mean, general hackathon sounds. To kick off the episode, we did a quick survey of the ideas that we had discussed at Devconnect, including ZK toolkits, intents, and DA. And then we spent some time really digging into EigenLayer, something I had yet to properly do. Now, both Tarun and I are investors in EigenLayer, him through Robot Ventures and me through ZK Validator. And while Tarun clearly deeply understands the system, as you will hear leading up to this interview, I did not. So you'll hear me stumbling around a little bit and trying to fill in the gaps in my understanding. And unlike other podcasts, this one was very spontaneous, so we didn't have very much time to prep. We were also recording this at the end of the Devconnect week, and we were a little bit tired. So yeah, here is a more unfiltered version of the show for you.

01:34:

But before we kick this one off, I do want to say a big thank you to all of the folks who came out for the ZK Hack Istanbul event. This was our second IRL hackathon, and it ran from November 10th through 12th. We had over 170 hackers and 59 teams, and 48 of these completed and submitted their projects at the end of the weekend. Once again, we were blown away by the creativity, technical skills, and speed of development of these hackers. I will link to a tweet which highlights all of the winners, runners-up, and bounties, but a big shout out to everyone who came through, especially the winning teams, Katz, DamnFair, AnonAbuse, zkVM, and =nil; chronicle. We also had a chewing glass prize this time around, and the winners were O1JS SHA256 and Hello, HyperCube. The hackers got to vote on their favorite project, and so Hacker's Choice went to KZG CEX Solvency. And in the three days following the event, Devfolio, the hackathon platform we'd used for the event, ran a quadratic voting competition where everyone who had participated could vote on all of the teams. And for that, Circom Monolith came in first. So congrats to all of the winners. But yeah, thank you all for coming out.

02:49:

I know the team was so excited about it. We've already started planning our next IRL hackathon. We're aiming for May, June, probably summer in Europe. But even before that, we're going to be hosting our next online ZK Hack. So this is us returning to the puzzle hacking competition and multi-week workshops. And we're planning this for mid-January. So keep an eye on the ZK Hack Discord and ZK Hack Twitter for more info. Now, Tanya will share a little bit about this week's sponsor.

03:17: Tanya:

Aleo is a new layer 1 blockchain that achieves the programmability of Ethereum, the privacy of Zcash, and the scalability of a rollup. Driven by a mission for a truly secure internet, Aleo has interwoven ZK proofs into every facet of their stack, resulting in a vertically integrated layer 1 blockchain that's unparalleled in its approach. Aleo is ZK by design. Dive into their programming language, Leo, and see what permissionless development looks like, offering boundless opportunities for developers and innovators to build ZK apps. As Aleo is gearing up for their mainnet launch in Q4, this is an invitation to be part of a transformational ZK journey. Dive deeper and discover more about Aleo at aleo.org. So thanks again, Aleo. And now here's our episode.

04:06: Anna Rose:

I think we ask this every time we do these live event ones, but what were the themes that you've been hearing? And it doesn't have to be talks. This is maybe what people are talking about. I mean, you just mentioned DA, data availability, maybe in different forms, or yeah, do we need it in the way it's been proposed or not? Maybe let's start with DA, expand a little on what's been discussed, and then we'll talk topics.

04:29: Tarun Chitra:

I think most of the stuff about DA has just been focused on lowering fees for use cases that are not purely financial or like DeFi types of use cases. And I think because Celestia is live, it's now like, it doesn't feel like the only DA is ETH L1, it feels like there's like choice in the marketplace. And I think Sreeram is probably the best to talk through the pros and cons of each of the different models. But I think a lot of it is just focused on reducing cost. But I do feel like there's still, the reason I would say we aren't fully out of the disillusionment side of the distribution, is there just hasn't... It feels like everyone I talked to is basically saying when application, when new application, right? Like there's tons of new infrastructure, but there haven't been too many new applications... Like there's no... The Uniswap moment of this cycle hasn't happened yet.

05:20: Anna Rose:

Let's talk DA for a second though. You just said Sreeram is like a good person to break this down.

05:25: Sreeram Kannan:

Yeah.

05:26: Anna Rose:

Can you?

05:26: Sreeram Kannan:

Yeah, sure. I do think, maybe it's the sample of people that I interact with but it seemed like a lot of people are excited about DA at this time. It's really credit to the Celestia people for actually making this happen, because it's such a boring concept.

05:44: Tarun Chitra:

Maybe define it for us.

05:45: Sreeram Kannan:

Okay, so to define it, the idea is when you have the... One of the ways to scale a blockchain, and this is particularly the roadmap that Ethereum committed to, is to use rollups. And what are rollups? Rollups basically offload computation and then create proofs that the computations were done correctly. When you do this, one of the important things you have to do is to publish either the inputs or the outputs of the computation. This data has to be published somewhere. Why does this need to be published? Because if this data is withheld and not made available by the rollup operator, what might happen is funds could get stolen. Because if you don't know the data, then you can't compute what the rollup did. Or, in the case of a zk-rollup, even though you don't need to double check the computation, somebody needs to know the state in order to continue the computation thereafter. So the role of a data availability layer, and I think some people are saying maybe it should be called a data publishing layer, is to ensure that when data is published, everybody has access to this data. So that's what a data availability layer is. Even though it's a critical piece of the blockchain stack, the realization that it could be decoupled and scaled separately is one of the key insights in the Ethereum roadmap, as well as what Celestia took and built around.

07:17: Tarun Chitra:

So one thing, I think, to a person who's never heard of DA before that oftentimes gets used as an analogy, a very imperfect one, but one that's worth maybe going through is a comparison of like dialup or T1 to broadband, right? In terms of like single chain dialup versus broadband, like many asynchronous kind of connections, capturing data. How do you think about that analogy, especially when it comes to this idea that in blockchains, it's not just about how much data you get, it's also execution environment, there's all these other overheads. Yeah, how do you kind of think about that?

07:52: Sreeram Kannan:

Yeah, so one of the things that the modular landscape does is to decouple things like computation throughput from data throughput. So you can separate the quality of data availability in a consensus protocol by looking purely at how many bytes per second it can transport. And then you're slapping a VM on top of it. And the VM translates that: oh, I can take 200 bytes per transaction, and each transaction on average takes this much computation, and you can translate that into a throughput or a transactions per second for a certain kind of transaction. So the fundamental performance metric of DA layers is therefore bytes per second. So the comparison with something like dialup versus broadband is quite appropriate.
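To make that bytes-per-second framing concrete, here is a quick back-of-the-envelope sketch. It is our illustration: the 1 MB/s capacity is hypothetical, and only the 200 bytes per transaction figure comes from the episode.

```python
# A DA layer's raw capacity is measured in bytes per second. Dividing by
# an average transaction size gives a rough, execution-ignoring TPS figure
# for whatever VM is slapped on top.
da_throughput_bytes_per_sec = 1_000_000   # hypothetical 1 MB/s DA layer
bytes_per_tx = 200                        # the per-transaction size Sreeram cites
print(da_throughput_bytes_per_sec / bytes_per_tx, "tx/s")  # -> 5000.0 tx/s
```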

08:42:

But what's actually happening under the hood is the following, which is that in most of the blockchains today, you have highly redundant transmission of the data. Every node downloads and stores exactly the same data. Therefore, the entire system's performance is bottlenecked by any one node's bandwidth availability. Whereas scaling data availability fundamentally pertains to the idea that everybody doesn't have to download all the data. Everybody downloads only a portion of the data, using things like erasure codes and KZG polynomial commitments, so that even if some fraction of nodes go offline or are malicious, you can still reconstruct every bit of the data from a small fraction of the nodes.

09:27: Tarun Chitra:

Okay, let's explain it to a 15-year-old or high schooler for data availability sampling and also the commitments. Because I think understanding the trade-offs and just the process in which this is done is important before we talk about the security guarantees.

09:43: Sreeram Kannan:

Yeah. Imagine you have a bunch of nodes, and you want to store some data on those nodes, and you don't want everybody to store all the data. One way you do it is you take the data and then split it up into chunks and then say that data item one, I only store on a few nodes, and data item two, I only store on a few nodes, and so on. If you did this, what would happen is if the nodes that stored data item one go offline, then you lose that portion of the data completely. So in simple schemes like this, where you are redundantly storing data, but only on a small subset of nodes, you have this problem that scalability and security are at odds. If you want more scalability, you will say that only a few nodes store each data item, but that means that if those nodes go offline, you lose that data item. So what erasure codes do instead is allow you to mix the data items in complex configurations.

10:48:

Imagine X1, X2, X3 are different data items. You send X1 + X2 + X3 to one guy, X1 - X2 + X3 to one guy, X1 + X2 - X3 to another guy, and so on. So from any three nodes, you get three linear combinations, and you can actually find out what X1, X2, and X3 are. So that's the basic idea of erasure coding and data availability scaling. I'm going to separate data availability scaling from data availability sampling. Data availability scaling basically means everybody doesn't download all the data, but together, the system is still secure and has all the data, even if some of the nodes, or a good fraction of the nodes, go offline. What is data availability sampling? Imagine you have a network that is running, and I'm sitting outside the network. And I want to know whether this network's doing its job of actually storing and downloading the data. So how can I verify that?
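Before the sampling question gets answered below, here is a minimal sketch of the any-k-of-n recovery property just described. It is ours, not from the episode: it uses real-valued linear algebra for readability, whereas production systems use Reed-Solomon codes over finite fields together with KZG commitments.

```python
import numpy as np

# X1, X2, X3: the original data items (k = 3).
data = np.array([7.0, 42.0, 13.0])
k = len(data)

# Encode into n = 5 shares: each node stores one linear combination of the
# data. A Vandermonde matrix guarantees any k rows are linearly independent,
# so any k shares suffice to recover the data.
n = 5
coeffs = np.vander(np.arange(1, n + 1), k, increasing=True).astype(float)
shares = coeffs @ data

# Nodes 0 and 3 go offline; the k = 3 surviving shares still recover X1..X3.
surviving = [1, 2, 4]
recovered = np.linalg.solve(coeffs[surviving], shares[surviving])
assert np.allclose(recovered, data)
print("recovered:", recovered)
```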

11:45:

A normal method to verify that would be to say, go and download all the data, and then you will know whether the data item's available. Data availability sampling is this very nice idea that instead of actually downloading all the data, you decide which random samples to query for. And then you say, oh, give me sample 30, give me sample 35, give me sample 50. And if you get all of them, you're like, okay, it seems like it's highly likely that all the samples must be available, because the guy is giving me whatever I asked for. So that's data availability sampling. Data availability sampling is a mechanism to scale verifiability. It is not a mechanism to scale the consensus bandwidth in the network. So you can think of these two things, data availability scaling, which is how do I make sure that no node in the network actually downloads all the data, versus data availability sampling, which is a verifiability scaling method.
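As a rough illustration of why a few random samples give high confidence, here is a back-of-the-envelope calculation. It is our simplification: it assumes sampling with replacement and a 2x erasure-code extension, and real analyses are more careful.

```python
# If a fraction f of the erasure-extended chunks is withheld, a sampler
# that queries k random chunks is fooled only if every query lands on an
# available chunk, i.e. with probability roughly (1 - f)**k.
def fooling_probability(f: float, k: int) -> float:
    return (1 - f) ** k

# With a 2x erasure-code extension, withholding anything meaningful forces
# the adversary to withhold at least half the chunks, so f >= 0.5.
for k in (10, 20, 30):
    print(k, fooling_probability(0.5, k))
# k=10 -> ~1e-3, k=20 -> ~1e-6, k=30 -> ~1e-9: high confidence without
# downloading the block.
```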

12:39: Anna Rose:

When you say both of these, and it might be very off, but it sort of reminds me a little bit about MP3s and the sort of curves. Is there any connection between erasure coding or the sampling and that kind of technology?

12:53: Sreeram Kannan:

Yeah, I mean, definitely. So MP3 is compression technology, but if you look at things like CD-ROMs, where there is a chance that, for example, you may scratch your CD, data has to be stored in such a way that it's resilient to a certain number of erasures. And that's exactly the same technology that is used in data availability sampling.

13:20: Anna Rose:

Nice. Just sort of bringing it back to the event and the week. Are there any other topics other than DA that you thought were discussed a lot where you were?

13:30: Tarun Chitra:

Restaking, which is what we're going to talk about more in a bit. I may or may not have been also one of the proponents mentioning it a lot. So...

13:40: Anna Rose:

So it's funny, someone said to me, everyone is talking about Intents. But I personally have not actually heard much about it. I heard one project that was doing some ZK Intents project that talked to me, but otherwise I hadn't heard very much.

13:54: Tarun Chitra:

transaction supply chain from:

14:57:

On the other hand, this thing was a combinatorial auction, too hard to do, not centralized. And effectively, what you see is you see a bit of an aggregation event there. So we went from totally disaggregated to somewhat aggregated. Then we went in last year at the merge to proposer-builder separation, where instead of bidding on sequences of transactions, you bid on the entire block. This kind of moving from very disaggregated to aggregated is realistically a way for making Ethereum sustainable. It increased the net revenue to proposers as you did this aggregation process and it took more of that revenue from MEV searchers and people of that form and gave it to proposers. On the other hand now, the users want to rebel against that sort of monopoly power which came from that. And you can argue that we're now about to enter the disaggregation era of Ethereum and Intents and RFQ systems are a way of disaggregating the value extracted by the proposer and returning it partially to the user.

16:01:

And so this sort of aggregation-disaggregation process, you know, when I was five, my mom once told me this very wise koan, which is: capitalism is just bundling and unbundling repeated ad infinitum. And at some level, I think this entire, what we're seeing in the transaction coalescence world is that, and the Intents model is unbundling the MEV in a sort of more peer-to-peer fashion versus kind of giving it out to a proposer. So I think that's the reason people like it. It feels like it's the thing that has the highest growth rate or highest sort of derivative, like people think it's making the most progress. If you look at UniswapX, it's been taking in a lot of Uniswap's volume. On the other hand, it's sort of this thing where it's like a nebulous concept. And of course, disclaimer, I'm spending a lot of time trying to write some research on this because it feels like this type of thing where everyone... It's like there's the porn, you know it when you see it, Supreme Court aspect to it. Like everyone is like, this thing is an intent, this thing is not an intent.

16:58:

Yet there are no definitions. And so I think trying to make sense of that is like, what I would say is one thing people are doing, people are making all these software frameworks for writing these types of things that effectively, in my mind, are ways of avoiding paying the proposers.

17:13: Anna Rose:

Wow.

17:13: Tarun Chitra:

Sorry for the rant.

17:14: Anna Rose:

but they launched in February:

18:09: Tarun Chitra:

You waded almost into one of the controversies on my zkEVM panel, which was the first question asked, which was who's the first zkEVM? And of course, one of the answers was, oh, we're all in it together. We're all first. And then immediately after that, someone was like, no, no, no, we were first.

18:27: Anna Rose:

Nice. I do want to mention one more, which is on the ZK front: this idea that if we look at the difference between last year and this year, it's the fact that there are now, maybe still in testnet, but there are now environments, sandboxes, testnets, frameworks, where people can start to build and deploy ZK stuff a lot faster. I think that's also why in April of this year, when we did ZK Hack Lisbon, people were able to build anything. I mean, otherwise, they're just doing cryptography implementation, which is very, very challenging. Only a limited number of people can do it. But I think the space has been opening up to more noobs, kind of... Not first-time hackers, maybe, like experienced developers, but first-time building in ZK. And then this time around, we saw even more of it. And there are even more tools around ZK and how to deploy them faster. And I don't know if we're at the point where there's a lot of debuggers built into systems. I know these languages and these frameworks are still super, super young, but yeah, that's something that I noticed, especially at the hackathon, but also kind of all week. Oh, I think last time in Paris, I also asked, what was the best swag that you saw this week?

19:45: Tarun Chitra:

Speaking of Intents, there is a team called Essential, and they give out Essential Oils. And me, as someone who has a house filled with 500 candles and incense burning all the time, I appreciate a scent-based swag much more than a black t-shirt with a logo.

20:04: Anna Rose:

Very nice. All right, I think we could shift the conversation now a little bit over to EigenLayer. I feel we've set the scene in describing DA. I think it's funny because I would often kind of put EigenLayer in the DA camp. Like that somehow it competed with Celestia as it was being proposed as just like a DA layer. But is that wrong to put it...

20:30: Sreeram Kannan:

No, it's not wrong.

20:31: Anna Rose:

Okay, it's correct. And yet, that's a problem... The minute I look at the system, I'm like, is it DA?

20:37: Sreeram Kannan:

It's because we're building two things. We're building EigenLayer, which is a general purpose mechanism for sharing decentralized trust. So you can take the staking and the node operators and the economics underneath the Ethereum network, and EigenLayer lets you share that with anybody who wants to consume it.

20:56: Anna Rose:

To share it.

20:57: Sreeram Kannan:

To share it.

20:58: Anna Rose:

Okay.

20:58: Sreeram Kannan:

Imagine you want to build an Oracle or a data storage network or a new AI inference network or a decentralized prover network for a ZK. Any of these things, you need many nodes, and they need to put in some stake, and then they need to participate in active validation of a certain service. So EigenLayer is a generalized mechanism for anybody to build arbitrary distributed systems on top of the Ethereum trust network. So that's EigenLayer.

21:28: Anna Rose:

Okay. So that's not DA.

21:29: Sreeram Kannan:

That is not at all DA. One of the modules, so we call these AVSs, Actively Validated Services. And anybody can build an AVS. To demonstrate the power of the platform, we built the first actively validated service ourselves. And that is called EigenDA, which is a data availability service.

21:48: Anna Rose:

I see. But it's still kind of in the Ethereum camp, right? Does it model itself as a Celestia-like hub that rollups are supposed to link into, or is it doing a different kind of DA?

22:00: Sreeram Kannan:

Yeah, that's a great question. The way we modeled our data availability system is as an adjunct to the Ethereum blockchain. So on the Ethereum blockchain, let's say you're running your own ZK rollup, and you want to post your data somewhere, and Ethereum doesn't have enough bandwidth, or it's too expensive, whatever, and you can post it on EigenDA. EigenDA is not a standalone blockchain, unlike Celestia or Avail or other things. And it was designed from first principles to be purely adjacent to Ethereum. What this does, we surprisingly found, is free you from a lot of the trade-offs that exist in building a data availability system. Because the other blockchains that are serving as data availability systems also build an ordering service. And basically, each is a blockchain because there is an inherent ordering of the data blobs that have been posted into that system.

23:01:

What we realized is rollups already have an ordering service: they're relying on Ethereum in the world that we are living in. So what we do is just provide a data attestation service, where you write the data to the system, the system gives you a thumbs up saying that the data has been stored and custodied, and then that aggregate signature of the commitment is posted onto Ethereum. And Ethereum itself has an ordering layer, so you have an implicit ordering of all the data blobs through Ethereum, by decoupling consensus and data availability. So we are even more modular than these other solutions. And this comes with benefits and trade-offs. And the benefits are very clear when you are a rollup which is natively on Ethereum. But these other systems offer you mechanisms where you don't have to be on Ethereum for anything. And then you can be natively on Celestia or Avail and so on.
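To pin down the moving parts of that flow, here is a deliberately simplified toy model. All names and numbers are ours: a real system commits to erasure-coded chunks with KZG commitments and aggregates BLS signatures, not plain SHA-256 hashes in a Python dict.

```python
import hashlib

OPERATOR_STAKE = {"op1": 40, "op2": 35, "op3": 25}  # hypothetical stake (%)
QUORUM_PCT = 67                                     # stake that must attest

def commitment(blob: bytes) -> str:
    # Stand-in for a KZG commitment to the erasure-coded blob.
    return hashlib.sha256(blob).hexdigest()

def attest(blob: bytes, signers: list[str]) -> dict | None:
    """Operators 'sign' that they custody their chunks; if enough stake
    signs, return the attestation that gets posted to Ethereum, which
    supplies the ordering."""
    signed = sum(OPERATOR_STAKE[s] for s in signers)
    if signed < QUORUM_PCT:
        return None  # not enough stake custodies the blob
    return {"commitment": commitment(blob), "stake_pct": signed}

receipt = attest(b"rollup batch #1", ["op1", "op2"])
print(receipt)  # this small record, not the blob itself, lands on Ethereum
```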

24:09: Anna Rose:

So in the Celestia model, though, they do have consensus and data availability.

24:13: Sreeram Kannan:

That is correct.

24:13: Anna Rose:

But they don't have a settlement layer.

24:15: Sreeram Kannan:

Yeah. That's right.

24:16: Anna Rose:

In your case, it's just the data availability.

24:18: Sreeram Kannan:

It's purely just data availability.

24:20: Anna Rose:

Okay. And just one thing, because they had this other project, I think it's Blobstream. Am I saying that right?

24:25: Sreeram Kannan:

Blobstream, yes. From Succinct.

24:25: Anna Rose:

Is that similar?

24:28: Sreeram Kannan:

That's not similar. So what that does is bridge information from the Celestia blockchain. So let's say you are an Ethereum rollup, and you still want to consume the Celestia blob space. So you're a rollup, you go and write your blob into the Celestia blockchain. But your rollup contract is sitting on Ethereum. So now you need some kind of a bridge which tells you what has happened in the Celestia universe, and then that information is bridged into Ethereum. That's what Blobstream is.

24:57: Anna Rose:

Got it. Okay.

24:58: Sreeram Kannan:

So one of the key things that these other data availability systems like Avail and Celestia were built on is data availability sampling. And like I was saying, data availability sampling is a mechanism to verify, from a third-party point of view, that the data is available. And this is really useful and gives you very high trust guarantees on the system when you're natively on Celestia. Because even if all the Celestia validators collude and try to sign on a data item for which they did not publish the data, if you run a light node, you will try to sample the data chunks and you find out, hey, this block's data chunks are not available. Even though a majority of the validators signed off on the block, you will not accept it, and you'll say, reject this block, because I'm unable to access the data items inherent in it.

25:55:

And if everybody does the same thing, then the blockchain will stall, and you can fork the chain and then recover it to a correct state. So this is a superpower that is possible on blockchains that implement data availability sampling. But when this state is bridged into Ethereum, it's different, because an Ethereum smart contract does not have the ability to do data availability sampling. So what that means is essentially, you have to trust the majority of validators from the other network, from the Celestia network or Avail network. If the majority of these nodes are malicious, the rollup contract still makes progress, and your money is stuck in the rollup. So this can definitely happen. So the benefit of sampling is not prominent, or I would say non-existent, when you are an Ethereum-adjacent layer. And so we built our system around data availability scaling instead of data availability sampling. What these other systems did is they also made trade-offs, where even though the system has data availability sampling, there is no scaling of data availability. What it means is every consensus node in Celestia downloads the entire block. And there are lots of technical reasons for this, but that's the architecture. And whereas it is highly scalable for verification, it is not scalable to be a consensus node.

27:29: Tarun Chitra:

never you told me about this,:

28:06: Sreeram Kannan:

Hey, number going up.

28:07: Tarun Chitra:

Great, for sure. But I mean, you have to be able to bootstrap such a network, right? You couldn't just start Telestia tomorrow as a fork and hope that it's secure enough, right? It's actually quite hard to do that. And it's an accomplishment to have gotten to that market cap. Right?

28:25: Sreeram Kannan:

Yeah, absolutely.

28:26: Tarun Chitra:

So first off, but I think the interesting thing about EigenLayer and restaking in general is you don't have to bootstrap a token. You get to use Ethereum's market cap. And the way you're doing it is you're opting into extra slashing rules and getting potential fees but also potential extra slashing. But that allows you to piggyback off of Ethereum. So maybe walk us through the process and why you're able to build DA in this way that doesn't rely on a new consensus.

28:57: Sreeram Kannan:

The process for how a staker or operator opts into EigenLayer: there are two mechanisms. One is called native restaking. You stake in Ethereum natively. And when you stake in Ethereum natively, you have to set who's the withdrawal address. Usually you'll set it to your own hardware wallet or wherever is the safest place you have, because when you withdraw, that's where the funds go to. Instead, when you opt into EigenLayer, what you do is add a step in the withdrawal flow. You set the withdrawal address to a contract that you create in the EigenLayer system called an EigenPod, which is your own little zone in the EigenLayer universe. And in the EigenPod contract, you then set your withdrawal address to your hardware wallet. So when you trigger withdrawal from Ethereum, the funds go into the EigenLayer contract. And if you didn't do any kind of malicious activity and weren't subject to EigenLayer slashing, you will then be able to withdraw the funds into your wallet.

29:57: Tarun Chitra:

Plus other fees you may have earned for...

29:59: Sreeram Kannan:

Yes. So you will be able to withdraw your fees in the normal mode every few weeks or whatever from the EigenLayer protocol. That's why you're doing all these things. The reason to take the trouble of opting into other things and risks is because you are actually earning something for delivering these services. So this is the flow for native restaking. Now that you do this, you become a native restaker on EigenLayer. And you can go into the EigenLayer contracts, and because you have the EigenPod, you can specify what services you want to opt in to and operate yourself. Maybe these services are like, I want to run EigenDA, I want to run an Oracle service, I want to run a bridging service, I want to run an intent-based architecture. Whatever is the set of services that you are opting in to run, you can decide. It's a purely opt-in system for all sides.
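A toy model of that withdrawal re-routing, to make the mechanics concrete. This is our sketch, not EigenLayer's actual contract interface: the class and method names are invented.

```python
# The beacon-chain withdrawal address points at an "EigenPod" contract,
# which only forwards funds that were not slashed under the extra
# conditions the staker opted into.
class EigenPod:
    def __init__(self, owner: str):
        self.owner = owner
        self.balance = 0.0
        self.slashed = 0.0            # accumulated EigenLayer slashing

    def receive_beacon_withdrawal(self, amount: float) -> None:
        self.balance += amount        # funds land here, not at the wallet

    def slash(self, amount: float) -> None:
        self.slashed += amount        # an opted-into service proved a fault

    def withdraw_to_owner(self) -> float:
        payout = max(self.balance - self.slashed, 0.0)
        self.balance = 0.0
        return payout                 # only now does ETH reach the owner

pod = EigenPod(owner="0xStakerWallet")
pod.receive_beacon_withdrawal(32.0)   # triggered withdrawal from Ethereum
pod.slash(1.0)                        # hypothetical penalty from one service
print(pod.withdraw_to_owner())        # -> 31.0
```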

30:55:

So you opt in and say, hey, I'm doing this. And then you have to say, who's the operator who's going to run the service? You could say yourself and say that, yeah, I download and run these softwares. And each of these is now arbitrary software. It's not confined to the EVM or anything like that. It's just a binary or a docker container that you can download and run on any computer. So you can start building general purpose services which have nothing to do with the EVM. So let me explain. So now I mentioned the staker and the operator side. Then there is a service, somebody who's building these new services. They build two distinct things. One is a service contract which sits on Ethereum and then talks to the EigenLayer contracts. The service contract does minimal overheads and coordination. What are the coordination things?

31:50:

Number one, who can register into your system? Do they need 32 ETH? Maybe they only need 3 ETH because you're a different system, whatever. So that's number one, registration conditions. Number two, what are the payment conditions? If you opt into my data storage service and you store one gigabyte of data, you will get one ETH, whatever, some kind of a payment condition. Number three, slashing conditions. If you say that you're storing data, and then I randomly challenge you to produce the data, and then you don't do it, you will lose your ETH. Something like that is the slashing condition. So this specifies a service from the service side. So EigenLayer is the coordination mechanism which helps stakers find operators, find services, and then these three together create a service economy where these services are then offered to consumers. Maybe it's a DeFi app which is using an Oracle service or an Intent service or a bridge or a DA.
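Those three knobs are easy to state as data. Here is a sketch, with field names of our own invention, of what a service along those lines might specify:

```python
from dataclasses import dataclass

@dataclass
class ServiceSpec:
    min_stake_eth: float        # 1) registration condition: who may join
    fee_eth_per_unit: float     # 2) payment condition: what work earns
    slash_eth_on_fault: float   # 3) slashing condition: what faults cost

# A hypothetical data storage service along the lines described above.
storage_service = ServiceSpec(
    min_stake_eth=3.0,          # "maybe they only need 3 ETH"
    fee_eth_per_unit=1.0,       # e.g. 1 ETH per gigabyte stored
    slash_eth_on_fault=3.0,     # fail a random retrieval challenge
)
print(storage_service)
```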

32:50:

So that's the overall architecture. I mentioned there are two ways of staking. One is native restaking, which you had to do this withdrawal credential thing. There's also liquid restaking. You can take an LST like the Coinbase LST or the Lido LST or the Rocket Pool LST, and then put it into the EigenLayer contract. It's just like any token that you deposit, now you have a status inside the EigenLayer contracts that lets you participate in this economy.

33:18: Anna Rose:

I mean, you sort of mentioned it as this underlying general purpose, like it's not a framework, but it's like a space where you can deploy things. You mentioned the DA layer as one of those applications. Is the restaking an application on top of it as well?

33:37: Sreeram Kannan:

Restaking is what powers the EigenLayer.

33:42: Anna Rose:

Is EigenLayer DA just kind of like, you want it to show what is possible, or is that meant to be an actual product that's being used?

33:51: Sreeram Kannan:

EigenDA is a product.

33:53: Anna Rose:

Okay.

33:53: Sreeram Kannan:

It was designed not only to be a proof of concept, but also to be a proof of value, which means it's something that is valuable and useful and delivers fees. Because when you have this complex, multi-sided marketplace, you have stakers, operators, services, service consumers, like there are four sides at minimum in this marketplace, and it's very difficult to bootstrap. One way to do it is to actually build a powerful service which is fee earning, so that stakers actually have something to get.

34:25: Anna Rose:

I see. Okay.

34:27: Sreeram Kannan:

So it's not meant purely as a proof of concept, it is a proof of value.

34:32: Anna Rose:

Will you be doing other things like that?

34:34: Sreeram Kannan:

Will we be building other services? We are not intending to be building the other services. We are intending for other people to be building all these services. You briefly asked what types of other services, maybe I can go into that a little bit.

34:46: Anna Rose:

Yeah, yeah. That's kind of what I wanted to find out. First, I wanted to understand what is EigenDA. Because maybe... I mean, my initial thought was, oh, maybe you will be building three of these, and they're all sort of feeding into the system. But what I hear is you're building one to create value, so that it's actually like paying kind of through the system to show its proof. But the other two, would you just put out proposals? Or would you be like, oh, this is what you could build?

35:11: Tarun Chitra:

So anyone can build this. And I think an interesting kind of overheard, which may or may not have had some input from me, is: all L1s either die a hero or live to turn into an L2 using restaking or their own DA layer.

35:35: Sreeram Kannan:

That is an interesting take.

35:36: Tarun Chitra:

But the reason I mentioned that is, as Sreeram can tell you about, there is an L1 who's moving to being an L2 using restaking.

35:43: Anna Rose:

Oh, yeah, yeah. And that's the Celo thing, right?

35:47: Sreeram Kannan:

Celo is actually working with us and using EigenDA to become a rollup on Ethereum. The total data throughput that Celo is operating at was, I think, even greater than the total throughput of Ethereum. So there was no chance for them to be a native rollup posting data to Ethereum, and not only because of fees. Celo has lots of users in Latin America and so on, and very low fees. So there's no chance for them to actually be a native rollup. Whereas EigenDA has extremely good cost economics, and that enabled them to become part of the Ethereum ecosystem. We are also seeing another trend. For example, NEAR, which is an L1, is working with us to build lots of services for the rollup ecosystem. So for example, rollups need sequencers, and NEAR Protocol already has this consensus protocol and everything already running. So you could actually have the NEAR blockchain work with Ethereum to provide some of these other services adjacent to the Ethereum blockchain.

36:55: Anna Rose:

Can you describe, kind of going back to that initial question, what the services are?

36:59: Sreeram Kannan:

What are the categories? So actually, it has been amazing to see. Our own vision has been open innovation. We want to maximize the surface area of permissionless innovation. That's really what motivates us in this project. But when we started, we had a couple of examples of what might be possible as EigenLayer services. And today, at the Restaking Summit, I just gave a talk where I showed 25 new services in five categories.

37:29: Anna Rose:

Oh, neat.

37:29: Sreeram Kannan:

And I'll maybe give a sense of these categories and what some of the most exciting things are.

37:33: Anna Rose:

Are you publishing that somewhere? Because maybe we can add that to the show.

37:37: Tarun Chitra:

Did you record it, all the topics?

37:39: Sreeram Kannan:

Yeah, we will have a video.

37:40: Anna Rose:

Nice, nice.

37:41: Sreeram Kannan:

So I can give a link to that or just send a picture. For example, one category which is very obvious and where we see immediate traction is rollup services. So rollups need lots of adjacent services in order to make the rollup economy work. One example I was just mentioning is sequencing. How do you... You know, a single sequencer is like a censorship bottleneck in the rollup system. Do you want to have a small group of decentralized sequencers or a large group of nodes which participate in ordering transactions?

38:15: Anna Rose:

So is that decentralized sequencers?

38:17: Sreeram Kannan:

That is decentralized sequencers.

38:17: Anna Rose:

That's not the shared sequencers.

38:20: Tarun Chitra:

It could be.

38:21: Sreeram Kannan:

It could be a shared sequencer or a non-shared sequencer. But both of them need decentralization, and so all of them can use EigenLayer to actually build these kinds of... We're seeing many different models of decentralized sequencing being built, but Espresso, which is a leading shared sequencer, is also working with us on sharing security from Ethereum in addition to their own native token.

38:46: Tarun Chitra:

So as a disclosure, I'm an EigenLayer investor, as is Anna. I think the zeroth order model I had in my head for how this network accrues value, despite not having its own token and layer 1, is effectively this idea that if you think about all the rollups in the world, they're eventually going to have to have decentralized sequencers. Those fees have to go back to Ethereum somehow, right? Ethereum's going to be losing a ton of fee revenue as more and more value migrates away. And the main way of, unfortunately, this word is overused, aligning the rollup fees with the proposer incentives and the L1 is to actually have a way for the L1 proposer to also earn the rollup fees. And the natural way to do it is via something like EigenLayer. Because if I use restaking, I'm reusing ETH, I'm earning fees in ETH, I'm in some ways giving the rollups some of my ETH in exchange for some other fees.

39:46:

One interesting thing that I've been thinking about a lot is if you look at this model of EigenLayer for restaking the different rollups plus DAS and ETH just being the place where data is posted, some data is posted, or maybe proofs of validity are posted, you really do start looking like Polkadot without the auctions.

40:06: Anna Rose:

Yeah, actually, Tarun, you kind of drew this out. You said there's like these three pieces that basically make you...

40:13: Tarun Chitra:

It really looks like Polkadot except for the auctions. I think the auctions were just very expensive for the parachains; here this is much more economical.

40:20: Sreeram Kannan:

Yeah. So the comparison to Polkadot is actually accurate in one sense, which is that basically, parachains gave a certain level of programmability while also maintaining shared security. But there was a certain amount of homogeneity which was needed. For example, they all had to be in WASM, and you have to write your virtual machine on top of that. And not only that, you only share security in the Polkadot model for the execution. For example, let's say you want to build a secret sharing service where you take a secret and then encode it into chunks and then send each node a portion of the secret. You cannot really do this in the shared security model of Polkadot. So the way we think about it is, in the history of blockchain, Ethereum was created as this Turing complete general purpose programming language, but it gave you only a programming interface at the level of the virtual machine, and then all the coordination about how the distributed system is managed, how the consensus is managed, and all of it was internalized into the protocol.

41:30:

And as we were thinking about new ideas for consensus and scaling and so on, what we found is that this limited the level of permissionless innovation that could penetrate into these areas. So if I had a new idea... As academics, we had tens of papers on consensus protocols, and we talked about many of them in the last ZK podcast. And if I were to go build a new blockchain for each new consensus protocol, that would just be a completely non-viable way to do things. Whereas what EigenLayer does is give you the first general purpose programmable distributed trust system. So you can say what each of the nodes in the system has to run; you have complete programmability at the level of the distributed system. So you can start building basically anything that requires decentralized trust.

42:21: Tarun Chitra:

Yeah, for the record, I wasn't saying it's exactly Polkadot, it was just more, it's funny, because I feel like Ethereum was always like, we're never going to have Fishermen and do... And it's sort of indirectly...

42:33: Sreeram Kannan:

All of these things are happening, actually, layer by layer.

42:37: Anna Rose:

Everything converges to Polkadot.

42:39: Tarun Chitra:

I feel like Ethereum is very good at taking the good ideas from different places and then smooshing them together?

42:47: Sreeram Kannan:

You know, I think there's also something similar to be said for Cosmos and how a lot of the ideas from Cosmos also percolated back to Ethereum. There's the idea of interchain security and how EigenLayer is related to that. Yeah, so for sure. Going back to this...

43:05: Anna Rose:

The services; I want to hear more exactly.

43:07: Sreeram Kannan:

Going back to the set of applications, rollup services. Some of the examples are, I mentioned decentralized sequencers, bridges. For example, you want to build a super fast bridge between two ZK rollups. Each of the ZK rollups is only settling on Ethereum every few hours because of batching efficiencies, but I still want to interoperate between them at a much faster pace. Can I build some kind of a restaked service which knows the state from the other rollup, puts economic collateral at risk, and then starts helping you bridge between rollups? That's an example. You can start thinking about...

43:40: Anna Rose:

Well, could you do something like coprocessors, that sort of coprocessor model?

43:45: Sreeram Kannan:

Well, that's the next category. You're right on target.

43:48: Anna Rose:

No way.

43:49: Sreeram Kannan:

Okay. So...

43:50: Anna Rose:

EigenLayer is taking over everything, though.

43:54: Sreeram Kannan:

So to finish the rollup thing, one more category that we see is like the fishermen that Tarun just alluded to in Polkadot. The idea of who's watching... If there are a lot of optimistic rollups, somebody needs to be watching these optimistic rollups to trigger fault alerts. And today, there are a handful of major optimistic rollups, and there are lots of extraneous parties whose job involves also watching the network, because you're an RPC, you are an exchange, you're a block explorer. Whatever your job is, you happen to be watching the network. But in the era of thousands of application-specific rollups, some of the rollups are actually built to be highly transient, like they just open up, be a rollup for a few hours and then vanish. You know, do your NFT distribution and then vanish or whatever. So these kinds of rollups, who will be watching? And nobody knows whether there'll be enough people watching. So a watchtower service is being built where a random group of nodes are selected to watch each rollup, and you can spin up tasks and so on.

45:00: Anna Rose:

And that's the Fisherman kind of model.

45:02: Sreeram Kannan:

That's a Fisherman-like service on EigenLayer. The coprocessor is the next category, like rollups. The way I define coprocessor is like a serverless Lambda. It's a stateless service. I'm sitting in Ethereum, and then I want to run an AI inference. And then I want to consume the output of the AI inference. Why? Maybe because I want to do intelligent DeFi. Like, I put my money into a Uniswap pool, and I don't want to get raided by UniswapX, basically only sending toxic flow into my liquidity provision. So what I say is, I put my money into a pool and say that the price of this pool is modulated by an AI protocol, and the AI protocol looks at all the history of the trades and then tries to adjust the spread so that the toxicity is contained. But I need this AI inference to be highly trusted because if somebody says, oh, one ETH should be set as $20, and then somebody can come and raid the pool, you don't want that. So you want this AI inference to be highly trusted. So what you could do is run an EigenLayer service where there is enough economic security.

46:08:

Let's say you want to get at least $100 million of economic security because in a given day, your trade volume is less than $100 million; then that makes the system fully secure. So this is an example of a coprocessor. Another example of a coprocessor might be, I want to run a Linux box and this particular program on this Linux box and get the output and then post it on Ethereum. Or I want to run a database, I want to run a SQL query and then get it back. So all of these things could be either done using ZK technologies or they could be done using crypto-economic security. And the trade-off here is, what is the excess cost of proving that goes into cryptographic solutions like ZKML or ZKSQL or whatever set of solutions? Or, instead of paying for the cost of proving, can I pay for the cost of capital and then borrow the security from EigenLayer?
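That trade-off can be sketched numerically. All numbers below are invented; the point is only the shape of the comparison between paying per proof and renting at-risk capital.

```python
# Option A: pay a ZK proving overhead on every query.
def zk_cost_per_day(queries_per_day: int, proof_cost_usd: float) -> float:
    return queries_per_day * proof_cost_usd

# Option B: rent enough restaked collateral that cheating is unprofitable,
# paying the opportunity cost of that capital instead.
def cryptoeconomic_cost_per_day(security_usd: float, capital_apr: float) -> float:
    return security_usd * capital_apr / 365

queries, proof_cost = 1_000, 0.50     # hypothetical: 1k inferences at $0.50
security, apr = 100_000_000, 0.03     # $100M at-risk capital, 3% APR
print("zk:          ", zk_cost_per_day(queries, proof_cost))               # 500.0
print("crypto-econ: ", round(cryptoeconomic_cost_per_day(security, apr)))  # 8219
# Which side wins depends entirely on proof cost, query volume, and how
# much at-risk capital the application actually needs.
```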

47:06: Anna Rose:

But in moving into the coprocessor space, or having that be an option, is there still a reason to create a coprocessor from the ground up? Like, do they still get some advantage in being able to build it from scratch versus building it on EigenLayer?

47:25: Sreeram Kannan:

Is there an advantage of building a coprocessor from scratch? I mean, the way I think about it is it's not a binary question of if you are building on EigenLayer versus building on your own, because EigenLayer is fully programmable. So whatever you can build on your own, you can also build on EigenLayer.

47:42: Anna Rose:

But does it lose anything by building on EigenLayer?

47:45: Sreeram Kannan:

The constraint that you're suffering...

47:47: Tarun Chitra:

You lose the ability to make a token.

47:50: Sreeram Kannan:

No, no, no. Okay, that's a misconception I want to correct.

47:52: Tarun Chitra:

Yeah, yeah. You lose the need to do it at inception.

47:55: Sreeram Kannan:

Okay, that may be. So this is one of the things that everybody asks me: hey, if you're saying that you don't need a token for your own middleware or service or whatever you're building, then what would people do? Like, where are they going to go? But the first thing to observe is that's already true for being an application on Ethereum. Every dApp on Ethereum also has a token, with the token used for governance or other purposes. In EigenLayer, we also provide native support for something called dual staking. So let's say you have a coprocessor and you have a coprocessor token. You can have both the coprocessor token be staked and the Ethereum token be staked. So you're borrowing a sense of economic security, and you can decide how much of each by controlling how much fee you're willing to share between the two layers.

48:46:

Maybe for bootstrapping, you need a lot of ETH, and over time, you decide it's not that beneficial to your system, so you can tune it toward using more of your own token later on. So this is the kind of coprocessor category. But in general, this question shows up a lot: hey, what happens to my token? And the answer is nothing. Fundamentally, if you look at the economic value of your token, it's coming from the future expected rewards, and if the future expected rewards are maximized by not using ETH security and only using your own token security, you can tune the dual staking all the way to sending all your fees to your own token.

49:28: Anna Rose:

Oh, wow.

49:29: Sreeram Kannan:

So you really don't have any loss. It's just optionality, and you can use the optionality in ways that most benefit your own community.
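The dial Sreeram describes is simple to model. A sketch, with parameter names of our own choosing:

```python
# A service splits its fees between restaked ETH and its own token, and
# can move the split over time without changing anything else.
def split_fees(total_fees: float, share_to_eth: float) -> tuple[float, float]:
    assert 0.0 <= share_to_eth <= 1.0
    return total_fees * share_to_eth, total_fees * (1.0 - share_to_eth)

# Bootstrapping: lean on ETH security, so ETH restakers earn most fees.
print(split_fees(100.0, share_to_eth=0.8))   # -> (80.0, 20.0)
# Later: "tune the dual staking" toward the native token.
print(split_fees(100.0, share_to_eth=0.1))   # -> (10.0, 90.0)
```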

49:37: Anna Rose:

I feel like you already, kind of at the beginning, or like a few minutes ago, you defined restaking really well. But I feel like I still need to go through the motions of what it is.

49:51: Sreeram Kannan:

No, but maybe this will help. I think restaking became a meme and a word, so we stuck with it, but really what we're building is permissionless, programmable staking. So when you're staking in any given blockchain protocol, what is happening is you are making a promise to run that blockchain protocol according to the rules. Otherwise, you're liable to lose your ETH. And each blockchain, when the blockchain is created, specifies these rules, and they're programmed into the rules of the blockchain. But what we figured is it's just stake, and if Ethereum is a Turing complete programming language, I can subject your stake to arbitrary slashing conditions, so that you now create this general purpose layer for programmable staking. So it's permissionless programmable staking, because anybody can come and create new staking conditions by writing new programs. And then you can bind yourself to them.

50:53: Anna Rose:

I thought, though, when I first heard restaking, that it was somehow in the category of liquid staking. But it isn't. It's...

51:00: Sreeram Kannan:

A lot of people think that.

51:01: Anna Rose:

They thought that, yeah. But as a staker, can you just walk through what's happening if you stake? Normally, a staker will be running their validator and staking their ETH to the validator, or they'll be like staking through a pool. In this case, they're staking with a new piece of software. Is it a validator as they know it? This is the part I wanted to understand.

51:24: Sreeram Kannan:

So, we have two roles, a staker and an operator. The operator is the one who's actually running the service.

51:29: Anna Rose:

It's the validator, kind of.

51:30: Sreeram Kannan:

It's the validator. So what the staker does is the staker puts up the money into the contract, and then they specify who their operator is. It could be themselves, or it could be delegated. And why they trust the delegate and so on, it's up to them. And then the delegate, who is now the operator, downloads and runs all these containers that the staker is opting into. Let's say the staker opts into this Oracle and data storage and so on, then the operator has to actually go and download and run all those services. And by downloading and running these services, they earn a certain fee. And the staker keeps a certain fraction of the fee, and the operator gets a certain cut of the fee. So it's very much like staking anything else, except it's an infinitely expansive staking protocol. You stake once, and then you can restake into all these services.

52:28: Anna Rose:

Do you end up at like, do you have to sort of stake more, and then a portion of it is going towards regular staking?

52:34: Sreeram Kannan:

No. You just...

52:35: Anna Rose:

Actually, is a portion of it going towards regular staking on Ethereum?

52:38: Sreeram Kannan:

That is correct. You're subjecting your 32 ETH to multiple conditions.

52:43: Anna Rose:

How do you actually do that under the hood though? Because you still... The operator still needs to have a validator with 32 ETH. It can't change the rules of Ethereum.

52:51: Sreeram Kannan:

That's right. So this was one of the early things we had to figure out, this hack, which is the thing I was explaining about the withdrawal credentials. You stake in Ethereum... Whenever you stake, you have the ability to set the withdrawal credential to your own wallet. But instead, you can set it to a smart contract on Ethereum. And in that smart contract, you set your withdrawal power to yourself. So that smart contract is the EigenLayer smart contract. So you stake in Ethereum and then set the withdrawal address to an EigenLayer smart contract. In the EigenLayer smart contract, you say that, hey, I am the one who has the ability to withdraw this money at the end of the day. And what this does is enables the EigenLayer contract to take away your ETH if you misbehave. So that's what it does.

53:42: Anna Rose:

But still, I guess the thing that isn't solved here is the 32. Does that mean that anyone using the restaking has to put up more than 32 ETH? Or do you pool...

53:52: Sreeram Kannan:

If you're restaking natively, so this is what we call native restaking. But you could take like stETH and put up 3.17 stETH into the EigenLayer contracts and then specify who the operator is.

54:06: Anna Rose:

Okay.

54:07: Tarun Chitra:

is must have been like August:

54:42: Anna Rose:

But I'm just going, so I see this. So using Staked ETH, you're already, let's go through that example. So you're saying either you're going to put up 32 plus something that could be used for the restaking.

54:54: Sreeram Kannan:

There's no plus something. It's just 32.

54:56: Anna Rose:

Just 32. But then... If you just put up 32, doesn't it all go through to the validator?

55:00: Sreeram Kannan:

All goes to the validator. You have to set the withdrawal address where...

55:04: Anna Rose:

Withdrawal, it's the slashing that you lose on. Okay.

55:07: Sreeram Kannan:

That's right.

55:07: Anna Rose:

And the gain.

55:08: Sreeram Kannan:

When you withdraw, you will not be able to withdraw your entire money.

55:11: Anna Rose:

And the yield comes because of those services that are paying down through the system. I get it, okay.

55:16: Sreeram Kannan:

So it's like taking a parlay bet, where you lose your bet if any one of those things goes wrong.
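The parlay intuition can be quantified under a strong, and unrealistic, independence assumption, just to show how the risks compound. The probabilities below are made up:

```python
from math import prod

# If each opted-into service slashes you with probability p_i over some
# period, the same stake backs every promise, so it survives only if none
# of them fire.
slash_probs = [0.01, 0.02, 0.005]          # hypothetical per-service risks
survive = prod(1 - p for p in slash_probs)
print(f"P(no slashing) = {survive:.4f}")   # -> 0.9653
# Each extra service adds fee income but multiplies in another (1 - p_i).
```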

55:21: Anna Rose:

Got it. It's just... Okay, okay. Now that's the native. Now walk me through...

55:26: Sreeram Kannan:

Liquid staking.

55:26: Anna Rose:

The liquid staking token is like...

55:27: Sreeram Kannan:

Liquid staking is one of the easiest ways to participate in EigenLayer. You just take your stETH token or cbETH token, put it into the EigenLayer contracts, and then specify some operator like Coinbase or whatever, and then let them do their thing.

55:40: Anna Rose:

And again, you have a withdraw access so that if you do something bad, you get slashed.

55:46: Sreeram Kannan:

Yeah, it's not only a withdraw access, your stETH is sitting in the contract.

55:49: Anna Rose:

And yet, basically you allow this to still participate in the system because it's Staked ETH. Like it's already connected to Ethereum and staking.

56:00: Sreeram Kannan:

In some sense, EigenLayer contracts are designed to be highly general purpose. I could put in USDC for all I care. So you can stake anything you want as general purpose staking. So there's nothing specific about stETH or LsETH. It's just that these are already reward earning, right? I'm already earning the base layer rewards, and in addition, I'm getting these rewards, making it very easy for people to not worry about the opportunity cost of capital. Whereas if I ask them to put up unencumbered ETH or USD or anything, then we have to worry about, are they making that 5%, 6%? This way, I already got my 4.5%, 5%, so this is something on top of it. So that's the economics that makes it more favorable. It's just in the staker's favor to actually stake one of these other assets rather than native ETH.

57:03:

Another category that we are super excited about is cryptography, which many of your listeners may be super interested in. So all kinds of interesting new cryptographic systems can be built on EigenLayer. For example, say you want to build a system using secure multi-party computation. There's no way for you to build that either as a rollup or as a native smart contract on Ethereum, because you are specifying what kinds of computation each node should do and what specific information each of those nodes holds. For example, imagine a system like Penumbra, which has state that is dispersed across these different nodes, and you want to build applications on top of this. You can build this on EigenLayer because you have a decentralized network that you can borrow. So you can build something like Penumbra as an AVS on top of EigenLayer. You can build threshold encryption. So let's say you want to build an encrypted mempool, where you send transactions and the transactions are all encrypted to a threshold key, which is held by the threshold group.

58:13: Anna Rose:

Yeah, which would prevent sandwich attacks.

58:15: Sreeram Kannan:

on development worked back in:

59:01: Tarun Chitra:

That was just proof that you're from Seattle.

59:06: Sreeram Kannan:

Instead, what's happening in:

59:51:

And on top of the cloud, there are thousands of successful software-as-a-service solutions. And each of them is hyper-specialized in some particular domain, saying, hey, I'm an authorization service for social networks, like OAuth. I'm a NoSQL database for enterprise applications. Very, very specific in the types of use cases being dealt with. But these are still more foundational pieces, and what happens is consumer applications integrate a bunch of these SaaS services in the backend and then create an end user application. A typical end user application on the web uses 15 SaaS services in the backend. So when people come and tell us, oh, if you have a modular blockchain stack, you have to pay a fee on each of these layers, well, that's exactly how the internet works, if you haven't noticed.

:: Anna Rose:

Wow. So you're bringing SaaS to blockchain.

:: Sreeram Kannan:

Yes. Unleashing the SaaS era of blockchain. Because SaaS is actually open innovation. So our core thesis is open innovation: somebody who's super specialized in building, let's say, FHE for some particular application should just build that, and an end user application should consume these services as needed, and they all interoperate through a shared security layer, which is EigenLayer. So that's our vision. And what we envision is people building on top of us building these protocols. You can think of it as, instead of SaaS, protocol-as-a-service. Anybody can build an arbitrary protocol and launch it as a service. And now you can concatenate these protocols-as-a-service and then build and use applications. So that's the vision that we're building.

:: Tarun Chitra:

One very big financial difference between this and SaaS, though, is that you have dynamic pricing at all times, whereas SaaS pricing is oftentimes fixed. Obviously there are preemptible nodes in the cloud, but you need to get to a certain scale for dynamic pricing to work, whereas here you have dynamic pricing from the beginning. And that's a totally different economic world.

:: Sreeram Kannan:

Completely agree, and I don't like that. We're trying to change it. So for example, in EigenDA, you have static reserved pricing. When you think about something like AWS, people ask us, oh, how much block space do you have? But nobody asks AWS how much cloud space they have. The cloud space expands to fill the requisite demand. And the reason it does that, and does it in a very smooth manner, is that 70% of the instances running on AWS are actually reserved instances. There are also spot instances, where you can go and ask right away, give me something, which is dynamic pricing. But the reserved instances give you long-term coherence, which gives you price certainty, completely different economics. EigenDA is built on this dual model, where you have a spot market where you can go and buy bandwidth in the moment, but as a rollup, I know I need 100 kilobytes per second over the next one year, so I prepay that fee with a very good discount, and then I have access to it.
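
A small numerical sketch of that dual model, with made-up prices, assuming a discounted prepaid reserved rate plus a fluctuating spot rate for bursts:

```python
# Dual pricing sketch for a DA service: prepaid reserved bandwidth for
# predictable demand plus a spot market for bursts. All prices and the
# discount are made-up illustration values, not EigenDA's actual fees.

RESERVED_PRICE = 0.7                 # per MB, prepaid for the year
SPOT_PRICES = [1.0, 0.4, 2.5, 0.9]   # fluctuating per-MB spot quotes

def rollup_cost(reserved_mb: float, burst_mb: list[float]) -> float:
    """Reserved bandwidth at a fixed rate; bursts bought on the spot."""
    cost = reserved_mb * RESERVED_PRICE
    cost += sum(mb * price for mb, price in zip(burst_mb, SPOT_PRICES))
    return cost

# A rollup that prepaid 100 MB and occasionally bursts a few MB more
# keeps a near-constant bill even while the spot price swings 6x.
print(rollup_cost(100, [2, 0, 5, 1]))   # 85.4
```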

::

This reminds me of a very old project. I don't know if this ever turned into the gas token. Do you remember this?

::

It kind of died because of EIP-...

::

Yeah, I think the difference between things like the gas token and this is that the gas token is like a spot future, where you're actually trading what the spot price will be. Instead, reserved bandwidth is like contract pricing. You have a contract with your oil supplier for the next one year to supply a barrel of oil at this price. So contract pricing is happening directly between the seller and the buyer, and it's actually much more rigid in the types of guarantees that you can provide.

:: Tarun Chitra:

Yeah, although I would guarantee that if you went and interviewed a bunch of Web2 companies and asked, hey, would you be willing to pay Cloudflare on dynamic pricing, where most of the time you're actually going to pay 10 times less than what you're paying now, but sometimes you're going to pay 10 times more, I bet you most people would find that appealing. But if you think about how SaaS payment processing works and how there are not really notions of streaming payments, you effectively have this huge overhead for startups. Like, Stripe doesn't offer you dynamic pricing, right? You have to get to the scale of Uber or AWS to have dynamic pricing. So there's also this thing, though, that I think to me has always been the beauty of crypto: you can be arbitrarily small in size but have dynamic pricing if you need it.

:: Sreeram Kannan:

Yeah, so that's a really, really good point. If we only had reserved instances, then small rollups would not be able to use EigenLayer. So the availability of these multiple pricing models is useful. But the cost certainty that reserved bandwidth gives you also matters. Imagine you're a rollup and you have uncertain data availability costs over the next one year, but you have to go tell your users, say at Coinbase, how much it's going to cost on a daily basis. How do you do that? It becomes impossible. For example, Ray Dalio helped McDonald's hedge soy futures and so on, so that their burgers could actually have a constant price over longer time scales.

:: Anna Rose:

No way.

:: Sreeram Kannan:

And so these are mechanisms that are needed to actually build pretty rigid markets.

:: Tarun Chitra:

Yeah, I just think the difference is most non-fully-native payment mechanisms, which is almost everything, right? Only at really high scale can you do these kinds of streaming, crazy things. It's always been quite hard to get access to that. And I think to me, the beauty of crypto is that you can have that if you want it. Because, obviously, people love static pricing, right? It's nice. But if I can really offer you way more efficiency on average, there are a lot of people who would love to reduce their Cloudflare bill, for instance, if it was dynamic. But it's just a pain in the ass for them to do it, because they're not even their own payment provider. They need their payment provider to offer it. And I think that somehow this notion of services that start from being programmable in how they take payment... there's a fundamental difference between this type of stuff and SaaS. So hopefully we don't have to use the word middleware again.

:: Anna Rose:

All right. Well, I think we've reached the end of the episode. I know I've now reached the end of my time.

::

It's late night here.

:: Anna Rose:

Devconnect. Yeah, it is late here. Thank you so much for coming back on, Sreeram.

:: Sreeram Kannan:

Thank you so much, Anna. Thank you, Tarun. This was super fun, chatting about all these things with you, and I'm looking forward to hanging out with you all at other events.

:: Anna Rose:

Thanks, Sreeram. I want to say thank you to the podcast team, Henrik, Rachel, and Tanya, and to our listeners, thanks for listening.

02:49:

I know the team was so excited about it. We already started planning our next IRL hackathon. We're aiming for May, June, probably summer in Europe. But even before that, we're going to be hosting our next online ZK Hack. So this is us returning to the puzzle hacking competition and multi-week workshops. And we're planning this for mid-January. So keep an eye on the ZK Hack Discord, ZK Hack Twitter for more info. Now, Tanya will share a little bit about this week's sponsor.

03:17: Tanya:

Aleo is a new layer 1 blockchain that achieves the programmability of Ethereum, the privacy of Zcash, and the scalability of a rollup. Driven by a mission of a truly secure internet, Aleo has interwoven ZK proofs into every facet of their stack, resulting in a vertically integrated layer 1 blockchain that's unparalleled in its approach. Aleo is ZK by design. Dive into their programming language, Leo, and see what permissionless development looks like, offering boundless opportunities for developers and innovators to build ZK apps. As Aleo is gearing up for their main net launch in Q4, this is an invitation to be part of a transformational ZK journey. Dive deeper and discover more about Aleo at aleo.org. So thanks again, Aleo. And now here's our episode.

04:06: Anna Rose:

I think we ask this every time we do these live event ones, but what were the themes that you've been hearing? And it doesn't have to be talks. This is maybe what people are talking about. I mean, you just mentioned DA, data availability, maybe in different forms, or yeah, do we need it in the way it's been proposed or not? Maybe let's start with DA, expand a little on what's been discussed, and then we'll talk topics.

04:29: Tarun Chitra:

I think most of the stuff about DA has just been focused on lowering fees for use cases that are not purely financial or like DeFi types of use cases. And I think because Celestia is live, it's now like, it doesn't feel like the only DA is ETH L1, it feels like there's like choice in the marketplace. And I think Sreeram is probably the best to talk through the pros and cons of each of the different models. But I think a lot of it is just focused on reducing cost. But I do feel like there's still, the reason I would say we aren't fully out of the disillusionment side of the distribution, is there just hasn't... It feels like everyone I talked to is basically saying when application, when new application, right? Like there's tons of new infrastructure, but there haven't been too many new applications... Like there's no... The Uniswap moment of this cycle hasn't happened yet.

05:20: Anna Rose:

Let's talk DA for a second though you just said Sreeram is like a good person to break this down.

05:25: Sreeram Kannan:

Yeah.

05:26: Anna Rose:

Can you?

05:26: Sreeram Kannan:

Yeah, sure. I do think, maybe it's the sample of people that I interact with but it seemed like a lot of people are excited about DA at this time. It's really credit to the Celestia people for actually making this happen, because it's such a boring concept.

05:44: Tarun Chitra:

Maybe define it for us.

05:45: Sreeram Kannan:

Okay, so to define it, the idea is when you have the... One of the ways to scale a blockchain, and this is particularly the roadmap that Ethereum committed to, is to use rollups. And what are rollups? Rollups basically offload computation and then create proofs that the computations were done correctly. When you do this, one of the important things you have to do is to publish either the inputs or the outputs to the computation. This data has to be published somewhere. Why does this need to be published? Because if this data is withheld and not made available by the rollup operator, what might happen is funds could get stolen. Because if you don't know the data, then you can't compute what the rollup did. Or if, in case of a zk-rollup, what might happen is, even though you don't need to double check the computation, somebody needs to know the state in order to continue the computation thereafter. So a role of a data availability layer, I think some people are saying maybe it should be called a data publishing layer, a layer which ensures that when data is published, everybody has access to this data. So that's what a data availability layer is, even though it's a kind of critical piece in the blockchain stack, the identification that this could be kind of decoupled and scaled separately is one of the key insights in the Ethereum roadmap, as well as what Celestia took and built around.

07:17: Tarun Chitra:

So one thing, I think, to a person who's never heard of DA before that oftentimes gets used as an analogy, a very imperfect one, but one that's worth maybe going through is a comparison of like dialup or T1 to broadband, right? In terms of like single chain dialup versus broadband, like many asynchronous kind of connections, capturing data. How do you think about that analogy, especially when it comes to this idea that in blockchains, it's not just about how much data you get, it's also execution environment, there's all these other overheads. Yeah, how do you kind of think about that?

07:52: Sreeram Kannan:

Yeah, so one of the things that the modular landscape does is to decouple things like computation throughput from data throughput. So you can execute the... You can separate the quality of data availability in a consensus protocol by just looking purely at how many bytes per second can it transport. And then on top of it, you're slapping a VM on top of it. And then the VM translates that, oh, I can take 200 bytes per transaction, and each transaction on average takes this much computation and you can translate that into a throughput or a transactions per second for a certain kind of transactions. So the fundamental performance metric of DA layers therefore bytes per second. So the comparison with something like a dialup versus a broadband is quite appropriate.

08:42:

But what's actually happening under the hood is the following thing, which is that in most of the blockchains today, you have highly redundant transmission of the data. Every node downloads and stores exactly the same data. Therefore, the entire system's performance is bottlenecked by any one node's bandwidth availability. Whereas scaling data availability fundamentally pertains to the idea that everybody doesn't have to download all the data, everybody downloads only a portion of the data using things like erasure codes and KZG polynomial commitments, even though some fraction of nodes go offline or are malicious, you can still reconstruct every bit of the data from a small fraction of the nodes.

09:27: Tarun Chitra:

Okay, let's explain it to a 15-year-old or high schooler for data availability sampling and also the commitments. Because I think understanding the trade-offs and just the process in which this is done is important before we talk about the security guarantees.

09:43: Sreeram Kannan:

Yeah. Imagine you have a bunch of nodes, and you want to store some data on those nodes, and you don't want everybody to store all the data. One way you do it is you take the data and then split it up into chunks and then say that data item one, I only store on a few nodes, and data item two, I only store it on a few nodes, and so on. If you did this, what would happen is if the nodes that just stored the data item one, if they go offline, then you lose that portion of the data completely. So in simple schemes like this, where you are redundantly storing data, but only on a small subset of nodes, you have this problem that scalability and security are at odds. If you want more scalability, you will say that only a few nodes store the data, each data item, but that means that if those nodes go offline, you lose that data item. So what erasure codes do is instead, they allow you to mix the data item in complex configurations.

10:48:

Imagine X1, X2, X3 are different data items. You send X1 + X2 + X3 to one guy, X1 - X2 + X3 to one guy, X1 + X2 - X3 to another guy, and so on. So that any three nodes, you get three linear combinations, and you can actually find out what is X1, X2, and X3. So that's the basic idea of erasure coding and data availability scaling. I'm going to separate data availability scaling from data availability sampling. The data availability scaling basically means everybody doesn't download all the data, but together, the system is still secure and has all the data, even if some of the nodes or a good fraction of the nodes go offline. What is data availability sampling is, imagine you have a network that is running, and I'm sitting outside the network. And I want to know whether this network's doing its job of actually storing and downloading the data. So how can I verify that?

11:45:

A normal method to verify that would be to say, go and download all the data, and then you will know whether the data item's available. Data availability sampling is this very nice idea that instead of actually downloading all the data, you decide which random samples to query for. And then you say, oh, give me sample 30, give me sample 35, give me sample 50. And if you get all of them, you're like, okay, it seems like it's highly likely that all the samples must be available, because the guy is giving me whatever I asked for. So that's data availability sampling. Data availability sampling is a mechanism to scale verifiability. It is not a mechanism to scale the consensus bandwidth in the network. So you can think of these two things, data availability scaling, which is how do I make sure that no node in the network actually downloads all the data, versus data availability sampling, which is a verifiability scaling method.

12:39: Anna Rose:

When you say both of these, and it might be very off, but it sort of reminds me a little bit about MP3s and the sort of curves. Is there any connection between erasure coding or the sampling and that kind of technology?

12:53: Sreeram Kannan:

Yeah, I mean, definitely. So MP3 is compression technology, but if you look at most of the things like CD-ROMs or where there is a chance that, for example, you may scratch your CD, so data has to be stored in such a way that it's resilient to a certain number of erasures. And that's exactly the same technology that is used in data availability sampling.

13:20: Anna Rose:

Nice. Just sort of bringing it back to the event and the week. Are there any other topics other than DA that you thought were discussed a lot where you were?

13:30: Tarun Chitra:

Restaking, which is what we're going to talk about more in a bit. I may or may not have been also one of the proponents mentioning it a lot. So...

13:40: Anna Rose:

So it's funny, someone said to me, everyone is talking about Intents. But I personally have not actually heard much about it. I heard one project that was doing some ZK Intents project that talked to me, but otherwise I hadn't heard very much.

13:54: Tarun Chitra:

transaction supply chain from:

14:57:

On the other hand, this thing was a combinatorial auction, too hard to do, not centralized. And effectively, what you see is you see a bit of an aggregation event there. So we went from totally disaggregated to somewhat aggregated. Then we went in last year at the merge to proposer-builder separation, where instead of bidding on sequences of transactions, you bid on the entire block. This kind of moving from very disaggregated to aggregated is realistically a way for making Ethereum sustainable. It increased the net revenue to proposers as you did this aggregation process and it took more of that revenue from MEV searchers and people of that form and gave it to proposers. On the other hand now, the users want to rebel against that sort of monopoly power which came from that. And you can argue that we're now about to enter the disaggregation era of Ethereum and Intents and RFQ systems are a way of disaggregating the value extracted by the proposer and returning it partially to the user.

16:01:

And so this sort of aggregation-disaggregation process, you know, when I was five, my mom once told me this very wise coen, which is; capitalism is just bundling and unbundling repeated ad infinitum. And at some level, I think this entire, what we're seeing in the transaction coalescence world is that, and the Intents model is unbundling the MEV in a sort of more peer-to-peer fashion versus kind of giving it out to our proposer. So I think that's the reason people like it. It feels like it's the thing that has the highest growth rate or highest sort of derivative, like people think it's making the most progress. If you look at UniswapX, it's been taking in a lot of Uniswap's volume. On the other hand, it's sort of this thing where it's like a nebulous concept. And of course, disclaimer, I'm spending a lot of time trying to write some research on this because it feels like this type of thing where everyone... It's like there's the pawn, you know it when you see it, Supreme Court aspect to it. Like everyone is like, this thing is an intent, this thing is not an intent.

16:58:

Yet there is no definitions. And so I think trying to make sense of that is like, what I would say is one thing people are doing, people are making all these software frameworks for writing these types of things that effectively, in my mind, are ways of avoiding paying the proposers.

17:13: Anna Rose:

Wow.

17:13: Tarun Chitra:

Sorry for the rant.

17:14: Anna Rose:

but they launched in February:

18:09: Tarun Chitra:

You waded almost into one of the controversies on my zkEVM panel, which was the first question asked, which was who's the first zkEVM? And of course, one of the answers was, oh, we're all in it together. We're all first. And then immediately after that, someone was like, no, no, no, we were first.

18:27: Anna Rose:

Nice. I do want to mention one more, which is on the ZK front, which was this idea of on the ZK front, if we look at the difference between last year and this, it's the fact that there are now maybe still in testnet, but there are now environments, sandboxes, testnets, frameworks, where people can start to build and deploy ZK stuff a lot faster. I think that's also why in April of this year when we did ZK Hack Lisbon, people were able to build anything. I mean, otherwise, they're just doing cryptography implementation, which is very, very challenging. Only a limited number of people can do it. But I think it's been opening, like the space has been opening up to more noobs, kind of... Not first-time hackers, maybe, like experienced developers, but first-time building in ZK. And then this time around, we saw even more of it. And there is even more tools around ZK and how to deploy them faster. And I don't know if we're at the point where there's a lot of debuggers built into systems. I know these languages and these frameworks are still super, super young, but yeah, that's something that I noticed, especially at the hackathon but also kind of all week. Oh, I think last time in Paris, I also asked, what was the best swag that you saw this week?

19:45: Tarun Chitra:

Speaking of Intents, there is a team called Essential, and they give out Essential Oils. And me, as someone who has a house filled with 500 candles and incense burning all the time, I appreciate a scent-based swag much more than a black t-shirt with a logo.

20:04: Anna Rose:

Very nice. All right, I think we could shift the conversation now a little bit over to EigenLayer. I feel we've set the scene in describing DA. I think it's funny because I would often kind of put EigenLayer in the DA camp. Like that somehow it competed with Celestia as it was being proposed as just like a DA layer. But is that wrong to put it...

20:30: Sreeram Kannan:

No, it's not wrong.

20:31: Anna Rose:

Okay, it's correct. And yet, that's a problem... The minute I look at the system, I'm like, is it DA?

20:37: Sreeram Kannan:

It's because we're building two things. We're building EigenLayer, which is a general purpose mechanism for sharing decentralized trust. So you can take the staking and the node operators and the economics underneath the Ethereum network, and EigenLayer lets you share that with anybody who wants to consume it.

20:56: Anna Rose:

To share it.

20:57: Sreeram Kannan:

To share it.

20:58: Anna Rose:

Okay.

20:58: Sreeram Kannan:

Imagine you want to build an Oracle or a data storage network or a new AI inference network or a decentralized prover network for a ZK. Any of these things, you need many nodes, and they need to put in some stake, and then they need to participate in active validation of a certain service. So EigenLayer is a generalized mechanism for anybody to build arbitrary distributed systems on top of the Ethereum trust network. So that's EigenLayer.

21:28: Anna Rose:

Okay. So that's not DA.

21:29: Sreeram Kannan:

That is not at all DA. One of the modules, so we call these AVSs, Actively Validated Services. And anybody can build an AVS. To demonstrate the power of the platform, we built the first actively validated service ourselves. And that is called EigenDA, which is a data availability service.

21:48: Anna Rose:

I see. But it's still kind of in the Ethereum camp, right? Does it model itself as a Celestia-like hub that rollups are supposed to link into, or is it doing a different kind of DA?

22:00: Sreeram Kannan:

Yeah, that's a great question. The way we modeled our data availability system is as an adjunct to the Ethereum blockchain. So you have on the Ethereum blockchain, let's say you're running your own ZK rollup, and you want to post your data somewhere, and Ethereum doesn't have enough bandwidth, or it's too expensive, whatever, and you can post it on EigenDA. EigenDA is not a standalone blockchain, unlike Celestia or Avail or other things. And it was designed from first principles to be purely an adjacent to Ethereum. What this does, we surprisingly found, is it liberates a lot of the trade-offs that exist in building a data availability. Because usually when you're... The other blockchains that are serving to be data availability systems also build an ordering service. And basically, it's a blockchain because there is an inherent ordering of the data blobs that have been posted into that system.

23:01:

What we realized is already rollups have an ordering service. They're relying on Ethereum in the world that we are living in. So what we do is just provide a data attestation service where you write the data to the system, the system gives you a thumbs up saying that the data has been stored and custodied, and then that aggregate signature of the commitment is then posted onto Ethereum. And Ethereum itself has an ordering layer, so you have an implicit ordering of all the data blobs through Ethereum by decoupling consensus and data availability. So we are even more modular than these other solutions. And this comes with benefits and trade-offs. And the benefits are very clear when you are a rollup, which is natively on Ethereum. But these other systems offer you mechanisms where you don't have to be on Ethereum for anything. And then you can be natively on Celestia or Avail or and so on.

24:09: Anna Rose:

So in the Celestia model, though, they do have consensus and data availability.

24:13: Sreeram Kannan:

That is correct.

24:13: Anna Rose:

But they don't have a settlement layer.

24:15: Sreeram Kannan:

Yeah. That's right.

24:16: Anna Rose:

In your case, it's just the data availability.

24:18: Sreeram Kannan:

It's purely just data availability.

24:20: Anna Rose:

Okay. And just one thing, because they had this other project, I think it's Blobstream. Am I saying that right?

24:25: Sreeram Kannan:

Blobstream, yes. From Succinct.

24:25: Anna Rose:

Is that similar?

24:28: Sreeram Kannan:

That's not similar. So what that does is that bridges this information from the Celestia blockchain. So let's say you are an Ethereum rollup, and you still want to consume the Celestia Blob space. So you're a rollup, you go and write your Blob into the Celestia blockchain. But your rollup contract is sitting on Ethereum. So now you need some kind of a bridge which tells you that what has happened in the Celestia universe, and then that information is bridged into Ethereum. That's what Blobstream is.

24:57: Anna Rose:

Got it. Okay.

24:58: Sreeram Kannan:

So one of the key things that these other data availability systems like Avail and Celestia were built on is data availability sampling. And like I was saying, data availability sampling is a mechanism to verify from a third-party point of view that the data is available. And this is really useful and gives you very high trust guarantees on the system when you're natively on Celestia. Because even if all the Celestia validators collude and try to make... Sign on a data item for which they did not publish the data, if you run a light node, you will try to sample the data chunks and you find out, hey, the data chunks... This blocks data chunks are not available. Even though a majority of the validators signed off on the block, you will not accept it, and you'll say, reject this block, because I'm unable to access the data items inherent in it.

25:55:

And if everybody does the same thing, then the blockchain will stall, and you can fork the chain and then retrieve it to a correct state. So this is a superpower that is possible on blockchains that implement data availability sampling. But when this state is bridged into Ethereum, because if I'm a rollup on Ethereum, or an Ethereum smart contract does not have the ability to do data availability sampling. So what that means is essentially, you have to trust this majority of validators from the other network, from the Celestia network or Avail network, and the rollup contract... If the majority of these nodes are malicious, rollup contract still makes progress, and your money is stuck in the rollup. So this can definitely happen. So the benefit of sampling is not prominent, or I would say non-existent, when you are an Ethereum-adjacent layer. And so we built our system around data availability scaling instead of data availability sampling. What these other systems did is they also made trade-offs where even though the system has data availability sampling, there is no scaling of data availability. What it means is every consensus node in Celestia downloads the entire block. And there are lots of technical reasons for this, but that's the architecture. And whereas it is highly scalable for verification, it is not scalable to be a consensus node.

27:29: Tarun Chitra:

never you told me about this,:

28:06: Sreeram Kannan:

Hey, number going up.

28:07: Tarun Chitra:

Great, for sure. But I mean, you have to be able to bootstrap such a network, right? You couldn't just start Telestia tomorrow as a fork and hope that it's secure enough, right? It's actually quite hard to do that. And it's an accomplishment to have gotten to that market cap. Right?

28:25: Sreeram Kannan:

Yeah, absolutely.

28:26: Tarun Chitra:

So first off, but I think the interesting thing about EigenLayer and restaking in general is you don't have to bootstrap a token. You get to use Ethereum's market cap. And the way you're doing it is you're opting into extra slashing rules and getting potential fees but also potential extra slashing. But that allows you to piggyback off of Ethereum. So maybe walk us through the process and why you're able to build DA in this way that doesn't rely on a new consensus.

28:57: Sreeram Kannan:

The process for how a staker or operator ops into EigenLayer, there are two mechanisms. One is called native restaking. You stake in Ethereum natively. And when you stake in Ethereum natively, you have to set who's the withdrawal address. Usually you'll sell it to your own hardware wallet or wherever is the safest place you have, because when you withdraw, that's where the funds go to. Instead, when you opt into EigenLayer, what you do is to add a step in the withdrawal flow. You say, set the withdrawal address to a contract that you create in the EigenLayer system called an EigenPod, which is your own little zone in the EigenLayer universe. And in the EigenPod contract, you then set your withdrawal address to your hardware wallet. So when you trigger withdrawal from Ethereum, the funds go into the EigenLayer contract. And if you didn't do any kind of malicious activity and were subject to EigenLayer slashing, you will be able to then withdraw the funds into your wallet.

29:57: Tarun Chitra:

Plus other fees you may have earned for...

29:59: Sreeram Kannan:

Yes. So you will be able to download your fees in the normal mode every few weeks or whatever from the EigenLayer protocol. That's why you're doing all these things. Why take the trouble of opting into other things and risks is because you are actually earning something for delivering these services. So this is the flow for a native restaking. Now that you do this, you become a native restaker on EigenLayer. And you can go into the EigenLayer contracts, and because you have the EigenPod, you can specify what services you want to opt in and operate yourself. Maybe these services are like, I want to run EigenDA, I want to run an Oracle service, I want to run a bridging service, I want to run an intent-based architecture. Whatever is the set of services that you are opting in to run, you can decide. It's a purely opt-in system for all the sites.

30:55:

So you opt in and say, hey, I'm doing this. And then you have to say, who's the operator who's going to run the service? You could say yourself and say that, yeah, I download and run these softwares. And each of these softwares now are arbitrary software. They are not confined to the EVM or anything like that. It's just a binary or a docker container that you can download and run on any computer. So you can start building general purpose services which have nothing to do with the EVM. So let me explain. So now I mentioned the staker and the operator side. Then there is a service, somebody who's building these new services, they build two distinct things. One is they build a service contract which sits on Ethereum and then talks to the EigenLayer contracts. The service contract does minimal overheads and coordination. What are the coordination things?

31:50:

Number one, who can register into your system? Do they need 32 ETH? Maybe they only need 3 ETH because you're a different system, whatever. So that's number one, registration conditions. Number two, what is the payment conditions? Or if you opt into my data storage service, you store one gigabyte of data, you will get one ETH, whatever, some kind of a payment condition. Number three, slashing condition. If you say that you're storing data, and then I randomly recall you to produce the data, and then you don't do it, you will lose your ETH. Something like that is the slashing condition. So this specifies a service from the service side. So EigenLayer is the coordination mechanism which helps stakers find operators, find services, and then these three together then create a service economy where these services are then offered to consumers. Maybe it's a DeFi app which is using an Oracle service or an Intent service or a bridge or a DA.

32:50:

So that's the overall architecture. I mentioned there are two ways of staking. One is native restaking, which you had to do this withdrawal credential thing. There's also liquid restaking. You can take an LST like the Coinbase LST or the Lido LST or the Rocket Pool LST, and then put it into the EigenLayer contract. It's just like any token that you deposit, now you have a status inside the EigenLayer contracts that lets you participate in this economy.

33:18: Anna Rose:

I mean, you sort of mentioned it as this underlying general purpose, like it's not framework, but it's like a space where you can deploy things. You mentioned the DA level, the DA layer as one of those applications, is the restaking an application on top of it as well?

33:37: Sreeram Kannan:

Restaking is what powers the EigenLayer.

33:42: Anna Rose:

Is EigenLayer DA just kind of like, you want it to show what is possible, or is that meant to be an actual product that's being used?

33:51: Sreeram Kannan:

EigenDA is a product.

33:53: Anna Rose:

Okay.

33:53: Sreeram Kannan:

It was designed not only to be a proof of concept, but also to be a proof of value, which means it's something that is valuable and useful and delivers fees. Because when you have this complex, multi-sided marketplace, you have stakers, operators, services, service consumers, like there are four sides at the minimum in this marketplace, you want to, it's very difficult to bootstrap it. One way to do it is to actually build a powerful service, which is fee earning, so that stakers actually have something to get.

34:25: Anna Rose:

I see. Okay.

34:27: Sreeram Kannan:

So it's not meant purely as a proof of concept, it is a proof of value.

34:32: Anna Rose:

Will you be doing other things like that?

34:34: Sreeram Kannan:

Will we be building other services? We are not intending to be building the other services. We are intending for other people to be building all these services. You briefly asked what types of other services, maybe I can go into that a little bit.

34:46: Anna Rose:

Yeah, yeah. That's kind of what I wanted to find out. First, I wanted to understand what is EigenDA. Because maybe... I mean, my initial thought was, oh, maybe you will be building three of these, and they're all sort of feeding into the system. But what I hear is you're building one to create value, so that it's actually like paying kind of through the system to show its proof. But the other two, would you just put out proposals? Or would you be like, oh, this is what you could build?

35:11: Tarun Chitra:

So anyone can build this. And I think an interesting kind of overheard, which may or may not have had some input from me into is, all L1s either die a hero or live to turn into an L2 using restaking or their own DA layer.

35:35: Sreeram Kannan:

That is an interesting take.

35:36: Tarun Chitra:

But the reason I mentioned that is, as Sreeram can tell you about, there is an L1 who's moving to being an L2 using restaking.

35:43: Anna Rose:

Oh, yeah, yeah. And that's the Celo thing, right?

35:47: Sreeram Kannan:

Celo is actually working with us and using EigenDA to become a rollup on Ethereum. The total data throughput that Celo is operating at was, I think, even greater than the total throughput of Ethereum. So there was no chance for them to be a native rollup posting data to Ethereum, not only for the low fees. Celo has lots of users in Latin America and so on and like very low fees. So there's no chance for them to actually be a native rollup. Whereas, EigenDA has extremely good cost economics, and that enabled them to become part of the Ethereum ecosystem. We are also seeing another trend. For example, NEAR, which is an L1, is working with us to build lots of services for the rollup ecosystem. So for example, rollups need sequencers, and NEAR Protocol already has this consensus protocol and everything already running. So you could actually have the NEAR Blockchain work with Ethereum to provide some of these other services as an adjacent to the Ethereum blockchain.

36:55: Anna Rose:

Can you describe, let's kind of going back to that initial question, what are the services?

36:59: Sreeram Kannan:

What are the categories? So we're seeing actually, it has been amazing to see from, our own vision has been open innovation. So we want to maximize the surface area of permissionless innovation. That's really what motivates us in this project. But when we started, we had a couple of examples of what might be possible as EigenLayer services. And today, in the Restaking Summit, I just gave a talk where I showed 25 new services in five categories.

37:29: Anna Rose:

Oh, neat.

37:29: Sreeram Kannan:

And I'll maybe give a sense of these categories and what some of the most exciting things are.

37:33: Anna Rose:

Are you publishing that somewhere? Because maybe we can add that to the show.

37:37: Tarun Chitra:

Do you record it all the topics?

37:39: Sreeram Kannan:

Yeah, we will have a video.

37:40: Anna Rose:

Nice, nice.

37:41: Sreeram Kannan:

So I can give a link to that or just send a picture. For example, one category which is very obvious and where we see immediate traction is rollup services. So rollups need lots of adjacent services in order to make the rollup economy work. One example I was just mentioning is sequencing. How do you... You know, a single sequencer is like a censorship bottleneck in the rollup system. Do you want to have a small group of decentralized sequencers or a large group of nodes which participate in ordering transactions?

38:15: Anna Rose:

So is that decentralized sequencers?

38:17: Sreeram Kannan:

That is decentralized sequencers.

38:17: Anna Rose:

That's not the shared sequencers.

38:20: Tarun Chitra:

It could be.

38:21: Sreeram Kannan:

It could be a shared sequencer or a non-shared sequencer. But both of them need decentralization, and so all of them can use EigenLayer to actually build these kinds of... We're seeing many different models of decentralized sequencing being built, but Espresso, which is a leading shared sequencer is also working with us in sharing security from Ethereum in addition to their own native token.

38:46: Tarun Chitra:

So as a disclosure, EigenLayer investor, as also Anna is, I think the zeroth order model I had in my head for how this network accrues value despite not having its own token and layer 1 is effectively this idea that if you think about all the rollups in the world, they're eventually going to have to have decentralized sequencers. Those fees have to go back to Ethereum somehow, right? Ethereum's going to be losing a ton of fee revenue as more and more value migrates away. And the main way of, unfortunately, this word is overused, aligning the rollup fees with the proposer incentives and the L1 is to actually have a way for the L1 proposer to also earn the rollup fees. And the natural way to do it is via something like EigenLayer. Because if I use restaking, I'm reusing ETH, I'm earning fees in ETH, I'm in some ways giving the rollups some of my ETH in exchange for some other fees.

39:46:

One interesting thing that I've been thinking about a lot is if you look at this model of EigenLayer for restaking the different rollups plus DAS and ETH just being the place where data is posted, some data is posted, or maybe proofs of validity are posted, you really do start looking like Polkadot without the auctions.

40:06: Anna Rose:

Yeah, actually, Tarun, you kind of drew this out. You said there's like these three pieces that basically make you...

40:13: Tarun Chitra:

It really looks like Polkadot except for the auctions. I think the auctions were just very expensive for the parachains, here this is much more economical.

40:20: Sreeram Kannan:

Yeah. So the comparison to Polkadot is actually accurate in one sense, which is that basically, parachains gave a certain level of programmability while also maintaining shared security. But there was a certain amount of homogeneity which was needed. For example, they all had to be in WASM, and you have to write your virtual machine on top of that. And not only that, you only share security in the Polkadot model for the execution. For example, let's say you want to build a secret sharing service where you take a secret and then encode it into chunks and then send each node a portion of the secret. You cannot really do this in the shared security model of Polkadot. So the way we think about it is, in the history of blockchain, Ethereum was created as this Turing complete general purpose programming language, but it gave you only a programming interface at the level of the virtual machine, and then all the coordination about how the distributed system is managed, how the consensus is managed, and all of it was internalized into the protocol.

41:30:

And what we started, as we were thinking about new ideas for consensus and scaling and so on, what we found is this limited the level of permissionless innovation that could penetrate into these areas. And so if I had a new idea... as an academic we had tens of papers on consensus protocols, and we talked about many of them in the last ZK podcast. And if I were to go build a new blockchain for each new consensus protocol, that would just be a completely non-viable way to do things. Whereas what EigenLayer does is give you the first general purpose programmable distributed trust system. So you can say what each of the nodes in the system have to run, you have complete programmability at the level of the distributed system. So you can start building basically anything that requires decentralized trust.

42:21: Tarun Chitra:

Yeah, for the record, I wasn't saying it's exactly Polkadot, it was just more, it's funny, because I feel like Ethereum was always like, we're never going to have Fishermen and do... And it's sort of indirectly...

42:33: Sreeram Kannan:

All of these things are happening, actually, layer by layer.

42:37: Anna Rose:

Everything converges to Polkadot.

42:39: Tarun Chitra:

I feel like Ethereum is very good at taking the good ideas from different places and then smooshing them together?

42:47: Sreeram Kannan:

You know, I think there's also something similar to be said for Cosmos and how a lot of the ideas from Cosmos also percolated back to Ethereum. There's the idea of interchain security and how EigenLayer is related to that. Yeah, so for sure. Going back to this...

43:05: Anna Rose:

The services; I want to hear more exactly.

43:07: Sreeram Kannan:

Going back to the set of applications, rollup services, some of the examples are, I mentioned decentralized sequencers, bridges, for example, you want to build a super fast bridge between two ZK rollups. Each of the ZK rollups only settling on Ethereum every few hours because of the batching efficiencies, but I still want to interoperate between them at a much faster pace. Can I build some kind of a restake service which knows the state from the other thing and then puts an economic collateral at risk and then starts helping you bridge between rollups? That's an example. You can start thinking about...

43:40: Anna Rose:

Well, could you do something like coprocessors, that sort of coprocessor model?

43:45: Sreeram Kannan:

Well, that's the next category. You're right on target.

43:48: Anna Rose:

No way.

43:49: Sreeram Kannan:

Okay. So...

43:50: Anna Rose:

EigenLayer is taking over everything, though.

43:54: Sreeram Kannan:

So to finish the rollup thing, one more category that we see is like fishermen that Tarun just alluded to in Polkadot. The idea of who's watching in an... if there are a lot of optimistic rollups, somebody needs to be watching these optimistic rollups to trigger fault alerts. And today, there are a handful of major optimistic rollups, and there's lots of extraneous parties whose job involves also watching the network, because you're an RPC, you are an exchange, you're a block explorer. Whatever your job is, you happen to be watching the network. But in the era of thousands of application-specific rollups, and some of the rollups are actually building to be are highly transient, like they just open up, be a roll-up for a few hours and then vanish. You know, do your NFT distribution and then vanish or whatever. So these kinds of rollups, who will be watching? And nobody knows whether there'll be enough people watching. So a watchtower service is being built where a random group of nodes are selected to watch each rollup and you can spin up tasks and so on.

45:00: Anna Rose:

And that's the Fisherman kind of model.

45:02: Sreeram Kannan:

That's a Fisherman-like service on EigenLayer. The coprocessor is the next category, like rollups. The way I define coprocessor is like a serverless Lambda. It's a stateless service. I'm sitting in Ethereum, and then I want to run an AI inference. And then I want to consume the output of the AI inference. Why? Maybe because I want to do intelligent DeFi. Like, I put my money into a Uniswap pool, and I don't want to get raided by UniswapX, basically only sending toxic flow into my liquidity provision. So what I say is, I put my money into a pool and say that the price of this pool is modulated by an AI protocol, and the AI protocol looks at all the history of the trades and then tries to adjust the spread so that the toxicity is contained. But I need this AI inference to be highly trusted because if somebody says, oh, one ETH should be set as $20, and then somebody can come and raid the pool, you don't want that. So you want this AI inference to be highly trusted. So what you could do is run an EigenLayer service where there is enough economic security.

46:08:

Let's say you want to get at least $100 million economic security because in a given day, your trade volume is less than 100 million, then that makes the system fully secure. So this is an example of a coprocessor. Another example of a coprocessor might be, I want to run a Linux box and this particular program on this Linux box and get the output and then promise it on Ethereum. Or I want to run a database, I want to run a SQL query and then get it back. So all of these things could be either done using ZK technologies or they could be done using crypto economic security. And the trade-off here is like what is the excess cost of proving that goes into cryptographic like a ZKML or a ZKSQL or whatever set of solutions. Or can I instead of paying for the cost of brewing, I can pay for the cost of capital and then borrow the security from EigenLayer?

47:06: Anna Rose:

But in moving into the coprocessor space or having that be an option, is there still a reason to create a very from the ground up coprocessor? Like, do they still get some advantage in being able to build it from scratch versus building it on EigenLayer?

47:25: Sreeram Kannan:

Is there an advantage of building a coprocessor from scratch? I mean, the way I think about it is it's not a binary question of if you are building on EigenLayer versus building on your own, because EigenLayer is fully programmable. So whatever you can build on your own, you can also build on EigenLayer.

47:42: Anna Rose:

But does it lose anything by building on EigenLayer?

47:45: Sreeram Kannan:

The constraint that you're suffering...

47:47: Tarun Chitra:

You lose the ability to make a token.

47:50: Sreeram Kannan:

No, no, no. Okay, that's a misconception I want to correct.

47:52: Tarun Chitra:

Yeah, yeah. You lose the need to do it at inception.

47:55: Sreeram Kannan:

Okay, that may be. So what it does is, this is one of the things that everybody asks me is, hey, if you're saying that you don't need a token for your own like, middleware or service or whatever you're building, then what would people do? Like, where are they going to go? But first thing to observe is that's already true for being an application on Ethereum. Every dApp on Ethereum also has a token, the tokens used for governance or other purposes. In EigenLayer, we also provide native support for something called dual staking. So let's say you have a coprocessor and you have a coprocessor token, you can have both the coprocessor token be staked and the Ethereum token be staked. So you're borrowing a sense of economic security, and how much of each you can decide by controlling how much fee you're willing to share between the two layers.

48:46:

And over time, maybe for bootstrapping, you need a lot of ETH, and over time, you decide it's not that beneficial to your system, so you can tune it out to using more of your own token later on. So this is the kind of coprocessor category. But in general, this question shows up a lot, hey, what happens to my token? And the answer is nothing. Actually, fundamentally, if you look at the economic value of your token, it's coming from the future expected rewards, and if in the future expected rewards are maximized by not using ETH security and only using your own token security, you can tune the dual staking all the way to send all my fees to my own token.

49:28: Anna Rose:

Oh, wow.

49:29: Sreeram Kannan:

So you really don't have any loss. It's just optionality, and you can use the optionality in ways that most benefit your own community.

49:37: Anna Rose:

I feel like you already, kind of at the beginning, or like a few minutes ago, you defined restaking really well. But I feel like I still need to go through the motions of what it is.

49:51: Sreeram Kannan:

No but maybe this will help. I think restaking became a meme and a word, so we stuck with it, but really what we're building is permissionless, programmable staking. So when you're staking in any given blockchain protocol, what is happening is you are making a promise to run that blockchain protocol according to the rules. Otherwise, you're liable to lose your ETH. And each blockchain, when the blockchain is created, specifies these rules and they're programmed into the rules of the blockchain. But what we figured is it's just stake and if Ethereum is Turing complete programming language, I can subject your stake to arbitrary slashing conditions so that you now create this general purpose layer for programmable staking where anybody... So it's permissionless programmable staking, because anybody can come and create new staking conditions by writing new programs. And then you can bind yourself to them.

50:53: Anna Rose:

I thought, though, when I first heard restaking, that it was somehow in the category of liquid staking. But it isn't. It's...

51:00: Sreeram Kannan:

A lot of people think that.

51:01: Anna Rose:

They thought that, yeah. But as a staker, can you just walk through what's happening if you stake? Normally, a staker will be running their validator and staking their ETH to the validator, or they'll be like staking through a pool. In this case, they're staking with a new piece of software. Is it a validator as they know it? This is the part I wanted to understand.

51:24: S :

So, we have two roles, a staker and an operator. The operator is the one who's actually running the service.

51:29: Anna Rose:

It's the validator, kind of.

51:30: Sreeram Kannan:

It's the validator. So what the staker does is the staker puts up the money into the contract, and then they specify who their operator is. It could be themselves, or it could be delegated. And why they trust the delegate and so on, it's up to them. And then because the delegate is now the operator downloads and runs all these containers that the staker is opting into. Let's say the staker opts into this Oracle and data storage and so on, then the operator has to actually go and download and run all those services. And by them downloading and running these services, they earn a certain fee. And the staker keeps a certain fraction of the fee, and the operator gets a certain cut of the fee. So it's very much like staking any other thing, except it's an infinitely expansive staking protocol. You stake once, and then you can restake into all these services.

52:28: Anna Rose:

Do you end up at like, do you have to sort of stake more, and then a portion of it is going towards regular staking?

52:34: Sreeram Kannan:

No. You just...

52:35: Anna Rose:

Actually, is a portion of it going towards regular staking on Ethereum?

52:38: Sreeram Kannan:

That is correct. You're subjecting your 32 ETH to multiple conditions.

52:43: Anna Rose:

How do you actually do that under the hood though? Because you still... The operator still needs to have a validator with 32 ETH. It can't change the rules of Ethereum.

52:51: Sreeram Kannan:

That's right. So this was one of the early things we had to figure out, this hack, which is the thing I was explaining about the withdrawal credentials. Whenever you stake in Ethereum, you have the ability to set the withdrawal credential to your own wallet. But instead, you can set it to a smart contract on Ethereum, and in that smart contract, you assign the withdrawal power to yourself. That smart contract is the EigenLayer smart contract. So you stake in Ethereum and then set the withdrawal address to an EigenLayer smart contract, and in the EigenLayer smart contract, you say, hey, I am the one who has the ability to withdraw this money at the end of the day. And what this does is enable the EigenLayer contract to take away your ETH if you misbehave. So that's what it does.
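
Here is a minimal Python model of that withdrawal-credential trick, with the contract reduced to an object that deducts penalties before releasing an exited balance; the names and the penalty bookkeeping are illustrative assumptions, not the real contract interface:

```python
# Toy model of native restaking via withdrawal credentials: the
# validator's withdrawal credential points at a contract, so when the
# stake exits, the contract can deduct any accumulated slashing before
# the staker receives the remainder.

class RestakingContract:
    def __init__(self):
        self.penalties = {}   # staker -> total slashed amount
        self.withdrawer = {}  # staker -> who may withdraw

    def set_withdrawer(self, staker: str):
        # "I am the one who has the ability to withdraw this money."
        self.withdrawer[staker] = staker

    def slash(self, staker: str, amount: float):
        self.penalties[staker] = self.penalties.get(staker, 0.0) + amount

    def withdraw(self, staker: str, exited_balance: float) -> float:
        assert self.withdrawer[staker] == staker
        # The contract, not the staker, receives the exited ETH first,
        # which is what lets it enforce the extra slashing conditions.
        return max(0.0, exited_balance - self.penalties.get(staker, 0.0))

pod = RestakingContract()
pod.set_withdrawer("alice")
pod.slash("alice", 1.5)             # misbehaved on some opted-into service
print(pod.withdraw("alice", 32.0))  # 30.5 -- cannot withdraw the entire amount
```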

53:42: Anna Rose:

But still, I guess the thing that isn't solved here is the 32. Does that mean that anyone using restaking has to put up more than 32 ETH? Or do you pool...

53:52: Sreeram Kannan:

That's if you're restaking natively; this is what we call native restaking. But you could also take stETH and put up, say, 3.17 stETH into the EigenLayer contracts and then specify who the operator is.

54:06: Anna Rose:

Okay.

54:07: Tarun Chitra:

...this must have been like August...

54:42: Anna Rose:

But I'm just going... so I see this. So using staked ETH, you're already... let's go through that example. So you're saying either you're going to put up 32 plus something that could be used for the restaking?

54:54: Sreeram Kannan:

There's no plus something. It's just 32.

54:56: Anna Rose:

Just 32. But then... if you just put up 32, doesn't it all go through to the validator?

55:00: Sreeram Kannan:

It all goes to the validator. But you have to set the withdrawal address where...

55:04: Anna Rose:

Withdrawal, it's the slashing that you lose on. Okay.

55:07: Sreeram Kannan:

That's right.

55:07: Anna Rose:

And the gain.

55:08: Sreeram Kannan:

When you withdraw, you will not be able to withdraw your entire money.

55:11: Anna Rose:

And the yield comes because of those services that are paying down through the system. I get it, okay.

55:16: Sreeram Kannan:

So it's like taking a parlay bet, where you lose your bet if any one of those things goes wrong.

55:21: Anna Rose:

Got it. It's just... Okay, okay. Now that's the native. Now walk me through...

55:26: Sreeram Kannan:

Liquid staking.

55:26: Anna Rose:

The liquid staking token is like...

55:27: Sreeram Kannan:

Liquid staking is one of the easiest ways to participate in EigenLayer. You just take your stETH token or cbETH token, put it into the EigenLayer contracts, and then specify some operator like Coinbase or whatever, and then let them do their thing.

55:40: Anna Rose:

And again, you have withdrawal access so that if you do something bad, you get slashed.

55:46: Sreeram Kannan:

Yeah, it's not only withdrawal access; your stETH is sitting in the contract.

55:49: Anna Rose:

And yet, basically you allow this to still participate in the system because it's Staked ETH. Like it's already connected to Ethereum and staking.

56:00: Sreeram Kannan:

In some sense, EigenLayer contracts are designed to be highly general purpose. I could put in USDC for all I care. So you can stake anything you want; it's general-purpose staking. There's nothing specific about stETH or lsETH. It's just that these assets are already reward-earning, right? I'm already earning the base layer rewards, and in addition I'm getting these rewards, which makes it very easy for people to not worry about the opportunity cost of capital. Whereas if I ask them to put in unencumbered ETH or USD or anything, then we have to worry about whether they're making that 5%, 6%. Here, it's: I already got my 4.5%, 5%, and this is something on top of it. So that's the economics; it's just in the staker's favor to stake one of these reward-bearing assets rather than native ETH.
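
The economics reduce to a simple comparison. With illustrative numbers (every yield below is an assumption for the example, not a quoted rate):

```python
# Why LSTs lower the opportunity-cost bar: an unencumbered asset must
# beat the outside rate on its own, while an LST already carries the
# base staking yield, so any extra AVS rewards simply stack on top.

base_staking_yield = 0.045  # ~4.5% from Ethereum staking (assumed)
avs_rewards = 0.020         # hypothetical extra yield from restaked services
outside_rate = 0.050        # what idle capital could earn elsewhere (assumed)

lst_total = base_staking_yield + avs_rewards
print(f"stETH restaked: {lst_total:.1%} vs outside option: {outside_rate:.1%}")
# stETH restaked: 6.5% vs outside option: 5.0% -- the AVS rewards only
# need to be positive; they don't have to beat the outside rate alone.
```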

57:03:

Another category that we are super excited about is cryptography, which many of your listeners may be super interested in. All kinds of interesting new cryptographic systems can be built on EigenLayer. For example, suppose you want to build a system using secure multi-party computation. There's no way for you to build that either as a rollup or as a native smart contract on Ethereum, because you are specifying what kinds of computation each node should do and what specific information each of those nodes holds. For example, imagine a system like Penumbra, which has state dispersed across these different nodes, and you want to build applications on top of it. You can build this on EigenLayer, because you have a decentralized network that you can borrow. So you can build something like Penumbra as an AVS on top of EigenLayer. You can also build threshold encryption. Let's say you want to build an encrypted mempool, where you send transactions and the transactions are all encrypted to a threshold key held by the threshold group.
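
The committee-held key idea rests on t-of-n threshold cryptography. As a toy illustration of that principle only (real encrypted mempools use distributed key generation and threshold decryption, not plain secret reconstruction), here is Shamir secret sharing in a few lines of Python:

```python
# Toy Shamir t-of-n secret sharing over a prime field: any t of the
# n committee members can reconstruct the secret, fewer learn nothing.

import random

P = 2**127 - 1  # a Mersenne prime; fine for a toy field

def share(secret: int, t: int, n: int):
    # Random degree-(t-1) polynomial with f(0) = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation evaluated at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=123456789, t=3, n=5)
print(reconstruct(shares[:3]))   # 123456789 -- any 3 of 5 shares suffice
print(reconstruct(shares[2:5]))  # 123456789
```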

58:13: Anna Rose:

Yeah, which would prevent sandwich attacks.

58:15: Sreeram Kannan:

...on development worked back in...

59:01: Tarun Chitra:

That was just proof that you're from Seattle.

59:06: Sreeram Kannan:

Instead, what's happening in...

59:51:

And on top of the cloud, there are thousands of successful software-as-a-service solutions, each hyper-specialized in some particular domain: hey, I'm an authorization service for social networks, like OAuth; I'm a NoSQL database for enterprise applications. Very, very specific in the types of use cases being dealt with. But these are still more foundational pieces, and what happens is consumer applications integrate a bunch of these SaaS services in the backend and then create an end-user application. A typical end-user application on the web uses 15 SaaS services in the backend. So when people come and tell us, oh, if you have a lot of modular blockchain layers, you have to pay a fee on each of these layers... that's exactly how the internet works, if you haven't noticed.

Anna Rose:

Wow. So you're bringing SaaS to blockchain.

Sreeram Kannan:

Yes. Unleashing the SaaS era of blockchain. Because SaaS is actually open innovation. Our core thesis is open innovation: somebody who's super specialized in building, let's say, FHE for some particular application should just build that, and an end-user application should consume these services as it needs them, with everything interoperating through a shared security layer, which is EigenLayer. That's our vision. So what we envision is people building these protocols on top of us. Instead of SaaS, you can think of it as protocol-as-a-service: anybody can build an arbitrary protocol and launch it as a service, and now you can concatenate these protocols-as-a-service and then build and use applications. So that's the vision that we're building.

Tarun Chitra:

One very big financial difference between this and SaaS, though, is that you have dynamic pricing at all times, whereas SaaS pricing is oftentimes fixed. Obviously there are preemptible nodes in the cloud, but you need to get to a certain scale for dynamic pricing to work, whereas here you have dynamic pricing from the beginning. And that's a totally different economic world.

Sreeram Kannan:

Completely agree, and I don't like that. We're trying to change it. So for example, in EigenDA, you have static reserved pricing. When you think about something like AWS: people ask us, oh, how much block space do you have? But nobody asks AWS how much cloud space they have. The cloud space expands to fill the requisite demand. And the reason it does that, and does it in a very smooth manner, is that 70% of the instances running on AWS are actually reserved instances. There are also spot instances, where you can go and ask right away, give me something; that's dynamic pricing. But the reserved instances give you long-term coherence, which gives you price certainty, and that's completely different economics. EigenDA is built on this dual model, where you have a spot market where you can go and buy bandwidth in the moment, but as a rollup, I know I need 100 kilobytes per second over the next year, so I prepay that fee with a very good discount, and then I have access to it.
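
To see why the dual model matters, here is an illustrative Python comparison of prepaid reserved bandwidth versus a volatile spot market; every price and parameter below is made up for the example:

```python
# Toy comparison of the two pricing modes described above: a one-year
# reserved-bandwidth commitment at a discount vs. buying the same
# bandwidth day by day on a fluctuating spot market.

import random

random.seed(7)  # deterministic demo

SECONDS_PER_YEAR = 365 * 24 * 3600
bandwidth_kbps = 100        # rollup needs 100 KB/s for a year
spot_base = 1e-6            # $/KB baseline spot price (assumed)
reserved_discount = 0.4     # 40% discount for the 1-year commitment (assumed)

reserved_cost = bandwidth_kbps * SECONDS_PER_YEAR * spot_base * (1 - reserved_discount)

# Spot: same demand, but the price swings from day to day.
spot_cost = sum(
    bandwidth_kbps * 86_400 * spot_base * random.lognormvariate(0, 0.5)
    for _ in range(365)
)

print(f"reserved: ${reserved_cost:,.0f}  spot: ${spot_cost:,.0f}")
# Reserved is not always cheaper, but its cost is known in advance:
# the "price certainty" a rollup needs to quote fees to its own users.
```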

Tarun Chitra:

This reminds me of a very old project. I don't know if this ever turned into the gas token. Do you remember this?

Tarun Chitra:

It kind of died because of EIP-...

Sreeram Kannan:

Yeah, I think the difference between things like gas token and this is that the gas token is like a spot future: you're actually trading the spot price. Reservation bandwidth, instead, is like contract pricing. You have a contract with your oil supplier to supply a barrel of oil at this price for the next year. Contract pricing happens directly between the seller and the buyer, so it's actually much more rigid in the types of guarantees that you can provide.

Tarun Chitra:

Yeah, although I would guarantee that if you went and interviewed a bunch of Web2 companies and asked, hey, would you be willing to pay Cloudflare on dynamic pricing, where most of the time you're actually going to pay 10 times less than what you're paying now, but sometimes you're going to pay 10 times more, I bet you most people would find that appealing. But if you think about how SaaS payment processing works, and how there aren't really notions of streaming payments, you effectively have this huge overhead for startups. Stripe doesn't offer you dynamic pricing, right? You have to get to the scale of Uber or AWS to get dynamic pricing. So there's also this thing that, to me, has always been the beauty of crypto: you can be arbitrarily small in size, but have dynamic pricing if you need it.

Sreeram Kannan:

Yeah, so that's a really, really good point. If we only had reserved instances, then small rollups would not be able to use EigenLayer. So the availability of these multiple pricing models is useful. But the cost certainty that reserved bandwidth gives you matters too. Imagine you're a rollup and you have uncertain data availability costs over the next year, but you have to tell your users, like Coinbase, how much it's going to cost on a daily basis. How do you do this? It becomes impossible. For example, we know Ray Dalio helped McDonald's hedge soy futures and so on, so that their burgers could actually have a constant price over longer time scales.

Anna Rose:

No way.

Sreeram Kannan:

And so these are mechanisms that are needed to actually build pretty rigid markets.

Tarun Chitra:

Yeah, I just think the difference is that with most non-fully-native payment mechanisms, which is almost everything, right, only at really high scale can you do these kinds of streaming things. It's always been quite hard to get access to that. And to me, the beauty of crypto is that you can have that if you want it. Because, obviously, people love static pricing, right? It's nice. But if I can really offer you way more efficiency on average, there are a lot of people who would love to reduce their Cloudflare bill, for instance, if it was dynamic. But it's just a pain in the ass for them to do it, because they're not even their own payment provider; they need their payment provider to offer it... And I think this notion of services that are programmable from the start in how they take payment means there's a fundamental difference between this type of stuff and SaaS. So hopefully we don't have to use the word middleware again.

Anna Rose:

All right. Well, I think we've reached the end of the episode. I know I've now reached the end of my time.

Sreeram Kannan:

It's late night here.

Anna Rose:

Devconnect. Yeah, it is late here. Thank you so much for coming back on, Sreeram.

Sreeram Kannan:

Thank you so much, Anna. Thank you, Tarun. This was super fun, chatting about all these things with you, and I'm looking forward to hanging out with you all at other events.

Anna Rose:

Thanks, Sreeram. I want to say thank you to the podcast team, Henrik, Rachel, and Tanya, and to our listeners, thanks for listening.