Summary
In this week’s episode, Anna and Tarun chat with Naveen Durvasula about his recent work ‘Robust Restaking Networks’. They discuss Naveen’s early work on matching markets and how it led him to mechanism design, before exploring how the concepts of restaking were first presented and how both Naveen and Tarun have been working to better model the mechanisms underpinning restaking, to understand how they work and how they can be optimized.
Further Reading:
- Robust Restaking Networks by Naveen Durvasula and Tim Roughgarden
- EigenLayer: The Restaking Collective by the EigenLayer Team
- General Bio + Previous Research of Naveen Durvasula
Check out the ZK Jobs Board for the latest jobs in ZK at jobsboard.zeroknowledge.fm
zkSummit12 is happening in Lisbon on Oct 8th! Applications to attend are now open at zksummit.com, apply today as early bird tickets are limited!
Episode Sponsors
Attention, all projects in need of server-side proving, kick start your rollup with Gevulot’s ZkCloud, the first zk-optimized decentralized cloud!
Get started with a free trial plus extended grant opportunities for premier customers until Q1 2025. Register at Gevulot.com.
Aleo is a new Layer-1 blockchain that achieves the programmability of Ethereum, the privacy of Zcash, and the scalability of a rollup.
As Aleo is gearing up for their mainnet launch in Q1, this is an invitation to be part of a transformational ZK journey.
Dive deeper and discover more about Aleo at http://aleo.org/.
If you like what we do:
- Find all our links here! @ZeroKnowledge | Linktree
- Subscribe to our podcast newsletter
- Follow us on Twitter @zeroknowledgefm
- Join us on Telegram
- Catch us on YouTube
Transcript
Welcome to Zero Knowledge. I'm your host, Anna Rose. In this podcast, we will be exploring the latest in zero-knowledge research and the decentralized web, as well as new paradigms that promise to change the way we interact and transact online.
This week, Tarun and I chat with Naveen, a graduate student at Columbia University, about his recent work on restaking. We start with a look into Naveen's early work on matching markets and how this led him to work on mechanism design. We then discuss how the concepts of restaking were first presented to the public and how both Naveen and Tarun have been working to better model the mechanisms underpinning restaking, to understand how they work and to figure out how they could be optimized.
Now, before we kick off, I just want to point you towards the ZK Jobs Board. There you can find job opportunities working with top ZK teams. I also want to encourage teams looking for top talent to post your jobs there as well. We've been hearing from more and more teams that used it that they have found excellent talent through this ZK Jobs Board. So be sure to check it out. I've added the links in the show notes.
Also, quick reminder, the zkSummit12 is coming up. It's happening in Lisbon on October 8th. Be sure to grab your spot as space is limited. This is our one-day ZK-focused event where you can learn about cutting-edge research, new ZK paradigms and products, and the math and cryptographic techniques that underpin ZK systems. All info can be found at zksummit.com and hope to see you there.
Now Tanya will share a little bit about this week's sponsors.
[:Aleo is a new Layer 1 blockchain that achieves the programmability of Ethereum, the privacy of Zcash, and the scalability of a rollup. Driven by a mission for a truly secure Internet, Aleo has interwoven zero-knowledge proofs into every facet of their stack, resulting in a vertically integrated Layer 1 blockchain that's unparalleled in its approach. Aleo is ZK by design. Dive into their programming language, Leo, and see what permissionless development looks like, offering boundless opportunities for developers and innovators to build ZK apps. This is an invitation to be part of a transformational ZK journey. Dive deeper and discover more about Aleo at aleo.org.
Gevulot is the first decentralized proving layer. With Gevulot, users can generate and verify proofs using any proof system for any use case. You can use one of the default provers from projects like Aztec, Starknet and Polygon, or you can deploy your own. Gevulot is on a mission to dramatically decrease the cost of proving by aggregating proving workloads from across the industry to better utilize underlying hardware while not compromising on performance. Gevulot is offering priority access to ZK Podcast listeners. So if you would like to start using high performance proving infrastructure for free, go register on gevulot.com and write ZK Podcasts in the note field of the registration form. So thanks again, Gevulot.
And now here's our episode.
[:Today, Tarun and I are here with Naveen, a graduate student at Columbia University in computer science. He works with Tim Roughgarden. He is also working on mechanism design at Ritual. Welcome, Naveen.
[:Thanks for having me. It's great to be here.
[:Nice. And hey, Tarun.
[:Hey. Excited to be back for our second recording this week.
[:True.
[:It's a record.
[:Yeah. Although to the listener it will be a week later, but we know it's the same week. So today we're going to be talking about restaking and we're going to be revisiting that topic. Tarun, this was actually your idea to invite Naveen on the show, so maybe you could share a little bit about what you have planned or what you want to talk about.
[:Yeah. So I think in my mind there's always this sort of leading/lagging cycle in research, where oftentimes there are things that grow a lot in crypto, but no one knows why they work or how they work. And sometimes, hey, they're total Ponzi schemes, but other times, it's like Uniswap, or maybe other L1s are a good example. And I think there's oftentimes this thing where a little bit afterwards people start trying to formalize the reason why something works, either from a distributed systems lens, sort of an economic lens, a cryptography lens, whichever direction it may happen to be.
And I think I first ran into Naveen's work before he was doing anything in crypto, maybe a year and a half or two years ago, when he wrote a paper on auction theory as an undergrad. It was more from the lens of sort of online learning and kind of more the machine learning school of the world. I'm not sure if you find that offensive or not -- not at all, because machine learning just means so many things now that, you know, it could mean everything from ChatGPT to linear regression. So I'm just trying to... it's an umbrella term, but it was more in that vein. And a lot of times, I think people from that world tend to not like crypto, or maybe view crypto suspiciously, especially on the research side. But Naveen then very quickly wrote one of his first papers in crypto on restaking, formalizing a thing that I think people, even the people who invented it, who had spent a lot of time writing research on it, hadn't quite gotten correct.
So I think his ability to kind of move between fields and generate new research really quickly was quite impressive. And because, in general, systems like restaking are going to -- I don't think they're going away. They might have different names and people might change some of the designs a little bit. But I think the reason there's $15 billion in those systems is people do view those as sort of the new way of having growing ecosystems and building L1s versus just pure rollups. Because I kind of think the natural extension of rollups and L1s ends up being things that look like restaking.
So that's sort of my spiel and preamble of why I think it's kind of interesting. I mean, of course, Naveen's come at this from a totally different angle, where he and I both read Appendix B of the EigenLayer paper and got two different interpretations of it. And then we both worked on different research problems, but I think they kind of converged. So, anyway, maybe it'd be great to talk a little bit about what you worked on before and how you got interested in it, before we talk about how you got into crypto research.
[:Yeah, sure. So, I guess research has been a thing I've been thinking about for quite a bit of time. Initially started working on some stuff, I guess, back in middle and high school at Maryland, primarily focusing on these kidney exchange matching markets. I guess I kind of got into research somewhat as a fluke. Some Science Fair judge thought it was maybe worth some professors' time to work with me. Frankly, I don't know how that worked out, but --
[:When you say kidney, you mean, like kidney kidneys? Like health care kidneys.
[:Yes, yes.
[:Okay.
[:That's exactly right. Yeah.
[:Oh, yeah. Maybe we should talk about what matching markets are in kidney exchange -- sort of a detour.
[:Yep. Yep. So this is the stuff I worked on initially, and the general gist there is that lots of people obviously need organ transplants. And before kidney exchanges were invented, the main way that you would get an organ transplant is through what's known as deceased donor donation, where if you sign up to be an organ donor, you get put on this list. And if a patient now needs a transplant, there is some priority ordering that's determined among people that need transplants. And based on that ordering, they get allocated kidneys.
First of all, the supply of transplants is low. And second of all, deceased donor donation, like deceased organs, are not as high quality as living donor donations. And so, if you did have a living family member or someone that's willing to donate to you, that's obviously preferable. And basically Al Roth came along. He's a professor at Stanford and he worked -- he started kind of the seminal work on kidney exchanges. And the idea there is, let's say I have a kidney -- sorry, I'm a patient and I need a transplant, and then I have someone I know, let's say a friend or a close family member that's willing to donate one to me, but we happen to be biologically incompatible. So in the old model of the world, nothing could happen here, right?
But now you kind of have this idea of, okay, what if you had a bunch of these pairs? So maybe I need a transplant and my sister is willing to donate one to me, but we're incompatible. And then maybe you, Anna, need a transplant and Tarun is willing to donate one to you, but you guys are incompatible. But now let's say that Tarun is actually compatible with me and my sister is compatible with Anna, then we can actually swap. Right? And so kidney exchanges are kind of these matching markets, actually, without money, because there's lots of policy and law around what you can do around organs and money.
[:The one kidney equals one kidney. There's no profit.
[:Exactly. You're trading kidneys for kidneys. There's no money involved, and there are all sorts of other constraints around it. Because you can't place a contract on organ donation, the donations actually have to happen simultaneously if it's kind of a cyclical structure like what I just described. People also realized you could add some other types of trading structures where, let's say Tarun is just an altruistic guy, he wants to donate his kidney. He can give one to me, and now my sister can kind of choose to pass it on and then donate to someone else. And now you can have kind of an asynchronous chain of transplants.
[:Wow.
[:So this was one of the -- I think one of the pretty motivating examples for me of where kind of math and market design could be pretty helpful in the real world. It was a pretty practical problem, and I initially worked more along the lines of how do you do learning for kidney exchanges, in terms of, if I'm a patient, can I get some predictions for how long I'll have to wait, what the outcomes will look like, stuff like that. And I kind of got gradually more and more into the theoretical side of things. I started thinking more about learning problems in general, and I had a brief stint also working with Scott Kominers, who's also now in crypto. And this was back in high school, and I worked on some matching market problems with him. So I kind of got more into the standard mechanism design literature as well, while also learning a bit more about this learning stuff.
And there wasn't really a cohesive thing, it was just working on a bunch of these different hodgepodge projects that didn't really touch each other that much. Maybe a general theme of mechanism design and a general theme of maybe learning, but no kind of coherent narrative. And then when I went to Berkeley I worked with Nika Haghtalab and she works on online learning and mechanism design together. So putting together kind of the tools of optimization, statistics and economics. And so that's where I worked on this learning and auction stuff that Tarun was talking about. And there the idea is, let's say I want to run an auction every day, is there a way for me to figure out how to run an auction each day so that in the long run, I'm doing as well as if I kind of picked the best auction in hindsight at the very beginning.
So these were all theory heavy projects and I kind of moved from more practical stuff on the kidney exchange side to more theoretical econ stuff, and the math was super cool. It was great. But then when it came time to apply for PhD programs, I had a bit of a crisis of faith per se, in that --
[:Really? Why?
[:So I was working on all these three problems and I kind of felt like I was working on things in this weird, uncanny valley. The problems had -- when you write the motivation section for the paper you say, oh, this applies to the real world, here are all these things that people care about, but the results are kind of too contrived to be clean theoretical results, and they're also just way too theoretical for anyone to use this stuff in practice. Right? So the online learning and auctions paper -- credit to Tarun for actually maybe thinking about how to use it practically -- I couldn't think of a way someone could use that in practice. It was a nice theoretical result along the lines of maybe possibility for learning, but the actual algorithm that was proposed would take eons to run properly. And so I kind of decided, okay, I want to either go on the very pragmatic side and do stuff that people actually care about, or go purely on the theoretical side and prove nicer theoretical results --
[:Than these ones.
[:Exactly, exactly.
[:Okay. Which direction did you go? I guess, practical.
[:So I chose to go -- so I guess I still work on theory stuff, but I chose to go fully pragmatic in the sense that I didn't want to work on a problem unless I really knew that there was at least someone that actually cared about this stuff.
[:Okay.
[:... and was talking about EIP-1559.
[:That's cool.
[:So that was -- once I kind of got sold on the vision, it was not a super hard choice for my PhD process. Yeah, working with Tim has been pretty awesome.
[:I mean, I will say the one thing, maybe one redeeming feature about ML and AI research is at least people care about theoretical guarantees, which is just not true in a lot of the other parts of the world where the theoretical guarantees are so divorced from what people do in practice that it's like they really are mental masturbation. There's really --
[:Right. Yeah, I mean, maybe to go on the more theoretical sidetrack, one of the main ways of analyzing, I guess, the performance of a learning algorithm is something called VC dimension, which is a type of worst case analysis for learning. And if you looked at what those bounds predict current performance should be for ML systems, it's really bad. There's a huge divorce between what we're able to prove and what people actually are able to do in practice. And so theory has been playing catch up on the ML side, because it's just so hard to analyze stuff, and proving theorems that actually help practitioners is really hard, because practitioners are just doing stuff that really just shouldn't be possible if you're looking at a worst case analysis, and doing a good average case analysis is quite tough.
[:I guess, now, as we move to kind of what you've worked on within crypto, I think there's two main areas that you've worked on. The first being restaking and restaking risk, and the second being resource pricing. So for our listeners who aren't familiar with both of these areas, which could actually be a quite substantial portion, because a lot of listeners are cryptographers or developers who are maybe not quite so tuned to these areas, could you ‘ELI5’ them and give a kind of high level description of both of these areas and why they're important?
[:Yeah, sure. Restaking, actually -- maybe to tie us back into the narrative of how I got here in the first place. When I started working at Columbia, and at roughly the same time as when I started working at Ritual too, I wanted to just get some outside engagement to learn about how stuff worked, because, frankly, I knew nothing about blockchains until about a year ago.
[:Oh, wow.
[:And so I was just trying to learn what's the infrastructure? What do people care about? I was just trying to figure out how I can start working on the problems that I think people care about the fastest. And so I first heard about restaking, actually, while I was at Ritual, because Ritual is now working with EigenLayer, and EigenLayer was coming up, restaking was coming up. And the main idea there is, you have Ethereum, and Ethereum is a very secure blockchain in the sense that there's lots of validators. But maybe you have other protocols that you might want to run, and the reason you might want to run them is that perhaps they have functionality that extends beyond what Ethereum or any other blockchain can do.
And you kind of have this issue of if I want to run a protocol with a proof-of-stake system, I need to recruit people that want to put stake in my protocol. And that's a tough ask, because you're essentially asking people to lock up money, and anytime you ask people to lock up money, you need to pay also a cost of capital expense because they could have been using that money for many other things. And so you need to compensate them for that loss that they would be facing. So that's a pretty expensive thing to ask for, and it's also just like, how do you -- building a blockchain is almost like building a city. Like you have to coordinate so many parties. How do you just get these people in the first place?
And so the idea behind restaking, I thought, was pretty cool. It was, you have these validators that have already committed stake to Ethereum, let's say. And so in principle, the cost of capital expense for that stake has already been paid to them by Ethereum. We can talk about how that plays out in practice. Obviously, there's different quirks there. But in principle, if I think it's rational for me to put my stake in Ethereum, that means I expect that the rewards from Ethereum outweigh any cost of capital expense. And so there's all this stake that's already locked up in Ethereum. And now, let's say I have some other protocol, and I need stake to secure this protocol. Kind of the simple idea behind restaking is, okay, there's all this stake that already has this cost of capital expense paid off. What if I were to just reuse those validators?
So those validators are currently on the hook for Ethereum. The stake obviously serves multiple purposes: one is Sybil resistance, but it's also a slashing penalty that can be applied if they try to do a double-spend attack or any other malicious behavior. And so what if I, as another protocol, could say, hey, there's this existing stake that's already getting its cost of capital expense paid off, all I have to do now is pay those validators the additional operating expense of operating my protocol, and now I can recruit them as well.
And so now, that same stake is on the hook not only for Ethereum, but also for this other protocol. And so, when I first heard about this, I kind of had two thoughts. One is, okay, that is a neat way of recruiting security that makes things frictionless, or maybe not frictionless, but with less friction. The other thought was immediately, okay, we've seen lots of weird structures in crypto that have led to financial demise. Is this a safe thing to do? Is there a way this can be done safely? Or does this place an excessive burden on validators that perhaps could lead to some type of catastrophe in a worst case situation?
[:Yeah.
[:And so on the restaking front, kind of the first thing I was thinking about was, okay, is there a way to understand what is the risk that maybe restaking poses, both from a global sense, but also, if I'm just an AVS, a service that's recruiting security, can I understand what the risks are there? Because as soon as you start sharing validators, the consequences that occur on one protocol can start affecting security for another protocol. And so, yeah, essentially, understanding, first, figuring out what is a good model to understand all this stuff, and then second of all, thinking about is there a nice, crisp characterization of under what circumstances can this be done safely, and under what circumstances should you be concerned that things might go awry?
[:In this case, though, so like I mean, I think we're getting to this work that you recently released called Robust Restaking Networks. But was that more like a description of what already existed? Or is that like a -- did you rebuild it from scratch and say this is an ideal way to do this?
[:So when I first started working on this stuff, and Tarun mentioned this too, essentially the only technical document about restaking slash its security was this initial thing put out by EigenLayer. And in particular, there's Appendix B of this thing, which was on cryptoeconomic security. And so that appendix did lay out kind of a very basic model: there are validators on one side -- or I guess they call them node operators -- that might want to put their stake in some number of protocols. And on the other side, there are these protocols that want to recruit security, and then they talk about a cost-benefit analysis of under what circumstances can you say that the benefits of attacking some set of protocols don't exceed the costs. It's not a super long section. I think it's a pretty nice initial model, but that was essentially all that I could understand about the system as of then.
And so the first thing that I -- I don't know, maybe to go into what I was actually doing in that paper, like half the battle, I think, in doing blockchain research is modeling. Unlike theory work, where the problems are extremely well specified, they're clean mathematical statements, and someone just has to go out and prove a theorem. The challenge here, most of the challenge in some cases, is actually coming up with a good model, a good way of -- a formal way of understanding basically what's going on in practice, so that when you prove theorems, they're relevant to the real world. And so there were kind of two main modeling things to think about. One is how do you characterize risk here? Or what does risk mean? And two is, just what is a way of understanding all the entities and their interactions. And so from that EigenLayer document --
[:Appendix B. Yeah.
[:That's right. That's right. I kind of started thinking about things in terms of a bipartite graph.
[:One thing to note is, I think there was a lot of technical documentation on how the implementation would work and what the code would look like, right? But there wasn't any document on guarantees that you would get for -- you know, and I think the only thing was this kind of tiny little thing. And I think when you read what Twitter was writing about restaking and like, oh, it's going to all blow up, dot dot dot, it's Luna again, whatever. And then you read -- you kind of look at the code, you kind of see a very divergent version of the world. And I think part of this comes from the fact that it's actually quite difficult to describe these guarantees. And I think Naveen probably can talk about why going from what was known to a rigorous but simple formulation is actually kind of a difficult process.
[:Yeah, sure. You're kind of working in the fog here, right? There's so many people saying different things, and the theorems that you prove are not helpful unless there's kind of a first, a general formulation for how to think about everything. And there are many aspects to restaking. The aspect that I was focused on was first just global and systemic risk type of stuff. And so, the model -- I kind of first dialed out a lot of the stuff on Twitter and mostly just thought about, okay, at the very core of this thing, there are these validators, and they're being reused among some services. Right? And kind of following some stuff in Appendix B, you can think of there being some profit from corrupting any service. Right? If I launch a double-spend attack on some service, there's some amount of money I can make. As to whether or not that's a number you can actually know, that's a separate discussion. But for now, we're just trying to put something together, let's say this is a thing you know. And so now, okay, so there's a bunch of these services, each of them have some profit from corrupting them, and on the other side, there are these validators, and each validator has some stake. Right?
[:And by the way, by service, do you mean like, what became sort of these AVSs.
[:AVSs. That's exactly right. Yeah. Service and AVS are interchangeable here. And on the validator side, each validator has some stake, right? You can think of this as maybe their ETH stake. And so now, in a world where there's restaking, each validator might be using that stake for multiple services. So you can think about this as a bipartite graph, where on one side you have these AVSs or services, and on the other side you have these nodes. And you kind of draw an edge between a validator and a service if that validator is restaking or using their stake for that service, if they're acting as an operator for that service. And so now, on the service side, each service has some profit from corrupting it. Right? That's the amount of money you'd make if you were to launch an attack. And each service also has, depending on how it operates -- perhaps it's a PBFT-style consensus system -- some fraction of its total security, its total stake, that's required to launch an attack on it.
So let's say, to make things concrete, if there's one service, maybe just Ethereum, and a bunch of these validators, one validator can't launch an attack. You need a third of the stake. Right? So the profit from corruption would be the profit from launching a double-spend attack on Ethereum, the fraction required for corruption would be one third, and each validator has some stake corresponding to their Ethereum stake. Now, in kind of this general system -- so actually, maybe we'll start with that simple system with just Ethereum. You can ask, when are things safe from a cost-benefit analysis point of view? And that's basically when one third of the total staked Ethereum is bigger than the profit from corrupting Ethereum, right? Because if the profit from corrupting Ethereum was bigger than one third of the total stake of Ethereum, then it would be profitable for one third of people to come together and launch an attack. And so the baseline thing required for Ethereum to be secure in some sense is that one third of the total stake exceeds the profit from corrupting Ethereum.
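To make that single-service cost-benefit check concrete, here is a minimal sketch in Python. The function and variable names, and the numbers, are illustrative assumptions, not taken from the Robust Restaking Networks paper:

```python
# Hedged sketch of the cost-benefit condition described above; names and
# numbers are illustrative, not from the paper.

def is_secure(profit_from_corruption: float,
              attack_threshold: float,
              stakes: list[float]) -> bool:
    """A lone service is secure when corrupting it costs more than it pays:
    attack_threshold * (total stake securing it) > profit from corruption."""
    return attack_threshold * sum(stakes) > profit_from_corruption

# Example: an Ethereum-like service with a one-third attack threshold.
validator_stakes = [32.0] * 1000   # 32,000 ETH of total stake
print(is_secure(profit_from_corruption=9_000.0,
                attack_threshold=1 / 3,
                stakes=validator_stakes))   # True: 32,000 / 3 > 9,000
```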
So this is one way to think about things, but even that you might not find super satisfying, because there are lots of shocks that just happen in the world, especially in crypto. These shocks could be, for example, some protocol does unintended slashing on some stake, or some stake just drops out for whatever reason, validators maybe go down -- whatever it is, you can't stop small shocks from occurring in the world. And so let's say it was the case that one third of the total stake on Ethereum was exactly equal to the profit from corrupting Ethereum. Right? We might say it's secure, or maybe one third of the stake is like epsilon more than the profit from corrupting it, so it's kind of just barely secure. And this is kind of not that satisfying, because if there's just any small shock that happens to the stake, then it all of a sudden becomes profitable to launch an attack.
And so you can extend this notion of security to what you might call robust security, which is to say, suppose that you allow some small shock to take place in the ecosystem, and now you want to understand what's the total amount of loss, the ultimate loss of stake, that can happen after some attacks. This is kind of what I was thinking about when I was thinking about systemic risk. So to make things maybe more concrete, you could imagine, in a situation where there's multiple services, that some small shock happens in the world, and it causes some validators to come together and attack some services, right? And so now, because they attack those services, those validators get slashed, they lose their stake. But those validators might have been restaking for yet other services. And so now those other services are a little bit less secure, because people that were providing security for them have now gotten slashed. And so now, as a result, other validators might come in and attack those services. And so this could continue to go on and on, and a small shock might result in a loss of stake that's much greater than that initial shock.
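The cascade described here can be sketched with a small brute-force simulation. This is a toy reconstruction, not code from the paper: all names are invented, attacks are executed greedily (one possible sequence rather than the paper's exact worst-case definition), and services here remain attackable after being attacked.

```python
from itertools import combinations

# services: {name: (profit_from_corruption, attack_threshold)}
# stakes:   {validator: stake}
# edges:    set of (validator, service) restaking relationships

def profitable_attacks(services, stakes, edges):
    """Yield (attackers, attacked_services) where the attackers control at
    least the threshold fraction of every attacked service's stake, and the
    corruption profit exceeds the stake they would lose to slashing."""
    validators = [v for v in stakes if stakes[v] > 0]
    for k in range(1, len(validators) + 1):
        for vs in combinations(validators, k):
            for m in range(1, len(services) + 1):
                for ss in combinations(services, m):
                    feasible = True
                    for s in ss:
                        total = sum(stakes[v] for v in stakes if (v, s) in edges)
                        attacking = sum(stakes[v] for v in vs if (v, s) in edges)
                        if total == 0 or attacking < services[s][1] * total:
                            feasible = False
                            break
                    if feasible and sum(services[s][0] for s in ss) > sum(stakes[v] for v in vs):
                        yield set(vs), set(ss)

def cascade_loss(services, stakes, edges, shock):
    """Apply a shock (stake removed per validator), then execute profitable
    attacks greedily until none remain; return the total stake destroyed."""
    lost = sum(min(stakes[v], shock.get(v, 0.0)) for v in stakes)
    stakes = {v: max(0.0, stakes[v] - shock.get(v, 0.0)) for v in stakes}
    while True:
        attack = next(profitable_attacks(services, stakes, edges), None)
        if attack is None:
            return lost
        attackers, _ = attack
        lost += sum(stakes[v] for v in attackers)   # the attackers get slashed
        for v in attackers:
            stakes[v] = 0.0

# Toy, under-collateralized network: a 1 ETH shock cascades into all 27 ETH being lost.
services = {"A": (10.0, 1 / 3), "B": (6.0, 1 / 2)}
stakes = {"v1": 9.0, "v2": 9.0, "v3": 9.0}
edges = {("v1", "A"), ("v2", "A"), ("v3", "A"), ("v2", "B"), ("v3", "B")}
print(cascade_loss(services, stakes, edges, shock={"v1": 1.0}))   # 27.0
```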
[:How did you -- like you modeled this, I guess, but this hasn't happened in real life, has it?
[:Hasn't happened.
[:Like you are not -- it's not based on anything. So how do you know that that's how it works? Like that they would --
[:Great question. I mean, yeah -- I mean, even right now, slashing hasn't been, at least to my knowledge, I don't think EigenLayer has started slashing --
[:Some of the others have, but now there are multiple restaking protocols.
[:Restaking protocols, yes, that's right.
[:Some of them have more slashing and some less. Like it's actually gotten more complicated than the original model.
[:But, yeah, I mean, again, I think there's lots of facets of how this setup can potentially induce risk. This was one particular one. In general, just looking at cost-benefit analysis and looking at how small shocks affect the system -- this type of small shock analysis has been done in the traditional finance literature before, when looking at systemic risk. So it is kind of a motivated concept on that side, and if you're just thinking about costs and benefits, it is kind of the first thing you might think of as, okay, how do I characterize risk? There's lots of other risks -- you might overburden a validator, maybe with computational demand, or many other things -- but in terms of what you can concretely understand, what made a lot of sense to me was to just look at, okay, maybe different bridges have some understanding of what these profits from corruption for their services are. And you can look at this whole graph. This graph exists. You can see who's restaking for whom. You can look at the stake. Yeah. This is kind of the first thing that came to mind in terms of how do you just look at system-wide risk? Maybe from Ethereum's standpoint of, if a small shock occurs, does the presence of all these additional connections between services lead to a greater possible --
[:Make it worse.
[:Ethereum loss? Yeah.
[:One thing to note, and I think this is kind of one of the reasons I think it's good to formalize these things in an abstract math language rather than something that's more engineering or pure implementation, is that a lot of other networks have had things that look very similar to restaking, but they haven't analyzed it formally. For instance, Polkadot parachains and restaking are actually very similar. The only difference is that instead of this notion of the matching market happening by node operators choosing services -- the equivalent of services here would be a parachain -- instead of the node operators choosing parachains, there was this auction, and then they had to validate that particular service.
So the matching was sort of done by an auction versus the matching being done by, I get to choose where I place my stake. But fundamentally they actually had very similar properties. I think the main difference is that when you're doing something like restaking, you can move your stake around and so you don't have to get locked into one service. So like the worst case thing that could happen in sort of the Polkadot world, and I think this is often why people had trouble developing there, is like you, as the service, had to raise a ton of capital to kind of win this auction. And the thing is, you had to raise capital in DOT. So you'd have to sell your token for DOT and then like, yeah, and then you have to do a crowdloan. And the thing is, you could argue that the crowdloan plus auction is as close as possible to what restaking is, except restaking has -- it's just easier for the end user. Operators can just choose to validate one or another.
[:We're validators. ZKV is a validator in Polkadot, and so you're a validator in Polkadot, but you don't just choose which other parachain you're validating on. You also would become a collator on those. Actually, there's another system like Cosmos's ICS or interchain security, which I'm now -- I only am putting it together now that it sounds even more similar because there it's very new, or it's like a year old or something. And as a validator on Cosmos, you are actually actively choosing if you're going to be active on these interchain security chains. I don't know if you've looked at that. Is that similar or is that --
[:So, one main difference between the Polkadot and Cosmos versions of the world and the Ethereum version of the world is that in Ethereum restaking, there's a floor on the amount that I'm earning. So I'm always earning Ethereum staking yield at a minimum. The problem with all of the Polkadot and Cosmos designs is they're very favorable to ATOM and DOT as tokens, and they're very unfavorable to the services, because the services have to pay in ATOM or DOT and they don't have capital, they have like a small amount of fees. They have their token. So what they have to do is sell their token, buy DOT or ATOM, and then participate. Whereas in ETH staking, you have people who are already staking ETH and earning 3% to 5% yield. And this is viewed as supplemental income, versus being the main thing needed from the services.
[:I see. I see.
[:I think the economics in Ethereum are just strictly better. And also I think there's a little bit of greediness in both ICS and the DOT auctions, in that they only benefit the validators. The social welfare, like the splitting of revenue between the services and the validators, is very unfair in a lot of ways in the DOT and ATOM worlds, whereas the Ethereum one is closer to kind of -- and that's the point of these matching markets, right, they're trying to do some welfare maximization type of thing between the two. And I guess, Naveen, working on that in the past probably inspired this particular model -- you know, like what's the genesis story of how you --
[:What's super weird about the restaking matching market in particular is it's actually a matching market with negative externalities. Normally in a matching market, as matches take place, you can talk about welfare increasing. But here what's interesting is, as there are more matches, there's kind of more potential risk, depending on how it's done. And the reason for that being that more things are kind of tied together. And from a systemic risk point of view, you might expect things to get worse. Actually, maybe I can say the conclusion, since I only said what the risk measure is and not how to mitigate it.
[:Sure.
[:But the main upshot is that over-collateralization is basically what's both necessary and sufficient to mitigate this risk. So what that means is, if you go back to the Ethereum example, where there's just Ethereum and then a bunch of validators -- there, to make things robustly secure, where even if a shock occurs things are still safe, you obviously need some buffer between the profit from corrupting Ethereum and one third of the total stake in Ethereum. It turns out that from a global point of view, things are similar. You need it to be the case that for any collection of services and any collection of validators that can attack those services, there needs to be some buffer between what the profit from corrupting those services is and how much stake would be lost in that attack. And if you say that buffer is some multiplicative factor, you can actually bound the total stake loss after any sequence of attacks in terms of that buffer.
So, for example, to make things very concrete, let's say everything was always 10% over-collateralized, in the sense that any attack on some services always cost 10% more -- the total stake required to attack some services is always 10% more than the profits from attacking those services. Then you can actually say that even in a worst case situation, let's say that 0.1% of total stake was just lost arbitrarily, then the total ultimate loss after any attacks thereafter is upper bounded by 1.1%. So you can really concretely bound worst case loss in terms of this buffer, which is a cool property. Of course, there's caveats. Like I said earlier, knowledge of what these profits from corruption are might be limited to bridges or folks who are actually running those services. And so even though from a global perspective, it would be nice if you could always have this over-collateralization --
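One way to write down the guarantee being described -- in my own notation, as a paraphrase rather than a quote from the paper: if every possible attack is over-collateralized by a factor γ, a shock of ψ (as a fraction of total stake) can only grow into a bounded total loss, and the 10% / 0.1% / 1.1% numbers above are an instance of it.

```latex
\[
\text{total stake lost (shock + subsequent attacks)} \;\le\; \Bigl(1 + \tfrac{1}{\gamma}\Bigr)\,\psi,
\qquad\text{e.g. } \Bigl(1 + \tfrac{1}{0.1}\Bigr)\cdot 0.1\% \;=\; 1.1\%.
\]
```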
[:You might not know how much you need to over-collateralize.
[:So you, as a bridge, may not know what other folks are doing. This was only a global result of -- you know, okay, if there's some random shock that happens in Ethereum, then that's contained. But how do I know that everyone is over-collateralizing? Right? And that's the requirement for this mitigation to hold true. And if I'm a service in particular, how do I make sure that I'm protected? Right? And so the second part of this paper is really analyzing stuff from that perspective of, if I'm a particular service, I want to mitigate myself against -- I might not even understand shocks happening, like I might not have a good understanding of what shocks might occur outside of my own ecosystem. Right?
[:Yeah, yeah. But how do those shocks affect you anyway --
[:Exactly.
[:Yeah.
[:Right. And so is there a way for me to, in a similar vein, mitigate worst case loss if I -- just from looking at a local perspective, if I only know my own profit from corruption, maybe the profits from corruption of some partners, is there a way for me to mitigate this? And it turns out there actually is. You just have to do a different type of over-collateralization scheme.
[:Anna, to your point about collators, I mean, there's this thing of like, hey, if you have a collator that's shared across multiple parachains and they drop out and messages don't get relayed, and there's no other -- it's sort of a similar type of thing that happens.
[:Although collators tend to be per network. So you have the main validator of Polkadot and then you have a per parachain collator set.
[:Okay. They're never shared.
[:I mean, you might have the same companies running multiple. In our case, we only run Moonbeam, and we don't share it with anything else. I actually wanted to ask you a little bit more on the alternatives to restaking, because I wondered if liquid staking is in any way in the same category. I realize it's different. I mean, liquid staking is like you stake tokens, and then you have synthetics of those tokens, and you can do things. But this is also like, you stake tokens, and then you can do things with those staked tokens. So I just wondered if there's ever, like if the restaking research ever looks into the liquid staking and how that played out.
[:Tarun, you wrote a blog post related to this, right?
[:Yeah. Yeah, a couple of things. One is, the model that Naveen has in his paper doesn't really cover the principal-agent problem, where the node operators don't own the capital that they're restaking. The idea is that for the node operator, the profit that they realize in this model is the profit from corruption of the set of services, minus the total stake that could have got slashed. Now, if you add in the principal-agent behavior of someone is delegating capital, and the node operator, if they get slashed, it's not their money, or maybe only some of it is their money. Like you, as a validator, the majority of your capital is delegated. That's not your capital. So the principal-agent problem naturally leads to this centralizing effect.
There's sort of two papers from --
[:It's like almost our own stake on the line. Like the validator's own stake.
[:Yeah, yeah. You have to have some amount of it. Yeah. And a lot of liquid staking protocols actually do require the operator to put up some stake -- like for Rocket Pool, it's 2 to 4 ETH, if I remember correctly. And then I think for a lot of liquid restaking protocols, there's a minimum amount also. And so the idea is, how do you choose that minimum? What's the minimum you need to guarantee security? One way you could do this is extend the bipartite graph description to a tripartite graph, where one partition is the capital holders who are delegating, the next partition is the node operators, and the next partition is the services. And then you look at the flow across that graph, and you write out a kind of natural principal-agent problem, and show that it has an equilibrium such that if the node operators always have at least, say, 5% of the stake as their own, then they're unlikely to deviate from the strategy as if it was their own capital by more than a small amount.
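A toy way to see why a minimum own-stake fraction matters -- my own illustration with made-up numbers, not from Tarun's paper: a delegated operator only internalizes the slashing of their own share, so attacks look cheaper to them roughly in proportion to how little of the stake is theirs.

```python
# Hedged toy illustration of the principal-agent distortion described above.
def attack_profitable_to_operator(profit_share: float,
                                  stake_slashed: float,
                                  own_fraction: float) -> bool:
    """The operator captures profit_share from an attack but only loses
    their own portion of the slashed stake; delegators bear the rest."""
    return profit_share > own_fraction * stake_slashed

print(attack_profitable_to_operator(5.0, 100.0, own_fraction=1.0))    # False: all their own capital
print(attack_profitable_to_operator(5.0, 100.0, own_fraction=0.02))   # True: only 2% skin in the game
```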
And to be fair, this is actually true in society at large, not just in crypto, that like the agents, as in the people who are acting on behalf of the principal, like the capital holder, often have to put up their own money. So for venture funds, or private equity funds or hedge funds, oftentimes the agents, the partners who are investing money, have to put in their own money. And so I think these types of things naturally show up in liquid staking. It's just more messy to reason about.
[:But bringing it back to restaking, does restaking also have a principal-agent percentage minimum?
[:Yes.
[:It does.
[:There's no doubt that that's still an open problem, I would say, that's not solved.
[:There's a lot of -- yeah, definitely the designs are underspecified currently.
[:... really interesting is like, in --
[:It worked, yeah, yeah.
[:It seems to work. We don't really understand why. And that is what led me to try to figure out, okay, maybe you should think of it as this optimization problem, whatever. And I think what Naveen did was like, hey, look at this thing -- there's actually secretly this graph problem and matching problem if you squint enough, right? Even though the people who wrote it kind of wrote these sufficient conditions for when it could be safe and not safe, if you zoom out and think of it as this matching problem, then it becomes much easier to reason about. And I think oftentimes in research like this, a very important thing -- more important than the actual solution to the problem -- is having the right definitions. Because there's oftentimes this trade-off in math of, like, I either start with no definitions and very simple things, and I prove a really complicated result, or I start with really long, complicated definitions, and then the result is sort of trivial because I put all the complexity in the definition of the thing instead of in the answer.
[:Okay. Who did which kind? Who's which in that description?
[:I think the ideal case is that you have a simple definition and then a simple proof.
[:You want something in the middle -- in the middle of.
[:Okay, okay.
[:No. But I think even to your point about this ending up being a very clean graph problem -- for anyone who's more on the TCS, theoretical computer science side, in the case where everyone has the same stake and every service has the same profit from corruption, the problem of checking for security actually becomes checking whether or not this graph is an expander graph. There's just some nice structure in the problem.
[:I want to go a little bit back to the connection between the matching work and the paper, because, Tarun, you're saying that sort of the technique was turning it into this matching problem, but somehow, for me, I've lost that connection. I heard your beginning, Naveen, where you were talking about what you were doing before, but, yeah, what's matching in here?
[:So, I guess, as it was framed before in Appendix B of the EigenLayer whitepaper, you kind of just had these parties: you had these services, or AVSs -- I think they even call them tasks -- and then you have these node operators or validators that had some stake. And you can think about these balance conditions in this cost-benefit analysis that I was describing, of when is it the case that there aren't any validators and tasks -- or services -- for which those validators profit from launching those attacks. The thing that model doesn't quite help you with is looking at, I guess, broader counterfactuals, or looking at the structure of which validators are putting their stake in which services. And so --
[:So is the matching -- is it between --
[:It's between validators and services.
[:And AVS. Okay.
[:Or AVSs, right? And a validator is matched to a service if they're restaking for that service, right? And normally in a matching market, each -- let's say you think about a job market. So you have a bunch of these people that want jobs and a bunch of these employers, and you can kind of normally think of it being the case that, I guess it depends on the setting, but in most common settings, each match is a little bit -- is kind of separate from each other. If I get a job at some company, someone who's getting a job at a different company, it doesn't really matter to them that I got a job at this company. Now, if they're competing with me, then maybe it matters. But in a sense, the payoff that I got is somewhat separable from the payoff that they got. And those things also led to positive externalities. Right? Like, I worked for this employer, the employer thought it was profitable for me to do this, I thought it was profitable for me to do this, there was like -- welfare was received by both parties.
What's interesting about the restaking matching market is that when a validator is matched to a service, and then that validator is also matched to a different service -- so let's say a validator is already matched to Ethereum in the sense that they're a staker for Ethereum, and now this validator chooses to also restake that ETH for another service -- them making that choice has consequences for Ethereum, right? Because it imposes a negative externality on Ethereum, in the sense that that other service they're restaking for now has the right to slash some ETH. And depending on how this is implemented, this could lead to a negative externality on Ethereum in that staked ETH could disappear due to the actions of other services --
[:Yeah, outside of it.
[:Whereas in a world where you couldn't match to these things, that couldn't occur.
[:Interesting. How are the AVSs? Like are they offering -- I'm guessing AVSs offer unique incentives to the validators to get them to validate them.
[:Yep.
[:What are these -- just native tokens on the AVSs usually?
[:So, actually, this is what my follow-up paper did -- it kind of took Naveen's framework and tried to analyze what happens when there's this reward mechanism. So I think if you look at it from the abstract graph problem, there's sort of some natural way that validators choose services, right? You're just given a graph -- like, ZK Validator has chosen eOracle and Eigen DA and none of the other services. That's a choice you made. And then, given that, Naveen's paper analyzes, like, how do you attack that? On the other hand, in practice, what happens is, basically, services give you fees plus native tokens, which you could think of like block rewards from staking. And so they give you some amount of rewards, and then you, as the validator, have to choose the subset of services you want to operate. Right? So if ZKV was running a bunch of restaking nodes, you would say, hey, Eigen DA is giving me 5 ETH of yield a month, eOracle is giving me 1 ETH. I'm just taking the top three from the live AVSs right now.
[:Also, we should make it clear ZKV is not currently running any of these. These are just theoretical.
[:I'm just -- I was just using this as an example --
[:I, like, don't use me, I don't mind. I don't mind.
[:And then let's say you were -- I'm just going down the list of the live AVSs -- like, Witness Chain was offering 15 ETH a month. Well, say you have 100 ETH that you want to restake. How do you determine where you want to put it? Naively, you might say, hey, I want to put it all in Witness Chain, right? Because --
[:That's returning.
[:They're offering me the most yield. But Witness Chain might just be very slashable. Now, to Witness Chain people, I'm not saying you are. This is all hypothetical. I'm just saying like, suppose it turns out it's the most slashable.
[:We have to be careful. We should almost use, like AVS A, AVS B.
[:I just want to make it more concrete.
[:Yeah, yeah.
[:Because I think if you make it concrete, it's a little cleaner. So suppose you get -- you say, okay, I put all my 100 ETH into the Witness Chain thing, I expect to get 115 ETH after a year. But actually, I got slashed on 90 of it. My loss is not just the 90 ETH I lost, there's also the future profits I lost, like, oh, I didn't lose -- I didn't get that 15 because I was expecting 115.
[:Do you get kicked out when you get slashed? Is that why? Like you would never -- or you don't receive your reward?
[:Depends. That's actually up to the service. The other thing is, I have the opportunity cost of the lost ETH staking yield, right? Because before, I had 100 ETH that was, say, stETH, like staked ETH, and it was earning 3%. So I expected to get 103 after a year, but now that I've gotten slashed, I've lost that 90. And to Naveen's point, there's this negative externality on Ethereum itself, in that Ethereum lost 90 ETH that was staked in Ethereum. So it lost 90 ETH of security due to something outside of Ethereum.
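Pulling the hypothetical numbers together -- the figures are the ones used in the conversation, but the accounting below is a rough sketch of the loss categories Tarun lists, not a calculation from either paper:

```python
# Hypothetical: 100 stETH restaked to one AVS expecting ~115 ETH after a year,
# then 90 ETH of it gets slashed. Rough, illustrative accounting only.
restaked = 100.0
expected_avs_rewards = 15.0   # the AVS yield the operator was counting on
slashed = 90.0
base_staking_rate = 0.03      # the ~3% ETH staking yield given up on the slashed stake

direct_loss = slashed                               # 90 ETH of principal
foregone_avs_rewards = expected_avs_rewards         # the 15 ETH that never arrives
foregone_base_yield = base_staking_rate * slashed   # ~2.7 ETH of base yield lost

print(direct_loss + foregone_avs_rewards + foregone_base_yield)   # 107.7 ETH of total economic loss
```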
[:Yeah. Because actually, who slashes you? Is it the AVS that slashes you? No, like, who gets to take the ETH? Does it just get burned? What happens to it?
[:Well, it depends, but for the purposes of this paper, it's burnt.
[:It's burnt, okay. No one gets it. It's not like someone take --
[:Yes.
[:Okay, it's gone.
[:But remember, it's reducing the stake supply, right?
[:Yeah, but isn't it kind of good for a network when you burn tokens?
[:I guess, I mean, there is a macroeconomic effect of you're distributing -- if the total market cap stays fixed and you burn tokens, then you're distributing wealth back to token holders.
[:But on the staking front, you're losing stake. So you're losing security, exactly.
[:Right. So you're making it easier to attack in some sense. Right? And so there is this tradeoff between the two, but honestly, the thing is, it does allow these services to get security as if they were themselves an L1 in some ways. And so there's this question of, like, if I'm a new service -- let's say I'm an oracle, let's say I'm a new rollup that wants a decentralized sequencer and doesn't just live off a multisig forever -- I need some way of enforcing penalties on the off-chain actors who enforce some state transitions. So in the case of a rollup, the sequencer; in the case of, say, a ZK prover, the people generating the proofs; in the case of an oracle, the people aggregating. The thing is, I could start a new L1, but then I have to bootstrap everything. I have to figure out how to get liquidity, I have to get stablecoins. I have to do all of this stuff, and it's very hard to get new validators.
But the question is, when are the economics such that it actually makes sense for me to join a shared network, even though I have to pay rewards such that it is above this over-collateralization threshold, versus just completely starting a new one? And the idea is that tradeoff is a combination of understanding the economics of how much you have to pay in rewards, combined with understanding this notion of cascading security that Naveen made. Because you want to pay enough such that you don't have those kinds of risks. So you want to be over-collateralized that much. But at the same time, you also don't want to pay so much that it's cheaper for you to start your own L1. Does that make sense? And so that's the tradeoff.
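A compressed way to write the tradeoff being described, in notation that is entirely my own: σ(R) is the restaked stake attracted at reward level R, θ the attack threshold, π the profit from corruption, γ the over-collateralization buffer, and C the cost of bootstrapping your own validator set.

```latex
\[
\underbrace{\theta\,\sigma(R) \;\ge\; (1+\gamma)\,\pi}_{\text{rewards attract enough stake to stay over-collateralized}}
\qquad\text{while}\qquad
\underbrace{R \;\le\; C_{\text{own L1}}}_{\text{but cost less than bootstrapping your own chain}}
\]
```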
[:That's the balance.
[:That's the balance. And so yeah, our paper is about that. And like how much you have to pay to get that type of security.
[:That's what your paper was, Tarun. And then --
[:We're just sort of following on Naveen's paper.
[:Cool.
[:But I think this idea is just more fundamental to blockchains -- that there are some things that can share security and some things that need to be isolated. It's very much like lending or perpetuals, like the DeFi aspects of things. And I think the interesting thing about restaking from a theoretical lens is it blends a lot of the analysis of proof-of-stake that has existed for a while, that people in consensus think about, with the analysis of things in DeFi. And sort of it's exactly in the middle of the two. And I think because of that, it sort of is a superset of a lot of things people are working on -- like, it subsumes doing all these off-chain services like rollups, it subsumes some of how you should analyze the economics of data availability. It's a more general framework to analyze all of these many different economic problems, which is one thing I think people don't -- I think with restaking, obviously people think about the yield and whatever, but if you zoom out, the real thing you should be thinking about is this type of stuff.
[:I find it so wild that last year or two years ago when the ZK rollup world was announced, I remember Tarun, you being like, man, everything, all paths lead to Polkadot. But actually there was a lot missing from the rollup model to be more like Polkadot. And now I think what you're describing, just going back to what you were saying before, it seems like it's a different version, maybe learning lessons from how Polkadot went, especially in terms of launching these AVSs and like incentivizing them. But it has more, I feel, in common with like that Polkadot model than rollups do.
[:For sure. But you have to remember that the reason that we don't see rollups doing that is that the rollups haven't delivered on all the promises that they were supposed to make, right? Like they haven't gotten -- and when they do that, they have to have ways of penalizing the sequencer. And there is no way of penalizing the sequencer right now, right?
[:Yeah.
[:Fundamentally, I think that is the thing that is missing. Even in a centralized sequencer world, if a fraud proof executes, I should be able to slash the sequencer. I cannot do that right now.
[:No, you can't.
[:There's no penalty. And my point is, the restaking stuff, if you look at it from this matching market lens, kind of subsumes a lot of these different things that people are making specialized economics for, including ZK proving markets, because I think there's a lot of these service versus consumer versus producer relationships in ZK provers, right? Where there's many networks that might be sharing some prover network, and they have to figure out how to price things.
[:Theoretically, it doesn't exist.
[:Theoretically. But my point is, restaking is the first live market that is actually trying to solve this directly as the economics problem. I mean, I'm not sure the people working on it think of it this way, but this is what's emergently happening.
[:Actually, maybe to give a second framing of, I guess, all of that in terms of the construction of the matching market, I think Tarun's basically been outlining, I guess, two separate problems. One of them is you can talk about what's the current state of who's restaking to whom. And then you can discuss the security of this, and maybe even the robust security of this. This is what I was working on. And then what Tarun's been talking about is, okay, now what actually is that graph? How does that graph get generated? And what are the processes that dictate who ends up actually restaking to whom? So you can actually make sure that that graph does satisfy the original properties of what's safe and not safe.
[:Yeah. And I think this notion of what is safe and not safe, especially from this economic standpoint, is something that's either completely unmodeled or not given a lot of delicate care in all these other networks, like DA, proving, rollups. Mainly because right now there's just this hunt for demand, for any application to actually use them and generate fees. Whereas with restaking, there actually are more applications that you can imagine paying fees. There are a lot of things that are closer to DeFi-looking things, like oracles that have business models. And because of that, I think it's sort of like the proving ground where you can test out how you should think about the economics for all these other areas.
[:Actually, this is a pretty decent segue into resource pricing.
[:Perfect. Yeah. Sorry. Yeah. So maybe we should talk about, you're working with Tim on your PhD, who is kind of, I think one of his first crypto papers was about resource pricing in Ethereum and how to think about it. And you, while working at Ritual, have been thinking a lot about resource pricing in a more abstract sense, which kind of maps to this kind of thing of like, there's service providers and there's capital, and how do they get matched together, and what's the price they should pay? So maybe we could jump into that.
[:mechanisms, and this is like:[:Is that connected to this? Like, is it in the restaking work, or is it totally different?
[:Totally separate from the restaking work. I mean, I guess there are similarities in that they're both matching markets, and they both also apply to similar entities in the sense that, just like how Tarun was talking about restaking being helpful for coordinating incentives for interoperability and stuff like that, in terms of what those exact incentives should be and how they should be determined, this is relevant there. But largely speaking, these are separate things.
[:Got it. I wonder -- okay, so this is a more like perception question, but in the last year, early on in the year, I think everyone is very, very excited about restaking. And then EigenLayer launched, and there was this kind of frustration and anger about all sorts of things. But it seemed like there was almost a sentiment change against restaking, at least on Twitter. And I just wonder in the research, though, and in the real world, is there actually? Because I also know about a lot of teams that are now AVSs and they weren't before. So I'm sort of curious, looking from the research side, if you see any of that or have some opinions.
[:I think on the research side, it's kind of become more clear that, A, there's a lot more interest in restaking as things are starting to go from idea to actually deployment and coming with that is more of an interest in, okay, how do you make sure things are actually secure? So from a research perspective, I think interest in research on secure restaking has definitely become more relevant. In terms of positive or negative spin, I don't know. I mean, I think there's just a lot of parties that are interested and a lot of parties that now care a lot more than did before about the practical implementations of these things playing out properly.
[:I can see when I asked this question, Tarun was kind of like annoyed.
[:It's a bit like the early days of DeFi, where everyone is just like, oh, I got this yield. But I think if you zoom out, the main point is that it's a way of having networks where you can decide to share security in a very precise way that has some covenants and guarantees, which means that you're taking on some risk for doing that, but obviously you're getting some reward. And it gives the end developer the choice of: do I want to start a new network from scratch and go through the entire process of doing that, or do I want to be able to use an existing network? And I think the interesting thing about Polkadot was that they had all these ideas there from the beginning. Like this idea that the main chain was the layer-0 for the parachains and they would share it. But the problem is, I'd say the Polkadot model economically was too greedy. It didn't welfare-optimize for the whole network. It literally welfare-optimized for DOT holders.
And I think the weird thing about restaking is that because ETH already had a working economic system, adding this afterwards is more effective than trying to start from scratch, if that makes sense. And I would say there is a lot that was learned from the lessons of the Polkadot design. I mean, restaking still has to prove itself, but I do think you get to this fundamental economic question: I have off-chain services that need to get matched to participants who will secure them. How do you do that? How do you do it efficiently? What should the market structure look like? And I think we're finally at this point in crypto where all of this stuff that people have talked about for matching markets can happen live and iteratively right now.
Another thing about the history of matching markets, like kidney exchange or the residency matching programs, or all the stuff that matching markets have historically been associated with, is that they're usually thought of as one-shot games. Right? Everyone who's graduating from med school in one year in the US gives their preferences, the hospitals give their preferences, and then they get matched. But that's a one-shot thing. Once a year, every year, it's a different set of students, so it's a totally different thing. These crypto matching markets are very unique in that they're iterative and constantly evolving. And that's the thing that I think is very unique about them, that even in normal finance you don't see as much. In online advertising you kind of see this, but even then it's not quite the same, because the user doesn't have control over their matching. The matching is done effectively by the platform, like Google, Facebook, Amazon. They're auto-bidding on your behalf, they're doing most of the work. You can't really change exactly how it works. But a unique thing in crypto is that this is happening live, on its own, with billions of dollars. And I think --
[:And I think what even complicates that further is that it's halfway in between being a batch system and a continuous streaming system, in that we've discretized the world into blocks. And there are weird consequences of that being a batch process, while at the same time there's a continuous stream of people having demand.
[:Wow.
[:Or use of a blockchain.
[:That's funny. As you describe all this, I start to wonder how or if MEV changes at all through this, like in the restaking world. Does it open up new places where there's crazy arbitrage? Have you looked into that? Has anyone?
[:I think that definitely opens up. There's a huge amount of thought on interoperability in general and consequences for that, for MEVs. So there's people working on shared sequencers and other stuff like that. So, definitely, there are definitely implications for it, but, yeah.
[:Wait, wait, wait, wait. I would have thought you'd have more of an answer, because I feel like the broker mechanism stuff for the resource pricing is kind of thinking about that.
[:Yeah. So there --
[:There's the connection.
[:That's less tied to restaking, but, yeah, I mean, with MEV in terms of interop between different parties, you can talk about maybe different types of MEV. There's MEV that comes more from arbitrage of lag times between different parties that have different information. I think this is definitely relevant, maybe harder to study theoretically. And then there's also MEV that comes from just having a richer ecosystem to work with. MEV is often thought of as a pure net negative. But you could even think about MEV in a different light, as a type of just-in-time coordination that's going on. Currently that coordination looks bad because most of what's happening in crypto is a zero-sum game -- it's just extracting stuff from other parties. But in cases where it's not a zero-sum game, you could perhaps imagine that this leads to some net positive.
[:s, early:[:Sure. Yeah. The restaking stuff is already out, but also have been working on this resource pricing related work that we very lightly touched on. That's probably going to come out in the next, I don't know, month or so if I were to guess.
[:Cool. Well, thank you so much for coming on the show, sharing with us your early research and how it led to you working on this restaking stuff and then going deep on the restaking stuff and letting me ask a lot of questions about restaking that I've always wanted to ask. And this has really helped me to understand much, much more how you're thinking about it, but also how one can think about it a little differently than maybe it's initially been described. So thanks.
[:Just ignore the:[:Thanks so much for having me. It's been awesome to be here.
[:Hey, thanks for coming on. I'm happy that hopefully we can dispel the myths about restaking slowly over time.
[:Yeah. Okay. I want to say thank you to the podcast team, Rachel, Henrik, and Tanya, and to our listeners, thanks for listening.
Transcript
And now here's our episode.
[:Today, Tarun and I are here with Naveen, a graduate student at Columbia University in computer science. He works with Tim Roughgarden. He is also working on mechanism design at Ritual. Welcome, Naveen.
[:Thanks for having me. It's great to be here.
[:Nice. And hey, Tarun.
[:Hey. Excited to be back for our second recording this week.
[:True.
[:It's a record.
[:Yeah. Although to the listener it will be a week later, but we know it's the same week. So today we're going to be talking about restaking and we're going to be revisiting that topic. Tarun, this was actually your idea to invite Naveen on the show, so maybe you could share a little bit about what you have planned or what you want to talk about.
[:Yeah. So I think in my mind there's always this sort of leading/lagging cycle in research where there's oftentimes, things that grow a lot in crypto, but no one knows why they work or how they work. And sometimes, hey, they're total Ponzi schemes, but other times, it's like Uniswap or maybe other L1s are a good example. And I think there's oftentimes this thing where a little bit afterwards people start trying to formalize the reason why something works, either from a distributed systems lens, sort of economic lens, a cryptography lens, whichever direction it may happen to be.
And I think I first ran into Naveen's work before he was doing anything in crypto, maybe a year and a half or two years ago, when he wrote a paper on auction theory as an undergrad. It was more from the lens of online learning and the machine learning school of the world. I'm not sure if you find that offensive or not -- probably not at all, because machine learning just means so many things now that, you know, it could mean everything from ChatGPT to linear regression. So it's an umbrella term, but it was more in that vein. And a lot of times, I think people from that world tend to not like crypto, or maybe view crypto suspiciously, especially on the research side. But Naveen then very quickly wrote one of his first papers in crypto on restaking, formalizing a thing that I think people, even the people who invented it, who had spent a lot of time writing research on it, hadn't quite gotten correct.
So I think his ability to move between fields and generate new research really quickly was quite impressive. And in general, I don't think systems like restaking are going away. They might have different names and people might change some of the designs a little bit. But I think the reason there's $15 billion in those systems is people do view them as sort of the new way of growing ecosystems and building L1s, versus just pure rollups. Because I kind of think the natural extension of rollups and L1s ends up being things that look like restaking.
So that's sort of my spiel and preamble on why I think it's interesting. I mean, of course, Naveen's come at this from a totally different angle, where he and I both read Appendix B of the EigenLayer paper and got two different interpretations of it. And then we both worked on different research problems, but I think they kind of converge. So, anyway, maybe it'd be great to talk a little bit about what you worked on before and how you got interested in it, before we talk about how you got into crypto research.
[:Yeah, sure. So, I guess research has been a thing I've been thinking about for quite a bit of time. Initially started working on some stuff, I guess, back in middle and high school at Maryland, primarily focusing on these kidney exchange matching markets. I guess I kind of got into research somewhat as a fluke. Some Science Fair judge thought it was maybe worth some professors' time to work with me. Frankly, I don't know how that worked out, but --
[:When you say kidney, you mean, like kidney kidneys? Like health care kidneys.
[:Yes, yes.
[:Okay.
[:That's exactly right. Yeah.
[:Oh, yeah. Maybe we should talk about what matching markets are in kidney exchange -- sort of a detour.
[:Yep. Yep. So this is the stuff I worked on initially, and the general gist there is that lots of people obviously need organ transplants. And before kidney exchanges were invented, the main way that you would get an organ transplant was through what's known as deceased donor donation, where if you sign up to be an organ donor, you get put on this list. And if a patient now needs a transplant, there is some priority ordering that's determined among people that need transplants, and based on that ordering, they get allocated kidneys.
First of all, the supply of transplants is low. And second of all, deceased donor donation, like deceased organs, are not as high quality as living donor donations. And so, if you did have a living family member or someone that's willing to donate to you, that's obviously preferable. And basically Al Roth came along. He's a professor at Stanford and he worked -- he started kind of the seminal work on kidney exchanges. And the idea there is, let's say I have a kidney -- sorry, I'm a patient and I need a transplant, and then I have someone I know, let's say a friend or a close family member that's willing to donate one to me, but we happen to be biologically incompatible. So in the old model of the world, nothing could happen here, right?
But now you kind of have this idea of, okay, what if you had a bunch of these pairs? So maybe I need a transplant and my sister is willing to donate one to me, but we're incompatible. And then maybe you, Anna, need a transplant and Tarun is willing to donate one to you, but you guys are incompatible. But now let's say that Tarun is actually compatible with me and my sister is compatible with Anna, then we can actually swap. Right? And so kidney exchanges are kind of these matching markets, actually, without money, because there's lots of policy and law around what you can do around organs and money.
[:The one kidney equals one kidney. There's no profit.
[:Exactly. You're trading kidneys for kidneys. There's no money involved, and there are all sorts of other constraints, because you can't place a contract on organ donation. The donations actually have to happen simultaneously if it's a cyclical structure like what I just described. People also realized you could add some other types of trading structures where, let's say Tarun is just an altruistic guy, he wants to donate his kidney. He can give one to me, and now my sister can choose to pass it on and donate to someone else. And now you can have an asynchronous chain of transplants.
[:Wow.
[:So this was one of the -- I think one of the pretty motivating examples for me of an example where kind of math and market design could be pretty helpful in the real world. It was a pretty practical problem and I worked on kind of initially more along the lines of how do you do learning for kidney exchanges in terms of if I'm a patient can I get some predictions for what I will -- how long I'll have to wait, what the outcomes will look like, stuff like that. And kind of got gradually more and more into the theoretical side of things. I started thinking more about learning problems in general and I had a brief stint also working with Scott Kominers, who's also now in crypto. And this was back in high school and I worked on some matching market problems with him. So I kind of got more into the standard mechanism design literature as well, while also learning a bit more about these learning stuff.
And there wasn't really a cohesive thing, it was just working on a bunch of these different hodgepodge projects that didn't really touch each other that much. Maybe a general theme of mechanism design and a general theme of maybe learning, but no kind of coherent narrative. And then when I went to Berkeley I worked with Nika Haghtalab and she works on online learning and mechanism design together. So putting together kind of the tools of optimization, statistics and economics. And so that's where I worked on this learning and auction stuff that Tarun was talking about. And there the idea is, let's say I want to run an auction every day, is there a way for me to figure out how to run an auction each day so that in the long run, I'm doing as well as if I kind of picked the best auction in hindsight at the very beginning.
So these were all theory heavy projects and I kind of moved from more practical stuff on the kidney exchange side to more theoretical econ stuff, and the math was super cool. It was great. But then when it came time to apply for PhD programs, I had a bit of a crisis of faith per se, in that --
[:Really? Why?
[:So I was working on all these three problems and I kind of felt like I was working on things in this weird, uncanny valley. When you write the motivation section for the paper you say, oh, this applies to the real world, here are all these things that people care about, but the results are too contrived to be clean theoretical results and they're also just way too theoretical for anyone to use this stuff in practice. Right? So the online learning and auctions paper -- credit to Tarun for actually maybe thinking about how to use it practically -- I couldn't think of a way someone could use that in practice. It was a nice theoretical result along the lines of a possibility result for learning, but the actual algorithm that was proposed would take eons to run properly. And so I kind of decided, okay, I want to either go on the very pragmatic side and do stuff that people actually care about, or go purely on the theoretical side and prove nicer theoretical results --
[:Than these ones.
[:Exactly, exactly.
[:Okay. Which direction did you go? I guess, practical.
[:So I chose to go -- so I guess I still work on theory stuff, but I chose to go fully pragmatic in the sense that I didn't want to work on a problem unless I really knew that there was at least someone that actually cared about this stuff.
[:Okay.
[:ump and was talking about EIP-:[:That's cool.
[:So that was -- once I kind of got sold on the vision, it was not a super hard choice for my PhD process. Yeah, working with Tim has been pretty awesome.
[:I mean, I will say the one thing, maybe one redeeming feature about ML and AI research is at least people care about theoretical guarantees, which is just not true in a lot of the other parts of the world where the theoretical guarantees are so divorced from what people do in practice that it's like they really are mental masturbation. There's really --
[:Right. Yeah, I mean, one of the -- maybe to go on the more theoretical sidetrack, one of the main ways of analyzing, I guess, the performance of a learning algorithm is something called VC dimension, which looks at the worst case. It's a type of worst case analysis for learning. And if you looked at those bounds, what those bounds predict current performance would be for ML systems, like it's really bad. There's a huge divorce between what we're able to prove and what people actually are able to do in practice. And so theory has been playing catch up on the ML side because it's just so hard to analyze stuff, and proving theorems that actually help practitioners is really hard because practitioners are just doing stuff that really just shouldn't be possible if you're looking at a worst case analysis and doing a good average case analysis is quite tough.
[:I guess, now, as we move to kind of what you've worked on within crypto, I think there's two main areas that you've worked on. The first being restaking and restaking risk, and the second being resource pricing. So for our listeners who aren't familiar with both of these areas, which could actually be a quite substantial portion, because a lot of listeners are cryptographers or developers who are maybe not quite so tuned to these areas, could you ‘ELI5’ them and give a kind of high level description of both of these areas and why they're important?
[:Yeah, sure. Restaking, actually, maybe to tie us back into the narrative of how I got here in the first place. When I started working at Columbia and at roughly the same time as when I started working at Ritual too, I wanted to just get some outside engagement to learn about how stuff worked, because I knew nothing about blockchains, frankly about a year ago.
[:Oh, wow.
[:And so I was just trying to learn: what's the infrastructure? What do people care about? I was just trying to figure out how I could start working on the problems that I think people care about the fastest. And so I first heard about restaking, actually, while I was at Ritual, because Ritual is now working with EigenLayer, and EigenLayer was coming up, restaking was coming up. And the main idea there is: you have Ethereum, and Ethereum is a very secure blockchain in the sense that there are lots of validators. Now maybe you have other protocols that you want to run, and the reason you might want to run them is that they have functionality that extends beyond what Ethereum, or any other blockchain, can do.
And you kind of have this issue of if I want to run a protocol with a proof-of-stake system, I need to recruit people that want to put stake in my protocol. And that's a tough ask, because you're essentially asking people to lock up money, and anytime you ask people to lock up money, you need to pay also a cost of capital expense because they could have been using that money for many other things. And so you need to compensate them for that loss that they would be facing. So that's a pretty expensive thing to ask for, and it's also just like, how do you -- building a blockchain is almost like building a city. Like you have to coordinate so many parties. How do you just get these people in the first place?
And so the idea behind restaking, I thought was pretty cool. It was, you have these validators that have already committed stake to Ethereum, let's say. And so in principle, the cost of capital expense for that stake has already been paid to them by Ethereum. We can talk about how that plays out in practice. Obviously, there's different quirks there. But in principle, if I think it's rational for me to put my stake in Ethereum, that means I expect that the rewards from Ethereum outweigh any cost of capital expense. And so there's all this stake that's already locked up in Ethereum. And now, let's say I have some other protocol, and I need stake to secure this protocol. Kind of the simple idea behind restaking is, okay, there's all this stake that already has this cost of capital expense paid off. What if I were to just reuse those validators.
So those validators, they're currently on the hook for Ethereum, the stake obviously serves multiple purposes, one in Sybil resistance, but also in terms of it's a slashing penalty that could exist if they try to do a double-spend attack or any other malicious behavior. And so what if I, as another protocol, could say, hey, there's this existing stake that's already getting its cost of capital expense paid off, all I have to do now is pay those validators the additional operating expense of operating my protocol, and now I can recruit them as well.
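As a rough back-of-the-envelope version of that argument (the symbols here are illustrative, not notation from the EigenLayer document): if a validator's stake already earns Ethereum rewards that cover its cost of capital, then restaking for an additional service $s$ is worthwhile whenever

$$r_s \;\ge\; c_s + \rho_s,$$

where $r_s$ is the extra rewards the service pays, $c_s$ is the extra operating cost of running it, and $\rho_s$ is whatever premium the validator demands for the added slashing risk -- notably with no second cost-of-capital term, which is the whole appeal.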
And so now, that same stake, it's on the hook not only for Ethereum, but it's also on the hook for this other protocol. And so, when I first heard about this, I kind of had two thoughts. One is, okay, that is a neat way of recruiting security that makes things frictionless, or maybe not frictionless, but with less friction. Other thought was immediately, okay, we've seen lots of weird structures in crypto that have led to financial demise. Is this a safe thing to do? Is there a way this can be done safely? Or does this place an excessive burden on validators that perhaps could lead to some type of catastrophe in a worst case situation?
[:Yeah.
[:And so on the restaking front, kind of the first thing I was thinking about was, okay, is there a way to understand what is the risk that maybe restaking poses, both from a global sense, but also, if I'm just an AVS, a service that's recruiting security, can I understand what the risks are there? Because as soon as you start sharing validators, the consequences that occur on one protocol can start affecting security for another protocol. And so, yeah, essentially, understanding, first, figuring out what is a good model to understand all this stuff, and then second of all, thinking about is there a nice, crisp characterization of under what circumstances can this be done safely, and under what circumstances should you be concerned that things might go awry?
[:In this case, though, so like I mean, I think we're getting to this work that you recently released called Robust Restaking Networks. But was that more like a description of what already existed? Or is that like a -- did you rebuild it from scratch and say this is an ideal way to do this?
[:So when I first started working on this stuff, and Tarun mentioned this too, essentially the only technical document about restaking, slash its security, was this initial thing put out by EigenLayer. And in particular, there's Appendix B of this thing, which was on cryptoeconomic security. That appendix did lay out a very basic model: there are validators on one side -- or I guess they call them node operators -- that might want to put their stake in some number of protocols. And on the other side, there are these protocols that want to recruit security. And then they talk about a cost-benefit analysis of under what circumstances you can say that the benefits of attacking some set of protocols don't exceed the costs. It's not a super long section. I think it's a pretty nice initial model, but that was essentially all I could understand about the system as of then.
And so the first thing that I -- I don't know, maybe to go into what I was actually doing in that paper, like half the battle, I think, in doing blockchain research is modeling. Unlike theory work, where the problems are extremely well specified, they're clean mathematical statements, and someone just has to go out and prove a theorem. The challenge here, most of the challenge in some cases, is actually coming up with a good model, a good way of -- a formal way of understanding basically what's going on in practice, so that when you prove theorems, they're relevant to the real world. And so there were kind of two main modeling things to think about. One is how do you characterize risk here? Or what does risk mean? And two is, just what is a way of understanding all the entities and their interactions. And so from that EigenLayer document --
[:Appendix B. Yeah.
[:That's right. That's right. I kind of started thinking about things in terms of a bipartite graph.
[:One thing to note is, I think there was a lot of technical documentation on how the implementation would work and what the code would look like, right? But there wasn't any document on guarantees that you would get for -- you know, and I think the only thing was this kind of tiny little thing. And I think when you read what Twitter was writing about restaking and like, oh, it's going to all blow up, dot dot dot, it's Luna again, whatever. And then you read -- you kind of look at the code, you kind of see a very divergent version of the world. And I think part of this comes from the fact that it's actually quite difficult to describe these guarantees. And I think Naveen probably can talk about why going from what was known to a rigorous but simple formulation is actually kind of a difficult process.
[:Yeah, sure. You're kind of working in the fog here, right? There's so many people saying different things, and the theorems that you prove are not helpful unless there's kind of a first, a general formulation for how to think about everything. And there are many aspects to restaking. The aspect that I was focused on was first just global and systemic risk type of stuff. And so, the model -- I kind of first dialed out a lot of the stuff on Twitter and mostly just thought about, okay, at the very core of this thing, there are these validators, and they're being reused among some services. Right? And kind of following some stuff in Appendix B, you can think of there being some profit from corrupting any service. Right? If I launch a double-spend attack on some service, there's some amount of money I can make. As to whether or not that's a number you can actually know, that's a separate discussion. But for now, we're just trying to put something together, let's say this is a thing you know. And so now, okay, so there's a bunch of these services, each of them have some profit from corrupting them, and on the other side, there are these validators, and each validator has some stake. Right?
[:And by the way, by service, do you mean like, what became sort of these AVSs.
[:AVSs. That's exactly right. Yeah. Service and AVS are interchangeable here. And on the validator side, each validator has some stake, right? You can think of this as maybe their ETH stake. And so now, in a world where there's restaking, each validator might be using that stake for multiple services. So you can think about this as a bipartite graph, where on one side you have these AVSs or services, and on the other side you have these nodes. And you can kind of draw an edge between a validator and a service if that validator is restaking or using their stake for that service, if they're acting as an operator for that service. And so now, on the service side, each service has some profit from corrupting it. Right? That's the amount of money you'd make if you were to launch an attack. And each service, also, depending on how it operates, perhaps if it's a PBFT style consensus system, there's some fraction of stake that's required among its total security, that's required to launch an attack on it.
So let's say, to make things concrete, if there's one service, maybe just Ethereum, and a bunch of these validators, one validator can't launch an attack. You need a third of the stake. Right? So the profit from corruption would be the profit from launching a double-spend attack on Ethereum, the fraction required for corruption would be one third, and each validator has some stake corresponding to their Ethereum stake. Now, in kind of this general system -- so actually, maybe we'll start with that simple system with just Ethereum. You can ask, when are things safe from a cost-benefit analysis point of view? And that's basically when, if you look at one third of the total staked Ethereum, if that is bigger than the profit from corrupting Ethereum, right? Because if the profit from corrupting Ethereum was bigger than one third of the total stake of Ethereum, then it would be profitable for one third of people to come together and launch an attack. And so the baseline thing required for Ethereum to be secure in some sense is that one third of the total stake exceeds the profit from corrupting Ethereum.
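In symbols, and with notation that is illustrative rather than lifted from the paper: if a service $s$ has profit from corruption $\pi_s$, attack threshold $\alpha_s$, and total attracted stake $\sigma_s$, then the basic cost-benefit condition being described is

$$\alpha_s\,\sigma_s \;\ge\; \pi_s,$$

so for Ethereum alone, with $\alpha = 1/3$, security requires one third of the total staked ETH to exceed the profit from a double-spend attack.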
So this is one way to think about things, but even that you might not find super satisfying, because there are lots of shocks that just happen in the world, especially in crypto, where there are definitely shocks that happen in the world. These shocks could be, for example, some protocol does unintended slashing on some stake, or some stake just drops out for whatever reason, validators maybe go down, whatever it is, you can't stop small shocks from occurring in the world. And so let's say, it was kind of the case that one third of the total stake on Ethereum was exactly equal to the profit from corrupting Ethereum. Right? We might say it's secure, or maybe it's equal -- maybe one third of the stake is like epsilon more than the profit from corrupting it, so it's kind of just barely secure. And this is kind of not that satisfying, because if there's just any small shock that happens to the stake, then it all of a sudden becomes profitable to launch an attack.
And so you can extend this notion of security to kind of maybe you might call it robust security, which is to say, suppose that you allow some small shock to take place in the ecosystem, and now you want to understand what's the total amount of loss, ultimate loss of stake, that can happen after some attacks. This is kind of what I was thinking about when I was thinking about systemic risk. So to make things maybe more concrete, you could imagine in a situation where there's multiple services, that some small shock happens in the world, and it causes some validators to come together and attack some services, right? And so now, because they attack those services, those validators get slashed, they lose their stake. But those validators might have been restaking for yet other services. And so now those other services are a little bit less secure because people that they -- that were providing security for them have now gotten slashed. And so now, as a result, other validators might come in and attack those services. And so this could continue to go on and on, and a small shock might result in a loss of stake that's much greater than that initial shock.
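To make the cascade concrete, here is a deliberately simplified toy simulation. All names and numbers are made up, and the attack rule is a crude single-service greedy heuristic rather than the coalition model from the paper, which lets attackers target several services at once:

```python
# Toy cascade: validators restake one pool of stake across services; a shock
# removes some stake, after which any profitable attack slashes its attackers,
# which can make further services attackable, and so on.

def find_attack(service, validators, alpha, profit):
    """Greedy single-service check: return a coalition controlling an alpha
    fraction of the service's security that loses less than `profit` when
    slashed, or None if no such (greedy) coalition exists."""
    backers = sorted((v for v, (stake, svcs) in validators.items() if service in svcs),
                     key=lambda v: validators[v][0])
    total = sum(validators[v][0] for v in backers)
    needed, coalition, committed = alpha * total, [], 0.0
    for v in backers:
        if committed >= needed:
            break
        coalition.append(v)
        committed += validators[v][0]
    if coalition and committed >= needed and committed < profit:
        return coalition
    return None

def cascade(validators, services, shock):
    """validators: {name: [stake, {services}]}, services: {name: (alpha, profit)},
    shock: {validator: stake removed}. Returns total stake ultimately lost."""
    lost = 0.0
    for v, amount in shock.items():
        removed = min(amount, validators[v][0])
        validators[v][0] -= removed
        lost += removed
    progress = True
    while progress:
        progress = False
        for s, (alpha, profit) in services.items():
            coalition = find_attack(s, validators, alpha, profit)
            if coalition:
                lost += sum(validators[v][0] for v in coalition)
                for v in coalition:
                    validators[v][0] = 0.0   # attackers are fully slashed
                progress = True
    return lost

# Deliberately under-collateralized example.
validators = {"v1": [10.0, {"eth", "avsA"}],
              "v2": [10.0, {"eth", "avsB"}],
              "v3": [10.0, {"eth"}]}
services = {"eth": (1 / 3, 12.0), "avsA": (1 / 2, 8.0), "avsB": (1 / 2, 8.0)}
print(cascade(validators, services, shock={"v1": 4.0}))
```

In this toy, a 4 ETH shock ends up wiping out all 30 ETH of stake -- exactly the kind of amplification the over-collateralization condition discussed below is meant to rule out.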
[:How did you -- like you modeled this, I guess, but this hasn't happened in real life, has it?
[:Hasn't happened.
[:Like you are not -- it's not based on anything. So how do you know that that's how it works? Like that they would --
[:Great question. I mean, yeah -- I mean, even right now, slashing hasn't been, at least to my knowledge, I don't think EigenLayer has started slashing --
[:Some of the others have, but now there are multiple restaking protocols.
[:Restaking, yes that's right.
[:Some of them have more slashing and some have less. It's actually gotten more complicated than the original model.
[:But, yeah, I mean, again, I think there's lots of facets of how this setup can potentially be -- induce risk. This was one particular one. In general, just looking at cost-benefit analysis and looking at how small shocks affect the system, this type of small shock analysis, like this has been done in the traditional finance literature before, when looking at systemic risk. It is kind of a motivated concept on that side, and it kind of is just the -- if you're just thinking about costs and benefits, it is kind of the first thing you might think of as, okay, how do I characterize risk? There's lots of other risks. You might have, you might overburden a validator, maybe with computational demand or many other things, but in terms of what can you concretely understand, what made a lot of sense to me was to just look at, okay, maybe different bridges have some understanding of what these profits from corruption for their services are. And you can look at this whole graph. This graph exists. You can see who's restaking for whom. You can look at the stake. Yeah. This is kind of the first thing that came to mind in terms of how do you just look at system wide risk? Maybe from Ethereum standpoint of if a small shock occurs, does the presence of all these additional connections between services lead to a greater possible --
[:Make it worse.
[:Ethereum loss? Yeah.
[:One thing to note, and I think this is kind of one of the reasons I think it's good to formalize these things in an abstract math language rather than sort of like something that's more engineering or pure kind of like implementation. Is that a lot of other networks have had things that look very similar to restaking, but they haven't analyzed it formally. For instance, Polkadot Parachains and restaking are actually very similar. The only difference is that instead of this notion of the matching market happening by node operators choosing services, the equivalent of services here would be a parachain. Instead of the node operators choosing parachains, there was this auction, and then they had to validate that particular service.
So the matching was sort of done by an auction versus the matching being done by, I get to choose where I place my stake. But fundamentally they actually had very similar properties. I think the main difference is that when you're doing something like restaking, you can move your stake around and so you don't have to get locked into one service. So like the worst case thing that could happen in sort of the Polkadot world, and I think this is often why people had trouble developing there, is like you, as the service, had to raise a ton of capital to kind of win this auction. And the thing is, you had to raise capital in DOT. So you'd have to sell your token for DOT and then like, yeah, and then you have to do a crowdloan. And the thing is, you could argue that the crowdloan plus auction is as close as possible to what restaking is, except restaking has -- it's just easier for the end user. Operators can just choose to validate one or another.
[:We're validators. ZKV is a validator in Polkadot, and so you're a validator in Polkadot, but you don't just choose which other parachain you're validating on. You also would become a collator on those. Actually, there's another system like Cosmos's ICS or interchain security, which I'm now -- I only am putting it together now that it sounds even more similar because there it's very new, or it's like a year old or something. And as a validator on Cosmos, you are actually actively choosing if you're going to be active on these interchain security chains. I don't know if you've looked at that. Is that similar or is that --
[:So, one main difference between the Polkadot and Cosmos versions of the world and the Ethereum version of the world is that in Ethereum restaking, there's a floor on the amount that I'm earning. I'm always earning Ethereum staking yield at a minimum. The problem with all of the Polkadot and Cosmos designs is that they're very favorable to ATOM and DOT as tokens, and they're very unfavorable to the services, because the services have to pay in ATOM or DOT and they don't have capital -- they have a small amount of fees, and they have their token. So what they have to do is sell their token, buy DOT or ATOM, and then participate. Whereas in ETH staking, you have people who are already staking ETH and earning 3% to 5% yield. And this is viewed as supplemental income, versus being the main thing needed from the services.
[:I see. I see.
[:I think the economics in Ethereum are just strictly better. And also I think there's a little bit of greediness in both ICS and the DOT auctions, in that they only benefit the validators. The social welfare -- the splitting of revenue between the services and the validators -- is very unfair in a lot of ways in the DOT and ATOM worlds, whereas the Ethereum one is closer to fair. And that's the point of these matching markets, right? They're trying to do some welfare maximization type of thing between the two. And I guess Naveen working on that in the past probably inspired this particular model -- you know, what's the genesis story of how you --
[:What's super weird about the restaking matching market in particular is it's actually a matching market with the negative externalities. Normally in a matching market, as matches take place, you can talk about welfare increasing. But here what's interesting is as there's more matches, there's kind of more potential risk depending on how it's done. And the reason for that being that more things are kind of tied together. And from a systemic risk point of view, you might expect things to get worse. Actually, maybe I can say the conclusion since I only said what the risk measure is and not how to mitigate it.
[:Sure.
[:But the main upshot is that over-collateralization is basically what's both necessary and sufficient to mitigate this risk. So what that means is if you go back to the Ethereum example, where there's just Ethereum and then a bunch of validators, there you kind of require that to make things robustly secure, where even if a shock occurs, things are still safe. You obviously need some buffer between the profit from corrupting Ethereum and one third of the total stake in Ethereum. It turns out that from a global point of view, things are similar. You need it to be the case that for any collection of services and any collection of validators that can attack those services, there needs to be some buffer between what the profit from corrupting those services is and how much stake would be lost in that attack. And you can actually bound -- let's say that that buffer is some multiplicative factor, you can actually bound the total stake loss after any sequence of attacks in terms of that buffer.
So, for example, to make things very concrete, let's say everything was always 10% over-collateralized, in the sense that any attack on some services always cost 10% more -- the total stake required to attack some services is always 10% more than the profit from attacking those services. Then you can actually say that even in a worst case situation, let's say 0.1% of total stake was just lost arbitrarily, the total ultimate loss after any attacks thereafter is upper bounded by 1.1%. So you can really concretely bound worst case loss in terms of this buffer, which is a cool property. Of course, there are caveats. Like I said earlier, knowledge of what these profits from corruption are might be limited to bridges or folks who are actually running those services. And so even though from a global perspective, it would be nice if you could always have this over-collateralization --
[:You might not know how much you need to over-collateralize.
[:So you, as a bridge, may not know what other folks are doing. This was only a global result of --you know, okay, if there's some random shock that happens in Ethereum, then that's contained. But how do I know that everyone is over-collateralizing? Right? And that's the requirement for this mitigation to hold true. And if I'm a service in particular, how do I make sure that I'm protected? Right? And so the second part of this paper is really analyzing stuff from that perspective of if I'm a particular service I want to mitigate myself against -- I might not even understand shocks happening, like I might not have a good understanding of what shocks might occur outside of my own ecosystem. Right?
[:Yeah, yeah. But how do those shocks affect you anyway --
[:Exactly.
[:Yeah.
[:Right. And so is there a way for me to, in a similar vein, mitigate worst case loss if I -- just from looking at a local perspective, if I only know my own profit from corruption, maybe the profits from corruption of some partners, is there a way for me to mitigate this? And it turns out there actually is. You just have to do a different type of over-collateralization scheme.
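Putting rough symbols on the numbers from a moment ago (illustrative notation; the precise statement is in the Robust Restaking Networks paper): if every possible attack is $\gamma$-over-collateralized, meaning the stake an attacking coalition would lose is at least $(1+\gamma)$ times its profit from corruption, then a shock that destroys a $\psi$ fraction of total stake should only be able to trigger cascading losses of at most about

$$(1 + 1/\gamma)\,\psi$$

of the total stake. With $\gamma = 0.1$ (10% over-collateralized) and $\psi = 0.1\%$, that is $(1 + 10)\times 0.1\% = 1.1\%$, matching the figures above.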
[:Anna, to your point about collators, I mean, there's this thing of like, hey, if you have a collator that's shared across multiple parachains and they drop out and messages don't get relayed, and there's no other -- it's sort of a similar type of thing that happens.
[:Although collators tend to be per network. So you have the main validator of Polkadot and then you have a per parachain collator set.
[:Okay. They're never shared.
[:I mean, you might have the same companies running multiple. In our case, we only run Moonbeam, and we don't share it with anything else. I actually wanted to ask you a little bit more on the alternatives to restaking, because I wondered if liquid staking is in any way in the same category. I realize it's different. I mean, liquid staking is like you stake tokens, and then you have synthetics of those tokens, and you can do things. But this is also like, you stake tokens, and then you can do things with those staked tokens. So I just wondered if there's ever, like if the restaking research ever looks into the liquid staking and how that played out.
[:Tarun, you wrote a blog post related to this, right?
[:Yeah. Yeah, a couple of things. One is, the model that Naveen has in his paper doesn't really cover the principal-agent problem, where the node operators don't own the capital that they're restaking. The idea is that for the node operator, the profit that they realize in this model is the profit from corruption of the set of services, minus the total stake that could have got slashed. Now, if you add in the principal-agent behavior of someone is delegating capital, and the node operator, if they get slashed, it's not their money, or maybe only some of it is their money. Like you, as a validator, the majority of your capital is delegated. That's not your capital. So the principal-agent problem naturally leads to this centralizing effect.
There's sort of two papers from --[:It's like almost our own stake on the line. Like the validator's own stake.
[:Yeah, yeah. You have to have some amount of it. Yeah. And a lot of liquid staking protocols actually do require the operator to put up some stake like I think for Rocket Pool, it's like 2 to 4 ETH, if I remember correctly. And then I think for a lot of liquid restaking protocols, there's a minimum amount also. And so the idea is, how do you choose that minimum? What's the minimum you need to guarantee secure? One way you could do this is extend the bipartite graph description to a tripartite graph, where its one partition is the capital holders who are delegating, the next partition is the node operators, and the next partition is the services. And then you look at the flow across that graph, and you write out a kind of natural, sort of principal-agent problem thing, and show that it has equilibrium such that if the node operators always have at least, say, 5% of the stake is their own, then they're unlikely to deviate from the strategy as if it was their own capital by more than -- by a small amount.
And to be fair, this is actually true in society at large, not just in crypto, that like the agents, as in the people who are acting on behalf of the principal, like the capital holder, often have to put up their own money. So for venture funds, or private equity funds or hedge funds, oftentimes the agents, the partners who are investing money, have to put in their own money. And so I think these types of things naturally show up in liquid staking. It's just more messy to reason about.
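One crude way to see why the operator's own stake matters -- this is a gloss on the intuition, not the formal tripartite model being described: if an operator personally owns only a fraction $\beta$ of the stake it runs, then of any slashed amount it privately loses only

$$\beta \cdot (\text{stake slashed}),$$

so from the operator's own point of view the deterrent against a profitable corruption is $\beta$ times weaker than the network-level accounting suggests. Requiring a minimum $\beta$ -- the 2 to 4 ETH in Rocket Pool, or the 5% figure above -- is a way of keeping that gap small.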
[:But bringing it back to restaking, does restaking also have a principal-agent percentage minimum?
[:Yes.
[:It does.
[:There's no doubt that that's still an open problem, I would say, that's not solved.
[:There's a lot of -- yeah, the designs are definitely underspecified currently.
[:What's really interesting is like, in --[:It worked, yeah, yeah.
[:It seems to work. We don't really understand why. And that is what led me to try to figure out, okay, maybe you should think of it as this optimization problem, whatever. And I think what Naveen did was like, hey, look at this thing -- there's actually secretly a graph problem and a matching problem here if you squint enough, right? Even though the people who wrote it gave these sufficient conditions for when it could be safe and not safe, if you zoom out and think of it as this matching problem, then it becomes much easier to reason about. And I think oftentimes in research like this, the most important thing -- more important than the actual solution to the problem -- is having the right definitions, because there's often this trade-off in math: either I start with no definitions and very simple things and prove a really complicated result, or I start with really long, complicated definitions and then the result is sort of trivial, because I put all the complexity in the definition of the thing instead of in the answer.
[:Okay. Who did which kind? Who's which in that description?
[:I think the ideal case is that you have a simple definition and then a simple proof.
[:You want something in the middle -- in the middle of.
[:Okay, okay.
[:No. But even to your point about this ending up being a very clean graph problem -- for anyone who's more on the TCS, theoretical computer science side: in the case where everyone has the same stake and every service has the same profit from corruption, the problem of checking for security actually becomes checking whether or not this graph is an expander graph, which means there's just some nice structure in the problem.
[:I want to go a little bit back to the connection between the matching work and the paper, because, Tarun, you're saying that sort of the technique was turning it into this matching problem, but somehow, for me, I've lost that connection. I heard your beginning, Naveen, where you were talking about what you were doing before, but, yeah, what's matching in here?
[:So, I guess, as it was framed before in Appendix B of the EigenLayer whitepaper, you kind of just had these parties, you had these services, or AVSs, I guess, I think they call them tasks even, and then you have these node operators or validators that had some stake. And you can think about these balanced conditions in this cost-benefit analysis that I was describing of when is it the case that there aren't any validators and tasks for which -- or services for which those validators profit from launching those attacks? The thing that model doesn't quite help you with is looking at, I guess, broader counterfactuals, or looking at the structure between what validators are putting their stake in different services. And so --
[:So is the matching -- is it between --
[:It's between validators and services.
[:And AVS. Okay.
[:Or AVSs, right? And a validator is matched to a service if they're restaking for that service, right? And normally in a matching market, each -- let's say you think about a job market. So you have a bunch of these people that want jobs and a bunch of these employers, and you can kind of normally think of it being the case that, I guess it depends on the setting, but in most common settings, each match is a little bit -- is kind of separate from each other. If I get a job at some company, someone who's getting a job at a different company, it doesn't really matter to them that I got a job at this company. Now, if they're competing with me, then maybe it matters. But in a sense, the payoff that I got is somewhat separable from the payoff that they got. And those things also led to positive externalities. Right? Like, I worked for this employer, the employer thought it was profitable for me to do this, I thought it was profitable for me to do this, there was like -- welfare was received by both parties.
What's interesting about the restaking matching market is what happens when a validator is matched to a service and then also matched to a different service. So let's say a validator is already matched to Ethereum, in the sense that they're a staker for Ethereum, and now this validator chooses to also restake that ETH for another service. Them making that choice has consequences for Ethereum, right? It imposes a negative externality on Ethereum, in the sense that the other service they are restaking for now has the right to slash some ETH. And depending on how this is implemented, this could lead to a negative externality on Ethereum in that staked ETH could disappear due to the actions of other services --
[:Yeah, outside of it.
[:Whereas in a world where you couldn't match to these things, that couldn't occur.
[:Interesting. How are the AVSs? Like are they offering -- I'm guessing AVSs offer unique incentives to the validators to get them to validate them.
[:Yep.
[:What are these -- just native tokens on the AVSs usually?
[:So, actually, this is what my follow-up paper does -- it kind of took Naveen's framework and tried to analyze what happens when there's this reward mechanism. So I think if you look at it from the abstract graph problem, there's sort of some natural way that validators choose services, right? You're just given a graph: like, ZK Validator has chosen eOracle and Eigen DA and none of the other services. That's a choice you made. And then, given that, Naveen's paper analyzes how you attack it. On the other hand, in practice, what happens is, basically, services give you fees plus native tokens, which you could think of like block rewards from staking. And so they give you some amount of rewards, and then you, as the validator, have to choose the subset of services you want to operate. Right? So if ZKV was running a bunch of restaking nodes, you would say, hey, Eigen DA is giving me 5 ETH of yield a month, eOracle is giving me 1 ETH. I'm just taking the top three from the live AVSs right now.
[:Also, we should make it clear ZKV is not currently running any of these. These are just theoretical.
[:I'm just -- I was just using this as an example --
[:I, like, don't use me, I don't mind. I don't mind.
[:And then let's say you were -- I'm just going down the list of the live AVSs -- like, Witness Chain was offering 15 ETH a month. Well, say you have 100 ETH that you want to restake. How do you determine where you want to put it? Naively, you might say, hey, I want to put it all in Witness Chain, right? Because --
[:That's returning.
[:They're offering me the most yield. But Witness Chain might just be very slashable. Now, to Witness Chain people, I'm not saying you are. This is all hypothetical. I'm just saying like, suppose it turns out it's the most slashable.
[:We have to be careful. We should almost use, like AVS A, AVS B.
[:I just want to make it more concrete.
[:Yeah, yeah.
[:Because I think if you make it concrete, it's a little cleaner. So suppose you get -- you say, okay, I put all my 100 ETH into the Witness Chain thing, I expect to get 115 ETH after a year. But actually, I got slashed on 90 of it. My loss is not just the 90 ETH I lost, there's also the future profits I lost, like, oh, I didn't lose -- I didn't get that 15 because I was expecting 115.
[:Do you get kicked out when you get slashed? Is that why? Like you would never -- or you don't receive your reward?
[:Depends. That's actually up to the service. The other thing is, I have the opportunity cost of the lost ETH staking yield, right? Because before, I had 100 ETH that was, say, stETH -- staked ETH -- and it was earning 3%. So I expected to get 103 after a year, but now that I've gotten slashed, I've lost that 90. And to Naveen's point, there's this negative externality on Ethereum itself, in that Ethereum lost 90 ETH that was staked in Ethereum. So it lost 90 ETH of security due to something outside of Ethereum.
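Putting Tarun's hypothetical numbers together (100 ETH restaked, 115 ETH expected after a year, 90 ETH slashed, a 3% base staking yield -- all illustrative, not real AVS figures), the economic loss looks roughly like this:

```python
# Hypothetical numbers from the example above, not real AVS data.
principal          = 100.0   # ETH restaked
expected_rewards   = 15.0    # expected to end the year with 115 ETH
slashed            = 90.0    # ETH slashed
base_staking_yield = 0.03    # yield the stETH would otherwise have kept earning

slashed_principal = slashed
missed_rewards    = expected_rewards              # forgone restaking yield (simplification: assume all of it is lost)
missed_base_yield = slashed * base_staking_yield  # opportunity cost of the lost ETH staking yield

total_loss = slashed_principal + missed_rewards + missed_base_yield
print(total_loss)  # 90 + 15 + 2.7 = 107.7 ETH of economic loss, versus "just" the 90 slashed
```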
[:Yeah. Because actually, who slashes you? Is it the AVS that slashes you? Or, like, who gets to take the ETH? Does it just get burned? What happens to it?
[:Well, it depends, but for the purposes of this paper, it's burnt.
[:It's burnt, okay. No one gets it. It's not like someone take --
[:Yes.
[:Okay, it's gone.
[:But remember, it's reducing the stake supply, right?
[:Yeah, but isn't it kind of good for a network when you burn tokens?
[:I guess, I mean, there is a macroeconomic effect of you're distributing -- if the total market cap stays fixed and you burn tokens, then you're distributing wealth back to token holders.
[:But on the staking front, you're losing stake. So you're losing security, exactly.
[:Right. So you're making it easier to attack in some sense, right? And so there is this tradeoff between the two. But honestly, the thing is, it does allow these services to get security as if they were themselves an L1 in some ways. And so there's this question of, if I'm a new service -- let's say I'm an oracle, or let's say I'm a new rollup that wants a decentralized sequencer and doesn't just live off a multisig forever -- I need some way of enforcing penalties on the off-chain actors who enforce some state transitions. So in the case of a rollup, the sequencer; in the case of, say, a ZK prover network, the people generating the proofs; in the case of an oracle, the people aggregating the data. The thing is, I could start a new L1, but then I have to bootstrap everything. I have to figure out how to get liquidity, I have to get stablecoins. I have to do all of this stuff, and it's very hard to get new validators.
But the question is, when are the economics such that it actually makes sense for me to join a shared network -- even though I have to pay rewards such that I'm above this over-collateralization threshold -- versus just completely starting a new one? And the idea is that tradeoff is a combination of understanding the economics of how much you have to pay in rewards, combined with understanding this notion of security that Naveen defined. Because you want to pay enough such that you don't have those kinds of risks -- you want to be over-collateralized by that much -- but at the same time, you also don't want to pay so much that it's cheaper for you to start your own L1. Does that make sense? And so that's the tradeoff.
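One rough way to write down that tradeoff -- a hedged sketch of the reasoning, not the actual condition from either paper, and with made-up numbers -- is to compare the service's annual reward bill for attracting enough over-collateralized stake against an assumed cost of running its own L1:

```python
def shared_security_makes_sense(profit_from_corruption: float,
                                overcollateralization: float,
                                required_yield: float,
                                own_l1_cost_per_year: float) -> bool:
    """Hedged sketch of the tradeoff described above, not a result from either paper.

    - To deter attacks, the service wants restaked stake of at least
      overcollateralization * profit_from_corruption backing it.
    - Validators only allocate that stake if paid roughly required_yield on it,
      so the service's annual reward bill is stake_needed * required_yield.
    - Joining the shared network only makes sense if that bill stays below the
      (assumed) annual cost of bootstrapping and securing its own L1.
    """
    stake_needed = overcollateralization * profit_from_corruption
    reward_bill = stake_needed * required_yield
    return reward_bill < own_l1_cost_per_year

# Example with made-up numbers: 1,000 ETH profit from corruption, 3x over-collateralization,
# validators demanding 5% yield, and an own-L1 cost of 200 ETH per year.
print(shared_security_makes_sense(1_000.0, 3.0, 0.05, 200.0))  # 150 < 200 -> True
```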
[:That's the balance.
[:That's the balance. And so yeah, our paper is about that. And like how much you have to pay to get that type of security.
[:That's what your paper was, Tarun. And then --
[:We're just sort of following on from Naveen's paper.
[:Cool.
[:But I think this idea is more fundamental to blockchains -- that there are some things that can share security and some things that need to be isolated. It's very much like lending or perpetuals, like the DeFi aspects of things. And I think the interesting thing about restaking from a theoretical lens is that it blends a lot of the analysis of proof-of-stake that has existed for a while, that people in consensus think about, with the analysis of things in DeFi. It sits exactly in the middle of the two. And I think because of that, it's sort of a superset of a lot of things people are working on -- it subsumes doing all these off-chain services like rollups, it subsumes some of how you should analyze the economics of data availability. It's a more general framework for analyzing all of these many different economic problems, which is one thing I think people don't see. Obviously people think about the yield and whatever with restaking, but if you zoom out, the real thing you should be thinking about is this type of stuff.
[:I find it so wild that last year or two years ago, when the ZK rollup world was announced, I remember, Tarun, you being like, man, all paths lead to Polkadot. But actually there was a lot missing from the rollup model to make it more like Polkadot. And now, with what you're describing -- just going back to what you were saying before -- it seems like it's a different version, maybe learning lessons from how Polkadot went, especially in terms of launching these AVSs and incentivizing them. But it has more in common, I feel, with that Polkadot model than rollups do.
[:For sure. But you have to remember that the reason that we don't see rollups doing that is that the rollups haven't delivered on all the promises that they were supposed to make, right? Like they haven't gotten -- and when they do that, they have to have ways of penalizing the sequencer. And there is no way of penalizing the sequencer right now, right?
[:Yeah.
[:Fundamentally, I think that is the thing that is missing. Even in a centralized sequencer world, if a fraud proof executes, I should be able to slash the sequencer. I cannot do that right now.
[:No, you can't.
[:There's no penalty. And my point is, the restaking stuff, if you look at it from this matching market lens, kind of subsumes a lot of these different things that people are making specialized economics for, including ZK proving markets -- because I think there's a lot of these service versus consumer versus producer relationships in ZK proving, right? Where there are many networks that might be sharing some prover network, and they have to figure out how to price things.
[:Theoretically, it doesn't exist.
[:Theoretically. But my point is, restaking is the first live market that is actually trying to solve this directly as the economics problem. I mean, I'm not sure the people working on it think of it this way, but this is what's emergently happening.
[:Actually, maybe to give a second framing of, I guess, all of that in terms of the construction of the matching market, I think Tarun's basically been outlining, I guess, two separate problems. One of them is you can talk about what's the current state of who's restaking to whom. And then you can discuss the security of this, and maybe even the robust security of this. This is what I was working on. And then what Tarun's been talking about is, okay, now what actually is that graph? How does that graph get generated? And what are the processes that dictate who ends up actually restaking to whom? So you can actually make sure that that graph does satisfy the original properties of what's safe and not safe.
[:Yeah. And I think this notion of what is safe and what is not safe, especially from this economic standpoint, is something that's completely unmodeled, or at least not given a lot of delicate care, in all these other networks like DA, proving, rollups. Mainly because right now there's just this hunt for demand -- for any application to actually use them and generate fees. Whereas with restaking, there actually are more applications that you can imagine paying fees. Like, there are a lot of things that are closer to DeFi-looking things, like oracles, that have business models. And because of that, I think it's sort of the proving ground where you can test out how you should think about the economics for all these other areas.
[:Actually, this is a pretty decent segue into resource pricing.
[:Perfect. Yeah. Sorry. Yeah. So maybe we should talk about -- you're working with Tim on your PhD, and I think one of his first crypto papers was about resource pricing in Ethereum and how to think about it. And you, while working at Ritual, have been thinking a lot about resource pricing in a more abstract sense, which kind of maps to this same kind of thing: there are service providers and there's capital, and how do they get matched together, and what's the price they should pay? So maybe we could jump into that.
[:So this is the broker mechanisms work, and this is like --
[:Is that connected to this? Like, is it in the restaking work, or is it totally different?
[:Totally separate from the restaking work. I mean, I guess there are similarities in that they're both matching markets, and they both also apply to similar entities in the sense that, just like how Tarun was talking about restaking being helpful for coordinating incentives for interoperability and stuff like that, in terms of what those exact incentives should be and how they should be determined, this is relevant there. But largely speaking, these are separate things.
[:Got it. I wonder -- okay, so this is more of a perception question, but in the last year -- early on in the year, I think everyone was very, very excited about restaking. And then EigenLayer launched, and there was this kind of frustration and anger about all sorts of things. It seemed like there was almost a sentiment change against restaking, at least on Twitter. And I just wonder, in the research, though, and in the real world, is there actually? Because I also know about a lot of teams that are now AVSs and weren't before. So I'm sort of curious, looking from the research side, if you see any of that or have some opinions.
[:I think on the research side, it's become more clear that, A, there's a lot more interest in restaking as things are starting to go from idea to actual deployment, and coming with that is more of an interest in, okay, how do you make sure things are actually secure? So from a research perspective, I think interest in research on secure restaking has definitely become more relevant. In terms of positive or negative spin, I don't know. I mean, I think there are just a lot of parties that are interested and a lot of parties that now care a lot more than they did before about the practical implementations of these things playing out properly.
[:I can see when I asked this question, Tarun was kind of like annoyed.
[:It reminds me of the early days of DeFi, where everyone is just like, oh, I got some crazy yield. But I think if you zoom out, the main point is that it's a way of having networks where you can decide to share security in a very precise way that has some covenants and guarantees, which means that you're taking on some risk for doing that, but obviously you're getting some reward. And it gives the end developer the choice of: do I want to start a new network from scratch and go through the entire process of doing that, or do I want to be able to use an existing network? And I think the interesting thing about Polkadot was that they had all these ideas there from the beginning -- this idea that the main chain was the layer-0 for the parachains and they would share it. But the problem is, I'd say the Polkadot model economically was too greedy. It didn't welfare-optimize for the whole network. It literally welfare-optimized for DOT holders.
And I think the weird thing about restaking is that because ETH already kind of had a working economic system, adding this afterwards is more effective than trying to start from scratch with that, if that makes sense. And I would say, again, I think there is a lot that was learned from the lessons of the Polkadot design. I mean, restaking still has to prove itself, but I do think when you get to this fundamental economic question of: I have off-chain services that need to get matched to participants who will secure them -- how do you do that? How do you do it efficiently? What should the market structure look like? -- I think we're finally at this point in crypto where all of this stuff that people have talked about for matching markets can happen live and iteratively, right now.
Another thing about the history of matching markets -- like kidney exchange or the residency matching programs, or all the stuff that matching markets have historically been associated with -- is that they're usually thought of as one-shot games. Right? Everyone who's graduating from med school in one year in the US gives their preferences, and then the hospitals give their preferences, and then they get matched. Right? But that's a one-shot thing. Once a year, every year, it's a different set of students, so it's like a totally different thing. But these crypto matching markets are very unique in that they're iterative and constantly evolving. And that's the thing that I think is very unique about them, that even in normal finance you don't see as much. In online advertising you kind of see this, but even then, it's not quite the same, because the user doesn't have control over their matching. The matching is done effectively by the platform -- the Google, Facebook, Amazon. They're auto-bidding on your behalf, they're kind of doing most of the work. You can't really change exactly how it works. But a unique thing in crypto is that this is happening live, on its own, with billions of dollars. And I think --
[:And I think what even complicates that further is that it's halfway in between being a batch system and a continuous streaming system, in that we've discretized the world into blocks. And there are weird consequences of that being a batch process, while at the same time there's a continuous stream of people having demand.
[:Wow.
[:Or use of a blockchain.
[:That's funny. As you describe all this, I start to wonder how or if MEV changes at all through this, like in the restaking world. Does it open up new places where there's crazy arbitrage? Have you looked into that? Has anyone?
[:I think that definitely opens up. There's a huge amount of thought on interoperability in general and the consequences of that for MEV. So there are people working on shared sequencers and other stuff like that. So there are definitely implications for it, but, yeah.
[:Wait, wait, wait, wait. I would have thought you'd have more of an answer, because I feel like the broker mechanism stuff for the resource pricing is kind of thinking about that.
[:Yeah. So there --
[:There's the connection.
[:That's less tied to restaking, but, yeah, I mean, MEV, in terms of interop between different parties -- you can talk about maybe different types of MEV. There's MEV that comes more from arbitrage of lag times between different parties that have different information. I think this is definitely relevant, but maybe harder to study theoretically. And then there's also MEV that comes from just having a richer ecosystem to work with. And MEV is often thought of as a pure net negative, but you could think about MEV in a different light, as a type of just-in-time coordination that's going on. Currently, a lot of that coordination looks bad because most of what's happening in crypto is a zero-sum game -- it's just extracting stuff from other parties. But in cases where it's not a zero-sum game, you could perhaps imagine that this leads to some net positive.
[:So what should people look out for from you next -- anything coming out later this year or early next year?
[:Sure. Yeah. The restaking stuff is already out, but I've also been working on this resource pricing related work that we very lightly touched on. That's probably going to come out in the next, I don't know, month or so, if I were to guess.
[:Cool. Well, thank you so much for coming on the show, sharing with us your early research and how it led to you working on this restaking stuff and then going deep on the restaking stuff and letting me ask a lot of questions about restaking that I've always wanted to ask. And this has really helped me to understand much, much more how you're thinking about it, but also how one can think about it a little differently than maybe it's initially been described. So thanks.
[:Just ignore the --
[:Thanks so much for having me. It's been awesome to be here.
[:Hey, thanks for coming on. I'm happy that hopefully we can dispel the myths about restaking slowly over time.
[:Yeah. Okay. I want to say thank you to the podcast team, Rachel, Henrik, and Tanya, and to our listeners, thanks for listening.