Summary

This week, Anna catches up with co-hosts Tarun, Nico and Guillermo to look back at the ZK research and applications that came out in 2024. They go on to discuss the challenges facing the ecosystem, and they wrap with a brief look forward, covering what to expect in ZK in 2025.

This will be the last episode of 2024. Thank you for joining us this year!

Further Reading:

Check out the ZK Jobs Board for the latest jobs in ZK at jobsboard.zeroknowledge.fm


Transcript
[:

Welcome to Zero Knowledge. I'm your host, Anna Rose. In this podcast we will be exploring the latest in zero-knowledge research and the decentralized web, as well as new paradigms that promise to change the way we interact and transact online.

This week, my co-hosts Tarun, Nico and Guillermo return for an end-of-year 2024 episode.

[:

Yay.

[:

Hello.

[:

Yoh.

[:

What's up?

[:

We are only missing Kobi. I think we've actually done this almost every year. Sometimes we've broken this down into two episodes, doing like a lookback episode and then a look-forward episode. This time we're going to try to combine the two of them. We're going to try to focus less on like a month by month, play-by-play, which we did do once, and a little bit more on like general themes. What went well, what didn't go as well, and what could be coming in the future. So I put together a few little -- actually, ChatGPT helped me put together a few little questions about the past.

[:

What went well? Anna using AI in 2024?

[:

Yeah, exactly. Who would have thought that's really a change?

[:

Well, we used it in --

[:

It was more -- you were more --

[:

That one time.

[:

You were less gave in -- you didn't give in to the Borg then. You were freaked out by the Borg.

[:

True. I was scared.

[:

AI God was not sparking joy, but apparently it sparked enough joy that you actually used it like not ironically this time, which is impressive.

[:

What were the most significant advances in ZK in 2024?

[:

There was a lot of movement in like hash-based SNARKs and a lot of new developments. So we had the whole Circle STARK saga.

[:

Oh wait, that was this year, shit! It feels like a year, like more than a year ago. That's crazy.

[:

Are you sure that wasn't end of 2023?

[:

So I checked before.

[:

Oh, you did? Okay.

[:

It was in 2024.

[:

Yeah.

[:

More recently there was Blaze. So there's a lot of work in this area of research.

[:

And that's good.

[:

And that is good. Yeah. Well actually, it's also sparked a whole other narrative this year, which was the hash versus curve thing -- like hash-based SNARKs versus curve-based SNARKs. And it's nice to see that, like, hash has won.

[:

Wow.

[:

And I think that's interesting to look at, revisit, chat about, I don't know.

[:

Yeah.

[:

I remember -- I mean, we had an episode, I believe with David Wong where he had a hot spicy prediction that he was like hash-based SNARKs are dead. And I was like, are you sure about that?

[:

Complete contrarian take.

[:

Yeah. He should have waited till the end of the year for that one.

[:

Yeah. I mean, no. I think it was a great prediction because it was kind of fun to get into the weeds of it. Why the thoughts were -- but it's funny because it's like, these are not like things that are exclusive, right? Like they're not -- like you don't just get to like work on one and to the detriment of the other or something. They're kind of like just different things. So it's funny seeing some wars around them. Anyways, this is just --

[:

Yeah. And they coexist all the time. Like every single rollup we see on Ethereum rolls their proofs into a Groth16 or like a Plonk proof.

[:

Right.

[:

So they do coexist and they're almost like they need to coexist.

[:

And it seems like it got pretty spicy at the end here. Right? With the Google announcement. I don't know. Tarun actually probably has some context on that, more context than I do.

[:

Yeah.

[:

Actually paid attention to the thing but --

[:

Well, I just thought the marketing in the blog post was a little bit incendiary. So I feel like that already turned me off to how accurate the result was, because the paper is like a four-page PRL paper, and it doesn't really say that much, or at least the original paper. I mean, the Nature paper is also a little bit marketing. So it's like, I'm sure there's something there. I'm sure that Google is slightly exaggerating, and like the multiverse thing in that blog post was embarrassing. Also citing David Deutsch for that, are you fucking kidding me? You're not citing, like, Heisenberg or Dirac or someone who actually came up with this shit -- you're citing a guy who wrote a pop sci book. Like the entire thing is like pandering to Twitter Huberman Bros, like that article. Anyway, sorry, that was --

[:

Coming back though to the research of 2024.

[:

So I was actually going to add to close this hash versus curve thing. There were also developments in the curve-based land this year, including, as you mentioned, Polymath, Garuda and Pari, which are all SNARKs that are smaller than Groth16. Interestingly, I'm not sure their on-chain verification cost is cheaper, but the proof itself is smaller. And I don't know, that's a cool development in itself.

[:

Would that be something of like a --

[:

The big one that came out in 2024 --

[:

-- with LaBRADOR, which is from --

[:

Oh cool. For the lattice-based SNARK STARKs, they -- at least in that episode, I understood that this makes SNARKs more compatible with FHE technology potentially. But like lattices are -- those are -- this is broader than just FHE. Right? Like, I'm assuming this all of a sudden opens it up to a bunch of new tech.

[:

Yeah, absolutely.

[:

Does it also open it up to like, AI stuff? Like ML stuff?

[:

Potentially.

[:

Tarun's making a face.

[:

Yeah, yeah. I'm like, that's not really related. That's not really related.

[:

VC talk. Get out of here.

[:

No, I'm just like, that's -- I don't know. Those are very unrelated.

[:

Oh, they are. Oh.

[:

I mean, the reason we're using lattices is because we're trying to find hard problems. We're trying to find things that are easy to set up for computers and hard to break. And that's how we do cryptography. And so, yeah, that's how we do FHE. That's how we do -- we can do ZK. Some people are looking to do like obfuscation.

[:

Crucially, most post-quantum things are lattices.

[:

Oh, I see. Okay, so it's that --

[:

Like the best-known post-quantum things. Yeah.
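For readers who want a concrete picture of the kind of hard problem being described here, below is a minimal sketch of a toy LWE (learning with errors) instance. The parameters, names and sizes are illustrative only and far too small to be secure; this is not the construction used by any particular scheme mentioned in the episode.

```python
import random

# Toy LWE instance: given many pairs (a, b) with b = <a, s> + e (mod q) for a
# small random error e, recovering the secret s is believed to be hard -- even
# for quantum computers -- which is why lattices underpin FHE, post-quantum
# signatures, and some proof systems.
n, m, q = 8, 16, 97                                   # toy dimensions and modulus
secret = [random.randrange(q) for _ in range(n)]      # the hidden vector s

def lwe_sample(s):
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(-2, 3)                       # small noise term
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

samples = [lwe_sample(secret) for _ in range(m)]
print(samples[0])   # one public sample; the secret stays hidden in the noise
```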

[:

And actually there was a fun paper that came out this year, a big scare. So in April, there was this paper called Quantum Algorithms for Lattice Problems, in which the author, Yilei Chen says, like, hey, here are algorithms that break these lattice problems, like quantum algorithms that break these problems. And for a period of one week, everyone was freaking out, because this is a very respected author, and everyone was reading through the paper, trying to find a bug, being like, holy shit, is everything broken? Do we have to give up on lattices? Especially because lattices were being used in a lot of the NIST standardization efforts for post-quantum encryption. And so nine days later, someone reaches out to Yilei Chen and he updates the paper saying, like, hey, there's actually a bug and I can't fix this. So the paper is still up for inspiration if people want to think about these problems, but lattices are not broken yet, as far as we know.

[:

Yet. Ooh. Do you think they will be, though? Was that a hint that they could be?

[:

It was a close call. I think there were a few people trying to find bugs, and Yilei fixed a few of them. So it started to feel like maybe there's nothing we can do here. I don't know these things well enough to have any form of intuition.

[:

Yeah. What I think was really, for me, interesting in that whole debacle, there was this magical organization that happened. I don't mean organization in the sense of an org that does stuff -- like an organization of humans that just happened. Like I don't know. Discord servers were launched and people were just like -- the Discord server was only talking about this paper. And the channels were different pages of the paper and different theorems people would discuss. It was super interesting to watch how people spontaneously organize themselves around major results in kind of like a post-COVID world. Right? I feel like these things kind of happen a little bit in weird niche-esoteric channels, but seeing it be so public.

Right? Like outside of labs, like outside of just people who work at Google and know math or whatever, people were just spinning up these servers and being like, dude, come join. Like, we need whoever we can get to understand this stuff. If you can explain it, great. If you can't explain it, done. Like, let's just try to figure out what the hell this thing is saying, like, how to do it. And it was very fun seeing that. And it's kind of an interesting -- where's the future of research going, I think. What was kind of limited to groups at a blackboard in the Stanford physics department or whatever feels to be like a very global thing now. And that -- I don't know, it was just very -- just as an anthropological exercise, it was very fun to see.

[:

I think it helps that it was a theoretical problem and a theoretical break.

[:

Sure.

[:

Like, had it been concrete, I think this would have been locked up, Department of Defense, like, on it.

[:

Well, I think there's also this philosophical issue that comes with these -- with generally quantum algorithms that I think is worth pointing out as to why it takes forever to figure out if these things are correct or not when someone posts them. Which is like, in most cryptography, whether it's algebraic or whether it's hash-based, I'm separating those very broadly. I know there's obviously overlap, but like there's sort of a sense in which there's -- they're very like strongly discrete objects. There's a certain number of objects, and there's a certain subset of those objects that are valid. And you want the number that are valid to be very small relative to the number that are invalid.

The problem is quantum algorithms are in continuous time and continuous space. Like, all the operations are with some probability, you do this operation and this particular state update happens. And the thing is, you're trying to make this probabilistic statement, like after a certain number of operations with high probability, I've concentrated onto the valid states, like the things that are the private keys or whatever. The problem is, sometimes those continuous processes, they'll always have placed some probability on the right answer. But generally the hard part is showing that it's like the right amount of probability.

And oftentimes how you measure that is extremely difficult in quantum algorithms. Whereas in discrete algorithms, it's like, probability is very easy. Right? It's like, I count one set of objects, I count another set of objects, and I divide them. I know it sounds very stupid to say it that way, but that is literally what you're doing in discrete land. But in continuous land, it's not quite that. And Guillermo hates continuous math. So I point this out just more as a -- he's a noted hater of probability and continuous space.
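To make the contrast above a bit more concrete, here is a toy comparison in our own notation, not anything from the paper being discussed: in the discrete setting a success probability is literally a ratio of counts, while in the quantum-style setting it is a sum of squared amplitudes that has to be shown to be large enough, not just nonzero.

```python
import math

# Discrete land: count one set, count the other, divide.
total_keys = 2**16
valid_keys = 4
p_discrete = valid_keys / total_keys

# Continuous/quantum land (toy): a uniform superposition assigns an amplitude to
# every outcome, and the success probability is a sum of squared magnitudes over
# the valid outcomes (here taken to be the first `valid_keys` indices).
amplitudes = [1 / math.sqrt(total_keys)] * total_keys
p_quantum = sum(abs(a) ** 2 for a in amplitudes[:valid_keys])

print(p_discrete, p_quantum)   # equal here; the hard part in real analyses is
                               # bounding the quantum number after many gates
```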

[:

No, I hate measure theory. I did my entire PhD in continuous land and physics. This is BS.

[:

But a lot of the quantum algorithm stuff actually boils down to getting that type of stuff. Right? Unfortunately.

[:

I have thoughts about this because the state space is countable, and so we don't need measure theory. But that's different.

[:

Maybe. Maybe.

[:

This is more of a philosophical qualm.

[:

But I guess my -- the reason I bring this up is like there's two different cultures. There's the discrete land cryptography culture, and then there's the continuous space and time quantum algorithms culture. And so, like, any paper that makes some claim like this, has to use tools from both. And you need people who can validate both in order for it to work, which might take a lot longer than if you wrote something that's purely discrete or purely continuous.

[:

Wow.

[:

But that's more like a philosophy of science thing of, like, oh, like, how do you know whether a result is right or not? And like, how many people do you need to validate it? Whatever. Like, how many viewers?

[:

I think my version of this, which is just like standard Guillermo rant, is like, actually we just don't have good abstractions for quantum algorithms.

[:

I feel like that's like giving up.

[:

No, no. I mean that in a positive sense of like, we should just make good abstractions for quantum algorithms. We just don't have them. Right? Right now it's kind of -- like, quantum algorithms is like a --

[:

I do feel like that's like throwing away -- that's throwing away decades of hard work people have done and saying, oh no, everything they've done is useless. Like it feels like it's not worth it.

[:

I'm not saying it's useless. I'm just saying it's bad. Like bad aesthetics. And you know I have a ranking of aesthetics. Physicists are not at the top. And in fact, they're not even close to the middle. But, happy to discuss further on that. I think cryptographers, funnily enough, are really at the bottom, actually, which is also kind of funny on this, Tarun.

[:

See again, again, the Guillermo trash talk hour has started. Right?

[:

Yeah. Clearly.

[:

You got to have a little respect for how hard people work on certain things --

[:

No. And this is not to say that that's not important or interesting or anything like that. I just think it's bad aesthetics.

[:

Not to say that your PhD and postdoc --

[:

Your life's work.

[:

In 20 years of research aren't important, but --

[:

Correct. But it's just -- it's bad aesthetics.

[:

You're like Kanye West but for academics, you know, like when he --

[:

I mean, hey, I'll take whatever you -- you know, as much as I sling insults, I hope that someone slings similar insults back. Like this is not a one way road. But unfortunately, I still think that I am right on this. You know, this opinion is correct.

[:

Another topic I wanted to touch on for 2024 --

[:

So if we're going to be lenient on timelines, then I think we can talk about like zkTLS and I hate that naming for it, but like web attestation.

[:

Oh my God.

[:

Web Proofs.

[:

Yeah. Wait, wait, wait, wait. Just because, like, I don't know if you've done an episode on this.

[:

We did an episode on Web Proofs.

[:

Yeah. Can we disambiguate the term? Because I feel like it's a wrong term.

[:

zkTLS specifically, so I haven't -- yeah.

[:

It's worth explaining why it's wrong.

[:

Okay. I don't think I've ever done a full episode just on that term, but it has been mentioned a bunch of times.

[:

Good.

[:

So yeah, what is the deal with zkTLS then?

[:

So the idea is you'd have some kind of plugin in your browser, or you have a modified browser, that allows you to make proofs about a TLS session. And even actually more specific than TLS, in some cases it's like HTTPS. So it's like you go to your bank's website, it's secured with HTTPS. Great. Now you can make a proof that your account has a balance that's higher than some number. The difficulty here is the cryptography that's used is symmetric. You don't have this property, which is non-repudiation. You don't know for sure whether or not the messages sent between you and the bank were signed by you or by the bank. Right? And so to convince a third party that it was the bank and not myself, we need to include someone else in this sort of proving mechanism. And that's like these notaries or these proxies or these MPC nodes. So this is why I usually don't like the term zkTLS, because ZK usually refers to things that have a prover, a verifier, and the zero knowledge property -- hopefully, if you're using ZK correctly. And then here we're actually talking about a protocol with a lot more parties. And so it could be a bit disingenuous to people.

[:

MPC TLS.

[:

Yeah. It's a lot more like -- yeah, MPC TLS, or you don't have to use MPC. You can do this with just someone that sits on the network between you and the bank. And then it's kind of like proxy TLS. And this way, talking about TLS attestations is, in my opinion, a bit more generic, doesn't have the downsides and the pitfalls of calling it zkTLS, but also isn't specific about how you do it. I don't know what your thoughts are.
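As a concrete, heavily simplified illustration of the non-repudiation point made above: TLS records are authenticated with a symmetric key that both sides hold, so a MAC alone cannot tell a third party who produced the data. The key and message below are made up, and this sketch is not the design of any specific web-proof protocol.

```python
import hmac, hashlib

# Both the client and the bank's server derive the same symmetric session key
# in the TLS handshake.
session_key = b"shared-secret-derived-in-tls-handshake"
record = b'{"account":"alice","balance":12345}'

def mac(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

server_tag = mac(session_key, record)   # what the server would attach
client_tag = mac(session_key, record)   # ...but the client can produce the same tag
assert hmac.compare_digest(server_tag, client_tag)

# A verifier seeing (record, tag) cannot tell which side authored it. Hence the
# extra party in these designs -- a notary, proxy, or MPC node that takes part
# in (or observes) the session so the client alone cannot forge the transcript,
# with a succinct proof then made about the attested data.
```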

[:

This is the kindest dunk I've ever heard.

[:

Yeah.

[:

On the podcast.

[:

Hey, I'm not here to pick fights. I'm just here to decode some jargon. You know, that's what I do.

[:

That's where you're wrong, kiddo.

[:

All right. Are there any other things that have emerged this year? I do have a couple.

[:

I mean, if we're going to pick fights, zkVMs.

[:

Actually, yeah. So this is what I'm -- this is what I'm going to say, which is, the fact that a lot of people have gone from working on applications to working on zkVMs is a sign that the search process to find applications has failed. And everyone wants to outsource it to people building stuff on their VMs. So then they end up building VMs instead of applications.

[:

Well, I mean, we did learn this year that, like fundraising for applications, ZK applications has been hard. But I don't know if that's the fault of the state of the kind of application space or the fault of the funders at this time. Just because, like it's clear --

[:

I just mean that if a lot of people pivot to making a VM, you make a VM because you're like, so I want someone else to find the application and use my substrate as opposed to like, I have an application. Right? And so if everyone's going to a VM, that means the search process through the application space is not going very well, whereas the search process in the VM space is easier.

[:

You really don't think it's just because there's more of an upside, like there's more of a business, like a case that investors like, and investors are kind of pushing these teams to create zkVMs instead of using somebody else's?

[:

I mean, if anything, I'd say it's harder to make money building a zkVM than it is building an application, because you're now competing with a lot of other zkVMs. Everything is open source. So it's very easy to fork one and make your own zkVM. Well, very easy. Relatively easy compared to what it was like a year ago, two years ago. And so I'm not sure there is a business case to be made there.

[:

Well, it reminds me quite a lot of the L1 booms of yesteryear, or the L2s for that matter, where a team would have sort of a really novel application idea and then go shop it on the market. And sometimes the tech was missing for them to truly build the thing that they wanted to build, and then they had to build down the stack, and then they ended up building more of the infrastructure and then just focused on the infrastructure.

But other times I do think it was just like -- I really do think sometimes it was just like, that was a more compelling business case for that team to be doing at that time than joining an ecosystem, where you definitely get like a lower tier status. Like you can't -- you know, you're not the L2, you're not the L1, you're not the overarching network ecosystem anymore. You're part of somebody else's. I mean, I think a lot of projects have had that problem in attracting app developers, which is like, you know, I feel like the apps should be valued very highly, but they're still sort of seen as a little bit like a secondary like, oh, it's part of my ecosystem. It's part of an ecosystem.

[:

Well, actually, I would dispute that a little bit in terms of other crypto applications that are not ZK, where there's a ton of them making a lot of money. Like all these Telegram bots making 100 million a year plus. So all I have to say is like, I think if applications find a way to make money, they will do it regardless of the ecosystem.

[:

True.

[:

Which is like what these Telegram bot type of things have been kind of proving. So I think it's just more like people haven't been able to get any traction and the zkVM is like a safe fallback. Because you know what you're supposed to be building. And you know, if you have a particular vector direction of like these are the things I want to change, then you have a thing to keep moving on versus getting stuck in the like 'no one wants to use my thing.'

[:

Yeah. It's kind of funny, but zkVM feels like it's become a well trod path versus the applications which is kind of an open thing. And it's like, okay, cool, if we build a zkVM, we'll find maybe some applications. Someone will figure out some applications for it versus being like, okay, but --

[:

It's funny because this all happened this year, where we're saying, like, building a zkVM is now an easy task.

[:

Yeah, true.

[:

-- this year. Up until like end of 2023 --

[:

Yeah. So when I say like easy task, I don't mean like easy in a physical sense. I mean, easy in like the spiritual sense of like this is now a thing that like is a thing, right? I know it's a kind of wooey or whatever, but it is like a thing that people now do. Right? Which is a funny thing to say about such a kind of niche, esoteric topic relative to -- not in crypto, but just in the rest of the world like, I guess, the --

[:

Do you remember what we said last year for 2024 -- did we predict this?

[:

Yeah, we did. I'm pretty sure. I can even look it up.

[:

The idea emerged and then within a few months, a few other teams were doing it, and then by the end of the year, there was like a very, very long list. And I think a lot of those teams have since sort of -- like that's their application, but they've moved a lot of their effort into like another part of the stack that they're building.

[:

Well, that's what we just saw with Axiom, right? Announcing OpenVM. So Axiom started out as a coprocessor, and now they're leading in one of these like VM efforts.

[:

That's crazy. I actually didn't know this. When did this get announced?

[:

Yesterday.

[:

Okay. Fair, fair. I'm allowed not to know this. I feel like I was completely out of the loop. I was like, what? Okay, yesterday.

[:

No, no.

[:

A lot of other coprocessors then ended up moving more in the prover network side of things. And prover networks we definitely talked about at the end of last year. That was when we were first seeing those projects emerge. But in the last year, we've seen a lot of teams that had sort of initially pitched themselves as one type of application, include a prover network to widen their offering, basically.

[:

Like it's literally that's what --

[:

I mean, there's one -- so actually, thinking back to our predictions that we made earlier this year, we were talking about identity stuff and scanning passports, and that's actually something that has been built a lot this year.

[:

True.

[:

There's a lot of different teams that have been able to scan passports and make ZK proofs around them. The application built on top of it, I haven't seen many. We had heard of -- I think it was the Rarimo team using it to do shadow elections in specific countries. I think Aztec, some of their early wallets are looking to integrate this like from the very beginning in the wallet, but I don't know if we saw it yet like fully fledged identity systems and identity applications. We're on the way.

[:

Nico, you and I talked a lot about this in our last -- in sort of the last episode of the regular weekly cadence. We did a lookback at some of the events that had happened, and we mentioned the Cursive presentation at ZK Summit there. There are some really interesting ideas there, but this is in the -- to me, that's in like ideation phase. Although they are doing experiments with all of these. So you might get a chance to use this, but it's not like it's in production yet in any sort of large-scale way.

[:

Yeah. But I guess that's exactly the kind of application research that Tarun was mentioning that some others have tried and haven't been fruitful with. And hopefully, some team will find something eventually.

[:

I mean, there are a few things that have popped up, but I don't know if they fall under the application category exactly, but things like the Celestia ZK Accounts, there's all these BTC zkRollups, which is not new but it's different. I mean, it's like rollups are not new, but they're using -- I mean, they're going about it in a different way. And then coSNARKs, which is obviously more like an infrastructure piece. It's not an application per se, but I mean, at least in the conversations it sounded like that could open up some cool things.

[:

I would say one thing that hopefully will come out of all the VMs, a little bit like your analogy of like the 5 million L1s, is like, A, the VM that you don't expect will win, if it's like the L1s, right, where it's like --

[:

The Solana?

[:

Yeah, exactly. It's going to be like someone who you don't expect but who like developer -- builds better developer tooling. And then the other thing is like, I'm just hoping people kind of like upstream ZK into already successful applications versus trying to build totally new ones from scratch. Like use ZK tools as features within existing applications that already have usage.

That, and my prediction is like 2025 --

[:

Yeah. On that point, I feel the same way about ZK as I do about blockchain. It's just like fundamentally, it is like a cool, insanely sick, like fucking wild technology that is possible. But otherwise, like nobody's debating what database you use -- like no front-facing consumer gives a shit about what database you're using in the backend of your application. Right? Like you're not like, ah, you use MongoDB, I'm not going to use you because of that. Sorry. You know, fundamentally it is like a boring technology.

[:

You're against my values. Yeah, yeah.

[:

Yeah. It's just like a boring -- a very fundamentally boring technology for like -- it's like okay, cool. Yeah, we use this cool cryptography stuff to ensure that my privacy is preserved. Sick. And it is very cool. Like for what it's worth, it's insanely cool. Like the fact that we can even do it is insane. But it is not the point. It's kind of like an instrument to doing the thing rather than the thing itself. And so to Tarun's point, I think it's a healthy thing to see where it's like cool, the tooling has become good enough that I can just incorporate this in the way I want or the way I find interesting into the app. And now I never have to think about it again, except insofar as it's like the properties I get are cool or useful or whatever.

[:

I just realized Guillermo and Nico, you guys released some work recently and I don't know what category that falls under.

[:

Yeah.

[:

Is that an application or is that research?

[:

I have no idea. I'm glad you asked.

[:

That's a great question.

[:

And I do think we probably deserve a lot of -- you know, a lot of time and space to really dive into what you guys have recently released. But do you want to just quickly summarize it and then we can maybe do a full episode on it sometime soon?

[:

Okay. I'm going to give a quick setup, and then I'm going to say a thing. So there is this funny thing that I've been ranting on, and I believe I've done it here on the podcast, which is, right now we have multiple ways of building zero-knowledge proofs, one of which is we use a VM, a zkVM. Another one is -- I'm sorry, when I say zk, I mean zk in the royal sense of zk, you know, the lowercase zk. So succinct proofs. And I've been ranting about, like look, there are things that we use all the damn time, like literally so much that it is unbelievable that we are going to just eat all of this overhead to prove that a certain thing was done correctly.

Like proving that something that runs in a second is correct is like thousands of times more.

[:

More? Like more time?

[:

More. Yeah. More time.

[:

Okay. Yeah. That's bad.

[:

So like proving the thing is somehow -- even though it's kind of an incidental thing, like I just want to show you that what I did was correct -- now it takes thousands, tens of thousands, hundreds of thousands of times longer than just doing the thing itself, which is kind of silly.

[:

Wow.

[:

And, yeah, it's easier to prove -- easier to verify and stuff, but fundamentally, it's that. So one of the rants I've always had is like, okay, in some sense, there are certain -- you know, it is easier to build on a zkVM, which is why everyone's building zkVMs. But you should be able to kind of go down to the bare metal, and for certain very important applications, we should be able to construct specific proving -- you know, I don't know what to call them, like not a system because it doesn't prove everything, but a specific algorithm -- such that the gap between the amount of time you spend doing the thing and the amount of time you spend proving that the thing you've done is correct is very small. Right?
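To put rough numbers on the kind of overhead being described here (the figures below are made up for illustration; real overheads vary enormously by proof system, circuit, and hardware):

```python
# Back-of-the-envelope: a computation that runs natively in one second, proved
# with a hypothetical 50,000x prover overhead.
native_runtime_s = 1.0
prover_overhead = 50_000          # "cycles of proving per cycle of useful work" (made up)

proving_time_s = native_runtime_s * prover_overhead
print(f"doing the thing:   {native_runtime_s:8.1f} s")
print(f"proving the thing: {proving_time_s:8.1f} s  (~{proving_time_s / 3600:.1f} hours)")

# The "zero overhead" goal described next is the opposite regime:
# proving time roughly equal to native runtime.
```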

[:

Cool. And that's what you guys have built?

[:

So specifically, yeah. In the case of DA, we've built what we think is the first purpose-specific succinct proof which actually does a thing that people are using right now, like they're doing right now in the case of data availability. Which is, we want to prove that something is not just close to a correct encoding, but indeed it has this tensor property of this weird encoding, and it has these additional data availability properties or whatever.

[:

Yeah. Just to be clear, like not exactly the first, Guillermo. Right? There's these great papers from Ethereum Foundation, like Benedikt Wagner, Mark Simkin and I think, Mathias Hall-Andersen who look at this problem. But the way they do it incurs a constant overhead cost on the network. So in their system, the network is going to be sending the same thing to every single light node, whereas we sort of relaxed a bit of definitions, looked at it, maybe I'd argue a bit more holistically and realized we could get rid of that overhead. And so that's sort of the main point of the work.

[:

Cool.

[:

Correct. Right. Yeah. So we're not the first ones to do it overall. I think we're the first, specifically, to do it in a way that's zero overhead, where constructing the proof and having done the thing takes the same amount of time, and takes the same amount of space. Which is weird and, like, not obviously something you can do. Anyways, that's a whole other thing for later -- we could talk --

[:

About another episode potentially.

[:

Yes.
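For readers unfamiliar with the "tensor property" referenced above, here is a toy stand-in using a plain parity (sum) code over integers. Real data availability schemes use Reed-Solomon codes over finite fields, and this sketch is not the construction from the work being described; it only shows the encode-rows-then-columns structure that such proofs attest to.

```python
data = [
    [3, 1, 4],
    [1, 5, 9],
]

# Tensor-code structure: extend every row with its parity, then extend every
# column (including the new parity column) with its parity.
rows_extended = [row + [sum(row)] for row in data]
full_grid = rows_extended + [[sum(col) for col in zip(*rows_extended)]]

for row in full_grid:
    print(row)

# Consistency check: the corner parity is the same whichever way you compute it.
assert full_grid[-1][-1] == sum(r[-1] for r in rows_extended) == sum(map(sum, data))
# A DA proof of the kind discussed here convinces light nodes that the published
# grid really is such an encoding (not merely "close" to one), ideally without
# adding per-node overhead for the network.
```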

[:

We're almost at the end of the year. But back to the quantum stuff -- do you think it will actually break any of this?

[:

I mean, we know it's going to happen, right? It's just a matter of timeline.

[:

It's when?

[:

And yeah, I think we still have a lot of time.

[:

One thing that oftentimes happens, I think in research where you have to build physical devices, is if it's not really a general purpose device, it's built around solving a particular problem really fast. But it might not be particularly fast for other problems. Like maybe the only problems it can encode are things that are represented as an Ising model. Well, a lot of computations, if I have to embed them as an Ising model, it's extremely slow, gigantic, high overhead, et cetera. Right?

So my point is, like, there's a lot of devils in the details to like how the device is implemented, what it's built around being used for, what the benchmarks are, and how its real-world performance on lateral problems -- like problems that aren't exactly the one thing it was designed for -- works. And generally the quantum Fourier transform/Shor's algorithm type of stuff is not the benchmark everyone cares about. They generally care about these hardness circuits, like these worst-case circuits. You know, if you've -- for the cryptography audience, if you've ever studied like iO or garbled circuits, you could think of there being a quantum version of that that represents a random garbled circuit. And you need to solve something that looks like satisfiability for it. That's considered the gold standard for proving that you've done something that a classical computer can't do.

And those types of things are very different than the practical things. So you might have built a device around like winning the benchmark, like beating the benchmark. But that device might actually be still kind of not super practical for doing like finding Satoshi's keys.

[:

But isn't there motivation to -- and not because -- I mean, bitcoin keys is like the least of -- it's like, think about all sorts of encryption that would destroy states. I feel like, is there enough motivation?

[:

When Guillermo was my intern in --

[:

That's right.

[:

Like yeah, the stuff we were working on, like you could do that like 10 to the 15th times faster or whatever. Right? That's like that's kind of why people want quantum computers. So Google is actually focused on the science side much more than the cryptography side. And that's also why I think like -- I find the marketing aspect of it, very unsavory. I get it, but the idea that your CEO is going on TV saying, like this proves the multiverse exists is a little embarrassing to me. This is like --

[:

Did they actually say that? This is to -- that's actually why I like paid for.

[:

Go read the blog post. It's like, it is embarrassing --

[:

Is what they released actually a thing? Like, so maybe the marketing's wrong, maybe like -- but is it a first step towards something, or is it all fluff?

[:

I mean, for what it's worth, I've only very casually read the stuff. I was interested in quantum computing, doing a master and part of the PhD, which is the reason why I did some of the stuff I did, but I don't -- I mean, I'm sure it's like a nice step towards the same things that everyone has cared about for a while, but it is a step. It's not obvious to me that this is like an enormous -- I think the biggest thing that was very interesting was kind of a positive claim that they made, which has kind of fallen by the website -- wayside, which is this, in quantum systems, there's a big problem with errors that kind of -- they get accumulated in the system, and at a certain point, your signal becomes mostly noise, right? So you can't use it in any useful way if too much noise gets into this. I don't want to get too much into detail, but too much noise gets into the signal, right? And then the point is, like what they showed is, what normally happens as you make the systems larger, you get more error.

[:

Yeah. But here, because of error correction --

[:

What's weird --

[:

They were able to not get more error or like less.

[:

But it's not just that. So what's interesting is here, as the system gets larger, you actually get less error. And that's what's --

[:

And error is what you're saying, right? E-R-R-O-R.

[:

Error, like error. Like the -- E-R-R-O-R.

[:

Not air.

[:

Not air.

[:

Not air. Yeah, yeah. No. Not air, like generic oxygen or like nitrogen mixed with oxygen.

[:

Or STARK AIR.

[overlapping conversations]

[:

-- no naming confusions.

[:

Or that it's going to be it.

[:

Okay.

[:

Yeah. So what that was -- what was weird? That was like, what was like, oh, shit. This maybe is like, very --

[:

Do you know anyone who's checking this or how does this get verified?

[:

The fun part about --

[:

There's a lot of Trust Me Bro in hardware. Like, that's just a fundamental truism, right? It's like you can't really verify the hardware.

[:

The fun part about physics is actually just -- it's shocking how much of it does not reproduce, which is very weird, because everyone holds physics as the gold standard of reproducibility. Because sometimes, yeah, it's --

[:

Well, you used to not be able to become Stanford's president by faking your results for 20 years. Unfortunately, that's true nowadays.

[:

To be fair, yeah. That was -- well, that was like biochemistry. I don't know. I'm not sure.

[:

I know, I know. But like, I mean it's a societal value thing that like --

[:

Other challenges and setbacks. I'm going to just say, like, I think we just highlighted one, which was like applications almost reverting back to infrastructure, which was not something we were hoping for this year. We were hoping to see like standalone applications. Is there anything else?

[:

Yeah. I found that competition got a bit nasty.

[:

True.

[:

Maybe it already was, but it got a bit nasty --

[:

It always was. It just wasn't on the surface.

[:

Well, it always was. It was just in different categories, like the L1s, or the rollups, which was last year's battle. Right?

[:

Yeah, yeah. But communications got a bit nasty around it and that's, I think, unfortunate for the whole space. And I think it is a challenge moving forward. Like how are we going to make sure that it doesn't happen again.

[:

Or maybe it's just the nature of the size of the ecosystem, sadly. And like the money on the line, it attracts more of that, I think. Tarun, you had sort of hinted at the rollup roadmap in our pre-conversation here.

[:

I basically think like fundamentally, and I'm biased now because I'm writing some small note on paper on this, but there's actually this fundamental annoying problem about rollups, which is, if I have a system that has n rollups, in the worst case, the amount of transaction demand that I need to make up for the lost revenue to the L1 grows with the number of rollups. So like in a world with a thousand rollups, I basically need a thousand times the transaction volume to compensate for like L1 lost revenue due to congestion.
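As a toy back-of-the-envelope version of the scaling claim above (all numbers invented for illustration, not measurements of any chain or a reproduction of the actual model being written up):

```python
l1_lost_revenue = 1_000_000      # hypothetical daily fee revenue displaced from the L1
fee_back_to_l1_per_tx = 0.01     # hypothetical per-transaction value that flows back

def worst_case_demand(n_rollups: int) -> float:
    # Per the claim above, the worst-case demand needed to make the L1 whole
    # grows roughly linearly with the number of rollups.
    return n_rollups * l1_lost_revenue / fee_back_to_l1_per_tx

for n in (1, 10, 1000):
    print(f"{n:>5} rollups -> ~{worst_case_demand(n):.1e} transactions needed")
```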

[:

Which makes total sense actually.

[:

-- harder of a claim to make in --

And I just generally think like there, as great as a lot of things that are within Ethereum are, like there is a sense, and like I'm -- again, this is sort of like one of the reasons I've been thinking a lot about this and writing about this is like this. Everyone on Twitter will be fighting about are rollups parasitic to Ethereum or not? Are they removing all the volume and demand and then no one wants to hold the asset, whatever. There's a sense in which it's true, but only if the transaction demand for the whole network is below some threshold. And the problem is we're kind of below that threshold. Like we just don't -- we don't have enough demand to justify. If you look empirically, it's like kind of actually kind of weird. And that's why you see kind of, I think, the success of the Solana and Move type of stuff right now compared to some of the things In the --

[:

Although this isn't ZK specific.

[:

But it's not that it can't change.

[:

I guess, this is just like one of these -- this is --

[:

This isn't ZK. Yeah. This is --

[:

This is zk-Rollups included.

[:

But I think the zk-Rollups have an even worse version of this, right? They have a higher cost basis.

[:

Yeah.

[:

A higher hurdle rate. They have to like have all the proving working. They have to have that be cheap, on top of all the other things that the current ones have. So that means they need even more transaction demand. And if you look at them, they have much less.

[:

Although don't they make money through these provers, like having proof -- like doing proofs on-chain kind of?

[:

Okay. Okay. So --

[:

I don't think it costs more.

[:

I think like the world, it does --

[:

Like make more via one. Yeah.

[:

The real question is, will ZK get -- more in my head, and this is sort of, again, a more philosophical question -- but will ZK actually get used in applications when the cost of proving -- when the actual proof generation -- is the valuable part, or will it actually get used when proving basically costs zero?

[:

So you're saying like we're heading towards a zero cost anyway? So like the -- it won't --

[:

That seems very unfeasible in --

[:

I think, yeah. The hard part for me is betting on ZK costs remaining high and then people paying for it feels very much like betting against the improvement of technology and the improvement of proof systems, which, if we've learned anything over the past literal four years, is that those things feel like they're going to zero extremely quickly. Maybe there's some fundamental limitation why this time it's different and all that stuff. But I do have a hard time with this. It's like, if you're depending fundamentally on the economics of proofs costing some minimal amount, I feel like I have bad news about betting against technology. We have literal supercomputers in our pockets.

[:

This puts into question lots of projects actually. Because aren't prover networks sort of based on that too?

[:

I'm not saying they won't be valuable necessarily, I'm just trying to say the following statement, which is like in proof-of-work, the value of the hardness of the process is important to the network.

[:

That's right. That's right.

[:

It's -- in fact one where like by not adding in new technology to the network --

[:

That's right.

[:

-- its transaction demand like a --

[:

Before closing, I really want to touch on 2025 predictions.

[:

I think for me it's more of a wishlist than a prediction. But I want to see proving for people. No more proving for big servers, for rollups. I want to see proving for people. I want to see it happen on your phone or if it's not happening on your phone, I want to see it delegated privately.

[:

Yes.

[:

But yeah, I want people to actually build on phones. Kind of in the same way it's easy to fall back to building a VM, it's also easy to fall back to building --

[:

Server side.

[:

Proof systems or applications -- well, no tools. Yeah, for big beefy servers. It's harder to build them for phones. That's my number one item on the wishlist. And I think that's also what a lot of people are going to be targeting.

[:

Yeah. I mean, I think I had this prediction last year too, which is like client-side proving. Like, come on, what are we doing? What are we doing here? This is supposed to be privacy software, not like SaaS, right? We don't need a rack somewhere to do the things for us. We have supercomputers in our pockets. We should just use the supercomputer in our pockets. Do the thing. Like come on. Anyways, I'm very mad about this and I agree -

[:

You want to put the ZK back into ZK.

[:

Correct.

[:

I see.

[:

I'll give one instead of a predict -- you know, I gave, I guess, an earlier prediction that an existing application will be a better user than any new application. And I count rollups in the new application category. But let me tell you the most successful usage of cryptographic techniques in terms of a money-making application in crypto this year.

[:

I think I know what you're gonna say.

[:

Take a guess. Each of you.

[:

I know what it is.

[:

You get one guess.

[:

I think it is a project I'm not a big fan of. Okay. Okay.

[:

I don't know if you're a big fan of.

[:

No. Then it's not what I thought. Okay. It's not what -- you would have known what I meant. Okay.

[:

I don't think -- I actually don't -- I don't know. I don't know. I don't know your opinion.

[:

Although, I do think it is that. But well, for me. But --

[:

Okay, guess. Give me your guess. Give me your guess.

[:

Total number of proofs produced?

[:

Like economic.

[:

No, no. In terms of using cryptographic tools on a mobile phone for an end user.

[:

Oh, then it's Worldcoin, isn't it?

[:

I think Worldcoin.

[:

BONKbot makes way more money.

[:

What?

[:

BONKbot is done completely in TEEs on mobile devices. Made $300 million last year.

[:

Although TEEs.

[:

Holy shit. Wait, that's incredible.

[:

Yes. You should read the BONKbot TEE architecture. It's actually quite -- I was quite impressed.

[:

Hilarious.

[:

For something that's like doing memecoin trading. So this is where my point is that like all the stuff that people are spending money on, I think eventually we'll see -- you'll get the seep in of mobile privacy. Especially as people are using way more mobile apps for crypto. Like, as weird as it is to say, no one has gotten more mobile app usage of wallets than memecoins, which is kind of scary and sad at the same time.

[:

What's the worst outcome for ZK in 2025?

[:

Rollups are the only consumers of proofs.

[:

No applications.

[:

I mean it is a worst case outcome, but I don't know --

[:

Yeah. I don't think that's going to happen.

[:

Like for me the probability that happens is zero.

[:

You haven't talked to enough zkVM teams. They all think that rollups are the best customers.

[:

No, they're by far the worst. Like, I don't know, I find it hard to believe that we're not just going to be slinging proofs from our phones all the time in the next 10 years or something, you know.

[:

I also do think it might be.

[:

We said next year, not 10 years. Next year.

[:

Oh, next year. Okay, fine. Yeah, maybe next year, fine. All right --

[:

Okay. I'll give -- wait, I'll give another best one. Best one would be like -- well, kind of best. I don't know if this is that great, but some existing huge monolith technology company starts to be deep, like ZK becomes like a deep part of their stack, which is exciting because it would be like, then the number of people touching it would be very cool. And this tech would go further. The bad thing is obviously, the whole spirit would change potentially. And the fix -- the focus, it would almost be like that company could almost absorb all of this work we've been doing, building out the ideas around this, and then just kind of take it sort of as their own, which would be a bummer.

[:

I mean, you were in music before Anna, so maybe this analogy is going to help resonate with you. Like when you're in Underground Indie circles, you love the kind of music you're doing, and you want it to be pushed out to as many people as you can, but suddenly when it becomes mainstream and when mainstream music sort of appropriates that kind of stuff, it becomes uncool.

[:

Totally.

[:

And it is what you wanted in the first place. But then when you get it, you're very sad about it.

[:s, indie from the:[:

Yeah, yeah, yeah. I think we should be okay with it. I'd be happy to see it go mainstream. I'd be happy to see Apple wallets have a crypto wallet in it and you can sign with FaceID. Super great. Uses TEEs on the device already. Wonderful. Kind of sucks for the Indie developers, but it's good for the end user.

[:

Yep. There may or may not be at least one large company doing this, but it's fine.

[:

Yeah. I feel like there already are. Like Cloudflare, I know --

[:

They always have though.

[:

Cloudflare. I would actually put money on Cloudflare using ZK first.

[:

But they did. They always did. They just used it in a really tiny way.

[:

But as in like using more, like using recursion, for instance. I would actually, if anyone uses it first, it's probably them.

[:

Another worst outcome then is like a very bad big project or big company or big entity or state or something using it. Like maybe that would be the opposite.

[:

It's permissionless, Anna. There's no good or bad here.

[:

Actually, worst outcome: some kind of huge breakthrough that gets patented. And then like no one cares.

[:

Oh that one is --

[:

That's what I worry about for FHE with this, by the way, guys. Too many patents in FHE. Not going to name names, but someone. I think they might kill their own industry if they don't, like, open it up a little bit more. Is there anything you guys want to share about what's coming up?

[:

For me, nothing specific, just more research hopefully.

[:

Honestly, just like -- I feel like I've spent the last year falling out of the closer-to-cryptography stuff and getting back more into DeFi stuff. I think that's -- because I think it really is back.

[:

DeFi-pilled once more.

Before we sign off, I just wanted to share a bit about this show as well, because as you know, we've been on break, and we haven't been releasing as regularly, although we have released one a month, which is cool. So yeah, the plan is after taking a bit of time off, I had a chance to -- I don't know, get a little bit of inspiration going. A lot of these new use cases and directions, these are things that I've been reading about. And so, I'm really excited to come back to these interviews and to start up the show again.

The plan right now is to start releasing again around mid January. I don't have anything scheduled yet or recorded. So maybe we have to push that back by a week, but that's what we're looking at sort of mid January. I'm still not sure if I come back weekly, like every week, all the time, but I'm sort of playing with the cadence and we'll figure that out over the next few months.

But yeah, I mean, I just -- I feel like taking some time away. What it made me do really was it made me start to miss these interviews and obviously seeing you guys, my co-host, but also digging into these topics. I find it always very challenging intellectually. It's broadening in a lot of ways instead of sort of narrowly focusing in on my day-to-day. It's really great.

I think one of the challenges I've had is being super, super hands-on and operational on the events front and running these three companies. But yeah, I think in the new year, I mean, I have an amazing team around me with Gaylord and Quentin over at ZK Hack, Agni and Rachel and Henrik and everyone over at the podcast, also the entire ZKV team, who's really rolling right now. So I feel like, yeah, in the new year, I'm going to be able to focus more on these episodes, which is something I really want to do.

We also have another ZK Summit coming up. So that's in the new year, in 2025. The last time we were there was in --

Yeah. So I hope I'll see a lot of you there. Put that in your calendar. Just so you know, we won't even be opening applications or anything until February. We want to get sort of fresh research for the talks. And also just generally we tend to do that in more of like a three-month window. But if you're gonna be there, if you're planning on being there, just put that in your calendars already, and stay tuned on all the channels, because that's where I'll be sharing more.

All right, guys, thank you to all of you for joining this sort of finale to the year on the ZK Podcast, this special episode where we looked back and looked forward. Yeah. Thanks for being on.

[:

Thank you.

[:

Glad to be here.

[:

Thank you for having us.

[:

I want to say thank you to the podcast team, Henrik, Rachel and Tanya, and to our listeners. Thanks for listening.

Transcript
[:

Welcome to Zero Knowledge. I'm your host, Anna Rose. In this podcast we will be exploring the latest in zero-knowledge research and the decentralized web, as well as new paradigms that promise to change the way we interact and transact online.

to return for an end-of-year:[:

Yay.

[:

Hello.

[:

Yoh.

[:

What's up?

[:

We are only missing Kobi. I think we've actually done this almost every year. Sometimes we've broken this down into two episodes, doing like a lookback episode and then a look-forward episode. This time we're going to try to combine the two of them. We're going to try to focus less on like a month by month, play-by-play, which we did do once, and a little bit more on like general themes. What went well, what didn't go as well, and what could be coming in the future. So I put together a few little -- actually, ChatGPT helped me put together a few little questions about the past.

[:t went well? Anna using AI in:[:

Yeah, exactly. Who would have thought that's really a change?

[:Well, we used it in:[:

It was more -- you were more --

[:

That one time.

[:

You were less gave in -- you didn't give in to the Borg then. You were freaked out by the Borg.

[:

True. I was scared.

[:

AI God was not sparking joy, but apparently it sparked enough joy that you actually used it like not ironically this time, which is impressive.

[:significant advances in ZK in:[:

There was a lot of movement in like hash-based SNARKs and a lot of new developments. So we had the whole Circle STARK saga.

[:

Oh wait, that was this year, shit! It feels like a year, like more than a year ago. That's crazy.

[:e you sure that wasn't end of:[:

So I checked before.

[:

Oh, you did? Okay.

[:in:[:

Yeah.

[:

More recently there was Blaze. So there's a lot of work in this area of research.

[:

And that's good.

[:

And that is good. Yeah. Well actually it's also sparked a whole other narrative this year which was the hash versus curved and like hash-based SNARKs or curve-based SNARKs. And it's nice to have like hash has won.

[:

Wow.

[:

And I think that's interesting to look at, revisit chat about, I don't know.

[:

Yeah.

[:

I remember -- I mean, we had an episode, I believe with David Wong where he had a hot spicy prediction that he was like hash-based SNARKs are dead. And I was like, are you sure about that?

[:

Complete contrarian take.

[:

Yeah. He should have waited till the end of the year for that one.

[:

Yeah. I mean, no. I think it was a great prediction because it was kind of fun to get into the weeds of it. Why the thoughts were -- but it's funny because it's like, these are not like things that are exclusive, right? Like they're not -- like you don't just get to like work on one and to the detriment of the other or something. They're kind of like just different things. So it's funny seeing some wars around them. Anyways, this is just --

[:

Yeah. And they coexist all the time. Like every single rollup we see on Ethereum rolls their proofs into a Groth16 or like a Plonk proof.

[:

Right.

[:

So they do coexist and they're almost like they need to coexist.

[:

And it seems like it got pretty spicy at the end here. Right? With the Google announcement. I don't know. Tarun actually probably has some context on that, more context than I do.

[:

Yeah.

[:

Actually paid attention to the thing but --

[:

Well, I just thought the marketing in the blog post was a little bit incendiary. So I feel like that already turned me off to how accurate the result was because the paper is like a four-page PRL paper, and it's like doesn't really say that much, or at least the original paper. I mean, the nature paper is also a little bit marketing. So it's like I'm sure there's something there. I'm sure that Google is slightly exaggerating and like the multiverse thing in that blog post was embarrassing. Also citing David Deutsch for that, are you fucking kidding me? You're not setting like Heisenberg or Dirac or someone who came up with this shit actually, and not a guy who wrote a pop sci book. Like the entire thing is like pandering to Twitter Huberman Bros, like that article. Anyway, sorry, that was --

[:ack though to the research of:[:

So I was actually going to add to close this hash versus curve thing. There were also developments in the curve-based land this year, including as you mentioned, Polymath, Garuda and Pari, which are all SNARKs that are smaller than Groth16. Interestingly, I'm not sure their on-chain verification cost is cheaper, but the proof itself is smaller. And I don't know, that's a cool development in itself.

[:d that be something of like a:[:The big one that came out in:[:with LaBRADOR, which is from:[:

Oh cool. For the lattice-based SNARK STARKs, they -- at least in that episode, I understood that this makes SNARKs more compatible with FHE technology potentially. But like lattices are -- those are -- this is broader than just FHE. Right? Like, I'm assuming this all of a sudden opens it up to a bunch of new tech.

[:

Yeah, absolutely.

[:

Does it also open it up to like, AI stuff? Like ML stuff?

[:

Potentially.

[:

Tarun's making a face.

[:

Yeah, yeah. I'm like, that's not really related. That's not really related.

[:

VC talk. Get out of here.

[:

No, I'm just like, that's -- I don't know. Those are very unrelated.

[:

Oh, they are. Oh.

[:

I mean, the reason we're using lattices is because we're trying to find hard problems. We're trying to find things that are easy to set up for computers and hard to break. And that's how we do cryptography. And so, yeah, that's how we do FHE. That's how we do -- we can do ZK. Some people are looking to do like obfuscation.

[:

Crucially, most post-quantum things are lattices.

[:

Oh, I see. Okay, so it's that --

[:

Like the most best known post-quantum things. Yeah.

[:

And actually there was a fun paper that came out this year, a big scare. So in April, there was this paper called Quantum Algorithms for Lattice Problems, in which the author, Yilei Chen says, like, hey, here are algorithms that break these lattice problems, like quantum algorithms that break these problems. And for a period of one week, everyone was freaking out, because this is a very respected author, and everyone was reading through the paper, trying to find a bug, being like, holy shit, is everything broken? Do we have to give up on lattices? Especially because lattices were being used in a lot of the NIST standardization efforts for post-quantum encryption. And so nine days later, someone reaches out to Yilei Chen and he updates the paper saying, like, hey, there's actually a bug and I can't fix this. So the paper is still up for inspiration if people want to think about these problems, but lattices are not broken yet, as far as we know.

[:

Yet. Ooh. Do you think they will be, though? Was that a hint that they could be?

[:

It was a close call. I think there were a few people trying to find bugs, and Yilei fixed a few of them. So it started to feel like maybe there's nothing we can do here. I don't know these things well enough to have any form of intuition.

[:

Yeah. What I think was really, for me, interesting, in that whole debacle, there was this magical organization that happened. I don't mean organization in the sense of an org that does stuff. Like an organization of humans that just happened. Like I don't know. Discord servers were launched and people were just like -- the Discord server was only talking about this paper. And the channels were different pages of the paper and different theorems people would discuss. It was super interesting to watch how people spontaneously organize themselves around major results in kind of like a post-Covid world. Right? I feel like these things kind of happen a little bit in weird niche-esoteric channels, but seeing it be so public.

Right? Like outside of labs, like outside of just people who work at Google and know math or whatever, were just spinning up these servers and being like, dude, come join. Like, we need whoever we can get to understand this stuff. If you can explain it, great. If you can't explain it, done. Like, let's just try to figure out what the hell this thing is saying, like, how to do it. And it was very fun seeing that. And it's kind of an interesting -- where's the future of research going, I think -- what was kind of limited to groups at a blackboard in, whatever, the Stanford physics department feels to be like a very global thing now. And that -- I don't know, it was just very -- just as an anthropological exercise, it was very fun to see.

[:

I think it helps that it was a theoretical problem and a theoretical break.

[:

Sure.

[:

Like, had it been concrete, I think this would have been locked up, with the Department of Defense, like, on it.

[:

Well, I think there's also this philosophical issue that comes with these -- with generally quantum algorithms that I think is worth pointing out as to why it takes forever to figure out if these things are correct or not when someone posts them. Which is like, in most cryptography, whether it's algebraic or whether it's hash-based, I'm separating those very broadly. I know there's obviously overlap, but like there's sort of a sense in which there's -- they're very like strongly discrete objects. There's a certain number of objects, and there's a certain subset of those objects that are valid. And you want the number that are valid to be very small relative to the number that are invalid.

The problem is quantum algorithms are in continuous time and continuous space. Like, all the operations are with some probability, you do this operation and this particular state update happens. And the thing is, you're trying to make this probabilistic statement, like after a certain number of operations with high probability, I've concentrated onto the valid states, like the things that are the private keys or whatever. The problem is, sometimes those continuous processes, they'll always have placed some probability on the right answer. But generally the hard part is showing that it's like the right amount of probability.

And oftentimes how you measure that is extremely difficult in quantum algorithms. Whereas in discrete algorithms, it's like, probability is very easy. Right? It's like, I count one set of objects, I count another set of objects, and I divide them. I know it sounds very stupid to say it that way, but that is literally what you're doing in discrete land. But in continuous land, it's not quite that. And Guillermo hates continuous math. So I point this out just more as a -- he's a noted hater of probability and continuous space.
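
To make the contrast concrete (my notation, not from any specific paper): in discrete land the probability claim is a counting ratio, while a quantum claim has to bound how much amplitude a continuous evolution ends up placing on the valid states.

```latex
% Discrete argument: count and divide.
\Pr[\text{valid}] \;=\; \frac{\lvert \{\text{valid objects}\} \rvert}{\lvert \{\text{all objects}\} \rvert}
% Quantum argument: show the state after T steps concentrates enough mass on
% the valid subspace V,
\big\lVert \Pi_V \, |\psi_T\rangle \big\rVert^2 \;\ge\; 1 - \varepsilon,
% and bounding that norm is where the analysis gets hard.
```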

[:

No, I hate measure theory. I did my entire PhD in continuous land and physics. This is BS.

[:

But a lot of the quantum algorithm stuff actually boils down to getting that type of stuff. Right? Unfortunately.

[:

I have thoughts about this because the state space is countable, and so we don't need measure theory. But that's different.

[:

Maybe. Maybe.

[:

This is more of a philosophical qualm.

[:

But I guess my -- the reason I bring this up is like there's two different cultures. There's the discrete land cryptography culture, and then there's the continuous space and time quantum algorithms culture. And so, like, any paper that makes some claim like this, has to use tools from both. And you need people who can validate both in order for it to work, which might take a lot longer than if you wrote something that's purely discrete or purely continuous.

[:

Wow.

[:

But that's more like a philosophy of science thing of, like, oh, like, how do you know whether a result is right or not? And like, how many people do you need to validate it? Whatever. Like, how many viewers?

[:

I think my version of this, which is just like standard Guillermo rant, is like, actually we just don't have good abstractions for quantum algorithms.

[:

I feel like that's like giving up.

[:

No, no. I mean that in a positive sense of like, we should just make good abstractions for quantum algorithms. We just don't have them. Right? Right now it's kind of -- like, quantum algorithms is like a --

[:

I do feel like that's like throwing away -- that's throwing away decades of hard work people have done and saying, oh no, everything they've done is useless. Like it feels like it's not worth it.

[:

I'm not saying it's useless. I'm just saying it's bad. Like bad aesthetics. And you know I have a ranking of aesthetics. Physicists are not at the top. And in fact, they're not even close to the middle. But happy to discuss further on that. I think cryptographers, funnily enough, are really at the bottom, actually, which is also kind of funny on this, Tarun.

[:

See again, again, the Guillermo trash talk hour has started. Right?

[:

Yeah. Clearly.

[:

You got to have a little respect for how hard people work on certain things --

[:

No. And this is not to say that that's not important or interesting or anything like that. I just think it's bad aesthetics.

[:

Not to say that your PhD and postdoc --

[:

Your life's work.

[:

In 20 years of research aren't important, but --

[:

Correct. But it's just -- it's bad aesthetics.

[:

You're like Kanye West but for academics, you know, like when he --

[:

I mean, hey, I'll take whatever you -- you know, as much as I sling insults, I hope that someone slings similar insults back. Like this is not a one way road. But unfortunately, I still think that I am right on this. You know, this opinion is correct.

[:

... topic I wanted to touch on for 2024 --

[:

So if we're going to be lenient on timelines, then I think we can talk about like zkTLS and I hate that naming for it, but like web attestation.

[:

Oh my God.

[:

Web Proofs.

[:

Yeah. Wait, wait, wait, wait. Just because, like, I don't know if you've done an episode on this.

[:

We did an episode on Web Proofs.

[:

Yeah. Can we disambiguate the term? Because I feel like it's a wrong term.

[:

zkTLS specifically, so I haven't -- yeah.

[:

It's worth explaining why it's wrong.

[:

Okay. I don't think I've ever done a full episode just on that term, but it has been mentioned a bunch of times.

[:

Good.

[:

So yeah, what is the deal with zkTLS then?

[:

So the idea is you'd have some kind of plugin in your browser or you have a modified browser that allows you to make proofs about a TLS session. And even actually more specific than TLS, in some cases it's like HTTPS. So it's like you go to your bank's website, it's secured with HTTPS. Great. Now you can make a proof that your account has a balance that's higher than some number. The difficulty here is the cryptography that's used is symmetric. You don't have this property, which is non-repudiation. You don't know for sure whether or not the messages sent between you and the bank were signed by you or by the bank. Right? And so to convince a third party that it was the bank and not myself, we need to include someone else in this sort of proving mechanism. And that's like these notaries or these proxies or these MPC nodes. So this is why I usually don't like the term zkTLS, because ZK usually refers to things that have a prover, a verifier, and the zero knowledge property. Hopefully if you're using ZK correctly. And then here we're actually talking about a protocol with a lot more parties. And so it could be a bit disingenuous to people.

[:

MPC TLS.

[:

Yeah. It's a lot more like -- yeah, MPC TLS, or you don't have to use MPC. You can do this with just someone that sits on the network between you and the bank. And then it's kind of like proxy TLS. And this way, talking about TLS attestations is, in my opinion, a bit more generic, doesn't have the downsides and the pitfalls of calling it zkTLS, but also isn't specific about how you do it. I don't know what your thoughts are.
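
A rough sketch of the three-party flow described above, in Python. Every name in it (Notary, attest, the JSON balance response) is hypothetical and purely illustrative, not any real library's API; real MPC or proxy designs are far more involved, and the last step would attach an actual succinct proof rather than a plain statement.

```python
# Toy model of the "TLS attestation" flow. Illustrative only.
import hashlib
import hmac
import json
import os


class Notary:
    """Third party that co-holds key material for the session, so the user
    alone cannot forge "what the bank sent" (plain TLS gives no non-repudiation)."""

    def __init__(self) -> None:
        self._key = os.urandom(32)  # stand-in for a jointly derived MAC key share

    def attest(self, transcript: bytes, server: str) -> bytes:
        # The notary vouches (here: MACs) for a commitment to the server's response.
        commitment = hashlib.sha256(transcript).digest()
        return hmac.new(self._key, commitment + server.encode(), hashlib.sha256).digest()

    def check(self, commitment: bytes, server: str, tag: bytes) -> bool:
        expected = hmac.new(self._key, commitment + server.encode(), hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)


def claim_balance_above(transcript: bytes, threshold: int) -> dict:
    # Stand-in for the zero-knowledge step: a real system would attach a succinct
    # proof that the committed transcript parses to a balance >= threshold,
    # without revealing the balance. Here we only return the public statement.
    balance = json.loads(transcript)["balance"]
    return {
        "commitment": hashlib.sha256(transcript).hexdigest(),
        "statement": f"balance >= {threshold}",
        "holds": balance >= threshold,
    }


if __name__ == "__main__":
    notary = Notary()
    response = json.dumps({"balance": 1234}).encode()  # what the bank sent over TLS
    tag = notary.attest(response, "bank.example")
    claim = claim_balance_above(response, 500)
    # A verifier trusts the claim only if the notary vouches for the commitment.
    ok = notary.check(bytes.fromhex(claim["commitment"]), "bank.example", tag)
    print(claim["holds"], ok)  # True True
```

The point of the toy is only where the extra party sits: TLS by itself can't convince anyone else what the server said, so the attestation comes from the notary or proxy, and the "zk" part only enters at the very last step.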

[:

This is the kindest dunk I've ever heard.

[:

Yeah.

[:

On the podcast.

[:

Hey, I'm not here to pick fights. I'm just here to decode some jargon. You know, that's what I do.

[:

That's where you're wrong kiddo.

[:

All right. Are there any other things that have emerged this year? I do have a couple.

[:

I mean, if we're going to pick fights, zkVMs.

[:

Actually, yeah. So this is what I'm -- this is what I'm going to say, which is, the fact that a lot of people have gone from working on applications to working on zkVMs is a sign that the search process to find applications has failed. And everyone wants to outsource it to people building stuff on their VMs. So then they end up building VMs instead of applications.

[:

Well, I mean, we did learn this year that, like fundraising for applications, ZK applications has been hard. But I don't know if that's the fault of the state of the kind of application space or the fault of the funders at this time. Just because, like it's clear --

[:

I just mean that if a lot of people pivot to making a VM, you make a VM because you're like, so I want someone else to find the application and use my substrate as opposed to like, I have an application. Right? And so if everyone's going to a VM, that means the search process through the application space is not going very well, whereas the search process in the VM space is easier.

[:

You really don't think it's just because there's more of an upside, like there's more of a business, like a case that investors like and investors are kind of pushing these teams to create zkVMs instead of using somebody else's?

[:

I mean, if anything, I'd say it's harder to make money building a zkVM than it is building an application, because you're now competing with a lot of other zkVMs. Everything is open source. So it's very easy to fork one and make your own zkVM. Well, very easy. Relatively easy compared to what it was like a year ago, two years ago. And so I'm not sure there is a business case to be made there.

[:

Well, it reminds me quite a lot of the L1 booms of yesteryear, or the L2s for that matter, where a team would have sort of a really novel application idea and then go shop it on the market. And sometimes the tech was missing for them to truly build the thing that they wanted to build, and then they had to build down the stack, and then they ended up building more of the infrastructure and then just focused on the infrastructure.

But other times I do think it was just like -- I really do think sometimes it was just like, that was a more compelling business case for that team to be doing at that time than joining an ecosystem, where you definitely get like a lower tier status. Like you can't -- you know, you're not the L2, you're not the L1, you're not the overarching network ecosystem anymore. You're part of somebody else's. I mean, I think a lot of projects have had that problem in attracting app developers, which is like, you know, I feel like the apps should be valued very highly, but they're still sort of seen as a little bit like a secondary like, oh, it's part of my ecosystem. It's part of an ecosystem.

[:

Well, actually, I would dispute that a little bit in terms of other crypto applications that are not ZK, where there's a ton of them making a lot of money. Like all these Telegram bots making 100 million a year plus. So all I have to say is like, I think if applications find a way to make money, they will do it regardless of the ecosystem.

[:

True.

[:

Which is like what these Telegram bot type of things have been kind of proving. So I think it's just more like people haven't been able to get any traction and the zkVM is like a safe fallback. Because you know what you're supposed to be building. And you know, if you have a particular vector direction of like these are the things I want to change, then you have a thing to keep moving on versus getting stuck in the like 'no one wants to use my thing.'

[:

Yeah. It's kind of funny, but zkVM feels like it's become a well trod path versus the applications which is kind of an open thing. And it's like, okay, cool, if we build a zkVM, we'll find maybe some applications. Someone will figure out some applications for it versus being like, okay, but --

[:

It's funny because this all happened this year, and we're saying that, like, building a zkVM is now an easy task.

[:

Yeah, true.

[:is year. Up until like end of:[:

Yeah. So when I say like easy task, I don't mean like easy in a physical sense. I mean, easy in like the spiritual sense of like this is now a thing that like is a thing, right? I know it's kind of wooey or whatever, but it is like a thing that people now do. Right? Which is a funny thing to say about such a kind of niche, esoteric topic relative to -- not in crypto, but just in the rest of the world like, I guess, the --

[:

... remember what we said last year for 2024.

[:

Yeah, we did. I'm pretty sure. I can even look it up.

[:

The idea emerged and then within a few months, a few other teams were doing it, and then by the end of the year, there was like a very, very long list. And I think a lot of those teams have since sort of -- like that's their application, but they've moved a lot of their effort into like another part of the stack that they're building.

[:

Well, that's what we just saw with Axiom, right? Announcing OpenVM. So Axiom started out as a coprocessor, and now they're leading in one of these like VM efforts.

[:

That's crazy. I actually didn't know this. When did this get announced?

[:

Yesterday.

[:

Okay. Fair, fair. I'm allowed not to know this. I feel like I was completely out of the loop. I was like, what? Okay, Yesterday.

[:

No, no.

[:

A lot of other coprocessors then ended up moving more in the prover network side of things. And prover networks we definitely talked about at the end of last year. That was when we were first seeing those projects emerge. But in the last year, we've seen a lot of teams that had sort of initially pitched themselves as one type of application, include a prover network to widen their offering, basically.

[:ke it's literally that's what:[:

I mean, there's one -- so actually, thinking back to our predictions that we made earlier this year, we were talking about identity stuff and scanning passports, and that's actually something that has been built a lot this year.

[:

True.

[:

There's a lot of different teams that have been able to scan passports and make ZK proofs around them. The application built on top of it, I haven't seen many. We had heard of -- I think it was the Rarimo team using it to do shadow elections in specific countries. I think Aztec, some of their early wallets are looking to integrate this like from the very beginning in the wallet, but I don't know if we saw it yet like fully fledged identity systems and identity applications. We're on the way.

[:

Nico, you and I talked a lot about this in our last -- in sort of the last episode of the regular weekly cadence. We did a lookback at some of the events that had happened, and we mentioned the Cursive presentation at ZK Summit there. There, there are some really interesting ideas, but this is in the -- to me, that's in like ideation phase. Although they are doing experiments with all of these. So you might get a chance to use this, but it's not like it's in production yet in any sort of large scale way.

[:

Yeah. But I guess that's exactly the kind of application research that Tarun was mentioning that some others have tried and haven't been fruitful with. And hopefully, some team will find something eventually.

[:

I mean, there are a few things that have popped up, but I don't know if they fall under the application category exactly, but things like the Celestia ZK Accounts, there's all these BTC zkRollups, which is not new but it's different. I mean it's like rollups are not new but they're using -- I mean, they're going about it in a different way. And then coSNARKs, which is obviously more like an infrastructure piece. It's not an application per se, but I mean, at least in the conversations it sounded like that could open up some cool things.

[:

I would say one thing that hopefully will come out of all the VMs, a little bit like your analogy of like the 5 million L1s, is like, A, the VM that you don't expect will win, if it's like the L1s, right, where it's like --

[:

The Solana?

[:

Yeah, exactly. It's going to be like someone who you don't expect but who like developer -- builds better developer tooling. And then the other thing is like, I'm just hoping people kind of like upstream ZK into already successful applications versus trying to build totally new ones from scratch. Like use ZK tools as features within existing applications that already have usage.

hat and my prediction is like:[:

Yeah. On that point, I feel the same way about ZK as I do about blockchain. It's just like fundamentally, it is like a cool, insanely sick like fucking wild technology that is possible. But otherwise, like nobody's debating what database to use -- like no front-facing consumer gives a shit about what database you're using in the backend of your application. Right? Like you're not like, ah, like use MongoDB. I'm not going to use you because like that. Sorry. You know, fundamentally it is like a boring technology.

[:

You're against my values. Yeah, yeah.

[:

Yeah. It's just like a boring -- a very fundamentally boring technology for like -- it's like okay, cool. Yeah, we use this cool cryptography stuff to ensure that my privacy is preserved. Sick. And it is very cool. Like, for what it's worth, it's insanely cool. Like the fact that we can even do it is insane. But it is not the point. It's kind of like an instrument to doing the thing rather than the thing itself. And so to Tarun's point, I think it's a healthy thing to see where it's like cool, the tooling has become good enough that I can just incorporate this in the way I want or the way I find interesting into the app. And now I never have to think about it again, except insofar as it's like the properties I get are cool or useful or whatever.

[:

I just realized Guillermo and Nico, you guys released some work recently and I don't know what category that falls under.

[:

Yeah.

[:

Is that an application or is that research?

[:

I have no idea. I'm glad you asked.

[:

That's a great question.

[:

And I do think we probably deserve a lot of -- you know, a lot of time and space to really dive into what you guys have recently released. But do you want to just quickly summarize it and then we can maybe do a full episode on it sometime soon?

[:

Okay. I'm going to give a quick setup, and then I'm going to say a thing. So there is this funny thing that I've been ranting on, and I believe I've done it here on the podcast, which is, right now we have multiple ways of building zero-knowledge proofs, one of which is we use a VM, a zkVM. Another one is -- I'm sorry, when I say zk, I mean zk in the royal sense of zk, you know, the lowercase zk. So succinct proofs. And I've been ranting about, like look, there are things that we use all the damn time, like literally so much that it is unbelievable that we are going to just eat all of this overhead to prove that a certain thing was done correctly.

n a second is correct is like:[:

More? Like more time?

[:

More. Yeah. More time.

[:

Okay. Yeah. That's bad.

[:

So like proving the thing is somehow -- even though it's kind of an incidental thing, like I just want to show you that what I did was correct, now it takes thousands, tens of thousands, hundreds of thousands of times longer than just doing the thing itself, which is kind of silly.
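
To put those overhead numbers in rough concrete terms (my arithmetic; the actual constant varies enormously by proof system and workload):

```latex
t_{\text{prove}} \;\approx\; c \cdot t_{\text{run}}, \qquad c \sim 10^{3}\text{--}10^{5}
\;\;\Longrightarrow\;\;
t_{\text{run}} = 1\ \text{s} \quad\text{gives}\quad t_{\text{prove}} \approx 17\ \text{min to } 28\ \text{h}.
```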

[:

Wow.

[:

And, yeah, it's easier to prove -- easier to verify and stuff, but fundamentally, it's that. So one of the rants I've always had is like, okay, in some sense, there are certain -- you know, it is easier to build on a zkVM, which is why everyone's building zkVMs. So but you should be able to kind of go down to the bare metal, and for certain very important applications, we should be able to construct specific proving -- you know, I don't know what to call them, and like not a system because it doesn't prove everything, but a specific algorithm such that the amount of time you spend doing the thing and the amount of time you spend proving that the thing you've done is correct is very small. Right?

[:

Cool. And that's what you guys have built?

[:

So specifically, yeah. In the case of DA, we've built what we think is the first purpose specific succinct proof, which actually does a thing that people are using right now, like they're doing right now in the case of data availability. Which is, we want to prove that something is not just close to a correct encoding, but indeed it has this tensor property of this weird encoding and it has these additional data availability properties or whatever.
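
As a toy picture of the "tensor property" being referenced: the sketch below extends a data square so that every row and every column is a codeword of a tiny code, with a single XOR parity symbol standing in for the Reed-Solomon-style extension real data availability schemes use. The names and the parity code are illustrative only; the point is just the shape of the statement a succinct proof would have to certify, namely that the committed grid really is such a tensor codeword and not merely close to one.

```python
# Toy tensor encoding: every row AND every column of the extended grid is a
# codeword of a small code (here, "XORs to zero"). Illustrative only.
import functools
import operator


def xor_all(symbols):
    return functools.reduce(operator.xor, symbols)


def extend(row):
    # [k] -> [k + 1]: append a parity symbol so the extended row XORs to zero.
    return list(row) + [xor_all(row)]


def tensor_encode(data):
    # k x k -> (k + 1) x (k + 1): extend every row, then every column.
    rows = [extend(r) for r in data]
    cols = [extend(c) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]


def is_tensor_codeword(grid):
    # The property to certify: every row and every column is a codeword.
    rows_ok = all(xor_all(r) == 0 for r in grid)
    cols_ok = all(xor_all(c) == 0 for c in zip(*grid))
    return rows_ok and cols_ok


if __name__ == "__main__":
    print(is_tensor_codeword(tensor_encode([[1, 2], [3, 4]])))  # True
```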

[:

Yeah. Just to be clear, like not exactly the first, Guillermo. Right? There's these great papers from Ethereum Foundation, like Benedikt Wagner, Mark Simkin and I think, Mathias Hall-Andersen who look at this problem. But the way they do it incurs a constant overhead cost on the network. So in their system, the network is going to be sending the same thing to every single light node, whereas we sort of relaxed a bit of definitions, looked at it, maybe I'd argue a bit more holistically and realized we could get rid of that overhead. And so that's sort of the main point of the work.

[:

Cool.

[:

Correct. Right. Yeah. So we're not the first ones to do it overall. I think we're the first, specifically, to do it in a way that's zero overhead, where constructing the proof and having done the thing takes the same amount of time, and takes the same amount of space. Which is weird and, like, not obviously something you can do. Anyways, that's a whole other thing for a later problem, we could talk --

[:

About another episode potentially.

[:

Yes.

[:e're almost at the end of the:[:

I mean, we know it's going to happen, right? It's just a matter of timeline.

[:

It's when?

[:

And yeah, I think we still have a lot of time.

[:

One thing that oftentimes happens, I think in research where you have to build physical devices, is if it's not really a general purpose device, it's built around solving a particular problem really fast. But it might not be particularly fast for other problems. Like maybe the only problems it can encode are things that are represented as an Ising model. Well, a lot of computations, if I have to embed them as an Ising model, it's extremely slow, gigantic, high overhead, et cetera. Right?
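
For reference, the Ising model mentioned here as an encoding target has, in its standard textbook form (my addition, not something spelled out in the episode):

```latex
H(\mathbf{s}) \;=\; -\sum_{i<j} J_{ij}\, s_i s_j \;-\; \sum_i h_i\, s_i,
\qquad s_i \in \{-1, +1\},
```

and forcing a general computation into this pairwise-coupling shape is exactly the kind of embedding overhead being described.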

So my point is like there's a lot of devils in the details to like how the device is implemented, what it's built around being used for, what the benchmarks are, and how its real world performance on lateral problem solving, like problems that aren't exactly the one thing it was designed for, works. And generally the quantum Fourier transform/Shor's algorithm type of stuff is not the benchmark everyone cares about. They generally care about these hardness circuits, like these worst case circuits. You know, if you've -- for the cryptography audience, if you've ever studied like IO or garbled circuits, you could think of there being a quantum version of that that represents a random garbled circuit. And you need to solve something that looks like satisfiability for it. That's considered the gold standard for proving that you've done something that a classical computer can't do.

And those types of things are very different than the practical things. So you might have built a device around like winning the benchmark, like beating the benchmark. But that device might actually be still kind of not super practical for doing like finding Satoshi's keys.

[:

But isn't there motivation to -- and not because -- I mean, bitcoin keys is like the least of -- it's like, think about all sorts of encryption that would destroy states. I feel like, is there enough motivation?

[:en Guillermo was my intern in:[:

That's right.

[:

Like yeah, the stuff we were working on, like you could do that like 10 to the 15th times faster or whatever. Right? That's like that's kind of why people want quantum computers. So Google is actually focused on the science side much more than the cryptography side. And that's also why I think like -- I find the marketing aspect of it, very unsavory. I get it, but the idea that your CEO is going on TV saying, like this proves the multiverse exists is a little embarrassing to me. This is like --

[:

Did they actually say that. This is to -- that's actually why I like paid for.

[:

Go read the blog post. It's like, it is embarrassing --

[:

Is what they released actually a thing? Like, so maybe the marketing's wrong, maybe like -- but is it a first step towards something, or is it all fluff?

[:

I mean, for what it's worth, I've only very casually read the stuff. I was interested in quantum computing, doing a master's and part of the PhD, which is the reason why I did some of the stuff I did, but I don't -- I mean, I'm sure it's like a nice step towards the same things that everyone has cared about for a while, but it is a step. It's not obvious to me that this is like an enormous -- I think the biggest thing that was very interesting was kind of a positive claim that they made, which has kind of fallen by the wayside, which is this: in quantum systems, there's a big problem with errors that kind of -- they get accumulated in the system, and at a certain point, your signal becomes mostly noise, right? So you can't use it in any useful way if too much noise gets into this. I don't want to get too much into detail, but too much noise gets into the signal, right? And then the point is, what they showed is, what normally happens is, as you make the systems larger, you get more error.

[:

Yeah. But here, because of error correction --

[:

What's weird --

[:

They were able to not get more error or like less.

[:

But it's not just that. So what's interesting is here, as the system gets larger, you actually get less error. And that's what's --
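
Roughly, in the standard surface-code picture (my gloss, not a claim about the specific numbers in the paper), the logical error rate scales as

```latex
\varepsilon_{\text{logical}} \;\sim\; A \left( \frac{p}{p_{\text{th}}} \right)^{\lfloor (d+1)/2 \rfloor},
```

so once the physical error rate p sits below the threshold p_th, growing the code distance d, i.e. making the system larger, suppresses logical errors instead of compounding them, which is the "bigger gets you less error" behavior being described.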

[:

And error is what you're saying, right? E-R-R-O-R.

[:

Error, like error. Like the -- E-R-R-O-R.

[:

Not air.

[:

Not air.

[:

Not air. Yeah, yeah. No. Not air, like generic oxygen or like nitrogen mixed with oxygen.

[:

Or STARK AIR.

[overlapping conversations]

[:

-- no naming confusions.

[:

Or that it's going to be it.

[:

Okay.

[:

Yeah. So that was what was weird. That was like, oh, shit. This maybe is like, very --

[:

Do you know anyone who's checking this or how does this get verified?

[:

The fun part about --

[:

There's a lot of Trust Me Bro in hardware. Like, that's just a fundamental truism, right? It's like you can't really verify the hardware.

[:

The fun part about physics is actually just -- it's shocking how much of it does not reproduce, which is very weird because everyone holds physics as the golden standard of reproducibility, because sometimes yeah, it's --

[:

Well, you used to not be able to become Stanford's president by faking your results for 20 years. Unfortunately that's true nowadays.

[:

To be fair, yeah. That was -- well, that was like biochemistry. I don't know. I'm not sure.

[:

I know, I know. But like, I mean it's a societal value thing that like --

[:

Other challenges and setbacks. I'm going to just say like, I think we just highlighted one, which was like applications almost like reverting back to infrastructure, which was not something we were hoping for this year. We were hoping to see like standalone applications. Is there anything else?

[:

Yeah. I found that competition got a bit nasty.

[:

True.

[:

Maybe it already was, but it got a bit nasty --

[:

It always was. It just wasn't on the surface.

[:

Well, it always was. It was just in different categories like the L1s or the rollups with last year's battle. Right?

[:

Yeah, yeah. But communications got a bit nasty around it and that's, I think, unfortunate for the whole space. And I think it is a challenge moving forward. Like how are we going to make sure that it doesn't happen again.

[:

Or maybe just the nature of the size of the ecosystem, sadly. And like the money on the line, it attracts more of that, I think. Tarun, you had sort of hinted at the rollup roadmap in our pre conversation here.

[:

I basically think like fundamentally, and I'm biased now because I'm writing some small note on paper on this, but there's actually this fundamental annoying problem about rollups, which is, if I have a system that has n rollups, in the worst case, the amount of transaction demand that I need to make up for the lost revenue to the L1 grows with the number of rollups. So like in a world with a thousand rollups, I basically need a thousand times the transaction volume to compensate for like L1 lost revenue due to congestion.
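
A stylized way to read that worst case (my shorthand, not the notation from Tarun's note): if a single rollup would need demand D_1 to offset the L1's forgone congestion revenue, then with n rollups the aggregate demand needed scales like

```latex
D_{\text{needed}}(n) \;\gtrsim\; n \cdot D_{1},
```

so a thousand rollups need on the order of a thousand times the transaction volume before the split stops being revenue-negative for the base layer.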

[:

Which makes total sense actually.

[:harder of a claim to make in:

And I just generally think like there, as great as a lot of things that are within Ethereum are, like there is a sense, and like I'm -- again, this is sort of like one of the reasons I've been thinking a lot about this and writing about this. Everyone on Twitter will be fighting about are rollups parasitic to Ethereum or not? Are they removing all the volume and demand and then no one wants to hold the asset, whatever. There's a sense in which it's true, but only if the transaction demand for the whole network is below some threshold. And the problem is we're kind of below that threshold. Like we just don't -- we don't have enough demand to justify it. If you look empirically, it's actually kind of weird. And that's why you see kind of, I think, the success of the Solana and Move type of stuff right now compared to some of the things in the --

[:

Although this isn't ZK specific.

[:

But it's not that it can't change.

[:

I guess, this is just like one of the these -- this is --

[:

This isn't ZK. Yeah. This is --

[:

This is zk-Rollups included.

[:

But I think the zk-Rollups have an even worse version of this, right? They have a higher cost basis.

[:

Yeah.

[:

A higher hurdle rate to clear. They have to like have all the proving working. They have to have that be cheap, on top of all the other things that the current ones have. So that means they need even more transaction demand. And if you look at them, they have much less.

[:

Although don't they make money through these provers, like having proof -- like doing proofs on-chain kind of?

[:

Okay. Okay. So --

[:

I don't think it cost more.

[:

I think like the world, it does --

[:

Like make more via one. Yeah.

[:

The real question in my head, and this is sort of like, again, a more philosophical question, is: will ZK actually get used in applications when the cost of proving -- when the actual proof generation is the valuable part, or will it actually get used when proving basically costs zero?

[:

So you're saying like we're heading towards a zero cost anyway? So like the -- it won't --

[:that seems very unfeasible in:[:

I think, yeah. The hard part for me is betting on ZK costs remaining high and then people paying for it, feels very much like betting against the improvement of technology and the improvement of proof systems, which if we've learned anything over the past literal four years, is that those things feel like they're going to zero extremely quickly. Maybe there's some fundamental limitation why this time it's different and all the stuff. But I do have a hard time with this. It's like if you're depending fundamentally on the economics of proofs costing some minimal amount, I feel like I have bad news betting against technology. We have literal supercomputers in our pockets.

[:

This puts into question lots of projects actually. Because aren't prover networks sort of based on that too?

[:

I'm not saying they won't be valuable necessarily, I'm just trying to say the following statement, which is like in proof-of-work, the value of the hardness of the process is important to the network.

[:

That's right. That's right.

[:

It's -- in fact one where like by not adding in new technology to the network --

[:

That's right.

[:

... its transaction demand like a --

[:

... I really want to touch on 2025.

[:

I think for me it's more of a wishlist than a prediction. But I want to see proving for people. No more proving for big servers, for rollups. I want to see proving for people. I want to see it happen on your phone or if it's not happening on your phone, I want to see it delegated privately.

[:

Yes.

[:

But yeah, I want people to actually build on phones. Kind of in the same way it's easy to fall back to building a VM, it's also easy to fall back to building --

[:

Server side.

[:

Proof systems or applications -- well, no tools. Yeah, for big beefy servers. It's harder to build them for phones. That's my number one item on the wishlist. And I think that's also what a lot of people are going to be targeting.

[:

Yeah. I mean, I think I had this prediction last year too, which is like client-side proving. Like, come on, what are we doing? What are we doing here? This is supposed to be privacy software, not like SaaS, right? We don't need a rack somewhere to do the things for us. We have supercomputers in our pockets. We should just use the supercomputer in our pockets. Do the thing. Like come on. Anyways, I'm very mad about this and I agree -

[:

You want to put the ZK back into ZK.

[:

Correct.

[:I see:[:

I'll give one instead of a predict -- you know, I gave, I guess, an earlier prediction that an existing application will be a better user than any new application. And I count rollups in the new application category. But let me tell you the most successful usage of cryptographic techniques in terms of a money making application in crypto this year.

[:

I think I know what you're gonna say.

[:

Take a guess. Each of you.

[:

I know what it is.

[:

You get one guess.

[:

I think it is a project I'm not a big fan of. Okay. Okay.

[:

I don't know if you're a big fan of.

[:

No. Then it's not what I thought. Okay. It's not what -- you would have known what I meant. Okay.

[:

I don't think -- I actually don't -- I don't know. I don't know. I don't know your opinion.

[:

Although, I do think it is that. But well, for me. But --

[:

Okay, guess. Give me your guess. Give me your guess.

[:

Total number of proofs produced?

[:

Like economic.

[:

No, no. In terms of using cryptographic tools on a mobile phone for an end user.

[:

Oh, then it's Worldcoin, isn't it?

[:

I think Worldcoin.

[:

BONKbot makes way more money.

[:

What?

[:

BONKbot is done completely in TEEs on mobile devices. Made $300 million last year.

[:

Although TEEs.

[:

Holy shit. Wait, that's incredible.

[:

Yes. You should read the BONKbot TEE architecture. It's actually quite -- I was quite impressed.

[:

Hilarious.

[:

For something that's like doing memecoin trading. So this is where my point is that like all the stuff that people are spending money on, I think eventually we'll see -- you'll get the seep in of mobile privacy. Especially as people are using way more mobile apps for crypto. Like, as weird as it is to say, no one has gotten more mobile app usage of wallets than memecoins, which is kind of scary and sad at the same time.

[:

What's the worst outcome for ZK in 2025?

[:

Rollups are the only consumers of proofs.

[:

No applications.

[:

I mean it is a worst case outcome, but I don't know --

[:

Yeah. I don't think that's going to happen.

[:

Like for me the probability that happens is zero.

[:

You haven't talked to enough zkVM teams. They all think that rollups are the best customers.

[:

No, they're by far the worst. Like, I don't know, I find it hard to believe that we're not just going to be slinging proofs from our phones all the time in the next 10 years or something, you know.

[:

I also do think it might be.

[:

We said next year, not 10 years. Next year.

[:

Oh, next year. Okay, fine. Yeah, maybe next year, fine. All right --

[:

Okay. I'll give -- wait, I'll give another best one. Best one would be like -- well, kind of best. I don't know if this is that great, but some existing huge monolith technology company starts to be deep, like ZK becomes like a deep part of their stack, which is exciting because it would be like, then the number of people touching it would be very cool. And this tech would go further. The bad thing is obviously, the whole spirit would change potentially. And the fix -- the focus, it would almost be like that company could almost absorb all of this work we've been doing, building out the ideas around this, and then just kind of take it sort of as their own, which would be a bummer.

[:

I mean, you were in music before Anna, so maybe this analogy is going to help resonate with you. Like when you're in Underground Indie circles, you love the kind of music you're doing, and you want it to be pushed out to as many people as you can, but suddenly when it becomes mainstream and when mainstream music sort of appropriates that kind of stuff, it becomes uncool.

[:

Totally.

[:

And it is what you wanted in the first place. But then when you get it, you're very sad about it.

[:s, indie from the:[:

Yeah, yeah, yeah. I think we should be okay with it. I'd be happy to see it go mainstream. I'd be happy to see Apple wallets have a crypto wallet in it and you can sign with FaceID. Super great. Uses TEEs on the device already. Wonderful. Kind of sucks for the Indie developers, but it's good for the end user.

[:

Yep. There may or may not be at least one large company doing this, but it's fine.

[:

Yeah. I feel like there already are. Like Cloudflare, I know --

[:

They always have though.

[:

Cloudflare. I would actually put money on Cloudflare using ZK first.

[:

But they did. They always did. They just used it in a really tiny way.

[:

But as in like using more, like using recursion, for instance. I would actually, if anyone uses it first, it's probably them.

[:

Another worst outcome then is like a very bad big project or big company or big entity or state or something using it. Like maybe that would be the opposite.

[:

It's permissionless, Anna. There's no good or bad here.

[:

Actually, worst outcome, some kind of huge breakthrough but gets patented. And then like no one cares.

[:

Oh that one is --

[:

That's what I worry about for FHE, by the way, guys. Too many patents in FHE. Not going to name names, but someone. I think they might kill their own industry if they don't like open it up a little bit more. Is there anything you guys want to share about what's coming up?

[:

For me, nothing specific, just more research hopefully.

[:

Honestly, just like -- I feel like I've spent the last year falling out of the closer-to-cryptography stuff and getting back more into DeFi stuff. I think that's -- because I think it really is back.

[:

DeFi-pilled once more.

Before we sign off, I just wanted to share a bit about this show as well, because as you know, we've been on break, and we haven't been releasing as regularly, although we have released one a month, which is cool. So yeah, the plan is after taking a bit of time off, I had a chance to -- I don't know, get a little bit of inspiration going. A lot of these new use cases and directions, these are things that I've been reading about. And so, I'm really excited to come back to these interviews and to start up the show again.

The plan right now is to start releasing again around mid January. I don't have anything scheduled yet or recorded. So maybe we have to push that back by a week, but that's what we're looking at sort of mid January. I'm still not sure if I come back weekly, like every week, all the time, but I'm sort of playing with the cadence and we'll figure that out over the next few months.

But yeah, I mean, I just -- I feel like taking some time away. What it made me do really was it made me start to miss these interviews and obviously seeing you guys, my co-host, but also digging into these topics. I find it always very challenging intellectually. It's broadening in a lot of ways instead of sort of narrowly focusing in on my day-to-day. It's really great.

I think one of the challenges I've had is being super, super hands-on and operational on the events front and running these three companies. But yeah, I think in the new year, I mean, I have an amazing team around me with Gaylord and Quentin over at ZK Hack, Agni and Rachel and Henrik and everyone over at the podcast, also the entire ZKV team, who's really rolling right now. So I feel like, yeah, in the new year, I'm going to be able to focus more on these episodes, which is something I really want to do.

ng up. So in the new year, in:ast time we were there was in:

Yeah. So I hope I'll see a lot of you there. Put that in your calendar. Just so you know that we won't even be opening applications or anything until February. We want to get sort of fresh research for the talks. And also just generally we tend to do that in more of like a three month window. But if you're gonna be there, if you're planning on being there, just put that in your calendars already, and stay tuned on all the channels, because that's where I'll be sharing more.

All right, guys, thank you to all of you for joining this sort of finale to the year on the ZK Podcast, this special episode where we looked back and looked forward. Yeah. Thanks for being on.

[:

Thank you.

[:

Glad to be here.

[:

Thank you for having us.

[:

I want to say thank you to the podcast team, Henrik, Rachel and Tanya, and to our listeners. Thanks for listening.