In this week’s episode, Anna and Nico chat with Chelsea Komlo, Chief Scientist for the Zcash Foundation and member of the Cryptography, Security, and Privacy lab at the University of Waterloo.

They discuss what sparked Chelsea’s interest in cryptography research, starting with her work contributing to Tor, to her move to Zcash and her PhD work on Threshold Signature Schemes. They define some important terms around different signature schemes and discuss possible optimizations that can be used to make these more performant. They then dive into her work on the FROST Threshold Signature Scheme plus some new upcoming work.

Here are some additional links for this episode:

zkSummit11 is happening next week, head to the zkSummit website to apply for a waitlist spot now. The event will be held on 10 April in Athens, Greece.

Check out the ZK Jobs Board

Aleo is a new Layer-1 blockchain that achieves the programmability of Ethereum, the privacy of Zcash, and the scalability of a rollup.

Dive deeper and discover more about Aleo at http://aleo.org/

If you like what we do:

Transcript

Anna Rose:

Today, Nico and I are here with Chelsea Komlo. Chelsea is part of the cryptography, security, and privacy lab at the University of Waterloo and is Chief Scientist for the Zcash Foundation. Welcome to the show, Chelsea.

Chelsea Komlo:

Yeah, thank you so much for having me. I'm a big fan, so I'm really happy to be here.

Anna Rose:

Cool. Yeah, we've actually wanted to have you on the show for a long while. We tried a few different routes, and I'm so glad we've made this happen. Hey, Nico.

Nico Mohnblatt:

Hi, Anna. Hi, Chelsea.

Chelsea Komlo:

Hey. Yeah, thank you so much for all the work you all do in the community. It's like, it's really fun to see all the shows you all do, and it's just great.

Anna Rose:

Cool.

Nico Mohnblatt:

Oh, thanks.

Anna Rose:

I mean, we're really curious to find out more about you too. So I think the first question I wanted to understand was like, what was the spark that got you interested in the topics that you work on?

Chelsea Komlo:

Oh, that's fun. So actually... Yeah, so before going to graduate school, I was an engineer, which has actually been very helpful in doing cryptography research, because I feel like I can put my prior engineering hat on and think about like, oh, what would we actually want to deploy in practice? So that's been really helpful when designing research. But yeah, before going to graduate school, I was an engineer, and I worked on cryptography and privacy protocols. So I contributed to Tor, I did a little bit of work on Enigmail, I worked on some of the OTR protocol, Off-The-Record messaging protocol for secure messaging. Yeah, and I just loved it. And then I wanted to be able to design cryptography. So, but for threshold signatures, that topic really came out of the Zcash Foundation. So in trying to make threshold signatures more usable and easy to deploy, that's where kind of my current work has come from.

Anna Rose:

I want to go even further back, though. What got you interested in working on Tor and on this type of engineering? What drew you to that topic?

Chelsea Komlo:

...but I guess it was back in...

Anna Rose:

Cool. What was it like working on Tor? What does it mean to do that? Is it sort of just like contributing from afar or were you like more in the org?

Chelsea Komlo:

Yes, so I was a core Tor contributor. I wrote some Rust that helped inspire the use of Rust in Tor, which is great, because it's a memory safe language. I was on the board for a little while. And overall, I think it's great being a technical person, being able to contribute to projects that people need. So like people use Tor to circumvent censorship or to look up things privately, and I think it's really great for technical skills to be used for sort of altruistic motivations like that.

Anna Rose:

Did you after that then look to study it? Because as far as I know, you're doing a PhD now. Like, yeah, what was happening maybe at the same time academically? What led you to the work you're doing today?

Chelsea Komlo:

Yeah, so I was on a team. We contributed to a lot of open source projects like Tor and Enigmail. And then we started doing work on a new version of Off-The-Record messaging protocol. And that's the protocol that Signal eventually built on. So it basically does ratcheting, so you have forward secrecy in your messages. And in contributing to that, I got to know Ian Goldberg, who's at the University of Waterloo, and I wanted to be someone who could design those protocols myself and write proofs of security and think about not just implementing what exists, but also thinking about how do we design new things. And that's what inspired me to go get my PhD, and I've been lucky enough to work with Ian Goldberg and Douglas Stebila, who are my co-advisors. And it's an amazing thing to be able to think of ideas and be able to prove them secure. It's very hard. It's non-trivial.

Nico Mohnblatt:

Oh, yes

Chelsea Komlo:

But I know... And you make tons of mistakes. I feel like I've made every mistake in the world at this point, but I'm sure there's more mistakes to make. But it's a fun expertise to have.

Anna Rose:

Nice.

Nico Mohnblatt:

Ian Goldberg, actually, I should add, is one of the OG cypherpunks, for those of you who know the history of where he came from with censorship resistance.

Chelsea Komlo:

Yeah. Ian has a wealth of knowledge about where privacy started and where we are. And he's teaching a new class at the University of Waterloo on just the new generation of privacy enhancing technologies. And there's so much more we have today, like where private information retrieval is going, secure messaging, it's really an exciting space, and it's amazing to see people deploying these technologies.

Anna Rose:

There was this legendary hackathon in...

Chelsea Komlo:

Yeah, I mean, it's a big lab. So I'm in the CrySP lab, the Cryptography, Security and Privacy Lab, and people are working on all kinds of things. So like everything from private machine learning to more pure cryptography, to applied cryptography, to censorship resistance. So there's a lot of different topics. It's really fun. I actually... so Alfred Menezes teaches at the University of Waterloo, and I was sitting in his cryptography class and Vitalik actually came to give a guest lecture. So it was kind of a very full circle moment, which was really fun. There's a lot of post-quantum work going on as well. So David Jao, who kind of came up with SIDH, Supersingular Isogeny Diffie-Hellman, is faculty there. So there's just a lot of topics. It's a fun place to be.

Anna Rose:

Very cool.

Nico Mohnblatt:

So Chelsea, what has been your focus within the lab?

Chelsea Komlo:

Yes, since...

So network rounds are very important, because parties could go away, or you could have network latency, or packets could drop. And so multi-party protocols that have fewer rounds are much easier both to implement and overall faster. So the core question was, can we have a threshold signature with fewer rounds than five? And there was kind of a folklore protocol for how to do a scheme in three rounds, but then the question we wanted to ask is, well, can we do even better? That's where one of the first projects I worked on comes in, which is called FROST. It stands for Flexible Round-Optimized Schnorr Threshold Signatures, and we were able to come up with a threshold Schnorr signature scheme that is secure in two rounds.

Anna Rose:

Oh, wow.

Chelsea Komlo:

And what's very nice about FROST is that the first round can be preprocessed as well. So if you especially care about network latency, you can do the first round in kind of a batched manner, and then you can just have one online round. And that was something people were interested in. So it's kind of taken off since then, and I've done a lot of follow-up work on threshold signatures since then. I also think an interesting question is, why do we care about Schnorr signatures now? And why is there so much work going on in Schnorr signatures? Yeah, so Schnorr signatures have existed for a long time; in terms of the history of cryptography, they were a very early signature scheme to emerge. And really, what a Schnorr signature is, is it's just a proof of knowledge of discrete log, but it's bound to a message. So your secret key is some field element, and your public key is some group element, where your secret key is the discrete log of your public key. And so all a Schnorr signature is, is just proving knowledge of the discrete log of your public key, but you additionally bind the signature to some message.

And that's all a Schnorr signature is. So it's very, very simple. It's existed for a long time. But it's quite amenable to building protocols on top of, because it's linear. So it's quite amenable to building things like multi-party signatures, because basically what you can do is you can have all the parties derive a signature share, which itself is a Schnorr-like signature, and then you can just aggregate all the signature shares and come up with itself, the aggregated signature is itself a Schnorr signature. So basically these pieces are very composable and Schnorr signatures lend themselves to sort of more advanced primitives that are easily combinable.
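A minimal sketch of the Schnorr signature described here, over a toy subgroup of the integers mod a small prime (the parameters, helper names, and hash-to-scalar construction are invented for illustration only; real deployments use elliptic-curve groups and carefully specified encodings):

```python
import hashlib
import secrets

# Toy group parameters (illustration only -- real deployments use
# elliptic-curve groups such as secp256k1 or edwards25519).
P = 607          # prime modulus
Q = 101          # prime order of the subgroup (Q divides P - 1 = 606)
G = 122          # generator of the order-Q subgroup: 3 ** ((P - 1) // Q) % P

def h(*parts) -> int:
    """Fiat-Shamir challenge: hash the transcript down to a scalar mod Q."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    sk = secrets.randbelow(Q - 1) + 1      # secret key: a field element
    return sk, pow(G, sk, P)               # public key: g^sk; sk is its discrete log

def sign(sk, pk, msg):
    k = secrets.randbelow(Q - 1) + 1       # fresh random nonce -- never reuse
    R = pow(G, k, P)                       # commitment
    c = h(R, pk, msg)                      # challenge binds the signature to msg
    s = (k + c * sk) % Q                   # response
    return R, s

def verify(pk, msg, sig):
    R, s = sig
    c = h(R, pk, msg)
    return pow(G, s, P) == (R * pow(pk, c, P)) % P
```

The linearity Chelsea mentions is visible in the response `s = k + c * sk`: responses from several parties can simply be added together, which is what makes multi-party variants natural.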

Anna Rose:

Who was Schnorr? Who is this person?

Chelsea Komlo:

Claus Schnorr is still a cryptographer. Gosh, I wish I knew all of the things that he worked on, but the signature scheme is named after him. And I think it came out in the 90s, but I'm not sure.

Anna Rose:

Okay. So a long time by cryptography standards. We're talking 30 years. Yeah, 20, 30 years. Okay.

Chelsea Komlo:

Yeah. Yeah.

Anna Rose:

Cool

Nico Mohnblatt:

Yeah, you were talking about aggregating multiple signatures. Are we talking about signatures over different messages, or over the same message from different people? What do we mean by aggregation here?

Chelsea Komlo:

Yes. So I know there's been work done where you can aggregate over different messages, but when I'm talking about aggregating here, it's over the same message. And so I guess maybe I can take a step back. Let me take a step back and talk about how a threshold Schnorr signature works, and then I think it'll become obvious what aggregation means in this setting. So the goal for a threshold Schnorr signature is to output a signature that is verifiable under the same algorithm, the same verify algorithm as single-party Schnorr. And the reason why that's nice is because if you're an implementer and you already verify Schnorr signatures, then you can also verify threshold signatures without having separate logic. So from an implementation perspective, it's very useful because it just allows for more simplicity and less logic switching.

So the goal is to output a plain Schnorr signature, but instead of having one party control a secret key, you have many parties. And so what's nice about that is then you have things like redundancy, so if one party loses their key, you still have other parties that can still sign. And you also have a distribution of trust. So if you, say, control a large amount of funds for your customers, your customers might want some kind of assurance that one party can't just disappear with the funds, and so you might want to distribute trust of that secret key. So we want a couple of things. We want unforgeability under all of these secret key shares. We also want unforgeability under the single key that's secret shared among all the parties. And then we want to make sure that when all the parties sign, those what we call signature shares aggregate to a single Schnorr signature. Yeah, so I guess in this case, what we're talking about is one message. There can be variants with additional messages, but that kind of lends itself to a different protocol and different properties.
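To make the aggregation concrete, here is a single-shot sketch of how Shamir-shared keys and Lagrange coefficients combine signature shares into one plain Schnorr signature. Toy parameters, and it deliberately omits FROST's two-round structure and binding factors, so this illustrates the algebra only, not a secure protocol:

```python
import hashlib
import secrets

# Toy parameters (real FROST runs over an elliptic-curve group and adds
# binding factors to resist concurrent attacks; none of that is shown here).
P, Q, G = 607, 101, 122   # G generates the order-Q subgroup of Z_P*

def h(*parts) -> int:
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def shamir_share(secret, t, n):
    """Trusted-dealer Shamir sharing: share i is f(i) for a degree t-1 poly."""
    coeffs = [secret] + [secrets.randbelow(Q) for _ in range(t - 1)]
    return {i: sum(c * pow(i, e, Q) for e, c in enumerate(coeffs)) % Q
            for i in range(1, n + 1)}

def lagrange_at_zero(i, signer_ids):
    """Coefficient that weights share i when interpolating f(0)."""
    num, den = 1, 1
    for j in signer_ids:
        if j != i:
            num = num * j % Q
            den = den * (j - i) % Q
    return num * pow(den, -1, Q) % Q

def threshold_sign(shares, signer_ids, pk, msg):
    """Each of t signers contributes a share; the sum is a plain signature."""
    nonces = {i: secrets.randbelow(Q - 1) + 1 for i in signer_ids}
    R = 1
    for i in signer_ids:
        R = R * pow(G, nonces[i], P) % P    # joint commitment
    c = h(R, pk, msg)
    s = 0
    for i in signer_ids:                    # signer i's share of the response
        lam = lagrange_at_zero(i, signer_ids)
        s = (s + nonces[i] + c * lam * shares[i]) % Q
    return R, s

def verify(pk, msg, sig):                   # unchanged single-party verifier
    R, s = sig
    c = h(R, pk, msg)
    return pow(G, s, P) == (R * pow(pk, c, P)) % P
```

Because `verify` is exactly the single-party Schnorr verifier, the aggregated `(R, s)` is indistinguishable from a one-signer signature, which is the property being described.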

Anna Rose:

Why is there work on this now if it already existed for some time before?

Chelsea Komlo:

Yes, that is kind of a spicy question, actually. So Schnorr signatures were patented.

Anna Rose:

Ooooh.

Chelsea Komlo:

And so a lot of people don't know this. So a lot of people don't understand why we have both ECDSA and Schnorr. And ECDSA is actually quite painful in the multiparty setting. So you can design very, very simple multiparty Schnorr protocols, and for ECDSA, it's much harder because of the structure of the signature. But what happened is we had Schnorr signatures first, but then a patent was issued and then ECDSA was designed to kind of circumvent that patent. And so we were left with essentially years of building more complicated protocols around a more complicated scheme that circumvented a patent. So yeah, so I think this is a really interesting thing to know and to talk about because I've heard arguments for patents, which say things like, oh, we're investing resources into cryptography, we should be able to reap the rewards of those resources, which I think is a compelling argument. But I think history shows us that patents and cryptography lead to decades of work that we didn't necessarily need to do. And actually schemes which are harder to implement and potentially less secure, because there could be bugs in implementations

Nico Mohnblatt:

It's also harder to analyze, right? ECDSA from a proving perspective.

Chelsea Komlo:

Yes. So I think as a community, we really need to be conscientious of where we came from, and think really hard before diving into patents, because, for a certain company, it might be beneficial, but for the community as a whole, it's very difficult to design around.

Anna Rose:

Which company did the patent? Like, could they have built more stuff?

Chelsea Komlo:

Yeah, I haven't looked at the patent myself, so I can't go into that much detail. I just sort of know that it was patented, and that delayed a lot of things, and ECDSA came out around it. And so I guess the important thing, though, is that the patent expired. I'm not exactly sure when the patent expired, but that's where, as I understand, we've seen a lot of re-emerging interest in Schnorr signatures and things being built and developed around it. So there was a delay in research and development, and then when the patent expired, we were able to say, oh, this is a protocol we can actually deploy now again. And so that's why we've seen a re-emergence of research in this area.

Anna Rose:

What's the connection between Schnorr signatures and Bitcoin or Zcash? Like, you said sort of it came out through your work at Zcash. I'm guessing Bitcoin-oriented. Yeah, what's the connection?

Chelsea Komlo:

Yes. So Zcash uses a variant of Schnorr signatures called RedDSA. So it's a re-randomized variant of EdDSA signatures. Again, the difference between EdDSA signatures and Schnorr is extremely minor. With EdDSA, you hash in the public key when you're signing the message, but the structure is essentially the same. So when I'm saying Schnorr, you can sort of also substitute EdDSA. They're extremely similar. So Zcash was already using a variant of Schnorr, and then Bitcoin recently with Taproot is starting to move to Schnorr-style signatures as well.

Anna Rose:

Okay.

Nico Mohnblatt:

Just to emphasize, EdDSA is not ECDSA. And I've seen people make that mistake before. And they look very similar.

Anna Rose:

I kind of made a mistake in our last episode where I used it incorrectly.

Nico Mohnblatt:

It can be confusing, but yes: EdDSA is Schnorr-like; ECDSA gets around the Schnorr patent.

Chelsea Komlo:

Yeah, don't blame the users. The names are very confusing.

Nico Mohnblatt:

No, of course.

Anna Rose:

So I guess this then leads us to the work you did on FROST, because, just to sort of go back to the purpose of it, you talked about the rounds. Are the rounds a function of speed or cost? Like trying to get these rounds down sounds like a good idea, but what are you actually accomplishing when you do that?

Chelsea Komlo:

Yeah, it's speed, essentially. So anytime you have a multi-party protocol, all the parties start, then they send messages to all the other parties, then they do some processing, then they send messages again, and eventually, you have some kind of output. So any time you send a network message, there's delay. So you have to wait for all the messages to arrive. If not all the messages arrive, you need extra logic. So there's kind of a lot of complexity that goes into network rounds. And for things like signing, if you're an exchange and you have to issue millions of signatures a day, having fewer network rounds is quite helpful.

Anna Rose:

Who are the agents who are actually getting that speed up though? Is this for the miners? Is this for the wallet? I'm kind of curious where this actually gets used.

Chelsea Komlo:

Yes, that is a good question, and I think that does play into some of the discussion around, do we actually... There's been a discussion around, do we actually need speed in these signatures? And there's been similar discussions, I think, as well around, do we need speed in zero-knowledge proofs? So I think Henry de Valence, who is with Penumbra, said something which I thought was very useful to think about, which is for a client-side application, the most speed you need is enough time for the user to process something, which is kind of slow in terms of computer speed. So I think that that's actually something useful to think about. So yeah, if this is a wallet and it's on a user application, you can probably tolerate speeds of up to a second. But if this is an application, so for example, there's companies that hold shares on behalf of their clients. And so the agents are servers performing signatures directly, then speed matters because these are just computers talking to other computers.

So the agents differ, and the speed requirements differ as those agents differ. But then also in terms of complexity, network rounds can be important as well, because you have things like packets dropping and other things like that.

Nico Mohnblatt:

It's funny that you mentioned speed and Penumbra because in a recent episode, that probably came out a few episodes ago, we talked about the same thing with Alin, and I made the same shout out to Henry and Penumbra and how their tools do everything in the background. So shout out once again.

Chelsea Komlo:

Yeah, I think it's important for assumptions, I think it's important to challenge them. So sometimes I think we get very caught up in speed, and sometimes it doesn't matter. And I think that's important to know, and it's context dependent.

Nico Mohnblatt:

Absolutely.

Chelsea Komlo:

Other things that I think are also important to know is like on the cryptography side, if you're designing cryptography systems, you want the most security possible in every dimension. But sometimes those security trade-offs are acceptable in other dimensions as well, such as for speed or simplicity. So there's been a tension around... So FROST, for example, is secure under an interactive security assumption, and schemes that have more network rounds can be proven secure under weaker assumptions. And so there's been a lot of discussion around what assumptions are fine, what do users want, what is better? And I think the important lesson to come out of this is that it depends and it's context dependent.

Nico Mohnblatt:

So is this where the FROST magic happens? Like, is this how you reduced all these rounds?

Chelsea Komlo:

Yes. So exactly. So schemes with more rounds can be proven secure under, for example, discrete log directly, which is a weaker and kind of well-understood assumption.

Nico Mohnblatt:

More common, yeah.

Chelsea Komlo:

Yes. So FROST requires an interactive assumption. It's under what we call the algebraic one-more discrete log assumption, still in the random oracle model. So at least there's that. But it's an interactive assumption, and it basically says, let's say you're given L plus 1 discrete log challenges, and you're allowed L queries for solutions. Can you produce L plus 1 solutions? So basically, can you produce one more solution than the queries that you're allowed? So.
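A toy model of the interactive game being described (the class and method names are invented for this sketch, and the real algebraic one-more discrete log assumption additionally restricts the adversary to algebraic strategies, which this ignores):

```python
import secrets

# Toy group, illustration only.
P, Q, G = 607, 101, 122

class OMDLGame:
    """One-more discrete log game with L+1 challenges and L oracle queries."""

    def __init__(self, num_challenges):
        self._secrets = [secrets.randbelow(Q) for _ in range(num_challenges)]
        self.challenges = [pow(G, x, P) for x in self._secrets]
        self.queries_used = 0

    def dlog_oracle(self, index):
        """Solve one challenge for the adversary; every call is counted."""
        self.queries_used += 1
        return self._secrets[index]

    def adversary_wins(self, answers):
        """Win: all discrete logs correct, using fewer queries than challenges."""
        if len(answers) != len(self.challenges):
            return False
        correct = all(pow(G, a, P) == c
                      for a, c in zip(answers, self.challenges))
        return correct and self.queries_used < len(self.challenges)
```

The counting is the whole point: the adversary must produce one more discrete log than it was allowed to ask the oracle for, and the assumption is that this is infeasible.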

Nico Mohnblatt:

Sounds reasonable.

Chelsea Komlo:

I think intuitively, it sounds reasonable. This assumption has been out for a while. It's very hard to prove blind signatures secure without this assumption. So this assumption was introduced in the context of blind signatures. So again, it's context dependent. I think I personally feel this assumption is reasonable and especially in a practical setting, it's a fine tradeoff. But again, it's context dependent.

Nico Mohnblatt:

One last thing I wanted to ask, when we start a threshold signature, do the signers have their private key that they generated on their own? Or do they have to somehow have shares of a key? Where do we start from?

Chelsea Komlo:

Yes. The bootstrapping question is a very important question to ask. So signers have to bootstrap with a secret shared key. It's Shamir's secret shared. So basically, every party has a point... Their share is essentially a point on a polynomial, and the secret key, this joint secret key, is the constant term of that polynomial. And when you combine signature shares, what you're doing implicitly is polynomial interpolation to some other point on the polynomial, which is unknown. So really, the magic, it's very simple, is given t plus 1 points on a polynomial, you can find any other point on the polynomial. So that's all that's happening under the hood. And so we bootstrap by using either just plain Shamir secret sharing using a trusted dealer or another multi-party protocol, which is what we call a distributed key generation scheme.

And so what that is, again, you have all the parties, they're all participating, and the output from that protocol is secret key shares that every party holds that combine to some secret key that no party knows, but all parties have contributed to. So it's kind of like a magic black box, every party throws in some randomness, and at the end, they all get secret key shares that combine to a secret key that no one has actually seen.
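A sketch of that "magic black box" under toy parameters. This omits the commitments and complaint rounds that real DKGs such as Pedersen's use to catch misbehaving dealers, so it only shows the honest-party arithmetic:

```python
import secrets

Q = 101  # a small prime field, for illustration only

def poly_eval(coeffs, x):
    return sum(c * pow(x, e, Q) for e, c in enumerate(coeffs)) % Q

def toy_dkg(n, t):
    """Every party deals a random degree t-1 polynomial to all others."""
    polys = {i: [secrets.randbelow(Q) for _ in range(t)]
             for i in range(1, n + 1)}
    # party j's final share is the sum of the sub-shares it receives
    shares = {j: sum(poly_eval(polys[i], j) for i in polys) % Q
              for j in range(1, n + 1)}
    # the joint secret is the sum of every dealer's constant term; it exists
    # implicitly, but no single party ever computes it (returned here only
    # so the sketch can be checked)
    joint_secret = sum(polys[i][0] for i in polys) % Q
    return shares, joint_secret

def lagrange_interpolate_at(x0, points):
    """Recover f(x0) from t points on a degree t-1 polynomial."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x0 - xj) % Q
                den = den * (xi - xj) % Q
        total = (total + yi * num * pow(den, -1, Q)) % Q
    return total
```

Any t of the n shares interpolate to the same joint secret at zero, which is exactly the "t plus 1 points determine any other point" fact for a degree-t polynomial, here with degree t-1 and t points.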

Anna Rose:

So the thing I'm still not entirely clear on is almost like in a system like Zcash, do you implement this somewhere or is it on an app, like is it on a wallet level? Like I understand the research being created, but I don't really know where it fits in.

Chelsea Komlo:

Yeah, so the Zcash Foundation has implemented FROST and also a DKG and a trusted-dealer keygen, and then applications can pull it in as they need to. So if you're a Zcash wallet, you will pull this into your application. And the team right now is doing some work to make pulling that in a little easier. So they're working on demos and other things like that. But this is kind of like a core library, and then it's pulled into various applications. And this is why it's being used outside of Zcash as well, because it's essentially protocol agnostic. So you can pull it into other wallets that maybe need to implement Schnorr signatures.

Nico Mohnblatt:

And like you said, usually the applications will be either creating redundancies or making sure that if you lose one of your key shares, you have more and someone stores them for you.

Chelsea Komlo:

Yes.

Nico Mohnblatt:

So account recovery or multi-sig, like distributing trust, having multiple people having to sign.

Chelsea Komlo:

Yes. So something that's very important to know, so we didn't write this in the FROST paper originally, but there's tricks for doing share recovery. So if one party loses its share, there's established protocols out there for re-deriving that share or creating new shares so that that party can recover their signing key. And there's also protocols for generating new shares for new parties. So if you're out there and you have questions about like, okay, do I have just a set number of shares, or is this dynamic? The answer is yes, it's dynamic, and protocols exist for that.

Anna Rose:

And you've mentioned sort of multiple parties throughout this, but we haven't actually said like multi-party computation. Does it fall under that category, or are we working beside, or are they just similar?

Chelsea Komlo:

Yes. So it is, yes. So technically threshold signatures is a special form of multi-party computation. I think it's nice to distinguish because when you say a multi-party computation, this can be done generically. So you can essentially take any function and distribute it among parties. And there's generic tools for doing this. Those generic tools could also be used to do threshold signatures, but they're generally less efficient. And so, and this is common in other kinds of cryptography, where you can design generic tools, but they're less efficient, or you can design schemes that are for specific use cases, and then you can tailor them for that specific use case, and they tend to be more efficient. So I think it's kind of nice to distinguish. Hopefully, one day we'll have generic MPC that can just MPC everything. And people, I think, especially in the FHE world, are moving in that direction, which is very exciting. But right now, we have the very simple threshold signature case, which is quite tailored.

Anna Rose:

Got it. I'm glad I got to work that question in, because just a shout out to Nigel Smart, who did finally introduce us. He was the one who finally made the connection, and he had recommended it. We've done an episode in the past, or actually two episodes on MPC, but a pretty recent one. So we can also link to that.

Chelsea Komlo:

Yes, absolutely. And yeah, I am really excited for what places like Zama are doing. Generic MPC and FHE are really strong tools, and it's really exciting to see where they will go in the future. So hopefully we can have FHE for everything and all of our problems will be solved.

Anna Rose:

Easy. Although what it looks more and more like is it's these combos like ZK and FHE, so anyway.

Nico Mohnblatt:

So back to FROST, you said FROST was developed coming from the use case that Zcash needed. Is this kind of a standard technique? Is this, I guess, being standardized for other people to use?

Chelsea Komlo:

Yes. So we wrote an informational draft for the CFRG, the Crypto Forum Research Group, within the IRTF. The reason why we did it, actually, was because we put out FROST, and then I had a lot of people emailing me to say that they were implementing FROST. But they were implementing it in slightly different ways and making slightly different choices around things like serialization of data...

Nico Mohnblatt:

Hash function.

Chelsea Komlo:

Yeah, like slight variations that I knew down the line would potentially be confusing and then potentially having bugs as well. So I...

Nico Mohnblatt:

Was it very stressful for you? Seeing the work being taken apart like that.

Chelsea Komlo:

I mean, it was exciting, but also I was worried that, well, one, we would have a lot of incompatibilities. And I actually had auditors tell me that they'd seen bugs pop up. So for example, someone told me that they saw an implementation of FROST where the nonces were being derived deterministically as they're done in EdDSA. So instead of sampling nonces at random, the nonces were generated by hashing the secret key and the message. So if you're used to single-party EdDSA, this is what you do. You hash the message and the secret key to generate your nonce, which is private. So that's totally reasonable. But in the FROST setting, this leads to a secret key recovery attack in two signing sessions.

Anna Rose:

Whoa.

Chelsea Komlo:

So it's like a total break if you do this.
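The classic single-party version of this failure mode is easy to demonstrate. This is a toy sketch; the attack on deterministic FROST that Chelsea describes is more involved, but the underlying algebra, two responses sharing one nonce, is the same:

```python
import hashlib
import secrets

# Same style of toy Schnorr group as elsewhere; illustrative parameters only.
P, Q, G = 607, 101, 122

def h(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def sign_with_nonce(sk, pk, msg, k):
    """Schnorr signing with a caller-chosen nonce (the bug being modelled)."""
    R = pow(G, k, P)
    c = h(R, pk, msg)
    return R, (k + c * sk) % Q

def recover_key(pk, msg1, sig1, msg2, sig2):
    """Two signatures sharing a nonce leak sk = (s1 - s2) / (c1 - c2)."""
    (R1, s1), (R2, s2) = sig1, sig2
    assert R1 == R2, "attack needs a repeated nonce"
    c1, c2 = h(R1, pk, msg1), h(R2, pk, msg2)
    return (s1 - s2) * pow(c1 - c2, -1, Q) % Q

def demo():
    sk = secrets.randbelow(Q - 1) + 1
    pk = pow(G, sk, P)
    k = secrets.randbelow(Q - 1) + 1          # one nonce, reused twice
    sig1 = sign_with_nonce(sk, pk, "msg-1", k)
    # find any second message whose challenge differs (almost all do)
    for i in range(1000):
        msg2 = f"msg-2-{i}"
        sig2 = sign_with_nonce(sk, pk, msg2, k)
        if h(sig1[0], pk, "msg-1") != h(sig2[0], pk, msg2):
            break
    return sk, recover_key(pk, "msg-1", sig1, msg2, sig2)
```

`demo()` recovers the exact secret key from two signatures, which is why the specification requires nonces to be sampled fresh at random for every signing session.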

Nico Mohnblatt:

Is this the insecurity of ROS paper?

Chelsea Komlo:

No, so ROS-

Nico Mohnblatt:

Different, okay.

Chelsea Komlo:

Yeah, ROS comes down to how the scheme is actually designed. And so FROST was one of the first schemes that was secure against ROS. And it's because of how FROST essentially has two nonces, you hash in the transcript from the first round, and then that becomes your overall nonce, and that avoids ROS attacks. But the key recovery attack, if you derive your nonces deterministically, is just... Because the adversary has input into the challenge. So essentially, as long as the adversary doesn't follow the protocol deterministically, but the honest player does, there's a trivial key recovery attack that you can't detect. So someone told me this, and I was like, oh no, it would be great if we actually had something. Because the research paper isn't written with exact engineering details. It's basically enough to show what's going on so that you can prove the scheme secure. But it's not enough detail for engineers to follow to make really important decisions like serialization or ordering or other details that are somewhat important. So yes, that's why we decided to write this informational draft basically, because people were implementing it and we wanted something that was more useful. So that process is wrapping up right now. And conveniently, NIST has also put out a call, or they're very close to putting out a final call, for threshold schemes as well. And we'll be basically taking what we submitted to CFRG and turning that into a NIST submission.

Anna Rose:

But then with NIST, does NIST need to choose that? And then it becomes sort of the standard, but you're kind of up against other types of?

Chelsea Komlo:

So this threshold call is different than what NIST did for the post-quantum call. So for single-party signatures, I think it's easier to have a uniform competition, because you can do things like define what the API is, define what the inputs and outputs are. And this is what was done for the NIST post-quantum competition. For threshold signatures, it's a little harder, because there's so much variation within the actual schemes themselves. So even though for threshold EdDSA they all might be putting out a single-party EdDSA signature, the internals of the schemes are all quite different. I think right now NIST is trying to decide what they'll do, but this call is basically, send us your schemes in more detail than in the paper, and then what happens after that is still being decided, as I understand. So yeah, so Luis, who's at NIST, who's sort of organizing all of this, would be a great person to have on the show. So he has more context and a plan and vision for where this will go. So hopefully you can have him on and ask him these questions.

Anna Rose:

Maybe we could do an Ep on standardization generally. We've never talked to anyone from NIST.

Chelsea Komlo:

Yes, he would be a great person to have insight into what they're trying to do.

Nico Mohnblatt:

So we now have or soon to be a FROST standard, what comes next? What's the rest of Chelsea Komlo's work?

Chelsea Komlo:

Hopefully, I'm just getting started. So I have grand plans for the future. But I guess one thing that this process has taught me, and I kind of referred to this before, is that there's different trade-offs in deploying cryptography into practice. So the trade-offs I sort of think about are things like usability, security assumptions, and then performance. So those are the different axes when you're designing a scheme, and you have different trade-offs along those different axes. So before, I talked about how people were using FROST deterministically and it was trivially broken. I've been thinking about how to do deterministic FROST for a long time, and it's a very hard problem. It's extremely difficult to do it in a way that's secure. There's been work done to do deterministic threshold Schnorr signatures. And basically, those works require things like generic SNARKs or generic MPC.

So you can do deterministic threshold Schnorr, but it requires some kind of heavyweight tools. I guess, so even taking a step back, when I say deterministic thresholds, the reason why this is something we want is not only because you don't have to rely on fresh sources of randomness when you're generating your signature, but also because signers can be stateless. So basically, you perform a round, and you don't have to save any state. And then you perform your next round, and you can just re-derive all of the state.

So from an implementation perspective, this is great because you don't have to cache things in a database: take a lock on the database, look up the thing in the database, carefully delete information that, if you don't delete it, leaks your secret key, and unlock the database.

So we really want... Like these schemes are actually quite attractive, I think, in practice. But currently, they require heavyweight tools. So I have some upcoming work, and I'm interested to see what practitioners think about it. So basically, the work is called Arctic, which I think is nice.

Nico Mohnblatt:

Arctic, yeah.

Anna Rose:

Thematic.

Chelsea Komlo:

Yeah, so nice.

Nico Mohnblatt:

Is it also an acronym?

Chelsea Komlo:

It's not. I couldn't think of an acronym where something like DTS came out to a nice word. I spent a lot of time thinking about it. But Arctic was the best I could do.

Nico Mohnblatt:

How much time, compared to the time spent thinking about the paper?

Chelsea Komlo:

I mean, not as much. But I did try hard to make an acronym and just failed. I'm always on the market for good scheme names. So please tell me your good scheme names in the future if you have them.

Anna Rose:

There are lots of variations on cold things that could fit into all of this.

Chelsea Komlo:

Yes. So I'm on the market for it in the future. Yeah, so basically, Arctic is a deterministic threshold Schnorr signature. And it's very simple. It doesn't require generic MPC or things like generic zero-knowledge proofs. But the trade-off that it makes is that it requires a larger number of assumed-to-be-honest signers. So basically, for FROST, the security model is that you can have t signers, and up to t minus 1 of them can be dishonest.

Anna Rose:

So one honest party would be enough.

Chelsea Komlo:

Exactly. And that's a very nice security model that you can reason about. So you have one honest party. Arctic, by contrast, assumes a total of 2t minus 1 parties, where up to t minus 1 of them are assumed to be dishonest. So really what we require is that out of your total set of signers, the majority of them are honest.
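As an illustrative aside (a toy sketch by the editor, not from the FROST or Arctic papers themselves), the two corruption models just described can be written as simple predicates:

```python
# Toy comparison of the corruption bounds discussed above.
# These counts illustrate the conversation; they are not a security proof.

def frost_tolerates(t: int, corrupted: int) -> bool:
    # FROST: t signers participate; security holds as long as at least
    # one of them is honest, i.e. up to t - 1 may be corrupted.
    return corrupted <= t - 1 and t - corrupted >= 1

def arctic_tolerates(t: int, corrupted: int) -> bool:
    # Arctic (as described in the episode): 2t - 1 total signers,
    # up to t - 1 corrupted, so the honest parties (>= t) are a majority.
    total = 2 * t - 1
    honest = total - corrupted
    return corrupted <= t - 1 and honest > total // 2

# Example with t = 3:
assert frost_tolerates(3, 2)        # one honest signer out of 3 is enough
assert arctic_tolerates(3, 2)       # 5 signers, 3 honest: a majority
assert not arctic_tolerates(3, 3)   # 3 corrupted of 5 breaks the majority
```

The trade visible here is the one Chelsea describes: Arctic buys its simplicity and statelessness by moving from a one-honest-party assumption to an honest-majority assumption.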

Nico Mohnblatt:

Right. So we go from single honest party to majority honest.

Chelsea Komlo:

Yes.

Nico Mohnblatt:

Okay.

Anna Rose:

But could it really just be, like, 51% honest? Kind of just a little bit?

Chelsea Komlo:

Yes, just a little bit more honest.

Anna Rose:

Okay. It's not a 66% situation.

Chelsea Komlo:

No, it's like 51% honest. And it's interesting because we already have assumptions like that in cryptocurrencies, for things like consensus. So in other places, we have 51% honest as an assumption. But so far in threshold signatures, we haven't really explored those kinds of assumptions. And so with Arctic, basically, we say, okay, if you're fine with those kinds of assumptions, you can have a stateless scheme that's pretty simple. So then again, it's up to implementers to say, what trade-offs am I fine with? Am I fine with deploying more signers and then having a simpler scheme that has these nice security properties? Or do I really need that all-but-one-honest?

Nico Mohnblatt:

About your last axis, speed, how does Arctic perform in terms of speed?

Chelsea Komlo:

It's pretty fast.

Nico Mohnblatt:

Is it still two rounds or is it a bit more?

Chelsea Komlo:

It's two rounds.

Nico Mohnblatt:

Okay, lovely.

Chelsea Komlo:

So I guess there's a trade-off. For groups under size 25, it's pretty fast. And for larger groups, MuSig-DN, which requires generic zero-knowledge proofs, is faster. So there's a crossover point. But what we see is, okay, for smaller-sized groups, where you're fine with 51% honest, you can have a faster scheme. But again, I think it's interesting putting out these axes more explicitly and then thinking about what we're fine with. But at least we have all of the options. And applications can say, we don't mind implementing Bulletproofs and we're fine with something being slower, so MuSig-DN is fine. Or, simplicity of the implementation is important to us because we're scared about bugs and we want something fast for smaller groups, so something like Arctic is a better choice.

Anna Rose:

You've sort of said, like, I've heard more about deterministic, but also stateless. Like, are those the same thing? Are those connected?

Chelsea Komlo:

Yes. So determinism is the means to statelessness in this setting. So we have a deterministic scheme, or you might read about deterministic threshold Schnorr schemes, but when we say deterministic, that means that the scheme is also stateless. Because basically, given an input, I can derive the state for some round.

Anna Rose:

And you know what it is.

Chelsea Komlo:

And you know what it is. And then in the next round...

Anna Rose:

So you don't have to save state somewhere else. Okay.

Chelsea Komlo:

Exactly. So if you're given the same inputs to these different rounds, then you can deterministically derive that output.
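A minimal sketch of the stateless pattern being described, with hypothetical names: per-round state is re-derived from the inputs with a PRF instead of being cached between rounds. Deriving signing nonces this naive way in threshold Schnorr is exactly the trivially broken pattern mentioned earlier, so this only illustrates the statelessness idea, not a secure scheme.

```python
import hmac, hashlib

def round_state(secret: bytes, message: bytes, round_no: int) -> bytes:
    # Deterministically (re-)derive this signer's per-round state from its
    # long-term secret material and the protocol inputs, instead of caching
    # the state in a database between rounds.
    tag = b"round-%d" % round_no
    return hmac.new(secret, tag + message, hashlib.sha256).digest()

secret, msg = b"long-term key share", b"tx to sign"

# Round 1: derive state, send a message, persist nothing.
s1 = round_state(secret, msg, 1)

# Later, even after a crash or restart: re-derive round 1's state on demand.
assert round_state(secret, msg, 1) == s1

# Caution: using a bare PRF like this to derive signing nonces in threshold
# Schnorr is the naively deterministic pattern described above as trivially
# broken; secure schemes (MuSig-DN, Arctic) need more machinery than this.
```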

Anna Rose:

Cool.

Chelsea Komlo:

So, but they are kind of thrown around interchangeably, so it's a good question.

Nico Mohnblatt:

When is this work coming out?

Chelsea Komlo:

As soon as I put the paper up on ePrint, which is hopefully this week.

Anna Rose:

Oh, cool. That means...

Nico Mohnblatt:

So if it's out we can link it in the show notes.

Anna Rose:

Yeah, by the time this comes out, we should have it.

Chelsea Komlo:

Yes, that would be great. Yeah, I really am curious. This was kind of a hunch, that I thought something like this would be interesting to implementers, so I'm very curious to hear feedback or thoughts from people working in practice. So as people see it and think about it, if they have questions, I would love to talk to people about it.

Anna Rose:

Cool. In this, you talk about this concept of dishonest, but I don't think we've really talked about what that would mean to be dishonest in this particular case. I mean, we know what dishonesty is for validators sometimes. But yeah, what is dishonest here?

Chelsea Komlo:

Yeah, that's a good question, because we also threw that term around a lot. It generally just means someone about whom you have no assumptions about how they will interact with the protocol. So they could follow the protocol honestly, or they could appear to follow the protocol honestly but, like, store extra stuff. You know, generally with some nefarious goal in mind, like recovering the secret key or outputting a forgery or performing denial-of-service attacks. But technically, when we use that word, it just means a party for which you have no guarantee how they will act within the protocol.

Anna Rose:

Where would these agents act dishonestly? Is it like in the rounds, before, after?

Chelsea Komlo:

Yeah. So it could be at any time. So I'm an adversary, I have corrupted let's say two participants, and I know their secret keys. I could initiate signing rounds with honest parties and follow the protocol exactly. And then I could take everyone's information and then I could try to do something nefarious with it.

Anna Rose:

Afterwards?

Chelsea Komlo:

Afterwards. I could take my secret keys and do something to them, like flip the bits or something. And then I could participate in the signing protocol, honestly, take the stuff that I received and try to do something nefarious with it. So it's really, it's very tricky. And I think writing proofs for these types of schemes is quite hard, because there's a lot of nuance in what a corrupted party could potentially do. So it's anything from before it starts, while the protocol is going on, or even afterwards.

Nico Mohnblatt:

And when we say honest majority, do we mean these actors act honestly throughout the protocol? Or do we say that at each round, a majority of people have to be honest?

Chelsea Komlo:

Yeah, so when I say honest majority, what I mean is that there are t participants who follow the protocol as described, throughout the protocol. So practically, what this means is there are t machines whose secret keys have not leaked. That's practically how it translates, but when you're writing the proof, this is what you mean when you do the modeling.

Nico Mohnblatt:

Are there any schemes that consider the case where the set of honest parties changes between rounds?

Chelsea Komlo:

Yes. Yeah, so there are schemes that consider adaptive security, and this is exactly what you're talking about. So static versus adaptive is, I would say, kind of a more theoretical term with a practical lens. When we write proofs, something that's very easy when writing the proof is saying, let's say you have n parties, and at the beginning of the proof, I say parties 1 through 5 are corrupt. Those are the dishonest ones, the honest ones are the remaining parties, and the world stays the same throughout the proof. But that's not actually what happens in practice. What happens in practice is you can have an adversary that corrupts one machine, and then it determines on the fly who it wants to corrupt afterwards. And this is essentially adaptive security, where throughout the protocol, the adversary can choose who it corrupts. From a proof-writing perspective, this is much harder to model. So I do think there are benefits to writing proofs statically, but the adaptive model is closer to what we see in practice.
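The static-versus-adaptive distinction can be pictured with a toy experiment (purely illustrative, not a formal security model): a static adversary fixes its corruption set before the protocol starts, while an adaptive one chooses whom to corrupt as the rounds unfold, based on what it observes.

```python
def corrupted_after(rounds_observed, strategy):
    # Toy model: collect the parties a strategy corrupts over the protocol.
    # strategy(round_no, observed, already) returns newly corrupted parties.
    corrupted = set()
    for rnd, observed in enumerate(rounds_observed):
        corrupted |= strategy(rnd, observed, corrupted)
    return corrupted

# Static adversary: the corruption set {0, 4} is fixed up front, before
# any protocol messages are seen.
static = lambda rnd, observed, already: {0, 4} if rnd == 0 else set()

# Adaptive adversary: after each round it corrupts whichever party it just
# observed, up to a budget of 2 corruptions.
def adaptive(rnd, observed, already):
    return {observed} if len(already) < 2 else set()

transcript = [3, 1, 2]  # hypothetical "who spoke" in each round
assert corrupted_after(transcript, static) == {0, 4}
assert corrupted_after(transcript, adaptive) == {3, 1}
```

The proof difficulty Chelsea mentions comes from the adaptive case: the reduction cannot know in advance which parties it must be able to simulate.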

Anna Rose:

Wow.

Nico Mohnblatt:

So do you have an adaptive FROST variant?

Chelsea Komlo:

I think we can prove FROST adaptively secure, and just FROST directly, not a variant. The proof is very hard. It is currently being worked on, but it is nontrivial. Yeah, so there's an interesting research question around what theoretical trade-offs you have when you assume adaptive security. So again, if we have simpler schemes which are statically secure, is that better? I think that's an interesting question to think about. And if you're assuming adaptive security in the proofs but you have a less efficient scheme, what does that mean? That's kind of an interesting conversation for all of us to have.

Anna Rose:

Cool. So before we sign off, I want to ask a question not about this new work, but about the work we were talking about before, FROST, and its potential use cases. We mentioned the security of these systems and that people have actually implemented them. I'm curious if you could share some of those implementations.

Chelsea Komlo:

Yeah, so the thing I love about FROST is it's evolved into its own thing, and people are doing lots of stuff with it. And then I learn about it on Twitter, which is the best thing, in my opinion. It's so cool. So one project that seems exciting is Frostsnap. The name is really fun. And it seems like they're implementing FROST for the Bitcoin ecosystem, in hardware.

Anna Rose:

Oh, cool.

Chelsea Komlo:

Again, I think it's amazing because I actually don't know them. I just watched their work on Twitter and I think it's great.

Nico Mohnblatt:

I think I've seen these... It was a picture of little things that plug into a phone and they can plug into each other as well, and then together generate a signature.

Chelsea Komlo:

Oh, cool. Yeah, they'd be fun to have on the show. So I would love to hear how they're actually doing it.

Anna Rose:

Are there ever cases of these kinds of signature schemes being used together with ZKPs? And I'll give you just a bit of context to this question. Like we have seen a lot of crossovers with general MPC and ZKPs or FHE and ZKPs. So I'm just curious if, can something like FROST be used with ZK? I mean, obviously it's used in Zcash, so there's some connection, but is it really being used together with ZKPs?

Chelsea Komlo:

Yeah, so I think this is an interesting question and something that's still being explored. So in the Zcash setting, it's interesting because FROST is used at the signing level. So signers sign a transaction, but then the prover can be a separate entity that isn't trusted to hold the secret signing key. So you can have a setting where you have many signers and one prover.

Anna Rose:

Oh, okay.

Chelsea Komlo:

And that's, I think, a very nice architectural and system design in that these roles are separate so that you can separate out the signing key from the prover functionality. So that's what Zcash is doing. There has been work into multi-party zero-knowledge proof generation. So there's some work, and we can link it in the show notes, but there's some work where you take your witness, your secret witness, and you secret share it among provers, and then the provers generate the proof in a distributed manner, and then they send it back to you, and then you combine it. That's interesting, I think. The downside of that work is you have to outsource your witness to other parties. And maybe that's fine, but maybe sometimes that's not fine.
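The secret-sharing step described here can be sketched minimally. This is an illustrative additive sharing over a prime field, chosen by the editor; the actual multi-party proving protocols involve considerably more structure on top of the sharing itself.

```python
import secrets

P = 2**255 - 19  # a large prime modulus (illustrative choice)

def share(witness: int, n: int) -> list:
    # Additively secret-share `witness` into n shares mod P: any n - 1
    # shares are uniformly random on their own, while all n shares sum
    # back to the witness.
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((witness - sum(shares)) % P)
    return shares

witness = 123456789
prover_shares = share(witness, 3)

# Each prover works on one share and never sees the witness in the clear;
# only the combination of all shares determines the secret.
assert sum(prover_shares) % P == witness
```

This is the property Chelsea points at: the provers jointly compute over the witness without any single prover learning it, though, as she notes, they can still learn what statement is being proved.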

Anna Rose:

That just means there's no privacy, right? Or there's only privacy between you and this other party.

Chelsea Komlo:

Yes. So the witness is secret shared, so they don't learn it, but they do learn what you're proving. So I think there's also been some recent work using FHE, again, which is the magic thing that hopefully will solve everything, where you can have an untrusted server doing FHE to generate your proof without ever actually seeing your witness in the clear. I think some of that work has been done by Sanjam Garg at UC Berkeley. There have been many implementations of FROST, and I won't list all of them here, but in the CFRG GitHub repository for FROST, we list the different implementations and the organizations that have implemented them. Hopefully, we can include those in the show notes so people can go and look at other works that are using FROST as well.

Anna Rose:

Cool. We'll add those, for sure. Kind of bringing it back to your earlier experience and the thing that first got you excited about the general space, working in privacy. I'm just curious what your feelings are about the space today and the research that's being done, and how it's being implemented.

Chelsea Komlo:

So I think it is a thing people care about, and it is a thing people are designing products around. So that's a great place to be. I do think sometimes some privacy research goes a little into the, how do we collect data, but in a way that's private? Or, how do we allow tracking, but in a more private way? And I think practically that research can be useful if it gets us to a better place than collecting everything and storing it. But I think as a community, it always comes back to the ethical implications of your research. We always have to think about what we are designing, beyond it being a cool and hard research problem, which I love myself. I love the challenge, like we all do. But what are the ecosystem implications of the work that we do? And there's always the practical argument: well, in the ad tracking business, people track everything now anyway, so let's do something that's a little better.

Anna Rose:

Make it better.

Chelsea Komlo:

Yeah, so I don't think that's the right answer, but I think there's the moral obligation of the work that we do, and we always have to ask the question about what is making something better versus what is legitimizing tracking. And what organizations do we support, and what goals are they trying to achieve? Again, I think that's why I like FHE, and the work that's being done to make it practical is an amazing feat. That's really exciting. And in the meantime, there are always trade-offs, and every organization makes trade-offs. So, for example, Tor makes security and privacy trade-offs to make the tool fast. Every organization has trade-offs, and you can't get around it, but as long as we're talking about what those trade-offs are and who they benefit, I think that's the thing we really have to think about.

Anna Rose:

I have a variation on this. I was just thinking about this has come up for many months, many years, the idea of ZK being used, not for privacy, but for other things. And I've sort of argued that, well, ZK is a useful short form for all sorts of different cryptography, also SNARKs that aren't using the zero-knowledge property. But actually, just recently, someone joined our Telegram group kind of shocked that all of these companies that do rollups with ZK at the front of their names are not private at all. And I was a bit like, oh no, that's a messed up narrative there. A bit of a missed... It's a miscommunication. And that actually worries me a little bit, because I think then people think they're operating in a private space when they're not, and that could lead to potential problems.

Chelsea Komlo:

I think, though, there's a big... So when I first started doing cryptography research, I was like, I will never work on something that is not practical. I was like, I've been an engineer, I've seen all the papers that you can't implement, I will never do a thing that can't be implemented. But I actually think...

Nico Mohnblatt:

How did that go?

Chelsea Komlo:

Oh, I mean, yeah, it's actually very hard to do things that are ready for use in practice immediately. So I think the stance of viewing things as a progression is great, and even though maybe ZK isn't being used right now, the work that's being done to make it fast means that tomorrow we can have a private driver's license on our phone. That is a reality that I think is coming to fruition. And the work that's being done to standardize ZK means that companies who want standards, and maybe to use ZK in another context, could do that. So I don't actually think anything is wasted, and the progress that's being made is amazing. And I think where I sit now, after being in the space and in research for a while, is that it's very hard to tell when something might be useful. You throw something out in the wild, and you don't know, and then maybe 10 years down the line, someone finds another use case or builds on it or finds a tweak that actually makes it useful in practice. So I do think we will see ZK for things like private identity in the future.

Anna Rose:

For sure.

Chelsea Komlo:

And I especially want to give a shout-out to the people who are doing standards work in ZK. That's a very complicated thing to do. Writing standards is very hard, and it's kind of an uncelebrated job.

Nico Mohnblatt:

Even harder when the schemes keep changing.

Chelsea Komlo:

Yes, so fast. Yeah, the field is moving really fast, and the schemes are complicated, but the people who are doing that are doing, like, heroes' work, because I think in the future it will unlock a lot of use cases for ZK.

Anna Rose:

Cool. Chelsea, thank you so much for coming on the show.

Chelsea Komlo:

Yes, thanks so much for having me. This is such a pleasure. Thanks for all the work you all do.

Anna Rose:

I'm glad we were able to organize it this time around. We're really excited to see the new work and yeah, hope to have you back on with your next endeavor.

Chelsea Komlo:

I would love that.

Anna Rose:

Cool.

Chelsea Komlo:

Thank you so much.

Anna Rose:

All right. Thanks, Nico.

Nico Mohnblatt:

Thanks, both. And thanks, Chelsea, for sharing some of the new work.

Anna Rose:

I want to say thank you to the podcast team, Rachel, Henrik, and Tanya, and to our listeners, thanks for listening.

Transcript

Anna Rose:

Today, Nico and I are here with Chelsea Komlo. Chelsea is part of the cryptography, security, and privacy lab at the University of Waterloo and is Chief Scientist for the Zcash Foundation. Welcome to the show, Chelsea.

Chelsea Komlo:

Yeah, thank you so much for having me. I'm a big fan, so I'm really happy to be here.

Anna Rose:

Cool. Yeah, we've actually wanted to have you on the show for a long while. We tried a few different routes, and I'm so glad we've made this happen. Hey, Nico.

Nico Mohnblatt:

Hi, Anna. Hi Chelsea

Chelsea Komlo:

Hey. Yeah, thank you so much for all the work you all do in the community. It's like, it's really fun to see all the shows you all do, and it's just great.

Anna Rose:

Cool.

Nico Mohnblatt:

Oh, thanks.

Anna Rose:

I mean, we're really curious to find out more about you too. So I think the first question I wanted to understand was like, what was the spark that got you interested in the topics that you work on?

Chelsea Komlo:

Oh, that's fun. So actually... Yeah, so before going to graduate school, I was an engineer, which has actually been very helpful in doing cryptography research, because I feel like I can put my prior engineering hat on and think about like, oh, what would we actually want to deploy in practice? So that's been really helpful when designing research. But yeah, before going to graduate school, I was an engineer, and I worked on cryptography and privacy protocols. So I contributed to Tor, I did a little bit of work on Enigmail, I worked on some of the OTR protocol, Off-The-Record messaging protocol for secure messaging. Yeah, and I just loved it. And then I wanted to be able to design cryptography. So, but for threshold signatures, that topic really came out of the Zcash Foundation. So in trying to make threshold signatures more usable and easy to deploy, that's where kind of my current work has come from.

Anna Rose:

I want to go even further back, though. What got you interested in working on Tor and on this type of engineering? What drew you to that topic?

Chelsea Komlo:

l, but I guess it was back in:

Anna Rose:

Cool. What was it like working on Tor? What does it mean to do that? Is it sort of just like contributing from afar or were you like more in the org?

Chelsea Komlo:

Yes, so I was a core Tor contributor. I wrote some Rust that helped inspire the use of Rust in Tor, which is great, because it's a memory safe language. I was on the board for a little while. And overall, I think it's great being a technical person, being able to contribute to projects that people need. So like people use Tor to circumvent censorship or to look up things privately, and I think it's really great for technical skills to be used for sort of altruistic motivations like that.

Anna Rose:

Did you after that then look to study it? Because as far as I know, you're doing a PhD now. Like, yeah, what was happening maybe at the same time academically? What led you to the work you're doing today?

Chelsea Komlo:

Yeah, so I was on a team. We contributed to a lot of open source projects like Tor and Enigmail. And then we started doing work on a new version of Off-The-Record messaging protocol. And that's the protocol that Signal eventually built on. So it basically does ratcheting, so you have forward secrecy in your messages. And in contributing to that, I got to know Ian Goldberg, who's at the University of Waterloo, and I wanted to be someone who could design those protocols myself and write proofs of security and think about not just implementing what exists, but also thinking about how do we design new things. And that's what inspired me to go get my PhD, and I've been lucky enough to work with Ian Goldberg and Douglas Stebila, who are my co-advisors. And it's an amazing thing to be able to think of ideas and be able to prove them secure. It's very hard. It's non-trivial.

Nico Mohnblatt:

Oh, yes

Chelsea Komlo:

But I know... And you make tons of mistakes. I feel like I've made every mistake in the world at this point, but I'm sure there's more mistakes to make. But it's a fun expertise to have.

Anna Rose:

Nice.

Nico Mohnblatt:

Ian Goldberg, actually, I should add, is one of the OG cypherpunks, for those of you who know history of where he came from with censorship resistance.

Chelsea Komlo:

Yeah. Ian has a wealth of knowledge about where privacy started and where we are. And he's teaching a new class at the University of Waterloo on just the new generation of privacy enhancing technologies. And there's so much more we have today, like where private information retrieval is going, secure messaging, it's really an exciting space, and it's amazing to see people deploying these technologies.

Anna Rose:

s this legendary hackathon in:

Chelsea Komlo:

Yeah, I mean, it's a big lab. So I'm in the CrySP lab, the Cryptography Security and Privacy Lab, and people are working on all kinds of things. So like everything from private machine learning to more pure cryptography, to applied cryptography, to censorship resistance. So there's a lot of different topics. It's really fun. I actually, so Alfred Menezes teaches at the University of Waterloo. And I was sitting in his cryptography class and Vitalik actually came to give a guest lecture. So it was kind of a very full circle moment, which was really fun. There's a lot of post-quantum work going on as well. So David Jao, who kind of came up with the SIDH, Supersingular Isogeny Diffie-Hellman is faculty there. So there's just a lot of topics. It's a fun place to be.

Anna Rose:

Very cool.

Nico Mohnblatt:

So Chelsea, what has been your focus within the lab?

Chelsea Komlo:

Yes, since:

So network rounds are very important, because parties could go away or you could have network latency or packets could drop. And so multi-party protocols that have fewer rounds are much easier both to implement and overall faster. So the core question was, can we have a threshold signature with fewer number of rounds than five? And there was kind of a folklore protocol of how to do a scheme in three rounds, but then the question we wanted to ask is, well, can we do even better? That's where one of the first projects that I worked on, which is called FROST, and it stands for Flexible Round-Optimized Schnorr Threshold Signatures, and we were able to come up with a threshold Schnorr Signature scheme that is secure in two rounds.

Anna Rose:

Oh, wow.

Chelsea Komlo:

And what's very nice about FROST is that the first round can be preprocessed as well. So if you especially care about network latency, you can do the first round in kind of a batched manner, and then you can just have one online round. And that was something people were interested in. So it's kind of taken off since then, and I've done a lot of follow-up work on threshold signatures since then. I also think an interesting question is, why do we care about Schnorr signatures now? And why is there so much work going on in Schnorr signatures? Yeah, so Schnorr signatures have existed for a long time in terms of the history of cryptography, there were a very early signature scheme to emerge. And really, what a Schnorr signature is, is it's just a proof of knowledge of discrete log, but it's bound to a message. So your secret key is some field element, and your public key is some group element, where your secret key is the discrete log of your public key. And so all a Schnorr signature is, is just proving knowledge of your public key, but you additionally bind the signature to some message.

And that's all a Schnorr signature is. So it's very, very simple. It's existed for a long time. But it's quite amenable to building protocols on top of, because it's linear. So it's quite amenable to building things like multi-party signatures, because basically what you can do is you can have all the parties derive a signature share, which itself is a Schnorr-like signature, and then you can just aggregate all the signature shares and come up with itself, the aggregated signature is itself a Schnorr signature. So basically these pieces are very composable and Schnorr signatures lend themselves to sort of more advanced primitives that are easily combinable.

Anna Rose:

Who was Schnorr? Who is this person?

Chelsea Komlo:

Claus Schnorr is still a cryptographer. Gosh, I wish I knew all of the things that he worked on, but the signature scheme is named after him. And I think it came out in the 90s, but I'm not sure.

Anna Rose:

Okay. So long time by cryptography. We're talking 30 years. Yeah. 20, 30 years. Okay.

Chelsea Komlo:

Yeah. Yeah.

Anna Rose:

Cool

Nico Mohnblatt:

Yeah, you were talking about aggregating multiple signatures. Are we talking about signatures over different messages over the same message from different people? What do we mean by aggregation here?

Chelsea Komlo:

Yes. So I know there's been work done where you can aggregate over different messages, but when I'm talking about aggregating here, it's over the same message. And so I guess maybe I can take a step back. Let me take a step back and talk about how a threshold Schnorr signature works, and then I think it'll become obvious of what aggregation means in this setting. So the goal for threshold Schnorr signature is to output a signature that is verifiable under the same algorithm, the same verify algorithm as single-party Schnorr. And the reason why that's nice is because if you're in implementation and you already verify Schnorr signatures, then you can also verify threshold signatures without having separate logic. So from an implementation perspective, it's very useful because it just allows for more simplicity and less logic switching.

So the goal is to output a plain Schnorr signature, but instead of having one party control a secret key, you have many parties. And so what's nice about that is then you have things like redundancy, so if one party loses their key, you still have other parties that can issue a key. And you also have a distribution of trust. So if you, say, control a large amount of funds for your customers, your customers might want some kind of assurance that one party can't just disappear with the funds, and so you might want to distribute trust of that secret key. So we want a couple of things. We want unforgeability under all of this secret key shares. We also want unforgeability under the single key that's secret shared among all the parties. And then we want to make sure that when all the parties sign, those what we call signature shares aggregate to a single Schnorr signature. Yeah, so I guess in this case, what we're talking about is one message. There can be variance with additional messages, but that kind of lends itself to a different protocol and different properties.

Anna Rose:

Why is there work on this now if it already existed for some time before?

Chelsea Komlo:

Yes, that is kind of a spicy question, actually. So Schnorr signatures were patented.

Anna Rose:

Ooooh.

Chelsea Komlo:

And so a lot of people don't know this. So a lot of people don't understand why we have both ECDSA and Schnorr. And ECDSA is actually quite painful in the multiparty setting. So you can design very, very simple multiparty Schnorr protocols, and for ECDSA, it's much harder because of the structure of the signature. But what happened is we had Schnorr signatures first, but then a patent was issued and then ECDSA was designed to kind of circumvent that patent. And so we were left with essentially years of building more complicated protocols around a more complicated scheme that circumvented a patent. So yeah, so I think this is a really interesting thing to know and to talk about because I've heard arguments for patents, which say things like, oh, we're investing resources into cryptography, we should be able to reap the rewards of those resources, which I think is a compelling argument. But I think history shows us that patents and cryptography lead to decades of work that we didn't necessarily need to do. And actually schemes which are harder to implement and potentially less secure, because there could be bugs in implementations

Nico Mohnblatt:

It's also harder to analyze, right? ECDSA from a proving perspective.

Chelsea Komlo:

Yes. So I think as a community, we really need to be conscientious of where we came from, and think really hard before diving into patents, because, for a certain company, it might be beneficial, but for the community as a whole, it's very difficult to design around.

Anna Rose:

Which company did the patent? Like, could they have built more stuff?

Chelsea Komlo:

Yeah, I haven't looked at the patent myself, so I can't go into that much detail. I just sort of know that it was patented, that that delayed a lot of work, and that ECDSA came out around it. And so I guess the important thing, though, is that the patent expired. I'm not exactly sure when the patent expired, but that's where, as I understand, we've seen a lot of re-emerging interest in Schnorr signatures and things being built and developed around it. So there was a delay in research and development, and then when the patent expired, we were able to say, oh, this is a protocol we can actually deploy now again. And so that's why we've seen a re-emergence of research in this area.

Anna Rose:

What's the connection between Schnorr signatures and Bitcoin or Zcash? Like, you said sort of it came out through your work at Zcash. I'm guessing Bitcoin-oriented. Yeah, what's the connection?

Chelsea Komlo:

Yes. So Zcash uses a variant of Schnorr signatures called RedDSA. So it's a re-randomized variant of EdDSA signatures. Again, the difference between EdDSA signatures and Schnorr is extremely minor. In EdDSA, you hash in the public key when you're signing the message, but the structure is essentially the same. So when I'm saying Schnorr, you can sort of also substitute EdDSA. They're extremely similar. So Zcash was already using a variant of Schnorr, and then Bitcoin recently with Taproot is starting to move to EdDSA, or a variant of EdDSA signatures as well.

Anna Rose:

Okay.

Nico Mohnblatt:

Just to emphasize, EdDSA is not ECDSA. And I've seen people make that mistake before. And they look very similar.

Anna Rose:

I kind of made a mistake in our last episode where I used it incorrectly.

Nico Mohnblatt:

It can be confusing, but yes: EdDSA is Schnorr-like, and ECDSA is the one that gets around the Schnorr patent.

Chelsea Komlo:

Yeah, don't blame the users. The names are very confusing.

Nico Mohnblatt:

No, of course.

Anna Rose:

So I guess this then leads us to the work you did on FROST, because, just to sort of go back to the purpose of it, you talked about the rounds. Are the rounds a function of speed or cost? Like trying to get these rounds down sounds like a good idea, but what are you actually accomplishing when you do that?

Chelsea Komlo:

Yeah, it's speed, essentially. So anytime you have a multi-party protocol, all the parties start, then they send messages to all the other parties, then they do some processing, then they send messages again, and eventually, you have some kind of output. So any time you send a network message, there's delay. So you have to wait for all the messages to arrive. If not all the messages arrive, you need extra logic. So there's kind of a lot of complexity that goes into network rounds. And for things like signing, if you're an exchange and you have to issue millions of signatures a day, having fewer network rounds is quite helpful.

Anna Rose:

Who are the agents who are actually getting that speed up though? Is this for the miners? Is this for the wallet? I'm kind of curious where this actually gets used.

Chelsea Komlo:

Yes, that is a good question, and I think that does play into some of the discussion around, do we actually... There's been a discussion around, do we actually need speed in these signatures? And there's been similar discussions, I think, as well around, do we need speed in zero-knowledge proofs? So I think Henry de Valence, who is with Penumbra, said something which I thought was very useful to think about, which is for a client-side application, the most speed you need is enough time for the user to process something, which is kind of slow in terms of computer speed. So I think that that's actually something useful to think about. So yeah, if this is a wallet and it's on a user application, you can probably tolerate speeds of up to a second. But if this is an application, so for example, there's companies that hold shares on behalf of their clients. And so the agents are servers performing signatures directly, then speed matters because these are just computers talking to other computers.

So the agents differ, and the speed requirements differ with those agents. But then also in terms of complexity, network rounds can be important as well, because you have things like packets dropping and other things like that.

Nico Mohnblatt:

It's funny that you mentioned speed and Penumbra because in a recent episode, that probably came out a few episodes ago, we talked about the same thing with Alin, and I made the same shout out to Henry and Penumbra and how their tools do everything in the background. So shout out once again.

Chelsea Komlo:

Yeah, with assumptions, I think it's important to challenge them. So sometimes I think we get very caught up in speed, and sometimes it doesn't matter. And I think that's important to know, and it's context dependent.

Nico Mohnblatt:

Absolutely.

Chelsea Komlo:

Other things that I think are also important to know: on the cryptography side, if you're designing cryptographic systems, you want the most security possible in every dimension. But sometimes trade-offs against security are acceptable for other dimensions as well, such as speed or simplicity. So there's been a tension around... So FROST, for example, is secure under an interactive security assumption, and schemes that have more network rounds can be proven secure under weaker assumptions. And so there's been a lot of discussion around what assumptions are fine, what do users want, what is better? And I think the important lesson to come out of this is that it depends and it's context dependent.

Nico Mohnblatt:

So is this where the FROST magic happens? Like, is this how you reduced all these rounds?

Chelsea Komlo:

Yes. So exactly. So schemes with more rounds can be proven secure under, for example, discrete log directly, which is a weaker and kind of well-understood assumption.

Nico Mohnblatt:

More common, yeah.

Chelsea Komlo:

Yes. So FROST requires an interactive assumption. It's secure under what we call the algebraic one-more discrete log assumption, still in the random oracle model. So at least there's that. But it's an interactive assumption, and it basically says: let's say you're given L plus 1 discrete log challenges, and you're allowed L queries to a solution oracle. Can you produce L plus 1 solutions? So basically, can you produce one more solution than the number of queries that you're allowed?

Nico Mohnblatt:

Sounds reasonable.

Chelsea Komlo:

I think intuitively, it sounds reasonable. This assumption has been out for a while. It's very hard to prove blind signatures secure without this assumption. So this assumption was introduced in the context of blind signatures. So again, it's context dependent. I think I personally feel this assumption is reasonable and especially in a practical setting, it's a fine tradeoff. But again, it's context dependent.

Nico Mohnblatt:

One last thing I wanted to ask, when we start a threshold signature, do the signers have their private key that they generated on their own? Or do they have to somehow have shares of a key? Where do we start from?

Chelsea Komlo:

Yes. The bootstrapping question is a very important question to ask. So signers have to bootstrap with a secret shared key. It's Shamir secret shared. So basically, every party has a point... their share is essentially a point on a polynomial, and the secret key, this joint secret key, is the constant term of that polynomial. And when you combine signature shares, what you're doing implicitly is polynomial interpolation to some other point on the polynomial, which is unknown. So really, the magic, and it's very simple, is: given t plus 1 points on a degree-t polynomial, you can find any other point on the polynomial. So that's all that's happening under the hood. And so we bootstrap by using either just plain Shamir secret sharing with a trusted dealer, or another multi-party protocol, which is what we call a distributed key generation scheme.

And so what that is, again, you have all the parties, they're all participating, and the output from that protocol is secret key shares that every party holds that combine to some secret key that no party knows, but all parties have contributed to. So it's kind of like a magic black box, every party throws in some randomness, and at the end, they all get secret key shares that combine to a secret key that no one has actually seen.
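The polynomial arithmetic Chelsea describes can be sketched as follows. This is a toy version over a small prime field with a trusted-dealer-style setup; all parameters are illustrative, and a real deployment would use a proper DKG rather than sampling the secret in one place.

```python
# Toy Shamir secret sharing: each share is a point on a degree-t
# polynomial whose constant term is the secret; any t+1 points
# interpolate back to it. Illustrative only, not a real key ceremony.
import random

q = 7919                      # a small prime field (toy size)

def make_shares(secret, t, n):
    # Random degree-t polynomial f with f(0) = secret
    coeffs = [secret] + [random.randrange(q) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
    return [(x, f(x)) for x in range(1, n + 1)]

def interpolate_at_zero(points):
    # Lagrange interpolation of f(0) from t+1 points
    total = 0
    for xi, yi in points:
        lam = 1
        for xj, _ in points:
            if xj != xi:
                lam = lam * xj * pow(xj - xi, -1, q) % q
        total = (total + yi * lam) % q
    return total

secret = 1234
shares = make_shares(secret, t=2, n=5)
assert interpolate_at_zero(shares[:3]) == secret    # any 3 of 5 recover it
assert interpolate_at_zero(shares[2:]) == secret    # a different 3 also work
```

In FROST this interpolation never reconstructs the key directly; it happens implicitly when signature shares are combined, which is why no single party ever holds the joint secret.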

Anna Rose:

So the thing I'm still not entirely clear on is almost like in a system like Zcash, do you implement this somewhere or is it on an app, like is it on a wallet level? Like I understand the research being created, but I don't really know where it fits in.

Chelsea Komlo:

Yeah, so the Zcash Foundation has implemented FROST and also a DKG and trusted dealer keygen, and then applications can pull it in as they need to. So if you're a Zcash wallet, you will pull this into your application. And the team right now is doing some work to make pulling that in a little easier. So they're working on demos and other things like that. But this is kind of like a core library, and then it's pulled into various applications. And this is why it's being used outside of Zcash as well, because it's essentially protocol agnostic. So you can pull it into other wallets that maybe need to implement Schnorr signatures.

Nico Mohnblatt:

And like you said, usually the applications will be either creating redundancies or making sure that if you lose one of your key shares, you have more and someone stores them for you.

Chelsea Komlo:

Yes.

Nico Mohnblatt:

So account recovery or multi-sig, like distributing trust, having multiple people having to sign.

Chelsea Komlo:

Yes. So something that's very important to know, and we didn't write this in the FROST paper originally, is that there are tricks for doing share recovery. So if one party loses its share, there are established protocols out there for re-deriving that share or creating new shares so that that party can recover their signing key. And there are also protocols for generating new shares for new parties. So if you're out there and you have questions about, like, okay, do I have just a set number of shares, or is this dynamic? The answer is yes, it's dynamic, and protocols exist for that.

Anna Rose:

And you've mentioned sort of multiple parties throughout this, but we haven't actually said like multi-party computation. Does it fall under that category, or are we working beside, or are they just similar?

Chelsea Komlo:

Yes. So it is, yes. So technically, threshold signatures are a special form of multi-party computation. I think it's nice to distinguish, because when you say multi-party computation, this can be done generically. So you can essentially take any function and distribute it among parties. And there are generic tools for doing this. Those generic tools could also be used to do threshold signatures, but they're generally less efficient. And this is common in other kinds of cryptography, where you can design generic tools, but they're less efficient, or you can design schemes for specific use cases, and then you can tailor them for that specific use case, and they tend to be more efficient. So I think it's kind of nice to distinguish. Hopefully, one day we'll have generic MPC that can just MPC everything. And people, I think, especially in the FHE world, are moving in that direction, which is very exciting. But right now, we have the very simple threshold signature case, which is quite tailored.

Anna Rose:

Got it. I'm glad I got to work that question in, because just a shout out to Nigel Smart, who did finally introduce us. He was the one who finally made the connection and recommended this conversation. We've done an episode in the past, or actually two episodes, on MPC, but a pretty recent one. So we can also link to that.

Chelsea Komlo:

Yes, absolutely. And yeah, I am really excited about what places like Zama are doing. Having generic MPC and FHE is a really strong tool, and it's really exciting to see where it will go in the future. So hopefully we can have FHE for everything and all of our problems will be solved.

Anna Rose:

Easy. Although what it looks more and more like is it's these combos like ZK and FHE, so anyway.

Nico Mohnblatt:

So back to FROST, you said FROST was developed coming from the use case that Zcash needed. Is this kind of a standard technique? Is this, I guess, being standardized for other people to use?

Chelsea Komlo:

Yes. So we wrote an informational draft for the CFRG, the Cryptography Forum Research Group, within the IRTF. The reason why we did it, actually, was because we put out FROST, and then I had a lot of people emailing me to say that they were implementing FROST. But they were implementing it in slightly different ways and making slightly different choices around things like serialization of data...

Nico Mohnblatt:

Hash function.

Chelsea Komlo:

Yeah, like slight variations that I knew down the line would potentially be confusing and then potentially having bugs as well. So I...

Nico Mohnblatt:

Was it very stressful to you? Seeing the work being taken apart like that.

Chelsea Komlo:

I mean, it was exciting, but also I was worried that, well, one, we would have a lot of incompatibilities. And I actually had auditors tell me that they'd seen bugs pop up. So for example, someone told me that they saw an implementation of FROST where the nonces were being derived deterministically as they're done in EdDSA. So instead of sampling nonces at random, the nonces were generated by hashing the secret key and the message. So if you're used to single-party EdDSA, this is what you do. You hash the message and the secret key to generate your nonce, which is private. So that's totally reasonable. But in the FROST setting, this leads to a secret key recovery attack in two signing sessions.
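The flavor of that key recovery can be shown with toy numbers. If the honest signer's nonce repeats across two sessions while the adversary influences the challenge, the two responses form a linear system in the secret key. This is a simplified single-signer illustration of the failure mode, with made-up toy values, not the full multi-party attack:

```python
# Toy illustration: a reused nonce r under two different challenges
# (which an adversary can force when nonces are deterministic in the
# multi-party setting) leaks the secret key from two responses.
q = 83                      # toy group order (illustrative only)

s = 42                      # honest signer's secret key
r = 17                      # deterministic nonce: same in both sessions

c1, c2 = 5, 29              # adversary-influenced challenges, c1 != c2
z1 = (r + c1 * s) % q       # Schnorr response in session 1
z2 = (r + c2 * s) % q       # Schnorr response in session 2

# Attacker subtracts the two linear equations and solves for s:
#   z1 - z2 = (c1 - c2) * s  (mod q)
recovered = (z1 - z2) * pow(c1 - c2, -1, q) % q
assert recovered == s       # full key recovery from two public responses
```

Sampling nonces fresh at random (as the FROST spec requires) means r differs between sessions, so the subtraction no longer eliminates it.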

Anna Rose:

Whoa.

Chelsea Komlo:

So it's like a total break if you do this.

Nico Mohnblatt:

Is this the insecurity of ROS paper?

Chelsea Komlo:

No, so ROS-

Nico Mohnblatt:

Different, okay.

Chelsea Komlo:

Yeah, ROS comes down to how the scheme is actually designed. And so FROST was one of the first schemes that was secure against ROS. And it's because of how FROST essentially has two nonces; you hash in the transcript from the first round, and then that becomes your overall nonce, and that avoids ROS attacks. But the key recovery attack, if you derive your nonces deterministically, is just... because the adversary has input into the challenge. So essentially, as long as the adversary doesn't follow the protocol deterministically, but the honest player does, there's a trivial key recovery attack that you can't detect. So someone told me this, and I was like, oh no, it would be great if we actually had something. Because the research paper isn't written with exact engineering details. It's basically enough to show what's going on so that you can prove the scheme secure. But it's not enough detail for engineers to follow to make really important decisions like serialization or ordering or other details that are somewhat important. So yes, that's why we decided to write this informational draft, basically, because people were implementing it and we wanted something that was more useful. So that process is wrapping up right now. And conveniently, NIST has also put out a call, or will very soon put out a final call, for threshold schemes as well. And we'll basically be taking what we submitted to the CFRG and turning that into a NIST submission.

Anna Rose:

But then with NIST, does NIST need to choose that? And then it becomes sort of the standard, but you're kind of up against other types of schemes?

Chelsea Komlo:

So this threshold call is different than what NIST did for the post-quantum call. So for single-party signatures, I think it's easier to have a uniform competition, because you can do things like define what the API is, define what the inputs and outputs are. And this is what was done for the NIST post-quantum competition. For threshold signatures, it's a little harder, because there's so much variation within the actual schemes themselves. So even though for thresholded EdDSA they all might be putting out a single-party EdDSA signature, the internals of the schemes are all quite different. I think right now NIST is trying to decide what they'll do, but this call is basically: send us your schemes in more detail than in the paper, and what happens after that is still being decided, as I understand it. So yeah, so Luis, who's at NIST and is sort of organizing all of this, would be a great person to have on the show. He has more context and a plan and vision for where this will go. So hopefully you can have him on and ask him these questions.

Anna Rose:

Maybe we could do an Ep on standardization generally. We've never talked to anyone from NIST.

Chelsea Komlo:

Yes, he would be a great person to have insight into what they're trying to do.

Nico Mohnblatt:

So we now have, or will soon have, a FROST standard. What comes next? What's the rest of Chelsea Komlo's work?

Chelsea Komlo:

Hopefully, I'm just getting started. So I have grand plans for the future. But I guess one thing that this process has taught me, and I kind of referred to this before, is that there are different trade-offs in deploying cryptography into practice. So the trade-offs I think about are things like usability, security assumptions, and then performance. So those are the different axes when you're designing a scheme, and you have different trade-offs along those different axes. So before, I talked about how people were using FROST deterministically and it was trivially broken. I've been thinking about how to do deterministic FROST for a long time, and it's a very hard problem. It's extremely difficult to do it in a way that's secure. There's been work done to do deterministic threshold Schnorr signatures. And basically, those works require things like generic SNARKs or generic MPC.

So you can do deterministic threshold Schnorr, but it requires some kind of heavyweight tools. I guess, even taking a step back, when I say deterministic threshold signatures, the reason why this is something we want is not only because you don't have to rely on fresh sources of randomness when you're generating your signature, but also because signers can be stateless. So basically, you perform a round, and you don't have to save any state. And then you perform your next round, and you can just re-derive all of the state.

So from an implementation perspective, this is great because you don't have to cache things in a database. Take a lock on the database. Look up the thing in the database. Carefully delete information that if you don't delete it, your secret key is leaked. Unlock the database.

So we really want... these schemes are actually quite attractive, I think, in practice. But currently, they require heavyweight tools. So I have some upcoming work, which I think... I'm interested to see what practitioners think about it. So basically, the work is called Arctic, which I think is nice.

Nico Mohnblatt:

Arctic, yeah.

Anna Rose:

Thematic.

Chelsea Komlo:

Yeah, so nice.

Nico Mohnblatt:

Is it also an acronym?

Chelsea Komlo:

It's not. I couldn't think of an acronym where something like DTS turned into a nice word. I spent a lot of time thinking about it. But Arctic was the best I could do.

Nico Mohnblatt:

How much compared to the time thinking about the paper?

Chelsea Komlo:

I mean, not as much. But I did try hard to make an acronym and just failed. I'm always on the market for good scheme names. So please tell me your good scheme names in the future if you have them.

Anna Rose:

There's lots of variation on cold things that could fit into all of this.

Chelsea Komlo:

Yes. So I'm on the market for it in the future. Yeah, so basically, Arctic is a deterministic threshold Schnorr signature. And it's very simple. It doesn't require generic MPC or things like generic zero-knowledge proofs. But the trade-off that it makes is that it requires a larger number of assumed-to-be-honest signers. So basically, for FROST, the security model that it's secure under is: you can have t signers, and up to t minus 1 of them can be dishonest.

Anna Rose:

So one honest party would be enough.

Chelsea Komlo:

Exactly. And that's a very nice security model that you can reason about. So you have one honest party. For Arctic, Arctic assumes 2t minus 1 total parties, where up to t minus 1 of them can be dishonest. So really what we require is that, out of your total set of signers, the majority of them are honest.

Nico Mohnblatt:

Right. So we go from single honest party to majority honest.

Chelsea Komlo:

Yes.

Nico Mohnblatt:

Okay.

Anna Rose:

But it could really just be, like, 51% honest? Kind of just a little bit more than half.

Chelsea Komlo:

Yes, just a little bit more honest.

Anna Rose:

Okay. It's not a 66% situation.

Chelsea Komlo:

No, it's like 51% honest. And so it's interesting because we do have assumptions like that already in cryptocurrencies, so for things like consensus. So in other places, we have like 51% honest as an assumption. But so far in threshold signatures, we haven't really explored those kind of assumptions. And so with Arctic, basically, we say, okay, if you're fine with those kind of assumptions, you can have a stateless scheme. That's pretty simple. So then again, it's up to implementers to say what trade-offs am I fine with? Am I fine with deploying more signers and then having a simpler scheme that has these nice security properties? Or do I really need that all but one honest?

Nico Mohnblatt:

About your last axis, speed, how does Arctic perform in terms of speed?

Chelsea Komlo:

It's pretty fast.

Nico Mohnblatt:

Is it still two rounds or is it a bit more?

Chelsea Komlo:

It's two rounds.

Nico Mohnblatt:

Okay, lovely.

Chelsea Komlo:

So I guess there's a trade-off. For groups under size 25, it's pretty fast. For larger groups, MuSig-DN, which requires generic zero-knowledge proofs, is faster. So there's a crossover point. But what we see is, okay, for smaller-sized groups, where you're fine with 51% honest, you can have a faster scheme. But again, I think it's interesting putting out these axes more explicitly and then thinking about what we're fine with. But at least we have all of the options. And applications can say, we don't mind implementing Bulletproofs and we're fine with something being slower, so MuSig-DN is fine. Or, simplicity of the implementation is important to us because we're scared about bugs and we want something fast for smaller groups, in which case something like Arctic is a better choice.

Anna Rose:

You've sort of said, like, I've heard more about deterministic, but also stateless. Like, are those the same thing? Are those connected?

Chelsea Komlo:

Yes. So determinism is the means to statelessness here. So we have a deterministic scheme, or you might read about deterministic threshold Schnorr schemes, but when we say deterministic, that means that the scheme is also stateless. Because basically, given an input, I can derive the state for some round.

Anna Rose:

And you know what it is.

Chelsea Komlo:

And you know what it is. And then in the next round...

Anna Rose:

So you don't have to save state somewhere else. Okay.

Chelsea Komlo:

Exactly. So if you're given the same inputs to these different rounds, then you can deterministically derive that output.

Anna Rose:

Cool.

Chelsea Komlo:

So, but they are kind of thrown around interchangeably, so it's a good question.

Nico Mohnblatt:

When is this work coming out?

Chelsea Komlo:

As soon as I put the paper up on Eprint, which is hopefully this week.

Anna Rose:

Oh, cool. That means...

Nico Mohnblatt:

So if it's out we can link it in the show notes.

Anna Rose:

Yeah, by the time this comes out, we should have it.

Chelsea Komlo:

Yes, that would be great. Yeah, I really am curious. This was kind of a hunch, that I thought something like this would be interesting to implementers, so I'm very curious to hear feedback or thoughts from people working in practice. So as people see it and think about it, if they have questions, I would love to talk to people about it.

Anna Rose:

Cool. In this, you talk about this concept of dishonest, but I don't think we've really talked about what that would mean to be dishonest in this particular case. I mean, we know what dishonesty is for validators sometimes. But yeah, what is dishonest here?

Chelsea Komlo:

Yeah, that's a good question, because we also throw that term around a lot. It generally just means someone about whom you have no assumptions as to how they will interact with the protocol. So they could follow the protocol honestly, or they could appear to follow the protocol honestly but store extra information, you know, generally with some nefarious goal in mind, like recovering the secret key or outputting a forgery or performing denial-of-service attacks. But technically, when we use that word, it just means a party for which you have no guarantee how they will act within the protocol.

Anna Rose:

Where would these agents act dishonestly? Is it like in the rounds, before, after?

Chelsea Komlo:

Yeah. So it could be at any time. So I'm an adversary, I have corrupted let's say two participants, and I know their secret keys. I could initiate signing rounds with honest parties and follow the protocol exactly. And then I could take everyone's information and then I could try to do something nefarious with it.

Anna Rose:

Afterwards?

Chelsea Komlo:

Afterwards. I could take my secret keys and do something to them, like flip the bits or something. And then I could participate in the signing protocol, honestly, take the stuff that I received and try to do something nefarious with it. So it's really, it's very tricky. And I think writing proofs for these types of schemes is quite hard, because there's a lot of nuance in what a corrupted party could potentially do. So it's anything from before it starts, while the protocol is going on, or even afterwards.

Nico Mohnblatt:

And when we say honest majority, do we mean these actors act honest throughout the protocol? Or do we say at each round, a majority of people have to be honest?

Chelsea Komlo:

Yeah, so when I say honest majority, what I mean is that there are t participants who follow the protocol as described, throughout the protocol. So practically what this means is there are t machines whose secret keys have not leaked. Like, this is practically how it translates, but when you're writing the proof, this is kind of what you mean when you do the modeling.

Nico Mohnblatt:

Are there any schemes that consider the case where the set of honest parties changes between rounds?

Chelsea Komlo:

Yes. Yeah, so there are schemes that consider adaptive security, and this is exactly what you're talking about. So static versus adaptive is, I would say, kind of a more theoretical term with a practical lens. So when we write proofs, something that's very easy when writing the proof is saying, let's say you have n parties, and at the beginning of the proof, I say parties 1 through 5 are corrupt, those are the dishonest ones, the rest are honest, and the world stays the same throughout the proof. But that's not actually what happens in practice. What happens in practice is you can have an adversary that corrupts one machine, and then it determines on the fly who it wants to corrupt afterwards. And this is essentially adaptive security, where throughout the protocol, the adversary can choose who it corrupts. From a proof writing perspective, this is much harder to model. So I do think there are benefits to writing proofs statically, but the adaptive model is closer to what we see in practice.

Anna Rose:

Wow.

Nico Mohnblatt:

So do you have an adaptive FROST variant?

Chelsea Komlo:

I think we can prove FROST adaptively secure, FROST directly. The proof is very hard. It's currently being worked on, but it is nontrivial. Yeah, so there's an interesting research question around what theoretical trade-offs you make when you assume adaptive security. So again, if we have simpler schemes which are statically secure, is that better? I think that's an interesting question to think about. And if you're assuming adaptive security in the proofs but you have a less efficient scheme, what does that mean? That's kind of an interesting conversation for all of us to have.

Anna Rose:

Cool. So I want to ask a question about not this new work, but the work we were just talking about before FROST, before we sign off, which is on potential use cases. We sort of mentioned the security of these systems and that people had actually implemented them. I'm curious if you could just share any of those implementations.

Chelsea Komlo:

Yeah, so the thing I love about FROST is that it's evolved into its own thing, and people are doing lots of stuff with it. And then I learn about it on Twitter, which is the best thing, in my opinion. It's so cool. So one project that seems exciting is Frostsnap. The name is really fun. And it seems like they're implementing FROST for the Bitcoin ecosystem, in hardware.

Anna Rose:

Oh, cool.

Chelsea Komlo:

Again, I think it's amazing because I actually don't know them. I just watched their work on Twitter and I think it's great.

Nico Mohnblatt:

I think I've seen these... It was a picture of little things that plug into a phone and they can plug into each other as well, and then together generate a signature.

Chelsea Komlo:

Oh, cool. Yeah, they'd be fun to have on the show. So I would love to hear how they're actually doing it.

Anna Rose:

Are there ever cases of these kinds of signature schemes being used together with ZKPs? And I'll give you just a bit of context to this question. Like we have seen a lot of crossovers with general MPC and ZKPs or FHE and ZKPs. So I'm just curious if, can something like FROST be used with ZK? I mean, obviously it's used in Zcash, so there's some connection, but is it really being used together with ZKPs?

Chelsea Komlo:

Yeah, so I think this is an interesting question and something that's still being explored. So in the Zcash setting, it's interesting because FROST is used at the signing level. So signers sign a transaction, but then the prover can be a separate entity that isn't trusted to hold the secret signing key. So you can have a setting where you have many signers and one prover.

Anna Rose:

Oh, okay.

Chelsea Komlo:

And that's, I think, a very nice architectural and system design in that these roles are separate so that you can separate out the signing key from the prover functionality. So that's what Zcash is doing. There has been work into multi-party zero-knowledge proof generation. So there's some work, and we can link it in the show notes, but there's some work where you take your witness, your secret witness, and you secret share it among provers, and then the provers generate the proof in a distributed manner, and then they send it back to you, and then you combine it. That's interesting, I think. The downside of that work is you have to outsource your witness to other parties. And maybe that's fine, but maybe sometimes that's not fine.

Anna Rose:

That just means there's no privacy, right? Or there's only privacy between you and this other party.

Chelsea Komlo:

Yes. So the witness is secret shared, so they don't learn it, but they do learn what you're proving. So I think there's also been some recent work using FHE, again, which is the magic thing that hopefully will solve everything, where you can have an untrusted server doing FHE to generate your proof without ever actually seeing your witness in the clear. So I think some of that work has been done by Sanjam Garg at UC Berkeley. There have been many implementations of FROST, and I won't list all of them here, but in the CFRG GitHub repository for FROST, we list out the different implementations and the organizations that have implemented them. Hopefully we can include those in the show notes so people can go and look at other works that are using FROST as well.

Anna Rose:

Cool. We'll add, for sure. Kind of bringing it back to your earlier experience and sort of the thing that first got you excited about the general space, working in privacy. I'm just curious what your feelings are about the space today and the research that's being done and sort of how it's being implemented.

Chelsea Komlo:

So I think it is a thing people care about, and it is a thing people are designing products around. So that's a great place to be. I do think sometimes some privacy research goes a little into the, how do we collect data, but in a way that's private, or how do we allow tracking, but in a more private way? And I think practically that research can be useful if it means it gets us to a better place than collecting everything and storing it. But I think as a community, it always comes back to the ethical implications of your research. And I think we always have to think about what we are designing, beyond it being a cool and hard research problem, which I love myself. I love the challenge, like we all do. But what are the ecosystem implications of the work that we do? And there's always the practical argument: well, in the ad tracking business, people track everything now anyway, so let's do something that's a little better.

Anna Rose:

Make it better.

Chelsea Komlo:

Yeah, so I don't think that's the right answer, but I think there's the moral obligation of the work that we do, and we always have to ask the question: what is making something better versus what is legitimizing tracking? And what organizations do we support, and what goals are they trying to achieve? Again, I think that's why I like FHE, and the work that's being done to make it practical is an amazing feat. That's really exciting. And in the meantime, there are always trade-offs, and every organization makes trade-offs. So, for example, Tor makes security and privacy trade-offs to make the tool fast. So every organization has trade-offs, and you can't get around it, but as long as we're talking about what those trade-offs are and who they benefit, I think that's the thing we really have to think about.

Anna Rose:

I have a variation on this. I was just thinking about this; it has come up for many months, many years, the idea of ZK being used not for privacy, but for other things. And I've sort of argued that, well, ZK is a useful short form for all sorts of different cryptography, including SNARKs that aren't using the zero-knowledge property. But actually, just recently, someone joined our Telegram group kind of shocked that all of these companies that do rollups with ZK at the front of their names are not private at all. And I was a bit like, oh no, that's a messed-up narrative there. A bit of a missed... It's a miscommunication. And that actually worries me a little bit, because I think then people think they're operating in a private space when they're not, and that could lead to potential problems.

Chelsea Komlo:

I think, though, there's a big... So when I first started doing cryptography research, I was like, I will never work on something that is not practical. I was like, I've been an engineer, I've seen all the papers that you can't implement, I will never do a thing that can't be implemented. But I actually think...

Nico Mohnblatt:

How did that go?

Chelsea Komlo:

Oh, I mean, yeah, it's actually very hard to do things that are ready for practical use immediately. So I think viewing things as a progression is the right stance, and even though maybe ZK isn't being used right now, the work that's being done to make it fast means that tomorrow we can have a private driver's license on our phones. That is a reality that I think is coming to fruition. And the work that's being done to standardize ZK means that companies who want standards, and maybe to use ZK in another context, could do that. So I don't actually think anything is wasted, and the progress that's being made is amazing. And I think where I sit now, after being in the space and in research for a while, is that it's very hard to tell when something might be useful. You throw something out into the wild, and you don't know, and then maybe 10 years down the line someone finds another use case, or builds on it, or finds a tweak that actually makes it useful in practice. So I do think we will see ZK used for things like private identity in the future.

Anna Rose:

For sure.

Chelsea Komlo:

And especially, I do want to give a shout-out to the people who are doing standards work in ZK, because that's a very complicated thing to do. Writing standards is very hard, and it's kind of an uncelebrated job.

Nico Mohnblatt:

Even harder when the schemes keep changing.

Chelsea Komlo:

Yes, so fast. Yeah, the field is moving really fast, and the schemes are complicated, but the people who are doing that are doing, like, hero's work, because I think in the future it will unlock a lot of use cases for ZK.

Anna Rose:

Cool. Chelsea, thank you so much for coming on the show.

Chelsea Komlo:

Yes, thanks so much for having me. This is such a pleasure. Thanks for all the work you all do.

Anna Rose:

I'm glad we were able to organize it this time around. We're really excited to see the new work and yeah, hope to have you back on with your next endeavor.

Chelsea Komlo:

I would love that.

Anna Rose:

Cool.

Chelsea Komlo:

Thank you so much.

Anna Rose:

All right. Thanks, Nico.

Nico Mohnblatt:

Thanks, both. And thanks, Chelsea, for sharing some of the new work.

Anna Rose:

I want to say thank you to the podcast team, Rachel, Henrik, and Tanya, and to our listeners, thanks for listening.