Crates We Love

Adam Leventhal:

Now you can say something funny. Yes.

Bryan Cantrill:

Now we can. Okay. Thank god.

Adam Leventhal:

Yes.

Bryan Cantrill:

You know, actually, what a great episode last week, by the way. That was tons of fun with the predictions episode.

Adam Leventhal:

It was great. And a lot of people seemed to enjoy it. So that was great. I'll tell you, the one person who did not enjoy it lives with me. Go on.

Adam Leventhal:

2 hours.

Bryan Cantrill:

Yeah. The predictions episode is always gonna be long, though.

Adam Leventhal:

Uh-uh. I'll just tell her that. That's fine.

Bryan Cantrill:

Yeah. Exactly. You know, it's like, listen: once a year, they're just gonna go a little long.

Adam Leventhal:

I'll try that.

Bryan Cantrill:

Yeah. You better try it. Yeah.

Adam Leventhal:

But, no, it was a great one. And great job getting Simon and Mike on. That sounded so good. Those guys are delightful.

Bryan Cantrill:

So I did see a headline on Thursday that I'm like, oh my god, this prediction is wrong already: Intel's CEO search heats up as leadership shake-up drives turnaround hopes. And so I'm literally, like, this story is gonna be, like, a rumor about a candidate that they're gonna be announcing, like, this is gonna be done by the weekend. But then I go to the story, and I'm thinking, like, oh, great.

Bryan Cantrill:

Like, well, fine. I knew the risks. The story is that Citi analyst Christopher Danely says Intel might name a permanent CEO in the next few months.

Adam Leventhal:

I mean, that's talk about, like, a prediction.

Bryan Cantrill:

Oh, no. I'm like, wow. Wow. How do you get to be an analyst? That sounds hard.

Adam Leventhal:

Yeah. Conversely, they might not. Like, that's the alternative.

Bryan Cantrill:

Yeah. Sorry. I mean, it's just taking away from the superlative work of Citi analyst Christopher Danely, who I'm sure does very thoughtful research. But it is not a great insight that they might name a permanent CEO in the next couple of months. Yeah.

Bryan Cantrill:

But, Adam, I decided that I will take no credit if they name this in January. But if it's February, I will take 1/12th credit. And, you know, if it's March, 2/12ths credit, 1/6th credit, and so on. A nice little way of keeping track, because I know we keep score so closely.

Adam Leventhal:

Yes, some of us do.

Bryan Cantrill:

And then the other thing that I just wanted to mention to you: I've got a high school senior, he's got a terrific English teacher, a shout-out to Ms. Foster, and she has given the class a very good assignment, which is, you need to seek out an adult and ask that adult for 3 books that had an impact on them as an adult. They can't be kids' books, but books written for adults, and then you, the student, will read one of these books; Ms. Foster will decide which one you read.

Bryan Cantrill:

And Alexander picked me, my 17-year-old, which is great. I think my wife is a little hurt, but, you know, we've got other kids; you can focus your efforts on them.

Adam Leventhal:

I bet he asked you both. I bet he told both of you that he was only asking you, because he's a pro.

Bryan Cantrill:

I think so. He is, I agree with you, a pro, to the point where I think he might have concocted that story. If he had such a story, he would concoct it only for my wife's benefit. I think he and I both know that I actually was his first call. I really do appreciate it.

Bryan Cantrill:

And so, the 3 books that I named, because I'm like, this is a great question: what are the 3 books that have had an impact on me professionally, or changed me in adulthood? You have read all 3. And with the advice of don't overthink it, I was wondering how many of the 3 you could rattle off. I think you can go 3 for 3.

Adam Leventhal:

How many of the 3 books that you and I have both read have most influenced you as an adult?

Bryan Cantrill:

That's right. I mean, I think you can go at least 2 for 3. Again, don't overthink it.

Adam Leventhal:

I'm, like, reeling. I feel like this is a quiz I did not prepare for.

Bryan Cantrill:

Listen, it's a pop quiz. Okay, everybody, take out your paper. No. I feel like the

Adam Leventhal:

okay. Can I have a hint? Can I phone a friend on this one? So are we talking about, like, the technology domain? Is that what you're saying?

Bryan Cantrill:

Yes.

Bryan Cantrill:

Okay. In the, actually, our shared professional lives.

Adam Leventhal:

Yes. Okay. I'm gonna say Steve Jobs and the NeXT Big Thing.

Bryan Cantrill:

That is 1. And I'm gonna give you the hint that I think you've gotten the hardest one. The other 2 are easier than that.

Adam Leventhal:

Holy shit. I mean, like, I don't think this is right, but I feel like The Quantum Dot, was Well, that's a deep

Bryan Cantrill:

pull, and no, that's not right. That is a very deep pull. Okay. So what was the title of my blog entry when we launched the company? Oxide.

Adam Leventhal:

Oh, oh, oh, right. The Soul of a New Machine. Right. Of course.

Bryan Cantrill:

The Soul of a New Machine. Yeah. And then what did we name our group after at Sun?

Adam Leventhal:

There we go. Ben Rich's Skunk Works. Alright. You're right. It was easy.

Adam Leventhal:

You're right. I got I got the hardest one first.

Bryan Cantrill:

The Quantum Dot is definitely an example of overthinking it. That is, God, a very deep pull, by the way. That's an obscure book.

Adam Leventhal:

I don't know why that's like, that's the only thing literally the only book

Bryan Cantrill:

I could think of in that moment. It's like, I'd

Adam Leventhal:

name another book. Couldn't I couldn't name another book.

Bryan Cantrill:

Your brain just went white, and all you could think of was this relatively obscure book published in 1988 or whatever it was.

Adam Leventhal:

That's all that was left. Yeah.

Bryan Cantrill:

Well, yeah. You know, I tried to prevent you from overthinking it, but sometimes the overthinking is gonna happen on its own. Anyway, I thought that was a great assignment. And he'll be reading Skunk Works by Ben Rich.

Bryan Cantrill:

So Oh, nice.

Adam Leventhal:

We'll be

Bryan Cantrill:

reading that with him. Yeah. It's gonna be fun. So, Rain, Eliza, welcome. Sorry, welcome to us talking about 3 books that have influenced us professionally.

Bryan Cantrill:

Yeah. Very excited about this, and a shout-out to Chris Krycho, because he had a tweet of, like, oh, it's great to see a resurrection of this, because I did this on the New Rustacean podcast, and that must have been lodged somewhere in my subconscious. But did you ever listen to New Rustacean, Adam?

Adam Leventhal:

I have, but not that episode. Like, I didn't realize that we were stealing from another great artist.

Bryan Cantrill:

That's right. Exactly. But very, very excited for this, because I also feel like, and this is true of not just Rain and Eliza, but Rain and Eliza are two that definitely are constantly pulling out crates that I have never heard of, that are extremely useful. And I'm wondering, like, why haven't I heard of them? Should we start with how do we go at this, Adam?

Bryan Cantrill:

Because online, you implied that we should have a dtolnay cap.

Adam Leventhal:

I just mean, I look at my list, and David is here in the audience, I assume just to bask in my fanboying for him. But, you know, I look at my list, and there are so many crates that David Tolnay has made that I appreciate. And I'm gonna kick it off with one of those, one of the ones that I stumbled on. You know, I write a bunch of macros here and there.

Adam Leventhal:

I am frustrated with rustfmt, Bryan. Not in the way that you're frustrated with rustfmt, but, like, I wanna use it in kind of a library context, and that's challenging to do.

Bryan Cantrill:

So what do you... go on. That's interesting.

Adam Leventhal:

Well, so I stumbled, I was like, surely David, writer of all macros, has done something for this. And yeah, there's a crate called prettyplease, and it is for doing Rust formatting, like, formatting of code. And what I really love about it is it's kind of tersely opinionated.

Adam Leventhal:

That is to say, look, I'm not trying to be rustfmt. I'm just trying to make things better, like, pretty. Like, I'm pretty-printing the thing, I'm not formatting the thing. And if you don't like it, well, you know, get out of here.

Adam Leventhal:

And I think there have been a bunch of PRs and issues of the form, could you do it a little bit differently? And I really appreciate that David's kinda like, no, take it or leave it. Like, if you don't like the way it's formatted, then, you know, maybe format it differently. That's fine.

Adam Leventhal:

But it has been a godsend for a lot of the testing that I've done for these code-generation crates.

Bryan Cantrill:

Okay. That's interesting. Because you've got a lot of crates that generate a huge amount of code.

Adam Leventhal:

Yeah. Like, for example, progenitor: you have, like, 3 lines of macro that poops out, like, 60,000 lines of code. And,

Bryan Cantrill:

and you want the code that it emits to be readable.

Adam Leventhal:

Well, in particular, when I'm dumping that, like, into a file for test automation or whatever, yeah, I want it to be at least vaguely readable. And I've used rustfmt in the past, but, like, there's a bunch of challenges associated with using rustfmt in a programmatic context like that. And prettyplease has been fantastic, like, exactly what I needed. You know, maybe I should be

Bryan Cantrill:

using this. Okay. This is already paying dividends, actually, because in my crates that generate code, I have just kind of manually made the code that I generate rustfmt-clean.

Adam Leventhal:

Oh, yeah. prettyplease is gonna help you. Yeah. prettyplease is, I think, exactly what you want, in that, like, you mean your code generation is, like, emitting newlines and stuff like that.

Bryan Cantrill:

The the yeah. Yes. That's right. It's doing all of that. Yeah.

Bryan Cantrill:

Yeah. Totally. And, you know, that's interesting, because, I mean, I kinda believe that with code that generates other code, there's, like, a balance that must be achieved in the universe. And if the code that you're gonna emit is gonna look clean, the code that emits that code has to be filthy. But maybe that's too

Adam Leventhal:

Well, this is one of

Bryan Cantrill:

My code that emits rustfmt-clean code is filthy, and this, I think, would allow me to clean it up quite a bit.

Adam Leventhal:

Totally. This is what I've fallen in love with with regard to, like, Rust macros, which is you can use another dtolnay crate, quote, the quasi-quoting system. So you quote code that looks like Rust code, and then prettyplease will just clean

Bryan Cantrill:

it up. So you don't have

Adam Leventhal:

to, like, just live in this cave-person era of, like, strings on strings and doing your own semi-formatting here and there. And the beautiful thing, too, is, like, your code generation in macro context can be exactly the same code as if you wanna generate code and dump it into files. And I think it just allows for really, like, debuggable, testable, understandable code, as opposed to, as you're saying, Bryan, kind of this swirly code generation that is also interspersed with formatting.

Eliza Weisman:

Bryan, you might be pleased to know that I got out in front of this a few months ago and ripped out a bunch of string-based code generation from Idol, which may have been Cliff's doing rather than yours. But now that uses quote and prettyplease.

Bryan Cantrill:

Oh, that's interesting. You know, yeah, that is Cliff's doing, not mine, but I can go look at that as a model, because what I'm thinking of in particular is the pmbus crate, which is just Mhmm. There's some grime that could be cleaned up in there, for sure. Yeah.

Bryan Cantrill:

This looks great. God, you know, as I think we've said before, they always say that there's a chat that includes everyone except for you. I always feel like there's always a dtolnay crate you haven't heard of. And then I

Adam Leventhal:

That's what I'm saying. I mean, that was the tweet. Right? Like, I feel like David has done so much stuff in this kind of domain, too, that surely David has

Bryan Cantrill:

found this problem. In fact, I'm gonna cast this open to David and to Rain, who's bumped into this. You know, one of

Adam Leventhal:

the things that I struggle with, Rain, is a problem I saw you working on in Dropshot, which is

Eliza Weisman:

Mhmm.

Adam Leventhal:

One of the things that syn, another dtolnay crate, does very nicely is, like, turning errors in the Rust macro context into code-generated errors to help debug and stuff. And one of the things I saw you do in Dropshot was, like, collect a pile of errors to then emit all at once. And I'm sort of surprised that there wasn't something you reached for to say, you know, as you encounter problems and errors along the way, accumulate this list so that you're not just failing on the first problem, but actually emitting a bunch of errors for the user to then handle all at once. Yeah.

Bryan Cantrill:

Is there

Adam Leventhal:

anything like that?

Rain Paharia:

So it's funny, because after we talked about it last time, I ended up spending a little while looking at it. And there are a bunch of great libraries, and actually some of them I wanted to talk about here. There wasn't quite anything that I noticed that hit that exact spot. And partly because I think one of the things that becomes challenging is that, if you want to do good error handling, and this kind of goes into that cosmic-balance thing that you were talking about, Bryan, if you wanna do good error handling, often you can no longer use, like, good type-system things. So as an example, you know, one of the ways you might model something in Rust is with, like, a Result of the okay value or the error value.

Rain Paharia:

Right? But if you want to, like, collect errors, then often something you will do is you'll pass in, like, an `&mut` error collector or something like that. Right. And the value that you return, like, is an Option. And now you have to know that there's kind of this implicit invariant, that if the Option is None, then that means you had at least one error go in, and so on.

Rain Paharia:

A crate that I did actually wanna call out, though, and something that doesn't quite solve this specific problem, even though I wish it would, is miette. So miette is a really, really cool crate. So if you're familiar with Rust and, of course, dtolnay's crate-verse, you'll have come across thiserror and anyhow. Right? miette is kind of a combination of thiserror and anyhow, and it kind of combines both of those things.

Rain Paharia:

But another crate that it actually combines is codespan. So if you're familiar, like, one of the things that's really interesting about Rust is that rustc has, like, great error messages, and I think that's one of the reasons that all of us feel pretty good about Rust, right? Is that fair to say? And with the error messages, one of the things that's really nice is, like, there's this lovely, like, syntax highlighting where it'll show you the exact things that were wrong, and it'll give you a suggestion of what to do instead, and, like, all of those things.

Bryan Cantrill:

Amazing. Yeah. It's so good.

Rain Paharia:

And so there's actually a few crates that do that. So rustc's own error thing was extracted out into a crate that I don't remember the name of off the top of my head. Then there's another crate called codespan. But miette also captures all of that. And miette, actually, one of the things it can do is it can store a list of errors.

Rain Paharia:

So what you can do, and I have used this pattern in some places, is that you actually store a list of errors, and then you have miette report that. So you can provide the source code that those errors are associated with, and the byte offset. And so you kinda provide that source code, and then miette will kinda render that in a nice way. So I think that kind of style of, like, high-quality error reporting is actually something that is really, really cool about Rust. And I don't know if there's any other ecosystem that has paid this much attention to, like, how your error messages look. Right?

Rain Paharia:

Rather than just reporting, like, your line number or whatever.

Adam Leventhal:

Yeah. This looks really cool. This looks

Bryan Cantrill:

so good. Yeah, I've not seen this. Have you seen this before? I don't know.

Bryan Cantrill:

I've never seen this before.

Adam Leventhal:

No. Never. Actually, you know what? Rain may have pointed me to this a while ago. But, Rain, just to be clear, this is not in, like, macro context.

Adam Leventhal:

This is, yeah, when you say it kinda draws inspiration from rustc

Bryan Cantrill:

Yes.

Adam Leventhal:

It's for, you know, if you're processing some other kind of document or whatever, and you wanna draw on that kind of concept. Yeah. Okay. Cool.

Rain Paharia:

Yes, it is. Right. So as an example, actually, you can integrate it with serde_json. So you can actually get, like, great highlighting for, like, which bit of a serde_json thing failed, and I think that's really cool.

Bryan Cantrill:

Oh, that is really good. Yeah. Because, you know, I really like RON a lot. RON, not the humans named Ron, although I guess I like all of you too, but RON, the Rust Object Notation, I like a lot. But, man, the error messages are really not very good, which is frustrating, and, boy, this would be an opportunity to really improve them.

Eliza Weisman:

Bryan, it is with a heavy heart that I must inform you that if you have at any time in the last 6 months or so made a typo in an Idol RON file and gotten an error that has, like, some RON source in it, that's thanks to miette.

Bryan Cantrill:

Okay. So you've integrated miette into the RON parsing in Idol?

Eliza Weisman:

That is correct. I'm

Bryan Cantrill:

gonna do the same thing

Eliza Weisman:

or a PR for you.

Bryan Cantrill:

Yeah. Okay. I need to do the same thing. This has already paid enormous dividends, and I'm, like, 20 minutes in or whatever.

Adam Leventhal:

Do you wanna sound a little less surprised?

Bryan Cantrill:

And of those 20 minutes, like, 7 minutes was us screwing around. I mean, this is amazing.

Rain Paharia:

I think in general, though, there is some value in having, you know, the kind of thing I wrote, which is, like, you have a nested tree structure. Right? And you are parsing through the nested tree structure, and you wanna actually not just fail on the first error globally, but you have kind of a notion of, like, you wanna go through as much as possible, and you wanna collect as many errors as possible. And I have had to do that a few times, and I've pretty much handwritten something each time. So that kind of suggests that maybe, I don't know, if the audience has a suggestion for something that does that, otherwise, that might actually be worth doing and putting out as a separate thing.

Rain Paharia:

miette is kind of a much-bigger-scope thing, but yeah.

Adam Leventhal:

This is where dtolnay tells us to reach under our chairs, and we've all got that crate sitting right there.

Bryan Cantrill:

Exactly. I know.

Rain Paharia:

I have, like, a pair of really, really cool serde-related crates. So there is a crate called serde_ignored, and there is another crate called serde_path_to_error. I think both of these are really good. And, again, kind of coming at it from the you-wanna-produce-good-error-messages kind of thing. Right?

Rain Paharia:

So one of the things that I've noticed, like, when defining, say, a configuration file, is that people will often misspell things. Right? And serde has this really cool `deny_unknown_fields`. Yeah.

Bryan Cantrill:

I love this.

Rain Paharia:

Yeah. Yeah. So, and `deny_unknown_fields` is great, but sometimes you don't want an error, you instead want a warning. And serde_ignored actually kind of lets you get that warning.

Rain Paharia:

So it's kind of somewhere in the middle between, like, silently accepting the, you know, maybe the typo, or failing, and I really like serde_ignored for that, because often you wanna support some kind of forward compatibility, and if you have that forward compatibility, then you don't just wanna choke if you see, like, a new option or whatever. Right? And so serde_ignored does a really, really good job of kinda reporting that. And then kind of paired with that, but solving a slightly different problem, is serde_path_to_error. And what serde_path_to_error does is it will try and report the nearest part of, kind of, what failed.

Rain Paharia:

So it'll kind of maintain some state, like, you know, which keys have you traversed into, and so on. And it does a pretty good job at that. So there's one specific asterisk which we don't really wanna get into right now because it detracts, but, overall, this pair of crates has just, I feel, really elevated the kind of error-handling experience around configuration files for me.

Bryan Cantrill:

Yeah. This is great. I've not actually used either of these. And, again, Adam, I'm also standing by my belief that we should not have any bag limit on dtolnay crates.

Adam Leventhal:

No. I think spot on.

Bryan Cantrill:

Maybe like 50 or a 100 or something.

Adam Leventhal:

But, yeah, serde_path_to_error, we've definitely used. I mean, I'm sure you've seen those JSON errors that are, like, yeah, no, I failed to parse byte 6,015. Is that helpful? Yes. Okay.

Bryan Cantrill:

Yes. Yeah. Yeah. I mean, it's not helpful at all. It's very very unhelpful.

Bryan Cantrill:

The opposite of helpful.

Adam Leventhal:

That's right. In fact, so unhelpful, because you're like, maybe I could figure out the 6,015th byte. Like, that wouldn't be the hardest thing in the world, would it?

Bryan Cantrill:

I have done that. I mean, that has been, like, my kind of immediate go-to, sadly, as opposed to being, like, you deserve a better error message. We deserve nice things. Totally. It's a long path to really adjust to that.

Bryan Cantrill:

Yeah. Yeah. Rain, those are great.

Adam Leventhal:

What else on your list, Rain?

Rain Paharia:

Oh, boy. I mean, I had a couple of the other dtolnay crates, but they'll come up there. Another crate that I wanted to call out, as a cool proc-macro crate, is derive_where. So one of the things that people run into sometimes is that you wanna do, say, like, a derive(Debug), right, or, like, a derive(Clone) or something. And, like, if you have a type which has a generic, like, `T: Clone`, then with the implementation that Rust generates, you only derive Clone if `T` also implements Clone.

Rain Paharia:

Right? Like, you only get to implement Clone that way. That is mostly what you want, but sometimes not. And so one option, if you don't want that Clone bound, right, like, sometimes you're not actually storing a `T` in there, you are storing, say, some kind of derivative type of `T`, is to write the impl by hand.

Rain Paharia:

But the one I really like, that kind of automates this, is derive_where. So what derive_where lets you do is it lets you say derive Clone where some bound holds. So you can say derive Clone for my type where `T` implements some trait that you've defined, but you can also say you have no restrictions on `T` at all, so you can just do derive_where Clone. I remember, like, showing something else at a demo day, and then everyone else was like, what's this derive_where thing?

Rain Paharia:

And then that ended up turning into a derive_where demo. It was very funny, but it's a crate I really like, and I end up reaching for it. How does this work? As far as I can tell, the proc macro just generates an impl that iterates over the fields, but puts a different set of bounds on it. So it's just a proc macro in that sense.

Bryan Cantrill:

That's really cool.

Rain Paharia:

Yeah. Definitely. One of my favorite little proc macros that, you know, help out.

Bryan Cantrill:

Yeah. And so how were you using it in the thing that you were demoing? What was the specific use case?

Rain Paharia:

I think what I ended up having was, like, I was storing a `PhantomData<T>` or something like that. So I had this thing which only stored the `T` as a marker, so it didn't actually store any concrete values of `T`. And implementing something like Clone for that should not require that `T` implement Clone. Right? Like, just logically, that is not a requirement.

Rain Paharia:

So I ended up reaching for derive_where with that. That's a pretty common thing I end up having to do.

Bryan Cantrill:

That's neat. It seems very useful.

Rain Paharia:

Yeah. It's it's fun.

Bryan Cantrill:

How did you discover that crate? That is a good question. We're like Yeah. Wait. Wait.

Bryan Cantrill:

How did it occur to you to go look for a crate that does that?

Rain Paharia:

I think, I think what I ended up doing god. This is

Bryan Cantrill:

Do you have telepathic crate powers? I mean, you don't have to tell us if you don't want to, but there are these crates, Rain, where I'm like, how does Rain know about this?

Adam Leventhal:

Okay. So, Rain, while you think of the answer, I would just say: have you ever done this, Bryan? I go to ChatGPT and I think, like, surely there is a crate for this. And I describe the crate that I want. And every time, it's like, oh, is there such a crate?

Adam Leventhal:

A thousand percent. It's like, all you have to do is cargo add smorgasbord, then smorgasbord::, you know, whatever, and it'll do exactly what you want. I'm like, thank you so much, ChatGPT. And, like, the crate doesn't exist, or it exists and it does something totally unrelated. But I've never had it say, sorry.

Adam Leventhal:

Like, nope. Nothing.

Bryan Cantrill:

I do think it's funny that ChatGPT is most likely to hallucinate, in my experience, when you yourself think that this thing should exist. You know what I mean? Where ChatGPT is like, no, there should totally be a crate like that. In fact, there is. It's smorgasbord.

Bryan Cantrill:

And on the one hand, it's somewhat vindicating, because ChatGPT had a very vivid hallucination for me around organizational gists. So, you'd use gists in GitHub, sure. Yeah. And organizational gists exist, but I can't see how to create them or edit them. And ChatGPT also believes that you should be able to do the things that I believe you should be able to do to create or edit them.

Bryan Cantrill:

And it's, like, no, you can't do any of those things, actually. You know what it reminds me of? The organizational gists are, like, inside UFO 54-40. Are we talking Inside UFO 54-40 here?

Adam Leventhal:

Maybe not this year, but, like, several times.

Bryan Cantrill:

Right. Right. Exactly. Maybe not in the last, like, last 6 weeks. I don't know.

Bryan Cantrill:

Inside UFO 54-40 was a Choose Your Own Adventure, which, okay, fine. Choose Your Own Adventures, you had read Choose Your Own Adventures.

Adam Leventhal:

Yes. Yes.

Bryan Cantrill:

But, Rain, you and, like, Eliza, you did not. Steve, did you read Choose Your Own Adventures?

Eliza Weisman:

Oh, excuse me. I have read Choose

Bryan Cantrill:

Your Own Adventures. Yeah. This is great to hear, Eliza, so this is good. Choose Your Own Adventures have leapt the generational chasm.

Bryan Cantrill:

This is a relief. I'm glad.

Steve Klabnik:

I used to, like, hold my finger at a place when I wasn't sure what decision I wanted to make, and then, like, choose 1 and be, like, nah. Never mind. And, like, back up 1. And then, like, I started getting a stack of, like, I can't hold 3 of my fingers in 3 different places in this book at the same time.

Rain Paharia:

That's right.

Bryan Cantrill:

It's amazing that you also read Choose Your Own Adventure, Eliza.

Rain Paharia:

I

Eliza Weisman:

am just old enough to remember save-scumming in a book.

Bryan Cantrill:

Yeah. Okay. Well, so there was okay. Well, great. This is great.

Bryan Cantrill:

This is something that really can bring all generations together. One of the early Choose Your Own Adventures was Inside UFO 54-40. And in Inside UFO 54-40, in every ending you died, except for one ending where you got to utopia. But there was no page that actually sent you there, no path that sent you to utopia. And I just remember my little 9-year-old brain just absolutely smoldering on this. And of course, in hindsight, as an adult, going back and reading Inside UFO 54-40, in addition to the warning of, like, you know, don't read this book straight through from cover to cover, blah blah blah.

Bryan Cantrill:

There is a special warning that this Choose Your Own Adventure may require you to think in a very unorthodox way, which was basically their way of saying that you needed to go straight to the page. Anyway, it was way too meta.

Adam Leventhal:

Also, as an adult, I'm sure you you appreciate the, like, maybe this will keep this kid occupied for an hour and a half,

Bryan Cantrill:

aspect to that. I definitely appreciate that. And it, may have overshot the mark because, like, I'm pretty sure my brain, like, just absolutely seized up on it. I I I don't I mean, clearly, I'm talking about I mean, I'm I'm a 51 year old man talking about this Choose Your Own Adventure. So, like, I think I think you can safely say that it overshot the mark.

Adam Leventhal:

How did you not reference this to Alexander when he was asking for the books that influenced you most significantly? Like, Skunk Works? Like, give me a break.

Bryan Cantrill:

Oh my god. Why was Inside UFO 54-40 not on there? And I mean, good. They asked for 3, not 4. Okay.

Bryan Cantrill:

That's why. They, obviously, they asked for 4. That's that, I mean, clearly has influenced me a a great deal. Where were we? I'm so sorry.

Bryan Cantrill:

I feel like I think I've done the thing where we've now ended up on, like, the ending that you can't have any way of getting to. I'm so sorry.

Rain Paharia:

I think, you know, I was thinking about how I do that, and I feel like there's no, like, magic. Right? Like, in this case, I ended up Googling for, like, I think, Rust derive custom trait bounds or something, and, like, derive_where is, like, in the first page of results. But, like, you know, I'll, like, spend a little time looking on Google and crates.io, and I also, like, you know, maybe, like, ask some people, like, you know, hey, do you know something?

Rain Paharia:

And then, sometimes I can't find it. It's just, you know, it's really hard. Like, I think one of the more structured ways that has helped is, like, if you have a particular code base you like, which you feel like might use something like that, then kind of poke around in that code base's source code. That feels like a good way, and I've discovered a whole bunch of crates that way.

Bryan Cantrill:

Yeah. And it's funny, Rain, that you mention that, because I kinda feel like this has been this glorious positive feedback loop of the Rust ecosystem getting larger: there are more kind of programs you can go to to say, how did this thing do this that I like? I mean, certainly that's how I discovered tui-rs, now Ratatui, was from, like, using something that used it. I'm like, this is amazing.

Bryan Cantrill:

Like, what did this thing actually use to do this? And I guess, Adam, is now a good time to say the teaser for our Ratatui episode coming up in 2 weeks? Right? I think we're

Adam Leventhal:

Yeah. Orhun, who is a maintainer for Ratatui, whom I met over at RustLab in Florence, Italy, which I've mentioned a couple of times on the pod. And he's gonna be joining us in 2 weeks at 9 AM Pacific, which is some other time in Turkey. So, yeah, really excited to be talking about Ratatui with him.

Bryan Cantrill:

It'll be great. But I definitely found that from doing exactly that going and looking at other programs and seeing what they were using.

Eliza Weisman:

On the subject of finding crates, I was thinking a bit this afternoon about, you know, I have personally had a lot of experiences with there are many categories of crate where there are a bunch of implementations of basically the same thing with basically the same API and wildly different performance characteristics. My classic example is, like, async channels, like MPSCs. There's the Tokio one. There's the futures one. There are a bunch of crates that are just channels.

Eliza Weisman:

And having done a bunch of benchmarking and digging into their implementations a few years back when I was writing my own channel they all have very different performance characteristics, and there isn't really a good one. You know, you don't just pick a good one. You pick the right one for the job, and it depends pretty substantially on the usage pattern. And I think kind of the big lesson that I learned from that is that as a crate author, I found it very useful to say upfront, like, at the very top of the README, front and center when you've gone and done something that there are, you know, 4 different versions of on crates.io, like a channel.

Eliza Weisman:

Why should you use this? Why should you not use it? How does it compare with other crates that maybe implement something similar? And I found it really useful to do that and I hope that when people go searching for these libraries and encounter something that I've written, they read this section and know when, you know, maybe this is actually not the appropriate implementation for the specific problem they're trying to solve or when it is. And I wanna kind of encourage others to think about doing this because I think it's a really valuable exercise.

Eliza Weisman:

I can maybe scare up some of the examples of times I've written that.

Bryan Cantrill:

Yeah, for sure. And, Eliza, I know at some point we're gonna talk about bitfield crates. Is now a good time to talk about that? Because that is an example where there are a bunch of different crates and a unique

Eliza Weisman:

brand as well. Well, so my bitfield crate actually, and I know I promised you some bitfield crate opinions. My bitfield crate has such a section in its README, and it basically leads with there's, like, no reason you should choose this. I wrote it for fun because I wanted to write it for fun. And one interesting thing about it relative to other bitfield crates is that mine is a declarative macro rather than I'm a big fan of the bitfield crate called modular_bitfield, which allows you to sort of have a struct and annotate various fields in the struct with attributes, and you generate this very nice, you know, packed within one word bitfield thing, and I think that it presents kind of the nicest interface for doing this.

Eliza Weisman:

And mine is just kind of worse in every possible way except that it doesn't use a procedural macro because I thought that it would be fun to see if I could get it to work without using a procedural macro.
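What these bitfield crates generate boils down to getters and setters that pack several small fields into one machine word. A hand-rolled sketch of that idea (the `Header` layout here is invented for illustration; modular_bitfield derives equivalent accessors from field attributes, and this is not any crate's actual output):

```rust
// A sketch of the accessors a bitfield crate generates for you:
// several small fields packed into a single u16.
#[derive(Copy, Clone, Default)]
struct Header(u16);

impl Header {
    // Layout (invented for this example):
    // bits 0..4: version, bits 4..8: flags, bits 8..16: length
    fn version(self) -> u16 {
        self.0 & 0xF
    }
    fn set_version(&mut self, v: u16) {
        self.0 = (self.0 & !0xF) | (v & 0xF);
    }
    fn flags(self) -> u16 {
        (self.0 >> 4) & 0xF
    }
    fn set_flags(&mut self, f: u16) {
        self.0 = (self.0 & !(0xF << 4)) | ((f & 0xF) << 4);
    }
    fn length(self) -> u16 {
        self.0 >> 8
    }
    fn set_length(&mut self, l: u16) {
        self.0 = (self.0 & 0xFF) | ((l & 0xFF) << 8);
    }
}

fn main() {
    let mut h = Header::default();
    h.set_version(3);
    h.set_flags(0b1010);
    h.set_length(200);
    assert_eq!(h.version(), 3);
    assert_eq!(h.flags(), 0b1010);
    assert_eq!(h.length(), 200);
    println!("packed word: {:#06x}", h.0);
}
```

The value of the attribute-based crates is that all of this masking and shifting, which is easy to get subtly wrong by hand, is generated for you from the declared field widths.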

Bryan Cantrill:

Okay. So, you know, I've not used modular_bitfield, and this is not some like, bitfields are a really good example of something where it can be hard to find the I mean, there are a bunch of different crates. They're all, like, named bitfield, or have, like, bit field or bit fields, or they've got bitfield in the name somewhere, and it can be hard to sort out what's what. So I'd never discovered modular_bitfield in this, but this is actually much closer to the interface that I had kind of wanted to find in other bitfield crates.

Bryan Cantrill:

This feels like a much more kind of natural interface.

Eliza Weisman:

Yeah. I think it's really nice. It's by far the nicest interface to this sort of thing that I've seen, and I just pasted in chat the comparison with other crates section from my sort of sui generis bitfield crate that says, basically, don't use this, use modular_bitfield. But I've used my own thing in all of my projects because I wanted to make my own because I thought it would be fun.

Bryan Cantrill:

Yeah. This is actually a great guideline in terms of comparisons of the crates. I also think it's something that I like about the Rust ecosystem too: it doesn't feel like it's a popularity contest. People are just trying to find the right tool for the job, and it's like, hey, if my crate is not the right fit for you, let me actually go steer you to the other crates that may be a better fit for you.

Adam Leventhal:

I mean, to the contrary of being a popularity contest, I mean, how many crates have you gone to that have a really thoughtful, like, see these other crates section? And I really appreciate it because it's not that they're apologizing for existing, but really explaining, like, no, this is not NIH. Like, I built this because I looked at these other things, and I evaluated them, and they didn't meet my needs. But they probably do meet your needs.

Adam Leventhal:

So, like, don't use my thing just because I built it. Like, go use the thing that meets your needs. It's incredibly thoughtful.

Eliza Weisman:

It's also it's very nice as a maintainer to not have to, like, respond to the issues of people who are just constantly showing up to say, can you make this other crate? Well, no, I can't, because I didn't set out to do that. And, you know, not to sound too much like a member of a cult, but, Bryan, this is why it's important to have upfront values for one's technical projects.

Bryan Cantrill:

Yes. In terms of being very explicit, I do think it's very helpful to be explicit about the things I care about and the things I don't care about. Because I think it's okay that the things that matter to a crate's author I mean, just what you're saying, Eliza, about being upfront about, you know, maybe performance is a primary consideration, maybe it's a secondary consideration, but being explicit about that is actually really, really helpful, I think.

Eliza Weisman:

Yeah. And the big lesson from the channel thing that I learned from doing a bunch of benchmarking of channels is that there isn't really just a box that says performance on it that you can tick. Right? Because, for instance, in the README to my channel crate, which I posted in Discord chat, I discussed, you know, this question of: an MPSC channel, an async channel, has to store the messages in the queue someplace. And there are channel implementations that will allocate and deallocate chunks of buffer as you are sending and receiving messages, so that the memory usage of the channel is proportional only to the number of messages currently in the queue, versus channel crates that will allocate the whole buffer once when you make the channel. And one of those is not necessarily good and the other bad. It's a question of: is this channel something that is structurally integral to the program, and it lives for the entire time that the program exists, and all of the messages go in that channel, and the bound on that channel is extremely large and the messages very big, which means that if you keep it fully allocated all the time, that's a very large amount of memory? Or is it something where you are making these channels with a bound of, like, 8 or 10 messages, and now there's just extra overhead of doing this I'm gonna allocate another chunk of buffer as I need it?

Eliza Weisman:

And it really depends pretty substantially on the usage patterns and there is no sort of the right move. So it's useful to document those sort of performance trade offs and the ways in which like this might be suitable for one type of use but not for another.
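The eager-versus-lazy allocation trade-off being described can be sketched with two toy bounded queues, one that reserves its whole buffer at construction and one that grows as messages arrive. (This is purely illustrative; it is not the implementation of any real channel crate, and the names `Eager` and `Lazy` are made up.)

```rust
use std::collections::VecDeque;

// A toy queue that pays its full allocation once, up front.
struct Eager<T> {
    buf: VecDeque<T>,
    cap: usize,
}

// A toy queue that starts empty; the VecDeque reallocates as
// messages arrive, so memory tracks what's actually queued.
struct Lazy<T> {
    buf: VecDeque<T>,
    cap: usize,
}

impl<T> Eager<T> {
    fn new(cap: usize) -> Self {
        Self { buf: VecDeque::with_capacity(cap), cap }
    }
    fn send(&mut self, v: T) -> Result<(), T> {
        if self.buf.len() == self.cap {
            return Err(v); // bound reached: reject the message
        }
        self.buf.push_back(v);
        Ok(())
    }
}

impl<T> Lazy<T> {
    fn new(cap: usize) -> Self {
        Self { buf: VecDeque::new(), cap }
    }
    fn send(&mut self, v: T) -> Result<(), T> {
        if self.buf.len() == self.cap {
            return Err(v);
        }
        self.buf.push_back(v);
        Ok(())
    }
}

fn main() {
    let mut e = Eager::<u64>::new(1 << 16);
    let mut l = Lazy::<u64>::new(1 << 16);
    e.send(1).unwrap();
    l.send(1).unwrap();
    // Eager has already reserved room for 65536 messages;
    // Lazy's buffer holds roughly only what's been sent.
    assert!(e.buf.capacity() >= 1 << 16);
    assert!(l.buf.capacity() < e.buf.capacity());
}
```

With a long-lived, huge-bound channel the eager strategy pins a lot of memory forever; with many short-lived, small-bound channels the lazy strategy's incremental allocations are the overhead instead, which is exactly why neither is simply "the fast one."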

Bryan Cantrill:

Yeah. Totally. And I think that the Rust ecosystem tends to be pretty good about talking explicitly about what those trade offs are, but, certainly excellent advice to create authors to be very explicit about those trade offs.

Rain Paharia:

I think a case where that kind of organically arose is with CLI parsing crates, because there's a whole bunch of CLI parsing crates. So there's things like clap which, you know, if you've written a Rust CLI tool, you've almost certainly come across clap. But there's a whole bunch of other points in this design space that people have hit with various trade offs. And I was really appreciative that people really put together, like, benchmarks for you know, you're considering things like how long a build takes and, like, how many bytes get added to the final binary, right, versus, like, error handling and so on. And I think, you know, different projects can reasonably make different trade offs here.

Rain Paharia:

And when I saw this table and, like, you know, the amount of work put into it, it was just very, very impressive to me.

Bryan Cantrill:

And so you've dropped the argparse-rosetta-rs repo into the chat, which is benchmarking all of the argument parsing libraries, which I think is really interesting. Actually, now looking at the times, it's kind of interesting to look at the build time versus the overhead, because I have used clap for most things and, especially since clap merged with structopt, I've found it to be really pretty terrific. But I also think it's great that there are other approaches out there. It's not the only one out there.

Rain Paharia:

Yeah. And in particular, like, I mean, clap has a couple different ways to use it. You can use it with or without the proc macro. But then there's a bunch of others like, so actually another one that I really like, that is much lower level than clap, is lexopt. So the goal of lexopt is, like, all it gives you is an iterator over the options. Right?

Rain Paharia:

You know? So you're getting an iterator, and in the iterator, you get, like, a little bit of structure. So you get whether it's, like, a single dash or a double dash. You get, like, very, very basic things like that. And, you know, if you really want that low level of control, then lexopt is great.

Rain Paharia:

But the trade off there is that you need to write your help yourself, and you need to remember that each time you add, you know, a thing, you also need to add the help for that. And, you know, maybe the error messages aren't as good and so on. So these are the kinds of things that you have to consider. So, you know, I recommend clap as the thing to go to, right, if you wanna start.

Rain Paharia:

But but these are all things that are, you know, worth considering for things like embedded binaries and so on.
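The "iterator over options with a little bit of structure" idea can be sketched in a few lines of std-only Rust. (This is an illustration of the shape of that interface, not lexopt's actual API, which is richer and handles things like `--opt=value` and OsString arguments.)

```rust
// A sketch of a lexopt-style lexer: no help text, no error prose,
// just an iterator's worth of tokens that tell you whether each
// argument was a short flag, a long flag, or a bare value.
#[derive(Debug, PartialEq)]
enum Arg {
    Short(char),
    Long(String),
    Value(String),
}

fn lex(args: &[&str]) -> Vec<Arg> {
    let mut out = Vec::new();
    for a in args {
        if let Some(long) = a.strip_prefix("--") {
            out.push(Arg::Long(long.to_string()));
        } else if let Some(short) = a.strip_prefix('-') {
            // Bundled short options like -xf yield one token each.
            for c in short.chars() {
                out.push(Arg::Short(c));
            }
        } else {
            out.push(Arg::Value(a.to_string()));
        }
    }
    out
}

fn main() {
    let parsed = lex(&["--verbose", "-xf", "input.txt"]);
    assert_eq!(parsed[0], Arg::Long("verbose".into()));
    assert_eq!(parsed[1], Arg::Short('x'));
    assert_eq!(parsed[2], Arg::Short('f'));
    assert_eq!(parsed[3], Arg::Value("input.txt".into()));
    println!("{:?}", parsed);
}
```

Everything above this layer the help text, the matching of flags to typed fields, the error messages is exactly what clap does for you and what this style of library deliberately leaves to the caller.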

Eliza Weisman:

I wanna point out that lexopt has in its README a very nice why and why not section in which it says basically everything that Rain has told us.

Bryan Cantrill:

Yeah. Interesting. I'm just looking at lexopt's README now. And it wants to be, like, small, correct, pedantic, imperative, minimalist, unhelpful. I feel like this is a description of many of us.

Adam Leventhal:

That's right. Look in the mirror. You may be looking at lexopt.

Bryan Cantrill:

You may be looking at lexopt. Exactly. And it also means it's making fewer decisions for you. I gotta say, like, the thing that is annoying about clap and maybe this is something that has been fixed. I should go see, and I should probably get an actual issue open on this.

Bryan Cantrill:

And see, I can't remember if we actually opened an issue on this or not with clap but there is basically no way to have a minus h option. Like, a minus h, it is gonna take for help no matter what. Like, if you're like, no, no, no, I don't want that to be help. It's like, too bad.

Bryan Cantrill:

Clap is like, it's, nope, that's help.

Steve Klabnik:

Yeah. I remember that being something. I'm not sure if we did open an issue or not, but Yeah.

Bryan Cantrill:

But honestly, clap is so useful and helpful in so many other regards, I'm like, okay, you know what, I actually really appreciate it. But it's a good example where it is, I think it's fair to say, not small and not minimalist. And it is different from lexopt. So I think, you know, Eliza, just what you're saying about, like, being very upfront about who you are as a crate and kind of what the rubric is gonna be for the way a crate decides to integrate additional work or not, I think is extremely helpful. Yeah.

Bryan Cantrill:

Lexopt's design is lexopt is nice. I like that.

Rain Paharia:

Yeah. It's really cool. I've actually used it in combination with clap. So, like, there were places where I had clap for the first level, and then I wanted, like, a second level of parsing for, like, something more detailed, and then I used lexopt for that. So, you know, ultimately, like, it takes a bunch of strings.

Rain Paharia:

Right? Like, it is a thing that takes strings and produces output. Right? So it's a primitive that is generally useful, I think.

Bryan Cantrill:

Yeah. And I also love the why not section under lexopt too. Yep. So that's yeah. That's pretty great.

Bryan Cantrill:

Alright. What what else is on is on everyone's list?

Eliza Weisman:

Well, I do really feel like I will be sad if I don't get the opportunity to plug what I feel is the crate that has had the biggest and most profound impact on my life personally. And that crate is Loom, which is pretty different from everything we've discussed so far. This is the crate that Carl Lerche wrote while he was working on the Tokio scheduler. And what Loom is, is a model checker for concurrent Rust programs. And the way that it works is it gives you a sort of set of all of the primitives in std::thread, std::sync's atomics, std::sync::Mutex, and so on, and a sort of simulated UnsafeCell. And the way these things work is that they have basically the same API as the standard library versions, but rather than actually spawning a real thread, or creating a real single word that you're doing atomic compare and swap operations on, instead what they do is they deterministically simulate all of the potential interleavings of concurrent operations that are permitted by Rust's memory model or the C++ memory model, which Rust inherits.

Eliza Weisman:

And this is sort of based on a paper, I believe, that describes a model checker like this for C++. And so, using some conditional compilation, you can say: normally I want to actually spawn threads or use real atomics or what have you, but when I'm running my tests, I want to be able to write these deterministic models that will exhaustively explore all of the permitted interleavings, like, everything the Rust compiler is allowed to emit or the operating system's scheduler is allowed to produce. And then, if you use the Loom UnsafeCell, it will check, like, okay, if I have a mutable access from one of the simulated threads, and then this thread yields, and now I'm executing some other thread, and now there's an immutable access to that same UnsafeCell, it will then, you know, generate a reasonably nice panic. And when you do this, you sort of have to sit and run this test for tens of thousands of iterations because, you know, this is sort of a combinatorial explosion of potential paths that the model permits through this test that you've written.

Eliza Weisman:

But the reward for that is that if you've written complex concurrent code like a lock free data structure, you get to learn all of the ways in which you've done it wrong Yeah. Which is, I would say, deeply and profoundly humbling. You learn the ways that, like, perhaps you were executing this code in real life on an x86 machine and you've never seen any of these possible data races because you're running on an x86 machine. But, you know, someday your code might be cross compiled for ARM, and it just so happens that, like, you've used sufficiently relaxed atomic orderings that when compiling for ARM, you will actually see loads and stores reordered in ways that will result in this data race that you never see in real life.
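The conditional compilation pattern described here usually looks like swapping in Loom's simulated primitives under a `loom` cfg flag. A sketch of that wiring (the `bump` function and test body are invented for illustration; the `loom_tests` module only compiles when built with `RUSTFLAGS="--cfg loom"` and the loom crate available):

```rust
// Normal builds use the real std atomics; builds with
// `--cfg loom` use Loom's deterministic simulations instead.
#[cfg(loom)]
use loom::sync::atomic::{AtomicUsize, Ordering};
#[cfg(not(loom))]
use std::sync::atomic::{AtomicUsize, Ordering};

// Some small piece of concurrent logic we want model-checked.
fn bump(counter: &AtomicUsize) -> usize {
    counter.fetch_add(1, Ordering::SeqCst)
}

#[cfg(loom)]
mod loom_tests {
    // Under `--cfg loom`, loom::model exhaustively explores every
    // interleaving of these two simulated threads.
    #[test]
    fn concurrent_bumps() {
        loom::model(|| {
            use loom::sync::Arc;
            let c = Arc::new(super::AtomicUsize::new(0));
            let c2 = c.clone();
            let t = loom::thread::spawn(move || super::bump(&c2));
            super::bump(&c);
            t.join().unwrap();
            assert_eq!(c.load(super::Ordering::SeqCst), 2);
        });
    }
}

fn main() {
    // In a normal build this is just an ordinary atomic counter.
    let c = AtomicUsize::new(0);
    bump(&c);
    bump(&c);
    assert_eq!(c.load(Ordering::SeqCst), 2);
}
```

The important property is that the production code path is untouched: the same `bump` runs against real atomics normally and against the model checker in tests.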

Bryan Cantrill:

And so you've used Loom, it sounds like, to actually debug wait and lock free data structures.

Eliza Weisman:

I have used it not to debug wait and lock free data structures

Bryan Cantrill:

so much as to

Eliza Weisman:

learn that my wait and lock free data structure is wrong. You okay.

Bryan Cantrill:

So I was gonna ask. Alright. So Loom has found an interleaving which now has incorrect behavior. Yep. What happens now?

Bryan Cantrill:

In terms of getting from that interleaving to understanding were you able to relatively easily get from Loom's discovery of an interleaving to being able to wrap your brain around what had actually happened?

Eliza Weisman:

So, Loom will log, you know, I'm doing this operation at this time, and its logging is somewhat useful. It will try to use track_caller a lot so that it captures, like, where was this mutex constructed in the program, at what line, where was this atomic constructed, at what line was it accessed, at what line was this UnsafeCell accessed, and which thread did that, or which simulated thread in this test that you've written and it will try to give you some helpful information about that. But honestly, it also is just very useful as a trial and error mechanism: sometimes you just sort of end up going, oh, I think I understand what the problem is, and I'm going to permute the program a little bit and run it through Loom again, and maybe now, after running through tens of thousands of iterations, it's actually not found anything that causes a data race or a deadlock. It also does deadlock detection, and it has a leak detection facility similarly.

Eliza Weisman:

If you also use Loom's wrappers around Box or other ways of allocating and deallocating, it'll tell you if you leaked a Box or an Arc. And again, the thing about this is that it sounds at the surface level similar to a tool like TSan or ASan or Valgrind, but it's actually quite different, because it's a model checker rather than a sanitizer that you run your program under and then get back, oh, while it was executing it did a bad thing where it's possible that you'll just never see the bad thing happen during that execution. Whereas with this sort of deterministic model checking of course, there might be bugs in the model checker, or you might have set bounds on how much it can explore the state space. Wow. I can't talk today.

Eliza Weisman:

The state space. And you might have missed a bug. But if you set aside those things, you know that you've actually deterministically explored everything that the compiler is allowed to generate. So anything that is outside of that is not permitted by the model.

Bryan Cantrill:

And this is a life changing crate for you in part because it highlighted the challenges in your own wait and lock free data structures. So with that, describe a little bit how this comes back to you.

Eliza Weisman:

Yeah. This stuff is incredibly hard to reason about. And every time you think that you're actually good at it, that's very dangerous. Right? Because this stuff is incredibly difficult for us to deterministically explore all of the interleavings permitted by the model in our heads.

Eliza Weisman:

And so it has really kneecapped me every time I've used it, and it just sort of taught me about my own insignificance and how small my mind is relative to what is permitted by this kind of extremely complex memory model. And really the way that it has impacted me is that I will never write lock free, wait free, or even concurrent code that uses locks that is of sufficient complexity without using Loom. And Yeah. I try very hard to avoid anyone else's code that has not either been tested using Loom or tested using another similar model checker, because I don't think that human beings can do this unassisted. I think that it's sort of like C versus Rust.

Eliza Weisman:

Right? It's sort of like there are plenty of C programs that have run in production and, you know, thus far we have not seen the lurking memory errors in them. That's great. But this is a way of exhaustively proving the correctness of our programs. And this is not me saying, like, y'all don't know what you're doing, because I don't trust myself to do this unassisted either. I think that it is just fundamentally you will regret not using these tools, and you will regret using any library that implements a complex concurrent data structure that is not tested using a tool like this it certainly does not have to be Loom in particular, but something of this nature is just kind of a necessary tool to write this kind of software.

Eliza Weisman:

I certainly was there for the gradual push to cover all of Tokio's internals with Loom. Carl developed this while he was rewriting the scheduler, and over time we pushed to get it into more and more of the various synchronization primitives and other Tokio internals, and we found just a kind of devastating amount of bugs by requiring that any new or changed code have Loom tests. And many of those bugs had not been discovered directly, but they probably fixed a lot of the, like, weird, inexplicable behaviors where there were GitHub issues that nobody knew the answer to.

Bryan Cantrill:

Yeah. That is wild. And I mean, as you say, kinda chilling when you have these issues where you realize, like, God, the symptoms of this problem would be really far removed from the root cause. It would be really difficult to debug, presumably, if seen in the wild. You would just die on some state inconsistency and then try to reason about how the hell you could possibly end up in that state.

Bryan Cantrill:

That yeah. That seems great. And I love the fact so, Eliza, was Loom done by Carl as part of the work on Tokio? I mean, was Loom kind of born out of the need to be able to better understand or validate the Tokio changes?

Eliza Weisman:

Yeah. It came out of some of you might be old enough to remember Tokio 0.1, where, like, Tokio was split into, like, tokio-core and tokio-io and various other crates. And in the process of writing Tokio 0.2, Carl rewrote the entire multi threaded runtime more or less on his own. And in the process of doing that, he realized that this was just extremely difficult, and somewhere along the line found Yeah. Found the paper. I believe the original paper is called CDSChecker, and it describes a very similar thing in C++. And Carl basically said to himself, you know, I can't keep continuing the scheduler rewrite without this.

Eliza Weisman:

I have to stop what I'm doing and go and implement it. And I'm sure Carl can recount this story much better than I can, but he sort of stopped everything he was doing and went and materialized the thing. Since then it has been improved substantially, in particular with regards to actually being able to tell you what went wrong in your program instead of just sort of, well, you did a data race, good luck. And also its performance has been optimized substantially, because we might not generally think of performance as very, very important for a testing tool, but it's a testing tool that will execute a test potentially hundreds of thousands of times. Right.

Eliza Weisman:

Sometimes you're really sitting there for like an hour waiting for the thing to run one test. So a great deal of perf work was sort of done, more recently to try and make it not just mind numbingly slow. But, yeah, that's really that's its heritage.

Bryan Cantrill:

And I mean, that's great. I assume the performance is terrific. I didn't

Eliza Weisman:

No, it's not. It's pretty slow.

Bryan Cantrill:

Well, but it is also just extremely satisfying when you've got the computer just working so hard too. I mean, I love it when the computers are working and, you know, we get to come back in an hour and see what the computer has found in terms of these subtle issues. It's very satisfying. That's great, Eliza. That's great.

Eliza Weisman:

At a meta level, one last note on just how long it takes for this thing to run: I would add that the length of time that the Loom model of a concurrent data structure takes to run is sort of a good warning metric too. Like, if it takes an hour to test this thing, maybe this thing is actually too complicated and you could make it less

Bryan Cantrill:

Yeah. Interesting. It's driving towards something that is simpler. And then, someone in the chat asks about postcard. We actually use postcard in Humility.

Bryan Cantrill:

Humility and Hubris use postcard as a Is

Adam Leventhal:

it a serde serialization format?

Bryan Cantrill:

Yep. Yeah. It's a serde format, but one that is pretty tight and pretty straightforward. So, yeah, I'm a postcard fan for sure.

Eliza Weisman:

Postcard, if memory serves, is very similar to Hubris's sort of indigenous serialization format, but with a couple of key differences. I think that Cliff skipped the varint encoding. But everything in postcard every integer in postcard, I believe, is a varint.
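The varint idea mentioned here, small integers encoded in one byte and larger ones growing seven payload bits at a time, can be sketched as unsigned LEB128-style encode and decode. (A minimal illustration of the general technique; postcard's actual implementation differs in details, such as zigzag-encoding signed integers first.)

```rust
// Unsigned varint: 7 payload bits per byte, high bit set on
// every byte except the last.
fn encode_varint(mut v: u64, out: &mut Vec<u8>) {
    loop {
        let byte = (v & 0x7F) as u8;
        v >>= 7;
        if v == 0 {
            out.push(byte); // high bit clear: final byte
            return;
        }
        out.push(byte | 0x80); // high bit set: more bytes follow
    }
}

// Returns the decoded value and how many bytes it consumed.
fn decode_varint(bytes: &[u8]) -> (u64, usize) {
    let mut v = 0u64;
    for (i, &b) in bytes.iter().enumerate() {
        v |= ((b & 0x7F) as u64) << (7 * i);
        if b & 0x80 == 0 {
            return (v, i + 1);
        }
    }
    panic!("truncated varint");
}

fn main() {
    let mut buf = Vec::new();
    encode_varint(300, &mut buf);
    // 300 = 0b1_0010_1100 -> low 7 bits 0x2C with the
    // continuation bit, then 0x02: [0xAC, 0x02]
    assert_eq!(buf, vec![0xAC, 0x02]);
    assert_eq!(decode_varint(&buf), (300, 2));
    println!("{:?}", buf);
}
```

This is what makes such formats "pretty tight": a u64 field whose value is usually small costs one byte on the wire instead of eight.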

Steve Klabnik:

I don't know, but there's a difference. But Cliff definitely looked at postcard when we were doing the Hubris serialization stuff, and definitely took a lot of inspiration from it, but, like, ultimately decided to design his own thing.

Bryan Cantrill:

I

Eliza Weisman:

had on my list. Okay.

Bryan Cantrill:

Yeah. Go ahead.

Eliza Weisman:

I had on my list another one of Postcard is James Munns's thing, and I had one of his other projects on my list of crates, which is bbqueue, which is, like, queue as in the data structure. And bbqueue is a multi consumer, multi producer byte queue that allocates exclusively in contiguous regions in memory. And the idea is that this is a queue that you can grab sort of a chunk of bytes of a given size off the front of, and then you can do a DMA directly into that lease and release it to the queue, and then you can, like, wake up the other end. And he's got, like, a bunch of different and the interface for it is kind of hairy, but it allows you to say, I want this static region that I've declared as the backing storage for the queue, or I want to be able to dynamically allocate a byte buffer that is the backing storage for the queue, so that you can use it in, like, embedded projects where you don't have any capacity to do dynamic allocation. You can make them on the stack, you can make them on the heap, and they're really nice, and they're DMA safe, so you can just, like, have your NIC or whatever write directly into the region in the queue that will then be consumed by somebody else.

Eliza Weisman:

It's quite nice. It's also based on a paper, I believe, called BipBuffers, and I think that that's kind of an underappreciated crate that I have really enjoyed using.

Bryan Cantrill:

Yeah. And as someone in the chat points out, it has a 90 minute guided tour, that is Yeah. Go watch the guided tour of bbqueue. But, yeah, that looks good. I

Rain Paharia:

have a great Yeah. So, is petgraph too mainstream to talk about here? Or,

Bryan Cantrill:

no? No. Don't think so? I I I'm not sure. Yeah.

Bryan Cantrill:

What is it? Sorry. Is it too mainstream? Like, no. I don't think so.

Adam Leventhal:

You ever heard of YouTube, Bryan? Like, it's,

Bryan Cantrill:

Dawn? I don't know. Okay.

Eliza Weisman:

So there's the

Adam Leventhal:

a hub of it.

Rain Paharia:

So, petgraph is a crate I've had the good fortune to use a few times in my career. And it is a crate that lets you represent graphs. Right? So it is a crate that essentially has a bunch of graph data structures, and, you know, you can represent your things in there. And I was thinking about why I like petgraph so much.

Rain Paharia:

And, like, you know, there's some other places where I will, like, handwrite my own representations rather than using some framework someone has provided. And, like, you know, in this case, petgraph is, like, it is a whole framework, right? So you kind of model your data, you put it into their data structures, right? And for me, I think the distinguishing thing is that petgraph gives you a lot of value from that. So there is, like, a wealth of graph algorithms that are included in petgraph.

Rain Paharia:

So, you know, there's, like, 2 different SCC algorithms. There's a bunch of different, like, you know, max flow min cut stuff. Like, there's a lot of really careful handling. And so, you know, at this point, it's like, okay. You know, if I have a graph, one way I could do a graph is, like, you know, the simplest way you can imagine: like, a node with, like, an Arc of child nodes, right, or something like that.

Rain Paharia:

And I think, you know, with that kind of approach, you end up having to write your own algorithms on top of that. But petgraph just kind of you know, you have to do a little bit of work to fit into it, but it just gives you all of these algorithms. And, like, there have been times where I have thought that all I want is, like, a DFS, and, you know, you could probably write a DFS by yourself. But then I realized, oh, you know, in some cases, the graphs can have cycles. So I need an algorithm to kind of convert the graph into what is called the condensation graph, which is the same graph but without cycles.
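The cycle hazard described here is easy to see in a std-only sketch. This isn't petgraph's API (its `condensation` and visitor machinery look different); it just shows why a naive child-chasing DFS needs a visited set the moment cycles are possible:

```rust
use std::collections::HashSet;

// DFS over an adjacency list. Without the `seen` set, the cycle
// 0 -> 1 -> 2 -> 0 below would recurse forever.
fn dfs(adj: &[Vec<usize>], node: usize, seen: &mut HashSet<usize>, order: &mut Vec<usize>) {
    if !seen.insert(node) {
        return; // already visited: this is what saves us from the cycle
    }
    order.push(node);
    for &child in &adj[node] {
        dfs(adj, child, seen, order);
    }
}

fn main() {
    // 0 -> 1 -> 2 -> 0 is a cycle; 2 -> 3 leaves it.
    let adj = vec![vec![1], vec![2], vec![0, 3], vec![]];
    let mut seen = HashSet::new();
    let mut order = Vec::new();
    dfs(&adj, 0, &mut seen, &mut order);
    assert_eq!(order, vec![0, 1, 2, 3]);
    println!("{:?}", order);
}
```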

Rain Paharia:

And then you know? So it kind of gives you all of these things. And, I don't know. There's something very satisfying about petgraph in a way that, you know, I really like it.

Bryan Cantrill:

Petgraph and shit. This seems cool.

Adam Leventhal:

Yeah. This is great. You know, I've known about this for a while, unlike some folks.

Bryan Cantrill:

It's not.

Adam Leventhal:

But I've sort of resisted using it just because it felt heavyweight, if you know what I mean. I think it's exactly as you're saying, Rain, about this kind of dichotomy between kind of big framework versus kinda lean and mean. Yeah. But this is a great endorsement and a good reminder to go at least kick the tires next time I come across a problem that feels like it might be up petgraph's alley. Yeah.

Bryan Cantrill:

In fact, you've heard of it. I mean, this explains Rain's concern that it was too mainstream for this. So so wait. Alright. So where did you, mister I've-already-heard-of-this-thing, hear about petgraph?

Bryan Cantrill:

Where where did this one

Adam Leventhal:

Yeah. So where where did I hear about it? I can't I don't know. I don't know. I mean, I guess, just other podcasts.

Adam Leventhal:

I don't know. But I I, you know, I think I was looking for it because I was

Bryan Cantrill:

You listen to other podcasts? I didn't realize that. Do we have an open relationship like that? I didn't realize that.

Adam Leventhal:

You're the one who appears on all these other podcasts. Not

Bryan Cantrill:

Okay. Okay. Oh, no. Here we are. Now we're here.

Adam Leventhal:

So, in the Typify crate that I wrote to do JSON schema to Rust, like, code generation, there's a bunch of, like, compiler-y, I mean, graph-like problems to it. In particular, you gotta find cycles. And if you find a cycle, you wanna break it with a Box as you generate this kind of containment cycle. As you look at derives, you kinda maybe wanna look at strongly connected components. So I started looking for Rust crates that implemented these SCC algorithms, and that's where I came across petgraph.

Bryan Cantrill:

PetGraph. Interesting.

Steve Klabnik:

Petgraph is, like Yeah. I think pretty well known on forums, because when people say, like, oh, Rust can only handle tree shaped data structures, a very common thing is, like, well, did you try petgraph? Because it's, like, old. Old is maybe wrong. It's been around for a long time and therefore is well known, largely because it was kind of the first, like, you-want-a-graph data structure.

Steve Klabnik:

Like, okay. Here's, like, a good easy one to use thing.

Bryan Cantrill:

Yeah. Interesting. And the and it's actually doing this by actually properly managing adjacency lists as opposed to actually, like, having references to nodes. Right? I mean, I presume

Rain Paharia:

Yeah, it would. So petgraph is really interesting because it actually presents, like, 4 or 5 different representations of a graph. So there's the adjacency list graph, which is the default graph, right? Like, you know, if you want a graph, then you probably want to reach for an adjacency list graph, right?

Rain Paharia:

There's also an adjacency matrix graph, which, you know, in some cases, you wanna use the matrix representation of things, and you can do fancy things with eigenvectors and so on. There's also, like, this other one that lets you kind of so the first two representations only let you use integer keys, if I remember correctly. But then there's also one that lets you use your own keys, anything that implements Copy, and I think Hash and Eq or whatever. But then it also provides, essentially, like, an abstract interface, so it provides a bunch of traits. And so you can bring your own graph, and you can implement, you know, those traits for your graph.

Rain Paharia:

And if you do that, then you get access to, like, the full set of algorithms, to the extent that your graph supports that.

Adam Leventhal:

That's awesome. That's really great.

Bryan Cantrill:

Well, then I also love the fact that you can easily output it as Graphviz. So you can actually go Right.

Adam Leventhal:

That is your love language.

Bryan Cantrill:

You know, DOT is my love language. First of all, not Graphviz. You know, I

Adam Leventhal:

My bad. Sorry.

Bryan Cantrill:

Please. Yeah. Exactly. But, yeah, this is this is neat. This is neat.

Bryan Cantrill:

And of course, laying eyes on the references to Dijkstra flashes me back to the Dijkstra tweet, actually, Adam, your masterwork there.

Adam Leventhal:

Oh, yeah. That was

Bryan Cantrill:

from the way back. Yeah. Put that in there. In the way back. Exactly.

Adam Leventhal:

From the from

Bryan Cantrill:

the true Twitter Spaces era. Okay. So, Rain, not apparently too mainstream for Adam and for Steve, perhaps, but not for me. You could just take me as a complete neophyte with respect to some of these crates, but, it looks great.

Adam Leventhal:

I'll name another mainstream crate that everyone knows about, and it is a dtolnay crate, but I am going to give a particular shout out to it, which is syn, s y n, the syntax parsing crate. Now the shout out I'm gonna give it is that, like, syn has thought of more things than you think of. Like, so for example, I've had a path. I'm like, oh, okay. I want if the path is exactly of length 1, and if it matches the string, then do a thing.

Adam Leventhal:

There's a built in for that. If you ever find yourself dealing with, like, a function or a structure that has a bunch of generic parameters, there's a function that splits it up in exactly the way that you want for doing a derive macro. So this is only to say, like, spending a quiet time in the tub or whatever, like, reading the docs for syn is time well spent, and there's, like, lots of stuff built in there that anticipates the things that you think you might need to build by yourself.

Bryan Cantrill:

Yeah. And another dtolnay crate that kind of syn reminded me of is paste, Adam. Mhmm. And I was looking for the equivalent of the what do we say for the octothorpe character, what do you say?

Adam Leventhal:

Do we say hash, or pound? I think I say pound.

Bryan Cantrill:

Yeah. I think I did say pound, and I'm worried I now say hash. In any case, in CPP, the C preprocessor, there is the pound pound operator, which does not Google well. Very hard. Like, I didn't even know what that thing was called.

Bryan Cantrill:

All I know is that I had used it in CPP and I wanted an equivalent in Rust, and I could not I mean, I didn't even know what to search for. I just felt so helpless. And I don't know, I think you bailed me out of that one at some point. I was describing my agony: I couldn't even search for the thing that I was trying to replace, in terms of what I now know is called the token concatenation operator, pound pound. I couldn't even Google the thing I wanted to replace, let alone a way to replace it in Rust.

Bryan Cantrill:

And I think you had put me onto paste. But I noticed that paste is now read only. I'm not sure if that's because it's done or if I should be using something else. I do love the fact that oh my god. dtolnay has added pound pound as a GitHub topic on the paste crate.

Bryan Cantrill:

Has it been done for my benefit? Is that That's what

Rain Paharia:

you're saying.

Adam Leventhal:

That was quick. That's incredible.

Bryan Cantrill:

Yeah. Exactly. But another great dtolnay crate. The other crates that I wanted to get in there, Adam, are goblin for ELF, and gimli for DWARF. ELF is much simpler than DWARF.

Bryan Cantrill:

And I'm actually looking at like, libelf is actually a pretty good library in C, but goblin

Adam Leventhal:

You know what's not a pretty good library? Libdwarf in C.

Bryan Cantrill:

Libdwarf is not a pretty good library. That's exactly right. That's exactly right. Libelf is a good library and libdwarf is really not a good library at all. That's exactly right.

Bryan Cantrill:

But goblin makes it super easy to rip apart ELF binaries. And gimli makes it as easy to go through DWARF as DWARF allows. Gimli has done a good job of like gimli's basically like, look, DWARF's problems are not my problems. Gimli does as good a job as it can do. I really like gimli quite a bit.

Bryan Cantrill:

And those are, like, relatively easier to find, because if you're looking for a DWARF crate or an ELF crate, you kinda know what you're searching for. You'll find them. But they're both very good. I have a shout out for

Adam Leventhal:

a crate that's sitting in a sea of undifferentiated crates, more so. That is to say, if you're searching for, I want the DWARF parser, like, you're gonna find it. I really like httpmock. There are a bunch of HTTP mocking crates out there. And in fact, I think in our open repo, we use all of them by accident.

Adam Leventhal:

But httpmock is the one that I really enjoy the most, and, in particular, it gives you a little closure with a structure called when and then another structure called then. And then you do kind of manipulation on when to define the kind of predicates of, like, when you want the response returned. And then the then is the actions taken as a result of the HTTP query. I really like it. I really like the way it you know, I think that there are some crates that kind of, like, vomit their guts out. And this is one where it really presents a nice user experience, a nice user interface, and there's a bunch of complexity underpinning that that allows for that nice interface.

Adam Leventhal:

I really enjoy that one. It's my favorite HTTP mocking crate, if that doesn't make me the world's biggest dork.

Bryan Cantrill:

And that is httpmock.

Adam Leventhal:

Right? HTTP mock. Exactly. Yeah.

Bryan Cantrill:

Okay. Yeah. Yeah. Yeah. Interesting.

Bryan Cantrill:

So what have you used this for?

Adam Leventhal:

So, I wrote the Progenitor CLI generator, and I wanted to have end to end validation of, like, running CLI commands the CLI is built in clap as well. So, I wanted to do that, but not against a real, you know, oxide server. So we actually auto generate additional traits for httpmock so then you can make type checked mocks against our API. So, like, the CLI is banging against this mock server to validate all the different, you know, CLI subcommands that we emit.

Bryan Cantrill:

Yeah. Wow. That's really cool.

Adam Leventhal:

Yeah. It's a

Bryan Cantrill:

it's just it's just a nice interface.

Adam Leventhal:

I just really appreciate, like, the way that it operates. There's some limitations, like, you know, I think there are other mocking crates where you have maybe more flexibility or, like, you just get a generic, you know, function where you can respond with whatever you want. I think the constraints associated with this allow you to build something that's a little more type safe.

Bryan Cantrill:

Yeah. That's neat. I'm just looking in the chat. There is RHDL, a Rust based HDL for FPGA development. We've got that, oh, that's very spicy.

Bryan Cantrill:

I have to go look at that one.

Adam Leventhal:

Yeah. That sounds really cool.

Bryan Cantrill:

That one is totally new, at least to me. But I think we've already established that lots of things that are apparently very mainstream are new to me. Right? Eliza, would you give some other shout outs to the crates that

Rain Paharia:

Yeah. Go ahead. So I got one. So this is a crate that, so, I maintain this crate, but I didn't write it. I just happen to have been the one that manages its crates.io releases.

Rain Paharia:

So this is a crate called Camino. So this was originally written by Boats, who was a Rust project alumna; they were the one who drove, like, async await, for example, in Rust. And so they've done a ton of work. And so one of the things that they did was, you know, if you've done anything around paths in Rust, like file paths and file names and stuff, then it's always been bothersome, because, in the very, very typical way, you know, Rust will be, like, anally correct about everything as far as possible.

Rain Paharia:

So that mindset kind of gets reflected in the way the path libraries are designed. So they will handle weird things like unpaired surrogates on Windows, or, like, non-UTF-8 paths on Unixes. And so that ends up being, like, if you want to write a tool that is as correct as possible and handles as many files as possible, then you probably need to take care of all that. But in reality, most of the time you don't, right? Most of the time, like, imagine you're at Oxide, right, and you're writing, like, a simple server or whatever. Like, the files you're gonna get and the files you're gonna use are gonna be, like, well structured, right, in some way?

Rain Paharia:

Right. So Boats wrote a library called Camino, which essentially replaces OsString as the base with String as the base for things. So these are paths that behave like strings. So they don't handle every possible path, but they handle, like, basically every realistic path that most programs are ever gonna see. So this is a crate that I use for pretty much everything that I end up writing, and I think most people should use it.
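The shape of the idea, validate UTF-8 once at the boundary and then hand out `&str` infallibly ever after, can be sketched without the crate. This toy is not Camino's API (its real `Utf8PathBuf` has a much richer, path-aware interface); it just shows the one-time check:

```rust
use std::convert::TryFrom;
use std::path::PathBuf;

// A tiny sketch of the idea behind camino's Utf8PathBuf: check UTF-8 once
// at construction, then as_str() needs no Option and no lossy conversion.
struct Utf8PathBufSketch(String);

impl TryFrom<PathBuf> for Utf8PathBufSketch {
    type Error = PathBuf;
    fn try_from(p: PathBuf) -> Result<Self, PathBuf> {
        match p.into_os_string().into_string() {
            Ok(s) => Ok(Utf8PathBufSketch(s)),
            Err(os) => Err(PathBuf::from(os)), // non-UTF-8: hand the path back
        }
    }
}

impl Utf8PathBufSketch {
    // Infallible: the validity check already happened at the boundary.
    fn as_str(&self) -> &str {
        &self.0
    }
}

fn main() {
    let p = PathBuf::from("/var/tmp/demo.json");
    let utf8 = Utf8PathBufSketch::try_from(p).expect("path was valid UTF-8");
    println!("{}", utf8.as_str());
}
```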

Rain Paharia:

Now, there sounds like a trade off in some cases, right? Like, you're losing some functionality or whatever. But one of the things I've realized from my time working on this stuff is that actually that trade off was always false. And so as an example, like, you know, if you have a PathBuf, and that PathBuf holds a path that isn't a valid string, then that path does not get serialized as JSON properly, right, as an example? Or if you get a string, it won't, you know, won't get deserialized properly. So if you are ever deserializing paths, you are already putting in a restriction that those paths must be valid strings.

Rain Paharia:

Right? So you are not adding anything new here, and I think, you know, Camino kind of is a real improvement for anyone who does that. So I know that at Oxide, we use it a bunch. I've used it a bunch. But, yeah, I think if you want to handle paths and you don't already know that you need to handle, like, every possible path, then you should consider using Camino.

Bryan Cantrill:

Well, Camino does a good job in the crate's readme of explaining why it exists and when you should use it and when you shouldn't use it. Just to your earlier point, I think they do an excellent job. Yeah. About the problem that it's solving.

Adam Leventhal:

I definitely need to be using this in like 3 different places. Thank you so much, Rain.

Eliza Weisman:

Yeah. Camino rocks. I do have one last one. I have a hard stop at 6:30. So, I just really wanted to get this one in.

Eliza Weisman:

It's a crate that I really like because of its sort of implementation and its sort of cleverness and beauty. And it is also sort of an example of a thing where there's no right design for this category of data structure, and instead you, like, really have to pick the correct one for your use case, which this may or may not be, which is concurrent hash maps. And my personal favorite concurrent hash map is Jon Gjengset's evmap, which is an eventually consistent hash map. And the way it works is it's just sort of got 2 hash maps, and one of them you read from, and that allows you to read from it without acquiring, like, any kind of lock. Right?

Eliza Weisman:

And then there's one that you write to, and periodically you swap them. And this is quite nice because, you know, there's actually, like, nothing scary going on: you have 2 maps and a read write lock. And if you choose to have them be only eventually consistent, you don't refresh the read replica on every write. And even if you do, you still have something nicer than just naively sticking one hash map inside of a read write lock, because sometimes doing a write operation the map will do a bunch more: you might have to allocate something inside the map, you fill up a bucket and have to move things around, and all of that happens in a write lock that's only contended by writers.

Eliza Weisman:

Right? And then the lock that also contends with reads just swaps 2 pointers. Right? So, the amount of time that you contend with that lock is substantially reduced relative to just putting, like, one hash map inside of a read write lock. But you're still contending with the reader because you have said, like, I want to do this refresh operation on every write. But you also can tune the consistency of the map and say, I actually don't wanna do that.

Eliza Weisman:

I want to do it periodically. And now you have a situation where you've reduced the contention with readers substantially: only every 5 or 10 or 25 writes, you refresh the replica that's read from. And this is just kinda neat because I find it very beautiful in its sort of conceptual elegance. And depending on the particular need you have for a concurrent hash map, it could be the right one or it could be wildly incorrect for your use case. I just think it's fun.
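The two-map visibility semantics described here can be caricatured with std types. To be clear, this is only a sketch of the idea: the real evmap lets readers proceed without taking any lock at all and publishes by swapping pointers, whereas this toy clones on refresh, which is exactly the cost evmap avoids.

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

// Toy "eventually consistent" map: writes land in a private write-side map
// and only become visible to readers when refresh() publishes them.
struct EvMapSketch<K, V> {
    read: Arc<RwLock<HashMap<K, V>>>,
    write: HashMap<K, V>,
}

impl<K: Clone + Eq + std::hash::Hash, V: Clone> EvMapSketch<K, V> {
    fn new() -> Self {
        EvMapSketch { read: Arc::new(RwLock::new(HashMap::new())), write: HashMap::new() }
    }

    // Writes go only to the write-side map; readers don't see them yet.
    fn insert(&mut self, k: K, v: V) {
        self.write.insert(k, v);
    }

    // Publish pending writes. The real crate swaps two pointers here;
    // cloning keeps the sketch short at the cost of what evmap optimizes.
    fn refresh(&self) {
        *self.read.write().unwrap() = self.write.clone();
    }

    fn get(&self, k: &K) -> Option<V> {
        self.read.read().unwrap().get(k).cloned()
    }
}

fn main() {
    let mut m = EvMapSketch::new();
    m.insert("a", 1);
    assert_eq!(m.get(&"a"), None); // not visible until refresh
    m.refresh();
    assert_eq!(m.get(&"a"), Some(1));
    println!("ok");
}
```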

Bryan Cantrill:

Yeah. And in particular, this is gonna be especially a good fit if you've got many, many, many readers.

Eliza Weisman:

Right.

Bryan Cantrill:

I agree. And performance is important. And it's a structure you wanna update, but you're willing to have some control over when those updates are seen by the readers. Right. You don't need them to be always current. Because the thing I also like, I mean, correct me if I'm wrong, but just from reading the description, it sounds like eventually consistent is not just, like, well, it may be a day or 2.

Bryan Cantrill:

You know, you've got some control over when that actually happens.

Eliza Weisman:

Yeah. The thing that I neglected to mention is there's a way to explicitly say, right now I want to synchronize the 2 replicas, as well as you can set, like, an interval or a number of writes after which you will refresh. I haven't used this in quite some time. I don't remember the API for it, but the idea of it has stuck with me as long as I've known about it.

Bryan Cantrill:

Yeah. No. I like it. I like it. Although nowhere near as sophisticated as this, I also do love indexmap and multimap, 2 very, very simple crates that are very useful; indexmap being where you can actually iterate over things in the order in which they were put in the hash map, which I think is very helpful.

Bryan Cantrill:

And then multimap allows you to have multiple values for a particular key, which is also very helpful. Again, very simple crates, but very, very useful. Yeah. This looks neat. Well, Eliza, especially since you've got to run at 6:30, did you get all your crates out there?
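What the multimap crate provides can be approximated in std with a map of `Vec`s; the crate essentially wraps this pattern in a nicer API (indexmap, by contrast, additionally remembers insertion order, which a plain `HashMap` does not). A minimal sketch:

```rust
use std::collections::HashMap;

// Multiple values per key via HashMap<K, Vec<V>>: entry().or_default()
// creates the empty Vec on first insert and appends thereafter.
fn main() {
    let mut mm: HashMap<&str, Vec<u32>> = HashMap::new();
    mm.entry("ports").or_default().push(80);
    mm.entry("ports").or_default().push(443);
    assert_eq!(mm["ports"], vec![80, 443]);
    println!("{:?}", mm["ports"]);
}
```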

Bryan Cantrill:

Did you have any last crates you need to to get in there?

Eliza Weisman:

That's most of my list. The rest was oh, I wanted to mention the bytes crate, which is a terrible name for a wonderful library that many of you have already encountered, perhaps unknowingly, because if you use hyper, you actually are secretly using this. Bytes is something from the Tokio project. Oh, Sean's here in the chat. Sean can talk lots about bytes.

Eliza Weisman:

Bytes is essentially a reference counted byte buffer. So it's like an Arc-backed Vec<u8>, except that you can take slices of it, and the slices are also owned objects that participate in the reference count of the whole buffer. So this is very nice if you want to read data from the network and then, you know, parse it into something and you want to take slices out of it. Like, the HTTP request's path and its headers can all be subslices of the 1 buffer that you read all of the bytes into. And I think bytes is just sort of a really lovely library, really nicely done, and it's also the foundational building block under a crate that Rain and I collaborated on in the past, which is buf-list. That was just some code that I think, Rain asked me how to do something, and I referenced some code that had been written probably by Sean McArthur within an application, that Rain just went and turned into a library we use at Oxide. And I'm gonna let Rain talk about it. Sure.
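The core idea, owned slices that share one reference-counted buffer, can be caricatured in std Rust. This is not the real Bytes type (which adds cheap clones, `split_to`, the Buf/BufMut traits, and much more); it just shows how a subslice can keep the parent allocation alive:

```rust
use std::ops::Range;
use std::sync::Arc;

// A toy Bytes: one Arc'd buffer, and slices that are owned values
// narrowing a range into it rather than borrowing from it.
#[derive(Clone)]
struct SharedBytes {
    buf: Arc<[u8]>,
    range: Range<usize>,
}

impl SharedBytes {
    fn new(data: Vec<u8>) -> Self {
        let len = data.len();
        SharedBytes { buf: data.into(), range: 0..len }
    }

    // An owned subslice: no copy, just a bumped refcount and a narrower range.
    fn slice(&self, r: Range<usize>) -> Self {
        let start = self.range.start + r.start;
        let end = self.range.start + r.end;
        assert!(end <= self.range.end);
        SharedBytes { buf: Arc::clone(&self.buf), range: start..end }
    }

    fn as_slice(&self) -> &[u8] {
        &self.buf[self.range.clone()]
    }
}

fn main() {
    // One network read; the path becomes an owned view into it.
    let request = SharedBytes::new(b"GET /index.html HTTP/1.1".to_vec());
    let path = request.slice(4..15);
    drop(request); // buffer stays alive: `path` still holds a reference
    assert_eq!(path.as_slice(), &b"/index.html"[..]);
    println!("ok");
}
```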

Rain Paharia:

Yeah. So buf-list was just something that actually, it was your code. I'm pretty sure I git blamed you, Eliza. But, so I love bytes, because it kind of presents this unified interface. So bytes comes with a type called Bytes, which it uses dynamic dispatch under the hood, but it is a type that represents a contiguous sequence of bytes.

Rain Paharia:

Bytes also comes with a trait called Buf, and that Buf trait does not require the sequence of bytes to be contiguous. So you can imagine a different implementation, which actually is a segmented list or a segmented queue of byte sequences, which ends up being the right data structure for this. So buf-list is actually that segmented queue. And I think I talked about it in the episode where we talked about proptest and verification, but that was where I ended up writing, like, a cursor type over it, one that can essentially navigate this queue, and, you know, used proptest for that and ended up finding 6 different bugs, because, like Eliza, I find it very, very hard to reason about these things by myself.

Bryan Cantrill:

Wow. That's cool. And so that's the buf-list crate. Right?

Rain Paharia:

Yes. Yeah. That is the crate that Eliza wrote. I ended up writing the, very incorrect at first, but now fully correct, cursor implementation. That is my contribution to it.

Bryan Cantrill:

That's very cool. Rain, are there any other crates that you've got on your list here?

Rain Paharia:

Yeah. The last one I actually wanted to mention, because I think it deserves a real shout out, is winnow. So, maybe this came up in the chat earlier, but, you know, I got a degree in computer science, and, like, one of the requirements is a compilers class, and, like, I hated writing compilers, and I hated writing parsers. That was my least favorite class out of the whole thing. And, since then, I've had to implement parsers a few times, and each and every time, it's been miserable.

Rain Paharia:

So I ended up using nom for something. And nom, I think, is a great library, there's a whole bunch of trade offs across all the different libraries, and I thought nom was okay. Winnow feels like the first time where writing a parser was, like, a joyful experience, which is not something I ever thought I would say about a parser library. So, I did wanna make a special shout out to winnow.

Rain Paharia:

Ed Page has done a lot of work on this stuff. And, you know, I think if you wanna write something parser shaped, then you should probably either use winnow, or if you wanna do your own thing, you should, like, look very heavily at winnow and see what it does and, you know, kind of use that as inspiration.

Bryan Cantrill:

Well, just looking at it I mean, there's a really complete tutorial on it. I mean, this is a very kind of full, complete crate here.

Rain Paharia:

Yeah. It's one of those things, right, where it says 0.6 or whatever, but it is, like, too high quality to just kind of treat it that way. I think it is a very, very mature crate. I've used it. I know a bunch of other Oxide folks have used it.

Rain Paharia:

Pretty sure I I think I pointed Rye to it, and he was really excited. And he ended up using it, and he was pretty happy with it. So, yeah, winnow is my my shout out.

Bryan Cantrill:

Yeah. That's a great one, and maybe a good one to end on there. I also love the chapter on debugging for winnow, it's very cool. Yeah.

Bryan Cantrill:

I mean, obviously, I'm a sucker for anyone talking about the debugging of their crate or their parser. Well, Rain, thank you very much. Eliza, thank you in absentia, and thanks for holding the chat. Adam, this this is great. We ended up with a lot of crates here.

Adam Leventhal:

This is great. I I kind of can't believe we haven't done this before, and, we're almost certainly gonna do it again. I feel like, a good pairing with, like, our books in the box, annual tradition. But this is a good one to come back to.

Bryan Cantrill:

I think you're right. This is one we gotta come back to. And next time, I will have heard of petgraph. So I get to be with the cool kids, which is very nice. And we'll do some out loud readings from Inside UFO 54-40 perhaps.

Bryan Cantrill:

Well, Rain, thanks again. Steve, thank you as well, of course. And yeah. So, Adam, in 2 weeks, it's gonna be, rather 2 weeks.

Adam Leventhal:

Yep.

Bryan Cantrill:

And and say I'm not sure if we're gonna do an episode next week or not. We need to it's it's a holiday here in the US, so we'll figure that out. But stay tuned. Alright. Thanks, everyone.
