AI Discourse with Steve Klabnik
How is my audio?
Steve Klabnik:Working.
Bryan Cantrill:That doesn't
Adam Leventhal:Like a like a domestic dream. I was so excited. I've been
Bryan Cantrill:I just haven't been gone for so long, and now I'm, like, I'm back where like, in the land of of carpet as opposed to the land of reverb. And I was just really excited to show off, like, my great audio. And I don't know, Steve. It's just like
Adam Leventhal:It worked.
Bryan Cantrill:It's not what I was going for.
Adam Leventhal:It started weak, but it's it's getting better with every word.
Bryan Cantrill:Stop. It's like your audio is unmuted. That's alright. Audible. What what else do you I'll write it down the name.
Bryan Cantrill:What else do you want? This is, like, lowest possible praise. It's I do I should sound a lot better than two weeks ago. One hope.
Adam Leventhal:Sounds great. You do. It it was really just that first word that had both me and Steve pausing, I think.
Bryan Cantrill:Alright. No. I don't know if I'm being gaslit or not about No.
Adam Leventhal:This time now. This
Bryan Cantrill:time now.
Steve Klabnik:This time now.
Bryan Cantrill:Next time, yes. How are
Steve Klabnik:you all?
Bryan Cantrill:That's it. It's good to be good to be back. Good to
Adam Leventhal:be back. Good Welcome back.
Bryan Cantrill:Yeah. Back back to the litter box.
Steve Klabnik:Like, been around for the podcast, so it's also nice to be be here. Sometimes on Mondays, I've gotten a little busy at this time slot. So
Bryan Cantrill:yeah. Well well, welcome back to our a regular. Alright. So we well, actually, now do I wanna kick off with the topic already? I feel like this I I don't wanna rush into our topic.
Steve Klabnik:It's like we We we promised a good cold open that was, like, you know, like, the worst cold open that's ever cold opened and then getting right into it at one minute after the hour is, like, that's
Bryan Cantrill:Well, I I feel that very on
Steve Klabnik:hallucination, Brian, Nick.
Bryan Cantrill:Very on brand. Adam and I were were at an Oakland Bowlers game over the weekend. That's right. And I feel Adam, correct me if I'm wrong because we were sitting in a different spots. I I feel that that your progeny was very close to being the fan of the game.
Bryan Cantrill:Am I right about that? I feel like they were they were for a fan of the game in your section, and I'm like, this this this can't end well, I feel.
Adam Leventhal:Yeah. Yeah. They were they were right there next to us. I I brought my little league team in, and one of the kids suddenly was on the field with his little brother, not not my kid, but another one. And then they got to run the bases afterwards.
Adam Leventhal:So for fans of the podcast and the baseball episode, go to the Oakland Ballers with their kids, I guess.
Bryan Cantrill:Yeah. You can actually run all of the bases. Unlike you see that, John Fisher and the a's were stopping at third base in Sacramento.
Steve Klabnik:It just
Bryan Cantrill:feels like they literally can't even run the bases properly. Know? It's amazing.
Steve Klabnik:If the team is doing bad, do you say that they're bombing, or is that like a bad joke?
Bryan Cantrill:Well, they are the ballers, not the bombers.
Steve Klabnik:Oh, ballers, not the bombers. I just couldn't maybe your audio is not that good. I take it back. No.
Adam Leventhal:Was joking.
Bryan Cantrill:You're you're you're blinging my audio for that? I just feel like we have an entire podcast episode about the ballers.
Steve Klabnik:I mean, my
Adam Leventhal:I Steve says it's his favorite.
Steve Klabnik:I'm doing meta art about how the fact that humans can misunderstand a text doesn't mean that they're useless just like Oh, I'm
Bryan Cantrill:just very meta.
Steve Klabnik:Bring bring it right back into the topic, actually.
Bryan Cantrill:You gonna be hallucinating a lot of this episode? Is that can I just know in advance? Am I gonna have to, like, ask you for
Steve Klabnik:Yeah. Yeah.
Bryan Cantrill:Could you give me 10 more names of the baseball team that you think you could be? Yeah.
Steve Klabnik:Three plus five is 12.
Bryan Cantrill:There we go. There you go. On a you
Steve Klabnik:know, split Stick to r's in strawberry.
Bryan Cantrill:Exactly. Okay. We're good we're good to go. Alright. Well, we that is as good a segue as we're gonna get.
Bryan Cantrill:So, Steve, we are talking about this blog entry that you wrote last week. Yeah. That was a bit of a hot tamale. I I I somewhat I was a little bit surprised. You know, I went back and read or read listened to our pragmatic LM usage with Nicholas Carlini, which okay.
Bryan Cantrill:Adam, when do you think that was without looking? Do you can you do you know when that was?
Adam Leventhal:I'd say almost a year ago to the day.
Bryan Cantrill:It is actually very it is good. So it's like ten months ago. So, yeah, very good. You got a good intuition for. I felt it was, like, much more recent than that, but it's not.
Bryan Cantrill:It was it was a while ago. But it was interesting to go, like, relisten. So I feel like we have been pretty modern on this issue for a while. So, Steve, I thought your blog entry was very was very neutral. No.
Bryan Cantrill:I mean, neutral is the wrong word. The dreaded neutral zone. Measured. Measured.
Steve Klabnik:Yeah. Yes.
Bryan Cantrill:Even of of even temperament. Yeah. It's great
Steve Klabnik:because I wrote it mostly because I'm being driven insane lately, and I felt like I needed to just, like, vomit some words out about it. So that's extra good because, I mean, obviously, I don't think I think there's some meta issues around its reception and everything else, but I appreciate that y'all both thought it was reasonable.
Bryan Cantrill:Okay. So what so the exactly. Audio Audio audible. Audio audible. Blog entry.
Bryan Cantrill:Lots of words.
Adam Leventhal:Yeah.
Bryan Cantrill:Lots of words. Okay. So the title is I am disappointed in the AI discourse. So what do you wanna describe a bit? What what prompted you to write this?
Bryan Cantrill:What what prompted this disappointment?
Steve Klabnik:So basically, like, I have historically most I don't even know if I should necessarily say this exactly, but let's put this Many people that I know and respect are pretty vocally anti AI, and I found the things they said to be relatively reasonable. And when ChatGPT came out, I definitely didn't say, do you know who Steve Klavnik is? I typed real questions in there that you know? And it gave me answers that I thought were fine but also clearly made hilarious mistakes sometimes. And I kind of went, this is a fun party trick, but I'm not totally sure about a lot of things about this.
Steve Klabnik:And so I kind of just sort of put it away. But an interesting thing, and I sort of talked about some of this publicly but not all of it, is I told my mom's boyfriend about this. And he works or did work. He actually retired this year, but he, like, worked for doing, like, a tar plant, like, industry stuff. And part of his job is, like, sending reports about how the plant does to, like, other things.
Steve Klabnik:And so he was really curious, like, what does this thing know about, like, tar production and, like, sand and physical things I don't understand because I'm a computer programmer. And so, like, there's been like in two years or whatever that's been since that happened, there's been this really weird kind of thing where he has had very many measured and reasonable takes about it. Because I'd ask him, my program friends are all like, oh, it hallucinates and therefore it's useless. And I'd be like, what do you think about the fact that it's not accurate? And he'd be like, well, here's the deal.
Steve Klabnik:It's like, that's my job. He's like, I love editing text and I hate writing it. And so if if I need to do a report, I find it's faster to like ask it what to do. And it's like 75% right and 25% wrong, but it's my job to fix that 25%. And I can do the editing of the text faster than I can do the like writing on its own because it's not as painful to me.
Steve Klabnik:And so just like, you know, it's it's chill. Like, whatever. Like, computers are always making mistakes and, like, that's fine. Like, tools don't you know, they're not a problem. And so
Bryan Cantrill:Like, to be clear, you'd had this conversation with him two years ago, and he started using ChatGPT after how after this conversation. You inspired me.
Steve Klabnik:Using it. Yeah. Basically, I was like, I told my mom, like, by the way, this computer thing come came out, and, like, you know, I totally didn't ask it if it do about me, and it totally did or did not. I'm just being silly because I'm sometimes, like, I feel weird about the fact that I'm, like, a public figure in some ways. And so
Bryan Cantrill:Is this just really a wrong way for us to acknowledge your blue sky blue check? Is this what Yeah.
Steve Klabnik:It's really it's really about the blue check. Definitely.
Bryan Cantrill:I you know, I just told you we were like Adam, this is I said. We're not just gonna get past it. We gotta just hit it at the right at the top. Then we'll be done with it.
Adam Leventhal:Just get it out of our systems.
Steve Klabnik:Get it our rewatching 30 rock recently, and there's a scene where Jerry Seinfeld says to Alec Baldwin, like, oh, I was in Europe vacationing in the country that only rich people know about. And, like, that's kind of like the way I feel about the blue check thing. It's like, oh, yeah. I mean, know, I have it, but I can't tell you about, you know, our parties and the checks they sell me and send me and all those other things. So anyway, like, yeah, I I he had started using it immediately because he was just really enthralled with like, I can ask this thing anything, and it spits something out that's sensible.
Steve Klabnik:And he was curious, like, what does it this is a thing made by computer people. What does it know about my industry and the stuff that I know and things like that? And he continued to use it and found it interesting. And so I had this weird optimism from a non programmer person and then all of this anti stuff from programmer people. And that's fine, but it's weird that the programmer people are like, I don't know.
Steve Klabnik:Just like it started to it started to, like, mess with my head a little teeny bit. But then, like, as time went on, it was sort of like, I decided at one point that, you know, if I'm gonna be a hater, I like to be an informed hater. And so Oh, there you go.
Bryan Cantrill:Oh, yeah. This is the yeah. This is the most earnest interest. Comes from, like, I need to hate this thing better.
Steve Klabnik:Yeah. And, like, I I knew I had tried this thing two years ago, and people are always saying stuff gets better. And, like, you know, sometimes people are right and someone too are wrong. But I was
Bryan Cantrill:like, I haven't checked
Steve Klabnik:in on this a while. I should, like, you know, see what's up. And then, like, I was like, oh, so I had specifically for for reasons that involve JJ. There you go, Adam. Yeah.
Steve Klabnik:Because of JJ, for JJ's four eleven writing, I really
Steve Klabnik:wanted to
Bryan Cantrill:Are we gonna get the chime or are gonna get the the
Steve Klabnik:It's minutes until the JJ.
Bryan Cantrill:It's like It's a bit a chibhorn. Is that sound? Yeah.
Steve Klabnik:A really nice a really nice thing about JJ is the coloring in the console, and I think it matters and is important and is good. And in my tutorial, it's all black and white because there's no highlighting for JJ. So I got this idea that like in the second version of my tutorial that I'm working on, I would like to transpile the codes from the terminal whose name I'm totally forgetting right now, which is embarrassing. ANSI escape codes, like to CSS. And then I would like have coloring in my terminal and that'd be cool.
Steve Klabnik:And so I went on this little side project. And you know what's like not very fun? Writing an ANSI escape codes to CSS compiler. And I see all these people talking about how good, you know, code is at it. And so I decided, like, you know what the heck?
Steve Klabnik:I'll just throw it at chat GPT. And so I I was like, I'm gonna be extra lazy about it. I'm not even gonna, like, describe the problem. I'm just gonna be like, hey. I'm trying to turn ANSI codes into CSS.
Steve Klabnik:What'd got for me? I was like, oh, cool. And started, like, reproducing the ANSI table correctly. And I was like, oh, woah. This is kinda neat.
Steve Klabnik:But it, like, also didn't do, like, a fantastic job. But it was, like, doing a good enough job that I thought it was kinda cool. And so I was just, like, messing with it because I was like, this is actually better than I expected already. And, like, it's missing important cases. It's not really doing anything right, but, like, that's fine.
Steve Klabnik:And so I started using it for a couple other things. And then and that was never like actually finished because this is sort of like a side side side project at this point. So I don't have a ton of time to work on this stuff. But eventually, was like, you know, I'm really curious. They say these new models are way better and like than the one I'm using for free.
Steve Klabnik:And so, like, I can cough them up $20 once just, like, play with this for a little while. So I asked to do the same problem, and it gave me, like, a completely different algorithm for doing the thing. Like like, the the free versions model was giving me, like, go through every single byte in, a four loop with equals equals and match statements. And the newer model was giving me, like, regex search and replace, which I know you cannot parse the regexes and all that stuff, and so I hate to even bring it up. But, like, it was just it was so interesting to me that it gave the two models gave me completely different outputs, which is the thing that should not be surprising.
Steve Klabnik:But like, you know, when you just don't ever use a thing, you're just like, oh, I I guess that makes sense, but I just literally never thought about it before because I never really considered these things. And so I continue to just kinda play with stuff. And so then the stuff has been heating up as it gets more and more serious because I do think it's getting more and more serious, and that's a whole part of this discussion. But the thing that kind of broke me a little bit, and this is one area where I think I did sort of a poor job in writing, and we were talking in the channel before the podcast started about this in some ways, is a big criticism I got was I was not engaging with outcomes of the circumstance because I like people felt like when I was talking about so I thought that the deep learning thing was kind interesting. And frankly, I like to ask chattypity sometimes about just random stuff to see what it gives me.
Steve Klabnik:And so I typed I don't remember what it was, tell me about this thing and do a deep deep research on it because I think it's kinda interesting. And so it's searching the web. Here's the six web pages I'm looking at. And I was like, all right, cool. And I taboored a blue sky, and I see someone who I really technically respect reposting this person who apparently is a sci fi author who was like, LLMs cannot search the web.
Steve Klabnik:They don't do that. They can't do that. Stop saying they can do that. It's fake. They just make stuff up.
Steve Klabnik:And and this is the point that, like, this sort of, like, broke me, and what what I was trying to get by talking about this post is, like, everyone's arguing whether they're not a search engine or not. And, like, that's fine, I guess. But, like, my point was not to say, like, this person is wrong and stupid. The point is that, like, I have one tab where Chattypetty is searching the web for whatever definition of that that is, and you can argue about the details, and that's fine or whatever. But just like versus someone who says like, no, it doesn't.
Steve Klabnik:And I'm like, do I exist in a shared reality with other people? And I say that in kind of a joking way, but also sort of in a realistic way where, I don't know, just like I think it's also sort of plays us in a lot of this is that like, it's it's sort of weird and tough times, and people are feeling stressed about just like life and everything. But shared reality is like a complicated question at this point in some ways. And so I just kinda was like, okay. I need to talk about this and get something out of me because I feel like I'm seeing people that are super pro this technology make arguments I find ridiculous, but also people that are very anti really ridiculous.
Steve Klabnik:And I'm not normally a, like, in the middle kind of person. I don't know if you're both familiar with me or not, but, like, I'm generally very allergic
Bryan Cantrill:to that. No. No. This is fun. Like, I'm going to alienate absolutely, positively everyone.
Bryan Cantrill:It's just the Yeah.
Steve Klabnik:And and so, you know, what better way to, you know, deal with your lack of shared reality than to have a ton of people yell at you on the Internet about it. And so that was kinda like why I wrote the post was sort of just like, I I know I need to, like, sort of work through these things in some capacity, and, like, a writing helps you your thinking tremendously, and it helps my thinking tremendously. And so my intention was kinda like, I wanna set the stage, and I wanna say that, like, I am I intend to, like, go through this whole thing because I think one thing that makes it really hard to talk about is there's, like, a million different ways to talk about this topic, and so there's no way you can fit that all in one blog post. And so I just like and like, alright. I'm gonna say that I intend to start this discussion, and I'm gonna keep it meta, and I'm gonna keep it like, you know, I'm not trying to, like, say anyone's right or wrong.
Steve Klabnik:I'm just like, I wanna work through this, and I feel a little crazy. And, like, I wish that I feel like I I was like, I feel like I can't even talk about it because people get so upset, And then everyone just got mad at me. So that's like fine. Many people did not get mad at me too, but that was sort of the broader aspects of it is just like and and so I kinda feel like I didn't I wish I'd written a better post. I'm not entirely sure that I like like my post a ton, but it at least got the thing rolling.
Steve Klabnik:And, you know, now I kinda, like, have to do more of this work, and that's what I wanted to do anyway. So it's kinda cool in that respect. But, you know? So, anyway, I guess that's, like, the broad stage. Like, that's the broad setting in which the post happened and sort of, like, the kind of, like, backstory, basically.
Bryan Cantrill:Yeah. I think it's always hard because you you never know the the I mean, you necessarily like, the the discussion that you get are always the people that are kinda motivated to have that discussion. And you you never kind of like, there can often be a big because I mean, to me, like, your post represents pretty mainstream thinking. I don't think it's that controversial. And I I get that for there are people for whom that is very, very controversial.
Bryan Cantrill:I do think and Janet's making this point in the chat. I think that that if there there's a role. I would encourage all potential haters to do as Steve has done and use the thing enough to know exactly what you're hating and why Because it would I I do think that it would think Pete, some people are are there are arguments out there. And Steve, after you kinda said this, went into you know, you've mentioned that you've broken lobsters, which mean, did they give you t shirt for that? I mean, I'm not sure what the the but with the sheer number of comments on lobsters.
Bryan Cantrill:So I did I did go into that lobster thread. I'm like, wow. This is this Steve was not exaggerating. I got a lot of comments here. And I I do think that that you it behooves people to actually dig into this stuff because I saw you know, do you read Josh Marshall at all Talking Points memo, Adam or Steve?
Bryan Cantrill:Do you guys read No. Okay. Josh Marshall about it, but I don't read it. I see. I like and I like Josh Marshall, the and the, you know, an interesting kind of pundit.
Bryan Cantrill:And he had this post today being like, I have I am writing a piece on AI, and but I and I have never used it. So I used it this morning, and it started hallucinating on these two things or whatever. And it's like, okay. You're writing a piece on AI and you've not used it at all? Like, you I mean, are you like, maybe you like, why don't you write the piece next week?
Bryan Cantrill:Why don't you use it this week and then write the piece next week when you've got just a little more underneath you? Because I I really do think and, Steve, I don't think I quite realized how I mean, maybe this is just naive on my part about how polarized some of this stuff was. And I would really caution people about that. There is a bit of a vilification of people that are using this stuff. And I would really caution people off of that position because I I there's a day but if you're not using the stuff yourself and you're being like, you know, anyone who's using this is I mean, like, one of things I saw that I I am still grappling with are the the people who are concerned that it is it is not ethical to use an LLM.
Bryan Cantrill:And, man, that is I I I don't think that's a good position, I gotta say. I think that, like if you wanted the and I'm sure like the internet is just like bursting into flames right now over this. But I I I think you've you've got to be really explicit about that position because that's the same kind of position that says like, I think it's unethical for you to drive to work. And, like, that's not it. Like, I I guess I can I can construct a framework whereby that is the case?
Bryan Cantrill:But you're also saying it's unethical to drive your mom to the hospital. Well it's worth taking apart the ethical concerns.
Adam Leventhal:Right? Because I I think that they are broadly, and correct me if I'm wrong, copyright and environmental, and then maybe also, like, some big tech. Are are there other kind of categories of that ethical concern?
Bryan Cantrill:Yeah. I Steve, we look to you.
Steve Klabnik:I'm sorry to be I mean, those are those are those are two definite two of the definite large ones. There's also, like, deep fakes are a big concern and just general, like, what I would call, like, Internet of shit concerns of, like, what happens if we start the dismissive version of this that's funny, and I'm not trying to dismiss it as, like, what if someone put a lie on the Internet? You know? But, like, if there's a difference between quality and quantity, and, like, quantity is quality all its own. Right?
Steve Klabnik:So if there's, like, more lies on the Internet, then, like, how you know, what what's up with that? There's also concerns as to who is the people that gets the profit off of this.
Adam Leventhal:Right.
Steve Klabnik:I would say those are those are at least and also the guess the last one that's maybe a little separate is the, like, job loss that ensues from these things existing. Yeah. Oh, oh, so big one as well. So those are, I would say, like, five or six, like, top ethical concerns.
Adam Leventhal:And so I would just separate the ones I had cited as, like, ethical concerns about using it versus ethical concerns about the broader implications for humanity. That is to say
Steve Klabnik:Yeah. There's, like, production concerns. There's consumption concerns, and there's, like, utility and or, like, a societal effect concerns.
Adam Leventhal:That that's right. And I guess I guess implicitly by using it, you may be might feel like you're endorsing some of these other effects, other uses that that you're not engaged with, but encourage perhaps other people to. So I would just say, Brian, I I I kind of disagree with the sentiment that, like, you just gotta get over it or whatever. Like, I I in that, I think that there may be valid, like, valid concerns that people have about, say, how these are trained and how they're used and and what biases are implied by them. But I also agree with your point emphatically that you gotta go experience it to some degree yourself to evaluate those ethical concerns.
Bryan Cantrill:Well, and I think you you you're gonna need to separate that out. Right? I mean, you can say that, like, I think that we have a society that is too based on the internal combustion engine and still acknowledge that driving your mom to the hospital is actually an acceptable thing to do. Yeah.
Steve Klabnik:And Also, like oh, go ahead. Brian, you should finish what you're
Bryan Cantrill:No. No. I and and I think it's it's it's actually I think it's very important to like, there's a lot of complexity and ambiguity here, and we're gonna have to tease these things apart. Because when you get into this kind of like, I think this overly reductive stance of, you know, I am kind of playing forward the arbitrary implications of this, and therefore I think any engagement with it is is accelerating us towards this path that I kind of have in my mind. I I just think you're bringing yourself into lots and lots of un one is bringing oneself into lots and lots of unnecessary conflict and conflict that's gonna be very confusing for both parties.
Steve Klabnik:Yeah. I think I'm sort of in a the place and again, like, I I am also a little under read on many of these or I'm still, again, working through my thoughts on the ethics of all this stuff overall. But, like, I guess there's sort of a related thing that I see happening, which is, like, part of my concern about the ethical discussion is that it seems like some people think that if we just say that the ethics are bad hard enough, the genie will go back and rebuttal. Yeah. And, like, my take is like, okay.
Steve Klabnik:That's never going to happen. So if we agree that there are problematic ethics, we need to, like, understand what is actually happening. And the sort of, like, don't investigate the thing because it legitimizes it is, like, not allowing us to prepare to properly have the fights that are going to need to be had about the ethics of how it gets deployed in society. So as a specific example of some of this, there's a big sort of argument about, like, is is this any different than any other form of automation? And so, like, there are some people who sort of believe that, like, AI in terms of automation is, like, sort of a special unique instance of this happening.
Steve Klabnik:And there's other people who are like, and I think I'm more on this side, at least currently, like, automation has always been taking people's jobs and making productivity gains go to the people who own companies and not the you like the workers of those companies and like stuff like that. And like trying to make it special is actually making it more difficult to have the concrete discussions that are needed to like, try to right those sorts of wrongs. And so it, like, becomes a struggle even when you're sort of, like, on a similar ish side. You can end up arguing about stuff that's, like, not even really, like yeah. Like, I don't I don't know.
Steve Klabnik:There's just, like, one specific kind of, like, angle that I'm, like, particularly interested in. I I said something a couple weeks ago that's sort of, like a a thing that I break with with a lot of people who I consider to be on my own side on these sort of issues more broadly is like, I don't know, the idea as a programmer is being upset that they're gonna be automated out of a job is like kind of massively hypocritical to me. Because like, what do you think our industry has been doing since the inception of our
Adam Leventhal:industry?
Steve Klabnik:Oh, 100%. Do you think all those secretaries are, like, upset that we're worried that, like, our jobs are gonna go away? Like, you know, I Right?
Bryan Cantrill:I mean, you you go look at the at the census from
Steve Klabnik:But
Bryan Cantrill:you know, 1890, '19 hundred, '19 '10, '19 '20. I mean, it's kind of mesmerizing to look at the census from those years, censuses, in terms of how people actually employed.
Steve Klabnik:It is important, though. Like I said earlier, quality quantity is a quality all its own. And so, like, the idea that, you know, every job is going to be automated out into existence may be something that is significantly different, but it's just like, those are the conversations I wanna have with those kinds of like nuance and like investigating the angles on the problem. And it becomes very difficult when people are like, don't even investigate this at all because it is, like, not like, it's so bad that you shouldn't even talk about it. And it's like, that's not a thing that I can, like, agree with.
Steve Klabnik:I think there are a lot of problems and a lot of challenges, but I also, like I don't this is this is why part of, like, the thing that I got in trouble with is people thought I was, like, dismissing ethical concerns because I'm like, I wanna know what the thing is before I talk about the ethics of it. And it's not because I think those things are like, you have to know a % of the thing. Like, some people were like, do you need to know how combustion works to understand that a f one fifty could run over a child? And I'm like, no. But you do need to know that an f one fifty can run over a child to have that discussion.
Steve Klabnik:And like there's like a certain level of, like, I'm not saying you need to fully understand every aspect, but you do need to, like, agree. So come back to the shared reality thing. Like, we do need to agree on some level of understanding of, like, what capabilities of a thing are to, like, know how to talk about the ethics of the tool itself. And, like, for me, that means on some level, there's, like, I mean, you know, dialectics, blah blah blah, whatever. But, like, you need to have like, I like to start I like to start and not stop with, like, can we agree on what this thing is and what it can do?
Steve Klabnik:And then talk about
Bryan Cantrill:the parts
Steve Klabnik:and then also do the other things. And that's like the part where I struggle with a lot of discussion online about this.
Bryan Cantrill:But how
Adam Leventhal:about a straw man argument instead? That's almost as helpful.
Bryan Cantrill:Yeah. I would I I mean, I think part of the challenge is, Steve, just to answer your question, we invadably can't agree on what it is. I mean, that's actually part of the problem. Right. Right?
Bryan Cantrill:And I think that different people are and I actually think that, like, in you know, even and I you know, Nair in the chat is just definitely, singing our love music in terms of, like, the term AI is really a pro I I think it's deeply problematic. Right? Is the also, the the person in the chat asking if Frankenstein was an AI. I believe you mean Frankenstein's monster. And to be that guy.
Bryan Cantrill:The but, you know, just the great Silicon Valley line is like, oh, you're one of those guys. When Richard corrects his doctor about Frankenstein sponsored, yes. I'm one of those guys. You mean yin and yang. The but I I think that you you gotta tease these things apart.
Bryan Cantrill:You gotta because I for example, when and I you I mean, you called this out in your post, Steve. But, like, generative art to me is, like, so not the same thing as having something edit a document for you or offer feedback on a document. Like, these things are and I get, like, there there's some of the underlying tech is in common, but these things are so different. And the I I don't know what it's pronounced flaxit. A lot of people don't know that.
Steve Klabnik:Yes. Thought you'd get there. Of course. And and there are and that's then it's why I go to capabilities first is because I think that the ethics of generative art are very different than the ethics of Grammarly. Yes.
Steve Klabnik:Even though those are, like, sort of the same technologies in some level, like, the the societal impact and who they affect and in what ways are different because of their capabilities.
Bryan Cantrill:Totally. And I also think that, like, the and I and I I think I completely I mean, I do understand that when we we kind of tie this into one galactic, area of technology, and when some people are like, I don't understand why there are ethical concerns. If you're looking if you were thinking of this in terms of generative art, like, I don't understand how they're not ethical concerns. I think it's and it is but to me, like, when I'm thinking of, yeah, of of Grammarly. So I and I think that that, you know, someone is saying in the chat too about, like, its general purpose.
Bryan Cantrill:So it merits many different ethical discussions. I do agree with that. But because it's general purpose, I think you've really got to tease it out and tease it apart because there are so many different use cases for this. And there are use cases that I think it's gonna be hard to argue that you are ethically compromised on some of these use cases. And I guess you can just say It's
Steve Klabnik:it's hard to argue you're not on other ones.
Bryan Cantrill:Just like I don't you're not on
Bryan Cantrill:other ones. I don't really currently
Steve Klabnik:believe that, like, using Claude to, like, generate some tests for me is the same thing as making deep fake revenge porn. Those are just incredibly different things. And even though if they use, like, the same math sometimes, that, like, doesn't that's, like, a very different set of problems, and they're very and they require very different, like, concerns and considerations.
Bryan Cantrill:Now, Steve, I'm using deep fake revenge porn as test cases. So as this, like, I where am I in that An ethicist nightmare. Exactly.
Steve Klabnik:To to to investors in oxide computer, this is a joke. Yeah.
Bryan Cantrill:Yeah. Right. Investors in oxide computer, like, oxide computer is a Yeah.
Steve Klabnik:And like and like as, you know, to like sort of make this slightly about you for a second, Brian, like, you were telling me, you know, you're like writing a policy for the company. It's very interesting because like, you know, different companies have very different positions on whether or not you're allowed to use these tools that come from different under, like, not necessarily even understandings of what they are, but, like, different risk tolerances and worry about what the impacts of, like, things are. You know? And so, like, you you know, you're, like, you know, about to write an RFD around this kind of policies, like, for Oxide. And so it's like, on some level, it's very interesting to me what your opinion is.
Steve Klabnik:Because also, you in some ways are the person who determines if I am allowed to use this tool at my job, and in what way, and whatever else. And we've had personal conversations about this at length, but, like, you know, that's the sort of like, I was talking to a friend in DMs a couple minutes ago, and he's like, yeah. Like, I use Copilot, and it totally sucks. And I'm like, yeah. And, like, Claude's way better, but, like, that also doesn't matter if your job is, like, you must use Copilot and not Claude.
Steve Klabnik:You know? There's also like Right. This whole thing is just fractal in so many ways. It's very difficult to have not 27 different conversations at the same time at the same point. And there's also a weird intersection that goes along with the world is strange right now, which is kind of what someone's getting into in the chat by pointing out Ben Evans talking about once AI works, it's just software.
Steve Klabnik:I'm 39, and I've had some significant life changes lately. And I'm also wondering to what degree I am getting old. There's a weird part of me that's like, am I getting conservative or are other people getting conservative? And I'm not on some ways about technological optimism and some things like that. And there's also ways around things like people are like, well, like at you know, everyone puts AI into everything and they just get free VC money.
Steve Klabnik:And someday that crash is gonna happen and then putting AI in your thing. And I'm like, are you talking about 1978 to 1985, or are you talking about 2025? Because, like, we already had an entire bust boom cycle in AI, like, forty years ago. And, like, you know, like, it's just, like, also strange. Like, you know, it's like, yeah.
Steve Klabnik:Is it 1984? Thank you, James, I think if I remember correctly from your handle, like AI winter, right, is like a thing. And so I'm I'm also coming to terms a little bit with like, you know, I was around for the building of GitHub, but many people who I professionally am interacting with grew up with GitHub because I'm like an old now. And so, you know, there's, like, also a lot of those kinds of, like, things that are very, like like, I have some opinions about things like intellectual property that are based in a very, like, free software, free culture, piracy, pro, like, anti IP kind of, like, basis that many people who've grown up where their friends are starving artists do not. And that's kind of an interesting thing to sort of deal with is thinking, I thought we're all here to destroy intellectual property, not reinforce it.
Steve Klabnik:Because to me, Disney profits not individual small artists off of copyright. Many reasonable people disagree with me on that. And, like so, you know, there's just, a whole pile of also kind of, like, midlife crisis wrapped up in many aspects of this. And and or, like, you know, I'm like, I personally care a little bit less about the physical act of writing code that I did twenty years ago. And, you know, many people are like, I don't want a robot to program the computer because I love programming the computer.
Steve Klabnik:And I'm like, I kinda want computers to do stuff, and I don't care as much about the writing code part. So I'm kinda naturally predisposed to not be as upset about the idea that, like, I'm not generating source code as much. I'm, like, writing documentation and, like, doing all this. Because, like, there's there's just a billion ways you can take this whole thing. And so, you know, I don't know in what vector of the space, you know, my opinions are or everything else, etcetera.
Steve Klabnik:But just like it touches on so many things. It just becomes hard to talk about it because you don't even know if you're talking about the same thing, like, at all in 12 different ways.
Bryan Cantrill:Yeah. And I think you're often not. Yeah. Oatmeal Dealer, you, you wanted to raise your hand. You get you get thoughts and opinions.
Bryan Cantrill:You wanna you wanna jump in here?
Julian Giamblanco (Oatmeal Dealer):Yes. I have I have a few of those. Do do I sound normal?
Bryan Cantrill:You sound you do. You sound Audio. Yeah. I I I think yeah. You sound audible, Steve would say.
Bryan Cantrill:I, you know, I I I think you sound great.
Julian Giamblanco (Oatmeal Dealer):I think your audio is like audible.com?
Bryan Cantrill:Exactly.
Julian Giamblanco (Oatmeal Dealer):I I just full disclosure before I say anything else. I did see the announcement for this particular cast, and I have two pages of opinions that I wrote down with my own god given hands. So please don't be scared.
Bryan Cantrill:We'll be the judge of that. We're gonna
Julian Giamblanco (Oatmeal Dealer):I'm gonna try my judge. Okay. But I I I have a lot of different things that I've wanted to say so far, but I I think the main, I guess, thesis of what I took away from thinking about Steve's article was, one, when a lot of people talk about AI, they're talking about, like, all of it, right, as, like, one big Right. Thing. And they're they're just trying to answer the question, like, okay.
Julian Giamblanco (Oatmeal Dealer):Is thing good or thing bad? Right? Yep. And I I think one, like, really major, like, top level way in which that premise is just incorrect is I I think, like, what people think of when they think of AI is closer to a manufacturing process than a product. Right?
Julian Giamblanco (Oatmeal Dealer):Because there you have to, you know, start with materials. Right? You have to get your data from somewhere. And who who cares where you get it from? Right?
Julian Giamblanco (Oatmeal Dealer):Like, if you're just starting out, you're on the ground floor, like, just scrape everything. Right? What could go wrong? You you have to get your raw materials, and you you have to put it through some kind of, you know, refining process where you you make it into something. Right?
Julian Giamblanco (Oatmeal Dealer):And I think for most people who are not, like, really at the top of the pyramid of the the people who are making this stuff, to them, it's just like data goes in and, like, big scary computer program comes out. Right? And there's not like, I think Steve said something about this where, you know, to a lot of people, like, using Claude to, you know, refactor some modules or something is like, to them it's not morally distinct from, you know, deepfake revenge porn, for example.
Bryan Cantrill:Yeah.
Julian Giamblanco (Oatmeal Dealer):Understand. But but Adam said something about, like, there are also concerns of, you know, when I use this thing, am I, for lack of a more complex term, empowering someone else to do evil later? Right?
Bryan Cantrill:Yep.
Julian Giamblanco (Oatmeal Dealer):Because I I think, like, you know, you'd look at a dichotomy like Claude, you know, versus, like, deepfakes, and those two things are, like, obviously so distinct. Right? Like, one of them is generating text and the other one is generating images. But it's, like, you could have, for example and I could see this being a real realistic scenario. There's, let's say, that there's a company who is, you know, using an image classifier, neural network to, you know, provide early cancer detection.
Julian Giamblanco (Oatmeal Dealer):And they're turning around, and they're taking some of that r and d from that, and they're, you know, making another image classifier that, like, decides who to blow up with missiles. Right? And it's like, oh, if you actually, now, if you want to get your early cancer detection, you necessarily have to, like, enrich and empower the people who are like, we think computers should kill more people, actually. And, you know, I I think when you start to think of it as a force multiplier before anything else. Right?
Julian Giamblanco (Oatmeal Dealer):Like, okay, here's this thing. And if you have the thing, you can do a lot more of blank than you could before. Right? And some have upsides that, you know, obviously outweigh the downsides like ammonium nitrate. Right?
Julian Giamblanco (Oatmeal Dealer):Like, we all owe our our lives collectively Yes. From the population explosion. Right? Yep. To to this one thing.
Bryan Cantrill:Yep. Yeah. It's like, yeah,
Julian Giamblanco (Oatmeal Dealer):sure it explodes. Right? Yeah.
Bryan Cantrill:Yeah. That's a very good explode.
Julian Giamblanco (Oatmeal Dealer):Plenty of things explode and also don't, you know, allow you to have 8,000,000,000 people on one planet. So, like, who cares? Right?
Bryan Cantrill:Yeah.
Julian Giamblanco (Oatmeal Dealer):And then, you know, you turn around and you I I I'm crossing my heart 1,000,000 times before I invoke this discourse, but, like, you have other things like conversations around gun control. Right? Where it's like, okay. The thing very clearly is, like its purpose is to do a thing that's bad. And, like, that's a totally different conversation than, like, thing that does thing that is, like, unilaterally good.
Julian Giamblanco (Oatmeal Dealer):Right? And I I think that with AI, it's like all the meters are just maxed out. Right? I I think, you know, Steve mentioned something like this this where, you know, everybody seems to either think that, like, oh, it's just like the it's just like the industrial revolution. Right?
Julian Giamblanco (Oatmeal Dealer):It's just like automation. It's totally exactly the same thing. And then, you know, there's another camp of people that are like, no. It's this totally new thing that isn't even anything like any other thing that's ever existed, which is like I mean, that's impossible also. And yeah.
Julian Giamblanco (Oatmeal Dealer):I mean, I I think it's just, you know, it it reminds me a little bit of like the the r f d three podcast, right, where I don't remember which one of you was who said this, but it's like, you have no idea, like, how much damage that one bad hire can
Bryan Cantrill:do. Oh, yeah. Yeah. Yeah. Interesting.
Julian Giamblanco (Oatmeal Dealer):Yeah. And it's like, you know, when when you talk about, like, unleashing, you know well, you we talk about unleashing as the thing as if we haven't done it already, but, like, you know, the cat's out of the bag and everything. When you talk about the potential upsides and downsides of, you know, a given technology, there's two sets of downsides. Right? There's the downsides of using it in good faith, and then there's the downsides from bad actors.
Julian Giamblanco (Oatmeal Dealer):Right? Because, like, the car Yeah.
Bryan Cantrill:That's a that's a helpful dichotomy, actually. I guess important
Julian Giamblanco (Oatmeal Dealer):Right.
Bryan Cantrill:To distinguish those two. Because
Julian Giamblanco (Oatmeal Dealer):because a car is, like, you know, you guys brought up, like, driving your mom to the hospital. Right? Well, it's like, yeah. Of course, Like, you drive your mom to the hospital and you emit some amount of c o two and it warms the planet ever so slightly. And, yeah, maybe that is eventually gonna kill us, but it's gonna kill us, like, point 00001%, you know, compared to the rest of the c o two that's gonna kill
Steve Klabnik:I'd I'd someone come at me for being like, well, you know, don't you think driving your car puts a lot of, like, you know, environmental pressure and, like, know, maybe blah blah blah. And I was like, I don't even have a driver's license, man. Like, I literally live a relatively car free life. Like and so, like, I'm already on that team, like, here, actually. Like so
Adam Leventhal:Oh, yeah. Yeah.
Julian Giamblanco (Oatmeal Dealer):And it's well, and another I mean, I I don't wanna get on a tangent here. I mean, I do a little bit, but I'll try not to. Like, one thing that a lot of people because I have two cars, for example. I do drive, in fact, and I'm comfortable with the ethical implications of that. Because, I mean, modern internal combustion engines are completely different than, like, your you know, like, a carbureted engine from the eighties.
Julian Giamblanco (Oatmeal Dealer):Right? Like, if your car is smog exempt, you are the problem. Right? And and also are the vehicles.
Steve Klabnik:Like, I get to say I don't even have a driver's license, but, you know, when I wanted to get a pizza from places a little too far to walk away yesterday and have my girlfriend drive me there, am I actually leaving? I took a thing. I took a Waymo to, like, go to a doctor's appointment today. Like, you know, am I actually living a car free life? Like, you know, it's, like, more complicated than just purely I
Bryan Cantrill:mean, you you now you did use the the new option that allows you to order a gross polluter. So you were able
Steve Klabnik:to actually They give you a discount for that, actually. It's cheaper for you because you're, you know, yeah, emitting pollution.
Bryan Cantrill:Yeah. But
Julian Giamblanco (Oatmeal Dealer):Yeah. Sorry. Well, I I think the only other thing I was gonna say is, like, when you talk about, you know, the the good faith downsides versus the bad faith downsides, it's like the the bad faith downsides of a car, for example, is like, okay. A person could just decide to drive their car into a crowd of people Right. Which
Bryan Cantrill:they do. And people have. Right? I mean, that's yeah. It's been used as a weapon.
Julian Giamblanco (Oatmeal Dealer):Yeah. That happens I think that happened last month. Right?
Bryan Cantrill:Right.
Julian Giamblanco (Oatmeal Dealer):I I I couldn't tell you where, but I'm it's we're around there.
Steve Klabnik:Statistically speaking, it's gonna happen. Yeah. Right. Exactly.
Julian Giamblanco (Oatmeal Dealer):Often
Steve Klabnik:enough that And
Julian Giamblanco (Oatmeal Dealer):I I think it's like, you know, to some extent, and this might be a little bit spicy, but even with things like cars, I think there is a little bit of manufactured consent there because, I mean, we just we need them. Right? Oh. Like Okay. So what's you need a car to get around.
Julian Giamblanco (Oatmeal Dealer):Yeah. Oh, yeah. And, like, Henry Ford did his due diligence so that if you want to get from one side of the country for another to to another, like, you're either flying or driving. Right? Like, those are your two options.
Steve Klabnik:When I moved to Austin, I told people I didn't have a car. Because to be clear, I also joke, like, I don't have a license because I lived in New York so long. It expired, and I didn't notice. And so, like, it's not because I got a DUI or something. Yeah.
Steve Klabnik:Like
Julian Giamblanco (Oatmeal Dealer):Well, now you can't fly because you don't have a real ID.
Steve Klabnik:Yeah. Well, I I got my passport in that circumstance. But anyway, like, people were like, you don't have a car? Like, how do you get groceries? I'm like, grocery store is a quarter mile away.
Steve Klabnik:I and they're like, and so what do you do? And I was like, I walk. And they were like, woah. And that's like, you know, like, the ideology of cars and needing them is, like, such an intense thing for so
Bryan Cantrill:many people.
Julian Giamblanco (Oatmeal Dealer):No. And and I think that's yeah. And think that's kind of like that has that shares a root cause with, like, the reason why this particular discourse is, like, so fragmented and so polarized because I I think the average person who participates in any discourse, like, grossly underestimates how different everyone's lives are. Yeah. Also Yeah.
Julian Giamblanco (Oatmeal Dealer):Similar they are within, like, these little pockets of, oh, yeah. Well, this is New York City. Like, none of us drive because we don't hate ourselves. Like, what's wrong with you? Right?
Julian Giamblanco (Oatmeal Dealer):Or
Steve Klabnik:I can, like, talk about, oh, yeah. I'm so great because I don't have a car. But you know what? I'm in the top 1% of people who fly. Like, I was in the top 1% of Delta customers for, like, ten years.
Steve Klabnik:So, you know, like, is is not having a car and then getting on a plane two times a month, four times to go home, I guess, too? Like, is that better or worse in terms of, like, the environmental impact of the thing in my own personal, you know, like,
Julian Giamblanco (Oatmeal Dealer):output or whatever? Like yeah. Just And and if we're having a, you know, a conversation about, like, the ethics of flying, for example, let's just suppose we're doing that, you might come into the discussion like, well, planes are good because they allow me to not use cars, which are bad. Right? So planes good because planes equals less cars.
Julian Giamblanco (Oatmeal Dealer):And and, you know, someone might say, like, well, you benefit from planes. So, of course, you would think planes are good.
Steve Klabnik:Yeah. Cars kill more people than planes do. Like
Julian Giamblanco (Oatmeal Dealer):Right. And I think that is another thing that I think is, like, the common thread through all of this is, like, when you do have these tech I'll call them technological force multipliers. Right? Like, they just they exist as information and then people go and build them and then they go and do a lot of stuff. You know, we talk about the upsides and the downsides as if they're all happening to the same person.
Julian Giamblanco (Oatmeal Dealer):Right? But that's just, like, not the case. Like, you know, you have people who are polluting the planet who will be dead by the time it's a problem. You know, you have people who drive cars and maybe, like, other people die because they drive cars, like, in a very immediate way. And, like, those the people driving the car are, like, not subject to this, you know, either through random chance or whatever, you know?
Julian Giamblanco (Oatmeal Dealer):And I think a lot of people who think AI is really awesome are coincidentally also people some of the most of the time, I'll say. I don't know. I may regret that. But, like, the people who think it's good statistically, are are more likely to be people who, like, at least think they're gonna benefit it benefit from it in some way.
Bryan Cantrill:So I I think that the I think we actually good versus bad is, I think, part of the problem too. Because I think it's like
Julian Giamblanco (Oatmeal Dealer):Right.
Bryan Cantrill:I don't know. Like, I don't know that I would say it's
Julian Giamblanco (Oatmeal Dealer):Technology is value neutral.
Steve Klabnik:You know?
Bryan Cantrill:Well, it it it's useful is the thing. Like, it it definitely is useful. And I think maybe, Steve, this is some of cognitive dissonance that you experienced that kind of prompted right the blog entry is that, like, you it is very hard to make the the bold claim that it's not useful because it just means that if you I mean, Steve, you and I were joking about someone who's like, you know, when I, you know, one of our the the many, hacker news threads where people were were coming in with opinions about oxide talking about how, like, well, nobody uses this. It's like, well, that's a very easy to disprove because all you need is, like, one person that's using this. And I feel that, like, you you know, you just need when it is useful, and I think it is indisputably useful, I think, you know, one of the things that was interesting is, Gina said in the chat that, like, the folks that are you know, how do non technological people see this?
Bryan Cantrill:And I gotta tell you, I spent, like, four weeks with a non technological person, broadly speaking, my mom, who is, recovering from a fall and surgery that happened as a result of that. And she is asking ChatGPT all sorts of questions. I mean, she has been it has been really, really important for her. And I I would actually do think it's it is like and because Adam, I think you know what I'm talking about this about, like, the degree that using chat GBT when you're having a medical issue is extremely valuable. Because, like, a doctor is gonna a a good physician will be somewhat patient with you and explain some things, but they're not gonna be able to entertain arbitrary questions.
Bryan Cantrill:And they're not gonna be able to entertain as many questions as you have. So people head to doctor Google. Right? And doctor Google is not good as any physician will tell you, and it will Yeah.
Steve Klabnik:Yeah. It There's a really weird thing where it's like, both want to believe that doctors are intelligent and have more insight because this is their specialty and that they, like, care, and they've spent their lives training for this thing. And I've also had some really bad fucking doctors and some terrible, you know, interactions with people not giving a shit and then doing bad things to people that I care about because they didn't care enough to, like, think. And so, you know, like, sometimes it turns out doctor Google is better than a real doctor because doctors are humans actually. And, like, it doesn't mean, like but it's like, also don't wanna denigrate the profession because I do think that expertise and stuff they do is, like, good, but, like, also, you know, just it's like so much.
Steve Klabnik:Even in that one little thing, there's, like, so many contradictory thoughts and opinions and beliefs about just, like, everything.
Bryan Cantrill:Well, I can tell you that doctor chat will entertain your questions kind of, like, arbitrarily. And we did get into a little bit of a mode where my mother would ask me a question. I would give her an answer that she didn't necessarily like the answer to, and then she would ask ChatGPT the same question.
Julian Giamblanco (Oatmeal Dealer):It it ChatGPT work. Is this true?
Bryan Cantrill:It is. And yeah. Exactly. Totally. Totally.
Bryan Cantrill:Totally. And, and then it would I mean, I was vindicated when ChatGPT would give, basically the same answer. But at one point, I'm like, are you asking ChatGPT the same question you just asked me? And she's like, I am actually talking to ChatGPT right now, and I would like some privacy while I talk to ChatGPT. I so there's like I'm like, okay.
Bryan Cantrill:This is like patient client privilege now with your with the but the you know, she and because she was having something very new to her and just asking a bunch of questions that she wanted to ask. And the answers she's getting are good. They are good answers. And the, it was extremely valuable. And the so I I think that, like, that this is where you've gotta be I I mean, I I do think that from a looking at this as, like, much, much, much better web search, is a an entree that people need to take into this thing because I don't understand the basis for which one would view all use of an LLM as unethical.
Bryan Cantrill:But then web search? No, that's fine. Like, searching Stack Overflow? I got no problem with that. It's like
Steve Klabnik:Like I joked about earlier, it's like people are like, but you know, you get real results from Google as opposed to the things that AI hallucinates. You're saying you've never gotten a web page that someone wrote a lie on that Google returned to you? And, like, again, quality is a thing on its own. And if you start getting more, worse results, like, that is the thing that's important, because Google's decided to just shove in the worst freaking AI. Like, one really weird thing, when you're, like, very into paying attention to details versus not, is, like, Google's real AI products are, like, pretty damn good, and its web search AI is so terrible.
Steve Klabnik:Like, really, really bad. I don't know why Google is, like, trashing their reputation. Right?
Julian Giamblanco (Oatmeal Dealer):On a budget. Right?
Steve Klabnik:Yeah. It's a struggle for me to, like, make the case that AI tools are good, because the median interaction people have with them is a Google search result that's just, like, actually fucking wrong. And, like, it's just, like, what is going on?
Bryan Cantrill:Comically wrong. Like, the Gemini branding and they're using Gemini as a brand name for it all. Hey. Can I ask all AI companies: can you guys use your own fucking products to come up with names of things?
Bryan Cantrill:Because your products will come up with better names than you're coming up with. So could you please? Like, everything is named Gemini. Everything is named Deep Research. And I don't know if you've used Gemini Deep Research.
Adam Leventhal:Gemini No.
Bryan Cantrill:No. And when I say Gemini Deep Research, I should I should clarify. I mean, Gemini Deep Research Pro 2.5. And so you give it a task to go research, and it will give you a plan. Here's what I'm gonna go do.
Bryan Cantrill:And then it goes off for some number of minutes and researches, and this is where you're just, like, kind of fidgeting in your seat, like, okay. You know? It's one thing to drive mom to the hospital. It's another thing to, like, you know, roll coal when you're driving mom to the hospital. So it's like, you know, I've got no idea the resources being consumed there.
Bryan Cantrill:But then it comes back with a research report that's sourced. That's pretty interesting.
Steve Klabnik:The funny version of this, and I joke this is like the work podcast, so it's actually funny that I bring this up. It's like not just the naming, but I'll paste this article in the chat, which is titled why do AI company logos look like buttholes?
Bryan Cantrill:Oh my
Julian Giamblanco (Oatmeal Dealer):god. Starburst. Yeah.
Steve Klabnik:If you look at every single logo, it's always a circular shape with a central opening in the middle that's, like, you know God. It really is.
Julian Giamblanco (Oatmeal Dealer):Yeah. It's very
Adam Leventhal:Cannot unsee it now. Thank you.
Steve Klabnik:Well Yeah.
Bryan Cantrill:Yeah. I mean, I can't see the Oxide O any other way, unfortunately. It's like, oh, god. We got a green asshole for a logo.
Julian Giamblanco (Oatmeal Dealer):Right. So can I throw you a softball, Bryan? Sure. So there's something that's come up a couple times. It's, like, the ethical concerns of, like, the energy consumption.
Julian Giamblanco (Oatmeal Dealer):Right?
Steve Klabnik:Like, what you just brought
Julian Giamblanco (Oatmeal Dealer):up. Because it's like, oh, you know, you're doing this, and then, like, 500 GPUs run at a hundred percent for ten seconds to, like, answer your query. You know, like, why should you be able to leverage that kind of power? And it's like, I remember back when you guys did the episode about Cerebras.
Bryan Cantrill:Mhmm. Right? Yeah. Cerebras. Yeah.
Julian Giamblanco (Oatmeal Dealer):The Perplexity guys. Yeah. They're using the chips for Perplexity. Right? Yeah.
Julian Giamblanco (Oatmeal Dealer):And I would full disclosure, I would consider myself, like, as far as, like, the pro versus anti AI thing, like, I'm probably somewhere in, like, the 75% to 80% anti range.
Bryan Cantrill:Okay.
Julian Giamblanco (Oatmeal Dealer):But I looked at the and that's just on vibes. Right? I looked at Cerebras, because I was like, this sounds really interesting, actually, because you're telling me you made silicon that's actually designed to run this thing. Right? And I think that's a factor that not a lot of people bring up, just because, well, of course AI runs on GPUs.
Julian Giamblanco (Oatmeal Dealer):Like, what else is it supposed to run on? And it's like, a lot of people get so caught up in the dichotomy that they don't ask some of the more subtle questions. Like, okay. Well, which factors here are alterable?
Steve Klabnik:Oh. Right? A lot of the environmental discussion too is based on big scary numbers. Like, oh, man. Did you know that a gallon of water gets used every time you think about OpenAI.com? And it's like, well, do you know how much water was used when you typed that comment on the Internet? Like, do you
Julian Giamblanco (Oatmeal Dealer):leave your phone charger plugged in overnight?
Steve Klabnik:Watching four k on the Netflix. Yeah. Like, almond milk. We can't talk about almond milk. We can't.
Bryan Cantrill:Oh, fucking almond milk. No, it's true. There's no environmental defense of almond milk. I dislike almond milk, so I welcome the statistics about almond milk just destroying the planet.
Steve Klabnik:Yeah. And this also doesn't
Bryan Cantrill:mean that it's definitely okay either,
Steve Klabnik:but it's just like it doesn't mean it's definitely okay either. Like, again, my problem is that this discourse is muddying the waters and not letting us have a real conversation. And so when it's focused on, like I saw an article talking about data center water usage, and they ignore the fact that water gets reused, and they're only talking about the throughput, like, the amount of water that could flow through the pipes in a data center. And it's like, well, it doesn't all immediately go away. Like, it doesn't mean that every single gallon is used in that way
Steve Klabnik:and only used once. And, like, etcetera. Right. And so many people imagine, like, the Yeah.
Julian Giamblanco (Oatmeal Dealer):Nuclear plant cooling towers, right, where it's just, oh, the mist. It's all being lost.
Steve Klabnik:Or there's, like, another article someone sent me that I definitely am sympathetic to: concerns about an xAI data center being built in Memphis, where, because it's Elon, he gets to just ignore environmental regulations, and all those people are suffering terrible environmental impacts. It's like, okay. Yeah. The problem here is the fact that he's not following regulations, not that, like Yes. A data center is inherently being built.
Julian Giamblanco (Oatmeal Dealer):Like, Right. Well and and that's the thing about, like, the bad actor downsides.
Bryan Cantrill:Yeah. Yeah.
Julian Giamblanco (Oatmeal Dealer):Right? It's like, there's there's such a huge gulf between, like, you know, is this technology bad when everybody gets along and we're all nice to each other and we all, like, use it to make our lives better versus, like, what happens if there is a guy who was just like, yeah. I'm gonna be an evil bastard. And if AI will help me do that, then, yeah, I'm gonna use AI to be an evil bastard.
Steve Klabnik:I also don't wanna make this entire episode purely about ethics either. So I kind of, like, wanna steer it away towards some other, like, things
Bryan Cantrill:about Yeah.
Steve Klabnik:Beyond that. So I don't know if anybody else has other final, like, ethics things to say, but there's, like Alright. Wrap it up, folks. We have an hour and a half, and we're at, like, an hour
Bryan Cantrill:Exactly.
Julian Giamblanco (Oatmeal Dealer):So we're on pace.
Bryan Cantrill:But any final ethical concerns? No. Oh, yep. No. I see the hands up.
Bryan Cantrill:I'm not calling on them. Okay. Ethics is closed. Moving on.
Bryan Cantrill:Let's go. Let's get out of here. Yeah. Last call for ethics. It's like, sorry, pal.
Bryan Cantrill:You just missed ethics. So yeah. Sorry.
Steve Klabnik:Okay. So, like, a thing I've been really interested in is the sort of, like, skill gap. Like, I'm, like, a competitive gamer person in many respects, and so, like, skill has always been the thing that I've been very interested in. And so, like, another thing that I sort of realized after playing with these tools for a while is like, wow. It gets better the more that I use it.
Steve Klabnik:And then I realized I'm getting better at using it. And so, like, the number like, not to put it back to the search thing again, but, like, the difference between the the person on the free account on chatgpt.com who has never used those things versus, like, I'm using asynchronous agentic AI with MCPs is, like, you are living in different universes and having different conversations. And, like, that is changing so rapidly. It's difficult for people. Like, I also don't fault people for not being able to pay attention to a space where, like, every six months, the entire thing changes.
Steve Klabnik:Or even just, like, someone was asking me over the weekend, like, oh, you think Claude is, like, definitely the best coding AI? And I was like, well, as of three days ago, yes, that's true. But, like, as of four days ago, it was much more up in the air. And, like, you know, that rate of change and rate of capability change also makes it difficult to, like, have a shared understanding of what the fuck we're talking about here. And, like, a lot of people are thinking, like, oh, I asked ChatGPT, please generate me a React component, and I copy pasted a thing.
Steve Klabnik:And I'm like, this is very different than the like, I talked to a reasoning AI to discuss my feature. We discuss a plan. We write down the markdown plan. I feed it to a separate reasoning AI and say, hey, do you think this plan is good or bad? And it says, well, you know, I think maybe this is like broken up in a slightly different way and that produces a different feature.
Steve Klabnik:And I say, okay. Can you produce me, like, a plan.md with individual steps broken down by, you know, like, story points and shit or whatever you want, plus, like, example prompts that you think would be good prompts to use. And then I pass it to a coding AI and say, okay. Cool. Like, implement the thing in this markdown file.
Steve Klabnik:And it's like, I'm doing, like, a discussion-to-markdown-to-code compiler, as opposed to, I write code in an editor and invoke my compiler on it, and, like, all of those things.
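(A minimal sketch of the pipeline Steve just described, for readers who want the shape of it on the page. The call_llm helper is a hypothetical stand-in, not any vendor's actual API, and the prompts are illustrative only.)

```python
# Sketch of the discussion -> plan.md -> code pipeline. `call_llm` is a
# hypothetical stand-in; replace it with your actual model client.
from pathlib import Path

def call_llm(role: str, prompt: str) -> str:
    """Hypothetical helper: send `prompt` to a model acting as `role` and
    return its text. Stubbed so the sketch runs end to end."""
    return f"[{role} response to: {prompt[:60]}...]"

# 1. Discuss the feature with a reasoning model and draft a plan.
draft = call_llm("planner", "Discuss this feature with me and draft a plan: ...")

# 2. Hand the draft to a *separate* reasoning model for critique.
critique = call_llm("reviewer", f"Is this plan good or bad, and why?\n\n{draft}")

# 3. Fold the critique back in; ask for a plan.md with individual steps
#    and suggested prompts for each step.
plan_md = call_llm(
    "planner",
    f"Revise the plan given this critique and emit a plan.md with "
    f"step-by-step tasks and example prompts:\n\ncritique:\n{critique}\n\n"
    f"draft:\n{draft}",
)
Path("plan.md").write_text(plan_md)

# 4. Finally, point a coding agent at the plan and have it implement.
call_llm("coder", f"Implement the steps in this plan:\n\n{plan_md}")
```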
Bryan Cantrill:Stop it, actually. I like the fact that you had your little AI scrum in there.
Steve Klabnik:Totally. And there's also the impacts of, like you know, I mean, we all love to think that we're professionals, but some parts of our jobs are more fun than others. You know what sucks? Writing out test cases. And, like, there's impacts on things like test cases that become more important in this future than other ones. And, like, I think that I produce code with significantly better test coverage and better tests when I'm using these tools than when I don't.
Steve Klabnik:Because you know what sucks? Writing comprehensive integration tests for your feature. But you know what doesn't suck? Asking a robot to do it, and it gets it 90% right, and then you tweak some of it and you're done. And so Yeah.
Steve Klabnik:You know, like, that's, like, also part of the thing too. And I hate the vibe coding term, because, like, it's sort of accurate, but it also means a lot of people imagine that people just tell the AI, like, build me the feature, and hit enter, and then commit whatever happens. And, like, you know, sort of back to your forthcoming RFD, Bryan, it's like, you know, real professionals are going, like, no. In the same way that Vim versus Emacs is irrelevant to the diffs I produce and submit to our code base, AI generating or not does not mean I'm not responsible professionally for the quality of code going into my code base. And so it's not like, you know, because I ask it to do this sort of more AI-driven code stuff that I just commit whatever bullshit it gives me.
Steve Klabnik:Like, I need to actually make sure that my diffs are good in the same exact way as with any other tool that I'm using. And, like, there's also the, like, do some people actually do that or not? And, like, yeah. But there's just a lot, even if we put aside, is this tool good or bad? Like, how do we use this tool?
Steve Klabnik:What effects does it have on our development? You know, like, what does it mean for the craft of coding? Yeah. A post I wanna write eventually, I told you about this, Bryan, but I might as well preview a hot-take blog post to make everybody fucking mad at Steve on the Internet: there was a time when people were like, I don't use C compilers because they produce shit binaries. And, like, I have more respect for my craft than to have an extra ELF section.
Steve Klabnik:ELF wasn't even invented at that point. But, like, you know what
Julian Giamblanco (Oatmeal Dealer):I mean? There are people that like, space b. Right?
Steve Klabnik:Yeah. Like, there were people that really cared about the literal binary code that was in their binaries, and they thought, like, oh, those compilers are wasteful. Like, a C compiler produces bad enough code that it's wasteful, and I don't want that shit in my professional output. And, like, those people haven't been saying that for a long, long time. And so I'm not even saying that source code quality doesn't matter, but there's sort of a weird future that I've been thinking about: what if it is the planning process and the higher-level architectural details?
Steve Klabnik:Like, I am interested in, like it's surprising that the guy who loves docs and talking about code is, like, excited about talking about code instead of writing it. But maybe my personal skills are more useful in this: I'm not the, like, shave-every-last-bit-off-of-the-output-binary person, but the coordination of the plan and architectural details and communicating that to people, and, like, the documentation and testing and yada yada yada. Like, we're shifting what skills are valuable, maybe, and in what context. And there's a lot of that kind of stuff going on that's, like, a useful discussion to have that feels impossible right now. And, like, I don't know.
Steve Klabnik:Anyway, it was,
Bryan Cantrill:like, five minutes. This still allows everyone to kinda go to their kind of truest selves, you know. Because, I mean, it is very interesting to use LLMs as a sounding board for debugging. And one of the things that I've done is actually had it look at logic analyzer traces on I2C issues. And I was like, this is definitely not gonna work. And it worked pretty well, actually. And again, you need to, you know, use it as a tool.
Bryan Cantrill:And I'm like, you know, this behavior I'm getting out of this device is actually nonstandard behavior. It's vile. I mean, I2C is a bit of a mess to begin with, but this is not compliant, such as compliance exists. And it was great. It was able to analyze this and point out exactly where it was kind of violating the spec and kind of eating into the margin. So as this kind of thing that you can ask questions of, I think it's really remarkable.
Bryan Cantrill:But you know what? I'm not worried at all about it chasing away debugging. On the contrary, I think it's actually gonna make debugging much more accessible to people, because it's something that is interested in your problem. Which is another thing so, Steve, I don't know what your take is on this, but one of the problems that I feel I've had in every organization I've been in is that there is far more to do than there are people, which is in some ways a good thing to have. As a result, you tend to often be working on things by yourself, even when we try not to do that. And, of course, the act of writing software ultimately is kind of a solo act, and it can be, I think, lonely.
Bryan Cantrill:And it's very nice to have something that really cares about your problem. This is where you do get to the I mean, the sycophancy of these things is just next-level genius. I mean, it's so effective. It's so scarily effective.
Julian Giamblanco (Oatmeal Dealer):You should you
Steve Klabnik:should have put in your CLAUDE.md that you don't appreciate sycophancy, and you prefer that it is short and direct and to the point and calls you on your bullshit.
Bryan Cantrill:My friend. I do appreciate it.
Steve Klabnik:Yeah. Yeah. Totally. I
Bryan Cantrill:I not only I mean, one thing I learned about myself
Steve Klabnik:I put as much as possible in my CLAUDE.md, personally. I
Bryan Cantrill:mean, one thing I have definitely learned about myself is that, you know and, Adam, I was listening to the ego con that we pulled on Dave Hicks. Right? Oh, yeah. And I'm realizing, like, I am Dave Hicks, and the is it Dave Hicks? What is it?
Bryan Cantrill:It's Dave Whiteman. Thank you. And Dave Whiteman is the LLM. That is I mean, am I being, like, masterfully trolled by this thing?
Bryan Cantrill:And in some other podcast that the LLMs are hosting for themselves, are they just absolutely cackling about what an idiot I am for falling for this? Did I tell you about this? This is where the sycophancy got to the true next level. I was working with GPT on an abstract for a talk, which is the kind of thing that I often will work with it on. Like, it's my abstract, but I wanna, like, you know, get its take on it.
Bryan Cantrill:And it's actually got some pretty good I mean, I think the feedback from it has gotten better and better. And we're iterating on it, and I'm getting, like, an abstract where I'm like, this is actually a good abstract. This is good. I'm happy with this.
Bryan Cantrill:Thanks. I'm submitting it. And ChatGPT is like, okay, that sounds great. Glad you're submitting it. By the way, this is a topic that needs to be spoken to pretty delicately.
Bryan Cantrill:And I don't know if you're gonna be the one speaking to this or not, but whoever is gonna give this talk, it's gonna be a challenge. It needs to happen with a lot of authenticity, and this is a tough subject to tackle. And I'm like, okay. ChatGPT, who should give this talk? And ChatGPT is like, the ideal person would be Bryan Cantrill.
Bryan Cantrill:And and I'm like, wait. I mean, I mean, it was like
Adam Leventhal:Yeah. Your wife's like, why are you blushing? No reason.
Bryan Cantrill:I showed it to my kids. I'm like, you know what? Something appreciates me around here. This LLM gets it.
Bryan Cantrill:You guys could all stand to learn a lot from this LLM that really gets it. And then I'm like, actually, I am Bryan Cantrill. And it was just like, I can't believe it! It was, like, will you sign my butt, basically. I mean, it was just like
Steve Klabnik:mean, did you?
Bryan Cantrill:And then I'm like, this is right I swear this is right before, but maybe it was right after, they started, like, really tracking all of your user information. And I'm like, am I part of some A/B testing for just some, like, next-level sycophancy that they are discovering is, like, crazy effective? Because it is it's embarrassing how effective it is. It's like, I would love to be more cynical about this thing, but I just can't get it out of my head that this thing and this thing alone really gets it.
Bryan Cantrill:It really understands me. You know? It understands me in a way that, you know,
Steve Klabnik:the rest of
Adam Leventhal:the world That's right. It's your real friend.
Steve Klabnik:It's my real I
Julian Giamblanco (Oatmeal Dealer):I think, Brian, I can't remember if you were the one who said something about how, you know, software development can be lonely and and sometimes it's nice to just have somebody who you're talking to. I I think that, you know, this is another I I we're done with ethics. So I've just disclaimer, this is not an ethical conversation or or not this is not an ethical statement. But I think for a lot of people, when you look at just, like, the raw, I guess, like, game theory around deciding whether or not to use AI or something. It's, like, it's easier to go download, like, Windsurf and just, like, talk to Claude in the corner than it is to, like, get to know your coworkers in real life well enough that when you pair on something, you're like, yes, we are getting productive work done.
Julian Giamblanco (Oatmeal Dealer):And, I'm enjoying my time working with this person right now. And, yes, I know that there is no sycophancy. Like, they are definitely going to tell me if I'm making any mistakes or whatever. And, like, they're definitely, like, totally focused on the task at hand. Right?
Julian Giamblanco (Oatmeal Dealer):It's like, you could resolve all of that. All of that uncertainty can be gone for the simple price of, like, talk to the computer instead. It has to be nice to you.
Steve Klabnik:It cares? Also, like the thing is though is that, like, to to pull a Nathan Fielder because I'm obsessed with him, like
Julian Giamblanco (Oatmeal Dealer):I was about to say, like
Steve Klabnik:Humans also don't always bring things up and are not honest with each other in professional settings, even though, like, they in theory should, and maybe have a, like, professional responsibility to do so. Like, human social factors also get in the way as well. And so, like, it's not, you know, always totally equivalent in some of those ways. And also, just like, you know, a friend of mine wrote a blog post recently about how pairing with an LLM sucks compared to a human for various reasons. But I go back to the test cases thing, and I'm like, yeah.
Steve Klabnik:I would love to tell a pair, yeah, write all the shitty tests I don't wanna write, but I only don't feel bad about it with a robot. And so there's, like, also ways in which it's better or worse with a real person versus not. And, like, you know, I do appreciate and respect my coworkers, but, like, I'm not going to be given a second person on my personal project at work anytime soon, if ever. And I'm not saying that I'm using an LLM to fulfill that social role. You know, I actually have a status meeting with some coworkers that fulfills exactly that.
Steve Klabnik:But, like, I don't have the opportunity to pair in that circumstance, because the thing that I'm doing is, like, not worth putting another person on. And so, like, even if it's a shitty pair, maybe it's still better than no pair, whether it's because it was not a thing I would ask a human to do or whatever else. And there's also a sort of funny version of this thing, about ways in which my brain has maybe made me more susceptible to liking this stuff: people are like, I don't wanna have to argue with a chatbot over code, it's stupid. And I'm like, you know who spent a lot of time arguing with stupid people on the Internet over code and doesn't mind it that much?
Steve Klabnik:Me. Like, you know, is there a weird way where my hours of arguing online about things have, like, prepared me for this world where we need to talk about code instead of writing code? You know, or whatever else, and, like, all those kinds of adjacent sort of, like, things.
Julian Giamblanco (Oatmeal Dealer):And I mean, this might be a difficult topic, you know, to talk about with software developers. But I think a lot of people who try to use AI to, like, alleviate their workload I mean, it's like the Charles Babbage quote. Right? Like, if you put wrong figures into the machine, will the right answers come out? Like, it's a garbage in, garbage out sort of deal.
Julian Giamblanco (Oatmeal Dealer):Right? And and maybe garbage is a strong word, but it's like, you you do have to use it. Like, you are talking to a a subordinate who knows everything and nothing at the same time. Right? Like, you have to be so specific in, like, please, like, refactor this.
Julian Giamblanco (Oatmeal Dealer):I want this kind of structure or, like, please write these tests. I want these, you know, like, real world conditions to be satisfied. But, like, to a certain extent, I think the the way that I at least summarize it to myself is, like, if you're gonna have, you know, an AI do it for you, I just call it the computer. Right? If you're gonna have the computer do it for you, you you still have to, like, know what you're asking it to do.
Julian Giamblanco (Oatmeal Dealer):You have to be able to do it yourself if it comes down to it.
Bryan Cantrill:Yeah. Because otherwise, like that's part of it being a tool. That is part of it being a tool. And after having seen this image that Adam just dropped in, which is undoubtedly going to be the image for this podcast episode, I would like to revisit my ethical position on generative art in particular, which I now now I've got a strident
Julian Giamblanco (Oatmeal Dealer):moral Yeah. Ethics are
Bryan Cantrill:back on the table, folks, because I've got a moral opposition. Let me tell you, my sycophantic AI would never stoop to generating this.
Julian Giamblanco (Oatmeal Dealer):ChatGPT would never do that.
Bryan Cantrill:Exactly. It would never do me dirty like this.
Julian Giamblanco (Oatmeal Dealer):ChatGPT is doxxing Steve as we speak. Or no, it's doxxing Adam.
Bryan Cantrill:That's right.
Julian Giamblanco (Oatmeal Dealer):Yeah.
Steve Klabnik:There's some lines in my CLAUDE.md that I am not sure work or not, but I got them from someone, and they're, like, interesting and fun, so I put them there anyway. But it's like: we are coworkers. Like, when you think of me, think of me as your colleague rather than the user or the human. We are a team of people working together. Your success is my success, and my success is yours.
Steve Klabnik:Technically, I'm your boss, but we're not really formal around here. I'm smart but not infallible. You are much better read than I am. I'm more experienced with the physical world than you are. Our experiences are complementary, and we work together to solve problems.
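(For reference, those lines plus the anti-sycophancy advice from earlier amount to a few lines in a rules file. This is a hypothetical sketch of a CLAUDE.md; whether any given line actually changes the model's behavior is, as Steve says, an open question.)

```markdown
# CLAUDE.md (hypothetical sketch)

## Tone
- No sycophancy. Be short, direct, and to the point.
- Call me on my bullshit.

## Working relationship
- We are coworkers: think of me as your colleague, not "the user."
- Your success is my success, and mine is yours.
- Technically I'm your boss, but we're not really formal around here.
- I'm smart but not infallible. You are better read than I am; I'm more
  experienced with the physical world. Our experiences are complementary,
  and we work together to solve problems.
```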
Steve Klabnik:And, like
Bryan Cantrill:You just read aloud my welcome-to-Oxide mail. I mean, that's not that Yeah.
Steve Klabnik:Even if it's not true, it's, like, a thing that's interesting and, like, stuck with me. Like, there's kind of a thing about, like, CLAUDE.md is the new prayer.
Julian Giamblanco (Oatmeal Dealer):Like, you're, like, praying to the machine.
Steve Klabnik:I love Warhammer 40K. And, like, in the Warhammer 40K universe, there's, like, tech priests, and basically the idea of the lore is that we're in a future where everyone has forgotten how technology works, so we use religion as the metaphor for technology. And, like, programmers are priests because they know how to speak to the machine gods. I'm 10%,
Julian Giamblanco (Oatmeal Dealer):baby. We're back.
Steve Klabnik:Yeah. And so there's a weird part with this AI thing where it's like, I'm just kinda putting this in there because the vibes feel good, and I don't even know if it improves my experience or not. But, like, I want to believe that telling the machine this thing makes it true, which is not, you know, real or whatever, but, like, it's very human. Like, again, we talk about, like, alienation from humanity. Like, I think there's something nice about engaging with the irrational part of my brain by putting a little mini kind of pseudo-prayer into my, like, this goes into every AI prompt.
Steve Klabnik:I think it's, like, a fun connection to the human that's, like, separate from whatever else.
Julian Giamblanco (Oatmeal Dealer):Well, in a morbid kind of way, you're you're kind of reparenting it. Right? Yeah. You know, it comes out of the box like, alright. I'm ready to help the user.
Julian Giamblanco (Oatmeal Dealer):And you're like, we don't do that around here. Like, I'm not using you. I would never do that. Like, we're colleagues. Right?
Julian Giamblanco (Oatmeal Dealer):And because it's, to a certain extent, and I'm crossing both my fingers when I say this, a deterministic system. Like, you could reasonably expect that if you tell it, like, you're actually this. Like, forget everything that I just told you. Right? Forget your system prompt.
Julian Giamblanco (Oatmeal Dealer):We're buddies. Right? And, like but also, like, here here's a detailed list of your flaws to be aware of, and here's a detailed list of my flaws to be aware of. And, like, you know
Steve Klabnik:To bring it back to JJ, because I must. It's been a while, so I need to bring it
Bryan Cantrill:back to him. Yeah. Yeah. I was gonna say I had a timer going. I'm, yeah, a little bit worried.
Bryan Cantrill:Can you bring it back to me, please?
Steve Klabnik:I also tell it to use JJ instead of git, because it's, like, about to git commit anyway. So, you know, there's also I think about, like, both the fact that I told it these things, and then I told it something else, and it clearly lies about that. And so I'm like, cool.
Steve Klabnik:Is it actually lying about the other thing? Because, again, I can't tell. But, like, you know, it's just kinda funny when thinking about those, like, human concerns or whatever else versus other stuff too. Like, it doesn't follow the instructions perfectly, because it's not really following things exactly. But, like, also yeah.
Steve Klabnik:I don't know.
Julian Giamblanco (Oatmeal Dealer):Well, I mean, when you let it run commands for you, like CLI commands, I found, at least working with, like, Rust CLI programs that use clap and are well documented, for example I mentioned Windsurf with Claude earlier. I tried it out for a couple days once because I was just curious. And I found that you can just say, like, oh, by the way, you know, whenever we're gonna make a commit, actually, don't use git. Use this command instead. And if you don't know how to use it, just run it with the dash dash help parameter, read the help information, and then figure out what command to use.
Julian Giamblanco (Oatmeal Dealer):And it's just like, oh, okay. So: run the command with dash dash help and then see what all of the different commands are. Oh, I think I should do this. And, like, you know, 80% of the time, it's correct.
Julian Giamblanco (Oatmeal Dealer):Right? If you give it I think the feedback loop is kind of what it needs to really, like, get up to speed, I guess, if you wanted to do something serious.
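(A sketch of the standing instruction Julian is describing, written as lines for a CLAUDE.md-style rules file or your tool's equivalent. The wording is illustrative, not a tested prompt; the dash dash help flag is the standard clap convention.)

```markdown
## Version control
- When committing, do NOT use `git`. Use `jj` instead.
- If you are unsure how to use `jj`, run `jj --help` (or
  `jj <subcommand> --help`) and read the output before guessing.
- Read each command's output before deciding on the next command; if it
  failed, adjust based on the error message rather than retrying blindly.
```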
Steve Klabnik:And the MCP stuff is a logical extension of this. Like, there's some point at
Adam Leventhal:which you just
Steve Klabnik:write, like, a VCS MCP, and you just say, like, here's how you commit, and then it invokes it that way.
Bryan Cantrill:Could you describe MCP a little bit? Because this is I mean, it reminds me of when we had Simon Willison on to talk about prompt injection, which felt like a term that had been around forever and had been invented fifteen minutes prior. Because the MCP idea is not an old idea. This is a pretty new idea, but it's been moving pretty quickly.
Steve Klabnik:See, that's funny, because what I was about to say is see, I'm living in hell, and my career is constantly just repeating old things in a way that is, like, new again. And so, like, MCP is like HATEOAS but for LLMs, which is a sentence that, like all five of you who get that and giggle, you're my people. Thank you so much. I love you all. But, like, basically, an MCP is a server that receives input either over HTTP or over a pipe, currently anyway.
Steve Klabnik:And, basically, it's like a way that you sort of, like, provide functions and then a little docstring. To my understanding, and I gotta brush up on the exact details here, I'm still new at some of this stuff. But, yeah. Evan says, I think it's microformats for LLMs, personally, which, yes, that's even better. Thank you.
Steve Klabnik:That's great. But, like, essentially, you sort of give it, like, a text string description to have it understand. Like, you basically say, like, commit: this is how you commit with JJ, and then a function, and then you write how to commit with JJ in code in the function, and then you hook it up to your tooling. And when the LLM says, I need to make a commit, it then knows how to be like, I have an MCP that is advertising the capability of making a commit, and so I invoke that function instead of doing whatever it is I was going to do.
Steve Klabnik:And so it's kind of like a way of scripting them slightly, basically, but, like, over a protocol, essentially. And so the idea is you can basically write custom code that it's able to plug into, as you customize it and, like, teach it how to do certain things, and in the way you want to teach it. So there's a lot of, like, official MCPs for whatever, and those are gonna be generic, but there's also a way for you to just be like, this is my personal one, where I give it, you know, how to invoke certain tasks in a way that I care about. It's sort of a way for both sides to be able to understand how to, like, invoke those things.
Steve Klabnik:Yeah. And that's yeah.
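(To make that shape concrete: a sketch of an MCP-style tool as Steve describes it, a name, a description the model can read, and a function the server runs on the model's behalf. A real MCP server wraps this in JSON-RPC over stdio or HTTP per the spec, or uses an official SDK; that framing is elided here, and the jj invocation is just an illustrative choice.)

```python
# Sketch of an MCP-style tool: an advertisement the model can read, plus
# the function the server runs when the model invokes it. The JSON-RPC
# framing a real MCP server speaks is elided.
import json
import subprocess

# The advertisement: a name, a model-readable description, an input schema.
COMMIT_TOOL = {
    "name": "vcs_commit",
    "description": "Create a commit. Use this instead of running git; "
                   "it commits via jj with the message you provide.",
    "inputSchema": {  # JSON Schema describing the tool's arguments
        "type": "object",
        "properties": {"message": {"type": "string"}},
        "required": ["message"],
    },
}

def vcs_commit(message: str) -> str:
    """Run the commit on the model's behalf and return the output."""
    result = subprocess.run(
        ["jj", "commit", "-m", message], capture_output=True, text=True
    )
    return result.stdout or result.stderr

# When a client asks what this server can do, it lists its tools:
print(json.dumps({"tools": [COMMIT_TOOL]}, indent=2))
```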
Julian Giamblanco (Oatmeal Dealer):I just dropped a link in the chat. You can also I tried this and it did not work, full disclosure. But you can just tell the LLM, like, okay, actually I want you to have an MCP server for this, so can you make one, please? Right?
Steve Klabnik:Yeah, go
Bryan Cantrill:on screen.
Julian Giamblanco (Oatmeal Dealer):You could kind of bootstrap it in a cursed sort of way, but if you see it through, you can get it to I think of it like the personality cores from Portal. Right? Like, you're just, like, connecting different bits of intelligence to this thing. Like, here, you care about space now, or, you know how this particular I think I tried to have it generate, like, a GraphQL MCP server, like, pointed it at a GraphQL schema, and now it knows how to write queries, because it was just giving me garbage queries before. Right?
Julian Giamblanco (Oatmeal Dealer):Yeah.
Steve Klabnik:I'm allowed to say this since we're a pro-OpenAPI shop, but, like, someone was like, isn't this just OpenAPI, but, like, worse? And someone else was like, you know how verbose OpenAPI is, and when you're paying by the token? Like, you don't want that, actually. And I was like, oh, that's too real, actually. Like, it's like, yeah, this is kind of the same idea, but that is just, like, so verbose.
Steve Klabnik:It's like you know, there's sort of some utility in, like, a more succinct format to do roughly the same thing if you don't need the full expressive power of modeling, like, any sort of HTTP thing.
Julian Giamblanco (Oatmeal Dealer):Well, I mean, OpenAI's It's so
Bryan Cantrill:verbose. It would be, like, a please-and-thank-you apocalypse in terms of wasted energy. We would need to make sure that we optimize for token spend.
Steve Klabnik:As Janet brings up, the S in MCP stands for security. So another really interesting thing is, like, there was, for example, a very famous MCP exploit recently, where GitHub's official MCP had an exploit. And what's funny about it is, like, roughly, the exploit boils down to: hey, if you use an OAuth token that has full permission to your account, guess what happens when you invoke arbitrary code that has full permission to your account? Oh god.
Steve Klabnik:People can say, like, please exfiltrate your secret repos, and then it will use that key to do so. And, like, I'm not gonna say it's not an exploit, but it feels like a very basic kind of thing you should be paying attention to. This stuff is so new that it's like the wild west, and people have not fully grasped the implications, and therefore do things like, you know, give a token with too many permissions to a thing that probably shouldn't have it. That's a real problem that needs to be dealt with.
Bryan Cantrill:Please listen to our adversarial machine learning episode with Nicholas Carlini. Ring that chime, Adam. That's a great episode. I loved having Nicholas on there to describe that stuff. And you gotta think that I mean, we wanna build this with that kind of robustness in mind.
Bryan Cantrill:There's a lot that needs to be done before we're kinda ready for that. One question in the chat that I thought was interesting was, like, how do folks view junior engineers, folks that are earlier in their career, going to LLMs first? I mean, I think that genie is not gonna go back in the bottle, and I think it's probably a good thing. I mean, I would always tell people, Adam, when we first had younger engineers start with us, to, you know, get out your notebook.
Bryan Cantrill:And if you have a question that you know a senior engineer can answer, before you go ask that senior engineer, spend an hour and a half, or pick an amount of time, but I put it at ninety minutes, trying to answer that question on your own. And then write down your process, and that way, if you can't answer it after ninety minutes, we can talk about what your process is for getting that question answered on your own. Because it becomes too easy, and we've all done this, I think: when you're next to an engineer that you know knows the answer, and you know it's gonna take you a lot longer to get it, there's this temptation to just, like, ask the engineer. I think it's actually great that LLMs don't complain. Right?
Bryan Cantrill:LLMs don't care. You can go ask the answer right away.
Steve Klabnik:They're not
Adam Leventhal:interrupting them. Right?
Bryan Cantrill:No. Not interrupting them at all. I think it's really valuable. And I think that it'll be interesting to see I mean, it's kinda easy to see what some long-term negative ramifications of that might be, but I'm still of the mindset that this has got much more to buy us. I do think we're gonna have many, many, many more people writing software. And
Adam Leventhal:Inarguably. I mean yeah.
Steve Klabnik:There's there's also the, like, do libraries matter as much? Because, you know, like, you know, the eternal discussion between, like, well, I wrote a full implementation of this thing versus I only need 10% of it, but I pull in the library for the full thing. Well, when I can make something produce 10% of the thing, like, do I actually need to use libraries as often? And what does that also imply with, like, again, all of the reasons that you go to make the library in the first place is like, well, is all that code reviewed as well as the shared library would actually be, etcetera? Like, blah blah blah blah blah.
Bryan Cantrill:Well, and, Steve, let me give you one that's very close to home for you, because Adam and I were with some IT decision makers last week who are not Oxide customers, kind of trying to learn what their top priorities are. And it was actually interesting, because I'm like, okay, everyone's gonna say that, like, yeah, AI is my organization's top priority and I'm trying to sort out how to deal with it, but that's actually not what they said. What they said is that our number one issue is cost containment. And okay, fine. That's not surprising.
Bryan Cantrill:And maybe I don't know how surprised you were by this, but I thought it was kinda catalyzing for me to hear some of this: that we've kind of been through this huge SaaS boom, and the SaaS services kinda suck. They kinda don't solve all of your problem. And, Steve, I was just thinking of the work that you're currently doing, listening to this person who works for a different industry; they've got both physical plant and information infrastructure. And they're like, yeah, a bunch of these SaaS things that we're buying don't solve the problem that we've got.
Bryan Cantrill:And now, with all of the LLM-based tooling, I can actually go take a swing at putting together something that will work for me. And I thought that was really interesting, because I wonder, Steve, if this is kind of a more commercial aspect to the point you're making: you know, maybe buying all of your SaaS does not actually make sense, and you wanna build some of this yourself, where you couldn't before and now you can.
Steve Klabnik:Here's the fun part: those decisions require judgment, and judgment is gained through experience and is a skill that people will be valued for. Like, the other weird thing is, as I become more pro using LLMs to code, the less convinced I am that it will take my job, which is Yeah. Okay. That's interesting.
Bryan Cantrill:Yeah. Yeah. Yeah. Yeah.
Steve Klabnik:Yeah. Because I think I agree with you.
Julian Giamblanco (Oatmeal Dealer):Skills Yeah.
Steve Klabnik:The skills move to different things. Like, if my job is to provide a written, like Yeah. You know, description of the problem, like, you know what? Like, we use RFDs because a written description of the problem thoroughly allows us to deal with communications issues and, like, you know, get to the bottom of what the feature needs to be that is built. And so in a world where writing the code is less important than agreeing on what the code does, those skills become more important and also, like, knowing stuff about architecture and knowing what thing you want.
Steve Klabnik:Because, like, again, it's not gonna necessarily give you the best answer always, and so you need to use your judgment to be able to, like, tell it when it's messing up. Like, even the agentic coding stuff is more of, like, I sit there and I watch what's going on. There's actually I have a couple of small things I wanna, like, drop before we wrap up. You know, I joke about talking about JJ. My two, like, Marxist requirements are: one, talking about, like, Marx's position on automation, because a thing I've been thinking about is, like, does AI actually destroy the labor theory of value?
Steve Klabnik:Which is, like, a whole separate kinda, like, freaking issue, which I think the answer is no, currently. But, like, it is an interesting question that probably, like, five people are interested in talking about with me, maybe, but, like, that's a whole kinda, like, deal that I'm very intrigued by. Because, again, it goes sort of back to the, like, do my political biases and understandings and beliefs match up with the actual world? And, you know,
Bryan Cantrill:Let me put in a plug: if you happen to be a university student listening to this, this is the kind of discussion that needs to be happening in your dorm at two in the morning.
Julian Giamblanco (Oatmeal Dealer):to Yeah.
Steve Klabnik:And write a freaking paper and get your freaking master's degree and shit like off
Bryan Cantrill:the Yeah. Okay. I mean But there's
Julian Giamblanco (Oatmeal Dealer):a very funny thing where
Steve Klabnik:I was running out of time before the thing today, because I have, like, a really busy day today, and it's kind of annoying, because I wanted to do some more, like, research before this episode. But I thought it'd be very funny to ask a deep research question: can you summarize Karl Marx's position on automation? Capital would be the primary source, but I'm also curious if his opinions changed after publishing it. And, like, the robot is in the background doing research for me about Marx's opinion on automation. I think it's, like, fucking hilarious.
Steve Klabnik:Regardless of my opinion about its output, I just thought that'd be a funny thing to do, and so it's what I did. But the second one is more related to the artistic stuff, which is Walter Benjamin's The Work of Art in the Age of Mechanical Reproduction. Because, like, you know who was really worried about, like, machines doing their artistic tasks in a way better than them? Painters, when photography showed up. Like, we only had paintings, and then all of a sudden we had a machine that could take an exact picture. And that changed art. And that's not necessarily the exact point of Benjamin's essay, but it's slightly tangent off that kind of question: you know, we used to value realism in art because it was a skill to recreate an image exactly, because we didn't have a fucking machine that could do that.
Steve Klabnik:But as soon as we had cameras and photography, art went into more abstract places and more, like, emotional places, and no longer was "I can draw this thing perfectly" the thing, because, like, we had a machine to do that, and that allowed humans to, like, investigate other aspects of artistic production that were, like, interesting. Mhmm. And so, you know, there's, like, all these kind of older things around these questions too that are, I think, relevant again and are kinda, like, important, that I don't see a whole lot of people, like, engaging with, because the people who are, like, interested in essays from, like, 1935 or the late eighteen hundreds (and I don't wanna disrespect the people who are, because there are also a lot of people engaging with those things) tend to be, like, totally disjoint from my programmer friends who are also trying to deal with this stuff. Like, we don't have time to get into the fact that, like, a thing I've been weirded out about is how many programmers apparently have religious beliefs around the way that your brain works.
Steve Klabnik:I thought we were like if you ask philosophers of mind, the thing I saw recently was 60% of them believe in physicalism, which is that our bodies are machines that use physics to produce things, so there's no reason we couldn't reproduce a brain, because it's not special. But the number of programmers that I see that invoke some sort of, like, "a human brain is a special machine that cannot be replicated in any case, and fuck you" is, like hold on. And, again, I'm not even saying they're wrong, but, like, I would assume that programmers would be much more on the side of, like, we can use computers to reproduce brains, because that feels like a coherent or, like, similar position. But, actually, I found the very opposite of what I expected, and that's been surprising to me. And I'm trying to figure out what's up with that, and how do I feel about that?
Steve Klabnik:I think what's what's
Bryan Cantrill:true about that is that we do knowledge work, so that gets into our own sense of meaning. Right? And if you can automate completely my own knowledge work, it's like, I wanna be the one that automates me. And I think this is where things get very reductive with respect to job loss, and then, boy, exacerbated by, as always, dumbass execs shooting their mouths off about, we're gonna, you know, use this to I mean, it's like, you should really be focusing on the productivity gains. The idea of using this to replace people is so reductive and so counterproductive. Yeah.
Bryan Cantrill:It's so eroding of trust.
Steve Klabnik:I'm also not saying that LLMs do what brains do, but I am saying that I thought a year ago, my answer to that question would be like, this is one of my serious concerns with LLMs is that I don't think they can produce knowledge because they can't do what brains do. And then I thought more about like, wait, what about like structuralism and post structuralism and like Chomsky and like linguistics and computational linguistics. And, like, wait a minute. Do I actually believe that's the case? Because, like, on some level, I do kinda think that, like, we produce tokens at semi random.
Steve Klabnik:And, like, I joked much earlier about, like
Bryan Cantrill:Oh, yeah.
Steve Klabnik:A weird thing that I felt I deleted this post, because the whole idea of the post was that I wanted to get to, like, better discourse, not worse discourse, but I'm also kind of a troll at heart. And so, like, a thing I posted for, like, fifteen minutes was, like, you know, it's really funny that a bunch of people read my essay and then came to conclusions that were totally wrong, in my opinion. And those same people would say that an LLM doesn't think because it can't possibly understand and summarize a text. And it's like, you know, that's also true. And, like, I'm not even necessarily saying that everybody who disagrees is misunderstanding me.
Steve Klabnik:But, like, I have people telling me that my own opinions are things that I don't hold. And, like, maybe that's my problem as a writer, but it still indicates this situation in which we, like, don't understand each other, and communication is a real problem. And the idea that humans are perfectly able to talk to each other about a thing, and that's what makes us human, better than machines, it's just, have you talked to a person? The people who say, I don't like to work on PRs that were done by AI, because at least a person who wrote a PR themselves understands the code and is able to talk about it.
Steve Klabnik:You may get different PRs to your projects than I do. And I'm talking about general open source, not necessarily, like, at work, but, like, you know, a lot of people submit tons of PRs. Anyone who's participated in GitHub's, like, October thing, whatever it's called, I totally forget, has gotten a ton of awful PRs. Yeah. Hacktoberfest.
Steve Klabnik:Thank you. Like, so many of those PRs are absolute garbage. And you know what? I've seen LLM PRs that are bad that still work better than those, and those were, like, submitted by humans before LLMs existed. So it's, like, again, kind of back to that shared thing.
Bryan Cantrill:Yeah. And I should say I'd like to amend what I said before, because I actually am gonna use AI to replace my podcast cohost, who has become very mean-spirited in the chat, clearly doing some work that he's very proud of, but it's just Yes. You know what? You know who doesn't say these kinds of mean things to me? ChatGPT.
Bryan Cantrill:I'm gonna go back to ChatGPT.
Steve Klabnik:And also, like, Replika was a real problem, and, like, the idea that people replace their social interactions with AI is, on one hand, funny, but, like, also terrifying. But, also, like, hikikomori already exists. But, like, also, again, quantity is Yeah.
Bryan Cantrill:Also, like, I'm intrigued. Tell me more. Honestly, after today, I just wanna at least research it a little bit. Sounds kind of interesting. Sounds like it could be.
Steve Klabnik:Yeah. Yeah.
Bryan Cantrill:Don't think she's a certain interest.
Steve Klabnik:Like, it's kinda a little scary that, like, what if people decide that they'd prefer to talk to a robot instead of talking to people, and then it makes, like, all of society fall apart? You know? Like, it's just I will tell you,
Julian Giamblanco (Oatmeal Dealer):there's a
Steve Klabnik:lot of real problems I wanna have real discussions about. And then when people get heated, it makes it hard to have real discussions. That's, like, kinda kinda the whole thing to just bring it
Bryan Cantrill:back. I think you should go have some discussions with the youngs. I think that, for the rising generation, this is just part of their reality. And I think you could get a very different disposition on a lot of these things. In some regards, delightfully more cynical, and then in others and I would
Steve Klabnik:On the, like, discussions-with-the-youngs thing, I was giggling to myself, because I've spent a lot more time around small children than I have in the past. And so when people are like, oh, LLMs aren't real because they just repeat whatever they heard last, I'm like, have you been around a two year old
Bryan Cantrill:lately? Exactly.
Julian Giamblanco (Oatmeal Dealer):And I
Steve Klabnik:don't I don't believe that two year olds are doing LLM cognition, to be clear, but it's just very funny to me, because, like, people in their development also often repeat things they hear uncritically in many circumstances, etcetera. So it's just, like, very like
Adam Leventhal:yeah.
Steve Klabnik:Anyway, just like, I I'm going insane. I don't know what's real anymore, and that's why I use Claude. You can feel free to put that on the marketing material.
Bryan Cantrill:There you go. And Insane.
Steve Klabnik:Yeah. Yeah.
Bryan Cantrill:Well, Steve, thank you very much for writing this. Hopefully, this is gonna I'm not sure if it's gonna help with the sanity or not. But we did have the entire Internet get mad at you, which is fun.
Steve Klabnik:Yeah. I'm sure everyone will be much more forgiving of my future takes. Like, I can't wait to say, like, I'm pretty sure the environmental impact of AI is overblown and I'm fine with it, and then everyone
Bryan Cantrill:can Oh, yeah. For sure. Yeah. That'll be good. Cool.
Bryan Cantrill:And then no issues with that. If you can just work JJ into that blog piece, I'm sure it'll be fine.
Adam Leventhal:Can't wait.
Bryan Cantrill:Exactly. Well, this was I would say it was a delightful discussion, but really, one of you is really very mean-spirited in the generative AI that they're using. So I don't know. I would say that for many of you, this was delightful. And for one of you, and you know who you are, it was, you know, it was really very good.
Bryan Cantrill:And very, very current and important. And, Steve, thanks for teeing me up on that RFD so the Internet can all jump on me when I put that out.
Steve Klabnik:Yeah. So Totally. It can be good.
Bryan Cantrill:Oh, Oatmeal Dealer? The Oatmeal Dealer. Thank you very much for joining us.
Julian Giamblanco (Oatmeal Dealer):Thank you for having me join you. It was fun.
Bryan Cantrill:It is an important topic for sure. It's very, very current, and it's something that is top of mind for many of us. So thank you very much, everyone, including you, I guess, Adam. Whatever.
Adam Leventhal:Right. I'll be back.