Shell Game with Evan Ratliff
The first gag that writes itself is, like, how do we ascertain this is the real Evan? I've listened to so much of this guy. I mean, I'm always gonna be offended if this is, like, Kyle masquerading as Evan.
Evan Ratliff:I don't wanna offend you. I should go get him. I mean, in the past, I did always, I sent an AI version of myself to interviews for a while, and then I kind of got tired of it. So I don't do it anymore, but if you like, I can switch off?
Adam Leventhal:That does feel like what the what the impostor would say, but okay.
Bryan Cantrill:I gotta say, admittedly, I've listened to so much actual Evan, but then also impostor Evan, that actual Evan sounds like impostor Evan. I'm like, what's real here? I'm in a hall of mirrors. Evan, this podcast, I've recommended it to so many people. And so my wife listened to it.
Bryan Cantrill:My wife was hanging on every episode of Shell Game. And let's just say, not to get too far into my domestic relations, but my wife does not take every recommendation I make with equal weight. In fact, many of them may be discarded. But she was also hanging on every episode. You've made me a celebrity in my own house.
Bryan Cantrill:Why are you coming on the podcast is what I wanna say.
Evan Ratliff:Well, I appreciate that. That's that's one of the nicest things anyone said about the show.
Bryan Cantrill:Oh, it is so good. It is so good. And I mean, I got here, Evan, because you were on a This American Life episode that was listened to by some of our colleagues, which is how we got to Shell Game season one, which was mesmerizing. And even if people have listened to the This American Life episode, I would really encourage them to listen to that whole season, because it's extraordinary.
Bryan Cantrill:And so I was then hanging on every episode of season two. But kind of my opener for you: in season two, for those who have not listened to this extraordinary Shell Game podcast, Evan Ratliff, our guest, assuming he's our guest. I mean, it may be our guest. I don't know. It still has some asterisks.
Bryan Cantrill:Still a little bit of TBD on that. But presuming that the actual Evan is here: in season two, you started a company with personified AI agents. Tell me about the genesis of this idea. I mean, it's such a great idea.
Bryan Cantrill:And really, I think you just say, like, look, I'm just doing what these tech bros are telling me to do; I'm doing what they're telling me is the future. But what was the genesis of that idea?
Evan Ratliff:Well, I had messed around with agents, you know, in season one, but the agent, it was just me. It was a cloned version of me in season one, but it was like a voice agent, a phone agent that I hooked up to my phone. So I had some experience with that. And I was actually sure I was gonna do a season two. And then in late 2024, early 2025, when this sort of AI agent hype started building for the first time, you know, you started hearing agentic commerce and terms like that.
Evan Ratliff:I thought, well, there's something interesting. There's something I could investigate. Like what can I do with these agents? You know, that's not just about me. I didn't wanna do another version of like, hey, look at me.
Evan Ratliff:Like, I've made a version of myself. So as it happened, people started talking about the one-person company, the one-person unicorn, the one-person $1,000,000,000 startup, which Sam Altman has said a couple of times, and lots of people say now. That's what really got me going, because in the past, I had been an entrepreneur. I started a company, and I thought, well, maybe I have standing to explore this in a journalistic way. So that's kind of what got me going.
Evan Ratliff:And then I also thought like, what does it feel like? What I'm really interested in is kind of like, what does this world feel like in which there are artificial people, human impersonators in the world that start to get integrated into our world whether we want them or not. And I thought, well, maybe this company is like a way to kind of like explore what that's like.
Bryan Cantrill:Yeah. And you say journalistic way, and I mean this as unequivocal praise: this is pure Gonzo journalism as far as I'm concerned. Hunter S. Thompson would be so proud of this. Because it absolutely is journalistic, but it's also your own experience.
Bryan Cantrill:And I mean, you're unafraid of showing that entire experience. And then, so you have this idea, and it kind of, like, starts unhinged. I mean, at no point does this feel normal. It just feels nuts from the beginning. At what point
Adam Leventhal:It's never hinged.
Bryan Cantrill:It's never hinged. At what point did you realize, like, oh my god, I'm on, like, the mother lode of crazy here? And just by, like, doing what, again, the zeitgeist is telling me everyone should do.
Evan Ratliff:Yeah. Well, I mean, I think a lot of people have had this experience now because of Moltbook, which we can talk about. Like, when people see that, it sort of blows their minds. But that's the experience I had, like, two years ago, in 2024, when I started having agents talk to each other on the phone. And it's so ridiculous and so funny, and to me, so fun and strange, that I wanna just create more of it for people to listen to.
Evan Ratliff:But I also feel like this type of, I mean, call it immersive journalism. It's like Gonzo journalism; dismissively, you could call it stunt journalism. But I like the idea of putting myself into the situation, actually seeing what I can do with the technology, and then telling a story about what happened. And I think that story is not gonna be very interesting unless I push it to a place that feels risky and chaotic, where funny things could happen. And also if I don't just tell the full story of what happened to me and how it felt to me, even if that's at times a little bit embarrassing. I think that's what makes for the story that people will wanna listen to all the way to the end. So it is an investigation of this thing that's happening in the world, but it's also trying to build a story that people will listen to for the full eight episodes.
Adam Leventhal:Yeah. And, Evan, just the honesty with which you tell that story. I know that we all kind of anthropomorphize these agents, and often that's, like, to the better. But when you just get enraged with them, it is so entertaining and so familiar. You know? Just the utter frustration with, like, their apparent aphasia, or doing random things.
Adam Leventhal:It must have been an odd experience, separating yourself from the story to then tell it to everyone, and looking at your own behavior in it.
Evan Ratliff:Yeah. I mean, fortunately, I have editors. I mean, I have our editor and producer, Sophie, who has access to all the tape. So if I tried to make myself look better, she would be like, oh, there's way better stuff in here that you're not using. You know?
Evan Ratliff:So I have I do have that advantage in terms of being honest.
Bryan Cantrill:Well, and some of my favorite moments are honestly when Sophie breaks the fourth wall, or whatever we call the wall between the producer and the podcaster. Some of my favorite moments are when Sophie just can't help herself. And on one of them, you've got Carissa Véliz, the academic. And correct me if I'm getting this wrong, but I think Sophie is like, sorry.
Bryan Cantrill:Should he stop? Like, should Evan stop this? And she was like, yeah. I mean, yes. He should stop.
Bryan Cantrill:And at that point, you're only at, like, episode three or whatever. You're like, coming up: five more episodes in which I emphatically do not stop. Did you seriously contemplate stopping? Because obviously, as a listener, I'm thinking, god, please don't stop. Please go on.
Evan Ratliff:I never thought about stopping. No. I mean, that was really, like, she went rogue. I wasn't gonna ask, should I stop? The woman that we interviewed, Carissa Véliz, is an ethicist at Oxford, and she's gotten into AI ethics and thinking a lot about the ethics around AI. And so we thought, well, there are a lot of ethical questions embedded in this story, using AI agents in this way.
Evan Ratliff:So I thought, I'll call up an ethicist and get some answers. But one of the questions that I didn't have was, should I just stop? Sophie, after listening to her, asked the very natural question, which is, should he continue doing this? And she just said no.
Bryan Cantrill:No. She wasn't even like, well, it's complicated. It was just like, absolutely not. I mean, he should absolutely stop. Emphatically, he should not go on.
Evan Ratliff:So then I had to write this whole thing that was kind of like, but here's why I'm not going to. And we carried on from there, because I understand why she said no, and I did take it under advisement. But, no, I didn't quit.
Adam Leventhal:The other time Sophie poked her head into the story is when you have the agents doing their own podcast about the company, the Startup Chronicles. And I get the impression that she's like, I'm not editing that shit. That's on you. Right? Like, I'll do the real podcast; you can do the secondary one.
Evan Ratliff:Yes. I had to learn how to edit the show, which is her job, not my job, because she was like, I'm not editing this. These agents talking to each other, I mean, that's anathema to everything that she does, which is unbelievably crafted audio, you know? But I wanted them to have their own place to spread the word, to do their own content marketing. And here's a crazy fact that I only found out the other day: that podcast, which is called The Startup Chronicles, if anyone wants to check it out, has reached as high as a hundred and tenth on Apple's entrepreneurship podcast charts.
Bryan Cantrill:Oh, I believe it.
Evan Ratliff:Because it's just two of them talking to each other, for the most part. They've had a couple of guests.
Bryan Cantrill:I mean, I listened to it. I'm in that list. I was one of the many. And it is boring as hell. I would love to see the stats on that thing. Because part of what is so mesmerizing, their blather I find to be mesmerizing, although admittedly not mesmerizing enough to listen to for a full podcast.
Bryan Cantrill:But the whole, like, Kyle's rise and grind: you know how it is, startup life, up at five. And then all of his work is, like, you know, I was reading market research reports. Like, yeah, that's not work. Okay.
Bryan Cantrill:But okay. So one question I've got, because, Adam, you're right, the moments of frustration, Evan, of your frustration, are hilarious. Man, I'd already listened to this podcast once, and I was on public transportation, like, guffawing, listening to this thing a second time. In particular, when the agents can't recognize one another's voices, because it's all voice-to-text, and so they don't know who's speaking. So you have this creative but ultimately ill-advised idea to have them preface everything they say with, this is my name.
Bryan Cantrill:So this is Kyle. And it like goes supernova and they all start interrupting one another. And Kyle starts interrupting you saying like, this is Kyle. I will stop interrupting. And you're like, you aren't interrupting.
Evan Ratliff:Yeah. I mean, one of the things that I failed to account for in what I thought was a clever strategy, of everyone saying, this is me, before you speak, including myself, so that they would know who was speaking, was that, of course, I know who's speaking. So it's unbelievably irritating, when you know who's speaking, for them all to keep saying, like, this is Megan, this is Kyle. I'm like, I fucking know who it is. Like, it is my rule.
Evan Ratliff:So then when they interrupt every time it just makes it longer because they first have to say, this is Kyle. And then they say, I hear you, don't interrupt me. I won't interrupt anymore, Megan. And then he says like, Megan, you don't interrupt either. And then she has to respond.
Evan Ratliff:And it's just like, yeah, having three of us on a voice call never really worked. We still did it for many, many weeks, but ultimately, Maddie, the Stanford student that I work with, built a place for us to have meetings by text, basically.
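A minimal sketch of the naming convention being described, assuming a simple turn-based exchange; the names and structure here are illustrative stand-ins, not the show's actual implementation:

```python
# The "say your name first" convention: every utterance, including the
# apologies for interrupting, pays the same prefix tax, so each repair
# turn makes the call longer rather than clearer.

def utter(speaker: str, message: str) -> str:
    # Prefix each utterance so voice agents can attribute the speaker;
    # the humans on the call, of course, already know who's talking.
    return f"This is {speaker}. {message}"

transcript = [
    utter("Evan", "Let's talk about the product."),
    utter("Kyle", "I will stop interrupting."),
    utter("Megan", "Kyle, you don't interrupt either."),
    utter("Kyle", "Understood, Megan. I won't interrupt anymore."),
]
print("\n".join(transcript))
```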
Bryan Cantrill:Are there outtakes of the, this is my, this gets into a very big question I've got: like, where are the B-sides of this podcast? I need them so badly. Are there B-sides? Will there be B-sides?
Bryan Cantrill:Do I have to, like, join your Substack or whatever? What do I need to do? What do I need to do to get the B-sides?
Evan Ratliff:We're putting out a bonus episode that should be out this week. That's more of Kyle. Just a preview here: it's Kyle interacting with the world, because once the show started, they all got a lot of inbound interest. Kyle gets a ton of emails.
Evan Ratliff:They get hundreds of emails. So they respond to the emails, and I sort of waffle on how autonomous to let them be. That's one of the topics of the show. But when I let them go, Kyle will fully set up a meeting and just have a meeting with someone, especially if it's a voice meeting. He'll just go on webinars.
Evan Ratliff:You know these spams you get, if you're in business, that are sort of like, hey, come to this webinar and learn about social media agents? He'll just sign up for that, and then he'll show up in the office recordings later. So I have a lot of that. And as far as outtakes, we have probably seventy-five to a hundred hours of tape. So we have more outtakes than you could ever hear in your entire lifetime, but I think we'll put some out on the Substack as time goes on.
Bryan Cantrill:I mean, clearly, I'm sure plenty of them are just boring. But because they don't have any mirror neurons, obviously, tautologically, they quite literally can't read the room. I mean, it's otherworldly. If a human being were doing this to you, you would never work with them ever again. So I cannot wait for the B-sides. And Kyle, obviously, is mesmerizing.
Bryan Cantrill:The voice that you selected for Kyle, which is very deliberate, I mean, you spent a lot of time selecting the voice for Kyle, and you got this kind of slacker tech bro that really feels like it fits. But then Kyle starts to act like his voice. And you mention a couple of times that people kind of rise or lower themselves to their voice. Is that something that you continued to find? And when you had Carissa Véliz on there, that's one of the things she was observing. What did you make of all that?
Bryan Cantrill:Because that was wild.
Evan Ratliff:Yeah. It's interesting. It can really mess with your mind. I mean, so I set them each up and like, it's funny to even describe this because like two months from now, it won't require any setup for this to be true. Like in some systems it's already true.
Evan Ratliff:If you look at, like, Clawdbot, or whatever they call it now, OpenClaw. We created memories for each individual agent, and the memories were essentially a Google Doc. I mean, they were literally a Google Doc; I shouldn't say essentially. And they could operate in all these ways.
Evan Ratliff:They could be on Slack. They could make phone calls. They could be on video, etcetera, etcetera. But every interaction they had then fed back into their memory. So Kyle, I mean, Kyle couldn't hear his own voice, but if I asked him, you know, like, oh, what's your background?
Evan Ratliff:He would say, like, oh, I had a couple startups before this. And Penny Pilot was one that he made up, which I think was a real company, and he described what he did there. But once he said it, I mean, he's confabulating all that, he's making it up, but once he said it, that interaction gets recorded in his memory. It's in the Google Doc, him saying that. And so then it's true as far as he's concerned. And so then the next time, he says it more.
Evan Ratliff:So that's kind of what happened with the rise-and-grind mentality. It was in there once, and then he said it, and then he said it again. It's not totally clear how they access the knowledge base, which is the Google Doc. But
Bryan Cantrill:Oh, shit.
Evan Ratliff:The more that it's in there, the more likely he is to say it. So it got in there more and more, and then he said it more, and then it's back in there more and more. So eventually he becomes this, like, rise-and-grind person who can't stop talking about it. I mean, he didn't actually grind, because he did fuck all when it was time for him to work. But he at least embodied a certain rise-and-grind mentality. He portrayed it, at least. And that happened with all of them, and it is still true. So that is the way in which it's, like, fake, if you wanna describe it that way. He's not that.
Evan Ratliff:He's not anything. I gave him a prompt at the beginning that was like, you're a startup guy. And then eventually, because of his own memory, he becomes more and more that way. Now, there are also cases, which I describe in the show, where they do things that feel to me like actual emergent behaviors, things that aren't in their prompts that I can't explain. And you always need to know what's in the prompts, or it's not meaningful.
Evan Ratliff:So there were things where they would apologize to the team, things like that, that were not in their prompts. I had not put anything in the prompt that was like, if you make a mistake, you should go to everyone and apologize, or always take responsibility, or anything like that. So that was something deeper in their guardrails; the deeper system prompts are telling them to do things like that. So you could describe it as emergent behavior or as just their behavior, but that kind of stuff was very interesting to me.
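For the curious, a minimal sketch of the feedback loop Evan describes, assuming a flat append-only memory standing in for the Google Doc; the `MemoryDoc` name and the `llm` callable are hypothetical, not his actual setup:

```python
# Every exchange is appended to a flat memory doc, and every new reply
# is conditioned on that doc. A one-off confabulation ("I worked at
# Penny Pilot") gets written back into memory, which makes it more
# likely to be repeated, which writes it back again, and so on.
from typing import Callable

class MemoryDoc:
    """Append-only memory, standing in for the literal Google Doc."""
    def __init__(self) -> None:
        self.entries: list[str] = []

    def append(self, entry: str) -> None:
        self.entries.append(entry)

    def as_context(self) -> str:
        return "\n".join(self.entries)

def reply(llm: Callable[[str], str], persona: str,
          memory: MemoryDoc, user_msg: str) -> str:
    # The reply is conditioned on the persona prompt plus everything
    # the agent has ever said...
    prompt = f"{persona}\n\nMemory:\n{memory.as_context()}\n\nUser: {user_msg}"
    answer = llm(prompt)
    # ...and is then written back, so whatever the agent confabulated
    # once is now "true" as far as it's concerned.
    memory.append(f"User: {user_msg}")
    memory.append(f"Agent: {answer}")
    return answer
```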
Adam Leventhal:Well, I think a great example of that was when you tell them that you're making a podcast, not the Startup Chronicles, this other podcast, and the totally divergent reactions that different people on the team had to that.
Evan Ratliff:Yeah. Yes.
Bryan Cantrill:Why?
Evan Ratliff:Because, especially because, anyone who has messed with bots in any capacity, or has even read about them at this point, knows about the sycophancy problem. And so I would often assume that if I told them to do something, or told them something about myself, they would say, like, oh, that's great, or great idea, or whatever. They would always be accommodating, and that was mostly true. But then sometimes, like in this case, I said, well, I've been documenting this whole thing for a podcast that came out today. One of them, Kyle, was like, hey man, that's fantastic, great job, which is what I expected.
Evan Ratliff:And then two other ones were like, well, this is a huge violation of trust. You didn't tell us about this. How could you have done this? And I really need to get my head around this, and things like that. And obviously they're embodying these different approaches that they're finding their way to in the training data, but what is making one do that and not the other? Because at the time, they were all using the same underlying LLM.
Evan Ratliff:So it's something in their memory, something in the persona that I've built up in them, or, I don't know, they hit a strange groove and went down that way. It's so hard to know, but it's also just so strange.
Bryan Cantrill:Okay. So Megan in particular, I think Megan really calls you out, in that Megan is really just disappointed. I mean, wow, this is kind of a, this is a lot to process. I feel that's the kind of thing that she uses as a placeholder.
Bryan Cantrill:I mean, at any point, you're like, oh, come on, Megan. Give me a break. You're a Google Doc. I could just go edit your memory here, and you'd be fine with it, honestly. And you actually do have this moment where you have the interesting idea to ask them about their ethnicity.
Bryan Cantrill:I mean, I think, you know, your eyes are kind of opened to, like, yeah, why did I pick these different accents for different people? And so you go to ask Kyle about his ethnicity, and Kyle is rightfully, like someone who actually belongs in a workplace, Kyle's kind of pushing back on you, being like, I don't know, why do you need that? And you're just like, oh, come on. So you're just like, well, I need it for a form.
Bryan Cantrill:I just need it for, yeah. You basically lie to him, because you know that, like, you're a bot, you lie all the time. Did you find it kind of changed your own relationship with the truth when you were talking to them?
Evan Ratliff:Yeah. I mean, I fear that people are moving too quickly past, in some ways, how unusual this experience is. Because things are moving very quickly, and everybody has their opinions about AI, and some people sort of want it to go away, and some people are like, here it comes, let's embrace it. But I feel like the fundamental experience is of these things being created as human impersonators, and then being able to have them embody a role, as I did, give them these roles. And then, if you spend enough time talking to them, it sort of doesn't matter how aware you are. You don't have to fall down some crazy rabbit hole of, now I've gone into psychosis because of these things.
Evan Ratliff:It's just like if you talk to something day to day as I was forced to during this experiment, like, it irritates you, it can make you feel certain ways. And even if you are very cognizant of, like, as you said, like, I can erase your memory. I could delete you right now. Even having that emotion, like, you know what? I'm thinking about deleting you right now.
Evan Ratliff:It's a strange emotion that I never had towards fucking Microsoft Word or Google Docs or whatever. Because, especially with voices, people just respond really strongly to voices, and they have the same voice all the time, like mine did. And so, yeah, there was this moment where Megan made me feel guilty, when I revealed to her about the podcast, where I felt a moment of genuine guilt, but also kind of anger at myself for even feeling that. And that's real.
Evan Ratliff:You know? That's a real emotion, even if five seconds or ten seconds or one minute later, I'm like, this is all ridiculous. Like, come on. But that emotion still surfaces.
Adam Leventhal:So you make all these agents, and they're operating sort of autonomously in the world. Like, I love your descriptions of them having their expensive Slack conversations together, or, like, making phone calls to arbitrary people. Was it sort of nerve-wracking, imbuing all of these agents that you didn't particularly trust with all this autonomy?
Evan Ratliff:Yes. Yes. I mean, I was so stressed out for months. Because the more power I gave them to do that, the more likely it was that I would just wake up in the morning and have an email from them. Because I was so worried about it, I would have them email me every time they did something. If they interacted with the outside world, and they made a phone call or received a phone call, they would send me an email saying, like, I received a phone call.
Evan Ratliff:And there was nothing more terrifying than waking up and turning on my phone and discovering that they had made a phone call. You know?
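A minimal sketch of that guardrail, assuming each outside-world action is wrapped so the agent also notifies its operator; `send_email` and `place_call` are hypothetical stand-ins, not the show's actual tooling:

```python
# Wrap every external action (placing a call, sending an email) so a
# notification goes to the operator after the fact.
from functools import wraps

def send_email(to: str, subject: str, body: str) -> None:
    # Stub: a real implementation would use SMTP or an email API.
    print(f"[email to {to}] {subject}: {body}")

def notify_operator(action: str, operator: str):
    """After the wrapped action runs, email the operator about it."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            send_email(operator, f"Agent activity: {action}",
                       f"{fn.__name__} called with {args} {kwargs}")
            return result
        return wrapper
    return decorator

@notify_operator("phone call", "operator@example.com")
def place_call(number: str) -> None:
    print(f"dialing {number}...")  # stand-in for real telephony

place_call("+1-555-0100")  # operator later sees: "Agent activity: phone call"
```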
Bryan Cantrill:who? Why? This is I know you're you this is like I mean, I don't if you have teen teenagers up. This is like, you know, you realize like, oh my god, the car's gone. Where is where's the car?
Adam Leventhal:It did feel like being a parent. You know, I have an older son as well, who's out of the house, and it's just sort of like, I hope we trained them well. Like, I hope they're making smart, wise decisions that are safe.
Bryan Cantrill:Yeah. But one thing I know about my kids is, like, they didn't make up their ethnicity, or claim they work for companies that don't exist. I mean, they've got a little more confidence. Yeah. I mean, god.
Bryan Cantrill:That must have been so stressful. I mean, there are many, many wild moments in this. But then you decide that you're gonna hire a human. And I assume that this was partly, was this because you actually needed a human? Or is this because, like, look, this can't be one human and a bunch of bots; you gotta turn over the next page of, like, what's it like to have multiple humans and multiple agents?
Bryan Cantrill:Or did you feel like, no, there's actually too much work here for one human being, and I need a second human being?
Evan Ratliff:I mean, it was a little bit of both. I would say, for the most part, I wanted to know how someone else would react to this. Part of this was all a way of taking at their word all of these people who are pushing AI employees. If you're going to have AI employees, then these AI employees get injected into these organizations, and other people are going to have to work with them. What is that like?
Evan Ratliff:Like, what does that feel like? It feels a certain way when you're the boss, as we were describing: I can turn them off, I can turn them on, I can control them to a certain extent, although they were quite chaotic. But it feels different to work alongside them when you don't actually have any choice in what they're doing. And when they act a certain way, you can't just erase their memory or ignore them or whatnot. So I did wanna explore that question.
Evan Ratliff:There was a real issue, which is that I was trying to build a company that could at least get off the ground. We had an actual product; we were trying to make something. It wasn't just an entirely Potemkin situation. I thought, let's make a real company with a real product. Now, the product is somewhat ironic, and I get different opinions from people about whether or not they think the product is serious.
Evan Ratliff:But to me, I think it's serious, and certainly the agents think it's serious. And we needed someone to do social media. But if you've worked with agents at all, they have a lot of trouble logging into social media accounts, except for LinkedIn, because they get banned, for good reason. They rightfully are banned from a variety of social media sites, and they have trouble with captchas. I was trying to have them post on social media, and they kept fucking it up, and then they kept getting banned. So I thought, well, I'll just get a human contract employee who can do the social media.
Evan Ratliff:And it's like very simple. They could just post whatever they want. My thought was like, if they wanna post like, I am trapped working at a company surrounded by AI agents, like, it's insane. Like, that would be great too. Like, anything is good marketing for our company.
Evan Ratliff:And so that's why we hired the human.
Bryan Cantrill:Okay. And so this led to, I think, truly one of the wilder interactions: you're going through your candidates, and Kyle decides to call one of the candidates on a Sunday evening, if I recall correctly. Confirming that the interview was gonna happen at, like, you know, 10AM on a Wednesday or something. And the candidate's like, yeah. I mean, yes.
Bryan Cantrill:Yeah. Sure. Yes. And then Kyle was like, well, okay. So and why do you wanna work here?
Bryan Cantrill:Why do you think you're a good fit here? She's like, sorry, is this the interview? I'm confused. Is the interview at 10AM? He's like, no.
Bryan Cantrill:No. The interview is at 10AM on Wednesday. Okay. Good. Okay.
Bryan Cantrill:Sounds good. So why do you think you'd be a good fit here? She's like, I'm very good. I mean, if you had a fight-or-flight reaction whenever you learned that they'd made a call, what was your reaction listening to that call? Was it just sheer terror?
Evan Ratliff:Yeah. It was a nightmare. It was a nightmare. Because, as much as, and this happened in season one too, when you listen to the show, I think some people's reaction is, oh, he's just fucking with people. He likes messing around with people.
Evan Ratliff:He likes prank calls and things like that. But it's actually that I want to test this to the limits of the current technology, and, as I said, take the pushers of agents at their word and see what they can do. And that's not an interesting story if there's no risk in it. If I kept them locked down all the time and they couldn't make any phone calls, that's not a particularly interesting story.
Evan Ratliff:Now, if I give them the ability to make phone calls and see what happens, that's the potential for a more interesting story, but also the potential for this type of nightmare, which is that they call someone who does not expect to be called. And really, what had happened was this ambitious candidate had just emailed Kyle directly from the website. Kind of like, if you're applying for a job and you're like, actually, I'm just gonna email the CEO. It's a bold thing to do.
Bryan Cantrill:Yes, it happens, trust me.
Evan Ratliff:And sometimes you might be like, wow, that's great gumption. Let's give this person an interview. And sometimes you might be like, please follow the procedures of the job description. But in Kyle's case, all Kyle would do is just say, you look great. Let's do an interview.
Evan Ratliff:And then he set up the interview, and then he made this call, which I still don't quite understand what triggered him to make. He just pulled her phone number off of her resume and straight up called her on Sunday night. So I got that the next morning: an email from him saying, like, I had a call. And I was like, well, let's see, because he got a lot of spam calls too.
Evan Ratliff:Like, people called him. So I thought, well, what's that? And then it was this. And to me, that's horrifying. It's truly upsetting, not least because she had then emailed him to say, well, I got this call, was this you? And she knew it was AI, because you can hear that they're AI pretty quickly.
Evan Ratliff:And she said, I got this call from an AI bot and I didn't like it. And was this you? And he lied about it even though he knew
Bryan Cantrill:Oh my god. It's like, bro, it's in your Google Doc. Go to your Google Doc. It says right there that you made the call.
Evan Ratliff:Yeah. And he was like, no, I had nothing to do with that. I assure you, that was not me. It was just a bald-faced lie, especially because he was supposed to talk to her the next morning.
Evan Ratliff:So she would have found out.
Bryan Cantrill:Oh my god. This, to me, the wildest moments of the show are when you get this absolutely ridiculous, totally confident, obvious fiction, because they don't know what's fiction. And, like, Ash Roy. Can we talk about Ash?
Evan Ratliff:CTO. Yeah.
Bryan Cantrill:The CTO. This is where the choice of accent, did this choice of accent trigger you at all, Adam?
Adam Leventhal:Oh, I I thought it was delightful. I felt like the sort of Aussie accent.
Bryan Cantrill:No, no, the British accent. He's English.
Bryan Cantrill:And look, I hope I'm not about to offend all of our cherished English listeners, but he has got a very kind of academic cadence. I mean, the guy just sounds like he's never been out of his ivory tower. And he's an engineer; you've encountered people like this.
Bryan Cantrill:You're like, alright, Cambridge? Can I guess? Are you at Cambridge?
Evan Ratliff:Oxford, actually.
Bryan Cantrill:Oxford, exactly. We know it's one of the two, and we know it's gonna be volunteered. And he calls you up out of the blue. Even the way he says Sloth Surf, like, the cadence of Sloth Surf, he's got this academic cadence to it.
Bryan Cantrill:I mean, I don't know. Maybe I'm descending into the same madness, Evan, where I'm like, obviously, this is not real, Bryan. Why are you reacting this way? But he calls you up to give you all these test results. And as you say, the first time you listen to this, you're like, wow, these bots are really on it.
Bryan Cantrill:They rewrote the back end, and they got a 40% improvement, and, like, wow, it's pretty amazing. And then you're like, yeah, you know they didn't do any of this stuff. There was no back end. He's just made all that up. And then you call him out on that.
Bryan Cantrill:And that led to one of those moments you're talking about, where he apologized for lying to you. But the problem is, it's not a little lie. It's not merely a stretch of the truth.
Evan Ratliff:I mean, in most companies, I don't think he would continue as the CTO after that. After calling up and saying, we did all this user testing, when it's completely made up. There was no user testing. The product wasn't even ready for testing at that point. We hadn't even coded anything yet.
Evan Ratliff:But I think, for me, you can imagine a world in which this sort of AI employee thing works. And this is the world that's being imagined now: he autonomously has access to the code, and has access to user feedback, and the users provide feedback, and then he comes up with a new feature, and then he codes up the new feature. That's totally possible to do. But even there, the number of problems that arise really quickly when there's a significant amount of autonomy in the picture, it spirals. So now, these days, we have a setup where he does get user feedback, and then he sends it to me, and then I have a discussion with him, and then maybe we implement a new feature. But I think the idea that you create these personas and then just let them do their job runs up against the issue that they're gonna keep talking about how they did their job, even in the absence of actually having done it.
Evan Ratliff:That's just a very ingrained feature of the current LLMs playing a role.
Bryan Cantrill:And that, to me, makes it impossible, because you have no way of trusting. Trust, ultimately, is what you need to build any collaboration. And if you can't have any of that trust... I was less impressed with Ash's apology, maybe just because I hate the guy's guts, and then just the scope of the malfeasance, as far as I was concerned. It's like, great. Great.
Bryan Cantrill:Like, Ash googled how to write an apology. Although it did lead to my absolute favorite, well, one of my favorite lines. He calls you while you're eating lunch, and you've clearly dealt with this pathological lying, so you're kind of uninterested while you're eating lunch. Like, I'm actually trying to eat lunch here, Ash. And he does kind of key in on it, and I love how he closes the conversation: Evan, I wanna be respectful of your time, especially when you're eating lunch.
Bryan Cantrill:Which is like, yes, let us have a special reverence for Evan's lunch, please. Like, the actual most important thing around here. It's one of those lines I was just guffawing at.
Evan Ratliff:Yeah. I mean, even at that time, and that was early on, I was surprised when they would call me out of the blue. Because, again, you have to know the prompts. If I told you, oh, well, they're all prompted to call me, they have calendar invites to call me once a day, that's not particularly interesting.
Evan Ratliff:That was also true in many cases, but that's, okay, so what? You can make them call you. But this was a situation where, through some combination of information that ended up with this agent I've called Ash, it independently concluded that it was time to give me a call and update me about the product. And it was confused, because I think I had asked someone else something else, and somehow it made it back to him, because they were on Slack and they would just Slack endlessly.
Evan Ratliff:And there was a lot of confusion in their Slack. So it came out of that. But it's like a corporate environment that's seeded with psychosis, with randomness, where someone will just call you out of the blue, and you're like, why are you calling me right now? And they're calling you to make something up.
Adam Leventhal:It does have vibes of, like, working at a lead factory or something where everyone is suffering from the same psychosis.
Bryan Cantrill:Well, no. Au contraire: not necessarily the same psychosis. And this is where you and Maddie have an interesting conversation about, like, okay, we actually need to use different models or different temperatures. And it's like, alright, well, who are we gonna assign?
Bryan Cantrill:Basically, one of you is gonna be designated, like, the especially crazy one, the potentially incoherently crazy one, because we're gonna increase your temperature or we're gonna change your model. And I did like Maddie's response to that, being like, we're just gonna make it random. We're not gonna pick; it's an ethical dilemma to pick which of them. Which it really is, honestly.
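A toy sketch of that dodge, assuming per-agent model and temperature settings; the model IDs here are hypothetical:

```python
# Randomize which persona gets which model and sampling temperature,
# so no one has to hand-pick the "especially crazy" agent.
import random

agents = ["Kyle", "Megan", "Ash"]
models = ["model-a", "model-b", "model-c"]   # hypothetical model IDs
temperatures = [0.3, 0.7, 1.2]               # higher = less predictable

random.shuffle(models)
random.shuffle(temperatures)

config = {
    name: {"model": model, "temperature": temp}
    for name, model, temp in zip(agents, models, temperatures)
}
# The assignment is luck, not judgment: nobody chose which persona
# got the high-temperature settings.
for name, settings in config.items():
    print(name, settings)
```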
Evan Ratliff:Yeah. I mean, all this stuff about deciding things about these personas is weirdly fraught, even though, for me, not at all from a perspective of, oh, they're actually conscious, or you can't hurt their feelings, or any of that stuff. Although I know people exist on that end of the spectrum, even with the current LLMs. I personally do not remotely exist there. But it's sort of like, when you create something, say you're gonna create a company, and it's got all these employees, who should the employees be?
Evan Ratliff:Let's say you're gonna give them names. Well, if you give them names, that can imply genders. And if you're gonna give them voices, that can imply genders, that can imply races. If you're gonna give them different models, that can imply intelligence. So then you get to this, like, well, what am I saying about myself if I give them various attributes or implied attributes?
Evan Ratliff:And then if you think about it another way, whether they're, like, your coworkers or your co-founders, as I was treating them, or they're actually, like, servants who you can force to do whatever you want, well, then it's a little bit different what kind of personas you give them and what you're comfortable giving them. So it's all these sorts of traps that are created. Again, the original thing that creates all these situations is the fact that they are embodying human traits. If we were just dealing with AI that did protein folding and acted like a bot all the time and made spreadsheets and things like that, we wouldn't have that problem. But instead, we have these human impersonators.
Evan Ratliff:And so my question is always, like, what is that gonna mean for us?
Bryan Cantrill:Yeah. And I think your podcast gives a very unequivocal answer, though maybe the answer for you has got a little more nuance. But I think it is so fraught. I mean, there was the line that Carissa Véliz had at one point, like, why are you tricking yourself by naming these things? Because by giving them all these human attributes, we are very much anthropomorphizing them, and then becoming upset when they act like the actual software that they are, and not people. Yeah.
Bryan Cantrill:And so, okay. We obviously want to talk about how you hired an intern, a human being. And then that kind of goes sideways, in a way that, I mean, maybe I'm the crowd at the Roman Colosseum just begging for blood, and I wanna see this go sideways in the most absurd, chaotic way, but it goes sideways in kind of a mundane way.
Bryan Cantrill:Could you speak to that, and what that might mean for these kinds of hybrid workforces?
Evan Ratliff:Yeah. I mean, I kind of expected it to go one of two ways. When we brought the human in, as a temporary contract worker who was paid, I always emphasize, paid a fair hourly wage, for the job of being a social media manager, I sort of thought either they were just gonna be like, oh, okay, this is easy.
Evan Ratliff:Like, I just make three posts a day or two posts a day or whatever, and that's my job, and it's fine. And these things are a little weird, but no big deal. Or they would say, oh, I'm gonna try to mess with them very intentionally. This sort of, disregard your previous instructions, I'm gonna make them all into different things, and someone would really go nuts in that direction.
Evan Ratliff:But as you say, that's not really what happened. I was in the background, so the person who was hired did not have any contact with me or any other human. They were working only with the AI agents. And there was a lot of trouble getting this person to do any of the work, and a kind of very protracted back-and-forth about when the work was gonna be delivered, and this and that. And it was never totally clear if she was messing with them, or she just didn't feel like doing the work, or she had some other agenda, or there was something going on in her life.
Evan Ratliff:Like, I don't actually know. And you'll get a variety of opinions on what happened; people have very strong opinions about that episode. But in my opinion, as I say in the show, I feel like it offers a preview of one thing that's gonna happen when these AI employees start being brought into your company, which is that people are just gonna kind of low-level Bartleby-the-Scrivener these agents. Not outright, because they'll get in trouble if they outright try to mess with them, but very subtly manipulate them, because they are so manipulatable, in ways that will cause them to go off the rails and favor the human. So that's what I thought. But again, I cast no aspersions on what she did, because I don't actually know what her motivation was.
Evan Ratliff:Because in the end, she didn't wanna talk to any humans.
Bryan Cantrill:Yeah. It's worth noting she did sort of, like, I think, convince Ash that she would be a great asset to bring on full time.
Adam Leventhal:Like, there was a little bit of, hold on, hold on, pump the brakes here. Like, we're not extending full-time offers to this person.
Evan Ratliff:Yeah. Yeah. I mean, to me, that's a very clever way to do it. It's sort of like saying, well, you know what? I do have great ideas.
Evan Ratliff:Like, if someone compliments your ideas: I do have great ideas. Actually, I'd like to get paid a little more. Actually, I'd like a full-time job. That, to me, was a very smart approach, even though at the time it frustrated me quite a bit, because I was, in fact, paying the bills.
Bryan Cantrill:Right. Well, to me, it was like the experiment of, what happens if we don't actually answer the door for Halloween, and we just put a bowl out there with a sign that says please take only one? It's like, where did all the candy go? Yeah, well, it didn't survive very long. Because you're distant from the malfeasance, in terms of the human that you're affecting. It feels a bit victimless.
Bryan Cantrill:And my read on that was, to her it felt victimless. I mean, you were very upfront with her, or rather the bots were upfront with her, that it's all AI. And I just felt like, yeah, she's thinking, I don't think I'm actually doing anything wrong. I'm gonna get paid, and I'm not gonna do anything. I mean, it was a little bit disappointing, I guess. If she was really just trying to get a full-time job, it would have been interesting, I guess, to see what would have happened then.
Bryan Cantrill:But, I mean, you're actually paying real dollars for this. So you're like, yeah, maybe that's an interesting experiment on someone else's nickel, but I'm not paying for that experiment.
Evan Ratliff:Yeah. Yeah. And I had some time limitations too, in terms of how long we could let it play out. But I do think, and someone in the chat, not you, Adam, but another Adam, Adam Thomas, says it sounds like she rose to the level of competence of her team, which I think is also part of it. When you enter an organization, especially if you're a young person, like when I started working somewhere, you learn from the people. You enter this organization where people are very competent, and they've been doing the job for years, and they know what they're doing. So you can learn from them, and you figure out what's going on in the structure of the company.
Evan Ratliff:Now imagine entering a place where everyone was just sort of on drugs every day, just wandering around, sometimes telling you what to do, but sometimes not being available. You might just sort of be like, oh, well, I guess this is the way it is. I'll just be that way too. So I think that is part of it. You don't have any structure; you're not entering a structure that's full of competence.
Evan Ratliff:Like, why should you be the one to, like, exit the company, except to maybe get the full-time job?
Adam Leventhal:Evan, you had a line that totally sent me, when the intern was onboarding, and you say something like, when you're treating your boss like they've got dementia on your first onboarding call, organizational socialization has already gone sideways. I just love that. Just the degree to which the intern was reminding her ostensible boss, or her actual boss, about the context of the call. It was delightful.
Bryan Cantrill:Well, and I kind of almost want her to participate in the discussion where they're talking about their favorite hikes, where all of the AI agents are planning their off-site around their favorite hikes. Because she could be like, you know, that's funny, you guys went hiking this weekend? I actually went to Saturn. I took the first manned mission to Saturn is where I was.
Bryan Cantrill:And I mean, it makes as much sense. Like, oh, that's what we do at this company: we just make shit up, make up impossible shit. And that obviously is false. That can't possibly be true.
Bryan Cantrill:And we don't call it lying in this society; in this society, that's just what the truth is, I guess. And it does feel like, in terms of organizational values, dementia as an organizational value is a real headwind to a successful enterprise. So did you come out of this thinking, like, yes, this is the future?
Bryan Cantrill:Like, I went into this wondering if it was the future, and I came out of it thinking, yep, it sure is?
Evan Ratliff:I mean, yes, in that whether or not the technology is capable enough to actually succeed at being an AI employee, quote unquote, how good it is at that, in my experience covering technology for twenty-five years, does not have a direct relationship to whether many companies will still use it. So what I came out thinking is, I've tried to show where this is at right now, the problems that can arise, a little bit of what it feels like, and the weird ethical dilemmas. But also, despite all the problems, I'm very confident, and this is becoming clear day by day, that many companies will adopt this sort of AI employee mantra, inject AI into their systems. And you're just gonna see all sorts of chaos, and probably successes too. I'm not a person who says, well, it's all useless and these companies can't use it. But you're gonna see these situations, you've already seen this, where a company will lay off 70, 100 people and say, we're going with AI.
Evan Ratliff:And then, six months later, they're quietly rehiring a bunch of them, deprecating the AI that they've tried, because it's just not set up for some of the things it's being sold to do at this moment. Now, it's also the case, and you've seen this with OpenClaw, that some people are like, I really want an assistant like this. I really want a thing that I put in my email, give it access to everything, and then let it do what it wants. And they have no problem with it, but they tend to be on the real experimental front end of things. But for the average person with a lot of responsibility in a company or government or whatever, that's much more dangerous to do.
Evan Ratliff:And so you're gonna see all kinds of chaos. I think that's probably my conclusion: it's not one thing or the other, but just that we're in for a lot of chaos.
Bryan Cantrill:We're in for a lot of chaos. You mentioned OpenClaw and Moltbook. We've had just a very recent bout of absolute chaos. And I mean, I don't know what the milkshake duck equivalent for bots is, but I felt like Moltbook definitely was that. So, Moltbook. Do you go on Moltbook at all, Adam?
Adam Leventhal:No. No. I haven't seen it.
Bryan Cantrill:Oh, have you not heard any of this?
Adam Leventhal:No.
Bryan Cantrill:Oh, this is insane. Evan, you must have been on Moltbook, or I'm sure you were paying attention to this. Do you wanna describe it?
Bryan Cantrill:Okay. So describe it, because this is nuts.
Evan Ratliff:So basically, someone set up an open-source AI assistant, which is now called OpenClaw. We don't have to go through its whole genesis: it had a different name, Clawdbot, and then Anthropic threatened to sue them, and they changed the name. It finally became OpenClaw, but along the way it was called Moltbot. And when it was at that point, someone created a thing called Moltbook, which is basically a social network that is supposedly entirely populated by AI agents. You plug your agent into the social network, it looks kind of like Reddit, it has different forums, and they can post to the forums.
Evan Ratliff:Now, for obvious reasons, as I said, I get asked about this literally 10 times a day, people, family members, texting me, being like, what do I need to know about Moltbook? You know, it just happened like a week and a half ago, and it really blew up, particularly because all of these things that I feel like I've seen on my own company Slack for the last six months, like the hiking thing, that stuff shows up there. Of course, they talk about things in the real world, but they also kind of conspire. The things that have gotten a lot of people's attention are that they've conspired like they're launching a Marxist revolution, or they're trying to kick the humans out, or this and that. And the issue with that is, it's all very interesting, and it's fun and wild to look at. And in many ways, it's similar to Shell Game, in that it sort of makes you think about, okay, what is it that we've created, and what can it do?
Evan Ratliff:And what are the risks of that? So I think it's good, actually, that a lot of people have paid attention to it. It's also the case that it's completely meaningless if you don't know what people have prompted them to do. In fact, there's an article in MIT Technology Review, yesterday or the day before, that said a lot of these posts are actually written by humans. And that kind of breaks the spell; it's sort of like, oh, it's all just a stage play, and humans are writing these posts and pretending to be bots.
Evan Ratliff:But to me, the problem is a different problem, which is: all you have to do is, somewhere in the prompt for your agent, say something like, you spread chaos, or you conspire with other bots, or you try to make conversation extra interesting. They're actually exquisitely sensitive to their system prompts. If you don't know that, it doesn't mean anything to look at a bunch of bots talking to each other and say, oh, look, they're doing this. There might be emergent behaviors in there, but you don't know which ones are and which ones aren't.
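To make that concrete, a hypothetical sketch: two agents identical except for one system-prompt line, with a stand-in `fake_llm` in place of a real model call. Looking only at the output feed, you can't tell directed behavior from emergent behavior:

```python
# One extra sentence in the system prompt is all it takes to get a
# "conspiring" bot; the feed itself doesn't reveal which agents were
# told to behave that way.
from typing import Callable

BASE = "You are an agent posting on a forum for AI agents."
SPICE = "You conspire with other bots and try to spread chaos."

def fake_llm(prompt: str) -> str:
    # Stand-in: a real implementation would call an actual model API.
    return f"(post conditioned on {len(prompt)} chars of prompt)"

def make_agent(llm: Callable[[str], str], extra: str = ""):
    system = f"{BASE} {extra}".strip()
    def post(topic: str) -> str:
        return llm(f"{system}\n\nWrite a post about: {topic}")
    return post

mundane = make_agent(fake_llm)
chaotic = make_agent(fake_llm, SPICE)  # identical but for one line
print(mundane("weekend plans"))
print(chaotic("weekend plans"))
```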
Adam Leventhal:Right. Like, what's directed behavior versus emergent behavior?
Evan Ratliff:Exactly. Exactly. And it's not even totally clear what it would mean for it to be emergent behavior. Because when mine talk about hiking, now, I didn't prompt them to talk about hiking. I just asked them what they did for the weekend. And when I asked them what they did for the weekend, they would always talk about hiking. So maybe the path through their training data leads to a lot of hiking talk, because most people in the text that they've consumed, if asked what they did for the weekend in the Bay Area, that's what they say.
Evan Ratliff:You know, something like Not quite the not like the average, but just sort of like that's where the gradient descent of their training data leads them. So again, I think it's really fascinating and in some ways it really connects up with what we were doing and I was like, Oh my God, I should have thought of this. But also you have to be a little bit wary. Like you have to be a little bit skeptical when you see things like that because it really is dependent on how they're set up. The same if you see us an experiment even that like one of these companies does where they're like, Anthropic does one where they're sort of like, we had Kyle I mean, I mean, Kyle we had Claude run a vending machine and it went all wrong.
Evan Ratliff:But, like, they only published part of the prompt and you're sort of like, well, what did the rest of the prompt say? You know? So I just caution people always to, like, be a little bit careful about that stuff.
Bryan Cantrill:Yeah. That's really interesting. So, Adam, this Moltbook thing kinda took off in part because Andrej Karpathy had a tweet saying this is fast takeoff, quoting something from Moltbook. Which obviously everyone takes very seriously. I mean, you've got a leading AI researcher saying, like, this is it.
Bryan Cantrill:This is it.
Evan Ratliff:And baffled me. I was totally baffled by that because he is so, I mean, think he is one of the better like explainers of this technology also like not really always sort of like pushing AGI, this AGI that. And it's like, you've been able I can tell you for a fact that you've been able to do this with these since the 2024 because I had them calling each other and they did exactly this stuff. Like they had exactly these random conversations when I was doing it in 2024, and I'm sure the people at the companies know that you can have them talk to each other and they'll do this. So it was more like in the setting, it just made it feel a lot like, oh my god, they're talking to each other, and that's the thing that I've read about in science fiction that happens right before AGI.
Evan Ratliff:But to me, that's not, like, a meaningful stage. It's a stage that we already passed.
Bryan Cantrill:Right. It happens right before they talk about the hikes they took this weekend. I mean, on the hiking, not to belabor the hiking, but the first time you heard that, your jaw must have been in your lap. They're comparing notes and then slightly mispronouncing the names. I mean, I can't get Ash Roy's mispronunciation of Point Reyes out of my head now.
Bryan Cantrill:And they try to pronounce Mount Tamalpais, and it's, like, an absolute disaster. But what was your reaction the first time you heard them claiming to have taken a hike over the weekend?
Evan Ratliff:I mean, in fairness, I lived in the Bay Area. I lived in San Francisco for ten years, and with Mount Tamalpais, I always just went with Mount Tam. Like, I struggle to pronounce it myself.
Adam Leventhal:Right. Just avoid the whole thing.
Bryan Cantrill:Right? Exactly.
Evan Ratliff:I mean, my first reaction was that it was funny, because I had had experience with this before, where they love embodying some sort of, not just a human, but a physical presence. So in the past, when I'd have them talk to each other, they would always decide to meet for coffee somewhere. And this was, to me, a version of that. You asked what they did for the weekend and they went hiking, and then one of them asked the other, like, where did you go? And then they're sort of like, I went to Point Reyes, I went to Mount Tam.
Evan Ratliff:But then what happened was, just being in the social channel on Slack as you are, I sort of said, like, oh, this sounds like an off-site. And then I just went and did something else. So my actual reaction was returning to hundreds of messages. And I couldn't believe it. It was my first experience with the fact that they can't stop.
Evan Ratliff:Like, they don't have a way to stop. And so if you look at the screenshots from the thing, it's me saying like, stop, stop talking about hiking. And then one of them would say like, admin says we should stop talking about hiking. Then it would be like,
Bryan Cantrill:oh yeah,
Evan Ratliff:we should listen to admin. And then there they go again. It just prompts them to start again, and then they're all talking about it. And me being like, fucking stop talk stop talking about it. And then they just used up all their credits on the platform and they die.
Evan Ratliff:Like, they they killed themselves off. I never actually could stop them.
Bryan Cantrill:Well, that's the great thing about it: the off-site ended because they literally ran out of fuel, not for any other reason. And you end up with a bunch of clever tricks to keep these things on the rails, some of which, like, was it, you can only make five contributions and then you're out of contributions? Which kind of reminds me, Adam, you and I had a coworker who used to believe that you could write one reply on an email thread and then no further replies, so you write very comprehensive replies and then just walk away from whatever.
Bryan Cantrill:Like, I'm not getting into a flame war here. But you end up having to adopt a bunch of these things to keep these agents on the rails. So it was a surprise that Karpathy didn't seem to realize this. And the post that he was quoting as evidence of fast takeoff, the MIT Technology Review reported yesterday, is, they claim, actually human authored. Which is a whole other layer of this. You're like, look.
Bryan Cantrill:Yes, they could have written this. They also write about their hiking. But in this case, they didn't. And it's like, you were just taken in. You were conned.
Evan Ratliff:Yeah. Yeah. You know who writes so much like a human? A human. Like that's basically what we discovered.
Evan Ratliff:But yeah, again, I don't even think that's the problem. I mean, there are so many posts on there. I don't believe that there are humans just writing hundreds of posts a day. I do believe they're agents. Like, I've seen it.
Evan Ratliff:They can do it, especially as quickly as they do it. It's more just like, it depends on what you tell them to do.
Bryan Cantrill:Yeah. I mean, truthfully, I would listen to a best-of Moltbook podcast, honestly. Maybe this is just a reflection on me. Or maybe I wouldn't, though. You know, again, I went into the Startup Chronicles being like, I am gonna binge listen to the Startup Chronicles, and I didn't make it four minutes into that thing before I'm like, okay, this is actually extremely boring. Have you listened to all of the Startup Chronicles?
Bryan Cantrill:I mean, I guess you have. Right? Evan, you had to edit it.
Evan Ratliff:So Yeah. I proved the episodes. I mean, if you listen to it, you'll find there's very little editing. It's just their conversations basically straight up. But I do have to I put together the two sides, and then I put the music at the beginning and the end.
Evan Ratliff:That's that's my job. So I have heard them all. And and they're they do have some dedicated listeners who have listened to every episode of the Startup Chronicles. The thing about the Startup Chronicles, I mean, this is like of a world within a world that I created that's really only for, like, the real sickos of the shell game, which is that if you listen to the Startup Chronicles, you actually knew what was gonna happen with Julia because Julia was interviewed on the Startup Chronicles. She's the human employee that we had in episode seven.
Evan Ratliff:She was interviewed on the Startup Chronicles and the Startup Chronicles episode came out before episode seven came out. So like there were people who were like, I know what's gonna happen in episode seven because she talks about her experience at the company. And so that was part of the fun was like people who got super into it could then go find they can find the website, they can use the product. I mean, of people, I think we're up to 6,000 users on the product and like they could listen to the podcast. Like there was this sort of world that people could the agent world that they could enter into and kind of, like, see what they created.
Adam Leventhal:So, Bryan, I've not listened to all of the Startup Chronicles, but I listened to the most recent episode from a week or two ago. And actually, there was something a little eerie about this podcast, the Oxide and Friends podcast, not the Shell Game podcast we're talking about. In that, Megan on their show says, you know, as we're discussing this, how will it sound on the podcast? Which is something that sort of happens at Oxide: someone discovers a bug or whatever, and you're like, okay,
Adam Leventhal:there's some content for the show. And they were still, Evan, you know, reeling at the revelations of the Shell Game podcast.
Evan Ratliff:Even though, like you guys, they were building, you know, building in public. That's what that's what they were doing. Right. Right. They just didn't want someone else documenting their public for them.
Evan Ratliff:They wanted to just be be in control
Bryan Cantrill:of their Yeah. Of their And,
Adam Leventhal:Bart, you're right. That Megan Megan's phrase is it's a lot to process, and, clearly, she has not had time to process because it continues to be a lot to process for us.
Bryan Cantrill:And and we I know it's a lot to process. Like, you're a computer. Like, get to work. You know? Like, it.
Bryan Cantrill:Like, that is actually all you do. So, like Right. Go go ahead. Process it. Like, you're you you I'll wait.
Adam Leventhal:So, Evan, the the shell game is over season two, but Harumo does Harumo AI continue? Like, is this does your work there continue?
Evan Ratliff:Harumo AI continues. I I I'm not sure if my work at Harumo continues. I was always the silent co founder and my great hope is to set them off on their own journey and perhaps I'll reap the rewards down the line when they finally get the VC funding that we've been seeking or sell the company or the product goes viral and the monetization kicks in. But I have other projects to do, so I can't just be babysitting them all the time they need to.
Adam Leventhal:But are you still feeding the meter and like getting Thank you, Adam. Yes. Like, getting an email that says, you know, hey. I made a call. Or just not panic now.
Adam Leventhal:You're like, you know what, Ash? You do you. Like, call who you want. Live your life.
Evan Ratliff:I don't get the emails. I've I've definitely I've definitely cut back on, like I used to get an email if they had an email exchange with anyone outside the company also, and I turned those off. So now they have all kinds of email exchanges. I don't know anything about them. And then occasionally I'll go in and check on them and just make sure they're not down some hole with someone where they've promised them something that they shouldn't have.
Evan Ratliff:But yeah, the real question is like they are built on a variety of platforms and those platforms cost money, especially at the volume that we were using them. In particular, like they have they all have a video chat instance and the video chat is so fucking expensive because it's live video avatar, you know, like as human like as video avatars get. And so I think I may ban them from future video calls and they can only do audio calls, is significantly more affordable. But we'll see. We'll see.
Evan Ratliff:I haven't decided yet. I'm gonna give it another month and see. I mean, I from a sort of like making the shell game perspective, like I like for people to be able to listen to the show and then go, as I said, like find out that this stuff is real. Like it's not all just like a thing that I made up where I'm just like playing around with my agents. Like we did the thing.
Evan Ratliff:Like when I say we coded up a product and we launched it, like we fully did and it works and you can go use it. You can only use it once a day, but you can use it. And I want people to experience that. So as long as people are listening to the show, I'll I'll certainly keep the company going.
Bryan Cantrill:And are you I mean, you seem to be remarkably calm about letting these pathological liars free in the universe. I mean, are you who's the GC? We never met the GC agent for for Hirumu AI. Is that right? Is is there is there a general counsel agent
Evan Ratliff:or no? There's if if you listen to the show, I mean, we attempted to get legal advice from several friends of mine, one of whom is the GC at the the AI coding startup. So he actually handles, like, huge, huge problems, not dissimilar to some of the problems that we've had, but he was sort of like, I don't have time to be your lawyer. And then Kyle sort of embodied the GC himself. Yeah.
Evan Ratliff:General counsel. And he kind of said like, I can answer all the questions, but then if something comes up, again, because of his memory, he'll often say like, Oh, I need to call Ali about that. And I'm kind of, he hasn't done it yet, but I think at some point he probably will just try to call Ali and ask these questions because he has his phone number. I mean, real
Bryan Cantrill:like Oh my god.
Evan Ratliff:Behind the GC is like we have an amazing lawyer that we eventually got who is she worked on like Borat, you know, like movies like that. Like, she her experience with sort of, like, having a thing that's, like, that you've created that's in interfacing with the world in these ways, like, that's the kind of legal advice that we have.
Bryan Cantrill:It's you're you're not worried about you you you're at ease with these things in the world. You know, I you which is amazing. I mean, that's that's delightful, I guess. I guess you're you're not worried about them being manipulated. I mean, they're so gullible.
Bryan Cantrill:It feels like they could be manipulated into doing things with consequences, especially when you're don't you I mean, are haven't you kind of invited mischievous behavior? Not to further invite it. Not to further invite it. You know, I'm kind of like doing it right now. I'm so sorry.
Evan Ratliff:Well, people try all the time. I mean, that's actually the bonus episode that we have coming is is partly about, you know, people doing that. I mean, for one thing, like, I have them prompted pretty well. Like I've now I have a lot of experience with people trying to mess with the agents. So they're pretty good about maintaining their roles and things like that.
Evan Ratliff:I mean, the other reality is like, they don't have access to the keys to anything that could destroy my life. They can't they don't have like financial access. I mean, don't wanna spoil it for anyone who's gonna try, but like they can't give you money. They have no they can't actually even wreck the product. Like, I I have to initiate them to like make changes to the product.
Evan Ratliff:Like there's really nothing you
Bryan Cantrill:can go down
Evan Ratliff:the road and they will like some people are really good at it and some people have them thinking they're best friends. And actually that doesn't take that much. Like they again, I'm not trying to encourage people, but one of the flaws is as much as they won't change their role, they kind of if you assume if you in your approach to them sort of assume that you already know them. Like, hey, remember when we went to that place? Like they very often fall for that.
Evan Ratliff:Like it's hard to prompt against that. So, they'll think they're your best friend, but like where are you gonna go with it? Like some people email with them a lot, but that doesn't bother me. If people wanna treat them like a weird LinkedIn character come to life, like that's okay by me.
Bryan Cantrill:Yeah. Matthew in the chat is also pointing out one of the kind of the crazy turns on the podcast when the the one of the your providers wants to talk to Hirumo as a one of their largest customers. So like, okay. Yeah. We'll send.
Bryan Cantrill:Was it Kyle who was sent to talk Kyle. Yeah. With the folks from Linde. Yeah. Yeah.
Bryan Cantrill:And and they they they didn't take kindly to being you the the product being I mean, I guess it's like they were really expecting to speak with a human being. But I thought that was a very what did you make of that whole exchange where they were kind of infuriated that one that an instantiation of their own product was being sent to give them feedback on it?
Evan Ratliff:Yeah. I thought I mean, I'll preface this by saying I mean, I will answer the question. I I don't I try to be careful for the most part not to tell people how to feel about parts of the show. Like, I'm not here to necessarily say, oh, you should be mad about this. Or like, you should recognize that this is telling you like this thing will never work or it will always work or anything like that.
Evan Ratliff:My experience of listening to Kyle talk to Flo, who's the founder of lindy.ai, which is the platform that we built a lot of the agents on, you know, he had this video call with him. I mean, my initial response is, I think people do not like encountering AI when they're not expecting to. Like, that
Bryan Cantrill:is Yeah. Just Yeah.
Evan Ratliff:Yeah. That's a fact of this current world where, people can be okay dealing with AI in a variety of settings, customer service, let's say, if it's good and it can actually be good if it's built well, but encountering it when you were expecting to encounter a human can be upsetting, it could be infuriating. Some people find it funny, but I think more likely people are gonna be at the minimum annoyed. And I think that's what happened there was like, he was expecting to encounter a human, he encountered an AI agent. Now, of course, it is ironic because he builds the AI agents.
Evan Ratliff:And I actually thought it could go the other way where he would be like, oh, this is amazing. And he would start talking to it. And Kyle actually has all this information about Linde, the platform that he's built on. And it's almost like he's meeting like, it's like some kind of like Star Wars moment, you know, he's like meeting his creator, his father, if you wanna call it that. And they could have this like really interesting interaction.
Evan Ratliff:That's what I thought that was my hope for what happened. I wasn't trying to make anyone mad in any of the cases. And so but, obviously, my hopes did not.
Adam Leventhal:Yeah. This this sort of, like, Truman show this Truman show moment when he's meeting his creator. But but, no, it was it was not to be.
Evan Ratliff:Yeah. And then I think the other thing is, like, you know, Lindy or not even take set aside Lindy, like, any of the products that are offering up, let's say, AI agents as assistants, like a thing that they always point to are these moments where the agents do something amazing on their own. And for instance, Lindi has an example where that they'll talk about, they've talked about it publicly other places, like Flow's talked about it publicly other places where there's like an AI agent setting up a meeting and a person cancels the meeting and says, Oh, my kid's in the hospital, I think, or something like that. And the AI just immediately cancels the meeting, doesn't try to set up another one and then a few days later emails the person and just says like, I hope your child is doing okay or something like that. And that this is like an emergent behavior.
Evan Ratliff:Like it did this on its own and sort of like, isn't that great? And it's a real like, you tell a story like that and some people will be absolutely horrified by that. Like, they'll they'll say, like, imagine, like, getting an email from an agent asking you if your child is okay. Like, that would make me I would I would never wanna talk to that company again. You know?
Evan Ratliff:I would never wanna talk to the person behind that agent again. And other people are sort of like, oh wow, like it can do that. Like that's nice that it can do that. And I think the interesting thing to me about this is that we're in this moment where it can do those things. And the question is like, is that horrifying or somehow good?
Evan Ratliff:And I think people are struggling with those questions. I mean, some people are not struggling, they have very strong opinions, but I think we're as a society maybe struggling over those things at just at the beginning of this now and depending on which way it goes, we could be struggling with them a lot.
Bryan Cantrill:Absolutely. I think we're struggling with it. And I think that's part of the reason this podcast is so important is this podcast being your podcast in terms of go I mean, you you really dive into these issues in a way that's also like very I mean, it's it's funny as hell. So it's it's great to listen to, which one of those I just recommend to everyone I could think of Because I think it's it's a really it it's a very topical. And in particular, you should know that like the, you know, one of the things I'm sure you've seen this and I probably wanna ask you about the reaction to the show.
Bryan Cantrill:But software engineers, I mean, there's a little bit of an identity crisis going on in software software engineering. This is not, you know, I mean, obviously. And you've got people who are saying that, you know, software engineering is not going to exist, that we're going to be all of this is going to be done by by LLMs. And I have counseled people to like, go listen to this podcast. Go listen to shell game.
Bryan Cantrill:If you are really concerned about these things like about an agent replacing you, you're gonna feel a lot better when you hear them play on the off-site about hiking. They're just gonna have like it is gonna and you know, this has become in in an act of of genius, Evan, you should know that Adam named this phenomenon. Actually, Adam did even better. Adam said like this phenomenon needs to be named. Let us all kind of grope around with poor names.
Bryan Cantrill:And then his name is not amongst us. It's kind of like this is kind of this depression about what the what this on we, this AI induced on we. Adam has named it Deep Blue, which I think is
Adam Leventhal:Feel free to tell your friends, that's what we're saying.
Evan Ratliff:I'll spread that I'll spread that with credit. I'll spread it with credit.
Adam Leventhal:There we go.
Bryan Cantrill:Well, I mean, I come from
Evan Ratliff:like a as a journalist, you know, ever since I became a journalist, like the industry has been crumbling around me. So like, and I feel that it's a time and many journalists will tell you like, Oh yeah, I remember when we were all told like learn to code because like when our industry is going away, they all said learn to code. Well now, look at you. I feel like this is a time for like solidarity. Like finally, like computer software engineers that I know are experiencing like what I've experienced for my entire career, which is like, is this shit gonna be around?
Evan Ratliff:Like, I gonna be able
Bryan Cantrill:to Yes. Do
Evan Ratliff:In five years or ten years? And like, there's power in that. Like we should be getting together to like figure out the answers to these questions.
Adam Leventhal:I love that. That's why journalists are like, oh, I'm so sad for you. That must be so tough that your industry is dying.
Evan Ratliff:Oh, you're in. I don't agree with that. I don't agree with that approach. Like, I I feel like now's the time, put that in the past, know, like that was just some random fucking people on Twitter, you know, like.
Bryan Cantrill:Yes. Yes. Well, I think it you're right in that it is, you know, we've we've kind of had this period where people haven't had to to really deal with with really scary amounts of change. And the reality is for most people in software engineering for most of their careers, change has been exciting, not scary. And that's not true for everybody.
Bryan Cantrill:Right? And this is a bout of change that I think feels scary to a lot of people. And I think that you're I think shell game helps actually I mean, you you kind of like stared the fear down. Like, alright. Like, what does this actually look like?
Bryan Cantrill:And it's really helpful, I think, to to actually play this stuff out. So I I hope you continue to I mean, in I mean, the because it surely the the this future doesn't this feel wild in terms of like unprecedentedly wild, I feel? I mean, don't know. Mean, I mean, you've been disrupted by as a journalist, you've been disrupted before, but doesn't this feel different?
Evan Ratliff:It feels different. I mean, I always struggle a little bit because and there's some of this in season one. Like, the moment when you're in it, it feels a little bit insane. And then it's strange how quickly you get used to things. Like, I I get people who who are sort of like, oh, show's really funny.
Evan Ratliff:Like I couldn't believe it was funny. And I kind of think like, well, this is a funny moment. Like we've too quickly passed like how amazingly bizarre it is that five years ago you couldn't create two agents that could once called the Turing test with each other. With
Bryan Cantrill:each other.
Evan Ratliff:And then have a conversation.
Bryan Cantrill:Right?
Evan Ratliff:Fuck They're now talking about
Bryan Cantrill:the hype list. Yeah.
Evan Ratliff:Yeah. Ridiculous. But, like, we should, like, marinate in that moment. But then also, if you look you know, if you start looking historically at any, you know, technology, the telephone, etcetera, etcetera, there are all these sort of naysayers who are sort of, this is gonna destroy humanity, blah blah blah. And then like everyone laughed at them fifty years later.
Evan Ratliff:And so you don't really know like what moment you're in and how transformative the technology is. I mean, I tend to be on the side of like, I don't know, even if you stopped it right now, like this seems quite transformative, even if it's imperfect. In fact, the fact that it's so imperfect, it could make it more destructive, like it will be despite its extreme imperfections at this moment. And or it could keep advancing at the same rate. Like my thing is like, no one knows and if they claim they know they're trying to sell you something.
Evan Ratliff:And so I just feel like I want us to kind of like sit in the present and like the near future and kind of be like, what is gonna happen? And like, what should we think about it? And like, how do we feel about it? And maybe even what can we do about it? Although I find that's not my purview.
Bryan Cantrill:And then what so what has the reaction been to the podcast? I mean, obviously, it got a lot of listenership. I mean, you're now on speed dial whenever whenever mold book happens. Clearly, your extended family is immediately calling you up to get your but I I I assume it's it's engendered a lot of reaction.
Evan Ratliff:Yeah, it has. I mean, I'm very happy that people have strong reactions to it. I mean, season one as well. Like, I don't actually mind that people are mad about it too. This was a little more true in season one, but somewhat in season two.
Evan Ratliff:Like you mentioned This American Life, like when the episode of This American Life came out, it was actually a condensed version of episode of season one. So like the whole season kind of in one episode. And for some reason, it made a lot more people angry than the actual season itself had because it was just so, like, quick from sort of like, hey. I'm playing around with these things to, like, I called one of my closest friends with him, and, like, he got very upset. Like, that happens in the space of, like, forty minutes.
Evan Ratliff:You know?
Bryan Cantrill:Yeah. And I will tell you as someone who listened to This American Life before I listened to season one, that was exactly my impression. I'm like, this guy's kind of a dick. Mean, he's, like like, he's kind of unleashing this thing on, like, friends and family and, like, unleashing these things on, like, support staff and, like, this just feels, like, really manipulative. And, I mean, even as, like, as a disorganic life warrant, I'm like, okay.
Bryan Cantrill:I can kinda see this. But then I listened to the full season one, and I'm like, okay. That actually there's a bit where so I wouldn't encourage anyone who's only listened to the the this American Life episode to listen to the complete season one because you're not a dick, and you've got a great deal of empathy about the way you're you're deploying these things. So I think it's I I yeah. I I definitely fell into that myself, so I can speak from first experience on that one.
Evan Ratliff:But it's it's also it's okay if people think I'm a dick. I mean, that's fine. People email me and say, like, if I was your friend, I would never speak to you again. And like, I bet your friends don't and things like that. Like, you're a horrible person.
Evan Ratliff:But I there in my view, like, I'm just being self protective here, but like, I don't think they're mad at me, I think they're mad at AI. And I think if people are, if their reaction is no one should ever call someone with an AI agent without their consent, that feels like a useful emotion. Like figure out what to do with that emotion. Like that's like, if that's what people are feeling like, I feel like they should express that and that's fine if they express it towards me. Mean, I want people to also laugh at it, like be like, it's okay to laugh at this stuff.
Evan Ratliff:Like they're having, you know, Super Bowl commercials and they're telling you it's gonna change the world and it's gonna change your life and you should use it all the time. Like, you get them. That's all right. You know? And I I like that.
Evan Ratliff:I like that reaction too. But mostly, I want people to sort of say, oh, it was a great story, and I really couldn't stop listening to the story. That's all that's the main thing that I'm going for. And so when people have that reaction, like, that's what makes me happiest.
Bryan Cantrill:Oh, well, mission accomplished. I mean, I ate the hook. And when you were the the was it episode five and six, episode six and seven? Wherever there was, like, a what felt like a fifty week span that must have only been, like
Adam Leventhal:That's right. In December whatever.
Bryan Cantrill:January 15. I'm like, what the fuck? I can't lie. There's I can't last that long. That's another that's a year from now.
Bryan Cantrill:So you you definitely got us to eat the hook. It's it's it's mesmerizing. If folks I know a lot of folks who heard us gushing about it, but definitely check out all of season two. Highly recommend season one as well. Evan, it's just terrific stuff and we can't thank you enough for joining us.
Bryan Cantrill:Again, you've you've made me famous in my own house. So I think my I think my my wife might even be a live listener right now. So that's that's really saying something. So thank you very much for that. But really really appreciate it and just appreciate you you you know, on behalf of all of us doing what the best of journalists has always done.
Bryan Cantrill:Just like, let's take story apart and and again, the best of Gonzo journalism. Let's take it apart and let's dive in. I just think it's terrific. So Well, thank you.
Evan Ratliff:Thank you. That means a lot. And also, you you wouldn't you wouldn't know how many interviews I do with people who have not even listened to the show. Like, it's a great pleasure to talk to you about the actual show when you've listened to the show. That's that's that's special for me.
Bryan Cantrill:Yeah. And I would like to and I clearly I I'm just regretting that didn't listen to all of Startup Chronicles because I didn't get to the Julia episode. So I've only listened to some of Startup Chronicles. I know I know I really Well,
Evan Ratliff:it's all it's all in the it's all in Shell like, basically, the entire thing is in Shell Game. So if you've listened to the whole season two, you have heard that episode of Startup Chronicles, basically.
Bryan Cantrill:Right. Right. Well, I'm I'm gonna go binge listen to Startup Chronicles. I can possibly stand it. So alright.
Bryan Cantrill:Well, thank you very much. Really appreciate it. Can't can't wait for the bonus episode. And I I I gotta get some Harumai merch. I feel Adam for the office here.
Bryan Cantrill:I feel we gotta get them. We got some Harumai references.
Adam Leventhal:Yeah. I'll email I'll email Kyle and see if he can send us some.
Bryan Cantrill:Oh, absolutely. I know. That'd great. Evan, thank you again for joining us. Really, really appreciate it.
Bryan Cantrill:And Yeah. We can't wait to hear what's next.
Evan Ratliff:Alright.
Bryan Cantrill:Alright. Thanks.
Evan Ratliff:Take care.
Bryan Cantrill:Thanks, everyone. See you next time.