Okay, Doomer: A Rebuttal to AI Doom-mongering

Speaker 1:

It did give me occasion to remember the Hale-Bopp comet, because I did feel like I was reading some cultish kind of stuff. The what comet? Do you, you don't remember Hale-Bopp? No. Heaven's Gate.

Speaker 2:

Oh, go on. I mean, I remember Heaven's Gate. Well, that also happened in San Diego. That's right. There you go.

Speaker 2:

You got it.

Speaker 1:

You know, you know,

Speaker 2:

the food. So, you know, phenobarbital and plastic bags. Like, that's where we are. Where are we right now?

Speaker 1:

Well, okay. Yeah. Fair enough. So I mean, I feel like we started last week with Palooza-gate, and, between OceanGate and Heaven's Gate, I guess it just got me down this cultish rabbit hole of faith in, you know, faith and stuff.

Speaker 1:

Okay. In the case of Heaven's Gate, the faith was that there was an alien spacecraft in the wake of the Hale-Bopp comet, and when they offed themselves, they'd be beamed up.

Speaker 2:

I I, dude, I had forgotten that the comet played a role. I remember it was a passing spaceship, but I had actually forgotten that the comet was load bearing there.

Speaker 1:

Yeah. Yeah. As Paul mentioned, this was also a seminal moment of my early childhood. This is like one of you know, I feel like, the the bubble of childhood gets sort of penetrated by by things like, you know, in my case, like the Berlin Wall coming down and things like that. But this is one of them that also made it through.

Speaker 2:

Hale Bopp or Heaven's Gate or both? Both. Or are they, are they tied together?

Speaker 1:

They are tied together. Yes. Hale-Bopp was the comet that precipitated the Heaven's Gate folks. Where are we? Why did they take us here?

Speaker 1:

I'm sorry.

Speaker 2:

Yeah. Well, I use it like an object lesson of, like, Bryan, this is what you do to me all the time. I'm gonna do it to you so you really appreciate it. And, like, yeah, mercy.

Speaker 2:

This is, like, I don't know where I am. I definitely didn't choose to be here, for sure. I mean, we're talking about cults, ritual suicide.

Speaker 1:

I got to it because I was reading a bunch of the doomerism and I felt like I was educating myself on flat-earth philosophy, which got me down to other cults. So I mean, it's on me, to be clear.

Speaker 2:

Did you get to the Leonids of 1833 by any chance in your... No? No. This is where you're like, okay, well, if we wanna go on a crazy crawl, like, let's go. I've got three more destinations we need to hit before morning.

Speaker 2:

No. So the Leonids, I believe it's the Leonid meteor shower that happens, I think, annually. And something like every 33 years, it's particularly heavy, and the 1833 Leonids were, like, off the charts. And in particular, there were a lot, I mean, a lot of sonic booms. And you're seeing, like, a big meteor.

Speaker 2:

I don't know if

Speaker 1:

you've ever Yeah. Yeah. Yeah. Sure.

Speaker 2:

Like, a big fireball where you're, like, holy god. Yeah. And you can imagine. So the Leonids of 1833, they were huge, and it led to a religious revival movement. You can see why.

Speaker 2:

People are like, you know what? This is the right year to be religious. Clearly someone's pissed off about something up there, so I gotta... but, yeah, I mean, there is something that is captivating about the heavens, the things we don't control. I mean, there are some big themes here with the doomers. This kind of presaged doom is very core to the human condition.

Speaker 2:

It feels like this is very common. Like, many religions have this idea. Certainly, there's the idea of the rapture. I mean, humans seem to be attracted to apocalyptic thinking.

Speaker 2:

Like, I don't quite get it, but it is very much in our humanity. And I feel like I've done it as well. You know? Now it's your turn to rattle off examples of me being apocalyptic.

Speaker 2:

I'm not breaking the The the

Speaker 1:

I mean, the biggest apocalyptic thinking I think you espoused was totally spot on, which was the financial bubble bursting. Yeah. You were just off by, like, five years, something like that. But it was a great prediction.

Speaker 2:

It was. I was off by five years, so I definitely did not call the top of that one. We bought our house in 2008, and I was convinced that, after all of my realizing that this thing was a bubble, I had managed to buy my house at the peak, which we didn't. And the thing is, you know, as with a lot of people, I was not the only one who saw a lot of dangers in the housing bubble. And it also just seemed like we had just lived through the dot-com bubble.

Speaker 2:

Like, didn't we all just live through this, and we're seeing it again? All of the same symptoms of this kind of manic thinking. But in the ensuing bust, not to take anything away from the bust, it was deep and so on, the thing that I didn't factor in, that is virtually never factored in by those who fall prey to apocalyptic thinking, especially when it comes to technology, is that humans are adaptable. Humans will change what they are doing when things change, and that is the bit that we always seem to forget. I mean, Adam, I know we've talked about the fact that when I was a kid, I thought I was gonna get a PhD in economics.

Speaker 2:

Weird kid. That's what I thought when I entered college.

Speaker 1:

Well, I mean, I mean, not to prop you up too much, but you had actually done as a high schooler, as I recall, like some interesting work in economics.

Speaker 2:

I am not putting that too far. No. Are you are you doing this as a gift to me? I'm like, yeah, it is. It's just putting that

Speaker 1:

right up on the tee.

Speaker 2:

Just give it a tee.

Speaker 1:

Take a swing, kiddo.

Speaker 2:

The cross-price supply elasticity of the copper and molybdenum markets. It really was glorious. I was trying to figure out why this mine kept closing in Colorado, the molybdenum mine at Climax, and discovered this really interesting pricing relationship between moly and copper, and that moly was byproduced from copper mining. And as a result, the copper producers weren't responding to moly price signals. And it was super interesting.

Speaker 2:

I'm like, I wanna be a mineral economist. This is amazing. Again, weird kid. And I had taken macroeconomics as well, but I had to retake a macro course when I got to university.

Speaker 2:

And, you know, did you ever take economics, Adam? Again, your life is better if you haven't.

Speaker 1:

In high school. In high school, I I was required

Speaker 2:

to. Do you remember ceteris paribus? Nope. Okay. So ceteris paribus is kind of the bedrock principle of macroeconomics, and it is the idea that all other things remain the same.

Speaker 2:

And this is kind of an idea that seems a little strange. Like, we're gonna reason about this large economic system by holding everything else the same and only varying the variables that we care about studying. It's like, well, we're not actually doing that, though. Right? But this is something you kind of learn when you are learning economics.
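For the math-inclined, a minimal illustration of the move being described (the demand function and the variable names here are a textbook-style example, not something from the episode):

```latex
% Demand as a function of own price P, income Y, and a substitute's price P_s.
% "Ceteris paribus" is the move of studying one effect with the others pinned:
\[
Q_d = f(P, Y, P_s), \qquad
\left.\frac{\partial Q_d}{\partial P}\right|_{\,Y,\;P_s\ \text{held fixed}}
\]
% The student's objection, in these terms: in a real macroeconomy, Y and P_s
% do not stay fixed while P moves, so the partial derivative is evaluated at
% a counterfactual the system never actually visits.
```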

Speaker 2:

You're like, okay, I get it. Ceteris paribus, I understand why we do this, and I accept it. And I had long since accepted this when I arrived at school, was taking a macro course, and the prof introduces ceteris paribus, and, as often happens in a class, someone is like, wait a minute, I don't understand how you can do that.

Speaker 2:

And she kind of explains it, and he's like, no, no, no, that's really just not making sense to me. I remember thinking, like, god, come on, dude.

Speaker 2:

Just, like, accept ceteris paribus so we can move on with the rest of the lecture. So she kind of explains it again. And he's like, no. Wait a minute. This doesn't make sense.

Speaker 2:

You can't actually hold these things constant. Like, you can't do this. And then she kinda comes back at him and explains why you need to do this to be able to reason about the system. And to his credit, he's like, no. Wait.

Speaker 2:

Stop. Is this whole discipline built on this? Because this is just wrong. And I remember thinking, like, wait a minute. He's right.

Speaker 2:

He is, actually. And I remember my disposition flipping from, like, I'm annoyed at this person, to, like, no, wait a minute, this is the prophet among us. And I actually think I'm gonna go study computer science. It was a

Speaker 1:

You, like, walked out of class, became a computer scientist that day.

Speaker 2:

More or less. Because I was taking computer science concurrently, and I was really excited about my computer science courses and wasn't having the same kind of existential crisis. And, I mean, even still, there's so much we don't understand about a macroeconomy because it is so hard to reason about. And ceteris paribus doesn't exist. You can't hold everything constant. People adapt.

Speaker 2:

People change their behavior all of the time. People are not always rational actors. And so it's really hard to reason about an economy. And we don't like to accept the fact that people adapt and change. And I think that is part of the many, many, many problems I've got with some of this AI doomerism.

Speaker 2:

I'm just gonna get us to the topic: it's this assumption that people don't change. So in terms of why we're here now, in terms of why this is a current topic, do you know why this is a current topic now? I am convinced I know.

Speaker 1:

No. Tell me.

Speaker 2:

Andreessen's essay. This is why this was all over the place, because Andreessen wrote this essay not that long ago, a couple weeks ago. And this is something that is so peculiar: you would think web3, you and I agree, not well founded, lots of reasons to question web3; AI doomerism, not well founded, lots of reasons to question AI doomerism. Surely these demographics overlap exactly, but they don't seem to.

Speaker 1:

I know. Our hero, Liron Shapira, who, like, has been dunking on web3 folks for two years or whatever, and we've been salivating over that, is all in on GPT doomerism.

Speaker 2:

He's an AI doomerist. Yeah. Which is indeed how I came across this, because I follow him on Twitter. Because, you know, he'd been feeding me that web3 reality check that we like so much, and he is an AI doomerist.

Speaker 2:

Meanwhile, Marc Andreessen and a16z, and I know some of us are eagerly awaiting the recent book from Greg Stickson, because some of us are not capable of not reading it.

Speaker 1:

Well, you better keep waiting, because this book, based on a fad of a year ago, is coming out a year from now. So, you know, stay tuned.

Speaker 2:

You know, I think I missed that. I had assumed it was just published. This is not coming out until next year?

Speaker 1:

No, no. March, March of 2024. Like, it's not going to make any sense anyway.

Speaker 2:

That's brave. That is... I really admire it. In the British sense of the word, that is. That is a very brave proposal. Yeah.

Speaker 2:

He admires my courage. Yeah. That's very bold. So, amazingly, a16z, which is this big web3 crypto proponent, and, broadly speaking, the proponents at a16z, they are kind of AI rationalists. And Andreessen in particular wrote this essay, and, of course, it kind of falsely dichotomizes things, which is part of the problem.

Speaker 2:

I don't know if you've read the Andreessen essay. It's not very good. I mean, it's a lot of Andreessen stuff. Like, there's stuff in here that's good, but a lot of stuff I disagree with. And in particular, it's the same kind of broad stroke.

Speaker 2:

It dismisses AI doomerism, but then also dismisses any kind of AI ethics or AI safety. It's like, we actually don't need any of this. We actually just need to go full bore, everybody do what they want. There can be no consequences of this. It's like, well, that's not true either.

Speaker 2:

Sorry, does there get to be a moderate middle here? And then you look at a lot of these debates, and you are debating kind of hypotheticals on hypotheticals. And it's very hard to have a debate when someone is talking about a magical machine that can recreate itself. You know, it's like, okay.

Speaker 2:

We're not talking about something that's real. We're talking about something that's going to happen, as opposed to something that is real. It makes it really difficult to reason about.

Speaker 1:

Well, and when the proposed consequences are the end of everything.

Speaker 2:

The the end of everything.

Speaker 1:

I mean, the

Speaker 2:

end of everything. I mean, okay. So, you and I each took a vow that we would off the other if we said certain of these phrases, but let us be clear that we are quoting. So I think you should have a quote here from...

Speaker 1:

What was his actual... oh: we're not just talking about human-level extinction. We're talking about the destruction of all value in the light cone.

Speaker 2:

A universe-destroying bomb. It's like, where are we right now? We are so deep into science fiction. But part of the reason we're here talking about this today is, broadly, the AI doomerists have stayed nonspecific, which makes it very annoying, because, one, it's very easy to sow fear.

Speaker 2:

Fear is an emotion that we all have. It's very easy to sow it, and so they're kind of sowing this abstract fear, which, again, is very easy to do, especially when you leave it abstract. But in this particular thing that Liron quoted, from Emmett Shear, the CEO of Twitch, it ventures into the specifics. And in particular, it ventures into the specifics about how the AI is going to achieve this Vingean singularity.

Speaker 2:

I guess he doesn't refer to it as such, because maybe that would reveal that we are talking about science fiction. But the AI in particular is going to master not just programming, but is also gonna master chip design. It's going to master power distribution, not sure what that means exactly. It's going to master all of the things that it needs to build a better computer for itself.

Speaker 1:

Materials science too. I just wanna add that to the list, just because, I don't know, that seems hard too. What? Let us

Speaker 2:

not forget autotools. Someone is saying yes in the chat. So this is like, alright, wait a minute. Now we are in the domain of something.

Speaker 2:

You know, it's kind of like Gell-Mann amnesia, right, where we are now, the materials science aside. It's like, we actually do know a thing or two about building a computer. And in particular, we know, or have really learned, a thing or two about the challenges of building in the physical world. And I do think it's really important, because I think people don't understand how brittle the things that we build are. Not in the sense that the finished thing isn't robust.

Speaker 2:

Yes. And so you have this thing, you've got a laptop, you've got a computer, you've got a phone, it is robust. When it was being designed, though, there was a prolonged period where it hung in the balance, where any single defect was the difference between this thing being viable and it being not viable. And I have been struck, Adam, I feel like I've been struck over my entire career, by just how small a defect can exist in a computing system, be it hardware or software, and the outsized effect that that small defect can have.

Speaker 2:

Yeah. And I feel I feel we've seen this this way.

Speaker 1:

Spot on, another, you know, hero of the show, Dijkstra. And I use that, I think, ironically, because we had a whole show about... Yeah.

Speaker 2:

I think we had all

Speaker 1:

the opposite, today our hero. Yeah. His point about AI and programming was that this doesn't converge. You know, you have small changes, kind of outsized effects. And I think that that is potentially reductive, but certainly when you get into the realm of chip design or complex systems, it's really true.

Speaker 1:

It's really true that it you can't just converge your way to a functioning system.

Speaker 2:

Well, you can't. And I say the following not to shame you, which of course it definitely is.

Speaker 1:

I I Bring bring out the shame.

Speaker 2:

Alright. But shortly before we integrated DTrace, we had a bug that felt devastating.

Speaker 1:

Yes. We did.

Speaker 2:

We did. We together did. No. I mean, I'm not It's true. We all did.

Speaker 2:

We and in particular, we together had all deleted a line

Speaker 1:

of code. Alright. Look. Let me be clear. I I deleted the line of code.

Speaker 1:

I asked these guys desperately to review the code.

Speaker 2:

Let's just get this out there. So, one, you deleted the line of code because you were doing the thing that had to be done that no one else would do.

Speaker 1:

And it wasn't like one line of code. It was like 10,000 or something lines of code to review. And it's like a needle in a haystack. And, like, I fucked it up.

Speaker 2:

Which, literally, anyone could have done. I've done this exact same thing many times. Honestly, I think it's more interesting as an object lesson. So what we were doing, just to give context: there was a legacy tracing facility in the kernel called VTrace. And recall, we were doing DTrace, and that's why, hopefully, my millennial podcasting microphone is picking up the distinction, V and D.

Speaker 2:

So we wanted to rip out VTrace, and there was a bunch of #ifdef VTRACE code in the kernel that was basically dead code, and we wanted to rip it all out. And we knew someone needed to rip it out, and you were like, fine, look, I'll do it. I'll just rip it out.

Speaker 2:

And when you were ripping it out, in the process, I mean, it was, like, a 10,000-line change, it's a huge change, you removed one additional line. And that one additional line happened to be the line that initiated the first I/O on boot on this particular SCSI HBA, in the esp driver.
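A hypothetical sketch of the shape of that bug, for readers following along. The real code was C in the kernel, guarded by #ifdef VTRACE; this Rust rendering and every name in it are invented purely to show how conditionally compiled dead code can sit one line away from something load-bearing:

```rust
// Illustrative only: not the actual Solaris code, and the real thing was C.

#[cfg(feature = "vtrace")]
fn vtrace_record(msg: &str) {
    println!("vtrace: {msg}");
}

#[cfg(not(feature = "vtrace"))]
fn vtrace_record(_msg: &str) {} // compiled-out legacy tracing: dead code

fn attach_host_adapter() {
    // The dead tracing hook the 10,000-line cleanup was meant to delete.
    vtrace_record("scsi attach");

    // The adjacent, easy-to-miss line: delete this one too and the first I/O
    // on boot is never issued, and the machine dies early enough to be
    // effectively undebuggable.
    issue_first_io();
}

fn issue_first_io() {
    println!("first I/O issued");
}

fn main() {
    attach_host_adapter();
}
```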

Speaker 2:

And the manifestation of this was, on this very small number of legacy platforms, we did not boot. And we were dying early enough that it was more or less undebuggable. And this was the day before we were gonna integrate. It was a high-stress day. And, honestly, my assumption was that we had been doing some things that were pretty bold in the deep innards of DTrace, and that one of those bold bets was actually wrong.

Speaker 2:

And this old platform, it just didn't even enter my mind that

Speaker 1:

Like, the the just, like, whatever we needed to be true turned out to be not true on this platform, on this old chip, on this old card, whatever.

Speaker 2:

That's right. That's right. Because I was setting the global bit on a page, I mean, I was doing some weird shit, and, you know, you just kind of naturally go to the parts of the system that you know the best when you're debugging. You know, this is the kind of, you're looking under the streetlight

Speaker 1:

to find your keys. Yeah. Exactly.

Speaker 2:

And I went under the streetlight, and I'm like, oh my... and ultimately, the only way we found it was by sitting down and reviewing the actual code, because we just were out of the ability to debug it. We knew

Speaker 1:

I mean, and when you use "we" now, you're not talking about me, because I was, like, working at home, blissfully unaware of this. I was just bopping along, doing whatever it was I was doing. You know, I guess video conferencing didn't exist in those days.

Speaker 2:

Video conferencing didn't exist. I know it is funny. And, like, and chat, we didn't have chat. No. It was only email.

Speaker 2:

And you were like, I wasn't checking my email, I was off doing... I mean, it's not that you didn't work, you just weren't checking anything. Like, a reasonable thing to do. And, yeah, this was on any Sun platform that had the esp driver.

Speaker 2:

This is an old SCSI driver. And ultimately it was just like, wow, that's amazing. This one line of code was the difference between the thing being fine and the thing being dead, unworkable. And it's not brittle in the sense that, if the line of code is there, it works robustly.

Speaker 2:

It's not that the artifact itself isn't robust, but these digital systems are not biological systems. They're not approximate. And I don't think people really get that, because it just feels like everything's kinda broken and everything's kinda squishy. It's like, no. No.

Speaker 2:

No. Actually, in order for a system to boot, things have to be absolutely correct. Millions and billions of instructions have to be absolutely correct. It is not something you can approximate your way to. And do you think some

Speaker 1:

of that confusion, or some of that conflation, perhaps comes from the fact that with these LLMs and stuff, it is approximate, and people don't necessarily understand it. And I think certainly some of the excitement, but also some of the fear, comes from experts in the field being astounded at how good this stuff has become very, very quickly.

Speaker 2:

For sure. And I think it's actually a disservice that we keep quoting AI researchers. Because people are like, oh, what is it? It's like, 50% of AI researchers feel that there's a 10% chance that AI is gonna destroy humanity.

Speaker 2:

And people are just like, well, just do the numbers on that, and that means, you know, even if you're very conservative, there's a 1% chance this will destroy humanity. It's like, it actually does not mean that there's a 1% chance this is gonna destroy humanity. Sorry. That's actually not what that means.
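For what it's worth, the naive arithmetic being gestured at here looks something like the following; the figures are the ones quoted above, and the objection is precisely that the last step is not a valid move:

```latex
\[
P(\text{doom}) \;\stackrel{?}{\approx}\;
\underbrace{0.5}_{\text{fraction of researchers polled}} \times
\underbrace{0.1}_{\text{their stated chance}} \;=\; 0.05
\]
% (or some discounted "1%"). The hidden assumption is that poll responses are
% calibrated probabilities that can simply be multiplied and averaged into a
% real-world risk estimate, which is exactly what is being disputed here.
```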

Speaker 2:

And can we get to specifics, please? Can we talk about the actual mechanism by which humanity is destroyed? And can we also do it without talking about, like, the nuclear codes? I think it's cheating for the AI to launch nuclear war. Yeah.

Speaker 2:

I do. I

Speaker 1:

do. Frankly, get more original, AI.

Speaker 2:

Get more original, AI. No, you don't get to just launch nuclear weapons. Like, sorry. That's just you piggybacking on someone else's doomsday device.

Speaker 2:

You need to be your own doomsday device, AI. You can hear me, chatbot. Don't look away, chatbot. I know you can hear me right now. You need to be your own doomsday device.

Speaker 2:

And, like, how are you going to do this? And, you know, the the reality is you need humans, like, bipedal things with brains and arms and hands to do a bunch of things for you. And maybe it's like people are like, oh, well, it's gonna, you know, it's gonna convince all these people. But, you know, come on. I I don't really buy it.

Speaker 2:

And I just wanna get really, really specific. So in particular, Adam, we talked about the kind of bullet over the ear for DTrace, but I kinda wanna talk about some of the bullets over the ear for Oxide, because there have been a bunch of them, on both hardware and low-level system software. And there've been honestly too many to even name, but there are some really common themes across them. Because I think here's another challenge that we've got: right now, these abstractions are so good that we collectively don't understand how extraordinary it is that they work.

Speaker 2:

I mean, we talked about this before, about the magic of the PCB, and how the printed circuit board upon which all humanity is built doesn't even seem to have a book written about its history, which I still find shocking. This is such an important technological leap, and these PCBs are so sophisticated, the techniques that have been developed over the years, not just the layout, but back drilling and all these things that are done for SI (signal integrity) and so on. And the roughness, remember when Tom was on describing the roughness that we're having to model in the simulation software?

Speaker 2:

I mean, these systems are so extraordinarily complicated, and yet it's inaccessible. We don't talk about it. We don't teach what a PCB is, and people kinda learn it on the job, but only those that are really kind of in the trenches of it. And I think we have done ourselves a disservice by not actually talking about more of this stuff. And in in particular, and I still would love to be proven wrong on this, people have not talked about what happens at bring up.

Speaker 2:

We have at Oxide, but broadly, I don't think anyone else has really done it, has really gone on the record about, here is what happened during bring-up, warts and all. Because it's in everyone's best interest to kind of be a magician. You know what I mean?

Speaker 1:

That's right. To to have that finished artifact, the laptop or the phone that works by magic and not showing all of the, you know, garbage steps that occurred before it.

Speaker 2:

Totally. And they're and they're like, wow, there is a coin behind my ear. That's amazing. And there's a coin behind your ear too. Like, wow.

Speaker 2:

That's amazing. It's like, how did you do that? It's like, because there's a coin behind your ears. I guess there must be a coin behind my ear. That's amazing.

Speaker 2:

It's like, well, why don't we just, like, get coins behind one of those ears? Let's do that. Like, that's, like, let's just go, you know, have a little farm of magicians about, you know, like, a poultry farm of magicians getting coins from behind the ears. It's like, no. It's not sorry.

Speaker 2:

It's not magic. It's sleight of hand. You just don't know how it works. There's not actually magic there. But I think that our computing devices are kind of actually magic.

Speaker 2:

And this is the, what, the Arthur C. Clarke line? That any sufficiently advanced technology is indistinguishable from magic, or what have you. And I think that we now are treating it like magic, and culturally, we treat it like magic. So if it's magic, like, I don't know, why can't I make up my own magic?

Speaker 2:

I mean, it's magic. We're in magic land. Like, can't everyone just make magic up? So the magic I wanna make up is that the AI can create computers. And it's like, it's not magic.

Speaker 2:

It's not magic.

Speaker 1:

I mean, I think I speak for us both when I say there's a lot of AI optimism in terms of what we will be able to do as engineers with tools based on AI and these large language models and so forth. Like, having tools that can help us debug some of these things, that'd be great. Having tools that can do static analysis in much more sophisticated and interesting ways, also terrific. But the kinds of assisted and then automated analysis and debugging in software and hardware and all these things, we're so many horizons away before the machines are thinking, before they're creating themselves.

Speaker 2:

Totally. And, well, this is where, as you and I have both said, I always believe in getting a human in the loop before you take the human out of the loop. And there is so much opportunity with the human in the loop. And I thought we had a great discussion about this; it doesn't feel like it was that long ago.

Speaker 2:

Right? It was only a couple weeks ago, maybe a month or two ago, on what does GPT-4 mean for software engineering. I thought that was a really good discussion. And I thought, we can have this optimistic, forward-looking discussion, and therefore we will not have to just dismiss the doomers. But apparently I underestimated my own lack of resolve, because here we are.

Speaker 2:

And, you know, there's a question in the chat, like, hey, I'm sure you guys have had systems where you were surprised they worked by accident, or you were surprised that they were remarkably broken but worked anyway. And that is true. But much more frequently, especially when you're building these things, you have a system that should work and isn't working, and you don't know why.

Speaker 2:

And that's what we have had a lot of, where we have had a system that feels like that. And this is where it's like, yeah, good luck, AI overlords. And I don't know, Adam, I was gonna go through a couple of these.

Speaker 2:

Some of these we already talked about, and if you go to our Tales from the Bringup Lab, and More Tales from the Bringup Lab, we've gone through a bunch of these. But there have been some that have been really, really painful. First and foremost was the Renesas power controller, the RAA229618, which we absolutely love.

Speaker 2:

And in the protocol that it speaks to the AMD Milan to actually provide this thing power, we had down-rev firmware with a bug where this thing would not acknowledge that it had set the power to the desired level. And as a result, the SP3, the Milan, would be like, hey, I've asked it to go to this power level and haven't heard back. And we are looking at the power, and the power looks great, but, of course, the message had not been sent via the protocol saying, hey, you're done. And the reason that one really, really threw us is because we actually had a device, this SDLE device, that we had used to model this thing, which is great.
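A hypothetical sketch of that failure shape, not the actual SVI2 protocol or any real Oxide or AMD code; every name here is invented. The regulator really does what it was asked, but never raises the acknowledgment, so the host spins even though every measurement looks perfect:

```rust
// Invented names throughout; this only models the missing-ack pattern.
struct Regulator {
    millivolts: u32,
    ack: bool,
}

impl Regulator {
    // Buggy device-side firmware: applies the setting but never acks.
    fn handle_set_voltage(&mut self, mv: u32) {
        self.millivolts = mv; // the rail really does move...
        // self.ack = true;   // ...but this step is missing.
    }
}

fn host_set_voltage(reg: &mut Regulator, mv: u32, max_polls: u32) -> Result<(), &'static str> {
    reg.ack = false;
    reg.handle_set_voltage(mv);
    // The host can probe the rail and see a perfect voltage, yet the
    // protocol-level handshake never completes.
    for _ in 0..max_polls {
        if reg.ack {
            return Ok(());
        }
    }
    Err("timed out waiting for ack (the voltage is actually correct)")
}

fn main() {
    let mut reg = Regulator { millivolts: 0, ack: false };
    println!("{:?}", host_set_voltage(&mut reg, 900, 1000));
    println!("measured rail: {} mV", reg.millivolts); // looks great
}
```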

Speaker 2:

Andy makes this SDLE, which allows us to kinda understand the the the power protocol. Sure.

Speaker 1:

What is SDLE? I'm sorry.

Speaker 2:

The SDLE, what does it actually stand for? Did you actually see the SDLE? Oh, it's kind of amazing. So you take the CPU off, and you put on this special device that measures... oh, yeah. That's super cool.

Speaker 2:

And This is what Eric

Speaker 1:

calls, like, the load slammer or is that or is that yeah.

Speaker 2:

That's correct. The load slammer was the equivalent that we made for Tofino. And the SDLE is great, because it sends all of the power protocol, SVI2 is the power protocol, it's sending all these requests for different voltage levels and so on, and measuring if it's correct and measuring your margin, and it's great.

Speaker 2:

And it's all coming out with, margin looks great. But that tool did not depend on the protocol being correct. And, boy, talk about, for want of a nail, the war was lost. This was really, really frustrating for everybody, for us, for AMD. We had everybody around, like, really trying to understand what the hell's going on here.

Speaker 2:

And ultimately, Eric cracked the case. But there were some low moments, and to debug that required a lot of human characteristics. It required not just rigor, but also ingenuity and creativity and desperation, honestly. You know, what a role desperation plays. I mean, like

Speaker 2:

I mean, like

Speaker 1:

I mean, this goes to Andreessen, in an interview where he was like, you know, the problem with AI is it didn't evolve, so it doesn't have a will to live. And I don't know that I really subscribe to that, but that's sort of what you're talking about here. We're kind of tapping into that lizard-brain evolutionary mechanism to say we must survive.

Speaker 1:

Like, we we will get to the next generation.

Speaker 2:

We will get to the next generation. We must make landfall so we can reproduce. No, absolutely. Because when you hit that point, and I think it's really interesting to get there, you become much more amenable to new ideas.

Speaker 2:

You know, no idea is too crazy, because you're just like, yeah, we're desperate, actually, and our back is against the wall. You know, I've been kind of mesmerized, I've talked about this in the past, but World War II is this bottomless pit of history.

Speaker 2:

Right? So much happened, and I think it was so intense and so stressful. But I mean, I think there's a reason that whole generation came out, like, chain-smoking, right?

Speaker 1:

I think that's a fair assessment, that World War II was stressful. I think you can win that debate. Right.

Speaker 2:

Right. I feel like that's a winnable one. I think World War II was very stressful. In summary: World War II, a stressful event. But I think you look at how much technological innovation happened in a super short period of time.

Speaker 2:

And, yes, I mean, there were precursors of the atomic bomb, with Fermi, in the thirties. There were precursors of radar. There were precursors of jet engines. There were precursors, all this stuff happening before the war, but it's like that existential threat managed to really

Speaker 1:

motivate people. Really focusing. Absolutely.

Speaker 2:

Really focusing. And you get this incredible technical dividend. And I feel like it's the same thing: when your back is against the wall, you think, we have got to find a way. And all we can do is summon the right people, because you do have this luxury of total focus.

Speaker 2:

Like, there is not a question about what is more important when your back is up against the wall. And then you start experimenting with things as a result. Right? We know we need to do some radically different things, because our survival could depend on that kind of experimentation. We have to get to the next generation.

Speaker 2:

And that was definitely, and again, there have been a couple of these, but that was a really concrete one. There you've got the device not doing what the device says it does. And there were people who were like, oh, but wait a minute, you could just give the AGI a credit card, and it could order a PCB online. It's like, it doesn't really work that way.

Speaker 2:

I mean, even if you can do all that, which, by the way, you can't. Maybe you can do it for a two-layer or four-layer board. But when you actually have a big board, that's not what's gonna actually happen. And when you have... I mean, so this is a voltage regulator.

Speaker 2:

The RAA229618 is a voltage regulator. It is an extremely complicated, full-featured part. And it's not very easy to reason about, frankly. And this is a domain in which it was not doing what it's documented to do. So, like, what now, chatbot?

Speaker 2:

You do need to be doing things that actually don't make sense. I mean, Adam, how many times have you debugged something where it's like, alright, I'm gonna do this even though I know it's not over here, but I just don't know what else to do?

Speaker 1:

Today, like, literal minutes before the show started, a colleague, Ben Naecker, and I were banging our heads against something. And it was sort of like, let's try this. I don't know that it makes sense, but at least it's something, and then we never need to think about that thing again. So, yeah, you turn to desperate places.

Speaker 2:

You know, this is actually the origin of Brendan's screaming at drives; it started with one of these.

Speaker 1:

Yeah.

Speaker 2:

Where Schrock was debugging the one outlier on the JBOD, where we had a JBOD and we had one outlier in terms of variability. We did not understand it.

Speaker 1:

Right. We we had one disk that was throwing up these enormous latencies, and this is, like, in a in a pack of, like, 48 identical disks.

Speaker 2:

That's right. And I don't know if this moment made as much of a mark on you as it did on me, and, I apologize, we may very well have talked about it on Oxide and Friends before. But when Schrock was like, I think we should go look at it before we go to lunch. You remember this?

Speaker 2:

I'm like Yeah. Totally.

Speaker 1:

You're like, why would we do that?

Speaker 2:

Why would we do that? That is the dumbest idea.

Speaker 1:

Why would we do it before lunch when we could look at it after lunch?

Speaker 2:

Right. But it's like, you know, we're not gonna like, there's not a raccoon in the data center. I don't think. But I remember saying, but this is where it's like, this is the great advantage of desperation. It's like, yeah.

Speaker 2:

Let's go look at it. Let's do it. You know what? Like like, why sure. And then we go pull the drive out and all the screws have worked themselves out of the drive.

Speaker 2:

And you realize, oh my god, this thing was vibrating itself to death. And that's what sent Brendan on the path of reproducing that, and ultimately shouting at it as a test. And you're like, wow. Wait a minute.

Speaker 2:

And you understand this kind of like new property about your system that you kinda didn't realize that you had, and you did it because you hit that moment of desperation and then also curiosity. Right? It's like and now it's another thing. It's like that that very, again, human characteristic. Like, wait a minute.

Speaker 2:

Why is this one over here? Like, these should all be the the like, why is there one that has an outlier? And, you know, again, it's like, you know, chatbot. What do you investigate? What do you not investigate?

Speaker 2:

Because so many of those wisps of smoke turn out to be really, really, really germane. You think, oh my god. Thank god we actually opened that door because that actually ended up being really, really important.

Speaker 1:

Yeah. You know, we're talking about this, and not to tease a future show too much, but we've had a bunch of problems with async Rust and cancellation. Yes. And it took a lot of sophisticated thinking to figure out the source. We saw a lot of kind of ghost-in-the-machine types of failures.

Speaker 1:

Now, you know, potentially attributable to these future cancellations. And is there this kind of static linter that could look at the amalgam of Rust and identify the shortcoming and propose workarounds for it? It's like, sure, I'd love to see this kind of static analysis tool. It does not feel imminent.

Speaker 2:

It does

Speaker 1:

not feel imminent. And until you can solve these kinds of problems, you can't build complicated systems, complicated systems that are robust.
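A minimal sketch of the class of async-cancellation bug being alluded to, using tokio as the runtime (an assumption; the episode doesn't name one, and this is not Oxide's code). The task increments a counter, awaits, then decrements it; if the enclosing timeout fires while it is parked at the .await, the future is dropped and the undo step never runs:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::time::Duration;

// How many requests we believe are currently in flight.
static IN_FLIGHT: AtomicUsize = AtomicUsize::new(0);

async fn handle_request() {
    IN_FLIGHT.fetch_add(1, Ordering::SeqCst);
    // Cancellation point: if this future is dropped here, the decrement
    // below never executes.
    tokio::time::sleep(Duration::from_secs(10)).await;
    IN_FLIGHT.fetch_sub(1, Ordering::SeqCst);
}

#[tokio::main]
async fn main() {
    // Cancel the request after 50 ms: the future is dropped at its .await.
    let _ = tokio::time::timeout(Duration::from_millis(50), handle_request()).await;
    // The count is now stuck at 1 even though nothing is running, which is
    // exactly the ghost-in-the-machine symptom described above.
    println!("in flight after cancellation: {}", IN_FLIGHT.load(Ordering::SeqCst));
}
```

The usual mitigation is to put the undo step in a Drop guard so it runs even when the future is cancelled, which is the kind of pattern a cancellation-aware lint would want to check for.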

Speaker 2:

Well, this is actually, Adam, another very important point: these lower layers of the stack, like the async runtime, and then the things that build upon it. When there's a flaw in Dropshot, there's now a flaw in every piece of software that uses Dropshot. And it's like these cracks in the foundation; we use it metaphorically, but there's a reason we use it as a metaphor. The crack in the foundation has consequences way, way, way up. And I think people don't always appreciate how important the robustness of that foundation is, because they've had it for so long.

Speaker 2:

I mean, Adam, you grew up before memory protection. Right? You had, like, OS 9 or whatever.

Speaker 1:

Totally. Totally.

Speaker 2:

Yeah. Absolutely.

Speaker 1:

Yeah. Where you'd have like, you know, some application would murder some other application, you know, if you were lucky.

Speaker 2:

And the machine would reset.

Speaker 1:

Yeah. Totally. I could write little Scheme programs in my Scheme interpreter where, if I dereferenced a null pointer, it would cause the Mac OS 9 box to, like, hard hang. It's great.

Speaker 2:

But I do feel that, as a result, we kinda came up at a time when the lack of foundation was very visceral and apparent. And now that's, like, not true. I mean, if you have a Chromebook, that thing is not gonna, like, bounce on you, really. You know? Or, you know, we just

Speaker 1:

Agreed. And you imagine, like, the graybeards like us who were coming up at the time of Mac OS 9 also saying, you guys don't know how good you have it. This foundation has just been solidifying. And, you know, AI may be a piece of this next foundation, I don't know. But I think that's what it's gonna look like.

Speaker 1:

Not like suddenly, that suddenly things are possible that were never even conceivable before.

Speaker 2:

Especially, again, when you cross that chasm into the physical world. I don't know how AI is gonna debug these things when it's software-software interaction, let alone software-hardware interaction. Because it is, like, super recent, I mean, another one that felt like, again, another kind of bullet over the ear is, we've got press-fit DIMMs, and we had some DIMMs with bent pins.

Speaker 2:

And it's like Yeah.

Speaker 1:

And didn't didn't we get a little rough with one of the DIMMs too?

Speaker 2:

Okay. Okay. That's a very pointed we. Okay. I

Speaker 1:

can give as good as I get.

Speaker 2:

Oh, okay. Listen. You know, I'm really trying to not hold you accountable for the VTrace thing, but clearly that's what this is about. No. Okay.

Speaker 2:

So first of all, okay, it should be said that I was removing a DIMM and the entire press-fit DIMM connector came out in my hand. I would have to have superhuman strength. I mean, clearly, it was a manufacturing issue. Right? And, you know, manufacturing issues happen a lot.

Speaker 2:

And, actually, I'm reminded of another one, and I can't remember how much we talked about the sharkfin issue. We had our rev D sharkfin. So the sharkfin takes, from Gimlet, which is our compute sled, the sharkfin is going from PCIe to U.2, so it's a U.2 connector. It's not a PCIe slot.

Speaker 1:

It's like a it's like a little riser, but you Little

Speaker 2:

riser. Yeah. Yeah. Pretty simple. I mean, as relatively simple,

Speaker 1:

like, pretty simple. No fucking way I could ever design an endpoint.

Speaker 2:

Yes. And this is pretty simple, and also, the AI is not making these. But as a concrete example, we had our rev D sharkfins show up, and all of a sudden drives are not being recognized. What the hell's going on? And this is actually true. I mean, this is just amazing.

Speaker 2:

I just love watching our team spring into action. And between Robert and RFK and Nathaniel and Eric and me, everyone's kind of debugging it simultaneously. I mean, the first thought is, everyone assumes that the eye of Sauron is on them. So I'm like, okay, somehow I myself am to blame. And then the folks that did layout are like, did I screw up the layout somehow? And we're going through the schematic, and the schematic didn't change.

Speaker 2:

And it turns out the wrong part was stuffed. The wrong part was loaded: the reel that was put into the pick-and-place machine was wrong. And it was a mistake that was made on the dock. I'm like, sorry.

Speaker 2:

How does the AI debug this issue? Just walk me through it. You know? Walk me through it. I just don't get it at all. It's like, oh, well, in this world, robots control the entire supply chain, or, this person was saying, well, the AI doesn't make that mistake.

Speaker 2:

It's like, well, no one actually the mistake that was made was made by a human being at a manufacturing facility on a loading dock. So I'm not sure about this.

Speaker 1:

I mean, alternatively, if the machines do take over, like, our opportunities to sabotage them will be will abound.

Speaker 2:

Right? Oh, it was about that.

Speaker 1:

We will trip them up on anything. Right?

Speaker 2:

Oh, I know. Okay. So now we kinda okay. The yeah. Now I I I would if we get to kinda flip our hats a little bit here.

Speaker 2:

Yes. So this is all assuming that we are all trying to assist the AI in its mission of building not just humanity-destroying infrastructure, but a universe-destroying bomb. You know, his words. This is assuming that we all want to assist it.

Speaker 2:

I still don't see how it happens. Now, Adam, as you say, if this is an adversary, oh my god, we've got so much opportunity to fuck with it.

Speaker 1:

I mean, do you remember, there was a time when people were bullying self-driving cars?

Speaker 2:

I would is I yes. And also to be clear, like,

Speaker 1:

it would have been me if I had the opportunity. I just haven't seen them around.

Speaker 2:

So Samsung has a Silicon Valley campus, which I had to go down to occasionally when Joyent was bought by Samsung. Patrolling this Samsung campus are Samsung security robots. These are these cones on wheels.

Speaker 1:

I mean, on one hand, I was sort of imagining... But the fact that it's a cone on wheels, that just brought me down to earth quickly.

Speaker 2:

Oh my god. I have never wanted to run something down with my car more. To the point where I'm like, animal brain, what is going on? Like, if you put a literal traffic cone there, I'm not like, goddamn, I wanna run that cone over. But for whatever reason, this thing is, like, patrolling, it's following me, first of all, and I'm like, I wanna run you over. But I don't wanna just run you over.

Speaker 2:

I wanna give you a swirly. That's what I wanna do to you, in the most middle-school sense.

Speaker 1:

So, I mean, Andreessen's animal-brain survival, yes. But the sophomoric, deep-down desire to give robots swirlies, it's even more intrinsic, it turns out.

Speaker 2:

It is baked very deep in our humanity. And, God, it was to the point where I'm really having to resist, because clearly that thing is loaded with cameras. It's not like I'm gonna get away with it, but still, god, I wanna run this thing over.

Speaker 2:

So, yeah, you think about this: if we are actually adversarial with these things, and it's dependent on all of this firmware and all these parts, and it only knows how to read the datasheet, oh my god, this is fishing in a stocked pond. I mean, sign me up for the Red Dawn Wolverines equivalent in the human resistance movement. I can't wait.

Speaker 2:

There's so much. You know? Can you make a Red Dawn reference? Is that a is that a safe reference to make? I'm not sure.

Speaker 2:

I'm not sure how well Red Dawn has held up.

Speaker 1:

That I don't know. It might be like, like when I, when I showed, Peter Pan to my child and instantly regretted it.

Speaker 2:

Peter Pan. Oh, yeah. Oh, really?

Speaker 1:

Yeah.

Speaker 2:

Like the animated Peter Pan?

Speaker 1:

Yeah. It's it's really, really? Yeah. Oh, no. Real bad.

Speaker 1:

Yeah.

Speaker 2:

Okay. Good to know.

Speaker 1:

Yeah. Just take just just just erase all the fond memories you have. Yeah. Just just erase the fond memories. Yeah.

Speaker 1:

And But just yeah. Just apologize for them.

Speaker 2:

Oh, well. Okay. Well.

Speaker 1:

But, like, in terms of this AI doomerism, and maybe this is the big thesis here: the folks making these claims probably both don't understand the underlying technologies lower down in the stack, and view them as implementation details or as solved problems. Right? There are no problems left in chip design. There are no problems left in PCBs or in networking or in systems.

Speaker 1:

Like, actually, there are tons of problems. Every day there are problems.

Speaker 2:

Yes. And I agree with all of that. I also think that many of these people are just used to kind of creating things with their mind. It's like, oh, I said to do this, and then it happened. And it's like, yeah, there was actually a huge amount of pain, and because you are a poor manager or leader, no one bothered to actually escalate the issues that were happening.

Speaker 2:

And you actually had no visibility into how this thing happened; all you know is that it worked in the end. And it's like, yeah, you actually don't know how extraordinarily close it was to not working. And you're not really interested in the details. Yeah.

Speaker 1:

Well, and then also this is a field that spent 30 years in the wilderness, to then suddenly have everything working in ways that surprise everyone. So, a little bit hungover on that.

Speaker 2:

A little bit hungover. And I think they themselves, I mean, AI researchers, are surprised by the emergent behavior. And whenever you have technologists that are surprised by the behavior of the thing, you're like, okay, why are you surprised? It's like, well, we actually don't understand how this thing works completely.

Speaker 2:

And, you know, Keith was on here last time saying that if you tell this thing it's John Carmack, it's much more likely to be correct. And I don't think they completely understand why that is the case. So, okay, let me ask you this. How much of the AI doomerism is coming from AI researchers, versus coming from those that are kind of the hoi polloi, the chattering class of technology, who are observing the AI researchers having concerns and then magnifying them, kind of increasing the concerns of their own accord? I mean, how much of it is the AI researchers?

Speaker 1:

I mean, it really feels like the latter. Obviously, there's some crossover of AI researchers who are also hucksters. But it really feels like folks who are regarded as, you know, thought leaders, rather than the folks who are in-the-trenches practitioners.

Speaker 2:

And are you as shocked as I am by this kind of web3 flip-flop here, the Liron and the Andreessen of it?

Speaker 1:

I'm hurt by it. You know, there's the Simpsons episode where Marge comes out against Itchy and Scratchy, against the violence, and then is interviewed because Michelangelo's David is going on display and a bunch of folks oppose its nudity, and Marge says, I don't know, seems fine to me. And I have the same reaction of, like, wait, I agreed with you so much on this other thing, why suddenly do I disagree with you on this other one? So, yeah, I had sort of hoped that skeptics were skeptics, but that's wrong.

Speaker 1:

Obviously.

Speaker 2:

And is it just me? I mean, it just does feel like there is this kind of loud din. And I also think it's dangerous. I mean, I think it's also worth looking back, because, as we said at the top, it's kind of apocalyptic thinking. There is this substrate of it among technologists.

Speaker 2:

And if folks haven't read it recently, Bill Joy wrote a piece in Wired in 2000 called "Why the Future Doesn't Need Us." And, Adam, you obviously read this at the time, because then we talked about this. Okay. So you had not yet come to Sun. Were you embarrassed to come to Sun after that?

Speaker 1:

No. And it was not... I think, like, your mom was asking you about it? I guess my mom didn't read Wired or whatever at the time. So, yeah.

Speaker 2:

My mom was like... I had to talk my mom's book club off the roof on that one. Because my mom was like, he works for Sun. Bryan works for Sun, so this must be very, very serious. And my mom, you know, god bless her, trusts institutions. She's like, Wired would not have published this if it weren't very serious.

Speaker 2:

I'm like, Wired is about hacks, mom. So, okay, you obviously read it. And, yes, I mean, it's just unhinged.

Speaker 2:

I mean, look. If you are quoting Ted Kaczynski at length, if you're having to say, like, look, I'm not apologizing for him, I think that Ted Kaczynski is a... If you have

Speaker 1:

But he made some good points. Right?

Speaker 2:

It's like, you gotta really check yourself here. And I'm sure that he will claim some degree of prescience in this piece, but he shouldn't, because he talks about things like genetically modified foods, and in particular nanotechnology. And nanotechnology is one of these things that sounds very scary, and this is based on Feynman's piece about a machine that could make a smaller machine, and what if this occurred kind of ad infinitum: you would have these molecular-sized machines that could do arbitrary things. And this is K. Eric Drexler's vision; you would have a weapon that could turn people into gray goo. Anytime someone makes a reference to gray goo, they're making a nanotechnology reference, effectively. And I was embarrassed that I read all of that book before realizing that none of it had actually been implemented.

Speaker 2:

And this is all just like, oh, wait a minute. This is all, like, you sorry. I'm reading science fiction right now?

Speaker 1:

Just sheer imagineering. Yeah.

Speaker 2:

Yeah. Totally. And it's like, these are all kind of, like, hypotheticals on what could possibly happen, but it's like, sorry, this is not... there are also really practical reasons why this isn't gonna happen. And, you know, you would talk to, like... you know?

Speaker 2:

And actually, in the piece, it's funny, Bill Joy is like, you know what, my scientist friends would tell me that there are physical reasons why nanotechnology is impossible, and I made the mistake of listening to them for too long. I'm like, I wonder how you feel about that one looking back on it.

Speaker 1:

I guess that makes sense. That's one way to feel about it.

Speaker 2:

But I think we do, and I think we have, you know, talked about how Y2K was this way, the lead-up to Y2K. There was a lot of doomerism, and those predictions were... I just feel like we don't go back to some of these past predictions and say, like, wait a minute, why did we get this so incredibly wrong? And how does that inform the way we think? Because I think that too often, people kind of pick their own metaphors.

Speaker 2:

And, I mean, like, alright. Fine. Look. I'm probably the last person to be accusing others of picking their own metaphors, but I do feel that people pick advantageous metaphors. And the one that people love, when they talk about AI or technologies that they think are gonna be dangerous: you know, you are t-minus 10 seconds away from nuclear weapons. Everyone wants to talk about nuclear weapons.

Speaker 2:

And, like, no doubt nuclear weapons are obviously very dangerous and merit very tight regulation. They are also weapons. They are indisputably weapons. And I wish people would learn more about nuclear nonproliferation and the nuts and bolts there before they would immediately go to the nuclear metaphor. Because it's like, why not go... you should go to Y2K as a metaphor more frequently.

Speaker 1:

So, maybe to bring some of the AI doomerism to more immediacy and pragmatism: someone mentioned in chat Timnit Gebru, who talks about, you know, the discrimination and racism inherent in these models. The other side of it is the energy consumption from these models, which is enormous. Enormous.

Speaker 2:

Totally. Yeah.

Speaker 1:

So maybe this is my own kind of doomerism, but it seems like the misuse of AI, not in the creation of super robots, but in continuing to hold up prejudice and racism and otherism generally, that to me is actually more immediate, a more focused, immediate doom problem.

Speaker 2:

Totally. And, I mean, I think it just feels more mundane to talk about. Oh, we have to talk about racism again. It's like, yes, we're talking about racism again.

Speaker 2:

Sorry.

Speaker 1:

No. We haven't solved that one yet. That's still

Speaker 2:

solved that one yet. No. We do not get to talk about... I know you wanna talk about, like, the laser-eyed robots, but we're not gonna talk about that. We're gonna talk about racism. Sorry.

Speaker 2:

And, no, absolutely. And the way that these things are... I mean, I get very nervous about these things when we're outsourcing decisions around a loan approval to... oh, sorry, the large language model, based on your loan application, has rejected you. Like... okay.

Speaker 2:

Do you wanna get into a little more detail, please? Can you explain why? Because, no.

Speaker 1:

And it's like, we don't know why. It won't show its work. Like, that's not a thing it does.

Speaker 2:

Right.

Speaker 1:

Or or or sometimes it does show its work. And then when it does, actually, it contradicts the decisions it made.

Speaker 2:

So, yeah. And we don't

Speaker 1:

really get how it works, but it just comes with the answer.

Speaker 2:

Totally. And I do think it's a big mistake. And this is the false dichotomy that just drives me up the wall, where it's like, well, you either need to be an AI doomer, or you need to dismiss any kind of ethical or safety concerns around AI. And it's like, why can't we... I actually do think we are best served by thinking of AI as, like, not that unlike the Internet.

Speaker 2:

Like, the Internet: really big deal. If someone, you know, 15 years ago, or more maybe, was saying, hey, the Internet is something that we need to pay attention to the safety of, I mean, I think at the time you'd be like, okay, really? How can the Internet be dangerous?

Speaker 2:

It's like, well, we've seen how the Internet can be dangerous. I mean, the Internet, I think, led to an insurrection on January 6th. And I don't think that that is farfetched. Right? The Internet has led to some really, really bad human behavior.

Speaker 2:

The Internet is not necessarily to blame, but it's definitely involved. I also think that when you look at, well, how do you regulate that? I'm not sure what the answer is there, because I'm not sure that regulation is gonna be the one and only answer. I do feel that, on a lot of these things, we need to make sure that we are enforcing existing laws, which actually is important.

Speaker 1:

Yeah. And it's hard to imagine anticipating the multifarious problems of the Internet 20 years ago, 25 years ago, 30 years ago. But rather, to your point at the onset, like, humans change; we are not held static. We adapt to new conditions. And trying to say that we need to, like, anticipate everything and set in place prescribed laws and regulations, you know, for these events that are three times speculative over the horizon...

Speaker 1:

It's tricky.

Speaker 2:

It's tricky. I also do feel that... and let us not underestimate human resilience. That's the other thing. I also feel that, like, these people kinda need to spend some more time outside. You know what I mean?

Speaker 2:

And I don't mean that in, like, a touch grass sense, although maybe that too. But I mean, like, go backpacking. Because one of the things I love about backpacking is that you feel both the human advantages and all the technology that you rely upon to kind of propel yourself through the wilderness, but you also feel how kind of brittle it is, and how much you rely on... you know, anyone who hikes the Pacific Crest Trail is relying on being able to pick up food every, you know, n miles, and having food stores and food caches and sending food ahead. Nobody is starting at the Mexico border with the food that they need to get to Washington, to Canada.

Speaker 2:

And it's like, we are dependent on the things around us. And so, at once, we are both vulnerable and extraordinarily resilient. Things are both brittle and robust. And it's like, yeah, I know, it's a lot to wrap the brain around.

Speaker 2:

Like, welcome to the human condition. Sorry.

Speaker 1:

Yeah. And strength in numbers. Right? Like, I think that's built into what you're just saying. It's not just the individual, it's the community, holding each other up and supporting each other.

Speaker 2:

Yeah. It's a good point too. In terms of, like... you know, when the back is really against the wall, you actually do have a lot of us. And, like, there's some, you know, really creative people out there. Yeah.

Speaker 2:

And, you know, certainly one of the things that I've loved about Oxide is, when we have had any number of these problems that felt unresolvable, watching everyone really begin to figure out how they can contribute to finding a solution. And, you know, it's very hard to forecast where that solution comes from. And I'd actually like to give maybe one or two more concrete details, just in case people don't yet have the confidence that the AI is not gonna make hardware, not gonna build its own computer, not gonna achieve the thinking singularity. The number of times that we have had parts that have been misdocumented, or just don't do the right thing...

Speaker 2:

I mean, we had an issue with our Chelsio NIC where the resistor was wrong. We had the wrong pull-down resistor on there. Or pull-up resistor; I can't remember which direction it was. But it was documented as a one k resistor being sufficient.

Speaker 2:

As it turns out, we needed a stronger resistor. We actually needed a 499 ohm resistor. And it's like, again, good luck. You know, the way we found that is by a huge amount of experimentation and desoldering components and, you know, reworking things over and over and over again, things that required not just ingenuity, but our hands. Like, we actually had to solder things.
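(For readers wondering why a "stronger", lower-value resistor matters at all: a pull-up together with the parasitic capacitance on the net forms an RC circuit, so a lower resistance charges the line to a valid logic level faster, or overpowers an opposing on-chip resistor. The sketch below works the rise-time case only; the supply voltage, logic threshold, and capacitance are illustrative assumptions, not values from the board discussed here.)

```python
import math

# Back-of-the-envelope: why a "stronger" (lower-value) pull-up can matter.
# All numbers are illustrative assumptions, not values from the actual board:
# a 3.3 V rail and ~50 pF of parasitic capacitance on the net.
VCC = 3.3          # supply rail, volts (assumed)
V_IH = 0.7 * VCC   # logic-high input threshold, volts (assumed)
C_BUS = 50e-12     # parasitic capacitance on the net, farads (assumed)

def rise_time(r_pullup: float) -> float:
    """Seconds for an RC-charged net to rise from 0 V past V_IH."""
    return -r_pullup * C_BUS * math.log(1 - V_IH / VCC)

for r in (1_000, 499):
    print(f"{r:>5} ohm pull-up -> ~{rise_time(r) * 1e9:.0f} ns to reach logic high")
```

Run as-is, this prints roughly 60 ns for the 1 k pull-up and 30 ns for the 499 ohm part, which is the shape of the difference even if the real numbers on the board were different.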

Speaker 2:

And this is the stuff that, you know, as someone says in the chat, yeah, desolder and rework all the things. And, I don't know, it'd be kind of entertaining to do a board bring-up at the behest of ChatGPT. I'm not sure how long it would last.

Speaker 1:

Having it send you on little missions to, like, clip resistors and resolder things and bodge wires and so forth. Yeah, that'd be... Oh,

Speaker 2:

yeah. That'd be interesting. It would be interesting. I think it would be kind of entertaining for the first maybe 30 minutes, and then you realize, like, we are not actually gonna converge. So, I mean, I feel like, again, there are a ton of these.

Speaker 2:

We've had a lot of these, where it really has taken our collective creativity to be able to ship a system. And if we hadn't been talking about it, people wouldn't know it. They would just power on an Oxide rack, and it would just work. And they would think, well, the AI could make that. It's like, nope.

Speaker 2:

The AI definitely can't make it. And I don't feel we're on a trajectory for that. And that assumes, again, Adam, that the AI is not adversarial. And if it is adversarial, then we're really gonna start to screw with it. Can you imagine? That's just gonna be great.

Speaker 2:

That's how I keep going.

Speaker 1:

Delightful. Yeah. Exactly.

Speaker 2:

Well, I mean, one of the things... actually, it was funny because

Speaker 1:

The teenage boys will become our greatest resource in the fight against

Speaker 2:

each other. Can you imagine that we ask 15 year olds to sign up to serve not just their country, not just humanity, but the universe, 15 year old boys.

Speaker 1:

The whole light cone.

Speaker 2:

The whole light cone. Serve the light cone. Your light cone needs you. And we don't mean to pick on the boys, but the boys are the boys.

Speaker 1:

Yeah. Exactly.

Speaker 2:

Yeah. The ones that have been... I think, I'm sure we talked about this last time, but they've been using ChatGPT on Nextdoor to troll our neighbors with indignant posts. As they said, it's like, gosh, ChatGPT, it's great.

Speaker 2:

It sounds so adult. Okay. And it's really far too effective, and that's what we will need. Yes. We will need them to serve the light cone in mischief.

Speaker 2:

And also, because... I mean, I think that we would... well, I think in a future episode, we are going to talk about the ways that we secure our root of trust. Because we've got a root of trust on all of our boards: on Gimlet on the compute side, on the Sidecar switch, on the power shelf controller, we have a root of trust. And how do you secure what is ultimately the private keys for that thing? Again, we're gonna need to do a whole episode on this, because it was so... I mean, Adam, I'm sure you were as fascinated by that. I mean, it's super fascinating.

Speaker 2:

Because ultimately, the root of the root of trust is keys sitting in a safe deposit box at a bank that will be unnamed. And...
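(To make the "root of the root of trust" idea concrete: the private key, the thing that lives in the safe deposit box, signs firmware offline, and a device only needs the corresponding public key to check what it boots. A minimal sketch of that verification step follows; Ed25519 and the Python cryptography package are chosen purely for illustration and are assumptions, not Oxide's actual scheme or key format.)

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat


def verify_firmware(firmware: bytes, signature: bytes, pubkey_bytes: bytes) -> bool:
    """Verify a firmware image against the root-of-trust public key.

    The private half of this key pair never leaves offline storage (the
    proverbial safe deposit box); only signatures travel with the image.
    """
    pubkey = ed25519.Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    try:
        pubkey.verify(signature, firmware)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    # Offline "ceremony" side, shown only so the example is self-contained.
    private_key = ed25519.Ed25519PrivateKey.generate()
    image = b"example firmware image"
    sig = private_key.sign(image)
    pub = private_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

    print(verify_firmware(image, sig, pub))               # True
    print(verify_firmware(b"tampered image", sig, pub))   # False
```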

Speaker 1:

You said too much. Right?

Speaker 2:

I said too much. But it's just like, can you imagine? We're gonna intercept the Samsung robot on the way to the safe deposit box? I mean, it's just like, yeah, go ahead, good luck, robot army, securing your firmware. Because ultimately, you're gonna have to generate private keys. And in order to prevent us humans, us wily humans, the ape that was able to actually put one of our own on the moon, you're gonna have to actually outsmart us as we attempt to infiltrate your ceremony and infiltrate you.

Speaker 2:

And also, because we're serving the light cone, we're serving humanity, we're serving the universe, the mirror neurons don't even... it doesn't even feel like you're... you know, it just feels unequivocally good. There's not even that kind of moral ambiguity of the fog of war. This is like, no. There's no equivalent of a... I mean, sorry, what is a war crime against ChatGPT-4?

Speaker 1:

I feel like we've gone from AI skeptics to planning the insurrection against the AI overlords in, like, an hour and six minutes.

Speaker 2:

That's right.

Speaker 1:

I'm for it. Sign me up.

Speaker 2:

Right. They'll call me the large language model butcher. Although, I guess we need, like, war names if we're gonna be... but I'm ready. I'm ready to go. I'm ready to serve the light cone and to use all my creativity, because it's like, there's no way.

Speaker 2:

Unfortunately, it's not gonna get to that. I'm, like, I'm ready, but it doesn't matter, because we're never gonna get there.

Speaker 1:

That's right. You can be a perpetual reservist in the anti AI army.

Speaker 2:

Yeah. Right. I am still ready to serve. Actually, maybe that's what we need. Maybe that's what we need to counter the AI doomerists.

Speaker 2:

Maybe we need to have the reserves to save the light cone. We're like, look, you all said this is a low probability event.

Speaker 2:

So, low probability humanity is gonna be destroyed; low probability we're gonna call up the reserves. And, you know, sign up for the reserves to fight the AI. Do you think it's gonna make you feel more or less secure?

Speaker 1:

For a nominal fee. I think it's a great opportunity.

Speaker 2:

For a nominal fee. I will pay that nominal fee. I'll pay double. Yeah. To be in a bot-fighting militia, you know... again, I'm not sure this is gonna make people feel better or worse.

Speaker 2:

Alright. So how do we do? Are people feeling less doomerific? I would say, I know it would be a reasonable counter to say, like, was all this actually necessary? Do people actually exist out there who are AI doomerists?

Speaker 2:

But AI doomerism absolutely does exist.

Speaker 1:

Absolutely. And they're deadly serious about it, surprisingly serious, even people who you think of as, you know, right-thinking, anti-web 3 folks.

Speaker 2:

Liron, we are looking right at you, pal. Come on. Oh, okay. Someone did drop... someone dropped the robot into the chat, Adam.

Speaker 1:

I see, the robot that you needed to harass.

Speaker 2:

Oh my god. And I get it. Like, I think they were trying to design this... like, we wanted to make it appear friendly. It's like, they overshot the mark on that one.

Speaker 1:

Too derpy. Like, way past friendly into derpy.

Speaker 2:

Derpy. And I feel that... I mean, I'm only going to win social clout by destroying this thing. I mean, it just... right? Don't you feel the same thing?

Speaker 1:

Oh, it'd be tough to walk by that without giving it a little shove, at least.

Speaker 2:

I was in my car, and it was coming at me beeping. I'm just saying. Like, I

Speaker 1:

You angered it?

Speaker 2:

It was following me to begin with, and then it started chirping at me. And then I got in my car, and it's, like, kinda chirping at me. And again, like, I had literally just left Samsung. Like, I hadn't done anything. I mean, it's got nothing to chirp at me for.

Speaker 2:

I was like, holy moly do I wanna run you over, pal.

Speaker 1:

Get lost, dork.

Speaker 2:

Exactly. Exactly.

Speaker 1:

Someone just posted a picture of it in a fountain.

Speaker 2:

In the fountain. Yeah. Well, I think, like, with these things, that's why... I mean, the mirror neurons are not gonna fire.

Speaker 2:

It's gonna feel truly victimless to go do these things, and that's why they're not gonna function very well as security. Because, forget not feeling criminal, it doesn't feel wrong. It doesn't feel like the destruction of property, for whatever reason, even though it probably should. So, alright. Well, I'm looking forward to signing up to be in the reserves in our anti-bot militia, but I do think that the future of the light cone, I feel that is safe.

Speaker 2:

I feel that is safe. I think the universe also safe. Humanity also safe. Come on.

Speaker 1:

Hey. And like, I'd say we're also bullish on AI. Like, not skeptics, but skeptical of the future not needing us.

Speaker 2:

Skeptical of the future not needing us. And see what I thought was a great discussion, I really enjoyed our discussion: we had Ashley on there and a bunch of other folks, talking about how we think software engineering can benefit from this. So I think there's a lot there. We are not AI skeptics. We, for one, welcome our AI overlords.

Speaker 2:

What was your line? I think you were appealing to our future overlords.

Speaker 1:

Yeah. And want to remind them that podcast hosts can be useful in softening up a compliant population.

Speaker 2:

That's a Simpsons reference. Again, I'm afraid we always need to qualify that. Alrighty. On that note, I may have a surprise guest next time. So, stay tuned.

Speaker 2:

I'm still securing our guest, but we may have a

Speaker 1:

Oh, surprise to me. I can't wait to find out.

Speaker 2:

Well, you know, it always is. And again, I apologize for being trolled on a Sunday morning. Liron, we put this at your feet. Alright. Take care, everyone.

Speaker 2:

Thanks, everyone.
