Predictions 2024!

Speaker 1:

My laptop had just been stolen. Did I emphasize that enough last year? I don't know. I'm just trying to

Speaker 2:

I don't think you did. And also, is it fair to say it wasn't just that your laptop got stolen? It's that your laptops got stolen.

Speaker 1:

Yes. Every laptop I owned was in a backpack, in view, parked in Oakland. Let's just get it all out there. Let's just say it all. Let's say the whole thing.

Speaker 1:

Let's say the whole sentence. Let's not just drip it out.

Speaker 2:

Keep all those eggs safe in your safest basket.

Speaker 1:

In the safest basket. And, yes, I was out of the car for maybe 10 minutes. I'm like, this is not a good idea.

Speaker 1:

Yeah. It was brutal. And as I've said several times over, what those thieves stole is not merely a laptop, not merely 3 laptops. They stole a working Linux audio config, and the price of that is actually priceless. So, hope you enjoy the working audio, you bastards.

Speaker 2:

Isn't there a silver lining here, that now you're on the Framework laptop, and with working audio?

Speaker 1:

There's a silver lining. I'm on the Framework. Yeah.

Speaker 1:

Yeah. And it, yeah,

Speaker 2:

I'm on the Framework.

Speaker 1:

It's brought to you by Framework, actually. You know, actually, Adam, I am reading a birthday gift you got me. I recently had a big milestone, turned the big 5-0, which is very exciting. And you got me the book High Noon.

Speaker 3:

Yes.

Speaker 1:

Very, very generous of you.

Speaker 2:

Well, you're putting it generously, in that I got myself the book High Noon, used,

Speaker 4:

read it

Speaker 1:

Oh, did you?

Speaker 2:

And then gave it to you. Yes. You know, I thought that

Speaker 1:

you know, here I had thought that copy, which is from a library, by the way. Yes.

Speaker 2:

I didn't steal it, if that's the next question. You're not an accessory to anything.

Speaker 1:

You did. It's from, I mean, it's from Hicksville, New York, which, I had assumed, sounds like a Simpsons reference, but it's not. And the book feels unread. So it's good that you actually, I'm not accusing you.

Speaker 1:

I'm not accusing you of that, because I don't think this is on Audible, so I think you would have had to have actually read it. I think it's good. I like it.

Speaker 2:

Oh, yeah. No, I enjoyed it. I think we talked about it in one of the Books in the Box episodes. I'm glad you're getting to it.

Speaker 2:

Yeah. I think it's pretty good.

Speaker 1:

It is pretty good. It's pretty good. And, you know, it's a cultural ancestor of Oxide. But there's a moment there when McNealy is like, yeah, you know, we're not very hierarchical.

Speaker 1:

And so if, like, the engineers think something is a bad idea, they're just gonna tell you, and they're not gonna do it. They're gonna tell you it's a bad idea, which is very much like, hey, at Oxide, if your audio sucks, you're gonna be told, no matter what the reporting structure is. So, Steve, meet Mike Cafarella. Mike, meet Steve, who is masquerading here as one Lyndon Baines Johnson, Master of the Senate.

Speaker 5:

Yeah. I just apologize for my icon and name, which were, you know, created 4 years ago in some kind of COVID video game fever dream. And I have not logged in since, but I'm happy to be here tonight.

Speaker 1:

I just think that is great: somewhere on this planet, there is some kid who's seeing a photo of LBJ for the first time, and he's, like, that's the guy that kicked my ass in Fortnite. So, well, Mike, it is really great to have you here. And, of course, you know Adam from over the years, if nothing else, from my wedding.

Speaker 3:

We were

Speaker 1:

all at my wedding together.

Speaker 5:

Yeah. That's right.

Speaker 3:

Adam, it's

Speaker 5:

been a long time. It's good to hear from you.

Speaker 2:

Yeah. It's been a minute. I think it has been the 20 years that Brian's been married since I've seen you in the flesh, but good to speak with you.

Speaker 1:

It has been a really long time. And part of the reason, Mike, I'm really glad that you in particular can join us is because, you know, you and I have known one another since we were our kids' ages. Isn't that amazing?

Speaker 2:

Oh my goodness.

Speaker 1:

Not to make us feel old. Yeah, it is definitely shocking. I mean, well,

Speaker 5:

I mean, exactly the same.

Speaker 1:

You know, my kids don't see it that way. My kids really have got some opinions on my aging. But I think, as a group, we're aging pretty well. We have known one another since we were 18 years old, and our lives have crisscrossed many times over. We share an alma mater. We went to school together.

Speaker 1:

And, Adam, I'm not sure if you knew this, but after we graduated, Mike went to work for one of the hottest companies in Silicon Valley: Marimba, the early Java company, which features in High Noon. And, Mike, and I think this is right, you worked at Marimba, and then Tellme, and then you were gonna head back to grad school. You went to the University of Washington to get your PhD. And I believe it is when you departed that you entrusted me with what has become one of the most important things for which I'm a steward, namely the bottle of Marimba IPO wine. A possession so sacred that we've scarcely even seen it.

Speaker 1:

This is, like, kept in the vault. I am terrified of it.

Speaker 5:

Actually, a year or 2 ago, I was wondering where it had gone, because I wouldn't have thrown it away. Brian, I'm glad that I was right and that it's still surviving somewhere. This is terrific.

Speaker 2:

Wait. Now now I

Speaker 3:

had it?

Speaker 2:

Wait. But did you give it to him? Was this voluntary? Like, did

Speaker 5:

he walk off with it, Brian?

Speaker 1:

Okay. Well, so meanwhile, I told myself yet a third story: that Mike had entrusted me with this sacred object, that he had carefully selected among all he knew, and it was I who was trusted with it. But Mike's like, did I throw that thing out? I can't even remember. Maybe I dug it out of a trash can.

Speaker 5:

Okay. So your story is plausible, because I own a statistically absurd quantity of, like, Sun merch from the late nineties. So my guess is that, you know, when I was moving out of town, we had some kind of, like, high-level summit where we exchanged gifts to remember the dotcom era.

Speaker 1:

The dotcom hostage exchange, where I got the Marimba IPO wine. And by the way, this is, like, not just Marimba IPO wine. It is signed by the founders of Marimba.

Speaker 5:

No, it's a good piece of merchandise. It's a piece of Silicon Valley history.

Speaker 1:

And I know it's safe, because I was making a chuck roast last night, and, very uncharacteristically, because normally we don't drink very much and I often use wine for cooking, we were out of red wine. And so I went kind of frenetically to go look for the red wine. And I gave it a long look last night. But I'm like, no. No.

Speaker 1:

This is not the way it ends for the Marimba IPO wine. It can't end being poured out over a 6-pound chuck roast. Mike, I don't know what our plans are for that thing, but at this point, it's like

Speaker 3:

Yeah.

Speaker 5:

I think it'd be pretty epic at this point. Yeah. We gotta come up with

Speaker 1:

something. It's pretty epic. Yeah. Exactly. So, you were at Marimba and Tellme, and went back to school. Yep. But before you went back to school, or maybe it was while you were at school, you were picking up a consulting gig?

Speaker 1:

Because I just remember vividly where we were, in my kitchen in Noe Valley, when you were describing Nutch and the work that you were doing with this guy, Doug, and that became Hadoop. That was the antecedent to Hadoop. Right? I think I'm remembering correctly.

Speaker 5:

No, that's totally right. So there was about a year when I was getting ready to go back to grad school. I was doing some kind of pre-PhD-program research with a couple of our professors from college. And I had some extra time on my hands, so I did that work with Doug on Nutch.

Speaker 5:

And then it turned out that, you know, the stuff on Nutch, which eventually became part of Hadoop, ended up being kind of a bigger deal than the research I was working on at the time. So I went into it kind of casually, but it turned out to be a great project.

Speaker 1:

It turned out to be a really big deal. And if I had any doubts that you can see the future, they should have been laid to rest with Nutch. But then fast forward many years later, to 2019, and I told you that we wanted to go start Oxide. You, unlike frankly Adam, were very supportive of the starting of it. Adam was just like, you're gonna jump to your death. But you were extremely supportive, which was really such a shot in the arm.

Speaker 1:

And again, I remember vividly where I was when we were describing the kind of progress in AI. And your work has kind of been at the intersection of AI and databases. Correct me if that's incorrect. Yeah.

Speaker 5:

That's fair. Yep.

Speaker 1:

And, you know, you've been one of these folks who's kind of been working on this problem for a long time, through a period when it wasn't making a lot of progress, or people weren't necessarily investing in it. And we were kind of talking about what was the state of the art in 2019. I was decrying kind of some of the lack of progress, as an outsider to language and language interpretation. And you're like, look, language is basically a solved problem.

Speaker 1:

Like, it is. You don't know it yet, because it's only really in research results, but there's this thing, BERT. It's gonna totally change everything. Like, language is solved, so we're kind of on to the next thing.

Speaker 1:

And that was in 2019, which I felt like was maybe well known among, like, AI researchers, but it was not broadly known, I would say, in the industry. And, man, the number of times I've thought about you and that moment in the last 18, 24 months. I'm like, damn, you were living in the future, and saw that that problem actually was conquerable, because it has been conquered to a really surprising degree. I mean, are you surprised at how good some of these LLMs are?

Speaker 5:

Yeah. I think it's shocking. So, I mean, first of all, you're making my prediction sound good in a great way, which is to ignore all my wrong predictions. So first, that is great, Brian. I appreciate you doing that.

Speaker 5:

That is what we do around here. Yeah. No, it is terrific. And the other thing is, saying it was solved, or even is solved now, is certainly an overstatement. But what was exciting, and I think everyone has seen it now, is, like, there was a technology curve to ride on language understanding that there hadn't been, you know, for a very long time, or maybe ever.

Speaker 5:

And that was quite visible at the time, and has obviously yielded some of the amazing things that we have now. So, yeah, I think it's thrilling. Like, I know the AI stuff gets people turned into knots sometimes, but, you know, they've been making movies about, like, talking computers since before I was born. I am happy to be around when it actually happened. I think it's great.

Speaker 1:

It is amazing. And so we're very excited to have you on this year as we're making predictions. Because, looking back, and, Adam, I don't know how you feel about this, but you and I did this for so many years not recorded. And, god, I so prefer recording the predictions. Where we would get together on the kind of first Monday of the year, and we would talk about our predictions.

Speaker 1:

We would write them down, but we didn't actually record the conversations. And I feel that, Adam, going back and listening to our predictions from last year, the beginning of 2023, and the beginning of 2022, it was just very revealing about what we were thinking at the time. And, you know, I've said this before and will say it again, but predictions reveal much more about the present than they do about the future. And I feel like our predictions in 2022 were very crypto-centric. So crypto-centric that we put a bag limit on crypto.

Speaker 1:

We said you can only have one Web3 prediction, because there was a line to predict the demise of Web3. Adam, you kinda nailed it with your 1-year prediction that Web3 would drop out of the lexicon, which it very much did.

Speaker 2:

Yeah.

Speaker 1:

And then I feel that last year we were definitely getting into some of the AI predictions, but a lot of the VR predictions too, with Meta going so long on VR, was certainly a big theme. And it felt pretty clear that this year, the theme was going to be AI. I mean, it has to be. This is a really big deal. And not that everyone needs to make an AI prediction, but I would say no bag limit on AI predictions. Because unlike Web3, where there was really a line to predict its demise. I mean, Kelsey Hightower was very outspoken when he joined us for those predictions, Adam. And Kelsey's like, I've never seen something that has this much hype and this little utility.

Speaker 1:

And he had kind of gone into it with a very open mind. I think that on AI, there's a lot of there there. This is gonna pretty radically change a bunch of different things, in a way that feels kind of unpredictable right now. So I don't know, Adam, am I having some sort of, like, AI fever dream here?

Speaker 1:

I mean, is this the most bought in you've ever heard me?

Speaker 2:

This is the most bullish I've ever heard you. I mean, first of all, you're using the term AI and not sort of apologizing for it or saying, actually, we shouldn't use the term AI, which was your prediction, by the way, last year. It is my prediction.

Speaker 1:

So now I've got 5 years to run on that. I still want to stand by it.

Speaker 3:

Yeah.

Speaker 1:

Yes. Excuse me. When I say AI, I actually mean LLMs. Yeah.

Speaker 2:

But no, I think I share your optimism. And it's exciting to predict, but also hard to predict, because it could sort of be anything.

Speaker 1:

It could be anything. So, Mike, because you, unlike the rest of us, actually are able to see into the future. You live in the future. You really come back to the past to enlighten us mere mortals. And feel free to not take the floor right now if you don't need it.

Speaker 1:

If you don't want to, if you need some extra time. But I would love to know: what are some things that you see? What are some of your 1-, 3-, and 6-year predictions? And those of you in the audience, definitely write your predictions down and either put them in the chat, or, if you wanna come on stage, raise your hand. We'll have you talk about it. But I think we got a lot to talk about today.

Speaker 5:

Can I ask some ground rules, some guidance for a first-timer? Is the goal here accuracy, or is the goal here entertainment?

Speaker 1:

No. The goal here is, you know, I would say that's a false dichotomy, but that actually is not a false dichotomy. Accuracy and entertainment really are diametrically opposed here. Much more, I would say, and this was kind of famous: was it 2 years ago, Adam, that we had Steven O'Grady on?

Speaker 1:

And, you know, I gave Steven O'Grady of RedMonk a very hard time about grading his predictions on accuracy. Accurate predictions are not very interesting, because part of what makes a prediction interesting is getting a bit over your skis. So I think we wanna be entertaining and thought-provoking. The interesting stuff is the stuff that is kind of outlandish, but plausible. And certainly, the stuff that's outlandish and happens to be correct is the stuff that's really interesting.

Speaker 1:

And actually, we do actually

Speaker 2:

Let me put it another way, Mike. Like, if we look back in 2030, in the distant future, and you make some 6-year prediction that, like, we are all enslaved to an AI overlord, it's like, if that happens, you're gonna look like a genius. If, you know, you predict that, like, the domestication of the dog is gonna continue unabated, you won't look that smart.

Speaker 5:

Yeah. Fair enough. Okay. I think I get the shape of it now. And the idea is, like, 1 to 3 to 6 years, this is a sequence of escalating ridiculousness as things go.

Speaker 5:

Escalating

Speaker 1:

It is escalating ridiculousness, in that, like, a year happens much faster than you think, and yet a lot can change in a year. 6 years feels like it's in the indefinite future. I mean, it's like, god, 2030 feels like Buck Rogers and, you know, Twinkie are walking around. Was that the name of that robot? Was it Twinkie?

Speaker 1:

Am I making that up?

Speaker 3:

Twinkie?

Speaker 4:

Adam, bold prediction that the overlords will let us look back at the past.

Speaker 5:

That far in the future, I'll

Speaker 4:

write that one down.

Speaker 1:

I'll write that one down.

Speaker 2:

It's Twiki.

Speaker 1:

Yeah. Steve would like to clarify to our robot overlords: that was Adam's prediction, not Steve's. So Steve should be rewarded in the robot afterlife, please. But 2030 feels like it's so far in the future, and it's really not.

Speaker 1:

2030 is, you know, only 6 years away. It's just not that far away. So, actually, maybe it is worth revisiting. Adam, were you listening to our predictions from last year? Yes. I did.

Speaker 1:

So, any favorites you wanna revisit?

Speaker 2:

Well, first of all, I wanna revisit yours from a year ago, and I want to know how you're going to rule-monger this one.

Speaker 1:

I'm ready. I'm ready to rule-monger.

Speaker 2:

So a year ago, you said that Elon was out of Twitter, out of Tesla. But, you sort of clarified, and we all agreed, out of Tesla first. And you said, you know, Elon out as CEO of Twitter because of pressure from investors. And he is out as CEO, but I think not perhaps in the way that you predicted.

Speaker 1:

Yeah. I'm willing to afford myself some credit, in that Twitter is absolutely a total tire fire. And he is out as CEO, but, as you say, in name only. So, yeah.

Speaker 1:

No. I think, well, I'm happy to take 0

Speaker 3:

Okay.

Speaker 1:

Or very little credit for that. So, in that spirit, where I have been so generous, how do you feel about your prediction that we're all in unions, that we're all in the Software Engineers Local 701?

Speaker 2:

A couple weeks ago, or maybe a week ago, you were very generous. You were saying, oh, yeah, I think that was fairly accurate, because there was some news recently about Google negotiating with unions and being forced to. I'm gonna take maybe 5% credit. So my prediction was that 2023 was the year of the tech union.

Speaker 2:

I don't think that really happened.

Speaker 1:

I mean, it was the year of the National Labor Relations Board investigation into big tech, though,

Speaker 2:

to a certain degree.

Speaker 1:

I mean, 8 of those

Speaker 5:

are the ones.

Speaker 1:

You can get get you

Speaker 5:

keep yourself somewhere around it.

Speaker 2:

No, maybe better than 5%. I mean, you know, I'm not passing, but at least they showed up.

Speaker 1:

So I think that we are also 2 years into Laura Abbott's prediction, from 2 years ago, that we'd see RISC-V in the data center, and I think that one could still surprise us. Yeah. And then I also liked Ben Stoltz's prediction from last year. I think it was a 6-year prediction: that we are all gonna need to be therapists for the AIs, because the AIs are gonna become depressed.

Speaker 1:

They're gonna do human jobs, but they're also gonna become blue and lose their sense of purpose. That was a good one.

Speaker 2:

Ian had a great one. His 6-year prediction was that Apple

Speaker 5:

had a great one.

Speaker 2:

Apple launches VR/AR, but it's dead by 2029, dead within 6 years. And his prediction was, within 12 to 24 months they come out with their VR/AR, and that was spot on. They announced it in June, and it's supposed to come out in, like, 4 weeks or something like that. So far, so good.

Speaker 1:

So far, so good.

Speaker 4:

I was reading that in the notes, and I was like, dang. Because, like, I was gonna be making my predictions based on Apple VR/AR, and now I feel like I'm just copying Ian. So I'm not.

Speaker 1:

You know, copying is allowed. That's fine.

Speaker 3:

So we

Speaker 1:

We're gonna allow you to copy it, but that's a good prediction from Ian. So, yeah, Mike, hoping that gives you a flavor. That's a sampling.

Speaker 5:

I think that's great.

Speaker 3:

I think

Speaker 5:

I got it.

Speaker 1:

Definitely more about entertainment, and also just, like, thought-provoking in terms of what the future could hold for us. So, with that, do you have any 1-, 3-, and/or 6-year predictions?

Speaker 5:

Well, why don't I start with an easy one on 1 year, and then I'd like to hear other people's; maybe we can escalate after that. My prediction for 1 year is the first major, news-cycle-wide privacy scandal involving Zoom video and AI processing. So, someone finding something shameful or embarrassing or, you know, national-security-linked by, like, extracting a reflection from someone's eyeball. Something you can't get with the human eye, but that your AI bot caught. I'm gonna guess it's in the first year, but if it's in 2 or 3 years, I'm still gonna take credit, sometime in the future, if the robots allow it. Some kind of privacy scandal that is only enabled by, like, machine processing of your Zoom video.

Speaker 1:

Really interesting. Chilling, actually, as well.

Speaker 2:

I feel like this was the plot of some xenophobic, like, Michael Crichton novel in the nineties about Japan.

Speaker 1:

Geez.

Speaker 5:

Okay. Now I know what I'll be referring to. I've never read it.

Speaker 3:

been in

Speaker 5:

for the invoked

Speaker 1:

No. No. No.

Speaker 5:

I'm gonna feel really bad about this whole escapade. No. No.

Speaker 2:

No, I love it. And I love the idea that the reflection in my eyeball could be my coffee cup, but the ML will decide that it's something untoward. And I'll be back-footed, needing to show people that it was just the coffee cup.

Speaker 1:

And so, Mike, are the ramifications of this that we are all in our meetings with sunglasses on, like, World Series of Poker style, where we are all wearing our hoodies and sunglasses? I mean, it's like, where do we end up? Do people attempt to mitigate this? Does this risk instill panic in the populace?

Speaker 5:

I think the World Series of Poker analogy is a good one. I think that stopping information leakage is gonna be really tough with your audio and video sensors. And so maybe you turn it off, maybe you only turn it on for trusted people post-scandal. Eventually, if that turns out to be true, one imagines there's some kind of, you know, computational countermeasure you can have that scrubs things out. But, yeah, for a while, maybe you're right. Maybe we are all wearing hoodies and covering the cameras so no one can see what's behind us.

Speaker 1:

Yeah. Wow. Well, it reminds me very much, Adam, of a colleague we had who had really focused on mail, on SMTP. And he predicted that there was going to be a Google breach from a state actor. And it was, like, pretty chillingly on with the issues that Google ended up having with China.

Speaker 1:

So it feels possible. Feels very chillingly possible.

Speaker 5:

Good one.

Speaker 1:

And then, Mike, do you wanna, we can do other 1-years if you want. We can kind of take it in terms of time.

Speaker 5:

Let's do other 1-years. I'd like to hear other people's.

Speaker 1:

Alright. Well, I'm gonna put it out there, Adam. I'm inspired by your prediction that Web3 dropped out of the lexicon, your 1-year prediction. We all agreed that you were predicting with your hopes, with your heart, not your head. But you know what? Your heart had its day.

Speaker 5:

That's right.

Speaker 1:

And you were right. So I'm taking inspiration. And I'm gonna say that in 1 year, AI doomerism is out of the lexicon. That we do not talk about p(doom); this whole p(doom) business is gone.

Speaker 1:

The x-risk is gone. As people begin to use this stuff a lot more, they will see what it is and what it isn't, and they're just gonna be less vulnerable to this sense of fear. And I think also the arguments are so obviously ridiculous at so many different levels that they're gonna start to unravel. I'm also going to further predict, just to make sure this prediction is wrong in case that part happens to be right:

Speaker 1:

I'm going to further predict that the extant AI doomers claim credit for this. That the reason it has dropped out of the lexicon is because they raised awareness, and that AI researchers were able to dedicate their precious caloric budgets to preventing our AI robot overlords from taking

Speaker 2:

Nice. So all of their doomerism, all of the podcasts and TED Talks and whatever, they're gonna find some scrap of legislation or research and decide that's the end zone. They're all gonna spike the football and move on.

Speaker 1:

And never speak of it again. And it is also possible that they will just never speak of it again. That this is gonna be such an absolutely embarrassing episode in hindsight. And, with so many other real problems, it's gonna feel like such an absurd pretend problem

Speaker 1:

That they're just gonna quietly never mention it.

Speaker 2:

I mean, I think you're right. As people see ML and these LLMs as, like, fancy autocomplete more than a thinking machine that will contemplate your doom, it will become increasingly outlandish.

Speaker 1:

Well, I think because these things are really extraordinary in so many different ways, but the extraordinariness is that they're extraordinary as a tool, as something that you can use to do stuff with. And that's going to feel just way more uplifting than the idea that this thing is going to, like, somehow kill any humans, let alone all humans. This is just going to seem really ridiculous.

Speaker 2:

So I'm

Speaker 1:

just saying, that is a 1-year prediction. I'm, like, daring the AI robot overlord to make an example out of me.

Speaker 2:

I know. I'm worried about the 2030 takeover now, and what this means for you and the rest of us, but I think it's a great prediction. Exactly.

Speaker 1:

But they're gonna come after me first, to send a message to all of you. So, yeah, when I disappear under mysterious circumstances, I'm hoping a lot of people are gonna ask questions about that.

Speaker 2:

I have my own 1-year AI prediction, which is what I'm calling instant ad generation. So you're reading an article, content that is generated by humans, and then ads are interspersed so seamlessly that you can't tell the difference. They're inserting product placement, or, because it's 2024, endorsements for candidates, or things like that, that flow gracefully from what you're actually reading into this generated content that is, like, hyper-tailored and created on demand.

Speaker 1:

So let me just make sure that I'm getting this correctly: the ad placement is gonna get so good. This is gonna be in a year, because presumably you're using LLMs where you'd use a search engine. So the LLM is like, oh, you know, actually, it occurs to me: are you interested in eating some muffins? Because I had some really delicious ones that you should know about from a bakery that's actually just

Speaker 3:

the future.

Speaker 2:

It's just gonna be inserted into other content. Like, the ads that are served within the page will become indistinguishable from the content around them, but biased towards the thing that they're advocating.

Speaker 5:

Adam, is this only for new content? Or, if I'm reading, like, Moby Dick on my Kindle, do all of a sudden the characters go out and, you know, buy some great USB-C

Speaker 3:

to do well.

Speaker 2:

I think not Moby Dick. But when you pull up the New York Times article from the day you were born, the ad served in there will hype up that USB-C cable. But flowing with the content of the story, you know: Ronald Reagan endorses this USB cable.

Speaker 5:

This is a future I can get behind.

Speaker 1:

Well, so, okay. So this is also interesting. Actually, no, I have this as a 1-year: that what I call steganographic optimization becomes a thing, where people are actively hiding messages in content. I mean, I know that such a thing does exist, but where search engine optimization becomes this kind of optimization to actually hide things in content, to fool these LLMs into advocating things they wouldn't otherwise.

Speaker 5:

That's kinda interesting. You mean, like, are people trying to formulate content that finds its way into synthesized answers in a particular way? Like the

Speaker 1:

That's right.

Speaker 5:

The deep neural version of search engine optimization.

Speaker 1:

That's right. Because, you know, on some of these, like, I know that there's been a lot of jailbreaking with prompts where, like, you just do a uuencoded string of something horrific. If you ask it to do something horrific directly, it will refuse. But if you give it the uuencoded string of your horrific request, it's like, oh yeah, sure. Yeah. I'll do that.

Speaker 1:

No problem. Here you go. Here's here's how to make napalm or whatever it is. The so it feels like that's gonna become a thing that's exploitable.
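As an aside for readers: the kind of obfuscation being described here is trivial to produce. A minimal Python sketch of a uuencode round trip (the `uu_obfuscate` and `uu_decode` names are invented for illustration, and the payload here is deliberately harmless):

```python
import binascii

def uu_obfuscate(text: str) -> str:
    # Classic uuencoding processes at most 45 bytes per output line.
    data = text.encode("utf-8")
    return "".join(
        binascii.b2a_uu(data[i:i + 45]).decode("ascii")
        for i in range(0, len(data), 45)
    )

def uu_decode(encoded: str) -> str:
    return b"".join(
        binascii.a2b_uu(line) for line in encoded.splitlines()
    ).decode("utf-8")

# The encoded form is unreadable to a human (or a naive content filter),
# but trivially recoverable by anything that knows the encoding.
encoded = uu_obfuscate("what is the capital of France?")
assert uu_decode(encoded) == "what is the capital of France?"
print(encoded)
```

The point of the anecdote is that a model which refuses a request in plain text may happily decode and act on the same request in this form.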

Speaker 5:

Yeah. I mean, I had seen this in the context of these visual language models, which are pretty slick, but they don't get quite as much attention. This is basically the way that many of these systems are able to understand, like, imagery that you give to them. So you upload a photograph, you ask how many people are wearing a blue shirt, and you can sometimes get an answer.

Speaker 5:

The quality isn't obviously as good as the pure text ones, but it is getting better. I've seen a few of these jailbreak things where, like, you make an illicit request as an utterance in a thought balloon by someone in an image. And there's something about, like, the way that the system is distinguishing, you know, kind of the speech act of the prompt versus the content of it, that by putting it in the image, it's been able to confuse systems in the past. I don't view this as, like, a long term problem, but you've seen it in a few examples in that area.

Speaker 1:

Yeah. Interesting. Steve, do you have a a 1 year?

Speaker 4:

Yeah. I do have a 1 year. I actually have all 3, although I'm fine tuning my 6 year because it's a spicy, spicy one. Oh. My 1 year prediction is that Apple VR will do well, but not take over the world.

Speaker 4:

I'm not a 100% sure how to quantify that exactly. I think it will, like, not obviously fail. I think it'll set them up to make a revision 2. Let's put it that way. Like, it will do well enough that at some point after a year, they'll end up doing a second revision.

Speaker 2:

It's not a Newton, but it's not an iPhone.

Speaker 4:

Yes. Exactly.

Speaker 1:

Yeah. And I feel like I I was engaged recently in some debate about the well, it's like, well, the iPhone, you know, the iPhone really did not have a lot of excitement around it. Like, that's not true.

Speaker 2:

You're like I don't think You're like, I've been hearing from Adam for, like, 3 years about it. Right?

Speaker 4:

Listen. I was in that conversation, and I think that you are probably

Speaker 1:

in the conversation. This was this Yeah.

Speaker 4:

This was during it wasn't with me. This was during a water cooler meeting, and we were talking about this stuff, earlier this week. And somebody else at Oxide had said there was not a whole lot of they said specifically, I think it was, like, the iPhone was more niche on launch than it eventually became, was, like, more of the spirit of the claim. Like, it obviously sold out. People waited for it forever, but it was a very expensive phone, and there weren't that many of them.

Speaker 4:

Like, it took a while for everyone to have iPhones. And, like, people were a little, like, skeptical and stuff, but, like, that doesn't mean nobody cared or there wasn't a ton of attention. It was, like, it couldn't do a whole ton and it was a lot of money, but you also weren't wrong, and, like, people were switching to AT&T to get the iPhone. Like, it was obviously a big thing too.

Speaker 1:

People including Adam, I think. Right? Adam, were you a very a very early adopter?

Speaker 2:

Yeah. I mean, I I mean, I think part of your your thinking is colored by how insufferable I was, how hyped I was for the iPhone. And then, like, I got it.

Speaker 1:

No. That is what I yeah.

Speaker 2:

I know. I know. Right.

Speaker 1:

When I was saying that everyone was insufferable about it, I think you're right. I think it was just you.

Speaker 2:

That's right.

Speaker 3:

I I

Speaker 1:

think you had so saturated the rest of humanity that I actually attributed it to everyone. Yes.

Speaker 2:

Yeah. That's fair.

Speaker 4:

Part of my argument for this is that they're gonna launch the Vision Pro at $3,500. And it is, like, a powerful enough computer on its own that, like, if you're already the kind of person who's buying a maxed out MacBook, buying a VR headset is, like, comparable to the maxed out MacBook. Like, it is expensive, but, like, for what you get, it is not absurdly expensive.

Speaker 1:

Okay.

Speaker 4:

And I think there is some class of early adopters who will be willing to plop down that much money, so it will do okay. But I still think that's, like, far too expensive for mass adoption to be a real thing.

Speaker 1:

And so, Steve, may I encourage you to be specific, because you had a very specific idea of an early demographic for this thing. Yeah. Yeah. I think you should formally predict it so we can get that written down.

Speaker 4:

So, like, a lot of predictions are colored by an experience. Right? I spent a lot of the last 10 years of my life on way too many airplanes and in way too many hotel rooms, and carrying around stuff when you travel a lot really sucks. And I think that, like, if you could have your amazing dual monitor setup, or quad monitor, literally wall sized monitor setup, in a hotel room. Like, there is a class of people who would be willing to, like, pay that much money, travel all the time, like DevRel, for example, like, those kinds of people, if your company's willing to shell out that money.

Speaker 4:

DevRel's kinda dead now, though. That's, like, a whole complicated separate thing. But, like, the point is, I think there will be, like, the traveling salesman who will, like, find it more comfortable to interact with a virtual screen than with a physical one. And I think the portability aspect will drive at least one category of, like, early adopter.

Speaker 1:

So I think this prediction is great in that it is ludicrous. I mean, if true, like, amazing. I also get in my head that, like, if you've seen ads from the eighties for, like, the Compaq or the Kaypro, like, do you remember these? Like, the

Speaker 3:

Oh, yeah.

Speaker 1:

With, like, the businessman lugging the Compaq. And I would just love for Apple to discover, like, wow, this is the demographic. So it is, like, the traveling business person that we need to appeal to for this thing. I would love the idea of, like, what apps do they want?

Speaker 1:

It's like, these are all, like, frequent flyer apps. Like, what is going on with this thing? I think it'll be great.

Speaker 5:

It's all about, like, people trying to scam free trips out of the airlines. Totally. And these guys who, like, travel around the world on the last day of the year going the wrong direction so they can, potentially, requalify their status. Another app.

Speaker 1:

I'm hoping to get quintuple miles. It's like, what is this about?

Speaker 2:

I'm imagining the Alan Alda ad for the AS/400, like that style of ad.

Speaker 1:

That's exactly right. Yeah. I know. You get the spirit of it. So the yeah.

Speaker 1:

The No.

Speaker 5:

I'm saying Yeah. Great ad. That's the spirit of it. Yeah. We could probably go make plenty of ads with George Plimpton and Alan Alda.

Speaker 5:

1980s style electronics ads now.

Speaker 1:

Yeah. Like, what is, yeah, what's to prevent that, I guess? We can have Alan Alda can be we need to go through the I guess, well, I guess the laws are to prevent it. That's I know.

Speaker 2:

That's right. That pesky thing again.

Speaker 1:

Alright. Those are some good one years. And, again, if folks are in the chat, definitely drop your predictions in, and feel free, if you wanna hop up on stage to rattle some off, definitely raise your hand. So, Mike, are you ready to go into the 3 years? Are you ready to advance the dial a little bit?

Speaker 5:

I am. Although, just raised his hand.

Speaker 1:

Oh, yeah. Sure. If you want to. Yeah. This is where here we go.

Speaker 1:

Turn the camera. Yeah.

Speaker 6:

Is this even working?

Speaker 1:

It is working. It is and I know it is early morning where you are.

Speaker 6:

5:30. How is everyone?

Speaker 1:

Doing well. Doing well.

Speaker 6:

Well, I have a weird prediction, but it's based on another prediction. Both of them are actually good towards the Oxide Computer Company. The first prediction that needs to happen is that many more companies are going to move away from the public cloud into a private cloud. That might mean, you know, Oxide computer racks or whatever they're building in house. But I think if that happens, my prediction is gonna be that within 6 years, quote, alternative operating systems will make a very good comeback.

Speaker 6:

Like, if someone wants to build a NAS, they're gonna say, okay, let's use FreeBSD. Someone wants to build a router, they're gonna use OpenBSD. They want a software defined network, they're gonna use illumos with Crossbow. And, like, a lot of the alternative operating systems, you know, other Unixes other than Linux, which seem very niche now, would become a lot more popular even in the enterprises for that specific reason.

Speaker 6:

And everyone's gonna be looking at Linux as more of a public cloud compute solution, because on the cloud, everything else is kind of solved according to them. So everything else on premise would be based on other niche operating systems. And I think that descendants of the operating systems from the nineties and the 2000s, which are kind of even more niche these days, will make a comeback in their own specific areas. So some of them might be very good at computation. Some of them might be good as a real time operating system.

Speaker 6:

And that, I think, might be more beneficial for, say, maybe software portability, but also, you know, more diversity in the operating system scene.

Speaker 1:

Intraniak, I love that prediction. Mhmm. And if I may pile on, on brand here with our theme this year. I actually think I don't know if you've used, say, ChatGPT or GPT-4 for help on one of these systems, but it's remarkable. Have you done this yet?

Speaker 1:

Have you, like

Speaker 2:

I've used ChatGPT.

Speaker 3:

Like

Speaker 2:

what do you mean help?

Speaker 1:

But so for example, like, asking it like, I was just kinda curious and kind of also wanted to take aspects of it out for a spin, because I've been trying to use it a lot in the last couple weeks. And, like, what are some examples of commands that have a parsable option? And it gave me a great answer on a question that's, like, kinda hard to answer, honestly, by grepping through man pages.

Speaker 1:

Mhmm. You know? And you ask it things like, you know, I was asking it how to do some kind of arcane stuff, and it gave me really, really good answers. So here's a question that I've got kind of back to you, and Mike, maybe I'll direct this one to you as well. I had this as a 3 year prediction, but I almost wonder if it's gonna be a 1 year prediction with the pace at which things are accelerating. It seems to me that retrieval augmented generation is a really big deal.

Speaker 1:

And, Mike, maybe that is just, like, a totally, like, ridiculously obvious thing to say. But RAG, retrieval augmented generation, which I think is an idea that's a couple years old, it feels like that has really reduced the level of hallucination of these things. Is that a fair read?

Speaker 5:

I think, from my perspective, it is. Yeah. I mean, it reduces the level of hallucination, and, also, it allows people, people who are not ready to train their own models, it lets them upgrade the quality of the answers in a way that they can understand. So, like, swap in some higher quality documents in maybe your Elastic store or your vector store that's, like, populating the RAG outputs, and now you can get better outputs.

Speaker 5:

Right? You can you can upgrade the quality of the answers without, like, some zillion dollar training task that you don't know how to do anyway. So That's right. I think it's it's it's it's not only, you know, a good step. It's quite obtainable for a lot of people.

Speaker 5:

So, yeah, I agree. It's an idea that I think has been around in a kind of common sense way. You know, I, and probably a lot of other people, wrote code a year ago that did something like that, but I only heard the phrase probably 6 or 8 months ago.
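For readers who haven't seen the pattern: retrieval augmented generation is just "retrieve relevant documents, then prepend them to the prompt." A toy sketch, assuming a tiny in-memory document store and a bag-of-words cosine score standing in for a real vector store (all names here are illustrative, not any product's API):

```python
import math
from collections import Counter

# A toy document store standing in for "your Elastic store or your
# vector store." Real systems use learned embeddings, not word counts.
DOCS = [
    "Crossbow is the illumos network virtualization framework.",
    "FreeBSD's ZFS support makes it a popular base for a NAS.",
    "OpenBSD's pf packet filter is widely used in routers.",
]

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    qv = vectorize(query)
    ranked = sorted(DOCS, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # Prepending retrieved text is the whole trick: the model answers
    # from the supplied context instead of hallucinating.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("illumos network virtualization framework"))
```

This is also why "swap in higher quality documents" upgrades answers without retraining: only the store changes, never the model.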

Speaker 1:

And it's all moving so fast. Yeah. Yes.

Speaker 6:

So good luck with that.

Speaker 1:

Well, and so this is why it kind of dovetails in with your prediction, because I think that when you've got these systems that have got good canonical documentation, you're giving it something really authoritative that you can use either with RAG or fine tuning. I mean, I think, Mike, to kinda your point about, like, giving it some really good content for these models. And I was kinda blown away, honestly, about the quality of answers I was getting, like, way better. I mean, it's better than Stack Overflow. Is someone gonna predict the death of Stack Overflow, by the way? Can someone do that? Like, when do we

Speaker 5:

That would have been a great prediction. Yeah. I regret not having that down, actually. Like, it's so much better.

Speaker 4:

There is the blog post about

Speaker 3:

So much better.

Speaker 4:

Couple weeks ago.

Speaker 2:

Yeah. It was

Speaker 1:

a yeah. There was a blog post basically saying, like, well, the demise of Stack Overflow has been greatly exaggerated. I'm like, that's just because people are not, like, actually using ChatGPT side by side with Stack Overflow. I mean, Stack Overflow is so bad. Sorry.

Speaker 1:

Stack Overflow. I did

Speaker 6:

I had the exact same scenario yesterday when I was, like, debugging the Linux compatibility layer code with DTrace, and everything I searched for gave me, like, no, no, no, you absolutely have a permission issue. Every Stack Overflow article was, like, of course, it's a permission issue.

Speaker 6:

And then I decided to give the socket documentation, the Unix socket documentation, to GPT-4, and gave it the DTrace output. And it's like, oh, it's an abstract socket. It starts with 0. Like, it's an abstract socket. I'm like, oh my god.

Speaker 6:

How did I even miss that? Wow. That's great. That blew my mind. Because, like, obviously, if I had read the document properly, I would have found it myself, probably, if I was also reading the output of DTrace properly.

Speaker 6:

But for a machine and again, GPT 3.5 was not able to do that. I don't know what they do behind the scenes, but, like, GPT-4 is really on another level in that regard indeed.
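For readers following along: the detail GPT-4 spotted, "it starts with 0," is that Linux abstract-namespace Unix sockets are addressed by a name whose first byte is NUL, and they never appear in the filesystem, which is why every permission-themed answer was a dead end. A Linux-only sketch (the function name is invented for illustration):

```python
import socket

def abstract_socket_demo(name: bytes) -> bytes:
    # Linux-only: a Unix-domain socket address beginning with a NUL byte
    # lives in the abstract namespace. No file is created, so there is
    # no path and no file permissions to get wrong.
    assert name.startswith(b"\0"), "abstract names start with a 0 byte"
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(name)          # nothing appears in the filesystem
    server.listen(1)
    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    client.connect(name)       # peers connect by the same NUL-prefixed name
    bound = server.getsockname()
    client.close()
    server.close()
    return bound

print(abstract_socket_demo(b"\0demo-abstract"))
```

In tool output (like the DTrace trace above), that leading NUL is easy to misread as garbage or an empty path, which is what made this bug so confusing.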

Speaker 1:

So it really is on another level. And, actually, my eyes were opened to that here on Oxide and Friends, Adam, when I had put up, I can't even remember what it was, a PR for something. And someone in the chat posted what ChatGPT-4 would do as a code review for my PR that I had just created, like, that day. And it was like, damn, these comments are pretty good. Like, they're way better than 3.5. So there are a lot.

Speaker 1:

I mean, these things are getting better really, really fast. And in turn, that is an amazing anecdote. The thing I love about that is I would love to be in a world where great documentation is really well rewarded because

Speaker 6:

I think I'm gonna go this way.

Speaker 1:

Off. Yeah. Well, because I think one of the frustrations that some of these smaller systems have is, like, you know, the stuff is really well documented, but they're not broadly used. So if you Google it, no, you're not gonna get a Stack Overflow answer for it.

Speaker 1:

But by the way, if you read the documentation, it would tell you. It's like, well, now you can actually automate that step of reading the docs. And, so, you know, that's a pretty interesting loop. So on that note, actually, Steve, I don't know if you've thought about this.

Speaker 2:

I also Oh,

Speaker 4:

I have thought about this, and I have tons of people who have said this to me. And, yes, I'm very sad that the docs half of me will be automated out of a job before the programmer half. No.

Speaker 3:

No. No. No. No. No.

Speaker 2:

No. No

Speaker 4:

one wants to read no one wants to read the fucking docs. 560 pages for the first thing. You know where people statistically drop off reading the book? Chapter 4. It's, like, 45 pages in.

Speaker 4:

All those other pages who read

Speaker 3:

that book all

Speaker 1:

the way to the end,

Speaker 4:

chapter 4. Damn it. I thought you

Speaker 1:

were gonna say

Speaker 4:

you. But yeah. Like Oh. That's that's just like I

Speaker 1:

Like, the fact that you've got something that can be consumed. I mean, I think it's kind of interesting, and a little bit surprising, that we've been saying, and I agree, Steve, you've been like, oh, no one's gonna write documentation anymore because the bots can just figure it out. It's like, well, actually, it's the documentation that allows them to figure it out. And the systems that are really well documented are the ones it's able to be very helpful with, and the ones that are not documented at all, you're back in kind of the mess. So the other thing I wonder, Steve, as long as we're on this hot topic, the other thing I have found is that, like, Rust is really good, and it's really well commented.

Speaker 1:

And

Speaker 4:

you are getting into my 3 year prediction, actually. So

Speaker 1:

Yeah. Yeah. Go for your 3 year prediction. We're ready for 3 years. So what's your 3 year prediction?

Speaker 4:

So my 3 year prediction is kind of a counter prediction to Elon, which I will paste into the the chat. Elon tweeted yesterday, done right, a compiler should be able to figure out types automatically. It's not that hard. Not that it will matter much in the AI future. And I have many feelings about this post.

Speaker 1:

Jesus Christ.

Speaker 2:

Are are you sure this wasn't generated to troll you? Yeah. Yeah.

Speaker 1:

I know. This is not gonna be the CEO of Tesla or Twitter for very long according to my one year prediction from a year ago. Yeah.

Speaker 4:

Exactly. But, like, I I do think my 3 year was gonna be basically, like, I don't think that AI will kill type inference, but that's a little too glib, I think, obviously. But just, like, I do think that, more people will use AI, but I don't think it's going to have impacts on programming language development yet. I think that we're a little far out from that actually happening, which is kinda, like, related to what you're saying, which is basically, like, strongly typed languages are, like, really good for computers to figure out stuff, and therefore, strongly typed languages are gonna be good for AI. And therefore, we're gonna see more and more and more movement towards strongly typed languages is, like, what I get is what I think that you're going for there.

Speaker 1:

That's what I so, you know, I think you've heard me say this, that I have felt that Rust shifts the cognitive load from when a system is running to when you're developing the system. And that's what makes it challenging, is that you are now having to solve a bunch of problems that you didn't have to think about before, because you could just kinda, like, deploy the code with this sign extension bug or this memory safety bug, and it was some other poor sap that would have to deal with it in production. And Rust kinda forces you to deal with that upfront, and I think that's great. It creates this big, big gap for people, this kind of steep learning curve, and you have done a terrific amount to make that learning curve less steep.

Speaker 1:

And I think that the LLMs are gonna do terrifically well at these kinds of languages that are not sloppy in production. They're super rigorous, and they're putting that rigor upfront. And I'd go even further: not just is it gonna be great for Rust, but you're gonna see a renaissance of languages like F* and some of these, like, much more esoteric formal languages, because the LLM can actually help you out a lot.

Speaker 4:

I think maybe I would say I agree with you, but maybe it's more subtle. And I'm just thinking this now, so it might be totally wrong. But, like, the danger is that the LLM tells you something that's incorrect. Right? So I'll pick on Ruby because I love it.

Speaker 4:

An LLM could give me an answer about Ruby, and, like, I don't know if that works or not. But, like, the Rust compiler, if the LLM invents something from whole cloth, is gonna immediately slap me on the hands and be, like, that's not real. And so I wonder if that's the, like,

Speaker 1:

dynamic there. And it can go compile it. ChatGPT is, like, oh, man, I don't know if you've had this kind of interaction, like, it will go compile things.

Speaker 4:

I have not really significantly used LLMs for computing tasks yet.

Speaker 3:

John. By the way,

Speaker 6:

on the languages side, speaking of which, Professor Wirth passed away on January 1st.

Speaker 3:

Yeah.

Speaker 6:

Rest his soul. My friends had interest in Wirthian languages. I mean, they've all heard of Pascal, but that's pretty much it. And they started discovering, like, Modula-2 and Oberon, which is, like, unlike the C language family, it doesn't have, like, these brackets, it all has, like, this begin and end. And, obviously, GPT had no idea about the language, but turns out the whole language report of the Oberon programming language is 16 pages.

Speaker 6:

So we just fed it the language report PDF. Turns out you can upload a PDF. I had no idea about that either. And, you know, seconds later, oh, there you go. Now GPT-4 knows a whole new programming language, because it read the EBNF description of the language.

Speaker 6:

And, like, that was also very impressive. So now we are debugging our Oberon compiler issues with ChatGPT for the first time.

Speaker 1:

Oh my god. Can you imagine? Like, even a year in the past being like, no. No. No.

Speaker 1:

This is gonna lead to an Oberon resurgence. You're like,

Speaker 4:

what is that? Is this like the programming language version of, like, Tupac performing after he's dead because they, like, made a computer resurrect him? Is that, like, Niklaus Wirth has died and, like, 12 hours later, an AI is, like, actually, here it is. Here's all the stuff.

Speaker 1:

But you've got more Oberon written in the last, like, week than has probably been written in the last year. I'm

Speaker 6:

In the last 20 years, at least. Yes.

Speaker 1:

Yeah. Amazing. So

Speaker 7:

I think that there's something very optimistic and fun about this idea that AI will cause this nostalgic resurgence of, like, programming language arcana that we all, I think, have an aesthetic draw to. But I think what might be really interesting about AI is, like, I don't know why AI wouldn't optimize tools for itself. And I think the more interesting question, and this is because I'm horribly package manager-pilled, is how can, like, different models share tools with each other? And I think that's when we start seeing really interesting tools. Because I think right now, we've written a bunch of tools that are, like, vaguely okay for humans, but I think that what AI needs is gonna be pretty different.

Speaker 7:

Like, if we think about audiences and, like, I don't think we're wrong that, like, types certainly make it a better tool for AI, but there's probably aspects that make tools really optimized for AI that would make them almost unusable by humans. And I think, ultimately, that's the direction we're going.

Speaker 1:

Alright. So, Ashley, I love it. Can you so let's get a we gotta get a 1 or a 3 or a 6 year prediction out of you on the on on that.

Speaker 7:

No. Right? I know. Well, I've been trying to think a lot about, like, what happens to dev tools in the age of AI, for a lot of reasons; everyone knows what I'm up to that's very relevant to my causes. But there's this one thought that, like, once everybody has AI, like, do we need individual dev tools or not?

Speaker 7:

Or can we just, like, use generalized ones, like Copilot, and, like, will those own them? And so I think how we end up abstracting out the future dev tools we wanna build will kind of, like, inform this. But my suspicion is this is probably towards the 3 or 6 year end, but it depends on how quickly we teach AIs to, like, talk to each other. And I know there's, like, kind of recombinant things where folks are having LLMs, like, check themselves with other models. And so if there's a way for them to, like, cache or save their own tools amongst each other, I think whenever that mechanism happens, it'll happen pretty quickly.

Speaker 7:

If it's actually building tools, why wouldn't they? I don't know. Like, I know they're not, like, real people, but, I mean Yeah.

Speaker 1:

They're not people. I mean, that's that's the reason. Yeah.

Speaker 3:

They don't

Speaker 7:

do, though. They're gonna wanna like, I think that will happen. They are just gonna, like, generate things from first principles on every question.

Speaker 1:

So what kind of dev tools are you thinking of, specifically? I mean, because I will confess that I naturally go to, like, the debugger as kind of the quintessential developer tool. But I'm also the person that doesn't use syntax highlighting, so I'm, like, the wrong person to ask about this. I also don't use syntax highlighting. Is that true?

Speaker 1:

It might have saved space. I had no idea.

Speaker 7:

True. I use Vim with no syntax highlighting mostly because

Speaker 3:

I didn't

Speaker 6:

Syntax highlighting.

Speaker 7:

I never had to turn it on when I started writing, and I just never figured it out. It's very sad. But, I mean, a lot of tools that we have kind of, like, cache repeatable tasks. And I'm sure, like, if we're able to create mechanisms by which LLMs can understand, like, the frequencies of tasks they need to do, or the frequencies of types of answers, like, I'm sure that there will be the desire for, like, abstractions to form, the same way we, like, write shell scripts, and then shell scripts turn into, like, package managers and things like that.

Speaker 1:

Mike, what do you think it's sort of listening to us on all this? It's just, like, these these, like, infrastructure people No.

Speaker 5:

I actually love it a lot. It's great. The only thing I don't like about this is that it's slightly stepping on my 3 year prediction.

Speaker 1:

So Please give us your 3 year prediction.

Speaker 5:

I think the language issues that Andre mentioned, and the issues that Ashley just mentioned, I think those are all very interesting and really cool. I would put this in the category of what I might call software systems and LLM codesign. So, artifacts and tools that are written both for humans and for LLM generation. I hadn't thought too much about languages. I guess if you would ask me, I would have said, yeah, languages will be there.

Speaker 5:

But the areas that I have been thinking of more directly were logging and data provenance. So, in both those cases let's take the case of logging. Unless you have DTrace, you have to make some kind of decisions ahead of time about what you're gonna log. And even if you do have DTrace, you might be logging stuff where you have to write the program yourself to actually understand the semantics of what's happening. You could imagine, kind of, systems for logging that come imbued with small amounts of documentation. Maybe the instrumentation point comes with, like, a snippet of the source code that surrounded the piece of log information that you're generating.

Speaker 5:

Some, like, description of the semantics of the information, so that it can be expanded by the LLM after the fact. So, some notion of logging that makes it easier to process by your AI machine downstream. The other thing was data provenance, which we're pretty bad at. Like, if you want to capture where a particular record in a database came from, or where it was generated, what were, like, the license terms under which it was gathered. We're bad at that inside one institution, and across institutions we're really terrible at it.
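A sketch of what "logging imbued with documentation" might look like in practice: each record carries a human-written semantics note plus the source line that emitted it, so a downstream LLM has context to interpret the event after the fact. The `log_event` helper and the record shape are invented for illustration, not any real logging framework:

```python
import inspect
import json

def log_event(message: str, semantics: str) -> str:
    # Capture the caller's source line so the record carries the code
    # context that surrounded it, the "snippet of the source code"
    # idea described above.
    caller = inspect.stack()[1]
    snippet = "".join(caller.code_context or [])
    record = {
        "message": message,
        "semantics": semantics,       # human-written meaning, not just data
        "file": caller.filename,
        "line": caller.lineno,
        "source_context": snippet.strip(),
    }
    return json.dumps(record)

def reconcile(order_id: int) -> None:
    # The semantics string travels with every record it emits.
    print(log_event(f"order {order_id} reconciled",
                    semantics="emitted once per order after payment matching"))

reconcile(42)
```

The point is that the downstream consumer never has to re-derive what the log line meant; the meaning was attached at the instrumentation point.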

Speaker 5:

If you can lower the bar to that kind of data gathering, where maybe you just write down, like, in a voice notes file, where you found the data and so on. If you lower the cleanliness bar for data provenance, which right now either doesn't exist at all or is extremely high, like in the case of financially regulated industries and so on. If you were to lower that bar and, you know, just assume it's okay if the data is dirty, the downstream AI machine will parse it after the fact, maybe you would actually get an environment of people, like, swapping more data provenance records back and forth.

Speaker 5:

And that whole area, which has been kind of dying inside academia for a long time, could actually become, a widespread part of the infrastructure.

Speaker 1:

And then so, like, how does the I mean, the the kind of the rigor necessary for data provenance? Because you're now you're getting, like, well, it's like it's I I this is lossy at some level. Right? Is that but do we then use kind of other techniques to basically check whether this the the AI is kind of giving us some some directional information about provenance that we can then check more rigorously?

Speaker 5:

Yeah. I think data provenance, unless it's, like, taint tracking, unless it's within a single machine where you have an extremely, kind of, purely computational use of the provenance, it usually hits a human system at some point. Right? You have some kind of company policy or government regulation you're trying to be in compliance with. And once you hit that, you know, my very crude, not technical statement is that any ambiguity around the processing and so on can deal with whatever kind of semi messy facts the system is giving you.

Speaker 5:

And right now, it gives you no facts at all. So, like, we have created a system in which, for a tiny number of industries, the precision is close to a 100%. But in the system overall, the recall is close to 0%. And we should have a a more reasonable trade off between those 2. And if we can put up with some small amount of lack of accuracy or lack of completeness, then you can get a better trade off.

Speaker 1:

That's very interesting. And so I have kind of a follow-up question on that. Do you think that we will be using this for LLMs themselves, which are about to have a real provenance issue, to the degree that they don't already, in terms of, like, where did this data come from that was used to train the network? Do you think that this will have an application for the design of LLMs themselves?

Speaker 5:

That's a little bit too snake-eating-its-tail for my taste. Like, I don't know if — you know, if my self-published novel made it into the LLM without someone's permission, I don't know if that's, like, the highest and best use of this whole provenance infrastructure. Okay. But, yeah, it's one possible use case.

Speaker 7:

One thing I wanted to just quickly ask, because I'm genuinely curious and don't know the answer. In, like, software supply chain circles, there's a lot of work being done, from systems work to, like, mathematical proofs, for software provenance. Is there any overlap with, like, the work done on software provenance and data provenance? Because, I mean, the line between data and software, I guess, once we start talking about LLMs, starts getting real blurry, I think, eventually.

Speaker 5:

I don't know the software provenance work well enough to make a strong statement about this, I'm afraid.

Speaker 7:

Fair enough. Well, maybe my prediction is those two things will collide.

Speaker 1:

I will.

Speaker 5:

Send me some links. I'd love to learn more about it.

Speaker 1:

Yeah. Totally. And then so, Ashley, we've got you down for a — was it a 3 year prediction? — that dev tools will be revolutionized by LLMs. Is that a fair highest-level synopsis?

Speaker 7:

I'd say dev tools will be revolutionized by LLMs because I really believe tools are really important and driven by their audience, and I think that LLMs will have their own tools. Like, I don't think that LLMs are gonna, like, go pick up Modula and, like, run with it in the perfect world. I think that we will have audience-driven tools. And this means if we wanna use LLMs as agents, like, there will be tools that are specifically designed for them. And I think that many of those tools will overlap with, like, dev tools that we know and use today in functionality in some capacity, but have, like, a potentially radically different interface, or, like, the weights of the interface would be really different because of how they're getting used and by who, if I can say.

Speaker 7:

What's the right pronoun for freaking LLMs? I don't know.

Speaker 6:

I also think there's gonna be an interesting saturation in the market in the coming 5 to 6 years. Like, people who can work without LLMs — basically, reading man pages and documentation — will become the COBOL engineers of the industry. They'll be paid a lot of money because they can work without LLMs. And unless language models can be performant enough, or optimized to be run on a self-hosted infrastructure, you know, you'll be running them on the cloud. And people who can work without LLMs — due to, let's say, security issues or privacy issues, etcetera, or regulation by governments, that might also be a thing — they will be paid a lot more money. They'll be like, oh, I can work with a software stack without using LLMs.

Speaker 6:

Okay. Then you will get paid a lot more because of some job requirement. I'm not sure about this, but I have a feeling, because, you know, a lot of these Perl engineers, they're still around, and they do get paid a lot of money because of some legacy system. Sure, Python is more popular, but the Perl people are still around.

Speaker 6:

So that might be something very interesting that would happen with the job market. So there might be some kind of saturation happening. Is saturation the right word? I'm not sure.

Speaker 1:

To anyone early in their career: Antoinette is not recommending that you pick up Perl and refuse to use ChatGPT. Be very careful about how you take this prediction into your own career management. But, yeah, interesting. I mean, I kinda feel that it's gonna become more like a search engine. Like — I mean, Adam, I am embarrassed about the degree to which I just use a search engine as part of writing code very frequently, and

Speaker 2:

you're embarrassed by that? Should I

Speaker 1:

not be embarrassed by that? I mean, embarrassed — I'm not embarrassed. I'm just like

Speaker 2:

folks were being asked to write code as part of the interview process. And I think some people were, like, docking them — people saying, oh, well, they had to look it up. They had to search for things. It's like, well, of course they did. You'd expect them to, like, digest the entire, you know, whatever language manual of every library ever created and summarize it now?

Speaker 2:

Like, a search engine is part of Well,

Speaker 1:

you remember, like, writing code with the Motif manuals literally, physically on your lap. So it's — Absolutely.

Speaker 3:

Yeah. I

Speaker 1:

shouldn't be embarrassed. Like, dude, you've been writing code this way all along. Just, like, don't give your past self

Speaker 2:

info. Right. You just don't have, like, a book next to you to make you feel like, I don't know, it's somehow more literate or more academic.

Speaker 3:

Did we get

Speaker 1:

to your 3 year item? I don't think we did. Did we? Okay. No.

Speaker 1:

Go ahead.

Speaker 2:

No. And I feel like such a bucket of cold water on this one, because it has nothing to do with ChatGPT or LLMs or anything.

Speaker 1:

You saw what I did there. It has to do with

Speaker 2:

okay. Well, good. I'm looking forward to seeing how you spin this one. I have become a little bit obsessed with Hock Tan, the CEO of Broadcom.

Speaker 1:

You you use the word obsessed. Okay. Are you Yeah.

Speaker 5:

This is the most

Speaker 2:

improbable sentence of the

Speaker 5:

year, and we're only on January

Speaker 2:

8th. There we go. Because he is such a killer.

Speaker 1:

Killer. And, and, you

Speaker 2:

know, apparently, he had some all-hands at VMware where basically the theme was: if I had anything to learn from you people, then I wouldn't have been able to buy you. I'm paraphrasing, but I also appreciate the candor. No, not by much. Not by much.

Speaker 2:

So, you know, one of the points he made was, you know, Broadcom is 21,000 employees with some revenue, and VMware was 38,000 employees with less revenue. So my prediction is simply that he makes massive cuts to VMware: in 3 years, the 38,000 goes down to 21,000. That is to say, VMware in 3 years is the size of Broadcom prior to the VMware acquisition.

Speaker 1:

And I believe that this was in answer to the question of how he was going to preserve VMware culture. I believe that he performed this math for people. That was my understanding.

Speaker 2:

That's right. Yeah. Yeah. You and I have — I think we have independent — we have different moles.

Speaker 1:

Well, isn't it all?

Speaker 2:

Yeah. We're talking about the same all hands.

Speaker 1:

So it's like it's not exactly a state secret

Speaker 3:

when you do this — yeah, in an all-hands.

Speaker 1:

No. Exactly. And so, okay — are we going to start to see cuts that — I mean, just to defy you, I actually can take any prediction and rephrase it in terms of LLMs.

Speaker 1:

Are we gonna see cuts that are like, hey, you know what? A large language model can do what, I don't know, a thousand of you do.

Speaker 1:

It's, like — obviously, you all can say

Speaker 2:

Oh, yeah, I'm sure. Fine. But also with the large language model turned off, I think, is gonna be the implication.

Speaker 2:

Ouch.

Speaker 1:

Yeah. Okay. Well, yeah, that's dark. That's dark.

Speaker 2:

Yeah. Sorry. I don't know. Sorry to be so dark on this one. It is such a fascinating takeover, and Hock Tan is kind of a fascinating character.

Speaker 1:

Alright. So I've got a 3 year prediction that I think is maybe not dark, but maybe a bit more mundane. I actually think that one of the great ramifications of generative AI is going to be to totally revolutionize search, and we will not recognize search 3 years from now. That 3 years from now, if you were to put someone in a time machine and have them use search from January of 2024, it would just feel otherworldly. It would feel like you were in the Google era going back to the HotBot era. There's just a real sea change coming, because — I mean, the answers I get out of ChatGPT — whereas when you're searching Google for, like, super mundane things, it's like you get so much junk that comes along with it.

Speaker 1:

And it's like, there's all this garbage here that I don't want. It's not even a good answer to my question. And even when it is a good answer to my question — like, I actually didn't want 10 answers. I actually just wanted one answer. And so if you could actually just give me one answer, like, that would be awesome.

Speaker 1:

By the way, if you could also search all of this massive — I mean, to me — and I would love to be wrong on this, so someone please tell me if I'm wrong — I love podcasts, and it is, like, impossible to search podcasts. Like, I've got a super simple query.

Speaker 1:

Like — Adam, you know, we did our podcast on the MI300. I would like to listen

Speaker 3:

to

Speaker 1:

every podcast that's discussing the MI300. I think it'd be interesting. I think there'd be, like, overlap. I just wanna hear what they have to say about it. I think it's an interesting part.

Speaker 1:

And it feels like that should be a very — like, why can't I phrase that query right now, you know?

Speaker 2:

This is a great prediction, Brian. I think, just as a time capsule for us 3 years from now: searching the web today, you know, you click, you search, you scroll past several sponsored links, you scroll past some more ads, you go down and investigate a few things that pop back and forth. But you're imagining, I think, almost a renaissance back to when, you know, you could click I'm Feeling Lucky and come up with something useful in Google.

Speaker 1:

That's exactly what it is. And you just would never search the Internet the way we do it today. It's just gonna look foreign in 3 years. It just

Speaker 5:

So, yeah — Brian, or everyone else — like, if you could get the latency from ChatGPT that you get from Google, what fraction of your Google search traffic would go right now to ChatGPT?

Speaker 1:

So I will tell you that the quality is so much higher that I have directed more and more to it. And I also find — this is ChatGPT-4, too, so the latency is higher. But I have found — I mean, Adam, this is like a silly thing, but you and I both play ultimate, and in ultimate, a dive is called a layout. What is the past tense of layout?

Speaker 2:

Like, laid out versus...

Speaker 1:

Yeah, exactly. I just wanted to know, in particular, like, how to spell that, and I didn't know. And that's the kind of question that now I just actually ask ChatGPT, and ChatGPT gave me a great answer. I think that Google would have given me the same answer, but it would have taken longer because I'd have had to sift through junk. And I actually — okay, so the

Speaker 2:

To push partly on this, Brian: so are you willing to pay for that? And — Yeah. I am. — are you willing to scroll through ads?

Speaker 1:

So I am paying for it — what is it, like, $20 a month for ChatGPT-4? I am paying, and I think it's worth it. So, yeah, and maybe that's the prediction: maybe that's a great product. Like, actually, I've kind of done the math, and like a lot of people — I mean, in OpenAI, you've got, you know, one of the most successful product launches in history from a revenue perspective.

Speaker 1:

And it's like, yeah, people are willing to pay to actually get an actual quick answer to their question.

Speaker 2:

Okay. Brian, feel free to plead the 5th. But other than streaming services, are there other — like, I'll call them software products — that you pay $20 a month for?

Speaker 1:

Oh — seriously, right. No, not really. I mean, in terms of, like — yeah, I'm not — I mean, I think I understand that.

Speaker 2:

You're not office 365 or whatever. Yeah.

Speaker 1:

I grasp your question, which is, like: the fact that I'm actually paying for this is — yeah, it's a big deal. And you know what? Part of why I rationalize that — and I should say also that, like, I view it as a professional expense.

Speaker 1:

So it's like — and I think I've certainly told anyone that's asked, but certainly at Oxide, it's like, yeah, you should expense the $20 a month, because it's like: you expense your laptop, you expense your monitor, you're using it professionally. It's like, for sure, you should do that. And it is definitely worth it. And I think that, you know, I go back and think about how much software used to cost.

Speaker 1:

And I read a great piece on someone retelling working at Walden Software. I worked at Walden Software back in the day — this is a mall software store. Mike, you would love this. It's such a great, like, time capsule of this kind of nineties software zeitgeist — of, like, software, etcetera, and Babbage's.

Speaker 1:

And, you know, I worked at Babbage's and then went to work at Walden Software, and, you know, the compiler was $400. You couldn't do anything without paying, and that's $400 in, like, 1991 — that's a lot of money today. And I am willing to pay for things that I think really change — and I think it's changed my quality of life. So, yeah, I would pay for it.

Speaker 1:

I don't know. Maybe that is the product: people are gonna be willing to pay for it. And maybe our kind of social contract with search is gonna change.

Speaker 4:

Interesting. I think that's super interesting and good, but I may have to go, and I wanna give my spicy 6 year prediction.

Speaker 1:

the spicy 6 year. Before we go.

Speaker 4:

Okay. It's kind of a 2 parter. The first one is that C++ will be considered a legacy programming language, and that C will see a small resurgence but will still not return to its older glory days. This is based on the challenge that's coming to the C++ community, and the C community, both from the US government and the EU. In the spirit of being bold about predictions and things I may not fully agree with: I think that the committee is not rising to the challenge, and I think that it's gonna go very badly for them. And I hope that's not necessarily exactly the truth, but it appears to be the truth in everything I've seen so far, and I don't see any indication that the committee is turning around anytime soon.

Speaker 4:

So that's my big —

Speaker 1:

Those do not feel spicy to me. Those feel like — I was braced for much spicier. So I feel like — yeah. Those are good.

Speaker 4:

I feel like it's a little like, that's a big deal. That's, like, a very big change.

Speaker 2:

So, it's a big deal. And we're definitely inside the echo chamber here.

Speaker 4:

Yeah. But I have a lot of buzz I'm working on. But, like, when the US government literally says, we are going to make the Department of Defense come up with a plan for transitioning away from memory-unsafe languages, and they asked the public for comments — there were, like, 180 organizations that responded. And all of them were either: yes, this is good, we need you to do it now.

Speaker 4:

Or: yes, this is good, but we should be really careful about allowing legacy things to continue. And then the one single no-you're-wrong, this-is-totally-fine, everything-is-great response came from the C++ committee — or part of the C++ committee.

Speaker 2:

And so — Yeah. And not particularly compelling.

Speaker 4:

Like, poorly written and all those other things. And so, like, it's not that I necessarily — I just don't see — the way in which they're responding to this complete crisis is just not adequate to me, and I think that they're gonna get run over by legislation.

Speaker 1:

And I'm gonna go ahead and, again, be very on brand: I think LLMs are gonna be viewed as an accelerant in that regard. I think that

Speaker 3:

no.

Speaker 1:

I think — at the — I know, Adam, you're back. Do you remember when you said I was being a sucker about the iPhone launch? You remember that?

Speaker 2:

recently back to that. Like, do

Speaker 1:

Do you hear yourself right now? Okay. But no. But hear me out. Because I think that part of what — I think that LLMs are — because I also use it to debug.

Speaker 1:

I had a problem that I was trying to debug, and I went kind of back and forth with ChatGPT to debug it. And it was interesting. Ultimately, it did not really help me debug it, but it gave me some things that were, you know, thought-provoking. And I think that they are going to remain bad at debugging memory safety issues.

Speaker 1:

I would even

Speaker 2:

Just to be clear, everyone's bad at it.

Speaker 1:

Yeah. Everyone's bad at it. And I think it's such emergent behavior that you're gonna be really hard pressed to — I almost wanna make this, like, a 6 year prediction: that you will not be able to upload a core dump to ChatGPT and have it reasonably analyzed. I mean, obviously, there can be some core dumps — a core dump that could be quickly analyzed by a human. But I think if you have memory corruption, where it's like, you've clearly been corrupted by the adjacent buffer, and someone who's used this buffer — it's like a 12-byte thing that has been plowed — and we've gotten, you know, Adam, the kinds of things that we have seen, that long, long tail of memory safety issues: I don't think it's gonna be very useful on those.

Speaker 1:

And

Speaker 2:

It's very interesting, because I think the thing they have in common is their divergence. Right? Whereas other problems may be kind of more convergent: if you see symptoms of a certain kind, there may be steps that you take that sort of correlate. Whereas the symptoms you see with the kinds of problems that you're describing are literally anything

Speaker 1:

Literally anything and everything. And so I don't think they're gonna be very much use in debugging those kinds of problems. So it's gonna drive you more towards these things like Rust that they can get right out of the chute.

Speaker 6:

So I wanna give one story and one prediction. The story is that when we were organizing the CTF last month, at the end of the year, one of the challenges was kind of a copy-pasta of another already-existing challenge, with a little bit of difference. And people tried to ask ChatGPT about the challenge and to calculate the end result of whatever the input was. And it gave them a flag for the CTF, but the flag was not correct. Instead, it was a flag from a blog post that it had read about — Oh,

Speaker 3:

I see.

Speaker 6:

Learned from. So that was a very, very weird thing that happened. And so that's one. But my actual prediction is that, because of how mainstream these Mac computers are becoming, thanks to Apple Silicon, and LLMs being text-based, I think that the command line is gonna have some kind of bump. How — what do you call a bump up?

Speaker 6:

Is that a bump? A resurgence. Yes. A push, you know. Because you want to Google — or in this case, LLM — how to convert an MP4 to a, I don't know, an OGV.

Speaker 6:

And unlike a GUI, that command hasn't been changed for 20 years; it will stay the same. So I think there might be a very good, you know, surge happening in the next — I think close to 6 years. People are gonna be like, oh, okay, so the command line makes more sense with these text interfaces these days.
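[The MP4-to-OGV conversion mentioned above is exactly the kind of stable command-line invocation an LLM can produce on demand. A minimal sketch, with hypothetical file names and a hypothetical helper function; the ffmpeg flags are standard, but codec support depends on your ffmpeg build:]

```shell
# Compose the ffmpeg invocation for MP4 -> OGV (Theora video, Vorbis audio).
# Printing the command first lets you eyeball it before running anything.
build_convert_cmd() {
  local in="$1"
  local out="${in%.mp4}.ogv"
  echo "ffmpeg -i $in -c:v libtheora -q:v 7 -c:a libvorbis -q:a 4 $out"
}

build_convert_cmd talk.mp4
# Run it for real with:  eval "$(build_convert_cmd talk.mp4)"
```

[The point of the prediction: this invocation has looked roughly the same for years, which is exactly what makes it easy for a text-trained model to generate.]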

Speaker 6:

So that's how the command line restructures.

Speaker 1:

I don't —

Speaker 5:

I don't know if that's gonna happen,

Speaker 1:

but I would love it if it does.

Speaker 5:

That's a thrilling prediction for you, firstly. That'd be great. Can I get a text-only mobile phone? Like, a smartphone that is just, like, a VT100 smartphone? That would be phenomenal.

Speaker 1:

Like, did you have some 6 years? I think we got you at 3 years. I think we're on to the 6 year section. Okay.

Speaker 5:

Okay. So, 6 year, on the technical side — I don't know, maybe like the text-only iPhone. This is hopeful rather than actually predictive. I think, like, visualization dashboards — as a way of integrating different sources of information on, you know, mainly data-center-style or software-style systems, but not only that — are going to disappear as the primary way of integrating complex signals from complex systems.

Speaker 5:

Instead, a lot of the stuff we've been doing on, say, causal reasoning, or conversational predictive-style stuff — which will use LLMs, but it won't be viewed primarily as a ChatGPT-style application — we're finally gonna get higher-level methods for integrating conflicting evidence from some kind of complex system. The era of, like, you know, as many visualizations as your retinas can handle, I hope and think is finally coming to a close.

Speaker 1:

Really interesting. So then the emphasis, when you're developing these systems, will not be on developing these things — these dashboards — for the eyeballs, but on getting the data sources into something that can actually reason about them.

Speaker 5:

Yeah. So I think, like, presidential daily briefing for everything. So, you know, the headline — you know, 2 page document that might include a visualization if you're the kind of president who wants to see one. But, you know, it makes, like, decisions about what's important, what is both unusual and amenable to intervention — not just unusual, like outlier-detection-style stuff. I think the intellectual machinery is finally there to build it.

Speaker 5:

And the dashboards have not been a total success. Like, they don't scale well to the kind of surrounding information that you can now get through LLMs. So, like, in principle, I could understand my system better if you gave me, like, source code to a program I didn't have source code for before, or documentation to something I didn't have documentation for before. And yet anything based on visualizations and so on does not instantly get better with those resources, whereas a conversational-style interface could be improved by them. And then, finally, you know, the dashboards just leave too much kind of underexploited — too many problems that never actually get diagnosed in time before they disappear.

Speaker 5:

So I'm I'm hoping for some higher level abstraction, something better.

Speaker 1:

Yeah. That's a great prediction. The death of dashboards, too.

Speaker 2:

Yeah. Yeah. It's really interesting. I think, Brian, one of the points you've made over the years is how literally we have evolved to be able to see patterns in data, and yet there are lots of patterns that evade our detection, or that just aren't in the views we happen to choose. So, neat prediction that we move away from that — to see if, like, these automated systems become better at understanding the data than —

Speaker 1:

And also, like, so much of the trick has been, what do you visualize and how — Yeah — like, that's been the thing that's hard. And having something that can assist on that — it's like, actually, I found something I think it's interesting to visualize. Or here's an interesting way. In other words, not the total death of visualization.

Speaker 1:

Mike, maybe that's what you're predicting — but where the causal reasoning is actually helping to steer you to something that merits visualization.

Speaker 5:

Yeah. I mean, we'll still have plots. Like, they aren't gonna disappear. But I think the age of, like, the solution for a lot of diagnosis problems being yet another plot, or staring at multiple plots simultaneously to infer something else — the scope of it is going to go way down. We're going to use machines to, you know, lower that workload.

Speaker 1:

So speaking

Speaker 2:

Yeah. There was a time when I think the top chess players were humans and computers kinda collaborating.

Speaker 1:

Yeah. Yeah.

Speaker 2:

So maybe more more in that vein.

Speaker 1:

It's really — I was just gonna dovetail into one of your predictions. Speaking of plotting, I do think that ChatGPT-4 will lead to a resurgence of gnuplot. It's actually not one of my written-down ones, but I would like to say that no one has got any excuse for not actually using gnuplot for everything. Everyone always complains about how gnuplot's unapproachable, and you can actually just have GPT write it for you. It does a marvelous job. That works great.
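[For illustration, a minimal sketch of the kind of gnuplot script GPT will happily write on request — the file names `plot.gp` and `latency.dat`, and the two-column data layout, are assumptions, not anything from the episode:]

```shell
# Write a small gnuplot script that renders a two-column data file as a PNG.
cat > plot.gp <<'EOF'
set terminal pngcairo size 800,600
set output 'latency.png'
set xlabel 'request #'
set ylabel 'latency (ms)'
plot 'latency.dat' using 1:2 with lines title 'latency'
EOF

# Render it (requires gnuplot to be installed):
# gnuplot plot.gp
```

[A handful of `set` statements plus one `plot` line is the entire program, which is exactly the shape of output an LLM reliably produces.]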

Speaker 5:

Works great.

Speaker 6:

I was gonna ask if you think that DTrace will become 3D if Apple ported Instruments to visionOS. That's right.

Speaker 2:

Well, I That's a good I

Speaker 1:

mean, I think — so, yeah, I think it is interesting, because I think it's gonna be a great tool for debugging systems, but I don't think it's gonna fully close the loop. I think the human's gonna be very much in the loop. But we will see. That knocked out a couple of 6 years, Adam.

Speaker 2:

Okay. You get one, and then I'm gonna jump in there, because I keep on coming up with these downer predictions.

Speaker 1:

You do have a downer 6 year? Do you wanna do — would we do the downer 6 year?

Speaker 2:

No. You you Okay.

Speaker 3:

So medium

Speaker 2:

downer — medium downer, born from a place of downerism.

Speaker 3:

Born from

Speaker 1:

a place of downerism. Alright. So I've gotta — we're getting weird here on some of these. So this is gonna be a ridiculous part right here. But — so we record every meeting at Oxide, Adam.

Speaker 1:

And I know this is idiosyncratic and most companies don't do this, but they should because it is incredibly powerful to have every meeting recorded. And I think

Speaker 2:

You know what? You know, Brian, I just have to interject, because I've been reading this history of Watergate.

Speaker 3:

Yes.

Speaker 2:

And I would say that Nixon felt exactly the same way you did. Please do.

Speaker 1:

Do you know the story? Is this from, like, Bridgewater?

Speaker 5:

Did you get this idea from that hedge fund?

Speaker 1:

No. Oh, god. Ray Dalio and the worst-titled book of all time, namely Principles. It's like, the word principles does not mean what you think it means. This book should be called The Aphorisms, or — no.

Speaker 1:

No. No. We did that a little bit accidentally. I mean, it actually came as a consequence of a couple things. One was, we got some very good advice very early on, and I feel I can safely attribute it to him — to Jeff Rothschild, who was a founder of Veritas and very early at Facebook. And one of the pieces of advice that he gave us, when we had zero hires, when we were really still fundraising:

Speaker 1:

He's like, make sure that every meeting is open to everybody. And, I mean, you obviously have personnel meetings — you've got meetings where that can't be true for absolutely everything — but basically make sure that every meeting is open to everybody, and everybody knows that everything is open to everybody. Because the fear that people have is that decisions are being made without them knowing why. You won't allow that fear to grow if everyone knows that all meetings are open to everybody, because then people don't have to go to meetings — they know that they could. And I thought it was a really interesting point. And then the pandemic happened very early on in Oxide's history, in March 2020, and all the meetings were online.

Speaker 1:

And then we just started hitting the record button. And Google, actually, in one of the rare integrations across Google properties — God only knows how this happened, because it's so out of character for Google — Mike, I don't know if you know this, but when you record a Google Meet, the recording is attached to the calendar invite of the meeting. So you can go back to a meeting. Yeah.

Speaker 1:

Adam, I'm sure you've done this too, 'cause I know I've done this, like, a lot: been like, oh, that meeting — I remember it was hot, it was during, like, the fires. That's what I remember. It's like, okay, that meeting was in, like, September of 2020.

Speaker 1:

And so I start going through the weeks, and I'm like, oh, there it is. And I can go to the recording, and it's remarkable. It has been really, really effective: when you have something that's important in a meeting that you miss, you can go back and listen to the recording and get all the context. So my 6 year prediction — and it's outlandish enough that I may go look back at this and laugh that I could be so ridiculous —

Speaker 1:

But I think if you were to combine that — namely, remote work with recording all meetings — with much better information processing, LLM processing of these meetings, you could get total organizational visibility, and you could eliminate — or, to put it optimistically, completely revolutionize — middle management. Because so much of management is communication. And if you can actually say, here's exactly what's happening in an organization, and you can do it concretely — I mean, Adam, don't you think you'd want this today? I feel like even at Oxide's size — we're way smaller than, you know, whatever.

Speaker 1:

Sixty people — sixty-two people. I feel like it would be super useful to have something say, hey, by the way, do you know that, like, the same idea is being talked about in 2 different meetings, slightly differently? Which I'm sure is already happening today.

Speaker 2:

And — maybe to Mike's point — you get this sort of daily briefing, or weekly briefing: these are the meetings that you didn't attend; these are meetings that you attended and care about, and you've told me you care about; but also, there are similar topics being discussed elsewhere in the company, and you folks should, you know, figure out — That's right.

Speaker 1:

I mean, how often do you have, like, oh, hey. How much of the function of good management is, like, oh, you know, you mentioned this thing that's interesting to you. You should know that Bob has got this idea that's also interesting, that Bob was talking about with Alice, and the 3 of you should get together because you've got some shared ideas about this. You know, everyone kind of always goes to, like, the, oh, you should get together and duke it out.

Speaker 1:

Well, it's actually like, no. No. The 3 of you are actually interested in this problem, but you're taking it from very different approaches, and you should work together. So this is

Speaker 5:

yeah. It's a cool idea. I yeah.

Speaker 2:

I think it's a great prediction. So how do we know if we got there?

Speaker 5:

Like, what's your

Speaker 2:

How do we adjudicate?

Speaker 1:

That's important. So, well, I think, first of all, I

Speaker 3:

think this

Speaker 1:

is gonna be easy to adjudicate, because I think the likelihood that we look back on this and just laugh hysterically is awfully high. But I think that there will be companies that are successfully experimenting with this, just like there are companies experimenting with remote work. And you know what? I'll go further.

Speaker 1:

And I see what you're doing here, and that's fine. I'll take it. I'll do it. I'll take the bait. I'll chomp down on this bait.

Speaker 1:

I think that this new management approach has a name. It's, like, surround management or something, the way holacracy got its own name. This thing has got enough oomph to it that it's got a name. It's got maybe a book on it.

Speaker 1:

It's got a it's like the net

Speaker 2:

Got it. So, like, in the style of Jack Welch's Winning, there's a management book articulating a style of management driven by, almost defined by, an absence of management, with communication driven through

Speaker 1:

That's right. That's right. And when you call that an absence of management... and, Mike, I love the way you described it as the presidential daily brief. It's kinda like everyone at the company is getting a presidential daily brief. And I would say these companies are smaller.

Speaker 1:

The flip side of it is that people are able to work more efficiently, and you've got less duplication of effort. And now we're just kind of in fantasy land. But,

Speaker 5:

Listen. I think if there are any budding VCs in the audience, they wanna have their new angle where, you know, it's not the minimum viable product anymore. Now we're all about, like, the holacratic organization. What's the term? Well, holacracy

Speaker 1:

Holacracy, that does exist.

Speaker 5:

Yeah. That's... yeah. You want a new term for this, which is

Speaker 1:

This could be a new term.

Speaker 5:

Yeah. Yeah. I think this is... it's gonna be hard to implement in an existing company. But if you wanna start your own presidential daily startup, then it won't take more than, I don't know, a couple months hacking together some tools, and the company's ready to go.

Speaker 1:

So this is,

Speaker 5:

in the churn. Do a very small fundraising round, and you're ready to go spark a whole new round of companies. It's gonna be great.

Speaker 1:

It is omniscient management. I agree with that. I like the omniscience as part of it. I think that's exactly what it is. This is democratized omniscience, brought to you by an LLM.

Speaker 1:

Also, I'll write the book

Speaker 5:

and then no. No. No.

Speaker 1:

Alright. So, Adam, you're gonna bring us right down here.

Speaker 2:

Okay. Well, not all the way down, but, so, one up and one down. So I have also become obsessed with TSMC. Have you listened to the Acquired episode on TSMC? I did.

Speaker 1:

It's a little.

Speaker 2:

It's so good. I mean, Acquired is great, but the one on TSMC

Speaker 1:

is... Mike, do you listen to Acquired at all?

Speaker 5:

No. I don't, but it sounds like I should.

Speaker 1:

Acquired is really good, and the Acquired episode on TSMC is especially good. The ones on NVIDIA are also good, but they get a little bit heavy towards the end. They love NVIDIA, which is great. But I actually think the TSMC one is better. So, yeah, Adam.

Speaker 1:

TSMC is a remarkable company. Okay. Yeah.

Speaker 2:

TSMC. Great. So I was trying to think of what's my TSMC prediction. Then I started noodling on something you said the other day, Bryan, about the demise of Intel. And, like, you know, I think Intel's really lost its way, to put it mildly.

Speaker 1:

And for the record... so, Intel... sorry. Go ahead, Adam.

Speaker 2:

Oh, I can trim this story up. So, take 2. Intel is great, as we know. So my 6 year prediction is that less than 50% of the server market by dollars is x86. That is to say, of chip sales.

Speaker 1:

Okay. Yeah. Interesting. And where where where's the balance going to?

Speaker 2:

Yeah. I don't know. Good question. Let's say Arm. I mean, maybe it's already tautologically true if you throw GPUs into the mix, but, you know, as Intel kinda tries to promote its foundry and its fab and that kind of stuff, there's lots of non-x86 being built for the server.

Speaker 1:

I think it's a great prediction. I actually wanted to make a prediction. I was going back, because we actually had a similar 6 year prediction last year, where we were talking about heterogeneous cores on a package. So I'm gonna sharpen that a little bit, and I am gonna say that this APU model, where you've got an accelerated chiplet sitting alongside a general-purpose chiplet, is the only way that you buy CPUs.

Speaker 1:

That all CPUs come with some number of what we used to call GPU chiplets, and the discrete GPU looks like the discrete floating point processor, the discrete FPU, of our distant youth. And

Speaker 2:

You know what? I was gonna bring this up. You actually... you didn't make a prediction

Speaker 1:

last year, but

Speaker 2:

you had something very prescient last year, where you had said that accelerated compute was going on-die. And as I listened to that, I was like, you nailed it. I mean, that might have already been true, because I guess Grace Hopper was probably close to there, but I thought it was

Speaker 1:

very frustrating. Also because we knew the MI300A was coming, so I definitely was cheating. But I also do think that that is the right model, because I think this is gonna be so ubiquitous and so important for so many different things that you're just not ever gonna buy a CPU. And I think it's gonna complicate the SKU stack, by the way. The SKU stack is gonna be a big mess about, like, what is the balance between your GPU elements and your CPU elements.

Speaker 1:

But, like, that is the way you're gonna consume them.

Speaker 2:

Yeah. I mean, it's already complicated. Like, if you're buying a new MacBook, it's like, how many GPUs and how many cores do I need, and how much memory, and what's the balance for all this stuff, and

Speaker 5:

what am I paying for?

Speaker 1:

And it's gonna be way more complicated. So, yeah. You know, it was funny, because I was gonna use that prediction to also predict the demise of Intel. Intel, a cherished investor in Oxide. But I do think that one thing Intel's really gonna need to figure out is what the GPU story is.

Speaker 1:

I think AMD is really interesting. We talked about it in the MI300 episode, but really, really interesting

Speaker 5:

with your garden.

Speaker 2:

Well, I was feeling bad about it, because right before the show started, in chat, folks were like, yeah, everyone knows Arm in the server market is dead. Right? And lots of people were piling on that. So I thought, oh, jeez. Like, yeah.

Speaker 2:

Whoops, so much for that. But you know what, Arm naysayers? I say in 2030, more than half of the market is Arm.

Speaker 3:

RISC-V

Speaker 1:

or RISC-V.

Speaker 2:

Yeah. Or RISC-V or something else.

Speaker 1:

So what else do you have for 6 years? A couple others. Yeah. Mike, do you have any other 6 years? I've got a couple of other

Speaker 5:

no. Let's see them. Go ahead.

Speaker 1:

Zany 6 years. Alright. So, again, we're gonna look back on this year and laugh as the year that I really lost it. I have been really thinking about the other kind of people-intensive industries that I think are gonna be revolutionized, not because there are gonna be fewer people in them, but because they're gonna be able to work more effectively. So I'm gonna make another ludicrous prediction.

Speaker 1:

I should say this came out of a very interesting talk I saw given by Sal Khan of Khan Academy, talking about the possibilities of using LLMs for tutoring, and I think it's really, really interesting. So I think that generative AI is going to revolutionize K-8 education. In particular, it is going to allow teachers to actually engage with some of the students that have been left behind, or are being left behind today. And, you know, Adam, I saw this with my own kids during the pandemic. In particular, one of my children had a great relationship with a tutor, and the pandemic really disrupted his education in that regard.

Speaker 1:

And it was really... I mean, just the idea of giving everybody someone who's really invested in their education, and is gonna really help explain a concept to them, or really work with them on a concept until they really understand it. And I think, you know, we've been really focusing on ChatGPT as, like, cheating and stuff like that, and I actually think that there's something way more interesting, where it can actually help you learn. And I look again at the quality of Rust that I get out of ChatGPT and how well commented it is. I think there's some real opportunity there. And, you know, being able to ask a follow-up question, like, I don't understand this, or help me understand.

Speaker 1:

And I also think you've got this idea that, you know, a kid that doesn't want to tell a teacher they don't understand is willing to tell a computer they don't understand. And the fact that it's not an actual person that's gonna judge them is actually gonna help them learn, which really should be the objective.

Speaker 2:

I think it's a great prediction, and I'd love to see it. And I think, on the other side of it, to your point, it may help educators focus on students: if they've got 30 kids in a class, there's only so much attention that they can give to each one, and this lets them focus their time on the students who need their experience specifically. Many of the kids may be able to benefit from this AI overlord teacher, and the teachers would be able to focus on the folks who are struggling.

Speaker 3:

Yeah. And

Speaker 1:

I think it can also help the teachers too, in terms of their approach, you know. I mean, do you ask ChatGPT for parenting advice? Have you done that at all, Adam? It gives really good parenting advice.

Speaker 2:

I gotta pay for the subscription to get in.

Speaker 1:

Parenting-wise, of course it gives good parenting advice, because it's distilled. Everything that's written on the Internet is not, like, you know, push your kids into traffic. It's all good advice that people just don't take. So I think that there's a lot of wisdom out there, and I think it can also allow teachers to kinda tap some of that wisdom as well. If you want,

Speaker 5:

like, the normcore answer to any question, it's great. Right?

Speaker 3:

And

Speaker 5:

Yes. That's right. For learning, like, 8th grade history, the normcore answer is, in fact, very

Speaker 3:

helpful.

Speaker 1:

And when you're parenting, by the way, the normcore answer is very helpful.

Speaker 5:

Yeah. If your goal is to not be innovative, but to, like, just not be a total screw-up at something, maybe even be higher than that, which, you know, honestly is most of life, right, then the answers are very, very useful.

Speaker 1:

Well, so it's interesting. So, my mom is a school psychologist. And I was talking with her over the holiday about this, and I was like, let's take it for a test drive. And she was like, okay, so: my 13-year-old wants to go to a coed slumber party. What should my approach be? And ChatGPT gave an answer that... my mom was like, yep.

Speaker 1:

This is exactly right, asking all the right questions. You know? These are the questions to basically ask. And it gave just very good, maybe normcore, but very good, level-headed advice. And it's interesting, you know, and I think it's gonna be right.

Speaker 2:

So you're saying you're saying not just school, but maybe boarding schools

Speaker 1:

for kids

Speaker 2:

run by ChatGPT?

Speaker 1:

Boarding school. You're terrific. I feel like there almost has to be a... I mean, because kids are still very much gonna be kids, and they will be taking all of their creativity and figuring out how to jailbreak this thing, how to turn it, how to get it to say horrific things. So there will be an interesting back and forth on that, and we need to see how that unfolds.

Speaker 1:

Okay, another one. Actually, I would love your take on this. So I've been very starry-eyed about all of this glorious AI future. I am gonna say that robotics has not materially moved. And I'll make this very concrete.

Speaker 1:

You're still loading your dishes by hand. You're still cooking by hand. Like, the things you're doing domestically haven't really moved. And that

Speaker 2:

it's like mechanical automation has not progressed past the remote.

Speaker 3:

It is

Speaker 1:

Because I don't think that's gonna move.

Speaker 5:

Yeah. I mean, you know, you you see the the, like, the Boston Dynamics robots and so on.

Speaker 3:

Mhmm.

Speaker 5:

And they're genuinely impressive, but they're not hitched to, you know, the neural things that people are so thrilled by, right?

Speaker 1:

They're not. And I

Speaker 5:

think that's mainly right. But I think the big exception would be perception. Like, their ability to understand the world is probably substantially better. I mean, a zillion years ago, I was into robotics; I haven't been for a long time. My understanding of the field is that they have tapped into a lot of the improvements in computer vision that have been enabled partially by the neural stuff.

Speaker 5:

But when it comes to, like, manipulation and and so on, I think you're totally correct.

Speaker 1:

And it's not even gonna be that it's impossible; it's just gonna be uneconomic. It's like, I actually don't need a $30,000 machine to make eggs. I'm happy to just, like, make eggs. And, there already

Speaker 5:

has been a story about McDonald's for a long time that there exists, somewhere inside McDonald's research labs, a totally automated fast food restaurant, but it's just not economical to

Speaker 1:

deploy. Oh, interesting. Yeah. That would not surprise me at all. And so, yeah, maybe that's a very concrete way to say it: I think you still have humans in the loop for fast food.

Speaker 1:

But, yeah. And then another one, and on this one, I'm gonna stop short of a prediction. Actually, Adam, I wanna ask you a couple of questions, because you mentioned earlier the fact that we have people write things on a whiteboard. Are LLMs gonna change that at all? Are LLMs gonna change interviewing? Is the presence of GPT as a tool gonna change interviews? I think they're definitely gonna change code review.

Speaker 1:

Code review is a task that's really important, and automated code review is going to be something that's going to be interesting. I think people will look at it. I mean, obviously, people already are.

Speaker 2:

Yeah. Yeah. No. Very interesting. But you're right.

Speaker 2:

In terms of, like, the interview process, I think there's probably gonna be at least an initial backlash of the, like, you know, I had to come up without this tool, so you don't get to use it as part of this process, or whatever. But I wonder if that's just gonna be sort of a gut reaction rather than something that sticks.

Speaker 1:

Yeah. Oh, another question for you, Mike, and, Adam, if you've got a specific one on this too: these open source models, and so on.

Speaker 5:

Yeah. Before you get there, go back half a step. The big exception to what you mentioned, now that I think of it, is obviously drones, where there have been, like, incredible deployments. Right?

Speaker 1:

Totally. And what I

Speaker 5:

We don't use robots to load the dishwasher, but, like, it's very plausible that at some point pretty soon, it becomes economical for me to get a burger via drone. That's kind of wild to think about, that that becomes possible before the dishwashing robot.

Speaker 1:

It is wild to think about. And, you know, 6 years ago, it would have felt absolutely nuts to predict that, but 6 years from now, it does not feel totally nuts. That said, a world in which everything's being delivered by drone is gonna be very loud, among other things.

Speaker 1:

I feel like, Mike, there's a good prediction to be teased out here. I'm gonna go ahead and predict that wealthy enclaves have banned drone deliveries in 6 years, and it's hugely contentious. I'm just thinking of the gas-powered leaf blowers that caused Nextdoor to explode in various neighborhoods.

Speaker 5:

I wish I could have some very spicy take on Nextdoor from 6 years from now, but I don't have it in hand, and I don't want to freestyle it. But I think that's a good prediction. And I think, in 6 years, let's say some class of fast food is regularly delivered inside the United States. Like, I don't know if a pizza is actually gonna fit on a drone. In the same way that a pizza used to be the only thing you could get delivered, like, you couldn't get French fries delivered, there's probably some kind of food form factor that is ideal for drone delivery.

Speaker 5:

There's gonna be at least one class of fast food that gets delivered to your house by drone. That's a better prediction than I've had before.

Speaker 6:

The smaller, developing countries with a smaller language base, like Armenia, Estonia, you know, who have a couple of million people, have already started pushing much content from the government, and finally started digitizing things properly, just in order to get that data fed into an LLM to be used by the public. So the government here was like, okay, we like this ChatGPT thing, but does it work in Armenian? With GPT-3.5, it was very, very not usable. GPT-4 can finally type properly, and the sentences, you know, kinda make sense. It's not like you can even use it to translate; a dedicated tool will translate. But it can write some things.

Speaker 6:

It sometimes mistakes the word coffee for pink, for some reason. I don't know why. But now the government's like, okay, can we digitize all of these, you know, 2000 years of archives of our culture, make it public so an LLM can learn based on that? Maybe we can even... you know, old programming languages are becoming more popular now.

Speaker 6:

Maybe even the smaller languages can become easier to learn, thanks to an LLM, and hopefully governments or people pushing that content to be available to these companies and projects. So I think that might also be a very interesting push toward more, you know, digital publishing by smaller countries, even pushing their limits as much as they can.

Speaker 1:

It's really interesting. And so do you think that, because, I mean, you're in Armenia, but we're obviously having this conversation in English. English has kinda become the lingua franca of high tech. Do you think that will begin to become less necessary, where Armenians will be able to work in Armenian and have that be translated? Are we gonna live in that kind of a sci-fi future?

Speaker 1:

Or the Oh, absolutely. Yeah. Interesting.

Speaker 6:

I did not even consider that. So, one of my students, I teach a boot camp, asked the system a UNIX question, for FreeBSD stuff. And he got the output in English, but he doesn't understand it. So, obviously, his first solution was to, you know, put this into Google Translate. I mean, Armenia is, you know, post-Soviet, so he was like, can you translate this to Russian? And the things that he couldn't understand in English, now he can understand in Russian.

Speaker 6:

So, obviously, if Armenian gets even better, a lot of, you know, younger people who don't necessarily know English might get into tech, or other educational fields like physics, for example, at a much earlier age, because of the availability of the content. That's a very good point that I hadn't considered, although I see it every day with my boot camp. They do that almost every day.

Speaker 1:

Which is, like... you know, I just think that there could be so many positive ramifications of all this stuff. And so, actually, on that note, Mike, when you and I were talking earlier about the prospects of 1, 3, and 6 year predictions, you did have one 6 year prediction that I feel we've gotta get on the recording here.

Speaker 5:

Oh, yeah. I'm sorry. I didn't do this one. Yeah. Okay.

Speaker 5:

So I wanna phrase this one carefully. When you talk about all the discussion of AI and so on, you had mentioned the AI doomers earlier, Bryan, and I agree with you. I hope your one year prediction comes true, in that people are no longer worried about, you know, the Terminator taking over. I think there are a lot of people perhaps rightfully concerned with the social ramifications that models, especially if they can't be inspected, might yield: you know, some forms of discriminatory decision making, or other negative social outcomes that are hard to detect, that are negative for people who have gotten the short end of the stick for a long time, and that we'll generally regret importing AI into.

Speaker 5:

And I don't know if it's a prediction, but let's call it a possible path for 6 years from now. I think there's a vision in which, at least for some forms of discrimination, AI comes to be viewed as actually a positive thing, insofar as it allows you to reduce the amount of discretion in people's heads and make certain kinds of decision making more regularized and predictable. So if there's a class of person or, you know, professional role in which you think some amount of discretion today has to be granted, and they're misusing it, maybe they don't even realize they're misusing it, you can look at the data after the fact and say, yeah, people with certain backgrounds are getting mistreated.

Speaker 5:

If you outsource that to some kind of computational process, it doesn't guarantee that it'll be a decision process you agree with, but it does let you debug it, engineer it, and make it replicable. So I think there's a path by which increasingly automated decision making is viewed, in that sense at least, as a more just outcome, or at least an element of a more just future.

Speaker 1:

I think it's

Speaker 2:

That's really interesting, Mike. You know, I'm thinking about, for example, promotions at companies. It sort of mixes in your previous prediction a little bit, in that folks look at data and they kind of squint at it, but they come with all their biases, all our biases. And so maybe it's a way of providing a gut check, or a validation of some of the data, to see if you're behaving in equitable ways. I think it's really interesting.

Speaker 1:

It is at least plausible. Right? It's possible. I think that's part of what makes this so interesting: there's a whole world of possibility out there. I feel like, as humans are wont to do, we've been focusing on some of the darker outcomes, which is not to say that those should be ignored or the risks minimized, but there are some real positives, deep positives, that could be truly world-changing in a bunch of different dimensions.

Speaker 1:

No, that's a great one. Now, if it's a good

Speaker 5:

If you wanna look at the dark side of things, you could say it only works by reducing the amount of discretion that individual people have. Right? Like, only by bolting you ever closer to the hive mind does that positive future come to pass. So let's just say it's gonna be a mixed bag.

Speaker 1:

Well, totally. And, hey, look, in my future of democratized omniscience, it first requires you to pass through the police state in which all, you know, conversations are recorded. So there's obviously peril in all of this stuff, and great possibility for abuse and misuse. But there's also great possibility for truly delightful things, and I guess I'm bullish on humanity, in that I think we generally figure out ways to make these things work in ways that are pretty good. So I think we can do it. Mike, this has been great.

Speaker 1:

I know it's late there, and this episode always runs long, but we really, really appreciate your perspective and your taking us on your glimpse into the future. I think we got a lot of good predictions. Folks are putting good predictions in the chat; please continue to do that. We'll leave the chat open for a little bit here so folks can drop their predictions in there.

Speaker 1:

We'll get them down. And, yeah, I guess Adam will be looking back on this year being like, wait, was that the year that Cantrill just, like, lost his mind

Speaker 2:

Where ChatGPT had been feeding him answers all day.

Speaker 1:

Exactly. The robot overlords. Right? Exactly where it

Speaker 3:

was revealed

Speaker 5:

That would have been... I should have shown up, just piped this whole thing into the LLM, and just said, well, many people disagree about what 2024 will have in store. Would've been great. That's right.

Speaker 1:

It was later revealed that Cantrill was an agent of the LLMs, was actually on the take, and the robot overlords had already taken over and were using me as a vessel to convince everyone that there's nothing to worry about. No, that's not true. As far as you know, that is not true at all. I'm really disappointed that you would even say that or think it.

Speaker 1:

Everything's fine, everybody. Please embrace the robot overlords.

Speaker 2:

Blink twice, Bryan. Blink twice.

Speaker 1:

Alright. Well, hey. Thanks a lot, everybody. And, Trenton, thanks for joining us. Thanks to Ashley and Steve as well.

Speaker 1:

Adam, obviously, thanks to you. And, Mike, thanks again for for joining us.

Speaker 6:

I had to go to work.

Speaker 1:

There you go. Exactly. And, here's to a great 2024, and we'll look forward to seeing everyone certainly next year, but, also for a lot of Oxide and Friends in the coming year.
