Predictions 2023!

Bryan, Adam, and the Oxide Friends review last year's predictions and look ahead 1, 3, and 6 years into the future. What's in store for Rust? Will ChatGPT boom or bust? Will Bryan's prediction of the demise of the term "Artificial Intelligence" come to fruition (finally!)?
Speaker 1:

Sorry. Just reprimanding my children. I'm trying to mute myself before I reprimand my children to go easy on the Internet.

Speaker 2:

You know, I would say though, for the audience, this was probably easier on the ears than our 10 minute intro of baseball baloney with Steven a while ago. I think this is more tolerable to some. Alright.

Speaker 1:

Is that is that any better?

Speaker 2:

Say more.

Speaker 1:

This is better. No.

Speaker 2:

Mark Tomlin says, lol, for real, download the app on your phone.

Speaker 1:

Right. I'm gonna download the app. Give me your take on our predictions from last year.

Speaker 2:

Our predictions. Alright. Well, I will kick it off. So happy New Year, everyone. As Bryan maybe said and maybe didn't, what a year.

Speaker 2:

I was listening back to some of the predictions, Bryan. I think in addition to the predictions, we made a resolution, and the resolution was to start posting podcast episodes of this. And it only took us until March or May or something to make that real. And we did. So, in addition to predictions last year, we made this resolution.

Speaker 2:

And after that, my favorite stat was that we have 58,000 downloads of the podcast. And on YouTube, we had 18,000 hours of listening, which equates to 2 years worth of listening. So in a single year, we had 2 years worth of listening. When it comes to predictions from last year, I gotta say I was very proud of my web 3 prediction, I feel like,

Speaker 1:

yeah, it was

Speaker 2:

I mean, I don't wanna throw out my shoulder patting myself on the back, but I said that web 3 was gonna be dead, and we weren't gonna be talking about it, and we'd barely remember what it was. And I think that may be a little far, but I think we can agree that, like

Speaker 1:

No.

Speaker 2:

It's pretty dead. We're not really talking about it.

Speaker 1:

It is definitely not. Now, we had not recorded our predictions prior to this past year. So before this past year, when you and I made predictions together, we have had to rely on our collective self-interested memories as we've recalled the predictions of one another. But it was interesting. I assume you listened to that episode?

Speaker 2:

I did. Yes. Yes. And I know that I I said explicitly that this was my heart talking and not my head talking.

Speaker 1:

Right. And so let's

Speaker 2:

score 1 for heart.

Speaker 1:

Hey. Score 1 for heart. Hey. Hey. Suck it, head.

Speaker 1:

Go with the heart. Take that, head.

Speaker 2:

What do you know?

Speaker 1:

Oh, I'm sorry. I'm sorry. Stupid head. Constantly getting us in trouble. Heart knows all the answers.

Speaker 1:

Okay. No. I also regretted the fact that we only gave ourselves 1 web 3 prediction apiece.

Speaker 2:

I have no regrets about that because I feel like otherwise, it was just too tempting. It was just too tempting. Your big web 3 prediction was a 1 year flame out, like a big flame out of web 3. Are you claiming credit for that one?

Speaker 1:

Of course. Absolutely. Not only am I claiming credit; in particular, I claimed that there would be a big web 3 flame out, and that both web 3 advocates and its malcontents would claim vindication. So I'm gonna take full credit with that one on a specific flame out: the Luna/Terra flame out that happened in March.

Speaker 2:

Yeah. Okay. That that didn't take long.

Speaker 1:

I mean, I would give you, like, very, very close to top marks. Right? And I think you also slid in a prediction that web 3 was gonna be the next Webvan. I mean, I think that everyone was very skeptical, or was a skeptic, about web 3. You know, Kelsey had said he'd never seen technologists so divided. And I think that we were all basically those skeptics, and the skeptics and cynics were vindicated.

Speaker 1:

So, yeah, I'm taking full credit for that one, for sure. And I do feel like, with Luna, and the FTX bailout of, like, Celsius and so on, you did have this moment early in '22 where there were people who were welcoming the culling of the herd as the weaker among them were eliminated. But, of course, the whole premise was flawed, and we saw that later.

Speaker 2:

Yeah. Yeah. Absolutely. Another prediction from last year that I really enjoyed, and I'd forgotten: Laura predicted that Discord was going to screw it up.

Speaker 2:

That, like, Discord was gonna lose the plot and alienate their users, and that was gonna be it for them. And Laura emailed me with a statement that she

Speaker 1:

She's prepared a statement. Yeah. Exactly. To read.

Speaker 2:

Laura says she may not be able to join because she may be giving birth, or she may be asleep because the baby is making her sleep. But she says: my 2022 prediction was that Discord would make everyone mad. I'm actually very pleased this didn't happen and Discord is still going strong. Part of me is still a cynic and thinks that someone will try to come up with some boneheaded strategy to make the revenue line go up even more. But I'd also like to believe that Discord and its investors really, really understand the core product at this point and will not do anything to distract from that.

Speaker 2:

This does not count as a prediction, but you may all still hold me to that statement. So,

Speaker 1:

Well, it sounds like a statement prepared at the request of an investor. But I think Laura is actually underselling herself a little bit in that. She made a very important observation about the rise of Discord and its importance; witness the fact that we're here. I mean Yeah. Twitter's implosion is not something that anyone saw.

Speaker 1:

And it's amazing how much can change in a year. I mean, it would just have seemed very implausible a

Speaker 3:

year ago. You know, so

Speaker 1:

many 1 year predictions, there's really no action on, and then you got these things that are just, like, explosive, that nobody saw coming. Yeah. Absolutely.

Speaker 2:

So folks who are here, if you want to join and share a prediction, you can raise your hand, and we'll unmute you, and we'll call on you and you can share what you think is coming. Bryan, do you wanna open it up first, or do you wanna lead off, or do you wanna close it out, or what do you wanna do?

Speaker 1:

Well, I mean, I kinda had this fantasy of getting better audio. But, how

Speaker 2:

Good. Good. Good. Then why don't we wait for better audio?

Speaker 1:

Marvin, I bet.

Speaker 2:

But, you know, I'm gonna sprinkle mine out over the next hour or whatever, but I did wanna share 1. I feel like 1 year predictions are hard because they basically already need to be true. Like your web 3 prediction. But my 1 year prediction is that 2023 is gonna be the year of tech workers unionizing. Some of that's gonna be precipitated by, like, increasing return to work, or feelings of, you know, imposing on the work life balance and work styles that folks have.

Speaker 2:

We're gonna see tech workers who previously weren't really that unionized starting to unionize.

Speaker 1:

Are we talking union in the legal sense? I mean, is this like a local

Speaker 2:

I'm not talking about, like, a loose structure. Yes. I mean, like, you're paying dues into the union, collective organizing, collective bargaining, that kind of structure.

Speaker 1:

Oh, okay. So this is, like, the real thing.

Speaker 2:

Yes. People writing software in a union.

Speaker 1:

And that's a 1 year?

Speaker 2:

Yes, that's a 1 year prediction. That's gonna be the zeitgeist of this year. Swinging for the fences.

Speaker 1:

TFPK.

Speaker 2:

TFPK. You joined.

Speaker 1:

Give us

Speaker 2:

your predictions. How are you doing?

Speaker 4:

I'm doing well. My name's Tom, by the way. I just go by TFPK on the Internet. So yeah.

Speaker 4:

Thank you. So I've got 3 predictions: 1, 3, and 6. One of them is a sort of safety prediction, and it'll be interesting to see if you can figure out which one that is. So my 1 year prediction is that all of the good chat AI systems, like, you know, ChatGPT, will be expensive enough that their frivolous use declines significantly, but they'll still be significantly used in academic misconduct because it's cheaper than all the alternatives.

Speaker 2:

Still the cheapest way to cheat. Love it.

Speaker 4:

Yes. So for that one, I don't know if you want me to talk about it or just go through all of them, whatever's easier.

Speaker 2:

No. Go go.

Speaker 1:

I'm sorry. Go ahead.

Speaker 2:

No. Go for it, Bryan.

Speaker 1:

Sketchy robot voice on a cell phone in a tunnel. No. I think this is gonna be a big theme this year among the predictions. So, like, yeah, look around a little bit. What do you see out there?

Speaker 1:

Because I've got some predictions in this domain as well, so I'd be curious for you to provide some more context there.

Speaker 4:

I mean, both of those statements are basically based on one piece of context each. So on the expense: Microsoft was talking about how expensive it was to run one of these AI systems, and I think they're only gonna get more expensive. And so I think that the sort of cost benefit analysis of, like, the research value of having lots of people use it and the publicity value of everyone seeing how cool it is will start to be outweighed by: these instances cost a heck of a lot of money. So that's why I think that they're gonna become more expensive, and I think that's gonna push out a lot of the silly use, like people, you know, posting to Stack Overflow with this kind of stuff. Like, that's no longer gonna make sense if you have to pay money for it, especially if it's a per use payment as opposed to, like, a, you know, sign up for a subscription per month.

Speaker 4:

And then, I mean, I've been working at a university for the last couple of years, and we're already seeing stuff that looks a lot like people using AI to cheat. And I know how much it costs to pay a ghostwriter to complete an assignment. Like, it's not a lot of money, but it's more money than it's gonna cost to have an AI do it. So that's why

Speaker 3:

I said that.

Speaker 2:

Do you have a sense of what the per use cost is? I'm sure that varies based on the content and so forth, but do you have a sense of it?

Speaker 4:

That's such a good question. I don't have a good sense of it, but I'll say something: I reckon somewhere in the range of, like, maybe a couple of cents per use, or maybe a little bit more than that. But, like, it'll very much be the AWS style of pricing, in, you know, tiny increments, so that you don't notice it until you're using it a 100,000 times, and then AWS is making all their money. Or in this case, Microsoft is making all of the money.

Speaker 1:

Adam, is my audio better?

Speaker 2:

It is better. Welcome.

Speaker 1:

Is it better, or do you just want to stop talking about it? It's fine.

Speaker 2:

Legitimately better. I can hear 4 out of 5 words. No. No. 5 out of 5 words.

Speaker 1:

5 out of 5. We'll listen again. Okay. Good. Alright.

Speaker 1:

I'm in on the app, so score 1 for the app. Yeah. So, definitely interesting stuff. And so, Tom, you think that the cost of this is just going to become exorbitant?

Speaker 4:

I don't think it's going to be exorbitant, but I think it's going to be big enough that a lot of the uses that we've seen so far, which are just sort of, you know, people trying to post it to Stack Overflow, or people using it just to do something funny, or, like, that kind of thing, is gonna go away. Because the moment that you have to attach a credit card to doing that sort of thing, it's already gonna be more effort than a lot of people are gonna wanna put in, I suspect. But I think that there will be, you know, serious applications of it that won't stop being used. That's why I said, like, frivolous use will decline, but I do think

Speaker 3:

they're gonna

Speaker 4:

be a massive part of, the next couple of years, and I suspect that other people have predictions related to that as well.

Speaker 1:

Yeah. And, actually, Adam, now that, hopefully, you can hear me, a couple of reminders for folks. When you make your predictions, if you could write them down and submit them as a PR when we put up the show notes, that's, I think, really helpful. So, Tom, as you make your 1, 3, and 6 year predictions, definitely write them down so we make sure we get them right.

Speaker 1:

And, Adam, I thought it was really helpful that you put the marker in the audio so we could go listen to the actual context surrounding it. I didn't do that.

Speaker 2:

Like, someone much more helpful than I did that, from last

Speaker 1:

year. Oh,

Speaker 3:

okay. Yeah.

Speaker 2:

Yeah. That sadly, that was on me. But I would say also, PRs can go up. Like, right now, there is a placeholder for the notes for today, so you don't even have to wait for tomorrow or whatever. So make your predictions and note the time and throw them up there.

Speaker 2:

That'd be tremendously helpful.

Speaker 1:

And then, Tom, do you have 3 and 6 year predictions related to either large language models or OpenAI or ChatGPT or what have you?

Speaker 4:

So I try to keep them all quite different, on the basis that, you know, it's more interesting if different people come up with different stuff, but I don't wanna steal other people's chance to contribute. So I'm happy to go through them all, or to wait till later if that's better.

Speaker 1:

Yeah. Go for it, I think.

Speaker 3:

Yeah. Go for it.

Speaker 4:

Okay. So my 3 year prediction is that a new trend called something stupid like web 3 or web 4 will emerge. It's probably gonna be to do with either AI or the metaverse, and I wrote metaverse in, like, SpongeBob text. And some people are gonna make a lot of money off it, but time will prove that it's a fad.

Speaker 1:

Feels like an evergreen. Feels like

Speaker 4:

Right. Yeah.

Speaker 2:

With all the investors, like, you know, piling in behind it, and then pretending it never happened. Sounds right.

Speaker 4:

That also sounds very likely. If it wasn't obvious, yeah, this one was my safety prediction. I wasn't sure what to say, and I thought

Speaker 1:

there's no way that this doesn't happen. It does feel pretty safe, I gotta say. I kinda wanna make you bolder. Alright.

Speaker 1:

So metaverse, is this gonna be a pro metaverse fad, or is this gonna be an anti metaverse fad?

Speaker 4:

I think it's gonna be a pro metaverse fad. It's gonna be, you know, some new system that everyone is buying, probably not made by Facebook. I mean, maybe made by Meta. Probably not made by Meta.

Speaker 4:

Like, I reckon it'll be some other company that starts selling something, and everyone starts swearing by it: oh, you know, this amazing new thing is gonna revolutionize x, and then a large community is gonna form around how great x is. And then we're all gonna realize that x actually provides no additional utility. So, you know, people will invest a lot of money in it, and it'll turn out to be not much.

Speaker 1:

And when you say metaverse, Adam, are you implying VR? Because I feel like I'm implying VR when I say it. I mean, it's Zuckerberg's vision of the metaverse.

Speaker 2:

I'm not even clear on what that is. So I don't know whether it's more VR or more AR.

Speaker 3:

But, yeah,

Speaker 4:

I don't know. I could even see it being some advanced form of, like, Second Life, because that's what we're already seeing: you know, online spaces that are effectively kind of games that also have other aspects to them as well. And, you know, it might just be a new Second Life that people say is gonna revolutionize the web, and then doesn't.

Speaker 1:

And, Adam, is your teenager into VR at all?

Speaker 2:

No. In fact, he used to be, briefly. Like, we

Speaker 1:

have gone through multiple headsets. Yeah. But it's over. Same for us. I went through a headset, and actually, it's funny, because he and I were comparing notes, and he's kinda kept track of that headset, which has been bought and sold by, like, 7 different kids.

Speaker 1:

And I'm like, that's kind of a bad sign. He's like, no. No. It's definitely a bad sign. Because his view on it is, like, this is fun for a short period of time when you are by yourself.

Speaker 1:

Have you ever been in the room when someone has got VR goggles on?

Speaker 2:

Oh, that's hilariously ridiculous. Yes. Absolutely.

Speaker 1:

And why is it that the animal brain is overwhelmed with the desire to screw with that person? I mean, overwhelming. Is that just me?

Speaker 2:

No. I think that's universal. Or at least photograph them.

Speaker 4:

This is everyone? Totally. Yeah.

Speaker 1:

Right. It's like I

Speaker 2:

I I

Speaker 4:

don't know if you guys

Speaker 3:

go ahead.

Speaker 4:

Oh, sorry. Go ahead, Bryan.

Speaker 1:

No. No. No. Go ahead, please.

Speaker 4:

I was gonna say, I don't know if you've seen that game that's, like, walking on a plank. The only time I've ever played VR with anybody else in the room was showing them this game, getting them to pretend to walk on the plank, which looks like it's, you know, 50 stories in the air, and then going up behind them and either whispering or, you know, pushing them or something

Speaker 1:

like that, which I realize outs me as a terrible, terrible person. What? I think it just

Speaker 4:

But I feel like they kinda signed

Speaker 1:

up for it. Again, I feel this is, like, super deep in the animal brain. I feel the same way around, you know, I worked for Samsung briefly, and the Samsung campus in Mountain View has these Samsung robots that patrol it, these cones that are, like, 5 foot tall.

Speaker 2:

Are these, like, armed Roombas?

Speaker 1:

They are unarmed Roombas. And it is the least menacing robot imaginable, and I have never wanted to run something over more. I mean, I'm generally, like, a rule abider, and I'm like, I want to destroy this thing. So, yeah, Tom, I think you're in good company. Alright.

Speaker 1:

So we have a pro metaverse 3 year prediction. Then do you have a 6 year prediction?

Speaker 4:

I do indeed. This one is probably gonna be the most controversial. Oh, it might be. We'll see. My 6 year prediction is that a large or prestigious university starts teaching their CS 1 course in Rust, and then time reveals that it was a mistake to do so.

Speaker 4:

Oh,

Speaker 1:

I don't think that's a controversial prediction. I think it's a fair prediction. It reminds me of, I mean, Adam, your first CS course was an accelerated course that was taught in Scheme. Right? Am I remembering that correctly?

Speaker 2:

So, actually, I took the course in Java, but I TA'd and helped develop the course. Right. In Scheme and then ML. Yeah.

Speaker 1:

And so, actually, I wonder if that first CS course will bifurcate into folks that are ready for a more advanced introductory course. Like, a course for concentrators versus a course for non concentrators, or something within that. Because Rust would be, I don't know, Adam. What do you think about Rust as a first?

Speaker 2:

Well, first of all, you know, we both jumped to "first language."

Speaker 1:

But It's offered language. Yeah.

Speaker 2:

Yeah. In 2029, right, in the distant future, will this really be the first language? I mean, for some entering freshmen, certainly. But for a lot, they'll have dabbled in at least a couple of languages. Yeah.

Speaker 2:

So maybe at that point, they'll be ready for that kind of level of, you know, programming language sophistication. Although, Tom, I do like the parlay bet, which is that they will regret this choice. But I think that that actually has

Speaker 4:

I think that's probably the more controversial part of it.

Speaker 1:

I'm sorry?

Speaker 2:

No. No. It's an interesting one, because I think that universities do this a fair bit, where they have some bifurcation of the intro, and they try it out. And sometimes it works, and sometimes it doesn't, and there's a cohort of folks that they sort of need to fix along the way. And I think that's not impossible in this case.

Speaker 4:

I think it's interesting because, talking to quite a few people in the Rust community, people have been saying, especially the Rust Edu group, which I recently became involved in and seems really awesome, that one of their goals is to make it possible to teach CS 1 in Rust. And I still have some very big reservations about teaching CS 1 in Rust, in the same way that I know people who are prominent in the C++ community have reservations about teaching C++ as a first language. And I'm speaking as somebody who loves the language. Well okay. Yeah.

Speaker 4:

So I helped teach a Rust course at a university last term, and I don't think we could possibly have taught it to first or even most second years at the university and had anywhere near the same level of understanding. And so it's gonna be interesting: either I'll be wrong, and Rust will make big strides to be much more, I don't wanna say beginner friendly, but at least sensible to teach in CS 1, or we'll see courses try to use it and then realize that it's actually not a good fit for a lot of their students.

Speaker 1:

It is.

Speaker 2:

it's really interesting. And I think not just the evolution of the language, but the evolution of the documentation and our familiarity with teaching the language and so forth. There's a lot of ancillary stuff, as you allude to, Tom, that facilitates that. I think it's a really interesting one.

Speaker 1:

It is.

Speaker 2:

And from our alma mater, I talked to the systems professor, Tom Doeppner, and asked about Rust. And he said, I don't really see a place for it in the curriculum. And he's the one tasked with that.

Speaker 1:

Doesn't see a place for it in the curriculum? I mean, okay. Like, this is borderline, it does not have a right to exist. I mean, this is, like, pretty extreme.

Speaker 2:

I thought it was pretty interesting.

Speaker 4:

For what it's worth, oh, sorry, Adam.

Speaker 2:

Please. No. No. Go ahead.

Speaker 4:

And I don't wanna derail this into a discussion about teaching Rust at universities, and universities in general, which is a whole separate topic. But I think one of the troubles with universities is that a lot of them, at least my university, teach sort of discrete concepts in different courses. You've got the networking course and the operating systems course. Rust really doesn't fit well into any of those disciplines, because it's too big to be able to talk about any one discipline with it, but it's not its own discipline. Unless you wanna run a Rust course, which is what we did.

Speaker 4:

Often universities don't want to run a course about a particular language, and there are very valid reasons for that. But then, you know, if you're gonna teach Rust in the operating systems course, not only do you have to teach operating systems, which is already a large topic, you've also then got to teach Rust and explain why it's better than C. If you wanna teach networking in Rust, you know, it's the same problem:

Speaker 1:

Mhmm.

Speaker 4:

Rust is too big to justify teaching it in a particular concept related course. And unless you can justify teaching a Rust related course, it's very hard to fit in.

Speaker 1:

I mean, it feels like it would fit in the software engineering course, kind of a second year, second semester software engineering course. I mean, that same professor, and, Adam, you're describing kinda courses that may have gone a bridge too far in terms of getting too aggressive retooling things, did exactly that with C++. And it was way too early for C++, and it was a mess. There was a lot of pain of people hitting compiler bugs and a bunch of other issues. And, Tom, just to your point, not being able to get to the abstractions that you're trying to teach about the operating system, because they're so lost in the abstractions of the language.

Speaker 1:

So, yeah. So the blog entry is gonna be, like, whoops, I rewrote the course in Rust. Is that the one I'm looking for in 6 years, Tom?

Speaker 4:

Yeah. I think that may be what we see, or at least what we'll see. I mean, maybe it's not 6 years, because universities move so glacially slowly. But in 6 years, it's people starting to have discussions about, was this the right choice? Or universities that have written their course in Rust starting to look at other languages, that sort of thing.

Speaker 2:

Yeah. It'd be I think it's a really interesting one.

Speaker 1:

Good predictions. Alright. Should we get someone else up here, Adam?

Speaker 2:

Yeah. Pulling up Ben and then back. You are next. Tom, thanks for joining.

Speaker 4:

Thanks so much.

Speaker 2:

Ben, what do you have?

Speaker 5:

Yeah. So I have 3 predictions. First, and this is the 1 year, so it's basically already happening: Tree Borrows is gonna be implemented in Miri by Ralf and this person I've not seen yet, except for one small PR. And people will be excited that it theoretically supports programming patterns that Stacked Borrows does not permit, but it will ultimately not have a huge impact on the amount of aliasing UB in the ecosystem.

Speaker 1:

So, I think you may wanna define some terms there. Yeah. Yeah.

Speaker 5:

Yeah. Yeah. So, Stacked Borrows is the existing prototype aliasing model for Rust. It describes how you can interleave, if at all, uses of references and raw pointers. It's based generally off of the similar behavior you get from

Speaker 1:

safe code if

Speaker 5:

you just have references. You can create, for example, multiple mutable references, so long as you use them in a strict stack discipline. So if you only use the most recently created mutable reference, you can then sort of back up and use an older one, so long as you don't go forward and try to use a newer one again.
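That stack discipline can be seen in a tiny program. This is a hypothetical sketch, not code from the episode, with the pointer names invented; as written, it should pass Miri's Stacked Borrows checking:

```rust
fn main() {
    let mut x = 0u32;
    let p1 = &mut x as *mut u32; // older borrow of x
    let p2 = unsafe { &mut *p1 } as *mut u32; // newer borrow, derived from p1
    unsafe {
        *p2 += 1; // use the most recently created borrow first: fine
        *p1 += 1; // "back up" to the older borrow: still fine
        // Using p2 again *here*, after going back to p1, would break the
        // stack discipline, and Miri would report aliasing UB.
    }
    assert_eq!(x, 2);
}
```

Reversing the two writes, so that `p2` is used after `p1` has been used again, is the kind of violation Ben is describing.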

Speaker 1:

And all of this is effectively implicit. The programmer has to have kind of an intuitive feel that this is happening. Yes. In safe code, you don't have to

Speaker 5:

worry about this. The borrow checker just handles this effectively. But in unsafe code, it's tricky. This bounds the aliasing optimizations that you can tell LLVM about or implement yourself, and checking it requires a substantial runtime, which is what Miri provides. But the problem with Tree Borrows is that, while I'm pretty sure it will fix this particular programming pattern that people refer to as reference to header, where you want to pass access to a larger data structure to a function by passing a reference to just a component of it, turning the reference into a pointer, and then offsetting that outside the bounds of the referent.

Speaker 5:

This is a pattern that does exist. It's very useful.
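As a rough illustration of the reference-to-header pattern being discussed, here is a hypothetical sketch; the `Node` type and `payload_from_header` helper are invented for this example. It compiles and behaves as expected when run normally, but the offset past the referent is exactly the kind of thing Miri's current Stacked Borrows checking flags as aliasing UB:

```rust
// Hypothetical sketch of the "reference to header" pattern.
#[repr(C)]
struct Node {
    header: u64, // the embedded "header" component
    payload: u32,
}

// Given a reference to just the header, recover a sibling field by
// turning the reference into a raw pointer and offsetting past the
// referent. Stacked Borrows treats this offset as UB; Tree Borrows is
// expected to permit patterns like it.
unsafe fn payload_from_header(h: &u64) -> u32 {
    let p = h as *const u64 as *const u8;
    unsafe {
        let payload_ptr = p.add(std::mem::size_of::<u64>()) as *const u32;
        *payload_ptr
    }
}

fn main() {
    let n = Node { header: 1, payload: 42 };
    let v = unsafe { payload_from_header(&n.header) };
    assert_eq!(v, 42);
}
```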

Speaker 1:

Yes.

Speaker 5:

But, as far as It's brutal. It

Speaker 1:

is brutal. I use this pattern a lot in C, and breaking this pattern, for me, was a big part of learning Rust, and accepting that that pattern was no longer gonna be part of what I do. Because I was really trying hard to get this to work. I thought of it as intrusive data structures, where you've got a data structure within a data structure, and you're passing that embedded data structure around.

Speaker 1:

And it is so useful. I mean, we use this in the operating system kernel, because you can have a single kind of object that is present on many different data structures, because of all these embedded data structures within it. But Rust is just like, no. You're not doing this.

Speaker 2:

So, like, in the illumos kernel, that's for AVL trees and linked lists

Speaker 1:

Yep.

Speaker 3:

Everywhere, like, all over

Speaker 1:

the place. Everywhere. Everywhere. And I mean, it's nice that you can have a single thing, a single, you know, znode that is on, like, 6 different trees at once. Right.

Speaker 1:

But it's a pattern. So that's interesting. So this will allow that kind of pattern, Ben?

Speaker 5:

Yeah. So, supposedly, and, again, right, there's no implementation, or it's not completed yet. The biggest change with Tree Borrows is that turning a pointer into a reference doesn't shrink its provenance to the size of the referent.

Speaker 2:

Interesting.

Speaker 5:

So you can then cast back to a raw pointer and offset it outside the referent. Right? So if you have, you know, an array of u8s, you get a pointer which has provenance over the whole array. You then turn that into a reference to the first element. You can then turn that back into a pointer and then offset it to another element of the array, for example.

Speaker 5:

Or you could do it with, like, struct casts, right, to access members.
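A minimal sketch of that u8-array round trip (hypothetical, with invented names). Under Stacked Borrows, the reborrow through `first` shrinks provenance to a single element, so Miri flags the final write; under the Tree Borrows behavior Ben describes, the round-tripped pointer would keep the array-wide provenance:

```rust
fn main() {
    let mut arr = [0u8; 4];
    let base: *mut u8 = arr.as_mut_ptr(); // provenance over the whole array
    let first: &mut u8 = unsafe { &mut *base }; // reference to element 0 only
    let back: *mut u8 = first as *mut u8; // round trip: reference back to pointer
    // Offsetting outside the referent of `first`: UB under Stacked Borrows,
    // expected to be permitted under Tree Borrows.
    unsafe { *back.add(2) = 7; }
    assert_eq!(arr[2], 7);
}
```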

Speaker 1:

And

Speaker 2:

And what impact did you say this was gonna have on undefined behavior?

Speaker 5:

Yeah. This will make certain patterns defined. For example, people have code bases where they've used the reference to header pattern. And because Ralf has been saying for years now that he will eventually make it well defined, they're basically, like, crossing their fingers and trying to test with Miri as best they can, with Miri's Stacked Borrows implementation just saying they have UB.

Speaker 5:

They're just sort of hoping that it's gonna be fine once Miri is able to check it with a more permissive aliasing model. The problem is that you will still have aliasing requirements to uphold about the regions which you offset this round-trip pointer into. And I'm not sure how much people will be able to uphold those requirements.

Speaker 1:

When you say people will be able to uphold, you mean... I've lost track of whether "people" is the programmer or the compiler here. Yeah. Yeah.

Speaker 1:

Yeah.

Speaker 5:

The programmer. The programmer.

Speaker 1:

Okay. And so this is gonna prevent, effectively, alias disambiguation as an optimization?

Speaker 5:

No. Alias disambiguation will still exist. You as the programmer will have some complicated invariant to maintain about how you access the memory that you want to then turn this reference back into a pointer and then access. So you'll you'll still have some sort of aliasing obligation. I don't know exactly what it is, but there will be some.

Speaker 1:

Got it. Well, so, and this is a 1 year prediction. So this must be work that's well underway.

Speaker 5:

Yeah. As far as I can

Speaker 6:

tell.

Speaker 1:

Neat. Yeah. And, Adam, do you use Miri a lot?

Speaker 2:

No. I've never used it. How about you?

Speaker 1:

I have not. But certainly, many of our colleagues have. I mean, that's something I would like to... I mean, I also really just try to avoid writing... I get so scared now when I'm writing unsafe code that I really do try to keep it pretty safe and pretty defined.

Speaker 2:

Yeah. No. You and me both. I mean, this is not a controversial opinion, but, like, unsafe Rust feels much more unsafe than regular C. And I feel much more over my skis on those.

Speaker 1:

Yeah. Absolutely. Alright. A good prediction. You said you got a 3 year and a 6 year too?

Speaker 5:

Yep. 3 years: Rust will have another IR, and it will be an SSA IR that will be used to implement some interesting new optimizations on MIR to reduce the amount of LLVM IR that the compiler emits.

Speaker 2:

Okay. What's an SSA IR, man?

Speaker 5:

So LLVM has this IR form called static single assignment. Basically, all of your variables are immutable, and you can only write to them once. So this makes a whole bunch of dataflow facts and transformations very simple, but you need to, like, get into SSA form first, and that's not always easy. But there's a but on this one.

Speaker 5:

While this will theoretically make the Rust compiler faster, people will build greater abstractions in that time, especially with upcoming new features in Rust, and, overall, people will not notice the compile time improvement.

Speaker 1:

So this is... I was gonna ask what the net of this is. So the net of this is a compile time improvement.

Speaker 5:

Yes. That is the idea for a lot of MIR optimizations.

Speaker 1:

Not to allow it to generate better code, but actually just to cut down on the amount of work it needs to go do.

Speaker 5:

Well, so that's mostly the 6 year prediction.

Speaker 1:

Here we go. 6 year. You know, by the way, you've gotta promise to come back at the 1, 3, and 6 year marks exactly to help us grade these. Adam, I can just imagine you'll be like, okay, I've been searching the Internet for the last 3 hours, and I still can't tell if this prediction came true or not. I'm sorry.

Speaker 2:

I'm on the Internet archive searching for what we meant by an SSA in 2023. Right?

Speaker 1:

So, yeah, what's your 6 year?

Speaker 5:

So this was, like, 2 weeks ago: Rust formed a new team called the operational semantics team, and their job, as opposed to the language team, is to specify all of the semantics for unsafe code that we haven't pinned down yet, including aliasing and validity and all sorts of other horrifying things that need to be figured out. There are a number of people, most notably Patrick Walton, who are champing at the bit to have semantics pinned down so that they can implement very interesting optimizations. So in 6 years, I'm predicting that Rust will be widely regarded as a faster language than C and C++ on account of these optimizations having been implemented.

Speaker 1:

That's really interesting. My experience is that, for a bunch of stuff that I do, it already is a faster language because the abstractions are better. C's lack of composability makes it really, really hard to implement certain kinds of data structures in ways that can actually be reused and robust; Rust, by contrast, makes it really easy. So, not that I do this often, but I took a stopwatch to a particular program that I rewrote in Rust.

Speaker 1:

I was shocked when my Rust was, like, 30% faster. And it was 30% faster because it was able to use a B-tree, and my C was using an AVL tree, because a B-tree in C is absolutely brutal. But, yeah, that's exciting. That's definitely exciting.

Speaker 1:

Those are good predictions. Good stuff. Very Rustacean predictions.

Speaker 5:

Thank you. Talking my book.

Speaker 1:

There you go. Absolutely.

Speaker 2:

I've invited back up and because that's slow, I've also invited Ian up.

Speaker 1:

See. There we go. Back. Here you are. Can you hear us?

Speaker 2:

And, Ian? Ian, can you hear

Speaker 1:

us?

Speaker 3:

Yes, I can.

Speaker 2:

Terrific. Welcome back. What do you have 1, 3, and 6 year predictions?

Speaker 3:

To be clear, are you welcoming me back, or are you welcoming back back?

Speaker 2:

Oh, Ian, you have the mic. Why don't you why don't you tell us what's gonna happen?

Speaker 1:

Who's on first? Yeah. That's right.

Speaker 3:

So my 1 year, following the advice that this should already be happening: I think salary ranges will be posted for all tech job postings in the US within the next year.

Speaker 2:

I think the unions are going to insist on it, I agree.

Speaker 3:

Yeah, this is mostly driven by legislation that has already passed in Colorado and New York and legislation that's about to come into effect in California and Washington state, but most of the big players have basically said it's too hard to do otherwise, particularly because they have remote-available postings. It's easier to just give a salary range for every single job posting.

Speaker 1:

With some companies offering some absurd ranges. I believe that there was a Netflix posting where, like, the range went into 7 figures. They had a huge, huge range. I also kinda think that, like, posting salary ranges is kind of the least we can ask of folks. So I'm glad that folks are gonna be able to meet this kind of bare minimum.

Speaker 1:

But yeah. Good prediction. Certainly, at Oxide, it's really simple. We definitely post our salary range. It's a very tight range.

Speaker 3:

Yeah. It is going to be pretty humorous to start off with. I think there's gonna be some pretty crazy ranges posted. I do kind of wonder if it's going to cause the mix of compensation to further shift towards RSUs and other compensation that they don't have to necessarily post in that range, or, vice versa, whether they're going to start paying a higher salary and lower stock-based compensation to be able to pump that number that is public. I kinda wonder how the incentives are gonna play out there.

Speaker 2:

I see. But, cynically, you're figuring the number kinda doesn't move, but rather the bookkeeping around it changes.

Speaker 1:

Yeah. Okay. With both your prediction, though, and Adam's prediction, you've got kind of the tech worker in the driver's seat. And do you not see a coming bust at all? I mean, I think with every major company laying folks off... I don't know. I certainly agree with you that companies should do this, but it feels like tech workers are gonna have less control than they've had in previous years.

Speaker 3:

Oh, I think that they're definitely going to... like, the prediction of whether they're going to post salary ranges is easy, because the hand is forced by legislation. I think the part that is going to be interesting is to see the kind of second or third order effects of salary being a more public concept.

Speaker 1:

Yes.

Speaker 3:

Where many tech jobs have previously kept salary, even salary ranges, pretty closely under wraps. When that information comes out to the worker, I kind of wonder how the incentives play out: whether that plays out to paying more in salary, because it is something that is public and easily comparable from job to job, or if it means paying less of a range in salary and more of a range in bonus and RSUs, where the numbers can be played with in a hidden manner. I'm not exactly sure how the incentives are gonna play out, and that's interesting.

Speaker 1:

I mean, I definitely feel that, with salary ranges not being disclosed, companies often let themselves get out-negotiated by employees, especially brassy ones. Folks, I'm sure, are aware, but we are transparent about our compensation at Oxide. We also happen to be uniform about our compensation. But part of my observation was that salary negotiation rewards the brassy, not necessarily the people that are actually the best.

Speaker 1:

And unless you are hiring someone to be a salary negotiator, in which case you should pay the best negotiator the highest salary, it just does not make sense. And so I wonder if you're just gonna have, yeah, tighter ranges, as more employers are like, hey, look, we would love to pay you more, but, sorry, this is the posted range. You're a good negotiator, but you're at the top of the range.

Speaker 3:

Yeah. Yeah. I'm not exactly sure how that's gonna play out, but I think it's gonna be super interesting to see how the dynamic changes when that information is public. I mean, the information was semi-public via anonymous postings on Glassdoor and levels.fyi and other sites of a similar nature, so people could have some level of understanding of what those numbers look like, but only if their company was of a certain size, and even then, I'm not sure how much you trust the anonymous posters on those sites to give a full truth.

Speaker 1:

Yeah. Maybe not that much. Good prediction. What's your 3 year?

Speaker 3:

My 3 year is around large language models. I think that, at a 3 year range, search engines are going to be really struggling with spam, due to large language models making it easier to generate large amounts of content without necessarily a large amount of human input. I think that's going to exploit the already existing spam problems that they're struggling with, and that's going to be a real challenge for general purpose search engines.

Speaker 2:

So search engines become unusable because there's such a volume of spam content, and they're not good enough at effectively spam filtering, at deciphering what's real content from what's just confidently generated spam.

Speaker 4:

That's

Speaker 1:

What I think the problem with these large language models is, is, like, they believe everything that they're told on the Internet. So this is how they end up being racist and everything else, why they have to, like, be corrected: because they just inhale the Internet. And I've wondered about this too, Ian. Like, what happens when these models start inhaling their own bullshit? And they believe that they're certain because they read the result generated from some other model, or themselves, that was totally certain.

Speaker 1:

Ian, do you think we're gonna have... I think we're already seeing some efforts to actually differentiate human-written content from large language model authored content. And you wonder if that's where we're gonna see a lot of intellectual endeavor.

Speaker 3:

I'm sure that there's going to be a continued arms race on that front: language models generating more and more believable content, as well as search engines and others coming up with ways to detect automatically generated content from those language models. I think there will come a point where it becomes difficult to write a tool to detect the content generated by those language models without a high false positive rate knocking out genuine user generated content. So I think that is going to be a real challenge, and there's always an arms race between, you know, the people generating spam and the people trying to sift through that spam to find useful signal to serve up on search engine result pages. I just think that in this 3 year time frame, we're going to see the scales tip towards the spammers even further, and it's going to be a real challenge to dig out from underneath that within a 3 year time frame.

Speaker 1:

Yeah. Interesting. It does remind me of... did you ever read David Macaulay's Motel of Mysteries, Adam, as a kid?

Speaker 2:

No. I haven't. No.

Speaker 1:

Do do you know who David Macaulay is?

Speaker 2:

Yeah. Absolutely. Like Castle

Speaker 1:

or some

Speaker 2:

of these other

Speaker 1:

Castle and City, Pyramid. So David Macaulay wrote this book that I loved to read as a kid called Motel of Mysteries. I'm reminded of this, Ian, because you're talking about the world being buried by garbage. The premise is that the US Postal Service makes mail free, and the world is instantly buried in junk mail. So there's this apocalypse of junk mail, and the world is buried.

Speaker 1:

And the year is 3,000-something, and an archaeologist has found a motel that the archaeologist infers to be a temple. And it is all about the misinterpretation of things from the eighties, all very much in the style of the tomb of Tutankhamun. But one of the pieces that definitely stuck with me was the future archaeologist's wife demonstrating the ceremonial headdress, and it is a toilet seat that she's wearing around her head, with the toilet lid up behind her head and the "Sanitized for Your Protection" band holding it onto her forehead. It's really very well done.

Speaker 1:

So this is what's gonna happen: the world is buried in garbage that it has created, we are not gonna be able to know what's real and what's not anymore, and future archaeologists are gonna misinterpret what we were doing.

Speaker 3:

Yeah. I mean, the Internet Archive definitely has an interesting job there as well, right, where

Speaker 1:

Totally. What what,

Speaker 3:

I think, from an archival standpoint, you probably want to archive all of this content. But on the other hand, do they have to become more selective as to what they archive? I don't know. It's a challenging problem.

Speaker 1:

Yeah. Interesting. Yeah. What's your 6 year?

Speaker 3:

My 6 year is around the AR/VR space. I think within the 6 year time frame, Apple both releases and then pulls out of the VR/AR market due to a lack of adoption. So they release something in the next, I don't know, 1 to 2 years; there are some rumors that it's going to be this fall. But I think that within 6 years, they'll pull out of the market because they are unsuccessful in getting mass adoption for this product.

Speaker 1:

Love it. I also love, like Tom's prediction as well, the parlay, where you've got the two predictions together. I think it's great. That's a great prediction. Yeah.

Speaker 1:

Because that seems plausible, I think. That's a good one.

Speaker 2:

So Yeah. I love the 6 years in particular because it's just enough time to feel like not only could someone do something crazy, but they could also regret it. That's right. Exactly.

Speaker 4:

Yeah. That's I mean,

Speaker 3:

I feel like Apple has also had a number of, like, surprise hits in the hardware space. Like, you look at the AirPods business, and you would say that prior to AirPods coming out, Bluetooth headphones were, like, not cool. If you saw someone wearing Bluetooth headphones and talking while walking down the street, you would think that they were generally not super cool, but they've managed to turn around that perception and image and produce a product that has generated a huge amount of business for them in AirPods. The Apple Watch has also gone pretty well. So it's a bit of an interesting prediction, because Apple has managed to turn around these markets that previously were uncool and make it into something that worked. But I feel like they're going to not be able to make it happen for VR and AR, for, you know, a number of different reasons. I think the pricing is going to be crazy, and I feel like the number of people who are going to get value out of an AR headset or VR headset is pretty low in practice.

Speaker 3:

I think it's a very cool gimmick to play around with and then to kinda get bored of. Absolutely. Which I think is what we've seen.

Speaker 1:

Absolutely. Yeah. And I also feel like this is something that... and I don't know, do we wanna intersperse our predictions? Because I've got one.

Speaker 2:

you've got a germane one, drop it in. Sure.

Speaker 1:

I believe that Meta renames itself to Facebook. So this is

Speaker 2:

What year what year is this?

Speaker 1:

Well, so I have been vacillating this entire time on this being a 3 year prediction versus a 6 year prediction. Heart says... well, the head is actually regretting that it's even along for the ride. Heart says 3 year. Head says 6 year. I'm gonna be inspired by the victory of your own heart this past year.

Speaker 1:

I'll make this a 3 year prediction. So Meta renames itself Facebook, Alphabet-style. And Meta still exists, but only for Zuck's vision of the metaverse, and Zuckerberg has been convinced to step down as CEO of Facebook and to run the metaverse entity.

Speaker 2:

is quite a parlay. Oh my goodness. A lot of stuff going on for you.

Speaker 1:

This is like, right now, head is throwing up in a trash can as heart heads away.

Speaker 2:

Someone take the microphone away from heart.

Speaker 1:

Heart is... and where is heart? Heart was in... oh my god, heart's got the mic. And I know that this is stupid on many different levels, not least that this is something that he would need to do of his own volition, that there's no amount of shareholder activism that can get him out of the company. But this idea that he has is a stinker, and I feel that everybody knows it.

Speaker 1:

And I feel that I've just... I've yet to encounter any real person who's like, no, I want to live my life with a headset on. And when my own 15 year old washed out of it, when we watched that headset get sold and sold and sold, I'm like, this is not good, man. If you've not won the 15 year olds for this... I mean, it's interesting. It's cool. It's neat, but it's neat for, like, a game.

Speaker 1:

It's just... it's small. It's not a big thing. It's a small thing. It's an important thing. Yeah.

Speaker 1:

Small thing. So, yeah, that's my 3 year. Maybe, just to get them out there, so people only have to go to one spot in the recording, do you mind if I give a 1 year and a 6 year?

Speaker 3:

Go for it. Go for it. Yes.

Speaker 1:

So, 1 year, Musk is out of both Twitter and Tesla, neither of his own volition.

Speaker 4:

Okay. So how does

Speaker 2:

he get kicked out of Twitter, not of his own volition? Like, the

Speaker 1:

Debt holders. So I think, the way he structured that, the debt holders are... and I think that it's gonna look less like him being fired and more like him being persuaded to find someone else, or the debt holders finding someone else and then more or less forcing him to take it.

Speaker 2:

The debt's not in charge of software and servers, is it? Right.

Speaker 1:

So I should say, out as CEO. So not necessarily uninvolved in Twitter, but out as CEO of Twitter. And it's just very clear a year from now (it's not clear now) that this is just a complete god-awful mess, and he should not be running the business. But I think he also will be out of Tesla, not of his own volition.

Speaker 1:

I think he'll be forced out of Tesla because I think that

Speaker 2:

I really like that one. Now, if you had to flip a coin or whatever, which one

Speaker 1:

comes first? I think Tesla comes first, because he's uninterested in running that company. He's just not running it right now.

Speaker 2:

think that's absolutely right. And I think the shareholders have noticed as opposed to the absence of shareholders on the other side.

Speaker 1:

Yeah. And as heart was contemplating making this prediction, head really wanted to just check out what the shareholder rights were for Musk: what percentage of the company he's got, and whether those are super-voting shares. Adam, do you know anything about the dynamics there?

Speaker 2:

No. No idea.

Speaker 1:

Yeah. Okay. Well, it's alright. Sorry. Sorry, head.

Speaker 1:

Heart's made another prediction. So that's my 1 year. And then my 6 year... and I'm not at all surprised; I kinda felt going into this that we would see 2 themes. One, with Meta going so long on the metaverse, it's really tempting all of us to make metaverse and VR/AR based predictions, and then also kind of generative AI based predictions.

Speaker 1:

So I'm gonna predict that in 6 years, we no longer call it generative AI. In fact, the term AI has fallen out of fashion, and we are no longer thinking of these things as replacing people, but rather... goddamn it, heart's making another prediction here. God, sorry, head. You're 0 for 3.

Speaker 2:

I was like, I've been hearing this I've been hearing this prediction for 20 years.

Speaker 1:

No. But this time... no. See, here's why: instead of thinking of them with kind of intelligence metaphors, or certainly with human metaphors, as with AGI, we are thinking of them more with mechanical metaphors, more as, like, you know, assist, auto, the things where we

Speaker 2:

Augmentation. Yeah.

Speaker 1:

Yeah. And because, I mean, honestly, like... how badly, Adam, would you want to get a large language model to inhale our own technical documentation at Oxide so you could get it to write a glossary?

Speaker 2:

Oh my god. That'd be unbelievable.

Speaker 1:

Why? That'd be super useful.

Speaker 2:

And it's like Yeah.

Speaker 3:

That's If

Speaker 2:

it leaves that'd be cool.

Speaker 1:

It's super useful. Or a

Speaker 2:

similar summary or yeah. Or

Speaker 1:

No. No. No. Just being able to be, like, hey, I wanna let you loose on this technical documentation, and I want you to write a FAQ for it. And that feels incredibly powerful and useful.

Speaker 1:

And no one's gonna be like, oh my god. You're putting someone out of business. Like, no. You're doing something that actually we don't have the ability to do right now. And it would be really, really valuable, but we need to stop thinking of it as, like, a chatbot or something that's gonna, like, write a term paper.

Speaker 1:

This is actually also stuff that is, I think, probably a good fit for it, because the stuff that it's inhaling is all technical documentation. Like, you're not actually slurping in QAnon conspiracy theories. And so I think we are thinking of these technologies as important. The generative technologies, in particular, we now realize are big breakthroughs, but we also realize that they are not on a trajectory to replace human intelligence. They have not advanced AGI at all.

Speaker 1:

But what they have done is make possible some things that were not possible, things that are important and valuable. So that is...

Speaker 2:

That's a terrific prediction. I love that, Brian. And, actually, I welcome this new AI future. Not-AI future, excuse me.

Speaker 1:

You're just welcoming the fact that heart went 3 and 0 against head for these predictions, and you just can't wait to watch head... Head will cackle as I go 0 for these.

Speaker 2:

Alright. Well, I got a couple more. In addition to my ludicrous unionized tech workers 1 year prediction, I've got a couple of mediocre 3 year predictions. First is that HPE is acquired. They have basically been flat, like, since the divestiture or whatever it was.

Speaker 2:

And my even lower probability parlay on top of that...

Speaker 1:

Who are they acquired by? Yeah. Okay. Do you have an acquirer?

Speaker 2:

Yeah. Yeah. HP.

Speaker 1:

That's great. I don't know.

Speaker 2:

I don't know the acquirer, but that's my low-probability parlay on that.

Speaker 1:

But HP acquires HPE.

Speaker 2:

How beautiful would that be?

Speaker 1:

That would be very beautiful. I am hoping that HP, being the acquirer, will enforce some common sense. And I've wondered, like, how much 8chan does Twitter need to devolve into before HPE GreenLake will stop spamming me with ads? Apparently, like, not enough. Wherever we are now, it is still, like, the last advertiser left, at least in my own feed.

Speaker 1:

HPE is. IBM has disappeared. I'm sure IBM is still advertising there, but IBM has disappeared from my feed on Twitter. I've been going there less and less frequently, probably only once every couple of days, but HPE is still there. So I welcome this acquisition, Adam, and it's a great prediction.

Speaker 2:

I don't know about that. But maybe more of my head prediction is that... Brian, you're familiar with ZNS in the NVMe protocol?

Speaker 1:

Yeah. Yeah. Yeah. Yeah.

Speaker 2:

Zoned namespaces. So this is, like, turning over more control over placement on these NAND flash devices. Okay. But Meta (slash Facebook, I guess, at the time) and Google have kind of backed away from that and said, you know what?

Speaker 2:

Flash vendors, actually, we don't want to have to drive this completely; we'd still like you to do a bunch of stuff. So there's a thing called flexible data placement that is more of a collaboration between the NVMe device and the software. So I think in 3 years, there's going to be a widely used general purpose open source file system around this technology that then starts to enable kind of cheaper, lower-DRAM, less complicated FTLs.

Speaker 1:

Is this your way of telling me that you're writing such a thing? Does Oxide author this? Is this a

Speaker 2:

You know what? Probably not, in part because if we were going to write something, it would not be a general purpose file system. If we were gonna write something,

Speaker 1:

it would be a b u b b s. Yeah.

Speaker 2:

It would be something that enables block reads and block writes, like the kinds of things that operating systems want to do to disks, not, you know, a general purpose file system that stores, like, database files or, you know, MPEGs or whatever.

Speaker 1:

Yeah. Interesting. Yeah. So okay. Yeah.

Speaker 1:

And I think, I mean, this is something I feel you've been waiting for for a while.

Speaker 2:

If you're saying that I've been predicting this for 20 years, like, that's also fair.

Speaker 1:

I wanted to stop short of that. It felt mean-spirited. But, yeah, it feels like maybe 15 years.

Speaker 2:

Yeah. I wrote an article for ACM Queue too long ago where I said, we want this, we need this. And I think that there has not been the economic incentive for it, and there have not been the devices there. But I think with pressure from Google and Meta and the hyperscalers around some of these technologies, they're starting to make it much more feasible. So, you know, here's hoping.

Speaker 1:

Yeah. Okay. So that's your 3 years.

Speaker 2:

That was also 3 years. That was also 3 years.

Speaker 1:

6 years.

Speaker 2:

I'm out on a limb here, because I feel like either this is gonna be folks saying you're crazy, or, that happened 2 years ago, what are you talking about? So in 6 years, I think that most general purpose CPUs are gonna be heterogeneous in terms of cores. Meaning that there are gonna be a bunch of cores that all execute the same ISA but have very different performance and power characteristics, even on the same die.

Speaker 1:

Okay. But the same ISA is an interesting constraint. So I actually am I'm Yeah. Yeah. Yeah.

Speaker 1:

I am with you. I definitely think we're gonna see these heterogeneous dies where we've got different compute elements on the die. But so you think they're gonna share an ISA. I don't know if you're able to scroll down.

Speaker 2:

Yeah. Yeah. So let me... the reason I think about a shared ISA, or why I think that makes it a more interesting prediction, is that then you start to schedule workloads based on, you know, how much power you wanna devote to them, or characteristics of them, rather than having different entities that you've compiled specifically to run on one core or another. The scheduling of these workloads becomes the responsibility of higher level software, or user intervention, that then puts them in a more efficient place to run.

Speaker 1:

Interesting. So, in other words, there's a single unified system that's able to see all of these. Yeah. Interesting.

Speaker 1:

No. I think it is interesting. I mean, we have certainly seen elements of that in computing's past, but nowhere near to the degree that you're predicting. It's a good prediction. 6 years.

Speaker 2:

No. I don't know the degree to which Apple is doing this, but I think with some of their M series chips...

Speaker 1:

For sure.

Speaker 3:

Yeah. They're doing all kinds of different stuff, to some degree on different processes, where they have efficiency cores and performance cores on the same die. And the performance cores can burst to a higher clock rate, whereas the efficiency cores are a little bit more constrained. I think that some of that ends up just being binning, though, where they look at the quality of the die and work out which cores are good and which ones are bad, or even print a die with, I don't know, 16 cores on it, and they find out that 6 of them are bad, so they sell it as a 10 core part.

Speaker 3:

You know, I think that this efficiency versus power cores on the same die already exists in some packaging, but it may not be on every part.

Speaker 1:

Yeah. I

Speaker 2:

I'm predicting that this becomes more or less ubiquitous — that general purpose CPUs kinda work this way.

Speaker 1:

And so it it it's so kind of

Speaker 3:

wonder what you think the split is going to be on efficiency versus power on those. Like, do you have a thought in mind that in the next 6 years, what do you think a core count would look like, and what number of those are gonna be like?

Speaker 2:

I really don't know. I was thinking about my predictions, thought about this one, and then looked back at my predictions from last year and saw that I had made some predictions around power efficiency being increasingly relevant to when and how we run workloads. So I think that when I get in a predicting mindset, my mind turns to some of these power scarcity, environmental factors creeping into how we construct systems.

Speaker 1:

Oh, so I don't necessarily think that they're gonna share an ISA. I think what we're gonna see is more of this accelerated compute being brought on die. But I share — I like the vision of being able to actually have a single system that has much greater insight into this. And it does actually remind us of a prediction that we didn't revisit, which we should have, Adam: we had a guest a year ago talking about the NVIDIA acquisition of Arm, which was very much on a year ago, and saying that it would take a long time to fall apart, but it would ultimately fall apart over antitrust. And, of course, it fell apart much faster than that.

Speaker 1:

So that was a very prescient prediction. And I feel that had NVIDIA closed on Arm, your prediction would have a very different flavor to it. But absent that, I think it looks more like perhaps an x86 future — I'm not sure. You probably see a future for Arm as well. But it feels like NVIDIA would have had more of a part of that future before that.

Speaker 2:

Yeah. Yeah. Having having very heterogeneous, like, single die kind of systems. Yeah. For sure.

Speaker 1:

And as long as we got yeah. Sorry. Go ahead.

Speaker 2:

No. No. Go you after you, Brian.

Speaker 1:

Well, I was gonna say that we also have — keep in mind, we've got Laura's prediction that RISC-V is gonna be in the data center here. I mean, I think we've got, what, 2 more years to go on that one. So maybe Discord will alienate their user base by putting RISC-V in the data center, giving Laura a split decision.

Speaker 2:

Right. And then the last one I wanna make for 6 years was about the VC funds that invested in 2022 and 2023.

Speaker 1:

Oh, you think they actually crush it? The VC funds

Speaker 2:

The funds that have invested in '22 and '23 have crushed it. This winter that we're going into has caused extremely shrewd, capable, smart investment, and that generates a new frivolous bubble, you know, 6 years from now.

Speaker 1:

Only 6 years to the next bubble.

Speaker 2:

Yeah. Yeah. Well, I think that VCs are gonna be so convinced that they have the Midas touch that it's just gonna be another cycle.

Speaker 1:

And how about all these VCs with these crypto funds, the web 3 funds — web 3, the term that we don't even recognize anymore, per your prediction a year ago?

Speaker 2:

I mean, I'm talking about, like, either that was, you know, before the the correction in, like, March 22.

Speaker 1:

I'm I'm talking

Speaker 2:

about the investments after that. Yeah.

Speaker 1:

There you go. Okay. So the investments that are made in the coming apocalyptic wasteland actually pay off hugely in 6 years.

Speaker 2:

Hugely. Yeah. That's right.

Speaker 1:

That's exciting stuff. Did you make any one year predictions there? I feel like you got out of a 1 year prediction.

Speaker 2:

Oh, my my, tech workers of the world unite.

Speaker 1:

The tech workers of the world unite. Okay. Yeah. Yeah. So okay.

Speaker 1:

So I actually feel that — I think you're definitely onto something, in that you're seeing increasing tension for sure, and we're seeing that kind of bad behavior. Certainly, I think that Musk normalized bad behaviors, as we talked about when we talked about layoffs. I wonder if we're not gonna see more bad behavior as you have folks talk about unionization, because most execs in tech don't actually understand at all their legal obligations when it comes to workers who are organizing. You end up seeing labor violations — like, people are fired, for example, for advocating for a union, which you can't do. So I wonder if we're gonna see more of that in the coming year before we get to full on unions.

Speaker 1:

But

Speaker 2:

yeah, maybe we won't get to that point, but, I don't know. Obviously, the pandemic has had and will continue to have repercussions for how we work, and for the relationship between employers and employees. But I also think it's inspired this — and maybe this is just because I moved from San Francisco to the East Bay — but folks seem to be a little more relaxed about their relationship with their employers. And maybe some of the folks who were strivers are striving a little less hard. I've seen more people taking gaps and quitting without necessarily having the next thing lined up. And while I agree, Brian, that there is this contraction, this winter that we're in or that's coming, that will increase employer power —

Speaker 2:

I think, on the other hand, employees can sort of choose how much they want to acknowledge that. And I think that some of that might be coming as well.

Speaker 1:

It's gonna be a, a tempestuous year, it sounds like.

Speaker 2:

Exactly. Strap in.

Speaker 1:

Strap in — which, ultimately, it is the union at Twitter that forces their hand to be acquired by HPE. I can't wait. And I'm just looking at the chat — there are some good predictions in the chat. There are a lot of folks agreeing that, yeah, I don't see VR being mainstream, but you do have some folks who love their VR headset. So, you know, it is definitely gonna have its place.

Speaker 1:

The one

Speaker 2:

I like this one: 1 year, Twitter is bankrupt. In 3 years, Facebook slash Meta is bankrupt. Those are bold predictions, Sam801.

Speaker 2:

Actually, one more bold than the other, I think.

Speaker 1:

Raggy asks if anyone has got a strongly held Copilot prediction. Adam, do you have any Copilot predictions? Have you used Copilot?

Speaker 2:

I have not used Copilot. I really should. Have you?

Speaker 1:

No. And I would not. Just the thought — Copilot reminds me of software engineers that I just cannot deal with writing code with. Namely, I really wanna sit down and, like, think about the problem with my notebook and be kind of free of distraction. And Copilot feels like it's exactly the opposite.

Speaker 1:

It's just like someone who's very loquacious and constantly suggesting things and won't shut up. Just like, okay, I really need — can you leave, please? Can I just think about the problem by myself, please?

Speaker 1:

Actually, I'll leave.

Speaker 2:

You know, the prediction I wanted to make around this space — that my heart talked my head out of — was around this generative stuff putting low code and no code kinds of products out of business, where you can just say to the computer, this is the kind of program I want you to write. But my heart dissuaded me from that because I don't know who's gonna debug that, and it certainly is not gonna be the same thing that wrote it.

Speaker 1:

Absolute yeah. Yeah. I think it's a

Speaker 6:

The thing that I will put out as a prediction — probably more in the 3 year kind of time frame — is that you are going to see languages that are placing more priority on having a pre-trained Copilot style model than on documentation. Because ultimately, if it's a brand new language, but everybody gets the general gist of how to write code, you can make the argument that it's much easier to read code than it is to write it if you're writing in a language you have no clue about. And if you're writing some sort of domain specific, Rust-style, DSL-style kind of thing, I could make the argument that if you are going to be supplying a Copilot style thing, it will possibly be more the tool that people reach for rather than the docs or Stack Overflow, because it doesn't tell you that you're a fucking moron every time you ask a question.

Speaker 1:

I kinda feel the same way about — I know that IDE support is very valuable, but I also feel that it allows people to kind of paper over the complexity in a system. I've seen this: it feels to me that you see these Java-based systems that to me are just insanely complicated and really require an IDE to reason about. Am I mischaracterizing it?

Speaker 2:

You are sounding — yes, you are sounding like a cute person. What was your question?

Speaker 6:

Hey. Hang on. Like, part of the problem with Java is that that was just everyone getting so high on the concept of object orientedness that, of course, I'm gonna need an Enterprise FizzBuzz Factory Builder Builder Factory Builder Enterprise Fizz Factory. Right?

Speaker 2:

Yes. But, Brian, for example, I struggle to write or understand Rust without Rust Analyzer. Or, maybe put another way, I don't really wanna live in a world without Rust Analyzer. This is not an intervention.

Speaker 3:

I'm like,

Speaker 1:

you know What what would an intervention look like? I understand this is not an intervention. And if it were an intervention, what would it look like?

Speaker 2:

I would be saying your name a lot more. Mhmm. But Brian

Speaker 1:

I feel like we'll leave that unsaid.

Speaker 2:

But, like, when I'm looking at code review on GitHub, I'm like, what the fuck are the intermediate types? In part because I'm in some more mappy, foldy code here and there. And I love these annotations that the IDE gives me these days.

Speaker 1:

Alright. So does that relate at all to Copilot? I mean, to the prediction that we're gonna have languages that, instead of having documentation, have, like: no, don't worry, this assistant will help you write it.

Speaker 2:

I, I think it's a great prediction and I hate it.

Speaker 3:

I mean, it's not too far removed from the prediction that Brian made of having a tool that could ingest your documentation and generate a language model based on that documentation. It's just another step along that journey. Right? Instead of generating a glossary, you could generate a, you know, a bot for it. You could ask a question of it, and it could tell you the answer.

Speaker 3:

Right? Like, what happens when I load a model from the database using this method, or something. It could give you an answer in text, and it may come from the documentation, or it may come from, you know, someone who read the documentation, and it spits out something from that. So it's not that far away from that.

Speaker 1:

Yeah. I mean, then you're kinda getting closer and closer to, like, Clippy. What are you feeling about Clippy, Adam? I like Clippy.

Speaker 2:

Complicated. Yeah. I think I mostly like Clippy, and one of the reasons I like Clippy is because — I'm not ashamed to admit it — I use VS Code. I use an IDE, and it lets me, like, one-click try out its suggestions, and then, you know, one keystroke to revert it

Speaker 1:

if I hate it.

Speaker 6:

Is this Clippy some sort of Rust thing, as opposed to, like, the Microsoft paperclip?

Speaker 2:

Correct. It's named in homage, for some weird reason.

Speaker 1:

Oh, for a very weird reason. I kinda feel like anyone who's gonna get this reference is gonna be disgusted by it, and anyone who doesn't get the reference — like, what's the point? It's very strangely named. But, yes, this is not Clippy as in Microsoft Clippy circa 1996 or whatever.

Speaker 1:

This is Clippy, the Rust linter. It advises on best practices, which I mostly agree with. There are some of them where I think, like, oh, come on. Really, that's more readable to you, Clippy?

Speaker 1:

And then there are also some situations where it gets a little bit confused, and it's found something that is, like, not great, but its suggestion is, like, definitely worse. But the fact that it flagged something not great is actually a good point. Like, alright, this is a good point. And then the thing that it definitely will bust me on, which I do appreciate, is when it — okay, that's an intervention right there.

Speaker 1:

Like, that's a goddamn intervention. No. Not file size at all.

Speaker 2:

Okay. Just a guess.

Speaker 1:

But why don't you get this off your chest? Like, let's just be done with it.

Speaker 2:

Have you ever loaded up dtrace.c in GitHub? I'll tell you, you haven't, because it just doesn't load — you'd still be sitting there.

Speaker 1:

And that's my fault. That'd be my fault. Well — I guess the pre-port software is my fault.

Speaker 2:

Well, it's somebody's fault.

Speaker 1:

Look, it is actually — when they did the port — I mean, obviously, DTrace has been ported in several different ways, but the Linux folks at Oracle, the first thing they did is broke it into a million different files. And, you know, to me, it just doesn't add that much value.

Speaker 2:

I'm actually with you. I was just saying it because Clippy has that kind of fussiness. I would also say the FreeBSD folks did the same thing, and they immediately regretted it. And it didn't take 6 years for that prediction to come true.

Speaker 1:

Yeah. Because, I mean, there's definitely a time and a place, obviously, for different files, and to use the file system as part of program structure. And I probably — okay, look, fine.

Speaker 1:

I've I probably use it less than maybe even I should, but certainly less than others.

Speaker 2:

We talked about Java. I mean, one of the things I hate about Java is the 10 billion files necessitated by it. So I'm with you, with the other side of it being just as bad.

Speaker 1:

Yeah. There is another extreme. And I do feel like that's a good example of where, when you've got a gazillion tiny files spread throughout the file system, you are really relying on the development environment. You're not just saying anyone can kinda bring an editor. In order to reason about this, you're gonna need something that really understands how the file system structure correlates to the program structure.

Speaker 1:

And that can be a bit brutal. But so I — What

Speaker 2:

does Clippy actually ding you for?

Speaker 1:

So the way Clippy really busts my chops — and it's just, like, totally right — is when I hit some of its complexity lints. And this is where I'm getting, like, dinged. Because I think one of the things I do love about Rust is that Rust is very rigorous, but it's also very loose in some delightful ways. And it's easy to get strung out on tuples. Like, real easy.

Speaker 1:

And maybe it's easier for me than for other people. You're like —

Speaker 2:

you're like, it's great: I can have this incredibly complicated structure, and I don't have to name anything. I can just do .17 and pull out the 17th component.

Speaker 1:

Desugar it when you need it. So you can actually, like — I mean, having a Result of a BTree of a bool.

Speaker 6:

A bit field. No, no — have you just produced a massive unnamed bit field?

Speaker 1:

No, these are typed fields. But it's kind of easy to accrete a — and this is when Clippy blows the whistle on this, it's, like, always right. It's like, what is this? And you're like, Clippy, what is this? And Clippy is like, this should be a type.

Speaker 1:

You know, this actually should be a type. It's time to turn this into an actual, like, structure. And then the other one I hit: whenever Clippy is upset about

Speaker 6:

the number of arguments to a function, it's right, and I know it's

Speaker 1:

right, and I've known it's right before I ran it. So, no, I think it's good. I like Clippy, and I generally like its suggestions.
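For context, the two Clippy lints being described here are `type_complexity` (a return type that has accreted into a pile of anonymous tuples) and `too_many_arguments`. A minimal sketch of what Clippy's "this should be a type" advice looks like in practice — the names (`Inventory`, `Part`, `lookup`) are my own illustration, not anything from the episode:

```rust
// Instead of a Clippy-flagged signature like:
//   fn lookup() -> Result<(BTreeMap<String, (u32, bool)>, usize), String>
// ...name the pieces, as Clippy's type_complexity lint suggests.
use std::collections::BTreeMap;

struct Part {
    quantity: u32,
    in_stock: bool,
}

struct Inventory {
    parts: BTreeMap<String, Part>,
    total_kinds: usize,
}

fn lookup() -> Result<Inventory, String> {
    let mut parts = BTreeMap::new();
    parts.insert(
        "gimlet".to_string(),
        Part { quantity: 17, in_stock: true },
    );
    let total_kinds = parts.len();
    Ok(Inventory { parts, total_kinds })
}

fn main() {
    let inv = lookup().expect("lookup failed");
    // Named fields instead of .0.1-style tuple indexing:
    assert!(inv.parts["gimlet"].in_stock);
    println!("{} part kinds", inv.total_kinds);
}
```

The same move — bundling related values into a struct — is also the standard fix for the `too_many_arguments` lint: pass one named options struct instead of eight positional parameters.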

Speaker 1:

Other than, like — does the is_empty thing bother you? Or does it not?

Speaker 2:

As opposed to len equals 0.

Speaker 1:

As opposed to len equals 0. It's like, really?

Speaker 2:

That's the hill we're gonna — I don't know. I sort of like that one, in that it's simple and right. But I agree with you that it's like, come on, what are we doing here?

Speaker 1:

This is where I feel like, if we can channel some of our colleagues, Cliff would explain to me why is_empty would actually allow the compiler to generate better code than len equals 0. So I'm sure there's a good reason for it somewhere. But that one, I've also just, like, accepted. I'm not fighting that one. Yeah.
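The lint in question here is Clippy's `len_zero`, which suggests `is_empty()` over `len() == 0`. A tiny sketch (mine, not from the episode) of the two forms — for `Vec` they're equivalent, and the usual argument for `is_empty()` is intent plus the fact that, for some collection types, checking emptiness can be cheaper than computing a full length:

```rust
// Clippy's len_zero lint: prefer is_empty() to len() == 0.
fn main() {
    let v: Vec<u32> = Vec::new();

    // Clippy flags this form...
    let empty_by_len = v.len() == 0;

    // ...and suggests this one. Same answer, clearer intent.
    let empty_by_method = v.is_empty();

    assert_eq!(empty_by_len, empty_by_method);
    println!("empty: {}", empty_by_method);
}
```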

Speaker 1:

In general, I'm not fighting Clippy. I think Clippy is pretty good. But I guess the question is: I like Clippy, but Copilot gives me an autoimmune reaction. And I guess this is why my prediction had more heart than head: I would like to see some of the technological foundation that is being used in something like Copilot actually go into things that I would use more the way I use Clippy. I would use it more as a tool and less as a colleague.

Speaker 1:

So I

Speaker 6:

So, Brian, are we hearing that DTrace is adding large language model support to the prompt?

Speaker 1:

I that's it. Well, I think

Speaker 2:

Can you imagine writing a D script with

Speaker 1:

Oh, no. No.

Speaker 2:

Copilot sitting over your shoulder?

Speaker 1:

Oh, no. I did this. This is what I did. I asked it.

Speaker 1:

And this is where you realize that ChatGPT, as it exists today, is more gimmick. I asked it to, like, instrument functions during boot or something like that. And the script that it gave me was wildly wrong — wildly wrong. And it gave it to me with, like, total confidence.

Speaker 1:

And I asked some follow-up questions, like, who taught you to do it this way? And it was crazily wrong. And I think DTrace lives in this really tough milieu with respect to these large language models, where it is large enough that the model can, like, you know, go read Brendan's blogs or whatever — it can inhale enough information to get some confidence, but not actually enough, because it's not, you know, Rust. It is not a system that many people actually use. Enough people use it to put stuff on the Internet, but not enough to actually be able to, I think, correctly train these models.

Speaker 1:

So, yeah, have you done this? I don't know if you've had it write these scripts.

Speaker 2:

No. I haven't.

Speaker 1:

You should have it do it. Because this is where I'm also realizing: this thing is a bullshit artist. And as with a bullshit artist, when they're bullshitting about a topic that you don't know, you're like, oh, this person seems convincing. And then they drift into territory that you know well.

Speaker 1:

You're like, that's just totally wrong. Why would anyone — did you see me ask —

Speaker 6:

I mean, part of that's kinda expectations, right? Like, are you considering it on the level of a senior engineer, or on the level of, like, an intern who happens to be familiar with Python because they learned it in school? A lot of us would really just be like, okay, fine.

Speaker 6:

Yeah. Write the boring parts. I can code review. Right? Like, here.

Speaker 6:

Do my unit tests, right? The problem is if I expect it to, like, generate a rate export or something out of —

Speaker 2:

No — I would rather write the unit tests. It can write the code. I'd rather split it up that way.

Speaker 1:

Well, and I think the problem is that it's certain and it's wrong. That is the problem. That's actually not a good attribute of a junior engineer. That's actually a terrible attribute in a junior engineer. Right.

Speaker 2:

That's actually the worst attribute.

Speaker 1:

That's actually the worst attribute.

Speaker 2:

Like, you want the person who could ask questions.

Speaker 3:

Well Yeah.

Speaker 6:

I mean, the the the question is, do you interpret what it puts out as confidence? Right? Like

Speaker 1:

Oh, it's it's confident.

Speaker 6:

Like, I mean well, I mean, like, it spits out something because it's a computer. Right?

Speaker 1:

That's right.

Speaker 6:

Like, I mean, I guess they could put, like, smiley faces next to it if they're really happy, or, like, sad faces, and then, like, the scared, begging emoji if they really have no idea what's going on. Like, I don't know.

Speaker 1:

So, alright, speaking of which, we gotta get to you, because you came in, I think, with a 3 year. Do you have a 1 year and a 6 year,

Speaker 6:

in terms of predictions? Like, the 1 year has gotta be something about CXL, since you're a computer company. Like, 2023, the year of CXL — I don't know whether it nosedives or actually continues after that. Whether you'd call it the year of CXL because it's the only one, or because it's, like, the start of it. I don't know.

Speaker 6:

Well — what's your take on it?

Speaker 1:

Well, you're getting me in a wounded state, because we have been spending this week doing our compliance for electromagnetic interference. So you catch me just coming out of the chamber, watching the double-E team try to home in on exactly where we are seeing radiated emissions, at particular frequencies, and where that's coming from. And, actually, CXL came up as part of this. And part of the challenge is, boy, you push those clocks over cables — cables are just bad, bad news. There's a lot of challenge when you've got a clock over a cable. And so I'm probably a victim of my work day to day.

Speaker 1:

But I definitely think that that is a challenge with respect to CXL. I personally think that there's also this very weird idea around CXL of, like, don't worry, Optane is dead, but it's being replaced by CXL. You're like, that makes no sense at all. This idea that it's gonna kind of allow for disaggregated memory seems kind of nutty to me.

Speaker 1:

So I don't know. I think CXL — who knows? I'm not sure that we're gonna see that in the next year.

Speaker 6:

But, I mean, are you thinking of that in the sense of, like, the wildest-dreams mesh network over PCIe fabric — your entire data center disaggregated into one gigantic machine with CXL?

Speaker 1:

That's right.

Speaker 4:

Yes. Yes. Okay.

Speaker 6:

But that's never flipping happening.

Speaker 1:

Yes. It's a fever dream. Not gonna happen.

Speaker 6:

But in terms of, like, you know, replacing the DDR4 buses, maybe. That almost sounds more plausible.

Speaker 2:

Yeah. I I don't know. CXL feels like a solution in search of a problem right now.

Speaker 1:

And also, CXL sounds like a cryptocurrency exchange. I mean, shouldn't FTX be a bus transport and CXL be a crypto exchange? They —

Speaker 6:

I I mean, Brian, I'm just saying you have the power.

Speaker 1:

And then, Matt, do you have a 6 year?

Speaker 6:

6 year prediction, we will see an increase in the rate of improvement of power efficiency of high end computing.

Speaker 3:

All right.

Speaker 6:

So, like, on average you see roughly 10x over 6-ish years. We might be seeing, like, 20x or more this time.

Speaker 1:

That's you know, and I think it's

Speaker 6:

You know what? If you want me to put some numbers on it: an exaflop for a megawatt.

Speaker 1:

An exaflop for a megawatt. That's the way to bring that prediction home. I like it. And, you know, Adam, this is kind of in line with the spirit of your prediction from last year.

Speaker 1:

And, yeah, I'm with you, Matt. I like the exaflop for a megawatt. But I do think that we've got to be close to beginning to really think about these efficiencies writ large. Of course, it also feels like this, along with, like, the death of Itanium — I'm trying to think — oh, and also that main memory becomes nonvolatile.

Speaker 1:

I don't know. That was also one of our classic evergreens that has definitely not yet come to pass.

Speaker 6:

Well, I mean, like, I I Itanium didn't really ever make sense for how it was marketed. Right? Like, there there's no way that was ever gonna be like a consumer PC architecture. Right? I mean, not saying they didn't try, but like.

Speaker 6:

And

Speaker 1:

And then — I don't know if you've seen the chat — there's a question about Rust predictions regarding the Linux kernel. I mean, I think we're gonna see it in the Linux kernel too. Or, Adam, did you reply no to that?

Speaker 2:

No — I was just saying, has anyone made a — no. No, they're not.

Speaker 1:

Yeah. Oh,

Speaker 6:

Can I just tee up another one here, with regards to the computer science Rust course?

Speaker 1:

Sure. I

Speaker 6:

think that you will see a better chance of it succeeding in the double e department than in the CS department.

Speaker 1:

That's interesting. That is interesting, because I also feel like — and this is something I was trying to find a way to express earlier — to best appreciate Rust, you really need to understand memory, and you kinda need to understand what the alternatives are. Just like the presence of structured languages did not chase assembly out of the curriculum. And we hit assembly pretty early in the curriculum.

Speaker 1:

We hit assembly — often first year students learn assembly — and that to me was — I mean, I don't know about you, Adam, but coming into university, the assembly that I had done had basically been from copying out of magazines or what have you. Like, I had seen it, but I had no idea what I was doing. And then to learn that you have these primitives, and to really understand what memory meant — I felt like it was a big moment. And I kinda feel that you need that moment before you really appreciate Rust. So, Matt, to your point, I think getting a little more traction from the double-E side — that's interesting.

Speaker 1:

I think that that that that seems plausible.

Speaker 6:

And I've also got sort of a completely different set of reasons for it, and it's more of the clean-slate variety. Because, like, the first thing in C that you learn is that a char is 8 bits, and the second thing that you learn is that a char is not always guaranteed to be 8 bits, because a byte may be 7 or 9 bits because of somebody in the 1960s, even though the rest of the world has moved on. And, like, you know, Rust, with u8, u16, i16, f32, f64 —

Speaker 1:

You want u128?

Speaker 6:

Right — all of this is like, okay, good, we're finally being honest about how a computer looks in 2022. In addition to that, if you're teaching microcontroller C, which is normally, like, the first programming course in double E —
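The point about Rust being "honest" is that the width is spelled out in the type name and guaranteed by the language, unlike C's `char`. A quick sketch (mine, for illustration):

```rust
// Rust's numeric types carry their width in the name, and the
// guarantee holds on every target — no "a byte may be 7 or 9 bits."
use std::mem::size_of;

fn main() {
    assert_eq!(size_of::<u8>(), 1);
    assert_eq!(size_of::<u16>(), 2);
    assert_eq!(size_of::<i16>(), 2);
    assert_eq!(size_of::<f32>(), 4);
    assert_eq!(size_of::<f64>(), 8);
    assert_eq!(size_of::<u128>(), 16);
    println!("widths are part of the type names");
}
```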

Speaker 1:

Yeah.

Speaker 6:

You don't necessarily have malloc, right? You may be writing in an environment with, like, 4K of RAM. So you're not gonna be mallocing and freeing and doing all the stuff that makes people scream at Rust. Like, what do you mean you don't have a malloc? Like, oh, okay.

Speaker 6:

Fine. We can work with that.
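The allocation-free style this describes tends to look like fixed-capacity data structures on the stack. A small sketch of my own — on real firmware this would sit under `#![no_std]`, but the logic is identical on a hosted target, which is what runs here:

```rust
// A fixed-capacity ring buffer: no heap, no malloc — the kind of
// structure you'd reach for with, say, 4K of RAM total.
struct RingBuf<const N: usize> {
    buf: [u8; N],
    head: usize,
    len: usize,
}

impl<const N: usize> RingBuf<N> {
    const fn new() -> Self {
        RingBuf { buf: [0; N], head: 0, len: 0 }
    }

    // Push a byte, overwriting the oldest when full — a common
    // choice for something like a UART log buffer.
    fn push(&mut self, b: u8) {
        let tail = (self.head + self.len) % N;
        self.buf[tail] = b;
        if self.len < N {
            self.len += 1;
        } else {
            self.head = (self.head + 1) % N;
        }
    }

    fn pop(&mut self) -> Option<u8> {
        if self.len == 0 {
            return None;
        }
        let b = self.buf[self.head];
        self.head = (self.head + 1) % N;
        self.len -= 1;
        Some(b)
    }
}

fn main() {
    let mut rb: RingBuf<4> = RingBuf::new();
    for b in 1..=6u8 {
        rb.push(b); // 5 and 6 overwrite the oldest entries, 1 and 2
    }
    assert_eq!(rb.pop(), Some(3));
    assert_eq!(rb.pop(), Some(4));
    println!("oldest surviving bytes popped first");
}
```

Everything lives in a `[u8; N]` whose size is fixed at compile time, so there's nothing to allocate or free.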

Speaker 1:

You you know what?

Speaker 6:

You get something that's like a printf but actually, like, decent. You know?

Speaker 1:

It's interesting. And we certainly have, you know, the double-Es at Oxide who have learned Rust at Oxide and have made contributions to Hubris and the other things that we've built. I mean, I dare say, Adam, our colleague Nathaniel — his first Rust was really in Hubris, on the microcontroller. But then, when it came time to do something totally different, where he was collecting part numbers and doing something that was strictly in user land — definitely not no_std, absolutely std — he went to use Rust to do it. So I think there's some interest there.

Speaker 1:

Interesting thought, Matt, for sure.

Speaker 2:

Yeah. I really like that, Matt.

Speaker 1:

So And would

Speaker 6:

it like the

Speaker 1:

Sorry. Go ahead.

Speaker 6:

The other point on that, I guess, is, to go way off into left field on, like, Erlang. One of the things that Armstrong, one of the inventors of Erlang, says is that all they did was acknowledge the physical reality that the speed of light exists, and therefore cache-coherent NUMA is a fiction. And, like, guess what? Erlang just popped out of that. And I feel like Rust is a slightly less opinionated, or a slightly differently opinionated, version of the same vector. Like, if you start with relativity and physics, and, like, no, there is a physical computer core.

Speaker 6:

Like, the the abstractions make more sense just in the sense of acknowledging that the physical world exists.

Speaker 1:

Yeah. No, that's it. That's very true. Alright.

Speaker 1:

Well, Adam, we always knew this was gonna be a bit of a big one, and we've gone long. I'm sure he's making his own predictions. Yeah. That's true. Maybe he's predicting that he's gonna be upset that dinner is late.

Speaker 1:

So, with that

Speaker 2:

But we're on to season 3 of Oxide and Friends. Who would have thought?

Speaker 4:

Yeah.

Speaker 1:

And it's been, I mean, really terrific. My previous audio problems aside, my embarrassing audio problems aside, this has been a lot of fun. And I'm glad you highlighted at the top that you got us set up as a podcast. You made that your New Year's resolution, and that has been huge, I think. Getting that vector out there has been fun for other people to see. My mom has started listening to the back catalog, by the way.

Speaker 3:

Nice.

Speaker 2:

Yeah. My dad too. He listened to the one with Sean Silkhoff and was relieved when we sorted out the audio issues. So, you know, some things don't change.

Speaker 1:

Some things don't change, but it's been a lot of fun. It's been great to have everyone on this new vector. We are glad that Laura's prediction is not yet correct, and she also, I think, believes that her prediction is not yet correct: Discord continues to be a pretty good experience. So, to recap the themes, Adam: we had a lot of generative AI and ChatGPT-related predictions, a lot of AR/VR-related predictions, and a lot of Rust-related predictions, actually.

Speaker 1:

I think those feel like the big themes of— Yeah.

Speaker 2:

And if you didn't get them in, feel free to post them onto the GitHub site, and we'll review them in a year, or in 3 years, or in 6 years, assuming we're still doing this.

Speaker 1:

Yeah. Absolutely. And for those of you who made predictions in the chat, a lot of good predictions in the chat too, let's be sure to get those into the show notes so we can actually capture them. A lot of good stuff, as always.

Speaker 1:

Alright. Alright, Adam. I vow, so I think I'm just gonna, like, I'm not gonna try to use the other setup. I'm gonna stick with the phone here. So I vow to not have crappy audio.

Speaker 2:

That's a good resolution to end on.

Speaker 1:

Yeah. Absolutely. Alright. Thanks, everybody. Happy New Year.
