AI in Computer Science Education

Adam Leventhal:

And Shriram. Shriram, Hello.

Shriram Krishnamurthi:

Hello. How are you? Excellent. How are you doing?

Bryan Cantrill:

Did you get

Adam Leventhal:

Kathi too? Yes. Kathi's here.

Shriram Krishnamurthi:

Alright.

Kathi Fisler:

I'm here.

Adam Leventhal:

Hey, you're here. Oh, Kathi, can you say more? Just because you sound a little off.

Kathi Fisler:

Am I coming across or no? Is it okay?

Adam Leventhal:

What do you have for breakfast?

Kathi Fisler:

I actually don't do breakfast.

Adam Leventhal:

So I sound great.

Bryan Cantrill:

I'm with you. Yeah. You're among friends on that one, Kathi.

Kathi Fisler:

Okay. Guys,

Adam Leventhal:

it's the most important meal.

Shriram Krishnamurthi:

I thought this was gonna be some sort of weird personality test. But anyway, I don't do breakfast.

Bryan Cantrill:

You know, I actually did eat breakfast this morning, but it was unusual for me. I dude, are you always a breakfast eater? No. Okay. So were you asking that question knowing that you're that

Adam Leventhal:

I ask about breakfast because it's a way to get people saying something.

Bryan Cantrill:

No. No. No. It's like the classic... and you're showing that, like, look, look, I'm a sound guy. You complained about the audio in this podcast.

Bryan Cantrill:

Yeah.

Adam Leventhal:

Exactly. We've read the Wikipedia page on

Bryan Cantrill:

You've made it on sound guys? I know the kind of questions that sound guys ask.

Adam Leventhal:

Yeah. That's what a sound guy would say in this situation. Right.

Bryan Cantrill:

Sibilance. Sibilance. Mic check. Mic check. Sibilance.

Bryan Cantrill:

Yeah. Well, Kathi, welcome, fellow non-breakfast eater. It's actually true now. The Eye of Sauron turns to you. Do you eat breakfast?

Bryan Cantrill:

What would you have for breakfast? Uh-oh. No, no, no, no. Not this again. We can try that.

Bryan Cantrill:

That's a green bubble. Why isn't that working? Hold on. Let's try that again. And now it's working.

Bryan Cantrill:

What is going on?

Adam Leventhal:

Okay. Kathi?

Bryan Cantrill:

Can you hear us?

Kathi Fisler:

Hear you just fine.

Bryan Cantrill:

But you couldn't

Shriram Krishnamurthi:

hear us. I can hear you just fine too.

Adam Leventhal:

Wonderful. Okay.

Bryan Cantrill:

You couldn't hear us for a while.

Adam Leventhal:

This is a real professional podcast you're on. Don't Wait, it's a

Bryan Cantrill:

real professional podcast, because we ask you what you had for breakfast. That's how you know it's a professional podcast.

Adam Leventhal:

Right.

Bryan Cantrill:

Shriram, what did you... are you a breakfast eater?

Shriram Krishnamurthi:

No. I don't eat breakfast either.

Bryan Cantrill:

That is what bonds all four of us.

Shriram Krishnamurthi:

We are we are I know. We're on, like, team no breakfast. Yeah.

Bryan Cantrill:

Okay. No breakfast. No. I think, you know, it's about time, honestly, that our team, like, chalks up a W. I feel like team breakfast has really had their day in the sun.

Bryan Cantrill:

And I think you all

Shriram Krishnamurthi:

There was some study I saw recently that said people who don't eat breakfast basically die sooner. So I'm just letting you know, like, science. There's lots of science on this.

Bryan Cantrill:

You know, but I think they have a lot more fun.

Adam Leventhal:

You know what? I just think we shouldn't otherize the breakfast eaters and... No.

Shriram Krishnamurthi:

No. You're all Bay Area types and it's kind of like, you know, not very Bay Area coded to be like, we're gonna die sooner by not eating breakfast. That's all.

Bryan Cantrill:

That's alright. Yeah. Yeah. Thank you. Thank you, Shriram.

Bryan Cantrill:

I bet you were able to correct yourself before we had to correct you.

Shriram Krishnamurthi:

Thank you.

Bryan Cantrill:

Well, welcome, Shriram and Kathi, to this very amateur-hour podcast where we ask you what you had for breakfast. My apologies for the audio. So I am really excited for this topic. I mean, this is turning into, like, an absolute evergreen topic. I feel like, Adam, a while ago we were apologizing for so many AI-related episodes, and I don't feel like we need to apologize anymore.

Bryan Cantrill:

Just feel like it's like this

Adam Leventhal:

Only to some people. Only to mastodons. Only to mastodons. That's right. Do you wanna say what it is, the subject you're excited about?

Bryan Cantrill:

Yeah. No. I do wanna say what it is. I do. I do. Listen.

Bryan Cantrill:

We're only, like, two minutes in, and we're already getting to the subject. So I think everyone just doesn't need to rush me. So we are talking about AI and computer science education in particular. We had Michael Littman on a while back. And Kathi, I understand that you listened to that episode as well with Michael Littman.

Shriram Krishnamurthi:

Oh, yeah.

Bryan Cantrill:

And we love Michael, and talking about AI in higher education writ large, and he is the AI provost. I think... now that I say it, it sounds like

Kathi Fisler:

Associate provost of AI.

Bryan Cantrill:

Excuse me. The AI provost will thank you not to refer to him as the associate provost for AI. I thought he's the assistant to the associate provost. Yeah.

Shriram Krishnamurthi:

He's probably the vice associate provost. And I do I think you should not take these titles seriously because it just goes to people's heads. Even Michael, I'm worried about.

Bryan Cantrill:

Oh, no. I can only imagine. And Michael, of course, is a professor of computer science. We talked a little bit about computer science on that podcast, but we mainly focused on higher ed. Yeah.

Bryan Cantrill:

But Michael is still a professor in the computer science department. And I think he was the third instructor on this course that you two did. Right? Yep. But Right.

Bryan Cantrill:

So today, we're really kind of focusing in on computer science education, which is a subject that is near and dear to all of our hearts, all of our non-breakfast-eating hearts. And, Shriram, do you maybe take us away in terms of the course that you and Kathi designed? Yeah,

Shriram Krishnamurthi:

basically here's what happened, right? So Kathi and I, like everybody else over winter break, we got Claude-pilled. We sort of talked about it and said, look, we cannot imagine teaching computer science the way we have taught it, exactly the way we've taught it before. We felt something absolutely has to change. The analogy I went through in my head was: I've been teaching assembler, and now a compiler comes out, and I have a choice, right? I could be like, wow, this thing's kind of buggy, this thing, we don't quite understand it fully.

Shriram Krishnamurthi:

Maybe we should wait five years for the storm to settle down before we start teaching it. Or we can try to figure out how to incorporate it in some semi-sensible way. So that was the proposition I started with, and Kathi, being the very sensible person, said, like, Do you know what you're doing? And I said, No, of course not. I have no idea how to do any of this.

Shriram Krishnamurthi:

I'm just gonna, like, yeet something out there and see what happens. She's like, Can you please not do that?

Kathi Fisler:

This is how most of our collaboration has gone by the way over the

Shriram Krishnamurthi:

last thirty years. This is a thirty-year pattern. So nothing about this conversation surprised either of us. The slight twist to it was, Kathi was on teaching relief this semester and I'm on sabbatical this semester. But at the same time, we felt something had to happen in the fall, or maybe even nothing had to happen, right?

Shriram Krishnamurthi:

But we didn't even have the data to determine whether something or nothing should happen in the fall. So we really needed to do something this spring to prepare for the fall, when there's a whole new bunch of students on campus. So that was the starting motivation. We literally put... the great thing about Brown is you can, kind of the night before the semester begins, sort of put together a course, and Brown's a little chill with it. I don't actually know if they are, if there's, like, any real provost listening to this.

Shriram Krishnamurthi:

That's of course not what we did, but really that's what we did.

Kathi Fisler:

Oh, remember, I do this stuff professionally for the department. We did it by the book.

Shriram Krishnamurthi:

Okay, there you go.

Bryan Cantrill:

All right, all right. Provosts, you heard it here first. Yeah, any provost that complains, Kathi, we'll inform him or her that you did it by the book.

Shriram Krishnamurthi:

Yeah, do you want to describe a little bit about how we thought about this from there?

Kathi Fisler:

Yeah, yeah, yeah. And I'll just preface this by saying Michael and I had had conversations in the summer, when he was just starting as Associate Provost for AI, that it would be really fun to figure out what to teach sometime. He was busy and I was busy and we said, Yeah, that's fun. We should talk about that. So when Shriram and I were having this conversation in January about how we really should do something, and we should do something sensible, I think the turning point was saying we need to gather some data to understand what our students are going to be capable of.

Kathi Fisler:

Because we really didn't know. You know, we knew what we were doing with LLMs, and we knew that we had a whole lot of experience to build on with all the apps and things that we were trying to build for ourselves, but we didn't know how students were gonna react. We decided

Shriram Krishnamurthi:

I'll throw one thing in there.

Kathi Fisler:

Yeah, go.

Shriram Krishnamurthi:

Sorry. I'll just throw one thing in there, right? Which is, there's this phenomenon in education called the expert blind spot, which is pretty much what it says on the tin, right? We experts do not understand what it is like to not be an expert. And every educator tries their best, and when I say educator, I don't just mean, like, teachers or professors, but even, you know, a lot of the people on this call are probably programmers, professional programmers, who also onboard a new developer or something like that.

Shriram Krishnamurthi:

So we're all in some kind of education business in all of those roles, but we're really bad at simulating what it is like to be in the shoes of a beginner. And part of certainly my experience, and I think Kathi's experience also, was, as we've been, like, vibe coding or whatever, doing stuff over winter break, we could feel times when we had leaned back on our own expertise. I knew there were definitely times when, and I know you folks have talked about this too, both on social media and on the podcast... I can tell you later on, like, a hilarious war story about this, but I don't wanna derail the conversation.

Shriram Krishnamurthi:

But we've all had this time when something happened and we're like, Oh, I think I know where to look, or I think I know what might've happened, I think I know what to do. And the students, by definition, don't have, in our case, forty years of programming experience. Now, what we don't know is, maybe they have other techniques, maybe they have other means of finding out what to do, maybe they have the benefit of not having forty years of experience and all the crud that comes with it. Maybe, to be intellectually honest, that's just a story we're telling ourselves to make ourselves feel like we didn't waste the last forty years. So that was the sense of ignorance from which we were starting.

Shriram Krishnamurthi:

So, to go back to what Kathi was saying. Kathi?

Bryan Cantrill:

And you also have an advantage in that the students you've got now have a lot of experience with LLMs. LLMs have been a significant part of... I mean, this is not... this is

Shriram Krishnamurthi:

You know, some of them, many of them, yeah. I don't know. Kathi, you wanna say anything about that?

Kathi Fisler:

I mean, I would say when we decided to put this course together and we opened the application form, because we decided we had to do this with a limited number of students, because we didn't know what we were doing in terms of the assignments, right? I mean, we literally were throwing this together two days before the semester started, but we also wanted to be able to gather data from the students and be able to read through it and make sense of it. So we said we're gonna limit this to 20 students. So we set out an application form and 80 students filled it out and many of them really knew nothing about LLMs.

Bryan Cantrill:

I mean, like, they have not used an LLM, though? I mean, it just feels like

Kathi Fisler:

I mean, they had used a little ChatGPT, played with it a little bit from a computer science standpoint.

Bryan Cantrill:

I mean, just knowing, like, my own

Shriram Krishnamurthi:

Actually, I think you're repeating a thing that I'd like to push back against, which is, you're basically doing the modern version of the digital native narrative. And I think this is a thing that sidetracked a lot of people for decades and continues to, where they just assume... well, and I know that's not exactly what you're saying. So I'm going to caricature you here a little bit.

Bryan Cantrill:

Oh, Adam finally has some

Adam Leventhal:

company in caricature.

Bryan Cantrill:

He's relieved. Yes,

Shriram Krishnamurthi:

go ahead. You know, that's the only reason I signed up for this podcast.

Adam Leventhal:

Oh, 100%,

Bryan Cantrill:

absolutely. Absolutely.

Shriram Krishnamurthi:

So, you know, this digital native thing is like, well, you know, when iPads came out or whatever, kids grow up with this stuff, they know how everything works, and we don't really need to teach them that stuff, they know more than we adults do anyway. And it turns out, yes, sort of, sort of not. And, you know, people like danah boyd and others have written about this, a lot of people have written quite intelligent observations about this, which is: they know some things, and yes, they've had a lot of experience with some things, but there are other things they don't really understand that well at all. So even in our case, they've all used ChatGPT for sure, almost all of them have. I think, again, there's a handful of students who push back against using LLMs for various reasons, ethical and other reasons. But we're setting that aside

Bryan Cantrill:

saying they do. I mean, that's... Honestly, I'm just looking through the lens of my own kids, so 21, 18, 13, and their history with LLMs. I'm wondering if the kids that say they don't have experience with LLMs are wondering if they're being questioned by the university. Like, no.

Bryan Cantrill:

As far as you know, no. No. I've never used ChatGPT. I don't know. Why?

Bryan Cantrill:

Why are you asking? Who's asking?

Kathi Fisler:

I don't. I think it's did it at

Shriram Krishnamurthi:

I mean up. Yeah.

Kathi Fisler:

One of the things Shriram and I always laugh about is the opposite ends at which we came into computer science ourselves. You know, Shriram did computer science because he discovered it and it just came naturally to him. And I did computer science because I had not really seen it before I got to college in any meaningful way, and it made absolutely no sense to me when I started. But I was just tenacious and couldn't believe that I was losing to a hunk of metal on my desk. And I think it's given us very different perspectives on what it means to teach and learn this stuff.

Bryan Cantrill:

I do like the dramaturgical dyad of human versus machine. We are actually in opposition. We've always been in opposition to these things. This has always been the enemy. And, Kathi, you're like, I'm not gonna let this hunk of metal best me.

Bryan Cantrill:

I'm gonna Right.

Shriram Krishnamurthi:

Right.

Bryan Cantrill:

Okay. So one question about the boundary that you put around the course. Once you have 80 people sign up, you've got 20 slots. It's a computer science course. I know that we've redone the numbering now at our alma mater, and now it's got, like, a VIN for every computer science course, but it seems to be an advanced undergraduate course by its numbering.

Bryan Cantrill:

Is that was that kind of the thinking?

Shriram Krishnamurthi:

Oh, no, no,

Kathi Fisler:

no, no. That's because we did this too late to get it a normal course number.

Bryan Cantrill:

Oh, okay.

Kathi Fisler:

So the only way we could pull off running it was to put it in as an independent study section. So that's why it's actually the special number that refers to independent study courses. But we explicitly decided that what we wanted was students in their first or second year. So they had to have had at least a semester of programming experience

Bryan Cantrill:

that's what I was going to ask. Okay. Yeah.

Kathi Fisler:

They had to have at least...

Shriram Krishnamurthi:

And that was intentional. Yeah.

Kathi Fisler:

It was very intentional, because we didn't want to have to deal with teaching them how to program. We really wanted to think about how far we can push them learning how to use agents for programming.

Shriram Krishnamurthi:

And

Kathi Fisler:

the expectation was

Shriram Krishnamurthi:

But also, we didn't know whether they would run into a situation where they would need to go under the hood. Right? And that was the real issue: if they did, if we ran into a situation where they would need to read this code. One way to think about it is, we were trying to essentially simulate students who were near the end of their first semester, or maybe in their second semester, of a programming class. Right?

Shriram Krishnamurthi:

That's one useful way to think about the setup.

Bryan Cantrill:

Okay. But that's interesting, that your reasoning behind that is: we just don't know how much under the hood they're gonna need to get. And if it turns out they actually do need programming knowledge to just make this stuff work at all, we wouldn't wanna have a bunch of students who basically can't finish, or are really struggling to finish. So you kinda did it almost as a precaution. But it is

Shriram Krishnamurthi:

But this goes back to the blind spot part. Right? We don't know what we don't know. The whole thing is driven by: we don't know what we don't know, and we need to figure out what we don't know. And I think it's also important

Kathi Fisler:

for having a vocabulary, right? Everybody's had some programming experience, so at least when we're talking about what we do and don't know, we've got a common vocabulary to work from.

Bryan Cantrill:

And so, is the objective of the course to figure... it almost sounds like the objective is to kind of figure out what the role is for agentic work in the curriculum. Or, I mean, how did you come up with the syllabus? Or did you vibe code the syllabus?

Kathi Fisler:

Oh, we did a little better than vibe code the syllabus. I think we did write a syllabus.md file at the beginning and then we

Shriram Krishnamurthi:

vibe code. Exactly.

Kathi Fisler:

No, I mean, I think the vision we had for it was, our feeling has been that software engineering basics are going to end up moving earlier into the curriculum. And so we were curious: how much could we push on this?

Bryan Cantrill:

And so when you say the software engineering basics

Shriram Krishnamurthi:

Let me build on Kathi's vibe code comment a little bit, because that's not quite the framing I want to take for this. What we did was very much, we are gonna assemble this. This is literally the plane being assembled mid-flight. That's the sense in which it was vibe. So, we said, like, literally every day, we were not sure what was gonna happen the next day.

Shriram Krishnamurthi:

We'd go to class, we'd do a thing, see what happened, and then decide what happened the next day. So, that's the sense in which we're, like, sort of vibing the whole thing as we're going along. So, I think our main insight was... so I'm gonna step back a little bit here. One of the things that very much frustrates me as an educator is how people overuse metaphors. Metaphors are always nice because they relate the unknown to the known, but they're also bad because they relate the unknown to the known in a way that the person relating understands, but the person trying to learn does not. And they always pick up on the wrong features, and you end up with all these arguments because people are arguing about the metaphor rather than the thing itself.

Shriram Krishnamurthi:

Okay? But at the same time, we wanted to give our students some kind of usable working metaphor for how we were thinking about the whole task. The way I like to think about this, from a student's perspective, is that an agentic programming system is what I'm gonna call a flaky compiler. It's obviously a compiler: it's turning text into code, it's turning specs into code. But it's also obviously flaky, for various reasons: nondeterminism, statistical sampling, ambiguity of input, ambiguity of what we are even talking about in the world, etcetera, etcetera. And so we looked at this as, once you take that framing, and you both might not agree with that framing at all and I'd love to argue about that with you guys.

Shriram Krishnamurthi:

But if you take that framing, then we looked at the course as sort of having two tracks. One part of it is, it's a compiler. Compilers let us do awesome things. What amazing things can we do with this new compiler? But it's flaky and the flaky thing means it's gonna do all kinds of bad things.

Shriram Krishnamurthi:

How do we keep it from doing bad things? And if you take that formulation, then what we're trying to do is to say, look, we've got this flaky compiler, it's kind of like in the middle of our build system and we need to figure out how we're gonna get a reliable program out of it. But we've always had to deal with flakiness. We've had flaky people. We've had flaky coworkers.

Shriram Krishnamurthi:

We are flaky ourselves. Like, six months after we're at a code base, we're like, Oh my God, which idiot put that bug in? Oh, I put that bug in there. So we've all had to deal with flakiness, and in some sense, all of software engineering as a discipline, and I don't mean software engineering as coding, but as a discipline, was all about: how do we control this flakiness? How do we use specifications and requirements and testing and verification and feedback loops?

Shriram Krishnamurthi:

All these things to bring some measure of control into the world so we can all have these flaky human beings build sophisticated and complex systems that we can depend on. So that was the real way we thought about the course. And then everything else was sort of downstream of that.
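The "control the flakiness" framing above can be sketched as a test-gated retry loop: a minimal sketch, not anything from the actual course, in which a hypothetical `flaky_generate` stands in for a nondeterministic code generator and the surrounding test suite, not the generator, is what makes the pipeline dependable.

```python
import random

def flaky_generate(spec, rng):
    """Hypothetical stand-in for a nondeterministic code generator: given a
    spec ("double the input"), it sometimes returns a correct implementation
    and sometimes a plausible-looking buggy one. The spec is ignored here;
    a real system would condition on it."""
    correct = lambda x: x * 2   # what the spec asks for
    buggy = lambda x: x + 2     # looks reasonable, is wrong
    return correct if rng.random() < 0.5 else buggy

def generate_until_passing(spec, tests, max_attempts=20, seed=0):
    """Feedback loop: regenerate until a candidate passes the whole test
    suite, or give up after a fixed attempt budget."""
    rng = random.Random(seed)
    for attempt in range(1, max_attempts + 1):
        candidate = flaky_generate(spec, rng)
        if all(candidate(x) == want for x, want in tests):
            return candidate, attempt
    raise RuntimeError("no passing candidate within the attempt budget")

# The tests are the specification that tames the flaky "compiler".
tests = [(0, 0), (1, 2), (5, 10)]
fn, attempts = generate_until_passing("double the input", tests)
```

The reliability in this sketch lives entirely in the specifications and tests around the generator, which is the point of the software-engineering framing described above.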

Bryan Cantrill:

So really embracing the worst aspects of being a software engineer, in many regards, in terms of that nondeterminism. I mean, you're talking about the flaky compiler, and actually, I remember we had courses with flaky compilers, and there was nothing more disastrous as a TA than having a course with a flaky compiler. Though, I mean, they would often also be deterministically flaky. So this is where, and I don't wanna do exactly what you cautioned us against, Shriram, and kinda debate the metaphor. But okay.

Bryan Cantrill:

So do you... so we're gonna embrace the nondeterminism. We are going to embrace the fact that these things are nondeterministic, and we're going to explore the nondeterminism and kind of its consequences for the creation of software.

Shriram Krishnamurthi:

Yeah, I'm not gonna say embrace, right? We're gonna withstand, we're gonna tolerate. We don't have a choice about it. That's the part we're not given a choice about.

Bryan Cantrill:

Yeah. Okay. And then, so how did you come up with the labs for this course?

Shriram Krishnamurthi:

Yeah. Yeah. That's all the fun, right? Kathi, do you wanna say anything about that?

Kathi Fisler:

I think what we were what we were trying to do was say, what's an assignment that we think the LLM won't do right off the bat?

Adam Leventhal:

Okay.

Kathi Fisler:

But that the students might try to use it. The students might assume, oh yeah, I could just give this to the LLM, so we wanted them to kind of experience the flakiness of the compiler, see what they would do with that, and then come back into class and talk about it and say, hey, you know what? What went right? What went wrong? What might we have done to fix that?

Kathi Fisler:

So we kind of let them fail a little bit and then pulled back and took it apart and said, what should we take away from this for practices? And we just did that over and over again. The projects got a little harder each time, and then we would pull back and introduce something new. So we would be talking about testing, or talking about using types, or talking about using constraint solvers or SQL libraries. As we went along, everything was designed to let them struggle and then show them how to do a lot

Bryan Cantrill:

of it. Could you be concrete about what some of the assignments might look like?

Shriram Krishnamurthi:

Yeah, so, concrete examples. One of the things that coding agents tend to do, as of now, is they will vibe code a thing when something already exists in the world that works better. For example, if you give them some data to analyze, they will gladly hand code up some data analysis rather than saying, if I put this in a database, I'd get so much more scalability and get all the advantages of using an external database. If you give them a task that requires solving constraints, they will generate a crappy, greedy version of an algorithm rather than sending it off to a proper constraint solver. It's like Philip Greenspun's tenth rule over and over and over again in the code that they generate.

Shriram Krishnamurthi:

And so we set up assignments where the students would do a thing. So for example, with data, we gave them some data analysis to do and then we said, okay, now what happens if you really need to scale up the data? We gave you a small part of the data set, here's a much larger part of the same dataset, right? Or here's the full dataset. And suddenly you realize, Oh, the Vibe coded version didn't really work.

Shriram Krishnamurthi:

I need to understand, I need to use this external thing called a database. But also, that's a point of departure. Remember, we're talking to early stage Computer Science students. That's a point of departure into a future Computer Science class where if you wanna understand what the technologies that enabled us to work really well, you go take a database class. Or we gave them another problem where it was a bookstore, for example, with like discounts or something like that.

Shriram Krishnamurthi:

And it generates some janky code, and you're not even sure if it caught all the corner cases; you're not even sure what all the corner cases are. And then there are algorithms for actually doing this kind of thing, like business rules, like the Rete algorithm. Or we gave them finding whether you've met the requirements for graduation. And that's a classic problem that a constraint solver or SAT solver can solve, but the code that Claude generates for them (we were all doing this with Claude Code, by the way) is all, like, bad, greedy algorithm code, when in fact you could send this off to a constraint solver and get it solved much quicker, with, like, a mathematical guarantee about what the solution has. So there are three examples of the kinds of things we were setting them up for.
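The data-analysis example, where a hand-rolled loop gives way to a real database, might look something like this toy sketch. The actual assignment and dataset aren't shown in the conversation, so the cities and readings here are invented, and Python's stdlib `sqlite3` stands in for "an external database":

```python
import sqlite3

# Invented toy dataset: (city, temperature) readings.
readings = [("Providence", 61.0), ("Boston", 59.0),
            ("Providence", 67.0), ("Boston", 63.0)]

# The kind of hand-rolled aggregation a coding agent often emits first.
by_city = {}
for city, temp in readings:
    by_city.setdefault(city, []).append(temp)
hand_rolled = {city: sum(ts) / len(ts) for city, ts in by_city.items()}

# The same question pushed into a database: one declarative query, and the
# database buys indexing and larger-than-memory scaling along the way.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (city TEXT, temp REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)", readings)
via_db = dict(conn.execute("SELECT city, AVG(temp) FROM readings GROUP BY city"))
```

Both compute the same per-city averages on this small input; the pedagogical point is that only the second one survives being handed "the full dataset."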

Shriram Krishnamurthi:

You set them up for failure and then teach them the concept and then say, if you really wanna understand this concept better, here's a future course in the department that'll teach it to you.
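The graduation-requirements example, where greedy generated code fails and constraint solving succeeds, can be seen in miniature on a toy instance. The actual assignment isn't shown, so the courses and requirements below are made up, and a brute-force search stands in for a real constraint or SAT solver:

```python
from itertools import permutations

# Made-up toy instance: each course can count toward several requirements,
# but a course may be used for at most one requirement.
satisfies = {
    "CS A": {"intro", "systems"},
    "CS B": {"intro"},
}
requirements = ["intro", "systems"]

def greedy_assign(satisfies, requirements):
    """Greedy: give each course the first unmet requirement it can satisfy.
    This can paint itself into a corner and wrongly report failure."""
    assignment, unmet = {}, set(requirements)
    for course, reqs in satisfies.items():
        for req in requirements:          # scan requirements in a fixed order
            if req in unmet and req in reqs:
                assignment[course] = req
                unmet.discard(req)
                break
    return assignment if not unmet else None

def solver_style_assign(satisfies, requirements):
    """Brute-force stand-in for a constraint solver: try every assignment of
    distinct courses to requirements until one covers everything."""
    for courses in permutations(satisfies, len(requirements)):
        if all(req in satisfies[c] for c, req in zip(courses, requirements)):
            return dict(zip(courses, requirements))
    return None
```

On this instance the greedy pass spends "CS A" on the intro requirement and then has nothing left for systems, so it reports failure, while the search finds the assignment a solver would: "CS B" covers intro and "CS A" covers systems.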

Bryan Cantrill:

So this is like an amuse-bouche for the rest of the department? This is

Shriram Krishnamurthi:

Yeah. That's what every intro should be. Right?

Bryan Cantrill:

Sure. Did it work? I mean, it's kind of fascinating. Well, okay. Actually, one question is, first of all, did these things perform as poorly as you hoped they would?

Bryan Cantrill:

Because, you know, a new model comes out and it's like, oh, actually, the new model is doing much better on these things, and we kinda lost the pedagogy of this. Like, actually, what I've learned from this assignment is I don't need to take the databases course because this thing did it so well. Or were they reliably nondeterministic and flaky?

Shriram Krishnamurthi:

So here's the great thing, right? If you have one person doing this, they might get lucky. When you have a class of about 20 students, that's no longer the case. So, let me tell you about our first three assignments. I think it was super fun.

Bryan Cantrill:

Yeah, yeah, yeah.

Shriram Krishnamurthi:

The first assignment, the very first assignment, day one, we walk in and say, Okay, all of you are now gonna get Claude to generate... sorry, Tetris. Now, for those of you on the call who didn't go to Brown, you might not realize why we asked them to pick Tetris particularly, but we picked Tetris because there's this course that Andy van Dam has taught for, like, I don't know, sixty years or something at this point. And his final project, the big crowning achievement of that course, is they write Tetris at the end of the first semester. And it's, like, two weeks of massive amounts of slogging, at the end of which they mostly get a working Tetris. And you go to Claude and you say, give me Tetris.

Shriram Krishnamurthi:

And it'll ask you maybe one or two questions, like, which language do you want? You want the basic version, you want the advanced version? You're like, let's burn some CO2, give me the advanced version. And forty-seven seconds later, there's a full working Tetris on your file system. So they go through this experience, they're like, Oh, all right, well, that's interesting. And you can see the students who spent two weeks on this feeling like, Whoa, that was kind of unfortunate.

Shriram Krishnamurthi:

And you can see the students spent two weeks on this feeling like, Woah, that was kind of unfortunate. So that's the first experience. Okay, so then we go into class and we say, All right, here's the second assignment. You're going to do Tetris, but what we want is a slight variant of Tetris where gravity goes up. Now think about that for a moment.

Shriram Krishnamurthi:

Right? Because there is Oh, I

Bryan Cantrill:

don't have to think about it, Shriram. I don't have to think about it, my friend, because I have coded it. For a moment, I thought you were gonna have them do Tetris as an homage to a certain CS 132 project way back in the day.

Adam Leventhal:

Yes, Shriram. You may have come to Brown just in the waning days of this, but there was this game Battletris that Bryan and some classmates coded up in, like, the 1993, 1994 kind of neighborhood?

Bryan Cantrill:

I originally wrote it, actually, in 1993, when I was at home in Denver, playing against my sister. And it had what I thought was a very promising prototype, played over a null modem cable with two laptops, because networking basically hadn't been invented yet. The only copy of that... you know, and I would like to say, by the way, that people rightfully pointed out that you and I have started repeating anecdotes on the podcast, which we are gonna do from time to time. Like, we've lived the lives we've lived. We're gonna repeat some of this stuff.

Bryan Cantrill:

We both try not to do it. I feel confident that this anecdote

Adam Leventhal:

has This is dry powder

Shriram Krishnamurthi:

for sure.

Bryan Cantrill:

I think this is dry powder. So I brought it back to school, and I was very excited based on this promising prototype. And I should stop to explain the premise of the game. Did you play Wesleyan Tetris? No?

Bryan Cantrill:

So Wesleyan Tetris, did you play Wesleyan Tetris, Shriram? Kathi? Wesleyan Tetris? Do you recognize that? No? Okay.

Bryan Cantrill:

Okay, we're gonna go anecdote on anecdote here, because I know I'm going full old-timer here. So, Wesleyan Tetris was a game for the Mac in the early nineties, and it would screw up your game, and it would then make fun of you. And it was a perfect game for college students. Oh my god, there's a Wikipedia page for it.

Bryan Cantrill:

No. That's not Wikipedia. That is some other, that is some down-market Wikipedia link someone was dropping.

Shriram Krishnamurthi:

Focus.

Bryan Cantrill:

But the Wesleyan, so we played a lot of Wesleyan Tetris. And we would do what freshmen, first-years, would do on a Saturday night, which is we would gather around and take turns playing Wesleyan Tetris, making fun of one another. And there was a girl who I think had romantic interest in one of my good friends. And she really wanted us to go out and do something else other than watch one another play Wesleyan Tetris. And she was just honest, like, I just can't believe that you all are just playing video games on a Saturday night.

Bryan Cantrill:

She went on for so long. At one point, I paused the game. I'm like, you know what? At least I'm not watching someone else play video games on a Saturday night. And I would say that she walked out of his life at that moment.

Bryan Cantrill:

So I'm sorry, Scott, if that was your future life not lived. But Wesleyan Tetris, very inspiring. And what we made is a two-player Tetris game where you could accumulate dollars and screw up the other person's game. And that included many features, including all sorts of things like gravity going the wrong way, the board being flipped upside down, pieces spinning nonstop, etcetera, etcetera.

Bryan Cantrill:

So the

Shriram Krishnamurthi:

I knew we would get on topic eventually.

Bryan Cantrill:

Oh, you know, listen. I can't say the weave, but it's the weave.

Adam Leventhal:

Look. Just because I'm not sure we're gonna get back here again for another four years.

Bryan Cantrill:

Yeah. Yeah.

Adam Leventhal:

Sure. I just feel like to close out that anecdote, at one point

Bryan Cantrill:

Yeah. No. You do you Brian

Adam Leventhal:

Bryan was dating a woman.

Bryan Cantrill:

Oh. Oh. Oh, we're gonna do that. Oh, I thought No.

Adam Leventhal:

We'll skip ahead.

Bryan Cantrill:

We're gonna skip ahead.

Adam Leventhal:

Bryan was dating a woman who, you know, was not great at this game, Battletris.

Bryan Cantrill:

Well, you gotta say we've left school now. We're at

Adam Leventhal:

Oh, I'm sorry. Years passed. Years passed. It is now, like, the year 2002 or 2003 or

Bryan Cantrill:

something like that. That's right. And No. It's not 2002. It's like 2001.

Adam Leventhal:

2001, pardon me. And Bryan's dating this woman who is, like, learning how to play Battletris.

Bryan Cantrill:

Because we've resuscitated it.

Adam Leventhal:

That's right. There was kind of a quiet period between Solaris 9 and Solaris 10, and for some reason about a quarter of the Solaris kernel development team was briefly avid Battletris contributors. And in particular,

Shriram Krishnamurthi:

And we wonder why Sun went down.

Bryan Cantrill:

Well, no. Honestly, this is kind of like Back to the Future, ZFS hanging in the balance, because Matt Ahrens was an avid and bad Battletris player.

Adam Leventhal:

Yes.

Bryan Cantrill:

He and Brigid. Spoiler: my now wife, my girlfriend at the time, was stuck on her dissertation, because these two basically had reasons to procrastinate. They're both terrible Battletris players. And they would play these prolonged games that would go nowhere.

Bryan Cantrill:

Like four and a half hour games. Right. Because they couldn't kill one

Adam Leventhal:

another. Like two kittens trying to murder each other. Right. It's just not gonna

Bryan Cantrill:

happen. It should be said, sorry. Yeah. I know you've joined the Battletris podcast already in progress. And when Adam and I would play, it was a full thermonuclear exchange between the great powers.

Bryan Cantrill:

That's right. And Adam, I'm just gonna say it. I'm gonna say it right now. I'm gonna say it in front of everybody. Adam was the better Battletris player.

Bryan Cantrill:

Wow.

Adam Leventhal:

Okay. Good night, everybody. Thank you for joining. Appreciate it.

Bryan Cantrill:

But it was high stress, because Adam and I are both trying to, like, amass weapons to just kill one another.

Adam Leventhal:

So sorry. Oh, so fast forward: Bryan's then girlfriend finally beats Bryan one day. Bryan comes to me and says, hey, Brigid took a game off me at Battletris. And I turned to him and I said, Bryan, you know what this means?

Bryan Cantrill:

I'm like, what does it mean?

Adam Leventhal:

It means this could be the one. And it was.

Bryan Cantrill:

And she's the one. She's the one.

Shriram Krishnamurthi:

That's where this was going.

Bryan Cantrill:

Happily ever after. Anyway, sorry. So tangents on tangents. So I was anyway To close

Shriram Krishnamurthi:

the parenthesis. To close the parenthesis. Yes. So basically, unfortunately, the models are not trained on Bryan's version of Tetris, but on the actual original Tetris. Right.

Shriram Krishnamurthi:

Sadly. Right? Sadly. Right. Which means a statistically trained language model is not attuned to the fine points of code that was written between Solaris 9 and 10.

Shriram Krishnamurthi:

So what that means is that you give out this assignment and students come back and you're like, how did it go? And about half the class raises its hand, like, yeah, it went great. And this is the important thing, right? Like, 50% of the class is not raising its hand.

Shriram Krishnamurthi:

So the smart move here is to realize oh, I kind of got lucky on my coin flip. And so we go around the room and start asking okay, what didn't go right? And as people are talking about what didn't go right, you can see some of the other hands starting to go down as well. Interesting. They're like, oh yeah, maybe I didn't try that.

Shriram Krishnamurthi:

So okay, that's assignment two. So now we're already getting to the point where, like, here's a task that it one-shots. By the way, the fun thing about Claude is you don't even have to say please and thank you. You can just go to it and say Tetris and hit enter, and boom, forty-seven seconds later you have a working Tetris.

Adam Leventhal:

Right? It's true. Just to unpack it a little bit: it was just this one tweak of gravity away. None of the training set having gravity flipped upside down just

Adam Leventhal:

Kind of made it

Shriram Krishnamurthi:

It's enough to throw it off its scent, right? Half the class, roughly half the class, had a working version, but the other half of the class had at least a slightly buggy version or a somewhat buggy version.

Shriram Krishnamurthi:

Okay, so now we give part three, right? So we're still in the first sequence, the first week and a half of the semester. Part three is: okay, we're going to do Tetris one more time, but this time, every time you hit the G key, gravity is going to flip directions. Okay. Now you have to think about that a little bit, right?

Shriram Krishnamurthi:

Because this is now really testing your representation of the board. You can't just mirror things, right? You kind of have to drop all the blocks, and it matters whether you've stored the blocks with block integrity. Both the up-gravity and the down-gravity versions don't really have to store blockness; they can just treat each cell as its own thing. But when you do the flip of gravity, now they have to have block integrity.

Shriram Krishnamurthi:

But also, what you've now done is essentially every function is modal. It's modal in the direction of gravity. And if you fail to get even one of those functions right, you're going to detect this within one or two gravity flips. So this is the thing: from a human perspective, if you had written Tetris, you could write inverse Tetris. And if you had written Tetris and inverse Tetris, you wouldn't have a lot of trouble writing the gravity-flipping version of Tetris.
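To make that concrete, here is a minimal sketch of what block integrity buys you. This is our illustration, not the course's code, and all names in it are invented: each settled cell records which piece it belongs to, so a settle step parameterized by gravity can move whole pieces as units rather than independent cells.

```typescript
// Hedged sketch: a board where each occupied cell records its piece, so the
// settle step can move whole pieces, not independent cells, under either
// gravity direction.

type Gravity = "down" | "up";
type Cell = { pieceId: number } | null;
type Board = Cell[][]; // board[row][col], row 0 at the top

// Slide every piece as a unit as far as gravity allows.
function settle(board: Board, gravity: Gravity): Board {
  const rows = board.length;
  const cols = board[0].length;
  const out: Board = Array.from({ length: rows }, () => Array<Cell>(cols).fill(null));

  // Group occupied cells by piece: this is the "block integrity" that a
  // plain per-cell representation throws away.
  const pieces = new Map<number, Array<[number, number]>>();
  board.forEach((row, r) =>
    row.forEach((cell, c) => {
      if (cell) {
        if (!pieces.has(cell.pieceId)) pieces.set(cell.pieceId, []);
        pieces.get(cell.pieceId)!.push([r, c]);
      }
    })
  );

  // Settle pieces nearest the "floor" first; which edge is the floor
  // depends on gravity, so this function is modal in `gravity`.
  const step = gravity === "down" ? 1 : -1;
  const extremity = (cells: Array<[number, number]>) =>
    gravity === "down"
      ? Math.max(...cells.map(([r]) => r))
      : Math.min(...cells.map(([r]) => r));
  const ordered = Array.from(pieces.entries()).sort((a, b) =>
    gravity === "down"
      ? extremity(b[1]) - extremity(a[1])
      : extremity(a[1]) - extremity(b[1])
  );

  for (const [id, cells] of ordered) {
    // A move of `dist` cells fits if every cell of the piece stays in
    // bounds and lands on an empty cell.
    const fits = (dist: number) =>
      cells.every(([r, c]) => {
        const nr = r + dist * step;
        return nr >= 0 && nr < rows && out[nr][c] === null;
      });
    let dist = 0;
    while (fits(dist + 1)) dist++;
    for (const [r, c] of cells) out[r + dist * step][c] = { pieceId: id };
  }
  return out;
}
```

Every function that touches the board becomes modal in `gravity` like this, which is exactly why a model that has only seen ordinary Tetris tends to get one of those modes subtly wrong.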

Shriram Krishnamurthi:

Already you can see both the power and the pain of working with something like Claude Code. And this was intentionally set up in week one. So it's like, okay, now we know that AI is amazing, and AI can also produce terrible things. Now let's go. That was the setup for the course.

Bryan Cantrill:

I love this, because actually that first implementation of Battletris, Adam, was a mess. Because I was playing against my sister, and she and I were constantly coming up with new weapon ideas. And I was going back into my code, like, okay, I'll do this, I'll do this. And I had to, like, throw the entire thing out and really think about it from first principles, in terms of the design. Actually, I remember the kind of needing everything to be flexible took me to a totally different design.

Bryan Cantrill:

A design that you would not have come to if you didn't need to go do this. Right. And that's really interesting, Shriram. So that's assignment number one, going through that progression.

Kathi Fisler:

Right. Right. Well, other-

Bryan Cantrill:

And so, Kathi, at this point, are the students frustrated? Are they excited? What's the vibe, if you will?

Kathi Fisler:

Well, this was another fun piece of what we were doing: we were actually having them keep journals through the entire course. Because we wanted to capture the points at which they said, oh yeah, Claude can do this and that's awesome, and the points when they said, holy wow, what the heck did it just do? So when you look at what they wrote on the first assignment, they were just stunned by how good Claude was, right when they're just doing regular Tetris. Like, wow, this is amazing.

Kathi Fisler:

It just finished my end-of-course project. But by the time we get to antigravity Tetris, they're realizing there's something here that they can't always trust. And it was really good to get to that pretty early on. Now, what we did immediately after that was we wanted to change the kind of application we gave them to write. So we did more of a data analysis exercise the second time.

Kathi Fisler:

So we gave them something that involved joining a couple of tables about airports and weather delays and trying to do some prediction stuff. And it was partly just to get them working with a different language, because all the Tetrises came out in JavaScript. When you switch

Bryan Cantrill:

I was gonna ask. Yeah.

Kathi Fisler:

Yeah. And when you start doing some data science stuff, now it's more likely to produce Python. And not all the students have had these languages before, especially the students who had only had one semester of programming. By and large, they hadn't seen JavaScript before and only some of them had seen Python. So it's actually interesting that they're having to wrestle in languages that they haven't worked in.

Kathi Fisler:

But this was the one Shriram mentioned earlier, where we wanted them to really think about testing. You know, we said, look, one thing you learned out of Tetris is you gotta think about how to test what you're gonna get, because gosh knows what you're gonna get back. And so we have them set up some testing plans and things they're gonna watch for in this dataset, and we give them a 50-row dataset, and they find some stuff. But then we give them, I think it was like a 10,000-row dataset, and now they don't even know how to test it.

Kathi Fisler:

Right? So they're kind of going back and forth between we have an idea of what kind of skills we need to use, but we're realizing we don't actually know how to use those skills all that well. And this is where the software engineering stuff is coming in. Like, yeah, you've learned by now I'm supposed to say testing, but how do I test something that's a pile of CSVs? And I don't even know what the stats are supposed to look like.

Kathi Fisler:

So those were the kinds of things we started doing with them. And what we were really trying to drive home is: have a design plan, have a testing plan, have an idea of how you're gonna work with the system. Now, these were both small enough code bases that they could kind of do it in one shot. And where we're going after that are programs where they really shouldn't do it in one straight prompt. It's just gonna get it wrong.
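As a sketch of what such a testing plan can look like (the schema, data, and function names here are ours, invented for illustration, not the actual assignment), the idea is to pin down invariants that hold whether the dataset has 50 rows or 10,000, since you don't know the "right" statistics in advance:

```typescript
// Hedged sketch: invariant-style checks for LLM-generated data analysis,
// on a made-up airports-joined-to-delays schema.

type Flight = { airport: string; delayMin: number };
type Airport = { code: string; city: string };

// Inner-join flight records to airport metadata on the airport code.
function joinFlights(flights: Flight[], airports: Airport[]) {
  const byCode = new Map(airports.map((a) => [a.code, a]));
  return flights.flatMap((f) => {
    const a = byCode.get(f.airport);
    return a ? [{ ...f, city: a.city }] : [];
  });
}

// Checks that scale to any dataset size: airport codes are unique on the
// metadata side, so the join must not grow rows; delays must be plausible.
function checkInvariants(
  flights: Flight[],
  joined: Array<Flight & { city: string }>
): void {
  if (joined.length > flights.length) throw new Error("join grew rows: duplicate keys?");
  for (const row of joined) {
    if (!Number.isFinite(row.delayMin)) throw new Error("non-numeric delay");
    if (row.delayMin < -60 || row.delayMin > 24 * 60) throw new Error("implausible delay");
  }
}
```

The point is not these particular thresholds; it is that invariants like "the join must not grow" are checkable at 10,000 rows where eyeballing the output no longer is.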

Bryan Cantrill:

Yeah. Interesting.

Kathi Fisler:

Yeah. So that's kind of how we're thinking about it. Now, in the background, what we have going is a list of all of these software engineering ideas, like reliability and readability and whatnot. And as we develop these new assignments, we're making sure that we're hitting these things. We're doing a lot of code review in class: sit with the people around you.

Kathi Fisler:

Everybody look at everybody else's code, everybody else's output. And I think the other fun thing is they're really getting to see that not everybody got the same thing they did.

Bryan Cantrill:

Yeah, interesting.

Kathi Fisler:

Right? So this is where the nondeterminism, I think, really starts to pop up for them. Like, wait a minute, your system did that? How did it do that?

Kathi Fisler:

So it's getting them to have conversations that I don't think they normally get in an early CS course.

Bryan Cantrill:

This is like a smoke-the-whole-pack kind of view of agentic AI. Like, oh, you want agentic AI? Oh, you're gonna have agentic AI. Yeah.

Bryan Cantrill:

Pretty much, actually. Yeah. By the time you get out of this, you'll be begging me for types and program proving and constraint solvers. Yeah.

Shriram Krishnamurthi:

One of the early things we said was, look, we want you to do everything in TypeScript wherever appropriate, not JavaScript. And we talked a little bit about reviewing the types. I personally default to TypeScript, and I make it actually produce real types, no any types, for example, because I want to review the types. And I have a pretty productive workflow where I first get it to design the types; I read the types; I go back and forth on the correct type design. And one of the lessons I've always tried to teach in my intro class, the slogan we use, is types tell a story. We do this over and over again; by the time we get to the end of the semester, we wanna put it on a T-shirt, right?

Shriram Krishnamurthi:

Types tell a story and different types tell different stories, even for something that is functionally similar or trying to do the same thing. And by changing the types, we can tell a different story. We didn't quite get to do as much of that. Yeah, go ahead.
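As a small illustration of that slogan (our example, with invented names, not one from the course): the same delay data typed two ways tells two different stories, and code, whether human- or LLM-written, tends to follow the story the type commits to.

```typescript
// Hedged sketch: two types for the same readings, two stories.

// Story A: a delay is always a number, so a missing reading has to be
// smuggled in as a sentinel (often 0) and silently pollutes averages.
type ReadingA = { airport: string; delayMin: number };

// Story B: a missing reading is representable, so every consumer is
// forced by the type checker to decide what "missing" means.
type ReadingB = { airport: string; delayMin: number | null };

function meanDelay(readings: ReadingB[]): number | null {
  const known = readings
    .map((r) => r.delayMin)
    .filter((d): d is number => d !== null);
  if (known.length === 0) return null; // no data is not the same as zero delay
  return known.reduce((sum, d) => sum + d, 0) / known.length;
}
```

Reviewing which of these two types the agent chose is a small, meaningful conversation; reviewing every line of the analysis code it generated is not.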

Bryan Cantrill:

Yeah, well, I was just gonna say that in general, we have seen a resurgence of types in the last decade. There was a time when duck-typed languages were in vogue, and people wanted that velocity.

Shriram Krishnamurthi:

like, know, Jamie Zawinski's Law of Software Envelopment, right? Eventually every dynamic language grows to have a static type

Bryan Cantrill:

system.

Shriram Krishnamurthi:

Even if you resist it, even if you hate it, somebody's gonna come along and build one or two or three of them for you because everybody else wants it even if the original creator didn't. So And that's

Bryan Cantrill:

I also think, to talk about smoking the whole pack: I feel like, you know, Adam, you and I were very early effectively JavaScript developers. I think we were of that vintage in 2006, that kind of Ajax vintage, that felt like, I'm the first person with a computer science degree to look at JavaScript.

Bryan Cantrill:

Of course, there were many people in parallel going through that same kind of idea. This is when people were surprised that we were on the raw DOM. Like, this is not exactly running assembly here. I think then people were thinking, oh, maybe we don't need types. And then coming out of that experience being like, no, no, no, actually we really, really need types.

Bryan Cantrill:

TypeScript is obviously a very important development. But I just love the idea of using the LLMs for that part of the labor, iterating on the type definition.

Shriram Krishnamurthi:

Right. You want to find the right-sized... Sorry, Kathi. Were you about to say something?

Kathi Fisler:

No, finish your point and I'll come back.

Shriram Krishnamurthi:

You want to find the right-sized thing within which to have a conversation with the agent, right? There's this other project we've been working on called PIC, which is, like, how do we tell if the regex is right? How do we tell if an LTL formula is right? And the key thing is you want whatever the LLM produces that you review to have two characteristics.

Shriram Krishnamurthi:

It's gotta be a meaningful thing to review, right? And it's also gotta be a moderate amount of work. If you think about classic security alerts, they're all moderate, you just have one thing to answer, but they're not meaningful, because each one is just the thing preventing you from getting the job done. You always say yes and you move on. In contrast, you could review the entire code generated by the LLM, but nobody's actually gonna do that. We're all human.

Shriram Krishnamurthi:

I wouldn't do that; nobody would. So that's meaningful, but it's not moderate. And the real thing is finding a thing on that Pareto boundary that is both meaningful and moderate, and types happen to be one of those things on that boundary. You generate the types; we're gonna have a dialogue about the types. I've had situations, for example, where I'm building some GUI thing, and it generates the types, and I look at it and say, you know what?

Shriram Krishnamurthi:

I really wanna make a distinction between the absence of a thing and an empty string. So I want you to now turn this into an option type, because I want the empty string to be distinguished from "I don't have a thing over here." Go back and redo the types like this. Once you do that, it's also the experience of those of us who have programmed a lot with intelligently typed programming languages that the code tends to follow from the types; it has certain patterns that it follows. So it's just much more likely to be correct anyway.
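A minimal sketch of that revision (field names invented for illustration, not from Shriram's actual GUI): encoding the option explicitly in TypeScript so that "absent" and "empty string" are different values the code is forced to handle.

```typescript
// Hedged sketch: distinguishing "no value" from "empty string" with an
// explicit option type instead of overloading "".

type Option<T> = { present: true; value: T } | { present: false };

type Profile = { nickname: Option<string> };

function displayNickname(p: Profile): string {
  if (!p.nickname.present) return "not provided"; // truly absent
  if (p.nickname.value === "") return "explicitly blank"; // user cleared it
  return p.nickname.value;
}
```

Once the type draws this distinction, any code the agent generates against it has to route both cases somewhere, which is the constraint-buys-correctness point.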

Shriram Krishnamurthi:

And others, like John Raguerre and other people, have written about this, right? You want to constrain the agent as much as possible so that it can take advantage of the freedom that it's otherwise given. Working through a good set of rich types is one of those ways of achieving that constraint. Kathi, you were going to say something?

Kathi Fisler:

Yeah, I mean, I was also going to bring up the pedagogic value of this for students at the point in the curriculum where we're catching them. By forcing them to name the types and talk in terms of the types, we're forcing them to be precise about what the concepts are in the thing they're trying to build. With some of these projects, when we initially throw them at the LLM, it says, sure, here's a JSON object for it. Well, that JSON object doesn't have a name. It's just, yeah, I got back a JSON object, or I got back a dict in Python.

Kathi Fisler:

And so, you know, but what is it? And you would see places where they were trying to figure out, what do I name this thing? And does that name make sense? So we would do things like not have them produce code, but come back and produce a set of types for this. And then we'd put them up all over the whiteboards and say, go around and look at what types people came up with.

Kathi Fisler:

And they realized that they were breaking these problems down in different ways. So I really just like the types because of the precision and the vocabulary that they encourage students to develop.

Shriram Krishnamurthi:

That's also why we call this course Agentic Studio. And I think we would have liked to do more of this than we did, but we were specifically in this studio mindset. When I say studio, I'm really being inspired by the college that's down the hill, right? Rhode Island School of Design. When you go there, you do a crit, right? And what is a crit?

Shriram Krishnamurthi:

A crit is: you're gonna take your work, you're gonna put it up on the board, and then everybody's gonna come around, and somebody's gonna waggle their finger at it and say something, and that's just the mindset you have to work with. In fact, about twelve years ago, I had a PhD student, Joe Politz, who had previously studied with Kathi, actually. Joe was doing really neat work in computer security and then decided he was actually more excited about education. So he did a PhD on peer review. And in fact, one of the first things we did was go find RISD professors, sit down, and say, tell us how you do crits, because we need to learn from you.

Shriram Krishnamurthi:

There's so much there in that art school crit stuff that we don't really take advantage of; we haven't really exploited it. So what we would do is we'd literally have students go to, like, four different boards in the room and put up their stuff, and then we'd walk around and talk about it. It's not exactly a crit, but it's a pretty close simulation of one.

Adam Leventhal:

And Shriram and Kathi, would they also talk about the prompts that got them there, or was that just too messy, and the types were a better manifestation of what came out the other side?

Shriram Krishnamurthi:

Yeah, we didn't really talk about the prompts.

Kathi Fisler:

We didn't talk too much about the prompts. I think what we talked more about was the granularity at which you would prompt Claude. As we got into the harder assignments, they started to realize that things went much more smoothly if they decomposed the problem a little bit first.

Bryan Cantrill:

Yeah. Yeah.

Kathi Fisler:

And so that was the extent to which we ended up talking about prompts.

Bryan Cantrill:

So, actually, how did you down-select from the 80 to the 20? What was your rubric for that? For the students?

Kathi Fisler:

So, first of all, we wanted people who were in our target experience range. But the other thing we were really looking for: we pitched it to the students as, we need to learn how this is going to play out when we try to bring it, say, into a novice CS course. So when we looked at the answers students gave, we wanted students who were actually interested in the educational side of it, and not just, oh, I need to learn agentic coding. And we were really afraid that if they only saw it as an agentic programming course, they would be disappointed, because we weren't teaching them power use of agents. We weren't talking about skills and things more complicated than, let's figure out what you do and don't know about just doing the software development of something.

Kathi Fisler:

And we look

Shriram Krishnamurthi:

In fact, we were very expressly, because we're also on Twitter, right, seeing the insane number of things people are posting. And soon after the semester began is when OpenClaw took off. So we were very expressly: we are not doing that.

Shriram Krishnamurthi:

We are going to be very calm and sedate about the whole thing. We are not going to, like, burn through tokens. So this was not a token-maxing class. And so we were selecting away from that kind of student.

Bryan Cantrill:

I'm sure this is on students' minds in terms of what is the future of the domain, and obviously you've got a lot of people shooting their mouths off saying kind of ridiculous things. How are the students processing all of that? And was that a subject you all were discussing, in terms of, like, what is the role of this in software engineering?

Kathi Fisler:

Oh yeah, we were talking about it a lot with them. And we waited until we got a bit into the course, so they had enough experience with LLMs.

Bryan Cantrill:

Yeah. To be informed.

Kathi Fisler:

To have an informed conversation about it. But we would bring up some articles; we would post out some things to them. And what was interesting is, about halfway, two-thirds of the way through the course, we started seeing students say more things like, I don't wanna use this as much. It's taking up too much of my thought process.

Kathi Fisler:

I don't have enough understanding of what I'm doing.

Bryan Cantrill:

I can't breathe in this smoky closet. I don't want to look at another cigarette. It's like, all right, it works.

Shriram Krishnamurthi:

You know, we've seen this before, too. Now, this is not the same thing at all, but two, three years ago, for our Logic for Systems class, where they learn about formal methods, we added, at that time, GPT-3.5 support. Actually, here, okay, Bryan and Adam, here's what we did. We went out and bought credits on OpenAI. We made an estimate of how much our students would spend, roughly how many interactions we expected per assignment, made some cost estimates, and then doubled the cost estimate.

Shriram Krishnamurthi:

The estimate was $1,250, so we doubled it to $2,500, and we went and put in $2,500 worth of OpenAI credits. You know, it was the first time.

Bryan Cantrill:

Quarters into the machine. You put $2,500 of quarters in the machine, and it's gone after

Shriram Krishnamurthi:

You know where your alumni donation went. And so now the question for you is: $2,500, right? That's how much we put in. How much credit

Bryan Cantrill:

Forty eight hours.

Shriram Krishnamurthi:

did students actually use? How many of those $2,500 did they actually use?

Bryan Cantrill:

Oh, well, I mean, I was gonna bet the dollar amount. Yeah. I was gonna say they exhausted them in forty-eight hours.

Adam Leventhal:

Okay. That's right. Price Is Right rules: I'm gonna go $250.

Shriram Krishnamurthi:

$250? And Bryan, you're saying they exhausted it all.

Shriram Krishnamurthi:

Pick some numbers in the chat. I'm gonna wait for the chat people.

Bryan Cantrill:

Wait. Wait. One k in the chat. Yeah.

Bryan Cantrill:

We need, like, a livestream video for this, don't we?

Shriram Krishnamurthi:

927. I like it. Yeah. Okay. 2500.

Shriram Krishnamurthi:

Okay. I'm gonna wait another three seconds. Okay. Okay. 2000.

Shriram Krishnamurthi:

2000. Okay. 5000. Okay. Yeah.

Shriram Krishnamurthi:

Well, Bryan, yeah. Bryan doesn't contribute that much to Brown, I don't think. Okay. Correct answer: $1.16.

Bryan Cantrill:

No way. Woah. Holy smokes.

Shriram Krishnamurthi:

$1.16. Now, there are lots of potential confounds here, lots of potential confounds. And again, this was several years ago; we wrote a paper about it, and I'll post a little blog post. You can get the gist of it from the blog post, but I've also given away the punchline. The key thing here is that students were like, look, I'm taking this course to learn a thing, and if I delegate everything to GPT, I'm not gonna learn the thing that I came here to learn. Now, there's self-selection, and in that case we told them, you can use it freely, there's not gonna be any penalty, we're literally telling you you can use it, going back to the comment you made at the beginning of the chat, Bryan. So, all of that. I think there's this kind of social media meme of, like, oh, kids these days, blah, blah, blah.

Shriram Krishnamurthi:

I think it's more complicated than that.

Bryan Cantrill:

I agree with

Shriram Krishnamurthi:

you. I

Adam Leventhal:

definitely agree with you.

Shriram Krishnamurthi:

Oh sure, they're certainly feeling conflicted. Now, some of it may very well even be: look, this is the damn thing that's going to take away my job, and therefore, if I don't use it, maybe none of us use it. I don't know. I think that's probably one of the phenomena, but there's a very complicated collection of phenomena running through students' minds, with the result that they're actually approaching these things in a much more ambiguous way than the discourse would make you think.

Shriram Krishnamurthi:

At least at Brown, I can't claim for other places.

Kathi Fisler:

And in particular, the number of them that said, I feel my own attention slipping. You know, it's interesting. They felt they were falling into the trap of just trying to get stuff done quickly, and they weren't comfortable with it.

Adam Leventhal:

This is really heartwarming. There was a call, Shriram and Kathi, I'm not sure if you were on it, with some Brown CS alumni and some professors talking about use of LLMs and so forth. And one of the constant fears I heard from some of the professors was around people cheating.

Shriram Krishnamurthi:

Yeah.

Adam Leventhal:

And obviously, like, I know that's a real concern, and I'm sure that it happened; I know it happens. But my feeling was, who they're cheating, ultimately, is themselves. I think there's some professors who are really worried about the impact of that socially. But it's great to hear that the overwhelming feeling was, like, I wanna learn this stuff, and I have this other thing to help me do it.

Adam Leventhal:

So on this point, I've got a special guest on the show. Surprise guest. Will, are you there?

Bryan Cantrill:

Is Will there? Will,

Adam Leventhal:

hi. This is my son, Will.

Shriram Krishnamurthi:

Oh, hello. Is this the baseball player?

Bryan Cantrill:

No. No. No. No. No.

Bryan Cantrill:

No. Well, we can hear his perspective on this later. Let's just say it's a lot less interesting.

Adam Leventhal:

Yes. My other son is also writing some code using Claude, but we'll talk about that later. But no, this is Will. Will is a sophomore at Emory University in computer science

Bryan Cantrill:

and math.

Shriram Krishnamurthi:

Awesome.

Adam Leventhal:

Awesome. Will, what do you see in terms of your peers using LLMs, and any thoughts on what you've been hearing about this course?

Will:

Yeah. I think it's become pretty normalized among my peers, at least in the CS classes. I'm not in introductory anymore, so I can't say what's going on there. But especially in, like, data structures and algorithms, I've noticed the teachers have started to make some adaptations to the classes. My professor right now is, like, very open to using it.

Bryan Cantrill:

Yeah. Interesting.

Will:

But he's kinda structured the course material to, like, have us do actual thinking. So we do have, like, programming segments, which is still nice. But then we have presentations afterwards where we, like, run the algorithms on certain datasets. And then we have to kinda guess what it is in the dataset that makes the algorithm run that specific way. And that's kind of using the material we gained from the class previously.

Will:

And of course, there's also tests and stuff. It's DSA, so they can just make us, like, chug away at the algorithms on tests.

Bryan Cantrill:

Will, are you seeing, because I think this is really interesting, Shriram and Kathi, what you're saying about the kind of disposition that students have towards it. That's what I see in my own 18-year-old and his use of LLMs, which, as a study aid, has been invaluable for him. It's almost all tests for him in terms of evaluation. What is students' disposition towards LLMs with respect to, like, I want this thing to do the work for me, versus I want this to help my own education?

Kathi Fisler:

We have the spectrum, as you might expect. We've got everything from students who take the handout, stick it in the LLM, turn in the solution, and then you call them in and they say, oh yeah, I did that. I didn't think you'd catch me. We have students who are certainly using it as a replacement for office hours. We're seeing many fewer students at the department level.

Kathi Fisler:

This isn't in our course. This is as I'm talking to colleagues. Many fewer students coming to office hours and preferring the ease and the short lines of just asking an agent for it. And we have students who don't want to be asked. They don't want us to ask them to use them because they're afraid of what it's going to do to their learning.

Kathi Fisler:

You have all of these dispositions in the room at the same time in courses now. And so it's pretty challenging to figure out how to design for all of that.

Bryan Cantrill:

Yeah, interesting. And Will, are you seeing that same spectrum among your peers?

Will:

Yeah, definitely. And also, it kinda changes from person to person as well over the course of the semester. Like, I've had one person I was working on a project with, and they were pretty adamant about, like, just using themselves to write the code. So we

Bryan Cantrill:

work with

Will:

the projects. Then on the most recent project, he just turned something in, and I looked at his code, because, like, we're writing the code together, and I was

Shriram Krishnamurthi:

like, That's

Adam Leventhal:

all slop. This guy's tested.

Bryan Cantrill:

This guy, this guy used to be like an Amish furniture maker and now is just industrialized.

Adam Leventhal:

Well, I can imagine also, as push comes to shove, as the semester's ending and work is piling up, I'm sure the temptation increases to, like, hand over the reins a

Shriram Krishnamurthi:

little bit.

Bryan Cantrill:

Well, alright. So, Kathi

Kathi Fisler:

Not only that. It's also, with the job market doing what it's doing, students have a heightened fear that they have to look really good on paper or they're not going to get the jobs. Right?

Bryan Cantrill:

I know.

Kathi Fisler:

So there's the whole concern about grades. And now if you know that some of your classmates are using LLMs to get the assignments done and you're doing them on your own, well, if that course causes your grade to be lower, well, then what are you doing for your career? I mean, that that is another factor that we do hear from students.

Shriram Krishnamurthi:

Were you contemplating arms raised. Yeah.

Bryan Cantrill:

Yeah. Was this mandatory S/NC, or were you contemplating that?

Kathi Fisler:

We didn't make it.

Shriram Krishnamurthi:

Sorry, for reference, for people who don't know what that means: S/NC means like a pass/fail course. You can make a course mandatory pass/fail. Yeah.

Adam Leventhal:

Yeah. If you fail, it doesn't appear in your transcript. It's a Brown special.

Bryan Cantrill:

Well, your first two NCs don't appear on your transcript. As a TA, you will have someone who took a course for a grade, who's going to get a C, and they are begging you to fail them in the course because the NC won't appear. But I actually think

Shriram Krishnamurthi:

Basically, imagine you design a curriculum in 1968, and pretty much that's what you end up with.

Bryan Cantrill:

All hail Ira Magaziner, man. I do think that people don't necessarily appreciate this about Brown, but Brown was a real revolutionary in terms of what undergraduate education should look like. And in particular, what was called the New Curriculum. It's not an Ivy League thing. This is a Brown thing. In fact, actually

Shriram Krishnamurthi:

It's very Brown.

Bryan Cantrill:

Yeah. Oh, it's definitely a Brown thing. Brown has endured, trust me, an unbounded amount of abuse from everybody over this. It's, oh, you went to Brown, like, you would take all your classes pass/fail or whatever.

Bryan Cantrill:

I mean, there was a lot of that. But I think Brown's big belief was that an 18-year-old should be taking responsibility for their own education, something that I very strongly agree with, obviously. And so they did a bunch of things: this is where they introduced the idea that you could take any course pass/fail, that two no-credits would not appear on the transcript. All that dates back to the New Curriculum. And so what were you doing in terms of actual assessment in this?

Bryan Cantrill:

Because you would love to, like, not have a grade in this course, I would imagine. You would love students to really focus on their own education in this course rather than the grade. But meanwhile, here on planet Earth, you actually have to do a grade. Well, did

Shriram Krishnamurthi:

say that? One more comment before Kathi answers that question, for people who don't know this about Brown: you can choose to take any course you want pass/fail. So what Bryan was originally asking is, did we make the course compulsory pass/fail? But you can also choose as a student to say, the course is not compulsory pass/fail, but I'm choosing to take it pass/fail.

Bryan Cantrill:

Yes. A great strength,

Adam Leventhal:

I think.

Bryan Cantrill:

But and and what did you do?

Kathi Fisler:

So, yeah. Okay, I was the one designing a little more of the assessment side of it, I think. I was more concerned with students getting feedback. I thought that was actually the point of this.

Shriram Krishnamurthi:

100%, yeah,

Bryan Cantrill:

I agree with that.

Kathi Fisler:

We did a little bit of grading, not as much as we had intended to, just because writing some of the infrastructure code and everything took more time than I thought it was going to. We did some grading of peer reviews, for example. So they did some reviewing of each other's work, and then they had to turn in final versions of, say, a testing plan, and we graded the testing plan. We also had them get a lot of feedback from peers in these kind of crit, studio classes, where we would put things up and have everybody see what everybody else was doing. So we were really doing a lot of that style, seeing the range of what other people were trying, as a way to give them feedback.

Kathi Fisler:

And then on the final course project, everybody had to come in and do a twenty minute live code review with us.

Bryan Cantrill:

Yeah, interesting.

Shriram Krishnamurthi:

Also, we were interacting with these students every single day in class, right? So we knew how they were doing. One thing Kathi mentioned earlier was we were getting people to write these reflections. Something we were really, really clear about, and I hope and think they internalized, is: the code is going to be generated by Claude, and that's perfectly fine. But we want every word of the reflection to be written by hand by them.

Shriram Krishnamurthi:

Like this is true artisanal reflections. We're like, look, we can ask Claude to generate reflections. We don't need that. We need to know what you were thinking, what was going through your mind at that moment. That's what we're really here for.

Shriram Krishnamurthi:

And so even just by eyeballing, you can tell that people are making a good faith effort. There's not like a right or wrong answer, right? What they experienced is what they experienced, but that was like a big part of our grading is are people actually doing their reflections? Are they taking it seriously? Are they turning them in?

Shriram Krishnamurthi:

All of that, that was very much central to the grading process.

Bryan Cantrill:

Yeah, and I think a course like this is not designed to be a weed-out course. It's not designed to be arduous; there are gonna be more arduous courses in the department. Something like this is not gonna

Shriram Krishnamurthi:

be Well, I mean, I'm gonna go even further, Bryan. This is not a course, right? This is a research project masquerading as a course.

Shriram Krishnamurthi:

To be really clear, in case anyone's concerned

Bryan Cantrill:

about this,

Shriram Krishnamurthi:

we were really explicit with the students that this is what's going on, right? We are trying to get into your brains. We can't do this. We're not gonna ask an LLM to pretend to be you. We want you to be you.

Shriram Krishnamurthi:

We need to get into your brains. So when we teach a regular course, we might very well use a very similar sequence of assignments and tasks, but it will probably be different in terms of its expectations and grading, and that could have 80 or 200 students in it. And then we're not going to be able to get by with this "I know what every student is doing in class every day" thing. But this was a one-off where grading, like, completely by vibe is perfectly reasonable. But how to make this work for a regular class?

Shriram Krishnamurthi:

That's a different question.

Bryan Cantrill:

Well, I think also, I mean, you're welcoming students onto the frontier of, like, we're gonna all figure this out together, what this looks like. It's, I mean, it's what the seventies were like, you know, in terms of,

Shriram Krishnamurthi:

like This is also what works well at Brown. Right? This is very much the student ethos as well. Yeah. Yeah.

Shriram Krishnamurthi:

Yeah. Yes. So then, what

Bryan Cantrill:

were the results? What were the reflections like? What were some of the surprises? Actually, before we get to that, Kathi, you said something that I wanna get back to. You said the infrastructure code was more work than you anticipated.

Shriram Krishnamurthi:

Tell me

Bryan Cantrill:

more about that. Because, I mean, you've got Claude doing infrastructure code. Right? Like, is

Shriram Krishnamurthi:

it interacting? Do we not vibe code it? Do we not know how to vibe code?

Bryan Cantrill:

Is that what you're asking, Bryan? That's right. Yeah.

Shriram Krishnamurthi:

Yeah. Yeah. I think that's what you're asking.

Bryan Cantrill:

Did did Claude delete the database despite the instructions? Like, do not delete the database. Wait. What happened?

Kathi Fisler:

So we had this moment. We decided that they were itching to do something real. So we told them that what they had to do was write a requirements checker for the Brown computer science requirements. And this is kind of a fun activity, because we actually have multiple versions of the requirements, because we've been in the process of changing them in recent years. They were given the web page. We said, here's the web page and here's the handbook with all the exceptions to the requirements, and you've got to make a program that determines whether someone should pass or not.

Kathi Fisler:

You're gonna turn in a set of tests, and we're gonna run everybody's tests against everybody else's implementations. This is something Shriram's done in his classes for a while. And we said, if you find bugs in other people's implementations, then you'll get extra points. So we had this grand vision of how we were gonna grade this. And when you got into the nitty-gritty of actually trying to assign points for this, you start to realize that, well, this student read this part of the exceptions one way, and that student read the exceptions another way.

Kathi Fisler:

And so it was the human design problem of what the heck constituted right and wrong in some of these assignments that made it hard to automate.
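The all-pairs exercise Kathi describes, running every student's test suite against every other student's implementation and crediting tests that expose bugs, can be sketched in a few lines. This is a minimal illustration with hypothetical names and a toy "requirement", not the actual course infrastructure:

```python
# Sketch of an all-pairs grading harness (all names hypothetical): each
# student submits an implementation (a predicate over a transcript) and a
# test suite (inputs paired with the expected verdict). We run every suite
# against every implementation; a test "finds a bug" when an implementation
# disagrees with the suite's expected verdict.

def run_all_pairs(implementations, test_suites):
    """Return {(suite_author, impl_author): [indices of failing tests]}."""
    failures = {}
    for suite_author, suite in test_suites.items():
        for impl_author, impl in implementations.items():
            failed = [i for i, (transcript, expected) in enumerate(suite)
                      if impl(transcript) != expected]
            failures[(suite_author, impl_author)] = failed
    return failures

# Toy "requirements": passing needs at least 2 courses, one of them "intro".
def impl_correct(transcript):
    return len(transcript) >= 2 and "intro" in transcript

def impl_buggy(transcript):          # forgets the intro requirement
    return len(transcript) >= 2

implementations = {"alice": impl_correct, "bob": impl_buggy}
test_suites = {
    "alice": [({"intro", "algo"}, True), ({"algo", "sys"}, False)],
    "bob":   [({"intro", "algo"}, True)],
}

results = run_all_pairs(implementations, test_suites)
# Alice's second test exposes Bob's bug; Bob's suite exposes nothing.
print(results[("alice", "bob")])   # [1]
print(results[("bob", "alice")])   # []
```

The hard part, as the conversation notes, is not this loop but deciding whose reading of an ambiguous requirement counts as the expected verdict in the first place.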

Adam Leventhal:

Interesting. I love that you have these, like, Talmudic scholars of the requirements for the CS department that are completely untracked.

Shriram Krishnamurthi:

The other funny part of it is

Kathi Fisler:

I'm currently the director of undergrad studies for the department. So I know like six versions of these requirements in my sleep. So I was their ground truth expert. So the idea was that, you know, I would be the one to say, nope, that actually matches the requirements and that doesn't. And they were finding bugs in the handouts.

Bryan Cantrill:

I was wondering about that. Yeah. Right. Exactly. Or or bugs in the requirements.

Bryan Cantrill:

Like, oh my god. We've Right. It's

Adam Leventhal:

a cycle.

Bryan Cantrill:

That's right. Yeah. Look, Adam, we're not telling you this, but, you're like, you don't actually have a valid CS degree.

Kathi Fisler:

Right. Right. Right. Right.

Bryan Cantrill:

That's right. And then okay. So that was more work than you anticipated. What was the it it's has the course wrapped up? I mean, are we where are we in the semester right now?

Kathi Fisler:

We we have wrapped up.

Bryan Cantrill:

You wrapped up. And how was it? What what did the students think? What What were the findings? What were some of the things that surprised you?

Kathi Fisler:

I was surprised at how quickly and how deeply some of them have really gotten nervous about what working with Claude does to their code quality. Not

Bryan Cantrill:

only that

Kathi Fisler:

was a bigger group. In the last class, we turned it over to this question of what we should now do with intro CS, in light of this. And one of the students said flat out: 18-year-olds cannot be trusted with what these can do.

Adam Leventhal:

How old was this person?

Bryan Cantrill:

They're, like, 19.

Shriram Krishnamurthi:

This is like an axe to the heart of Brown University, basically.

Bryan Cantrill:

Oh, yeah. Wild. Okay.

Kathi Fisler:

We can't risk giving these tools in intro. They don't have the responsibility yet to use them.

Bryan Cantrill:

Wait. I mean, it's so charming to hear. Unfortunately, it's a little hard to contain it at this point, isn't it? I'm certainly sympathetic to that view. Okay.

Bryan Cantrill:

So yeah. Interesting. And was that a majority position? A plurality position? Was that a

Kathi Fisler:

common position? Many people nodding.

Bryan Cantrill:

Yeah. Right.

Kathi Fisler:

I would say that there were a good number of people nodding and not sure how we would put the genie back in the bottle, as it were.

Bryan Cantrill:

But maybe you figured it out, though. Okay, so you have a lot of people nodding, and your vision for this course is that the, for lack of a better word, smoke-the-whole-pack agentic course happens relatively early on. Maybe that is your answer. It's like: no, to the contrary, we are going to force-feed this to 18-year-olds like a gavage goose.

Adam Leventhal:

I thought you're gonna make some sort of LLM foie gras.

Bryan Cantrill:

Yeah. Yeah. You knew I was gonna say foie gras. Exactly. That's right.

Kathi Fisler:

As you can tell, that's what I'm doing next semester. So next semester, I'm offering a pilot class for students who are actual novices. They don't have programming experience. It's not going to be all LLM programming, but it's going to try to do this balance again: we'll work some with LLMs, and then we'll say, okay, what did we not know how to do? Let's go back.

Kathi Fisler:

Let's learn those techniques and try to see how we balance this out. Because we have to teach students how to use it. I mean, a number of students talked about how valuable this was, how much they learned about design and testing, and just how to question work in ways that they hadn't experienced before. So the majority of the students were really excited about this.

Bryan Cantrill:

Yeah, that's really interesting.

Shriram Krishnamurthi:

Also, I'm excited about what it lets me do, right? I mean, I'm also rethinking my entire upper-level programming languages class, but now we can talk about things that we had to wait till junior year for. Just think about that Tetris thing, right? Okay, it takes two weeks to write the first Tetris, it's going to take maybe another half a week to write the anti-gravity Tetris, and probably another week to write the dual-gravity Tetris. There's almost a month of your semester gone, one third of your semester, and there is simply not enough learning-objective value in those three weeks of work to justify doing all three assignments.

Shriram Krishnamurthi:

But if I can do the whole thing in, like, a week, and I can illustrate it... Part of what happened there was that it was also a point of departure to say: okay, so when you said Tetris, Claude just one-shotted it. Great. But when Claude doesn't work out of the box, how are we going to get it to produce what we want? Well, we're going to have to test the program. We're going to have to give the agent a harness, a testing harness.

Shriram Krishnamurthi:

That means we have to write test cases. Well, how on earth do you test a thing that is Tetris? And, for better or worse, the course that teaches them Tetris doesn't teach them how to test Tetris, other than sitting there manually poking at buttons. So we're like, okay, I need to teach you a concept called model-view-controller. And you're going to write tests against the model, you're going to simulate the controller, blah, blah, blah.
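The model-view-controller idea described here, keeping the game rules in a pure model so tests can simulate controller events instead of poking at buttons, can be sketched very briefly. This is a hypothetical, deliberately tiny stand-in for a Tetris model, not anything from the course:

```python
# Minimal sketch of the MVC split for testability (hypothetical design):
# the game rules live in a pure model with no UI, so tests drive it by
# feeding it the events a controller would produce from key presses.

class TetrisModel:
    """Tiny stand-in for a Tetris model: one piece falling in a grid."""
    def __init__(self, width=10, height=20):
        self.width, self.height = width, height
        self.col, self.row = width // 2, 0   # piece starts top-center

    def handle(self, event):
        # A real controller would translate key presses into these events.
        if event == "left" and self.col > 0:
            self.col -= 1
        elif event == "right" and self.col < self.width - 1:
            self.col += 1
        elif event == "tick" and self.row < self.height - 1:
            self.row += 1

# Tests simulate the controller: feed events, assert on model state.
m = TetrisModel()
for event in ["left", "left", "tick", "tick"]:
    m.handle(event)
assert (m.col, m.row) == (3, 2)   # moved left twice, fell two rows

m2 = TetrisModel(width=4)
for _ in range(10):
    m2.handle("left")             # movement clamps at the wall
assert m2.col == 0
```

The view renders the model's state and the controller produces events; neither appears in the tests, which is exactly what makes the agent's testing harness possible.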

Shriram Krishnamurthi:

So all of these learning objectives were only possible because we could one-shot Tetris and then do, like, three versions of Tetris in a week, right? And then set up for something more, and by the end of it they're not all exhausted, as they would be if they had to do three weeks of Tetris. So I don't just want to think of this as, oh, the genie's out of the bottle, we can't put it back in. I mean, all those things are true, so simply from a pragmatic point of view, of course we're gonna have to do this. But, you know, for a while, people in CS ed, if you went to a CS ed conference, there was this immense, and there probably still is, an immense amount of hand-wringing about, like, oh my God, what do we do? We have to make our assignments harder so that the LLM can't just one-shot it, can't just generate it.

Shriram Krishnamurthi:

And somebody was pointing out: I mean, you can do this, you can just go to the latest benchmark and find whatever programs aren't being done. But this is a moving boundary, right? At some point, what's your intro assignment gonna be? Assignment one: solve the halting problem? We know that's a thing it can't do, but what the hell kind of use is that as an assignment?

Shriram Krishnamurthi:

There's not any point. It's the same thing in math, right? These things seem to credibly be knocking off, like, IMO problems. So what's homework number one in your intro math class gonna be? You know, International Math Olympiad problems, because that's what the LLM can't do?

Shriram Krishnamurthi:

That's insane. That's a completely absurd way to think about any of this. This is completely batshit nonsense. Instead, I think we have to figure out how to put them in a position of understanding it's not magical, which is what we did initially, and then take advantage of it so that we can get to much more interesting things. One thing that happened around mid-semester was they were building all these UIs, and we were starting to build systems that were going to be multi-user systems, and I did a whole lecture in an hour and twenty minutes.

Shriram Krishnamurthi:

I distilled what used to be my graduate-level course on usable security down to an hour-and-twenty-minute presentation on usable security. And the students just loved it. They're like, holy crap, that's amazing. We had not thought about any of these things. We didn't know there was a whole discipline here.

Shriram Krishnamurthi:

Again, there's a course you can go take on it. But imagine talking about usable security issues in an intro class. You previously could not imagine doing it; like, what's there to talk about? But now we can talk about it. So I want us to think: how can we embrace this? And yes, there are lots of things I'm sweeping under the rug, but how can we embrace this so that we can rethink intro computing all over again?

Adam Leventhal:

Yeah. That's true. I mean, in particular, I was thinking about this transition you go through from learning to program to programming to learn. And I remember in particular, in some of the graphics classes, the database classes, the operating systems classes, you're not learning to program at that point. The programming is the vessel, the means by which you're learning these higher-order concepts. And you still want the learning-to-program part.

Adam Leventhal:

You still want the learning-to-use-LLMs part, but how can those together reinforce the learning that previously was only accessible through this kind of programming-to-learn model?

Kathi Fisler:

Well, on top of that, how do we bring the students along with that? Because as we are talking, not just to the students in our class but to students in the department more generally, we have to communicate to the students an understanding of what learning means now. We have students who very firmly believe that they learn by writing every line of code for themselves. And we're helping them envision that, well, there are other things that we could be learning if we had the power of these systems to do some of the low-level work, and you get to work more with the concepts.

Kathi Fisler:

So there's a whole parallel conversation that we have to be having to help our students understand what learning even means.

Bryan Cantrill:

Yeah. I'm also very sympathetic to that point of view, though. I mean, because you don't wanna cut that out completely. The implementation No.

Shriram Krishnamurthi:

Definitely

Bryan Cantrill:

does really matter. And I think one of the real strengths of Brown, at least when I was there, was that it was very lab-intensive at a time when not every computer science curriculum was. It's much more common now, but there was an era.

Shriram Krishnamurthi:

That's right. There were departments that were like pure theory places. Like not our problem. Code is not our problem. I'm not gonna name any places, but I think Adam and

Bryan Cantrill:

I think it might include your alma mater, if I'm recalling correctly, Shriram.

Shriram Krishnamurthi:

All sorts of places.

Bryan Cantrill:

And I think that, I've always felt that like you need that lab component in addition to the classroom component, like you need these things working together.

Shriram Krishnamurthi:

So here's the thing. Very roughly, obviously there's an infinite variety here, but very roughly, I think there are two ways of looking at the world. And these are just thought experiments. I certainly don't want anybody listening to this saying, oh my God, this is what Brown's about to do. This is not what Brown's about to do. I'm just describing thought experiments; don't, like, roast me on social media.

Shriram Krishnamurthi:

I mean, you can, but I don't care. Anyway, one way you can imagine slicing the world: you could imagine all LLMs upfront, you do all your coding agentically for, say, your first year, and then in years two, three, and four, you start going below the threshold and you do sort of everything by hand. This is a little bit like what actually happens right now, in the sense that you start with higher-level languages and then you work your way down to understanding what's happening in C, what's happening in assembly, what's happening in computer architecture, etcetera. You go from the highest abstraction to lower abstractions.

Bryan Cantrill:

Well, also a lot of frameworks supporting you right now. Like, you have a lot of

Shriram Krishnamurthi:

frameworks around, but you also have a reliable compiler. You don't have a flaky compiler. You've got a reliable compiler, and for the purposes of what you're doing, it actually gives you a non-leaky abstraction for the most part. Whereas here you're getting this extraordinarily leaky abstraction. The other model is: we're going to just split it, I don't know, fifty-fifty or whatever. We're going to split it down the middle.

Shriram Krishnamurthi:

Every course is going to have some agentic component, and then it's going to have a manual component, and it's going to figure out what the mix is that's best for it. And I think these are, roughly speaking, the two models, every course with an asterisk. And I think we're still sort of debating which of these two models makes more sense. But I think we're going to end up going with the fifty-fifty model, right? Because it seems unreasonable that a student could take, like, one course or something like that and never have written a line of code by hand.

Shriram Krishnamurthi:

At least for the next two to three years, that feels like fundamentally a wrong thing to do to a student. Maybe seven years from now, who knows? But certainly in the next two, three years, that seems like a wrong thing to have done.

Bryan Cantrill:

Yeah, you definitely don't wanna... You wanna make sure that you've still got a principles-based curriculum where students are still learning how this works. I mean, because to me, and actually, it was funny, Kathi, when you were talking about your own introduction to computer science, that you didn't really get into computer science until you got to university. I was basically the same way, in that I'd grown up programming, but I also thought I knew it all by then. And I was gonna be an economics concentrator.

Kathi Fisler:

I didn't

Bryan Cantrill:

think there was a you. Yeah. Yeah. Well, yeah, exactly.

Adam Leventhal:

I bet you have.

Bryan Cantrill:

Yeah, exactly. So have several of your colleagues in the department. You can

Shriram Krishnamurthi:

ask them about me. Yeah. All we know.

Bryan Cantrill:

And to me, that watershed moment was realizing that there's a real discipline here. Learning the math behind computer science, learning about time complexity, was just this incredible watershed moment. And then computer architecture and operating systems, and learning how much I didn't know. I mean, the whole purpose to me of undergraduate education is at some level to break you down and build you back up again. To bring you in, you almost want students to come in overconfident, and then break that confidence. And this is why I kinda like the smoke-the-whole-pack course, because it's just like, hey.

Bryan Cantrill:

Look. We're worried that some of you are coming in not overconfident, so we're actually gonna force you to do some agentic LLM work, and then you'll all be overconfident. And then we're gonna break you all down, which is good. And then we'll get

Kathi Fisler:

I wanna

Bryan Cantrill:

talk about

Kathi Fisler:

things on the other end of this too, which is: we have an increasing number of students who take one CS class. Now, many of them are doing it because it's a requirement for whatever major they're doing. But you're going to have students who come in, take one CS class, and now they're trained to go be dangerous in another field doing data analysis, say for a professor's research project. Those students, I think, are a different kind of critical audience for this, because LLMs will look very appealing. They really need to have this kind of awareness that, hey, these are the ways these things go wrong.

Kathi Fisler:

This is what it means to build something trustworthy and reliable, even if you're not going to build anything big. So we can't paint this as just about the CS majors. In some sense, quite a few of our students are not majors.

Shriram Krishnamurthi:

If we don't have a course that speaks to this early on in the curriculum, students are going to go listen to some random influencer on the internet or something and say, well, the CS department's all outdated, they're not willing to teach me how to work with Claude or whatever, and so I'm just going to go watch some random videos on YouTube, which could be amazing, but could also completely fail to convey any sense of responsible software engineering to the students. So this is also a little bit of a fear that I have. Or, I don't know, the library will decide to offer a course or some random thing. We've seen this before, by the way. About ten years ago, there was this huge upswing of data science courses and data science departments and majors. At Brown, we handled this in a very harmonious way, but at a lot of places data science broke off to do its own thing.

Shriram Krishnamurthi:

Kathi and I actually wrote an essay about this. On the one hand, they were making a valid critique of intro computing curricula, which at some fundamental level had not changed since, I don't know, 1985 or something. CS1 and CS2 still look the same at most places, excluding stuff like what Adam did at Brown and things like that. But most of those had not changed, and that was really bad. On the other hand, they were also doing it without any heed to things like testing. They're like, oh, testing, that's an upper-level computer science subject.

Shriram Krishnamurthi:

So wait, you're gonna train people to go off and do data analysis. They're going to be the most trained person in the room, they're going to produce answers that somebody is going to make a decision based on, like a social policy decision or a business decision, and you're not even going to tell them how to test their code? Never mind statistical testing and all that it entails, just even basic unit testing. Guess what? That's the stuff that is now completely automated. All the stuff they spent a semester teaching you, the Python packages, that's the stuff the LLM will produce in forty-seven seconds; it will just generate the data science code for you. It's thinking through what are the cases, what are the difficult cases, what do I need to worry about, what does my customer worry about, what is wrong with my data sets, all that critical-thinking stuff that they didn't teach, that's the only thing that's left in that intro data science class now.

Shriram Krishnamurthi:

I wonder how that played out for them. But people will just repeat that mistake.

Bryan Cantrill:

I will be very interested to see what happens with those folks who kinda take one computer science course, because I think that's gonna go away. I gotta tell you, I think that's gonna go away. I think you're gonna have mom and dad saying, hey, look, I know you're premed, but you should take one computer science course.

Bryan Cantrill:

I think that, you know what? You don't need to take the computer science course. You're going to have Claude for that. You actually don't need to. Yeah.

Bryan Cantrill:

Or when the engineering department says take one, it is

Adam Leventhal:

this one that's like fifty-fifty. Like, yes, you're going to use LLMs a bunch, but we want to make sure you have the means to understand if the

Bryan Cantrill:

thing you've produced is correct or incorrect. Right? That becomes the more germane course to take. Yeah. I mean, I think, and I'm sure you're all braced for it.

Bryan Cantrill:

I mean, I think it's probably going to be honestly healthy that there's going to be some enrollment decline in computer science.

Kathi Fisler:

Oh, yeah. We're we're already seeing it. Yeah.

Bryan Cantrill:

Yeah. Yeah. And I think, I mean, good for all, I think. I mean, I know it's because computer science was too oversubscribed, and it's actually better, I think, for the university to have that cool down.

Adam Leventhal:

I'm sure we've talked about this, but Kathi and Shriram, check me on this one, but I think something like a quarter of Brown University students

Shriram Krishnamurthi:

were a

Bryan Cantrill:

third. Oh, was it a third? Yeah. I mean, at least when I was at the

Shriram Krishnamurthi:

Some ridiculous percentage, you mean, majoring in computer science.

Bryan Cantrill:

Yeah, it was bananas. If you had a concentration in computer science and another domain, they were asking you to please go to that other domain's graduation. Just for, like, seating and parking.

Kathi Fisler:

But then the problem is a lot of those students were doing computer science slash economics. So you send them to the other department and they're no better off. I mean, it's just as bad.

Bryan Cantrill:

Even bigger. Right. Well, I think it's gonna be really interesting to watch all of this. I gotta say, I'm really proud of you two, you three, I guess, with Michael, at Brown, for taking a really experimental approach here and trusting undergraduates to play an important role in what this looks like. Because I think that, to me, has always separated Brown a bit, in terms of the trust that it has in 18-year-olds.

Bryan Cantrill:

And, I mean, of course, the 19-year-olds don't trust the 18-year-olds. But the institution itself, I think encouraging people to take responsibility for their own education is extremely important, that you're not here to get

Shriram Krishnamurthi:

I don't know how it works at other institutions. People keep telling me, oh, you couldn't trust students that way, blah, blah, blah. And I don't know. On the one hand, I feel like you're probably right, I'm going to guess that there's something about your student body that's like that. But on the other hand, if that's the case, then the one person who trusts them, everyone's gonna flock to that one person.

Shriram Krishnamurthi:

I've been talking about this stuff, and because I also do education research, when I go to give technical talks, I often also do an education talk, and it's just weird how there's this culture of not wanting to try the new thing. I mean, we're all human, right? It's always easy to find a good reason to not do something, and you've got to have mechanisms by which you find a way to say, I'm going to do it no matter what, because the hundred different reasons, yeah, they'll always be there, but so what?

Bryan Cantrill:

Well, I love the plunge you're all taking. Are you going to take the results of this and summarize them anywhere? I mean, in terms of like- Oh

Shriram Krishnamurthi:

yeah, yeah, yeah, I mean, we need to, because we need to redesign our fall classes. So that's the summer job: redesigning the fall classes.

Bryan Cantrill:

Redesigning the fall classes, yeah. I'm sure there are other departments that are also looking to your experience, because this is something that every department is grappling with: what this change means and how do we educate our- But I think, at some level, it's gonna mean you're gonna have to trust your undergraduates a little bit more, find a way to trust people with their own education, because you've got this all-knowing machine that people can go to. This flawed, nondeterministic, flaky, putatively all-knowing machine. Yeah. Yeah.

Shriram Krishnamurthi:

Yeah. It'll be great. We should

Bryan Cantrill:

Well, this

Shriram Krishnamurthi:

What a time to be alive, man. Come on. Like, I think we should

Bryan Cantrill:

It is a time to be alive. No. I think it is a little bit

Shriram Krishnamurthi:

If you're gonna give up

Kathi Fisler:

a sabbatical and a teaching relief, you better have as much fun as we did this semester.

Adam Leventhal:

Shriram, not featured on this podcast is my other son, who has also been vibe coding. This is a baseball player who's eight years old. And he is off on his own, has decided to have ChatGPT write the prompts for Gemini, because he feels like ChatGPT is better at prompting and Gemini is better at writing. I don't know where he came up with it. And, yeah, he's building stuff.

Adam Leventhal:

It is truly terrifying. So anyway, on a future podcast, we'll let him weigh in.

Bryan Cantrill:

Yeah, exactly. He'll be like, no, I'm force-feeding myself. I love it. It's all-you-can-eat over here. Well, Shriram, Kathi, thank you so much for joining us.

Bryan Cantrill:

Again, I'm really proud of you three for taking this on, taking a novel approach, really embracing all of this, and showing what it means to the rest of us. So thank you very much for joining us.

Shriram Krishnamurthi:

Thank you so much. You're an amazing group. And we'll stick around and answer some questions and chat, but thank you all so much for coming. Really appreciate your time.

Bryan Cantrill:

Awesome. And we non breakfast eaters need to stick together.

Shriram Krishnamurthi:

Indeed, indeed. We should get together and not do breakfast.

Bryan Cantrill:

That's right. That's right. Alright. Thanks, everyone. See you next time.
