Another LPC55 ROM Vulnerability

Laura Abbott joins Bryan and Adam to talk about **another** vulnerability she uncovered in the LPC55.
Speaker 1:

Laura, you there?

Speaker 2:

Yep. I'm there. Here.

Speaker 1:

Awesome. Nice. Well, what is it? It's all there and here. Where am I?

Speaker 1:

So, Adam, do you listen to Heavyweight with Jonathan Goldstein?

Speaker 3:

No. I don't.

Speaker 1:

This is... do you listen to podcasts much? I'm not sure how it would

Speaker 3:

Not much. Not much. I listened to the Enron one recently and the Elizabeth Holmes one before that, but not that much.

Speaker 1:

So I, again, mostly listen while, like, doing dishes or driving the kids around or whatever. But Heavyweight with Jonathan Goldstein, a Gimlet podcast: outstanding podcast where he goes to people and kinda solves problems that they have had deep in their past. You know, often a reconciliation with someone from their past. It's very, very good. Very well done.

Speaker 1:

But every episode of Heavyweight starts with him calling his friend, Jackie, and Jackie usually hanging up on him. And, listening to all of these, thank you so much for uploading all the past Twitter Spaces into Spotify, to syndicate them as a podcast.

Speaker 4:

Oh, yeah.

Speaker 1:

So I've been listening to a bunch of them, just, like I said, driving around. And I realized that we have our own, like, Jackie: we start every episode complaining about Twitter Spaces, basically. I mean, complaining is maybe overstating it; we shouldn't give ourselves too hard a time, because we're just observing things that are busted about Twitter Spaces.

Speaker 3:

Yeah. I was wondering where this

Speaker 1:

meandering journey was going. Yeah. That's where we are.

Speaker 3:

I don't know if you noticed this, but I was reminded of my own in-joke, listening back to some of these now that they're in podcast form: I went through about 10 episodes in a row where I cut it to start with you saying "alright." So there are about 10 episodes where, if you advance through them quickly, you just get Bryan saying "alright" as the first word.

Speaker 1:

Now, this is not, our dedicated listeners should know, your first act of editing prowess. Okay. Tell me more.

Speaker 1:

I feel like it's Pat Gelsinger and Andy Jassy. Right?

Speaker 3:

Yes. Yes. Yes. The Pat Gelsinger and Andy Jassy supercut of them disagreeing on whether it's "on premises" or "on premise."

Speaker 1:

But they're not actually disagreeing; one just says it one way, and the other says it the other way.

Speaker 3:

That's right. Back and forth. Yes. I thought you might have meant... I did one cleanup episode in the show, and I won't mention which one. And then a different cleanup where I added you saying happy Valentine's Day, Bridget.

Speaker 3:

Yes. But the edit was so clean that I think it may not be observable.

Speaker 1:

I will pass that on to her. But she will immediately smell a rat. Although, actually, as recently as this morning, you were sending the three of us, like, embarrassing photos from our shared past. So I don't know if it's gonna... because he did that one. There we go.

Speaker 1:

But then, Laura is here. Laura, thank you so much for joining us. And, Laura, I was trying to remember, because I feel like we talked about the first vulnerability here, but then I realized that the dates don't quite line up because

Speaker 2:

I thought we did or at least I I definitely remember talking about it.

Speaker 1:

Yeah. Right. I think so too. I don't know where we did. We did, I think, at some point, but I think we started doing the Twitter space kinda after you had the big disclosure a year ago.

Speaker 1:

So, to be clear, we are talking about another LPC55 vulnerability. Although, for people who follow the company closely, we're not talking about another *other* LPC55 vulnerability. It did occur to me: wait a minute, did Laura find another vulnerability since, like, a week and a half ago? The answer to that is: not yet. But, Laura, I wonder if you wanted to maybe start with a little backstory on the vulnerability you found in, now, December of 2020, which is when you found the first vulnerability.

Speaker 1:

Right?

Speaker 2:

Right. So, backing up a little bit: when we talk about the LPC55, this is a chip from NXP that we selected to use for our hardware root of trust. Part of Oxide's shtick about building a new server is that we want to have this hardware root of trust to be able to measure what's on the system. And, after a lot of deliberation, we decided the LPC55 was the best choice out there.

Speaker 2:

And then we started the process of trying to bring up the chip. And while, you know, it's certainly easy to get initial code running on it, trying to do some of the more advanced features that we actually wanted to use was proving to be kind of difficult, partially related to struggling with some of the documentation. And I went through a series of problems trying to get this going, which resulted in me breaking a whole bunch of chips.

Speaker 2:

And, actually, when I visited the office a few weeks ago, Rick gave me a stack of those chips I bricked. So they're sitting on my desk right now. I need to do some sort of art project with them to commemorate those.

Speaker 1:

I think you need to, like, adorn an actual physical brick with them. Because, I mean, there was a stack. I feel like there were, like, 5 or 6.

Speaker 2:

Yeah. There were a number of them. I kinda wanna put them in resin or epoxy or something.

Speaker 1:

So you should explain to folks the level of software you were developing and how it was even possible. What we mean by breaking the chip, first of all.

Speaker 2:

Yeah. So, okay, backing up a little bit more: I used to be a kernel maintainer. And one thing I would always tell people is that, you know, it's generally okay to experiment with your kernel, because it's pretty hard to actually brick your system, assuming you're trying things reasonably carefully.

Speaker 2:

Short of, like, putting random failures in your file system, there's a pretty good chance that if you try out a kernel and it panics, you know, panics for the most part are recoverable. But for what we're doing with these LPC55 microcontrollers, they have some settings that are designed to be programmed. When the chip boots up, it goes through the boot ROM, which is where I eventually found the vulnerability, and I'll get to that. And as part of going through the boot ROM, it'll check various settings.

Speaker 2:

And it turned out it's possible to get those settings incorrect, and as a result have the chip not be able to boot up, with no way to fix it in any meaningful way.

Speaker 1:

Yeah. So when we say bricked, we mean there is, as far as we are aware, no way to recover those

Speaker 2:

things. Yeah. So, I mean, I ended up with this stack of bricks right there. And then I think it was actually one of my coworkers, Cliff, who had the idea: you know, hey, we could read out the ROM, because NXP hadn't, like, read-protected it, unlike other chips out there.

Speaker 2:

So: what if we just started to disassemble it to try and see what's going on? He pointed me to that, and I ended up picking it up and taking a look at it a little closer, to see if I could identify code paths where exactly I might have screwed things up. And, you know, I did eventually find some places where I think I screwed things up. But as I kept looking, it was actually very interesting to try and figure out what this thing was doing. I spent a lot of time cross-checking the addresses that were being referenced against the manuals.

Speaker 2:

And I found this one set of addresses that were not actually mentioned in the manual, but they were definitely within the hardware space. And then I ended up cross-checking that against a spreadsheet that

Speaker 1:

would be

Speaker 2:

Yeah. So, it's

Speaker 1:

You should describe the spreadsheet, because the spreadsheet was a total surprise to me. I mean, it was a surprise to you.

Speaker 2:

Yeah. I mean, okay. So it was one of these things we found just looking at it a little more closely. We had the spreadsheet, and it eventually referenced this region called the ROM patcher. And I was able to cross-check where this part of the flash was being accessed, and I was able to put 2 and 2 together.

Speaker 2:

And seeing what the code was doing in the ROM and looking at this assembly, I was able to get a good idea about what this ROM patcher was doing. So that was really cool. Then, of course, you know, I got the brain cells connecting and decided, okay, what else can we do with this? I mean, the first thought when you see something like a ROM patcher, you must assume, okay.

Speaker 2:

You know? There's a good reason to have a ROM patcher. I think it should be emphasized that the thing about a ROM is, of course, if it's true mask ROM, you can't change it. But if there are bugs in your ROM, you're going to need some way to fix that. And so that's the idea of what this ROM patcher is supposed to do.

Speaker 2:

So there are good reasons you want this. But the catch, and where we ran into problems, was that it was possible to modify the ROM patches after boot-up. We discovered that it was possible to do this, and it allowed you to potentially violate various security boundaries, both between the privileged and non-privileged areas and, more importantly for our eventual purposes, across the TrustZone boundary.
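(To make the mechanism concrete: a ROM patcher is, in essence, a small table of address/replacement pairs that the hardware consults in front of the mask ROM on every fetch. The toy model below is only a sketch of that idea; the type, names, and word-indexed fetch interface are invented for illustration and are not NXP's actual patch-controller register interface. The point is the last step: if the table is still writable after secure boot, "immutable" code isn't.)

```rust
use std::collections::HashMap;

/// Toy model of a mask ROM with a patch unit in front of it.
/// (Illustrative only; the real LPC55 patch controller is a
/// hardware block with its own register interface.)
struct PatchedRom {
    rom: Vec<u32>,                // immutable mask ROM words
    patches: HashMap<usize, u32>, // word index -> replacement word
}

impl PatchedRom {
    fn new(rom: Vec<u32>) -> Self {
        Self { rom, patches: HashMap::new() }
    }

    /// Every fetch consults the patch table before the ROM itself.
    fn fetch(&self, addr: usize) -> u32 {
        *self.patches.get(&addr).unwrap_or(&self.rom[addr])
    }

    /// The legitimate use: the vendor fixes a ROM bug at boot.
    /// The vulnerability: if this is still reachable after boot,
    /// an attacker can rewrite "immutable" code the same way.
    fn apply_patch(&mut self, addr: usize, word: u32) {
        self.patches.insert(addr, word);
    }
}

fn main() {
    // Pretend word 2 is a security check that returns "locked" (1).
    let mut rom = PatchedRom::new(vec![0xB5F0, 0x4604, 0x0001]);
    assert_eq!(rom.fetch(2), 1);

    // An attacker who can still write the patch table after boot
    // replaces the check with "unlocked" (0): the ROM contents
    // never changed, but the code the CPU sees did.
    rom.apply_patch(2, 0);
    assert_eq!(rom.fetch(2), 0);
    println!("patched fetch of word 2: {}", rom.fetch(2));
}
```

The mitigation the episode alludes to is simply making the table write-once per boot: apply vendor patches, then lock the block before running any less-trusted code.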

Speaker 1:

Well, yeah. So, a couple of really interesting points there I just wanna go back and emphasize. One, I don't know if I had quite realized the origin story of going in to disassemble the ROM: to make up for the lack of documentation.

Speaker 2:

There was some documentation, but I found some of it kind of confusing and hard to follow. And, I mean, I have a lot of respect for documentation writers, and I think they are certainly doing their very best. But I see this as an example of a company that has, you know, not chosen to put documentation at the forefront of their strategy, for example.

Speaker 1:

That is definitely true. And then the spreadsheet, as I recall, was contained in the PDF.

Speaker 2:

Yes. Attached to a PDF, which I had never seen before. I hadn't actually realized you could do that.

Speaker 1:

Yeah. I did not realize you could do that either. Like, can you just have... and then, of course, people who know PDFs are like, oh my god, yes. It's like a virtual machine.

Speaker 1:

It's not a document. It's a program that you're running on your computer. You're like, okay. So that was definitely eye-opening.

Speaker 1:

And then you're getting into the flash, discovering the ROM patcher, and I think you did a great job articulating why these things need to exist: when you're developing software that is immutable, it's a little scary, and you certainly have to have some way of fixing defects. But, Laura, I think it's also worth emphasizing that NXP made, I would say, a classic error, in that as you were disclosing to them the fact that this thing was not locked down, had not been secured, there's kind of a sense where they were challenging you to escalate the privilege. They kind of minimized it, honestly.

Speaker 2:

I mean, I think we were definitely not happy with the way that escalation ended up happening, simply because, especially from our perspective, there was, I think Rick referred to it as, a lack of creativity about what we were trying to do. You know? I gave a proof of concept of what it could do, and they kinda came back and said, okay, maybe you can do something with this. They took it at face value, as opposed to trying to think through all the implications of what you could actually do with it.

Speaker 2:

And at that point, I think it was really Rick who went back and came up with the idea of demonstrating this using their own trusted firmware. And then we engaged in, I think, a game of assembly code golf to figure out exactly how we could get a proof of concept in, in a useful way.

Speaker 1:

And that, they were more receptive to.

Speaker 2:

I think,

Speaker 1:

they were still... I think we were a little bit dismayed about what we felt was an unnecessary delay in the process. I also feel, Laura, I don't know what your take is on this, but the work that you were doing is so intensely creative in terms of understanding how you can take this thing out of its design center. Challenging the creativity of someone so creative: that is not a path to success, I think. I think that's dangerous.

Speaker 2:

Yeah. And, I mean, I think this ultimately comes down to thinking about threat modeling. I'm grateful for colleagues like Rick who have, you know, really taught me how to approach problems like this, and to think about exactly what problems you're solving when you're trying to do things like this, and to imagine all the different ways you can come up with problems like this. So, yeah, I think we've definitely learned how to be very creative in our approaches to things.

Speaker 1:

And then, with this vulnerability, there is a CVE. I was shocked. Laura, I don't know if you were shocked by that. I was floored to learn that NXP's disposition toward CVEs was that it was basically an opt-in system, and they didn't feel like opting in yet. And that was definitely never my understanding. I mean, clearly, it's not like there are regulatory compliance issues, but I thought everyone had the attitude of, like, no.

Speaker 1:

This is a common vulnerability database, and everybody needs to share in it for the good of the industry. I don't know. Laura, were you surprised at all by their disposition on that?

Speaker 2:

I was a little bit surprised, but I would also say you could probably do an entire episode talking about, like, CVEs and security processes, because I think it's important to remember what the purpose of CVEs is in terms of trying to get things out there. I think the CVE discussion was more an example of the bigger-picture problem, which is a lack of transparency in terms of actually getting this information out there. I mean, for all its flaws, the CVE process is a great way to actually get that information out there.

Speaker 3:

Yeah. I had the same reaction as you, Bryan. I didn't think it was sort of optional, that people could shirk it, could say, not us. And I almost see it as a way of burying some of these details, in that, because everyone uses it, any one disclosure is sort of hidden within all the others. But I guess they didn't view it that way.

Speaker 1:

And, like, I can understand where they're coming from. I mean, it doesn't feel good to have a vulnerability in something that you've really tried hard on, having been there. And I get that temptation, I guess. So you have to listen to the better angels of your nature, for sure, on that. And it's kind of unfortunate that they didn't.

Speaker 1:

And especially for a part

Speaker 3:

like this. Right? Like, especially for a part like this where, I mean, maybe it cuts both ways on this, but one where trust and security are part of the

Speaker 1:

Oh, it's right there on the tin. Well, and, I don't know, I guess I don't do this anymore, but I would look at the CVEs to get a sense for what has been discovered in a part. And if you look at NXP, it's like, oh, there have been 4 CVEs in the history of NXP. Like, wow.

Speaker 1:

That's amazing. It's like, well, actually, it's not that amazing, as it turns out, because they have vulnerabilities; they're just not disclosing them. And then, Laura, you remember them saying, like, well, if you want, we can create a CVE. Like, yes. We want.

Speaker 2:

Yeah. And, you know, I think this also gets at the weaknesses of CVEs: CVEs are very software-focused. And so when we're dealing with a hardware company, it's one of these things where it's like, okay, what exactly does a CVE for hardware actually mean? When we go back to everyone's favorite, Spectre and Meltdown, that's another case where the concept of assigning CVEs didn't always make a lot of sense, because it wasn't tracking things in a useful manner.

Speaker 2:

So I can sort of see how a hardware company may approach things in a different manner. But again, going back to my previous point, this is more just related to a lack of transparency. I mean, if they didn't want to do CVEs in that particular manner, maybe that would be okay if they had other ways of making sure they could get things out there.

Speaker 1:

That's right. And so, on that: during this whole process, one of the things that we were asking for, just to your point, Laura, was more transparency for everybody, for their customers. And in particular, we were asking them, and still would ask them, and would ask all vendors, to make the source code to the boot ROM available so we can see it. That way, Laura doesn't have to go reverse engineer it to figure out how to use this thing. And we think it'd be very beneficial for the source code to be out there.

Speaker 1:

And NXP does not agree. And so, under NDA, they told us... I'm gonna read their quote. Or, Laura, do you wanna read their quote? I don't wanna take away any out-loud reading from you.

Speaker 2:

I don't have it handy with me, sir. If you wanna do a dramatic reading, you know... Yeah.

Speaker 1:

If I need to

Speaker 3:

If you wanna violate the NDA, knock yourself out, boss.

Speaker 1:

Well, no. Exactly. That's right. Oh, damn it. I thought I was gonna need some fingers on the revolver.

Speaker 1:

No. No. No. Okay. I'll explain that in a second.

Speaker 1:

So, yeah, they shared this with us under NDA. They say: even though we are not believers of security by obscurity, keeping the interest of our wide customer base, the product-specific ROM code is not open to external parties except the NXP-approved Common Criteria certified test labs for vulnerability reviews. So it's like: even though we are not believers in security by obscurity, we believe in security by obscurity. So we asked them: okay. Fine.

Speaker 1:

You're not gonna release the boot ROM. Can we issue this statement publicly? And they released us from our NDA for that statement. So there you go, Adam.

Speaker 3:

There you go. Alright.

Speaker 1:

Good. So, I mean, we kinda think this is like, well, you're kinda showing your bare ass here, but okay. Fine. You know what? There you go.

Speaker 1:

And then Laura wrote a terrific blog entry on this. This is, again, a year or more ago. So, Laura, when did that blog entry come out? It came out... yeah. God.

Speaker 1:

It was, yeah, almost a year ago, 11 months ago. And, Laura, what was the reaction to that? I mean, is this the first big vulnerability that you've discovered? Walk us through what it was like to discover this and then to get it out there publicly.

Speaker 2:

I mean, it was, you know, pretty intense to actually get it out there. I had done some sort of informal, mostly bug hunting, but never anything that really rose to this level before.

Speaker 1:

And then it should be said that you and Rick ended up doing a DEF CON talk. Yes. Which was terrific. We can link the video. And how did you feel?

Speaker 1:

Because DEF CON was not in person. Did you miss doing the in-person thing? I personally would be... I'm so scared to take an electronic device to DEF CON.

Speaker 2:

Yeah. I'm still disappointed that it was not actually feasible to go in person. I had been to DEF CON once, well over 10 years ago, and it was a good time. And, I mean, I'd love the chance to go again. But I did get, you know, the speaker badge and everything.

Speaker 2:

So

Speaker 1:

Okay. That's pretty cool. But, okay, so you still feel like you've got some unsettled business, in that, due to the pandemic, you couldn't be there in person. You would have liked

Speaker 2:

to be there in person.

Speaker 1:

So then I think we fast-forward to December. We're continuing to build our product. Walk us through how you found this vulnerability.

Speaker 2:

So, okay. Even after I found the vulnerability in the ROM patcher, I sort of continued to poke at other various parts of the ROM, mostly because, like, you know, we talk about creativity: it was also a problem that was sort of sitting there, staring me in the face. Like, I wanted to figure out what exactly all the parts of this thing actually did.

Speaker 2:

And, I mean, you know, I kept trying to convince myself that this was actually good for the product. So one of the areas I started to look at was the in-system programming (ISP) mode, which is designed to let you program things in a secure manner. And one of the features of this ISP mode is to be able to take a signed update image and apply it when the chip is in a fully locked-down, secure mode. There's documentation out there for the format itself, but as far as what the ROM was actually doing when processing it, I was curious exactly how that worked.

Speaker 2:

And, you know, I will admit I kind of had a motivation for all of this, because I didn't actually wanna sit down and write our own parser. But I really got curious about exactly how well things were bounds-checked in parts of this format, just because it's a fairly complicated format. And again, I'd like to say: if you think about the higher-level problem this is trying to solve, a lot of how you end up with a format like this actually does make a lot of sense. Microcontrollers are very memory-constrained, and so the way this format is designed, it's broken up into 16-byte chunks.

Speaker 2:

So the idea is that you don't have to have a lot of memory to be able to stage the update. And things need to be encrypted, because, okay, fine, you can do that. It's just that if you think about the firmware-update problem and you follow it to its logical conclusion, I can easily see how you get something like this.
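(A sketch of why a block-oriented format suits a memory-constrained part: the image is consumed 16 bytes at a time, so the whole thing never has to be in RAM at once. Everything below, the function name and block handling included, is illustrative rather than the real SB2 layout or its actual processing order.)

```rust
/// Block size used in the spirit of SB2's 16-byte chunks.
const BLOCK_SIZE: usize = 16;

/// Process an update image without ever holding more than one
/// 16-byte block in RAM: the property that matters on a
/// memory-constrained microcontroller. Returns the block count.
fn process_image(
    image: &[u8],
    mut handle_block: impl FnMut(&[u8; BLOCK_SIZE]),
) -> Result<usize, &'static str> {
    if image.len() % BLOCK_SIZE != 0 {
        return Err("image is not a whole number of blocks");
    }
    let mut staged = [0u8; BLOCK_SIZE]; // the only buffer we need
    let mut count = 0;
    for chunk in image.chunks_exact(BLOCK_SIZE) {
        staged.copy_from_slice(chunk);
        // Real firmware would decrypt and authenticate the block
        // here before acting on it; we just hand it onward.
        handle_block(&staged);
        count += 1;
    }
    Ok(count)
}

fn main() {
    let image = vec![0xAAu8; 48]; // three 16-byte blocks
    let mut seen = 0;
    let blocks = process_image(&image, |_| seen += 1).unwrap();
    assert_eq!(blocks, 3);
    assert_eq!(seen, 3);
    println!("processed {} blocks", blocks);
}
```

The design note in the episode follows directly: because each block is handled in place, the streaming parser's per-block header fields are attacker-supplied, which is exactly where the bounds-checking question comes in.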

Speaker 1:

And this is the SB2 format? Correct. And you had asked on Twitter. I don't know. Did you get an answer?

Speaker 1:

What is the history of this format? Because it's just kind of like where does it come from?

Speaker 2:

So I asked about this because I was looking around, just because I'm really curious about it. As far back as it was referenced, it seemed to come out of SigmaTel, which, to the best of my knowledge, was a company that was involved in making audio chips. And you can see traces of that in the format itself, because it uses a couple of magic numbers like SGTL, which I think is short for SigmaTel, and STMP, which is the name of one of the chips. So I think this format has existed for a while. It's also interesting to look at how exactly this got to NXP. And I think the answer is that SigmaTel was acquired by Freescale, and then Freescale was acquired by NXP.

Speaker 3:

This is awesome. And, Laura, can I ask you, tactically, when you're disassembling this, when you're looking at the different, you know, capabilities of this ROM, what kind of tooling are you using to understand it?

Speaker 2:

So I should probably give a caveat that I am by no means a reverse-engineering expert. I just use Ghidra, which is an open-source tool that's available. What it does is provide a dump of the assembly and then a decompilation back into C code, side by side. The decompiler generally does a very good job, but it's certainly not perfect, so you're sometimes left scratching your head trying to figure out exactly what makes sense.

Speaker 2:

Then it's up to me to annotate the variables and the function names to be able to identify what exactly this thing is doing.

Speaker 3:

I see. So it takes the assembly, turns it into, you know, pretend C code, and then you get to edit it to try to make sense of it.

Speaker 1:

Yeah. I would love to, because, just seeing what you've done with it... I have not really had the occasion to use it in anger, and I would love to. I mean, Adam, it's so neat what it allows you to do, and it allows you to, like, save this state and share it with other people too. So Laura could, correct me if I'm wrong, annotate the stuff and share it with someone else who might be able to offer their perspective.

Speaker 2:

Yeah. And I think it's certainly designed in a manner such that it's supposed to be usable by teams who are reverse engineering; it's possible to have it be a shared project. I just have it as an individual project, but you can certainly do things in a much more powerful way. And, again, I'm not an expert in reverse engineering by any means.

Speaker 2:

Ghidra is just the one I've settled on. I know there are other tools out there that do similar things.

Speaker 1:

But it's really neat. Okay. So then you pull it up in Ghidra. And actually, can I ask you a question that I feel like I should know? What does SB2 stand for?

Speaker 2:

You know, I'm honestly not sure. I'm pretty sure the 2 stands for the second generation, because we're talking about a version 2 format. I'm not completely sure what the SB stands for. I'd love to have an answer for that.

Speaker 1:

Right. Some carcass of a company that was devoured. Like the plankton that was devoured by SigmaTel, or what have you. So you start reverse engineering this. And is your disposition, like, we wanna understand this format?

Speaker 1:

I guess you'd said this earlier, that, like, hey, this is a format that was gonna save us a lot of work. We'll use it if it makes sense.

Speaker 2:

Yeah. And I think that was our original motivation, because when I was writing up what exactly we were going to do for this, I mean, I talked about the trade-offs. I think after we found that other vulnerability, you know, we had gone back and forth about whether we wanted to use this. And I pointed out the advantages of it being there. And I think our initial plan was that, okay.

Speaker 2:

Fine. We'll use this as a backup, you know, possibly in a worst-case scenario. We thought we could isolate the ROM patcher enough that this shouldn't be an issue.

Speaker 1:

And so how soon after getting into this code with Ghidra do you realize that, oh my goodness, the bounds checking, maybe not so much?

Speaker 2:

I mean, again, this was sort of a long-term project. I would work on it here and there. Honestly, probably sometimes working on the ROM more when I should have been doing other things. I know, a bad thing to admit with my boss here.

Speaker 1:

Not at all. Not at all. No. Actually, I think that's an extremely important point, because I have found that a lot of the things we have done that have been most important have often had this kind of origin, where someone was just kind of following their instincts or doing something that was related but somewhat ancillary. So I actually think it's really important that you felt the freedom to go do that, and you should, because what you found is really, really important.

Speaker 2:

Bryan, I'm very glad to hear you say this, as, you know: hey, I have not yet disassembled the USB stack.

Speaker 1:

Right. You are now sharing a Google doc with me that, actually... yeah. Exactly. That's great.

Speaker 1:

No, I think that people so often make the mistake of being driven by a particular deadline, or what have you, and driving past a wisp of smoke, not exploring an issue that they should. And then, as it turns out, that deadline gets blown because, boy, you should have explored that issue and discovered that there was something really substantial there. So I'm really, really glad that you explored this. And plenty of these explorations, I'm sure, do not bear fruit, but, boy, this one sure did. So, yeah.

Speaker 1:

When yeah. Go ahead.

Speaker 2:

So I'm starting to take a look at this. I mean, I was honestly not expecting to find anything. I was hoping to be able to say I took a look at this, and report back at our huddle: okay, you know, I took a look at this, and it seems they have pretty good bounds checking, yada yada.

Speaker 2:

Just have it, you know, kind of be an afterthought. But

Speaker 1:

Okay. So can I ask you your disposition in this? Because I feel like there is a change in disposition when you know that code is wrong.

Speaker 2:

Do you know what

Speaker 1:

I mean, Adam? Like, as a code reviewer, I try to summon that same feeling that I have when I have debugged a problem down to this function: I know this function has a bug. The way I read code when I have that disposition is so different from the way I read code otherwise. And, Laura, when you go in, what is your disposition? Is it, I wanna understand this?

Speaker 1:

Or is your disposition, like, no. No. I am convinced there's a there's a problem in here.

Speaker 2:

So I initially just started wanting to understand exactly how the parsing worked, just because it ends up being a fairly large state machine. And, I mean, state machines are one of these things that are, you know, kind of the bread and butter of a programmer, in terms of having something to implement. But I was also curious about what exactly it had to do to process all these things, and how exactly it checked everything, just because the format is somewhat complex. And there are a lot of quirks about, you know, how it does some of the encryption and other things like that.
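As a rough illustration of the kind of chunked state-machine parsing Laura is describing, here is a toy sketch in Python. The states, section lengths, and names are all invented for illustration; this is not the actual SB2 format or NXP's parser:

```python
from enum import Enum, auto

CHUNK = 16  # the update format is consumed in 16-byte chunks

class State(Enum):
    HEADER = auto()
    KEY_BLOB = auto()
    PAYLOAD = auto()

def parse(image: bytes) -> dict:
    """Walk the image one 16-byte chunk at a time, switching states
    as each (made-up) section completes."""
    state = State.HEADER
    out = {"header": b"", "key_blob": b"", "payload": b""}
    header_chunks = 2   # pretend the header spans 2 chunks
    blob_chunks = 1     # and the key blob spans 1 chunk
    seen = 0
    for off in range(0, len(image), CHUNK):
        chunk = image[off:off + CHUNK]
        if state is State.HEADER:
            out["header"] += chunk
            seen += 1
            if seen == header_chunks:
                state, seen = State.KEY_BLOB, 0
        elif state is State.KEY_BLOB:
            out["key_blob"] += chunk
            seen += 1
            if seen == blob_chunks:
                state = State.PAYLOAD
        else:  # State.PAYLOAD: everything remaining
            out["payload"] += chunk
    return out
```

Even in this toy form you can see why the real thing balloons: each section has its own lengths, counts, and checks, and every transition is a place where a bound can be forgotten.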

Speaker 2:

So I think I started out with just, you know, genuine curiosity. And then the brain cells and ideas got going: I wonder how well exactly they're doing their bounds checking. I mean, I went in with curiosity, looking at this as if I were doing code review: okay, the input I'm giving is coming from me. What exactly can I do if I set these fields to be larger or smaller?

Speaker 1:

And do you think that the presence of the earlier vulnerability changed your disposition at all?

Speaker 2:

It's possible. I think it probably did have some impact, especially in terms of taking a look and trying to figure out what was there. And, I mean, honestly, and this is sort of counterintuitive, I think if NXP had released the ROM source code, I probably would not have spent all this time looking at it, just because auditing ROMs without any source, you know, before I was

Speaker 3:

That's awesome. It would have been too boring to read the actual source

Speaker 1:

code. Right.

Speaker 3:

It was much more of an adventure to grovel through the disassembly.

Speaker 2:

Yes. So I I mean, I but I but I think the fact that, you know, it it was, there and sort of, you know, staring me in the face in terms of what exactly is this thing doing.

Speaker 3:

That that's fantastic. I love it. And with regard to the bounds checking here, if I read the blog post correctly, it wasn't that they did it wrong. It's that they didn't do it for this block. Was that right?

Speaker 2:

So that's a good way of putting it. So for those who haven't read the blog post, the way this vulnerability ends up working is that, as I mentioned, this is an update format that's designed to be processed in 16-byte blocks. The blocks are numbered from 0 on to the end, as a way of identifying where certain pieces are. So one piece of the header told you where exactly you could find the set of key blobs, the keys for doing the decryption. And it turns out that when doing some copying, what they were not checking was whether you had gone past the buffer size; instead, the copy kept going until it reached a specific block number from the header, and you end up with a classic buffer overflow that way.
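A minimal sketch of that pattern, in Python with illustrative names and sizes (this is not NXP's actual ROM code), comparing the flawed loop with a properly bounded one:

```python
BLOCK = 16        # the format is processed in 16-byte blocks
BUF_BLOCKS = 4    # capacity of the fixed buffer being copied into

def flawed_copy(blocks, end_block):
    """Stops only at the header-declared block number, which is
    attacker-controlled. Returns the block numbers that would have
    been written past the end of the buffer (the overflow)."""
    oob = []
    buf = bytearray(BUF_BLOCKS * BLOCK)
    for n, b in enumerate(blocks[:end_block]):
        if n >= BUF_BLOCKS:
            oob.append(n)          # in the ROM, this write corrupts memory
        else:
            buf[n * BLOCK:(n + 1) * BLOCK] = b
    return oob

def fixed_copy(blocks, end_block):
    """Also bounds the copy by the buffer's real capacity."""
    end = min(end_block, BUF_BLOCKS)
    buf = bytearray(BUF_BLOCKS * BLOCK)
    for n, b in enumerate(blocks[:end]):
        buf[n * BLOCK:(n + 1) * BLOCK] = b
    return buf
```

With an honest header the two versions behave identically; with an oversized end block, only the fixed version stays inside the buffer.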

Speaker 1:

Yeah. Wow. That must have been amazing, for that light to go on. When you kinda made that realization, were you thinking, like, okay, I must be missing something somewhere?

Speaker 1:

It can't be. There's gotta be this check somewhere in the code.

Speaker 2:

So I would probably say, backing up a few steps about how this works: this bug ended up being pretty bad just because it was designed to be used in ISP mode. And ISP mode is supposed to be pretty restrictive in terms of what you're supposed to be able to do. You're not supposed to be able to run arbitrary code. But I was initially doing some testing, because the same parser code is used to do this from in-application programming, and I was stepping through it with a debugger to see what it does and get a better idea.

Speaker 2:

And I mean, I really knew I was onto something when I was stepping through it and could see it was writing outside the bounds of the buffer. And once that happened, I knew. Oh, wow. It was just a matter of: could I actually turn it into something useful, or was I just reporting that there was an overflow?

Speaker 1:

Okay. And then, okay, so then I would assume that, like, that is a real inflection point, where you're like, okay, now we go from curiosity to, now the tenor of my curiosity has shifted.

Speaker 3:

Blood is in the water.

Speaker 1:

Exactly. Well, I'm trying to not necessarily make Laura sound like an apex predator. But at some point, you must feel like, yes, blood is in the water.

Speaker 2:

I I mean, I I I'm not gonna lie. I'm forever gonna be chasing the high of, you know, getting code execution.

Speaker 1:

So it must be extraordinary to realize, like, wait a minute. And I mean, this vulnerability is, I think, much more serious than the last one, and that's not to downplay the severity of the vulnerability you found last year at all. But this one is really, really bad news.

Speaker 2:

Yeah. I think what's particular with this one, as I mentioned, is that the update format here is both encrypted and signed; that's how it's designed to come in. But this bug actually comes in before the signature checking happens, which makes it pretty bad. When I started to take a look, I was kind of expecting I might find something after the signature had been checked, which would have been bad, but less bad, just because that would have meant you'd actually have to sign an image before you could do something useful, which gets you to a different level of threat model. But the fact is that, basically, you could send any set of bytes and get this.

Speaker 1:

It's bad news. And especially because, if you have a secure microcontroller in your architecture, in your device, it's probably because you have something worth securing. I mean, that's kind of the reason for its presence. Right?

Speaker 2:

But I feel like it's also worth saying, you know, we're very doom and gloom, but it's worth establishing what this vulnerability did not give us. I mean, this vulnerability gets you code execution, but not fully persistent execution, if you have things locked down and fully signed. What that means is that if you did a reboot, you'd have to do this again. If you're using the NXP security mechanisms, things are still protected there. You can't fully change the key store or other things like that, and you still have the signature checking.

Speaker 2:

But one thing I would say that was pretty bad was being able to extract the device secret, which is related to the DICE feature.

Speaker 1:

Yeah. This is one of these where, like, okay. This is, like, not that's not my domain of expertise, but, like, extracting the device secret, that seems bad.

Speaker 2:

Yeah. So DICE is a feature that comes from the Trusted Computing Group that's designed to give you an identity used for attestation. The idea is that it's based on an HMAC, doing a calculation based on what code is in there, so you can get an identifier that can be used as a secure key. And the idea is that you're supposed to have a device secret that's, well, secret, so that nobody else can use it to derive the key.
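Schematically, that DICE-style derivation can be sketched like this in Python. This is a simplification of the real TCG DICE flow, and the names are illustrative:

```python
import hashlib
import hmac

def derive_cdi(device_secret: bytes, code_measurement: bytes) -> bytes:
    """DICE-style derivation: the device identity (often called the
    Compound Device Identifier, CDI) is an HMAC keyed by the device
    secret over a measurement (hash) of the code being booted."""
    return hmac.new(device_secret, code_measurement, hashlib.sha256).digest()

# A hypothetical device secret and two firmware measurements:
uds = b"\x01" * 32
fw_a = hashlib.sha256(b"firmware A").digest()
fw_b = hashlib.sha256(b"firmware B").digest()

# Different code yields a different identity, which is what anchors
# attestation to the code actually running...
assert derive_cdi(uds, fw_a) != derive_cdi(uds, fw_b)
# ...and why extracting the device secret is so damaging: anyone
# holding it can compute the identity for any firmware image offline.
```

The whole scheme rests on the device secret never leaving the chip, which is exactly what the extraction Laura describes undermines.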

Speaker 2:

And with this vulnerability, you're actually able to extract that before things get locked down. And, I mean, this is one of these things that sounds bad, but it's kind of a fallout of when exactly this bug happens. Because the device secret is used in conjunction with the image that's on the flash, you need to have those two tied together. And because ISP mode doesn't actually have you booting into an image yet, there isn't necessarily going to be an image on flash yet. So the fact that I was able to extract it didn't necessarily mean there was a weakness in how it was protected.

Speaker 2:

It was much more just that I was able to do this before you had an image there.

Speaker 1:

Right. Right. Right. Right. Okay.

Speaker 1:

Because you're able to basically inject, but this is before we've actually got the image there. So you discovered this, like, right before Christmas? Yes. And I recall us.

Speaker 1:

Getting on a call, I mean. I definitely recall you discovering this. The thing I thought was really interesting is that, in addition to contacting NXP, like the previous vulnerability, we had a real concern about what the impact on our product is here. This actually could be really devastating if we can't use the update functionality at all.

Speaker 2:

Yeah. And I think that was why, once we got done with the vulnerability reporting, I worked with, you know, Rick and others to do a write-up about what exactly this would mean for the product, to evaluate our options for actually getting a product out there. Because, again, this is all nifty stuff, but we can't actually sell, you know, the pride in finding vulnerabilities. So

Speaker 1:

Right. Well, I I I'm not sure we can't. We we can we can try. Yeah. Exactly.

Speaker 1:

But, no, that's true. We can't sell DEF CON talks. It turns out they give those videos out for free. So, as folks probably know, we have these Requests for Discussion, RFDs. And, Laura, your go-to was to write an RFD about this new vulnerability and our level of exposure.

Speaker 2:

Yeah. And honestly, I was doing it half to organize my own thoughts as well, to figure things out and hopefully be able to give a summary, mostly to be able to point back to it later and say, okay, why exactly did we make this decision, and lay out all the trade-offs. And I think that was where I, you know, pointed things out.

Speaker 2:

Saying, okay, this is not affected; this is affected: laying it out for everyone, and then figuring out what exactly we'd likely be able to do about this.

Speaker 1:

And I think this RFD is great because it is so helpful, and it sounds like it was helpful for you as well, to lay out: okay, what are the actual paths here? Certainly one of the paths, and one that we definitely got asked about, and I don't know if people asked you about this directly as well: hey, how many vulnerabilities are you gonna have to find in this thing before you start using something else?

Speaker 1:

Like, are you in a bad relationship with this part? That was, yeah.

Speaker 2:

We had seriously considered it, trying to figure out what was out there. But again, given the set of features we were looking for in a part, it was actually very hard to find something we wanted that was out there. And so we sort of went around in circles: there wasn't anything out there that was actually available that we could purchase, because that's the key. I think there were a couple of things that sounded promising, but even when we were starting to search for the part, back when I joined in early 2020, before we started getting into the major chip shortages, certain things were still unobtainium.

Speaker 2:

And now, you know, even more of these things are even more unobtainium. So it was sort of a careful calculation about what exactly we're supposed to do at this point: these things just aren't out there, and we have to ship something.

Speaker 3:

Well, and even then, what would we look for? A part that had not had vulnerabilities found in it yet? Right? Because when we were evaluating the LPC55, these vulnerabilities were unknown. It hadn't been Laura Abbott'd yet.

Speaker 1:

Exactly. This is your post-outbreak Jack in the Box. This is the after-the-E.-coli-outbreak-in-California Jack in the Box. So that's the

Speaker 3:

This is something I thought in my twenties. True. I would not eat a Jack in the Box.

Speaker 1:

That is an unverifiable claim. I don't think that that's right at all. Yeah. Talk about challenging creativity. You're now challenging me to put you in a situation where you will not even eat Jack in the Box.

Speaker 1:

So good luck with that. Because you know what? Jack in the Box is actually pretty good. That's what I gotta say. Jack in the Box: not that bad.

Speaker 3:

So, Laura. But, when building a product around this part, where this is a key component, it's not that any one vulnerability is necessarily debilitating, or precludes it, or makes the whole product insecure.

Speaker 2:

Right. And I think this again goes back to our threat model, trying to figure out what exactly it means to mitigate this. Think about when we go back to the ROM patcher: that was a pretty bad one. But it also turned out, when we did the analysis, that we could narrow down under what circumstances we would actually be vulnerable to it. And we were able to sit down and discuss it and decide: okay, if we take these precautions and set things up like this, we feel we've isolated it enough to avoid any potential issue.

Speaker 2:

And I think we did the same thing with this one as well, especially in terms of evaluating the update mechanism, because this issue was limited to using the update mechanism. So the safest thing, if you can't get a fixed part, is just to not use that update mechanism.

Speaker 1:

Yeah. And the stakes are higher for us in terms of timing. I mean, Laura found this, and the timing was on the one hand great: she found it after we'd done our first EVT rev, but while we were working on our second EVT rev.

Speaker 1:

And it's like, if we're not gonna use the LPC55, we need to make that decision tonight. I mean, there was urgency to it. And, Laura, I think the exact line you had in the RFD was that the LPC55S69 is the winner in a sea of mediocre candidates. And I think it's important for people to realize that, to be fair to the domain, this is a hard, hard, hard problem. Secure silicon is really, really, really hard.

Speaker 1:

And there are so many vectors, and you're gonna put this thing in someone's physical hands and then expect it to keep secrets. And that is super hard, I think.

Speaker 2:

It really is. And in terms of the set of features we actually wanted to do this, it's somewhat narrow. I'm not gonna say it's super narrowly scoped, but, you know, at Oxide we're kind of picky about what's out there, especially in terms of looking at what exactly we wanted to do.

Speaker 2:

I mean, I'm not sure if people saw, I'm blanking on his name, the person who broke the hardware crypto wallet. That's an example of the kind of thing we try to evaluate in terms of what exactly we're trying to do. So

Speaker 1:

Yeah. And the other thing, because, Adam, you were talking about vulnerabilities not yet found in other parts: I think for us to contemplate something else, there has to be something very tangible that we're getting, not just, oh, by the way, we don't think there are vulnerabilities in this other part, because I think NXP has taught us never to trust again. What would get our attention, I feel, and this is not for our first product, because we're kinda locked and loaded, but, Laura, I'd love your take on this: I feel like a truly open, top-to-bottom, open boot ROM product would be interesting.

Speaker 1:

I mean, it would need to be all open, though.

Speaker 2:

Something open would absolutely be a great start, at least to have a good way for us to review the source code and see exactly what's in it. And, I mean, I think it would also be interesting to look at whether we could have a smaller footprint for the ROM. It was actually Cliff who was unhappy, even when we were first selecting the chip, that it had this boot ROM, just because I think he preferred chips that just boot up and, you know, run exactly your own code, which I can understand. Especially now, we see exactly why Cliff had that opinion.

Speaker 1:

Totally. Okay. So expand on that. Actually, I haven't gotten this particular rant from Cliff. So his view is that the boot ROM was completely unnecessary, effectively?

Speaker 2:

I don't wanna put words in Cliff's mouth here. But, I mean, I think I've heard him say he generally preferred, especially with earlier-generation chips, that when you boot up, it would literally just boot to your own code, and you'd be able to do things that way. But when we talk about things like a secure processor, it turns out you sort of end up with a bootstrapping problem: okay.

Speaker 2:

If anyone can get in and program code, because that's what you need to do if you don't have a boot ROM, then that means anybody can get in: you, or, you know, your nefarious hacker. So

Speaker 1:

It's also a real pain, because you wanna be able to change the software on this thing. You don't wanna just ship a raw mask ROM; we actually wanna ship software to this, and you wanna change the software. And that's the origin of a lot of this challenge.

Speaker 1:

I mean, that was the origin of both the flash ROM patcher issue and the SB2 issue. Ultimately, it boils down to this: upgrading the software securely, or fixing it securely, is really, really challenging. Yes. So, okay. So now we are evaluating.

Speaker 1:

It's late December. You, I think, are are wondering, if I recall, kind of out loud, like, oh, wow. This is like I guess this is like a winter solstice activity for me. This is what I do for the solstice.

Speaker 2:

I forget if it was you or I who noted that. But, yeah, this ended up coming, like, right before Christmas. I will admit, because there was a little bit of Oxide downtime, I decided to take advantage of that to play around with this a little bit more. So

Speaker 1:

Right. Okay. So it's that time of the year; we celebrate the solstice by finding a vulnerability. You disclosed this to NXP, and it's probably worth saying that NXP's reaction this time is different.

Speaker 2:

Yes. I think NXP was very receptive to this, especially because we were able to show them a proof of concept. And I would probably say this entire thing was a lesson for Oxide as well in how to do vulnerability disclosure.

Speaker 2:

I mean, I think we try to be good citizens in terms of how we communicate, but also making sure you have a clear proof of concept and explain things as clearly as possible. I do think, you know, NXP is full of engineers who are working very hard. But if I think about it: if someone else were to find a vulnerability in Oxide code, I would appreciate having a well-written write-up showing exactly where things are. So

Speaker 1:

We tried to be as complete as possible. You had a very thorough write-up; it was very hard to argue with, too. And, to their credit, they didn't. I mean, they accepted that this is a vulnerability.

Speaker 1:

Because I think we also were feeling we had negotiated on timeline with the first vulnerability. And I feel, Laura, I seem to recall us thinking, like, we're this time, we really are not gonna negotiate on timeline. It's really important to us. Yeah.

Speaker 2:

There were some things that we reflected on based on our previous time. But fortunately, I think we didn't end up having to go there this time. They were very receptive in terms of getting it out there and keeping us up to date and everything like that. And I'd like to believe we were helpful in trying to guide them to ultimately do the right thing. Oh, sorry?

Speaker 1:

Did did Laura not break up for you, Adam? No. No. I think this is just the first base is no.

Speaker 3:

It's okay. Hang on, Brian. That's right. God. Stay together.

Speaker 2:

It's okay, Brian.

Speaker 1:

I it's it's not okay at all. But I'm so sorry, Laura. I missed what you said.

Speaker 2:

I mean, I'm just pointing out that things went much better with NXP this time. And, I mean, I'm very glad that things did go well, because I wanna see them continuing to be more proactive about security. And I'd like to see all companies out there do that: getting these disclosures out there and making sure they're protecting not just their customers, but also everybody out there who uses their parts.

Speaker 1:

And I know that this is a really good point, because, hopefully, we have never come across as just kind of beating up on NXP, because we actually want all these companies to do better. We want them to do better not just by us, but by their customers, by the industry; we want them to improve. So I would like to believe that our previous experience with NXP made them better this time. I mean, they certainly took us more seriously, or at least they didn't challenge the severity of this. And I thought they also did a better job of not minimizing it, getting ahead of it, and disclosing it to their own customers.

Speaker 1:

Because on this one, like, you got some customers that are potentially very, very exposed.

Speaker 2:

Yeah. And it sounded like they did a much better job of getting this out there. So that was definitely something we were very happy to see.

Speaker 1:

Very happy to see you.

Speaker 3:

Now, Brian, you know, imagining the perspective of our future customers: at some point on our website, we're gonna have some description of our root of trust and our security posture. And presumably we're gonna embrace these vulnerabilities and describe the attack vectors and how they're not applicable, and continue in that transparency.

Speaker 1:

Absolutely. I mean, in terms of, like, a vulnerability in oxide number, for example.

Speaker 3:

No. What I mean is, like, this NXP part, which is part of the Oxide rack, has these vulnerabilities that appear on our blog. So we're gonna tell people what the implications are, or are not, for them.

Speaker 1:

That's right. Yeah. I mean, absolutely. And I think all of us at the company have been on the receiving end of vendors not always being transparent, not always being forthright. And, you know, we wanna be the example we wanna see in the world.

Speaker 1:

So, even when we have issues, and we've seen this as recently as Okta: boy, companies can turn a small issue into a much more substantial issue of trust when they decide to minimize it. We wanna explain to you what your exposure is, or what we know the exposure to be, and then it's on all of us together to figure out what the actual impact of that is. I think it's really dangerous to say: oh, there's this problem, but you don't need to worry about it. It's like, why don't you tell me more information?

Speaker 3:

Especially when we've been so explicit about the

Speaker 1:

problem. Right. Right. Well, because I think that, honestly, the most important thing here in many regards, in terms of delivering a trustworthy system for the long term, is the trust that we have in one another, that companies have in us, that we have in our partners. And you do not wanna sacrifice that trust because of a single defect or vulnerability.

Speaker 2:

Yeah.

Speaker 1:

Absolutely. So, Laura, you wrote this terrific blog entry, which, came out, oh, like, a week and a half ago.

Speaker 2:

Was it really only week and a half ago?

Speaker 1:

I think that's right. Yeah. It was March 23rd. So, yeah, I guess it's coming up on two weeks ago, but yeah.

Speaker 1:

I mean, it was not that long ago. So what has happened? Well, actually, one thing that was interesting is that for the first vulnerability, you wrote a terrific blog entry, and it was technically very dense and, I think, a little underappreciated, because my view is, like: hey, Hacker News, this is true hacker news. News for hackers.

Speaker 1:

But that story, I think, was a little slower to take off. I feel with this one, it really took off, and people saw the significance of it right away. Is that a fair characterization, Laura?

Speaker 2:

I don't pay attention to Hacker News. I have

Speaker 3:

Well done.

Speaker 1:

I I know. I'm so sorry. I feel so dirty now. Yeah. No.

Speaker 1:

You that if you I know.

Speaker 2:

Honestly, anyone out there: if you wanna snipe Brian, you know, I can give you an itemized list of ways to snipe him. Right.

Speaker 1:

Long. I know. This is sport.

Speaker 2:

We talk vulnerabilities in software, but, yeah, we have our own vulnerabilities.

Speaker 3:

That's right. This is our greatest security problem

Speaker 1:

with Azure. Look. I'm all attack surface. Okay? I'm sorry.

Speaker 1:

It's true. God, it's true. But what's important to me is that I think this is something we should be talking about as an industry. It is incredibly technically interesting, but it's also very important, and it hits on these other, bigger issues around how we carry ourselves as companies, and as consumers and makers of technology. So, yeah, I know I'm putting the best possible sheen on it, but I think it is important that Hacker News discuss this stuff.

Speaker 1:

And on this one, people really jumped at it, discussing it in all the right ways. Of course, there were people asking us, like, why are we still using this part? But by and large, I thought the tone of the discussion was exactly what you'd want: people appreciating the sophistication of finding this, but then especially calling on folks to do better and make this stuff transparent, so Laura doesn't have to use Ghidra to figure this stuff out.

Speaker 3:

One of the things I appreciated about that discussion was the fact that this ROM had been reviewed. Laura mentioned in the blog post that, like, alleged security experts had looked at the code, the actual code, not through Ghidra, and blessed it as okay. And there's this whole industry of, you know, patting each other on the back for these security reviews, where it seems like they're paid for the answer to be yes, but that may be overly cynical.

Speaker 2:

I think it is kind of overly cynical there.

Speaker 1:

Fair enough.

Speaker 2:

I do think, you know, security reviews are a good practice, but I think it's also a question of what exactly they're looking for and trying to find. And, as I mentioned in the blog post, no security review is guaranteed to find every type of issue. But if we think about transparency, I think it's worth asking: what exactly did your security review find? Because if you did a security review and the feedback is only, you know, check-plus, A-plus, that's a sign that maybe it wasn't actually covering

Speaker 1:

the right areas.

Speaker 3:

That's a great point, Laura, because it's like no CVEs: well, there might be two reasons for that. Either it's the most secure thing in the world, or you're lying. And same thing: if the report card comes back, you know, straight A's, no problems, never missed a day in class,

Speaker 3:

Like, that's probably not true.

Speaker 1:

Right. Somebody who's in the stands. Yeah.

Speaker 3:

And sure, you may not be willing to release the source code, but why not release the problems that you've already fixed? That seems like maybe a good first step in terms of the transparency we'd like to see.

Speaker 1:

Yeah. Absolutely. And I love that question: what did the review find? So, I know that some folks have requested to speak, so I wanna get to them, if you folks have questions for Laura or comments on other secure microcontrollers, because this is, again, a very interesting issue.

Speaker 1:

And so, Laura, we got this out there, let's say, a week and a half ago. What has been the reception, the non Hacker News reception? Have folks reached out to us with alternatives? Or

Speaker 2:

I haven't heard any, you know, super exciting alternatives. I mean, I think a lot of people thought it was very interesting, and I think there's also been a lot of, you know, lamenting about how putting something like this together is very hard in terms of making a processor, and especially trying to do security response is hard. So

Speaker 1:

It is really, really hard. It is really hard. It's it's very technically hard. It's it so it is a challenge, but I think we we can do better. And I would like to believe that with the transparency from these vendors, we can do better.

Speaker 1:

So, Ian, you, did you have a a comment or question?

Speaker 4:

Yeah. Just, thanks again to Laura for the write up and for the time today discussing the lead up to it, for both bugs; it's a very interesting discussion. As you were talking about the LPC55 part and the lack of documentation being a barrier to entry for implementing the things that you want to do with this part, I'm curious whether you're building your own internal and or personal documentation as you work your way through these questions about how to implement stuff on it. And if so, is that documentation something you're considering open sourcing for other people to be able to, you know, build upon the work you've done so far in understanding it?

Speaker 2:

I think we are working to get out some level of documentation, but more than anything, especially our own tooling to be able to do this, which I think is a good example of how you're actually supposed to be able to do things. And SoloKeys is another example for people who wanna follow along: they're making security keys also based on the LPC55, and they also have their own reference code. So I think getting more reference code out there is great. And I think also just being able to find the community to ask questions about how this works.

Speaker 2:

And I mean, that's half the reason I wanted to do the blog post: talking about how some of this actually works so that it's hopefully out there. But I think there are specific questions people have, and I think we can try and figure out if there's a good way to put that out there.

Speaker 1:

Yeah. CVEs as documentation. I love it. And then, Laura, just so folks know, where is our kind of reference code for the tooling and the use of the LPC55?

Speaker 2:

It's on the Oxide GitHub right now. A lot of it probably needs to be cleaned up and refactored for the long term. But, I mean, it's certainly out there and able to show how some of it works in terms of being able to build these images and other things like that. And I think, you know, we have another ways to go to make it what we will do for the final product. But

Speaker 1:

But I think, well, that highlights two things. One, I think it's important: sometimes people wanna wait until source code is perfect to open it, and that's really too late because it's not gonna be perfect. So I think it's great that the tooling that we've got there is all out there. And then the second thing that it highlights, which is actually important, and this is not true for all of our partners: the information that we have about the way the LPC55 works is not, in general, under NDA documentation.

Speaker 1:

Or, actually, I'll put an asterisk on that. I think most of the docs that we've got are the public docs, with maybe some exception to that. But broadly, the way the chip works is actually documented, which is not true for, say, the host CPUs, for your AMD and Intel CPUs. You don't have that complete public documentation.

Speaker 2:

Yeah. And, I mean, I think the documentation generally is available and you're able to use it. I mean, we're certainly not gonna be out there rehosting NXP's documentation, but I think describing exactly the format of images and, you know, showing exactly how to build things is fine. And I would also say, to NXP's credit, they've also gotten better. They have their own GitHub, which has their code to build images and other things, to give a better idea about how things work. And I think seeing more of that out there is great.

Speaker 2:

And I think, that's the that's the future we wanna see.

Speaker 1:

Absolutely. Steps in the right direction. And, Ian, that's a great question. And I think you're gonna see that for other parts, in particular AMD Milan, where we've taken a very, very different approach to the software out there. I do think that there'll be bits for which our source code will serve as that kind of missing documentation.

Speaker 1:

We've kind of figured out how this thing works. Todd, did you, do you have a question for for Laura?

Speaker 5:

Oh, nope. I just preemptively requested.

Speaker 1:

Just getting ahead of things. You just never know. Or maybe you wanna fight me over Jack in the Box, but I will definitely welcome all

Speaker 2:

the help.

Speaker 5:

No. It just it just takes time to get the mic, so I thought

Speaker 1:

There you go. That's it. That's it. Get ahead of things. And, Jason, did you have a specific question for Laura?

Speaker 5:

Well, I just just

Speaker 3:

when you made the comment about the, security by obscurity, it just reminded me of a statement I'd heard from a company that shall remain nameless, when they defined open source as permitting select partners,

Speaker 1:

being able to view

Speaker 3:

the source code to their product under NDA.

Speaker 1:

That's their definition of open source.

Speaker 5:

Yes. And

Speaker 1:

You're like, mhmm. Okay. And just when he said that,

Speaker 3:

it just reminded me of that, and so I thought I you know?

Speaker 1:

Well, so one thing that I actually do find very frustrating, that maybe I'd get Rick up here to offer some color on: you hear this excuse that, like, no. No. No. No.

Speaker 1:

We can't open source our source code because then we will get dinged for our common criteria evaluation. I would love to know how true that is because I feel I mean, Laura, am I making that up? We have heard that from folks before.

Speaker 2:

We have heard that. And I think, from some discussion with other people, it turns out, okay, saying this on a recorded call about something I don't fully understand is probably a bad idea. But my understanding is that there are a certain number of points you can assign, and I think there's some interpretation where if you have things open source, it doesn't get you full points because of the way things are scored, but that doesn't mean you can't meet the criteria.

Speaker 1:

Got it. So they get points off for being open source, which is a I mean, that is a a regulatory tragedy. I'm not sure how plausible that is to change. But That's

Speaker 5:

ridiculous.

Speaker 6:

Yeah. Jeez.

Speaker 1:

And because, unfortunately, it's like these vendors, as we know, their disposition is already to not make the stuff open. They need all the coaxing that they can get. So giving them any excuse to not make it open is really, really frustrating. And, Laura, we even offered to just look at the boot ROM, even under, we're not even, sorry.

Speaker 1:

We're not even asking for it to be open source. We would like it to be open source, obviously, but we're happy to look at it even in a clean room, effectively, and even that was no good.

Speaker 2:

Yeah. And I think that was rejected, and one of our colleagues, Ben Stoltz, actually mentioned that he had done exactly that in past roles. He had done, you know, NDA review of boot ROM source code, I think, sitting in a room somewhere. So it's certainly plausible.

Speaker 2:

But, again, it's, you know, you you have to be allowed to be able to do that.

Speaker 1:

That's right. And, yeah, you have to be allowed to do that. But certainly, this one in particular feels like it's gonna be rectifiable by looking at the source code. Do you actually want to see the source code to the boot ROM? Does this make you wanna look at the source code more?

Speaker 2:

Yes. I think I do actually wanna see it at this point just to try and, you know, cross check some other things to see if I actually got things right, from, you know, my own disassembly and trying to figure out what things are doing.

Speaker 1:

And it should be said that NXP knows Laura now. It's not just Oxide. NXP definitely, whenever they, you know, thank Laura for her disclosure. I don't know if Laura would go so far as to say they're afraid of you, but I think they might be afraid of you.

Speaker 3:

I mean, you think you're on their Christmas card list? I mean

Speaker 1:

I just think that, you know, they know. I do wonder if they're gonna be holding their breath around the solstice. Todd, yeah, you got your hand up. And then to Ben.

Speaker 5:

Oh, yeah. I guess I did have a question about just security reviews in general. As someone who runs a package manager project, how do you scale this stuff? Like, I mean, I can understand, you know, reviewing for a particular product. But if you're accepting contributions from you know, say you take 500 pull requests a month or something like that, and it's for, you know, upstream stuff.

Speaker 5:

Do you have any recommendations for how to secure that or or what what the good practices

Speaker 1:

are? That's a great question. Laura, you wanna take a swing at that one?

Speaker 2:

I think as far as scaling, sometimes it's just helpful to know your domain area to begin with, in terms of trying to figure out exactly what you're trying to accomplish. And I think it's also worth considering that if you found one kind of bug one time, there's a good chance you'll see other instances of it. As far as trying to, you know, scale things, I think it's making sure everybody is up to speed as much as possible, and also improving documentation. Honestly, I think I probably have a pretty weak answer here.

Speaker 2:

So I'll I'll defer to others if they'd like to, you know, give a more complete answer.

Speaker 1:

Yeah. I mean, it's not that I'm giving you a complete answer, because I think it's a very thorny question. I do feel, Todd, that, not to be on brand, but it's much harder to review code in memory unsafe languages and be as certain. Certainly, for me, the vulnerability that was found in DTrace, which was an integer overflow issue that led to a memory unsafety issue, and that was in lint clean code, I found to be very chilling. Like, this is code that I had reviewed.

Speaker 1:

I had written carefully. It had been reviewed carefully. It wasn't, like, it wasn't sloppy. And it was clean code, well commented, and that was very chilling to me about just how hard it is to review your way into a secure system. So I I think that the reviews are necessary, but definitely insufficient, and I do think you need to look at things like programming language.
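The class of bug being described can be sketched roughly like this. This is a hypothetical illustration, not the actual DTrace code: an unchecked 32-bit multiplication on untrusted sizes wraps around, so a later allocation ends up smaller than what gets written into it.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch of an overflow-to-memory-unsafety bug:
 * nrecords and recsize come from untrusted input. The product is
 * computed in 32-bit arithmetic, so it can wrap; e.g.
 * 0x10000 * 0x10000 wraps to 0, and a 0-byte allocation would
 * later be filled by a loop that trusts the original counts. */
size_t alloc_size_buggy(uint32_t nrecords, uint32_t recsize) {
    return (size_t)(nrecords * recsize);   /* wraps silently */
}

/* Checked version: widen to 64 bits and reject overflow. */
size_t alloc_size_checked(uint32_t nrecords, uint32_t recsize) {
    uint64_t total = (uint64_t)nrecords * (uint64_t)recsize;
    if (total > SIZE_MAX)
        return 0;                          /* 0 signals failure */
    return (size_t)total;
}
```

The point of the example is that the buggy version passes lint and reads cleanly; the wrap is invisible unless the reviewer does the width analysis by hand, which is exactly what memory safe languages take off the table.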

Speaker 1:

And then, of course, like, not all code touches the same part of the system. You know, the boot ROM in a secure microcontroller is very, very different than, you know, a Humility debugging command, for example, where we're not nearly as worried about that kind of stuff. So I think having that kind of understanding of the spectrum is important too.

Speaker 3:

I don't think I have any hopeful thoughts here other than to say this is, like, the problem of our era. Right? We're all including in our products any random piece of trash we find on GitHub, and the transitive closure of that. And so, like

Speaker 6:

Or or

Speaker 5:

in my product, in my package manager. Your company makes code too.

Speaker 3:

That's awesome. Yeah. Oh, well, pardon me. I mean, any random piece of excellence or any random piece of trash. Pardon me.

Speaker 3:

I just mean, we're pulling in, you know, who knows what into our products. And, like, how can we possibly

Speaker 1:

It is, Adam. I kinda feel like, you know, people criticize others for, like, oh, you're just typing curl and piping it into bash. It's like, no. No. We're all typing curl and piping it into bash.

Speaker 1:

Like, you may not think you're doing that, but we are all downloading software off the Internet and running it, every single one of us.

Speaker 5:

Well, yeah, I never understood that, because when you download a binary, you're basically typing curl and piping it directly to your processor. So, like, what's the difference?

Speaker 1:

Yeah. Exactly. Ben, you you had your hand up.

Speaker 6:

Yeah. I think the one thing that impresses me is that this kind of work is in scope for us in developing our product. You know, we're trying to serve customers who are between a rock and a hard place. They spend a lot of money on cloud services, and those companies have very deep pockets, and they're working on this problem because they need their customers' trust. They need to be able to run arbitrary workloads and have them be trustworthy. Versus, you know, customers who take this task on themselves and are bolting together some stack of switches and operating systems and virtualization layers and hardware from suppliers.

Speaker 6:

And a lot of this type of work is somebody else's problem. You know, you're so bogged down trying to integrate these pieces and accommodate their quirks, where, yes, they implement a standard, but everyone does it in their own way, which is an unfortunate reality of the industry. And, you know, we don't have a BMC that has to implement CD ROM drive emulation and, you know, remote USB and all that. We're making a fit for purpose appliance that will give you a trustworthy platform so you can run arbitrary workloads. And part of the value is being able to trust that stack, which puts this work directly in scope.

Speaker 1:

Yeah. That's a very good point. Because, I mean, we certainly talked to so many folks for whom the people focused on the security of the system feel beleaguered, because it's an afterthought, or it's not in scope, or it's something they know is important.

Speaker 1:

But

Speaker 6:

there's no fire or energy left to address them. You're just trying to get your network stack to talk to something else. You know? Yeah. But that's part of what we're doing.

Speaker 1:

Yeah. You're right. I mean, it's a luxury, in a certain regard, to be so focused on it. So I do feel, Laura, that part of the reason this is hard is not just that it's a very hard technical problem around a secure MCU. Like, economically, people also want these MCUs to be cheap, especially if you've got, like, you know, a Bitcoin wallet or something.

Speaker 1:

It's like, oh, by the way, it needs to be, like, 3ยข. It's like, okay. Well, which is more important? Because it's it's it's, hard when it's not in scope for these folks.

Speaker 2:

Yeah. I think that's a very good point about, you know, trying to actually define a product, versus trying to sell a general purpose microcontroller, because the LPC55 has security features, but it's still ultimately general purpose. And, I mean, when we think about what it looks like long term, we have, you know, somewhat joked about, if we had our, you know, $1,000,000,000, what we would design for our own ASIC, what we would exactly wanna do for our own, you know, secure processor.

Speaker 1:

And it does feel like, when Oxide does our own silicon, this is gonna be some of the first, in part because it is in scope, as you say. Matt, your hand is up.

Speaker 7:

Yeah. So part of me is just wondering how big this boot ROM is, because, I mean, it seems to me like the proper size for a boot ROM ought to be about, you know, 512 bytes and not much more than that. But, clearly, there's enough stuff in there that you're able to get all sorts of interesting bugs and behaviors. So I guess, just generically, what is the ROM doing?

Speaker 7:

Oh, here. Start with the first one.

Speaker 2:

So the ROM itself is definitely more than a couple of kilobytes. I mean, it contains an entire stack for parsing X509 certificates. So that should give you some idea about the size of this ROM in terms of its full feature set. But again, this ends up being a trade off about what exactly it tries to do, because I think part of the explanation NXP gave for some of this is that it turns out customers were actually asking for a lot of these things to be in ROM, especially things like being able to write to the flash. Because the flip side of having things in ROM is that you don't have to take up your own precious flash space for things like a flash loader or update mechanism.

Speaker 2:

So that gets stuck in ROM, because flash is often a limiting factor for some of these things. So I think there are key reasons why it's there, but then you have the trade off of making sure it doesn't, you know, start to grow too much. And, again, you end up with all sorts of unknown things.

Speaker 1:

Yeah. That's a real tension. You're like, oh, well, the boot ROM should have nothing in it. It's like, okay, well, now, and, Laura, you mentioned this story: you do end up with this bootstrapping problem, where secure upgrade becomes really difficult without having some root of

Speaker 3:

the root of trust.

Speaker 2:

Yeah. And I think we're trying to get that in there and making sure things are out there. And, again, you know, I started out by talking about how I was bricking all these chips, and I think part of the reason for having a boot ROM is to ensure that there's always something that can come up and have some way to recover the chip. So if you flash something on there that leaves it in a very wedged state, you can still manage to get in there, even if, you know, your code is very bad.

Speaker 1:

Yeah. And so, Matt, did you have a do you have a follow-up? Sorry. Go ahead.

Speaker 7:

Well, yeah. I mean, the follow-up, if it turned out to be small, was, do you think there's even any source code, or is it just some guy writing hand tuned assembly? Clearly, that's not the case with x509 in there.

Speaker 2:

Absolutely. I think at one point, you could have argued that some boot ROMs would have just been hand tuned assembly, but this absolutely looks like it is C code that is compiled in there. I mean, otherwise, someone is out there writing memcpy by hand. That's certainly possible.

Speaker 2:

But, Yeah.

Speaker 7:

And the counterpoint on the large scale side is, if you have written a software project of that magnitude with no intention of ever releasing it, do you think it's possible that they're at a point, legally, where they cannot release the code, no matter how much they want to?

Speaker 1:

Definitely possible. I don't know about the boot ROM, but you see this, I mean, I don't know, Laura, maybe with the boot ROM we think this as well. But definitely, as soon as you get off of MCUs and into real host CPUs, you know, where you've got hundreds of watts, there's plenty of low level software that is being provided by other IP vendors. And, yeah, you can't always open it.

Speaker 1:

So that is a very real problem, unfortunately.

Speaker 2:

Yeah. I mean, I'd say that's a possibility, but I think there are certainly a lot of motivations for why a company might not choose to open source something. And I think this goes to a lot of general open source philosophy, not necessarily specific to the boot ROM. I mean, maybe they're concerned about what's actually in the code, you know, maybe legally, although I wouldn't wanna speculate there. But sometimes they just don't wanna risk it, because, I mean, if you've ever talked to a lawyer, their job is to, you know, try to minimize your risk.

Speaker 2:

And it turns out that, looking at it from a certain perspective, not opening the code possibly reduces risk in that manner. I mean, I'm certainly not a lawyer. But you often find that taking the conservative approach is what's recommended, simply because it reduces the risk. Although, as we've mentioned before, reducing that risk can come with other kinds of trade offs. So

Speaker 1:

That's right. Yeah. And there are certainly so many people that, you know, kinda claim to represent legal opinions. Right? Is this actually from a lawyer, or are we playing pretend lawyer right now?

Speaker 1:

And, like, look. We've all done it. I've played pretend lawyer. But, Jason, you've got your hand up. And maybe then we'll wrap it up.

Speaker 3:

Well, I was just gonna ask then since you mentioned the X509 parsing, is that gonna be the next area you look for vulnerabilities?

Speaker 2:

I have actually looked at that a little bit, and I think that's a pretty big area to try and figure out. And I mean, oh, god, I wanted to try and write an x509 fuzzer, but I have neither the time nor the expertise to be able to do that. So
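The core mutation step of a fuzzer like the one being described can be sketched in a few lines. This is a hypothetical illustration, not anything Laura or NXP wrote: a seed input, say a DER-encoded certificate from a corpus, is copied and a few random bits are flipped. A real fuzzer would then feed each mutant to the target X509 parser and watch for crashes; that parser call is out of scope here.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Copy a seed input and flip `nflips` randomly chosen bits.
 * `rng_seed` makes the mutation reproducible, which matters when
 * you want to replay a crashing input. The mutated buffer would
 * then be handed to the parser under test. */
void mutate(const uint8_t *seed, size_t len, uint8_t *out,
            unsigned nflips, unsigned rng_seed)
{
    memcpy(out, seed, len);
    srand(rng_seed);
    for (unsigned i = 0; i < nflips; i++) {
        size_t byte = (size_t)rand() % len;      /* pick a byte */
        out[byte] ^= (uint8_t)(1u << (rand() % 8)); /* flip one bit */
    }
}
```

Bit flipping is the crudest mutation strategy; real fuzzers also splice inputs, mutate DER length fields specifically, and use coverage feedback, but even this naive loop over a good corpus can shake out parser bugs.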

Speaker 1:

I hope NXP listened all the way to the end of the space. There's gonna be someone in there replaying the end. It's

Speaker 3:

like, that's right, Dave. She's coming for you next.

Speaker 1:

That's right.

Speaker 7:

I can only imagine the amount of shenanigans that you can get up to with x509 certificate parsing on every microcontroller in the world. Clearly, Laura's plot to take over the world is succeeding.

Speaker 1:

Oh, it's absolutely succeeding. Alright. So, Evan, we're gonna allow you to speak, and then I do wanna be respectful: Laura's on the East Coast, and it's getting late over there.

Speaker 1:

So, Evan, did you have a question for Laura?

Speaker 4:

If you wanna fuzz the X509 parser, there are a lot of good fuzzing corpora out there. And if you just shove every mutated X509 file at it, you might find something.

Speaker 1:

Evan, could you hold that thought until mid December, please?

Speaker 1:

The winter solstice is not gonna come this early this year.

Speaker 4:

Sorry, Bryan. The cat's out

Speaker 4:

of the bag.

Speaker 1:

There you go. And do you have another question as well? That's definitely a good suggestion for when we wanna discover yet another vulnerability. But, Laura, at some point, you're gonna run out of, like, blog titles.

Speaker 2:

I mean, I have, you know, a couple of rejected blog titles out there that I can certainly fall back on. But I would honestly prefer not to find any more vulnerabilities, because it's really difficult to try and ship a product when we keep finding vulnerabilities.

Speaker 1:

It is. And that's as good a note as any to end on. Laura, thank you very much. This is such impressive technical work. I mean, the previous vulnerability was remarkable.

Speaker 1:

I thought you did just incredible work on this. It was a lot of fun to be a part of. Of course, it's easy to say that now because we know we're not vulnerable. But just terrific work here, and, you know, over and over again through this process, you have been a model for how we want folks to carry themselves with respect to security disposition and vulnerability and responsible disclosure and everything else. So kudos to you.

Speaker 1:

Definitely inspiring to be your colleague, and thank you so much for joining us today to discuss it.

Speaker 2:

Yeah. Thanks for having me. And once again, thank you to Oxide for supporting me in being able to do this type of work, and, you know, both being able to take it seriously and also support the creativity and occasionally harebrained ideas we come up with. So

Speaker 1:

Absolutely. As Ben says, very much in scope. Alright. Thanks so much, everyone. Hey.

Speaker 1:

A quick plug for next week. We're gonna do an update: we're gonna have the hardware team back, doing tales from the bring up lab. We're gonna be talking about the rest of the bring up of our EVT Gimlet sled, and then we're also gonna talk about the bring up of Sidecar.

Speaker 1:

So, it should be a lot of fun. We wanna get the hardware team around the campfire before we forget some of these traumatic stories. So, look forward to seeing folks next week. Alright. Take care, everyone.

Speaker 1:

Thanks.

Speaker 3:

Thanks, everyone.
