A.I. Frameworks and Databases | Episode 37
Welcome to AI Security Ops, the weekly show where we cut through the hype and dig into how AI is being weaponized and how we can defend against it. This week, we're gonna talk about different databases and resources that you can use to find out about vulnerabilities in the artificial intelligence space. This is a rapidly evolving area, so please don't fault us if next week things are totally different. This episode is brought to you by Black Hills Information Security and Antisyphon Training. BHIS helps organizations find real-world security gaps before attackers do.
Bronwen Aker:We do this through penetration testing, adversarial emulation, and purple teaming, where red teams and blue teams get together and figure things out. Antisyphon delivers hands-on, practitioner-led training built around real-world attacks and real tools, so you can use what you learn immediately. To find out more, go to blackhillsinfosec.com or antisyphontraining.com. Alright. So this week, we've got all this interesting stuff about different vulnerability databases and reference sites.
Bronwen Aker:Anyone wanna start this off?
Brian Fehrman:Yeah. Sure. So.
Derek Banks:Go ahead, Brian.
Brian Fehrman:I'm gonna go for it. So, yeah. What we've got here is a couple of different frameworks and, as mentioned, a couple of different databases. And, just to mention up front, you know, although AI vulnerability tracking does exist today, it's still not quite as mature as the traditional CVE-based vulnerability tracking that we've been used to over the years. And we still haven't landed on, like, a single CVE-style tracking system, nor have we really landed on a single framework that we can apply.
Brian Fehrman:But with that said, there are some contenders that are starting to come to the forefront, with some of these being familiar names that you'll probably recognize. And the first one we'll kick off with is the MITRE ATLAS framework, which is brought to you by the same people who brought you the MITRE ATT&CK framework. ATLAS being Adversarial Threat Landscape for Artificial-Intelligence Systems. And it's basically what you would expect from, like, the ATT&CK framework, but geared more towards the AI space in particular.
Derek Banks:You know, the thing that kinda struck me: we're covering about four of them. So we have ATLAS. We have the AI Incident Database, the AI Vulnerability Database, and then the OWASP Top 10. And the thing that kind of struck me as we were putting some of this stuff together was that AI is still special. It's getting its own thing.
Derek Banks:But at some point, does it just become part of the regular vulnerabilities, or just part of what we consider to be normal? Because right now, it's unique and novel and special. So was virtualization when it first came out, or containerization, or the cloud, or anything. And did they get their own vulnerability databases? I don't recall that they did.
Derek Banks:And then there was a really serious n8n vulnerability last week that is now fixed, props to the folks at n8n, where, you know, there was essentially a remote code execution. Was that an AI vulnerability or just a standard CVE? Probably just a standard CVE. But anyway, to Brian's point, all this stuff is muddled together. But I think as cybersecurity folks, you should definitely care, because the world is changing and evolving, and staying on top of this stuff is kind of important.
Brian Fehrman:Yeah. Yeah. I agree. Honestly, on the point of having it separated out, I really think that, especially with, like, ATLAS, we can probably just absorb that into the traditional ATT&CK framework. I think a lot of this is just that, you know, AI had such a quick ramp-up in terms of hype cycle and adoption.
Brian Fehrman:And so I think immediately people were thinking, like, oh, hey, this is something special. We need all these separate vulnerabilities and tracking mechanisms for this. But I feel like as we're getting more and more into this and more comfortable in the space as a whole, we're seeing that it's not really a whole separate entity. It's tightly integrated with all the issues that we have seen before. And so, you know, you mentioned cloud stuff.
Brian Fehrman:I mean, that doesn't have its own MITRE framework. I mean, that is integrated. You go and you look in MITRE ATT&CK, and it's just integrated into the different sections that are already there, in terms of initial compromise and persistence and all the other facets that we're also used to. And honestly, I think that at some point, we might just see ATLAS being merged into the original ATT&CK framework. Honestly, I think that would make perfect sense.
Derek Banks:Yeah. For those that... oh, sorry. Go ahead, Bronwen.
Bronwen Aker:Well, I completely agree with you. And I think that this is actually part of a natural evolution. Like you said, we saw it with virtual spaces. We saw it with cloud. Now we're seeing it with AI.
Derek Banks:Oh, and blockchain. Don't forget blockchain.
Bronwen Aker:Oh, God. Well, it's not only that this is new and shiny and different; it's also that it becomes more mainstream and more universally implemented. I mean, I remember when cloud was new and novel and not the norm. And then it made sense to have these separate places to go to find out about how it works and what the vulnerabilities are. And I think that we're going to see that exact same evolution with the AI space.
Bronwen Aker:Mhmm.
Derek Banks:Well, that said, I do like MITRE. I mean, I like all four of these, actually. I'm not being critical of the data that they have there. I think this is very useful. And I kinda have a love-hate relationship with the kill chain methodology, because it's useful for framing how a threat actor thinks about getting into a system.
Derek Banks:However, it is kind of outdated at the moment when you think of how the Lockheed Martin kill chain works. It's talking specifically about compromising an internal resource. So that kind of flavored what MITRE ATT&CK then looked like, where you had reconnaissance and weaponization and exploitation all the way through to actions on objectives. And that's kind of how MITRE ATLAS is now organized as well. Which is all well and good, but I think that threat actors and attackers have kind of moved on a little bit from that traditional take on breaking into a computing system.
Derek Banks:But then, if you go and look, there's some real good information out there on how threat actors are actually using AI to accomplish their tasks.
Bronwen Aker:One of the other things that I like about seeing that there's more than one of these vulnerability databases and resources is that each one brings its own spin. So MITRE has its way of framing things. OWASP has its way of framing things, and each has value, especially because people work in different ways. You have different headspaces, different learning styles, and different perspectives. And I think that OWASP is good for pretty much anything web related.
Bronwen Aker:They're absolutely a leader in terms of that, not the only resource that you should use by any means, but they've declared themselves in that space and established a good reputation for good reason. Because so many of these AI tools are implemented in a web-based format using API calls and forms and chatbots and all of those things, we're seeing a lot of the same culprits. So I can see them being able to bring a lot to bear. Same thing with MITRE.
Bronwen Aker:This AVID database, I think, is completely new, however. But from what I've seen looking at their website and how they're writing things up, they're doing a good job. They're treating it responsibly, and that's always nice to see.
Derek Banks:Yeah. I think that each one of them has a slightly different intended use case. Right? And I think that MITRE is probably more for threat modeling, red teaming, and defense planning kind of stuff. And then the AI Incident Database is for case analysis.
Derek Banks:And risk justification. Like, "we really need to do this because..." and "give me some money," right, and policy kind of things. Whereas the AI Vulnerability Database is more for security assessments and standards, for pen testers and compliance teams. And then, like you said, OWASP is development and the secure development life cycle. Like, make it secure from the beginning, which, as we have seen in the real world, there's probably some good use case for that.
Brian Fehrman:Maybe just a little bit. So, you know, I wasn't super familiar with the AI Incident Database and the AI Vulnerability Database before this. So it sounds like the AI Incident Database is, I mean, literally what the name says: tracking incidents, like actual incidents that occur, not necessarily new vulnerabilities, but it's being crowdsourced out that, like, hey, this incident occurred at this company, just as a heads-up. Versus the AI Vulnerability Database, which is aiming more towards that traditional CVE-style tracking, where it's a specific vulnerability that could have been exploited across multiple incidents that we have seen. Is that kind of your guys' takeaway on that one too?
Bronwen Aker:Yes. Did you take a look at their news digest? The interface is very interesting. Do we have a way to screen share on this?
Derek Banks:Yeah. It'll let you screen share. I think we should off-road and do the screen share. Just make sure you close your email first.
Bronwen Aker:Definitely. Yeah. Alright.
Derek Banks:But, yes, you can screen share as far as I'm aware. But, yeah, as someone who used to do incident response for a living, I apparently like reading about other people's misfortunes. And I do like the AI Incident Database. The first one I saw on the incident roundup for the end of last year, I guess, actually, Q3 of last year, was trending deepfakes and the monetization of attention. And I think that in 2026, the second biggest AI vulnerability that we're gonna see (and tune in next week for what we think might be the first) is gonna be the use of deepfakes, pretending to be the CEO or something like that.
Derek Banks:And in fact, our continuous pen test group at Black Hills had a customer ask them to create a deepfake of their CEO as, like, a training kind of thing. And, you know, I was only loosely involved, but it went pretty well in my opinion. But it was a scary time.
Bronwen Aker:The best of times, the worst of times.
Derek Banks:It's always like that, I think.
Bronwen Aker:So I did share the AI incident database a little bit and clicked on a couple of things. It's an interesting site and I like how Grok is at the top of all of the current news.
Brian Fehrman:Oh, yeah.
Derek Banks:Haven't really been paying attention.
Bronwen Aker:Reasons as usual.
Derek Banks:Did they have a vulnerability? I really haven't been paying attention.
Brian Fehrman:Oh, go ahead, Bronwen.
Bronwen Aker:No. By all means.
Brian Fehrman:Yeah. I just caught a snippet of it on one of the news updates yesterday. It sounds like people are using it for certain things that other people aren't very happy about, we'll say. They're not happy about the guardrails. It's related to...
Derek Banks:Gotcha.
Brian Fehrman:It's related to deepfakes, but, let's say, for more personal use.
Bronwen Aker:It has to do with nonconsensual alterations of images
Derek Banks:So people posting, like, an image of themselves, and then someone taking that image with Grok, like, inline in the chat app and doing... yeah, inappropriate things. Well, you know, it's kind of funny. Like, I use Instagram and X a little bit.
Derek Banks:Right? And, you know, on Instagram, certainly, the algorithm knows that I'm a middle-aged male, so it tries to sell me things with scantily clad women. But it never goes past that. I don't ever see anything not safe for work on Instagram. That is not the case on X.
Derek Banks:Definitely. But my algorithm on X is all AI and machine learning and information security. And I don't really go to the algorithm as much as the For You one. But definitely, Instagram is more like music and jujitsu and stuff like that. And I try and actually keep them separate to see how the algorithms behave.
Derek Banks:But that's kind of hard. It seems like it wouldn't be too terribly difficult for them to put guardrails on the inline Grok, though. You know, the "Grok, do this" kind of thing.
Bronwen Aker:Yeah. But
Brian Fehrman:Yeah. And that's what the argument and outrage is kind of about. And Elon has basically made it into, like, a free-speech battle of, basically, "you can't tell me what to do."
Derek Banks:There are some, well, countries now that are trying. I did catch earlier this week that, I guess, they were trying to get X to... are they suing him or something? Or suing X, I guess, for something along...
Bronwen Aker:I think one country actually blocked X completely because
Derek Banks:Oh, yeah. Okay. Or they're threatening to do it or something. I applaud him for the free speech thing. I mean, I guess to a certain extent, I think there should be a forum where people can say what they want.
Derek Banks:Now, I don't know that that extends to taking a picture of me and doing inappropriate things. That seems to kind of go down that road. And no one's gonna take a picture of me and do inappropriate things. That's just weird. But, I mean, it would be kinda fun to see what would happen if you put some kind of prompt injection in there, you know, like a little Bobby Tables kind of thing.
Derek Banks:But, I mean, at the end of the day, you don't have to use X. Right? You could use another platform or just not put the pictures there, I guess. But...
Brian Fehrman:Yeah. Well, I think part of it, and sorry for us to off-road and dive too deep, but I think it's not just, like, people posting pictures on there. I think it's people getting stuff from other sources and then feeding it into the algorithm to get what they want out of it. But that's a whole other debate, and I can see arguments from both sides. That's a whole other thing.
Derek Banks:other thing. To tie it back into what we were talking about, do you consider that a vulnerability? I mean, it kinda is. Right? Like, I can misuse your system for unintended consequences or unattended, like, you know, features, Can I actually remember and maybe it's on one of the incident databases of a story where API keys got stolen and people were using AI?
Derek Banks:It was Foundry, I think, to create some inappropriate content on someone's AI bill using, you know, unguardrailed models running in Foundry. Oh, man. What a weird world we live in now.
Bronwen Aker:Yeah. Oh, yeah. Well, and it's funny, because I remember all of the promise of we'd have lunar colonies and space stations and flying cars, and we don't have any of those things. Well, we sort of have a space station, but we definitely don't have most of that. But now we do have machines that we can talk to every bit as much as HAL, the HAL 9000 in 2001.
Derek Banks:And I still think that we do live in the future, because I can watch the AFC championship game riding the train back from DC and not really even miss a beat. And that's pretty cool. Right? That I'm able to with that kind of technology. And so I think people take it for granted.
Derek Banks:Like, my daughter was lamenting that she has to write an essay, and she only got an hour in class to, like, do the research, and that wasn't enough time. And I didn't say it, but I was thinking, you know, I used to have to go to the library to do the research. And now you can just do this, and the research shows up. So hush your mouth.
Bronwen Aker:Yeah. So the vulnerabilities are going to continue to expand. We know this, but still, a lot of the vulnerabilities we've seen so far, and that I expect to see in the future, are variations on the same usual suspects. So I applaud the fact that these separate AI vulnerability resources and databases exist, and I agree that eventually they'll be folded into the other ones. Although some of them may stay separate.
Bronwen Aker:Who knows?
Derek Banks:Alright. Well, should we wrap it up?
Brian Fehrman:Yeah. Let's do it.
Bronwen Aker:Alright. Thank you for joining us everybody, and we'll see you around. Keep on prompting!