A Conversation with Dr. Colin Shea-Blymyer  | Episode 29
E29

A Conversation with Dr. Colin Shea-Blymyer | Episode 29

Joff Thyer:

Hello, and welcome to another episode of AI security ops with my illustrious panel, Brian doctor Brian Furman, should I say, and Derek Banks. And today, we are super excited to have our guest, Colin Shay Bleimeyer. I hope I said that right, Colin.

Dr. Colin Shea-Blymyer:

You nailed it. First try. Excellent. Well done.

Brian Fehrman:

And doctor Colin as well. Doc Doctor McDonald's.

Joff Thyer:

We're outnumbered by doctors. I only have a lowly master's degree. Sorry.

Derek Banks:

Same here.

Joff Thyer:

Sorry. Not sorry.

Derek Banks:

Fifty fifty split.

Joff Thyer:

Yeah. So Colin's involved in working on AI system. How AI systems can be designed, secured, governed responsibility, you know, ensuring that we have alignment with societal and ethical needs. All the really interesting and difficult problems in this space that we're dealing within this human paradigm shift that we're all living through. So And we're gonna talk about that those things as initially, but we're also going to pivot into some cybersecurity concerns because we are a cybersecurity consulting company, and that's what we do.

Joff Thyer:

So I'm gonna kick it off with a opening question for you, Colin. And that is you've you know, according to to some of the stuff that we've read about you that you've traveled kind of a journey from technical computer science over into sort of AI and and related AI governance policy work. Tell us a little bit about what motivated that transition and how you got into this line of research.

Dr. Colin Shea-Blymyer:

Yeah. So I always knew I was interested in artificial intelligence since before I started college. As high schooler, I was really lucky to have computer science classes. Being taught at my high school, was taking those. I was also taking psychology, AP psychology.

Dr. Colin Shea-Blymyer:

I thought it was just like fascinating subject. Was like gosh what if there were like you could program things that like think and then you had to decide how well they thought at things and you know some mix of psychology and computer science. Wouldn't that be awesome? And then I read Isaac Asimov's I, Robot. And of course, the main character Susan Colvin, she's a robo psychologist and I was like, oh my god, that's it.

Dr. Colin Shea-Blymyer:

That's that's what I wanna be when I grow up. You know, and so high school I'm I'm like I'm I'm learning everything I can, everything that a high schooler in rural Virginia can learn about artificial intelligence. I to college, I study computer science, I'm doing a master's trying to get some real machine learning experience, research experience under my belt. And for a summer internship in 2019, I end up at MITRE, which I'm sure most of you are familiar with. And we are this is 2019 so this is the first Trump administration.

Dr. Colin Shea-Blymyer:

They've just released an executive order about artificial intelligence. MITRE is supporting the National Institute of Standards and Technology, NIST, on developing, like looking into how we could develop standards around robustness in machine learning. We were really concerned about adversarial machine learning at the time And I'm reading a bunch of standards and regulations about AI. So I spent like a summer reading about standards and regulations in AI and they're all really like well intentioned but very like abstract and kind of fluffy. Right?

Dr. Colin Shea-Blymyer:

They're like AI should be responsible. It should be trustworthy. It should be robust. It's like oh yeah. Why didn't we just flip the trustworthy switch when we were training this thing?

Dr. Colin Shea-Blymyer:

That would have made it so much easier.

Derek Banks:

Oh, for that only thought.

Dr. Colin Shea-Blymyer:

Right? Just we just didn't have that kind of foresight as designers and developers of these systems. So I I like identified basically at the time like this really big gulf between what policy makers wanted AI to be and what developers were capable of getting AI to be. Yeah. And Yeah.

Dr. Colin Shea-Blymyer:

Trying to build from one direction from one direction building the the technology that could that could actually give us the the technology levers to pull. To to like align it with with what our democratic systems and values are. And on the other hand, building up like like this policy maker understanding of what levers they can pull to to impact AI from their end. So how do I build that bridge?

Joff Thyer:

Absolutely. Yeah. You I think you've addressed a few of our other questions in in that response, but I'm gonna I'm gonna plow on and talk about history for for just a minute. So those of us who have been heavily sleeves up researching data science, ML, machine learning, artificial intelligence, especially large language models, you know, we all very quickly came to realization and and, you know, doctor Fuhrman here has has a degree in a PhD in data science. Derek has a master's in in data science, and I've just been a a ad hoc mathematical guy researching this stuff.

Joff Thyer:

So but anyway, we all we all came to the realization as as we are studying the history is like, you know, AI is not new. Right? LLMs are are relatively new on the on the on the horizon, but artificial intelligence technology that this was a dream that people were having at Stanford in the mid fifties and a dream that you had too as you were growing up. But so how does that how does that history shape your research mindset? Does it does it impact you in different ways?

Joff Thyer:

How do you how do you sort of bring that into your into your modern context as you're looking at this stuff?

Dr. Colin Shea-Blymyer:

Well, I actually have, like the history of artificial intelligence as part of my first lecture when I'm teaching AI policy and governance at Georgetown. Pitts and McCulloch come up with the artificial neural network a couple of years before the first Turing complete electric computer is developed. Right? So so like from a modern conception, neural networks predate modern computers.

Joff Thyer:

Oh, no. That's fascinating. I didn't realize that time sequence was in place because, you know, that Van Neumann model had always dominated when it when it first hit the scene, and it kind of led us into that, from my perspective, into that AI winter period. But, yeah, that that's a fascinating insight.

Dr. Colin Shea-Blymyer:

And so neural networks, like, hung around. People were Rosenblatt built the first Perceptron for for the US Navy. In fact the the Smithsonian Institution still has one hanging around in their archives somewhere. They've got one of the original Perceptron like these big electric monsters you know that's all analog. But they were like impossible to tune by hand right?

Dr. Colin Shea-Blymyer:

You're sitting there like literally moving wires around by hand. It just wasn't feasible at the time. And so part what I teach my students right is that you can go back in time and you can look at Pitts and McCulloch, can look at Rosenblatt and these really early, the earliest pioneers in neural networks and then it just gets really hard to build these systems and they lose funding, academics lose interest, they get iced out of funding and research and it's not basically somebody figures out that you can use like a GPU to to make these systems like quickly and efficiently that they start coming back onto the scene. And so I like I always have a hard time sort of believing that anything is is stuck in the past. Right?

Dr. Colin Shea-Blymyer:

For for decades people thought AI thought neural networks were stuck in the past. Right? Right. And they were they were exploring expert systems. Cybernetics.

Dr. Colin Shea-Blymyer:

I remember that. Right?

Derek Banks:

Like the nineties. Yeah. Yeah.

Joff Thyer:

I mean,

Derek Banks:

I always thought it was neat. Just like Bayesian statistics has been around forever and since what, the late eighteen hundreds and it's the cornerstone of how we solve mathematical problems. Right? So it's very interesting how some of the math has been around for a while. We just didn't know how to use it.

Dr. Colin Shea-Blymyer:

Yeah. I mean even even you look at like George Bull, right, in Boolean algebra. It predates predates any computer approach to to Boolean algebra by like a century or two. Right? And so so none none of this is stuck in the past.

Dr. Colin Shea-Blymyer:

It's all a matter of it's all a matter of the environment like what is going to be easily implemented. Where where can you get where can you get scale and power out of some technique that that might otherwise be relegated to history books. And then and then like where where do the I mean it's all incentives. Right? It's it's is there can you can you find money to do this?

Dr. Colin Shea-Blymyer:

Can you find can you find like the the pipelines? Right? Like GPUs just happen to be very lucky to be useful for for AI, right? If if that wasn't the case, we would be we would still be working with like support vector machines and you know, then Some linear regression

Derek Banks:

here and there. Yeah.

Dr. Colin Shea-Blymyer:

Right. Exactly.

Joff Thyer:

It's kind of a I don't know that it's a happy accident, but it is it is a design. GPUs are designed that that just by the nature of what they are, you know, the the the processing of vector mathematics just fit the model, you know, if you like, of of of doing a neural network implementation. So it's a it's a fantastic coincidence that there was that realization, but it was almost like a durr moment too. It's like, oh, yeah. Of course.

Joff Thyer:

We can do it with these things. You know? So, yeah, I find that fascinating. What what about right now? What what research are you most directly involved with right now aside from your teaching efforts?

Dr. Colin Shea-Blymyer:

I I a lot of my focus right now is on AI red teaming. So one of the things, I don't know, for for the audience, AI Red Team hopefully you're aware of it already but since you're listening to this podcast.

Derek Banks:

I think you're speaking the language of our people.

Joff Thyer:

Good, good.

Dr. Colin Shea-Blymyer:

Well we have like you know, we're trying to come up with a nice formal definition for everyone that maybe not too formal. I'm a formal methods guy by training, like that's what my PhD was in so I don't wanna claim something being formal when it's not. We have a nice rigorous definition for for AI red teaming. And one of the things we're interested in is in how information from AI red teaming exercises can be shared. Right?

Dr. Colin Shea-Blymyer:

And and reused. Sort of building up a science of AI red teaming. So to put this in perspective, one of the things I would really like to see is, you know, maybe maybe you guys red team like a Lava model and then you red team, you know, a Quinn model. And I wanna know which one like, maybe let's let's assume you have the same like threat model for both and I wanna know like which one is going to be more secure. You might typically write down like, hey, this is like the big vulnerability or maybe we found two big vulnerabilities on this Lava model for your application need.

Dr. Colin Shea-Blymyer:

We found maybe this one for Quen. But that doesn't really tell me how much effort went into either of those. Right? What your methodology was? What are the what are the like the little what are the little hints that I as somebody who knows a lot about red teaming needs to have to be able to say, oh yeah, that's how I would have done that.

Dr. Colin Shea-Blymyer:

Right? I I would have done the same thing or that was actually a really clever way of doing this. And and I think that this report is really valid. Right? They had a really clever methodology.

Dr. Colin Shea-Blymyer:

They did a really good job. They spent the right amount of resources and effort to do this. I can trust what they found. I can trust that these two vulnerabilities were like the big bad ones and they didn't miss anything major.

Joff Thyer:

Yeah. I I think as you know, on on the red teaming topic, one of the things that that sometimes gets us a little derailed but but is still just as valuable is we're finding that the surrounding scaffolding around model deployment in modern corporation and commercial enterprises is actually you know, it it falls back to standard web application testing, but it also has become a problem because for some reason, we're going backwards in the developer mindset of how to build the scaffolding around these things, then it's usually when I say backwards, I I step backwards in the in the security mitigations that are that are built in. So so our tests have tended to fall back into finding vulnerabilities and scaffolding and less less as focused on the models. I don't know. Brian or Derek, do you have a thought on that?

Derek Banks:

I'm gonna let Brian Yeah.

Brian Fehrman:

I would Yeah. No. I I would agree. And I think that I think that part of that probably comes from the the AI boom and and the hype surrounding it. And these companies just wanting to get a product out as quickly as they possibly can.

Brian Fehrman:

They see that the demand is there right now and being first to market means a lot when it comes to generating interest and generating revenue. And so then these teams, you know, they might come up with an idea or management comes to them with an idea and they're like, hey, we want this now. And as we've seen before over and over throughout the years when things get rushed to market, security is often kind of an afterthought and I think that that's kind of what we're seeing again with this.

Derek Banks:

Yeah. I like I like to break it down into two categories. Right? I think you have AI safety and you have AI security. Right?

Derek Banks:

And I think that from, you know, what you were saying about, like, you have these two different models and we want, like, a baseline to, like, test, like, the models themselves, a lot of that ends up being safety oriented, like bias and can you get past internal guardrails and stuff like that. But then like Brian was saying, have web app stuff around it where we're finding that some of the stuff that is maybe we haven't seen recently in web app tests are being repeated as flaws as people are implementing stuff. And we have certainly examples from tests that we've done this year, but something else you said really made me think about the data sharing aspect. Historically, penetration testers and red teamers, folks in the information security community, do not share data with each other on the red team side. Now on the blue team side, completely different paradigm.

Derek Banks:

Right? Like, you have, you know, like, frameworks in place, you know, for sharing cyber threat intelligence, right, where you have different TLPs. Right? And so, you know, different colors. And and so I think that starting with some kind of, like, framework for folks to be able to share information, like, I just don't see that that exists.

Derek Banks:

And then one other thing that a colleague of ours mentioned, he works for Palo Alto now that he's working on. And it's more around writing the reports, but essentially having all of your command lines recorded and then have an AI summary of all everything you did at the command line goes into the data. But the data sharing aspect is huge, and that's a barrier that I'm not really entirely sure how to get past because people I mean, we're actually fighting at Black Hills just folks using frontier models because they don't trust OpenAI or Anthropic and really cool. But you trust Microsoft and Amazon. What's the difference?

Derek Banks:

Right? And so, you know, there's a lot of a lot of work to be done on on Red Team or sharing data and techniques, I think.

Dr. Colin Shea-Blymyer:

Yeah. I wanna, if I can, respond to to this, like, security versus safety because I think there's something there. I I would argue that there is there is safety on models. There's also security on models. So so like you can you can expand your scope right from from the model itself to the system around it.

Dr. Colin Shea-Blymyer:

And I think that there are really interesting security problems right on the interface of right like like oh gosh I just saw an article today about there are all these new like AI powered browsers and they're reading text off of images and you can put prompt injections in images and then it reads it off of the image. That's like Health security folks are like, ah. Right. And it's like yeah, that's not great. That's like obviously some sort of security vulnerability.

Dr. Colin Shea-Blymyer:

But but that is only a security vulnerability because the way that we treat AI models and the way that we treat the information that goes into them is so general. Right? We we would never be like, hey, like we're gonna throw SQL queries into our database. We're we're just gonna like scrape we're gonna scrape a website and throw whatever's in there into our in as a query into our

Derek Banks:

unless the user input dictate what we put in.

Dr. Colin Shea-Blymyer:

Right? Like that's insane. Like obviously we're gonna put some we're gonna we're gonna put some guardrails on that. Yeah. It's just that like the guardrails what we want AI systems to do are way more like flexible and wavy and Yeah.

Dr. Colin Shea-Blymyer:

Diffuse and foggy. And so that's where security and safety start to meet in my mind. And that's where that really exciting frontier is for me.

Derek Banks:

Yeah. There's definitely a Venn diagram there. But I think that our take on it or my take, our take is that there are things that are clearly safety and clearly security. But there's definitely overlap, and that seems to be where the implementation meets, right, where you have agency and, you know, user standardization, all these things that we need to be concerned about. And like Brian was saying, well, you can't be concerned about those things and get the that that that product out next month.

Derek Banks:

Right?

Joff Thyer:

Yeah. Well, we're we're gonna try to carefully avoid the agency topic because we'll be here for the rest of the day.

Derek Banks:

Well, that's true. Yeah. So

Joff Thyer:

but I do wanna ask a follow on with regard to your research focus. Have you started to look into as a part of that research effort any of the ML ops pipeline, the data scientists side of this as they're building models and how that impacts your your, rigorous definitions for for for different security risks?

Dr. Colin Shea-Blymyer:

We're mostly focused on the reporting sides of things right now. So it's less it's less on yeah it's it's less on on what the risks are. I I have done a little bit of work with like OWASP on on some of their like their risk scoring their their AI risk scoring stuff only a little bit. But but my my main research focus is on is on how AI gets reported. Right?

Dr. Colin Shea-Blymyer:

So so if that vulnerability is from a from like, you know, from a specific part of the part of the pipeline. Right? A part of the AI development pipeline. Like that should definitely be reported and I think that's part of let's see I'm trying to think here real quick. I would guess that that's part of how you're defining the scope of your red teaming exercise right?

Dr. Colin Shea-Blymyer:

If if the the pipeline is is game, then then attack away, you know, and and see if you can see if you can really screw things up that way.

Joff Thyer:

Yeah. I I get the sense it's been less of a focus of our community as well just Yeah. Mainly because it it would presume that there are a lot of organizations out there developing their own models, and I just don't think that's actually occurring because of the the the cost for for the large scale efforts as, you know, these vendors are bringing forth the solutions for them. You know?

Dr. Colin Shea-Blymyer:

That's a really good point. You know, I I have been thinking a lot actually about the relationship between mitigations that you can do after the blue team side. You've AR red teaming and you're like yeah well like you've got these vulnerabilities we'll slap this guardrail on, we'll tweak the system prompt in in this other way but like we're not gonna retrain the thing. Like we you know we don't have the resources to do that here. And in a similar way like as as a as a red teamer like it's really hard.

Dr. Colin Shea-Blymyer:

Some of these things are really hard to have the resources to do, like to attack, like the data sources. Right? Like, I mean, turns out that you only need like about 250 poisoned pieces of data and a and a training data in order to in order to like inject backdoors. This was some some research that came out of like The UK AI Security Institute but which is like pretty scary. But you know, like it's there there are different different part right like it's gonna be hard to I don't know like put a vulnerability in in PyTorch for for for the average red teamer.

Derek Banks:

I mean I would argue that the average red teamer would be afraid that they're going to commit a felony if they're going to do that. Right? Well, that's right. How supply chain risk side of Yeah. Like

Joff Thyer:

Yeah. The supply chain risk side of this exists. There's no doubt.

Derek Banks:

And it's no different than other things too. Like, I'm not gonna compromise Verizon so I can, you know you know, basically do a SIM swap on somebody, but a nation state threat actor might do something like that. So you can't ignore it, but it's it's hard to test. Right? And that that's probably why, practically speaking, we kinda default back to, well, like, we should really look around, like, the web at you know, like, the vulnerabilities that surround it too because for a company, for their stuff, that matters a lot more.

Derek Banks:

But, I mean, also, you know, some of the other things also matter to them. Like, if if if there was a backdoor implement you know, implemented by a nation state threat actor, well, like, you know, look at, was it SolarWinds? It could definitely make a bad day for you. Right? So it's still a problem that we collectively need to worry about.

Derek Banks:

But, you know, in in the average assessment for one of our customers, it's just not something that we can do anything about.

Joff Thyer:

Yeah. Yeah. Hey. So I'm gonna pivot a little bit here really quick. I I went on a little bit of a dive down into the legislative efforts that were going on in The United States a while back, and we actually talked about it on a couple episodes on the show.

Joff Thyer:

And you mentioned early on as we were just starting out today that that that you'd sort of done a similar thing. And and the the one thing I found in The United States is that there's this patchwork of attempted legislation legislation efforts that are very lightweight, that kind of fluffy, and they're and they typically are focused on sort of a specific concern like, you know, protecting from harmful bias or something like that or that's probably not a good example. But but on the other hand, there's not been any really strong federal efforts that I that I'm aware of. But, you know, the flip side of that is in in Europe, there has been some very forward thinking kinda heavy ever efforts on the regulatory side of of AI. So can you kinda compare and contrast the the differences between The EU and The US and kinda where we sit with the the regulatory efforts.

Dr. Colin Shea-Blymyer:

Yeah. Yeah. Gosh. I think one of the first one of the first like AI regulations that I was at all aware of so I did my PhD at Oregon State University, go bees. Part being in Oregon is being aware of what's happening in Portland even if you don't live there.

Dr. Colin Shea-Blymyer:

And in Portland they had just passed a law, this was years ago, they had passed a law saying that they wouldn't allow facial recognition for law enforcement. You know and this was like pretty early, this was like probably twenty nineteen twenty twenty when when they did this. And and like I don't like this makes sense for Portland. Like that's that that is this is like a pretty like representative like kind of effort for the the kind of person who lives in Portland. That makes sense for a thing that like the the average Portlandian would want.

Dr. Colin Shea-Blymyer:

New York City a couple years ago passed a law saying that all hiring algorithms, Any any algorithms involved in hiring would have to pass bias audits.

Joff Thyer:

Interesting. Yeah.

Dr. Colin Shea-Blymyer:

Yeah. And so there there is I mean, I I think it's like unclear what the teeth to some of these laws. Some of these like release, I would say like hyper local laws are. They they typically don't have like the enforcement mechanisms to to really especially like the New York City one to really like crack down or even investigate the any any violations of that law. But they're they're on the books.

Dr. Colin Shea-Blymyer:

Right? And and it it is an expression of of like, you know, the the demos, the people, and their their attitudes towards how AI should operate around them, which I think is I think that's pretty valuable. At the state level, I think the the biggest news has come out of California. Right? I I think the failure of the last AI bill and the the most recent passage of the the sort of updated version was the SB 53 I think.

Dr. Colin Shea-Blymyer:

This shows like again the willingness for larger populations to start to regulate. Now of course the California effect is I think what California is going for here. Right? They they want right? They they have sort of outsized leverage and influence on not only on like the the political landscape as a whole of The US since they're a huge state, but also on software development.

Dr. Colin Shea-Blymyer:

Right? Because most of the most of the model developers, I think everyone ex ex like out of the big out of the big like five, if you include Microsoft, like they're in Washington, Grok, x AI is in Texas, everybody else is in California I think. They have a lot of leverage over what happens when it comes to AI development. But I think that it is like, I don't know, I think it's one of key benefits of living in The United States is that we do have these laboratories of democracy scattered all over, know? And I think that we should really take advantage of that when we are in such an uncertain time as with AI.

Dr. Colin Shea-Blymyer:

I I think, like, I I'm not a fan of the idea that has been floated in The US that we should put a moratorium on state level regulations of AI. I mean, patchworks of AI regulations like aren't ideal. It's not it's not great but it's also the right of each state to be able to do that and I think to violate that right

Derek Banks:

is And say good luck telling a state that they can't

Dr. Colin Shea-Blymyer:

do something. Right exactly.

Joff Thyer:

Well and and I agree with you on this that it access kind of a development source with everybody's different perspective of of different regulatory approaches that might be able to be eventually amalgamated nationally. Right? Yeah. So it's not necessarily a bad thing.

Brian Fehrman:

Mhmm.

Joff Thyer:

The the tendency in The United States, though, has been to innovate first, regulate second. Whereas the EU has this tendency to regulate first and just see what happens. If you have thoughts on that that aspect, what's better, what's worse, pros and cons, that sort of thing.

Dr. Colin Shea-Blymyer:

Yeah. Yeah. I mean, I I think it actually I think it actually really depends on the kind of regulation that's happening. I think that in many ways GDPR, the Data Rights Act in the EU has been seen by its citizens as a success. They have a lot more control.

Dr. Colin Shea-Blymyer:

I mean a lot of it is tied up in dark patterns and which button you click to say which cookies you accept. Know at the of whole they get that choice and it's afforded to Europeans and they like that. One of the criticisms I hear about the EU AI act is that it is not rights based. It is not designed to protect any specific rights. It is to impose procedures on AI developers.

Dr. Colin Shea-Blymyer:

Right? And so the you know, this is like unfortunately, the nuance here is really important that that just just enforcing procedures when we don't know what rights they are that we are trying to protect can be really tough. But it might be more palatable than saying like, yeah, we don't want systems that are going to cause financial damage or bodily harm to people. And we can't just say make sure your AI never causes financial damage because when it does, what are we going to do? Right?

Dr. Colin Shea-Blymyer:

It's it's too late. It's too late at that point. It's hard to predict what might cause this violation that we're trying to avoid. And so that's why they turn to one of these procedure based regulations. Trying to understand those procedures in and of themselves I think is a huge area of research and concern in AI governance.

Dr. Colin Shea-Blymyer:

Right? And I don't think that we have the impetus. I don't think we have the incentives in The States to really dig into that very much outside of academia. Right now in the EU, there's going to be a debate between Brussels and the AI developers on what are the right procedures. Right?

Dr. Colin Shea-Blymyer:

The like the fire like the the starting gun has been fired. Now they're going to now they're going to innovate about what are the right procedures to avoid the kinds of harms that the populace is worried about.

Joff Thyer:

So doesn't that ultimately have the effect of of actually repressing innovation? I mean, feels like it would.

Dr. Colin Shea-Blymyer:

It it'll it'll repress a certain kind of innovation. It'll it'll repress, like, the the raw capacities of AI systems, raw capabilities, but it'll also inspire innovation around what kinds of testing and evaluations should be required of AI developers in order to guarantee that there is some base risk of harm or base non risk of harm. So it depends it really depends on what is a populace's risk tolerance to these things. One of the curious things I think that is a result of this is that The US is like at least by I think Pew came out with with a poll fairly recently that said The US is one of the most pessimistic about AI like in the world. I like it it we should be thinking about you know, if we if our risk tolerance is that low, we should really be thinking about what are the procedures that we care about?

Dr. Colin Shea-Blymyer:

What are the horrors we want to avoid? But that doesn't seem to be reflected in policy land in DC where I am speaking from right now.

Derek Banks:

Yeah. That's because everybody's seen the Terminator movies in The US and they're like, no no,

Joff Thyer:

we've seen

Derek Banks:

how this ends.

Dr. Colin Shea-Blymyer:

Yep. As soon as I moved to DC I was in the DMV, know, was the lady behind the counter is like, you know, she's doing the paperwork, she's asking me why I moved to DC. It's like, oh well, I'm doing research on AI and she looks up at me, looks over her glasses and is like, you know they make a horror movie about how AI goes wrong every year? I was like, yeah, well, I figured I'd get this under my belt then I do an internship at the Ouija board factory. Then I heard there's a dinosaur park that needs a few hands, so just trying to make the rounds.

Dr. Colin Shea-Blymyer:

Yeah.

Joff Thyer:

It's it's fascinating. I mean, you know, I think policymakers and citizenry alike, there's actually no true understanding of what's really happening, and that is that is a difficult gap that's gonna take a while. And you've you've got all that these different vendors that are fighting for market dominance and and to give their version of what they think it's gonna be to to to people that are gonna use their services. But it it it you know, I think we sit in a a bit of a privileged position because all of us are deeply involved, and we got sleeves up, and we have this knowledge that that frankly well, for me, sometimes keeps me up at night and scares the living crap

Derek Banks:

out of me. Like The pace of it is really what's terrifying. Right? Because I I hear people, you know, talking about even if you look at things that revolutionize the Internet, like cloud computing or virtualization or, you know, containerization, blockchain, whatever, it seemed like it took a little bit longer for the ramp up. I mean, you know, I I got my master's in 2020 and 2021 from from UVA, and we looked at GPT as kinda like a toy.

Derek Banks:

Like, it wasn't a focus. Right? I imagine now the curriculum is probably a lot different. And even with it just in the last year, just the, like, the the the how much better the frontier models are than they were a year ago, it's really astounding in my opinion. So where I mean, I'm not even sure where it'll be in a year or two or three.

Derek Banks:

So to to try and sit down and make legislation about it, wow. I mean, if I had to make a choice, it would be, like, to it would be not necessarily legislation as much as it would be kind of like a Manhattan style project to protect the big tech companies from, you know, model theft and things of that nature. But, hey, I'm not in charge. So

Joff Thyer:

Yeah. Well and that's a great segue to our next question, which is and I'm gonna try to get this and and kinda wrap it wrap it up towards the end here. But really, it's I'm gonna sort of combine this into one long question, and that's, you know, given today's environment, what are

Dr. Colin Shea-Blymyer:

the top

Joff Thyer:

emerging risks here that we really have that intersect AI and the cybersecurity? That's that's part one of the question. And then per the open model and the open source community, a sort of tangentially related question is, how do you feel about the concepts of open weight versus the idea of maybe becoming a lot more open and publishing the actual neural network architecture, perhaps the data sources used? So I'll let you start up with just sort of a generalized emerging risks and then we'll go go to that one.

Dr. Colin Shea-Blymyer:

Yeah. There there is a connection here. I think that a lot of this depends on your personal appetite. Right? And I think that this is the fight that we continue to see borne out both in the technical debates and in the policy debates around AI.

Dr. Colin Shea-Blymyer:

Right? What what is it what is it you're afraid of? Are you afraid of AI is, you know, we're plateauing and what matters is like biased fairness in AI systems? Or what we really need is like a few more years of really intense innovation and we'll have systems that are solving or curing cancer. Or is that going to go too far and we're going to have AI systems that like Mark Milley said are going to be positioned to be making military decisions.

Dr. Colin Shea-Blymyer:

You know, if if you're in the latter camp, you want to outcompete China and that is the number one thing that you care about in the world. And and like models being open, you don't want any of that. You you want you want to you want to batten down the hatches, you want to pour money in, you want to make sure that The US beats China and we come away with the most powerful like you know, super super computer, super intelligence that we can that we can dominate the world with. If you're in the middle camp, right, and this is where I think people like Marc Andreessen is and Horowitz, right? Like these are are people who are saying like, yeah, we just need to innovate, right?

Dr. Colin Shea-Blymyer:

Everything's gonna be great. We're gonna end up with these like engines of the economy and we want basically to have the new industrial revolution based off of this. In which case, you know, you're you're like, yeah, we'll we'll we'll regulate later. We'll regulate when we've got like the the golden goose. We'll figure out who gets the eggs then.

Dr. Colin Shea-Blymyer:

And then if you're in the first camp, right, you maybe you're thinking a little more like Europe and you're saying look like like the harms are real, people are getting hurt by these things now, we need to we need to get procedures and policies in place. Right? And in that case, like open models are great because what you're able to do there is do tons of really interesting research from open weights all the way to more open models on that spectrum of openness. Those are all going to give academic researchers the ability to play with things that you really can't do otherwise. There's a great report coming out by my colleague Kyle Miller on some of the research that you can do with open weights that you can't do with with closed models.

Dr. Colin Shea-Blymyer:

And and really like teasing apart this argument that like the that that you're actually you you are actually giving research a gift when you are when you're opening models.

Joff Thyer:

Yeah. You your some of your comments reminded me of a couple of books I've read. One of them was The Coming Wave, Mustafa Silemon, formerly of DeepMind. He was very, I would say, full of regrets. It was my impression from that book about what he had wrought, right, over time and was really in that that camp of we we we gotta protect everything.

Joff Thyer:

We gotta regulate everything. You know? So there there it is interesting. I think the the big the big beautiful advantage of The United States is we do tend to push innovation over these other safety concerns, but that has its costs as well. I'm gonna pitch it over to doctor Furman.

Joff Thyer:

Brian, if you would just ask the the final couple of questions and close us out, that would be fantastic.

Brian Fehrman:

Yeah. Sure. Let's do that. I'm just gonna go ahead skip down to the the bottom one here and then we can go ahead and close it out, which is what's one practical takeaway you'd like listeners to remember from this whole conversation?

Dr. Colin Shea-Blymyer:

Yeah. I think I think the best takeaway given this conversation the way it's gone, is to be really aware of what it is you want from AI. I think that it's really easy to get caught up in the hype believe all the marketing and all the doomsaying, right? To figure out what it is your expectations are and what your risk appetite is. Spend some actual time thinking about where you think your AI application is going to be used, where AI is going to develop in the next five, ten, fifteen years, and what that means for your organization, for your community, for your country, and act on those beliefs if they are certain enough.

Dr. Colin Shea-Blymyer:

Or maybe you have really high uncertainty like I do and you just need to keep digging. Right? Then you join the the masses of researchers who are still trying to figure all this out.

Derek Banks:

I was gonna say welcome. Right.

Joff Thyer:

Yeah. I I I do think you have to if you're in this space, you certainly have to be comfortable with nondeterminism and comfortable with uncertainty because that's

Derek Banks:

that's the nature of it. Recently. It's like my whole, like, day is like, I don't really know what's gonna happen.

Joff Thyer:

Yeah. According to Ray Kurzweil, the singularity is nearer, and he predicted that the merging of artificial intelligence and biotechnology is going to occur, and we'll end up with the superhuman advanced merging of machines and humans, if you like, somewhere in the twenty thirties. I wanna hear from everybody. What do you think about that that notion? Do you think we're really heading there?

Derek Banks:

I'll start. No. Not by 2030, or the twenty thirties. I I I think it's gonna be fast. I think the world's gonna be a different place very soon, but I think we're gonna have much more societal issues to tackle before we have, you know, basically neurological implants that everyone adopts that allows us to think at the speed of thought and become a collective hive mind or whatever the singularity really means.

Derek Banks:

Now is that, like, the ultimate, like, what ends up happening? Yeah. Probably. Right? I mean, I I think that eventually, but I don't think it'll be and I think that particular thing will will take a little bit more time.

Derek Banks:

If I had to pick, like, a a, you know, a sci fi series to think is probably closest to the truth right now, I wouldn't go with The Matrix actually. I'm gonna go with Dune where we actually, you know, get really tired of all this AI stuff when it tries to kill us and we wipe it out. And then we make humans better. It takes ten thousand years or something like that.

Joff Thyer:

But Alright.

Derek Banks:

But, yeah, I don't I don't know about the singularity, but I do think that there are some real interesting, like, concrete problems that we're gonna have a lot sooner than we all really want to admit. And I think that that has to do with, you know, knowledge workers being replaced by by AI type stuff. And I think that's coming first.

Joff Thyer:

Well, I think arguably we're already seeing that.

Derek Banks:

We already are seeing it. Absolutely. Yeah. Yeah.

Joff Thyer:

So to that silly notion of the merging of humans and machines, doctor Furman, what do

Derek Banks:

have to still answer.

Brian Fehrman:

Yeah. No. I agree. I I think that maybe at some point, but not not by the twenty thirties. I mean, yeah, we've seen huge advances in AI technology.

Brian Fehrman:

I mean, in particular with large language models within the last few years, last decade. However, you'd like to put that span out there. But in terms of fully integrating that into our normal, you know, being of what of what makes us, I still think we're we're probably a bit off from that. I mean, we are already seeing some of the technology that can kinda lay the groundwork for that, right, with like the Neuralink implants Yeah. Which they're having amazing results with, which I don't know if you guys have looked at any of that or But basically, people who are, you know, quadriplegic being able to play video games just by thinking about it and training about it or, know, training their mind to do it.

Brian Fehrman:

Outside of that, I've seen people play with like less intrusive technology with like brave way brave brainwave scanners where, you know, people will come up with all these different self imposed challenges to beat video games. And one of them that I've seen is a person who has one of these brainwave analyzer things and they train themselves to be able to play the video game with it. And so, you know, we've got we've got already kind of that meshing between, like, our thoughts and computer processing, but I do still think it's gonna be a little bit off before it's widely adopted and really integrated into everyday life.

Joff Thyer:

Oh, man. I was hoping to give you the answers that I can upload my brain right now. Colin, I'll let you have the final word.

Dr. Colin Shea-Blymyer:

Alright. Yeah. To to like really engage with the prompt here, I would say that that, you know, I a. I 2030 sounds like it's a little too soon by by my calendar. But I think that there is a like a scary fine line to to walk between having AI systems that are advanced and helpful enough for us to to like integrate with them and having AI systems that are advanced and helpful enough that they get misused to to do great harm to humanity or or like you know control risks and they go off by themselves and destroy humanity without humans helping them.

Dr. Colin Shea-Blymyer:

And and finding finding a way down that that narrow path as it were to to to get to a place where we are we are helping ourselves with our technology not hurting ourselves with our technology which has always been a difficult thing for mankind. Yes. Walking that line, I think that is that'll be the true test.

Joff Thyer:

Absolutely. That's great, great comments. Alright. Colin, thanks for sharing your insights and coming on the show today.

Dr. Colin Shea-Blymyer:

Thanks for having me.

Joff Thyer:

It's been a lot of fun.

Derek Banks:

Oh, yeah. I really appreciate it, man. It was awesome. Yeah.

Brian Fehrman:

Thank you so much.

Dr. Colin Shea-Blymyer:

Absolutely. Thanks for the invite.

Joff Thyer:

It's been great. So if my I'm I'm trying to invent another tagline right here, but just so everybody out there, just remember that we are living in a world of the intersection of nondeterministic probabilistic technology and prior deterministic technology, and this is going to continue to be a bumpy ride. So be safe out there. Keep on prompting, and we'll see you next time.

Episode Video

Creators and Guests

Brian Fehrman
Host
Brian Fehrman
Brian Fehrman is a long-time BHIS Security Researcher and Consultant with extensive academic credentials and industry certifications who specializes in AI, hardware hacking, and red teaming, and outside of work is an avid Brazilian Jiu-Jitsu practitioner, big-game hunter, and home-improvement enthusiast.
Derek Banks
Host
Derek Banks
Derek is a BHIS Security Consultant, Penetration Tester, and Red Teamer with advanced degrees, industry certifications, and broad experience across forensics, incident response, monitoring, and offensive security, who enjoys learning from colleagues, helping clients improve their security, and spending his free time with family, fitness, and playing bass guitar.
Joff Thyer
Host
Joff Thyer
Joff Thyer is a BHIS Security Consultant with advanced degrees, multiple GIAC certifications, and deep expertise in offensive security and exploit development, who enjoys crafting sophisticated malware for penetration tests and, outside of work, making music and woodworking.