The Hallucination Problem | Episode 20
In this episode of AI Security Ops, Joff Thyer and Brian Fehrman from Black Hills Information Security dive into the hallucination problem in large language models (LLMs) and generative AI.
They explain what hallucinations are, why they happen, and the risks they create in real-world AI deployments. The discussion covers security implications, practical examples, and strategies organizations can use to mitigate these issues through stronger design, monitoring, and testing.
A must-watch for cybersecurity professionals, AI researchers, and anyone curious about the limitations and challenges of modern AI systems.
----------------------------------------------------------------------------------------------
Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/
Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/
Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/
Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/
Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
Creators and Guests
Host
Brian Fehrman
Brian Fehrman is a long-time BHIS Security Researcher and Consultant with extensive academic credentials and industry certifications, specializing in AI, hardware hacking, and red teaming. Outside of work, he is an avid Brazilian Jiu-Jitsu practitioner, big-game hunter, and home-improvement enthusiast.
Host
Joff Thyer
Joff Thyer is a BHIS Security Consultant with advanced degrees, multiple GIAC certifications, and deep expertise in offensive security and exploit development. He enjoys crafting sophisticated malware for penetration tests and, outside of work, making music and woodworking.