San Diego is home to year-round sunshine and surfing. It is also where the first modern triathlon was held in 1974. Just like endurance athletes push their physical limits, cybersecurity professionals push the boundaries of how we approach risk management in today’s threat landscape. Thankfully, San Diego also gives these practitioners a place to gather, swap stories, and connect at BSides San Diego.
Over 700 members of Southern California's vibrant security community gathered at a sold-out BSides San Diego to attend 32 expert-led sessions and three multi-hour workshops. Fourteen of the speakers were giving their first security talk ever. There were also villages where people learned physical security skills, did some hardware hacking, and competed in a lively day-long CTF. Not only did attendee tickets sell out, but this year was also the first time sponsorship opportunities sold out, a rare accomplishment for any BSides and one that speaks to the strength of the San Diego community.
Against the backdrop of 80s nostalgia, complete with synth-pop vibes and slide projectors with a mind of their own, the speakers at BSidesSD challenged us to rethink risk, not as a checklist or checkbox, but as a living, breathing part of our organizations’ decision-making process.
Good GRC means challenging the way you have always done it
Opening with a Bon Jovi-inspired keynote titled “Keynote: Your love is like bad medicine, but GRC doesn't have to be that way. Symptoms and diagnosis,” Kelli Tarala, Principal Consultant at Black Hills Information Security, didn’t hold back. She examined the deep, often misunderstood world of Governance, Risk, and Compliance (GRC), framing it through a lens that was both critical and compassionate. Her talk diagnosed the symptoms of "bad medicine" GRC, which surface when recommendations are too rigid, disconnected from business reality, or simply outdated. She told us that any cure needs to be rooted in relevance, flexibility, and communication.
Kelli drew parallels between over-prescribed frameworks and outdated treatments—audits that no longer serve, pen tests that uncover issues no one has the resources to address, and compliance checklists that create the illusion of control. Her remedy? Risk management that’s real, responsive, and integrated into decision-making. Think dashboards, not spreadsheets. Think AI-assisted context, not static numbers.
Her advice: “Shake it up.” Rethink those dusty charters and out-of-touch processes. And most of all, use GRC to serve the business, not bury it in bureaucracy.
Poisoned data, twisted outcomes
One of the more visually captivating talks came from Maria Khodak, a Penetration Tester at The Cigna Group, who showed us what happens when AI models get fed manipulated data, and how graph theory can help make the invisible visible.
In her talk "Good Models Gone Bad: Visualizing Data Poisoning," Maria used real-world examples to show how even small changes to a dataset can corrupt a model’s output. One story involved a friend who successfully convinced a conversational AI that he was an alpaca. It was funny at first, but it underscored a dangerous truth: these systems can be misled with surprisingly little effort.
Using the network visualization tool Gephi, Maria demonstrated how data poisoning can be hidden in complex graph structures. She showed how adding just a few extra edges to a Java dependency map created artificial relationships between packages. These relationships could distort predictions or open the door to exploitation. The more nodes and links in the system, the easier it is to hide tampering.
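Maria did her demonstration in Gephi; as a rough illustration of the underlying idea, a dependency graph can be modeled as a simple edge set in Python, where a handful of planted edges (the package names here are hypothetical) only stand out when you diff against a trusted baseline:

```python
# A small, hypothetical Java dependency graph as a set of
# (package, dependency) edges.
deps = {
    ("app-core", "json-parser"),
    ("app-core", "http-client"),
    ("http-client", "tls-lib"),
}

# A poisoner quietly adds a few edges, creating artificial relationships
# that anything trained on this graph would treat as real.
poisoned = deps | {
    ("json-parser", "evil-pkg"),
    ("tls-lib", "evil-pkg"),
}

# Diffing against the trusted baseline surfaces the tampering.
injected = poisoned - deps
print(sorted(injected))
```

In a graph with thousands of nodes and edges, the same diff is much harder to eyeball, which is why Maria leaned on visualization tools to make planted relationships visible at a glance.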
Maria argued that one of the biggest problems with risk in machine learning is that it often goes unseen. When a threat isn’t visible in the interface or logs, it doesn’t feel like a threat at all. But once trust in your training data is broken, every downstream decision becomes questionable.
Her recommendation was clear. Treat data like code. Review it, version it, and audit it regularly. Use visualization tools to spot anomalies and involve multiple teams in validation. Risk begins with the inputs. If those are compromised, everything that follows is at risk too.
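One lightweight way to put "treat data like code" into practice, sketched here with Python's standard library and a made-up toy dataset, is to record a deterministic fingerprint of the training data and verify it before every run:

```python
import hashlib
import json

def fingerprint(records):
    """Hash a dataset deterministically so any change is detectable."""
    canonical = json.dumps(sorted(records), separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical training records; store the fingerprint alongside the model,
# the way a lockfile pins dependency versions.
baseline = ["label=cat,img=001", "label=dog,img=002"]
expected = fingerprint(baseline)

# Later, before retraining, verify nothing slipped in.
tampered = baseline + ["label=alpaca,img=003"]
assert fingerprint(baseline) == expected
assert fingerprint(tampered) != expected  # poisoning changes the hash
print("dataset fingerprint:", expected[:16], "...")
```

A changed hash does not tell you what changed, only that something did; that is the cue to pull out the visualization and review tools before trusting the data again.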
Metrics that earn their keep
In his talk "DeepFry Your Insights: Turning InfoSec Metrics into a Culture of Security Excellence," Chris Julio, Manager of Security Operations at Jack in the Box, brought a practical perspective from an industry that rarely makes headlines in security circles. At Jack in the Box, he manages security in a world where uptime is sacred, and any delay is measured in lost sales, not lost packets.
Chris outlined how his team had to completely rethink how they measured and communicated risk. Early efforts focused on traditional metrics like phishing email blocks or malware detections. But none of those landed with the executive team. What mattered to leadership were metrics tied to store performance, growth, and efficiency.
He explained that in quick service restaurants, success is measured by average unit volume, digital sales, and how fast a new location reaches profitability. If security can’t show its impact on those goals, it won’t be seen as strategic. It will be seen as overhead.
One of the most powerful parts of his talk was a warning about vanity metrics. These are the numbers that look good but mean nothing. Chris said if your metrics don’t drive decisions, they’re not helping. They’re just decoration.
His bottom line was this. Risk conversations should not be about technical wins. They should be about business outcomes. That shift in framing makes security not just easier to understand, but harder to ignore.
Building blocks for cryptographic clarity
Matthew Olmsted, Staff Software Engineer at BD, took a different approach to the risk conversation in his talk, "Understanding Cryptographic Primitives." Instead of diving deep into math or algorithms, he focused on building mental models. His goal was to give attendees a way to think about cryptographic risk that would help them in everyday decision-making.
He began with simple concepts like Caesar ciphers and decoder rings, then gradually layered in modern techniques like stream ciphers, block encryption, and asymmetric key exchanges. Matthew stressed that most cryptographic failures are not due to broken algorithms. They come from bad implementations. Reusing keys. Poor randomness. Misconfigured libraries. Risk, in this context, is not theoretical. It is operational.
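Key reuse in particular fails in a very concrete way. As a toy sketch (a deliberately naive XOR scheme, not a real cipher), encrypting two messages with the same keystream lets an attacker cancel the key out entirely:

```python
import secrets

def xor_stream(key, data):
    """Toy stream cipher: XOR data with a repeating keystream. Do not use."""
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

key = secrets.token_bytes(16)
m1 = b"transfer $100 to alice"
m2 = b"transfer $999 to mallet"

c1 = xor_stream(key, m1)
c2 = xor_stream(key, m2)

# Reusing the keystream means c1 XOR c2 == m1 XOR m2: the key drops out,
# and an attacker holding only ciphertext learns the XOR of the plaintexts.
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(m1, m2))
```

Real stream and counter-mode ciphers degrade the same way when a key and nonce pair is repeated, which is exactly the kind of operational failure Matthew was pointing at.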
He challenged attendees to stop asking if something is encrypted and start asking how. What algorithm was used? What mode? How is the key stored? Who has access to it? These questions reveal the real risks behind the security labels that too often go unquestioned. He concluded that better conversations lead to better architecture, better choices, and fewer surprises later on.
Curiosity over code coverage
"The Art of Vulnerability Hunting: Key Indicators to Look for in Any Application," from Yash Shahani, Security Researcher at Qwiet AI and co-organizer of BSidesSF, was a fast-paced session that reminded the room of something too often forgotten. The best tools in the world still miss things. Human curiosity is irreplaceable.
Yash walked through his personal process for finding vulnerabilities in large codebases. He emphasized that step one is always understanding the application. What does it do? What are its inputs and outputs? What assumptions are baked into its design?
From there, he mapped inputs to potential sinks. Functions like exec, eval, and system calls immediately drew scrutiny. He also looked for signs of improper input validation, file uploads, and raw database queries.
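That input-to-sink mapping can be partially automated. As a minimal sketch (the sink list and sample code are illustrative, not taken from the talk), Python's ast module can flag calls to suspicious functions for manual review:

```python
import ast

# Hypothetical shortlist of sink names worth a closer look.
SINKS = {"eval", "exec", "system", "popen"}

def find_sinks(source):
    """Return (line, name) for each call to a known-dangerous sink."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            fn = node.func
            name = fn.id if isinstance(fn, ast.Name) else getattr(fn, "attr", None)
            if name in SINKS:
                hits.append((node.lineno, name))
    return sorted(hits)

sample = (
    "import os\n"
    "user = input()\n"
    "os.system('ping ' + user)\n"  # user input flows into a shell sink
    "eval(user)\n"
)
print(find_sinks(sample))  # [(3, 'system'), (4, 'eval')]
```

A hit is not a vulnerability; it is a starting point. The next step in Yash's process is tracing whether untrusted input can actually reach each flagged call.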
Yash insisted that vulnerability hunting is not about tools; they can only scan for what they are designed to find, not assess an application's security as a whole. It is about intuition. It is about seeing something weird and asking why. It is about noticing a function name that feels off or a flow that doesn’t quite make sense. He encouraged us to keep digging when our instincts flag something; that is how you find the real risk.
Changing the risk conversation
Across every session at BSides San Diego 2025, the message was clear. If we want to better manage risk, we have to talk about it more, and in language that people outside our teams will understand. That means discarding outdated narratives, pushing back on vanity metrics, and making room for more honest, context-driven conversations.
For some teams, that might mean revisiting their GRC processes. For others, it could be building clearer models for cryptographic systems, tuning their machine learning pipelines, or simply encouraging more curiosity in code review.
Your author was able to give a talk about the risk conversation around the future of non-human identities (NHIs). And if secrets sprawl is part of your risk story, tools and platforms like GitGuardian can help bring that conversation into focus. From detection to remediation, visibility makes better decisions possible. But the real shift begins with people. The risk conversation isn’t a technical challenge. It is a human one.
