Meaning In the Signal: What Five Talks at [un]Prompted Taught Me About the Future of Cybersecurity
[un]Prompted is a brand-new conference for AI security practitioners, and its inaugural edition, held in San Francisco on the 3rd and 4th of March 2026, announced itself as something rather different from the events that typically populate the cybersecurity calendar. No vendor booths, no sponsored talks, no carefully stage-managed product launches. Just researchers, practitioners, and thinkers from across the security ecosystem, brought together to share work that is genuinely new, genuinely challenging, and, in several cases, genuinely uncomfortable for the incumbents in the room. For a first outing, it set an admirably high bar. I want to talk about something that happened there which I don’t think was planned, which nobody orchestrated, and which I haven’t been able to stop thinking about since.
I had something of a light-bulb moment sitting in the audience, watching five very different presentations from five very different corners of the cybersecurity world and realising, with a growing sense of conviction, that every single one of them was circling the same fundamental problem. Not coordinated, not themed, not briefed against a shared narrative. Just five sharp practitioners, working independently in domains as varied as planetary-scale threat intelligence, static code analysis, reverse engineering, AI-assisted productivity, and agentic security systems, all arriving by entirely different routes at the same uncomfortable question: now that we can see everything, how do we know what any of it means?
Bob Rudis and Glenn Thorpe from GreyNoise stood up and talked about the challenge of finding “truly meaningful new signal in all the noise” across massive observation grids that span, quite literally, the planet. Peter Girnus and Derek Chen from TrendAI walked through the architecture of progressive signal refinement in code analysis pipelines, building cascading layers of traditional static analysis tools like YARA-X, Semgrep, and CodeQL specifically to filter and refine raw findings before handing anything to a human analyst or an LLM for deeper semantic evaluation. Olivia Gallucci from Datadog described how, when working inside Apple’s partially documented ecosystem, meaning must be painstakingly triangulated from fragmentary signals: OS logs, binary analysis, subsystem labels, message strings, each individually unremarkable, collectively revelatory. Joe Sullivan, in discussing AI notetakers, explored the subtler question of how AI models interpret human intent, demonstrating that “high-signal phrases” deterministically cause models to weight certain statements more heavily, and that even the structure of a conversation, its beginnings and its transition points, shapes what meaning survives the transmission. And Matt Rittinghouse and Millie Huang argued that generative AI security systems, operating on fuzzy natural language rather than deterministic rules, require continuous human-in-the-loop feedback to produce outputs that are genuinely meaningful rather than merely voluminous.
Five talks. Five different domains. And the same question, echoing through every one of them.
I believe we are witnessing a tectonic shift in cybersecurity, one that the industry has been slow to name and slower still to respond to. This is my attempt to name it.
We Won the Signal War. We’re Losing the Meaning War.
The cybersecurity industry has spent the better part of two decades solving for detection, pouring extraordinary talent and capital into the pursuit of more signal: more sensors, more signatures, more correlation rules, more threat feeds, more automated scanning, more AI-powered discovery. And, to be fair, we got genuinely, remarkably good at it. The arc from those early days of flying blind, when a breach could dwell undetected for months or even years, to today’s world of continuous monitoring and machine-speed vulnerability discovery represents one of the great technical achievements of our field.
What nobody quite anticipated, however, is that we would reach an inflection point where even the true positives alone would exceed human capacity to act on them meaningfully. We spent so long fixated on the false positive problem, and rightly so for a time, that we failed to notice a rather different problem building quietly beneath our feet: the sheer volume of legitimate, valid, accurate signal has itself become the bottleneck. The constraint moved, as it were, and most of the industry hasn’t yet caught up.
Put simply, we built an industry optimised for the discovery of problems and structurally incapable of discerning which ones matter.
This is not merely a scaling challenge that can be addressed by hiring more analysts or buying faster tooling. It is, I contend, a conceptual challenge, one that demands a fundamentally different orientation. The next era of cybersecurity is not about finding more. It is about understanding what matters.
What Meaning Actually Requires
If signal is data that has been distinguished from noise, then meaning is signal that has been enriched with context. And context, it turns out, is not a single thing but a convergence of several things, each of which the signal itself does not carry.
The [un]Prompted presentations illustrated this beautifully, even if none of them framed it in quite these terms.
When Rudis and Thorpe built their AI-powered threat intelligence analyst, the internal tool they call “Orbi”, they didn’t build it to find more threats. They built it to make sense of the threats already found, with customisable preference tuning that allows security teams to isolate specific, high-priority signals: separating conflict-related intelligence from background ransomware activity, for instance, to make the data immediately actionable for a particular team in a particular context. That isn’t filtering. That is an act of interpretation, the application of organisational priority and strategic intent to raw signal, transforming it from data into something a human being can meaningfully act upon.
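That kind of preference tuning is easy to picture in code. The sketch below is purely illustrative, my own minimal model of tag-based signal isolation; the `Finding` and `TeamPreferences` structures and the tag names are assumptions of mine, not GreyNoise’s Orbi implementation:

```python
from dataclasses import dataclass, field


@dataclass
class Finding:
    """One piece of threat intelligence (an illustrative schema, not Orbi's)."""
    summary: str
    tags: set[str] = field(default_factory=set)


@dataclass
class TeamPreferences:
    """Per-team tuning: which tags this team treats as high priority."""
    priority_tags: set[str]


def isolate_signal(findings: list[Finding], prefs: TeamPreferences):
    """Split a raw feed into what this team sees first and what stays background."""
    priority = [f for f in findings if f.tags & prefs.priority_tags]
    background = [f for f in findings if not (f.tags & prefs.priority_tags)]
    return priority, background


# A team tracking conflict-related activity, not commodity ransomware.
prefs = TeamPreferences(priority_tags={"conflict"})
feed = [
    Finding("Scanning surge from contested region", {"conflict", "scanning"}),
    Finding("New ransomware variant observed", {"ransomware"}),
]
priority, background = isolate_signal(feed, prefs)
```

The point is not the filtering itself, which is trivial, but where the priority tags come from: they encode a team’s strategic context, which is precisely what the raw signal does not carry.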
When Girnus and Chen designed their code analysis pipeline, they made an architectural choice that speaks volumes: rather than passing raw, noisy findings directly to an expensive LLM, they constructed a cascade of progressively refined analysis, each layer adding a degree of contextual understanding before the next. The insight is that meaning doesn’t emerge from throwing more compute at raw findings; it emerges from structured reduction, each stage stripping away what is irrelevant and enriching what remains. And their FENRIR system, which routes findings to human analysts based on confidence thresholds, is an explicit acknowledgement that meaning, real and actionable and consequential meaning, ultimately requires human judgement. The machine refines the signal. The human interprets it.
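A cascade of that shape can be sketched in a few lines. Everything below is a hedged approximation of my own: the stage logic, the confidence scores, and the routing threshold are invented for illustration and bear no relation to how TrendAI’s pipeline or FENRIR actually work. I have assumed, in particular, that high-confidence findings go straight to an analyst while mid-confidence ones get LLM review:

```python
from dataclasses import dataclass


@dataclass
class Finding:
    """A static-analysis finding moving through the cascade (illustrative only)."""
    detail: str
    confidence: float  # 0.0-1.0, refined at each stage


def pattern_stage(findings: list[Finding]) -> list[Finding]:
    """Stage 1: a cheap pattern pass drops obvious noise before anything costly runs."""
    return [f for f in findings if f.confidence >= 0.2]


def semantic_stage(findings: list[Finding]) -> list[Finding]:
    """Stage 2: deeper analysis enriches context, raising confidence in survivors."""
    return [Finding(f.detail, min(f.confidence + 0.3, 1.0)) for f in findings]


def route(findings: list[Finding], threshold: float = 0.8):
    """Route by confidence: above the threshold to a human analyst, below to LLM triage."""
    to_analyst = [f for f in findings if f.confidence >= threshold]
    to_llm = [f for f in findings if f.confidence < threshold]
    return to_analyst, to_llm


raw = [
    Finding("possible hardcoded credential", 0.4),
    Finding("unreachable code", 0.1),
    Finding("tainted input reaches SQL sink", 0.9),
]
to_analyst, to_llm = route(semantic_stage(pattern_stage(raw)))
```

The shape matters more than the numbers: each stage is cheaper than the one after it, so the expensive step, whether LLM or human, only ever sees a structured reduction of the raw output.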
When Gallucci described her reverse engineering methodology, she demonstrated something I consider essential to understanding what meaning looks like in practice: it is always an act of assembly, of placing signals in relationship to one another. A single OS log entry means nothing. The same entry, cross-referenced against a code path, a parsing assumption, and a data flow, tells you where the attack surface actually lives. Meaning doesn’t reside in any individual signal; it emerges from the relationships between them.
If the first three talks showed how meaning is constructed, Sullivan revealed how easily it can be lost. His demonstration that AI models weight information differently based on linguistic cues, that a phrase like “the most important thing to remember is” deterministically elevates what follows, exposed a dimension of the meaning problem that most of the security industry hasn’t begun to reckon with. We can do all the hard work of interpretation, reduction, and contextual assembly, and still lose the meaning in the handoff. How we communicate findings to machines, and how machines communicate them back to us, determines whether the meaning we’ve worked so hard to construct actually reaches the person who needs to act on it.
And when Rittinghouse and Huang described their experience building agentic security systems, they articulated what may be the most uncomfortable truth of all: that generative AI, for all its remarkable capability, operates on fuzzy natural language rather than deterministic rules, and that its outputs require continuous human-in-the-loop calibration to remain meaningful. The temptation to treat AI-generated findings as authoritative is immense, particularly when they arrive at volume and speed. But authority without verification is merely confidence, and confidence without grounding is precisely the kind of noise that meaning-making is supposed to cut through.
From Detection-Centric to Comprehension-Centric Security
If I were to distil the [un]Prompted conversations into a single thesis, it would be this: the organisations, tools, and leaders that thrive in the next era of cybersecurity will be those that shift from a detection-centric model, which asks “what can we see?”, to a comprehension-centric model, which asks “what does it mean, and what should we do about it?”
This is not a subtle distinction. It changes what you build, who you hire, how you spend, and what you measure.
A detection-centric organisation optimises for coverage: more sensors, broader attack surface visibility, higher volumes of findings, faster scan times. These things are necessary, and I am not for a moment suggesting otherwise, but they are no longer sufficient. Coverage without comprehension is, at best, expensive noise and, at worst, a form of organisational self-deception in which the volume of activity creates a comforting illusion of security.
A comprehension-centric organisation, by contrast, optimises for meaning. It asks: what are the crown jewels that matter most to this business, and which findings actually threaten them? What are the realistic, exploitable attack paths in this specific environment, not in some abstract theoretical model? Which threat actors possess both the capability and the intent to target us, and what does their tradecraft tell us about where the real risk concentrates? What is the business impact, not the technical severity but the business impact, if a given finding is exploited?
These are harder questions. They require security teams that can speak the language of business value, threat intelligence that is integrated into operational decision-making rather than consumed as a feed, and tooling that enriches signal with environmental and strategic context rather than simply triaging it by severity score. But they are the right questions, and I believe the industry is overdue in asking them.
The Wider Aperture
I have focussed here on the vulnerability management domain, partly because it is the world I inhabit daily and partly because the [un]Prompted conversations provided such rich material. But the meaning deficit extends well beyond vulnerability findings.
Security Operations Centres face precisely the same challenge: analysts drowning in alerts, SOAR playbooks that automate response without understanding context, and a psychological toll, a kind of learned helplessness, that sets in when every alert is technically important but practically unactionable. Incident response teams grapple with it when they must decide, in the fog of an unfolding breach where initial compromise may have occurred hours or days before detection, which signals warrant escalation and which are artefacts of the noise. Boards and CFOs encounter it when they review security investment proposals that quantify coverage but cannot articulate what, precisely, that coverage means for the risk profile of the business.
The meaning deficit is not a niche problem within vulnerability management. It is, I contend, the defining challenge of modern cybersecurity, the thread that connects signal generation to operational effectiveness to strategic decision-making to, ultimately, whether the enormous sums the world spends on security are actually making anyone safer.
An Invitation
This is the first in what I intend to be a regular series of essays and commentaries under the banner of Meaning In the Signal. The ambition is not to provide easy answers (I don’t have them, and I’d be wary of anyone who claims they do) but to think carefully and seriously about what meaning looks like in cybersecurity, how we build it into our operations and our organisations, and why it matters now more than it ever has.
I want this to be a space for the kind of thinking that doesn’t fit neatly into a vendor blog or a conference keynote. Long-form when the argument demands it, sharp and opinionated when brevity serves better. Grounded in practice, informed by evidence, and unafraid to state a position.
The signal is abundant. The meaning is scarce. Let’s go find it.