Subscribe to the show in Apple Podcasts, Spotify, or anywhere else you find your favorite podcasts!

Identity in the AI Era: Managing Enterprise Risk in the Age of AI with Jasson Casey • Cyber Sentries • Episode 210

Identity in the AI Era: Managing Enterprise Risk in the Age of AI with Jasson Casey

The Evolution of Identity Security in the Age of AI

In this episode of Cyber Sentries, John Richards sits down with Jasson Casey, CEO and co-founder of Beyond Identity, to explore the intersection of identity security, AI, and enterprise risk management. As organizations rapidly adopt AI tools and agents, the fundamental challenges of identity security are evolving—requiring both new approaches and a return to core principles.

Identity: The Foundation of Modern Security

Jasson explains how identity has become the root cause of most security incidents, with identity-based failures accounting for 80% of security tickets. The conversation explores how AI is transforming every role in modern organizations, while highlighting the security implications of this rapid adoption.

Key Takeaways:

  • Identity security is fundamental to managing AI risk in enterprises
  • Traditional security concepts still apply but require new implementation approaches
  • Organizations need to track data flow and permissions across AI systems

Looking Ahead

As AI adoption accelerates, organizations must balance innovation with security. Through proper identity management and understanding of data flow, enterprises can prevent most security incidents while embracing the transformative potential of AI technologies.

Links & Notes

John Richards:
Welcome to Cyber Sentries from CyberProof on TruStory FM. I’m your host, John Richards. Here, we explore the transformative potential of AI, cloud, and cybersecurity, where rapid innovation meets the need for continuous vigilance. This episode is brought to you by CyberProof, a leading managed security services provider. Learn more at cyberproof.com.
On this episode, I’m joined by Jasson Casey, CEO and co-founder at Beyond Identity. We discuss how it’s both the best time to be a builder or creator and also one of the riskiest times. But how do you deal with that risk? Jasson shares the fundamentals of agent and LLM security and how identity is the through line across them. Let’s dive in.
Hello, everyone, and welcome to this episode of Cyber Sentries. I’m joined today by Jasson Casey, CEO and co-founder at Beyond Identity. Jasson, thank you so much for coming on the program.

Jasson Casey:
Thanks for having me.

John Richards:
Now, before we dive into the topics at hand, how did you end up being a co-founder and CEO at Beyond Identity? What was that journey like to start a company and really tackle identity as a challenge?

Jasson Casey:
So let’s see. I would say it was pretty unique. I was formerly the CTO of a company called SecurityScorecard, and so I’ve been in the security space for a while. And I left the company in late summer of 2019 with the idea that I wanted to start a cybersecurity analytics company.
And about that time, I happened to run into Jim Clark, who is the founder of Silicon Graphics, the founder of Netscape, done a couple other things that you may have heard of, like WebMD, Shutterfly. And he was really obsessed with this idea of password elimination, like making it possible for people to log into services without having to deal with the hassle of, A, terrible user experience and B, fundamentally a piece of security theater.
There’s a lot of back story there, but fundamentally we ran into each other and he convinced me to join forces and he had a team with them that had built out an initial prototype. And we started fast and furious in September of 2019. As technically we were called Zero PW back then. We eventually changed it to Beyond Identity. But I actually started as the co-founder and CTO, and I just became the CEO about two years ago.

John Richards:
Wow. Well, congrats on that. And that’s a big jump up. Do you have any regrets? Because the landscape has shifted a lot. And I assume no, but playfully, because with the advent of AI and agents and everything, identity has become a very hot button topic here, become even more important, but also far more challenging. So yeah, what’s that look like for you?

Jasson Casey:
Do I have any regrets? No. I would argue, look, right now is probably one of the best times to be alive if you actually enjoy building, if you actually enjoy the act of creation, because I don’t think it’s ever been easier for you to kind of achieve your vision or to make a thought into something physical and tangible, whether it’s in the space that we do or in all of these other spaces. So yeah, I think the world has become really interesting in the last couple of years, but I think it’s become really interesting in favor of the people who create.
With that said, the theme of your comment, identity being really important. Identity is actually the root cause of most of your trouble. If you run a SOC, if you’re a CISO, or if you’re responsible for ID. And the numbers don’t lie. You can look at CrowdStrike’s report, you can look at Mandiant’s report. I’m talking about their annual threat reports. You can look at Verizon’s annual database of incident response. In all three cases, identity-based failure is the cause of 80% of your ticket load and 80% of your security incidents.
So that’s kind of a superficial or top level smoke. So if we sniff around, where’s the fire? And when you actually look at what are these incidents and where are they actually coming from? They’re coming from credential theft, credential misuse, over provisioning, like all of these identity-based concepts. And you can even double click it a little bit further. And I would argue they’re actually coming from things that are preventable. And a lot of the industry right now is still focused on detection and response, and they’re doing it in a patchwork-based manner. But fundamentally they can actually eliminate the cause of 80% of these incidents.
So for instance, whether we’re talking about a user credential, whether we’re talking about an API credential or an API token, whether we’re talking about a transactional credential, like a session cookie or a JWT, all these things from a legacy perspective, they’re symmetric secrets that move. And anything that moves is something that’s going to create or accrete an attack surface every time it touches a computer in the world, that it’s almost impossible for you to track and defend.
And so our fundamental insight, and this goes back to 2019, is we now live in a world where it’s possible to create credentials that don’t move. And it’s very visible when you talk about it from like passwordless authentication, because clearly you’re using an asymmetric cryptography algorithm to not prove you have a password, but to essentially sign a challenge using a private key that you and only you have. And modern electronics can store that key in a way to where it’s never in memory, it’s never in your processor, it physically can’t be stolen, it can’t physically leave that device.
But this also translates to the other credentials, when I mentioned transactional credentials for your browser. It’s actually possible to essentially use credentials that don’t move, that produce a fresh signing for every transaction. It’s possible to do this with APIs. It’s possible to do this with access tokens. So it really is possible to move to a world where all of these credentials that create the surface area physically don’t move and give you strong identity and strong attestation back to not just the user, whether the user is a human or a robot or an automated process, but the device it’s happening on. And not just the device it’s happening on, but the posture of the device it’s actually happening on.
And so we saw back in 2019, the idea that you could actually unify user identity, device identity, device posture and workload, and tie all four of those things into the decision of, A, should I allow this to proceed and then B, after the fact, should I continuously allow it to proceed? That gets enhanced into concepts of like the continuous off.
So fundamentally, identity underlies anything that matters in a modern enterprise. Whether you’re talking about employees getting to work, whether you’re talking about your workload satisfying customers, they all have identities, they all operate on machines that you have to be able to track security incidents, or fundamentally about building a tree or a graph of related events where essentially the edges of the relations and the nodes are identities. What are the identities of the people and the devices and the tokens and the processes and whatnot?
And so we see a world where security and identity kind of unifies and joins those things in a concrete way that are essentially provable, that are tamper resistant. And this plays really, really well into the why it’s exciting to be alive today. Today we live in this world where with the explosion of AI tooling, you can build faster than anything. You can run faster than anyone. AI tooling is going to take these problems that you’ve been solving in a legacy way and essentially make you have to chase them at a thousand times the speed that you’re currently chasing them, or you can rethink your foundations.

John Richards:
What’s the challenge then that you see teams and enterprises dealing with to keep them from moving to this more non-moving identity so that you don’t have that broader attack surface? What’s the friction point there that we don’t just see, “Hey, everybody’s jumped on board”?

Jasson Casey:
I think that honestly, the friction point really boils down to knowledge. People still don’t really understand this. I think the markets largely still think of passwordless as a usability experience. Like, “It makes it easier on my employees.” But if it’s done correctly in this kind of immovable identity and device posture sort of way that I described before, there’s a huge boon from a security perspective. You literally have changed your architecture to where you no longer have to worry about pipes freezing in the winter because we’re not actually carrying heat through pipes. We’re carrying heat through wires. It’s fundamentally changing the equation to where a set of problems just go away. I think the fundamental thing that I see is really still a market awareness and an understanding of what are the actual benefits that I get from this and why do I actually get it?
I’d say a second thing is an association of, “Well, hey, I have all of these controls. I spend so much money on all of these controls. This sounds like just another thing.” But it’s kind of like a failure to recognize that there’s two types of controls in the world. There’s one control that either lowers the risk of a thing happening. There’s another type of control that actually eliminates the thing from happening. And there’s not a lot of things in the security world that fall into that second bucket. And so I think that people forget that that second bucket actually can exist, but that second bucket is also what’s responsible for massive improvements in things like service availability and uptime and horizontal scalability. Just rethinking these classic assumptions to, “Rather than making the thing go faster, rather than getting a faster horse, is there something that’s fundamentally different that opens the world up in a new way?” Whether it be a car or whether it be an AI coding assistant agent.

John Richards:
Okay. So with that, as we look at just the rise of AI usage, just proliferating everywhere and the use of agents everywhere, what are you seeing? How does this layer onto that? Where should folks be looking for attack vectors? What are the dangers, especially related to identity that folks need to be paying attention to as they roll out new programs and things like that?

Jasson Casey:
Let’s bucket it in kind of two parts. So first, AI is going to change the world, and I still think we’re in the early innings. I’ll use my own company as a little experiment. There’s no job in my company that does not benefit from using an AI coding assistant, even though not all of my employees code. So just to let that statement land, these AI coding assistants, they’re tuned to help write code, but they do a hell of a lot more than that. And what they do in a much, much better way than a chat agent is deal with structured documents across a file system. And when you think about strategic planning, when you think about financial planning, when you think about sales campaigns, when you think about all of these things, you’re still working very much like a developer. You have a workspace, you have a set of libraries of different copy and messages for different verticals.
So it is this organization of data and this process and these code assistant agents, they understand the sort of layout very, very well and they understand human language very, very well. And the models that they’re invoking have already trained on the best that McKinsey has to offer in product strategy and market strategy and sales approaches. So it’s actually quite easy to start going a little AI native, not just in your developer organization, but across your entire organization using these tools.
So that’s number one. Number two is, “Oh crap, I’m scared of that.” You’re telling me all of my data is now going to be slingshotted all over the place. And my answer is yes, absolutely. This is the future, and you can either figure out how to deal with it and like actually stand your employees up or you will be relegated to obsolescence. And I’m not going to actually debate the relegation to obsolescence. Let’s just kind of assume that or you can disagree in tune out.
So then the question is, all right, well, how do I embrace this and how do I actually track it? And I think we could probably enumerate a list of like thousands of ways in which this could go wrong and leak intellectual property, leak confidential data, cause outages in your platform, give the wrong things to the bad guys, et cetera. But there’s actually kind of one underlying or cross-cutting function where you actually have high, high leverage. And I would argue that’s identity.
So these code assistant agents, I would argue they have an identity. “What? What do you mean by that?” Well, first of all, any sort of software executable has an identity, which is really just its cryptographic checksum. And in theory, if any of you have to fall under compliance regimes, maybe you don’t know this, but someone on your team already knows this because they’re ensuring that you’re running SBOMs that are signed by OEMs and only from the right list and whatnot. That’s not specific to that world. It turns out all software has a unique signature at the time of load that can actually be signed. Agents are no different.
Agents execute tools. Those tools are both local and remote. So the developers that are listening to this will say, “Yeah, of course,” but maybe other people don’t quite know what I mean. With a code assistant agent, you can basically ask it… Let’s talk about a finance task. I want a code assistant to ingest a list of, let’s say my customers, the revenues to us, all of their trouble tickets, and give me an analysis on how to actually plan maintenance budgets versus new development, new work.
What the LLM’s going to want to do is first it’s going to want to grab the data and it’s going to send some commands down to the agent saying, “Hey, go grab the data.” And maybe the data’s on my local file system, so it’s going to want to interact using Bash tools or file system tools. And so those are going to execute locally. Those tools are just programs. Those programs have identities. It’s important for me to know that those are the tools that I actually allow my people to use and not some sort of renamed script. They may execute remote tools.
So again, on those remote tools, those remote tools have identity. Anytime you sign up a remote tool, either through an API key or some sort of IDP delegation, identity delegation, you’re establishing an identity of this remote tool. So you have approved remote tools and you have unapproved remote tools. And if you’re a sophisticated organization, you probably have the ones that pass SOC 2 at the lowest level and then the ones that passed FedRAMP High at the highest level, depending on what part of your business. So you’ve got to track that.
So again, agent has an identity, agent runs on a machine, a machine has an identity, a machine has a posture. All of these operate at the behest of a user. A user has an identity with associated authorizations. Services coming back have data that exist at certain kind of tiered levels. And so identity is where you can, A, understand how all these things come together, B, understand the different privilege levels of the users, whether the users are humans or robots or agents, but also understand the protection levels of the data and make decisions on what’s valid and invalid mixings. Where should I allow and where should I disallow things?
So all of this is actually manageable at the identity layer. And by the way, these agents generally can run locally. The LLMs are not local. Most of you don’t have the capacity to run these things. Some of the you do, but if you do, you can kind of see how that works. If you don’t, there’s things like Amazon Bedrock, Google’s… I always forget Google’s product suite name, but Google has a way of essentially giving you these private hosted models, Microsoft does as well, where you can literally point Claude code, you can point Codex, you can point Gemini at the FedRAMP version and actually preserve all of these compliance problems that you have.
I just went through a lot. If I were to bump it back up, you really need to embrace these tools and there’s a hard way and there’s an easy way of embracing these tools and not losing your compliance, not creating the next set of security incidents. The hard way is having someone just enumerate the list and go down the line of wood chopping of like, “How do I buy down this risk?” The easier way is what’s my highest leverage point in actually answering all of these questions and what is actually preventable versus reducible? And let’s actually focus on that prevention. And identity is actually the answer to that.

John Richards:
What about the adoption of this? So in my mind, it feels a little bit like we’re having to relearn a lot of basic security and identity concepts as we move into this new world. And it’s like, “Oh, we’ve been through this before,” but what about it is resetting it? Is it just it’s so new and people are trying it or is there something fundamentally different about how we’re interacting now that’s forcing us to kind of go back all the way to the beginning and rebuild our structure around how we do security here?

Jasson Casey:
I think it actually obeys the hallmarks of a good sequel movie, like give me a little familiarity and give me a little new. So the benefit, for everyone who retained their fundamentals, it’s actually still the fundamentals. We’re still executing programs on computers. We still obey the laws, the axioms of how operating systems work. So the fundamentals are still in play. What is the application? What’s the application identity? Who loaded it? What’s the checksum of the loader? Is it actually a trusted loader? Was the executable that it loaded it from signed? Was it signed from an OEM that I trust? What’s the current posture of the device that all of this is taking place on? So those are all classic concepts. We could go wake somebody up from the 1980s and they would understand everything that we just said, right?

John Richards:
Yeah.

Jasson Casey:
So what’s new? What’s new is the operating system has actually got a new layer. When we talk about an agent, what we’re really talking about is a program. It’s still a buying area. It still gets run on a computer by an operating system. It’s still loaded from an executable. What’s different about this program is it has a different set of resources it has access to.
So we think about classic programs, a classic program… The first time we learn programming, we’ll learn about all these interesting structures, like loops and conditionals and function calls and structures and whatnot. And we make all these toy programs that like teach us the fundamentals of programming. They don’t do anything useful because until you can speak to the outside world, like read data, read from a network, that sort of thing, you’re still in the lab. You’re still just kind of playing with toys.
Well, what is an agent? An agent is kind of the same thing. An agent is a program. It still has to obey all of these rules, but has a new set of resources it can talk to in the world. And the most interesting resource it can talk to in the world is the LLM. It can send a thing up to the LLM saying, “Hey, I don’t know what to do with my life. What should I do with my life?” And it will send you an answer and then you get to choose if you’re going to follow that answer or not.
And obviously that’s a very naive description, but I would argue that that’s actually the minimalist definition of an agent that lets you actually start making security decisions around how are you actually going to discover the agents in your environment, understand what they’re doing, understand what tools they’re executing locally and remotely, understand the risk level associated with them, really build identity graphs from the users to the agents, to the devices, to the posture, to the authorizations that they’re getting, even though these things can be very ephemeral. They’re fireflies, they come and they go. And kind of plot a course that’s high leverage. You don’t want to chase all the symptoms, you want to understand the systemics and kind of work at that level.

John Richards:
That makes sense. Yeah, because you’ve got so many more pieces coming in, but yeah, those core fundamentals are still critical to understanding how you address the more complicated piece.

Jasson Casey:
Yeah. If you think about an agent, you’ve got the LLM. The LLM is fundamentally still a question and answer machine. You give it a question, it gives you an answer. If you want it to remember stuff, you literally just make sure that you’re prepending all your previous questions and answers. There’s a recent enhancement that lets you also tell it, “Oh, by the way, here’s some tools I have access to and here’s some things they can do.” And in addition to giving you an answer, it can give you some execution instructions on those tools and saying, “Hey, let me know what you find out and I can help you more.”
That’s really the new thing. When we talk about MCP, when we talk about API, when we talk about vector databases, what we’re really talking about there is bringing all of our legacy data, bringing all of our legacy APIs to bear so that this LLM can tell us to do something with it. That part is actually not new. Even MCP is not necessarily that new. MCP is just a way of trying to take these legacy APIs and adapt them, or if you’re a gang of foreperson, bridge them into the interface that the LLM actually expects.
So they’re actual very simple models, mental models that you can adopt to understanding how these agents work, regardless what the purpose of these agents are, whether they’re code assistants or trying to provide a better customer experience for your customers, interacting with support or what have you. And understanding identity and tracking data flow through this system is actually what’s most important and what is high leverage.

John Richards:
Early on in my career when I was developing, I remember getting a message from the security team. I’d built out a code and they had found a SQL injection vulnerability of what I had written. I hadn’t properly escaped it. So it’s like, “Oh no, I need to be doing this. I need to be worried about this.” But the answer was, “Hey, I just need to learn how to escape this data better.”
Now, we’re dealing with prompt injection and we’re hearing a lot about this. It doesn’t seem so simple as, “Oh, I forgot to properly escape all the sequences in here.” And so there’s some similarity of, “Hey, you’ve got to watch out for this vector of threat, but it’s a lot harder now than it was back then.” What’s your thoughts?

Jasson Casey:
So yeah, that’s a great observation. So let’s drill into that. I would argue the hallmark of computer security, really network security, as well as network operations, is this thing called control plane/data plane separation. And we learned over a long time that you really want to separate these two concepts, right? So control is what should the computer do and data is, what is it doing it over? And so your explanation of prompt injection, or not prompt injection, but SQL injection is, I think people would call it input sanitization, but the basic idea is, “Hey, if you don’t clean up your data, your data can be crafted to essentially become instructions and end up in the part of the machine where it’s eventually going to get executed and do bad stuff.”
And so control plane data/plane separation is all about how do I ensure that I have deterministic, repeatable rules to where I never mix data ever in a provable way? One more thing on that before we kind of go to the new world, if you think about buffer overflows, that’s like the classic example. A more modern example is like return oriented programming, but it’s a great example of how if I don’t manage control and data plane separation, my data can turn into control and essentially take control.
All right. In the world of agents, the executor, unless you’re building a more sophisticated agent, is actually the LLM. The LLM operates on this thing called the context. And so you think of the context as the packet of information you’re giving up to the LLM asking it to tell you how to live your life. And there’s lots of things in that context. There’s a system prompt, there’s your question answer history, or at least what it can fit in there. There’s the tools that you can execute, your current product, your current question. But there’s no separation in that machine of control and data. It’s all in there together and the LLM is going to come back and the LLM is going to say what to do.
So these problems are similar but different. They’re similar in that it is a control and data separation problem, but they’re different in our previous world because SQL can be defined syntactically in a deterministic way, because program structure can be defined in a deterministic way. There are deterministic solutions to input sanitization. There are deterministic solutions to essentially a lot of these control and data problems. In the current architecture that people run with agents, there is no solution that is deterministic. There are only probabilistic solutions.
Now, if we look to the future, I actually do have hope that we can kind of maybe introduce tiers and multi LLM tiers where we are kind of separating these things out a bit more, but that’s longer than we have the time to talk through about today. I would argue the problem is the same, but the way you’re going to be able to solve it right now is very, very different. And again, I would say the first step to solving this problem and your highest leverage is through understanding, and identity is where you establish that understanding. What user on what machine is act running with what agent, on what machine with what posture, with what authorizations? And if you’ve established this level of agent identity, then this now will allow you to actually track the data flow. And so whether you’re coming at this from a cybersecurity perspective or a defense perspective or whatnot, I think everybody would recognize the first step in any sort of system or operations is understanding, just understand what’s happening, develop some sort of mental model.
Well, if I’ve established this identity cross-cutting layer on my agent, I can actually now track data and I can track the data permissions and I can metaphorically, I can think of it as like a paint mixing exercise. I have certain colors of paints that are high risk. I have certain colors of paint that are low risk. I may allow paint mixing, but when paint mixing happens, I have to assume essentially the worst case of those things that then become mixed and then kind of treat them appropriately.
This is also why I feel optimistic of like future agent architectures that start to separate these things out and maybe LLMs that are fine-tuned specifically for some of these controlled data separation style problems. But the first step in what everyone can do right now is actually understand what are your identities and permissions actually flowing through your agents. And then number two, to contain them, right? So it’s absolutely possible to contain them. If you don’t trust the LLM provider, you can actually run private models either on your own or you have your things like Bedrock and Google and Amazon’s versions of those services.
Well, if you’re concerned about data mixing, then essentially you get to decide what this actually gets to do. You can actually enforce policy on tool execution. If you have any question about certain tools, handling of some of these datas, you can actually disallow them. And shameless plug, but like there are solutions that actually help you do this and even inject preferred tools. Like maybe I don’t want you to use this tool, I want you to use this tool instead, but I don’t want to disrupt you, the developer or you the finance person. I want you to be able to kind of have at your day.

John Richards:
I like this paint analogy. I’m sitting there thinking through… So if I’m understanding this kind of correctly, as the data transforms, let’s say red indicates a higher risk for some… And you’ve got some blue data and as they come together and you’re like, your finance analogy, okay, and now I’ve got a new report. But the idea would be like if any, I guess red, any high risk data is in there, that new report is now classified as that high risk. It’s colored with that, if you will, to use the metaphor. So that way you’re always knowing what’s the most sensitive piece that this piece of data, even it’s a completely new type of data or content, it’s still aligned to where it came from.

Jasson Casey:
Exactly. So think about it like in biology or chemistry. Well, maybe we don’t have to go there. Maybe we just talk about the concept of like one-way doors and two-way doors. A one-way door is a decision that you can go through that’s really, really hard to back out of, whereas a two-way door is a decision’s really, really easy. And LLM is you’re feeding it control and data essentially in one context and it’s giving you things back and it’s going to summarize stuff based on everything that you’ve given it.
And so you can’t track precisely the way you used to be able to. All of the unique threads that went into the answer that it’s giving you back, it’s just not how these systems work, but you can remember all of the unique threads of authorizations and entitlements that you actually sent in.
And so you can actually compute, just like in chemistry, we don’t pour back into the reagent bottle. We only pour from the reagent bottle into the beaker. Same thing, you could treat this LLM in a reaction as like, I only go in one direction, I don’t go back in the other if I’m actually focused on the risk that data one could inform a decision even though data two is lower and I’m asking it to answer it on data two. And the decision that comes back may reveal something of data one. Whether I want to reveal that or not is a decision to be deferred to later, but understanding that data one contributed to that decision and that someone needs to exercise judgment on this at some point later, you’ve got to remember that. You’ve got to track that. That needs to be auditable. And that’s something that you can actually do today.
And I also believe, by the way, that this is going to be the bedrock for us coming up with the next generation agent architecture where we are kind of reestablishing some sort of controlled data separation. By the way, there’s a field in computer science and compilers called data flow analysis where they do this. This is kind of how compilers work in the backend to do kind of optimizations. They do it deterministically. So again, similar but different. But this is one of the exciting areas of development that I think is actually happening in advanced agents. But for people using agents today, establishing agent identity and establishing this basic visibility and tracking authorizations and entitlements, that kind of understand mixing, that is the most impactful, powerful thing that you can do today to understand what’s going on, to keep your business on the right side of your customers and your compliance regimes and not slow down.

John Richards:
Now do you see the main threats coming from the people, like the users interacting with this and ejecting there? Or is it from third parties or assets that you’re pulling in that’s like a risk as well?

Jasson Casey:
I think it’s going to come from everywhere. So look, you’ve got adversaries using the same tools to look better, to sound better, to impersonate better, et cetera. The tools themselves are imperfect. So like if you use Claude Code, Claude Code uses something called the device code flow to actually authorize a terminal. It’s inherently phishable and it’s trivial to actually steal the access token from that process. And so I don’t know of any documented cases in the wild. I have to believe that’s actually happening because there’s so many phish kits that are just focused on that sort of thing, like Evilginx.
We know prompt injection is happening in the wild. We’ve got all these examples that are hilarious and funny. The guy updating his LinkedIn with Unicode that basically says, send me cookie recipes. And he starts getting emails with cookie recipes. So if that’s what we’re seeing, what are we not seeing that’s following the same sort of template?
A lot of people, when they think about prompt injection, they think about… You were talking about sanitizing input earlier with the SQL example. The reality is prompt injection can happen directly like that, but it can also happen indirectly. Remember, prompt injection is about something arriving in that context that you’re sending to the LLM in that agent loop. But remember, the agent is packing that context with all kinds of things. It’s packing that context with what it pulled back from APIs, from MCP services, from local and remote tool execution. So prompt injection can come not just from the user prompt, but from any of these data sources.
And also remember, when you’re looking, that prompt can be distributed. So it’s early, so I’m forgetting the precise word, but there’s a technical word for sometimes when I actually assemble and exploit, I actually assemble it in parts and it’s not even recognizable as the exploit until I’ve got everything together.
Prompt injection can work in the same way. If I can time and assemble my prompt by just engineering a part to return from the rag and to engineer a part to return from the API or over time, remember, it’s going to accumulate in that context window. And that accumulation may or may not trigger the weights and the model on the next execution or the next LLM execution. So prompt injection is not a simple problem and it’s not trivially solved by just sanitizing your user inputs.

John Richards:
Yeah. No, that was a great analysis of that. I hadn’t considered that growing building force you can have, but you could see that in your own, “Oh, I’ve done something I didn’t even realize. The way I was responding impacted things.”

Jasson Casey:
One of the things I want to do, it’s not really related to our product, but I think it would be really, really useful for people to visualize this concept. We’re going to produce a Claude plugin that basically does visualization of that concept that I just described so you can kind of see the color mixing. Because again, the first step to a lot of these solutions in my mind is just understanding. If you and I were to start building our own agent, these things start to become obvious to us pretty quickly, but there’s a lot of people that are administering agents or using agents or running agents and they don’t necessarily have that time to experience the fundamentals. And so I really think trying to visualize this for folks to where they can kind of see how their own data accretes in the context over time would be a powerful way to show that it’s not really just about user input to the system. It really is about that full cycle.

John Richards:
Yeah. And education’s always been such a huge part of security and building a culture around that. Now, you teed this up a little earlier mentioning like, “Hey, there’s tools out here to help you with this, not specifically just objections, but across agent identity and all of this.” Tell me about what Beyond Identity is doing in this space. How can people get some help here who are feeling overwhelmed?

Jasson Casey:
So Beyond Identity, we’re an identity security company. We’re here to basically solve your identity security problems. We plug into the back of your existing identity providers. It doesn’t matter if it’s Okta, Intra, King, OneLogin, you name it, Shibboleth. And what we do is we kind of attack that device identity, user identity, device posture, and agent identity problem.
So historically, before the big AI rush, we were mostly known for essentially stopping incidents and breaches in enterprises. Like you plug us into your Okta, you plug us into your Intra, and we basically eliminate 80% of those security incidents. Actually, I’m doing a podcast later with a customer today where he’s literally just walking through his security incidents and talking about the before and after.

John Richards:
Wow.

Jasson Casey:
What we’ve moved into over the last year is a lot more non-human identity use cases. And at the very early part of the podcast, I was talking about, “Look, the fundamentals are the fundamentals.” A user identity can be for a human, but it can also be for an automated process. It can be for a service. Identities always execute on machines. Machines always have operating systems. And so if you solve these things right, your solution ends up being multi-use and it works really well, not just for user access, but it can actually work for API access.
Turns out it can also work for a lot of non-human cases, and that’s where the AI and the drone use cases started to get developed this year as well. Turns out the problems are actually very, very similar in all of these cases. How do I know the right software with the right user on the right machine with the right posture has the right permissions for the moment in time that they’re allowed to have that and nothing else?

John Richards:
Yeah. So where is the best place for somebody to go who wants to learn more, follow up on this? What’s the best place?

Jasson Casey:
Beyondidentity.com is we’re the best place to learn about our products. If you’re really just interested in the AI products, I would go to beyondidentity.ai. That’s where we’re actually talking about some of the more advanced stuff that I was describing today. You can hit me up on LinkedIn or on X. I’m on both of those as just my name, Jasson Casey. And yeah, we’re pretty easy to get ahold of.

John Richards:
Well, awesome. Jasson, thank you so much for coming on here. This has been so informative. I learned a ton, so thank you very much.

Jasson Casey:
Yeah, thanks for having me.

John Richards:
This podcast is made possible by CyberProof, a leading, co-managed security services provider, helping organizations manage cyber risk through advanced threat intelligence, exposure management, and cloud security. From proactive threat hunting to manage detection and response, CyberProof helps enterprises reduce risk, improve resilience, and stay ahead of emerging threats. Learn more at cyberproof.com.
Thank you for tuning in to Cyber Sentries. I’m your host, John Richards. This has been a production of TruStory FM, audio engineering by Andy Nelson, music by Ahmed Seghi. You can find all the links in the show notes. We appreciate you downloading and listening to this show. Take a moment and leave a like and review. It helps us to get the word out. We’ll be back March 4th right here on Cyber Sentries.

Dive deep into AI’s accelerating role in securing cloud environments to protect applications and data.