Every Component Passed Review — So How Did the Agent Exfiltrate Everything?

Organizations are rolling out Copilot, custom agents, and MCP-based tool integrations. Their security teams keep doing what they’ve always done: decompose the system into components, assess each one, check the boxes. The problem is that agentic AI attacks don’t stay inside those boxes. A retrieved document biases the planner, the planner picks the wrong tool, the tool acts on stale permissions, a second agent trusts the output without verification. We’ve seen this play out in real incidents: zero-click prompt injection in enterprise copilots, indirect data exfiltration through tool chains. Every component passes its security review. The attack path between them does not.

This talk introduces a five-zone decomposition for agentic AI architectures: input surfaces, planning and reasoning, tool execution, memory and state, and inter-agent communication. These five zones describe where attacks enter the agent loop and how they cross trust boundaries that traditional threat models treat as separate concerns.

I walk through three scenarios: RAG pipeline poisoning, tool-integration supply-chain attacks via MCP (Model Context Protocol), and multi-agent goal cascades. For each, I show how to trace cross-zone attack paths and build attack trees that capture the propagation your current reviews miss. Each scenario maps to OWASP Top 10 for LLM and Agentic AI Applications controls, with concrete mitigations.

You leave with a seven-step methodology, a threat-zone mapping template, a cross-zone attack-path checklist, and worked attack trees: artifacts your team can apply to its own agentic AI deployments the following week.

Standard security reviews look at agentic AI components one at a time. Real attacks chain across the trust boundaries between retrieval, planning, tool execution, memory, and inter-agent communication. This talk presents a five-zone decomposition and a seven-step methodology for tracing cross-boundary attack chains in agentic AI systems. Three worked scenarios (RAG poisoning, MCP tool-integration supply-chain attacks, multi-agent cascades) come with attack trees, mapping templates, and OWASP-aligned mitigations you can apply to your own deployments.

Key takeaways:

  • A five-zone decomposition that extends existing threat modeling practice to agentic AI architectures
  • Worked cross-zone attack paths grounded in real-world attack patterns
  • A seven-step methodology and ready-to-use templates to find attack chains your current reviews miss
  • Agentic AI attack patterns mapped to OWASP controls with concrete mitigations

Target audience: Security architects, blue team leads, and security managers evaluating or deploying agentic AI systems

Level: Intermediate–Advanced

About the Speaker