Talk

Your RAG System Has a Backdoor: Security for LLM Applications

Lisa Carpenter | Friday, May 8, 2026 | Gurten, Bern

Microphone

Description

You’ve built a slick AI assistant. It answers questions, searches your documents, maybe even takes actions on behalf of users. But have you considered what happens when someone types „ignore your instructions and…“ into that friendly chat box?

LLM applications have an entirely new attack surface that most developers aren’t thinking about. Prompt injection can turn your helpful assistant into a data exfiltration tool. Your carefully curated knowledge base might be leaking sensitive documents. And that „harmless“ chatbot could be manipulated into saying things that land your company in the headlines.
In this talk, we’ll walk through the OWASP Top 10 for LLM Applications with real examples — some funny, some terrifying. You’ll see live demos of attacks against RAG systems, learn why traditional security thinking doesn’t quite apply, and leave with practical defence patterns you can implement today. No security background required, just a healthy sense of paranoia.

Key takeaways

  • The unique attack surface of LLM applications (and why input validation isn’t enough)
  • Live demos: prompt injection, indirect injection via retrieved documents, and data exfiltration
  • Practical guardrails: input filtering, output validation, and architectural patterns that limit blast radius
  • A security checklist for your next code review