Conf42 Large Language Models (LLMs) 2025 - Online

- premiere 5PM GMT

Prompt Injection Attacks: Understanding and Mitigating Risks in LLM-Powered Web Apps

Abstract

AI assistants are everywhere and are a potential security nightmare. That’s the reality we’re facing with prompt injection attacks. With live coding, my talk will arm developers with the knowledge to defend against these AI-era vulnerabilities. It’s a must-see for any dev working with AI.

...

Jorrik Klijnsma

Senior Front-end Engineer @ Sopra Steria

Jorrik Klijnsma's LinkedIn account Jorrik Klijnsma's twitter account



Join the community!

Learn for free, join the best tech learning community for a price of a pumpkin latte.

Annual
Monthly
Newsletter
$ 0 /mo

Event notifications, weekly newsletter

Delayed access to all content

Immediate access to Keynotes & Panels

Community
$ 8.34 /mo

Immediate access to all content

Courses, quizes & certificates

Community chats

Join the community (7 day free trial)