February 25 to 27, 2026
Montréal, Canada

Rogue LLMs: Securing Prompts and Ensuring Persona Fidelity

This session demonstrates how to detect, analyze, and prevent prompt leaks and persona failures in large language models. Participants will learn proven techniques and tools for securing AI prompts, enforcing consistent bot behaviour, and integrating evaluation and threat modelling into real-world engineering workflows.
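One common detection technique in this space is canary-token checking: embed a unique marker in the system prompt and flag any model response that echoes the marker, or long verbatim runs of the prompt itself. The sketch below is a minimal, hypothetical illustration of that idea (the function name, canary string, and six-word threshold are assumptions for the example, not part of the session material):

```python
def contains_prompt_leak(response: str, system_prompt: str, canary: str) -> bool:
    """Flag a response that echoes the hidden canary token or a long
    verbatim fragment of the system prompt."""
    # Direct canary hit: the secret marker should never appear in output.
    if canary in response:
        return True
    # Verbatim-fragment check: any run of 6+ consecutive system-prompt
    # words appearing in the response suggests the prompt was leaked.
    words = system_prompt.split()
    for i in range(len(words) - 5):
        fragment = " ".join(words[i:i + 6])
        if fragment in response:
            return True
    return False
```

In practice a check like this would run alongside semantic-similarity and persona-consistency evaluations, since a paraphrased leak will slip past exact-match heuristics.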


Ben Dechrai

Ben Dechrai is a technologist with a strong focus on security and privacy. Known for his ability to distil complex technical concepts into engaging, digestible portions, Ben empowers developers through a deep understanding of design principles, security considerations, and coding practices. With over two decades of experience in software engineering, security, and architecture, Ben is a published author and has consulted for companies and investors across numerous industries.
