February 25–27, 2026
Montréal, Canada

Instrument and Govern AI Agents Before They Go Rogue

Agents need more than prompts; they need supervision. As teams move from experiments to production, they discover that agents are unpredictable, opaque, and prone to silent failure. Observability isn’t optional: it keeps agents aligned and users safe. This talk explores LLM observability: what to track, how, and why. We’ll compare observability tools and show how GenAI semantic conventions enable structured tracing at scale.
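As a rough illustration of what GenAI semantic conventions standardize, a single LLM call can be recorded with a common set of span attributes. This is a minimal pure-Python sketch, not a real SDK API; the helper name and values are illustrative, while the attribute keys follow the OpenTelemetry GenAI semantic conventions.

```python
# Sketch: structured trace attributes for one LLM invocation, using
# attribute names from the OpenTelemetry GenAI semantic conventions.
# llm_span_attributes is a hypothetical helper for illustration only.

def llm_span_attributes(model: str, input_tokens: int, output_tokens: int) -> dict:
    """Build span attributes describing a single chat-completion call."""
    return {
        "gen_ai.operation.name": "chat",          # kind of GenAI operation
        "gen_ai.request.model": model,            # model the caller requested
        "gen_ai.usage.input_tokens": input_tokens,   # prompt token count
        "gen_ai.usage.output_tokens": output_tokens, # completion token count
    }

attrs = llm_span_attributes("gpt-4o", input_tokens=812, output_tokens=64)
print(attrs["gen_ai.request.model"])  # gpt-4o
```

Because every tool that honors these conventions emits the same keys, traces from different agents and frameworks can be aggregated and queried uniformly.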


Carl Lapierre

Osedea

Carl is a Lead AI Engineer at Osedea, driving innovation through intelligent systems across healthcare, manufacturing, and mining. With over a decade of experience in software development, he leads high-impact AI initiatives focused on agentic systems, context-engineering architectures, and large language model (LLM) integration, building solutions that enhance decision-making, automation, and adaptability in complex environments.
