About
Behind tools like GitHub Copilot are Large Language Models (LLMs), prompts, and tokens — but what do these terms actually mean in practice?
“LLMs, Prompts & Tokens: How Copilot Actually Works” breaks down the core building blocks that power AI coding assistants. This session explains how LLMs understand code, how prompts shape their behavior, and how tokens influence context, cost, and performance. Instead of treating Copilot as a black box, we’ll explore what’s really happening under the hood.
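To make the token idea concrete: models read prompts as tokens, not characters, and the token count drives both context-window usage and cost. A minimal sketch using the common rule-of-thumb approximation of roughly four characters per token for English text (real tokenizers such as BPE split on learned subword units, so actual counts vary by model):

```python
def approx_token_count(text: str) -> int:
    """Rough estimate of how many tokens a prompt consumes.

    Heuristic only: assumes ~4 characters per token, a common
    approximation for English. Actual counts depend on the
    model's tokenizer.
    """
    return max(1, round(len(text) / 4))

prompt = "Write a unit test for the parse_date helper."
print(f"~{approx_token_count(prompt)} tokens")  # rough estimate, not exact
```

A longer prompt means more tokens, which means less room in the context window for code suggestions and, for metered APIs, higher cost.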
Developers will gain a clear mental model of how Copilot generates suggestions, why it sometimes gets things wrong, and how to work with it more effectively. By understanding these fundamentals, you’ll be better equipped to write smarter prompts, interpret AI output, and use Copilot as a true coding partner — not just an autocomplete tool.