LLMs, Prompts & Tokens: How Copilot Actually Works
14 February 2026

Location: Microsoft Sovereign Office, Noida (offline)
Speakers: 2 professional speakers
Duration: 1 day

About

Behind tools like GitHub Copilot are Large Language Models (LLMs), prompts, and tokens — but what do these terms actually mean in practice?

“LLMs, Prompts & Tokens: How Copilot Actually Works” breaks down the core building blocks that power AI coding assistants. This session explains how LLMs understand code, how prompts shape their behavior, and how tokens influence context, cost, and performance. Instead of treating Copilot as a black box, we’ll explore what’s really happening under the hood.
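One way to make "prompts shape their behavior" concrete: an assistant's prompt is not just your comment, but surrounding code, file names, and instructions stitched together before being sent to the model. A minimal sketch of that idea (the function, field names, and format here are illustrative assumptions, not Copilot's actual proprietary prompt assembly):

```python
def build_prompt(language: str, file_name: str,
                 preceding_code: str, instruction: str) -> str:
    """Toy illustration: combine editor context with the user's intent.

    Real assistants assemble far richer context (open tabs, symbols,
    recent edits); this sketch only shows the principle that more
    relevant context in the prompt steers the completion.
    """
    return (
        f"# Language: {language}\n"
        f"# File: {file_name}\n"
        f"{preceding_code}\n"
        f"# Task: {instruction}\n"
    )

prompt = build_prompt(
    language="Python",
    file_name="utils.py",
    preceding_code="def slugify(title: str) -> str:",
    instruction="lowercase the title and replace spaces with hyphens",
)
print(prompt)
```

Change any one ingredient (the file context, the task wording) and the model sees a different prompt, which is why the same question asked two ways can yield different suggestions.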

Developers will gain a clear mental model of how Copilot generates suggestions, why it sometimes gets things wrong, and how to work with it more effectively. By understanding these fundamentals, you’ll be better equipped to write smarter prompts, interpret AI output, and use Copilot as a true coding partner — not just an autocomplete tool.
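The token arithmetic behind "context, cost, and performance" can be sketched in a few lines. A common rule of thumb for English-like text is roughly four characters per token; the window size and output reservation below are illustrative assumptions, not Copilot's actual figures:

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token for English-like text.
    Real tokenizers (e.g. BPE-based) vary with language and content."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, context_window: int = 8192,
                    reserved_for_output: int = 1024) -> bool:
    """Check whether a prompt leaves room for the model's reply.
    Both parameter defaults are illustrative assumptions."""
    return estimate_tokens(prompt) <= context_window - reserved_for_output

sample = "def slugify(title):\n    ..."
print(estimate_tokens(sample))   # rough count, not an exact tokenization
print(fits_in_context(sample))
```

This is why very long files or conversations eventually push earlier context out of the window: every token of input competes with the space reserved for the model's output.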

Benefits

  • Learning
  • Mentorship
  • Networking
  • Industry connections

Date: 14 February
Timing: 11:00 AM - 4:00 PM
Location: Microsoft Sovereign Office, Noida (offline)
Speakers: 2 professional speakers
Duration: 1 day

Speakers

Unnati Chhabra
AI Engineer, Grid Dynamics

Sairam Kaushik
Software Engineer, Goolluck Consulting

Schedule

11:00 AM - 11:20 AM
Copilot as an interface to LLMs

11:20 AM - 12:15 PM
Tokens, context & limits seen in-editor

12:15 PM - 1:15 PM
Prompt patterns applied inside Copilot

1:15 PM - 2:00 PM
Hands-on: Improve Copilot outputs with better prompts

2:00 PM - 3:00 PM
Fix hallucinations & ambiguity

3:00 PM - 4:00 PM
Best practices