How LLMs Work

Tokenization

Why your LLM sees 'ChatGPT' as 3 tokens, not 1 word

Your LLM doesn't read words. It reads tokens — subword fragments that determine what your model can understand, how much it costs per query, and why it sometimes misspells simple words.
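To see how a word gets split into subword fragments, here is a toy greedy longest-match tokenizer. The vocabulary is hand-picked for illustration; real tokenizers (like the byte-pair encodings used by GPT models) learn their vocabularies from data, but the splitting behavior is similar in spirit.

```python
# Toy vocabulary of subword pieces (hand-picked for this sketch;
# real BPE vocabularies have ~50k-200k entries learned from data).
VOCAB = {"Chat", "G", "PT", "token", "iz", "ation"}

def tokenize(text: str) -> list[str]:
    """Split text into the longest matching vocabulary pieces, left to right."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible piece starting at position i first.
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in VOCAB:
                tokens.append(piece)
                i = j
                break
        else:
            # No vocabulary piece matches: fall back to a single character.
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("ChatGPT"))       # ['Chat', 'G', 'PT'] — 3 tokens, not 1 word
print(tokenize("tokenization"))  # ['token', 'iz', 'ation']
```

The fallback branch matters: anything outside the vocabulary still gets encoded, just less efficiently, which is also why rare or oddly-spelled words consume more tokens than common ones.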

Why this matters

Token count directly controls your inference costs: a 4x increase in input tokens means roughly 4x the cost. Understanding tokenization is understanding your AI budget.
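The arithmetic is worth making concrete. The sketch below uses a hypothetical price of $2.50 per million input tokens (an illustrative rate, not any provider's actual pricing) to show that cost scales linearly with token count.

```python
# Back-of-the-envelope inference cost. The rate below is a hypothetical
# example, not any provider's real pricing.
PRICE_PER_MILLION_INPUT_TOKENS = 2.50  # USD, illustrative

def input_cost(num_tokens: int) -> float:
    """Cost in USD for a prompt of num_tokens input tokens."""
    return num_tokens / 1_000_000 * PRICE_PER_MILLION_INPUT_TOKENS

base = input_cost(1_000)    # a 1,000-token prompt
longer = input_cost(4_000)  # the same request at 4x the tokens
print(f"${base:.4f} -> ${longer:.4f}")  # 4x tokens, 4x cost
```

Because the relationship is linear, any change that shrinks token counts, such as a tokenizer that splits your domain's vocabulary more efficiently, reduces cost by the same factor.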