
Why do output tokens cost 5x more than input tokens?

Posted by ani17 | 2 hours ago | 2 comments

ani17 2 hours ago

Author here. I wanted to understand what vLLM and llama.cpp are actually doing under the hood, but the codebases are massive, so I wrote a stripped-down version from scratch to see the core ideas without the production complexity.

Code: https://github.com/Anirudh171202/WhiteLotus

lazyMonkey69 2 hours ago

I think the paged attention part is a bit oversimplified. Nice read otherwise!
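For readers unfamiliar with the term: the core idea behind paged attention is to store the KV cache in fixed-size physical blocks and map each sequence's logical token positions onto them through a block table, so memory need not be contiguous or pre-reserved. The sketch below is illustrative only (block size, class, and method names are made up for this example and are not vLLM's actual API):

```python
BLOCK_SIZE = 4  # tokens per physical block (illustrative choice)

class PagedKVCache:
    """Toy block-table bookkeeping; real caches store K/V tensors per slot."""

    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))  # pool of free physical block ids
        self.block_tables = {}               # seq_id -> list of physical block ids

    def append_token(self, seq_id, pos):
        """Map logical token position `pos` of a sequence to a physical slot."""
        table = self.block_tables.setdefault(seq_id, [])
        if pos % BLOCK_SIZE == 0:            # crossed a block boundary: allocate
            table.append(self.free.pop())
        block = table[pos // BLOCK_SIZE]
        return block, pos % BLOCK_SIZE       # (physical block, offset within it)

cache = PagedKVCache(num_blocks=8)
slots = [cache.append_token("seq0", i) for i in range(6)]
# tokens 0-3 land in one block; token 4 triggers a fresh allocation
```

The point is that blocks are handed out on demand, so memory is wasted only within the last, partially filled block of each sequence rather than in a large contiguous reservation.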