LLM token generation speed simulator.

Visualize how different tokens-per-second (t/s) rates feel when streaming text from an AI model. The simulation runs entirely client-side, so everything happens right in your browser.

Pick a speed and watch it stream.

Choose a preset or dial in your own rate, then start the simulation to feel the rhythm of token pacing.
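
A minimal sketch of how pacing like this can be driven in the browser, assuming a fixed rate and a plain `setInterval` timer; the names below are illustrative and not taken from the simulator's actual source:

```ts
// A minimal pacing sketch, assuming a fixed tokens-per-second rate and a
// plain setInterval timer. Names are illustrative, not the app's real API.
function streamTokens(
  tokens: string[],
  tokensPerSecond: number,
  onToken: (token: string) => void,
): () => void {
  const intervalMs = 1000 / tokensPerSecond; // time budget per token
  let index = 0;
  const timer = setInterval(() => {
    if (index >= tokens.length) {
      clearInterval(timer);
      return;
    }
    onToken(tokens[index]);
    index += 1;
  }, intervalMs);
  // Return a stop function so the stream can be cancelled mid-run.
  return () => clearInterval(timer);
}

// Example: stream a short sentence at 20 t/s into the console.
const stop = streamTokens(
  "the quick brown fox jumps over the lazy dog".split(" "),
  20,
  (token) => console.log(token),
);
```

Note that browsers clamp very short timer intervals to a few milliseconds, so a real implementation would likely batch several tokens per tick at high t/s rates rather than firing one callback per token.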

Note: very large rate values may lose precision, because JavaScript numbers can only represent integers exactly up to 2^53 - 1 (Number.MAX_SAFE_INTEGER).
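
As a rough illustration of that limit (an assumed guard, not the app's actual validation), a rate input can be checked before starting the stream:

```ts
// Assumed input guard, not the app's actual validation: JavaScript numbers
// represent integers exactly only up to Number.MAX_SAFE_INTEGER (2^53 - 1),
// so larger rate values cannot be stored precisely.
function isRepresentableRate(rate: number): boolean {
  return Number.isFinite(rate) && rate > 0 && rate <= Number.MAX_SAFE_INTEGER;
}

console.log(isRepresentableRate(40));                    // true
console.log(isRepresentableRate(9_007_199_254_740_993)); // false: past 2^53 - 1
```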