I got tired of juggling 5 different AI subscriptions with no transparency into how my tokens were actually being used. So I built VividLLM, a transparent command center for 35+ frontier models. It shows you exactly which model is responding, lets you watch its reasoning stream live, and keeps a real-time token counter right in the footer. Responses are text-only for now.
What makes it different:
📍35+ Frontier Models: No tab-switching. Toggle between the latest GPTs, Claude Sonnet 4.5, Llama 4 Scout, DeepSeek, Grok, Gemini, Mistral, and more, all in one click.
📍Token Pool Separation: Tokens are split into Casual and Pro pools. Casual models draw from Casual tokens; Pro and Web Search models draw from Pro tokens, so you can optimize your usage by model type. Each pool is further divided into Input and Output tokens.
📍8M Monthly Tokens for $15/mo: 8M tokens per month, split into 5M Casual Input / 1.5M Casual Output and 1M Pro Input / 500k Pro Output, plus 100 Web Searches (search tokens are deducted from the Pro pool).
📍Model Weight: Each model has a weight that scales its token cost. Use efficient models (0.5x) to double your token mileage, or switch to heavyweights (2x) when you need pure power.
📍Token Transfer System: You can transfer tokens between Input and Output within the same pool (Casual Input ↔ Casual Output, Pro Input ↔ Pro Output) after a conversion rate is applied.
📍Real-time Reasoning: Watch the model's full thought process unfold alongside the answer (when supported).
📍Midchat Model Change: Start a conversation with a fast model to brainstorm, then switch to a reasoning-heavy model (like GPT-5.2) to finalize the code, all in the same thread.
📍Context Window: Context windows range from 32k to 128k tokens, depending on the model in use.
🔒Data Encryption: Prompt text, AI responses, and AI reasoning are encrypted with AES-256-CBC before being saved to the database.
🗑Data Deletion: Clicking the delete chat option triggers a hard delete, not a recoverable soft delete.
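The pool split and model weights above boil down to simple accounting. Here's a minimal sketch; the pool structure and function names are my own illustration under assumed semantics, not VividLLM's actual backend:

```python
# Hypothetical sketch of the two-pool, weighted token accounting described above.
PLAN = {
    "casual": {"input": 5_000_000, "output": 1_500_000},
    "pro":    {"input": 1_000_000, "output":   500_000},
}

def deduct(balances, pool, direction, raw_tokens, model_weight):
    """Charge raw_tokens * model_weight against one pool/direction."""
    cost = int(raw_tokens * model_weight)
    if balances[pool][direction] < cost:
        raise ValueError("insufficient tokens in pool")
    balances[pool][direction] -= cost
    return cost

balances = {pool: dict(split) for pool, split in PLAN.items()}

# An efficient 0.5x model doubles mileage: 1,000 output tokens cost only 500.
assert deduct(balances, "casual", "output", 1_000, 0.5) == 500
# A 2x heavyweight charges double: 1,000 Pro input tokens cost 2,000.
assert deduct(balances, "pro", "input", 1_000, 2.0) == 2_000

# The advertised allocation adds up to the 8M monthly total.
assert sum(sum(split.values()) for split in PLAN.values()) == 8_000_000
```

The assertions double-check the advertised math: 5M + 1.5M + 1M + 0.5M = 8M, and the weight multiplies the raw token count before it hits the pool.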
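The Input ↔ Output transfer could look like the sketch below. The 0.75 conversion rate is purely a made-up example (the actual rate isn't stated above), and the function name is mine:

```python
def transfer(balances, pool, src, dst, amount, rate):
    """Move tokens within one pool: spend `amount` from src,
    credit floor(amount * rate) to dst. Rate is an assumed parameter."""
    if balances[pool][src] < amount:
        raise ValueError("insufficient tokens to transfer")
    balances[pool][src] -= amount
    credited = int(amount * rate)
    balances[pool][dst] += credited
    return credited

balances = {"casual": {"input": 5_000_000, "output": 1_500_000}}
# Convert 100k Casual Input into Casual Output at a hypothetical 0.75 rate.
got = transfer(balances, "casual", "input", "output", 100_000, 0.75)
assert got == 75_000
assert balances["casual"]["input"] == 4_900_000
assert balances["casual"]["output"] == 1_575_000
```

Keeping transfers within one pool (Casual ↔ Casual, Pro ↔ Pro) matches the constraint in the bullet above: the pools themselves never mix.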
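For the curious, AES-256-CBC encryption of a chat record can be sketched in a few lines with the third-party `cryptography` package. This is a generic illustration of the cipher named above, not VividLLM's actual implementation; the IV-prepended layout and key handling are my assumptions:

```python
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_aes256_cbc(key: bytes, plaintext: bytes) -> bytes:
    """AES-256-CBC with a fresh random IV prepended and PKCS7 padding."""
    iv = os.urandom(16)  # new IV per message; CBC must never reuse one
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + enc.update(padded) + enc.finalize()

def decrypt_aes256_cbc(key: bytes, blob: bytes) -> bytes:
    iv, ct = blob[:16], blob[16:]
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = dec.update(ct) + dec.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()

key = os.urandom(32)  # 256-bit key; a real app loads this from a secret store
blob = encrypt_aes256_cbc(key, b"prompt text")
assert decrypt_aes256_cbc(key, blob) == b"prompt text"
assert b"prompt text" not in blob  # ciphertext reveals nothing readable
```

Only the ciphertext blob would be stored in the database, so prompts, responses, and reasoning are unreadable without the key.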
The Solo Dev Promise:
No marketing team, no fancy office. Just me, my laptop, and a real commitment to building something useful.
What's next:
Token rollover, plus a BYOK (Bring Your Own Key) tier: plug in your own OpenRouter API key and keep using VividLLM's UI and reasoning stream for a lower flat platform fee.
Hey Peer Push! I'm the solo developer behind VividLLM. 🛠️ Happy to answer questions about anything, be it context windows, the backend, the BYOK roadmap, whatever you want to ask me 😄