We just shipped Custom AI for CodeCritic on Pro and Enterprise plans.
If you want AI code review to run on your own LLM endpoint instead of our default platform models, you can now connect any public HTTPS API that speaks the OpenAI-compatible chat contract (/v1/chat/completions). That matters for teams that care about vendor choice, predictable token economics, or routing reviews through an approved corporate model.
What you get
BYO LLM for code review - point CodeCritic at your provider’s base URL, pick a model id, and store your API key securely in Settings → Integrations (encrypted on our side).
Same product surface - paste-in reviews, GitHub PR flows, the API, and automation all keep working; when Custom AI is active and valid, reviews run against your endpoint and are metered differently from the standard platform review quota (see live plan details in-app).
Enterprise-friendly story - a good fit when policy says “self-hosted or contracted LLM only,” as long as the endpoint is reachable over HTTPS and compatible with the OpenAI-style chat API.
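For reference, the OpenAI-compatible contract mentioned above boils down to a POST to /v1/chat/completions with a JSON body of model id plus messages. A minimal sketch of what such a request payload looks like (the base URL and model id here are placeholders, not CodeCritic defaults):

```python
import json

# Placeholder values -- substitute your provider's base URL and model id.
BASE_URL = "https://llm.example.com"  # must be a public HTTPS endpoint
MODEL_ID = "my-org/code-review-model"

def build_review_request(diff: str) -> dict:
    """Build an OpenAI-style chat completion payload for a code review."""
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": "You are a code reviewer."},
            {"role": "user", "content": f"Review this diff:\n{diff}"},
        ],
    }

payload = build_review_request("- x = 1\n+ x = 2")
print(json.dumps(payload, indent=2))
```

Any gateway or self-hosted server that accepts this shape at `{BASE_URL}/v1/chat/completions` and returns the standard chat-completion response should work.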
Who it’s for
Developers evaluating AI-powered code review tools with flexible LLM backends
Teams on Pro or Enterprise who already standardize on OpenAI-compatible gateways (many hosted and cloud providers expose this shape)
Learn more
Product overview and plan comparison: Features
How the workflow fits together: How it works
App home: code-critic.com
Quick note on scope (so expectations stay clear)
Custom AI targets public HTTPS, OpenAI-compatible endpoints. Private networks, raw HTTP, and localhost are out of scope for this release - that keeps the integration safe and supportable at scale.
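The scope rule above can be checked mechanically. A rough sketch of that kind of validation, assuming a URL-based check (our actual checks may differ, and a full version would also exclude private address ranges like 10.x and 192.168.x):

```python
from urllib.parse import urlparse

def is_supported_endpoint(url: str) -> bool:
    """Sketch: accept only public HTTPS endpoints (no raw HTTP, no loopback)."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False  # raw HTTP is out of scope
    host = (parsed.hostname or "").lower()
    if host == "localhost" or host.startswith("127.") or host == "::1":
        return False  # loopback is out of scope
    return True

print(is_supported_endpoint("https://llm.example.com/v1"))  # True
print(is_supported_endpoint("http://localhost:8000"))       # False
```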