Hacker News

But do you use it locally? It seems to be more of a server-side product


Personally, I absolutely use it locally. I’m always trying different editors and tech, and it saves me from entering a multitude of API keys into each piece of software, on top of the other reasons already mentioned: being able to specify your own limits and avoid surprise charges if you want.

When I want to try a new editor, VS Code plugin, or other software, I only have to point it at my litellm proxy and I immediately have access to all the providers and models I’ve configured, with no extra setup. It’s like a locally hosted OpenRouter that doesn’t charge you for routing. I can select a different provider as easily as choosing the model in the software; switching from “openai/gpt-4o” to “groq/moonshotai/kimi-k2-instruct”, for example.
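As a sketch, a minimal litellm `config.yaml` covering those two providers might look like this (the `model_name` aliases and env-var names are illustrative, not required values):

```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: kimi-k2
    litellm_params:
      model: groq/moonshotai/kimi-k2-instruct
      api_key: os.environ/GROQ_API_KEY
```

Adding another provider is just another entry in `model_list`.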

It speaks both the litellm and OpenAI protocols, which makes it compatible with most software. Add on the ollama proxy mode and you can also serve ollama requests from software that doesn’t support specifying OpenAI’s base address but does support ollama (a not-uncommon situation). That combination covers most software.
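To make the “OpenAI-compatible” point concrete, here is a minimal sketch of what a client sends: the only things that change versus talking to OpenAI directly are the base URL and the key. The proxy address and virtual key below are assumptions, and the helper function is hypothetical.

```python
import json
import urllib.request

# Assumed local LiteLLM proxy address and a virtual key issued by it.
PROXY_BASE = "http://localhost:4000"
VIRTUAL_KEY = "sk-my-virtual-key"

def chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request against the proxy.
    Switching providers is just a different model string."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{PROXY_BASE}/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {VIRTUAL_KEY}",
            "Content-Type": "application/json",
        },
    )

# Same endpoint, same key: only the model string picks the provider.
req = chat_request("groq/moonshotai/kimi-k2-instruct", "hello")
```

Any tool that lets you override the OpenAI base URL produces requests of exactly this shape, which is why pointing it at the proxy is all the setup needed.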

So yes, to me it is absolutely worth running locally, and it’s as easy as editing a config file and starting a Docker container (or, if you prefer, a shell script that opens a venv and starts litellm).
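For reference, the Docker route is roughly a one-liner (image tag, port, and paths here follow the litellm docs, but treat them as a sketch and adjust to your setup):

```shell
# Mount your config, pass through provider keys, expose the proxy on 4000.
docker run -d \
  -v "$(pwd)/config.yaml:/app/config.yaml" \
  -e OPENAI_API_KEY -e GROQ_API_KEY \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-latest \
  --config /app/config.yaml
```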

The only drawback I’ve found so far is that not all providers accurately report their model information, so you sometimes have to configure models/pricing/limits manually in the config (about five lines of text that can be copy/pasted and edited). All the SOTA models come pre-configured and are kept relatively up to date, but one can expect updates to lag behind real pricing changes.
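Those “five lines” look roughly like this: a manual `model_info` block on the model entry (the per-token prices below are made up for illustration, not real Groq pricing):

```yaml
  - model_name: kimi-k2
    litellm_params:
      model: groq/moonshotai/kimi-k2-instruct
    model_info:
      input_cost_per_token: 0.000001   # illustrative, not real pricing
      output_cost_per_token: 0.000003
      max_input_tokens: 131072
```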

The UI is only necessary if you want to set up API key/billing restrictions, which requires a database, but that is rather trivial with Docker as well.
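A compose file for that setup is a short sketch along these lines (service names, credentials, and image tags are assumptions; the proxy reads the database via `DATABASE_URL`):

```yaml
services:
  litellm:
    image: ghcr.io/berriai/litellm:main-latest
    ports: ["4000:4000"]
    environment:
      DATABASE_URL: postgresql://llmproxy:dbpassword@db:5432/litellm
      LITELLM_MASTER_KEY: sk-1234   # placeholder admin key
    volumes:
      - ./config.yaml:/app/config.yaml
    command: ["--config", "/app/config.yaml"]
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: llmproxy
      POSTGRES_PASSWORD: dbpassword
      POSTGRES_DB: litellm
```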


It’s a server-side proxy: instead of, e.g., the OpenAI URL, you point your AI tool at the URL of the LiteLLM proxy and use its virtual keys with budget limits, LLM model restrictions, etc. - the features LLM providers will not give you, because they might save you money ;)
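As a sketch of what creating such a virtual key looks like on the wire, it’s a single POST to the proxy’s `/key/generate` endpoint, authorized with the admin master key. The proxy address and master key below are assumptions, and the helper is hypothetical.

```python
import json
import urllib.request

# Assumed local proxy address and admin "master key".
PROXY_BASE = "http://localhost:4000"
MASTER_KEY = "sk-1234"

def make_key_request(max_budget: float, models: list[str]) -> urllib.request.Request:
    """Build a /key/generate request: the returned virtual key is capped
    at max_budget dollars and restricted to the listed models."""
    body = json.dumps({"max_budget": max_budget, "models": models}).encode()
    return urllib.request.Request(
        f"{PROXY_BASE}/key/generate",
        data=body,
        headers={
            "Authorization": f"Bearer {MASTER_KEY}",
            "Content-Type": "application/json",
        },
    )

# A key capped at $5, valid only for the gpt-4o alias.
req = make_key_request(5.0, ["gpt-4o"])
```

You then hand the returned key to a tool or a person; the proxy enforces the budget and model list on every request made with it.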




