The LLM infrastructure stack.
From scanning to access control to runtime — everything you need to build and scale AI systems.
llmhut is an open ecosystem for building production-grade LLM systems.
It brings together everything you need across the lifecycle:
- Scan your codebases for AI usage and risks
- Manage API keys and access securely
- Control how requests are routed and executed
| Project | Description |
|---|---|
| ai-scanner | Audit your codebase for LLM usage, frameworks, and exposed keys |
| KeyGate | Per-developer API key management with vendor-level control |
| gateway | High-performance LLM routing and execution layer |
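To make the Discovery idea concrete, here is a generic sketch of the kind of check a scanner like ai-scanner performs (this is illustrative only, not the tool's actual implementation): pattern-matching source text for provider SDK imports and key-shaped strings.

```python
import re

# Illustrative patterns only; a real scanner uses far more precise
# rules. The key prefixes below ("sk-", "sk-ant-") are public formats.
PATTERNS = {
    "openai_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "anthropic_key": re.compile(r"sk-ant-[A-Za-z0-9-]{20,}"),
    "openai_import": re.compile(r"^\s*import openai", re.M),
}

def scan_source(text: str) -> list[str]:
    """Return the names of all patterns found in a source string."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]
```

A finding like `["openai_key", "openai_import"]` tells you a file both imports an LLM SDK and embeds something that looks like a live credential.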
Building with LLMs today is fragmented:
- No visibility into what’s being used
- Shared API keys across teams
- No control over usage or cost
llmhut brings structure to this chaos.
| Layer | Capability |
|---|---|
| Discovery | Scan codebases and detect LLM usage |
| Access | Secure, scoped API key management |
| Runtime | Fast, provider-agnostic request execution |
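The Access layer can be sketched as a small data model: per-developer keys scoped to specific vendors, with requests allowed only when the key covers that vendor. This is a hypothetical illustration of the concept, not KeyGate's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class DevKey:
    """A per-developer key with vendor-level scopes (illustrative model)."""
    owner: str
    vendors: set[str] = field(default_factory=set)

def authorize(key: DevKey, vendor: str) -> bool:
    """Allow a request only if the key is scoped to that vendor."""
    return vendor in key.vendors

alice = DevKey(owner="alice", vendors={"openai", "anthropic"})
```

Scoping at the vendor level means revoking one developer's access to one provider never disturbs anyone else's keys.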
From development to production — one cohesive stack.
Minimal setup. Works with existing SDKs. No lock-in.
OpenAI, Anthropic, Gemini, Groq, Bedrock, and more.
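One common way a runtime layer stays provider-agnostic (shown here as a generic sketch, not gateway's actual routing logic) is to dispatch on a provider prefix in the model name. The handler names and the `provider/model` convention below are assumptions for illustration.

```python
from typing import Callable

# Hypothetical per-provider handlers; a real gateway would issue
# HTTP calls to each provider's API here.
def call_openai(model: str, prompt: str) -> str:
    return f"[openai:{model}] {prompt}"

def call_anthropic(model: str, prompt: str) -> str:
    return f"[anthropic:{model}] {prompt}"

ROUTES: dict[str, Callable[[str, str], str]] = {
    "openai": call_openai,
    "anthropic": call_anthropic,
}

def route(model: str, prompt: str) -> str:
    """Dispatch 'provider/model' requests to the matching handler."""
    provider, _, name = model.partition("/")
    handler = ROUTES.get(provider)
    if handler is None:
        raise ValueError(f"unknown provider: {provider}")
    return handler(name, prompt)
```

Because callers only ever see `route()`, swapping or adding providers is a one-line change to the routing table rather than an application rewrite.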
Fully open-source, extensible, and community-driven.
Our mission: to build the open infrastructure layer for LLM-native systems.
Just as we have:
- Nginx for web traffic
- Redis for caching
- Kafka for streaming
llmhut aims to become the standard stack for AI applications.
We’re building this in the open.
- ⭐ Star the repos
- 🐛 Report issues
- 🔧 Open PRs and extend the stack to fit your use cases
Built with ❤️ for the AI developer community
