# Releases · Stackbilt-dev/llm-providers
## v1.1.0 — Multi-Modal: Image Generation

### Image Generation Provider
@stackbilt/llm-providers is now multi-modal — text + image inference under one package.
### New: `ImageProvider`

```ts
import { ImageProvider } from '@stackbilt/llm-providers';

const img = new ImageProvider({
  cloudflareAi: env.AI,
  geminiApiKey: env.GEMINI_API_KEY,
});

const result = await img.generateImage({
  prompt: 'a mountain landscape at sunset',
  model: 'flux-dev',
});
// result.image: ArrayBuffer, result.responseTime, result.provider
```

### Built-in Models
| Model | Provider | Use Case |
|---|---|---|
| sdxl-lightning | Cloudflare | Fast drafts, free tier |
| flux-klein | Cloudflare | Balanced quality/speed |
| flux-dev | Cloudflare | Highest CF quality |
| gemini-flash-image | Gemini | Text rendering capable |
| gemini-flash-image-preview | Gemini | Latest preview model |
Extracted from the img-forge production codebase. Battle-tested response normalization handles all Workers AI return formats.
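Workers AI image models return results in several shapes depending on the model. The package's actual normalizer isn't shown here, but a minimal sketch of the idea, assuming three hypothetical shapes (raw bytes, a base64 string, and an object wrapping a base64 `image` field), might look like:

```ts
// Illustrative sketch only — not the package's actual implementation.
// Normalizes assumed Workers AI return shapes to a single ArrayBuffer.
function normalizeImage(result: unknown): ArrayBuffer {
  if (result instanceof ArrayBuffer) return result;
  if (result instanceof Uint8Array) {
    // Copy into a standalone ArrayBuffer so callers own the bytes.
    return result.slice().buffer;
  }
  const base64 =
    typeof result === 'string'
      ? result
      : typeof (result as { image?: unknown })?.image === 'string'
        ? (result as { image: string }).image
        : null;
  if (base64 !== null) {
    const binary = atob(base64);
    const bytes = new Uint8Array(binary.length);
    for (let i = 0; i < binary.length; i++) bytes[i] = binary.charCodeAt(i);
    return bytes.buffer;
  }
  throw new Error('Unrecognized image response shape');
}
```

Collapsing the shapes at the provider boundary means downstream code (caching, `Response` bodies, storage writes) only ever sees an `ArrayBuffer`.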
Full changelog: CHANGELOG.md
## v1.0.0 — Production Release

First stable release. Production-tested in the AEGIS cognitive kernel since v1.72.0.

### Highlights
- Zero runtime dependencies — supply chain security by design
- 5 providers: OpenAI, Anthropic, Cloudflare Workers AI, Cerebras, Groq
- `LLMProviders.fromEnv()` — one-line multi-provider setup
- Graduated circuit breakers — automatic failover with half-open probe recovery
- CreditLedger — per-provider budget tracking with threshold alerts + burn rate projection
- npm provenance — every version cryptographically linked to its source commit
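The graduated-breaker idea above can be sketched as a small per-provider state machine: repeated failures open the breaker (the provider is skipped), and after a cooldown a half-open probe tests whether it has recovered. The class name, thresholds, and methods below are hypothetical, not the package's internal API:

```ts
// Hedged sketch of a circuit breaker with half-open probe recovery.
// `failureThreshold` and `cooldownMs` are illustrative, not actual defaults.
type BreakerState = 'closed' | 'open' | 'half-open';

class CircuitBreaker {
  private state: BreakerState = 'closed';
  private failures = 0;
  private openedAt = 0;

  constructor(
    private failureThreshold = 3,
    private cooldownMs = 30_000,
    private now: () => number = Date.now, // injectable clock for testing
  ) {}

  // A provider may be tried while closed, or as a probe once the cooldown
  // has elapsed on an open breaker.
  canRequest(): boolean {
    if (this.state === 'open' && this.now() - this.openedAt >= this.cooldownMs) {
      this.state = 'half-open'; // let a probe request through
    }
    return this.state !== 'open';
  }

  recordSuccess(): void {
    this.state = 'closed';
    this.failures = 0;
  }

  recordFailure(): void {
    this.failures++;
    // A failed half-open probe re-opens immediately; otherwise the breaker
    // opens only once the failure threshold is crossed.
    if (this.state === 'half-open' || this.failures >= this.failureThreshold) {
      this.state = 'open';
      this.failures = 0;
      this.openedAt = this.now();
    }
  }
}
```

Keeping one breaker per provider is what makes failover "graduated": an unhealthy provider is routed around while healthy ones keep serving, and it re-enters rotation only after a successful probe.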
### Install

```sh
npm install @stackbilt/llm-providers
```

### Quick Start
```ts
import { LLMProviders } from '@stackbilt/llm-providers';

const llm = LLMProviders.fromEnv(process.env);

const response = await llm.generateResponse({
  messages: [{ role: 'user', content: 'Hello!' }],
});
```

See README for full documentation.
See SECURITY.md for supply chain security policy.