Releases: Stackbilt-dev/llm-providers

v1.1.0 — Multi-Modal: Image Generation

01 Apr 15:54

Image Generation Provider

@stackbilt/llm-providers is now multi-modal — text + image inference under one package.

New: ImageProvider

import { ImageProvider } from '@stackbilt/llm-providers';

const img = new ImageProvider({
  cloudflareAi: env.AI,
  geminiApiKey: env.GEMINI_API_KEY,
});

const result = await img.generateImage({
  prompt: 'a mountain landscape at sunset',
  model: 'flux-dev',
});
// result.image: ArrayBuffer, result.responseTime, result.provider
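Since result.image is an ArrayBuffer (per the comment above), a common next step is encoding it for transport or embedding. A minimal sketch; the toDataUrl helper is illustrative and not part of the package:

```typescript
// Encode a generated image (ArrayBuffer) as a base64 data URL.
// `toDataUrl` is an illustrative helper, not part of @stackbilt/llm-providers.
function toDataUrl(image: ArrayBuffer, mimeType = 'image/png'): string {
  const bytes = new Uint8Array(image);
  let binary = '';
  for (const b of bytes) binary += String.fromCharCode(b);
  // btoa is global in Workers and browsers; fall back to Buffer in Node.
  const base64 = typeof btoa === 'function'
    ? btoa(binary)
    : Buffer.from(bytes).toString('base64');
  return `data:${mimeType};base64,${base64}`;
}
```

In a Worker you could instead return the buffer directly with new Response(result.image, { headers: { 'Content-Type': 'image/png' } }).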

Built-in Models

Model                        Provider     Use Case
sdxl-lightning               Cloudflare   Fast drafts, free tier
flux-klein                   Cloudflare   Balanced quality/speed
flux-dev                     Cloudflare   Highest CF quality
gemini-flash-image           Google       Text rendering capable
gemini-flash-image-preview   Google       Latest preview model

Extracted from the img-forge production codebase. Battle-tested response normalization handles all Workers AI return formats.
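To illustrate the normalization idea in general terms (this is a sketch of the technique, not the package's actual internals): Workers AI image models can hand back binary in several shapes, and a normalizer coerces them all to one type. The RawImage union and normalizeImage function below are assumptions for illustration:

```typescript
// Illustrative sketch: coerce the shapes a Workers AI image model might
// return (ArrayBuffer, Uint8Array, or base64 string) into one ArrayBuffer.
// Not the package's actual implementation.
type RawImage = ArrayBuffer | Uint8Array | string;

function normalizeImage(raw: RawImage): ArrayBuffer {
  if (raw instanceof ArrayBuffer) return raw;
  if (raw instanceof Uint8Array) {
    // Copy so the result owns its buffer regardless of the view's offset.
    return raw.slice().buffer;
  }
  // Otherwise assume a base64-encoded string.
  const binary = typeof atob === 'function'
    ? atob(raw)
    : Buffer.from(raw, 'base64').toString('binary');
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) bytes[i] = binary.charCodeAt(i);
  return bytes.buffer;
}
```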

Full changelog: CHANGELOG.md

v1.0.0 — Production Release

01 Apr 14:12

First stable release. Production-tested in the AEGIS cognitive kernel since v1.72.0.

Highlights

  • Zero runtime dependencies — supply chain security by design
  • 5 providers: OpenAI, Anthropic, Cloudflare Workers AI, Cerebras, Groq
  • LLMProviders.fromEnv() — one-line multi-provider setup
  • Graduated circuit breakers — automatic failover with half-open probe recovery
  • CreditLedger — per-provider budget tracking with threshold alerts + burn rate projection
  • npm provenance — every version cryptographically linked to its source commit
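For readers unfamiliar with the graduated circuit breaker pattern named above, here is a generic sketch of its state machine (closed, open, half-open probe). The CircuitBreaker class and its parameters are illustrative assumptions; the package's actual API may differ:

```typescript
// Generic sketch of a circuit breaker with half-open probe recovery.
// Illustrates the pattern, not the package's real implementation.
type BreakerState = 'closed' | 'open' | 'half-open';

class CircuitBreaker {
  private state: BreakerState = 'closed';
  private failures = 0;
  private openedAt = 0;

  constructor(
    private readonly threshold = 3,       // consecutive failures before tripping
    private readonly cooldownMs = 30_000, // wait before allowing a probe
  ) {}

  canRequest(now = Date.now()): boolean {
    if (this.state === 'open' && now - this.openedAt >= this.cooldownMs) {
      this.state = 'half-open'; // let one probe request through
    }
    return this.state !== 'open';
  }

  recordSuccess(): void {
    this.state = 'closed';
    this.failures = 0;
  }

  recordFailure(now = Date.now()): void {
    this.failures++;
    if (this.state === 'half-open' || this.failures >= this.threshold) {
      this.state = 'open'; // trip; caller fails over to the next provider
      this.openedAt = now;
    }
  }
}
```

A failed half-open probe reopens the breaker immediately, so one bad probe never floods a recovering provider.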

Install

npm install @stackbilt/llm-providers

Quick Start

import { LLMProviders } from '@stackbilt/llm-providers';

const llm = LLMProviders.fromEnv(process.env);
const response = await llm.generateResponse({
  messages: [{ role: 'user', content: 'Hello!' }],
});

See README for full documentation.
See SECURITY.md for supply chain security policy.