feat: add MiniMax as LLM provider #226

Open
octo-patch wants to merge 2 commits into llm-tools:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add @llm-tools/embedjs-minimax package providing MiniMax model support via OpenAI-compatible API.

What's included

  • MiniMax LLM model class extending BaseModel, using @langchain/openai ChatOpenAI with MiniMax base URL
  • Models supported: MiniMax-M2.7 (default, 204K context) and MiniMax-M2.7-highspeed (204K context)
  • Temperature clamping to MiniMax's valid range (0.0, 1.0]
  • Think-tag stripping — automatically removes <think>...</think> reasoning blocks from M2.7 responses
  • Token usage reporting via usage_metadata
  • Env var support — reads MINIMAX_API_KEY when apiKey not passed
  • 14 tests (4 integration + 6 temperature + 4 constructor)
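The temperature clamping and think-tag stripping listed above can be sketched as standalone helpers. Names (`clampTemperature`, `stripThinkTags`) and the `0.01` floor are illustrative assumptions, not the package's actual API; the real logic lives in minimax-model.ts:

```typescript
// Sketch of the two behaviors, assuming MiniMax accepts temperatures in
// (0, 1.0]: non-positive values are raised to a small positive floor and
// values above 1 are capped at 1. Names and the floor are illustrative.
const MIN_TEMPERATURE = 0.01; // assumed floor; the actual value may differ

function clampTemperature(t: number): number {
    if (t <= 0) return MIN_TEMPERATURE; // the range is open at 0
    return Math.min(t, 1.0);
}

// Remove <think>...</think> reasoning blocks emitted by MiniMax-M2.7.
function stripThinkTags(text: string): string {
    return text.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
}

console.log(clampTemperature(1.7)); // 1
console.log(stripThinkTags('<think>reasoning here</think>Hello')); // Hello
```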

Usage

import { RAGApplicationBuilder } from '@llm-tools/embedjs';
import { MiniMax } from '@llm-tools/embedjs-minimax';
import { OpenAiEmbeddings } from '@llm-tools/embedjs-openai';

const app = await new RAGApplicationBuilder()
    .setModel(new MiniMax({ modelName: 'MiniMax-M2.7' }))
    .setEmbeddingModel(new OpenAiEmbeddings())
    .build();
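The MINIMAX_API_KEY fallback mentioned above can be illustrated with a small resolver. The helper name `resolveApiKey` is hypothetical; the constructor presumably does something equivalent:

```typescript
// Hypothetical sketch of the key lookup: an explicitly passed apiKey wins,
// otherwise fall back to the MINIMAX_API_KEY environment variable.
function resolveApiKey(explicit?: string): string {
    const key = explicit ?? process.env.MINIMAX_API_KEY;
    if (!key) {
        throw new Error('MiniMax API key missing: pass apiKey or set MINIMAX_API_KEY');
    }
    return key;
}

process.env.MINIMAX_API_KEY = 'sk-env-example';
console.log(resolveApiKey());            // falls back to the env var
console.log(resolveApiKey('sk-direct')); // explicit key takes precedence
```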

Files changed

File                                                 Description
models/embedjs-minimax/src/minimax-model.ts          MiniMax model class
models/embedjs-minimax/src/index.ts                  Package exports
models/embedjs-minimax/package.json                  Package manifest
models/embedjs-minimax/project.json                  Nx project config
models/embedjs-minimax/tsconfig.json                 TypeScript config
models/embedjs-minimax/eslint.config.js              ESLint config
models/embedjs-minimax/README.md                     Usage documentation
models/embedjs-minimax/tests/integration.test.mjs    Tests
tsconfig.base.json                                   Added path alias

Test plan

  • All 14 tests pass (node --test models/embedjs-minimax/tests/integration.test.mjs)
  • npx nx build embedjs-minimax compiles successfully
  • Verified API responses from both M2.7 and M2.7-highspeed models
  • Verified token usage metadata is returned
  • Verified temperature clamping behavior
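The token usage check above reads LangChain's `usage_metadata` field, whose `input_tokens`/`output_tokens`/`total_tokens` shape comes from @langchain/core. A minimal sketch of summarizing it (the `summarizeUsage` helper is hypothetical):

```typescript
// Shape of @langchain/core's UsageMetadata, as attached to AIMessage
// responses by ChatOpenAI-compatible models.
interface UsageMetadata {
    input_tokens: number;
    output_tokens: number;
    total_tokens: number;
}

// Hypothetical helper: format the usage metadata for logging, tolerating
// models that omit it.
function summarizeUsage(u?: UsageMetadata): string {
    if (!u) return 'token usage unavailable';
    return `prompt=${u.input_tokens} completion=${u.output_tokens} total=${u.total_tokens}`;
}

console.log(summarizeUsage({ input_tokens: 12, output_tokens: 34, total_tokens: 46 }));
// prompt=12 completion=34 total=46
```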

PR Bot added 2 commits March 29, 2026 00:34
Add @llm-tools/embedjs-minimax package providing MiniMax model support
via OpenAI-compatible API. Includes MiniMax-M2.7 and M2.7-highspeed
models with 204K context, temperature clamping to (0, 1.0], and
automatic think-tag stripping from reasoning responses.

- New package: models/embedjs-minimax with MiniMax class extending BaseModel
- Uses @langchain/openai ChatOpenAI with MiniMax base URL
- Supports MINIMAX_API_KEY env var or direct apiKey constructor param
- Token usage reporting via usage_metadata
- 14 tests (4 integration + 6 unit + 4 constructor)
- Updated tsconfig.base.json with path alias
@sonarqubecloud
