
feat: add MiniMax as first-class LLM provider #74

Open
octo-patch wants to merge 1 commit into dtyq:master from octo-patch:feature/add-minimax-provider

Conversation


@octo-patch octo-patch commented Mar 22, 2026

Summary

Add MiniMax as a built-in LLM service provider for Magic. MiniMax offers high-performance AI models (M2.7, M2.5 series) with million-token context windows, tool calling, and deep thinking capabilities, all served via an OpenAI-compatible API at https://api.minimax.io/v1.

Changes

Backend (PHP)

  • ProviderCode.php: Register MiniMax enum case with OpenAIModel implementation
  • ProviderTemplateId.php: Add MiniMaxLlm = '23' template ID for provider-category mapping
  • ServiceProviderInitializer.php: Add MiniMax provider initialization data with bilingual (EN/CN) descriptions
  • LLMMiniMaxProvider.php: Connectivity test class following existing DeepSeek pattern

Frontend (TypeScript)

  • aiModel.ts: Add MiniMax to ServiceProvider enum with default API URL
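
The frontend change above might look roughly like the following sketch. The exact enum and map names in Magic's aiModel.ts may differ; this only illustrates pairing the provider key with its default base URL:

```typescript
// Illustrative sketch only; member names are assumptions, not the
// actual aiModel.ts source.
enum ServiceProvider {
    MiniMax = "MiniMax",
}

// Default API base URL used when the admin has not overridden it.
const ServiceProviderUrl: Record<ServiceProvider, string> = {
    [ServiceProvider.MiniMax]: "https://api.minimax.io/v1",
};

console.log(ServiceProviderUrl[ServiceProvider.MiniMax]);
```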

Tests

  • ProviderCodeMiniMaxTest.php: 12 unit tests for enum, implementation, sort order, template ID mapping
  • LLMMiniMaxProviderTest.php: 5 unit tests for connectivity test with mock HTTP (success, auth error, network error)
  • ServiceProviderInitializerMiniMaxTest.php: 4 unit tests for provider data, translations, sort order uniqueness

Integration Notes

  • MiniMax uses the standard OpenAI-compatible API format, so it falls through to the default case in ProviderConfigFactory and getImplementationConfig(); no special adapter is needed
  • Temperature=0 is accepted by the MiniMax API
  • Available models: MiniMax-M2.7, MiniMax-M2.7-highspeed, MiniMax-M2.5, MiniMax-M2.5-highspeed
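
Since MiniMax speaks the OpenAI chat completions format, a request body can be built exactly as for any other OpenAI-compatible provider. The helper below is a hypothetical sketch (not code from this PR) showing the shape, including the `temperature: 0` the API accepts:

```typescript
// Hypothetical helper; field names follow the OpenAI chat completions
// API, which MiniMax's endpoint is compatible with.
interface ChatMessage {
    role: "system" | "user" | "assistant";
    content: string;
}

function buildChatRequest(model: string, messages: ChatMessage[]) {
    return {
        model,
        messages,
        temperature: 0, // accepted as-is by the MiniMax API
    };
}

const body = buildChatRequest("MiniMax-M2.7", [
    { role: "user", content: "ping" },
]);
console.log(JSON.stringify(body));
```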

Test Plan

  • Unit tests for ProviderCode enum (implementation, sort order, template mapping)
  • Unit tests for LLMMiniMaxProvider connectivity test (mock HTTP)
  • Unit tests for ServiceProviderInitializer data integrity
  • Manual: Configure MiniMax provider in admin UI with API key
  • Manual: Verify chat completion through MiniMax models

Note

Medium Risk
Adds a new third-party LLM provider option end-to-end (backend enums/templates/seed data plus connectivity check and frontend defaults), which could affect provider selection/order and configuration flows. Risk is mitigated by being largely additive and covered by new unit tests.

Overview
Adds MiniMax as a first-class LLM service provider.

Backend registers ProviderCode::MiniMax (mapped to the existing OpenAIModel implementation), introduces a new ProviderTemplateId::MiniMaxLlm, seeds MiniMax into default provider initialization data (with EN/CN translations), and adds an LLMMiniMaxProvider connectivity test that validates the API key by calling GET https://api.minimax.io/v1/models.
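
The connectivity check described above amounts to an authenticated GET against the models endpoint. A TypeScript analogue of the PHP class's request (helper name and structure are illustrative, not from the codebase) would be:

```typescript
// Sketch of the connectivity check: validate an API key by listing
// models, mirroring what LLMMiniMaxProvider does in PHP.
function buildModelsRequest(apiBase: string, apiKey: string) {
    return {
        url: `${apiBase.replace(/\/+$/, "")}/models`,
        method: "GET" as const,
        headers: { Authorization: `Bearer ${apiKey}` },
    };
}

const req = buildModelsRequest("https://api.minimax.io/v1", "sk-test");
console.log(req.url); // https://api.minimax.io/v1/models
```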

Frontend updates AiModel.ServiceProvider and ServiceProviderUrl to include MiniMax with the default base URL, and the PR adds unit tests covering the new enum/template mappings, seeded provider data, and connectivity test behavior.

Written by Cursor Bugbot for commit 04f6291.

Add MiniMax AI as a built-in LLM service provider, leveraging its
OpenAI-compatible API. MiniMax offers high-performance M2.7 and M2.5
series models with million-token context, tool calling and deep
thinking capabilities.

Changes:
- Register MiniMax in ProviderCode enum with OpenAIModel implementation
- Add MiniMaxLlm template ID for provider-category mapping
- Add MiniMax provider initialization data with bilingual descriptions
- Add MiniMax to frontend ServiceProvider enum with default API URL
- Create LLMMiniMaxProvider connectivity test class
- Add unit tests for enum, template ID, initializer and connectivity

@cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.



```php
// Fragment quoted from the end of LLMMiniMaxProvider::fetchModels():
        return Json::decode($response->getBody()->getContents());
    }
}
```


New connectivity test class duplicates existing DeepSeek provider

Low Severity

LLMMiniMaxProvider is an exact duplicate of LLMDeepSeekProvider — the only difference is the $apiBase URL string. All methods (connectivityTestByModel, fetchModels) have identical logic. Additionally, a grep for both class names shows neither is referenced anywhere in application code outside their own definitions and test files, suggesting these connectivity test classes may be unused dead code. A single parameterized class (or a base class accepting the API URL) would eliminate the duplication.
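
The reviewer's suggested fix can be sketched as a base class parameterized on the API base URL, so per-provider subclasses carry only configuration. This is an illustrative TypeScript analogue of the PHP refactor; class names and the DeepSeek URL are assumptions:

```typescript
// One shared implementation; providers differ only in their base URL.
abstract class OpenAICompatibleProvider {
    constructor(protected readonly apiBase: string) {}

    // Shared connectivity-test endpoint for all OpenAI-compatible APIs.
    modelsEndpoint(): string {
        return `${this.apiBase}/models`;
    }
}

class MiniMaxProvider extends OpenAICompatibleProvider {
    constructor() {
        super("https://api.minimax.io/v1");
    }
}

class DeepSeekProvider extends OpenAICompatibleProvider {
    constructor() {
        super("https://api.deepseek.com/v1"); // illustrative URL
    }
}

console.log(new MiniMaxProvider().modelsEndpoint());
```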


