Add MiniMax as LLM provider#187

Open
octo-patch wants to merge 1 commit into video-db:main from octo-patch:feature/add-minimax-provider

Conversation


@octo-patch octo-patch commented Mar 24, 2026

Summary

Adds MiniMax as a 5th LLM provider alongside OpenAI, Anthropic, Google AI, and VideoDB Proxy.

MiniMax provides an OpenAI-compatible API, making integration seamless with the existing provider architecture. This PR adds support for:

  • MiniMax-M2.7 (latest, 1M context window) - default model
  • MiniMax-M2.5 (204K context window)
  • MiniMax-M2.5-highspeed (204K context, optimized for speed)

Changes

  • backend/director/llm/minimax.py — New MiniMax provider with MiniMaxConfig, MiniMaxChatModel enum, and MiniMax class extending BaseLLM. Includes temperature clamping to [0, 1] and think-tag stripping for M2.7 reasoning output.
  • backend/director/constants.py — Add MINIMAX to LLMType enum and MINIMAX_ to EnvPrefix enum.
  • backend/director/llm/__init__.py — Add MiniMax to get_default_llm() factory with MINIMAX_API_KEY auto-detection.
  • backend/.env.sample — Add MINIMAX_API_KEY configuration.
  • README.md — Mention MiniMax in supported LLM providers list.
  • backend/tests/test_minimax.py — 35 unit tests covering config, model enum, message/tool formatting, chat completions, think-tag stripping, factory detection, and error handling.
  • backend/tests/test_minimax_integration.py — 3 integration tests (simple chat, JSON response format, tool calling) that run against the real MiniMax API.
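The temperature clamping and think-tag stripping called out above can be sketched roughly as follows. This is an illustrative approximation only; the helper names are hypothetical stand-ins, and the actual implementations live in backend/director/llm/minimax.py:

```python
import re

# Illustrative sketch, not the PR's actual code.
THINK_TAG_RE = re.compile(r"<think>.*?</think>", re.DOTALL)

def strip_think_tags(content: str) -> str:
    # Remove <think>...</think> reasoning blocks that M2.7 may emit
    # before its final answer.
    return THINK_TAG_RE.sub("", content).strip()

def clamp_temperature(value: float) -> float:
    # MiniMax accepts temperature in [0, 1]; clamp out-of-range values.
    return max(0.0, min(1.0, value))

print(strip_think_tags("<think>reasoning...</think>Final answer"))  # Final answer
print(clamp_temperature(1.7))  # 1.0
```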

Configuration

# Set MiniMax API key (auto-detected by get_default_llm)
MINIMAX_API_KEY=your-api-key

# Optional: override default model
MINIMAX_CHAT_MODEL=MiniMax-M2.5-highspeed

# Optional: force MiniMax as default provider
DEFAULT_LLM=minimax

Test plan

  • 35 unit tests pass (pytest tests/test_minimax.py)
  • 3 integration tests pass against real MiniMax API
  • Verify existing OpenAI/Anthropic/GoogleAI providers still work
  • Test with DEFAULT_LLM=minimax in a local Director instance

No new dependencies required — MiniMax uses the existing openai SDK via its OpenAI-compatible API endpoint.

Summary by CodeRabbit

Release Notes

  • New Features

    • MiniMax added as a supported LLM provider with full feature support including tool calling and structured responses.
  • Documentation

    • README updated to explicitly list all supported LLM providers: OpenAI, Anthropic, Google Gemini, and MiniMax.

Add MiniMax (MiniMax-M2.7, M2.5, M2.5-highspeed) as a 5th LLM provider
alongside OpenAI, Anthropic, Google AI, and VideoDB Proxy. MiniMax uses
an OpenAI-compatible API at api.minimax.io/v1.

- Add MiniMax provider with config, model enum, and chat completions
- Support tool/function calling, JSON response format, think-tag stripping
- Temperature clamping to [0, 1] range
- Auto-detection via MINIMAX_API_KEY env var
- Add to get_default_llm() factory and LLMType/EnvPrefix enums
- 35 unit tests + 3 integration tests (simple chat, JSON, tool calling)

coderabbitai bot commented Mar 24, 2026

📝 Walkthrough

A new MiniMax LLM provider integration is added to enable MiniMax as an LLM service option alongside existing providers. The implementation includes environment configuration, enum constants, client initialization with OpenAI-compatible API, message and tool formatting helpers, comprehensive unit tests, and integration test coverage.

Changes

  • Documentation & Configuration (README.md, backend/.env.sample, backend/director/constants.py)
    Updated README to list MiniMax as a supported provider; added MINIMAX_API_KEY environment variable placeholder; introduced LLMType.MINIMAX and EnvPrefix.MINIMAX_ enum members.

  • LLM Provider Implementation (backend/director/llm/minimax.py, backend/director/llm/__init__.py)
    Implemented MiniMax LLM class with MiniMaxConfig, MiniMaxChatModel enum, and chat_completions() method supporting tool calling and JSON response format. Added message/tool formatting helpers and think-tag stripping. Extended get_default_llm() to conditionally return MiniMax() based on MINIMAX_API_KEY availability and DEFAULT_LLM setting.

  • Unit & Integration Tests (backend/tests/test_minimax.py, backend/tests/test_minimax_integration.py)
    Added 373 lines of unit tests covering config validation, temperature clamping, message/tool formatting, think-tag stripping, chat completions with mocking, and provider selection logic. Added 85 lines of integration tests for simple chat, JSON responses, and tool calling (conditionally skipped without MINIMAX_API_KEY).

Sequence Diagram

sequenceDiagram
    participant App as Application
    participant LLM as MiniMax LLM
    participant Client as OpenAI Client
    participant API as MiniMax API
    
    App->>LLM: chat_completions(messages, tools, response_format)
    LLM->>LLM: _format_messages(messages)
    LLM->>LLM: _format_tools(tools)
    LLM->>Client: client.chat.completions.create()
    Client->>API: POST /v1/chat/completions
    API-->>Client: Response (content, tool_calls, usage)
    Client-->>LLM: ChatCompletion object
    LLM->>LLM: _strip_think_tags(content)
    LLM->>LLM: Parse tool_calls JSON arguments
    LLM-->>App: LLMResponse (status, content, tool_calls, usage)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • ankit-v-3
  • ashish-spext

Poem

🐰 A MiniMax hops into the warren,
With OpenAI's compatible call,
Temperature clamped, think-tags swept clean,
Tools shaped just right for all,
Our LLM family grows strong! ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 24.49%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title Check ✅ Passed: The title accurately describes the main change, adding MiniMax as a new LLM provider, which is the primary objective and focus of all file modifications.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
backend/director/llm/__init__.py (1)

20-29: ⚠️ Potential issue | 🟠 Major

Honor DEFAULT_LLM before key-based auto-detection.

With the current ordering, DEFAULT_LLM=minimax is still bypassed whenever an earlier provider key like OPENAI_API_KEY is present, so every get_default_llm() call site keeps using OpenAI. Check the explicit override first, then fall back to the existing auto-detect order, and add a regression test for DEFAULT_LLM=minimax plus OPENAI_API_KEY.

🧭 Suggested fix
-    if openai or default_llm == LLMType.OPENAI:
+    if default_llm == LLMType.OPENAI:
         return OpenAI()
-    elif anthropic or default_llm == LLMType.ANTHROPIC:
+    elif default_llm == LLMType.ANTHROPIC:
         return AnthropicAI()
-    elif googleai or default_llm == LLMType.GOOGLEAI:
+    elif default_llm == LLMType.GOOGLEAI:
         return GoogleAI()
-    elif minimax or default_llm == LLMType.MINIMAX:
+    elif default_llm == LLMType.MINIMAX:
         return MiniMax()
+    elif openai:
+        return OpenAI()
+    elif anthropic:
+        return AnthropicAI()
+    elif googleai:
+        return GoogleAI()
+    elif minimax:
+        return MiniMax()
     else:
         return VideoDBProxy()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/director/llm/__init__.py` around lines 20 - 29, The get_default_llm
logic currently checks provider keys before honoring the DEFAULT_LLM env
override, so DEFAULT_LLM=minimax is ignored when e.g. OPENAI_API_KEY exists;
change the branching in the function that sets default_llm (references:
default_llm variable and the return branches returning OpenAI, AnthropicAI,
GoogleAI, MiniMax) to first check if default_llm is set and return the
corresponding class (e.g., if default_llm == LLMType.MINIMAX return MiniMax())
before running the key-based auto-detection, and add a regression test asserting
that DEFAULT_LLM=minimax plus OPENAI_API_KEY results in MiniMax being selected.
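The precedence the fix aims for can be modeled with a toy selector. Provider and key names here are illustrative stand-ins for the real LLMType enum members in backend/director/constants.py, not the project's actual code:

```python
# Toy model of the suggested ordering: the explicit DEFAULT_LLM override
# is honored first, then key-based auto-detection runs as a fallback.
def pick_provider(env: dict) -> str:
    key_for = {
        "openai": "OPENAI_API_KEY",
        "anthropic": "ANTHROPIC_API_KEY",
        "googleai": "GOOGLEAI_API_KEY",
        "minimax": "MINIMAX_API_KEY",
    }
    default = env.get("DEFAULT_LLM", "").lower()
    if default in key_for:  # explicit override wins
        return default
    for name, key in key_for.items():  # then auto-detect by key presence
        if env.get(key):
            return name
    return "videodb_proxy"

# The regression the comment asks for: DEFAULT_LLM=minimax must win
# even when OPENAI_API_KEY is also set.
assert pick_provider({"DEFAULT_LLM": "minimax", "OPENAI_API_KEY": "sk-x"}) == "minimax"
```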

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f5b034c2-b2fb-4a0e-bc9d-a1ee2f4cb983

📥 Commits

Reviewing files that changed from the base of the PR and between 70e0b3d and 6a6bbef.

📒 Files selected for processing (8)
  • README.md
  • backend/.env.sample
  • backend/director/constants.py
  • backend/director/llm/__init__.py
  • backend/director/llm/minimax.py
  • backend/tests/__init__.py
  • backend/tests/test_minimax.py
  • backend/tests/test_minimax_integration.py

Comment on lines +147 to +169
    def chat_completions(
        self, messages: list, tools: list = [], stop=None, response_format=None
    ):
        """Get chat completions using MiniMax.

        MiniMax provides an OpenAI-compatible API at https://api.minimax.io/v1.
        docs: https://platform.minimaxi.com/document/ChatCompletion%20v2
        """
        params = {
            "model": self.chat_model,
            "messages": self._format_messages(messages),
            "temperature": self.temperature,
            "max_tokens": self.max_tokens,
            "top_p": self.top_p,
            "timeout": self.timeout,
        }

        if tools:
            params["tools"] = self._format_tools(tools)
            params["tool_choice"] = "auto"

        if response_format:
            params["response_format"] = response_format
Contributor

⚠️ Potential issue | 🟡 Minor

Don't silently ignore stop.

chat_completions(..., stop=...) accepts the parameter but never uses it, so MiniMax will ignore caller-supplied stop sequences.

🛑 Suggested fix
         if tools:
             params["tools"] = self._format_tools(tools)
             params["tool_choice"] = "auto"

+        if stop is not None:
+            params["stop"] = stop
+
         if response_format:
             params["response_format"] = response_format
🧰 Tools
🪛 Ruff (0.15.6)

[warning] 148-148: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/director/llm/minimax.py` around lines 147 - 169, The chat_completions
method currently ignores the stop parameter; update the method (chat_completions
in minimax.py) to include the caller-supplied stop sequences in the request
params when provided (e.g., set params["stop"] = stop or the appropriate MiniMax
key) so MiniMax receives and respects the stop sequences; add this check
alongside the existing response_format/tools handling to only add the stop entry
when stop is not None.
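A stand-in for the param assembly inside chat_completions makes the intended behavior concrete: stop is forwarded only when the caller supplies it. build_params is a hypothetical helper for illustration, not the PR's code:

```python
# Illustrative only: mirrors the conditional param assembly the fix
# above describes, forwarding stop only when it is not None.
def build_params(messages, tools=None, stop=None, response_format=None):
    params = {"messages": messages}
    if tools:
        params["tools"] = tools
        params["tool_choice"] = "auto"
    if stop is not None:
        params["stop"] = stop
    if response_format:
        params["response_format"] = response_format
    return params

assert "stop" not in build_params([{"role": "user", "content": "hi"}])
assert build_params([], stop=["###"])["stop"] == ["###"]
```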

Comment on lines +171 to +199
        try:
            response = self.client.chat.completions.create(**params)
        except Exception as e:
            print(f"Error: {e}")
            return LLMResponse(content=f"Error: {e}")

        content = response.choices[0].message.content or ""
        content = self._strip_think_tags(content)

        return LLMResponse(
            content=content,
            tool_calls=[
                {
                    "id": tool_call.id,
                    "tool": {
                        "name": tool_call.function.name,
                        "arguments": json.loads(tool_call.function.arguments),
                    },
                    "type": tool_call.type,
                }
                for tool_call in response.choices[0].message.tool_calls
            ]
            if response.choices[0].message.tool_calls
            else [],
            finish_reason=response.choices[0].finish_reason,
            send_tokens=response.usage.prompt_tokens,
            recv_tokens=response.usage.completion_tokens,
            total_tokens=response.usage.total_tokens,
            status=LLMResponseStatus.SUCCESS,
Contributor

⚠️ Potential issue | 🟠 Major

Keep tool-call parsing inside the guarded error path.

json.loads(tool_call.function.arguments) runs after the API-call try/except, so one malformed tool payload will raise out of chat_completions() instead of returning an LLMResponse. That turns a recoverable provider error into an agent crash.

🧰 Suggested fix
         try:
             response = self.client.chat.completions.create(**params)
+            content = self._strip_think_tags(response.choices[0].message.content or "")
+            tool_calls = [
+                {
+                    "id": tool_call.id,
+                    "tool": {
+                        "name": tool_call.function.name,
+                        "arguments": json.loads(tool_call.function.arguments),
+                    },
+                    "type": tool_call.type,
+                }
+                for tool_call in response.choices[0].message.tool_calls or []
+            ]
         except Exception as e:
             print(f"Error: {e}")
-            return LLMResponse(content=f"Error: {e}")
-
-        content = response.choices[0].message.content or ""
-        content = self._strip_think_tags(content)
+            return LLMResponse(content=f"Error: {e}", status=LLMResponseStatus.ERROR)

         return LLMResponse(
             content=content,
-            tool_calls=[
-                {
-                    "id": tool_call.id,
-                    "tool": {
-                        "name": tool_call.function.name,
-                        "arguments": json.loads(tool_call.function.arguments),
-                    },
-                    "type": tool_call.type,
-                }
-                for tool_call in response.choices[0].message.tool_calls
-            ]
-            if response.choices[0].message.tool_calls
-            else [],
+            tool_calls=tool_calls,
             finish_reason=response.choices[0].finish_reason,
             send_tokens=response.usage.prompt_tokens,
             recv_tokens=response.usage.completion_tokens,
🧰 Tools
🪛 Ruff (0.15.6)

[warning] 173-173: Do not catch blind exception: Exception

(BLE001)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/director/llm/minimax.py` around lines 171 - 199, The tool-call JSON
parsing must be moved into the API error-guard so malformed tool payloads don't
bubble up; wrap access to response.choices[0].message.tool_calls and the
json.loads(tool_call.function.arguments) parsing in the same try/except that
surrounds self.client.chat.completions.create (or add an inner try that catches
JSON/ValueError and returns an LLMResponse error), and ensure the method (the
block building the LLMResponse with tool_calls, finish_reason, usage fields)
returns a graceful LLMResponse on parse errors instead of letting exceptions
escape from response or tool_call.function.arguments.
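The degradation the comment asks for can be sketched with an illustrative stand-in (parse_tool_calls is a hypothetical helper, not the PR's code): malformed tool-call JSON becomes an error value rather than an exception escaping chat_completions():

```python
import json

# Sketch of guarded tool-call parsing: JSON decoding happens inside the
# try block, so a malformed payload yields an error string instead of
# raising out of the caller.
def parse_tool_calls(raw_calls):
    try:
        parsed = [
            {"name": c["name"], "arguments": json.loads(c["arguments"])}
            for c in raw_calls or []
        ]
        return parsed, None
    except (json.JSONDecodeError, KeyError, TypeError) as e:
        return [], f"Error: {e}"

calls, err = parse_tool_calls([{"name": "search", "arguments": '{"q": "cats"}'}])
assert err is None and calls[0]["arguments"] == {"q": "cats"}
calls, err = parse_tool_calls([{"name": "search", "arguments": "not json"}])
assert calls == [] and err is not None
```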

Comment on lines +12 to +15
pytestmark = pytest.mark.skipif(
    not os.getenv("MINIMAX_API_KEY"),
    reason="MINIMAX_API_KEY not set",
)
Contributor

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -e
rg -n 'pytest\.mark\.integration|pytestmark\s*=' backend/tests/test_minimax_integration.py
fd '^(pyproject\.toml|pytest\.ini|setup\.cfg|tox\.ini)$' | xargs -r rg -n 'markers|integration'

Repository: video-db/Director

Length of output: 98


🏁 Script executed:

head -n 30 backend/tests/test_minimax_integration.py

Repository: video-db/Director

Length of output: 755


🏁 Script executed:

fd 'conftest.py' -x cat {} \;

Repository: video-db/Director

Length of output: 43


🏁 Script executed:

find . -name "pyproject.toml" -o -name "pytest.ini" -o -name "setup.cfg" -o -name "tox.ini" | head -5

Repository: video-db/Director

Length of output: 85


🏁 Script executed:

cat backend/pyproject.toml | grep -A 20 "markers\|pytest"

Repository: video-db/Director

Length of output: 43


🏁 Script executed:

rg "pytest\.mark\.integration" backend/tests/

Repository: video-db/Director

Length of output: 43


🏁 Script executed:

cat backend/pyproject.toml

Repository: video-db/Director

Length of output: 530


🏁 Script executed:

find . -type f -name "conftest.py" | head -3 | xargs cat

Repository: video-db/Director

Length of output: 43


🏁 Script executed:

rg "addopts|markers" backend/ --type toml --type ini --type cfg

Repository: video-db/Director

Length of output: 88


🏁 Script executed:

rg "class.*integration\|def.*integration" backend/tests/ -A 2 | head -20

Repository: video-db/Director

Length of output: 43


🏁 Script executed:

find backend -type f \( -name "pytest.ini" -o -name "setup.cfg" \)

Repository: video-db/Director

Length of output: 43


🏁 Script executed:

rg "\[tool\.pytest" backend/pyproject.toml -A 10

Repository: video-db/Director

Length of output: 43


🏁 Script executed:

rg "pytest\s+-.*integration|-m" backend/ | head -20

Repository: video-db/Director

Length of output: 1614


🏁 Script executed:

git log --oneline backend/tests/test_minimax_integration.py 2>/dev/null | head -5

Repository: video-db/Director

Length of output: 96


Add the missing pytest.mark.integration marker to match documented behavior.

The module docstring says these tests can be skipped with pytest -m "not integration", but the module lacks the integration marker. Without it, anyone with MINIMAX_API_KEY set will execute live API calls during normal test runs, regardless of the -m flag.

🧪 Suggested fix
-pytestmark = pytest.mark.skipif(
-    not os.getenv("MINIMAX_API_KEY"),
-    reason="MINIMAX_API_KEY not set",
-)
+pytestmark = [
+    pytest.mark.integration,
+    pytest.mark.skipif(
+        not os.getenv("MINIMAX_API_KEY"),
+        reason="MINIMAX_API_KEY not set",
+    ),
+]
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
pytestmark = pytest.mark.skipif(
not os.getenv("MINIMAX_API_KEY"),
reason="MINIMAX_API_KEY not set",
)
pytestmark = [
pytest.mark.integration,
pytest.mark.skipif(
not os.getenv("MINIMAX_API_KEY"),
reason="MINIMAX_API_KEY not set",
),
]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/tests/test_minimax_integration.py` around lines 12 - 15, The
module-level pytest mark currently only skips when MINIMAX_API_KEY is missing;
update the pytestmark so the module is also marked as integration (e.g., change
pytestmark to include pytest.mark.integration in addition to the existing
pytest.mark.skipif), referencing the pytestmark symbol and the MINIMAX_API_KEY
check so these tests run only when explicitly selected with -m integration and
when the env var is set.
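For `-m "not integration"` to filter cleanly (and to avoid PytestUnknownMarkWarning), the custom marker would also need to be registered. A hypothetical addition to backend/pyproject.toml, assuming the project configures pytest there:

```toml
# Hypothetical marker registration; adjust to wherever pytest is configured.
[tool.pytest.ini_options]
markers = [
    "integration: tests that call the real MiniMax API",
]
```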
