From aef31ab786d3412b32793d22b2913827b70a6da9 Mon Sep 17 00:00:00 2001
From: Tobias Fenster
Date: Sat, 14 Mar 2026 20:42:14 +0100
Subject: [PATCH 1/3] add note about repackaging

---
 content/manuals/ai/model-runner/ide-integrations.md | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/content/manuals/ai/model-runner/ide-integrations.md b/content/manuals/ai/model-runner/ide-integrations.md
index 1a247c45ab7..a41df49726b 100644
--- a/content/manuals/ai/model-runner/ide-integrations.md
+++ b/content/manuals/ai/model-runner/ide-integrations.md
@@ -25,6 +25,18 @@ Before configuring any tool:
    $ docker model pull ai/qwen2.5-coder
    ```
 
+> [!TIP]
+>
+> The default context size for many models, e.g. `gpt-oss` is 4,096 tokens, which is limiting for coding tasks.
+> You can repackage it with a larger context window:
+>
+> ``` bash
+> $ docker model pull gpt-oss
+> $ docker model package --from ai/gpt-oss --context-size 32000 gpt-oss:32k
+> ```
+> Alternatively, models like ai/glm-4.7-flash, ai/qwen2.5-coder, and ai/devstral-small-2
+> come with 128K context by default and work without repackaging.
+
 ## Cline (VS Code)
 
 [Cline](https://github.com/cline/cline) is an AI coding assistant for VS Code.

From c7baab93b34686cd4725ccf99148cab9c915b712 Mon Sep 17 00:00:00 2001
From: Tobias Fenster
Date: Sat, 14 Mar 2026 20:49:15 +0100
Subject: [PATCH 2/3] switch language

Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
---
 content/manuals/ai/model-runner/ide-integrations.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/manuals/ai/model-runner/ide-integrations.md b/content/manuals/ai/model-runner/ide-integrations.md
index a41df49726b..1f0f3b776d0 100644
--- a/content/manuals/ai/model-runner/ide-integrations.md
+++ b/content/manuals/ai/model-runner/ide-integrations.md
@@ -30,7 +30,7 @@ Before configuring any tool:
 > The default context size for many models, e.g. `gpt-oss` is 4,096 tokens, which is limiting for coding tasks.
 > You can repackage it with a larger context window:
 >
-> ``` bash
+> ```console
 > $ docker model pull gpt-oss
 > $ docker model package --from ai/gpt-oss --context-size 32000 gpt-oss:32k
 > ```

From 2c04028e96202a2740212ff50ed91008b926716d Mon Sep 17 00:00:00 2001
From: Tobias Fenster
Date: Sat, 14 Mar 2026 20:50:45 +0100
Subject: [PATCH 3/3] grammar fix

---
 content/manuals/ai/model-runner/ide-integrations.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/manuals/ai/model-runner/ide-integrations.md b/content/manuals/ai/model-runner/ide-integrations.md
index 1f0f3b776d0..219eaf1e9c7 100644
--- a/content/manuals/ai/model-runner/ide-integrations.md
+++ b/content/manuals/ai/model-runner/ide-integrations.md
@@ -27,7 +27,7 @@ Before configuring any tool:
 
 > [!TIP]
 >
-> The default context size for many models, e.g. `gpt-oss` is 4,096 tokens, which is limiting for coding tasks.
+> The default context size for many models (such as `gpt-oss`) is 4,096 tokens, which is limiting for coding tasks.
 > You can repackage it with a larger context window:
 >
 > ```console