This repository provides the Lagoon-facing deployment for AnythingLLM. It uses the published anythingllm-lagoon-base image so deployments can reuse a shared runtime image from GHCR instead of rebuilding the same application layer on every deploy.
The reusable runtime image is maintained in the anythingllm-lagoon-base repository and published as `ghcr.io/amazeeio/anythingllm-lagoon-base:latest`.
- Pulls the published base image in Lagoon and local Docker Compose
- Supplies deployment-specific environment variables and persistent volume mounts
- Keeps the Lagoon-facing service definition stable for downstream consumers
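The compose wiring can be sketched as follows. This is an illustration only — the service name, port, volume name, and `user` setting are assumptions, and the real definitions live in this repository's docker-compose.yml:

```yaml
services:
  anythingllm:
    # Pull the shared runtime image instead of rebuilding the app layer
    image: ghcr.io/amazeeio/anythingllm-lagoon-base:latest
    # Run as a fixed non-root UID to match Lagoon's arbitrary-UID expectations
    user: "10000"
    ports:
      - "3000:3000"
    environment:
      JWT_SECRET: replace-with-long-random-value
      LLM_URL: https://llm.us103.amazee.ai
      LLM_AI_KEY: your-api-key-here
    volumes:
      # Persist runtime state (documents, vector cache, settings)
      - anythingllm-storage:/app/server/storage

volumes:
  anythingllm-storage:
```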
anythingllm-lagoon-base owns the shared runtime image used by this repository. That includes:
- The pinned upstream AnythingLLM image version
- Lagoon-compatible file permissions for arbitrary runtime UIDs
- Build-time Prisma client generation
- The custom entrypoint that preserves the current startup behavior
If you need to change the runtime image itself, make that change in anythingllm-lagoon-base, publish a new image, and then update this repository if needed.
- A Lagoon account with access to create repositories
- Access to an LLM API provider such as amazee.ai
- Access to the published image `ghcr.io/amazeeio/anythingllm-lagoon-base:latest`
Add this repository as a Lagoon project. Lagoon will pull the published base image defined in docker-compose.yml.
Before deploying, configure these variables in your Lagoon project:
| Variable | Description | Example |
|---|---|---|
| `JWT_SECRET` | Secret used for AnythingLLM authentication | `replace-with-long-random-value` |
| `LLM_URL` | Base URL for your Generic OpenAI-compatible provider | `https://llm.us103.amazee.ai` |
| `LLM_AI_KEY` | API key for the configured provider | `your-api-key-here` |
| `EMBEDDING_PROVIDER` | Embedding backend | `native` |
Optional database variables when using external Postgres:
| Variable | Description |
|---|---|
| `DB_HOST` | Database host |
| `DB_USER` | Database user |
| `DB_PASS` | Database password |
| `DB_NAME` | Database name |
| `DB_PORT` | Database port |
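How the shared image consumes these variables (individually or as a single connection string) is defined in anythingllm-lagoon-base. Purely as an illustration of how the pieces combine into a standard Postgres connection URL (all values below are hypothetical):

```shell
# Hypothetical values; combine them into a standard Postgres connection URL
DB_HOST=postgres.internal
DB_USER=anythingllm
DB_PASS=secret
DB_NAME=anythingllm
DB_PORT=5432

echo "postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
```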
This repository maps `LLM_URL` and `LLM_AI_KEY` onto the internal AnythingLLM `OPEN_AI_BASE_PATH` and `OPEN_AI_KEY` variables for you.
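In shell terms, that mapping amounts to something like the following. This is a sketch only — the actual logic lives in the shared entrypoint in anythingllm-lagoon-base, and the example values are taken from the table above:

```shell
# Deployment-level variables (example values)
LLM_URL="https://llm.us103.amazee.ai"
LLM_AI_KEY="your-api-key-here"

# Forward them to AnythingLLM's internal variable names
export OPEN_AI_BASE_PATH="${LLM_URL}"
export OPEN_AI_KEY="${LLM_AI_KEY}"
```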
Deploy the project. Lagoon will pull the published image and start AnythingLLM.
After deployment:
- Open the Lagoon-provided route
- Complete the onboarding wizard
- Confirm the LLM provider settings if you did not set them entirely through environment variables
- Create the admin account
- Upload content and begin using the workspace
Local development uses the same published image as Lagoon.
```shell
docker compose pull
docker compose up -d
```

The UI will be available at http://localhost:3000.
```shell
docker compose logs -f anythingllm
docker compose exec anythingllm bash
docker compose down
```

For the deployed Lagoon environment:

```shell
lagoon logs -p <project-name> -e prod
lagoon ssh -p <project-name> -e prod
```
```shell
ls -la /app/server/storage
```

- Check that `LLM_URL` and `LLM_AI_KEY` are set correctly
- Confirm the provider is reachable from the deployed environment
- Review AnythingLLM logs for provider-specific errors
- Check that the persistent volume has free space
- Verify the uploaded file format is supported
- Review AnythingLLM logs for processing failures
- Runtime state is stored in Lagoon at `/app/server/storage`
- The service runs as UID `10000` in local Compose to match Lagoon expectations
- The shared image preserves the current custom startup behavior, including skipping runtime `prisma generate`