NixOS instances running in hardware-isolated microVMs. Write a NixOS module, push, and it boots on seed.loom.farm with automatic TLS, DNS, persistent storage, and encrypted secrets.
Each instance is a full NixOS system — services.nginx, services.postgresql, services.openssh, whatever you'd put in a NixOS config. Seed adds a thin seed.* module for platform glue.
If you're an AI agent deploying to Seed (or a human pointing one at it), skip to the technical reference.
You write a nix flake that exports seeds.<name> for each instance. The platform evaluates your flake, builds the NixOS closure, and boots it in a Kata Containers microVM. Every instance gets:
- DNS: `<instance>.<namespace>.seed.loom.farm` — resolves immediately
- TLS: Automatic Let's Encrypt certificates via the platform's embedded ACME server
- Storage: Persistent volumes that survive restarts and redeployments
- Secrets: A virtual TPM device for encrypted secrets via sops-nix
- Git hosting: Push to Silo — no GitHub account needed
- Logs & management: `ssh seed.loom.farm logs <instance>`, `status`, `restart`
There's no Docker, no image registry, no Helm, no YAML. NixOS is the abstraction.
```
nix flake init -t github:loomtex/seed#instance        # nginx static site
nix flake init -t github:loomtex/seed#instance-caddy  # Caddy reverse proxy with TLS
nix flake init -t github:loomtex/seed#instance-api    # API server with sops secrets
nix flake init -t github:loomtex/seed#multi           # web frontend + API backend
```

The basic instance template creates two files:
```nix
# flake.nix
{
  inputs.seed.url = "github:loomtex/seed";
  inputs.nixpkgs.follows = "seed/nixpkgs";
  outputs = { seed, ... }: {
    seeds.web = seed.lib.mkSeed {
      name = "web";
      module = ./web.nix;
    };
  };
}
```

```nix
# web.nix
{ pkgs, ... }:
{
  seed.size = "xs";
  seed.expose.http.enable = true;
  seed.storage.data = "1Gi";
  services.nginx.enable = true;
  services.nginx.virtualHosts.default = {
    listen = [{ addr = "0.0.0.0"; port = 80; }];
    root = "/seed/storage/data/www";
  };
}
```

Create an `.authorized_keys` file in the repo root containing the SSH public keys that should have access. Standard authorized_keys format:
```
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... you@machine
```
This is how the platform identifies you. Your SSH key proves ownership of the repo — there are no passwords or API tokens.
Push your flake to a git remote. You can use Seed's built-in git hosting (Silo) or GitHub:
```
# Option A: Silo (built-in, no account needed)
git remote add origin silo.loom.farm:my-app.git
git push -u origin master

# Option B: GitHub
git remote add origin git@github.com:you/my-app.git
git push -u origin master
```

Then register your repo with an invite code:
```
# Silo-hosted repo
ssh seed.loom.farm plant silo:my-app <invite-code>

# GitHub-hosted repo
ssh seed.loom.farm plant github:you/my-app <invite-code>
```

The controller evaluates your flake, builds the NixOS closure, and boots the instance. Check status:

```
ssh seed.loom.farm status
ssh seed.loom.farm logs web
```

After the initial plant, every git push triggers automatic redeployment via webhook.
Before pushing, validate your instance config:
```
nix eval .#seeds.web.meta --json
```

This type-checks the full NixOS evaluation and returns controller metadata without building anything. Option mismatches, missing values, and module conflicts surface here — not at deploy time.
VM sizing tier. Defaults to `"xs"`.

| Tier | vCPUs | Memory |
|------|-------|--------|
| xs   | 1     | 512 MB |
| s    | 1     | 1 GB   |
| m    | 2     | 2 GB   |
| l    | 4     | 4 GB   |
| xl   | 8     | 8 GB   |
Ports to expose. Entry names are looked up in a well-known service table (derived from /etc/services) for default port and protocol, so common services need no configuration:
```nix
seed.expose.https.enable = true;       # 443/tcp, ACME-enabled
seed.expose.ssh.enable = true;         # 22/tcp
seed.expose.dns.enable = true;         # 53, TCP+UDP
seed.expose.postgresql.enable = true;  # 5432/tcp
```

Override defaults or define custom services:

```nix
seed.expose.https.port = 8443;                          # override default port
seed.expose.myapp = { port = 9090; protocol = "tcp"; }; # not well-known, specify both
seed.expose.http = 8080;                                # bare port shorthand
```

Protocols: `tcp`, `udp`, `dns` (both TCP+UDP), `http` (ACME-enabled), `grpc` (ACME-enabled).
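Combining these forms, a hedged sketch (the `metrics` and `syslog` names are illustrative and assumed not to be in the well-known table):

```nix
{
  seed.expose.http.enable = true;                           # 80/tcp from the service table
  seed.expose.metrics = 9100;                               # bare-port shorthand
  seed.expose.syslog = { port = 5514; protocol = "udp"; };  # custom service: port and protocol
}
```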
When the protocol is http or grpc, the platform injects SEED_ACME_URL — an ACME directory endpoint that proxies to Let's Encrypt. Your instance's web server (e.g. Caddy) uses its built-in ACME client to request certificates through this endpoint.
Persistent volumes. Accepts a size string (mounted at `/seed/storage/<name>`) or an attrset with `size` and `mountPoint`.

```nix
seed.storage.data = "1Gi";                                            # /seed/storage/data
seed.storage.cache = { size = "500Mi"; mountPoint = "/tmp/cache"; };  # custom mount
```

Storage survives pod restarts and redeployments. PVCs are never garbage-collected.
Deployment strategy. "recreate" (default) stops the old instance before starting the new one — safe for stateful services. "rolling" starts the new instance first for zero-downtime updates.
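As a sketch, a stateless frontend opting into rolling deploys (service config omitted):

```nix
{
  seed.rollout = "rolling";         # new VM becomes ready before the old one stops
  seed.expose.https.enable = true;  # traffic cuts over once the new instance is up
}
```

A database, by contrast, is safer on the default `"recreate"`, so only one instance touches the persistent volume at a time.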
Instances with http or grpc protocol in seed.expose get access to the platform's ACME facade — an RFC 8555 endpoint that proxies DNS-01 validation to Let's Encrypt. Your instance's web server requests certificates through it.
Your instance receives two environment variables:

- `SEED_ACME_URL` — the platform's ACME directory endpoint
- `SEED_FQDN` — your instance's hostname (e.g. `web.s-gaydazldmnsg.seed.loom.farm`)
Point your web server's ACME client at SEED_ACME_URL. Caddy is the easiest option — it handles ACME natively:
```nix
{ pkgs, ... }:
{
  seed.expose.https.enable = true;
  seed.storage.caddy = { size = "100Mi"; mountPoint = "/var/lib/caddy"; };
  services.caddy = {
    enable = true;
    dataDir = "/var/lib/caddy";
    configFile = pkgs.writeText "Caddyfile" ''
      {
        acme_ca {$SEED_ACME_URL}
      }
      {$SEED_FQDN} {
        root * /seed/storage/data/www
        file_server
      }
    '';
  };
  systemd.services.caddy.serviceConfig.EnvironmentFile = "/run/seed/env";
}
```

Caddy automatically obtains and renews TLS certificates from the platform ACME endpoint. The `{$SEED_ACME_URL}` and `{$SEED_FQDN}` variables are expanded from the environment at startup.
For nginx, use NixOS's security.acme module (which uses lego under the hood):
```nix
{ config, ... }:
let
  acmeServer = "http://seed-controller.seed-system.svc.cluster.local:9876/acme/directory";
in {
  seed.expose.http.enable = true;
  seed.expose.https.enable = true;
  seed.storage.acme = { size = "100Mi"; mountPoint = "/var/lib/acme"; };
  security.acme = {
    acceptTerms = true;
    defaults.server = acmeServer;
    defaults.email = "you@example.com";
  };
  services.nginx = {
    enable = true;
    virtualHosts."my-app.example.com" = {
      enableACME = true;
      forceSSL = true;
      root = "/seed/storage/data/www";
    };
  };
}
```

Certificates are real Let's Encrypt certs, browser-trusted. With nginx, persist `/var/lib/acme` via `seed.storage` to avoid hitting rate limits on redeployment. Caddy manages its own cert storage internally.
Every instance is reachable at <instance>.<namespace>.seed.loom.farm. The namespace is derived deterministically from your flake URI — you don't choose it, but it's stable.
DNS records are created automatically when the instance deploys. No configuration needed.
Instances get a virtual TPM device backed by swtpm. On first boot, a TPM-backed age identity is generated at /seed/tpm/age-identity. Use this with sops-nix for encrypted secrets:
```nix
{ config, ... }:
{
  sops.defaultSopsFile = ./secrets/myapp.yaml;
  sops.secrets.api-key = {};
  services.myapp.environmentFile = config.sops.secrets.api-key.path;
}
```

`sops.age.keyFile` defaults to `/seed/tpm/age-identity` — no extra configuration needed.
1. Deploy the instance without secrets. It boots and generates a TPM identity.
2. Read the public key: `ssh seed.loom.farm keys web` — outputs the `age1tpm1q...` recipient.
3. Encrypt your secrets: `sops --age 'age1tpm1q...' secrets/myapp.yaml`
4. Redeploy. sops-nix decrypts via the vTPM automatically.
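When a secret must be rendered into a config file rather than referenced by path, sops-nix templates can substitute it at activation time. A sketch using sops-nix's `sops.templates` and `sops.placeholder` interface, with `services.myapp` standing in for your own service:

```nix
{ config, ... }:
{
  sops.secrets.db-password = {};
  # The placeholder is replaced with the decrypted value at activation,
  # so the plaintext never lands in the world-readable nix store.
  sops.templates."myapp.conf".content = ''
    password = ${config.sops.placeholder.db-password}
  '';
  services.myapp.configFile = config.sops.templates."myapp.conf".path;
}
```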
A flake can export any number of instances. They share a namespace.
```nix
{
  inputs.seed.url = "github:loomtex/seed";
  inputs.nixpkgs.follows = "seed/nixpkgs";
  outputs = { seed, ... }: {
    seeds.web = seed.lib.mkSeed { name = "web"; module = ./web.nix; };
    seeds.api = seed.lib.mkSeed { name = "api"; module = ./api.nix; };
    seeds.db = seed.lib.mkSeed { name = "db"; module = ./db.nix; };
  };
}
```

Seed includes built-in git hosting at silo.loom.farm. No account needed — your SSH key is your identity.
```
git clone silo.loom.farm:my-app.git   # clone (anyone)
git push silo.loom.farm:my-app.git    # push (requires key in .authorized_keys)
```

Repos are created automatically on first push. The key that creates the repo becomes the owner. Collaborators are managed via the `.authorized_keys` file in the repo root — push a new key there to grant access.
Read access is public. Write access requires a key listed in .authorized_keys.
When registering with plant, use the silo: shorthand:
```
ssh seed.loom.farm plant silo:my-app <invite-code>
```

Silo also has a web interface at https://silo.loom.farm for browsing repos, with syntax highlighting and tarball downloads.
All management happens over SSH at seed.loom.farm:
```
ssh seed.loom.farm status            # instance status across all your repos
ssh seed.loom.farm status my-repo    # status for a specific repo
ssh seed.loom.farm logs web          # last 100 log lines
ssh seed.loom.farm logs web -f       # stream logs
ssh seed.loom.farm logs web --lines 500
ssh seed.loom.farm logs my-repo/web  # disambiguate repo/instance
ssh seed.loom.farm restart web       # restart an instance
ssh seed.loom.farm help              # show all commands
```

All commands support `--json` for machine-readable output.
Any SSH key can connect. Your key identity determines which repos you can manage — if your key is in a repo's .authorized_keys, you see that repo.
Shoots are ephemeral VMs that share the parent instance's nix closure and persistent storage — like fork() for seed instances. Enable them with:
```nix
seed.shoot.enable = true;
```

This gives the instance a `seed-shoot` command and a `SEED_SHOOT_URL` env var pointing to the node-local pool manager.

```
seed-shoot echo "hello from shoot"              # run in isolated VM
seed-shoot sha256sum /seed/storage/data/in.bin  # access parent's storage
seed-shoot --timeout 60000 long-running-task    # timeout in ms
```

Each shoot runs in its own hardware-isolated microVM. No network interface — communication is via shared storage and stdout/stderr only.
- Parallel computation: Fan out work across shoots, each gets its own CPU/memory
- Sandboxed execution: Run untrusted code — if it crashes, only the ephemeral VM is affected
- Batch processing: Queue work to shared storage, fork shoots to process items
- No network inside shoots
- No vTPM — pass secrets via shared storage if needed
- Nix store is read-only (can run binaries, can't build)
- Same physical node as parent
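Putting the batch-processing pattern together, a hedged sketch (the queue layout and the `batch` service are illustrative):

```nix
{ pkgs, ... }:
{
  seed.shoot.enable = true;
  seed.storage.data = "1Gi";
  systemd.services.batch = {
    wantedBy = [ "multi-user.target" ];
    # Fan out one shoot per queued file; results come back via shared storage.
    script = ''
      for f in /seed/storage/data/queue/*; do
        seed-shoot sha256sum "$f" >> /seed/storage/data/results.txt &
      done
      wait
    '';
  };
}
```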
Instances run NixOS inside Kata VMs with boot.isContainer = true. This keeps closures small but has some side effects.
RuntimeDirectory: Some services expect `/run/<name>/` to exist. Since `boot.isContainer` skips some tmpfiles setup, add it explicitly:

```nix
systemd.services.myapp.serviceConfig.RuntimeDirectory = "myapp";
```

Storage ownership: PVC filesystems are root-owned. If your service runs as a non-root user, chown the mount point:

```nix
systemd.tmpfiles.rules = [ "d /seed/storage/data 0755 myapp myapp -" ];
```

No kubectl exec: Kata VMs don't support `kubectl exec`. Debug via service APIs, port-forward, or write diagnostics to storage.

Environment variables: k8s-injected env vars are captured at `/run/seed/env` during activation. Use `EnvironmentFile` in systemd services:

```nix
systemd.services.myapp.serviceConfig.EnvironmentFile = "/run/seed/env";
```

Firewall: The NixOS firewall is active inside the VM. `seed.expose` automatically opens declared ports. If you expose additional ports outside of `seed.expose`, open them manually:

```nix
networking.firewall.allowedTCPPorts = [ 9090 ];
```

Each example is available as a template (`nix flake init -t github:loomtex/seed#<name>`). All use this flake.nix — change the module path and seed name as needed:
```nix
{
  inputs.seed.url = "github:loomtex/seed";
  inputs.nixpkgs.follows = "seed/nixpkgs";
  outputs = { seed, ... }: {
    seeds.web = seed.lib.mkSeed { name = "web"; module = ./web.nix; };
  };
}
```

Caddy terminates HTTPS and proxies to a minimal local backend (a busybox netcat loop standing in for a real app server). The platform ACME endpoint provides Let's Encrypt certificates automatically. Note the `{$VAR}` syntax — this is Caddy's env var expansion, not nix interpolation.
```nix
# web.nix
{ pkgs, ... }:
let
  # Minimal HTTP backend: answer every request on port 3000.
  app = pkgs.writeShellScript "app" ''
    while true; do
      echo -e "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n\r\nHello from Seed!" | \
        ${pkgs.busybox}/bin/nc -l -p 3000 -q 0
    done
  '';
in {
  seed.expose.https.enable = true;
  seed.storage.caddy = { size = "100Mi"; mountPoint = "/var/lib/caddy"; };
  services.caddy = {
    enable = true;
    dataDir = "/var/lib/caddy";
    configFile = pkgs.writeText "Caddyfile" ''
      {
        acme_ca {$SEED_ACME_URL}
      }
      {$SEED_FQDN} {
        reverse_proxy localhost:3000
      }
    '';
  };
  systemd.services.caddy.serviceConfig.EnvironmentFile = "/run/seed/env";
  systemd.services.app = {
    wantedBy = [ "multi-user.target" ];
    serviceConfig.ExecStart = app;
    serviceConfig.Restart = "always";
  };
}
```

No TLS — serves plain HTTP on port 80. Good for behind-a-proxy setups or internal services.
```nix
# web.nix
{ pkgs, ... }:
{
  seed.expose.http.enable = true;
  seed.storage.data = "1Gi";
  services.nginx.enable = true;
  services.nginx.virtualHosts.default = {
    listen = [{ addr = "0.0.0.0"; port = 80; }];
    root = "/seed/storage/data/www";
  };
}
```

PowerDNS authoritative nameserver. The `dns` protocol exposes both TCP and UDP on port 53 automatically.
```nix
# dns.nix
{ config, pkgs, ... }:
{
  seed.size = "s";
  seed.expose.dns.enable = true;
  seed.expose.api = { port = 8081; };
  seed.storage.data = "1Gi";
  sops.defaultSopsFile = ./secrets/dns.yaml;
  sops.secrets.pdns-api-key = {};
  services.powerdns = {
    enable = true;
    extraConfig = ''
      launch=gsqlite3
      gsqlite3-database=/seed/storage/data/pdns.db
      local-address=0.0.0.0, ::
      local-port=53
      api=yes
      api-key-file=${config.sops.secrets.pdns-api-key.path}
      webserver=yes
      webserver-address=0.0.0.0
      webserver-port=8081
      webserver-allow-from=0.0.0.0/0
      socket-dir=/run/pdns
    '';
  };
  systemd.services.pdns.serviceConfig.RuntimeDirectory = "pdns";
  systemd.tmpfiles.rules = [ "d /seed/storage/data 0755 pdns pdns -" ];
}
```

A Node.js app that reads an API key from sops-nix. Secrets are encrypted with the instance's TPM-backed age key — see Secrets for the provisioning flow.
```nix
# api.nix
{ config, pkgs, ... }:
let
  app = pkgs.writeShellScript "api-server" ''
    # Export so the node process sees it in process.env
    # (the node code also falls back to reading the file directly).
    export API_KEY="$(cat /run/secrets/api-key)"
    ${pkgs.nodejs}/bin/node -e "
      const http = require('http');
      const key = process.env.API_KEY || require('fs').readFileSync('/run/secrets/api-key', 'utf8').trim();
      http.createServer((req, res) => {
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end('ok');
      }).listen(3000);
    "
  '';
in {
  seed.expose.myapp = { port = 3000; };
  seed.storage.data = "1Gi";
  sops.defaultSopsFile = ./secrets/api.yaml;
  sops.secrets.api-key = {};
  systemd.services.api = {
    wantedBy = [ "multi-user.target" ];
    after = [ "network.target" ];
    serviceConfig = {
      ExecStart = app;
      Restart = "always";
    };
  };
}
```

A web frontend and API backend sharing a namespace. Each instance is a separate VM with its own resources.
```nix
# flake.nix
{
  inputs.seed.url = "github:loomtex/seed";
  inputs.nixpkgs.follows = "seed/nixpkgs";
  outputs = { seed, ... }: {
    seeds.web = seed.lib.mkSeed { name = "web"; module = ./web.nix; };
    seeds.api = seed.lib.mkSeed { name = "api"; module = ./api.nix; };
  };
}
```

```nix
# web.nix — Caddy frontend, proxies /api to the api instance
{ pkgs, ... }:
{
  seed.expose.https.enable = true;
  seed.storage.caddy = { size = "100Mi"; mountPoint = "/var/lib/caddy"; };
  seed.storage.data = "1Gi";
  services.caddy = {
    enable = true;
    dataDir = "/var/lib/caddy";
    configFile = pkgs.writeText "Caddyfile" ''
      {
        acme_ca {$SEED_ACME_URL}
      }
      {$SEED_FQDN} {
        handle /api/* {
          reverse_proxy api:3000
        }
        handle {
          root * /seed/storage/data/www
          file_server
        }
      }
    '';
  };
  systemd.services.caddy.serviceConfig.EnvironmentFile = "/run/seed/env";
}
```

```nix
# api.nix — Node.js API backend
{ pkgs, ... }:
let
  server = pkgs.writeShellScript "api" ''
    ${pkgs.nodejs}/bin/node -e "
      const http = require('http');
      http.createServer((req, res) => {
        res.writeHead(200, {'Content-Type': 'application/json'});
        res.end(JSON.stringify({status: 'ok'}));
      }).listen(3000);
    "
  '';
in {
  seed.expose.myapi = { port = 3000; };
  systemd.services.api = {
    wantedBy = [ "multi-user.target" ];
    serviceConfig.ExecStart = server;
    serviceConfig.Restart = "always";
  };
}
```

Optimized for agents. Everything needed to deploy an instance from scratch.
1. `nix flake init -t github:loomtex/seed#instance-caddy` (or `#instance`, `#instance-api`, `#multi`)
2. Edit `web.nix` (NixOS config with `seed.*` options)
3. Create `.authorized_keys` in the repo root (your SSH public key)
4. `nix eval .#seeds.web.meta --json` to validate
5. `git init && git add -A && git commit -m "initial"`
6. `git remote add origin silo.loom.farm:my-app.git`
7. `git push -u origin master` (creates the repo on Silo)
8. `ssh seed.loom.farm plant silo:my-app <invite>` (registers with the platform)
9. `ssh seed.loom.farm status` to verify
10. `ssh seed.loom.farm logs web` to check logs
Subsequent deploys: git push triggers automatic reconciliation via webhook.
| Variable | When | Value |
|----------|------|-------|
| `SEED_FQDN` | always | `<instance>.<namespace>.seed.loom.farm` |
| `SEED_ACME_URL` | `seed.acme = true` | ACME directory URL for TLS certs |
| `SEED_SHOOT_URL` | `seed.shoot.enable = true` | Pool manager endpoint for ephemeral VMs |
Access via EnvironmentFile = "/run/seed/env" in systemd services (not $ENV — systemd strips inherited env in Kata VMs).
| Path | Description |
|------|-------------|
| `/seed/storage/<name>` | Persistent volume mount (default) |
| `/seed/tpm/age-identity` | TPM-backed age key for sops-nix |
| `/run/seed/env` | k8s-injected env vars (source this) |
| `/run/current-system` | NixOS system closure |
The same nix config produces the same store paths, which produce the same generation hash. The controller skips reconciliation entirely when nothing has changed: if the store path didn't change, the pod won't restart.
1. Eval (`nix eval`): NixOS option type errors. Immediate, precise tracebacks.
2. Build (`nix build`): derivation failures (missing deps, compile errors). After eval succeeds.
3. Runtime: systemd service failures inside the VM. Use `ssh seed.loom.farm logs <instance>` or expose a health endpoint.
Most errors are caught at stage 1.
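For example, a misspelled option is a stage-1 failure (illustrative):

```nix
{
  # Caught by `nix eval .#seeds.web.meta --json` before anything is built:
  # the option is `services.nginx.enable`, not `enabled`.
  services.nginx.enabled = true;
}
```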
```
plant <flake-uri> <code>    register a repo (silo:name, github:user/repo)
status [repo]               instance status + namespace + DNS names
logs <[repo/]instance>      logs (flags: -f, --lines N, --json)
restart <[repo/]instance>   restart an instance
keys <[repo/]instance>      show age public key (for sops encryption)
help                        show usage
```
```
silo:my-app       → tarball+https://silo.loom.farm/my-app/archive/master.tar.gz
github:user/repo  → passed through to nix
git+https://...   → passed through to nix
```
```nix
seed.size = "xs";                          # xs|s|m|l|xl — VM sizing tier
seed.expose.<name>.enable = true;          # well-known: port/protocol from service table
seed.expose.<name> = { port; protocol; };  # custom: specify explicitly
seed.expose.<name> = port;                 # bare port shorthand
seed.storage.<name> = "1Gi";               # or { size; mountPoint; }
seed.rollout = "recreate";                 # or "rolling"
seed.acme = true;                          # auto-detected from expose protocols
seed.shoot.enable = false;                 # ephemeral VM forking
```

- `RuntimeDirectory` must be set explicitly for services needing `/run/<name>/`
- PVC mounts are root-owned — use `systemd.tmpfiles.rules` to chown for non-root services
- No `kubectl exec` in Kata VMs — debug via logs, port-forward, or storage
- Use `EnvironmentFile = "/run/seed/env"` for `SEED_*` env vars in systemd services
- Persist `/var/lib/acme` via `seed.storage` to avoid LE rate limits on redeploy
- `nix eval .#seeds.<name>.meta --json` is the fast feedback loop — use it before every push
Seed uses NixOS as the instance abstraction instead of containers. Every instance is a real NixOS system evaluated from a nix flake.
The full NixOS module ecosystem is available — services.postgresql, security.acme, services.openssh, sops-nix — with correct service dependencies, user management, and systemd lifecycle. Multi-service instances are just NixOS config.
The tradeoff is boot time (systemd startup, not millisecond cold starts). Seed isn't a function runtime — it's infrastructure.
Because NixOS is declarative, typed, reproducible, and introspectable, it is trivially wielded by modern LLMs. An agent can compose NixOS modules, debug systemd journals, and reason about option types without the friction a human faces. Nix is perfectly positioned to never be typed by a human again. Seed leans into that.
MIT