A framework for running applications inside AWS Nitro Enclaves with zero SDK imports. The enclave supervisor handles attestation, KMS secret management, PCR extension, and BIP-340 Schnorr response signing automatically. You write a plain HTTP server.
```
Client                                AWS Services (KMS, SSM)
  |                                        ^
  | HTTPS (port 443)                       |
  v                                        |
EC2 Instance (m6i.xlarge, Amazon Linux 2023)
  |                                        |
  | vsock:1024                      gvproxy (Docker)
  v                                   192.168.127.1
Nitro Enclave ---------------------------->+
  ├── nitriding (TLS :443 -> :7073)
  ├── enclave-supervisor (reverse proxy :7073 -> :7074)
  │   ├── attestation key + Schnorr signing
  │   ├── KMS secret decryption
  │   ├── PCR extension endpoints
  │   └── management API (/health, /v1/enclave-info, ...)
  ├── your-app (plain HTTP server :7074)
  └── viproxy (IMDS forwarding -> vsock CID 3:8002)
```
- nitriding starts, sets up the TAP network interface via gvproxy, and terminates TLS on port 443
- enclave-supervisor initializes:
  - Decrypts secrets from KMS using a Nitro attestation document (PCR0-bound)
  - Sets decrypted secrets as environment variables
  - Generates an ephemeral secp256k1 attestation key
  - Registers `SHA256(attestationPubkey)` with nitriding (embedded as `appKeyHash` in attestation UserData)
  - Starts the reverse proxy on port 7073 with Schnorr response signing
- your-app is launched as a child process on port 7074, inheriting secret env vars
The enclave uses gvproxy for outbound connectivity:
- `192.168.127.1` - Gateway/DNS server (gvproxy)
- `192.168.127.2` - Enclave's virtual IP
- `127.0.0.1:80` - IMDS endpoint (via viproxy -> vsock CID 3:8002)
Your enclave app is a plain HTTP server — no SDK imports needed. The framework supports three languages:
| Language | Min Version | Template | Nix Build System | Dependency Mechanism |
|---|---|---|---|---|
| Go | 1.25+ | `enclave generate template --golang` | `buildGoModule` | `vendorHash` (from `go.sum`) |
| Node.js | 22+ | `enclave generate template --nodejs` | `buildNpmPackage` | `npmDepsHash` (from `package-lock.json`) |
| .NET | 10.0 | `enclave generate template --dotnet` | `buildDotnetModule` | `deps.json` (via `fetch-deps`) |
Go — The CLI and SDK are written in Go 1.25. Your app is built with `buildGoModule` using vendored dependencies. The `vendorHash` is computed from `go.sum` during `enclave setup`.
Node.js — Your app is built with `buildNpmPackage`. Requires `package-lock.json` committed to the repo (Nix needs it for reproducible dependency hashes).
.NET — Your app targets `net10.0` and is built with `buildDotnetModule` using the .NET 10 SDK (`dotnetCorePackages.sdk_10_0_1xx`). Dependencies are managed via `deps.json` generated by Nix's `fetch-deps` mechanism, not a hash.
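For Go apps, the `enclave.yaml` fields map onto a standard Nix `buildGoModule` call. The fragment below is a sketch of that shape only; the field values are placeholders and the generated `flake.nix` may differ in detail:

```nix
{
  myApp = pkgs.buildGoModule {
    pname = "my-app";
    version = "dev";
    src = pkgs.fetchFromGitHub {
      owner = "my-org";        # enclave.yaml: app.nix_owner
      repo = "my-app";         # app.nix_repo
      rev = "abc123...";       # app.nix_rev
      hash = "sha256-...";     # app.nix_hash
    };
    vendorHash = "sha256-..."; # app.nix_vendor_hash (derived from go.sum)
    subPackages = [ "cmd" ];   # app.nix_sub_packages
  };
}
```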
- Docker (for reproducible EIF builds via pinned NixOS container)
- Nix (for hash computation and local builds)
- AWS CLI v2 with appropriate credentials
- AWS CDK CLI (`npm install -g aws-cdk`)
- `jq`
- Go apps: Go 1.25+
- Node.js apps: Node.js 22+
- .NET apps: .NET SDK 10.0+
```
go install github.com/ArkLabsHQ/introspector-enclave/cmd/enclave@latest
```

Or build from source with SDK hashes baked in:
```
make sdk-hashes REV=v1.0.0 # compute source hash
make vendor-hash           # compute vendor hash
make build                 # build CLI with hashes baked in
```

Option A: Generate a complete template (recommended for new projects):
```
enclave generate template --golang my-app # Go project
enclave generate template --nodejs my-app # Node.js project
```

Option B: Add enclave support to an existing repo:

```
enclave init
```

Both create:
- `enclave/enclave.yaml` — main config file
- `flake.nix` — Nix build definition (language-specific)
- `enclave/start.sh` — enclave boot script
- `enclave/gvproxy/` — network proxy config
- `enclave/scripts/` — initialization scripts
- `enclave/systemd/` — service unit files
- `enclave/user_data/` — EC2 user data
If built with `make build`, the `sdk:` section is auto-populated with the correct hashes.
The setup command auto-detects your GitHub remote and computes all nix hashes:
```
enclave setup                   # Go app (default), runs in Docker
enclave setup --language nodejs # Node.js app (writes correct flake.nix)
enclave setup --local           # uses local nix installation
```

This populates `nix_owner`, `nix_repo`, `nix_rev`, `nix_hash`, and `nix_vendor_hash` in `enclave/enclave.yaml` from your local git state.
Node.js: `package-lock.json` must be committed to your repo. Nix requires it to compute reproducible dependency hashes.
After enclave setup, review and fill in remaining fields:
```yaml
name: my-app            # app name
region: us-east-1       # AWS region
account: "123456789012" # your AWS account ID
sdk:
  rev: "v1.0.0"         # auto-populated by 'make build'
  hash: "sha256-..."
  vendor_hash: "sha256-..."
app:
  language: go          # "go" or "nodejs"
  nix_owner: my-org     # auto-populated by 'enclave setup'
  nix_repo: my-app
  nix_rev: "abc123..."
  nix_hash: "sha256-..."
  nix_vendor_hash: "sha256-..." # Go vendor hash or npm deps hash
  nix_sub_packages:
    - "cmd"             # Go sub-package with main() (Go only)
  binary_name: my-app
  env:
    MY_APP_PORT: "7074"
    MY_APP_DATADIR: "/app/data"
secrets:
  - name: signing_key
    env_var: APP_SIGNING_KEY
```

Your app is a plain HTTP server — no SDK imports needed:
Go:

```go
package main

import (
	"net/http"
	"os"
)

func main() {
	port := os.Getenv("ENCLAVE_APP_PORT")      // default 7074
	signingKey := os.Getenv("APP_SIGNING_KEY") // decrypted by supervisor
	_ = signingKey                             // use in your handlers

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("Hello from the enclave"))
	})
	http.ListenAndServe(":"+port, nil)
}
```

Node.js:
```javascript
const http = require("http");

const port = process.env.ENCLAVE_APP_PORT || "7074";
const signingKey = process.env.APP_SIGNING_KEY; // decrypted by supervisor

http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello from the enclave\n");
}).listen(port, () => console.log(`listening on :${port}`));
```

```
enclave init
```

Running `enclave init` again when `enclave.yaml` already exists validates all fields and prints a summary.
```
enclave build         # build EIF via Docker + Nix (reproducible)
enclave build --local # build EIF using local Nix installation
```

Outputs `artifacts/image.eif` and `artifacts/pcr.json` with PCR0, PCR1, PCR2 measurements.
```
enclave deploy # deploy CDK stack (VPC, EC2, KMS, IAM, secrets)
enclave verify # verify attestation document + PCR0 match
```

The enclave build fetches your app source from GitHub at the exact commit specified in `enclave.yaml`. Code must be committed and pushed before building.
Code-only changes (no new dependencies):
```
git add . && git commit -m "update" && git push
enclave update # fast: updates nix_rev + nix_hash only (~1 second)
enclave build
enclave deploy
```

Dependency changes (`go.mod`/`go.sum` or `package.json`/`package-lock.json`):
```
git add . && git commit -m "update deps" && git push
enclave setup # full: recomputes all hashes including vendor/deps hash
enclave build
enclave deploy
```

The `enclave init` and `enclave generate template` commands scaffold three GitHub Actions workflows for your app:
Triggered manually via workflow_dispatch. Builds the EIF, deploys the CDK stack, and verifies the running enclave:
- Build — installs the CLI, pulls the Nix Docker image, runs `enclave build`
- Deploy — runs `enclave deploy` (creates/updates VPC, EC2, KMS, IAM, S3, secrets)
- Publish manifest — creates a GitHub Release (`deployment.json`) with PCR values and Elastic IP
- Attest — generates GitHub artifact attestations for both `deployment.json` and `pcr.json`
- Verify — runs `enclave verify` against the live enclave (waits up to 5 minutes for boot)
- Status page — publishes an attestation status page to `gh-pages` at `/attestation/`
Required repo variables:
| Variable | Description |
|---|---|
| `AWS_ROLE_ARN` | IAM role ARN with OIDC trust policy for this repo |
| `AWS_REGION` | AWS region (e.g. `us-east-1`) |
Required permissions: `id-token: write`, `contents: write`, `attestations: write`
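In a workflow file these correspond to a standard GitHub Actions `permissions` block, for example:

```yaml
permissions:
  id-token: write      # OIDC token for assuming AWS_ROLE_ARN
  contents: write      # create the GitHub Release (deployment.json)
  attestations: write  # publish artifact attestations
```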
Triggered manually via workflow_dispatch. Tears down the CDK stack:
- Installs the CLI and authenticates via OIDC
- Creates placeholder build artifacts (CDK synthesis needs them even for destroy)
- Runs `enclave destroy --force`

Uses the same `AWS_ROLE_ARN` and `AWS_REGION` repo variables as the deploy workflow.
Runs daily (8:00 UTC cron) and on workflow_dispatch. Provides continuous attestation monitoring:
- Downloads `deployment.json` from the `latest` GitHub Release
- Runs `enclave verify` against the live enclave using the manifest's base URL and PCR0
- Updates the attestation status page on `gh-pages`
No repo variables needed — all inputs come from the deployment manifest published by the deploy workflow.
Verify an attestation manually:
```
# Verify against a known PCR0
enclave verify --base-url https://<elastic-ip> --expected-pcr0 <pcr0>

# Verify the deployment manifest provenance
gh attestation verify deployment.json --repo <owner>/<repo>
```

SDK hashes are computed per release tag and baked into CLI builds via ldflags. This lets `enclave init` auto-populate the `sdk:` section with the correct source and vendor hashes.
Releases are created via the Release SDK Version GitHub Actions workflow (.github/workflows/sdk-hashes.yml), triggered manually with a version input (e.g. v0.0.29):
- Validate — checks version format (`vX.Y.Z`) and that the tag doesn't already exist
- Compute vendor hash — runs a trial Nix build of `enclave-supervisor` to extract the Go vendor hash
- Compute source hash — archives the repo at HEAD, computes `nix hash path` over the archive
- Commit and tag — writes `sdk-hashes.json`, commits, tags, and pushes to `master`
- Verify build — confirms `enclave-supervisor` builds successfully with the computed hashes
```
make build   # build enclave-cli with SDK hashes from sdk-hashes.json baked in via ldflags
make install # install to $GOPATH/bin
```

| Command | Description |
|---|---|
| `enclave init` | Scaffold enclave project or validate existing config |
| `enclave generate template --golang` | Generate a complete Go enclave app template |
| `enclave generate template --nodejs` | Generate a complete Node.js enclave app template |
| `enclave generate template --dotnet` | Generate a complete .NET enclave app template |
| `enclave setup` | Auto-populate app nix hashes from git remote |
| `enclave setup --language <lang>` | Set language (go, nodejs, dotnet) and rewrite flake.nix |
| `enclave setup --local` | Use local Nix instead of Docker for hash computation |
| `enclave update` | Fast update: only nix_rev + nix_hash (code changes, no dep changes) |
| `enclave build` | Build EIF image (reproducible, via Docker + Nix) |
| `enclave build --local` | Build EIF using local Nix instead of Docker |
| `enclave deploy` | Deploy CDK stack (VPC, EC2, KMS, IAM, secrets) |
| `enclave verify` | Verify attestation document and PCR0 against local build |
| `enclave status` | Show deployment status |
| `enclave destroy` | Tear down the CDK stack |
The supervisor exposes management endpoints alongside proxied requests to your app. All responses include Schnorr signature headers.
| Method | Path | Auth | Description |
|---|---|---|---|
| GET | `/health` | — | Supervisor health check (ready/degraded) |
| GET | `/v1/enclave-info` | — | Build + runtime metadata (version, attestation key, previous PCR0) |
| PUT | `/v1/storage/{key}` | Token | Store encrypted data (AES-256-GCM + S3) |
| GET | `/v1/storage/{key}` | Token | Retrieve and decrypt stored data |
| DELETE | `/v1/storage/{key}` | Token | Delete stored data |
| GET | `/v1/storage?prefix=` | Token | List keys matching a prefix |
| PUT | `/v1/secrets/{name}` | Token | Create/update a dynamic secret |
| GET | `/v1/secrets/{name}` | Token | Retrieve a dynamic secret |
| DELETE | `/v1/secrets/{name}` | Token | Delete a dynamic secret |
| GET | `/v1/secrets` | Token | List all dynamic secrets (metadata only) |
| GET | `/enclave/attestation` | — | Nitro attestation document (served by nitriding) |
| * | `/*` | — | All other requests proxied to your app on port 7074 |
Endpoints marked Token require `Authorization: Bearer {token}`, where the token is auto-generated at boot and passed to your app via the `ENCLAVE_MGMT_TOKEN` environment variable.
These endpoints are used internally and are not exposed to external clients.
| Method | Path | Auth | Description |
|---|---|---|---|
| POST | `/v1/export-key` | — | Re-encrypt secrets for locked-key migration (gated by SSM parameter, called by mgmt server) |
Every response includes:
- `X-Attestation-Signature`: BIP-340 Schnorr signature over `SHA256(response_body)`
- `X-Attestation-Pubkey`: compressed public key of the ephemeral attestation key
Clients verify the signature, then confirm the pubkey hash matches `appKeyHash` in the attestation document's UserData.
The supervisor automatically extends PCR registers 16+ during initialization with SHA256(compressed_secp256k1_pubkey) for each configured secret. This binds the enclave's cryptographic identity to specific PCR values, which can be verified via the attestation document.
The enclave provides persistent encrypted storage backed by S3 with automatic KMS envelope encryption:
- On first boot, a 256-bit Data Encryption Key (DEK) is generated via KMS and stored (encrypted) in SSM
- Data is encrypted with AES-256-GCM (random 12-byte nonce per write) before upload to S3
- The storage bucket is provisioned by the CDK stack and discovered via SSM parameter
- DEK is automatically re-encrypted during locked-key migration
```
# Store data
curl -X PUT https://your-enclave/v1/storage/my/key \
  -H "Authorization: Bearer $ENCLAVE_MGMT_TOKEN" \
  -d 'binary data here'

# Retrieve data
curl https://your-enclave/v1/storage/my/key \
  -H "Authorization: Bearer $ENCLAVE_MGMT_TOKEN"

# List keys by prefix
curl "https://your-enclave/v1/storage?prefix=my/" \
  -H "Authorization: Bearer $ENCLAVE_MGMT_TOKEN"

# Delete
curl -X DELETE https://your-enclave/v1/storage/my/key \
  -H "Authorization: Bearer $ENCLAVE_MGMT_TOKEN"
```

Runtime-configurable secrets stored in encrypted S3 (reuses the storage DEK). Unlike static KMS secrets (provisioned at deploy time), dynamic secrets can be created, updated, and deleted at runtime.
- Each secret has a `name`, optional `env_var` binding, and `value`
- When `env_var` is set, the value is injected as an environment variable (conflicts with static secrets are rejected)
- On boot, all stored dynamic secrets are loaded and their env vars injected
- Dynamic secrets survive enclave restarts and migrations
```
# Create a dynamic secret with env var binding
curl -X PUT https://your-enclave/v1/secrets/api-token \
  -H "Authorization: Bearer $ENCLAVE_MGMT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"env_var": "API_TOKEN", "value": "sk-..."}'

# List all secrets (metadata only, no values)
curl https://your-enclave/v1/secrets \
  -H "Authorization: Bearer $ENCLAVE_MGMT_TOKEN"

# Retrieve a secret
curl https://your-enclave/v1/secrets/api-token \
  -H "Authorization: Bearer $ENCLAVE_MGMT_TOKEN"

# Delete
curl -X DELETE https://your-enclave/v1/secrets/api-token \
  -H "Authorization: Bearer $ENCLAVE_MGMT_TOKEN"
```

The host-side management server (`mgmt/`) runs on the EC2 instance at `127.0.0.1:8443` (plain HTTP, localhost only). Access it via SSM Session Manager.
| Method | Path | Description |
|---|---|---|
| GET | `/health` | Enclave status (running/stopped, CID, memory, CPU count) |
| GET | `/metrics` | Prometheus metrics (nitriding + host-level gauges) |
| POST | `/start` | Start enclave via `systemctl start enclave-watchdog` |
| POST | `/stop` | Stop enclave via `systemctl stop enclave-watchdog` |
| POST | `/migrate` | Full locked-key migration (streaming NDJSON progress) |
| POST | `/schedule-key-deletion` | Schedule KMS key for 7-day deletion |
The /migrate endpoint orchestrates the 9-step locked-key migration described in Locked KMS Key (Migration) below. It streams progress as newline-delimited JSON:
```
{"step":1,"total":9,"status":"progress","message":"Reading current KMS key..."}
{"step":2,"total":9,"status":"progress","message":"Creating new KMS key..."}
...
{"step":9,"total":9,"status":"complete","message":"Migration complete. New KMS key: arn:aws:kms:..."}
```
- On first deploy, the CLI generates 32 random bytes per secret, encrypts them with KMS (attaching a Nitro attestation document), and stores the ciphertext in SSM Parameter Store
- On boot, the supervisor loads ciphertexts from SSM and decrypts via KMS (which validates PCR0)
- Decrypted values are set as environment variables — plaintext only exists in enclave memory
- SSM parameter path: `/{prefix}/{appName}/{secretName}/Ciphertext`
The deploy command applies a KMS key policy where:
- The admin statement explicitly excludes `kms:Decrypt` and `kms:CreateGrant` — nobody outside the enclave can decrypt
- The enclave statement allows `kms:Decrypt` only when `kms:RecipientAttestation:PCR0` matches the enclave measurement
For maximum security, the lock command applies an irreversible policy using `--bypass-policy-lockout-safety-check`:

- Removes all admin access (no `kms:PutKeyPolicy`, no `kms:ScheduleKeyDeletion`)
- Only the enclave with the exact PCR0 can call `kms:Decrypt`
- This cannot be undone. Not even the AWS root account can modify the policy afterward.

```
enclave lock
```

When deploying a new enclave version (different PCR0), the upgrade path depends on whether the KMS key is locked.
The KMS key policy still allows `kms:PutKeyPolicy`, so the CLI can update it with the new PCR0:
- CLI updates the KMS key policy with the new PCR0
- CLI stops the old enclave, uploads the new EIF, and restarts
- The new enclave boots and decrypts secrets using the existing KMS key (new PCR0 now allowed)
The KMS key policy is irreversible — only the old PCR0 can decrypt. Secrets must be re-encrypted to a new KMS key. The management server's `POST /migrate` endpoint orchestrates this as a 9-step process:

1. Read current KMS key ID from SSM
2. Create a new KMS key with a policy allowing the new PCR0 to decrypt
3. Apply transitional KMS policy (grants Encrypt to EC2 role, no Decrypt)
4. Store migration parameters in SSM (`MigrationKMSKeyID`, `MigrationOldKMSKeyID`)
5. Call `POST /v1/export-key` on the running enclave. The enclave:
   - Reads `MigrationKMSKeyID` from SSM (this is the only gate — if the param is unset, the endpoint returns an error)
   - Decrypts each secret using the old KMS key (which only this enclave can do)
   - Re-encrypts each secret with the new KMS key
   - Stores the migration ciphertexts in SSM under `Migration/{secretName}/Ciphertext`
   - Stores its PCR0 and an NSM attestation proof in SSM
6. Poll SSM for migration ciphertexts (60s timeout)
7. Adopt migration ciphertexts — copy to permanent SSM paths, update `KMSKeyID`
8. Download new EIF from S3, stop old enclave, replace EIF, start new enclave
9. Clean up migration SSM parameters
The new enclave boots, decrypts secrets using the new KMS key (PCR0 matches), and schedules the old KMS key for deletion (7-day pending window via `MigrationOldKMSKeyID`).
Each enclave version records its predecessor's PCR0, creating a verifiable upgrade chain:
```
Genesis -> PCR0_v1 (previous_pcr0=genesis)
        -> PCR0_v2 (previous_pcr0=PCR0_v1, attestation=<signed proof>)
        -> PCR0_v3 (previous_pcr0=PCR0_v2, attestation=<signed proof>)
```
The attestation proof is an NSM attestation document — a COSE Sign1 structure signed by AWS Nitro hardware. It contains the enclave's PCR values, proving the reported previous_pcr0 came from a real enclave (not a compromised host).
`GET /v1/enclave-info` returns both `previous_pcr0` and `previous_pcr0_attestation`. The `enclave verify` command automatically verifies the attestation document against the AWS Nitro root certificate and confirms the PCR0 inside matches the reported value.
The enclave image is built entirely with Nix using monzo/aws-nitro-util inside a pinned Docker container, producing a byte-identical EIF on every build. This guarantees identical PCR0 measurements, enabling anyone to verify that the running enclave matches the published source code.
The test suite runs a full enclave inside QEMU (-M nitro-enclave) with mock AWS services, executing 15 integration tests followed by a locked-key migration with post-migration verification.
Docker Compose (CI path):
```
# 1. Build the test EIF (requires Nix)
go build -o /tmp/enclave-cli ./cmd/enclave
cd test/app && /tmp/enclave-cli build --local

# 2. Run all tests
cd test && docker compose --profile test run --build test-runner
```

Native with Nix:

```
cd test && nix develop . --command ./run.sh
```

15 integration tests run after enclave boot:
- Health endpoint returns HTTP 200
- Enclave info JSON is valid
- Init completed without errors
- BIP-340 Schnorr signature verification
- SDK version present
- App proxy (requests reach user app through nitriding)
- KMS secrets loaded (SIGNING_KEY decrypted, correct length)
- Encrypted storage round-trip (PUT/GET/DELETE)
- Previous PCR0 field present
- Dynamic secrets round-trip (PUT/GET/LIST/DELETE)
- PCR16 extended with SHA256(compressed secp256k1 pubkey)
- Storage persistence write (for migration verification)
- Dynamic secret persistence write (for migration verification)
- Attestation persistence write (pubkey + PCR16 hash)
- Pre-migration Schnorr signature baseline
Migration verification then runs a full locked-key migration and confirms:
- Secrets decrypted from the new KMS key
- Persistent storage survived
- Dynamic secrets preserved
- Attestation key (SIGNING_KEY) unchanged across migration
- PCR0 attestation chain intact
| Component | Port | Purpose |
|---|---|---|
| Enclave (QEMU via gvproxy) | 8443 | TLS-terminated enclave |
| Management server | 8444 | Host-side mgmt (migration orchestration) |
| LocalStack | 4566 | S3, SSM, STS mock |
| KMS proxy | 4000 | Custom KMS mock |
| Mock IMDS | 1338 | EC2 instance metadata mock |
The Docker test runner image (test/Dockerfile.runner) builds QEMU 9.2.4, vhost-device-vsock 0.3.0, gvproxy 0.8.6, and the CLI/mgmt binaries in a multi-stage build. QEMU 9.2 is the first version with the nitro-enclave machine type.
| Variable | Default | Description |
|---|---|---|
| `BOOT_TIMEOUT` | `90` | Seconds to wait for QEMU boot |
| `INIT_TIMEOUT` | `120` | Seconds to wait for enclave Init |
| `HOST_TLS_PORT` | `8443` | Enclave TLS port on host |
```
.
├── cmd/enclave/main.go    # CLI entry point
├── config.go              # Config loading + validation
├── build.go               # EIF build orchestration
├── setup.go               # Auto-populate app nix hashes
├── update.go              # Fast update (rev + source hash only)
├── template.go            # Template generation (Go, Node.js)
├── deploy.go              # CDK deploy + secret provisioning
├── destroy.go             # Stack teardown + KMS key cleanup
├── verify.go              # Attestation verification
├── cdk.go                 # AWS CDK stack definition (Go)
├── init.go                # Scaffold command + config template
├── framework_files.go     # Framework files as Go string constants
├── version.go             # SDK hash vars (set via ldflags)
├── Makefile               # Build + hash computation targets
├── sdk-hashes.json        # Cached SDK Nix hashes
├── sdk/                   # SDK module (built as enclave-supervisor)
│   ├── enclave.go         # Init, attestation key, signing middleware, routes
│   ├── kms_ssm.go         # KMS encrypt/decrypt, SSM storage
│   ├── storage.go         # Encrypted storage (AES-256-GCM + S3)
│   ├── secrets.go         # Dynamic secrets API
│   ├── imds.go            # IMDS credential fetching
│   ├── migrate.go         # Key export for locked-key migration
│   └── cmd/enclave-supervisor/
│       └── main.go        # Standalone supervisor binary
├── mgmt/                  # Host-side management server
│   ├── main.go            # Routes + server setup
│   ├── health.go          # Health endpoint (nitro-cli describe)
│   ├── enclave.go         # Start/stop via systemd
│   ├── migrate.go         # 9-step locked-key migration
│   └── deletion.go        # KMS key deletion scheduling
├── test/                  # Local testing infrastructure
│   ├── run.sh             # 7-step E2E test orchestration
│   ├── integration-test.sh # 15 integration tests
│   ├── boot-qemu.sh       # QEMU nitro-enclave boot
│   ├── docker-compose.yml # Mock services + test runner
│   ├── Dockerfile.runner  # Test runner image (QEMU 9.2 + tools)
│   └── app/               # Skeleton test application
└── .github/workflows/
    ├── sdk-hashes.yml     # CI: verify SDK hashes on tag push
    └── integration-test.yml # CI: full E2E test suite
```
| Field | Description | Default |
|---|---|---|
| `name` | App name (used in stack name, EIF) | (required) |
| `version` | Build version | `dev` |
| `region` | AWS region | (required) |
| `account` | AWS account ID | (required for deploy) |
| `prefix` | Deployment prefix (stack = `{prefix}Nitro{name}`) | `dev` |
| `instance_type` | EC2 instance type | `m6i.xlarge` |
| `nix_image` | Docker image for builds | `nixos/nix:2.24.9` |
| `sdk.rev` | SDK git commit SHA or tag | (required for build) |
| `sdk.hash` | Nix source hash (SRI) | (required for build) |
| `sdk.vendor_hash` | Go vendor hash (SRI) | (required for build) |
| `app.language` | App language (go, nodejs) | `go` |
| `app.source` | Build source type | `nix` |
| `app.nix_owner` | GitHub owner | (auto by setup) |
| `app.nix_repo` | GitHub repo | (auto by setup) |
| `app.nix_rev` | Git commit SHA | (auto by setup) |
| `app.nix_hash` | Nix source hash (SRI) | (auto by setup) |
| `app.nix_vendor_hash` | Go vendor hash or npm deps hash (SRI) | (auto by setup) |
| `app.nix_sub_packages` | Go sub-packages to build (Go only) | `["."]` |
| `app.binary_name` | Output binary name | `{name}` |
| `app.env` | Environment variables baked into EIF | `{}` |
| `secrets[].name` | Secret name (SSM path component) | (required) |
| `secrets[].env_var` | Env var for decrypted value | (required) |
| Variable | Description | Default |
|---|---|---|
| `ENCLAVE_APP_PORT` | Port your app listens on | `7074` |
| `ENCLAVE_PROXY_PORT` | Supervisor proxy port | `7073` |
| `APP_BINARY_NAME` | User app binary name | `app` |
| `ENCLAVE_DEPLOYMENT` | Deployment name for SSM paths | `dev` |
| `ENCLAVE_APP_NAME` | App name (used in SSM path construction) | (from config) |
| `ENCLAVE_KMS_KEY_ID` | KMS key ID override | (auto from SSM) |
| `ENCLAVE_AWS_REGION` | AWS region for KMS/SSM | `us-east-1` |
| `ENCLAVE_MGMT_TOKEN` | Bearer token for storage/secrets API auth | (auto-generated) |
| `ENCLAVE_SECRETS_CONFIG` | JSON array of secret definitions (baked into EIF) | (from config) |