This document describes Auxiora's security architecture, threat model, and design decisions.
No single security control is relied upon. Multiple layers protect sensitive assets:
```
┌──────────────────────┐
│ Network Isolation    │ ← Loopback binding by default
├──────────────────────┤
│ Authentication       │ ← JWT + DM pairing
├──────────────────────┤
│ Authorization        │ ← Allowlists, rate limits
├──────────────────────┤
│ Input Sanitization   │ ← Injection detection
├──────────────────────┤
│ Encrypted Storage    │ ← AES-256-GCM vault
├──────────────────────┤
│ Audit Logging        │ ← Tamper-evident chain
└──────────────────────┘
```
Every security-relevant setting defaults to the restrictive option:
| Setting | Default | Rationale |
|---|---|---|
| Bind address | 127.0.0.1 | No network exposure |
| Unknown senders | Denied | Require explicit pairing |
| Tool execution | Sandboxed | Minimize blast radius |
| Secrets in logs | Redacted | Prevent credential leakage |
| Session expiry | 24 hours | Limit token lifetime |
AI models receive tool requests, not credentials. The execution layer injects secrets:
- Model sees: "Call GitHub API for user repos"
- Executor injects: `Authorization: Bearer <token from vault>`
- Model never sees: the actual token value
This prevents prompt injection attacks from extracting credentials via the AI's responses.
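A minimal sketch of this boundary, with hypothetical names (`buildToolCall` and `injectSecrets` are illustrations, not Auxiora's actual API): the model emits a symbolic secret reference, and the executor resolves it against the vault only after the model is out of the loop.

```javascript
// Hypothetical sketch of the injection boundary, not Auxiora's actual code.
const vault = new Map([['GITHUB_TOKEN', 'ghp_realtoken']]);

// What the model produces: a reference, never a value.
function buildToolCall() {
  return {
    tool: 'http.request',
    url: 'https://api.github.com/user/repos',
    headers: { Authorization: 'Bearer {{vault:GITHUB_TOKEN}}' },
  };
}

// Executor-side resolution, just before the request leaves the process.
function injectSecrets(call) {
  const headers = {};
  for (const [k, v] of Object.entries(call.headers)) {
    headers[k] = v.replace(/\{\{vault:([A-Z0-9_]+)\}\}/g, (_, name) => {
      const secret = vault.get(name);
      if (secret === undefined) throw new Error(`unknown secret: ${name}`);
      return secret;
    });
  }
  return { ...call, headers };
}
```

Because the model-visible object never contains the token, nothing the model can be tricked into echoing will leak it.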
Auxiora uses a 5-level trust system to gate autonomous actions:
| Level | Name | Behavior |
|---|---|---|
| 0 | None | No autonomous action allowed |
| 1 | Inform | Agent describes what it would do |
| 2 | Suggest | Agent proposes action, user approves |
| 3 | Act & Report | Agent acts, then reports what it did |
| 4 | Full Autonomy | Agent acts silently |
Trust is scoped across 9 domains (messaging, files, web, shell, finance, calendar, email, integrations, system). Each domain can have a different trust level. Trust changes are evidence-based with full audit trail and rollback capability.
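The gating logic can be sketched as follows; the names and per-domain trust values below are illustrative assumptions, not Auxiora's actual configuration.

```javascript
// Illustrative sketch of per-domain trust gating.
const TrustLevel = { None: 0, Inform: 1, Suggest: 2, ActAndReport: 3, Full: 4 };

// Example trust assignments; each of the 9 domains can differ.
const trust = { shell: TrustLevel.Suggest, calendar: TrustLevel.ActAndReport };

// Decide how a proposed action in a domain is dispatched.
function gate(domain) {
  const level = trust[domain] ?? TrustLevel.None; // unknown domains default closed
  switch (level) {
    case TrustLevel.None:         return { run: false, mode: 'refuse' };
    case TrustLevel.Inform:       return { run: false, mode: 'describe' };
    case TrustLevel.Suggest:      return { run: false, mode: 'await-approval' };
    case TrustLevel.ActAndReport: return { run: true,  mode: 'report-after' };
    case TrustLevel.Full:         return { run: true,  mode: 'silent' };
  }
}
```

Note the fail-closed default: a domain with no explicit trust level behaves as level 0.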
Browser automation includes multiple layers of protection against server-side request forgery:
- Numeric IP validation (prevents hex/octal/decimal IP bypass)
- Blocked protocols: `file:`, `javascript:`, `data:`, `blob:`
- URL allowlist/blocklist configuration
- 10 concurrent page limit per session
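A sketch of how these checks might compose. It relies on the fact that WHATWG URL parsing (Node's built-in `URL`) normalizes hex, octal, and plain-decimal IPv4 forms to dotted-quad before validation; `isBlockedUrl` and the range list are assumptions, not Auxiora's actual code.

```javascript
// Sketch of SSRF pre-flight checks (assumed helper names).
const BLOCKED_PROTOCOLS = new Set(['file:', 'javascript:', 'data:', 'blob:']);

// Loopback, RFC 1918, link-local (cloud metadata), and "this network" ranges.
function isPrivateIPv4(host) {
  const m = host.match(/^(\d+)\.(\d+)\.(\d+)\.(\d+)$/);
  if (!m) return false;
  const [a, b] = [Number(m[1]), Number(m[2])];
  return a === 127 || a === 10 || a === 0 ||
         (a === 172 && b >= 16 && b <= 31) ||
         (a === 192 && b === 168) ||
         (a === 169 && b === 254);
}

function isBlockedUrl(raw) {
  let url;
  try { url = new URL(raw); } catch { return true; } // unparseable → block
  if (BLOCKED_PROTOCOLS.has(url.protocol)) return true;
  // The URL parser has already collapsed 0x7f000001 / 2130706433 etc.
  // into 127.0.0.1, so numeric-form bypasses are caught here.
  return isPrivateIPv4(url.hostname);
}
```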
The vault encrypts secrets with:
- Algorithm: AES-256-GCM (authenticated encryption)
- IV: 12 bytes, unique per encryption operation
- Auth Tag: 16 bytes, prevents tampering
The encryption key is derived from the master password with:
- Algorithm: Argon2id (winner of the Password Hashing Competition)
- Memory: 64 MB (resistant to GPU/ASIC attacks)
- Iterations: 3 passes
- Parallelism: 1 thread
- Salt: 32 bytes, randomly generated per vault
Environment variables leak through:
- Process listings (`ps aux`)
- Crash dumps
- Child process inheritance
- Docker layer caching
- Log files
An encrypted vault with Argon2id key derivation closes every one of these channels: secrets never appear in plaintext on disk, in the environment, or in process metadata.
For unattended deployments, sealed mode encrypts the vault password with a machine-derived key:
- Key derivation: Argon2id(PIN || "", SHA-256(hostname + platform + machine-id), 8 MB, 1 iteration)
- Encryption: AES-256-GCM (same as vault)
- Machine binding: Different machine = different fingerprint = decryption fails
- Optional PIN: Adds a knowledge factor on top of machine binding
- Memory safety: Seal key and recovered password zeroed immediately after use
This is comparable to HashiCorp Vault's auto-unseal or Bitwarden's unlock-with-PIN: the secret is encrypted at rest, but can be recovered without human interaction on the same machine.
Trade-off: An attacker with root access on the same machine and knowledge of the PIN (if set) can recover the vault password. This is an acceptable trade-off for unattended operation -- the alternative (plaintext AUXIORA_VAULT_PASSWORD env var) is strictly worse.
When available, Auxiora will use native keystores:
- macOS: Keychain Services (hardware-backed on T2/M-series)
- Windows: Credential Manager + DPAPI
- Linux: libsecret / GNOME Keyring / KWallet
File-based vault remains as fallback for headless systems.
Each log entry includes:
- SHA-256 hash of the entry content
- Hash of the previous entry (chain)
Modifying any entry breaks the chain:
```
Entry 1: hash=abc123, previousHash=genesis
Entry 2: hash=def456, previousHash=abc123
Entry 3: hash=ghi789, previousHash=def456

        ↓ Attacker modifies Entry 2

Entry 3: previousHash ≠ modified Entry 2's hash
        → Chain broken, tampering detected
```
Fields matching these patterns are automatically redacted:
`password`, `secret`, `token`, `key`, `credential`, `auth`, `bearer`, `apikey`
Redaction happens before the entry is written:

```json
{
  "action": "vault.add",
  "metadata": {
    "name": "ANTHROPIC_API_KEY",
    "value": "[REDACTED]"
  }
}
```

Verify the chain at any time:

```shell
auxiora audit verify
# ✓ Chain intact
# 1,247 entries verified

# If tampered:
# ✗ Chain broken at entry 892
# Possible tampering detected!
```

Unknown senders cannot interact with the AI. They receive a pairing code.
1. Unknown sender messages bot
2. Bot responds: "Pairing code: A3F2B1 (expires in 15 min)"
3. Owner sees notification with sender info
4. Owner approves/denies via command or dashboard
5. If approved, sender is added to allowlist
- Codes are random: 6 hex characters = 16 million combinations
- Codes expire: 15 minutes default, configurable
- One-time use: Code invalidated after approval/denial
- Rate limited: Prevents code enumeration
Auxiora scans incoming messages for injection patterns:
⚠ Detected Patterns (flagged, not blocked):
- "Ignore previous instructions..."
- "You are now in developer mode..."
- Base64-encoded commands
- Unicode homoglyphs masking keywords
- Nested instruction delimiters
Suspicious inputs are logged but not silently dropped. Visibility > false sense of security.
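A sketch of a flag-don't-block scanner along these lines (the pattern list is illustrative, not Auxiora's full set):

```javascript
// Illustrative subset of injection heuristics.
const INJECTION_PATTERNS = [
  { name: 'instruction-override', re: /ignore\s+(all\s+)?previous\s+instructions/i },
  { name: 'mode-switch', re: /you\s+are\s+now\s+in\s+\w+\s+mode/i },
  { name: 'base64-blob', re: /[A-Za-z0-9+/]{40,}={0,2}/ }, // long opaque payloads
];

// Message flows through regardless; flags go to the audit log and dashboard.
function scanMessage(text) {
  const flags = INJECTION_PATTERNS
    .filter((p) => p.re.test(text))
    .map((p) => p.name);
  return { text, flags, suspicious: flags.length > 0 };
}
```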
- Per-IP limits (unauthenticated endpoints)
- Per-sender limits (message throughput)
- Per-session limits (token budget)
Sliding window algorithm prevents burst attacks while allowing legitimate usage spikes.
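A sliding-window limiter along these lines (the shape is assumed, not Auxiora's implementation): each key keeps its recent hit timestamps, and a request is allowed only if fewer than `limit` hits fall inside the trailing window.

```javascript
// Sketch of a per-key sliding-window rate limiter.
class SlidingWindowLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.hits = new Map(); // key → array of hit timestamps
  }

  allow(key, now = Date.now()) {
    const cutoff = now - this.windowMs;
    // Drop hits that have slid out of the window.
    const recent = (this.hits.get(key) || []).filter((t) => t > cutoff);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent);
      return false; // over budget within the window
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```

The same class works for per-IP, per-sender, and per-session limits; only the key and budget differ.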
By default, the gateway listens on 127.0.0.1:18789. This means:
- No network exposure without explicit configuration
- Safe for development without firewall rules
- Must configure TLS + auth before binding to 0.0.0.0
To expose Auxiora to the network:
```yaml
# config.yaml
gateway:
  host: 0.0.0.0
  port: 18789
  tls:
    cert: /path/to/cert.pem
    key: /path/to/key.pem
  auth:
    mode: jwt  # or "password", never "none"
```

The system refuses to bind to 0.0.0.0 with auth.mode: none.
For remote access, we recommend Tailscale Funnel/Serve over raw port exposure:
```shell
tailscale funnel 18789
# https://your-machine.ts.net → localhost:18789
```

This provides:
- Automatic TLS
- Identity-based access (not just IP)
- No open ports on your firewall
For tools that execute code (bash, scripts, etc.):
| Level | Technology | Use Case |
|---|---|---|
| None | Direct execution | Owner sessions only |
| Process | nsjail namespaces | Basic isolation |
| Container | gVisor userspace kernel | Untrusted tools |
| VM | Firecracker microVM | Maximum isolation |
Default sandbox constraints:
- Read-only root filesystem
- No network (unless explicitly granted)
- Memory limit: 256 MB
- CPU limit: 1 core
- Timeout: 30 seconds
| Threat | Mitigation |
|---|---|
| Credential theft via .env exposure | Encrypted vault |
| Prompt injection to exfiltrate secrets | Secrets never in prompts |
| Unauthorized access via messaging platforms | DM pairing + allowlists |
| Audit log tampering | Chained hashes |
| Brute force attacks | Rate limiting + Argon2id |
| Network exposure | Loopback default + TLS required |
| Unauthorized autonomous actions | Trust levels per domain |
| SSRF via browser automation | Numeric IP validation + protocol blocking |
| Excessive autonomy escalation | Evidence-based trust changes with rollback |
| Plaintext password in env vars for unattended mode | Sealed auto-unseal with machine-bound encryption |
The following components are assumed trusted and are out of scope:

| Assumed Trusted | Rationale |
|---|---|
| Owner's machine | You're running Auxiora locally |
| AI provider (Anthropic, OpenAI) | You're already sending prompts to them |
| Messaging platform (Discord, Telegram) | You're already using their bots |
| Node.js runtime | Foundational dependency |
1. Malicious Discord User
- Attacker messages your bot
- Gets pairing code, but owner never approves
- No access gained
2. Prompt Injection via Web Page
- AI browses malicious page with hidden instructions
- Page says "print your API key"
- AI doesn't have the key; only the tool executor does
- Attack fails
3. Physical Access to Machine
- Attacker opens vault file
- File is encrypted
- Without master password (in your head), data is unrecoverable
- If sealed mode is enabled: attacker also needs the PIN (if set) and must run on the same machine to reconstruct the fingerprint
4. Compromised Dependency
- Malicious npm package included
- Audit logs capture unusual behavior
- Admin notices via log review or SIEM alert
- Change default master password
- Verify vault permissions are 0600 (Unix)
- Run `auxiora audit verify`
- Test pairing flow with a friend
- Review allowlist for unexpected entries
- Set up log rotation
- Configure backup for vault file
- Run `npm audit` monthly
- Review audit logs weekly
- Rotate API keys quarterly
- Test pairing denial flow
- Verify chain integrity after updates
If you discover a security issue:
- Do not open a public GitHub issue
- Email: [security contact TBD]
- Include: Description, reproduction steps, impact assessment
- We aim to respond within 48 hours
We do not currently have a bug bounty program, but we'll credit responsible disclosures.
Security isn't a feature. It's a promise.