A production-style learning project implementing a priority job queue with retries, persistence, and graceful shutdown.
- In-memory priority queue with aging (starvation prevention)
- Worker pool with retry & backoff
- Postgres-backed transactional state
- pending → inflight → done
- Crash recovery (pending + inflight restore on restart)
- Graceful shutdown (workers + HTTP server)
- Metrics exposed over HTTP
- All job state transitions are driven by the database.
- In-memory queue is used only for scheduling.
- No job is lost on crash or restart.
- Retry and recovery paths are fully transactional.
- Inflight jobs are recovered automatically using a visibility timeout.
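As a sketch of what visibility-timeout recovery could look like in SQL, assuming a `locked_at` timestamp column and `id`/`payload`/`priority` columns on the two tables (the real column names live in schema.sql and may differ):

```sql
-- Reclaim inflight jobs whose visibility timeout (here 60s) has
-- expired: move them back to pending so a worker can retry them.
-- The DELETE ... RETURNING feeding an INSERT keeps the move atomic.
BEGIN;
WITH expired AS (
    DELETE FROM inflight_jobs
    WHERE locked_at < now() - interval '60 seconds'
    RETURNING id, payload, priority
)
INSERT INTO pending_jobs (id, payload, priority)
SELECT id, payload, priority FROM expired;
COMMIT;
```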
- Queue: in-memory priority heap
- Dispatcher: coordinates queue and persistent store
- Store: pluggable backend (Postgres implementation)
- Workers: concurrent job processors
- Metrics: runtime counters and gauges
Two tables:
- pending_jobs
- inflight_jobs
State transitions:
- pending → inflight → removed
- inflight → pending (retry)
All transitions are done using Postgres transactions.
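As a sketch, claiming a job (pending → inflight) could be a single transaction like the following; the column names and the `locked_at` timestamp are assumptions based on the two tables above, not copied from schema.sql:

```sql
BEGIN;
-- Atomically move the highest-priority pending job to inflight.
-- FOR UPDATE SKIP LOCKED lets concurrent workers claim different
-- rows without blocking each other.
WITH claimed AS (
    DELETE FROM pending_jobs
    WHERE id = (
        SELECT id FROM pending_jobs
        ORDER BY priority DESC
        FOR UPDATE SKIP LOCKED
        LIMIT 1
    )
    RETURNING id, payload, priority
)
INSERT INTO inflight_jobs (id, payload, priority, locked_at)
SELECT id, payload, priority, now() FROM claimed;
COMMIT;
```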
- Go
- PostgreSQL
See schema.sql.
```sh
# Set database connection
export DATABASE_URL="postgres://USER:PASSWORD@localhost:5432/jobqueue?sslmode=disable"

# Create tables
psql "$DATABASE_URL" -f schema.sql

# Run server
go run ./cmd/jobqueue
```
| Method | Path | Description |
| ------ | -------- | ------------ |
| POST | /submit | Submit a job |
| GET | /metrics | Metrics |
| GET | /health | Health check |
Run with Docker (Recommended)
```sh
# Start Postgres and the job queue
docker-compose up --build

# Stop and remove the containers
docker-compose down
```

Postgres 18 is used for the database. The Go job queue app connects via `DATABASE_URL` defined in docker-compose.yml. Tables (`pending_jobs` and `inflight_jobs`) are automatically initialized from schema.sql. API endpoints remain the same as the local run.
Load Testing

Basic load test scripts are available in the scripts/ directory; they measure submission throughput and retry behavior.
