Time-smart, full-stack database backup & restore platform
ChronoStash is an open-source platform for dependable database backups across PostgreSQL, MySQL, and MongoDB, with S3-compatible storage (AWS S3, MinIO, Cloudflare R2), cron-based schedules, AES-256-GCM encryption, and real-time progress monitoring.
TL;DR: One UI + API to back up Postgres/MySQL/Mongo to S3/R2/MinIO with cron schedules, encryption (AES-256-GCM), retention, and one-click restores. Runs on your laptop, server, or Kubernetes.
- Multi-Database Support - PostgreSQL, MySQL, MongoDB
- Flexible Storage - S3, MinIO, Cloudflare R2, and any S3-compatible storage
- Automated Scheduling - Cron-based backup schedules with timezone support
- Real-time Monitoring - Live progress updates and job status tracking
- Backup Encryption - AES-256-GCM encryption for sensitive data
- Retention Policies - Automatic cleanup based on age or count
- Point-in-Time Restore - Restore to any previous backup
- Job Queue Management - BullMQ-powered async job processing
- Notification System - Slack and Telegram integration
- RESTful API - Full-featured API for automation
- Modern UI - Clean, responsive React interface
- Import/Export - Backup and restore system configurations
- Activity Dashboard - Comprehensive analytics and trends
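The AES-256-GCM encryption mentioned above can be illustrated with Node's built-in `crypto` module. This is a minimal sketch of the technique only, not ChronoStash's actual implementation — the function names and the IV/auth-tag layout here are assumptions:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt a buffer with AES-256-GCM. The 12-byte IV and 16-byte auth tag
// are prepended to the ciphertext so decryption is self-contained.
function encryptBackup(key: Buffer, plaintext: Buffer): Buffer {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]);
}

function decryptBackup(key: Buffer, blob: Buffer): Buffer {
  const iv = blob.subarray(0, 12);
  const tag = blob.subarray(12, 28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // GCM authenticates as well as encrypts
  return Buffer.concat([decipher.update(blob.subarray(28)), decipher.final()]);
}

const key = randomBytes(32); // in ChronoStash this would come from ENCRYPTION_KEY
const secret = Buffer.from("pg_dump output...");
const decrypted = decryptBackup(key, encryptBackup(key, secret));
console.log(decrypted.equals(secret)); // prints true
```

Because GCM is authenticated, a tampered or truncated backup file fails decryption instead of silently restoring corrupt data.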
Screenshots: Dashboard with Analytics · Backup Management · Automated Scheduling · Settings & Notifications
- Solo devs needing simple nightly backups
- Small teams wanting a UI + API instead of ad-hoc scripts
- Platform teams standardizing backups across services
```
┌───────────────┐      ┌───────────────┐      ┌───────────────┐
│   React SPA   │─────▶│  Express API  │─────▶│    SQLite     │
│  (Frontend)   │      │   (Backend)   │      │  (Metadata)   │
└───────────────┘      └───────────────┘      └───────────────┘
                               │
                               ▼
                 ┌─────────────┼─────────────┐
                 │             │             │
           ┌─────▼─────┐ ┌─────▼─────┐ ┌─────▼─────┐
           │  BullMQ   │ │   Redis   │ │  Storage  │
           │  Worker   │ │   Queue   │ │  (S3/R2)  │
           └───────────┘ └───────────┘ └───────────┘
```
Frontend
- React 18 with TypeScript
- TanStack Query for data fetching
- Tailwind CSS for styling
- Vite for build tooling
- Lucide React for icons
Backend
- Node.js 20+ with Express
- TypeScript for type safety
- Prisma ORM for database access
- BullMQ for job queue management
- SQLite for metadata storage
- Redis for queue backend
Storage & Database Engines
- PostgreSQL engine with `pg_dump`/`pg_restore`
- MySQL engine with `mysqldump`/`mysql`
- MongoDB engine with `mongodump`/`mongorestore`
- S3-compatible storage (AWS S3, MinIO, Cloudflare R2)
- Node.js 20+ (required)
- pnpm 8+ (required; install with `npm install -g pnpm`)
- Redis 7+ (required for the job queue)
```shell
# Clone the repository
git clone https://github.com/chronoapps/chronostash.git
cd chronostash

# Install dependencies
pnpm install

# Build shared packages (required first!)
pnpm run build:packages

# Set up environment variables
cp apps/backend/.env.example apps/backend/.env
# Edit apps/backend/.env with your configuration

# Generate Prisma client
cd apps/backend
pnpm prisma:generate

# Run database migrations
pnpm prisma:migrate

# Seed admin user
pnpm seed

# Return to root and start development servers
cd ../..
pnpm run dev
```

Access the application:
- Frontend: http://localhost:5173
- Backend: http://localhost:3001
Default credentials:
- Username: `admin`
- Password: `admin123456`
Create `apps/backend/.env`:

```shell
# Database (SQLite)
DATABASE_URL="file:./data/chronostash.db"

# Redis (required for BullMQ)
REDIS_URL="redis://localhost:6379"

# JWT Authentication (generate with: openssl rand -base64 32)
JWT_SECRET="your-secret-key-change-this"

# Server
PORT=3001
NODE_ENV=development

# Optional: Backup encryption (generate with: openssl rand -base64 32)
ENCRYPTION_KEY="your-encryption-key"

# Optional: Logging
LOG_LEVEL=info

# Admin user (used by seed script)
ADMIN_USERNAME=admin
ADMIN_PASSWORD=admin123456
```

Before creating backups, configure a storage destination:
- Navigate to Storage Targets
- Click Add Storage Target
- Select type (S3, Cloudflare R2, MinIO)
- Enter credentials:
- Name (e.g., "AWS S3 Production")
- Bucket name
- Region
- Access Key ID
- Secret Access Key
- Endpoint (for R2/MinIO)
- Navigate to Databases
- Click Add Database
- Fill in connection details:
- Name (e.g., "Production PostgreSQL")
- Engine (PostgreSQL, MySQL, or MongoDB)
- Host, Port, Username, Password
- Database name (optional; leave empty to back up all databases)
- SSL mode (disable, require, prefer, verify-ca, verify-full)
Manual Backup:
- Navigate to Backups
- Click Create Backup
- Select database and storage target
- Click Create Backup
- Monitor progress in real-time
Scheduled Backup:
- Navigate to Schedules
- Click Create Schedule
- Configure:
- Schedule name
- Database and storage target
- Cron expression (e.g., `0 2 * * *` for daily at 2 AM)
- Timezone
- Retention days (automatic cleanup)
- Enable schedule
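The cron expressions above use the standard five-field syntax (minute, hour, day-of-month, month, day-of-week). A few common schedules for reference:

```
0 2 * * *      every day at 02:00
0 */6 * * *    every 6 hours
0 3 * * 0      every Sunday at 03:00
30 1 1 * *     monthly, on the 1st at 01:30
```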
- Navigate to Backups
- Find the backup to restore
- Click Restore
- Configure target (optional):
- Target host (default: original database)
- Target database name
- Drop existing data option
- Click Start Restore
- Monitor progress
- Navigate to Settings → Notifications
- Configure Slack or Telegram:
- Slack: Webhook URL, channel, username
- Telegram: Bot token, chat ID
- Test notification
- Enable success/failure notifications
```shell
# Login
curl -X POST http://localhost:3001/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{"username": "admin", "password": "admin123456"}'
# Returns: { "token": "jwt-token", "user": {...} }
```

```shell
# List backups
curl http://localhost:3001/api/backups \
  -H "Authorization: Bearer <token>"

# Create backup
curl -X POST http://localhost:3001/api/backups \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"databaseId": "db-id", "storageId": "storage-id"}'

# Get backup status
curl http://localhost:3001/api/backups/:id \
  -H "Authorization: Bearer <token>"

# Download backup
curl http://localhost:3001/api/backups/:id/download \
  -H "Authorization: Bearer <token>" \
  -o backup.dump
```

```shell
# Create schedule
curl -X POST http://localhost:3001/api/schedules \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Daily Backup",
    "databaseId": "db-id",
    "storageId": "storage-id",
    "cronExpression": "0 2 * * *",
    "timezone": "UTC",
    "retentionDays": 30,
    "enabled": true
  }'

# Toggle schedule
curl -X PATCH http://localhost:3001/api/schedules/:id/toggle \
  -H "Authorization: Bearer <token>"

# Run schedule immediately
curl -X POST http://localhost:3001/api/schedules/:id/run \
  -H "Authorization: Bearer <token>"
```

```
chronostash/
├── apps/
│   ├── backend/              # Express API server
│   │   ├── src/
│   │   │   ├── routes/       # API endpoints
│   │   │   ├── jobs/         # BullMQ job handlers
│   │   │   ├── services/     # Business logic
│   │   │   └── middleware/   # Auth, validation
│   │   └── prisma/           # Schema, migrations, seeders
│   └── frontend/             # React SPA
│       └── src/
│           ├── pages/        # Route-level components
│           ├── components/   # Reusable UI components
│           ├── contexts/     # React Context (auth, etc.)
│           └── lib/          # API client, utilities
├── packages/
│   ├── database-engines/     # DB backup/restore implementations
│   ├── storage-adapters/     # Storage backend implementations
│   └── shared/               # Shared types & utilities
└── pnpm-workspace.yaml       # pnpm monorepo config
```
```shell
# Development
pnpm run dev                              # Start both frontend and backend
pnpm --filter @chronostash/backend dev    # Backend only
pnpm --filter @chronostash/frontend dev   # Frontend only

# Building
pnpm run build                            # Build all packages and apps
pnpm run build:packages                   # Build shared packages only

# Database
cd apps/backend
pnpm prisma:generate                      # Generate Prisma client
pnpm prisma:migrate                       # Run migrations
pnpm prisma:studio                        # Open Prisma Studio GUI

# Seeding
pnpm seed                                 # Seed admin user
pnpm seed:demo                            # Seed demo data (35 days of mock backups)

# Testing
pnpm test                                 # Run all tests
pnpm test:watch                           # Run tests in watch mode
pnpm test:coverage                        # Run with coverage

# Type checking & linting
pnpm run type-check                       # TypeScript check all packages
pnpm run lint                             # Lint all packages
```

- Create `packages/database-engines/src/my-engine.ts`:
```typescript
import { DatabaseEngine, BackupConfig, RestoreConfig } from "./interface"

export class MyDBEngine implements DatabaseEngine {
  async backup(config: BackupConfig): Promise<{ path: string; size: number }> {
    // Implement using the database's CLI tools (spawn a child process)
    throw new Error("not implemented")
  }

  async restore(config: RestoreConfig): Promise<void> {
    // Implement restore logic
    throw new Error("not implemented")
  }
}
```

- Register the engine in `packages/database-engines/src/index.ts`
- Add the engine enum to `packages/shared/src/types/index.ts`
- Rebuild: `pnpm run build:packages`
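A `backup` method is typically implemented by spawning the database's CLI dump tool and streaming its stdout to a file. Here is a minimal, runnable sketch of that pattern — it uses `echo` as a stand-in for `pg_dump` so it runs anywhere, and the helper name is hypothetical:

```typescript
import { spawn } from "node:child_process";
import { createWriteStream, statSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Run a CLI dump tool and stream its stdout to outPath, resolving with
// the same { path, size } shape the DatabaseEngine interface expects.
function dumpToFile(
  cmd: string,
  args: string[],
  outPath: string
): Promise<{ path: string; size: number }> {
  return new Promise((resolve, reject) => {
    const child = spawn(cmd, args);
    const out = createWriteStream(outPath);
    child.stdout.pipe(out); // pipe() ends the write stream when stdout closes
    child.on("error", reject);
    child.on("exit", (code) => {
      if (code !== 0) reject(new Error(`${cmd} exited with code ${code}`));
    });
    out.on("finish", () =>
      resolve({ path: outPath, size: statSync(outPath).size })
    );
  });
}

// Demo: "echo" stands in for pg_dump/mysqldump/mongodump here.
dumpToFile("echo", ["-- dump data --"], join(tmpdir(), "demo.dump")).then(
  ({ size }) => console.log(size > 0) // prints true
);
```

Streaming stdout straight to disk keeps memory flat even for multi-gigabyte dumps.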
- Create `packages/storage-adapters/src/my-adapter.ts`:

```typescript
import type { ReadStream } from "fs"
import { StorageAdapter } from "./interface"

export class MyStorageAdapter implements StorageAdapter {
  async upload(stream: ReadStream, path: string): Promise<{ url: string }> {
    // Implement upload logic
    throw new Error("not implemented")
  }

  async download(path: string): Promise<ReadStream> {
    // Implement download logic
    throw new Error("not implemented")
  }

  async delete(path: string): Promise<void> {
    // Implement deletion
    throw new Error("not implemented")
  }
}
```

- Register the adapter in `packages/storage-adapters/src/index.ts`
- Add the storage-type enum to `packages/shared/src/types/index.ts`
- Rebuild: `pnpm run build:packages`
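To see the adapter shape in action without S3 credentials, a toy adapter can "upload" into a local directory. This is purely illustrative (real adapters call the S3 API; the class name here is made up):

```typescript
import {
  createReadStream,
  createWriteStream,
  mkdtempSync,
  readFileSync,
  writeFileSync,
} from "node:fs";
import type { ReadStream } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";
import { pipeline } from "node:stream/promises";

// Toy adapter with the same upload/download shape as StorageAdapter,
// backed by a local directory instead of an S3 bucket.
class LocalDirAdapter {
  constructor(private root: string) {}

  async upload(stream: ReadStream, path: string): Promise<{ url: string }> {
    const dest = join(this.root, path);
    await pipeline(stream, createWriteStream(dest)); // stream to disk
    return { url: `file://${dest}` };
  }

  async download(path: string): Promise<ReadStream> {
    return createReadStream(join(this.root, path));
  }
}

// Round-trip demo
const root = mkdtempSync(join(tmpdir(), "stash-"));
writeFileSync(join(root, "src.txt"), "backup bytes");
const adapter = new LocalDirAdapter(root);
adapter
  .upload(createReadStream(join(root, "src.txt")), "copy.txt")
  .then(() => console.log(readFileSync(join(root, "copy.txt"), "utf8"))); // prints "backup bytes"
```

Keeping the interface stream-based means adapters never need to buffer a whole backup in memory.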
```shell
# Run all tests
pnpm test

# Run specific test file
pnpm test -- backup-service.test.ts

# Run with coverage
pnpm test:coverage

# Run in watch mode
pnpm test:watch
```

We welcome contributions! Here's how to get started:
- Fork the repository
- Clone your fork: `git clone https://github.com/YOUR_USERNAME/chronostash.git`
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Install dependencies: `pnpm install`
- Build packages: `pnpm run build:packages`
- Make your changes
- Run tests: `pnpm test`
- Run type checking: `pnpm run type-check`
- Run linting: `pnpm run lint`
- Commit: `git commit -m 'feat: add amazing feature'`
- Push: `git push origin feature/amazing-feature`
- Open a Pull Request
- TypeScript for all code
- ESLint + Prettier for formatting
- Conventional Commits for commit messages (`feat:`, `fix:`, `docs:`, etc.)
- Test coverage for new features
- Type safety: avoid `any`, use proper types
```
<type>: <description>

[optional body]

[optional footer]
```

Types: `feat`, `fix`, `docs`, `style`, `refactor`, `test`, `chore`
Examples:
- `feat: add MongoDB connection pooling`
- `fix: handle null backup size correctly`
- `docs: update API documentation`
See CONTRIBUTING.md for detailed guidelines.
- Secured Databases - Connect to RDS, Azure, GCP, Kubernetes
- Contributing Guide - Development guidelines
- Changelog - Version history
This project is licensed under the MIT License - see the LICENSE file for details.
Built with these amazing open-source projects:
- Prisma - Next-generation ORM
- BullMQ - Reliable job queue
- TanStack Query - Powerful data fetching
- Tailwind CSS - Utility-first CSS
- Vite - Fast build tool
- Express - Web framework
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Support for additional databases (Redis, Cassandra, MariaDB)
- Backup compression options (gzip, zstd, lz4)
- Incremental backups
- Multi-tenancy support
- Webhook notifications
- Backup verification and testing
- Database migration tools
- CLI tool for automation
- Docker Compose deployment option
- Kubernetes operator
- Email notifications
Q: Is telemetry collected? A: No. ChronoStash is 100% self-hosted. We do not phone home.
Q: How big are backup files? A: It depends on the database size; compression options (gzip/zstd/lz4) are on the roadmap.
If you find ChronoStash useful, please consider giving it a star!