Pebblify is a high-performance migration tool that converts LevelDB databases to PebbleDB format, specifically designed for Cosmos SDK and CometBFT (formerly Tendermint) blockchain nodes.
PebbleDB offers significant performance improvements over LevelDB, including better write throughput, more efficient compaction, and reduced storage overhead. Pebblify makes it easy to migrate your existing node data without manual intervention.
📖 Documentation · 🌐 Website
> [!WARNING]
> This tool is still in the early stages of development and may contain bugs or be unstable. If you notice any unusual behavior, please open an issue.
Features:

- Fast parallel conversion — Process multiple databases concurrently with configurable worker count
- Crash recovery — Resume interrupted migrations from the last checkpoint
- Adaptive batching — Automatically adjusts batch sizes based on memory constraints
- Real-time progress — Live progress bar with throughput metrics and ETA
- Data verification — Verify converted data integrity with configurable sampling
- Disk space checks — Pre-flight validation to ensure sufficient storage
- Docker support — Multi-architecture container images (amd64/arm64)
Requirements:

- Go 1.25+ (for building from source)
- Sufficient disk space — Approximately 1.5x the source data size during conversion
- Source database — A valid LevelDB directory structure (Cosmos/CometBFT `data/` format)
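The 1.5x headroom requirement can be checked up front with a few lines of shell. `check_space` below is a hypothetical helper, not part of Pebblify (which runs its own pre-flight check); it assumes GNU `du`/`df`:

```shell
# check_space SRC OUT — fail if OUT's filesystem lacks ~1.5x SRC's size.
check_space() {
    local src_bytes need_bytes free_bytes
    src_bytes=$(du -sb "$1" | cut -f1)    # total size of the source DB
    need_bytes=$(( src_bytes * 3 / 2 ))   # ~1.5x source size during conversion
    free_bytes=$(df --output=avail -B1 "$2" | tail -n 1)
    if [ "$free_bytes" -lt "$need_bytes" ]; then
        echo "insufficient space: need ${need_bytes} B, have ${free_bytes} B" >&2
        return 1
    fi
    echo "ok: ${free_bytes} B free, ~${need_bytes} B needed"
}

# Example: check_space ~/.gaia/data ./gaia-pebble
```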
Build from source:

```shell
git clone https://github.com/Dockermint/pebblify.git
cd pebblify
make build    # build for current platform
make install  # build and install to PATH
```

Build the Docker image:

```shell
make build-docker
```

Convert a LevelDB database to PebbleDB:

```shell
pebblify level-to-pebble ~/.gaia/data ./gaia-pebble
```

Resume an interrupted migration from its last checkpoint:

```shell
pebblify recover --tmp-dir /var/tmp
```

Verify the converted data against the source with configurable sampling:

```shell
pebblify verify --sample 10 ~/.gaia/data ./gaia-pebble/data
```

Or run the conversion via Docker:

```shell
docker run --rm \
  -v /path/to/source:/data/source:ro \
  -v /path/to/output:/data/output \
  -v /path/to/tmp:/tmp \
  dockermint/pebblify:latest \
  level-to-pebble /data/source /data/output
```

For the full command reference and all available flags, see the documentation.
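The convert and verify steps compose naturally into one guarded script. The sketch below uses only the subcommands shown above; the wrapper name and paths are illustrative, and the source database is left untouched either way:

```shell
# migrate_and_verify SRC DST — convert, then spot-check a sample of keys
# before you point the node at the new directory.
migrate_and_verify() {
    pebblify level-to-pebble "$1" "$2" || return 1
    pebblify verify --sample 10 "$1" "$2/data" || {
        echo "verification failed; do not switch the node to $2" >&2
        return 1
    }
    echo "migration verified: $2"
}

# Example: migrate_and_verify ~/.gaia/data ./gaia-pebble
```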
Real-world conversion on a production Cosmos node dataset:
| Metric | Value |
|---|---|
| Total keys | 216,404,586 |
| Duration | 4m 9s |
| Throughput | ~866k keys/s · ~160 MB/s |
| Data processed | 39 GiB read / 39 GiB written |
| Size overhead | +3.7% (LevelDB 23.04 GiB → PebbleDB 23.91 GiB) |
| Data loss | None — 1:1 write/read parity |
> [!NOTE]
> Benchmark performed on an AMD Ryzen 9 8940HX, 32 GiB DDR5, NVMe (Btrfs). Temp folder on disk, not in RAM.
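The table's rates can be sanity-checked from its own totals with integer shell arithmetic (the small gap to the reported ~866k keys/s comes from sub-second timing):

```shell
keys=216404586
seconds=$(( 4 * 60 + 9 ))                # 4m 9s = 249 s
echo "$(( keys / seconds )) keys/s"      # ~869k keys/s
echo "$(( 39 * 1024 / seconds )) MiB/s"  # 39 GiB read in 249 s, ~160 MiB/s
```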
Performance tips:

- Use SSDs — NVMe storage significantly improves conversion speed
- Increase workers — For systems with many CPU cores, increase `-w` for faster parallel processing
- Adjust batch memory — Increase `--batch-memory` if you have RAM to spare
- Use local temp — If `/tmp` is a tmpfs (RAM-based), use `--tmp-dir` to point to disk storage for large datasets
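Putting these together, a tuned invocation might look like the sketch below. The flags are the ones listed above, but the worker count and memory value are illustrative guesses; check `pebblify --help` for the exact accepted formats:

```shell
# -w 16           roughly one worker per CPU core
# --batch-memory  larger batches if RAM allows (value/units here are a guess)
# --tmp-dir       on-disk temp directory, in case /tmp is tmpfs
pebblify level-to-pebble \
  -w 16 \
  --batch-memory 512 \
  --tmp-dir /var/tmp \
  ~/.gaia/data ./gaia-pebble
```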
Contributions are welcome! Please feel free to submit issues and pull requests.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Built with:

- CockroachDB Pebble — The high-performance storage engine
- syndtr/goleveldb — LevelDB implementation in Go