A Rust database interface built on mysql_async with derive macros for easy row mapping.
Check the latest versions on crates.io: rdbi, rdbi-codegen
```toml
[dependencies]
rdbi = "0.1"
```

That's it for most users. For TLS connections (required by most cloud database providers), enable a TLS feature:

```toml
[dependencies]
rdbi = { version = "0.1", features = ["rustls-tls"] }
```

| Feature | Backend | Notes |
|---|---|---|
| `rustls-tls` | Rustls (pure Rust, recommended) | No system dependencies, works everywhere |
| `native-tls` | OS native (OpenSSL/Secure Transport/SChannel) | Use when you need the OS certificate store |
If you want automatic code generation from SQL schemas, also add:
```toml
[build-dependencies]
rdbi-codegen = "0.1"
```

```rust
use rdbi::{FromRow, Pool, Query, mysql::MySqlPool};

// Define your struct with FromRow derive
#[derive(FromRow)]
pub struct User {
    pub id: i64,
    pub username: String,
    pub email: String,
}

#[tokio::main]
async fn main() -> rdbi::Result<()> {
    // Connect to database
    let pool = MySqlPool::new("mysql://user:pass@localhost/mydb")?;

    // Query with type-safe binding
    let users: Vec<User> = Query::new("SELECT * FROM users WHERE id = ?")
        .bind(42)
        .fetch_all(&pool)
        .await?;

    Ok(())
}
```

Generate structs and DAO methods automatically from your SQL schema.
1. Add schema file (schema.sql):

```sql
CREATE TABLE users (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    username VARCHAR(255) NOT NULL UNIQUE,
    email VARCHAR(255) NOT NULL,
    status ENUM('ACTIVE', 'INACTIVE') NOT NULL DEFAULT 'ACTIVE',
    INDEX idx_status (status)
);
```

2. Configure (Cargo.toml):

```toml
[package.metadata.rdbi-codegen]
schema_file = "schema.sql"
output_structs_dir = "src/generated/models"
output_dao_dir = "src/generated/dao"

[dependencies]
rdbi = "0.1"

[build-dependencies]
rdbi-codegen = "0.1"
```

3. Add build script (build.rs):

```rust
fn main() {
    rdbi_codegen::generate_from_cargo_metadata()
        .expect("Failed to generate code");
}
```

4. Include generated code (src/main.rs):
```rust
mod generated {
    pub mod models;
    pub mod dao;
}

use generated::models::*;
use rdbi::mysql::MySqlPool;

#[tokio::main]
async fn main() -> rdbi::Result<()> {
    let pool = MySqlPool::new("mysql://user:pass@localhost/mydb")?;

    // Use generated DAO methods
    let user = generated::dao::users::find_by_id(&pool, 1).await?;
    let active = generated::dao::users::find_by_status(&pool, UsersStatus::Active).await?;

    Ok(())
}
```

Note: The generated code under `src/generated/` should be committed to version control. This ensures IDE support works without building, and changes are reviewable in PRs. Run `cargo build` to regenerate after schema changes.
Alternative: `OUT_DIR` with `include!()`
If you prefer not to commit generated code, omit the `output_*_dir` settings. The defaults write to `$OUT_DIR`, and you use `include!()`:

```rust
pub mod models {
    include!(concat!(env!("OUT_DIR"), "/models/mod.rs"));
}
pub mod dao {
    include!(concat!(env!("OUT_DIR"), "/dao/mod.rs"));
}

pub use models::*;
```

`MySqlPool` implements `Clone` — cloning is cheap (Arc-backed) and all clones share the same underlying connection pool. No need to wrap it in `Arc`.
```rust
use rdbi::mysql::MySqlPool;

// Default pool: min=10, max=100 connections
let pool = MySqlPool::new("mysql://user:pass@localhost/mydb")?;

// Custom pool size via builder
let pool = MySqlPool::builder("mysql://user:pass@localhost/mydb")
    .pool_min(5)
    .pool_max(50)
    .build()?;

// Or via URL parameters
let pool = MySqlPool::new("mysql://user:pass@localhost/mydb?pool_min=5&pool_max=50")?;

// Clone is cheap — share across services
let pool2 = pool.clone();
```

| Method | Default | Description |
|---|---|---|
| `pool_min(n)` | 10 | Minimum idle connections |
| `pool_max(n)` | 100 | Maximum total connections |
| `inactive_connection_ttl(d)` | 0s | TTL for idle connections above `pool_min` |
| `abs_conn_ttl(d)` | None | Absolute TTL for any connection |
| Method | Return | Description |
|---|---|---|
| `find_all` | `Vec<T>` | Fetch all records |
| `count_all` | `i64` | Count total records |
| `stream_all` | `Vec<T>` | Fetch all (batch-friendly alias) |
| Method | Return | Description |
|---|---|---|
| `find_by_<pk>` | `Option<T>` | Find by primary key |
| `delete_by_<pk>` | `u64` | Delete by primary key |

Composite PKs generate combined names: `find_by_user_id_and_role_id(user_id, role_id)`
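The combined-name convention can be sketched as a simple join over the key column names (illustrative only; the actual codegen may differ):

```rust
// Builds a finder method name from key column names,
// e.g. ["user_id", "role_id"] -> "find_by_user_id_and_role_id".
fn finder_name(pk_columns: &[&str]) -> String {
    format!("find_by_{}", pk_columns.join("_and_"))
}

fn main() {
    assert_eq!(finder_name(&["id"]), "find_by_id");
    assert_eq!(
        finder_name(&["user_id", "role_id"]),
        "find_by_user_id_and_role_id"
    );
}
```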
| Method | Return | Description |
|---|---|---|
| `insert` | `u64` | Insert entity, returns `last_insert_id` |
| `insert_plain` | `u64` | Insert with individual parameters |
| `insert_all` | `u64` | Batch insert, returns `rows_affected` |
| Method | Return | When Generated |
|---|---|---|
| `update` | `u64` | Table has PK + non-PK columns |
| `update_by_<pk>` | `u64` | Same, with individual parameters |
| `upsert` | `u64` | Table has PK or unique index |
Methods are generated based on index type (deduplicated by priority):
| Priority | Index Type | Return | Example |
|---|---|---|---|
| 1 | Primary Key | `Option<T>` | `find_by_id(id)` |
| 2 | Unique Index | `Option<T>` | `find_by_email(email)` |
| 3 | Non-Unique Index | `Vec<T>` | `find_by_status(status)` |
| 4 | Foreign Key | `Vec<T>` | `find_by_user_id(user_id)` |

Composite indexes: `find_by_user_id_and_device_type(user_id, device_type)`
For single-column indexes, pluralized bulk methods are generated:

```rust
find_by_ids(&[i64]) -> Vec<T>
find_by_statuses(&[Status]) -> Vec<T>
```

For composite indexes with trailing enum columns:

```rust
// Index on (user_id, device_type) where device_type is ENUM
find_by_user_id_and_device_types(user_id, &[DeviceType]) -> Vec<T>
```

| Method | Return | Description |
|---|---|---|
| `find_all_paginated` | `Vec<T>` | Paginated query with sorting |
| `get_paginated_result` | `PaginatedResult<T>` | Includes total count, pages, `has_next` |

Generated helper types: `SortDirection`, `{Table}SortBy`, `PaginatedResult<T>`
Extend generated DAOs or write standalone queries:

```rust
#[derive(rdbi::FromRow)]
pub struct UserStats {
    pub user_id: i64,
    pub order_count: i64,
}

// Add a custom method to the generated DAO
impl dao::users::UsersDao {
    pub async fn find_with_stats(pool: &impl rdbi::Pool) -> rdbi::Result<Vec<UserStats>> {
        rdbi::Query::new(
            "SELECT u.id AS user_id, COUNT(o.id) AS order_count
             FROM users u LEFT JOIN orders o ON u.id = o.user_id
             GROUP BY u.id"
        )
        .fetch_all(pool)
        .await
    }
}
```

rdbi provides three convenience macros for transactional database operations. No trait imports are needed — just use the macros directly:
| Macro | Default Isolation | Description |
|---|---|---|
| `in_transaction!(pool, \|tx\| { ... })` | `RepeatableRead` | Auto-commit on `Ok`, auto-rollback on `Err` |
| `in_transaction_with!(pool, level, \|tx\| { ... })` | Caller-specified | Same, with explicit isolation level |
| `with_connection!(pool, \|conn\| { ... })` | N/A | No transaction; each statement auto-commits |
```rust
use rdbi::IsolationLevel;

// Auto-commit on Ok, auto-rollback on Err (default: RepeatableRead isolation)
let order_id = rdbi::in_transaction!(pool, |tx| {
    dao::users::insert(tx, &user).await?;
    dao::orders::insert(tx, &order).await?;
    Ok(order.id)
}).await?;

// With explicit isolation level
rdbi::in_transaction_with!(pool, IsolationLevel::ReadCommitted, |tx| {
    dao::users::insert(tx, &user).await?;
    Ok(())
}).await?;

// Without a transaction — each statement auto-commits independently
rdbi::with_connection!(pool, |conn| {
    dao::users::insert(conn, &user).await?;
    Ok(())
}).await?;
```

The default macro arms use `rdbi::Error` as the body's error type, so there is never any type ambiguity. Use `.await?` to propagate into your caller's error type:
```rust
// Body uses rdbi::Error internally; .await? converts to anyhow::Error
async fn create_order(pool: &MySqlPool, user: &User, order: &Order) -> anyhow::Result<u64> {
    let id = rdbi::in_transaction!(pool, |tx| {
        dao::users::insert(tx, &user).await?;
        let id = dao::orders::insert(tx, &order).await?;
        Ok(id)
    }).await?; // rdbi::Error -> anyhow::Error via From
    Ok(id)
}
```

When you need non-rdbi errors inside the body (e.g., `anyhow::bail!()`, custom errors), specify the error type explicitly:
```rust
// Body uses anyhow::Error — can mix rdbi and non-rdbi errors
rdbi::in_transaction!(pool, anyhow::Error, |tx| {
    dao::users::insert(tx, &user).await?; // rdbi::Error -> anyhow via ?
    validate_inventory(&order).await?;    // anyhow errors work too
    Ok(())
}).await?;

// Also works with in_transaction_with!
rdbi::in_transaction_with!(pool, IsolationLevel::ReadCommitted, anyhow::Error, |tx| {
    dao::users::insert(tx, &user).await?;
    Ok(())
}).await?;
```

Unlike the `Transactional` trait methods (which require `'static` closures), the macros use inline async blocks. This means captured `&str` and other non-`'static` references work without cloning:
```rust
async fn purchase(pool: &MySqlPool, order_id: &str, account_id: &str) -> anyhow::Result<()> {
    rdbi::in_transaction!(pool, |tx| {
        rdbi::Query::new("UPDATE orders SET status = 'PAID' WHERE id = ?")
            .bind(order_id) // &str captured by reference — no .to_string() needed
            .execute(tx)
            .await?;
        Ok(())
    }).await?;

    // order_id and account_id are still usable here
    println!("Paid order {} for account {}", order_id, account_id);
    Ok(())
}
```

For cases where you need explicit begin/commit/rollback:
```rust
use rdbi::Transactional;

let tx = pool.begin().await?;
dao::users::insert(&tx, &user).await?;
dao::orders::insert(&tx, &order).await?;
tx.commit().await?; // or tx.rollback().await?
```

Isolation Levels: `ReadUncommitted`, `ReadCommitted`, `RepeatableRead` (default), `Serializable`
```rust
#[derive(rdbi::FromRow, rdbi::ToParams)]
pub struct User {
    #[rdbi(skip_insert)] // Exclude from INSERT (auto-increment)
    pub id: i64,
    #[rdbi(rename = "user_name")] // Map to a different column name
    pub username: String,
    #[rdbi(skip)] // Don't read from DB (use Default)
    pub computed_field: String,
}
```

| MySQL | Rust |
|---|---|
| BIGINT | `i64` |
| INT | `i32` |
| VARCHAR, TEXT | `String` |
| BOOLEAN, TINYINT(1) | `bool` |
| DECIMAL | `rust_decimal::Decimal` |
| DATETIME, TIMESTAMP | `chrono::NaiveDateTime` |
| DATE | `chrono::NaiveDate` |
| TIME | `chrono::NaiveTime` |
| ENUM | Generated enum |
| BLOB, BINARY | `Vec<u8>` |
| JSON | `serde_json::Value` |

Nullable columns → `Option<T>`
```bash
# Install
cargo install rdbi-codegen

# Generate code
rdbi-codegen --schema schema.sql --output ./src/generated generate

# Preview without writing
rdbi-codegen --schema schema.sql --output ./src/generated --dry-run generate

# Inspect parsed schema
rdbi-codegen --schema schema.sql inspect
```

For build.rs via Cargo.toml:

```toml
[package.metadata.rdbi-codegen]
schema_file = "schema.sql"
output_structs_dir = "src/generated/models"   # Default: $OUT_DIR/models
output_dao_dir = "src/generated/dao"          # Default: $OUT_DIR/dao
include_tables = ["users", "orders"]          # Only these tables
exclude_tables = ["migrations"]               # Skip these tables
generate_structs = true
generate_dao = true
```

Or create `rdbi-codegen.toml` for CLI usage.
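A standalone `rdbi-codegen.toml` would presumably mirror the same keys as the Cargo metadata section above (an assumption; verify against the CLI's own documentation):

```toml
# Hypothetical rdbi-codegen.toml for CLI usage; key names are assumed
# to match the [package.metadata.rdbi-codegen] section.
schema_file = "schema.sql"
output_structs_dir = "src/generated/models"
output_dao_dir = "src/generated/dao"
exclude_tables = ["migrations"]
```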
This project uses conventional commits with release-please for automated releases:
- `feat: add connection pooling` — new feature (bumps minor version)
- `fix: handle timeout correctly` — bug fix (bumps patch version)
- `feat!: redesign Pool trait` — breaking change (bumps major version)
- `chore:`, `docs:`, `refactor:` — no version bump

```bash
git config core.hooksPath .githooks # Enable pre-commit fmt/clippy checks
```

Licensed under the Apache License, Version 2.0 — see LICENSE for details.
This project also supports the Anti-996 License. We encourage fair labor practices and oppose the "996" working schedule.