A chat application supporting multiple AI models, with AI drawing, file management, and conversation archiving features.
- Multi-model support: chat, image, audio, video models
- Provider isolation: independently configure different API providers
- Markdown rendering: supports math formulas, code highlighting, Mermaid diagrams
- Session management: create, import, delete sessions
- Conversation archiving: local archive and Joplin cloud notes
- Grok Drawing
- Qwen Drawing
- Local file management
- Support for custom directories and API endpoints
- API provider management
- Model configuration
- Storage configuration
- Theme toggle (light/dark)
- Frontend: Next.js 16
- UI: React 19 + Tailwind CSS 4
- Database: IndexedDB (local storage)
- Desktop: Tauri 2
- Markdown: react-markdown, remark-gfm, rehype-katex
- Code Highlighting: react-syntax-highlighter
- Diagrams: Mermaid
```bash
# Install dependencies
npm install

# Development mode
npm run dev

# Build production
npm run build
```

Build the static files and deploy to any static hosting service:
```bash
# Build static files
npm run build

# The output is in the 'out' directory
# Deploy the 'out' directory to your server or hosting service
```

Supported platforms:
- Vercel
- Netlify
- Cloudflare Pages
- Self-hosted server (Nginx, Apache, etc.)
```nginx
server {
    listen 80;
    server_name your-domain.com;
    root /path/to/out;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    # Proxy API requests if needed
    location /api/ {
        proxy_pass http://localhost:3000;
    }
}
```

Build and package the desktop application:
```bash
# Install Tauri CLI
npm install -D @tauri-apps/cli

# Development mode
npm run tauri dev

# Build production app
npm run tauri build
```

The built executable will be in:
- Windows: `src-tauri/target/release/bundle/nsis/`
- macOS: `src-tauri/target/release/bundle/dmg/`
- Linux: `src-tauri/target/release/bundle/deb/`
In src-tauri/tauri.conf.json:

```json
{
  "bundle": {
    "active": true,
    "targets": "all",
    "icon": ["icons/32x32.png", "icons/128x128.png"]
  }
}
```

Set `"targets"` to `"all"`, or to a list of specific formats such as `["nsis", "msi", "dmg", "app", "deb", "appimage"]`.

Configure in settings:
- Add API provider (API Key, API Base URL)
- Add models (select type: chat/image/audio/video)
- Configure storage (optional: local archive, Joplin)
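The settings above can be pictured as a small config object. The shapes below are a hypothetical sketch of how providers and models might relate, not the app's actual schema:

```typescript
// Hypothetical shapes for provider and model configuration.
interface ApiProvider {
  name: string;
  apiKey: string;
  baseUrl: string; // e.g. "https://api.example.com/v1" (placeholder)
}

type ModelType = "chat" | "image" | "audio" | "video";

interface ModelConfig {
  provider: string; // references ApiProvider.name
  model: string;    // model identifier as the provider expects it
  type: ModelType;
}

// Provider isolation: each model routes to exactly one configured
// provider, so API keys and base URLs never mix across providers.
function resolveProvider(
  model: ModelConfig,
  providers: ApiProvider[]
): ApiProvider | undefined {
  return providers.find((p) => p.name === model.provider);
}
```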
Create a .env file for production deployment:
```bash
# Required for production server
NEXT_PUBLIC_API_URL=your-api-url
```

Start the production server:

```bash
npm run start
```

Supports archiving conversations to:
- Local archive: stored in IndexedDB
- Joplin: sync to cloud notes via Joplin Web Clipper
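Joplin's Web Clipper service exposes a local REST API (on port 41184 by default), and a conversation can be archived by POSTing a note to it. As a sketch, the helper below only builds the request; the port and the authorization token are values the user supplies from Joplin's Web Clipper settings:

```typescript
// Build a request for creating a note via the Joplin Web Clipper API.
// The Clipper service listens on localhost:41184 by default; the token
// comes from Joplin's Web Clipper options page.
interface JoplinNoteRequest {
  url: string;
  body: string; // JSON payload for POST /notes
}

function buildJoplinNote(
  title: string,
  markdownBody: string,
  token: string,
  port = 41184
): JoplinNoteRequest {
  return {
    url: `http://localhost:${port}/notes?token=${encodeURIComponent(token)}`,
    body: JSON.stringify({ title, body: markdownBody }),
  };
}

// A caller would then send it, e.g.:
// fetch(req.url, { method: "POST", headers: { "Content-Type": "application/json" }, body: req.body })
```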
MIT