Termaid - AI-Powered Terminal


A modern terminal powered by artificial intelligence, inspired by Warp.

Termaid allows you to describe what you want to do in natural language and the AI generates the appropriate shell commands.

🤖 This project was entirely built with AI — from architecture to code, tests, and documentation.

🎬 Demo

Termaid Demo

Generate shell commands from natural language and get AI-powered interpretations of command output.

📖 Full Documentation

💡 Why Termaid?

Warp is a great AI-powered terminal, but its AI features only work with proprietary cloud providers — there is no support for Ollama, which means no free, local, or self-hosted option.

Termaid was born out of that frustration: a fully open alternative that works with Ollama out of the box, keeping your data local and your wallet intact.

🚀 Features

  • Full Terminal: Complete terminal interface built on xterm.js
  • Integrated AI: Generate shell commands from natural language descriptions
  • Multi-Provider LLM: Supports Ollama (local/remote), Claude (Anthropic API), and OpenAI (GPT-4o, GPT-4)
  • Modern Interface: Dark theme by default with optional light theme
  • Flexible Configuration: Provider, model, temperature, and more — configurable via UI or environment variables
  • History: Track conversations and executed commands
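A minimal sketch of how a multi-provider setup like the one above might pick its LLM backend. Note that `TERMAID_PROVIDER` and `resolveProvider` are invented names for this illustration (only `TERMAID_OLLAMA_MODEL` is documented below); Termaid's actual configuration keys may differ.

```typescript
// Hypothetical provider selection for a multi-provider LLM terminal.
// The variable name TERMAID_PROVIDER is an assumption for this sketch.
type Provider = "ollama" | "claude" | "openai";

function resolveProvider(env: Record<string, string | undefined>): Provider {
  // Default to Ollama so the app works locally with no API key.
  const raw = (env["TERMAID_PROVIDER"] ?? "ollama").toLowerCase();
  if (raw === "ollama" || raw === "claude" || raw === "openai") {
    return raw;
  }
  throw new Error(`Unsupported LLM provider: ${raw}`);
}
```

Defaulting to Ollama keeps the zero-cost, local path working out of the box, which matches the project's motivation.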

🤖 Recommended Ollama Models

Termaid uses llama3.2:3b as the default Ollama model, offering a good balance between performance and resource usage.

Alternative Models

Depending on your hardware and needs, you can configure alternative models:

| Model | Size | RAM Required | Description |
|-------|------|--------------|-------------|
| llama3.2:3b | 3B | ~4 GB | Default - Best balance of speed and quality |
| llama3.1:8b | 8B | ~8 GB | More powerful, better for complex commands |
| mistral:7b | 7B | ~6 GB | Good compromise between performance and quality |
| qwen2.5:3b | 3B | ~4 GB | Lightweight alternative, fast responses |

Changing the Model

You can change the model in the configuration panel (⚙️ icon) or via the environment variable:

export TERMAID_OLLAMA_MODEL=llama3.1:8b

For more details, see the Configuration guide.
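The fallback behavior can be sketched as follows. `TERMAID_OLLAMA_MODEL` and the `llama3.2:3b` default come from this README; the function itself is illustrative, not the actual Termaid source.

```typescript
// Illustrative sketch: resolve the Ollama model from the documented
// TERMAID_OLLAMA_MODEL variable, falling back to the default llama3.2:3b
// when the variable is unset or empty.
function resolveOllamaModel(env: Record<string, string | undefined>): string {
  const model = env["TERMAID_OLLAMA_MODEL"]?.trim();
  return model ? model : "llama3.2:3b";
}
```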

📥 Quick Install

Download the latest release for your platform:

| Platform | Format | Download |
|----------|--------|----------|
| Linux | AppImage | Termaid-1.4.0.AppImage |
| Linux | Debian/Ubuntu | termaid_1.4.0_amd64.deb |
| macOS | DMG (ARM) | Termaid-1.4.0-arm64.dmg |
| Windows | Installer | Termaid.Setup.1.4.0.exe |

See all versions on the Releases page.

🔧 Development Setup

git clone https://github.com/openhoat/termaid.git
cd termaid
npm install
npm run dev

Prerequisites: Node.js 18+, npm, and an LLM provider (Ollama, Claude, or OpenAI).

See the Getting Started guide for detailed setup instructions including LLM provider configuration.

📖 Documentation

🔒 Security

  • Commands proposed by AI are not executed automatically
  • You always have control: validation before execution
  • Ability to modify commands before execution
  • Configuration stored locally with electron-store
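The validate-before-execute flow described above can be sketched as a pure gate function. All names and types here are invented for the example and are not taken from the Termaid codebase.

```typescript
// Illustrative sketch of a validate-before-execute gate: nothing runs
// until the user has made an explicit decision on the AI's proposal.
type Decision =
  | { kind: "run" }                   // execute the proposed command as-is
  | { kind: "edit"; command: string } // execute a user-modified command
  | { kind: "cancel" };               // discard the proposal

// Returns the command to execute, or null when the user cancels.
function gateCommand(proposed: string, decision: Decision): string | null {
  switch (decision.kind) {
    case "run":
      return proposed;
    case "edit":
      return decision.command;
    case "cancel":
      return null;
  }
}
```

Keeping this step pure (no side effects until the gate returns) makes the "AI never executes automatically" property easy to test.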

📄 License

This project is licensed under the MIT License - see the LICENSE.txt file for details.

Copyright © 2026 Olivier Penhoat

👨‍💻 Author

Olivier Penhoat openhoat@gmail.com

🙏 Acknowledgments

  • Warp for the inspiration
  • The Ollama team for their excellent tool
  • Anthropic for the Claude API
  • The open-source community
