MarkdownLLM is a Neovim plugin for chatting with LLM providers in a plain Markdown buffer.
Recently, there has been a proliferation of LLM interaction tools designed as agents (e.g., cursor, claude-code, codex, gemini-cli). While these tools can accelerate workflows, they often lack clear and explicit control over the context provided to the LLM; they operate as "black boxes".
There is a tendency to trust agent-driven code changes or commands without fully understanding the underlying logic. This can create a disconnect between the human and the AI, hindering the human's learning and potentially leading to a loss of control over the software produced.
Unlike agent-based systems, interacting with an LLM through its native web interface keeps the human in a clearly leading role, with collaboration being explicit and the developer maintaining a more critical stance.
By bringing LLM chat into Neovim's native markdown environment, this plugin allows you to have the same critical and iterative dialogue you would have in a web interface, but with the full power of Vim's editing capabilities.
- Start LLM chats directly inside Neovim
- Edit any part of the conversation before sending again
- Switch provider, model, and options from YAML frontmatter
- Use presets to seed chats with reusable system instructions
- Run actions on visual selections for common text or code transformations
- Save chats to disk and resume them later
Each chat is a Markdown document with editable YAML frontmatter, so the complete context always stays visible and under your control.
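As an illustration, a fresh chat buffer seeded from a preset might look like the sketch below (the frontmatter fields and the `# System` section come from the configuration; the exact layout of subsequent conversation turns may differ):

```markdown
---
provider: openai
model: gpt-5.2
api_key_name: OPENAI_API_KEY
---

# System

You are an expert software developer and architect.
```

Because this is a plain Markdown buffer, any part of it, including the frontmatter and system instruction, can be edited before the next send.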
Use MarkdownLLM as a regular chat buffer: start a conversation, iterate on it, and edit the context directly when needed.
Actions let you send a visual selection through a predefined prompt, making common text and code transformations faster.
MarkdownLLM is lightweight and written entirely in Lua, with no dependencies on other plugins. It requires Neovim >= 0.10. The only optional suggestion is render-markdown.nvim, which renders Markdown in the chat buffer.
```lua
{
  "PreziosiRaffaele/markdown-llm.nvim",
  -- optional markdown renderer
  dependencies = {
    { "MeanderingProgrammer/render-markdown.nvim" },
  },
  opts = {
    log_level = vim.log.levels.INFO,
    default_setup_name = "default",
    setups = {
      {
        name = "default",
        provider = "openai",
        model = "gpt-5.2",
        api_key_name = "OPENAI_API_KEY",
      },
    },
    presets = {
      { name = "Chat", instruction = "" },
    },
    actions = {},
    keymaps = {
      newChat = "<leader>mn",
      sendChat = "<leader>ms",
      selectChatSetup = "<leader>mc",
      selectDefaultSetup = "<leader>md",
      actions = "<leader>ma",
      saveChat = "<leader>mw",
      resumeChat = "<leader>mr",
    },
  },
}
```

Add PreziosiRaffaele/markdown-llm.nvim (and optionally MeanderingProgrammer/render-markdown.nvim) as you normally would, then call the setup function in your config:
```lua
require("markdownllm").setup({
  log_level = vim.log.levels.INFO,
  default_setup_name = "default",
  setups = {
    {
      name = "default",
      provider = "openai",
      model = "gpt-5.2",
      api_key_name = "OPENAI_API_KEY",
    },
  },
  presets = {
    { name = "Chat", instruction = "" },
  },
  actions = {},
  keymaps = {
    newChat = "<leader>mn",
    sendChat = "<leader>ms",
    selectChatSetup = "<leader>mc",
    selectDefaultSetup = "<leader>md",
    actions = "<leader>ma",
    saveChat = "<leader>mw",
    resumeChat = "<leader>mr",
  },
})
```

- `:MarkLLMNewChat`: open a new chat buffer (optionally with a preset).
- `:MarkLLMSendChat`: send the current chat buffer to the provider.
- `:MarkLLMRunAction`: send the current visual selection using an action.
- `:MarkLLMSelectBufferSetup`: set the setup for the current buffer.
- `:MarkLLMSelectDefaultSetup`: set the default setup for new buffers.
- `:MarkLLMSaveChat`: save the current chat buffer to disk.
- `:MarkLLMResumeChat`: resume a saved chat from disk.
Help docs are available in `doc/markdownllm.txt` after running `:helptags`.
- `log_level`: logger level (default: `vim.log.levels.INFO`).
- `log_to_file`: enable logging to file (default: `false`).
- `log_file_path`: log file path (default: `stdpath("cache")/markdownllm.log`).
- `default_setup_name`: name of the default setup used for new chats.
- `setups`: list of provider/model setups:
  - `name`: unique label used in selectors.
  - `provider`: provider name: `openai`, `gemini`, `grok`, `deepseek`.
  - `model`: model id passed to the provider.
  - `api_key_name`: environment variable containing the API key.
  - `base_url`: optional override for the selected provider endpoint.
  - `timeout`: optional request timeout in milliseconds.
  - `temperature`, `max_tokens`, `top_p`, `stop`, `frequency_penalty`, `presence_penalty`, `seed`, `reasoning_effort`: model parameters.
  - `web_search`: enable the provider web search tool when supported.
- OpenAI uses the Responses API: `max_tokens` maps to `max_output_tokens`; `stop`, `frequency_penalty`, `presence_penalty`, and `seed` are currently ignored with a warning.
- `presets`: list of prompt presets used to seed new chats:
  - `name`: label shown in the preset selector.
  - `instruction`: content injected under the `# System` section.
  - `setup`: setup name override; defaults to `default_setup_name`.
- `actions`: list of actions used for visual selection prompts:
  - `name`: label shown in the action selector.
  - `preset`: preset name to open; defaults to the first preset.
  - `type`: `text` (default) or `code`; `code` wraps the selection in a fenced code block.
  - `language`: optional code fence language when `type = "code"`; defaults to the current buffer filetype.
  - `pre_text`: text prepended before the selection.
- `chat_save_dir`: directory for saved chats (default: `stdpath("data")/markdownllm/chats`).
- `keymaps`: optional command bindings: `newChat`, `sendChat`, `selectChatSetup`, `selectDefaultSetup`, `actions`, `saveChat`, `resumeChat`.
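Because `api_key_name` refers to an environment variable rather than storing the key itself, the key must be exported in the shell that launches Neovim. A minimal sketch (the key value below is a placeholder, not a real key):

```shell
# Export the API key in your shell profile (~/.bashrc, ~/.zshrc, ...).
# "sk-your-key-here" is a placeholder value.
export OPENAI_API_KEY="sk-your-key-here"

# Confirm the variable is visible before starting Neovim:
[ -n "$OPENAI_API_KEY" ] && echo "OPENAI_API_KEY is set"
```

The same pattern applies to `GEMINI_API_KEY`, `GROK_API_KEY`, and `DEEPSEEK_API_KEY`, or to any custom variable name you configure.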
Each chat buffer starts with YAML frontmatter that mirrors the setup. Edit it directly to change providers or model options:
```yaml
---
provider: openai
model: gpt-5.2
api_key_name: OPENAI_API_KEY
temperature: 0.2
max_tokens: 800
reasoning_effort: xhigh
---
```

```lua
setups = {
  {
    name = "OpenAI-5.2",
    provider = "openai",
    model = "gpt-5.2",
    api_key_name = "OPENAI_API_KEY",
    reasoning_effort = "xhigh",
  },
  {
    name = "Gemini-2.5-flash",
    provider = "gemini",
    model = "gemini-2.5-flash",
    api_key_name = "GEMINI_API_KEY",
  },
  {
    name = "Grok Code Fast",
    provider = "grok",
    model = "grok-code-fast-1",
    api_key_name = "GROK_API_KEY",
  },
  {
    name = "DeepSeek Chat",
    provider = "deepseek",
    model = "deepseek-chat",
    api_key_name = "DEEPSEEK_API_KEY",
    base_url = "https://api.deepseek.com/v1/chat/completions",
  },
  {
    name = "Gemini-2.5-pro",
    provider = "gemini",
    model = "gemini-2.5-pro",
    api_key_name = "GEMINI_API_KEY",
    web_search = true,
  },
}
```

```lua
presets = {
  {
    name = "Chat",
    instruction = "",
  },
  {
    name = "Software Development",
    instruction = "You are an expert software developer and architect. Favor the Unix philosophy. Ask clarifying questions when requirements are ambiguous. Propose tradeoffs before making architectural choices.",
  },
  {
    name = "Geopolitics",
    instruction = "You are an expert geopolitics analyst and educator. Explain geopolitics topics clearly and neutrally for an intelligent, non-specialist audience.",
  },
  {
    name = "Traduttore Italiano",
    setup = "Gemini-2.5-flash", -- setup name override
    instruction = "You are an Italian native speaker and translator. Write natural Italian and preserve the original meaning.",
  },
  {
    name = "Grammar Check",
    instruction = "You are a professional editor. Follow these rules: 1. Correct grammar and punctuation 2. Improve clarity and readability 3. Maintain the original tone and intent 4. Keep the same format (lists stay lists, etc.) 5. Don't add explanations or comments",
    setup = "Gemini-2.5-flash",
  },
}
```

```lua
actions = {
  {
    name = "Summarize",
    preset = "Chat",
    type = "text",
    pre_text = "Summarize the following text:\n\n",
  },
  {
    name = "Rewrite",
    preset = "Grammar Check",
    type = "replace_visual",
    pre_text = "Please correct the following text. Return only the revised version without additional comments:",
    shortcut = "<leader>mr",
  },
  {
    name = "Explain Code",
    preset = "Software Development",
    type = "code",
    language = "lua",
    pre_text = "Explain what this code does:\n\n",
  },
  {
    name = "Traduci",
    preset = "Traduttore Italiano",
    type = "text",
    pre_text = "Traduci questo testo in italiano:\n\n",
  },
}
```

Notes:
- `type = "replace_visual"` replaces the selected text in place with the model response.
- `shortcut` adds a visual-mode keymap for the action, bypassing the action picker.
The following providers are currently supported:
- OpenAI (Responses API)
- xAI (Responses API)
- DeepSeek (chat-completions-compatible)
- Google (Gemini)
Other providers can be added on request: open an issue or PR to add a new provider.
To enable file logging, set `log_to_file = true` in your setup and optionally override `log_file_path`. The default log path is `stdpath("cache")/markdownllm.log`. To debug API calls, set `log_level = vim.log.levels.DEBUG` to log request and raw response bodies.
Example:
```lua
require("markdownllm").setup({
  log_level = vim.log.levels.DEBUG,
  log_to_file = true,
  log_file_path = vim.fn.stdpath("cache") .. "/markdownllm.log",
})
```

MIT. See LICENSE.



