
Add setup command for installing debug adapters #5

Merged
akiselev merged 3 commits into master from claude/setup-debugger-command-05ZQp
Jan 19, 2026

Conversation

@akiselev
Owner

Implement the 'debugger setup' command that enables easy installation
of DAP-supported debuggers across platforms. Features include:

  • Support for lldb-dap, codelldb, debugpy, and delve installers
  • GitHub releases and package manager installation methods
  • Project type detection for auto-setup (--auto)
  • Installation verification via DAP protocol
  • Cross-platform support (Linux, macOS, Windows)
  • JSON output mode for agent-friendly output
  • Auto-update of config.toml after installation

Usage examples:
  debugger setup lldb       # Install lldb-dap
  debugger setup --list     # List available debuggers
  debugger setup --check    # Verify installations
  debugger setup --auto     # Auto-detect and install
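These four modes could be modeled roughly as follows (a sketch; the real src/commands.rs presumably defines the subcommand with clap, and `SetupMode`/`parse_setup_args` are illustrative names, not from the PR):

```rust
// Illustrative mapping of the setup subcommand's invocations to a mode enum.
#[derive(Debug, PartialEq)]
enum SetupMode {
    List,
    Check,
    Auto,
    Install(String),
}

fn parse_setup_args(args: &[&str]) -> Option<SetupMode> {
    match args {
        ["--list"] => Some(SetupMode::List),
        ["--check"] => Some(SetupMode::Check),
        ["--auto"] => Some(SetupMode::Auto),
        // A bare name like "lldb" selects a specific debugger to install.
        [name] if !name.starts_with("--") => Some(SetupMode::Install(name.to_string())),
        _ => None,
    }
}
```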

Remove incorrect logic that printed the message when the results
array was empty in non-JSON mode.
Copilot AI review requested due to automatic review settings (January 19, 2026 17:39)
@gemini-code-assist

Summary of Changes

Hello @akiselev, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the debugger's usability by introducing a comprehensive setup command. This new functionality allows users to easily install, manage, and verify various debug adapters, abstracting away the complexities of platform-specific installation procedures. It aims to provide a seamless experience for developers to get their preferred debuggers configured and ready for use.

Highlights

  • New Setup Command: Introduced a new debugger setup command to simplify the installation and management of DAP-supported debug adapters across various platforms.
  • Debugger Support: Added support for installing lldb-dap, codelldb, debugpy, and delve debuggers.
  • Installation Methods: Implemented diverse installation methods including GitHub releases, system package managers (e.g., Homebrew, Apt), and language-specific package managers (e.g., pip, go install).
  • Auto-Setup and Verification: Features project type detection for automatic debugger setup (--auto) and includes installation verification via the DAP protocol to ensure functionality.
  • Configuration Integration: Automatically updates the config.toml file after successful debugger installation to reflect the new adapter paths and arguments.
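
For context, the auto-updated config.toml plausibly gains an [adapters] table along these lines (an "adapters" table does appear in the reviewed diff, but the key names and path shown here are illustrative, not taken from the PR):

```toml
# Hypothetical shape of the auto-generated adapter entry
[adapters.lldb]
command = "/home/user/.local/share/debugger/adapters/lldb-dap"
args = []
```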



@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces a powerful debugger setup command, which is a fantastic addition for managing debug adapters. The implementation is well-structured within a new setup module. However, I've identified two critical command injection vulnerabilities that must be addressed before merging. Additionally, there are a few other high and medium severity issues related to data loss, error handling, and code clarity that should be fixed. My review provides specific suggestions to resolve these problems.

};

println!("Installing {}...", package);
run_command(&format!("{} install {}", pip.display(), package)).await?;

critical

This line is vulnerable to command injection. The package variable, which is constructed from user input via the --version flag, is formatted directly into a shell command. A malicious version string (e.g., "1.0; rm -rf /") could lead to arbitrary command execution.

To mitigate this, you should use tokio::process::Command and pass arguments separately, rather than formatting them into a single string to be executed by a shell.

    let output = tokio::process::Command::new(&pip)
        .arg("install")
        .arg(&package)
        .output()
        .await
        .map_err(|e| crate::common::Error::Internal(format!("Failed to run pip: {}", e)))?;

    if !output.status.success() {
        return Err(crate::common::Error::Internal(format!(
            "pip install failed: {}",
            String::from_utf8_lossy(&output.stderr)
        )));
    }

let command = format!("{} install {}", tool, package);
println!("Running: {}", command);

run_command(&command).await?;

critical

This line is vulnerable to command injection. The package variable is constructed using user-provided input from the --version flag and then formatted directly into a shell command string. A malicious version string could allow for arbitrary command execution.

To fix this, you should use tokio::process::Command to execute go install and pass the package string as a separate argument. This ensures that the user input is treated as a single argument and not interpreted by the shell.

Suggested change
run_command(&command).await?;
    let output = tokio::process::Command::new(tool)
        .arg("install")
        .arg(package)
        .output()
        .await
        .map_err(|e| crate::common::Error::Internal(format!("Failed to run go install: {}", e)))?;

    if !output.status.success() {
        return Err(crate::common::Error::Internal(format!(
            "go install failed: {}",
            String::from_utf8_lossy(&output.stderr)
        )));
    }

Comment on lines +628 to +632
let mut config: toml::Table = if content.is_empty() {
toml::Table::new()
} else {
content.parse().unwrap_or_else(|_| toml::Table::new())
};

high

If the config.toml file is malformed, content.parse() will fail, and unwrap_or_else will create a new, empty toml::Table. When this new table is written back to the file, it will silently overwrite and discard the user's entire existing configuration. This can lead to unexpected data loss.

Instead of creating a new table on parse failure, you should propagate the error and inform the user that their configuration file is invalid and needs to be fixed.

Suggested change
    let mut config: toml::Table = if content.is_empty() {
        toml::Table::new()
    } else {
        content.parse().unwrap_or_else(|_| toml::Table::new())
    };
    let mut config: toml::Table = if content.is_empty() {
        toml::Table::new()
    } else {
        content.parse().map_err(|e| {
            crate::common::Error::ConfigParse(format!(
                "Failed to parse {}: {}",
                config_file.display(),
                e
            ))
        })?
    };

Comment on lines +210 to +215
for entry in entries.flatten() {
let path = entry.path();
if path.is_file() {
let _ = make_executable(&path);
}
}

medium

Using flatten() on an iterator of Results silently discards any errors. If std::fs::read_dir encounters an I/O error or a permission issue for an entry, it will be ignored. This could lead to some binaries not being made executable without any indication of a problem. It's better to handle the Result explicitly to make the code more robust.

                        for entry in entries {
                            if let Ok(entry) = entry {
                                let path = entry.path();
                                if path.is_file() {
                                    // Log or ignore the error, but don't fail the whole installation.
                                    if let Err(e) = make_executable(&path) {
                                        println!("Warning: could not make {} executable: {}", path.display(), e);
                                    }
                                }
                            }
                        }

Comment on lines +253 to +277
let dlv_src = temp_dir.path().join("dlv");
if !dlv_src.exists() {
// Try looking in a subdirectory
let dlv_src_alt = std::fs::read_dir(temp_dir.path())?
.filter_map(|e| e.ok())
.find(|e| e.path().is_dir())
.map(|e| e.path().join("dlv"))
.filter(|p| p.exists());

if dlv_src_alt.is_none() {
return Err(Error::Internal(
"dlv binary not found in downloaded archive".to_string(),
));
}
}

let dlv_src = if dlv_src.exists() {
dlv_src
} else {
std::fs::read_dir(temp_dir.path())?
.filter_map(|e| e.ok())
.find(|e| e.path().is_dir())
.map(|e| e.path().join("dlv"))
.ok_or_else(|| Error::Internal("Could not find dlv in archive".to_string()))?
};

medium

The logic to find the dlv binary within the extracted archive is a bit convoluted and contains redundant code. The first if block (lines 254-267) checks for an alternative path but doesn't use the result. The subsequent if/else block then repeats the same search logic.

This can be simplified into a single, clearer block of code that first checks the root and then searches subdirectories if needed.

    // Find dlv binary in extracted directory
    let dlv_src = temp_dir.path().join(binary_name());
    let dlv_src = if dlv_src.exists() {
        dlv_src
    } else {
        // Try looking in a subdirectory
        std::fs::read_dir(temp_dir.path())?
            .filter_map(Result::ok)
            .find(|e| e.path().is_dir())
            .map(|e| e.path().join(binary_name()))
            .filter(|p| p.exists())
            .ok_or_else(|| Error::Internal("dlv binary not found in downloaded archive".to_string()))?
    };

Comment on lines +117 to +122
for entry in entries.flatten() {
let path = entry.path();
if path.extension().map(|e| e == ext).unwrap_or(false) {
return true;
}
}

medium

Using flatten() on an iterator of Results will silently discard any errors that occur while reading directory entries (e.g., due to permissions). This could cause the project detection to fail partially without any indication. It's better to handle the Result explicitly, even if it's just to ignore the error, to make the intent clear.

        for entry in entries {
            if let Ok(entry) = entry {
                let path = entry.path();
                if path.extension().map(|e| e == ext).unwrap_or(false) {
                    return true;
                }
            }
        }


Copilot AI left a comment


Pull request overview

This PR implements a comprehensive "debugger setup" command that automates the installation of DAP (Debug Adapter Protocol) debuggers across platforms. The implementation includes support for lldb-dap, CodeLLDB, debugpy, and Delve with multiple installation methods (GitHub releases, package managers, language-specific tools).

Changes:

  • Adds setup command infrastructure with installer traits, verification via DAP protocol, and project type detection
  • Implements four debugger installers (lldb-dap, CodeLLDB, debugpy, Delve) with platform-specific installation logic
  • Integrates setup command into CLI with JSON output support and auto-configuration update
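
The project detection logic in src/setup/detector.rs is not shown in this thread; a minimal sketch of marker-file detection over a directory listing might look like this (function name and the exact file-to-debugger mapping are illustrative assumptions):

```rust
// Sketch: map well-known project marker files to a suggested debugger.
// The real detector.rs presumably scans the filesystem; this version takes
// a pre-collected list of file names so the logic is easy to test.
fn detect_project_type(files: &[&str]) -> Option<&'static str> {
    if files.contains(&"Cargo.toml") {
        Some("lldb") // Rust projects: lldb-dap (or codelldb)
    } else if files.contains(&"go.mod") {
        Some("delve")
    } else if files.iter().any(|f| f.ends_with(".py")) {
        Some("debugpy")
    } else {
        None
    }
}
```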

Reviewed changes

Copilot reviewed 14 out of 15 changed files in this pull request and generated 7 comments.

Show a summary per file
File Description
src/setup/mod.rs Main setup command logic, orchestration, and config file updates
src/setup/registry.rs Debugger metadata registry and installer factory
src/setup/installer.rs Core installation traits, utilities for downloading and extracting
src/setup/verifier.rs DAP adapter verification by sending initialize requests
src/setup/detector.rs Project type detection from directory contents
src/setup/adapters/*.rs Individual debugger installers for lldb, codelldb, debugpy, and delve
src/commands.rs CLI command definition for setup subcommand
src/cli/mod.rs Command dispatch integration
Cargo.toml Dependencies for HTTP, archive handling, and async operations


.args(args)
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::null())

Copilot AI Jan 19, 2026


The stderr output from the adapter process is being discarded by using Stdio::null(). This could make debugging installation issues difficult when adapters fail to start properly. Consider capturing stderr to provide better error messages to users.

Suggested change
.stderr(Stdio::null())
.stderr(Stdio::inherit())

src/setup/mod.rs Outdated
let mut config: toml::Table = if content.is_empty() {
toml::Table::new()
} else {
content.parse().unwrap_or_else(|_| toml::Table::new())

Copilot AI Jan 19, 2026


The config file is parsed with unwrap_or_else which silently ignores parsing errors. If the existing config.toml is malformed, this will discard it without warning the user. Consider notifying the user when the existing config cannot be parsed, or backing up the malformed file before overwriting it.

Suggested change
    content.parse().unwrap_or_else(|_| toml::Table::new())
    match content.parse() {
        Ok(table) => table,
        Err(err) => {
            // Warn the user and back up the malformed config before overwriting it
            eprintln!(
                "Warning: existing debug adapter config at '{}' is malformed ({}); \
                 backing it up and creating a new config.",
                config_file.display(),
                err
            );
            let backup_path = config_file.with_extension("toml.bak");
            // Best effort backup; if it fails, propagate the error
            std::fs::rename(&config_file, &backup_path)?;
            toml::Table::new()
        }
    }

src/setup/mod.rs Outdated
let adapters = config
.get_mut("adapters")
.and_then(|v| v.as_table_mut())
.unwrap();

Copilot AI Jan 19, 2026


The unwrap() call on line 639 will panic if the "adapters" key doesn't contain a table, despite just inserting it as a table on line 636. While logically this shouldn't fail, using expect() with a clear message or proper error handling would be more defensive and aid debugging if the logic changes in the future.

Suggested change
.unwrap();
.expect("Expected 'adapters' to be a TOML table in the debug adapter configuration");

Comment on lines +356 to +373
pub async fn get_github_release(repo: &str, version: Option<&str>) -> Result<GitHubRelease> {
let client = reqwest::Client::new();
let url = if let Some(v) = version {
format!(
"https://api.github.com/repos/{}/releases/tags/{}",
repo, v
)
} else {
format!("https://api.github.com/repos/{}/releases/latest", repo)
};

let response = client
.get(&url)
.header("User-Agent", "debugger-cli")
.header("Accept", "application/vnd.github.v3+json")
.send()
.await
.map_err(|e| Error::Internal(format!("GitHub API error: {}", e)))?;

Copilot AI Jan 19, 2026


The GitHub API is called without any rate limiting or retry logic. GitHub has rate limits (60 requests/hour for unauthenticated, 5000 for authenticated). Consider adding retry logic with exponential backoff, or caching release information to avoid hitting rate limits during repeated installations.
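
A retry helper along the lines the reviewer suggests could look like this synchronous sketch (the async variant would use tokio::time::sleep; the function name and signature are illustrative, not from the PR):

```rust
use std::thread::sleep;
use std::time::Duration;

// Generic retry with exponential backoff: delay doubles after each failed
// attempt (base, 2*base, 4*base, ...), and the last error is returned once
// the attempt budget is exhausted.
fn retry_with_backoff<T, E>(
    max_attempts: u32,
    base_delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => {
                attempt += 1;
                if attempt >= max_attempts {
                    return Err(e);
                }
                sleep(base_delay * 2u32.pow(attempt - 1));
            }
        }
    }
}
```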

Copilot uses AI. Check for mistakes.
Comment on lines +207 to +260
pub async fn download_file(url: &str, dest: &Path) -> Result<()> {
let client = reqwest::Client::new();
let response = client
.get(url)
.header("User-Agent", "debugger-cli")
.send()
.await
.map_err(|e| Error::Internal(format!("Failed to download {}: {}", url, e)))?;

if !response.status().is_success() {
return Err(Error::Internal(format!(
"Download failed with status {}: {}",
response.status(),
url
)));
}

let total_size = response.content_length().unwrap_or(0);

let pb = if total_size > 0 {
let pb = ProgressBar::new(total_size);
pb.set_style(
ProgressStyle::default_bar()
.template(" [{bar:40.cyan/blue}] {bytes}/{total_bytes} ({eta})")
.unwrap()
.progress_chars("=> "),
);
Some(pb)
} else {
println!(" Downloading...");
None
};

let mut file =
std::fs::File::create(dest).map_err(|e| Error::Internal(format!("Failed to create file: {}", e)))?;

let mut stream = response.bytes_stream();
let mut downloaded: u64 = 0;

while let Some(chunk) = stream.next().await {
let chunk = chunk.map_err(|e| Error::Internal(format!("Download error: {}", e)))?;
std::io::Write::write_all(&mut file, &chunk)?;
downloaded += chunk.len() as u64;
if let Some(ref pb) = pb {
pb.set_position(downloaded);
}
}

if let Some(pb) = pb {
pb.finish_and_clear();
}

Ok(())
}

Copilot AI Jan 19, 2026


Downloaded files are not verified with checksums or signatures before extraction and execution. This is a security risk as compromised downloads or man-in-the-middle attacks could result in executing malicious code. Consider adding checksum verification, especially for binaries downloaded from GitHub releases.
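
Checksum verification along the suggested lines can be sketched with the hashing step injected, so the comparison logic stays dependency-free; real code would pass in a SHA-256 digest (e.g. from the sha2 crate), and the function name here is illustrative:

```rust
// Compare a computed digest against an expected hex string (as published in
// a release's checksum file). The digest function is a parameter so this
// sketch needs no external crates.
fn verify_checksum(
    data: &[u8],
    expected_hex: &str,
    digest: impl Fn(&[u8]) -> Vec<u8>,
) -> bool {
    let actual: String = digest(data).iter().map(|b| format!("{:02x}", b)).collect();
    // Trim and compare case-insensitively: checksum files vary in both.
    actual.eq_ignore_ascii_case(expected_hex.trim())
}
```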

Comment on lines +332 to +353
pub async fn run_command(command: &str) -> Result<String> {
let output = if cfg!(windows) {
tokio::process::Command::new("cmd")
.args(["/C", command])
.output()
.await
} else {
tokio::process::Command::new("sh")
.args(["-c", command])
.output()
.await
};

let output = output.map_err(|e| Error::Internal(format!("Failed to run command: {}", e)))?;

if !output.status.success() {
let stderr = String::from_utf8_lossy(&output.stderr);
return Err(Error::Internal(format!("Command failed: {}", stderr)));
}

Ok(String::from_utf8_lossy(&output.stdout).to_string())
}

Copilot AI Jan 19, 2026


The run_command function executes shell commands using sh -c on Unix and cmd /C on Windows without proper escaping or sanitization. While the commands appear to be constructed from controlled sources in this codebase, this pattern is prone to command injection if any user input ever flows into these commands. Consider using direct process spawning with arguments array instead of shell execution where possible.
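
A run_command_args along the lines the later commit message describes might look like this sketch (the actual signature in the PR is not shown; the error type is simplified to String here):

```rust
use std::process::Command;

// Direct process spawning with an argument vector: each element reaches the
// child process verbatim, so shell metacharacters in user input are inert.
fn run_command_args(program: &str, args: &[&str]) -> Result<String, String> {
    let output = Command::new(program)
        .args(args)
        .output()
        .map_err(|e| format!("Failed to run {}: {}", program, e))?;
    if !output.status.success() {
        return Err(format!(
            "{} failed: {}",
            program,
            String::from_utf8_lossy(&output.stderr)
        ));
    }
    Ok(String::from_utf8_lossy(&output.stdout).to_string())
}
```

Because no shell is involved, an argument like `"1.0; rm -rf /"` is passed through as a literal string rather than executed.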

if let Ok(path) = which::which(cmd) {
// Verify it's Python 3.7+
let version_check = run_command(&format!(
"{} -c \"import sys; assert sys.version_info >= (3, 7)\"",

Copilot AI Jan 19, 2026


When checking for Python version with run_command, the command includes an assertion that could raise a Python exception. If the version check fails, the error message from the assertion will be generic. Consider using explicit version comparison and providing a clearer error message about the minimum required Python version.

Suggested change
"{} -c \"import sys; assert sys.version_info >= (3, 7)\"",
"{} -c \"import sys; sys.exit(0 if sys.version_info >= (3, 7) else 1)\"",


@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: a34243ea53


Comment on lines +283 to +287
println!("Downloading {}... {:.1} MB", asset.name, asset.size as f64 / 1_000_000.0);
download_file(&asset.browser_download_url, &archive_path).await?;

println!("Extracting...");
extract_tar_gz(&archive_path, temp_dir.path())?;

P2: Use correct extractor for LLVM .tar.xz assets

The GitHub asset patterns here are .tar.xz, but the download is always unpacked with extract_tar_gz (gzip). On platforms where the GitHub release path is chosen (e.g., Linux without apt/dnf/pacman), the install will fail because .tar.xz archives are not gzip-compressed. This makes the fallback installer unusable for those environments.
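
A minimal fix is to select the extractor from the asset name rather than assuming gzip. `ArchiveKind` and `archive_kind` are illustrative names; an `extract_tar_xz` would then wrap an XZ decoder (e.g. xz2::read::XzDecoder) around tar::Archive:

```rust
// Classify a release asset by its file extension so the right extractor
// (gzip, xz, or zip) can be dispatched.
#[derive(Debug, PartialEq)]
enum ArchiveKind {
    TarGz,
    TarXz,
    Zip,
}

fn archive_kind(asset_name: &str) -> Option<ArchiveKind> {
    if asset_name.ends_with(".tar.gz") || asset_name.ends_with(".tgz") {
        Some(ArchiveKind::TarGz)
    } else if asset_name.ends_with(".tar.xz") {
        Some(ArchiveKind::TarXz)
    } else if asset_name.ends_with(".zip") {
        Some(ArchiveKind::Zip)
    } else {
        None
    }
}
```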


Comment on lines +115 to +119
reader.read_line(&mut header_line).await?;
let content_length: usize = header_line
.trim()
.strip_prefix("Content-Length: ")
.and_then(|s| s.parse().ok())

P2: Parse full DAP headers instead of assuming one line

The verifier only reads a single header line and requires it to be Content-Length, then immediately treats the next line as the blank separator. Adapters that emit an additional header (e.g., Content-Type, which is common in DAP/JSON-RPC) will cause this parser to mis-handle the header block, leading to invalid JSON parsing or a spurious "Invalid DAP header" error. This makes setup --check falsely report failures for valid adapters that include extra headers.
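
A header parser that tolerates extra headers can be sketched as a pure function over the raw header block (illustrative; the real verifier reads lines asynchronously from the adapter's stdout):

```rust
// Loop over CRLF-delimited header lines until the blank separator,
// extracting Content-Length wherever it appears and skipping unknown
// headers such as Content-Type instead of erroring on them.
fn parse_dap_headers(raw: &str) -> Option<usize> {
    let mut content_length: Option<usize> = None;
    for line in raw.split("\r\n") {
        if line.is_empty() {
            break; // blank line ends the header block
        }
        if let Some(v) = line.strip_prefix("Content-Length:") {
            content_length = v.trim().parse().ok();
        }
    }
    content_length
}
```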


Security fixes (critical):
- Fix command injection in debugpy.rs: Use run_command_args instead of shell
- Fix command injection in delve.rs: Use run_command_args instead of shell

Robustness improvements (high):
- Fix config parse error handling: Propagate error instead of silently
  creating new table which could overwrite user config
- Use expect() instead of unwrap() for adapters table access

Code quality fixes (medium):
- Fix silent error discard in codelldb.rs: Log warnings for permission errors
- Fix silent error discard in detector.rs: Explicitly handle Result
- Simplify dlv binary search logic in delve.rs

Functionality fixes:
- Add extract_tar_xz() for LLVM releases (they use .tar.xz, not .tar.gz)
- Fix DAP header parsing in verifier.rs: Handle multiple headers
- Capture stderr in verifier.rs for better error messages
- Add retry logic with exponential backoff for GitHub API
- Fix Python version check to use explicit exit code instead of assertion
@akiselev merged commit 15eef90 into master on Jan 19, 2026
@akiselev deleted the claude/setup-debugger-command-05ZQp branch on January 19, 2026 18:02