Add setup command for installing debug adapters#5
Conversation
Implement the 'debugger setup' command that enables easy installation of DAP-supported debuggers across platforms. Features include:

- Support for lldb-dap, codelldb, debugpy, and delve installers
- GitHub releases and package manager installation methods
- Project type detection for auto-setup (--auto)
- Installation verification via DAP protocol
- Cross-platform support (Linux, macOS, Windows)
- JSON output mode for agent-friendly output
- Auto-update of config.toml after installation

Usage examples:

    debugger setup lldb     # Install lldb-dap
    debugger setup --list   # List available debuggers
    debugger setup --check  # Verify installations
    debugger setup --auto   # Auto-detect and install
Remove incorrect logic that printed the message when the results array was empty in non-JSON mode.
Summary of Changes

Hello @akiselev, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed: this pull request significantly enhances the debugger's usability by introducing a comprehensive `debugger setup` command.
Code Review
This pull request introduces a powerful debugger setup command, which is a fantastic addition for managing debug adapters. The implementation is well-structured within a new setup module. However, I've identified two critical command injection vulnerabilities that must be addressed before merging. Additionally, there are a few other high and medium severity issues related to data loss, error handling, and code clarity that should be fixed. My review provides specific suggestions to resolve these problems.
src/setup/adapters/debugpy.rs
Outdated
};

println!("Installing {}...", package);
run_command(&format!("{} install {}", pip.display(), package)).await?;
This line is vulnerable to command injection. The package variable, which is constructed from user input via the --version flag, is formatted directly into a shell command. A malicious version string (e.g., "1.0; rm -rf /") could lead to arbitrary command execution.
To mitigate this, you should use tokio::process::Command and pass arguments separately, rather than formatting them into a single string to be executed by a shell.
let output = tokio::process::Command::new(&pip)
    .arg("install")
    .arg(&package)
    .output()
    .await
    .map_err(|e| crate::common::Error::Internal(format!("Failed to run pip: {}", e)))?;

if !output.status.success() {
    return Err(crate::common::Error::Internal(format!(
        "pip install failed: {}",
        String::from_utf8_lossy(&output.stderr)
    )));
}
src/setup/adapters/delve.rs
Outdated
let command = format!("{} install {}", tool, package);
println!("Running: {}", command);

run_command(&command).await?;
This line is vulnerable to command injection. The package variable is constructed using user-provided input from the --version flag and then formatted directly into a shell command string. A malicious version string could allow for arbitrary command execution.
To fix this, you should use tokio::process::Command to execute go install and pass the package string as a separate argument. This ensures that the user input is treated as a single argument and not interpreted by the shell.
let output = tokio::process::Command::new(tool)
    .arg("install")
    .arg(package)
    .output()
    .await
    .map_err(|e| crate::common::Error::Internal(format!("Failed to run go install: {}", e)))?;
if !output.status.success() {
    return Err(crate::common::Error::Internal(format!(
        "go install failed: {}",
        String::from_utf8_lossy(&output.stderr)
    )));
}
let mut config: toml::Table = if content.is_empty() {
    toml::Table::new()
} else {
    content.parse().unwrap_or_else(|_| toml::Table::new())
};
If the config.toml file is malformed, content.parse() will fail, and unwrap_or_else will create a new, empty toml::Table. When this new table is written back to the file, it will silently overwrite and discard the user's entire existing configuration. This can lead to unexpected data loss.
Instead of creating a new table on parse failure, you should propagate the error and inform the user that their configuration file is invalid and needs to be fixed.
let mut config: toml::Table = if content.is_empty() {
    toml::Table::new()
} else {
    content.parse().map_err(|e| {
        crate::common::Error::ConfigParse(format!(
            "Failed to parse {}: {}",
            config_file.display(),
            e
        ))
    })?
};
src/setup/adapters/codelldb.rs
Outdated
for entry in entries.flatten() {
    let path = entry.path();
    if path.is_file() {
        let _ = make_executable(&path);
    }
}
Using flatten() on an iterator of Results silently discards any errors. If std::fs::read_dir encounters an I/O error or a permission issue for an entry, it will be ignored. This could lead to some binaries not being made executable without any indication of a problem. It's better to handle the Result explicitly to make the code more robust.
for entry in entries {
    if let Ok(entry) = entry {
        let path = entry.path();
        if path.is_file() {
            // Log or ignore the error, but don't fail the whole installation.
            if let Err(e) = make_executable(&path) {
                println!("Warning: could not make {} executable: {}", path.display(), e);
            }
        }
    }
}
src/setup/adapters/delve.rs
Outdated
let dlv_src = temp_dir.path().join("dlv");
if !dlv_src.exists() {
    // Try looking in a subdirectory
    let dlv_src_alt = std::fs::read_dir(temp_dir.path())?
        .filter_map(|e| e.ok())
        .find(|e| e.path().is_dir())
        .map(|e| e.path().join("dlv"))
        .filter(|p| p.exists());

    if dlv_src_alt.is_none() {
        return Err(Error::Internal(
            "dlv binary not found in downloaded archive".to_string(),
        ));
    }
}

let dlv_src = if dlv_src.exists() {
    dlv_src
} else {
    std::fs::read_dir(temp_dir.path())?
        .filter_map(|e| e.ok())
        .find(|e| e.path().is_dir())
        .map(|e| e.path().join("dlv"))
        .ok_or_else(|| Error::Internal("Could not find dlv in archive".to_string()))?
};
The logic to find the dlv binary within the extracted archive is a bit convoluted and contains redundant code. The first if block (lines 254-267) checks for an alternative path but doesn't use the result. The subsequent if/else block then repeats the same search logic.
This can be simplified into a single, clearer block of code that first checks the root and then searches subdirectories if needed.
// Find dlv binary in extracted directory
let dlv_src = temp_dir.path().join(binary_name());
let dlv_src = if dlv_src.exists() {
    dlv_src
} else {
    // Try looking in a subdirectory
    std::fs::read_dir(temp_dir.path())?
        .filter_map(Result::ok)
        .find(|e| e.path().is_dir())
        .map(|e| e.path().join(binary_name()))
        .filter(|p| p.exists())
        .ok_or_else(|| Error::Internal("dlv binary not found in downloaded archive".to_string()))?
};
src/setup/detector.rs
Outdated
for entry in entries.flatten() {
    let path = entry.path();
    if path.extension().map(|e| e == ext).unwrap_or(false) {
        return true;
    }
}
Using flatten() on an iterator of Results will silently discard any errors that occur while reading directory entries (e.g., due to permissions). This could cause the project detection to fail partially without any indication. It's better to handle the Result explicitly, even if it's just to ignore the error, to make the intent clear.
for entry in entries {
    if let Ok(entry) = entry {
        let path = entry.path();
        if path.extension().map(|e| e == ext).unwrap_or(false) {
            return true;
        }
    }
}
Pull request overview
This PR implements a comprehensive "debugger setup" command that automates the installation of DAP (Debug Adapter Protocol) debuggers across platforms. The implementation includes support for lldb-dap, CodeLLDB, debugpy, and Delve with multiple installation methods (GitHub releases, package managers, language-specific tools).
Changes:
- Adds setup command infrastructure with installer traits, verification via DAP protocol, and project type detection
- Implements four debugger installers (lldb-dap, CodeLLDB, debugpy, Delve) with platform-specific installation logic
- Integrates setup command into CLI with JSON output support and auto-configuration update
Reviewed changes
Copilot reviewed 14 out of 15 changed files in this pull request and generated 7 comments.
Summary per file:
| File | Description |
|---|---|
| src/setup/mod.rs | Main setup command logic, orchestration, and config file updates |
| src/setup/registry.rs | Debugger metadata registry and installer factory |
| src/setup/installer.rs | Core installation traits, utilities for downloading and extracting |
| src/setup/verifier.rs | DAP adapter verification by sending initialize requests |
| src/setup/detector.rs | Project type detection from directory contents |
| src/setup/adapters/*.rs | Individual debugger installers for lldb, codelldb, debugpy, and delve |
| src/commands.rs | CLI command definition for setup subcommand |
| src/cli/mod.rs | Command dispatch integration |
| Cargo.toml | Dependencies for HTTP, archive handling, and async operations |
src/setup/verifier.rs
Outdated
.args(args)
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::null())
The stderr output from the adapter process is being discarded by using Stdio::null(). This could make debugging installation issues difficult when adapters fail to start properly. Consider capturing stderr to provide better error messages to users.
.stderr(Stdio::inherit())
src/setup/mod.rs
Outdated
let mut config: toml::Table = if content.is_empty() {
    toml::Table::new()
} else {
    content.parse().unwrap_or_else(|_| toml::Table::new())
The config file is parsed with unwrap_or_else which silently ignores parsing errors. If the existing config.toml is malformed, this will discard it without warning the user. Consider notifying the user when the existing config cannot be parsed, or backing up the malformed file before overwriting it.
match content.parse() {
    Ok(table) => table,
    Err(err) => {
        // Warn the user and back up the malformed config before overwriting it
        eprintln!(
            "Warning: existing debug adapter config at '{}' is malformed ({}); \
             backing it up and creating a new config.",
            config_file.display(),
            err
        );
        let backup_path = config_file.with_extension("toml.bak");
        // Best-effort backup; if it fails, propagate the error
        std::fs::rename(&config_file, &backup_path)?;
        toml::Table::new()
    }
}
src/setup/mod.rs
Outdated
let adapters = config
    .get_mut("adapters")
    .and_then(|v| v.as_table_mut())
    .unwrap();
The unwrap() call on line 639 will panic if the "adapters" key doesn't contain a table, despite just inserting it as a table on line 636. While logically this shouldn't fail, using expect() with a clear message or proper error handling would be more defensive and aid debugging if the logic changes in the future.
.expect("Expected 'adapters' to be a TOML table in the debug adapter configuration");
src/setup/installer.rs
Outdated
pub async fn get_github_release(repo: &str, version: Option<&str>) -> Result<GitHubRelease> {
    let client = reqwest::Client::new();
    let url = if let Some(v) = version {
        format!("https://api.github.com/repos/{}/releases/tags/{}", repo, v)
    } else {
        format!("https://api.github.com/repos/{}/releases/latest", repo)
    };

    let response = client
        .get(&url)
        .header("User-Agent", "debugger-cli")
        .header("Accept", "application/vnd.github.v3+json")
        .send()
        .await
        .map_err(|e| Error::Internal(format!("GitHub API error: {}", e)))?;
The GitHub API is called without any rate limiting or retry logic. GitHub has rate limits (60 requests/hour for unauthenticated, 5000 for authenticated). Consider adding retry logic with exponential backoff, or caching release information to avoid hitting rate limits during repeated installations.
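The retry-with-exponential-backoff pattern suggested here can be sketched as follows. This is a minimal illustration, not the PR's actual code: `with_backoff` is a hypothetical helper, and the synchronous `std::thread::sleep` stands in for the `tokio::time::sleep` an async codebase like this one would use.

```rust
use std::time::Duration;

// Hypothetical helper: retry a fallible operation with exponential backoff.
// Delays here are short for illustration; real GitHub rate-limit handling
// would likely start around 1s and also honor the Retry-After header.
fn with_backoff<T, E>(
    mut op: impl FnMut() -> Result<T, E>,
    max_retries: u32,
) -> Result<T, E> {
    let mut delay = Duration::from_millis(50);
    for attempt in 0..=max_retries {
        match op() {
            Ok(v) => return Ok(v),
            Err(_) if attempt < max_retries => {
                std::thread::sleep(delay);
                delay *= 2; // 50ms, 100ms, 200ms, ...
            }
            Err(e) => return Err(e),
        }
    }
    unreachable!("every iteration either returns Ok or Err")
}

fn main() {
    let mut calls = 0;
    // Simulate a GitHub API call that is rate-limited twice, then succeeds.
    let result: Result<&str, &str> = with_backoff(
        || {
            calls += 1;
            if calls < 3 { Err("403 rate limited") } else { Ok("release json") }
        },
        5,
    );
    println!("{:?} after {} calls", result, calls); // Ok("release json") after 3 calls
}
```

Caching the release JSON on disk would complement this, since repeated `setup --check` runs otherwise re-fetch the same metadata.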
pub async fn download_file(url: &str, dest: &Path) -> Result<()> {
    let client = reqwest::Client::new();
    let response = client
        .get(url)
        .header("User-Agent", "debugger-cli")
        .send()
        .await
        .map_err(|e| Error::Internal(format!("Failed to download {}: {}", url, e)))?;

    if !response.status().is_success() {
        return Err(Error::Internal(format!(
            "Download failed with status {}: {}",
            response.status(),
            url
        )));
    }

    let total_size = response.content_length().unwrap_or(0);

    let pb = if total_size > 0 {
        let pb = ProgressBar::new(total_size);
        pb.set_style(
            ProgressStyle::default_bar()
                .template(" [{bar:40.cyan/blue}] {bytes}/{total_bytes} ({eta})")
                .unwrap()
                .progress_chars("=> "),
        );
        Some(pb)
    } else {
        println!(" Downloading...");
        None
    };

    let mut file = std::fs::File::create(dest)
        .map_err(|e| Error::Internal(format!("Failed to create file: {}", e)))?;

    let mut stream = response.bytes_stream();
    let mut downloaded: u64 = 0;

    while let Some(chunk) = stream.next().await {
        let chunk = chunk.map_err(|e| Error::Internal(format!("Download error: {}", e)))?;
        std::io::Write::write_all(&mut file, &chunk)?;
        downloaded += chunk.len() as u64;
        if let Some(ref pb) = pb {
            pb.set_position(downloaded);
        }
    }

    if let Some(pb) = pb {
        pb.finish_and_clear();
    }

    Ok(())
}
Downloaded files are not verified with checksums or signatures before extraction and execution. This is a security risk as compromised downloads or man-in-the-middle attacks could result in executing malicious code. Consider adding checksum verification, especially for binaries downloaded from GitHub releases.
pub async fn run_command(command: &str) -> Result<String> {
    let output = if cfg!(windows) {
        tokio::process::Command::new("cmd")
            .args(["/C", command])
            .output()
            .await
    } else {
        tokio::process::Command::new("sh")
            .args(["-c", command])
            .output()
            .await
    };

    let output = output.map_err(|e| Error::Internal(format!("Failed to run command: {}", e)))?;

    if !output.status.success() {
        let stderr = String::from_utf8_lossy(&output.stderr);
        return Err(Error::Internal(format!("Command failed: {}", stderr)));
    }

    Ok(String::from_utf8_lossy(&output.stdout).to_string())
}
The run_command function executes shell commands using sh -c on Unix and cmd /C on Windows without proper escaping or sanitization. While the commands appear to be constructed from controlled sources in this codebase, this pattern is prone to command injection if any user input ever flows into these commands. Consider using direct process spawning with arguments array instead of shell execution where possible.
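The fix commit refers to a `run_command_args` helper; a sketch of the arg-vector alternative could look like the following. The name and signature are taken from the commit message and illustration only (the real code is async via `tokio::process::Command`; a synchronous `std::process::Command` version keeps this self-contained).

```rust
use std::process::Command;

// Sketch of an arg-vector command runner: each argument is passed verbatim
// to the child process and never parsed by a shell, so metacharacters in
// user input (e.g. from --version) cannot inject extra commands.
fn run_command_args(program: &str, args: &[&str]) -> Result<String, String> {
    let output = Command::new(program)
        .args(args)
        .output()
        .map_err(|e| format!("Failed to run {}: {}", program, e))?;
    if !output.status.success() {
        return Err(format!(
            "{} failed: {}",
            program,
            String::from_utf8_lossy(&output.stderr)
        ));
    }
    Ok(String::from_utf8_lossy(&output.stdout).to_string())
}

fn main() {
    // A hostile "version" string is harmless as a plain argument: echo just
    // prints it, instead of a shell executing the part after the semicolon.
    let out = run_command_args("echo", &["1.0; rm -rf /"]).unwrap();
    println!("{}", out.trim()); // prints 1.0; rm -rf /
}
```

The shell-based `run_command` can then be reserved for commands that genuinely need shell features, with all user-influenced invocations routed through the arg-vector path.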
src/setup/adapters/debugpy.rs
Outdated
if let Ok(path) = which::which(cmd) {
    // Verify it's Python 3.7+
    let version_check = run_command(&format!(
        "{} -c \"import sys; assert sys.version_info >= (3, 7)\"",
When checking for Python version with run_command, the command includes an assertion that could raise a Python exception. If the version check fails, the error message from the assertion will be generic. Consider using explicit version comparison and providing a clearer error message about the minimum required Python version.
"{} -c \"import sys; sys.exit(0 if sys.version_info >= (3, 7) else 1)\"",
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: a34243ea53
src/setup/adapters/lldb.rs
Outdated
println!("Downloading {}... {:.1} MB", asset.name, asset.size as f64 / 1_000_000.0);
download_file(&asset.browser_download_url, &archive_path).await?;

println!("Extracting...");
extract_tar_gz(&archive_path, temp_dir.path())?;
Use correct extractor for LLVM .tar.xz assets
The GitHub asset patterns here are .tar.xz, but the download is always unpacked with extract_tar_gz (gzip). On platforms where the GitHub release path is chosen (e.g., Linux without apt/dnf/pacman), the install will fail because .tar.xz archives are not gzip-compressed. This makes the fallback installer unusable for those environments.
src/setup/verifier.rs
Outdated
reader.read_line(&mut header_line).await?;
let content_length: usize = header_line
    .trim()
    .strip_prefix("Content-Length: ")
    .and_then(|s| s.parse().ok())
Parse full DAP headers instead of assuming one line
The verifier only reads a single header line and requires it to be Content-Length, then immediately treats the next line as the blank separator. Adapters that emit an additional header (e.g., Content-Type, which is common in DAP/JSON-RPC) will cause this parser to mis-handle the header block, leading to invalid JSON parsing or a spurious "Invalid DAP header" error. This makes setup --check falsely report failures for valid adapters that include extra headers.
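A tolerant header loop along these lines would fix that. This sketch is synchronous over any `BufRead` to stay self-contained; the real verifier reads asynchronously, and the function name is illustrative.

```rust
use std::io::{BufRead, BufReader};

// Sketch of a tolerant DAP header parser: read lines until the blank
// separator, pick out Content-Length, and skip extra headers such as
// Content-Type instead of treating them as errors.
fn read_content_length(reader: &mut impl BufRead) -> std::io::Result<usize> {
    let mut content_length: Option<usize> = None;
    loop {
        let mut line = String::new();
        if reader.read_line(&mut line)? == 0 {
            break; // EOF before the blank separator
        }
        let line = line.trim_end();
        if line.is_empty() {
            break; // blank line terminates the header block
        }
        if let Some(v) = line.strip_prefix("Content-Length:") {
            content_length = v.trim().parse().ok();
        }
        // Any other header line is ignored rather than rejected.
    }
    content_length.ok_or_else(|| {
        std::io::Error::new(std::io::ErrorKind::InvalidData, "missing Content-Length header")
    })
}

fn main() {
    let msg: &[u8] =
        b"Content-Type: application/vscode-jsonrpc; charset=utf-8\r\nContent-Length: 52\r\n\r\n";
    let mut reader = BufReader::new(msg);
    println!("{}", read_content_length(&mut reader).unwrap()); // prints 52
}
```

After the loop returns, exactly `content_length` bytes of JSON body remain to be read, regardless of how many headers preceded them.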
Security fixes (critical):
- Fix command injection in debugpy.rs: Use run_command_args instead of shell
- Fix command injection in delve.rs: Use run_command_args instead of shell

Robustness improvements (high):
- Fix config parse error handling: Propagate error instead of silently creating new table which could overwrite user config
- Use expect() instead of unwrap() for adapters table access

Code quality fixes (medium):
- Fix silent error discard in codelldb.rs: Log warnings for permission errors
- Fix silent error discard in detector.rs: Explicitly handle Result
- Simplify dlv binary search logic in delve.rs

Functionality fixes:
- Add extract_tar_xz() for LLVM releases (they use .tar.xz, not .tar.gz)
- Fix DAP header parsing in verifier.rs: Handle multiple headers
- Capture stderr in verifier.rs for better error messages
- Add retry logic with exponential backoff for GitHub API
- Fix Python version check to use explicit exit code instead of assertion