Hello maintainers,
I would like to report a potential command-injection vulnerability in your GitHub Actions workflow.
The affected workflow file(s) invoke an LLM to process and summarize issues, and then interpolate the LLM output directly into a shell command argument for gh issue comment --body. Because ${{ }} expressions are expanded by the Actions runner before the shell parses the script, untrusted model output lands in a shell command context, and quoting inside the script cannot protect against it. An attacker may therefore craft a malicious issue so that the LLM response contains an injection payload.
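For illustration, a vulnerable step might look like the following. This is a sketch, not your exact workflow; the step name and the issue-number expression are assumptions, with only steps.inference.outputs.response taken from the observed file(s):

```yaml
# Hypothetical vulnerable step: the ${{ }} expression is substituted into
# the script text before the shell runs, so the payload becomes shell code.
- name: Post LLM summary
  run: |
    gh issue comment "${{ github.event.issue.number }}" \
      --body "${{ steps.inference.outputs.response }}"
```

A model response containing a closing double quote followed by, say, a $(...) substitution or a ; separator would then be executed as part of the command line.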
Impact:
- Possible command injection during workflow execution.
- Possible leakage of sensitive environment variables (for example GITHUB_TOKEN or GH_TOKEN).
- Although these tokens are typically short-lived and scoped to workflow job/step execution, an attacker may attempt to prolong execution time (for example via sleep-based techniques) and abuse the token during that window.
Recommended remediation:
- Do not place ${{ steps.inference.outputs.response }} directly in a shell command argument.
- Pass it to the step through an environment variable first (for example RESPONSE, set in the step's env: block).
- In the shell script, reference it only as a double-quoted variable ("$RESPONSE").
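The remediation steps above can be sketched as follows (the step name and issue-number expression are again illustrative):

```yaml
# Remediated step: the runner assigns the model output to RESPONSE
# verbatim; the shell only ever sees it as a quoted variable expansion,
# so it stays a single literal argument to gh.
- name: Post LLM summary
  env:
    RESPONSE: ${{ steps.inference.outputs.response }}
  run: |
    gh issue comment "${{ github.event.issue.number }}" \
      --body "$RESPONSE"
```

This is the intermediate-environment-variable pattern recommended in GitHub's own security-hardening guidance for untrusted input in workflows.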
Affected workflow file(s) observed:
Thank you for your time and for maintaining this project.