Comprehensive security auditing tools for "vibe coded" projects - because you can't read the code, but you still need to ship safely.
Protects against the nightmare scenario: accidentally publishing private data, API keys, or security vulnerabilities when you can't manually review the code.
When you're shipping AI-generated code (vibe coding), you face a unique challenge: you can't manually review every line of code before publishing. This creates serious risks:
- API keys and secrets accidentally committed
- Personal file paths revealing your system structure
- Obsidian vault references leaking private notes
- Security vulnerabilities you didn't spot
- Private data exposure from your development environment
This tool suite solves that problem by providing multi-layered automated security audits before you publish anything publicly.
- Python 3.7+
- OpenRouter API key (for multi-model auditing)
```bash
# Clone the repository
git clone https://github.com/csmoove530/vibe-codebase-audit.git
cd vibe-codebase-audit

# Make scripts executable
chmod +x audit-tool.py multi-model-audit.py

# Install dependencies (for multi-model audit)
pip install requests
```

1. Quick Automated Scan

```bash
python3 audit-tool.py /path/to/your/project
```

2. Multi-Model AI Audit (recommended before publishing)

```bash
export OPENROUTER_API_KEY="your_key_here"
python3 multi-model-audit.py /path/to/your/project
```

Fast, local scanning for common security issues.
What it checks:
- API keys and authentication tokens
- Passwords and secrets
- Personal information (emails, phone numbers)
- File system paths that reveal user info
- References to Obsidian vaults
- Common code vulnerabilities
- Security-related TODOs/FIXMEs
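A minimal sketch of the kind of regex-based checks behind these detections. The pattern set and the `scan_text` helper are illustrative only, not audit-tool.py's actual internals:

```python
import re

# Illustrative patterns; audit-tool.py ships its own, more extensive set.
SECRET_PATTERNS = {
    "OpenAI API key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Home directory path": re.compile(r"/(?:home|Users)/[A-Za-z0-9_.-]+"),
}

def scan_text(text):
    """Return (pattern_name, matched_text) pairs found in a blob of text."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group()))
    return findings
```

Real scanners layer entropy checks and allowlists on top of raw regexes to cut false positives.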
Output:
- Risk score (0-100)
- Detailed findings by severity
- JSON report for further analysis
Example:

```bash
python3 audit-tool.py ~/my-vibe-coded-app

# Output:
# VIBE CODE PROJECT AUDIT REPORT
# Risk Level: HIGH
# Risk Score: 50/100
# Total Findings: 7
# ...
```

Uses multiple AI models (Claude, GPT-4, Gemini) to independently review your code for security issues.
What it does:
- Sends your code to 3 different AI models
- Each model independently assesses security
- Generates a consensus report
- Explains findings in plain English
- Confirms whether private data is present or absent
Why multiple models?
- Catches issues one model might miss
- Provides confidence through consensus
- Different models have different security expertise
- Reduces false negatives
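The consensus step can be sketched as a conservative merge of per-model verdicts. This `consensus` helper is hypothetical; the actual aggregation logic in multi-model-audit.py may differ:

```python
def consensus(verdicts):
    """Combine per-model audit verdicts into one decision.

    verdicts: dict mapping model name -> dict with keys
    "private_data_found" (bool) and "risk" (LOW/MEDIUM/HIGH/CRITICAL).
    Conservative rule: any model flagging private data blocks publishing,
    and the reported risk is the worst risk any model assigned.
    """
    order = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]
    worst = max((v["risk"] for v in verdicts.values()), key=order.index)
    private = any(v["private_data_found"] for v in verdicts.values())
    agree = len({v["risk"] for v in verdicts.values()}) == 1
    return {
        "risk": worst,
        "private_data_found": private,
        "safe_to_publish": (not private) and worst == "LOW",
        "models_agree": agree,
    }
```

Taking the worst-case verdict is what reduces false negatives: a single dissenting model is enough to block publishing.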
Example:
```bash
export OPENROUTER_API_KEY="sk-or-v1-..."
python3 multi-model-audit.py ~/my-vibe-coded-app

# Output:
# MULTI-MODEL CONSENSUS AUDIT REPORT
# Models Audited: CLAUDE, GPT4, GEMINI
# Risk Level: LOW
# Private Data Found: NO
# Safe to Publish: YES
# ...
```

Before publishing any vibe-coded project:
```bash
python3 audit-tool.py /path/to/project
```

Review the findings. If the risk score is above 20, investigate all flagged issues.
- Remove any API keys or secrets found
- Redact personal file paths
- Address security vulnerabilities
```bash
export OPENROUTER_API_KEY="your_key"
python3 multi-model-audit.py /path/to/project
```

Get AI consensus on whether it's safe to publish.
- Check that all models agree (or understand disagreements)
- Verify "Private Data Found: NO"
- Confirm "Safe to Publish: YES"
Once all audits pass and issues are resolved, publish your code knowing it's been thoroughly vetted.
- API keys (OpenAI, Anthropic, AWS, GitHub, etc.)
- Authentication tokens
- Database credentials
- SSH keys and certificates
- Bearer tokens
- Passwords in code
- Email addresses
- Phone numbers
- File paths revealing user directories
- Obsidian vault references
- Personal documents paths
- User-specific system paths
- Command injection risks
- SQL injection patterns
- Cross-site scripting (XSS)
- Path traversal vulnerabilities
- Unsafe deserialization
- Weak cryptography usage
- Debug mode enabled
- Security-related TODOs
- FIXMEs mentioning auth/validation
- Incomplete security implementations
Risk Score Scale (0-100):
- 0-19: SAFE - No significant issues
- 20-49: MEDIUM - Minor issues, review recommended
- 50-79: HIGH - Significant issues, fixes required
- 80-100: CRITICAL - Severe issues, DO NOT PUBLISH
Risk Calculation:
- Critical findings: 20 points each
- High findings: 10 points each
- Medium findings: 5 points each
- Low findings: 2 points each
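The weighting above can be expressed directly. A sketch, assuming scores are capped at 100 to fit the 0-100 scale (the exact capping behavior in audit-tool.py is an assumption):

```python
# Severity weights as documented in the risk calculation.
WEIGHTS = {"critical": 20, "high": 10, "medium": 5, "low": 2}

def risk_score(findings):
    """Score a list of finding severities, capped at 100.

    findings: iterable of severity strings, e.g. ["critical", "low"].
    """
    total = sum(WEIGHTS[severity] for severity in findings)
    return min(total, 100)
```

For example, two high and one critical finding already land in the HIGH band (40 + 20 would be 60... rather: 10 + 10 + 20 = 40, MEDIUM), so a handful of findings escalates quickly.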
The Tea dating app incident (2025) involved:
- Database credentials leaked in public repository
- User data (photos, messages) exposed
- API endpoints accessible without authentication
How this tool would have caught it:
- audit-tool.py would flag database credentials immediately
- multi-model-audit.py would identify the API authentication issues
- Both would give CRITICAL risk scores
- Developer would be alerted before publishing
Use this tool to prevent your own Tea moment.
- Audit AI-generated code before committing
- Verify no personal data leaked from prompts
- Check for hardcoded secrets you didn't notice
- Ensure Obsidian notes didn't bleed into code
- Pre-commit security checks
- CI/CD integration for automated audits
- Protect contributors from accidental leaks
- Maintain security standards
- Peace of mind before publishing
- Learning tool to understand security risks
- Backup check for manual reviews
- Archive audit reports for compliance
GitHub Actions Example:

```yaml
name: Security Audit
on: [push, pull_request]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run Security Audit
        run: |
          python3 audit-tool.py .
          if [ $(jq '.risk_score' audit-report.json) -gt 50 ]; then
            echo "Risk score too high!"
            exit 1
          fi
```

Edit audit-tool.py to add project-specific patterns:
```python
# Add to _check_secrets method
patterns = {
    "Custom API Key": r'YOUR_PATTERN_HERE',
    # ...
}
```

```bash
# Audit multiple projects
for dir in ~/projects/*; do
  echo "Auditing $dir..."
  python3 audit-tool.py "$dir"
done
```

- Console: Human-readable report with colored output
- audit-report.json: Detailed JSON for programmatic use
- Console: Consensus report with all model results
- multi-model-audit-report.json: Full multi-model analysis
Both reports saved in the project directory being audited.
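The JSON report can also be consumed programmatically, e.g. to gate a pipeline. Only the top-level `risk_score` field is confirmed by the jq check in the GitHub Actions example; the rest of the schema is not documented here, and this `gate` helper is a sketch:

```python
import json
from pathlib import Path

def gate(report_path, threshold=50):
    """Fail a pipeline when the audit risk score exceeds threshold.

    Assumes the report is JSON with a top-level "risk_score" field.
    """
    report = json.loads(Path(report_path).read_text())
    score = report["risk_score"]
    if score > threshold:
        raise SystemExit(f"Risk score {score} exceeds threshold {threshold}")
    return score
```

This mirrors the jq-based check in the CI snippet, but in Python so it can run anywhere the audit does.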
```bash
# Required for multi-model audit
export OPENROUTER_API_KEY="sk-or-v1-..."

# Optional: customize output
export AUDIT_VERBOSE=true
```

Default ignored: `.git/`, `node_modules/`, `__pycache__/`, `.DS_Store`, `venv/`, `.venv/`, `dist/`, `build/`
Edit _get_files_to_scan() in audit-tool.py to customize.
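The implementation of `_get_files_to_scan()` is not reproduced here; a standalone equivalent of the default ignore behavior might look like this (names are illustrative):

```python
import os

IGNORED_DIRS = {".git", "node_modules", "__pycache__", "venv", ".venv", "dist", "build"}
IGNORED_FILES = {".DS_Store"}

def files_to_scan(root):
    """Yield file paths under root, skipping ignored directories and files."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Pruning dirnames in place stops os.walk from descending into them.
        dirnames[:] = [d for d in dirnames if d not in IGNORED_DIRS]
        for name in filenames:
            if name not in IGNORED_FILES:
                yield os.path.join(dirpath, name)
```

To add project-specific ignores, you would extend the sets the same way you extend the secret patterns.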
Found a security pattern we should check? Have ideas for improvements?
- Fork the repository
- Create a feature branch
- Add your improvements
- Test thoroughly
- Submit a pull request
Particularly valuable contributions:
- New security patterns to detect
- False positive reduction
- Additional AI model integrations
- Better reporting formats
The automated scanner runs entirely locally. Nothing is transmitted.
- Code sent to AI providers via OpenRouter
- Use ONLY on code you're comfortable sharing
- OpenRouter privacy policy applies
- Don't audit codebases with secrets already present
Best practice: Run automated scan first, fix issues, THEN run multi-model audit.
MIT License - see LICENSE file for details.
Use freely for personal and commercial projects.
Created to solve a real problem in the vibe coding workflow. Inspired by the need to ship fast without shipping vulnerabilities.
Special thanks to the AI models that make vibe coding possible (and these audit tools necessary).
- Issues: GitHub Issues
- Feature Requests: Open an issue with the `enhancement` label
- Security Concerns: Email the maintainer directly
Future improvements:
- GUI interface for non-technical users
- VS Code extension for real-time scanning
- Pre-commit hooks for automatic auditing
- Severity customization per project
- Integration with secret scanning services
- Support for additional AI models
- Custom reporting templates
- Team/enterprise features
Ship with confidence. Audit with rigor. Vibe in peace.

Built with ❤️ for the vibe coding community.