
🔒 Vibe Codebase Audit

Comprehensive security auditing tools for "vibe coded" projects: because you can't read the code, but you still need to ship safely.

Protects against the nightmare scenario: accidentally publishing private data, API keys, or security vulnerabilities when you can't manually review the code.


🎯 What Problem Does This Solve?

When you're shipping AI-generated code (vibe coding), you face a unique challenge: you can't manually review every line of code before publishing. This creates serious risks:

  • 🔑 API keys and secrets accidentally committed
  • 📁 Personal file paths revealing your system structure
  • 📓 Obsidian vault references leaking private notes
  • 🔓 Security vulnerabilities you didn't spot
  • 💾 Private data exposure from your development environment

This tool suite solves that problem by providing multi-layered automated security audits before you publish anything publicly.


⚡ Quick Start

Prerequisites

  • Python 3.7+
  • OpenRouter API key (for multi-model auditing)

Installation

# Clone the repository
git clone https://github.com/csmoove530/vibe-codebase-audit.git
cd vibe-codebase-audit

# Make scripts executable
chmod +x audit-tool.py multi-model-audit.py

# Install dependencies (for multi-model audit)
pip install requests

Basic Usage

1. Quick Automated Scan

python3 audit-tool.py /path/to/your/project

2. Multi-Model AI Audit (recommended before publishing)

export OPENROUTER_API_KEY="your_key_here"
python3 multi-model-audit.py /path/to/your/project

🛠️ Tools Included

1. audit-tool.py - Automated Pattern Scanner

Fast, local scanning for common security issues.

What it checks:

  • ✅ API keys and authentication tokens
  • ✅ Passwords and secrets
  • ✅ Personal information (emails, phone numbers)
  • ✅ File system paths that reveal user info
  • ✅ References to Obsidian vaults
  • ✅ Common code vulnerabilities
  • ✅ Security-related TODOs/FIXMEs

Output:

  • Risk score (0-100)
  • Detailed findings by severity
  • JSON report for further analysis

Example:

python3 audit-tool.py ~/my-vibe-coded-app

# Output:
# 🔒 VIBE CODE PROJECT AUDIT REPORT
# Risk Level: 🟠 HIGH
# Risk Score: 50/100
# Total Findings: 7
# ...
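
The JSON report lends itself to scripted follow-up. A minimal sketch of reading it (only `risk_score` is confirmed elsewhere in this README, in the CI example; the `findings` structure below is an illustrative assumption, not the tool's guaranteed schema):

```python
import json

# Hypothetical excerpt of an audit-report.json. Only "risk_score" appears
# elsewhere in this README; the other field names are assumptions.
sample_report = '''
{
  "risk_score": 50,
  "findings": [
    {"severity": "high", "message": "Possible API key in config.py"},
    {"severity": "medium", "message": "Hardcoded email address"}
  ]
}
'''

report = json.loads(sample_report)
print("Risk score:", report["risk_score"])
for finding in report["findings"]:
    print(f'  [{finding["severity"].upper()}] {finding["message"]}')
```

In practice you would `json.load()` the `audit-report.json` written into the audited project directory instead of an inline string.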

2. multi-model-audit.py - AI Consensus Audit

Uses multiple AI models (Claude, GPT-4, Gemini) to independently review your code for security issues.

What it does:

  • 🤖 Sends your code to three different AI models
  • 🧠 Each model independently assesses security
  • 📊 Generates a consensus report
  • 📝 Explains findings in plain English
  • ✅ Confirms whether private data is present

Why multiple models?

  • Catches issues one model might miss
  • Provides confidence through consensus
  • Different models have different security expertise
  • Reduces false negatives
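
In spirit, the consensus step boils down to a majority vote over per-model verdicts. A minimal sketch of that idea (an illustration only, not multi-model-audit.py's actual logic; the verdict strings mirror the risk levels used elsewhere in this README):

```python
from collections import Counter

def consensus(verdicts):
    """Majority vote across per-model risk verdicts.

    `verdicts` maps a model name to that model's risk level string
    ("LOW", "MEDIUM", "HIGH", or "CRITICAL"). Ties are resolved
    pessimistically by reporting the most severe tied level.
    """
    order = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]
    counts = Counter(verdicts.values())
    top = max(counts.values())
    tied = [level for level, n in counts.items() if n == top]
    # On a tie, err toward caution: report the highest tied severity.
    return max(tied, key=order.index)

# Hypothetical per-model verdicts:
print(consensus({"claude": "LOW", "gpt4": "LOW", "gemini": "MEDIUM"}))  # LOW
```

Resolving ties toward the higher severity is one reasonable design choice; it trades a few false alarms for fewer missed issues, which matches the tool's "reduce false negatives" goal.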

Example:

export OPENROUTER_API_KEY="sk-or-v1-..."
python3 multi-model-audit.py ~/my-vibe-coded-app

# Output:
# 🎯 MULTI-MODEL CONSENSUS AUDIT REPORT
# Models Audited: CLAUDE, GPT4, GEMINI
# Risk Level: 🟢 LOW
# Private Data Found: ✅ NO
# Safe to Publish: YES
# ...

📋 Audit Workflow (Recommended)

Before publishing any vibe-coded project:

Step 1: Automated Scan

python3 audit-tool.py /path/to/project

Review the findings. If Risk Score > 20, investigate all flagged issues.

Step 2: Fix Critical Issues

  • Remove any API keys or secrets found
  • Redact personal file paths
  • Address security vulnerabilities

Step 3: Multi-Model Audit

export OPENROUTER_API_KEY="your_key"
python3 multi-model-audit.py /path/to/project

Get AI consensus on whether it's safe to publish.

Step 4: Review Consensus Report

  • Check that all models agree (or understand disagreements)
  • Verify "Private Data Found: NO"
  • Confirm "Safe to Publish: YES"

Step 5: Publish with Confidence

Once all audits pass and issues are resolved, publish your code knowing it's been thoroughly vetted.


🔍 What Gets Checked

Secrets & Credentials

  • API keys (OpenAI, Anthropic, AWS, GitHub, etc.)
  • Authentication tokens
  • Database credentials
  • SSH keys and certificates
  • Bearer tokens
  • Passwords in code
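
Checks like these typically come down to regular expressions. A sketch of the approach (the patterns below are illustrative assumptions; audit-tool.py's actual regexes may differ):

```python
import re

# Illustrative patterns only -- not audit-tool.py's actual rule set.
SECRET_PATTERNS = {
    "AWS access key": r"AKIA[0-9A-Z]{16}",
    "GitHub token": r"ghp_[A-Za-z0-9]{36}",
    "OpenAI-style key": r"sk-[A-Za-z0-9_-]{20,}",
    "Generic password": r"(?i)password\s*=\s*['\"][^'\"]+['\"]",
}

def scan_line(line):
    """Return the names of all secret patterns matching this line."""
    return [name for name, pat in SECRET_PATTERNS.items()
            if re.search(pat, line)]

print(scan_line('aws_key = "AKIAIOSFODNN7EXAMPLE"'))  # ['AWS access key']
print(scan_line('password = "hunter2"'))              # ['Generic password']
```

Real scanners layer entropy checks and allowlists on top of regexes to cut false positives; this sketch shows only the pattern-matching core.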

Personal Data

  • Email addresses
  • Phone numbers
  • File paths revealing user directories
  • Obsidian vault references
  • Personal documents paths
  • User-specific system paths
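
User-revealing paths follow predictable shapes, so they can be flagged the same way. A sketch (patterns are illustrative assumptions, not the tool's actual rules):

```python
import re

# Illustrative checks for user-revealing paths and Obsidian references.
PERSONAL_PATTERNS = {
    "macOS home path": r"/Users/[A-Za-z0-9._-]+/",
    "Linux home path": r"/home/[A-Za-z0-9._-]+/",
    "Windows profile path": r"C:\\Users\\[A-Za-z0-9._-]+\\",
    "Obsidian vault reference": r"(?i)obsidian",
    "Email address": r"[\w.+-]+@[\w-]+\.[\w.]+",
}

def personal_data_hits(text):
    """Return the sorted names of all personal-data patterns matching."""
    return sorted(name for name, pat in PERSONAL_PATTERNS.items()
                  if re.search(pat, text))

print(personal_data_hits('VAULT = "/Users/alice/ObsidianVault/notes"'))
# ['Obsidian vault reference', 'macOS home path']
```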

Security Vulnerabilities

  • Command injection risks
  • SQL injection patterns
  • Cross-site scripting (XSS)
  • Path traversal vulnerabilities
  • Unsafe deserialization
  • Weak cryptography usage
  • Debug mode enabled
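
Many of these vulnerability classes also have cheap textual tells. A rough sketch of such heuristics (illustrative only; audit-tool.py's actual detection logic may differ, and real findings need human review):

```python
import re

# Rough textual hints for a few of the vulnerability classes above.
VULN_HINTS = {
    "Command injection risk": r"os\.system\(|subprocess\..*shell\s*=\s*True",
    "Unsafe deserialization": r"pickle\.loads?\(",
    "Weak cryptography": r"hashlib\.(md5|sha1)\(",
    "Eval of dynamic input": r"\beval\(",
    "Debug mode enabled": r"(?i)debug\s*=\s*True",
}

def vuln_hits(line):
    """Return the names of all vulnerability hints matching this line."""
    return [name for name, pat in VULN_HINTS.items() if re.search(pat, line)]

print(vuln_hits("os.system('rm -rf ' + user_input)"))  # ['Command injection risk']
print(vuln_hits("app.run(debug=True)"))                # ['Debug mode enabled']
```

Pattern hits like these are hints, not proof: `debug=True` in a test fixture is harmless, while the same line in production config is not.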

Code Quality Issues

  • Security-related TODOs
  • FIXMEs mentioning auth/validation
  • Incomplete security implementations

📊 Understanding Risk Scores

Risk Score Scale (0-100):

  • 0-19: ✅ SAFE - No significant issues
  • 20-49: 🟡 MEDIUM - Minor issues, review recommended
  • 50-79: 🟠 HIGH - Significant issues, fixes required
  • 80-100: 🔴 CRITICAL - Severe issues, DO NOT PUBLISH

Risk Calculation:

  • Critical findings: 20 points each
  • High findings: 10 points each
  • Medium findings: 5 points each
  • Low findings: 2 points each
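
Putting the weights and bands together, the scoring can be reproduced as a small function (a sketch based on the documented weights and thresholds; the cap at 100 is an assumption, and the tool's internals may differ):

```python
WEIGHTS = {"critical": 20, "high": 10, "medium": 5, "low": 2}

def risk_score(counts):
    """Weighted sum of finding counts, capped at 100 (cap is an assumption)."""
    raw = sum(WEIGHTS[sev] * n for sev, n in counts.items())
    return min(raw, 100)

def risk_level(score):
    """Map a 0-100 score onto the documented risk bands."""
    if score < 20:
        return "SAFE"
    if score < 50:
        return "MEDIUM"
    if score < 80:
        return "HIGH"
    return "CRITICAL"

# One critical plus three high findings gives 20 + 3*10 = 50, which falls
# in the HIGH band -- consistent with the 50/100 HIGH sample output earlier.
score = risk_score({"critical": 1, "high": 3})
print(score, risk_level(score))  # 50 HIGH
```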

🎓 Real-World Example: Tea App Incident

The Tea dating app incident (2025) involved:

  • Database credentials leaked in public repository
  • User data (photos, messages) exposed
  • API endpoints accessible without authentication

How this tool would have caught it:

  1. audit-tool.py would flag database credentials immediately
  2. multi-model-audit.py would identify the API authentication issues
  3. Both would give CRITICAL risk scores
  4. Developer would be alerted before publishing

Use this tool to prevent your own Tea moment.


💡 Use Cases

For Vibe Coders

  • Audit AI-generated code before committing
  • Verify no personal data leaked from prompts
  • Check for hardcoded secrets you didn't notice
  • Ensure Obsidian notes didn't bleed into code

For Open Source Projects

  • Pre-commit security checks
  • CI/CD integration for automated audits
  • Protect contributors from accidental leaks
  • Maintain security standards

For Solo Developers

  • Peace of mind before publishing
  • Learning tool to understand security risks
  • Backup check for manual reviews
  • Archive audit reports for compliance

🚀 Advanced Usage

CI/CD Integration

GitHub Actions Example:

name: Security Audit
on: [push, pull_request]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Security Audit
        run: |
          python3 audit-tool.py .
          if [ $(jq '.risk_score' audit-report.json) -gt 50 ]; then
            echo "Risk score too high!"
            exit 1
          fi

Custom Patterns

Edit audit-tool.py to add project-specific patterns:

# Add to _check_secrets method
patterns = {
    "Custom API Key": r'YOUR_PATTERN_HERE',
    # ...
}

Bulk Project Auditing

# Audit multiple projects
for dir in ~/projects/*; do
    echo "Auditing $dir..."
    python3 audit-tool.py "$dir"
done

πŸ“ Report Outputs

audit-tool.py Output

  • Console: Human-readable report with colored output
  • audit-report.json: Detailed JSON for programmatic use

multi-model-audit.py Output

  • Console: Consensus report with all model results
  • multi-model-audit-report.json: Full multi-model analysis

Both reports are saved in the project directory being audited.


⚙️ Configuration

Environment Variables

# Required for multi-model audit
export OPENROUTER_API_KEY="sk-or-v1-..."

# Optional: customize output
export AUDIT_VERBOSE=true

Ignored Files/Directories

Default ignored:

  • .git/
  • node_modules/
  • __pycache__/
  • .DS_Store
  • venv/, .venv/
  • dist/, build/

Edit _get_files_to_scan() in audit-tool.py to customize.
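
That kind of filter is typically a small path predicate. A sketch of the idea (illustrative only; `_get_files_to_scan()` in audit-tool.py is the authoritative version):

```python
from pathlib import Path

# Default ignore list from the section above.
IGNORED_DIRS = {".git", "node_modules", "__pycache__", "venv", ".venv",
                "dist", "build"}
IGNORED_FILES = {".DS_Store"}

def should_scan(path):
    """True if no component of `path` is an ignored directory or file."""
    p = Path(path)
    return p.name not in IGNORED_FILES and not (IGNORED_DIRS & set(p.parts))

print(should_scan("src/main.py"))            # True
print(should_scan("node_modules/lib/a.js"))  # False
```

A real implementation would walk the tree (e.g. `Path(root).rglob("*")`) and apply this predicate to each file.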


🤝 Contributing

Found a security pattern we should check? Have ideas for improvements?

  1. Fork the repository
  2. Create a feature branch
  3. Add your improvements
  4. Test thoroughly
  5. Submit a pull request

Particularly valuable contributions:

  • New security patterns to detect
  • False positive reduction
  • Additional AI model integrations
  • Better reporting formats

🔐 Privacy & Security

Your Code Never Leaves Your Machine (audit-tool.py)

The automated scanner runs entirely locally. Nothing is transmitted.

Multi-Model Audit Privacy (multi-model-audit.py)

  • Code sent to AI providers via OpenRouter
  • Use ONLY on code you're comfortable sharing
  • OpenRouter privacy policy applies
  • Don't audit codebases with secrets already present

Best practice: Run automated scan first, fix issues, THEN run multi-model audit.


📜 License

MIT License - see LICENSE file for details.

Use freely for personal and commercial projects.


🙏 Acknowledgments

Created to solve a real problem in the vibe coding workflow. Inspired by the need to ship fast without shipping vulnerabilities.

Special thanks to the AI models that make vibe coding possible (and these audit tools necessary).


📞 Support

  • Issues: GitHub Issues
  • Feature Requests: Open an issue with the enhancement label
  • Security Concerns: Email maintainer directly

🎯 Roadmap

Future improvements:

  • GUI interface for non-technical users
  • VS Code extension for real-time scanning
  • Pre-commit hooks for automatic auditing
  • Severity customization per project
  • Integration with secret scanning services
  • Support for additional AI models
  • Custom reporting templates
  • Team/enterprise features

Ship with confidence. Audit with rigor. Vibe in peace. 🚀

Built with ❤️ for the vibe coding community.
