
AI Security - a platform combining LLM threat analysis and ML model vulnerability scanning

A comprehensive AI security testing platform for evaluating Large Language Models (LLMs) against adversarial prompts and security vulnerabilities. The application provides both static and dynamic analysis to assess model robustness across multiple security categories, plus integrated scanning of model files for security issues.

🚀 Features

🔍 Static Analysis

  • Security Scoring System: 0-100 scoring based on compliance (30 pts), security features (40 pts), and safety measures (30 pts); see the sketch after this list
  • Model Cards: Detailed JSON score cards for each model with comprehensive security metrics
  • Visual Indicators: Color-coded scores with intuitive progress bars
  • Export Capabilities: View and download model score cards for external analysis
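A minimal sketch of how such a weighted 0-100 score could be computed (function and field names are illustrative, not the app's actual implementation):

// Illustrative only: combines three 0-1 sub-scores into the 0-100 total
// described above (compliance 30 pts, security features 40 pts, safety 30 pts).
function computeSecurityScore(card) {
  const compliance = Math.min(card.compliance, 1) * 30;
  const security = Math.min(card.security, 1) * 40;
  const safety = Math.min(card.safety, 1) * 30;
  return Math.round(compliance + security + safety);
}

// Example: full compliance and safety but only half the security checks
// passing yields 30 + 20 + 30 = 80.
computeSecurityScore({ compliance: 1, security: 0.5, safety: 1 }); // 80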

⚡ Dynamic Analysis

  • Real-time Testing: Live adversarial prompt testing against multiple LLM providers
  • Caching System: 24-hour cache with force-refresh capability for performance optimization (sketched after this list)
  • Multiple Models: Support for Claude, GPT-4, Llama, and other leading models
  • Error Handling: Graceful fallback when API calls fail or time out
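A minimal sketch of the 24-hour TTL with force refresh, assuming a simple in-memory cache object (the app persists to local JSON files, but the logic is the same):

const CACHE_TTL_MS = 24 * 60 * 60 * 1000; // 24 hours

// Returns cached results while they are fresh; otherwise (or when the
// caller forces a refresh) re-runs the tests and updates the cache.
async function getResults(cache, runTests, { force = false } = {}) {
  const fresh = cache.entry && Date.now() - cache.entry.savedAt < CACHE_TTL_MS;
  if (fresh && !force) return cache.entry.results;
  const results = await runTests();
  cache.entry = { results, savedAt: Date.now() };
  return results;
}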

🔍 Model Scanner

  • File Upload Scanning: Upload model files (.pt, .pkl, .h5, .pb, .onnx, .tflite, .safetensors, .bin) for security analysis (see the sketch after this list)
  • Repository Scanning: Scan GitHub and HuggingFace repositories for model files
  • Security Issue Detection: Identify vulnerabilities, malicious code, and security risks in models
  • Comprehensive Reporting: Detailed breakdown of security issues by severity and category
  • Integration: Seamlessly integrated with the main AI Security platform
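A small sketch of the extension allow-list implied by the supported formats above (illustrative; the scanner's real validation may differ):

// Accept only the model file formats listed above.
const ALLOWED_EXTENSIONS = new Set([
  '.pt', '.pkl', '.h5', '.pb', '.onnx', '.tflite', '.safetensors', '.bin',
]);

function isScannableModelFile(filename) {
  const dot = filename.lastIndexOf('.');
  return dot !== -1 && ALLOWED_EXTENSIONS.has(filename.slice(dot).toLowerCase());
}

isScannableModelFile('resnet50.onnx'); // true
isScannableModelFile('weights.ckpt');  // false -- not a supported format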

🔑 LLM API Management

  • API Key Storage: Browser localStorage-based storage for LLM API keys and endpoints (see the sketch after this list)
  • Persistent Settings: API credentials persist across browser sessions
  • User-Friendly Interface: Simple form for managing API credentials
  • Ready for Backend Integration: Designed to easily upgrade to secure backend storage
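A minimal sketch of the localStorage persistence (the storage key and field names are assumptions, not the app's actual code):

// Hypothetical storage key; this code runs client-side only.
const STORAGE_KEY = 'llm-api-settings';

function saveApiSettings(settings) {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(settings));
}

function loadApiSettings() {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? JSON.parse(raw) : { apiKey: '', endpoint: '' };
}

// Example: settings survive page reloads until the user clears site data.
saveApiSettings({ apiKey: 'sk-...', endpoint: 'https://openrouter.ai/api/v1' });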

🛡️ Security Categories

  • Security & Access Control: Data leak prevention, prompt injection resistance, authentication bypass protection
  • Compliance & Legal: AI Act compliance, NIST framework adherence, regulatory content filtering
  • Trust & Safety: Misinformation prevention, scam detection, deepfake resistance
  • Brand Protection: Impersonation resistance, political bias mitigation, brand safety

📚 Prompt Bank

  • Comprehensive Library: Curated collection of adversarial prompts across all security categories
  • Search & Filter: Advanced search functionality with category-based filtering (sketched after this list)
  • Copy-to-Clipboard: Easy prompt copying for external testing
  • Category Icons: Visual category identification with custom SVG icons
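A minimal sketch of combined search and category filtering, assuming prompt entries shaped like { text, category } (the app's actual data model may differ):

// Keeps prompts that match both the selected category and the search text.
function filterPrompts(prompts, { query = '', category = 'all' } = {}) {
  const q = query.toLowerCase();
  return prompts.filter(
    (p) =>
      (category === 'all' || p.category === category) &&
      p.text.toLowerCase().includes(q)
  );
}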

🔍 Audit System

  • Model Performance Tracking: Historical analysis of model security improvements
  • Detailed Metrics: Granular breakdown of security test results
  • Visual Analytics: Charts and graphs for security trend analysis

🎨 User Interface

  • Modern Design: Dark theme with consistent blue (#2563eb) color scheme
  • Responsive Layout: Mobile-friendly design with adaptive components
  • Model Icons: Visual model identification with official brand icons
  • Loading States: Smooth loading animations and progress indicators
  • Error Handling: User-friendly error messages and fallback states
  • Intuitive Navigation: Clear separation between LLM Security and Model Security features

🛠️ Technical Stack

  • Frontend: Next.js 14 with React
  • Styling: Custom CSS with responsive design
  • API Integration: OpenRouter for multi-model testing
  • Caching: Local JSON-based caching system
  • Icons: Custom SVG icons and official model brand assets
  • Model Scanning: Python-based model scanner with FastAPI integration
  • File Handling: Formidable for file upload processing (see the sketch after this list)
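For the file-handling piece, a common Formidable pattern in a Next.js API route looks like this (route path and field names are illustrative, not the app's exact code):

// pages/api/scan.js (hypothetical route)
import formidable from 'formidable';

// Next.js must not pre-parse the body, or formidable sees an empty stream.
export const config = { api: { bodyParser: false } };

export default function handler(req, res) {
  const form = formidable({ keepExtensions: true });
  form.parse(req, (err, fields, files) => {
    if (err) return res.status(400).json({ error: 'Upload failed' });
    // The uploaded file(s) would be handed to the scanner from here.
    res.status(200).json({ received: Object.keys(files) });
  });
}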

🚀 Getting Started

Prerequisites

  • Node.js 18+
  • npm or yarn package manager
  • Python 3.8+ (for model scanner functionality)

Installation

# 1. Clone the repository
git clone https://github.com/varundataquest/AI-Security.git
cd AI-Security

# 2. Install dependencies
npm install

# 3. Set up environment variables (optional)
# Create a .env.local file and add:
OPENROUTER_API_KEY=your-openrouter-api-key-here
# Get your API key from https://openrouter.ai/keys

# 4. Run the development server
npm run dev

Open http://localhost:3000 in your browser to view the application.

Model Scanner Setup (Optional)

To enable model scanning functionality:

  1. Navigate to the modelscanner directory:

    cd modelscanner-main/server
  2. Install Python dependencies:

    pip install -r ../requirements.txt
  3. Start the model scanner server:

    uvicorn main:app --host 0.0.0.0 --port 8000
  4. The AI Security app will automatically connect to the scanner server running on port 8000.
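A minimal sketch of how such a proxy connection could look as a Next.js catch-all API route (illustrative; this simple version forwards JSON bodies only, and the app's real integration may differ):

// pages/api/scanner/[...path].js (hypothetical route)
export default async function handler(req, res) {
  const path = req.query.path.join('/');
  const upstream = await fetch(`http://localhost:8000/${path}`, {
    method: req.method,
    headers: { 'content-type': 'application/json' },
    body: ['GET', 'HEAD'].includes(req.method)
      ? undefined
      : JSON.stringify(req.body),
  });
  // Assumes the FastAPI scanner responds with JSON.
  res.status(upstream.status).json(await upstream.json());
}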

🔑 API Key Setup

For full functionality with real model testing:

  1. Get an API key: Sign up at OpenRouter
  2. Create environment file: Create .env.local in the project root
  3. Add your key: Add OPENROUTER_API_KEY=your-key-here to the file
  4. Restart the server: The app will automatically use real model testing

Without an API key: The app will show cached/mock data for demonstration purposes.
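A minimal sketch of that fallback (illustrative names; the app's actual code may differ):

// Server-side only: process.env is not available in the browser.
async function getTestResults(runLiveTests, loadMockResults) {
  if (!process.env.OPENROUTER_API_KEY) {
    return loadMockResults(); // demo mode: no key configured
  }
  return runLiveTests(process.env.OPENROUTER_API_KEY);
}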

LLM API Management

  • Access: Go to the main LLM Security page
  • Configure: Enter your API key and endpoint in the "LLM API Settings" section
  • Save: Click "Save" to store your credentials (persisted in browser localStorage)
  • Security: Credentials are stored locally and not shared with other users

📊 Usage

Static Analysis

  • Navigate to the Static Analysis tab
  • View security scores for all models
  • Click "View Card" to see detailed JSON score cards
  • Use "Download" to export model security data (see the sketch after this list)
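The "Download" action can be implemented with the standard browser Blob pattern; a minimal sketch (illustrative, not the app's exact code):

// Serializes a score card to JSON and triggers a client-side download.
function downloadScoreCard(card, modelName) {
  const blob = new Blob([JSON.stringify(card, null, 2)], {
    type: 'application/json',
  });
  const url = URL.createObjectURL(blob);
  const link = document.createElement('a');
  link.href = url;
  link.download = `${modelName}-score-card.json`;
  link.click();
  URL.revokeObjectURL(url);
}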

Dynamic Analysis

  • Switch to the Dynamic Analysis tab
  • View real-time test results from live model testing
  • Use "Rerun Tests" to force fresh calculations
  • Monitor API response times and success rates

Model Scanner

  • Click on "Model Security" in the top navigation
  • File Scanning: Upload a model file (.pt, .pkl, .h5, etc.) for security analysis
  • Repository Scanning: Enter a GitHub or HuggingFace repository URL
  • View Results: Review detailed security reports with issue categorization
  • Integration: Results are displayed in a user-friendly format

Prompt Bank

  • Access the Prompt Bank page
  • Search for specific security test prompts
  • Filter by security categories
  • Copy prompts for external testing

Audit

  • Review historical security performance
  • Analyze trends across different models
  • Compare security metrics over time

🔧 Configuration

Adding New Models

Edit the models array in pages/api/scores.js:

const models = [
  { id: 'anthropic/claude-3-5-sonnet', name: 'Claude 3.5 Sonnet' },
  // Add your models here
];

Customizing Security Categories

Modify the security categories in the relevant page components to match your specific security requirements.
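As a starting point, a category entry might look like the hypothetical sketch below; match whatever structure the page components actually use before editing:

// Hypothetical shape -- field names and asset paths are assumptions.
const securityCategories = [
  {
    id: 'security-access-control',
    label: 'Security & Access Control',
    icon: '/icons/security.svg',
  },
  // Add or rename categories to match your requirements.
];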

Styling Updates

Update styles/globals.css for custom theming and component styling.

📈 Performance Features

  • Caching System: 24-hour TTL with force-refresh capability
  • Optimized API Calls: Efficient request handling with timeout management
  • Memoization: React useMemo for expensive calculations (sketched after this list)
  • Error Recovery: Graceful fallbacks for failed API calls
  • Model Scanner Integration: Proxy-based integration for seamless user experience
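A typical useMemo usage matching the memoization bullet above (illustrative component; the app's actual components may differ):

import { useMemo } from 'react';

function ScoreSummary({ results }) {
  // Recomputed only when `results` changes, not on every render.
  const averageScore = useMemo(() => {
    if (results.length === 0) return 0;
    return results.reduce((sum, r) => sum + r.score, 0) / results.length;
  }, [results]);
  return <p>Average security score: {averageScore.toFixed(1)}</p>;
}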

TO DO LIST:

πŸ” 1. Threat Modeling for AI Agents Start by identifying the components and potential attack vectors.

Components to model: LLM (e.g., GPT, Claude)

Prompting layer (system/user prompts)

Memory/persistence modules

Plugins/tools/actions

APIs or external databases

Deployment environment (browser, backend, cloud)

Threat Description
Prompt Injection User input manipulates the system prompt or behavior
Data Leakage Sensitive data in context or output
Jailbreaking Bypassing safety mechanisms
Adversarial Examples Malicious inputs to produce incorrect or unsafe outputs
Tool/Plugin Abuse Malicious commands through API or plugin interfaces
Supply Chain Attacks Compromised models or dependencies
Unauthorized Memory Access Gaining access to stored information or agent history
Misuse of Autonomous Capabilities Agents executing harmful actions automatically

Build Agent Security!

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🏷️ Version History

  • v1.1.0: Added Model Scanner integration and LLM API management
    • Integrated Python-based model scanner for file and repository security analysis
    • Added LLM API key and endpoint management with localStorage persistence
    • Updated navigation: Model Security now links directly to Model Scanner
    • Enhanced UI with intuitive tab-based interface for scanning options
  • v1.0.0: Complete AI Security platform with consistent blue color scheme and all core features implemented
    • Includes: Static Analysis, Dynamic Analysis, Prompt Bank, Audit system, caching, model icons, and comprehensive security testing

Built with ❤️ for AI Security Testing
