PHom798/MobileNetV2-On-Device-Inference

🧠 MobileNetV2 On-Device Inference

Real-Time Image Classification Powered by ONNX Runtime

🚀 Bringing Computer Vision to Your Pocket

Complete on-device image classification with low latency and maximum privacy

Demo video: iphone-zoom-out-middle-move-out.1.online-video-cutter.com.mp4


✨ Features

🎯 Core Capabilities

  • 🔥 MobileNetV2 Integration - Optimized CNN architecture running entirely on-device
  • ⚡ ONNX Runtime - High-performance inference engine for mobile platforms
  • 📸 ImageNet Classification - Recognizes the 1,000 ImageNet object categories
  • 🎨 Production-Ready Preprocessing - ImageNet normalization and center crop
  • 🏆 Top-K Predictions - Configurable multi-class prediction output
  • ✅ Confidence Thresholding - Smart fallback for low-confidence predictions
  • 🔒 Privacy-First - All inference happens locally, no data leaves device
  • 📱 Cross-Platform - Runs on Android and iOS with Flutter

🛠️ Technical Highlights

  • Preprocessing Pipeline: ImageNet-standard normalization (mean/std)
  • Input Processing: Center crop to 224x224 model input size
  • Output Processing: Softmax activation for probability distribution
  • Confidence Filtering: Configurable threshold with fallback handling
  • Top-K Selection: Returns top N predictions with scores
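The Output Processing step above turns the model's raw logits into a probability distribution. A minimal, numerically stable Dart sketch of softmax (not necessarily the repo's exact implementation — subtracting the max logit before exponentiating avoids overflow):

```dart
import 'dart:math';

/// Numerically stable softmax: subtract the max logit before exponentiating,
/// then normalize so the outputs sum to 1.
List<double> softmax(List<double> logits) {
  final maxLogit = logits.reduce(max);
  final exps = logits.map((l) => exp(l - maxLogit)).toList();
  final sum = exps.reduce((a, b) => a + b);
  return exps.map((e) => e / sum).toList();
}
```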

✨ Learn

Check out my blog post on Mastering On-Device ML in Flutter: A Guide to Softmax, Top-K, and Confidence Checks for more insights on implementing machine learning models efficiently.


🎥 Previews

Demo video: demoai.mp4

🏗️ Architecture

┌─────────────────┐
│  Camera/Gallery │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ Image Loading   │
│  & Decoding     │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│  Center Crop    │
│   (224x224)     │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│   ImageNet      │
│ Normalization   │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│  MobileNetV2    │
│  ONNX Model     │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│    Softmax      │
│   Activation    │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│  Top-K + Conf.  │
│   Filtering     │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│  Display Results│
└─────────────────┘

📦 Installation

Prerequisites

dependencies:
  flutter:
    sdk: flutter
  onnxruntime: ^1.15.0  # Or latest version
  image: ^4.0.0

Setup Steps

  1. Clone the repository

git clone https://github.com/PHom798/MobileNetV2-On-Device-Inference.git
cd MobileNetV2-On-Device-Inference

  2. Install dependencies

flutter pub get

  3. Add the ONNX model

    • Download the MobileNetV2 ONNX model
    • Place it in assets/models/mobilenetv2.onnx
    • Register the assets in pubspec.yaml:
    flutter:
      assets:
        - assets/models/mobilenetv2.onnx
        - assets/labels/imagenet_classes.txt

  4. Run the app

flutter run

🎮 Usage

Basic Implementation

import 'package:flutter/services.dart' show rootBundle;
import 'package:onnxruntime/onnxruntime.dart';

class ImageClassifier {
  late OrtSession session;

  Future<void> initialize() async {
    // Initialize the ORT environment, then load the bundled model
    OrtEnv.instance.init();
    final modelBytes = await rootBundle.load('assets/models/mobilenetv2.onnx');
    session = OrtSession.fromBuffer(
      modelBytes.buffer.asUint8List(),
      OrtSessionOptions(),
    );
  }

  Future<List<Prediction>> classify(String imagePath) async {
    // 1. Load and preprocess image (center crop + ImageNet normalization)
    final preprocessed = await preprocessImage(imagePath);

    // 2. Run inference ('input' must match the model's actual input name)
    final inputTensor = OrtValueTensor.createTensorWithDataList(
      preprocessed,
      [1, 3, 224, 224],
    );
    final outputs = await session.runAsync(
      OrtRunOptions(),
      {'input': inputTensor},
    );
    inputTensor.release();

    // 3. Apply softmax to turn the [1, 1000] logits into probabilities
    final logits = (outputs?[0]?.value as List<List<double>>)[0];
    final probabilities = softmax(logits);

    // 4. Get Top-K predictions above the confidence threshold
    return getTopKPredictions(probabilities, k: 5, threshold: 0.1);
  }
}
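classify returns predictions by class index; mapping an index to a human-readable name requires the label file registered as an asset in the Installation section (assets/labels/imagenet_classes.txt). A minimal parsing sketch, assuming one label per line:

```dart
/// Parses the bundled ImageNet label file (one label per line) into a list,
/// so that prediction index i maps to labels[i].
List<String> parseLabels(String raw) {
  return raw
      .split('\n')
      .map((l) => l.trim())
      .where((l) => l.isNotEmpty)
      .toList();
}
```

In the app you would first load the raw string with rootBundle.loadString('assets/labels/imagenet_classes.txt') and then pass it to parseLabels.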

Preprocessing Pipeline

import 'dart:io';
import 'dart:typed_data';
import 'package:image/image.dart';

Future<Float32List> preprocessImage(String path) async {
  final img = decodeImage(await File(path).readAsBytes())!;

  // Resize so the shorter side is 224 (standard ImageNet eval practice,
  // and it guarantees the crop never exceeds the image bounds),
  // then center crop to 224x224
  final resized = img.width < img.height
      ? copyResize(img, width: 224)
      : copyResize(img, height: 224);
  final cropped = copyCrop(resized,
    x: (resized.width - 224) ~/ 2,
    y: (resized.height - 224) ~/ 2,
    width: 224,
    height: 224,
  );

  // ImageNet normalization (per-channel mean/std), written in NCHW order
  const mean = [0.485, 0.456, 0.406];
  const std = [0.229, 0.224, 0.225];

  final normalized = Float32List(3 * 224 * 224);
  var i = 0;
  for (var c = 0; c < 3; c++) {
    for (var y = 0; y < 224; y++) {
      for (var x = 0; x < 224; x++) {
        final pixel = cropped.getPixel(x, y);
        final channel = c == 0 ? pixel.r : (c == 1 ? pixel.g : pixel.b);
        normalized[i++] = (channel / 255.0 - mean[c]) / std[c];
      }
    }
  }

  return normalized;
}

📊 Model Details

Property          Value
Architecture      MobileNetV2
Input Size        224 × 224 × 3
Parameters        ~3.5M
Model Size        ~14 MB
Classes           1,000 (ImageNet)
Top-1 Accuracy    ~71.8%
Top-5 Accuracy    ~90.3%
Inference Time    20-50 ms (device dependent)

🔧 Configuration

Adjustable Parameters

class ModelConfig {
  static const int inputSize = 224;
  static const int topK = 5;
  static const double confidenceThreshold = 0.1;
  static const List<double> imagenetMean = [0.485, 0.456, 0.406];
  static const List<double> imagenetStd = [0.229, 0.224, 0.225];
}
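For illustration, here is one possible implementation of getTopKPredictions matching the parameters above: sort by probability, keep the top k, and return an empty list as the low-confidence fallback (the repo's actual fallback behavior may differ):

```dart
class Prediction {
  final int index;
  final double score;
  Prediction(this.index, this.score);
}

/// Returns the top-k classes by probability. If even the best score falls
/// below [threshold], returns an empty list so the caller can show a
/// "not confident" fallback instead of a likely-wrong label.
List<Prediction> getTopKPredictions(List<double> probs,
    {int k = 5, double threshold = 0.1}) {
  final indexed = List.generate(probs.length, (i) => Prediction(i, probs[i]))
    ..sort((a, b) => b.score.compareTo(a.score));
  if (indexed.isEmpty || indexed.first.score < threshold) return [];
  return indexed.take(k).toList();
}
```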

🌟 Key Advantages

🔒 Privacy & Security

  • 100% On-Device - No internet required
  • Zero Data Transmission - Images never leave device
  • GDPR-Friendly - No external data processing or third-party services

⚡ Performance

  • Low Latency - Instant results without network delay
  • Offline First - Works without connectivity
  • Efficient - Optimized for mobile CPUs

💰 Cost-Effective

  • No API Costs - Zero inference fees
  • Scalable - No server infrastructure needed
  • Sustainable - Reduced carbon footprint

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

🙏 Acknowledgments

  • MobileNetV2 - Sandler et al., 2018
  • ONNX Runtime - Microsoft's cross-platform inference engine
  • ImageNet - Dataset and pretrained weights
  • Flutter Team - Amazing cross-platform framework

💬 Connect & Support

For questions, feedback, or collaborations:

GitHub Twitter LinkedIn Email


⭐ Star this repo if you find it useful!

Made with ❤️ and Flutter
