Built with React โข TypeScript โข Node.js โข Cloudflare Edge
Accurate. Fast. Reliable.
Measure your internet speed with precision using advanced TCP analysis and real-time streaming
Quick Navigation
๐ฏ Overview โข โจ Features โข ๐ง Stack โข โ๏ธ Architecture โข ๐ Flow โข ๐ Structure
๐ API โข ๐ ๏ธ Setup โข ๐ Usage โข ๐ง Troubleshooting โข ๐ Deploy โข ๐ Performance โข ๐ฎ Roadmap โข Author
This repository is a sanitized showcase version of a production SaaS system.
Core business logic, sensitive components, and full implementation details are intentionally excluded to protect intellectual property and maintain system security.
The goal of this repository is to demonstrate:
- System architecture
- Code structure
- Engineering practices
๐ฉ Full codebase access can be provided upon request for technical evaluation.
Contact: amar01pawar80@gmail.com
Traditional speed tests often provide inaccurate results due to:
- TCP slow-start contamination
- No bufferbloat detection
- Single-threaded testing bottlenecks
- Server-side performance limitations
- Poor protocol overhead handling
NetPulse delivers 99% accurate measurements through:
- Advanced TCP Analysis - Dynamic grace periods exclude TCP slow-start phase
- Multi-Core Processing - Cluster mode utilizes all CPU cores
- Edge Computing - Cloudflare deployment for nearest-edge routing
- Protocol Intelligence - XHR and WebSocket support with overhead detection
- Real-Time Streaming - Chunked transfer with backpressure handling
A production-grade speed testing platform featuring:
- ๐จ Modern React UI with Tailwind CSS
- โก High-performance Node.js backend (Express + Cluster)
- ๐ต Cloudflare Pages Functions for edge deployment
- ๐ณ Docker & Kubernetes ready
- ๐ Real-time charts and PDF report generation
| Feature | Description | Technology |
|---|---|---|
| Download Speed Test | Multi-stream HTTP download with chunked encoding | XHR / WebSocket |
| Upload Speed Test | Binary stream upload with real-time byte counting | POST / WebSocket |
| Ping & Jitter | 5-sample averaging with high-resolution timers | hrtime() |
| Bufferbloat Analysis | A-F rating based on latency under load | M-Labs algorithm |
| Packet Loss Detection | Sent/received packet tracking | Custom implementation |
| Protocol Overhead | Automatic detection and compensation | HTTP/TCP/IP factor |
graph LR
A[User Interface] --> B[Test Engine]
B --> C[Web Worker]
C --> D[XHR Protocol]
C --> E[WebSocket Protocol]
D --> F[Backend API]
E --> G[WebSocket Server]
F --> H[Cluster Mode]
G --> H
H --> I[Multi-Core CPU]
Intelligent Testing:
- Dynamic Grace Period - Auto-adjusts 1-3s based on connection speed
- TCP Slow-Start Exclusion - Removes first 2-3 seconds from calculation
- Goodput Reporting - Reports application-level throughput
- Parallel Connections - 4 concurrent streams by default
- Abort Support - Clean test cancellation via AbortController
Developer Experience:
- TypeScript Strict Mode - Full type safety
- ESLint + Prettier - Code quality enforced
- Hot Module Replacement - Vite dev server
- Health Checks - Built-in monitoring endpoints
- Comprehensive Logging - Morgan + custom metrics
Modern React SPA with TypeScript & Vite
graph BT
A[User Interface] --> B[React Components]
B --> C[State Management]
C --> D[Web Workers]
D --> E[Speed Test Engine]
B --> F[UI Libraries]
F --> G[Tailwind CSS]
F --> H[Framer Motion]
E --> I[Data Visualization]
I --> J[Recharts]
I --> K[jsPDF Reports]
Core Stack:
| Layer | Technology | Version | Purpose |
|---|---|---|---|
| Framework | React | 18.3.1 | Component-based UI |
| Language | TypeScript | 5.5.3 | Type safety & DX |
| Build Tool | Vite | 7.0.6 | Fast HMR & bundling |
| Styling | Tailwind CSS | 3.4.1 | Utility-first CSS |
| Animation | Framer Motion | Latest | Smooth animations |
| Charts | Recharts | Latest | Real-time graphs |
| Reporting | jsPDF + html2canvas | Latest | PDF generation |
| Notifications | React Hot Toast | Latest | User feedback |
Key Features:
- Type-Safe Development - Full TypeScript coverage
- Hot Module Replacement - Instant feedback loop
- Code Splitting - Optimized bundle sizes
- Tree Shaking - Eliminate dead code
- Responsive Design - Mobile-first approach
High-Performance Node.js with Cluster Mode
graph TB
A[HTTP Requests] --> B[Express Server]
B --> C[Middleware Layer]
C --> D[Compression]
C --> E[Helmet Security]
C --> F[Rate Limiter]
C --> G[Morgan Logger]
B --> H[Cluster Master]
H --> I[Worker 1]
H --> J[Worker 2]
H --> K[Worker N]
I --> L[download endpoint]
I --> M[upload endpoint]
I --> N[WebSocket server]
J --> L
J --> M
J --> N
K --> L
K --> M
K --> N
Core Stack:
| Layer | Technology | Version | Purpose |
|---|---|---|---|
| Runtime | Node.js | 20+ | JavaScript runtime |
| Framework | Express | 4.18.2 | Web server |
| Clustering | cluster module | Built-in | Multi-process |
| WebSocket | ws | 8.18.2 | Real-time protocol |
| Security | Helmet | Latest | Security headers |
| Compression | compression | Latest | Gzip responses |
| Rate Limiting | express-rate-limit | Latest | DDoS protection |
| Logging | morgan | Latest | HTTP logger |
| CORS | cors | Latest | Cross-origin support |
Performance Optimizations:
- Multi-Core Processing - Utilizes all CPU cores
- Stream-Based Transfers - No buffering overhead
- Backpressure Handling - Memory-efficient streaming
- Pre-Generated Buffers - Reduced crypto calls
- Chunked Encoding - Better throughput
Multi-Platform Deployment Strategy
| Platform | Type | Use Case | Benefits |
|---|---|---|---|
| Cloudflare Pages | Edge Computing | Global distribution | 275+ locations, low latency |
| Docker | Containerization | Consistent environments | Reproducible builds |
| Kubernetes | Orchestration | Production scaling | Auto-healing, load balancing |
| Render | Managed Hosting | Easy deployment | Zero DevOps, auto-SSL |
| Vercel | Serverless | Static + API routes | Edge functions, analytics |
| Railway | Cloud Platform | Full-stack apps | Database integration |
DevOps Toolchain:
- CI/CD - GitHub Actions (26 workflows)
- Local Dev - Docker Compose
- K8s Packaging - Helm Charts
- Edge Deployment - Wrangler CLI
- Monitoring - Health checks, logging
graph TB
subgraph Client
A[React SPA] --> B[SpeedTestEngine]
B --> C[Web Worker]
C --> D[XHR Protocol]
C --> E[WebSocket Protocol]
end
subgraph Edge
F[Cloudflare Pages] --> G[download function]
F --> H[upload function]
F --> I[ping function]
F --> J[websocket function]
end
subgraph Origin
K[Express Server] --> L[Cluster Master]
L --> M[Worker 1]
L --> N[Worker 2]
L --> O[Worker N]
M --> P[download endpoint]
M --> Q[upload endpoint]
M --> R[WebSocket server]
end
B --> F
B --> K
D --> G
D --> P
E --> J
E --> R
Frontend (/project/src):
App.tsx- Router & layout orchestrationNewSpeedTest.tsx- Main test UI (24.3KB)SpeedTestEngine.ts- Test coordinator (219 lines)speedTestWorker.ts- Web Worker logic (1471 lines)serverConfig.ts- Environment-aware config
Backend (/backend):
server.js- Express app with clustering (888 lines)start.js- Process launcherpublic/- Built frontend assets
Edge Functions (/functions):
index.ts- Main entry pointdownload.ts- Stream-based downloadupload.ts- Stream-based uploadping.ts- Latency measurementwebsocket.ts- WS protocol handler
sequenceDiagram
participant U as User
participant UI as React UI
participant E as Engine
participant W as Web Worker
participant S as Server
U->>UI: Click "Start Test"
UI->>E: runSpeedTest()
E->>W: postMessage(START_TEST)
Note over W: Initialize AbortController
W->>S: GET /ping (5 samples)
S-->>W: Response timestamps
W->>W: Calculate avg ping + jitter
W->>S: GET /download?bytes=50MB
S-->>W: Stream chunks (1-4MB)
W->>W: Measure throughput
Note over W: Exclude TCP slow-start
W->>S: POST /upload (binary)
S-->>W: Byte count ACK
W->>W: Calculate upload speed
W->>UI: TEST_COMPLETE
UI->>U: Display results
Phase 1: Initialization (0-10%)
- User clicks "Start Test"
SpeedTestEngine.runSpeedTest()called- Web Worker initialized with config
- AbortController created for cancellation
Phase 2: Ping Measurement (10-30%)
- Send 5 ping requests to
/pingendpoint - Server responds with precise timestamps
- Calculate average, min, max, jitter
- Update progress UI in real-time
Phase 3: Download Test (30-70%)
- Request random data (default 50MB)
- Server streams chunks via
Transfer-Encoding: chunked - Web Worker measures bytes/time
- TCP Grace Period: First 2-3 seconds excluded
- Dynamic adjustment: Fast connections = 1s grace, Slow = 3s
Phase 4: Upload Test (70-90%)
- Generate random binary data
- POST to
/uploadasapplication/octet-stream - Server counts bytes in real-time (no buffering)
- Calculate throughput from duration
Phase 5: Bufferbloat (Optional, 90-95%)
- Create background download load
- Measure ping under load
- Compare to idle ping
- Assign A-F rating (M-Labs algorithm)
Phase 6: Completion (95-100%)
- Compile final results object
- Generate unique test ID
- Store in localStorage (history)
- Navigate to results page
netpulse/
โโโ ๐ .github/workflows/ # CI/CD pipelines (26 files)
โ โโโ ci-cd.yml # Build & test automation
โ โโโ render-deploy.yml # Auto-deploy to Render
โ โโโ kubernetes-deploy.yml # K8s deployment
โ โโโ security-scan.yml # Dependency scanning
โ
โโโ ๐ api/ # Legacy API routes
โ โโโ ping.js # Simple ping endpoint
โ
โโโ ๐ backend/ # Node.js Express server
โ โโโ public/ # Built frontend assets
โ โ โโโ index.html
โ โ โโโ assets/ # Vite build output
โ โโโ server.js # Main Express app (888 lines)
โ โโโ server-azure.js # Azure-specific variant
โ โโโ start.js # Process launcher
โ โโโ package.json # Backend dependencies
โ
โโโ ๐ functions/ # Cloudflare Pages Functions
โ โโโ index.ts # Entry point
โ โโโ download.ts # Download speed test
โ โโโ upload.ts # Upload speed test
โ โโโ ping.ts # Ping/jitter measurement
โ โโโ status.ts # Server health
โ โโโ websocket.ts # WebSocket protocol
โ โโโ _routes.json # Routing config
โ
โโโ ๐ helm/netpulse/ # Kubernetes Helm chart
โ โโโ Chart.yaml
โ โโโ values.yaml
โ โโโ templates/
โ โโโ deployment.yaml
โ
โโโ ๐ kubernetes/ # Raw K8s manifests
โ โโโ deployment.yaml # Deployment + Service + PVC
โ
โโโ ๐ project/ # React frontend (Vite)
โ โโโ src/
โ โ โโโ components/ # React components (31 files)
โ โ โ โโโ NewSpeedTest.tsx # Main test UI
โ โ โ โโโ NewResultsPage.tsx # Results display
โ โ โ โโโ NewHeader.tsx # Navigation
โ โ โ โโโ NewFooter.tsx # Footer
โ โ โ โโโ BufferbloatAnalysis.tsx
โ โ โ โโโ PacketLossCard.tsx
โ โ โ โโโ ProtocolOverheadInfo.tsx
โ โ โ โโโ TcpGracePeriodInfo.tsx
โ โ โ โโโ ReportGenerator.tsx
โ โ โ โโโ ... (21 more)
โ โ โโโ utils/
โ โ โ โโโ speedTestEngine.ts # Test orchestrator
โ โ โ โโโ speedTestWorker.ts # Web Worker (1471 lines)
โ โ โ โโโ webSocketService.ts # WS client
โ โ โโโ types/
โ โ โ โโโ speedTest.ts # TypeScript interfaces
โ โ โโโ config/
โ โ โ โโโ serverConfig.ts # Environment config
โ โ โโโ App.tsx # Root component
โ โ โโโ main.tsx # Entry point
โ โ โโโ index.css # Global styles
โ โโโ public/
โ โ โโโ speedometer.svg
โ โโโ package.json # Frontend dependencies
โ โโโ vite.config.ts # Vite configuration
โ โโโ tailwind.config.js # Tailwind setup
โ โโโ tsconfig.json # TypeScript config
โ
โโโ ๐ scripts/ # Build utilities
โ โโโ copy-frontend.js # Copy build to backend
โ
โโโ ๐ณ Dockerfile # Multi-stage build
โโโ ๐ณ docker-compose.yml # Local dev orchestration
โโโ ๐ณ docker-entrypoint.sh # Container startup
โโโ ๐ render.yaml # Render deployment config
โโโ ๐ vercel.json # Vercel deployment
โโโ ๐ wrangler.toml # Cloudflare Wrangler CLI
โโโ ๐ railway.toml # Railway deployment
โโโ ๐ staticwebapp.config.json # Azure Static Web Apps
โโโ ๐ package.json # Root workspace config
Purpose: Measure round-trip time to server
Query Parameters:
timestamp(optional): Client timestamp for RTT calculationincludeLoad(boolean): Include server load metrics
Response:
{
"status": "ok",
"message": "pong",
"serverTimestamp": 1234567890123,
"requestTimestamp": 1234567890100,
"serverProcessingTime": 23,
"preciseProcessingTimeMs": 0.145
}Implementation: Uses process.hrtime() for nanosecond precision
Purpose: Stream random data for download measurement
Query Parameters:
bytes(number): Size in bytes (default: 1MB, max: 500MB)chunkSize(number): Chunk size for streaming (default: 1MB)
Headers:
Content-Type: application/octet-stream
Transfer-Encoding: chunked
X-Download-Size-Bytes: 52428800
X-Download-Duration: 4.523
X-Download-Throughput-MBps: 11.05
Features:
- Pre-generated random buffers (crypto.randomBytes)
- Backpressure handling via
drainevent - Client disconnect detection
- Performance metric logging
Purpose: Receive binary stream for upload measurement
Request Headers:
Content-Type: application/octet-stream
Content-Length: 52428800
Response:
{
"status": "success",
"receivedAt": 1234567890123,
"byteLength": 52428800,
"duration": "4.892",
"throughputMBps": "10.23",
"throughputMbps": "81.84"
}Implementation:
- Stream processing (no buffering)
- Real-time byte counting
- Memory efficient for large uploads
Purpose: Detailed server performance statistics
Response:
{
"status": "ok",
"timestamp": 1234567890123,
"serverUptime": 3600.5,
"metrics": {
"totalRequests": 15234,
"errors": 12,
"requestsLastMinute": 245,
"requestsPerSecond": "4.08",
"endpointStats": {
"ping": 5023,
"download": 5102,
"upload": 5109
}
},
"system": { /* os.cpus(), os.freemem(), etc */ },
"process": { /* process.memoryUsage(), v8 stats */ },
"cluster": { /* worker info */ }
}Purpose: Lightweight endpoint for load balancers
Response:
{
"status": "ok",
"timestamp": 1234567890123,
"uptime": 3600.5,
"requestsPerSecond": "4.08"
}Purpose: Bidirectional communication for advanced testing
Client Messages:
{
"type": "start_test",
"payload": {
"phase": "download",
"size": 52428800
}
}Server Messages:
{
"type": "upload_ack",
"timestamp": 1234567890123,
"bytesReceived": 65536,
"totalBytesReceived": 1048576
}All backend endpoints are mirrored as Cloudflare Pages Functions:
| Function | Purpose | Notes |
|---|---|---|
functions/download.ts |
Edge download test | Stream-based response |
functions/upload.ts |
Edge upload test | Request body streaming |
functions/ping.ts |
Edge latency | Minimal processing |
functions/status.ts |
Edge health | Basic status |
functions/websocket.ts |
Edge WebSocket | Not yet implemented |
Key Differences from Backend:
- Runs at edge (closer to users = lower latency)
- Stateless (no cluster mode needed)
- Limited to Cloudflare's runtime constraints
- Automatic scaling to 275+ locations
Required:
- Node.js >= 20.12.0
- npm or yarn
- Git
Optional:
- Docker Desktop (for containerized dev)
- Kubernetes cluster (minikube/kind)
- Wrangler CLI (Cloudflare deployment)
Step 1: Clone Repository
git clone https://github.com/yourusername/netpulse.git
cd netpulseStep 2: Install Frontend Dependencies
cd project
npm installStep 3: Install Backend Dependencies
cd ../backend
npm installStep 4: Configure Environment (Optional)
Create backend/.env:
PORT=3000
ENABLE_CLUSTER=true
RATE_LIMIT_WINDOW_MS=60000
RATE_LIMIT_MAX_REQUESTS=500
KEEP_ALIVE_TIMEOUT=65000
HEADERS_TIMEOUT=66000Step 5: Start Backend Server
cd backend
npm startServer runs on http://localhost:3000
Step 6: Start Frontend Dev Server (new terminal)
cd project
npm run devFrontend runs on http://localhost:5173
Access: Open http://localhost:5173 in browser
Step 1: Build Docker Image
docker build -t netpulse:latest .Step 2: Run Container
docker run -d \
-p 3000:3000 \
-e NODE_ENV=production \
-e ENABLE_CLUSTER=true \
--name netpulse \
netpulse:latestStep 3: Verify Health
curl http://localhost:3000/healthAccess: Open http://localhost:3000
Step 1: Start All Services
docker-compose up -dStep 2: View Logs
docker-compose logs -fStep 3: Stop Services
docker-compose downServices:
netpulse: Main application (port 3000)- Volume:
netpulse_logsfor persistent logs
Access: Open http://localhost:3000
Prerequisites:
- kubectl configured
- Kubernetes cluster (v1.19+)
- Docker registry access
Step 1: Build and Push Image
docker build -t your-registry/netpulse:latest .
docker push your-registry/netpulse:latestStep 2: Update Deployment Config
Edit kubernetes/deployment.yaml:
image: your-registry/netpulse:latestStep 3: Deploy to Cluster
kubectl apply -f kubernetes/deployment.yamlStep 4: Verify Deployment
kubectl get pods -l app=netpulse
kubectl get svc netpulseAccess: Use LoadBalancer IP from kubectl get svc
Prerequisites:
- Wrangler CLI installed:
npm i -g wrangler - Cloudflare account
- Domain configured on Cloudflare
Step 1: Login to Cloudflare
wrangler loginStep 2: Initialize Project
wrangler initStep 3: Configure wrangler.toml
name = "netpulse"
compatibility_date = "2024-01-01"
pages_build_output_dir = "./project/dist"Step 4: Deploy
wrangler pages deploy project/dist --project-name=netpulseAccess: https://netpulse.pages.dev
Step 1: Push to GitHub
git add .
git commit -m "Initial commit"
git push origin mainStep 2: Connect Render
- Go to https://render.com/dashboard
- Click "New +" โ "Web Service"
- Connect GitHub repository
- Configure:
- Name: netpulse
- Environment: Docker
- Plan: Free
- Auto-Deploy: Yes
Step 3: Deploy
Render automatically detects render.yaml and deploys.
Access: https://your-service.onrender.com
1. Navigate to Application
Open your browser to the deployed URL (e.g., http://localhost:3000)
2. Configure Test Settings (Optional) Click the gear icon โ๏ธ to adjust:
- Test Duration: 5-30 seconds per phase
- Parallel Connections: 1-8 streams
- Protocol: XHR or WebSocket
- TCP Grace Period: 1-5 seconds
- Enable Bufferbloat Test: Yes/No
3. Start Test Click the big "START TEST" button
4. Watch Progress Real-time updates show:
- Current phase (Ping โ Download โ Upload)
- Live speed gauge
- Progress bar (%)
- Elapsed time
5. Review Results After completion, see:
- Download Speed: Mbps + MB/s
- Upload Speed: Mbps + MB/s
- Ping: ms (average of 5 samples)
- Jitter: ms (variance)
- Bufferbloat Grade: A-F
- Packet Loss: % lost packets
- Server Location: Tested endpoint
6. Share Results
- Click "Share" to generate PDF report
- Copy result link
- View historical tests
Backend (.env):
# Server Configuration
PORT=3000 # Listen port
ENABLE_CLUSTER=true # Use all CPU cores
NODE_ENV=production # production/development
# Performance Tuning
KEEP_ALIVE_TIMEOUT=65000 # HTTP keep-alive (ms)
HEADERS_TIMEOUT=66000 # Headers timeout (ms)
# Rate Limiting
RATE_LIMIT_WINDOW_MS=60000 # 1 minute window
RATE_LIMIT_MAX_REQUESTS=500 # Max requests per IP
# Optional: Custom Server Host
HOST=0.0.0.0 # Bind addressFrontend (.env):
# API Configuration
VITE_API_BASE_URL=http://localhost:3000
VITE_WS_URL=ws://localhost:3000
# Feature Flags
VITE_ENABLE_BUFFERBLOAT=true
VITE_ENABLE_PACKET_LOSS=true
VITE_DEFAULT_PROTOCOL=xhrFor Most Accurate Results:
- Use Ethernet - WiFi introduces variance
- Close Other Tabs - Background downloads affect results
- Test Multiple Times - Average 3-5 tests
- Different Times of Day - Network congestion varies
- Disable VPN/Proxy - Adds latency and overhead
- Update Network Drivers - Old drivers can bottleneck
Understanding TCP Grace Period:
The TCP Grace Period excludes the first few seconds of testing to avoid measuring TCP slow-start:
- Fast Connection (>50 Mbps): 1 second grace
- Medium Connection (10-50 Mbps): 2 seconds grace
- Slow Connection (<10 Mbps): 3 seconds grace
This is automatic and configurable via enableDynamicGracePeriod in config.
Step 1: Complete a speed test
Step 2: On results page, click "Generate PDF"
Step 3: Report includes:
- Test summary (download, upload, ping, jitter)
- Speed graph over time
- Bufferbloat analysis
- Timestamp and server info
- QR code for sharing
Step 4: Download or share link
All tests are stored in browser localStorage:
Access:
const history = JSON.parse(localStorage.getItem('speedTestHistory') || '[]');
console.log(history);Fields:
id: Unique test identifiertimestamp: Unix timestampdownloadSpeed: MbpsuploadSpeed: Mbpsping: msjitter: msbufferbloat: { rating, latencyIncrease }packetLoss: { percentage, sent, received }
Symptoms: Test stops midway with error message
Causes:
- Server disconnected
- Rate limit exceeded
- Browser extension blocking requests
Solutions:
# Check server logs
docker logs netpulse
# Increase rate limit in backend/.env
RATE_LIMIT_MAX_REQUESTS=1000
# Disable ad blockers for test domainSymptoms: Results much lower than expected
Diagnosis:
-
Check TCP Grace Period:
// In speedTestWorker.ts console.log('Grace period:', gracePeriodMs);
-
Verify Server Resources:
curl http://localhost:3000/status
-
Test with Larger File:
Default: 10MB โ Try 50MB or 100MB
Fix:
- Increase test duration in settings
- Enable cluster mode:
ENABLE_CLUSTER=true - Use WebSocket protocol (lower overhead)
Symptoms: "WebSocket connection failed" error
Check:
# Verify WebSocket endpoint
curl -i http://localhost:3000/websocket
# Check firewall
netstat -an | grep 3000Fix:
// In serverConfig.ts, ensure correct WS URL
const wsUrl = import.meta.env.VITE_WS_URL || 'ws://localhost:3000';Symptoms: Container exits immediately
Debug:
docker run -it netpulse:latest /bin/sh
npm start
# Check error outputCommon Fixes:
# Ensure proper permissions
RUN chmod +x /docker-entrypoint.sh
# Check port binding
EXPOSE 3000Symptoms: Deployment fails with build error
Check Logs:
# In Render dashboard
View Logs โ Build LogsCommon Issues:
-
Node Version Mismatch:
# In render.yaml envVars: - key: NODE_VERSION value: "20"
-
Memory Limit:
plan: standard # Upgrade from free tier
-
Build Timeout:
# Optimize Dockerfile RUN npm ci --only=production
Symptoms: "Network Error" or infinite loading
Check:
// In browser console
console.log(window.location.origin);Fix:
// In serverConfig.ts
const baseUrl = import.meta.env.PROD
? '' // Use relative URLs for co-hosted
: 'http://localhost:3000';For Separate Hosting:
# Frontend .env
VITE_API_BASE_URL=https://api.yourdomain.comEnable Verbose Logging:
Backend:
# Set debug level
NODE_DEBUG=* npm startFrontend:
// In speedTestWorker.ts
const DEBUG = true;
if (DEBUG) console.log('Download chunk:', bytesWritten);Monitor Server Metrics:
# Every 10 seconds
watch -n 10 'curl http://localhost:3000/status | jq .metrics'Profile Node.js:
node --inspect server.js
# Open chrome://inspect in ChromeBefore deploying to production:
Security:
- โ Update rate limits for production traffic
- โ Enable HTTPS (TLS termination)
- โ Configure CORS for specific domains only
- โ Rotate any hardcoded secrets
Performance:
- โ
Enable cluster mode (
ENABLE_CLUSTER=true) - โ Tune socket timeouts for high-latency clients
- โ Increase max upload size if needed
- โ Configure reverse proxy (nginx/traefik)
Monitoring:
- โ Set up health check alerts
- โ Configure log aggregation (ELK/DataDog)
- โ Monitor memory usage (V8 heap stats)
- โ Track error rates per endpoint
Scalability:
- โ Horizontal scaling plan (load balancer)
- โ Database for test history (optional)
- โ CDN for static assets
- โ Redis for rate limiting (distributed)
docker-compose.prod.yml:
version: '3.8'
services:
netpulse:
image: your-registry/netpulse:latest
ports:
- "80:10000"
environment:
- NODE_ENV=production
- ENABLE_CLUSTER=true
- RATE_LIMIT_MAX_REQUESTS=1000
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:10000/health"]
interval: 30s
timeout: 10s
retries: 3
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"Deploy:
docker-compose -f docker-compose.prod.yml up -dingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: netpulse-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rate-limit: "100"
nginx.ingress.kubernetes.io/rate-limit-window: "1m"
spec:
rules:
- host: speedtest.yourdomain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: netpulse
port:
number: 80Apply:
kubectl apply -f ingress.yamlCustom Domain:
- Go to Cloudflare Dashboard
- Pages โ Your Project โ Custom Domains
- Add
speedtest.yourdomain.com - Wait for DNS propagation
Environment Variables:
# In Cloudflare Dashboard
Pages โ Settings โ Environment Variables
VITE_API_BASE_URL = https://speedtest.yourdomain.com
Tested Configuration:
- CPU: 8-core (Intel i7-12700K)
- RAM: 32GB DDR4
- Network: 1 Gbps
Single Instance (No Cluster):
- Concurrent Users: ~500
- Peak Throughput: ~2 Gbps
- Memory Usage: ~400MB
- CPU Usage: ~60% (single core)
Cluster Mode (8 Cores):
- Concurrent Users: ~4000
- Peak Throughput: ~8 Gbps
- Memory Usage: ~3.2GB
- CPU Usage: ~70% (all cores)
Optimization Impact:
| Optimization | Improvement | Notes |
|---|---|---|
| Pre-generated Buffers | +40% throughput | Reduces crypto.randomBytes calls |
| Chunked Encoding | +25% throughput | Better streaming efficiency |
| Backpressure Handling | +15% stability | Prevents memory spikes |
| Cluster Mode | +700% capacity | Linear scaling with cores |
| Disabled Compression | +20% throughput | For test endpoints only |
Bundle Sizes (Production Build):
Total: 1.2MB (gzipped: 380KB)
โโโ vendor.js: 650KB (React, Recharts)
โโโ index.js: 280KB (App code)
โโโ speedTestWorker.js: 120KB (Web Worker)
โโโ styles.css: 45KB
Load Time Metrics:
- First Contentful Paint: ~800ms
- Time to Interactive: ~1.2s
- Lighthouse Score: 92/100
Optimization Techniques:
- Code splitting (lazy loading)
- Tree shaking (unused code elimination)
- Minification (Terser)
- Asset optimization (WebP, SVG)
- Mobile App - React Native iOS/Android
- Historical Analytics - Trend graphs over time
- Multi-Language Support - i18n with 10+ languages
- Custom Server Selection - Manual server picker with map
- WebRTC Testing - P2P speed tests between users
- Video Streaming Test - Simulate Netflix/YouTube
- Gaming Latency Test - Simulate online gaming
- API Rate Limiting Dashboard - Visual analytics
- Machine Learning - Anomaly detection for results
- Blockchain Verification - Immutable test records
- Enterprise Features - White-label solutions
- Global Leaderboard - Compare speeds worldwide
Created by: Amar Pawar
Current Version: 2.0.0
License: MIT License
Contributions are welcome! Please follow these guidelines:
- Fork the repository
- Create feature branch:
git checkout -b feat/amazing-feature
- Make changes and ensure tests pass:
npm run lint npm run test - Commit using Conventional Commits:
git commit -m "feat: add amazing feature" - Push to your fork:
git push origin feat/amazing-feature
- Open Pull Request on GitHub
- TypeScript strict mode enabled
- ESLint rules enforced
- Prettier formatting
- Unit tests for critical paths
- Documentation for public APIs
Built with โค๏ธ using React, TypeScript, Node.js, and Cloudflare