
Guide: API Endpoints

Ben edited this page Oct 23, 2025 · 2 revisions

Contents

  1. Requirements
  2. Quick Start
  3. Authentication Methods

Guide

Send your SQL Server changes directly to HTTP APIs and webhooks. This guide shows you how to set up real-time database change notifications to any HTTP endpoint with support for multiple authentication methods, custom headers, and payload compression.

Why Use HTTP Endpoints?

HTTP endpoints provide the simplest way to send database changes to external systems. This direct approach means you can integrate your data with any platform that accepts webhooks or POST requests. Unlike message queues, HTTP endpoints require no middleware: you just point Trignis at a URL. Perfect for simple integrations, webhooks, and scenarios where you control both ends of the connection.

Quick Start

Follow these steps to get database changes flowing to HTTP endpoints in under 10 minutes.

Step 1: Prepare Your Endpoint

Before configuring Trignis, ensure your HTTP endpoint is ready to receive POST requests:

Requirements:

  • Accepts POST requests with JSON payloads
  • Returns HTTP 2xx status codes for successful receipt
  • Can handle your expected request volume (payloads may arrive compressed)
  • (Optional) Supports authentication if needed

Test Your Endpoint:

curl -X POST https://api.yourcompany.com/v1/customers/changes \
  -H "Content-Type: application/json" \
  -d '{"test": "message"}'

Serverless Setup Examples:

AWS Lambda Function URL:

  1. Create a Lambda function in the AWS Console
  2. Enable Function URL in the Lambda configuration
  3. Set authentication to "NONE" or configure IAM
  4. Copy the Function URL (e.g., https://abc123.lambda-url.us-east-1.on.aws/)

Azure Function HTTP Trigger:

  1. Create a Function App in the Azure Portal
  2. Add an HTTP trigger function
  3. Set authorization level to "Function" or "Anonymous"
  4. Copy the function URL with key if needed

Step 2: Configure Trignis

Add an HTTP endpoint to your environment configuration file (e.g., environments/production.json):

Simple Webhook (No Auth):

{
  "ChangeTracking": {
    "ApiEndpoints": [
      {
        "Key": "webhook_customers",
        "Url": "https://hooks.slack.com/services/T00/B00/XXX"
      }
    ]
  }
}

REST API with Bearer Token:

{
  "ChangeTracking": {
    "ApiEndpoints": [
      {
        "Key": "api_customers",
        "Url": "https://api.yourcompany.com/v1/customers/changes",
        "Auth": {
          "Type": "Bearer",
          "Token": "your-api-token-here"
        }
      }
    ]
  }
}

API with Custom Headers:

{
  "ChangeTracking": {
    "ApiEndpoints": [
      {
        "Key": "api_customers",
        "Url": "https://api.yourcompany.com/v1/customers/changes",
        "Auth": {
          "Type": "ApiKey",
          "ApiKey": "your-api-key",
          "HeaderName": "X-API-Key"
        },
        "CustomHeaders": {
          "X-Source": "trignis",
          "X-Environment": "{environment}"
        }
      }
    ]
  }
}

Step 3: Test Your Configuration

Start Trignis and watch for successful deliveries:

# Windows
TrignisBackgroundService.bat test

Look for log entries that indicate a successful delivery.

Step 4: Verify Delivery

Your endpoint will receive POST requests with the following fixed structure:

{
  "Metadata": {
    "Environment": "production",
    "Object": "Customers",
    "Database": "PrimaryDatabase",
    "Timestamp": "2025-01-15T10:30:00Z",
    "Version": 1543,
    "RecordCount": 25
  },
  "Data": [
    {
      "CustomerId": 12345,
      "Name": "John Doe",
      "Email": "john@example.com",
      "$operation": "UPDATE"
    }
  ]
}

Payload Details:

  • Metadata.Environment - Which environment sent the data (production, staging, etc.)
  • Metadata.Object - The tracking object name (table identifier)
  • Metadata.Database - The database connection name
  • Metadata.Version - Current change tracking version
  • Metadata.RecordCount - Number of records in this payload
  • Data[].$operation - Operation type: INSERT, UPDATE, DELETE, or FULL (initial sync)

Your endpoint must:

  • Accept POST requests with Content-Type: application/json
  • Return HTTP 2xx status code for successful receipt
  • Respond within 30 seconds (default timeout)
  • Handle the payload structure shown above
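A minimal handler for this payload can be sketched in a few lines. This is an illustrative Python sketch, not part of Trignis: `summarize_changes` is a hypothetical helper that validates the structure above and counts rows per operation.

```python
import json

def summarize_changes(payload: dict) -> dict:
    """Validate a Trignis change payload and count rows per operation."""
    meta = payload["Metadata"]
    rows = payload["Data"]
    # RecordCount should match the number of rows actually delivered
    assert meta["RecordCount"] == len(rows), "record count mismatch"
    counts = {}
    for row in rows:
        op = row["$operation"]  # INSERT, UPDATE, DELETE, or FULL
        counts[op] = counts.get(op, 0) + 1
    return {"object": meta["Object"], "version": meta["Version"], "operations": counts}

sample = json.loads("""
{
  "Metadata": {"Environment": "production", "Object": "Customers",
               "Database": "PrimaryDatabase", "Timestamp": "2025-01-15T10:30:00Z",
               "Version": 1543, "RecordCount": 1},
  "Data": [{"CustomerId": 12345, "Name": "John Doe", "$operation": "UPDATE"}]
}
""")
print(summarize_changes(sample))
# → {'object': 'Customers', 'version': 1543, 'operations': {'UPDATE': 1}}
```

A real handler would replace the `assert` with a 4xx response so malformed payloads are not retried.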

Authentication Methods

Trignis supports multiple authentication methods to integrate with any API.

No Authentication

For public webhooks or internal unprotected endpoints:

{
  "Key": "public_webhook",
  "Url": "https://hooks.example.com/abc123"
}

Use when:

  • Sending to public webhook services (Slack, Discord)
  • Internal networks with network-level security
  • Testing and development environments

Bearer Token

For modern APIs using OAuth 2.0 tokens or custom bearer tokens:

{
  "Key": "bearer_api",
  "Url": "https://api.example.com/changes",
  "Auth": {
    "Type": "Bearer",
    "Token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
  }
}

Use when:

  • API uses OAuth 2.0 access tokens
  • Service provides bearer tokens for authentication
  • Modern REST APIs and microservices

Basic Authentication

For legacy APIs using username and password:

{
  "Key": "basic_auth_api",
  "Url": "https://legacy-api.example.com/changes",
  "Auth": {
    "Type": "Basic",
    "Username": "trignis",
    "Password": "secure-password"
  }
}

Use when:

  • Older systems requiring HTTP Basic Auth
  • Internal APIs with simple authentication
  • Legacy enterprise systems

Security tip: Use encrypted passwords with PWENC: prefix (see Password Encryption section).
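The Username and Password above are combined into a standard HTTP Basic Authorization header (RFC 7617). If you need to verify credentials on the receiving side, the expected header value can be reproduced with stdlib Python; `basic_auth_header` is an illustrative helper, not part of Trignis.

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the standard HTTP Basic Authorization header value."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("trignis", "secure-password"))
```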

API Key

For services using API keys in custom headers:

{
  "Key": "apikey_service",
  "Url": "https://api.example.com/changes",
  "Auth": {
    "Type": "ApiKey",
    "ApiKey": "sk_live_abc123",
    "HeaderName": "X-API-Key"
  }
}

Use when:

  • Service uses API keys for authentication
  • Keys should be sent in specific headers
  • Third-party APIs (Stripe, SendGrid, etc.)

Default header: If you don't specify HeaderName, it defaults to X-API-Key.

OAuth 2.0 Client Credentials

For enterprise APIs requiring OAuth 2.0 client credentials flow:

{
  "Key": "oauth2_api",
  "Url": "https://api.example.com/changes",
  "Auth": {
    "Type": "OAuth2ClientCredentials",
    "TokenEndpoint": "https://auth.example.com/oauth/token",
    "ClientId": "trignis-client",
    "ClientSecret": "your-client-secret",
    "Scope": "api.write"
  }
}

Use when:

  • Enterprise APIs requiring OAuth 2.0
  • Machine-to-machine authentication
  • APIs with token rotation requirements

How it works:

  1. Trignis automatically fetches access tokens from the token endpoint
  2. Tokens are cached and automatically refreshed before expiration
  3. Expired tokens trigger automatic re-authentication
  4. No manual token management required
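The caching behavior in steps 1-3 can be sketched as a small state holder. This is a simplified illustration of the idea, not the Trignis implementation; `fetch_token` is a hypothetical callable returning `(access_token, expires_in_seconds)`.

```python
import time

class TokenCache:
    """Fetch a token once, reuse it until shortly before expiry, then refresh."""
    def __init__(self, fetch_token, refresh_margin=60, clock=time.monotonic):
        self.fetch_token = fetch_token
        self.refresh_margin = refresh_margin  # refresh this many seconds early
        self.clock = clock
        self.token = None
        self.expires_at = 0.0

    def get(self) -> str:
        if self.token is None or self.clock() >= self.expires_at - self.refresh_margin:
            self.token, expires_in = self.fetch_token()
            self.expires_at = self.clock() + expires_in
        return self.token

def fake_fetch():
    # A real fetch would POST client credentials to the TokenEndpoint
    return ("example-token", 3600)

cache = TokenCache(fake_fetch)
print(cache.get())  # example-token
```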

Advanced Patterns

Pattern 1: Dynamic URL Parameters

Use placeholders in URLs to create dynamic endpoints:

{
  "Key": "dynamic_webhook",
  "Url": "https://api.example.com/{environment}/{object}/changes?timestamp={timestamp}",
  "CustomHeaders": {
    "X-Database": "{database}",
    "X-Batch": "{batch}/{totalbatches}"
  }
}

Available placeholders:

  • {timestamp} - Current timestamp (yyyyMMddHHmmss)
  • {object} - Tracking object name (e.g., "Customers")
  • {database} - Database connection name
  • {environment} - Environment name (e.g., "production")
  • {key} - Endpoint key
  • {batch} - Current batch number (when batching enabled)
  • {totalbatches} - Total number of batches
  • {guid} - New GUID for each request

Example result:

https://api.example.com/production/Customers/changes?timestamp=20250115103000
X-Database: PrimaryDatabase
X-Batch: 1/5
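The expansion behaves like plain string substitution. As a rough sketch of the idea (not the Trignis implementation), `expand_placeholders` below fills in whatever values the caller supplies and generates `{timestamp}` and `{guid}` when they are not provided; unsupplied placeholders are left untouched.

```python
from datetime import datetime, timezone
import uuid

def expand_placeholders(template: str, context: dict) -> str:
    """Expand {placeholder} tokens in a URL or header template."""
    values = dict(context)
    values.setdefault("timestamp", datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S"))
    values.setdefault("guid", str(uuid.uuid4()))
    for key, value in values.items():
        template = template.replace("{" + key + "}", str(value))
    return template

url = expand_placeholders(
    "https://api.example.com/{environment}/{object}/changes?timestamp={timestamp}",
    {"environment": "production", "object": "Customers", "timestamp": "20250115103000"},
)
print(url)
# → https://api.example.com/production/Customers/changes?timestamp=20250115103000
```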

Pattern 2: Multiple Endpoints per Table

Send the same changes to multiple destinations:

{
  "ChangeTracking": {
    "TrackingObjects": [
      {
        "Name": "Customers",
        "TableName": "dbo.Customers",
        "ApiEndpointKeys": [
          "crm_webhook",
          "analytics_api",
          "backup_system"
        ]
      }
    ],
    "ApiEndpoints": [
      {
        "Key": "crm_webhook",
        "Url": "https://crm.example.com/webhooks"
      },
      {
        "Key": "analytics_api",
        "Url": "https://analytics.example.com/ingest",
        "Auth": {
          "Type": "Bearer",
          "Token": "analytics-token"
        }
      },
      {
        "Key": "backup_system",
        "Url": "https://backup.example.com/receive"
      }
    ]
  }
}

Each change is sent to all three endpoints independently.

Pattern 3: Payload Compression

Enable automatic compression for large payloads:

{
  "Key": "compressed_api",
  "Url": "https://api.example.com/changes",
  "EnableCompression": true
}

How it works:

  • Messages over 1KB are automatically compressed using gzip
  • Content-Encoding header set to "gzip"
  • Typically achieves 60-80% size reduction
  • Your endpoint must support gzip decompression

Your endpoint requirements:

  • Must handle Content-Encoding: gzip header
  • Must decompress payload before parsing JSON
  • Most modern web frameworks handle this automatically
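If your framework does not decompress automatically, manual handling is a few lines with the Python standard library. This sketch assumes the gzip behavior described above (Content-Encoding header plus gzipped JSON body).

```python
import gzip
import json

def read_body(raw: bytes, headers: dict) -> dict:
    """Decompress a gzip body when Content-Encoding says so, then parse JSON."""
    if headers.get("Content-Encoding", "").lower() == "gzip":
        raw = gzip.decompress(raw)
    return json.loads(raw)

# Simulate a compressed Trignis request
payload = {"Metadata": {"Object": "Customers"}, "Data": []}
compressed = gzip.compress(json.dumps(payload).encode())
print(read_body(compressed, {"Content-Encoding": "gzip"}) == payload)  # True
```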

Pattern 4: Different Tables to Different APIs

Route specific tables to their respective APIs:

{
  "ChangeTracking": {
    "TrackingObjects": [
      {
        "Name": "Customers",
        "TableName": "dbo.Customers",
        "ApiEndpointKeys": ["crm_api"]
      },
      {
        "Name": "Orders",
        "TableName": "dbo.Orders",
        "ApiEndpointKeys": ["fulfillment_api"]
      },
      {
        "Name": "Products",
        "TableName": "dbo.Products",
        "ApiEndpointKeys": ["inventory_api"]
      }
    ],
    "ApiEndpoints": [
      {
        "Key": "crm_api",
        "Url": "https://crm.example.com/api/customers"
      },
      {
        "Key": "fulfillment_api",
        "Url": "https://fulfillment.example.com/api/orders"
      },
      {
        "Key": "inventory_api",
        "Url": "https://inventory.example.com/api/products"
      }
    ]
  }
}

Pattern 5: Batch Processing for High Volume

Handle millions of changes by batching:

{
  "ChangeTracking": {
    "GlobalSettings": {
      "MaxRecordsPerBatch": 500,
      "EnablePayloadBatching": true
    },
    "TrackingObjects": [
      {
        "Name": "Orders",
        "TableName": "dbo.Orders",
        "InitialSyncMode": "Full"
      }
    ]
  }
}

What happens:

  • Large result sets are split into batches of 500 records
  • Each batch is sent as a separate HTTP request
  • Batch metadata included in headers: X-Batch-Number, X-Total-Batches

Your endpoint will receive:

  • Multiple requests for the same change set
  • Header X-Batch-Number identifying the current batch (1 through the total)
  • Header X-Total-Batches giving the total batch count (e.g. 10)
  • Each request contains up to 500 records in the Data array
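The splitting itself is simple chunking. As an illustrative sketch (not the Trignis implementation), `make_batches` below yields each chunk together with the batch headers described above.

```python
def make_batches(records: list, max_per_batch: int = 500):
    """Split records into batches, pairing each with its batch headers."""
    total = (len(records) + max_per_batch - 1) // max_per_batch  # ceiling division
    for i in range(total):
        chunk = records[i * max_per_batch:(i + 1) * max_per_batch]
        headers = {"X-Batch-Number": str(i + 1), "X-Total-Batches": str(total)}
        yield headers, chunk

batches = list(make_batches(list(range(1200)), max_per_batch=500))
print(len(batches))        # 3
print(batches[0][0])       # {'X-Batch-Number': '1', 'X-Total-Batches': '3'}
print(len(batches[-1][1])) # 200
```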

Error Handling & Reliability

Trignis includes several guardrails to keep delivery reliable: circuit breakers, automatic retries, and a dead letter queue.

Circuit Breaker Pattern

Trignis uses circuit breakers to protect both systems:

How it works:

  1. Closed (normal): Requests flow normally
  2. Open (failed): After 3 consecutive failures, circuit opens for 60 seconds
  3. Half-open (testing): After timeout, one test request is sent
  4. Success: Circuit closes, normal operation resumes
  5. Failure: Circuit opens again for another 60 seconds
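The state cycle above can be sketched as a small state machine. This is an illustrative model of the behavior described (3 failures open the circuit for 60 seconds), not the Trignis implementation.

```python
import time

class CircuitBreaker:
    """Minimal closed → open → half-open circuit breaker."""
    def __init__(self, failure_threshold=3, open_seconds=60, clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.open_seconds = open_seconds
        self.clock = clock
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def allow_request(self) -> bool:
        if self.opened_at is None:
            return True  # closed: requests flow normally
        if self.clock() - self.opened_at >= self.open_seconds:
            return True  # half-open: allow one test request
        return False     # open: reject until the timeout elapses

    def record_success(self):
        self.failures = 0
        self.opened_at = None  # close the circuit

    def record_failure(self):
        self.failures += 1
        if self.opened_at is not None or self.failures >= self.failure_threshold:
            self.opened_at = self.clock()  # (re)open for open_seconds
            self.failures = 0
```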

Log messages:

[WARN] Circuit breaker opened for 'api_customers' for 60s due to: Connection timeout
[DBG] Circuit breaker closed for 'api_customers' after successful request

Dead Letter Queue

Failed messages are automatically saved for later retry:

# Check failed messages
curl http://localhost:2455/health/deadletters

Response shows accumulated failures:

{
  "totalDeadLetters": 15,
  "last24Hours": 3,
  "last7Days": 8,
  "details": [
    {
      "endpoint": "api_customers",
      "error": "HTTP 500: Internal Server Error",
      "timestamp": "2025-01-15T10:30:00Z"
    }
  ]
}

Retry Logic

Trignis automatically retries failed requests:

Retry schedule:

  1. First retry: 2 seconds
  2. Second retry: 4 seconds
  3. Third retry: 8 seconds
  4. After 3 failures: Circuit breaker opens

Your endpoint should:

  • Return 2xx status codes for success
  • Return 5xx for temporary failures (will be retried)
  • Return 4xx for permanent failures (won't be retried)
  • Respond within 30 seconds (default timeout)
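The schedule and the 5xx/4xx distinction above can be sketched as a simple retry loop. This is an illustrative model, not the Trignis implementation; `send` is a hypothetical callable returning an HTTP status code.

```python
import time

def send_with_retries(send, max_attempts=3, base_seconds=2, sleep=time.sleep):
    """Retry with exponential backoff (2s, 4s, 8s).
    5xx responses are retried; 2xx and 4xx are returned immediately."""
    for attempt in range(max_attempts + 1):
        status = send()
        if status < 500:
            return status  # 2xx success or 4xx permanent failure
        if attempt < max_attempts:
            sleep(base_seconds * 2 ** attempt)  # 2, 4, 8 seconds
    return status

delays = []
responses = iter([500, 500, 200])
status = send_with_retries(lambda: next(responses), sleep=delays.append)
print(status, delays)  # 200 [2, 4]
```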

Troubleshooting Guide

"Connection refused" or "Connection timeout"

What it means: Trignis can't reach your HTTP endpoint.

Check:

  1. Is the URL correct? Test with curl https://your-api.com/endpoint
  2. Is the service running? Check your API logs or server status
  3. Is the port accessible? Check firewall rules
  4. Is DNS resolving? Try nslookup your-api.com

Fix: Update the URL, start your service, or open firewall ports.

"401 Unauthorized" or "403 Forbidden"

What it means: Authentication failed or insufficient permissions.

Check:

  1. Is the auth type correct? (Bearer, Basic, ApiKey, OAuth2)
  2. Are credentials valid? Test with curl:
    curl -H "Authorization: Bearer your-token" https://api.example.com
  3. Has the token expired? (OAuth tokens typically expire)
  4. Does the API key have the right permissions?

Fix: Update credentials, refresh OAuth tokens, or check API key permissions.

"413 Payload Too Large"

What it means: Your message exceeds the API's size limit.

Options:

  1. Enable compression:

    "EnableCompression": true
  2. Enable batching to split into smaller chunks:

    "MaxRecordsPerBatch": 100,
    "EnablePayloadBatching": true
  3. Filter data in your stored procedure to send only essential fields

  4. Increase API limits if you control the endpoint

"500 Internal Server Error"

What it means: Your endpoint returned an error processing the request.

Check:

  1. View your API logs for error details
  2. Test with a sample payload:
    curl -X POST https://your-api.com/endpoint \
      -H "Content-Type: application/json" \
      -d '{"test": "data"}'
  3. Is your API handling the payload structure correctly?
  4. Are there any required fields missing?

Fix: Debug your endpoint handler, check logs, ensure proper error handling.

Messages arrive compressed when not expected

What it means: Compression is enabled and your messages exceed the 1KB threshold, so Trignis compressed them.

Solution: Either disable compression or update your endpoint to handle gzip:

{
  "EnableCompression": false
}

Or handle decompression in your code (see Pattern 3 above).

Circuit breaker won't close

What it means: Your endpoint keeps failing, preventing further requests.

Fix:

  1. Check dead letter queue: GET /health/deadletters
  2. Review error details in logs
  3. Fix the underlying issue (auth, network, API bugs)
  4. Wait 60 seconds for automatic retry
  5. If urgent, restart Trignis to reset all circuit breakers

High latency or slow responses

What it means: Your endpoint is taking too long to respond.

Check:

  1. Endpoint response time (must be < 30 seconds)
  2. Network latency between Trignis and endpoint
  3. Database query performance
  4. Endpoint processing logic

Fix:

  • Optimize your endpoint code
  • Add indexes to database queries
  • Consider async processing (return 200 immediately, process later)
  • Reduce MaxRecordsPerBatch to send smaller payloads

Need more detailed logs?

Add debug logging to appsettings.json:

{
  "Serilog": {
    "MinimumLevel": {
      "Override": {
        "Trignis.MicrosoftSQL.Services.ChangeTrackingBackgroundService": "Debug"
      }
    }
  }
}

Then restart Trignis and check log/trignis-*.log for detailed HTTP request/response information.


Platform-Specific Configuration

HTTP Endpoint Properties

All Available Properties:

Property            Required  Default  Notes
Key                 Yes       -        Unique identifier for this endpoint
Url                 Yes       -        Full HTTP/HTTPS URL
Auth                No        null     Authentication configuration
CustomHeaders       No        {}       Additional headers to send
EnableCompression   No        false    Auto-compress payloads > 1KB

Authentication Properties

Bearer Token:

{
  "Type": "Bearer",
  "Token": "your-token-here"
}

Basic Auth:

{
  "Type": "Basic",
  "Username": "username",
  "Password": "password"
}

API Key:

{
  "Type": "ApiKey",
  "ApiKey": "your-api-key",
  "HeaderName": "X-API-Key"
}

OAuth 2.0 Client Credentials:

{
  "Type": "OAuth2ClientCredentials",
  "TokenEndpoint": "https://auth.example.com/oauth/token",
  "ClientId": "client-id",
  "ClientSecret": "client-secret",
  "Scope": "api.write",
  "TokenExpirationSeconds": 3600
}

Monitoring Production Systems

Health Check Endpoints

Check HTTP endpoint connection health:

curl http://localhost:2455/health/connections

Response shows which endpoints are healthy:

{
  "totalEndpoints": 3,
  "healthyEndpoints": 2,
  "unhealthyEndpoints": 1,
  "details": {
    "api_customers": {
      "isHealthy": true,
      "consecutiveFailures": 0,
      "lastSuccess": "2025-01-15T10:30:00Z"
    },
    "webhook_orders": {
      "isHealthy": false,
      "consecutiveFailures": 5,
      "downtimeDuration": "00:05:00",
      "lastError": "Connection timeout"
    }
  }
}

Dead Letter Queue Statistics

Check failed messages:

curl http://localhost:2455/health/deadletters

Response shows accumulated failures:

{
  "totalDeadLetters": 42,
  "last24Hours": 8,
  "last7Days": 15,
  "recentFailures": [
    {
      "endpoint": "api_customers",
      "object": "Customers",
      "error": "HTTP 500: Internal Server Error",
      "timestamp": "2025-01-15T10:25:00Z",
      "recordCount": 150
    }
  ]
}

Set up alerts when last24Hours exceeds your threshold.

Log Monitoring

Watch for these patterns in log/trignis-*.log:

Success:

[INFO] [production] └─ [HTTP] Exported to 'api_customers' (200 OK)

Circuit breaker opened:

[WARN] Circuit breaker opened for 'api_customers' for 60s due to: Connection timeout

Payload compressed:

[DEBUG] Compressed payload from 15234 to 3456 bytes (77.30% reduction)

Authentication refreshed:

[INFO] OAuth2 token refreshed for endpoint 'api_customers'

Dead letter saved:

[WARN] Saved dead letter for Customers (PrimaryDatabase): HTTP 500 Internal Server Error

Setting Up Alerts

Recommended alerts:

  1. Failed Deliveries: When last24Hours > 10 in dead letter stats
  2. Circuit Breaker Open: When logs contain "Circuit breaker opened"
  3. High Error Rate: When consecutive failures > 3
  4. Slow Responses: When response time > 5 seconds

Implementation options:

  • Poll the /health/deadletters endpoint periodically
  • Monitor log files for error patterns
  • Set up alerts in your monitoring system (Prometheus, Datadog, etc.)
  • Configure Windows Event Log monitoring if enabled

Performance Tuning

Optimize for Latency

Want changes delivered as fast as possible?

{
  "ChangeTracking": {
    "GlobalSettings": {
      "PollingIntervalSeconds": 5
    }
  }
}

Trade-off: More database load from frequent polling. More HTTP requests to your endpoint.

Optimize for Throughput

Processing millions of rows? Enable batching and compression:

{
  "ChangeTracking": {
    "GlobalSettings": {
      "PollingIntervalSeconds": 60,
      "MaxRecordsPerBatch": 1000,
      "EnablePayloadBatching": true
    },
    "ApiEndpoints": [
      {
        "Key": "bulk_api",
        "Url": "https://api.example.com/bulk",
        "EnableCompression": true
      }
    ]
  }
}

Trade-off: Higher latency (60s polling), but better for large datasets.

Optimize for Reliability

Increase timeout and retry attempts for unreliable networks:

{
  "ChangeTracking": {
    "GlobalSettings": {
      "HttpTimeoutSeconds": 60,
      "MaxRetryAttempts": 5
    }
  }
}

Note: These settings apply globally to all HTTP endpoints.

Endpoint Response Time Targets

Scenario            Recommended Timeout  Max Batch Size
Fast internal API   5 seconds            1000 records
External webhook    10 seconds           500 records
Slow processing     30 seconds           100 records
Bulk data export    60 seconds           5000 records

Your endpoint should respond as quickly as possible. If processing takes time, return 200 immediately and process asynchronously.
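The "return 200 immediately, process later" pattern can be sketched with a queue and a background worker. A minimal Python illustration under assumed names (`handle_request`, `worker` are hypothetical, not part of Trignis):

```python
import queue
import threading

work = queue.Queue()
processed = []

def worker():
    """Background worker: drains the queue and does the slow processing."""
    while True:
        payload = work.get()
        if payload is None:
            break
        processed.append(payload)  # slow processing would happen here
        work.task_done()

def handle_request(payload) -> int:
    """HTTP handler: only enqueues, so it responds well under the timeout."""
    work.put(payload)
    return 200

threading.Thread(target=worker, daemon=True).start()
status = handle_request({"Metadata": {"Object": "Customers"}, "Data": []})
work.join()  # for the demo only; a production handler would not wait
print(status)  # 200
```

The trade-off: you must persist queued payloads (or rely on Trignis retries and the dead letter queue) if the process can crash before processing finishes.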


Security Best Practices

Use HTTPS Always

Always use HTTPS URLs so payloads are encrypted in transit; plain HTTP is acceptable only for localhost testing.

{
  "Url": "https://api.example.com/changes"  // ✓ Good
  "Url": "http://api.example.com/changes"   // ✗ Bad (except localhost)
}

Rate Limiting

Protect your endpoint from excessive requests by configuring rate limits on your web server or API gateway. Consider:

  • Limiting requests per minute from the Trignis server IP
  • Setting maximum concurrent connections
  • Configuring request throttling based on your capacity

Use IP Whitelisting

If possible, restrict access to the Trignis server's IP address at your firewall or load balancer level:

  • Only allow connections from the known IP addresses of servers running Trignis
  • Block all other incoming traffic to the endpoint
  • Configure this at the network, firewall, or reverse proxy level

Need help? Check the main README or open an issue on GitHub.
