Guide: API Endpoints
Send your SQL Server changes directly to HTTP APIs and webhooks. This guide shows you how to set up real-time database change notifications to any HTTP endpoint with support for multiple authentication methods, custom headers, and payload compression.
HTTP endpoints provide the simplest way to send database changes to external systems. This direct approach means you can integrate your data with any platform that accepts webhooks or POST requests. Unlike message queues, HTTP endpoints require no middleware—just point and shoot. Perfect for simple integrations, webhooks, and scenarios where you control both ends of the connection.
Follow these steps to get database changes flowing to HTTP endpoints in under 10 minutes.
Before configuring Trignis, ensure your HTTP endpoint is ready to receive POST requests:
Requirements:
- Accepts POST requests with JSON payloads
- Returns HTTP 2xx status codes for successful receipt
- Can handle your expected (compressed) request volume
- (Optional) Supports authentication if needed
Test Your Endpoint:
curl -X POST https://api.yourcompany.com/v1/customers/changes \
-H "Content-Type: application/json" \
-d '{"test": "message"}'
Serverless Setup Examples:
AWS Lambda Function URL:
- Create a Lambda function in the AWS Console
- Enable Function URL in the Lambda configuration
- Set authentication to "NONE" or configure IAM
- Copy the Function URL (e.g., https://abc123.lambda-url.us-east-1.on.aws/)
Azure Function HTTP Trigger:
- Create a Function App in the Azure Portal
- Add an HTTP trigger function
- Set authorization level to "Function" or "Anonymous"
- Copy the function URL with key if needed
Add an HTTP endpoint to your environment configuration file (e.g., environments/production.json):
Simple Webhook (No Auth):
{
"ChangeTracking": {
"ApiEndpoints": [
{
"Key": "webhook_customers",
"Url": "https://hooks.slack.com/services/T00/B00/XXX"
}
]
}
}
REST API with Bearer Token:
{
"ChangeTracking": {
"ApiEndpoints": [
{
"Key": "api_customers",
"Url": "https://api.yourcompany.com/v1/customers/changes",
"Auth": {
"Type": "Bearer",
"Token": "your-api-token-here"
}
}
]
}
}
API with Custom Headers:
{
"ChangeTracking": {
"ApiEndpoints": [
{
"Key": "api_customers",
"Url": "https://api.yourcompany.com/v1/customers/changes",
"Auth": {
"Type": "ApiKey",
"ApiKey": "your-api-key",
"HeaderName": "X-API-Key"
},
"CustomHeaders": {
"X-Source": "trignis",
"X-Environment": "{environment}"
}
}
]
}
}
Start Trignis and watch for successful deliveries:
# Windows
TrignisBackgroundService.bat test
Look for log entries that indicate a successful delivery.
Your endpoint will receive POST requests with the following fixed structure:
{
"Metadata": {
"Environment": "production",
"Object": "Customers",
"Database": "PrimaryDatabase",
"Timestamp": "2025-01-15T10:30:00Z",
"Version": 1543,
"RecordCount": 25
},
"Data": [
{
"CustomerId": 12345,
"Name": "John Doe",
"Email": "john@example.com",
"$operation": "UPDATE"
}
]
}
Payload Details:
- Metadata.Environment - Which environment sent the data (production, staging, etc.)
- Metadata.Object - The tracking object name (table identifier)
- Metadata.Database - The database connection name
- Metadata.Version - Current change tracking version
- Metadata.RecordCount - Number of records in this payload
- Data[].$operation - Operation type: INSERT, UPDATE, DELETE, or FULL (initial sync)
Your endpoint must:
- Accept POST requests with Content-Type: application/json
- Return HTTP 2xx status code for successful receipt
- Respond within 30 seconds (default timeout)
- Handle the payload structure shown above
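As a sketch of what a receiver might do with this structure, the following Python helper (hypothetical names, not part of Trignis) parses a payload and tallies the operations it contains:

```python
import json

def handle_change_payload(body: str) -> dict:
    """Parse a Trignis change payload and summarize the operations per type."""
    payload = json.loads(body)
    meta = payload["Metadata"]
    counts: dict = {}
    for record in payload["Data"]:
        # Each record carries its operation type in the $operation field
        op = record.get("$operation", "UNKNOWN")
        counts[op] = counts.get(op, 0) + 1
    return {
        "object": meta["Object"],
        "version": meta["Version"],
        "operations": counts,
    }
```

A real endpoint would do this inside its POST handler and return a 2xx status once the payload is safely accepted.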
Trignis supports multiple authentication methods to integrate with any API.
For public webhooks or internal unprotected endpoints:
{
"Key": "public_webhook",
"Url": "https://hooks.example.com/abc123"
}
Use when:
- Sending to public webhook services (Slack, Discord)
- Internal networks with network-level security
- Testing and development environments
For modern APIs using OAuth 2.0 tokens or custom bearer tokens:
{
"Key": "bearer_api",
"Url": "https://api.example.com/changes",
"Auth": {
"Type": "Bearer",
"Token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
}
}
Use when:
- API uses OAuth 2.0 access tokens
- Service provides bearer tokens for authentication
- Modern REST APIs and microservices
For legacy APIs using username and password:
{
"Key": "basic_auth_api",
"Url": "https://legacy-api.example.com/changes",
"Auth": {
"Type": "Basic",
"Username": "trignis",
"Password": "secure-password"
}
}
Use when:
- Older systems requiring HTTP Basic Auth
- Internal APIs with simple authentication
- Legacy enterprise systems
Security tip: Use encrypted passwords with PWENC: prefix (see Password Encryption section).
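For reference, the Basic scheme simply base64-encodes username:password into an Authorization header. A minimal Python sketch of what Trignis sends on the wire:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the Authorization header value for HTTP Basic Auth."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"
```

Because the credentials are only encoded (not encrypted), Basic Auth should always travel over HTTPS.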
For services using API keys in custom headers:
{
"Key": "apikey_service",
"Url": "https://api.example.com/changes",
"Auth": {
"Type": "ApiKey",
"ApiKey": "sk_live_abc123",
"HeaderName": "X-API-Key"
}
}
Use when:
- Service uses API keys for authentication
- Keys should be sent in specific headers
- Third-party APIs (Stripe, SendGrid, etc.)
Default header: If you don't specify HeaderName, it defaults to X-API-Key.
For enterprise APIs requiring OAuth 2.0 client credentials flow:
{
"Key": "oauth2_api",
"Url": "https://api.example.com/changes",
"Auth": {
"Type": "OAuth2ClientCredentials",
"TokenEndpoint": "https://auth.example.com/oauth/token",
"ClientId": "trignis-client",
"ClientSecret": "your-client-secret",
"Scope": "api.write"
}
}
Use when:
- Enterprise APIs requiring OAuth 2.0
- Machine-to-machine authentication
- APIs with token rotation requirements
How it works:
- Trignis automatically fetches access tokens from the token endpoint
- Tokens are cached and automatically refreshed before expiration
- Expired tokens trigger automatic re-authentication
- No manual token management required
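The token caching behavior described above can be sketched as follows. This is an illustration, not Trignis's actual implementation; `fetch` stands in for the HTTP call to the token endpoint and is assumed to return the access token plus its lifetime in seconds:

```python
import time

class TokenCache:
    """Cache an OAuth2 access token and refresh it shortly before expiry."""

    def __init__(self, fetch, refresh_margin: int = 60):
        self._fetch = fetch            # callable returning (token, expires_in_seconds)
        self._margin = refresh_margin  # refresh this many seconds before expiry
        self._token = None
        self._expires_at = 0.0

    def get(self) -> str:
        # Re-fetch when no token is cached or it is about to expire
        if self._token is None or time.time() >= self._expires_at - self._margin:
            self._token, expires_in = self._fetch()
            self._expires_at = time.time() + expires_in
        return self._token
```

Each outgoing request asks the cache for a token; the cache only hits the token endpoint when needed.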
Use placeholders in URLs to create dynamic endpoints:
{
"Key": "dynamic_webhook",
"Url": "https://api.example.com/{environment}/{object}/changes?timestamp={timestamp}",
"CustomHeaders": {
"X-Database": "{database}",
"X-Batch": "{batch}/{totalbatches}"
}
}
Available placeholders:
- {timestamp} - Current timestamp (yyyyMMddHHmmss)
- {object} - Tracking object name (e.g., "Customers")
- {database} - Database connection name
- {environment} - Environment name (e.g., "production")
- {key} - Endpoint key
- {batch} - Current batch number (when batching enabled)
- {totalbatches} - Total number of batches
- {guid} - New GUID for each request
Example result:
https://api.example.com/production/Customers/changes?timestamp=20250115103000
X-Database: PrimaryDatabase
X-Batch: 1/5
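Placeholder expansion is plain string substitution. A Python sketch (an illustration, not Trignis internals) that produces the result above:

```python
import uuid
from datetime import datetime, timezone

def expand_placeholders(template: str, context: dict) -> str:
    """Replace {placeholder} tokens in a URL or header template."""
    values = dict(context)
    # Auto-generated values, unless the caller supplies them explicitly
    values.setdefault("timestamp",
                      datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S"))
    values.setdefault("guid", str(uuid.uuid4()))
    for key, val in values.items():
        template = template.replace("{" + key + "}", str(val))
    return template
```

For example, expanding the URL from the configuration above with environment "production" and object "Customers" yields the first example result shown.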
Send the same changes to multiple destinations:
{
"ChangeTracking": {
"TrackingObjects": [
{
"Name": "Customers",
"TableName": "dbo.Customers",
"ApiEndpointKeys": [
"crm_webhook",
"analytics_api",
"backup_system"
]
}
],
"ApiEndpoints": [
{
"Key": "crm_webhook",
"Url": "https://crm.example.com/webhooks"
},
{
"Key": "analytics_api",
"Url": "https://analytics.example.com/ingest",
"Auth": {
"Type": "Bearer",
"Token": "analytics-token"
}
},
{
"Key": "backup_system",
"Url": "https://backup.example.com/receive"
}
]
}
}
Each change is sent to all three endpoints independently.
Enable automatic compression for large payloads:
{
"Key": "compressed_api",
"Url": "https://api.example.com/changes",
"EnableCompression": true
}
How it works:
- Messages over 1KB are automatically compressed using gzip
- Content-Encoding header set to "gzip"
- Typically achieves 60-80% size reduction
- Your endpoint must support gzip decompression
Your endpoint requirements:
- Must handle Content-Encoding: gzip header
- Must decompress payload before parsing JSON
- Most modern web frameworks handle this automatically
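If your framework does not decompress automatically, a minimal Python sketch of the receiving side (hypothetical helper, assuming the raw body and headers are available):

```python
import gzip
import json

def read_request_body(raw: bytes, headers: dict) -> dict:
    """Decompress the body if Content-Encoding: gzip was sent, then parse JSON."""
    if headers.get("Content-Encoding", "").lower() == "gzip":
        raw = gzip.decompress(raw)
    return json.loads(raw.decode("utf-8"))
```

Uncompressed payloads (under 1KB, or with compression disabled) pass through unchanged.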
Route specific tables to their respective APIs:
{
"ChangeTracking": {
"TrackingObjects": [
{
"Name": "Customers",
"TableName": "dbo.Customers",
"ApiEndpointKeys": ["crm_api"]
},
{
"Name": "Orders",
"TableName": "dbo.Orders",
"ApiEndpointKeys": ["fulfillment_api"]
},
{
"Name": "Products",
"TableName": "dbo.Products",
"ApiEndpointKeys": ["inventory_api"]
}
],
"ApiEndpoints": [
{
"Key": "crm_api",
"Url": "https://crm.example.com/api/customers"
},
{
"Key": "fulfillment_api",
"Url": "https://fulfillment.example.com/api/orders"
},
{
"Key": "inventory_api",
"Url": "https://inventory.example.com/api/products"
}
]
}
}
Handle millions of changes by batching:
{
"ChangeTracking": {
"GlobalSettings": {
"MaxRecordsPerBatch": 500,
"EnablePayloadBatching": true
},
"TrackingObjects": [
{
"Name": "Orders",
"TableName": "dbo.Orders",
"InitialSyncMode": "Full"
}
]
}
}
What happens:
- Large result sets are split into batches of 500 records
- Each batch is sent as a separate HTTP request
- Batch metadata included in headers: X-Batch-Number, X-Total-Batches
Your endpoint will receive:
- Multiple requests for the same change set
- Header X-Batch-Number: 1 through the total number of batches
- Header X-Total-Batches: 10 (total count)
- Each request contains up to 500 records in the Data array
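If your endpoint needs the full change set in one piece, it can buffer batches until all parts arrive. A sketch (hypothetical helper; each item pairs the request headers with its Data array):

```python
def collect_batches(requests: list) -> list:
    """Reassemble a batched change set once every X-Batch-Number part has arrived."""
    # Order parts by their batch number (headers are strings on the wire)
    parts = sorted(requests, key=lambda r: int(r[0]["X-Batch-Number"]))
    total = int(parts[0][0]["X-Total-Batches"])
    if len(parts) != total:
        raise ValueError("not all batches received yet")
    records = []
    for _headers, data in parts:
        records.extend(data)
    return records
```

In practice you would key the buffer by object name and change tracking version so concurrent change sets do not mix.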
Trignis has several guardrails in place to keep deliveries reliable: circuit breakers, automatic retries, and a dead letter queue.
Trignis uses circuit breakers to protect both systems:
How it works:
- Closed (normal): Requests flow normally
- Open (failed): After 3 consecutive failures, circuit opens for 60 seconds
- Half-open (testing): After timeout, one test request is sent
- Success: Circuit closes, normal operation resumes
- Failure: Circuit opens again for another 60 seconds
Log messages:
[WARN] Circuit breaker opened for 'api_customers' for 60s due to: Connection timeout
[DBG] Circuit breaker closed for 'api_customers' after successful request
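The closed/open/half-open cycle described above can be sketched in a few lines of Python. This is an illustration of the pattern with the documented defaults (3 failures, 60 seconds), not Trignis's actual implementation:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after N consecutive failures."""

    def __init__(self, failure_threshold: int = 3, open_seconds: int = 60):
        self.failure_threshold = failure_threshold
        self.open_seconds = open_seconds
        self.failures = 0
        self.opened_at = None  # timestamp when the circuit opened, or None

    def allow_request(self, now: float = None) -> bool:
        now = time.time() if now is None else now
        if self.opened_at is None:
            return True  # closed: requests flow normally
        # Half-open: allow a test request once the timeout has elapsed
        return now - self.opened_at >= self.open_seconds

    def record_success(self):
        self.failures = 0
        self.opened_at = None  # close the circuit

    def record_failure(self, now: float = None):
        now = time.time() if now is None else now
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = now  # open the circuit
```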
Failed messages are automatically saved for later retry:
# Check failed messages
curl http://localhost:2455/health/deadletters
Response shows accumulated failures:
{
"totalDeadLetters": 15,
"last24Hours": 3,
"last7Days": 8,
"details": [
{
"endpoint": "api_customers",
"error": "HTTP 500: Internal Server Error",
"timestamp": "2025-01-15T10:30:00Z"
}
]
}
Trignis automatically retries failed requests:
Retry schedule:
- First retry: 2 seconds
- Second retry: 4 seconds
- Third retry: 8 seconds
- After 3 failures: Circuit breaker opens
Your endpoint should:
- Return 2xx status codes for success
- Return 5xx for temporary failures (will be retried)
- Return 4xx for permanent failures (won't be retried)
- Respond within 30 seconds (default timeout)
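The retry schedule above is simple exponential backoff, and only 5xx responses are retried. A short Python sketch of both rules (illustrative helpers, not Trignis internals):

```python
def retry_delays(max_attempts: int = 3, base_seconds: int = 2) -> list:
    """Exponential backoff delays: 2s, 4s, 8s for the default three retries."""
    return [base_seconds * 2 ** i for i in range(max_attempts)]

def should_retry(status: int) -> bool:
    """5xx responses are treated as temporary and retried; 4xx are permanent."""
    return 500 <= status < 600
```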
What it means: Trignis can't reach your HTTP endpoint.
Check:
- Is the URL correct? Test with curl https://your-api.com/endpoint
- Is the service running? Check your API logs or server status
- Is the port accessible? Check firewall rules
- Is DNS resolving? Try nslookup your-api.com
Fix: Update the URL, start your service, or open firewall ports.
What it means: Authentication failed or insufficient permissions.
Check:
- Is the auth type correct? (Bearer, Basic, ApiKey, OAuth2)
- Are credentials valid? Test with curl -H "Authorization: Bearer your-token" https://api.example.com
- Has the token expired? (OAuth tokens typically expire)
- Does the API key have the right permissions?
Fix: Update credentials, refresh OAuth tokens, or check API key permissions.
What it means: Your message exceeds the API's size limit.
Options:
- Enable compression: "EnableCompression": true
- Enable batching to split into smaller chunks: "MaxRecordsPerBatch": 100, "EnablePayloadBatching": true
- Filter data in your stored procedure to send only essential fields
- Increase API limits if you control the endpoint
What it means: Your endpoint returned an error processing the request.
Check:
- View your API logs for error details
- Test with a sample payload:
  curl -X POST https://your-api.com/endpoint \
    -H "Content-Type: application/json" \
    -d '{"test": "data"}'
- Is your API handling the payload structure correctly?
- Are there any required fields missing?
Fix: Debug your endpoint handler, check logs, ensure proper error handling.
What it means: Compression enabled and messages exceed 1KB.
Solution: Either disable compression or update your endpoint to handle gzip:
{
"EnableCompression": false
}
Or handle decompression in your code (see Pattern 3 above).
What it means: Your endpoint keeps failing, preventing further requests.
Fix:
- Check dead letter queue: GET /health/deadletters
- Review error details in logs
- Fix the underlying issue (auth, network, API bugs)
- Wait 60 seconds for automatic retry
- If urgent, restart Trignis to reset all circuit breakers
What it means: Your endpoint is taking too long to respond.
Check:
- Endpoint response time (must be < 30 seconds)
- Network latency between Trignis and endpoint
- Database query performance
- Endpoint processing logic
Fix:
- Optimize your endpoint code
- Add indexes to database queries
- Consider async processing (return 200 immediately, process later)
- Reduce MaxRecordsPerBatch to send smaller payloads
Add debug logging to appsettings.json:
{
"Serilog": {
"MinimumLevel": {
"Override": {
"Trignis.MicrosoftSQL.Services.ChangeTrackingBackgroundService": "Debug"
}
}
}
}
Then restart Trignis and check log/trignis-*.log for detailed HTTP request/response information.
All Available Properties:
| Property | Required | Default | Notes |
|---|---|---|---|
| Key | Yes | - | Unique identifier for this endpoint |
| Url | Yes | - | Full HTTP/HTTPS URL |
| Auth | No | null | Authentication configuration |
| CustomHeaders | No | {} | Additional headers to send |
| EnableCompression | No | false | Auto-compress payloads > 1KB |
Bearer Token:
{
"Type": "Bearer",
"Token": "your-token-here"
}
Basic Auth:
{
"Type": "Basic",
"Username": "username",
"Password": "password"
}
API Key:
{
"Type": "ApiKey",
"ApiKey": "your-api-key",
"HeaderName": "X-API-Key"
}
OAuth 2.0 Client Credentials:
{
"Type": "OAuth2ClientCredentials",
"TokenEndpoint": "https://auth.example.com/oauth/token",
"ClientId": "client-id",
"ClientSecret": "client-secret",
"Scope": "api.write",
"TokenExpirationSeconds": 3600
}
Check HTTP endpoint connection health:
curl http://localhost:2455/health/connections
Response shows which endpoints are healthy:
{
"totalEndpoints": 3,
"healthyEndpoints": 2,
"unhealthyEndpoints": 1,
"details": {
"api_customers": {
"isHealthy": true,
"consecutiveFailures": 0,
"lastSuccess": "2025-01-15T10:30:00Z"
},
"webhook_orders": {
"isHealthy": false,
"consecutiveFailures": 5,
"downtimeDuration": "00:05:00",
"lastError": "Connection timeout"
}
}
}
Check failed messages:
curl http://localhost:2455/health/deadletters
Response shows accumulated failures:
{
"totalDeadLetters": 42,
"last24Hours": 8,
"last7Days": 15,
"recentFailures": [
{
"endpoint": "api_customers",
"object": "Customers",
"error": "HTTP 500: Internal Server Error",
"timestamp": "2025-01-15T10:25:00Z",
"recordCount": 150
}
]
}
Set up alerts when last24Hours exceeds your threshold.
Watch for these patterns in log/trignis-*.log:
Success:
[INFO] [production] └─ [HTTP] Exported to 'api_customers' (200 OK)
Circuit breaker opened:
[WARN] Circuit breaker opened for 'api_customers' for 60s due to: Connection timeout
Payload compressed:
[DEBUG] Compressed payload from 15234 to 3456 bytes (77.30% reduction)
Authentication refreshed:
[INFO] OAuth2 token refreshed for endpoint 'api_customers'
Dead letter saved:
[WARN] Saved dead letter for Customers (PrimaryDatabase): HTTP 500 Internal Server Error
Recommended alerts:
- Failed Deliveries: When last24Hours > 10 in dead letter stats
- Circuit Breaker Open: When logs contain "Circuit breaker opened"
- High Error Rate: When consecutive failures > 3
- Slow Responses: When response time > 5 seconds
Implementation options:
- Poll the /health/deadletters endpoint periodically
- Monitor log files for error patterns
- Set up alerts in your monitoring system (Prometheus, Datadog, etc.)
- Configure Windows Event Log monitoring if enabled
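As a minimal sketch of the polling approach, this Python helper (hypothetical, with the threshold from the recommended alerts above) turns the /health/deadletters response into an alert:

```python
def dead_letter_alert(stats: dict, threshold: int = 10):
    """Return an alert string when last24Hours exceeds the threshold, else None.

    `stats` is the parsed JSON body from GET /health/deadletters.
    """
    count = stats.get("last24Hours", 0)
    if count > threshold:
        return f"ALERT: {count} dead letters in the last 24h (threshold {threshold})"
    return None
```

A cron job or monitoring agent would fetch the endpoint, call this, and forward any non-None result to your alerting channel.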
Want changes delivered as fast as possible?
{
"ChangeTracking": {
"GlobalSettings": {
"PollingIntervalSeconds": 5
}
}
}
Trade-off: More database load from frequent polling. More HTTP requests to your endpoint.
Processing millions of rows? Enable batching and compression:
{
"ChangeTracking": {
"GlobalSettings": {
"PollingIntervalSeconds": 60,
"MaxRecordsPerBatch": 1000,
"EnablePayloadBatching": true
},
"ApiEndpoints": [
{
"Key": "bulk_api",
"Url": "https://api.example.com/bulk",
"EnableCompression": true
}
]
}
}
Trade-off: Higher latency (60s polling), but better for large datasets.
Increase timeout and retry attempts for unreliable networks:
{
"ChangeTracking": {
"GlobalSettings": {
"HttpTimeoutSeconds": 60,
"MaxRetryAttempts": 5
}
}
}
Note: These settings apply globally to all HTTP endpoints.
| Scenario | Recommended Timeout | Max Batch Size |
|---|---|---|
| Fast internal API | 5 seconds | 1000 records |
| External webhook | 10 seconds | 500 records |
| Slow processing | 30 seconds | 100 records |
| Bulk data export | 60 seconds | 5000 records |
Your endpoint should respond as quickly as possible. If processing takes time, return 200 immediately and process asynchronously.
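The acknowledge-then-process pattern can be sketched with a simple in-memory queue. This is an illustration only; `process` is a placeholder for your own logic, and a production setup would use a durable queue so acknowledged payloads survive a crash:

```python
import queue
import threading

work_queue: queue.Queue = queue.Queue()

def process(payload: dict) -> None:
    """Placeholder for your slow processing logic (an assumption for this sketch)."""
    pass

def handle_webhook(payload: dict) -> int:
    """Acknowledge immediately; heavy work happens later on a worker thread."""
    work_queue.put(payload)
    return 200  # respond before processing, staying well under the 30s timeout

def worker() -> None:
    while True:
        payload = work_queue.get()
        process(payload)
        work_queue.task_done()

# Start one background worker when your service boots:
# threading.Thread(target=worker, daemon=True).start()
```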
Always use HTTPS URLs so payloads are encrypted in transit.
{
"Url": "https://api.example.com/changes" // ✓ Good
"Url": "http://api.example.com/changes" // ✗ Bad (except localhost)
}
Protect your endpoint from excessive requests by configuring rate limits on your web server or API gateway. Consider:
- Limiting requests per minute from the Trignis server IP
- Setting maximum concurrent connections
- Configuring request throttling based on your capacity
If possible, restrict access to Trignis server IP at your firewall or load balancer level:
- Only allow connections from the known IP addresses of servers running Trignis
- Block all other incoming traffic to the endpoint
- Configure this at network, firewall, or reverse proxy level
Need help? Check the main README or open an issue on GitHub.