This Terraform module deploys Detectify's Internal Scanning solution on AWS using Amazon EKS with Auto Mode.
- EKS Auto Mode - Simplified Kubernetes management with automatic node provisioning
- Automatic TLS - ACM certificate provisioning with DNS validation
- Internal ALB - Secure internal Application Load Balancer
- Horizontal Pod Autoscaling - Automatic scaling based on CPU/memory utilization
- KMS Encryption - Secrets encrypted at rest using AWS KMS
- CloudWatch Integration - Optional observability with CloudWatch Logs and Metrics
```hcl
provider "aws" {
  region = "eu-west-1"
}

# Retrieves a cluster auth token for the Kubernetes and Helm providers
data "aws_eks_cluster_auth" "cluster" {
  name = module.internal_scanner.cluster_name

  depends_on = [
    # An explicit dependency must be set for data resources, or the data
    # source is evaluated as soon as the cluster name is known rather than
    # waiting until the cluster is deployed.
    # https://developer.hashicorp.com/terraform/language/data-sources#dependencies
    module.internal_scanner.cluster_name,
  ]
}

provider "kubernetes" {
  host                   = module.internal_scanner.cluster_endpoint
  cluster_ca_certificate = base64decode(module.internal_scanner.cluster_certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

provider "helm" {
  kubernetes = {
    host                   = module.internal_scanner.cluster_endpoint
    cluster_ca_certificate = base64decode(module.internal_scanner.cluster_certificate_authority_data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}

module "internal_scanner" {
  source  = "detectify/internal-scanning/aws"
  version = "~> 1.0"

  # Core Configuration
  name = "detectify-scanner"

  # Network Configuration
  vpc_id             = "vpc-xxxxx"
  private_subnet_ids = ["subnet-xxxxx", "subnet-yyyyy"]

  # License Configuration (provided by Detectify)
  license_key       = var.license_key
  connector_api_key = var.connector_api_key

  # Registry Authentication (provided by Detectify)
  registry_username = var.registry_username
  registry_password = var.registry_password
}
```

Examples:

- Basic Deployment - Minimal configuration
## Requirements
| Name | Version |
|---|---|
| terraform | >= 1.5.0 |
| aws | >= 5.52 |
| helm | >= 2.9.0 |
| kubernetes | >= 2.13.1 |
## Providers
| Name | Version |
|---|---|
| aws | 5.100.0 |
| helm | 3.1.1 |
| kubernetes | 3.0.1 |
## Modules
| Name | Source | Version |
|---|---|---|
| eks | terraform-aws-modules/eks/aws | ~> 20.0 |
## Resources
| Name | Type |
|---|---|
| aws_acm_certificate.scan_scheduler | resource |
| aws_acm_certificate_validation.scan_scheduler | resource |
| aws_iam_role.cloudwatch_observability_role | resource |
| aws_iam_role_policy_attachment.cloudwatch_observability_policy_attachment | resource |
| aws_kms_alias.eks_secrets | resource |
| aws_kms_key.eks_secrets | resource |
| aws_route53_record.scan_scheduler | resource |
| aws_route53_record.scan_scheduler_cert_validation | resource |
| helm_release.scanner | resource |
| kubernetes_ingress_class_v1.auto_mode_alb | resource |
| kubernetes_ingress_v1.scan_scheduler | resource |
| kubernetes_manifest.alb_params | resource |
| kubernetes_storage_class_v1.ebs_gp3 | resource |
| aws_caller_identity.current | data source |
| aws_elb_hosted_zone_id.main | data source |
| aws_lb.auto_mode_alb | data source |
| aws_region.current | data source |
## Inputs
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| api_allowed_cidrs | CIDR blocks allowed to access the scanner API endpoint via an internal ALB. Example: ["10.0.0.0/16", "172.16.0.0/12"]. api_enabled must be true for this to have any effect. | list(string) | [] | no |
| api_domain | Hostname for the scanner API endpoint (e.g., scanner.example.com). This endpoint is used to start scans (from CI/CD pipelines or manually), get scan status, and get logs (for support requests to Detectify). The endpoint is exposed via an internal ALB and is only accessible from networks specified in api_allowed_cidrs. | string | null | no |
| api_enabled | Enables the Internal Scanner REST API that may be used to start scans, fetch results, etc. Enabling this creates an internal load balancer to expose the API. When api_domain is also provided, DNS records and a TLS certificate are set up as well. | bool | false | no |
| chrome_controller_replicas | Number of Chrome Controller replicas | number | 1 | no |
| chrome_controller_resources | Resource requests and limits for Chrome Controller | object({…}) | {…} | no |
| cluster_admin_role_arns | IAM role ARNs to grant cluster admin access (for AWS Console/CLI access) | list(string) | [] | no |
| cluster_endpoint_public_access | Enable public access to the EKS cluster API endpoint. When true, the Kubernetes API is reachable over the internet (subject to cluster_endpoint_public_access_cidrs). Use this when users need kubectl/deployment access without a direct connection via e.g. VPN or bastion hosts. Private access remains enabled regardless of this setting. IMPORTANT: Even with public access, all requests still require valid IAM authentication. Restrict access further using cluster_endpoint_public_access_cidrs. | bool | true | no |
| cluster_endpoint_public_access_cidrs | CIDR blocks allowed to access the EKS cluster API endpoint over the public internet. Only applies when cluster_endpoint_public_access = true. IMPORTANT: When enabling public access, restrict this to specific IPs instead of using the default 0.0.0.0/0. AWS requires at least one CIDR in this list. Example: ["203.0.113.0/24", "198.51.100.10/32"] | list(string) | ["0.0.0.0/0"] | no |
| cluster_security_group_additional_rules | Additional security group rules for the EKS cluster API endpoint. Required when Terraform runs from a network that doesn't have direct access to the private subnets (e.g., local machine via VPN, CI/CD pipeline in another VPC, bastion host). Add an ingress rule for port 443 from the CIDR where Terraform is running. Example: cluster_security_group_additional_rules = { terraform_access = { description = "Allow Terraform access from VPN", type = "ingress", from_port = 443, to_port = 443, protocol = "tcp", cidr_blocks = ["10.0.0.0/8"] } } | map(any) | {} | no |
| cluster_version | Kubernetes version for EKS cluster | string | "1.35" | no |
| completed_scans_poll_interval_seconds | Interval in seconds for checking if running scans have completed. Minimum: 10 seconds. Lower values provide faster result reporting but increase Redis load. | number | 60 | no |
| connector_api_key | Connector API key for authentication with Detectify services | string | n/a | yes |
| connector_server_url | Connector service URL for scanner communication. Defaults to the production connector. | string | "https://connector.detectify.com" | no |
| deploy_redis | Deploy in-cluster Redis. Set to false when using managed Redis (e.g., ElastiCache, Memorystore) and override redis_url. | bool | true | no |
| enable_autoscaling | Enable Horizontal Pod Autoscaler (HPA) for scan-scheduler and scan-manager | bool | false | no |
| enable_cloudwatch_observability | Enable Amazon CloudWatch Observability addon for logs and metrics | bool | true | no |
| enable_cluster_creator_admin_permissions | Whether to grant the cluster creator admin permissions. Set to true to allow the creator to manage the cluster, false to manage all access manually via cluster_admin_role_arns. | bool | true | no |
| helm_chart_path | Local path to a Helm chart. Takes precedence over helm_chart_repository when set. | string | null | no |
| helm_chart_repository | Helm chart repository URL. Set to null to use helm_chart_path instead. | string | "https://detectify.github.io/helm-charts" | no |
| helm_chart_version | Helm chart version. Only used when helm_chart_repository is set. | string | null | no |
| image_registry_path | Path within the registry where scanner images are stored. Combined with registry_server to form full image URLs (e.g., registry_server/image_registry_path/scan-scheduler). | string | "internal-scanning" | no |
| internal_scanning_version | Version tag for all scanner images (scan-scheduler, scan-manager, scan-worker, chrome-controller, chrome-container). Defaults to 'stable' to ensure customers always have the newest security tests. For production stability, consider pinning to a specific version (e.g., 'v1.0.0'). | string | "stable" | no |
| kms_key_arn | ARN of an existing KMS key to use for secrets encryption. If not provided, a new KMS key will be created. | string | null | no |
| kms_key_deletion_window | Number of days before the KMS key is deleted after destruction (7-30 days) | number | 30 | no |
| license_key | Scanner license key | string | n/a | yes |
| license_server_url | License validation server URL. Defaults to the production license server. | string | "https://license.detectify.com" | no |
| log_format | Log output format. Use 'json' for machine-readable logs (recommended for log aggregation systems like ELK, Splunk, CloudWatch). Use 'text' for human-readable console output. | string | "json" | no |
| max_scan_duration_seconds | Maximum duration for a single scan in seconds. If not specified, defaults to 172800 (2 days). Only set this if you need to override the default. | number | null | no |
| name | Unique name for this deployment (e.g. production-internal-scanning) | string | n/a | yes |
| private_subnet_ids | Private subnet IDs for EKS nodes and the internal ALB | list(string) | n/a | yes |
| redis_resources | Resource requests and limits for Redis | object({…}) | {…} | no |
| redis_storage_class | Kubernetes StorageClass for the Redis PVC. Defaults to ebs-gp3 (EKS Auto Mode with the EBS CSI driver). | string | "ebs-gp3" | no |
| redis_storage_size | Redis persistent volume size | string | "8Gi" | no |
| redis_url | Redis connection URL. Override when using external/managed Redis. Include credentials and use rediss:// for TLS (e.g., rediss://user:pass@my-redis.example.com:6379). | string | "redis://redis:6379" | no |
| registry_password | Docker registry password for image pulls | string | n/a | yes |
| registry_server | Docker registry server hostname for authentication and image pulls (e.g., registry.detectify.com) | string | "registry.detectify.com" | no |
| registry_username | Docker registry username for image pulls | string | n/a | yes |
| route53_private_zone_id | Route53 hosted zone ID for DNS A records (scanner API). Required when api_domain is set. Can be a private hosted zone if the endpoint is only accessed internally. The zone must contain the domain used in api_domain. | string | null | no |
| route53_public_zone_id | Route53 public hosted zone ID for ACM certificate DNS validation. Required when api_domain is set. IMPORTANT: Must be a PUBLIC hosted zone - ACM certificate validation requires publicly resolvable DNS records. | string | null | no |
| scan_manager_autoscaling | Autoscaling configuration for Scan Manager | object({…}) | {…} | no |
| scan_manager_replicas | Number of Scan Manager replicas | number | 1 | no |
| scan_manager_resources | Resource requests and limits for Scan Manager | object({…}) | {…} | no |
| scan_scheduler_autoscaling | Autoscaling configuration for Scan Scheduler | object({…}) | {…} | no |
| scan_scheduler_replicas | Number of Scan Scheduler replicas | number | 1 | no |
| scan_scheduler_resources | Resource requests and limits for Scan Scheduler | object({…}) | {…} | no |
| scheduled_scans_poll_interval_seconds | Interval in seconds for polling the connector for scheduled scans. Minimum: 60 seconds. Lower values increase API calls to Detectify. | number | 600 | no |
| vpc_id | VPC ID where the EKS cluster will be deployed | string | n/a | yes |
## Outputs

| Name | Description |
|---|---|
| acm_certificate_arn | ARN of the ACM certificate for scan scheduler |
| acm_certificate_domain_validation_options | Domain validation options for ACM certificate. Use these to create DNS validation records when managing DNS externally (create_route53_records = false). |
| alb_arn | The AWS ARN of the Application Load Balancer. |
| alb_dns_name | DNS name of the ALB created for scan scheduler |
| alb_zone_id | Route53 zone ID of the ALB (for DNS record creation) |
| api_endpoint | Scanner API endpoint URL |
| cloudwatch_observability_role_arn | IAM role ARN for CloudWatch Observability addon |
| cluster_certificate_authority_data | Base64 encoded certificate data for cluster authentication |
| cluster_endpoint | EKS cluster API endpoint |
| cluster_id | EKS cluster ID |
| cluster_name | EKS cluster name |
| cluster_oidc_issuer_url | The URL on the EKS cluster OIDC Issuer |
| cluster_primary_security_group_id | Cluster security group that was created by Amazon EKS for the cluster. Managed node groups use this security group for control-plane-to-data-plane communication. Referred to as 'Cluster security group' in the EKS console |
| cluster_security_group_id | Security group ID attached to the EKS cluster |
| kms_key_arn | ARN of the KMS key used for EKS secrets encryption |
| kms_key_id | ID of the KMS key used for EKS secrets encryption (only if created by module) |
| kubeconfig_command | Command to update kubeconfig for kubectl access |
| oidc_provider_arn | ARN of the OIDC Provider for EKS |
| scanner_namespace | Kubernetes namespace where scanner is deployed |
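The alb_dns_name and alb_zone_id outputs can be wired into DNS you manage outside the module. A minimal sketch, assuming a hypothetical externally managed hosted zone (the zone ID and record name are placeholders):

```hcl
# Create an alias record for the scanner ALB in a zone managed outside
# this module, using the alb_* outputs listed above.
resource "aws_route53_record" "scanner_alias" {
  zone_id = "Z0000000000000000000C" # placeholder: your own hosted zone ID
  name    = "scanner.internal.example.com"
  type    = "A"

  alias {
    name                   = module.internal_scanner.alb_dns_name
    zone_id                = module.internal_scanner.alb_zone_id
    evaluate_target_health = false
  }
}

# Surface the ready-made kubectl setup command from the module
output "kubeconfig_command" {
  value = module.internal_scanner.kubeconfig_command
}
```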
## Architecture

```
+-----------------+
|   Route53 DNS   | <-- Optional
+--------+--------+
         |
+--------v--------+
|  Internal ALB   | <-- Optional
+--------+--------+
         |
+--------v--------+
| Scan Scheduler  | <-- API Entry Point
+--------+--------+
         |
+--------v--------+
|      Redis      | <-- Job Queue
+--------+--------+
         |
+--------v--------+
|  Scan Manager   | <-- Creates scan workers
+--------+--------+
         |                          |
         +--------------+          +--------------+
                        |          |
              +--------v--------+  +-----------v-----------+
              |   Scan Worker   |  |   Chrome Controller   |
              | (ephemeral pods)|<------------------>|     |
              +-----------------+  +-----------+-----------+
                                               |
                                   +-----------v-----------+
                                   |   Chrome Container    |
                                   |  (browser instances)  |
                                   +-----------------------+
```
Component Responsibilities:
- Scan Scheduler: Communication with Detectify, API entry point, validates licenses, queues scan jobs to Redis
- Redis: Persistent job queue for scan requests
- Scan Manager: Polls Redis, creates ephemeral scan-worker pods, reports results to Connector
- Scan Worker: Ephemeral pods that execute security scans
- Chrome Controller: Manages browser instances for JavaScript-heavy scanning
- Chrome Container: Ephemeral Chrome instances used by scan workers
The scanner may expose an internal API endpoint via an Application Load Balancer (ALB). This endpoint is used to:
- Start scans from CI/CD pipelines or manually
- Get scan status to monitor progress
- Get logs for support requests to Detectify
Network Access: Configure api_allowed_cidrs to allow access from:
- Your VPC CIDR (for CI/CD pipelines running in your network)
- VPN/corporate networks (for manual access and support debugging)
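As a sketch of the above, the CIDR values here are illustrative stand-ins for your own VPC and corporate network ranges:

```hcl
module "internal_scanner" {
  # ... core configuration as in the usage example ...

  # Expose the API on an internal ALB, reachable only from these networks
  api_enabled = true
  api_allowed_cidrs = [
    "10.0.0.0/16",  # VPC CIDR - CI/CD pipelines running in your network
    "192.0.2.0/24", # VPN/corporate network - manual access and debugging
  ]
}
```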
```shell
# Get kubeconfig
aws eks update-kubeconfig --region <region> --name <cluster-name>

# Check pods
kubectl get pods -n scanner

# Check logs
kubectl logs -n scanner deployment/scan-scheduler
kubectl logs -n scanner deployment/scan-manager
```

## Troubleshooting

| Issue | Solution |
|---|---|
| Terraform hangs or times out connecting to EKS | Terraform needs network access to the EKS API endpoint (port 443). Option 1 (no VPN): Set cluster_endpoint_public_access = true and restrict cluster_endpoint_public_access_cidrs to your IP. Option 2 (VPN/peering): Add cluster_security_group_additional_rules with an ingress rule for your network CIDR. See variable descriptions for examples. |
| Pods stuck in ImagePullBackOff | Verify registry credentials are correct |
| Certificate validation failing | Ensure ACM validation DNS zone is public |
Business Source License 1.1 (BSL 1.1) - See LICENSE for details.
- Documentation: Detectify Docs
- Contact: Detectify Support