docker step in K8s agent Pods.
We recommend setting up GCP VMs as build agents.
You will need the Owner role for GKE, because we need ClusterRoles, which only owners are allowed to create.
The following steps deploy a k8s cluster with a node pool to GKE in the europe-west3 region.
The required terraform files are located in the ./terraform/ folder.
You have to set PROJECT_ID to the correct ID of your Google Cloud project.
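As a quick sanity check, you can verify the value against GCP's documented project-ID format (6–30 characters of lowercase letters, digits, and hyphens, starting with a letter and not ending with a hyphen); the regex and sample ID below are illustrative assumptions:

```shell
# Hypothetical project ID; replace with your own.
PROJECT_ID=my-sample-project

# Rough check of the documented project-ID format.
if echo "${PROJECT_ID}" | grep -Eq '^[a-z][a-z0-9-]{4,28}[a-z0-9]$'; then
  echo "PROJECT_ID looks valid"
else
  echo "PROJECT_ID has an unexpected format" >&2
fi
```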
Log in to GCP from your local machine:

```shell
gcloud auth login
```

Select the project where you want to deploy the cluster:
```shell
PROJECT_ID=<your project ID goes here>
```

Create a service account:
```shell
gcloud iam service-accounts create terraform-cluster \
    --display-name terraform-cluster --project ${PROJECT_ID}
```

Authorize the service account:
```shell
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
    --member serviceAccount:terraform-cluster@${PROJECT_ID}.iam.gserviceaccount.com --role=roles/editor
```

Create an account.json file, which contains the keys for the service account. You will need this file to apply the infrastructure:
```shell
gcloud iam service-accounts keys create \
    --iam-account terraform-cluster@${PROJECT_ID}.iam.gserviceaccount.com \
    terraform/account.json
```

You can either use a remote state (the default, described below) or use a local state by changing the following in versions.tf:
```diff
- backend "gcs" {}
+ backend "local" {}
```
If several people will work on the project, use the remote state. The following describes how to set it up:
Create a bucket for the terraform state file:
```shell
BUCKET_NAME=terraform-cluster-state
gsutil mb -p ${PROJECT_ID} -l EUROPE-WEST3 gs://${BUCKET_NAME}
```

Grant the service account permissions on the bucket:
```shell
gsutil iam ch \
    serviceAccount:terraform-cluster@${PROJECT_ID}.iam.gserviceaccount.com:roles/storage.admin \
    gs://${BUCKET_NAME}
```

Initialize terraform with the remote backend (for a local state, a plain `terraform init` suffices):
```shell
cd terraform
terraform init \
    -backend-config "credentials=account.json" \
    -backend-config "bucket=${BUCKET_NAME}"
```

Apply the infrastructure:
```shell
terraform apply -var gce_project=${PROJECT_ID}
```

`terraform apply` already adds an entry to your local kubeconfig and activates the context; that is, calling `kubectl get pod` should already connect to the cluster.
If not, you can add an entry to your local kubeconfig like so:

```shell
gcloud container clusters get-credentials ${cluster_name} --zone ${gce_location} --project ${gce_project}
```

Now you're ready to apply the apps to the cluster.
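If you switch kubectl contexts later and need to come back, note that gcloud registers GKE contexts under a predictable `gke_<project>_<location>_<cluster>` name; a small sketch with hypothetical values:

```shell
# Hypothetical values for illustration; use your actual project, location, and cluster.
PROJECT_ID=my-sample-project
GCE_LOCATION=europe-west3-a
CLUSTER_NAME=playground

# get-credentials registers the kubeconfig context under this name:
CONTEXT="gke_${PROJECT_ID}_${GCE_LOCATION}_${CLUSTER_NAME}"
echo "${CONTEXT}"   # -> gke_my-sample-project_europe-west3-a_playground

# Switch back to it with:
#   kubectl config use-context "${CONTEXT}"
```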
Note that to be able to access the services remotely you either need to pass the `--remote` flag (exposes all services as `LoadBalancer` with an external IP) or `--ingress-nginx --base-url=$yourdomain` and either set a DNS record or `/etc/hosts` entries pointing to the external IP of the ingress-nginx service.
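For the `/etc/hosts` route, the entry is simply `<ingress IP> <hostname>`; a sketch with placeholder values (the jsonpath lookup assumes the standard ingress-nginx service name, which may differ in your setup):

```shell
# Look up the real external IP with something like:
#   kubectl -n ingress-nginx get svc ingress-nginx-controller \
#     -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
INGRESS_IP=203.0.113.10            # placeholder documentation IP
BASE_URL=playground.example.com    # placeholder domain

# The line to append to /etc/hosts:
echo "${INGRESS_IP} ${BASE_URL}"   # -> 203.0.113.10 playground.example.com
```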
Once you're done with the playground, you can destroy the cluster using

```shell
terraform destroy -var gce_project=${PROJECT_ID}
```

In addition you might want to delete
- the service account or key, and
- the state bucket (if created).
You either delete the key or the whole service account:
Key:
```shell
gcloud iam service-accounts keys delete $(cat account.json | grep private_key_id | sed 's/".*: "\(.*\)".*/\1/') \
    --iam-account terraform-cluster@${PROJECT_ID}.iam.gserviceaccount.com
```

Service Account:
```shell
gcloud iam service-accounts delete terraform-cluster@${PROJECT_ID}.iam.gserviceaccount.com \
    --project ${PROJECT_ID}
```

Bucket:
```shell
gsutil rm -r gs://${BUCKET_NAME}
```
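As an aside, the grep/sed pipeline used in the key-deletion command above can be sanity-checked offline; a minimal sketch with a made-up key ID and a simplified account.json:

```shell
# Fake account.json with a made-up key ID (structure simplified).
cat > /tmp/fake-account.json <<'EOF'
{
"type": "service_account",
"private_key_id": "0123456789abcdef",
"private_key": "-----BEGIN PRIVATE KEY-----..."
}
EOF

# The same extraction the deletion command uses:
KEY_ID=$(cat /tmp/fake-account.json | grep private_key_id | sed 's/".*: "\(.*\)".*/\1/')
echo "${KEY_ID}"   # -> 0123456789abcdef
```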