
Deploying a production-grade Consul on GKE using Terraform Helm provider

In this blog we will show you how to use the Terraform Helm provider to deploy Consul on Google Kubernetes Engine (GKE) using HashiCorp's official Helm chart. Consul will be installed with TLS and gossip key encryption. Helm charts are used for packaging applications on Kubernetes; they are normally installed with the Helm tool, but in this blog we will use Terraform for provisioning.

I’m assuming a cluster is already deployed in GKE. First we will prepare self-signed TLS certificates and keys for Consul's use. We are using self-signed certificates here to keep things simple; in production you will likely want to load your own certificates.

#### Generate self-signed TLS certificates for Consul

resource "tls_private_key" "consul-ca" {
  algorithm = "RSA"
  rsa_bits  = "2048"
}

resource "tls_self_signed_cert" "consul-ca" {
  key_algorithm   = tls_private_key.consul-ca.algorithm
  private_key_pem = tls_private_key.consul-ca.private_key_pem

  subject {
    common_name  = "consul-ca.local"
    organization = "Arctiq"
  }

  validity_period_hours = 8760
  is_ca_certificate     = true

  allowed_uses = [
    "cert_signing",
    "digital_signature",
    "key_encipherment",
  ]
}

# Create the Consul certificates
resource "tls_private_key" "consul" {
  algorithm = "RSA"
  rsa_bits  = "2048"
}

# Create the request to sign the cert with our CA
resource "tls_cert_request" "consul-req" {
  key_algorithm   = tls_private_key.consul.algorithm
  private_key_pem = tls_private_key.consul.private_key_pem

  dns_names = [
    "consul",
    "consul.local",
    "consul.default.svc.cluster.local",
    "server.dc1.consul",
  ]

  ip_addresses = [
    google_compute_address.vault.address,
  ]

  subject {
    common_name  = "consul.local"
    organization = "Arctiq"
  }
}

# Now sign the cert
resource "tls_locally_signed_cert" "consul" {
  cert_request_pem = tls_cert_request.consul-req.cert_request_pem

  ca_key_algorithm   = tls_private_key.consul-ca.algorithm
  ca_private_key_pem = tls_private_key.consul-ca.private_key_pem
  ca_cert_pem        = tls_self_signed_cert.consul-ca.cert_pem

  validity_period_hours = 8760

  allowed_uses = [
    "cert_signing",
    "client_auth",
    "digital_signature",
    "key_encipherment",
    "server_auth",
  ]
}
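The generated CA and signed certificate live only in Terraform state. If you want to double-check them after the apply step later in this post, one way (a sketch, nothing more) is to read them back out of state:

$ terraform state show tls_self_signed_cert.consul-ca
$ terraform state show tls_locally_signed_cert.consul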

Next we will store these certificates and keys as a Kubernetes secret; later we will expose them to Consul.

resource "kubernetes_secret" "consul_certs" {
  metadata {
    name      = "consul-certs"
    namespace = kubernetes_namespace.consul.metadata.0.name
  }

  data = {
    "ca.pem"         = tls_self_signed_cert.consul-ca.cert_pem
    "consul.pem"     = tls_locally_signed_cert.consul.cert_pem
    "consul-key.pem" = tls_private_key.consul.private_key_pem
  }

  type = "Opaque"
}

Now we will generate a random key for gossip encryption and store it as a Kubernetes secret as well.

resource "random_id" "consul_encrypt" {
  byte_length = 16
}

resource "kubernetes_secret" "consul_gossip_key" {
  metadata {
    name      = "consul-gossip-key"
    namespace = kubernetes_namespace.consul.metadata.0.name
  }

  data = {
    gossipkey = random_id.consul_encrypt.b64_std
  }

  type = "Opaque"
}
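As a point of reference, a 16-byte base64-encoded value is the same shape of key that consul keygen produces for Consul 1.6.x, so if you would rather generate the gossip key outside Terraform you could run:

$ consul keygen

and feed the resulting value into the secret instead.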

At this stage the certificates and keys are ready for TLS and gossip encryption. Next, we will create a separate service account for Helm's use and bind the cluster-admin role to the new service account. The reason for this is that the default service account used by Terraform has permission issues that cause the Helm chart to fail.

provider "kubernetes" {
  load_config_file = false
  host             = google_container_cluster.default.endpoint

  cluster_ca_certificate = base64decode(
    google_container_cluster.default.master_auth[0].cluster_ca_certificate,
  )
  token = data.google_client_config.current.access_token
}

resource "kubernetes_namespace" "consul" {
  metadata {
    name = "consul"
  }
}

resource "kubernetes_service_account" "helm_account" {
  depends_on = [
    "google_container_cluster.default",
  ]
  metadata {
    name      = var.helm_account_name
    namespace = "kube-system"
  }
}

resource "kubernetes_cluster_role_binding" "helm_role_binding" {
  metadata {
    name = kubernetes_service_account.helm_account.metadata.0.name
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }
  subject {
    api_group = ""
    kind      = "ServiceAccount"
    name      = kubernetes_service_account.helm_account.metadata.0.name
    namespace = "kube-system"
  }

  provisioner "local-exec" {
    command = "sleep 15"
  }
}

provider "helm" {
  install_tiller  = true
  tiller_image    = "gcr.io/kubernetes-helm/tiller:${var.helm_version}"
  service_account = kubernetes_service_account.helm_account.metadata.0.name

  kubernetes {
    host                   = google_container_cluster.default.endpoint
    token                  = data.google_client_config.current.access_token
    client_certificate     = "${base64decode(google_container_cluster.default.master_auth.0.client_certificate)}"
    client_key             = "${base64decode(google_container_cluster.default.master_auth.0.client_key)}"
    cluster_ca_certificate = "${base64decode(google_container_cluster.default.master_auth.0.cluster_ca_certificate)}"
  }
}
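After these resources have been applied you can optionally confirm that the Helm service account really does hold cluster-admin. The check below is only a sketch; it assumes var.helm_account_name is set to helm, so substitute your own value.

# Should return "yes" if the cluster role binding took effect
$ kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:helm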

Helm is now set up. Download the Consul Helm chart into an appropriate location; one way to fetch it is sketched below. We will then customize the chart to our requirements.
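For example, assuming the ../helm/consul-helm path referenced in the helm_release resource below, you can clone HashiCorp's chart repository:

$ git clone https://github.com/hashicorp/consul-helm.git ../helm/consul-helm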

resource "helm_release" "consul" {
  name      = "backend"
  chart     = "../helm/consul-helm"
  namespace = kubernetes_namespace.consul.metadata.0.name

  set {
    name  = "global.bootstrapACLs"
    value = "true"
  }

  set {
    name  = "server.connect"
    value = "true"
  }

  set {
    name  = "server.replicas"
    value = var.num_consul_pods
  }

  set {
    name  = "server.bootstrapExpect"
    value = var.num_consul_pods
  }

  set {
    name  = "connectInject.enabled"
    value = "true"
  }

  set {
    name  = "client.grpc"
    value = "true"
  }

  values = ["${file("consul_values.yaml")}"]

  depends_on = [kubernetes_cluster_role_binding.helm_role_binding]
}

We have set some of the parameters inside the resource definition; a description of each of these parameters is available in the default values.yaml file that comes with the Consul Helm chart. The remaining parameters are defined in a separate file, consul_values.yaml.

global:
  enablePodSecurityPolicies: true
  gossipEncryption:
    secretName: "consul-gossip-key"
    secretKey: "gossipkey"
dns:
  enabled: true
ui:
  enabled: true
server:
  storage: 20Gi
  storageClass: standard
  extraVolumes:
    - type: secret
      name: consul-certs
      load: false
    - type: secret
      name: consul-gossip-key
      load: false
  extraConfig: |
    {
        "ca_file": "/consul/userconfig/consul-certs/ca.pem",
        "cert_file": "/consul/userconfig/consul-certs/consul.pem",
        "key_file": "/consul/userconfig/consul-certs/consul-key.pem",
        "ports": {
          "https": 8501
        },
        "performance": {
          "raft_multiplier": 1
        },
        "log_level": "INFO",
        "verify_incoming": true,
        "verify_outgoing": true,
        "verify_server_hostname": true,
        "disable_keyring_file": true,
        "disable_host_node_id": true
    }
client:
  grpc: true
  extraVolumes:
    - type: secret
      name: consul-certs
      load: false
    - type: secret
      name: consul-gossip-key
      load: false
  extraConfig: |
    {
        "ca_file": "/consul/userconfig/consul-certs/ca.pem",
        "cert_file": "/consul/userconfig/consul-certs/consul.pem",
        "key_file": "/consul/userconfig/consul-certs/consul-key.pem",
        "ports": {
          "https": 8501
        },
        "log_level": "INFO",
        "verify_incoming": true,
        "verify_outgoing": true,
        "verify_server_hostname": true,
        "disable_keyring_file": true,
        "disable_host_node_id": true
    }

In the above config we have defined extraVolumes, which point to the Kubernetes secrets created at the start for TLS and gossip encryption. These volumes are exposed to Consul under the path /consul/userconfig/ (we will verify this after the deployment).

Now the Terraform code is ready to deploy: run terraform plan and review all of the planned changes, then issue terraform apply to deploy Consul. After Terraform has finished making all the changes, kubectl can be used to review the deployment.
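For reference, the full workflow from the directory containing this configuration is simply:

$ terraform init
$ terraform plan
$ terraform apply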

$ kubectl get all -n consul
NAME                                                                   READY   STATUS      RESTARTS   AGE
pod/backend-consul-connect-injector-webhook-deployment-f88dfb7zgh7s   1/1     Running     0          2m55s
pod/backend-consul-jwpmt                                              1/1     Running     0          2m55s
pod/backend-consul-qqwlp                                              1/1     Running     0          2m55s
pod/backend-consul-server-0                                           1/1     Running     0          2m55s
pod/backend-consul-server-1                                           1/1     Running     0          2m55s
pod/backend-consul-server-2                                           1/1     Running     0          2m55s
pod/backend-consul-server-acl-init-j7hlj                              0/1     Completed   0          2m55s
pod/backend-consul-sfhns                                              1/1     Running     0          2m55s

NAME                                          TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                                                                   AGE
service/backend-consul-connect-injector-svc   ClusterIP   10.0.91.167   <none>        443/TCP                                                                   2m55s
service/backend-consul-dns                    ClusterIP   10.0.91.163   <none>        53/TCP,53/UDP                                                             2m55s
service/backend-consul-server                 ClusterIP   None          <none>        8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP   2m55s
service/backend-consul-ui                     ClusterIP   10.0.88.246   <none>        80/TCP                                                                    2m55s

NAME                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/backend-consul   3         3         3       3            3           <none>          2m55s

NAME                                                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/backend-consul-connect-injector-webhook-deployment   1/1     1            1           2m55s

NAME                                                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/backend-consul-connect-injector-webhook-deployment-f88dfb76c   1         1         1       2m56s

NAME                                     READY   AGE
statefulset.apps/backend-consul-server   3/3     2m56s

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/backend-consul-server-acl-init   1/1           35s        2m56s
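You can also confirm that the extraVolumes defined in consul_values.yaml were mounted where the extraConfig expects them, for example by listing the certificate files on one of the server pods (pod name taken from the output above):

$ kubectl exec backend-consul-server-0 -n consul -- ls /consul/userconfig/consul-certs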

The status of the Consul members can also be checked using the command below.

$ kubectl exec backend-consul-server-0 -n consul -- consul members
Node                                  Address          Status  Type    Build  Protocol  DC   Segment
backend-consul-server-0               10.0.93.8:8301   alive   server  1.6.2  2         dc1  <all>
backend-consul-server-1               10.0.92.7:8301   alive   server  1.6.2  2         dc1  <all>
backend-consul-server-2               10.0.94.12:8301  alive   server  1.6.2  2         dc1  <all>
gke-vault-default-pool-4f36e068-fkvg  10.0.94.9:8301   alive   client  1.6.2  2         dc1  <default>
gke-vault-default-pool-598f6b87-hl4m  10.0.93.7:8301   alive   client  1.6.2  2         dc1  <default>
gke-vault-default-pool-eb2ea058-sjds  10.0.92.6:8301   alive   client  1.6.2  2         dc1  <default>

If the Helm tool is installed locally, the status of the newly deployed Helm chart can also be verified using the command below.

$ helm list
NAME     REVISION   UPDATED                    STATUS     CHART           APP VERSION   NAMESPACE
backend  1          Thu Jan  9 15:43:04 2020   DEPLOYED   consul-0.12.0                 consul

Helm makes it really easy to deploy and manage applications on Kubernetes, as we just observed with this Consul deployment. To make the deployment production-grade we have secured Consul's communication by enabling end-to-end TLS for RPC and symmetric key encryption for the gossip protocol. This makes it possible to run Consul on untrusted networks and with other shared hosting providers. We have also enabled ACLs with a default-deny policy to secure the UI, API, CLI, service, and agent communications.
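To reach the secured UI from a workstation without exposing it externally, one option is to port-forward the UI service shown in the kubectl output above. With bootstrap ACLs enabled, the chart also stores the bootstrap token in a Kubernetes secret; the secret name below is a sketch based on the chart's naming convention for this release, so adjust it if yours differs.

# Forward the Consul UI to localhost:8500
$ kubectl port-forward service/backend-consul-ui -n consul 8500:80

# Retrieve the ACL bootstrap token created by the chart (secret name may vary by chart version)
$ kubectl get secret backend-consul-bootstrap-acl-token -n consul -o jsonpath='{.data.token}' | base64 --decode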


Arctiq Team
