Kubernetes has become the de facto way to deploy and maintain containers. However, there are still portions of our applications that we manage by hand. The two I have noticed most are DNS records and TLS certificates.

Shouldn’t we be able to define our desired DNS and TLS alongside our application manifests? Actually we can!

Two open-source Kubernetes plugins accomplish exactly that: ExternalDNS, which lets you manage DNS records from within your manifests, and cert-manager, which handles certificate management.

This blog post will guide you through the installation and configuration for these two plugins.

Things you’ll need

Here is a quick overview of what you will need.

  • Cloud Provider: For this guide, I will be using Vultr.
  • Kubernetes cluster: I’ll be using VKE (Vultr Kubernetes Engine).
  • Domain: This domain should be preconfigured to point at your cloud provider's nameservers.
  • API key: This API key will be given to certain resources so they can communicate with your cloud provider on Kubernetes' behalf.

Set up your domain

Depending on which cloud provider you are using, you will want to add your domain entry.

Since I am using Vultr, I will use the vultr-cli to create and validate my domain.

☁  ~  vultr-cli dns domain create -d devbytes.sh
DOMAIN        DATE CREATED                DNS SEC
devbytes.sh   2022-03-19T19:12:00+00:00   disabled


☁  ~  vultr-cli dns record list devbytes.sh
ID                                     TYPE   NAME   DATA            PRIORITY   TTL
87be33b9-24fb-4502-9559-7eace63da9f7   NS            ns1.vultr.com   -1         300
de8edb75-7061-4c50-be79-4b67535aeb92   NS            ns2.vultr.com   -1         300
======================================
TOTAL   NEXT PAGE   PREV PAGE
2

Cert-manager

At a high level, cert-manager is a Kubernetes add-on that introduces custom resources for managing certificates natively within Kubernetes. Here is a more thorough explanation from the https://cert-manager.io landing page:

Cert-manager adds certificates and certificate issuers as resource types in Kubernetes clusters and simplifies the process of obtaining, renewing, and using those certificates. It can issue certificates from a variety of supported sources, including Let’s Encrypt, HashiCorp Vault, and Venafi as well as private PKI. It will ensure certificates are valid and up to date, and attempt to renew certificates at a configured time before expiry.

(Architecture diagram from https://cert-manager.io/docs/)

With cert-manager, cloud providers can offer custom webhooks so that users can easily issue certificates from yaml manifests. For this guide, we will be using Vultr's cert-manager-webhook plugin, which will handle the TLS certificates for our domains.

Note: Depending on which cloud provider you are using, you may need to refer to that provider's specific instructions.

Base cert-manager installation

To install the base cert-manager, we can run a kubectl apply command provided in their docs (https://cert-manager.io/docs/). At the time of writing, the installation command is as follows:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.7.1/cert-manager.yaml

You should be able to inspect all of the related cert-manager resources in the cert-manager namespace.
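For example, listing the pods in that namespace should show the cert-manager controller, cainjector, and webhook running (exact pod names will differ):

kubectl get pods --namespace cert-manager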

Vultr cert-manager installation

Now we will have to install the Vultr-specific cert-manager webhook. To accomplish this, pull down the codebase from https://github.com/vultr/cert-manager-webhook-vultr.
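For example:

git clone https://github.com/vultr/cert-manager-webhook-vultr.git
cd cert-manager-webhook-vultr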

To start the installation of the Vultr cert-manager webhook, we will have to create a secret. This will contain your Vultr API key, which cert-manager will use to create the DNS entries required for validation.

kubectl create secret generic "vultr-credentials" --from-literal=apiKey=<VULTR API KEY> --namespace=cert-manager

With the secret deployed we can install the Vultr cert-manager-webhook.

helm install --namespace cert-manager cert-manager-webhook-vultr ./deploy/cert-manager-webhook-vultr

As with the base cert-manager installation, you can validate that the Vultr webhook is running by inspecting the cert-manager namespace.

Issuing TLS certificates

With cert-manager deployed, let us look at what our yaml definitions for issuing certs will look like.

We will start by deploying a ClusterIssuer. This represents the Certificate Authority (CA) that will be used to create the signed certificates. In the yaml below we are pointing our ClusterIssuer at the Let's Encrypt staging environment. For production use, you will need to change the server to https://acme-v02.api.letsencrypt.org/directory.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: {YOUR EMAIL ADDRESS}
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-staging
    solvers:
    - dns01:
        webhook:
          groupName: acme.vultr.com
          solverName: vultr
          config:
            apiKeySecretRef:
              key: apiKey
              name: vultr-credentials

We also need to grant permissions so the service account can read the secret:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cert-manager-webhook-vultr:secret-reader
  namespace: cert-manager
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["vultr-credentials"]
  verbs: ["get", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cert-manager-webhook-vultr:secret-reader
  namespace: cert-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cert-manager-webhook-vultr:secret-reader
subjects:
  - apiGroup: ""
    kind: ServiceAccount
    name: cert-manager-webhook-vultr
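
With both manifests saved locally, apply them (the filenames here are placeholders; use whatever you saved them as):

kubectl apply -f cluster-issuer.yaml -f webhook-rbac.yaml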

With the ClusterIssuer and RBAC deployed, we will now be able to request TLS certificates from Let's Encrypt for domains hosted on Vultr.

Request a certificate

The Certificate resource is a human-readable definition of a certificate request that is to be honored by an issuer and kept up to date.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: staging-cert-devbytes-sh
spec:
  commonName: devbytes.sh # REPLACE THIS WITH YOUR DOMAIN
  dnsNames:
  - '*.devbytes.sh' # REPLACE THIS WITH YOUR DOMAIN
  - devbytes.sh # REPLACE THIS WITH YOUR DOMAIN
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
  secretName: devbytes-sh-staging-tls # Replace this to have your domain

Before you apply this, let us go over what we have defined here:

  • commonName: This is your base domain.
  • dnsNames: These are the DNS names the certificate will cover. Wildcard (*) entries must be wrapped in quotes.
  • issuerRef: This references the name of the ClusterIssuer. If you have a ClusterIssuer for production Let's Encrypt, you will want to match up the names here.
  • secretName: This is the name of the secret the issued TLS certificate will be stored in.
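
Once those fields match your domain, apply the manifest (the filename is a placeholder):

kubectl apply -f certificate.yaml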

There are a few resources that will be created once the Certificate kind is created. They are as follows:

  • CertificateRequests: A namespaced resource used to request X.509 certificates from an issuer.
  • Orders: Used by the ACME issuer to manage the lifecycle of an ACME ‘order’ for a signed TLS certificate.
  • Challenges: Used by the ACME issuer to manage the lifecycle of an ACME ‘challenge’ that must be completed in order to complete an ‘authorization’ for a single DNS name/identifier.

☁  ~  k get certificates
NAME                       READY   SECRET                    AGE
staging-cert-devbytes-sh   False   devbytes-sh-staging-tls   47s

☁  ~  k get certificateRequests
NAME                             APPROVED   DENIED   READY   ISSUER                REQUESTOR                                         AGE
staging-cert-devbytes-sh-qvjvj   True                False   letsencrypt-staging   system:serviceaccount:cert-manager:cert-manager   55s

☁  ~  k get orders
NAME                                        STATE     AGE
staging-cert-devbytes-sh-qvjvj-3598131141   pending   59s

☁  ~  k get challenges
NAME                                                   STATE     DOMAIN        AGE
staging-cert-devbytes-sh-qvjvj-3598131141-1598866100   pending   devbytes.sh   61s

More information about these resources can be found here: https://cert-manager.io/docs/concepts/

Validation of the certificate request may take a few minutes. You can check on the status by looking at the Ready state of the certificate.

☁  ~  k get certificates
NAME                       READY   SECRET                    AGE
staging-cert-devbytes-sh   True    devbytes-sh-staging-tls   5m11s

After all this, you will have a valid TLS certificate from Let's Encrypt, stored under the secret name that was defined in the Certificate yaml.

k get secrets | grep "devbytes-sh-staging-tls"
devbytes-sh-staging-tls    kubernetes.io/tls                     2      33m

Remember that our ClusterIssuer was created to point at the Let's Encrypt staging environment. For production use, point it at https://acme-v02.api.letsencrypt.org/directory.
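As a minimal sketch, a production ClusterIssuer is identical except for the name (letsencrypt-prod here is my choice, not a required value) and the server URL:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: {YOUR EMAIL ADDRESS}
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - dns01:
        webhook:
          groupName: acme.vultr.com
          solverName: vultr
          config:
            apiKeySecretRef:
              key: apiKey
              name: vultr-credentials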

There you have it: automated TLS for your domain on Kubernetes!

External DNS

Let us move on to DNS. Whether you are using ingress or loadbalancer services within Kubernetes, you do not want to be configuring IP addresses by hand. This is where ExternalDNS comes in: it will automatically create DNS entries for our domain and point them at the service IP.

The installation for ExternalDNS is quite straightforward. If you are using a cloud provider other than Vultr, please check that provider's specific installation instructions.

For Vultr, we can install ExternalDNS with the following yaml manifest.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services","endpoints","pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: k8s.gcr.io/external-dns/external-dns:v0.10.2
        args:
        - --source=ingress  #service is also possible
        - --domain-filter=devbytes.sh # (optional) limit to only devbytes.sh domains; change to match the zone created above.
        - --provider=vultr
        - --registry=txt
        - --txt-owner-id=ddymko
        env:
        - name: VULTR_API_KEY
          value: "{API KEY}" # Enter your Vultr API Key

Before you apply this yaml, let's go over some of the fields in the Deployment spec args section.

  • --source: We will use ingress. However, you can also pair ExternalDNS with a regular service of type `LoadBalancer`.

  • --domain-filter: This restricts ExternalDNS to managing records for the supplied domain only.

  • --provider: If you are using a provider other than Vultr, please update accordingly.

  • --registry: We use txt here so that each record created by ExternalDNS is accompanied by a TXT record identifying its owner.

  • --txt-owner-id: A unique value that doesn’t change for the lifetime of your cluster.
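
With those fields adjusted, apply the manifest (the filename is a placeholder):

kubectl apply -f external-dns.yaml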

Once you apply the external-dns yaml, you can make sure it's running correctly by inspecting the pod.

kubectl get pods | grep "external-dns"
external-dns-8cb7f649f-bg8m5   1/1     Running   0          10m

With ExternalDNS running we will now have the ability to add annotations to our service manifests for DNS entries.
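
For example, if you ran ExternalDNS with --source=service, a LoadBalancer service could request its own record like this (the service name and hostname are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: example
  annotations:
    external-dns.alpha.kubernetes.io/hostname: app.devbytes.sh
spec:
  selector:
    app: example
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80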

Tying it together

Deployment and service

So now, with ExternalDNS running and cert-manager in charge of making sure we always have valid TLS certs, let us deploy a simple application and expose it to the public internet over HTTPS.

We’ll deploy a single-replica deployment of nginx along with a ClusterIP service that will route to the deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
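
Apply the deployment and service (the filename is a placeholder):

kubectl apply -f nginx.yaml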

Ingress

To expose this to the internet we will be using the Kubernetes ingress-nginx controller. You can also use a service of type loadbalancer instead of an ingress. If you do decide to use a loadbalancer, make sure that in your external-dns yaml you change the --source argument from ingress to service.

To get started with Kubernetes nginx ingress we will apply the prepared manifests that they have posted on their quick start guide.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml

This will create a new namespace called ingress-nginx where all of the resources for the ingress controller will live.

Let us go over the ingress entry to expose our nginx app.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    # use the shared ingress-nginx
    external-dns.alpha.kubernetes.io/hostname: www.devbytes.sh
spec:
  tls:
    - hosts:
      - devbytes.sh
      secretName: devbytes-sh-prod-tls
  ingressClassName: nginx
  rules:
  - host: www.devbytes.sh
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80

The annotation external-dns.alpha.kubernetes.io/hostname: www.devbytes.sh defines what entry ExternalDNS should create. In this case, it will create an A record for www pointing to the load balancer deployed by the ingress.

The tls.hosts section defines which domains the ingress should treat as HTTPS. The secretName is the secret containing the TLS certificate that was created during the issuing TLS certificates step.

The rules.host section defines which host should be routed to which service. So in the yaml defined above, www.devbytes.sh/ should go to our nginx service/deployment.
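Apply the ingress (the filename is a placeholder):

kubectl apply -f ingress.yaml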

Once deployed, you can inspect this ingress by running:

kubectl get ingress
NAME            CLASS   HOSTS             ADDRESS           PORTS     AGE
ingress-nginx   nginx   www.devbytes.sh   173.199.117.108   80, 443   13h

Give Kubernetes and DNS a few minutes to propagate all of the requests and domain records, and you should have a domain backed with HTTPS.
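To verify end to end, you can, for example, curl the domain and watch the TLS handshake (note that certificates from the Let's Encrypt staging environment are not publicly trusted, so use the production issuer or pass -k for a staging cert):

curl -vI https://www.devbytes.sh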

Wrapping up

We have reached the end, so let's recap. You can now create TLS certificates for your application with cert-manager. DNS entries and record updates are part of your application's manifests and don't require adjusting records by hand. Finally, to expose these applications we can create an ingress resource that ties it all together. With these three tools at your disposal, you can define your application's entire state with yaml manifests and let Kubernetes handle the rest.

Useful links