Think it, build it, ship it

AKS ingress with nginx and Key Vault certificates
Intro

In this blog post I will show you how to deploy the nginx ingress controller in AKS and secure the backends with TLS certificates stored in Key Vault.

Part 2 of this blog post can be found here, where we augment this setup with internal IP addresses and Azure Application Gateway.

To securely access Key Vault from the pods we will create a User Assigned Managed Identity and install AAD Pod Identity and the CSI Secrets Store Provider in the cluster.

What we will build

  1. Azure Kubernetes Service
  2. Azure Key Vault
  3. Roles
  4. Deploy AAD Pod Identity
  5. Deploy CSI Secrets Store Provider
  6. nginx ingress controller
  7. Preparing the ingress

1. Installing AKS

The simplest way to create an AKS cluster is using the Azure command line interface (Azure CLI). For instructions on how to install it, follow this link.

After installing the Azure CLI it is also convenient to install kubectl to access the cluster; you can do that by running az aks install-cli

First create a resource group and create the AKS cluster with the default settings.

With default settings you will create a cluster with Kubenet networking. This is enough for this blog post.

#Setup some variables that we are going to use throughout the scripts
export SUBSCRIPTION_ID="<SubscriptionID>"
export TENANT_ID="<YOUR TENANT ID>"
export RESOURCE_GROUP="<AKSResourceGroup>"
export CLUSTER_NAME="<AKSClusterName>"
export REGION="westeurope"
export NODE_RESOURCE_GROUP="MC_${RESOURCE_GROUP}_${CLUSTER_NAME}_${REGION}"
export IDENTITY_NAME="identity-aad" #must be lower case
export KEYVAULT_NAME="<KeyVault Name>"
export DNS_ZONE="private.contoso.com"

#create resource group
az group create --name ${RESOURCE_GROUP} --location ${REGION}
#create cluster
az aks create --resource-group ${RESOURCE_GROUP} --name ${CLUSTER_NAME} --node-count 1 --generate-ssh-keys
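The NODE_RESOURCE_GROUP variable above follows the default naming convention AKS uses for the node resource group, MC_&lt;resource group&gt;_&lt;cluster name&gt;_&lt;region&gt;. A quick sketch with hypothetical values shows how the name is assembled:

```shell
# Hypothetical values, purely to illustrate the default MC_ naming scheme
RG="aks-demo-rg"; CLUSTER="demo-aks"; LOC="westeurope"
NODE_RG="MC_${RG}_${CLUSTER}_${LOC}"
echo "${NODE_RG}"   # MC_aks-demo-rg_demo-aks_westeurope
```

This matters because the role assignments later in this post target the node resource group, not the resource group you created yourself.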

2. Azure Key Vault

As before, we will use the Azure CLI to create a Key Vault in the same resource group as AKS.

az keyvault create --name ${KEYVAULT_NAME} --resource-group ${RESOURCE_GROUP} --location ${REGION}

3. Roles

Remember how AKS was able to create Azure resources? That is because AKS has an identity, in our case a Service Principal, with permissions to manage resources in the Node Resource Group.

This section explains various role assignments that need to be performed before using AAD Pod Identity. Without the proper role assignments, your Azure cluster will not have the correct permission to assign and un-assign identities from the underlying virtual machines (VM) or virtual machine scale sets (VMSS).

To follow best practices I will not use the AKS Service Principal directly; instead I will create a User Assigned Managed Identity and bind it to AAD Pod Identity.

We will also grant this managed identity access to get certificates from Key Vault.

# Get the AKS service principal Id
export SP_ID="$(az aks show -g ${RESOURCE_GROUP} -n ${CLUSTER_NAME} --query servicePrincipalProfile.clientId -otsv)"
#Assign the roles
az role assignment create --role "Managed Identity Operator" --assignee ${SP_ID} --scope /subscriptions/${SUBSCRIPTION_ID}/resourcegroups/${NODE_RESOURCE_GROUP}
az role assignment create --role "Virtual Machine Contributor" --assignee ${SP_ID} --scope /subscriptions/${SUBSCRIPTION_ID}/resourcegroups/${NODE_RESOURCE_GROUP}

#Create a User Assigned Managed Identity
az identity create -g ${RESOURCE_GROUP} -n ${IDENTITY_NAME}
export IDENTITY_CLIENT_ID="$(az identity show -g ${RESOURCE_GROUP} -n ${IDENTITY_NAME} --query clientId -otsv)"
export IDENTITY_RESOURCE_ID="$(az identity show -g ${RESOURCE_GROUP} -n ${IDENTITY_NAME} --query id -otsv)"

#Assign the Service Principal as an operator of the User Assigned Managed Identity
az role assignment create --role "Managed Identity Operator" --assignee ${SP_ID} --scope ${IDENTITY_RESOURCE_ID}

#Assign Reader Role to new Identity for your keyvault
az role assignment create --role Reader --assignee ${IDENTITY_CLIENT_ID} --scope /subscriptions/${SUBSCRIPTION_ID}/resourcegroups/${RESOURCE_GROUP}/providers/Microsoft.KeyVault/vaults/${KEYVAULT_NAME}

# set policy to access certs in your keyvault
az keyvault set-policy -n ${KEYVAULT_NAME} --certificate-permissions get --spn ${IDENTITY_CLIENT_ID}
az keyvault set-policy -n ${KEYVAULT_NAME} --secret-permissions get --spn ${IDENTITY_CLIENT_ID}
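To double-check that the assignments took effect, you can list them for the new identity. This assumes the variables above are still set and you are logged in with the Azure CLI; the fallback keeps the snippet harmless when you are not:

```shell
# List role assignments for the managed identity across all scopes;
# prints a note instead of failing when the Azure CLI is unavailable
ROLES="$(az role assignment list --assignee "${IDENTITY_CLIENT_ID}" --all -o table 2>/dev/null \
  || echo "az not available - run this after logging in with az login")"
echo "${ROLES}"
```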

For the next steps we will need:

  • Helm installed; instructions here
  • Kubernetes context correctly configured: az aks get-credentials -g ${RESOURCE_GROUP} -n ${CLUSTER_NAME}

4. AAD Pod Identity

Now that we have our roles configured it's time to install AAD Pod Identity, which enables Kubernetes applications to access cloud resources securely with Azure Active Directory.

Since we are using kubenet, and starting with version 1.7 of AAD Pod Identity, we need to explicitly enable support for running in kubenet clusters with the flag --allow-network-plugin-kubenet=true. More info

#add the helm repo to our helm workspace
helm repo add aad-pod-identity https://raw.githubusercontent.com/Azure/aad-pod-identity/master/charts
#install aad pod identity with the Kubenet flag
helm install aad-pod-identity aad-pod-identity/aad-pod-identity --set nmi.allowNetworkPluginKubenet=true
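Once the chart is deployed you should see the MIC (Managed Identity Controller) and NMI (Node Managed Identity) pods running. A quick check, with a fallback so the snippet degrades cleanly when kubectl is not pointed at your cluster:

```shell
# MIC and NMI pod names start with aad-pod-identity; prints a note
# when kubectl has no cluster to talk to
PODS="$(kubectl get pods 2>/dev/null | grep aad-pod-identity \
  || echo "kubectl not connected - run against your AKS cluster")"
echo "${PODS}"
```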

Now that the CRDs are installed and the AAD Pod Identity pods are running, we can deploy our identity and identity binding.

Create an AzureIdentity with type 0 (Set type: 0 for user-assigned MSI, type: 1 for Service Principal with client secret, or type: 2 for Service Principal with certificate. For more information, see here).

cat <<EOF | kubectl apply -f -
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
  name: ${IDENTITY_NAME}
spec:
  type: 0
  resourceID: ${IDENTITY_RESOURCE_ID} #Resource Id of the Managed Identity
  clientID: ${IDENTITY_CLIENT_ID} #Id of the Managed Identity 
EOF

Now that the Identity is created we need to create an identity binding that defines the relationship between an AzureIdentity and a pod with a specific selector.

cat <<EOF | kubectl apply -f -
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
  name: ${IDENTITY_NAME}-binding
spec:
  azureIdentity: ${IDENTITY_NAME}
  selector: ${IDENTITY_NAME}
EOF
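You can confirm that both objects landed in the cluster by querying the CRDs we just used. The fallback keeps the snippet harmless when no cluster is reachable:

```shell
# Show the AzureIdentity and AzureIdentityBinding created above;
# prints a note instead of failing when kubectl has no cluster
IDS="$(kubectl get azureidentity,azureidentitybinding 2>/dev/null \
  || echo "kubectl not connected - run against your AKS cluster")"
echo "${IDS}"
```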

5. CSI Secrets Store Provider (Key Vault integration)

First, let's create a certificate and store it in Key Vault.

export CERT_NAME=ingresscert
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -out ingress-tls.crt \
    -keyout ingress-tls.key \
    -subj "/CN=${DNS_ZONE}"

openssl pkcs12 -export -in ingress-tls.crt -inkey ingress-tls.key -out $CERT_NAME.pfx
# when prompted for an export password just press Enter to skip it

az keyvault certificate import --vault-name ${KEYVAULT_NAME} -n $CERT_NAME -f $CERT_NAME.pfx
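Before trusting the import, you can inspect the certificate locally with openssl. The snippet regenerates a throwaway certificate with the same shape as the one above so it stands alone; in practice, just run the last command against your real ingress-tls.crt:

```shell
# Generate a throwaway self-signed cert (same command as above) and
# print its subject to confirm the CN matches the DNS zone
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -out ingress-tls.crt -keyout ingress-tls.key \
    -subj "/CN=private.contoso.com" 2>/dev/null
openssl x509 -in ingress-tls.crt -noout -subject
```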

You should see a new certificate in the Certificates window in Key Vault


Now let's install the Secrets Store Provider

#add the repo
helm repo add csi-secrets-store-provider-azure https://raw.githubusercontent.com/Azure/secrets-store-csi-driver-provider-azure/master/charts
#install chart
helm install csi-secrets-store-provider-azure/csi-secrets-store-provider-azure --generate-name

Now we have another CRD available, SecretProviderClass. This CRD provides provider-specific (Azure) parameters to the Secrets Store CSI driver.

NOTE: The SecretProviderClass has to be in the same namespace as the pod referencing it.

cat <<EOF | kubectl apply -n default -f -
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: ingress-tls
spec:
  provider: azure
  secretObjects:                                
  - secretName: ingress-tls-csi    #name of the secret that gets created - this is the value we provide to nginx
    type: kubernetes.io/tls
    data: 
    - objectName: $CERT_NAME
      key: tls.key
    - objectName: $CERT_NAME
      key: tls.crt
  parameters:
    usePodIdentity: "true"   #since we are using aad pod identity we set this to true
    keyvaultName: $KEYVAULT_NAME                        
    objects: |
      array:
        - |
          objectName: $CERT_NAME
          objectType: secret
    tenantId: $TENANT_ID      # the tenant ID of KeyVault            
EOF
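One detail worth knowing: the Kubernetes secret named in secretObjects (ingress-tls-csi) is not created immediately; the driver only syncs it once a pod actually mounts the CSI volume. The SecretProviderClass itself, however, should exist right away:

```shell
# The SecretProviderClass is created immediately; the synced secret
# ingress-tls-csi only appears after a pod mounts the volume
SPC="$(kubectl get secretproviderclass ingress-tls -n default 2>/dev/null \
  || echo "kubectl not connected - run against your AKS cluster")"
echo "${SPC}"
```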

6. Install nginx ingress controller

We are now going to create an ingress controller that exposes a public IP and can serve TLS requests (the browser will show a warning because it's a self-signed certificate).

#add the repo
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
#install nginx ingress controller
helm install ingress-nginx/ingress-nginx --generate-name \
    --set controller.replicaCount=2 \
    --set controller.podLabels.aadpodidbinding=${IDENTITY_NAME} \
    -f - <<EOF
controller:
  extraVolumes:
      - name: secrets-store-inline
        csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "ingress-tls"   #name of the SecretProviderClass we created above
  extraVolumeMounts:
      - name: secrets-store-inline
        mountPath: "/mnt/secrets-store"
        readOnly: true
EOF

This script installs the nginx ingress controller and configures the Secrets Store driver. We also set the label aadpodidbinding to the identity name so the identity binding assigns the correct identity to this pod.

It's also recommended to run at least 2 replicas of the ingress controller. When you need to refresh the certificate, you currently have to restart the ingress controller pods. There is work in progress that will improve this by providing a better way of refreshing secrets.

This is the moment of truth. If everything worked as expected, the nginx ingress controller will ask the Secrets Store driver for a secret, which in turn uses AAD Pod Identity to fetch that secret from Key Vault.
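You can watch this handshake complete by listing the pods and checking that the synced secret now exists (with a fallback so the snippet degrades cleanly offline):

```shell
# After the controller pods mount the CSI volume, the driver syncs the
# Key Vault certificate into the Kubernetes secret ingress-tls-csi
STATE="$( (kubectl get pods && kubectl get secret ingress-tls-csi -n default) 2>/dev/null \
  || echo "kubectl not connected - check after deploying the controller" )"
echo "${STATE}"
```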

The list of running pods should now include the ingress controller pods, and the list of secrets should contain the secret created by the Secrets Store provider, ingress-tls-csi; this is the secret we will use on the nginx ingress.

7. Preparing the ingress

Let's start by deploying a simple demo application that exposes a ClusterIP service on port 80 and will serve as the backend for our ingress.

kubectl apply -f https://raw.githubusercontent.com/hjgraca/playground/master/k8s/nginx/firstapp.yaml
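A quick check that the demo app is up. The service name firstapp comes from the manifest (it is referenced by the ingress below); assuming the deployment shares that name, and with a fallback for when no cluster is reachable:

```shell
# The manifest should create a deployment and ClusterIP service named
# firstapp (name assumed for the deployment); prints a note offline
APP="$(kubectl get deployment,service firstapp -n default 2>/dev/null \
  || echo "kubectl not connected - run against your AKS cluster")"
echo "${APP}"
```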

Let's deploy the ingress

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: default
spec:
  tls:
  - hosts:
    - $DNS_ZONE
    secretName: ingress-tls-csi
  rules:
    - host: $DNS_ZONE
      http:
        paths:
          - backend:
              serviceName: firstapp
              servicePort: 80
            path: /
EOF


Now let's access that public IP to see if everything worked. Since private.contoso.com is not a real domain that we own and have mapped to that IP address, to get a response we either edit our local hosts file or use curl's --resolve flag to do it for us.
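The IP in the curl command below is the one my ingress controller received; to find yours, query the controller's LoadBalancer service, assuming the standard labels applied by the ingress-nginx chart (the placeholder fallback keeps the snippet harmless when the cluster is unreachable):

```shell
# Look up the public IP assigned to the ingress controller's
# LoadBalancer service; prints a placeholder when unreachable
INGRESS_IP="$(kubectl get service -l app.kubernetes.io/name=ingress-nginx \
  -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}' 2>/dev/null)"
INGRESS_IP="${INGRESS_IP:-<pending>}"
echo "Ingress public IP: ${INGRESS_IP}"
```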

#replace 20.71.67.138 with your own ingress public IP
curl -v -k --resolve $DNS_ZONE:443:20.71.67.138 https://$DNS_ZONE

#you should see something like this in the reply
* Server certificate:
*  subject: CN=private.contoso.com
*  start date: Nov 30 17:47:25 2020 GMT
*  expire date: Nov 30 17:47:25 2021 GMT
*  issuer: CN=private.contoso.com
*  SSL certificate verify result: self signed certificate (18), continuing anyway.

And if I change my hosts file, I can resolve the domain private.contoso.com, but I get an expected error from the browser; if I check the certificate, I can see it is indeed my self-signed certificate.


And that's it: you now have a secure(ish) TLS endpoint using a certificate stored in Key Vault.

Final notes

  • Delete the ingress controller
helm delete ingress-nginx-****

This will delete the public ip address and the secret that contains the certificate.

  • If you are looking into storing Let's Encrypt certificates in Key Vault have a look at this project on GitHub
  • If you want to renew the certificate: after updating it in Key Vault you will need to delete the existing ingress controller pods, so they ask for a new secret to be generated.
  • In the second part of this tutorial I will add a Private DNS zone and two ingress controllers, one for public and one for private traffic.
 