We can mount an Azure file share to containers in AKS as explained in the documentation here, and we can use static volume mounting to use an existing Azure file share. The documentation only explains how to set up a static volume mount with the Azure CLI. In this post, let's look at the steps for using Terraform-provisioned Azure file share storage as a static volume mount in AKS, using Kubernetes YAML.
First, we can create a storage account and a file share in Terraform as shown below. The code is available here in GitHub.
resource "azurerm_storage_account" "fs" {
  name                             = "${var.PREFIX}${var.PROJECT}${replace(var.ENVNAME, "-", "")}fsst"
  resource_group_name              = azurerm_resource_group.instancerg.name
  location                         = azurerm_resource_group.instancerg.location
  account_tier                     = "Standard" # "Premium"
  account_replication_type         = "LRS"
  account_kind                     = "StorageV2"
  access_tier                      = "Hot"
  allow_nested_items_to_be_public  = false
  min_tls_version                  = "TLS1_2"
  cross_tenant_replication_enabled = false

  network_rules {
    default_action             = "Deny"
    bypass                     = ["Metrics", "AzureServices", "Logging"]
    virtual_network_subnet_ids = [azurerm_subnet.aks.id]
  }
}

resource "azurerm_storage_share" "aks" {
  name                 = "aksfileshare"
  storage_account_name = azurerm_storage_account.fs.name
  access_tier          = "Hot" # "Premium"
  quota                = 200 # Size in GB
}
Then we need to create a secret in AKS with the storage account name and the storage account key. Once workload identity support for static provisioning is available in AKS, we will be able to avoid creating a Kubernetes secret.
To create the secret, we can output the storage account name and key from Terraform as shown below.
output "aks_fileshare_storage_name" {
  value = azurerm_storage_account.fs.name
}

output "aks_fileshare_storage_key" {
  value     = azurerm_storage_account.fs.primary_access_key
  sensitive = true # primary_access_key is sensitive, so the output must be marked sensitive
}
With the above information, we can create the secret with the below kubectl command, or using YAML.
kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=mystorageaccountname --from-literal=azurestorageaccountkey=mystorageaccountkey
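When Terraform is run from the same machine, the secret can be wired up directly from the outputs defined above. A rough sketch (it assumes kubectl is already pointed at the AKS cluster, and creates the secret in the fsdemo namespace used by the later manifests):

```shell
# Read the Terraform outputs defined above (-raw strips the surrounding quotes
# and also works for outputs marked sensitive).
STORAGE_NAME=$(terraform output -raw aks_fileshare_storage_name)
STORAGE_KEY=$(terraform output -raw aks_fileshare_storage_key)

# Create the secret referenced later by nodeStageSecretRef.
kubectl create secret generic fsdemo-storage-secret \
  --namespace fsdemo \
  --from-literal=azurestorageaccountname="$STORAGE_NAME" \
  --from-literal=azurestorageaccountkey="$STORAGE_KEY"
```

Note the secret name here matches fsdemo-storage-secret from the YAML below rather than the generic azure-secret example above.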
Alternatively, we can create YAML like below and apply it with kubectl.
---
apiVersion: v1
kind: Secret
metadata:
  name: fsdemo-storage-secret
  namespace: fsdemo
type: Opaque
data:
  azurestorageaccountkey: base64encodedstoragekey
  azurestorageaccountname: base64encodedstoragename
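Unlike kubectl create secret, which encodes values for us, the data values in the YAML must be base64 encoded by hand. A quick sketch with placeholder values:

```shell
# Placeholder values; substitute the real storage account name and key.
STORAGE_NAME="mystorageaccountname"
STORAGE_KEY="mystorageaccountkey"

# printf (no trailing newline) avoids baking a stray newline into the secret,
# which would make the storage key invalid when mounted.
NAME_B64=$(printf '%s' "$STORAGE_NAME" | base64)
KEY_B64=$(printf '%s' "$STORAGE_KEY" | base64)

echo "$NAME_B64"
echo "$KEY_B64"
```

The two printed values go into the azurestorageaccountname and azurestorageaccountkey fields of the secret above.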
Then we can create a persistent volume and persistent volume claim as below.
---
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: file.csi.azure.com
  name: fsdemo-storage-pv
  namespace: fsdemo
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azurefile-csi # azurefile-csi-premium # Built-in storage class
  csi:
    driver: file.csi.azure.com
    volumeHandle: fsdemo-storage-pv # make sure this volume id is unique for every identical share in the cluster
    volumeAttributes:
      resourceGroup: ch-azfs-dev-euw-001-rg # optional, only set this when the storage account is not in the same resource group as the nodes
      shareName: aksfileshare # refer iac\storage.tf
    nodeStageSecretRef:
      name: fsdemo-storage-secret
      namespace: fsdemo
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=0
    - gid=0
    - mfsymlinks
    - cache=strict
    - nosharesock
    - nobrl
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fsdemo-storage-pvc
  namespace: fsdemo
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi # azurefile-csi-premium # Built-in storage class
  volumeName: fsdemo-storage-pv
  resources:
    requests:
      storage: 200Gi
The deployments of containers can use the file share volume claim and mount it to a path, which is specified in an environment variable of the container. The deployment code is available here in GitHub.
volumes:
- name: fsdemo-data-volume
persistentVolumeClaim:
claimName: fsdemo-storage-pvc # PersistentVolumeClaim name in aks_manifests\prerequisites\k8s.yaml
containers:
- name: fsdemo-linux
image: chdemosharedacr.azurecr.io/fsdemo/chfsdemo:1.0
imagePullPolicy: Always
volumeMounts:
- mountPath: /fsdemo/data
name: fsdemo-data-volume
This allows the container to use the local path /fsdemo/data, which will use the mounted Azure file share. The full example code is available at GitHub.
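To confirm that the share is actually mounted, we can write and read a file from inside a running pod. A sketch, where the pod name placeholder must be replaced with an actual pod name from the deployment:

```shell
# List the pods created by the deployment.
kubectl get pods -n fsdemo

# Write a file to the mounted path, then read it back
# (replace <pod-name> with a pod name from the output above).
kubectl exec -n fsdemo <pod-name> -- sh -c 'echo hello > /fsdemo/data/test.txt'
kubectl exec -n fsdemo <pod-name> -- cat /fsdemo/data/test.txt
```

Since the file lands on the Azure file share, it should also be visible in the share via the Azure portal or Storage Explorer, and it survives pod restarts.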